Science.gov

Sample records for acceptable error range

  1. Tropospheric range error parameters: Further studies

    NASA Technical Reports Server (NTRS)

    Hopfield, H. S.

    1972-01-01

    Improved parameters are presented for predicting the tropospheric effect on electromagnetic range measurements from surface meteorological data. More geographic locations have been added to the earlier list. Parameters are given for computing the dry component of the zenith radio range effect from surface pressure alone with an rms error of 1 to 2 mm, or the total range effect from the dry and wet components of the surface refractivity and a two-part quartic profile model. The new parameters are obtained, as before, from meteorological balloon data but with improved procedures, including the conversion of the geopotential heights of the balloon data to actual or geometric heights before using the data. The revised values of the parameter k (dry component of vertical radio range effect per unit pressure at the surface) show more latitude variation than is accounted for by the variation of g, the acceleration of gravity.
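
    The pressure-only dry-component computation described above can be sketched as a one-line function of surface pressure. The coefficient and gravity correction below are the commonly used Saastamoinen-style values, not Hopfield's site-specific k parameters, so this illustrates only the form of the model.

```python
import math

def dry_zenith_delay_m(pressure_hpa, lat_deg=45.0, height_m=0.0):
    """Dry (hydrostatic) zenith range delay from surface pressure alone.

    The denominator models the latitude/height variation of gravity noted
    in the abstract; the leading coefficient plays the role of the
    parameter k (delay per unit surface pressure).
    """
    g_factor = 1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg)) - 0.28e-6 * height_m
    return 0.0022768 * pressure_hpa / g_factor
```

    At standard sea-level pressure (1013.25 hPa) this gives roughly 2.3 m of zenith delay, the scale against which the quoted 1 to 2 mm rms error should be read.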

  2. Preliminary error budget for an optical ranging system: Range, range rate, and differenced range observables

    NASA Technical Reports Server (NTRS)

    Folkner, W. M.; Finger, M. H.

    1990-01-01

    Future missions to the outer solar system or human exploration of Mars may use telemetry systems based on optical rather than radio transmitters. Pulsed laser transmission can be used to deliver telemetry rates of about 100 kbits/sec with an efficiency of several bits for each detected photon. Navigational observables that can be derived from timing pulsed laser signals are discussed. Error budgets are presented based on nominal ground stations and spacecraft-transceiver designs. Assuming a pulsed optical uplink signal, two-way range accuracy may approach the few centimeter level imposed by the troposphere uncertainty. Angular information can be achieved from differenced one-way range using two ground stations with the accuracy limited by the length of the available baseline and by clock synchronization and troposphere errors. A method of synchronizing the ground station clocks using optical ranging measurements is presented. This could allow differenced range accuracy to reach the few centimeter troposphere limit.
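
    The angular sensitivity of differenced one-way range follows from a small-angle relation: the plane-of-sky angular offset is roughly the range difference divided by the baseline, so angular accuracy scales as range accuracy over baseline length. The numbers below are illustrative, not the paper's budget entries.

```python
def plane_of_sky_angle_rad(diff_range_m, baseline_m):
    """Small-angle estimate for differenced one-way range: theta ~ delta_rho / b."""
    return diff_range_m / baseline_m

# a few-cm differenced-range error over an intercontinental baseline -> nanoradians
angle = plane_of_sky_angle_rad(0.03, 8.0e6)
```

    With a 3 cm differenced-range error over an 8000 km baseline, the angular error is about 3.75 nrad, which is why reaching the few-centimeter troposphere limit matters.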

  3. Tropospheric range error parameters: Further studies

    NASA Technical Reports Server (NTRS)

    Hopfield, H. S.

    1972-01-01

    Improved parameters are presented for predicting the tropospheric effect on electromagnetic range measurements from surface meteorological data. Parameters are given for computing the dry component of the zenith radio range effect from surface pressure alone with an rms error of 1 to 2 mm, or the total range effect from the dry and wet components of the surface refractivity, N, and a two-part quartic profile model. The parameters were obtained from meteorological balloon data with improved procedures, including the conversion of the geopotential heights of the balloon data to actual or geometric heights before using the data. The revised values of the parameter k show more latitude variation than is accounted for by the variation of g. This excess variation of k indicates a small latitude variation in the mean molecular weight of air and yields information about the latitude-varying water vapor content of air.

  4. Atmospheric refraction errors in laser ranging systems

    NASA Technical Reports Server (NTRS)

    Gardner, C. S.; Rowlett, J. R.

    1976-01-01

    The effects of horizontal refractivity gradients on the accuracy of laser ranging systems were investigated by ray tracing through three dimensional refractivity profiles. The profiles were generated by performing a multiple regression on measurements from seven or eight radiosondes, using a refractivity model which provided for both linear and quadratic variations in the horizontal direction. The range correction due to horizontal gradients was found to be an approximately sinusoidal function of azimuth having a minimum near 0 deg azimuth and a maximum near 180 deg azimuth. The peak to peak variation was approximately 5 centimeters at 10 deg elevation and decreased to less than 1 millimeter at 80 deg elevation.
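
    The azimuth dependence reported above can be captured by a simple sinusoidal model. The amplitude is anchored to the quoted peak-to-peak figures; the tangent-squared elevation scaling is an assumption chosen only to interpolate between the two reported elevations, not the paper's fitted form.

```python
import math

def gradient_correction_cm(az_deg, el_deg):
    """Horizontal-gradient range correction: sinusoidal in azimuth with a
    minimum near 0 deg and a maximum near 180 deg azimuth; amplitude
    anchored to ~5 cm peak-to-peak at 10 deg elevation (assumed decay law)."""
    amplitude = 2.5 * (math.tan(math.radians(10.0)) / math.tan(math.radians(el_deg))) ** 2
    return -amplitude * math.cos(math.radians(az_deg))
```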

  5. Statistics of the residual refraction errors in laser ranging data

    NASA Technical Reports Server (NTRS)

    Gardner, C. S.

    1977-01-01

    A theoretical model for the range error covariance was derived by assuming that the residual refraction errors are due entirely to errors in the meteorological data which are used to calculate the atmospheric correction. The properties of the covariance function are illustrated by evaluating the theoretical model for the special case of a dense network of weather stations uniformly distributed within a circle.

  6. Laser ranging error budget for the TOPEX/POSEIDON satellite.

    PubMed

    Schwartz, J A

    1990-09-01

    A laser ranging error budget is detailed, and a specific error budget is derived for the TOPEX/POSEIDON satellite. A ranging uncertainty of 0.76 cm is predicted for TOPEX/POSEIDON at 20 degrees elevation using the presently designed laser retroreflector array and only modest improvements in present system operations. Atmospheric refraction and satellite attitude effects cause the predicted range error to vary with satellite elevation angle from 0.71 cm at zenith to 0.76 cm at 20 degrees elevation. This a priori error budget compares well with the ~1.2-cm rms a posteriori polynomial orbital fit using existing data taken for an extant satellite of similar size and orbit.
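
    An error budget of this kind is normally combined by root-sum-square under the assumption that the individual terms are independent. The term names and values below are hypothetical placeholders to show the arithmetic, not the budget from the paper.

```python
import math

def rss_cm(terms):
    """Root-sum-square combination of independent 1-sigma error terms (cm)."""
    return math.sqrt(sum(v * v for v in terms.values()))

budget = {  # hypothetical entries, for illustration only
    "atmospheric refraction": 0.5,
    "retroreflector array": 0.4,
    "station timing": 0.3,
    "satellite attitude": 0.2,
}
total = rss_cm(budget)
```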

  7. Error analysis of two methods for range-images registration

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoli; Yin, Yongkai; Li, Ameng; He, Dong; Peng, Xiang

    2010-08-01

With the improvements in range-image registration techniques, this paper focuses on error analysis of two registration methods generally applied in industrial metrology, covering algorithm comparison, matching error, computational complexity, and application areas. One method is iterative closest point (ICP), which achieves accurate matching with little error, although some limitations restrict its use in automatic, fast metrology. The other method is based on landmarks. We also present an algorithm for registering multiple range images with non-coded landmarks, including automatic landmark identification and sub-pixel location, 3D rigid motion, point pattern matching, and global iterative optimization. The registration results of the two methods are illustrated and a thorough error analysis is performed.
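
    The first method, iterative closest point, alternates nearest-neighbour matching with a closed-form rigid-motion solve. A minimal 2-D sketch follows (the paper registers 3-D range images; the 2-D case keeps the closed form short):

```python
import math

def best_rigid_2d(src, dst):
    """Closed-form least-squares 2-D rigid transform mapping src -> dst."""
    n = float(len(src))
    msx = sum(p[0] for p in src) / n
    msy = sum(p[1] for p in src) / n
    mdx = sum(p[0] for p in dst) / n
    mdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay, bx, by = x - msx, y - msy, u - mdx, v - mdy
        num += ax * by - ay * bx  # cross products -> sine of rotation
        den += ax * bx + ay * by  # dot products   -> cosine of rotation
    th = math.atan2(num, den)
    c, s = math.cos(th), math.sin(th)
    return th, mdx - (c * msx - s * msy), mdy - (s * msx + c * msy)

def icp_2d(src, dst, iters=30):
    """Minimal ICP: match each point to its nearest neighbour in dst,
    solve for the rigid motion, apply it, and repeat."""
    cur = [tuple(p) for p in src]
    for _ in range(iters):
        matched = [min(dst, key=lambda q, p=p: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
                   for p in cur]
        th, tx, ty = best_rigid_2d(cur, matched)
        c, s = math.cos(th), math.sin(th)
        cur = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in cur]
    return cur
```

    The brute-force nearest-neighbour search is the expensive step; production implementations replace it with a k-d tree, which is one source of the computational-complexity differences the paper analyzes.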

  8. Simulation of error in optical radar range measurements.

    PubMed

    Der, S; Redman, B; Chellappa, R

    1997-09-20

    We describe a computer simulation of atmospheric and target effects on the accuracy of range measurements using pulsed laser radars with p-i-n or avalanche photodiodes for direct detection. The computer simulation produces simulated images as a function of a wide variety of atmospheric, target, and sensor parameters for laser radars with range accuracies smaller than the pulse width. The simulation allows arbitrary target geometries and simulates speckle, turbulence, and near-field and far-field effects. We compare simulation results with actual range error data collected in field tests.

  9. Error analysis of combined stereo/optical-flow passive ranging

    NASA Technical Reports Server (NTRS)

    Barniv, Yair

    1991-01-01

The motion of an imaging sensor causes each imaged point of the scene to correspondingly describe a time trajectory on the image plane. The trajectories of all imaged points are reminiscent of a flow (e.g., of liquid) which is the source of the term 'optical flow'. Optical-flow ranging is a method by which the stream of two-dimensional images obtained from a forward-looking forward-moving passive sensor is used to compute depth (or range) to points in the field of view. Another well-known ranging method consists of triangulation based on stereo images obtained from at least two stationary sensors. In this paper we analyze the potential accuracies of a combined optical flow and stereo passive-ranging system in the context of helicopter nap-of-the-earth obstacle avoidance. The Cramer-Rao lower bound is developed for the combined system under the assumption of an unknown angular bias error common to both cameras of a stereo pair. It is shown that the depth accuracy degradation caused by a bias error is negligible for a combined optical-flow and stereo system as compared to a monocular optical-flow system.
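
    The baseline stereo sensitivity that such an analysis starts from is the standard triangulation relation z = f·b/d; propagating a disparity error through it gives sigma_z = z²·sigma_d/(f·b), so depth error grows quadratically with range. This is the textbook relation, not the paper's Cramer-Rao derivation:

```python
def stereo_depth_sigma_m(z_m, focal_px, baseline_m, sigma_disp_px):
    """First-order depth error for stereo triangulation z = f*b/d:
    sigma_z = z**2 * sigma_d / (f * b)."""
    return z_m * z_m * sigma_disp_px / (focal_px * baseline_m)
```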

  10. Wind shear proportional errors in the horizontal wind speed sensed by focused, range gated lidars

    NASA Astrophysics Data System (ADS)

    Lindelöw, P.; Courtney, M.; Parmentier, R.; Cariou, J. P.

    2008-05-01

The 10-minute average horizontal wind speeds sensed with lidar and mast-mounted cup anemometers at 60 to 116 meters altitude at Høvsøre are compared. The lidar deviation from the cup value as a function of wind velocity and wind shear is studied in a 2-parametric regression analysis, which reveals an altitude-dependent relation between the lidar error and the wind shear. A likely explanation for this relation is an error in the intended sensing altitude. At most this error is estimated at 9 m, which induces errors in the horizontal wind velocity of up to 0.5 m/s as compared to a cup at the intended altitude. The altitude errors of focused range-gated lidars are likely to arise partly from an unaccounted shift of the weighting functions describing the sample volume, due to the range-dependent collection efficiency of the focused telescope. Possibilities of correcting the lidar measurements for both wind-velocity- and wind-shear-dependent errors are discussed. The 2-parametric regression analysis described in this paper proves to be a better approach for acceptance testing and calibrating lidars.
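
    The 2-parametric regression amounts to an ordinary least-squares fit of the lidar-minus-cup error against wind velocity and wind shear. A self-contained sketch via the normal equations (variable names are illustrative; the paper's exact regression setup may differ):

```python
def fit_two_param(velocity, shear, error):
    """Ordinary least squares for error ~ a*velocity + b*shear + c,
    solved via the 3x3 normal equations with Gaussian elimination."""
    rows = [[v, s, 1.0] for v, s in zip(velocity, shear)]
    # build the normal equations A^T A x = A^T y
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * e for r, e in zip(rows, error)) for i in range(3)]
    # forward elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, 3):
            f = ata[r][col] / ata[col][col]
            for c in range(col, 3):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    # back substitution
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (aty[r] - sum(ata[r][c] * x[c] for c in range(r + 1, 3))) / ata[r][r]
    return x  # coefficients a (velocity), b (shear), c (offset)
```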

  11. Entropy-Based TOA Estimation and SVM-Based Ranging Error Mitigation in UWB Ranging Systems.

    PubMed

    Yin, Zhendong; Cui, Kai; Wu, Zhilu; Yin, Liang

    2015-05-21

The major challenges for ultra-wideband (UWB) indoor ranging systems are the dense multipath and non-line-of-sight (NLOS) conditions of the indoor environment. To precisely estimate the time of arrival (TOA) of the first path (FP) in such a poor environment, a novel approach combining entropy-based TOA estimation with support vector machine (SVM) regression-based ranging error mitigation is proposed in this paper. The proposed method estimates the TOA precisely by measuring the randomness of the received signals, and mitigates the ranging error without requiring recognition of the channel conditions. Entropy is used to measure the randomness of the received signals, and the FP is identified as the sample followed by a sharp decrease in entropy. SVM regression is employed to mitigate the ranging error by modeling the regression between characteristics of the received signals and the ranging error. The presented numerical simulation results show that the proposed approach achieves significant performance improvements in the CM1 to CM4 channels of the IEEE 802.15.4a standard, as compared to conventional approaches.
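
    The entropy decision can be sketched with a sliding-window amplitude-histogram entropy: noise-only windows look random (high entropy), while the window containing the first path is far more structured. Window length, bin count, and the drop threshold below are illustrative choices, not the paper's parameters.

```python
import math

def window_entropy(samples, bins=8):
    """Shannon entropy (bits) of the amplitude histogram of one window."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0  # constant window -> single bin
    counts = [0] * bins
    for s in samples:
        counts[min(int((s - lo) / width), bins - 1)] += 1
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def first_path_index(signal, win=16, drop=1.5):
    """Return the start of the first window whose entropy falls by more
    than `drop` bits versus the previous window."""
    prev = None
    for i in range(0, len(signal) - win + 1, win):
        h = window_entropy(signal[i:i + win])
        if prev is not None and prev - h > drop:
            return i
        prev = h
    return None
```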

  12. Close-range radar rainfall estimation and error analysis

    NASA Astrophysics Data System (ADS)

    van de Beek, C. Z.; Leijnse, H.; Hazenberg, P.; Uijlenhoet, R.

    2016-08-01

    measurements, with a difference of 5-8 %. This shows the potential of radar as a tool for rainfall estimation, especially at close ranges, but also underlines the importance of applying radar correction methods as individual errors can have a large detrimental impact on the QPE performance of the radar.

  13. Customization of user interfaces to reduce errors and enhance user acceptance.

    PubMed

    Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram

    2014-03-01

    Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration.

  14. Influence of satellite geometry, range, clock, and altimeter errors on two-satellite GPS navigation

    NASA Astrophysics Data System (ADS)

    Bridges, Philip D.

    Flight tests were conducted at Yuma Proving Grounds, Yuma, AZ, to determine the performance of a navigation system capable of using only two GPS satellites. The effect of satellite geometry, range error, and altimeter error on the horizontal position solution were analyzed for time and altitude aided GPS navigation (two satellites + altimeter + clock). The east and north position errors were expressed as a function of satellite range error, altimeter error, and east and north Dilution of Precision. The equations for the Dilution of Precision were derived as a function of satellite azimuth and elevation angles for the two satellite case. The expressions for the position error were then used to analyze the flight test data. The results showed the correlation between satellite geometry and position error, the increase in range error due to clock drift, and the impact of range and altimeter error on the east and north position error.
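
    For two satellites plus altimeter and clock aiding, the Dilution of Precision algebra reduces to inverting the normal matrix of a four-row geometry matrix. A sketch using the standard ENU pseudorange rows with unit-weight altimeter and clock observations; this is a generic simplification, not the closed-form expressions derived in the paper.

```python
import math

def mat_inv(a):
    """Gauss-Jordan inverse of a small square matrix (lists of lists)."""
    n = len(a)
    m = [row[:] + [1.0 if i == j else 0.0 for j in range(n)] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        p = m[col][col]
        m[col] = [v / p for v in m[col]]
        for r in range(n):
            if r != col and m[r][col]:
                f = m[r][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [row[n:] for row in m]

def east_north_dop(sats):
    """EDOP and NDOP for satellites given as (azimuth_deg, elevation_deg),
    aided by an altimeter (direct 'up' observation) and a clock (direct
    bias observation), all with unit weight."""
    rows = []
    for az, el in sats:
        a, e = math.radians(az), math.radians(el)
        rows.append([-math.cos(e) * math.sin(a), -math.cos(e) * math.cos(a),
                     -math.sin(e), 1.0])
    rows.append([0.0, 0.0, 1.0, 0.0])  # altimeter fixes the up component
    rows.append([0.0, 0.0, 0.0, 1.0])  # clock aiding fixes the bias
    gtg = [[sum(r[i] * r[j] for r in rows) for j in range(4)] for i in range(4)]
    cov = mat_inv(gtg)
    return math.sqrt(cov[0][0]), math.sqrt(cov[1][1])
```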

  15. Decreasing range resolution of a SAR image to permit correction of motion measurement errors beyond the SAR range resolution

    DOEpatents

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2010-07-20

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
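
    The slow-time comparison of range profiles can be illustrated with a cross-correlation search for the bin shift between two profiles. The patented method operates on SAR phase-history data with sub-resolution corrections, so this is only a toy of the comparison step:

```python
def profile_shift(ref, test, max_shift=5):
    """Integer range-bin shift between two range profiles, found by
    maximizing cross-correlation over a small search window."""
    best, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        score = sum(ref[i] * test[i + s]
                    for i in range(len(ref)) if 0 <= i + s < len(test))
        if score > best_score:
            best, best_score = s, score
    return best
```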

  16. Atmospheric refraction effects on baseline error in satellite laser ranging systems

    NASA Technical Reports Server (NTRS)

    Im, K. E.; Gardner, C. S.

    1982-01-01

    Because of the mathematical complexities involved in exact analyses of baseline errors, it is not easy to isolate atmospheric refraction effects; however, by making certain simplifying assumptions about the ranging system geometry, relatively simple expressions can be derived which relate the baseline errors directly to the refraction errors. The results indicate that even in the absence of other errors, the baseline error for intercontinental baselines can be more than an order of magnitude larger than the refraction error.

  17. Technical note: The effect of midshaft location on the error ranges of femoral and tibial cross-sectional parameters.

    PubMed

Sládek, Vladimír; Berner, Margit; Galeta, Patrik; Friedl, Lukáš; Kudrnová, Šárka

    2010-02-01

In comparing long-bone cross-sectional geometric properties between individuals, percentages of bone length are often used to identify equivalent locations along the diaphysis. In fragmentary specimens where bone lengths cannot be measured, however, these locations must be estimated more indirectly. In this study, we examine the effect of inaccurately located femoral and tibial midshafts on estimation of geometric properties. The error ranges were compared on 30 femora and tibiae from the Eneolithic and Bronze Age. Cross-sections were obtained at each 1% interval from 60 to 40% of length using CT scans. A 5% deviation from the midshaft value was used as the maximum acceptable error. Reliability was expressed by mean percentage differences, standard deviation of percentage differences, mean percentage absolute differences, limits of agreement, and mean accuracy range (MAR; the range within which mean deviation from true midshaft values was less than 5%). On average, tibial cortical area and femoral second moments of area are the least sensitive to positioning error, with mean accuracy ranges wide enough for practical application in fragmentary specimens (MAR = 40-130 mm). In contrast, tibial second moments of area are the most sensitive to error in midshaft location (MAR = 14-20 mm). Individuals present significant variation in morphology and thus in error ranges for different properties. For highly damaged fossil femora and tibiae we recommend carrying out additional tests to better establish specific errors associated with uncertain length estimates.
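
    The mean accuracy range statistic can be sketched as the width of the contiguous span of section locations whose property value stays within 5% of the true-midshaft value. The numbers below are synthetic; the study's values come from CT cross-sections.

```python
def mean_accuracy_range(positions_mm, values, tol=0.05):
    """Width (mm) of the contiguous span around the true midshaft
    (position 0) where a property stays within `tol` of its midshaft
    value."""
    i0 = positions_mm.index(0)
    mid = values[i0]
    lo = hi = i0
    while lo > 0 and abs(values[lo - 1] - mid) / abs(mid) <= tol:
        lo -= 1
    while hi < len(values) - 1 and abs(values[hi + 1] - mid) / abs(mid) <= tol:
        hi += 1
    return positions_mm[hi] - positions_mm[lo]
```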

  18. Example Procedures for Developing Acceptance-Range Criteria for BESTEST-EX

    SciTech Connect

    Judkoff, R.; Polly, B.; Bianchi, M.; Neymark, J.

    2010-08-01

    This document provides an example procedure for establishing acceptance-range criteria to assess results from software undergoing BESTEST-EX. This example method for BESTEST-EX is a modified version of the method described in HERS BESTEST.

  19. 76 FR 37793 - Viking Range Corporation, Provisional Acceptance of a Settlement Agreement and Order

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-28

    ... COMMISSION Viking Range Corporation, Provisional Acceptance of a Settlement Agreement and Order AGENCY... Agreement with Viking Range Corporation, containing a civil penalty of $450,000.00. DATES: Any interested... 1. In accordance with 16 CFR 1118.20, Viking Range Corporation (``Viking'') and the staff...

  20. Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar

    DOEpatents

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2008-06-24

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  1. Error analysis for a spaceborne laser ranging system

    NASA Technical Reports Server (NTRS)

    Pavlis, E. C.

    1979-01-01

    The dependence (or independence) of baseline accuracies, obtained from a typical mission of a spaceborne ranging system, on several factors is investigated. The emphasis is placed on a priori station information, but factors such as the elevation cut-off angle, the geometry of the network, the mean orbital height, and to a limited extent geopotential modeling are also examined. The results are obtained through simulations, but some theoretical justification is also given. Guidelines for freeing the results from these dependencies are suggested for most of the factors.

  2. Error range in proximal femoral osteotomy using computer tomography-based navigation.

    PubMed

    Takao, Masaki; Sakai, Takashi; Hamada, Hidetoshi; Sugano, Nobuhiko

    2017-04-01

PURPOSE: The purpose of this preliminary study was to determine the error range compared with preoperative plans in proximal femoral osteotomy conducted using a computed tomography (CT)-based navigation system. METHODS: Four patients (four hips) underwent transtrochanteric rotational osteotomy (TRO), and three patients (four hips) underwent curved varus osteotomy (CVO) using CT-based navigation. Volume registration of pre- and postoperative CT was performed for error assessment. RESULTS: In TRO, the mean osteotomy angle error was [Formula: see text] (range [Formula: see text]) in the valgus direction and [Formula: see text] (range [Formula: see text]) in the retroversion direction. The mean osteotomy position error, with the femoral head side as positive, was -0.4 mm (range -1.4 to 0 mm). The bone fragment rotational movement error was [Formula: see text] (range [Formula: see text]). In CVO, the mean osteotomy position error, with the femoral head side as positive, was -0.2 mm (range -2.0 to 1.7 mm) at the level of the lesser trochanter and 0.8 mm (range 0-3.2 mm) at the level of the greater trochanter. Bone fragment varus accuracy was [Formula: see text] (range [Formula: see text]). CONCLUSIONS: In proximal femoral osteotomy using CT-based navigation, the angle error of osteotomy was within [Formula: see text] and the positional error was within 4 mm. The rotational movement error of the proximal fragment was within [Formula: see text]. These margins of error should be considered in preoperative planning. To improve surgical accuracy, it would be necessary to develop a computer-assisted device which can track the osteotomized fragment.

  3. Deconstructing the "reign of error": interpersonal warmth explains the self-fulfilling prophecy of anticipated acceptance.

    PubMed

    Stinson, Danu Anthony; Cameron, Jessica J; Wood, Joanne V; Gaucher, Danielle; Holmes, John G

    2009-09-01

    People's expectations of acceptance often come to create the acceptance or rejection they anticipate. The authors tested the hypothesis that interpersonal warmth is the behavioral key to this acceptance prophecy: If people expect acceptance, they will behave warmly, which in turn will lead other people to accept them; if they expect rejection, they will behave coldly, which will lead to less acceptance. A correlational study and an experiment supported this model. Study 1 confirmed that participants' warm and friendly behavior was a robust mediator of the acceptance prophecy compared to four plausible alternative explanations. Study 2 demonstrated that situational cues that reduced the risk of rejection also increased socially pessimistic participants' warmth and thus improved their social outcomes.

  4. Cramer-Rao lower bound on range error for LADARs with Geiger-mode avalanche photodiodes.

    PubMed

    Johnson, Steven E

    2010-08-20

    The Cramer-Rao lower bound (CRLB) on range error is calculated for laser detection and ranging (LADAR) systems using Geiger-mode avalanche photodiodes (GMAPDs) to detect reflected laser pulses. For the cases considered, the GMAPD range error CRLB is greater than the CRLB for a photon-counting device. It is also shown that the GMAPD range error CRLB is minimized when the mean energy in the received laser pulse is finite. Given typical LADAR system parameters, a Gaussian-envelope received pulse, and a noise detection rate of less than 4 MHz, the GMAPD range error CRLB is minimized when the quantum efficiency times the mean number of received laser pulse photons is between 2.2 and 2.3.

  5. The effect of proficiency level on measurement error of range of motion

    PubMed Central

    Akizuki, Kazunori; Yamaguchi, Kazuto; Morita, Yoshiyuki; Ohashi, Yukari

    2016-01-01

[Purpose] The aims of this study were to evaluate the type and extent of error in the measurement of range of motion and to evaluate the effect of evaluators’ proficiency level on measurement error. [Subjects and Methods] The participants were 45 university students, in different years of their physical therapy education, and 21 physical therapists, with up to three years of clinical experience in a general hospital. Range of motion of right knee flexion was measured using a universal goniometer. An electrogoniometer attached to the right knee and hidden from the view of the participants was used as the criterion to evaluate error in measurement using the universal goniometer. The type and magnitude of error were evaluated using the Bland-Altman method. [Results] Measurements with the universal goniometer were not influenced by systematic bias. The extent of random error in measurement decreased as the level of proficiency and clinical experience increased. [Conclusion] Measurements of range of motion obtained using a universal goniometer are influenced by random errors, with the extent of error being a function of proficiency. Therefore, increasing the amount of practice would be an effective strategy for improving the accuracy of range of motion measurements. PMID:27799712
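
    The Bland-Altman method used above compares two measurement methods through the distribution of their paired differences: the bias (systematic error) and the 95% limits of agreement (random error). A minimal sketch with made-up goniometer readings:

```python
def bland_altman(method_a, method_b):
    """Bland-Altman statistics: mean difference (bias), SD of the
    differences, and the 95% limits of agreement (bias +/- 1.96 SD)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)
```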

  6. Acceptance Control Charts with Stipulated Error Probabilities Based on Poisson Count Data

    DTIC Science & Technology

    1973-01-01

Ihatre, Suresh; Scheaffer, Richard L.; Leavenworth, Richard S. Department of Industrial and Systems Engineering, University of Florida, Gainesville (contract N00014-75-C-0783). ABSTRACT: An acceptance control charting

  7. Experiments and error analysis of laser ranging based on frequency-sweep polarization modulation

    NASA Astrophysics Data System (ADS)

    Gao, Shuyuan; Ji, Rongyi; Li, Yao; Cheng, Zhi; Zhou, Weihu

    2016-11-01

Frequency-sweep polarization modulation ranging uses a polarization-modulated laser beam to determine the distance to the target: the modulation frequency is swept, frequency values are measured when the transmitted and received signals are in phase, and the distance is calculated from these values. This method achieves much higher theoretical accuracy than the phase-difference method because it avoids direct phase measurement. However, the actual accuracy of the system is limited, since additional phase retardation occurs in the measuring optical path when optical elements are imperfectly manufactured or installed. In this paper, the working principle of the frequency-sweep polarization modulation ranging method is analyzed, a transmission model of the polarization state in the light path is built using Jones matrices, and the additional phase retardation of the λ/4 wave plate and PBS, along with its impact on measuring performance, is analyzed. Theoretical results show that the wave plate's azimuth error dominates the limitation on ranging accuracy. According to the system design index, element tolerances and an error-correction method for the system are proposed; a ranging system is built and a ranging experiment is performed. Experimental results show that with the proposed tolerances the system satisfies the accuracy requirement. The present work provides guidance for further research on system design and error distribution.

  8. Comparing range data across the slow-time dimension to correct motion measurement errors beyond the range resolution of a synthetic aperture radar

    DOEpatents

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2010-08-17

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  9. Technique Errors and Limiting Factors in Laser Ranging to Geodetic Satellites

    NASA Astrophysics Data System (ADS)

    Appleby, G. M.; Luceri, V.; Mueller, H.; Noll, C. E.; Otsubo, T.; Wilkinson, M.

    2012-12-01

    The tracking stations of the International Laser Ranging Service (ILRS) global network provide to the Data Centres a steady stream of very precise laser range normal points to the primary geodetic spherical satellites LAGEOS (-1 and -2) and Etalon (-1 and -2). Analysis of these observations to determine instantaneous site coordinates and Earth orientation parameters provides a major contribution to ongoing international efforts to define a precise terrestrial reference frame, which itself supports research into geophysical processes at the few mm level of precision. For example, the latest realization of the reference frame, ITRF2008, used weekly laser range solutions from 1983 to 2009, the origin of the Frame being determined solely by the SLR technique. However, in the ITRF2008 publication, Altamimi et al (2011, Journal of Geodesy) point out that further improvement in the ITRF is partly dependent upon improving an understanding of sources of technique error. In this study we look at SLR station hardware configuration that has been subject to major improvements over the last four decades, at models that strive to provide accurate translations of the laser range observations to the centres of mass of the small geodetic satellites and at the considerable body of work that has been carried out via orbital analyses to determine range corrections for some of the tracking stations. Through this study, with specific examples, we start to put together an inventory of system-dependent technique errors that will be important information for SLR re-analysis towards the next realization of the ITRF.

  10. Nuclear numerical range and quantum error correction codes for non-unitary noise models

    NASA Astrophysics Data System (ADS)

    Lipka-Bartosik, Patryk; Życzkowski, Karol

    2017-01-01

    We introduce a notion of nuclear numerical range defined as the set of expectation values of a given operator A among normalized pure states, which belong to the nucleus of an auxiliary operator Z. This notion proves to be applicable to investigate models of quantum noise with block-diagonal structure of the corresponding Kraus operators. The problem of constructing a suitable quantum error correction code for this model can be restated as a geometric problem of finding intersection points of certain sets in the complex plane. This technique, worked out in the case of two-qubit systems, can be generalized for larger dimensions.

  11. Coordinative task difficulty and behavioural errors are associated with increased long-range beta band synchronization.

    PubMed

    Rueda-Delgado, L M; Solesio-Jofre, E; Mantini, D; Dupont, P; Daffertshofer, A; Swinnen, S P

    2017-02-01

    The neural network and the task-dependence of (local) activity changes involved in bimanual coordination are well documented. However, much less is known about the functional connectivity within this neural network and its modulation according to manipulations of task complexity. Here, we assessed neural activity via high-density electroencephalography, focussing on changes of activity in the beta frequency band (~15-30Hz) across the motor network in 26 young adult participants (19-29 years old). We investigated how network connectivity was modulated with task difficulty and errors of performance during a bimanual visuomotor movement consisting of dial rotation according to three different ratios of speed: an isofrequency movement (1:1), a non-isofrequency movement with the right hand keeping the fast pace (1:3), and the converse ratio with the left hand keeping the fast pace (3:1). To quantify functional coupling, we determined neural synchronization which might be key for the timing of the activity within brain regions during task execution. Individual source activity with realistic head models was reconstructed at seven regions of interest including frontal and parietal areas, among which we estimated phase-based connectivity. Partial least squares analysis revealed a significant modulation of connectivity with task difficulty, and significant correlations between connectivity and errors in performance, in particular between sensorimotor cortices. Our findings suggest that modulation of long-range synchronization is instrumental for coping with increasing task demands in bimanual coordination.

  12. Earth orientation from lunar laser ranging and an error analysis of polar motion services

    NASA Technical Reports Server (NTRS)

    Dickey, J. O.; Newhall, X. X.; Williams, J. G.

    1985-01-01

Lunar laser ranging (LLR) data are obtained by timing laser pulses travelling from observatories on earth to retroreflectors placed on the moon's surface during the Apollo program. The modeling and analysis of the LLR data can provide valuable insights into the earth's dynamics. The ability to model the lunar orbit accurately over the full 13-year observation span makes it possible to conduct relatively long-term studies of variations in the earth's rotation. A description is provided of general analysis techniques, and the calculation of universal time (UT1) from LLR is discussed. Attention is also given to a summary of intercomparisons with different techniques, polar motion results and intercomparisons, and a polar motion error analysis.

  13. Development of Algorithms and Error Analyses for the Short Baseline Lightning Detection and Ranging System

    NASA Technical Reports Server (NTRS)

    Starr, Stanley O.

    1998-01-01

NASA, at the John F. Kennedy Space Center (KSC), developed and operates a unique high-precision lightning location system to provide lightning-related weather warnings. These warnings are used to stop lightning-sensitive operations such as space vehicle launches and ground operations where equipment and personnel are at risk. The data are provided to the Range Weather Operations (45th Weather Squadron, U.S. Air Force), where they are used with other meteorological data to issue weather advisories and warnings for Cape Canaveral Air Station and KSC operations. This system, called Lightning Detection and Ranging (LDAR), provides users with a three-dimensional graphical display of 66 megahertz radio frequency events generated by lightning processes. The locations of these events provide a sound basis for the prediction of lightning hazards. This document provides the basis for the design approach and data analysis for a system of radio frequency receivers that provide azimuth and elevation data for lightning pulses detected simultaneously by the LDAR system. The intent is for this direction-finding system to correct and augment the data provided by LDAR and thereby increase the rate of valid data and correct or discard any invalid data. This document develops the necessary equations and algorithms, identifies sources of systematic errors and means to correct them, and analyzes the algorithms for random error. This data analysis approach is not found in the existing literature and was developed to facilitate the operation of this Short Baseline LDAR (SBLDAR). These algorithms may also be useful for other direction-finding systems using radio or ultrasonic pulse data.

  14. Single-plane versus three-plane methods for relative range error evaluation of medium-range 3D imaging systems

    NASA Astrophysics Data System (ADS)

    MacKinnon, David K.; Cournoyer, Luc; Beraldin, J.-Angelo

    2015-05-01

    Within the context of the ASTM E57 working group WK12373, we compare the two methods that had been initially proposed for calculating the relative range error of medium-range (2 m to 150 m) optical non-contact 3D imaging systems: the first is based on a single plane (single-plane assembly) and the second on an assembly of three mutually non-orthogonal planes (three-plane assembly). Both methods are evaluated for their utility in generating a metric to quantify the relative range error of medium-range optical non-contact 3D imaging systems. We conclude that the three-plane assembly is comparable to the single-plane assembly with regard to quantification of relative range error while eliminating the requirement to isolate the edges of the target plate face.
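The single-plane idea can be sketched numerically: fit a best plane to the measured points by total least squares and report the RMS of the orthogonal point-to-plane residuals as the relative range error metric. All numbers below (plate tilt, noise level) are invented for the example:

```python
import numpy as np

def plane_fit_rms(points):
    """Fit a best plane (total least squares via SVD) and return the RMS
    of the orthogonal point-to-plane residuals."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    normal = vt[-1]                       # direction of least variance
    resid = (points - c) @ normal         # signed distances to the plane
    return np.sqrt(np.mean(resid ** 2))

# Synthetic scan of a tilted plate with ~3 mm range noise
rng = np.random.default_rng(1)
xy = rng.uniform(-1.0, 1.0, size=(5000, 2))
z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + rng.normal(0.0, 0.003, size=5000)
pts = np.column_stack([xy, z])
rms = plane_fit_rms(pts)   # recovers roughly the 3 mm noise level
```

The three-plane variant would apply the same residual statistic to points segmented onto each of the three planes, avoiding the need to trim the plate edges.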

  15. On the determination of the effect of horizontal ionospheric gradients on ranging errors in GNSS positioning

    NASA Astrophysics Data System (ADS)

    Danilogorskaya, Ekaterina A.; Zernov, Nikolay N.; Gherm, Vadim E.; Strangeways, Hal J.

    2016-12-01

An alternative approach to the traditionally employed method is proposed for treating ionospheric range errors in transionospheric propagation, such as for GNSS positioning or satellite-borne SAR. It enables the effects of horizontal gradients of electron density in the ionosphere (as well as vertical gradients) to be explicitly accounted for. In contrast with many previous treatments, in which the solution for the phase advance is expanded as a series in inverse powers of frequency and the leading term corresponds to the true line-of-sight distance from the transmitter to the receiver, in the alternative technique the zero-order term is the rigorous solution for a spherically layered ionosphere with any given vertical electron density profile. The first-order term represents the effects of the horizontal gradients of the electron density of the ionosphere, and the second-order correction proves negligibly small for any reasonable propagation path parameters and geometry at VHF/UHF frequencies. Additionally, an "effective" spherically symmetric model of the ionosphere has been introduced, which accounts for the major contribution of the horizontal gradients of the ionosphere and provides very high accuracy in calculations of the phase advance.

  16. Reducing the sensitivity of IMPT treatment plans to setup errors and range uncertainties via probabilistic treatment planning

    SciTech Connect

    Unkelbach, Jan; Bortfeld, Thomas; Martin, Benjamin C.; Soukup, Martin

    2009-01-15

Treatment plans optimized for intensity modulated proton therapy (IMPT) may be very sensitive to setup errors and range uncertainties. If these errors are not accounted for during treatment planning, the dose distribution realized in the patient may be strongly degraded compared to the planned dose distribution. The authors implemented a probabilistic approach to incorporate uncertainties directly into the optimization of an intensity modulated treatment plan. Following this approach, the dose distribution depends on a set of random variables which parameterize the uncertainty, as does the objective function used to optimize the treatment plan. The authors optimize the expected value of the objective function. They investigate IMPT treatment planning regarding range uncertainties and setup errors. They demonstrate that incorporating these uncertainties into the optimization yields qualitatively different treatment plans compared to conventional plans which do not account for uncertainty. The sensitivity of an IMPT plan depends on the dose contributions of individual beam directions. Roughly speaking, steep dose gradients in the beam direction make treatment plans sensitive to range errors, while steep lateral dose gradients make plans sensitive to setup errors. More robust treatment plans are obtained by redistributing dose among different beam directions, which can be achieved by the probabilistic approach. In contrast, the safety margin approach widely applied in photon therapy fails in IMPT and is suitable for handling neither range variations nor setup errors.
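The probabilistic idea can be sketched in one dimension: instead of optimizing the plan weights for the nominal geometry only, minimize the expected quadratic deviation from the prescription over a few error scenarios. Everything below (the Gaussian pencil-beam model, the scenario set, the probabilities) is invented for illustration, and the non-negativity constraint on beam weights used in real planning is omitted:

```python
import numpy as np

# Toy 1-D "patient": 20 voxels, prescription 1.0 inside a central target
n_vox = 20
presc = np.zeros(n_vox)
presc[5:15] = 1.0

def influence(shift):
    """Hypothetical dose-influence matrix: six Gaussian pencil-beam spots
    whose positions are perturbed by `shift` voxels (the error scenario)."""
    centers = np.arange(4, 16, 2) + shift
    x = np.arange(n_vox)[:, None]
    return np.exp(-0.5 * ((x - centers[None, :]) / 1.5) ** 2)

scenarios = [-1.0, 0.0, 1.0]      # possible setup/range shifts, in voxels
probs = [0.25, 0.5, 0.25]

# Probabilistic plan: minimize E[ ||D(s) w - presc||^2 ] by stacking the
# probability-weighted scenario systems into one least-squares problem.
A = np.vstack([np.sqrt(p) * influence(s) for s, p in zip(scenarios, probs)])
b = np.concatenate([np.sqrt(p) * presc for p in probs])
w_prob = np.linalg.lstsq(A, b, rcond=None)[0]

# Nominal plan: optimized for the zero-error scenario only.
w_nom = np.linalg.lstsq(influence(0.0), presc, rcond=None)[0]

def expected_cost(w):
    """Expected quadratic deviation from the prescription over scenarios."""
    return sum(p * np.sum((influence(s) @ w - presc) ** 2)
               for s, p in zip(scenarios, probs))
```

By construction the probabilistic weights achieve an expected cost no worse than the nominal plan's, which is the essence of the robustness argument.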

  17. Reducing the sensitivity of IMPT treatment plans to setup errors and range uncertainties via probabilistic treatment planning.

    PubMed

    Unkelbach, Jan; Bortfeld, Thomas; Martin, Benjamin C; Soukup, Martin

    2009-01-01

Treatment plans optimized for intensity modulated proton therapy (IMPT) may be very sensitive to setup errors and range uncertainties. If these errors are not accounted for during treatment planning, the dose distribution realized in the patient may be strongly degraded compared to the planned dose distribution. The authors implemented a probabilistic approach to incorporate uncertainties directly into the optimization of an intensity modulated treatment plan. Following this approach, the dose distribution depends on a set of random variables which parameterize the uncertainty, as does the objective function used to optimize the treatment plan. The authors optimize the expected value of the objective function. They investigate IMPT treatment planning regarding range uncertainties and setup errors. They demonstrate that incorporating these uncertainties into the optimization yields qualitatively different treatment plans compared to conventional plans which do not account for uncertainty. The sensitivity of an IMPT plan depends on the dose contributions of individual beam directions. Roughly speaking, steep dose gradients in the beam direction make treatment plans sensitive to range errors, while steep lateral dose gradients make plans sensitive to setup errors. More robust treatment plans are obtained by redistributing dose among different beam directions, which can be achieved by the probabilistic approach. In contrast, the safety margin approach widely applied in photon therapy fails in IMPT and is suitable for handling neither range variations nor setup errors.

  18. Acceptance of embryonic stem cells by a wide developmental range of mouse tetraploid embryos.

    PubMed

    Lin, Chih-Jen; Amano, Tomokazu; Zhang, Jifeng; Chen, Yuqing Eugene; Tian, X Cindy

    2010-08-01

The tetraploid (4N) complementation assay is regarded as the most stringent test of the pluripotency of embryonic stem (ES) cells. The technology can generate mice fully derived from the injected ES cells (ES-4N) with 4N placentas. However, it remains a very inefficient procedure owing to a lack of information on the optimal conditions for ES incorporation into 4N embryos. In the present study, we injected ES cells derived from embryos of natural fertilization (fES) and from somatic cell nuclear transfer (ntES) into 4N embryos at various stages of development to determine the optimal stage for ES cell integration, by comparing the efficiency of full-term ES-4N mouse generation. Our results demonstrate that fES/ntES cells can be incorporated into 4N embryos at the 2-cell, 4-cell and blastocyst stages, and full-term mice can be generated. Interestingly, ntES cells injected at the 4-cell stage resulted in the lowest efficiency (5.6%) compared to the 2-cell (13.8%, P > 0.05) and blastocyst (16.7%, P < 0.05) stages. Because 4N embryos start to form compacted morulae at the 4-cell stage, we investigated whether the lower efficiency at this stage was due to early compaction by injecting ntES cells into artificially de-compacted embryos treated with calcium-free medium. Although the treatment changed the embryonic morphology, it did not increase the efficiency of ES-4N mouse generation. Immunochemistry of the cytoskeleton revealed microtubule and microfilament polarization at the late 4-cell stage in 4N embryos, which suggests that de-compaction treatment cannot reverse the polarization process. Taken together, we show that a wide developmental range of 4N embryos can be used for 4N complementation and that embryo polarization and compaction may restrict the incorporation of ES cells into 4N embryos.

  19. Technique-Dependent Errors in the Satellite Laser Ranging Contributions to the ITRF

    NASA Astrophysics Data System (ADS)

    Pavlis, Erricos C.; Kuzmicz-Cieslak, Magdalena; König, Daniel

    2013-04-01

Over the past decade Satellite Laser Ranging (SLR) has focused on its unique strength of providing accurate observations of the origin and scale of the International Terrestrial Reference Frame (ITRF). The origin of the ITRF is defined to coincide with the center of mass of the Earth system (geocenter). SLR realizes this origin as the focal point of the tracked satellite orbits, and being the only (nominally) unbiased ranging technique, it provides the best realization for it. The goal of GGOS is to provide an ITRF with accuracy at epoch of 1 mm or better and a stability of 0.1 mm/y. In order to meet this stringent goal, Space Geodesy is taking a two-pronged approach: modernizing the engineering components (ground and space segments), and revising the modeling standards to take advantage of recent improvements in many areas of geophysical modeling for the Earth system components. As we gain improved understanding of the Earth system components, space geodesy adjusts its underlying modeling of the system to describe it better and more completely. Similarly, on the engineering side we examine the observational process for improvements in the calibration and reduction procedures that will enhance the accuracy of the individual observations and thence the final SLR products. Two areas currently under scrutiny are (a) the station-dependent and tracking-mode-dependent correction of the observations for the "center-of-mass offset" of each satellite target, and (b) the station- and pass-dependent correction for the calibrated delay that refers each measurement to the nominal "zero" of the instrument. The former primarily affects the accuracy of the scale definition, while the latter affects both the scale and the origin. However, because of the non-uniform data volume and non-symmetric geographic distribution of the SLR stations, the major impact of the latter is on the definition of the origin. The ILRS is currently investigating the quality of models available for the

  20. Bait acceptability for delivery of oral rabies vaccine to free-ranging dogs on the Navajo and Hopi Nations.

    PubMed

    Bergman, D; Bender, S; Wenning, K; Slate, D; Rupprecht, C; Heuser, C; DeLiberto, T

    2008-01-01

    In many areas of the world, only 30 to 50% of dogs are vaccinated against rabies. On some US Indian Reservations, vaccination rates may be as low as 5 to 20%. In 2003 and 2004, we evaluated the effectiveness of commercially available baits to deliver oral rabies vaccine to feral and free-ranging dogs on the Navajo and Hopi Nations. Dogs were offered one of the following baits containing a plastic packet filled with placebo vaccine: vegetable shortening-based Ontario slim baits (Artemis Technologies, Inc.), fish-meal-crumble coated sachets (Merial, Ltd.), dog food polymer baits (Bait-Tek, Inc.), or fish meal polymer baits (Bait-Tek, Inc.). One bait was offered to each animal and its behaviour toward the bait was recorded. Behaviours included: bait ignored, bait swallowed whole, bait chewed and discarded (sachet intact), bait chewed and discarded (sachet punctured), or bait chewed and consumed (sachet punctured). Bait acceptance ranged from 30.7% to 77.8% with the fish-meal-crumble coated sachets having the highest acceptance rate of the tested baits.

  1. GPS (Global Positioning System) Error Budgets, Accuracy and Applications Considerations for Test and Training Ranges.

    DTIC Science & Technology

    1982-12-01

alignment, accelerometer and gyro instrument error parameters for the IGS. Estimates of these parameters can be used to isolate hardware and software...the Heavenly Bodies Moving About the Sun in Conic Sections, New York, Dover Publications Inc., 1963 (Reprint). 66. Kalman, R. E., "A New Approach to

  2. SU-E-T-550: Range Effects in Proton Therapy Caused by Systematic Errors in the Stoichiometric Calibration

    SciTech Connect

    Doolan, P; Dias, M; Collins Fekete, C; Seco, J

    2014-06-01

Purpose: The procedure for proton treatment planning involves the conversion of the patient's X-ray CT from Hounsfield units into relative stopping powers (RSP), using a stoichiometric calibration curve (Schneider 1996). In clinical practice a 3.5% margin is added to account for the range uncertainty introduced by this process and other errors. RSPs for real tissues are calculated using composition data and the Bethe-Bloch formula (ICRU 1993). The purpose of this work is to investigate the impact that systematic errors in the stoichiometric calibration have on the proton range. Methods: Seven tissue inserts of the Gammex 467 phantom were imaged using our CT scanner. Their known chemical compositions (Watanabe 1999) were then used to calculate the theoretical RSPs, using the same formula as would be used for human tissues in the stoichiometric procedure. The actual RSPs of these inserts were measured using a Bragg peak shift measurement in the proton beam at our institution. Results: The theoretical calculation of the RSP was lower than the measured RSP values, by a mean/max error of -1.5/-3.6%. For all seven inserts the theoretical approach underestimated the RSP, with errors variable across the range of Hounsfield units. Systematic errors for lung (average of two inserts), adipose and cortical bone were -3.0/-2.1/-0.5%, respectively. Conclusion: There is a systematic underestimation caused by the theoretical calculation of RSP, a crucial step in the stoichiometric calibration procedure. As such, we propose that proton calibration curves should be based on measured RSPs. Investigations will be made to see if the same systematic errors exist for biological tissues. The impact of these differences on the range of proton beams, for phantoms and patient scenarios, will be investigated. This project was funded equally by the Engineering and Physical Sciences Research Council (UK) and Ion Beam Applications (Louvain-La-Neuve, Belgium).

  3. Reduction of influence of gain errors on performance of adaptive sub-ranging A/D converters with simplified architecture

    NASA Astrophysics Data System (ADS)

    Jedrzejewski, Konrad; Malkiewicz, Łukasz

    2016-09-01

The paper presents the results of studies pertaining to the influence of gain errors of inter-stage amplifiers on the performance of adaptive sub-ranging analog-to-digital converters (ADCs). It focuses on adaptive sub-ranging ADCs with a simplified architecture of the analog part, using only one amplifier and a low-resolution digital-to-analog converter, identical to that of known conventional sub-ranging ADCs. The only difference between adaptive sub-ranging ADCs with the simplified architecture and conventional sub-ranging ADCs is the process of determining the output codes of converted samples. The adaptive sub-ranging ADCs calculate the output codes on the basis of sub-codes obtained in particular stages of conversion, using an adaptive algorithm. Thanks to the application of an optimal adaptive algorithm, adjusted to the parameters of possible component imperfections and internal noises, the adaptive ADCs outperform, in terms of effective resolution per cycle, conventional sub-ranging ADCs that form the output codes using simple lower-level bit operations. Optimization of the conversion algorithm used in adaptive ADCs leads, however, to a high sensitivity of adaptive ADC performance to the inter-stage gain error. An effective method for reducing this sensitivity in adaptive sub-ranging ADCs with the simplified architecture is proposed and discussed in the paper.
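A toy two-stage model (all parameters invented) shows why the inter-stage gain matters: with conventional bit-concatenation code assembly, even a modest gain error in the residue amplifier produces widespread code errors. The adaptive schemes discussed in the paper aim to compensate for exactly this kind of imperfection:

```python
import numpy as np

def quantize(x, bits, vref=1.0):
    """Ideal uniform quantizer on [0, vref): integer output codes."""
    code = np.floor(x / vref * (1 << bits)).astype(int)
    return np.clip(code, 0, (1 << bits) - 1)

def subranging_adc(x, coarse_bits=4, fine_bits=4, gain_err=0.0, vref=1.0):
    """Conventional two-stage sub-ranging conversion: the coarse-stage
    residue is amplified by 2**coarse_bits * (1 + gain_err), and the
    output code is assembled by simple bit concatenation."""
    c = quantize(x, coarse_bits, vref)
    residue = x - c * vref / (1 << coarse_bits)
    f = quantize(residue * (1 << coarse_bits) * (1.0 + gain_err),
                 fine_bits, vref)
    return (c << fine_bits) | f

x = np.linspace(0.0, 1.0, 2001, endpoint=False)
ideal = quantize(x, 8)                                     # ideal 8-bit codes
mis0 = np.mean(subranging_adc(x) != ideal)                 # perfect gain
mis1 = np.mean(subranging_adc(x, gain_err=0.1) != ideal)   # 10% gain error
```

With perfect gain the two stages reproduce the ideal 8-bit transfer curve; a 10% gain error corrupts a large fraction of the codes, illustrating the sensitivity the paper sets out to reduce.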

  4. Post-manufacturing, 17-times acceptable raw bit error rate enhancement, dynamic codeword transition ECC scheme for highly reliable solid-state drives, SSDs

    NASA Astrophysics Data System (ADS)

    Tanakamaru, Shuhei; Fukuda, Mayumi; Higuchi, Kazuhide; Esumi, Atsushi; Ito, Mitsuyoshi; Li, Kai; Takeuchi, Ken

    2011-04-01

A dynamic codeword transition ECC scheme is proposed for highly reliable solid-state drives (SSDs). By monitoring the error count or the write/erase cycles, the ECC codeword dynamically increases from 512 Byte (+parity) to 1 KByte, 2 KByte, 4 KByte…32 KByte. The proposed ECC with a larger codeword decreases the failure rate after ECC. As a result, the acceptable raw bit error rate (BER) before ECC is enhanced. Assuming a NAND Flash memory which requires 8-bit correction in a 512 Byte codeword ECC, a 17-times higher acceptable raw BER than the conventional fixed 512 Byte codeword ECC is realized for the mobile phone application without interleaving. For the MP3 player, digital still camera and high-speed memory card applications with dual-channel interleaving, a 15-times higher acceptable raw BER is achieved. Finally, for the SSD application with 8-channel interleaving, a 13-times higher acceptable raw BER is realized. Because the ratio of user data to parity bits is the same in each ECC codeword, no additional memory area is required. Note that the reliability of the SSD is improved after manufacturing without cost penalty. Compared with a conventional ECC with a fixed large 32 KByte codeword, the proposed scheme achieves lower power consumption by introducing a "best-effort" type of operation. In the proposed scheme, during most of the lifetime of the SSD, a weak ECC with a shorter codeword such as 512 Byte (+parity), 1 KByte or 2 KByte is used, and 98% lower power consumption is realized. At the end of the SSD's life, a strong ECC with a 32 KByte codeword is used and highly reliable operation is achieved. The random read performance, estimated by the latency, is also discussed. The latency is below 1.5 ms for ECC codewords up to 32 KByte, which is below the 2 ms average latency of a 15,000 rpm HDD.

  5. Error Analysis of Combined Optical-Flow and Stereo Passive Ranging

    NASA Technical Reports Server (NTRS)

    Barniv, Yair

    1992-01-01

The motion of an imaging sensor causes each imaged point of the scene to describe a time trajectory on the image plane. The trajectories of all imaged points are reminiscent of a flow (e.g., of liquid), which is the source of the term "optical flow". Optical-flow ranging is a method by which the stream of two-dimensional images obtained from a forward-looking, forward-moving passive sensor is used to compute range to points in the field of view. Another well-known ranging method consists of triangulation based on stereo images obtained from at least two stationary sensors. In this paper we analyze the potential accuracies of a combined optical-flow and stereo passive-ranging system in the context of helicopter nap-of-the-earth obstacle avoidance. The Cramer-Rao lower bound (CRLB) is developed for the combined system under the assumption of a random angular misalignment common to both cameras of a stereo pair. It is shown that the range accuracy degradation caused by misalignment is negligible for a combined optical-flow and stereo system as compared with a monocular optical-flow system.
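For the stereo part, the familiar R = fB/d triangulation geometry already shows why range accuracy degrades quadratically with range, the regime in which combining the two cues pays off. The camera parameters below are invented for the example:

```python
import numpy as np

f_px = 800.0    # focal length in pixels (assumed)
B_m = 0.5       # stereo baseline in metres (assumed)
sigma_d = 0.2   # disparity measurement noise in pixels (assumed)

def stereo_range(d_px):
    """Triangulated range from disparity: R = f * B / d."""
    return f_px * B_m / d_px

R_true = 40.0
d_true = f_px * B_m / R_true          # 10 px disparity at 40 m

# First-order error propagation: sigma_R = R^2 / (f B) * sigma_d,
# i.e. the stereo range error grows quadratically with range.
sigma_R_pred = R_true ** 2 / (f_px * B_m) * sigma_d

# Monte Carlo check of the propagated value
rng = np.random.default_rng(2)
d_meas = d_true + rng.normal(0.0, sigma_d, 200_000)
sigma_R_mc = stereo_range(d_meas).std()
```

At 40 m with these parameters the predicted standard deviation is 0.8 m, and the Monte Carlo estimate agrees to within a few percent.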

  6. Building Energy Simulation Test for Existing Homes (BESTEST-EX): Instructions for Implementing the Test Procedure, Calibration Test Reference Results, and Example Acceptance-Range Criteria

    SciTech Connect

    Judkoff, R.; Polly, B.; Bianchi, M.; Neymark, J.; Kennedy, M.

    2011-08-01

This publication summarizes the Building Energy Simulation Test for Existing Homes (BESTEST-EX): instructions for implementing the test procedure, calibration test reference results, and example acceptance-range criteria.

  7. Pearson's distribution of type VII of the errors of satellite laser ranging data.

    NASA Astrophysics Data System (ADS)

    Dzhun', I. V.

    1991-06-01

The distribution of the differences O-C of laser ranging data is studied. It has been found that the actual distribution of these differences is better described by a Pearson curve of type VII than by the normal law, which may be explained by the variability of the observational conditions. Estimates of the parameter m of the Pearson law vary from 2.7 to ∞ (the normal law).
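The Pearson type VII family (a rescaled Student's t) has algebraic tails, so a small shape parameter such as the quoted m = 2.7 assigns far more probability to large O-C residuals than a unit-variance normal law. A quick numerical comparison of the tail mass beyond 3 standard deviations:

```python
import math

def pearson7_pdf(x, m, a):
    """Pearson type VII density:
    f(x) = Gamma(m) / (a*sqrt(pi)*Gamma(m - 1/2)) * (1 + (x/a)**2)**(-m)."""
    c = math.gamma(m) / (a * math.sqrt(math.pi) * math.gamma(m - 0.5))
    return c * (1.0 + (x / a) ** 2) ** (-m)

def normal_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def tail_mass(pdf, lo=3.0, hi=30.0, n=20000):
    """Approximate P(X > lo) with the trapezoid rule, truncating at hi."""
    h = (hi - lo) / n
    s = 0.5 * (pdf(lo) + pdf(hi))
    for i in range(1, n):
        s += pdf(lo + i * h)
    return s * h

m = 2.7                    # smallest shape estimate quoted in the abstract
a = math.sqrt(2 * m - 3)   # scale giving unit variance (valid for m > 1.5)
p7_tail = tail_mass(lambda x: pearson7_pdf(x, m, a))
n_tail = tail_mass(normal_pdf)
```

The Pearson VII tail mass comes out several times larger than the normal value of about 0.00135, which is why an outlier-prone O-C series fits the heavier-tailed law better.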

  8. Range Profile Specific Optimal Waveforms for Minimum Mean Square Error Estimation

    DTIC Science & Technology

    2010-01-01

requires as input the result of a single dwell. This is because the effect of changing a filter can be observed by simply processing the radar return...only be observed by transmitting another dwell. We have posited a joint measurement and adaptation process that assumes range-cell-specific waveforms...The observation resulting from using waveform s(l) is assumed to be given by ỹ(l) = Aᵀ(l)s(l) + ṽ(l) (1) where ṽ(l) is the measurement noise and A

  9. Perception of Acceptable Range of Smiles by Specialists, General Dentists and Lay Persons and Evaluation of Different Aesthetic Paradigms

    PubMed Central

    Saha, Mainak Kanti; Saha, Suparna Ganguly; Dubey, Sandeep; Saxena, Divya; Vijaywargiya, Neelam; Kala, Shubham

    2017-01-01

Introduction: One of the most important goals of restorative dentistry is to restore the patient's aesthetics. Smile analysis is subjective and differs from person to person. An aesthetic smile involves a harmonious relationship between various parameters, including the hard and soft tissues. Aim: The aim of the study was to identify the acceptable range of several smiles (alone and in conjunction with the face) as judged by specialists, general dentists and lay persons, and to identify the values of different criteria, i.e., the Golden Proportion (GP), the Recurrent Esthetic Dental proportion (RED), the Width to Height ratio (W/H ratio), the Apparent Contact Dimension (ACD), and lateral incisor position in a smile. Materials and Methods: A hundred photographs of 50 subjects were taken, 50 of the smile alone and 50 of the individual's frontal view of the face. The photographs of the smiles and the faces were assessed for aesthetic acceptability by 30 evaluators: 10 specialists with advanced training, 10 general dentists and 10 lay persons. Irreversible hydrocolloid impressions were made of the dentitions of all the individuals using stock trays and were poured in dental stone. Measurements were made on the facial surfaces of the teeth on the models and were recorded in millimetres using a sharp-tipped digital vernier calliper. The data were analyzed to evaluate the presence of the different parameters assessed in the smiles. Mean and standard deviation values for the percentage of only the agreeable smiles were calculated, both for individual smile analysis and in conjunction with the face. The non-agreeable smiles were excluded from further statistical analysis. The Pearson correlation coefficient was calculated to compare the values obtained in all three groups. Results: More smiles were considered agreeable by the general dentists than by the specialists, and the number increased further when evaluated by lay persons. Greater number of smiles was found

  10. TOPEX/POSEIDON Microwave Radiometer (TMR): III. Wet Troposphere Range Correction Algorithm and Pre-Launch Error Budget

    NASA Technical Reports Server (NTRS)

    Keihm, S. J.; Janssen, M. A.; Ruf, C. S.

    1993-01-01

    The sole mission function of the TOPEX/POSEIDON Microwave Radiometer (TMR) is to provide corrections for the altimeter range errors induced by the highly variable atmospheric water vapor content. The three TMR frequencies are shown to be near-optimum for measuring the vapor-induced path delay within an environment of variable cloud cover and variable sea surface flux background. After a review of the underlying physics relevant to the prediction of 5-40 GHz nadir-viewing microwave brightness temperatures, we describe the development of the statistical, iterative algorithm used for the TMR retrieval of path delay. Test simulations are presented which demonstrate the uniformity of algorithm performance over a range of cloud liquid and sea surface wind speed conditions...

  11. Assessment of long-range kinematic GPS positioning errors by comparison with airborne laser altimetry and satellite altimetry

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaohong; Forsberg, Rene

    2007-03-01

    Long-range airborne laser altimetry and laser scanning (LIDAR) or airborne gravity surveys in, for example, polar or oceanic areas require airborne kinematic GPS baselines of many hundreds of kilometers in length. In such instances, with the complications of ionospheric biases, it can be a real challenge for traditional differential kinematic GPS software to obtain reasonable solutions. In this paper, we will describe attempts to validate an implementation of the precise point positioning (PPP) technique on an aircraft without the use of a local GPS reference station. We will compare PPP solutions with other conventional GPS solutions, as well as with independent data by comparison of airborne laser data with “ground truth” heights. The comparisons involve two flights: A July 5, 2003, airborne laser flight line across the North Atlantic from Iceland to Scotland, and a May 24, 2004, flight in an area of the Arctic Ocean north of Greenland, near-coincident in time and space with the ICESat satellite laser altimeter. Both of these flights were more than 800 km long. Comparisons between different GPS methods and four different software packages do not suggest a clear preference for any one, with the heights generally showing decimeter-level agreement. For the comparison with the independent ICESat- and LIDAR-derived “ground truth” of ocean or sea-ice heights, the statistics of comparison show a typical fit of around 10 cm RMS in the North Atlantic, and 30 cm in the sea-ice region north of Greenland. Part of the latter 30 cm error is likely due to errors in the airborne LIDAR measurement and calibration, as well as errors in the “ground truth” ocean surfaces due to drifting sea-ice. Nevertheless, the potential of the PPP method for generating 10 cm level kinematic height positioning over long baselines is illustrated.

  12. Restraint of range walk error in a Geiger-mode avalanche photodiode lidar to acquire high-precision depth and intensity information.

    PubMed

    Xu, Lu; Zhang, Yu; Zhang, Yong; Yang, Chenghua; Yang, Xu; Zhao, Yuan

    2016-03-01

    There exists a range walk error in a Geiger-mode avalanche photodiode (Gm-APD) lidar because of the fluctuation in the number of signal photoelectrons. To restrain this range walk error, we propose a new returning-wave signal processing technique based on the Poisson probability response model and the Gaussian functions fitting method. High-precision depth and intensity information of the target at the distance of 5 m is obtained by a Gm-APD lidar using a 6 ns wide pulsed laser. The experiment results show that the range and intensity precisions are 1.2 cm and 0.015 photoelectrons, respectively.
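The walk itself follows from first-photon statistics: as the mean signal level grows, the first detection slides toward the leading edge of the return pulse. A sketch under a Poisson response model with an invented Gaussian pulse (~6 ns FWHM, echoing the paper's pulse width):

```python
import numpy as np

t = np.linspace(0.0, 30.0, 3000)       # time axis, ns
dt = t[1] - t[0]
sigma = 6.0 / 2.355                    # Gaussian pulse with ~6 ns FWHM
pulse = np.exp(-0.5 * ((t - 15.0) / sigma) ** 2)

def mean_detection_time(n_pe):
    """Mean first-photon detection time for a Gm-APD under a Poisson model:
    the rate lam(t) integrates to n_pe photoelectrons per shot, and the
    first-detection density is p(t) = lam(t) * exp(-Lambda(t))."""
    lam = n_pe * pulse / (pulse.sum() * dt)   # photoelectron rate, 1/ns
    Lam = np.cumsum(lam) * dt                 # integrated rate Lambda(t)
    p = lam * np.exp(-Lam)                    # first-detection density
    p = p / (p.sum() * dt)                    # condition on >= 1 detection
    return np.sum(t * p) * dt

# Range walk between a weak (0.5 pe) and a strong (10 pe) return
walk_ns = mean_detection_time(0.5) - mean_detection_time(10.0)
```

The mean detection time moves earlier by a few nanoseconds as the signal strengthens, which is the systematic shift the paper's Gaussian-fitting correction is designed to remove.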

  13. Public acceptance as a determinant of management strategies for bovine tuberculosis in free-ranging U.S. wildlife.

    PubMed

    Carstensen, Michelle; O'Brien, Daniel J; Schmitt, Stephen M

    2011-07-05

When bovine tuberculosis (bTB) is detected in free-ranging wildlife populations, preventing geographic spread and the establishment of a wildlife reservoir requires a rapid, often aggressive response. Public tolerance can exert a significant effect on the control measures available to managers, and thus on the success of disease management efforts. Separate outbreaks of bTB in free-ranging white-tailed deer (Odocoileus virginianus) in two midwestern states provide a case study. In Minnesota, bTB was first discovered in cattle in 2005 and subsequently in deer. To date, 12 beef cattle farms and 26 white-tailed deer have been found infected with the disease. From 2005 to 2008, disease prevalence in deer decreased from 0.4% (SE=0.2%) to <0.1% and remained confined to a small (<425 km(2)) geographic area. Deer population reduction through liberalized hunting and targeted culling by ground sharpshooting and aerial gunning, combined with a prohibition on baiting and recreational feeding, have likely been the major drivers preventing disease spread thus far. Without support from cattle producers, deer hunters and the general public, as well as politicians, implementation of these aggressive strategies by state and federal authorities would not have been possible. In contrast, Michigan first discovered bTB in free-ranging deer in 1975, and disease management efforts were not instituted until 1995. The first infected cattle herd was diagnosed in 1998. Since 1995, disease prevalence in free-ranging deer has decreased from 4.9% to 1.8% in the ∼ 1500 km(2) core outbreak area. Culture-positive deer have been found as far as 188 km from the core area. Liberalized harvest and restrictions on baiting and feeding have facilitated substantial reductions in prevalence. However, there has been little support on the part of hunters, farmers or the general public for more aggressive population reduction measures such as culling, and compliance with baiting and feeding

  14. Assessment of the accuracy of global geodetic satellite laser ranging observations and estimated impact on ITRF scale: estimation of systematic errors in LAGEOS observations 1993-2014

    NASA Astrophysics Data System (ADS)

    Appleby, Graham; Rodríguez, José; Altamimi, Zuheir

    2016-12-01

Satellite laser ranging (SLR) to the geodetic satellites LAGEOS and LAGEOS-2 uniquely determines the origin of the terrestrial reference frame and, jointly with very long baseline interferometry, its scale. Given such a fundamental role in satellite geodesy, it is crucial that any systematic errors in either technique be kept to an absolute minimum as efforts continue to realise the reference frame at millimetre levels of accuracy to meet present and future science requirements. Here, we examine the intrinsic accuracy of SLR measurements made by tracking stations of the International Laser Ranging Service using normal point observations of the two LAGEOS satellites in the period 1993 to 2014. The approach we investigate in this paper is to compute weekly reference frame solutions solving for satellite initial state vectors, station coordinates and daily Earth orientation parameters, estimating along with these a weekly average range error for each of the observing stations. Potential issues in any of the large number of SLR stations assumed to have been free of error in previous realisations of the ITRF may have been absorbed into the reference frame, primarily into station height. Likewise, systematic range errors estimated against a fixed frame that may itself suffer from accuracy issues will absorb network-wide problems into station-specific results. Our results suggest that over the past two decades the scale of the ITRF derived from the SLR technique has been close to 0.7 ppb too small, due to systematic errors in the range measurements, their treatment, or both. We discuss these results in the context of preparations for ITRF2014 and additionally consider the impact of this work on the currently adopted value of the geocentric gravitational constant, GM.
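For orientation, a reference-frame scale bias expressed in parts per billion maps to an equivalent radial station-height shift of roughly scale × R_Earth. A minimal sketch of this back-of-envelope conversion (the Earth-radius value is an assumed constant, not a figure from the paper):

```python
# Hedged sketch: convert a frame scale bias in parts per billion (ppb)
# into the equivalent radial station-height shift in millimetres.
R_EARTH_M = 6.371e6  # assumed mean Earth radius, metres

def scale_bias_to_height_mm(scale_ppb: float) -> float:
    """Equivalent station-height shift (mm) for a given scale bias (ppb)."""
    return scale_ppb * 1e-9 * R_EARTH_M * 1e3

# The ~0.7 ppb scale offset discussed above corresponds to a few millimetres.
print(round(scale_bias_to_height_mm(0.7), 2))  # 4.46
```

This is why sub-ppb scale accuracy matters for a frame intended to be stable at the millimetre level.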

  15. Passive ranging errors due to multipath distortion of deterministic transient signals with application to the localization of small arms fire

    NASA Astrophysics Data System (ADS)

    Ferguson, Brian G.; Lo, Kam W.

    2002-01-01

    A passive ranging technique based on wavefront curvature is used to estimate the ranges and bearings of impulsive sound sources represented by small arms fire. The discharge of a firearm results in the generation of a transient acoustic signal whose energy propagates radially outwards from the omnidirectional source. The radius of curvature of the spherical wavefront at any instant is equal to the instantaneous range from the source. The curvature of the acoustic wavefront is sensed with a three-microphone linear array by first estimating the differential time of arrival (or time delay) of the acoustic wavefront at each of the two adjacent sensor pairs and then processing the time-delay information to extract the range and bearing of the source. However, modeling the passive ranging performance of the wavefront curvature method for a deterministic transient signal source in a multipath environment shows that when the multipath and direct path arrivals are unresolvable, the time-delay estimates are biased which, in turn, biases the range estimates. The model explains the observed under-ranging of small arms firing positions during a field experiment.
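The three-microphone geometry described above admits a compact second-order ("wavefront curvature") solution: the sum of the two inter-sensor delays fixes the bearing, and their difference fixes the range. A minimal sketch of this standard approximation (sensor spacing, sound speed and source position are illustrative assumptions, not values from the paper):

```python
import math

C_SOUND = 343.0  # assumed speed of sound in air, m/s

def wavefront_curvature(tau12, tau23, d, c=C_SOUND):
    """Second-order wavefront-curvature estimate of source bearing (rad,
    from broadside) and range (m) for a 3-element linear array with
    spacing d; tau12, tau23 are the adjacent-pair time delays."""
    theta = math.asin(c * (tau12 + tau23) / (2.0 * d))       # sum -> bearing
    rng = d**2 * math.cos(theta)**2 / (c * (tau12 - tau23))  # difference -> range
    return rng, theta

# Synthetic check: exact delays for a source at 100 m, 20 deg off broadside,
# with 1 m sensor spacing (all values illustrative).
r_true, th_true, d = 100.0, math.radians(20.0), 1.0
x, y = r_true * math.sin(th_true), r_true * math.cos(th_true)
r1, r2, r3 = math.hypot(x + d, y), r_true, math.hypot(x - d, y)
tau12, tau23 = (r1 - r2) / C_SOUND, (r2 - r3) / C_SOUND
r_est, th_est = wavefront_curvature(tau12, tau23, d)
print(round(r_est, 1), round(math.degrees(th_est), 2))
```

Because the range depends on the small *difference* of the two delays, any multipath-induced bias in the delay estimates is strongly amplified in the range estimate, which is the mechanism behind the under-ranging reported above.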

  16. Passive ranging errors due to multipath distortion of deterministic transient signals with application to the localization of small arms fire.

    PubMed

    Ferguson, Brian G; Lo, Kam W

    2002-01-01

    A passive ranging technique based on wavefront curvature is used to estimate the ranges and bearings of impulsive sound sources represented by small arms fire. The discharge of a firearm results in the generation of a transient acoustic signal whose energy propagates radially outwards from the omnidirectional source. The radius of curvature of the spherical wavefront at any instant is equal to the instantaneous range from the source. The curvature of the acoustic wavefront is sensed with a three-microphone linear array by first estimating the differential time of arrival (or time delay) of the acoustic wavefront at each of the two adjacent sensor pairs and then processing the time-delay information to extract the range and bearing of the source. However, modeling the passive ranging performance of the wavefront curvature method for a deterministic transient signal source in a multipath environment shows that when the multipath and direct path arrivals are unresolvable, the time-delay estimates are biased which, in turn, biases the range estimates. The model explains the observed under-ranging of small arms firing positions during a field experiment.

  17. Personal digital assistants to collect tuberculosis bacteriology data in Peru reduce delays, errors, and workload, and are acceptable to users: cluster randomized controlled trial

    PubMed Central

    Blaya, Joaquín A.; Cohen, Ted; Rodríguez, Pablo; Kim, Jihoon; Fraser, Hamish S.F.

    2009-01-01

Objectives: To evaluate the effectiveness of a personal digital assistant (PDA)-based system for collecting tuberculosis test results and to compare this new system to the previous paper-based system. The PDA- and paper-based systems were evaluated based on processing times, frequency of errors, and number of work-hours expended by data collectors. Methods: We conducted a cluster randomized controlled trial in 93 health establishments in Peru. Baseline data were collected for 19 months. Districts (n = 4) were then randomly assigned to intervention (PDA) or control (paper) groups, and further data were collected for 6 months. Comparisons were made between intervention and control districts and within districts before and after the introduction of the intervention. Results: The PDA-based system had a significant effect on processing times (p < 0.001) and errors (p = 0.005). In the between-districts comparison, the median processing time for cultures was reduced from 23 to 8 days and for smears from 25 to 12 days. In that comparison, the proportion of cultures with delays >90 days was reduced from 9.2% to 0.1% and the number of errors was decreased by 57.1%. The intervention reduced the work-hours necessary to process results by 70% and was preferred by all users. Conclusions: A well-designed PDA-based system to collect data from institutions over a large, resource-poor area can significantly reduce delays, errors, and person-hours spent processing data. PMID:19097925

  18. Effects of diffraction by ionospheric electron density irregularities on the range error in GNSS dual-frequency positioning and phase decorrelation

    NASA Astrophysics Data System (ADS)

    Gherm, Vadim E.; Zernov, Nikolay N.; Strangeways, Hal J.

    2011-06-01

It can be important to determine the correlation of different frequency signals in L band that have followed transionospheric paths. In the future, both GPS and the new Galileo satellite system will broadcast three frequencies, enabling more advanced three-frequency correction schemes, so that knowledge of the correlations of different frequency pairs under scintillation conditions is desirable. Even at present, it would be helpful to know how dual-frequency Global Navigation Satellite Systems positioning can be affected by lack of correlation between the L1 and L2 signals. To treat this problem of signal correlation for the case of strong scintillation, a previously constructed simulator program, based on the hybrid method, has been further modified to simulate the fields for both frequencies on the ground, taking account of their cross correlation. The errors in the two-frequency range-finding method caused by scintillation have then been estimated for particular ionospheric conditions and for a realistic, fully three-dimensional model of the ionospheric turbulence. The results, presented for five different frequency pairs (L1/L2, L1/L3, L1/L5, L2/L3, and L2/L5), show the dependence of diffractional errors on the scintillation index S4: the errors depart further from a linear relationship the stronger the scintillation effects become, and may reach ten centimeters or more. The correlation of the phases at spaced frequencies has also been studied; the correlation coefficients for different pairs of frequencies depend on the procedure of phase retrieval and reduce slowly as both the variance of the electron density fluctuations and cycle slips increase.
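As a generic illustration of why diffraction matters here (a sketch, not the paper's simulator): the standard dual-frequency ionosphere-free combination cancels only the first-order refractive delay, which scales as 1/f²; diffraction effects do not obey that scaling and therefore survive the combination.

```python
# Generic sketch (assumed GPS L1/L2 carrier frequencies): the ionosphere-free
# pseudorange combination removes the first-order 1/f**2 delay exactly, so
# residual dual-frequency errors must come from effects (such as diffraction)
# that violate the 1/f**2 law.
F_L1, F_L2 = 1575.42e6, 1227.60e6  # GPS carrier frequencies, Hz

def iono_free(p1: float, p2: float, f1: float = F_L1, f2: float = F_L2) -> float:
    """Ionosphere-free combination (f1^2*p1 - f2^2*p2) / (f1^2 - f2^2)."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

rho = 2.0e7                               # geometric range, m (illustrative)
delay_l1 = 5.0                            # first-order iono delay on L1, m
p1 = rho + delay_l1
p2 = rho + delay_l1 * (F_L1 / F_L2) ** 2  # same delay scaled by 1/f**2
print(abs(iono_free(p1, p2) - rho) < 1e-6)  # True: first-order term cancels
```

A diffraction error added to p1 and p2 without the 1/f² scaling would pass through `iono_free` almost unchanged, which is the centimetre-level residual the abstract quantifies.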

  19. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  20. The influence of the oscillation angle and the neck anteversion of the prosthesis on the cup safe-zone that fulfills the criteria for range of motion in total hip replacements. The required oscillation angle for an acceptable cup safe-zone.

    PubMed

    Yoshimine, Fumihiro

    2005-01-01

A normal hip joint has more than 120 degrees of flexion. The reduced range of motion (ROM) of total hip arthroplasty leads to frequent prosthetic impingement, subluxation and dislocation. Prosthetic impingement may be more serious for metal-on-metal and ceramic-on-ceramic total hip prostheses (THP). A larger oscillation angle of the THP (OsA) and proper cup and neck positions produce a larger theoretical ROM of a patient's artificial hip joint. But what OsA is required, and what range of cup positions is kinetically acceptable, are not clearly understood. A ROM of more than 120 degrees flexion, 45 degrees internal rotation at 90 degrees flexion, 30 degrees extension and 40 degrees external rotation was defined as the severe criteria for an acceptable ROM. Theoretical cup safe-zones that fulfill the severe ROM criteria were derived mathematically for OsA values of 110, 120 and 135 degrees. The size of the cup safe-zone depends mainly on the size of the OsA. There is no cup safe-zone for an OsA of 110 degrees, an extremely small safe-zone for 120 degrees and an acceptable safe-zone for 135 degrees. Each THP has its own OsA, because the OsA is a function of head and neck diameter and cup design. An OsA of more than 135 degrees enlarges the safe-zone of the prosthetic position, and so extends the acceptable range of error that surgeons cannot avoid completely. However, few THPs with an OsA of more than 135 degrees are currently clinically available. Both surgeons and manufacturers must realize that the OsA is as essential as cup and neck orientations for ROM.

  1. Acceptance threshold theory can explain occurrence of homosexual behaviour

    PubMed Central

    Engel, Katharina C.; Männer, Lisa; Ayasse, Manfred; Steiger, Sandra

    2015-01-01

    Same-sex sexual behaviour (SSB) has been documented in a wide range of animals, but its evolutionary causes are not well understood. Here, we investigated SSB in the light of Reeve's acceptance threshold theory. When recognition is not error-proof, the acceptance threshold used by males to recognize potential mating partners should be flexibly adjusted to maximize the fitness pay-off between the costs of erroneously accepting males and the benefits of accepting females. By manipulating male burying beetles' search time for females and their reproductive potential, we influenced their perceived costs of making an acceptance or rejection error. As predicted, when the costs of rejecting females increased, males exhibited more permissive discrimination decisions and showed high levels of SSB; when the costs of accepting males increased, males were more restrictive and showed low levels of SSB. Our results support the idea that in animal species, in which the recognition cues of females and males overlap to a certain degree, SSB is a consequence of an adaptive discrimination strategy to avoid the costs of making rejection errors. PMID:25631226

  2. Acceptance threshold theory can explain occurrence of homosexual behaviour.

    PubMed

    Engel, Katharina C; Männer, Lisa; Ayasse, Manfred; Steiger, Sandra

    2015-01-01

    Same-sex sexual behaviour (SSB) has been documented in a wide range of animals, but its evolutionary causes are not well understood. Here, we investigated SSB in the light of Reeve's acceptance threshold theory. When recognition is not error-proof, the acceptance threshold used by males to recognize potential mating partners should be flexibly adjusted to maximize the fitness pay-off between the costs of erroneously accepting males and the benefits of accepting females. By manipulating male burying beetles' search time for females and their reproductive potential, we influenced their perceived costs of making an acceptance or rejection error. As predicted, when the costs of rejecting females increased, males exhibited more permissive discrimination decisions and showed high levels of SSB; when the costs of accepting males increased, males were more restrictive and showed low levels of SSB. Our results support the idea that in animal species, in which the recognition cues of females and males overlap to a certain degree, SSB is a consequence of an adaptive discrimination strategy to avoid the costs of making rejection errors.

  3. [CIRRNET® - learning from errors, a success story].

    PubMed

    Frank, O; Hochreutener, M; Wiederkehr, P; Staender, S

    2012-06-01

    CIRRNET® is the network of local error-reporting systems of the Swiss Patient Safety Foundation. The network has been running since 2006 together with the Swiss Society for Anaesthesiology and Resuscitation (SGAR), and network participants currently include 39 healthcare institutions from all four different language regions of Switzerland. Further institutions can join at any time. Local error reports in CIRRNET® are bundled at a supraregional level, categorised in accordance with the WHO classification, and analysed by medical experts. The CIRRNET® database offers a solid pool of data with error reports from a wide range of medical specialist's areas and provides the basis for identifying relevant problem areas in patient safety. These problem areas are then processed in cooperation with specialists with extremely varied areas of expertise, and recommendations for avoiding these errors are developed by changing care processes (Quick-Alerts®). Having been approved by medical associations and professional medical societies, Quick-Alerts® are widely supported and well accepted in professional circles. The CIRRNET® database also enables any affiliated CIRRNET® participant to access all error reports in the 'closed user area' of the CIRRNET® homepage and to use these error reports for in-house training. A healthcare institution does not have to make every mistake itself - it can learn from the errors of others, compare notes with other healthcare institutions, and use existing knowledge to advance its own patient safety.

  4. UGV acceptance testing

    NASA Astrophysics Data System (ADS)

    Kramer, Jeffrey A.; Murphy, Robin R.

    2006-05-01

With over 100 models of unmanned vehicles now available for military and civilian safety, security or rescue applications, it is important for agencies to establish acceptance testing. However, there appear to be no general guidelines for what constitutes a reasonable acceptance test. This paper describes i) a preliminary method for acceptance testing by a customer of the mechanical and electrical components of an unmanned ground vehicle system, ii) how it has been applied to a man-packable micro-robot, and iii) the value of testing both to ensure that the customer has a workable system and to improve design. The test method automated the operation of the robot to repeatedly exercise all aspects and combinations of components on the robot for 6 hours. The acceptance testing process uncovered many failures consistent with those shown to occur in the field, showing that testing by the user does predict failures. The process also demonstrated that testing by the manufacturer can provide important design data that can be used to identify, diagnose, and prevent long-term problems. Also, the structured testing environment showed that sensor systems can be used to predict errors and changes in performance, as well as to uncover unmodeled behavior in subsystems.

5. Empirical Tests of Acceptance Sampling Plans

    NASA Technical Reports Server (NTRS)

    White, K. Preston, Jr.; Johnson, Kenneth L.

    2012-01-01

Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics known to have binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that they provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than those of the corresponding attributes plans, dramatically so for instances of large skew. Tests further confirm that the published inverse-Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
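The attributes (binomial) plans mentioned above can be evaluated directly from the binomial distribution: inspect n items, accept the lot if at most c are defective, and read the Type I and Type II risks off the operating-characteristic curve. A minimal sketch (the plan parameters and quality levels below are illustrative, not the plans tested in the paper):

```python
from math import comb

def p_accept(n: int, c: int, p: float) -> float:
    """Probability a lot with fraction defective p passes an attributes
    single-sampling plan: inspect n items, accept if at most c defective."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Illustrative plan (n and c chosen for the example):
n, c = 80, 2
alpha = 1 - p_accept(n, c, 0.01)  # producer's (Type I) risk at p = 1%
beta = p_accept(n, c, 0.08)       # consumer's (Type II) risk at p = 8%
print(round(alpha, 3), round(beta, 3))
```

Variables plans achieve comparable risk pairs (alpha, beta) with smaller n by using the measured characteristic itself rather than a pass/fail count, which is the cost advantage the abstract refers to.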

  6. Reduction of Surface Errors over a Wide Range of Spatial Frequencies Using a Combination of Electrolytic In-Process Dressing Grinding and Magnetorheological Finishing

    NASA Astrophysics Data System (ADS)

    Kunimura, Shinsuke; Ohmori, Hitoshi

We present a rapid process for producing flat and smooth surfaces. In this technical note, a fabrication result for a carbon mirror is shown. Electrolytic in-process dressing (ELID) grinding with a metal-bonded abrasive wheel, then a metal-resin-bonded abrasive wheel, followed by a conductive-rubber-bonded abrasive wheel, and finally magnetorheological finishing (MRF) were performed as the first, second, third, and final steps, respectively, in this process. Flatness over the whole surface was improved by performing the first and second steps. After the third step, peak-to-valley (PV) and root-mean-square (rms) values in an area of 0.72 × 0.54 mm² on the surface were improved. These values were further improved after the final step, and a PV value of 10 nm and an rms value of 1 nm were obtained. Form errors and small surface irregularities such as surface waviness and micro-roughness were efficiently reduced by performing ELID grinding with the above three kinds of abrasive wheels because of the high removal rate of ELID grinding, and residual small irregularities were reduced by a short period of MRF. This process makes it possible to produce flat and smooth surfaces in several hours.

  7. ALTIMETER ERRORS,

    DTIC Science & Technology

CIVIL AVIATION, *ALTIMETERS, FLIGHT INSTRUMENTS, RELIABILITY, ERRORS, PERFORMANCE(ENGINEERING), BAROMETERS, BAROMETRIC PRESSURE, ATMOSPHERIC TEMPERATURE, ALTITUDE, CORRECTIONS, AVIATION SAFETY, USSR.

  8. Reliable long-range ensemble streamflow forecasts: Combining calibrated climate forecasts with a conceptual runoff model and a staged error model

    NASA Astrophysics Data System (ADS)

    Bennett, James C.; Wang, Q. J.; Li, Ming; Robertson, David E.; Schepen, Andrew

    2016-10-01

    We present a new streamflow forecasting system called forecast guided stochastic scenarios (FoGSS). FoGSS makes use of ensemble seasonal precipitation forecasts from a coupled ocean-atmosphere general circulation model (CGCM). The CGCM forecasts are post-processed with the method of calibration, bridging and merging (CBaM) to produce ensemble precipitation forecasts over river catchments. CBaM corrects biases and removes noise from the CGCM forecasts, and produces highly reliable ensemble precipitation forecasts. The post-processed CGCM forecasts are used to force the Wapaba monthly rainfall-runoff model. Uncertainty in the hydrological modeling is accounted for with a three-stage error model. Stage 1 applies the log-sinh transformation to normalize residuals and homogenize their variance; Stage 2 applies a conditional bias-correction to correct biases and help remove negative forecast skill; Stage 3 applies an autoregressive model to improve forecast accuracy at short lead-times and propagate uncertainty through the forecast. FoGSS generates ensemble forecasts in the form of time series for the coming 12 months. In a case study of two catchments, FoGSS produces reliable forecasts at all lead-times. Forecast skill with respect to climatology is evident to lead-times of about 3 months. At longer lead-times, forecast skill approximates that of climatology forecasts; that is, forecasts become like stochastic scenarios. Because forecast skill is virtually never negative at long lead-times, forecasts of accumulated volumes can be skillful. Forecasts of accumulated 12 month streamflow volumes are significantly skillful in several instances, and ensembles of accumulated volumes are reliable. We conclude that FoGSS forecasts could be highly useful to water managers.
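The Stage-1 log-sinh transformation mentioned above can be sketched in a few lines: z = log(sinh(a + b·y)) / b, with an exact inverse for mapping forecasts back to flow space. The parameters a and b below are illustrative assumptions, not values fitted in the study:

```python
import math

def log_sinh(y: float, a: float, b: float) -> float:
    """Log-sinh variance-stabilising transform z = log(sinh(a + b*y)) / b."""
    return math.log(math.sinh(a + b * y)) / b

def log_sinh_inv(z: float, a: float, b: float) -> float:
    """Inverse transform back to the original (flow) space."""
    return (math.asinh(math.exp(b * z)) - a) / b

a, b = 0.1, 0.01   # illustrative parameters (assumed, not fitted)
y = 250.0          # e.g. a monthly streamflow amount
z = log_sinh(y, a, b)
print(round(log_sinh_inv(z, a, b), 6))  # round-trips to 250.0
```

For small flows the transform behaves like a log (compressing large variance), while for large flows it becomes nearly linear, which is what allows a single Gaussian residual model across the full range of flows.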

  9. Acceptance speech.

    PubMed

    Yusuf, C K

    1994-01-01

I am proud and honored to accept this award on behalf of the Government of Bangladesh, and the millions of Bangladeshi children saved by oral rehydration solution. The Government of Bangladesh is grateful for this recognition of its commitment to international health and population research and cost-effective health care for all. The Government of Bangladesh has already made remarkable strides forward in the health and population sector, and this was recognized in UNICEF's 1993 "State of the World's Children". The national contraceptive prevalence rate, at 40%, is higher than that of many developed countries. It is appropriate that Bangladesh, where ORS was discovered, has the largest ORS production capacity in the world. It was remarkable that after the devastating cyclone in 1991, the country was able to produce enough ORS to meet the needs and remain self-sufficient. Similarly, Bangladesh has one of the most effective, flexible and efficient diarrheal disease control and epidemic response programs in the world. Throughout the country, doctors have been trained in diarrheal disease management, and stores of ORS are maintained ready for any outbreak. Despite grim predictions after the 1991 cyclone and the 1993 floods, relatively few people died from diarrheal disease. This is indicative of the strength of the national program. I want to take this opportunity to acknowledge the contribution of ICDDR, B and the important role it plays in supporting the Government's efforts in the health and population sector. The partnership between the Government of Bangladesh and ICDDR, B has already borne great fruit, and I hope and believe that it will continue to do so for many years in the future. Thank you.

  10. Freeform solar concentrator with a highly asymmetric acceptance cone

    NASA Astrophysics Data System (ADS)

    Wheelwright, Brian; Angel, J. Roger P.; Coughenour, Blake; Hammer, Kimberly

    2014-10-01

    A solar concentrator with a highly asymmetric acceptance cone is investigated. Concentrating photovoltaic systems require dual-axis sun tracking to maintain nominal concentration throughout the day. In addition to collecting direct rays from the solar disk, which subtends ~0.53 degrees, concentrating optics must allow for in-field tracking errors due to mechanical misalignment of the module, wind loading, and control loop biases. The angular range over which the concentrator maintains <90% of on-axis throughput is defined as the optical acceptance angle. Concentrators with substantial rotational symmetry likewise exhibit rotationally symmetric acceptance angles. In the field, this is sometimes a poor match with azimuth-elevation trackers, which have inherently asymmetric tracking performance. Pedestal-mounted trackers with low torsional stiffness about the vertical axis have better elevation tracking than azimuthal tracking. Conversely, trackers which rotate on large-footprint circular tracks are often limited by elevation tracking performance. We show that a line-focus concentrator, composed of a parabolic trough primary reflector and freeform refractive secondary, can be tailored to have a highly asymmetric acceptance angle. The design is suitable for a tracker with excellent tracking accuracy in the elevation direction, and poor accuracy in the azimuthal direction. In the 1000X design given, when trough optical errors (2mrad rms slope deviation) are accounted for, the azimuthal acceptance angle is +/- 1.65°, while the elevation acceptance angle is only +/-0.29°. This acceptance angle does not include the angular width of the sun, which consumes nearly all of the elevation tolerance at this concentration level. By decreasing the average concentration, the elevation acceptance angle can be increased. This is well-suited for a pedestal alt-azimuth tracker with a low cost slew bearing (without anti-backlash features).

  11. Errors in general practice: development of an error classification and pilot study of a method for detecting errors

    PubMed Central

    Rubin, G; George, A; Chinn, D; Richardson, C

    2003-01-01

    Objective: To describe a classification of errors and to assess the feasibility and acceptability of a method for recording staff reported errors in general practice. Design: An iterative process in a pilot practice was used to develop a classification of errors. This was incorporated in an anonymous self-report form which was then used to collect information on errors during June 2002. The acceptability of the reporting process was assessed using a self-completion questionnaire. Setting: UK general practice. Participants: Ten general practices in the North East of England. Main outcome measures: Classification of errors, frequency of errors, error rates per 1000 appointments, acceptability of the process to participants. Results: 101 events were used to create an initial error classification. This contained six categories: prescriptions, communication, appointments, equipment, clinical care, and "other" errors. Subsequently, 940 errors were recorded in a single 2 week period from 10 practices, providing additional information. 42% (397/940) were related to prescriptions, although only 6% (22/397) of these were medication errors. Communication errors accounted for 30% (282/940) of errors and clinical errors 3% (24/940). The overall error rate was 75.6/1000 appointments (95% CI 71 to 80). The method of error reporting was found to be acceptable by 68% (36/53) of respondents with only 8% (4/53) finding the process threatening. Conclusion: We have developed a classification of errors and described a practical and acceptable method for reporting them that can be used as part of the process of risk management. Errors are common and, although all have the potential to lead to an adverse event, most are administrative. PMID:14645760
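The reported rate and confidence interval are consistent with a simple binomial calculation: 940 errors at 75.6 per 1000 implies roughly 12,400 appointments (an inferred figure, not stated in the abstract), and a normal-approximation 95% CI then reproduces the published 71 to 80 range:

```python
# Hedged sketch: reconstruct the 95% CI for the error rate per 1000
# appointments via a normal approximation to the binomial. The number of
# appointments is inferred from the published rate, not given in the abstract.
import math

errors = 940
appointments = round(errors / 75.6 * 1000)  # inferred denominator
p = errors / appointments
se = math.sqrt(p * (1 - p) / appointments)
lo, hi = (p - 1.96 * se) * 1000, (p + 1.96 * se) * 1000
print(round(p * 1000, 1), round(lo, 1), round(hi, 1))
```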

  12. Software error detection

    NASA Technical Reports Server (NTRS)

    Buechler, W.; Tucker, A. G.

    1981-01-01

Several methods were employed to detect both the occurrence and source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in successfully finding the primary cause of error in 98% of over 500 system dumps.

  13. Acceptance of tinnitus: validation of the tinnitus acceptance questionnaire.

    PubMed

    Weise, Cornelia; Kleinstäuber, Maria; Hesser, Hugo; Westin, Vendela Zetterqvist; Andersson, Gerhard

    2013-01-01

    The concept of acceptance has recently received growing attention within tinnitus research due to the fact that tinnitus acceptance is one of the major targets of psychotherapeutic treatments. Accordingly, acceptance-based treatments will most likely be increasingly offered to tinnitus patients and assessments of acceptance-related behaviours will thus be needed. The current study investigated the factorial structure of the Tinnitus Acceptance Questionnaire (TAQ) and the role of tinnitus acceptance as mediating link between sound perception (i.e. subjective loudness of tinnitus) and tinnitus distress. In total, 424 patients with chronic tinnitus completed the TAQ and validated measures of tinnitus distress, anxiety, and depression online. Confirmatory factor analysis provided support to a good fit of the data to the hypothesised bifactor model (root-mean-square-error of approximation = .065; Comparative Fit Index = .974; Tucker-Lewis Index = .958; standardised root mean square residual = .032). In addition, mediation analysis, using a non-parametric joint coefficient approach, revealed that tinnitus-specific acceptance partially mediated the relation between subjective tinnitus loudness and tinnitus distress (path ab = 5.96; 95% CI: 4.49, 7.69). In a multiple mediator model, tinnitus acceptance had a significantly stronger indirect effect than anxiety. The results confirm the factorial structure of the TAQ and suggest the importance of a general acceptance factor that contributes important unique variance beyond that of the first-order factors activity engagement and tinnitus suppression. Tinnitus acceptance as measured with the TAQ is proposed to be a key construct in tinnitus research and should be further implemented into treatment concepts to reduce tinnitus distress.

  14. Medication Errors

    MedlinePlus

Links to related resources: U.S. Department of Health and Human Services; U.S. Food and Drug Administration; National Patient Safety Foundation; To Err is Human; Medication Errors: Quality Chasm Series; National Coordinating Council for Medication Error Reporting and Prevention.

  15. Error Analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary precision arithmetics or symbolic algebra programs. But this is unpractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
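The rounding behaviour described above is easy to reproduce; a minimal Python illustration (not from the chapter) showing that machine numbers cannot represent 0.1 exactly, and that a compensated summation avoids the accumulated drift:

```python
import math

# 0.1 has no exact binary representation, so repeated addition
# accumulates a small rounding error.
total = 0.0
for _ in range(10):
    total += 0.1

print(total == 1.0)       # False
print(abs(total - 1.0))   # on the order of 1e-16

# math.fsum tracks exact partial sums and returns the correctly
# rounded result, which here is exactly 1.0.
print(math.fsum([0.1] * 10) == 1.0)  # True
```

The error per step is tiny (about one part in 10^16), which is exactly why it only matters when many operations or ill-conditioned algorithms amplify it.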

  16. High acceptance recoil polarimeter

    SciTech Connect

    The HARP Collaboration

    1992-12-05

In order to detect neutrons and protons in the 50 to 600 MeV energy range and measure their polarization, an efficient, low-noise, self-calibrating device is being designed. This detector, known as the High Acceptance Recoil Polarimeter (HARP), is based on the recoil principle of proton detection from np→n′p′ or pp→p′p′ scattering, which intrinsically yields polarization information on the incoming particle. HARP will be commissioned to carry out experiments in 1994.

  17. Estimating patient specific uncertainty parameters for adaptive treatment re-planning in proton therapy using in vivo range measurements and Bayesian inference: application to setup and stopping power errors

    NASA Astrophysics Data System (ADS)

    Labarbe, Rudi; Janssens, Guillaume; Sterpin, Edmond

    2016-09-01

In proton therapy, quantification of the proton range uncertainty is important to achieve dose distribution compliance. The promising accuracy of prompt gamma imaging (PGI) suggests the development of a mathematical framework that uses the range measurements to convert population-based estimates of uncertainties into patient-specific estimates for the purpose of plan adaptation. We present here such a framework using Bayesian inference. The sources of uncertainty were modeled by three parameters: setup bias m, random setup precision r and water equivalent path length bias u. The evolution of the expectation values E(m), E(r) and E(u) during the treatment was simulated. The expectation values converged towards the true simulation parameters after 5 and 10 fractions for E(m) and E(u), respectively. E(r) settled at a constant value slightly lower than the true value after 10 fractions. In conclusion, the simulation showed that there is enough information in the frequency distribution of the range errors measured by PGI to estimate the expectation values and the confidence intervals of the model parameters by Bayesian inference. The updated model parameters were used to compute patient-specific lateral and local distal margins for adaptive re-planning.
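The per-fraction Bayesian updating idea can be sketched with a deliberately simplified model: a single setup-bias parameter m with a Gaussian population prior and assumed-known measurement noise (a conjugate-normal toy, not the authors' three-parameter framework; all numbers are invented):

```python
import random
random.seed(1)

true_bias = 1.2   # hypothetical setup bias m (mm)
noise_sd  = 2.0   # assumed-known per-fraction measurement noise (mm)

# Population-based prior on m: N(0, 3^2)
mu, var = 0.0, 9.0

for fraction in range(10):
    y = random.gauss(true_bias, noise_sd)   # one simulated range-error measurement
    # Conjugate normal update: precisions (inverse variances) add,
    # and the posterior mean is a precision-weighted average.
    post_var = 1.0 / (1.0 / var + 1.0 / noise_sd**2)
    mu = post_var * (mu / var + y / noise_sd**2)
    var = post_var

print(round(mu, 2), round(var, 3))  # E(m) drifts toward the true bias; variance shrinks
```

After 10 fractions the posterior variance is fixed at 1/(1/9 + 10/4) ≈ 0.38, illustrating how the confidence interval on the model parameter tightens with each measured fraction.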

  18. Towards error-free interaction.

    PubMed

    Tsoneva, Tsvetomira; Bieger, Jordi; Garcia-Molina, Gary

    2010-01-01

Human-machine interaction (HMI) relies on pattern recognition algorithms that are not perfect. To improve the performance and usability of these systems we can utilize the neural mechanisms in the human brain dealing with error awareness. This study aims at designing a practical error detection algorithm using electroencephalogram signals that can be integrated in an HMI system. Thus, real-time operation, customization, and operation convenience are important. We address these requirements in an experimental framework simulating machine errors. Our results confirm the presence of brain potentials related to processing of machine errors. These are used to implement an error detection algorithm emphasizing the differences in error processing on a per-subject basis. The proposed algorithm uses the individual best bipolar combination of electrode sites and requires short calibration. The single-trial error detection performance on six subjects, characterized by the area under the ROC curve, ranges from 0.75 to 0.98.
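The area under the ROC curve used here as the figure of merit can be computed directly from classifier scores via the rank (Mann-Whitney) statistic; a small sketch with hypothetical scores, not the study's data:

```python
def auc(pos, neg):
    """AUC as the probability that a randomly chosen error-trial score
    exceeds a randomly chosen correct-trial score, ties counting 1/2."""
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical detector scores: error trials (pos) vs normal trials (neg)
print(auc([0.9, 0.8, 0.4], [0.5, 0.3, 0.2]))  # 8/9 ~ 0.889
```

An AUC of 0.5 corresponds to chance-level detection and 1.0 to perfect separation, so the reported 0.75 to 0.98 range spans moderate to near-perfect single-trial performance.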

  19. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

Kanki, Barbara G.; Walter, Diane; Dulchinos, Vicki

    1997-01-01

A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  20. Offer/Acceptance Ratio.

    ERIC Educational Resources Information Center

    Collins, Mimi

    1997-01-01

    Explores how human resource professionals, with above average offer/acceptance ratios, streamline their recruitment efforts. Profiles company strategies with internships, internal promotion, cooperative education programs, and how to get candidates to accept offers. Also discusses how to use the offer/acceptance ratio as a measure of program…

  1. Consequences of leaf calibration errors on IMRT delivery

    NASA Astrophysics Data System (ADS)

    Sastre-Padro, M.; Welleweerd, J.; Malinen, E.; Eilertsen, K.; Olsen, D. R.; van der Heide, U. A.

    2007-02-01

    IMRT treatments using multi-leaf collimators may involve a large number of segments in order to spare the organs at risk. When a large proportion of these segments are small, leaf positioning errors may become relevant and have therapeutic consequences. The performance of four head and neck IMRT treatments under eight different cases of leaf positioning errors has been studied. Systematic leaf pair offset errors in the range of ±2.0 mm were introduced, thus modifying the segment sizes of the original IMRT plans. Thirty-six films were irradiated with the original and modified segments. The dose difference and the gamma index (with 2%/2 mm criteria) were used for evaluating the discrepancies between the irradiated films. The median dose differences were linearly related to the simulated leaf pair errors. In the worst case, a 2.0 mm error generated a median dose difference of 1.5%. Following the gamma analysis, two out of the 32 modified plans were not acceptable. In conclusion, small systematic leaf bank positioning errors have a measurable impact on the delivered dose and may have consequences for the therapeutic outcome of IMRT.
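The gamma index with 2%/2 mm criteria combines a dose-difference tolerance and a distance-to-agreement tolerance into one pass/fail metric per point (gamma ≤ 1 passes). A minimal 1-D sketch with global normalization (the study evaluated 2-D film; this simplification and all profile values are assumptions):

```python
import math

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.02, dta=2.0):
    """Per-point gamma for 1-D dose profiles: for each evaluated point,
    the minimum combined dose-difference / distance metric over all
    reference points. dd is a fraction of the max reference dose (2%),
    dta is in mm (2 mm). Positions are in mm."""
    d_norm = dd * max(ref_dose)
    gammas = []
    for xe, de in zip(eval_pos, eval_dose):
        best = min(
            math.sqrt(((de - dr) / d_norm) ** 2 + ((xe - xr) / dta) ** 2)
            for xr, dr in zip(ref_pos, ref_dose)
        )
        gammas.append(best)
    return gammas

# Identical profiles pass everywhere with gamma == 0.
pos = [0.0, 1.0, 2.0, 3.0]
dose = [1.0, 1.0, 0.5, 0.0]
print(gamma_1d(pos, dose, pos, dose))  # [0.0, 0.0, 0.0, 0.0]
```

Perturbing one evaluated dose by 1% of the maximum yields gamma = 0.5 at that point, i.e. halfway to the 2% tolerance, which is how sub-criterion discrepancies are graded.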

  2. Error estimates of elastic components in stress-dependent VTI media

    NASA Astrophysics Data System (ADS)

    Spikes, Kyle T.

    2014-09-01

    This work examines the ranges of physically acceptable elastic components for a vertical transversely isotropic (VTI) laboratory shale data set. A stochastic rock-physics approach combined with physically based acceptance and rejection criteria determined the ranges. The importance of this work is to demonstrate that multiple constrained models explain independently calculated measurement error bars. The data set consisted of pressure- and directional-dependent velocity measurements conducted on a low porosity, brine-saturated hard shale. Error bars were calculated for all five elastic stiffnesses and compliances as a function of pressure. The rock physics model is pressure dependent and represents simultaneously five elastic compliances for a VTI medium. A non-linear least squares fitting routine established a best-fit model to the five compliances at all pressures. Perturbations of the best-fit model provided the statistical parameter space. Twelve physical constraints or data-set-specific conditions comprised the acceptance/rejection criteria. These constraints and conditions included strain-energy requirements, inequalities among stiffnesses and anisotropy parameters, and rates of change of moduli with pressure. The largest number of rejected models resulted from violating a criterion relating a compressional and shear stiffness. Minimum misfits between the accepted models and the data illustrate that a fraction of the accepted models best explain the data. The misfits between these accepted models and data explain the error in the data and/or inhomogeneities at the measurement scale. The ranges of acceptable elastic component values and the corresponding uncertainty estimates could be incorporated into seismic-inversion, imaging, and velocity-modeling schemes.

  3. TU-C-BRE-07: Quantifying the Clinical Impact of VMAT Delivery Errors Relative to Prior Patients’ Plans and Adjusted for Anatomical Differences

    SciTech Connect

    Stanhope, C; Wu, Q; Yuan, L; Liu, J; Hood, R; Yin, F; Adamson, J

    2014-06-15

Purpose: There is increased interest in the Radiation Oncology Physics community regarding the sensitivity of pre-treatment IMRT/VMAT QA to delivery errors. Consequently, tools mapping pre-treatment QA to the patient DVH have been developed. However, the quantity of plan degradation that is acceptable remains uncertain. Using DVHs adapted from prior patients' plans, we developed a technique to determine the magnitude of various delivery errors required to degrade a treatment plan to outside the clinically accepted range. Methods: DVHs for relevant organs at risk were adapted from a population of prior patients' plans using a machine learning algorithm to establish the clinically acceptable DVH range specific to the patient's anatomy. We applied this technique to six low-risk prostate cancer patients treated with single-arc VMAT and compared error-induced DVH changes to the adapted DVHs to determine the magnitude of error required to push the plan outside of the acceptable range. The procedure follows: (1) Errors (systematic and random shifts of MLCs, gantry-MLC desynchronization, dose rate fluctuations, etc.) were simulated and degraded DVHs calculated using the Varian Eclipse TPS. (2) Adapted DVHs and acceptable ranges for DVHs were established. (3) Relevant dosimetric indices and corresponding acceptable ranges were calculated from the DVHs. Key indices included NTCP (Lyman-Kutcher-Burman model) and QUANTEC's dose-volume objectives of V75Gy ≤ 0.15 for the rectum and V75Gy ≤ 0.25 for the bladder. Results: Degradations to the clinical plan became "unacceptable" for 19±29 mm and 1.9±2.0 mm systematic outward shifts of a single leaf and leaf bank, respectively. All other simulated errors fell within the acceptable range. Conclusion: Utilizing machine learning and prior patients' plans one can predict a clinically acceptable range of DVH degradation for a specific patient. Comparing error-induced DVH degradations to this range, it is shown that single

  4. LIMS user acceptance testing.

    PubMed

    Klein, Corbett S

    2003-01-01

    Laboratory Information Management Systems (LIMS) play a key role in the pharmaceutical industry. Thorough and accurate validation of such systems is critical and is a regulatory requirement. LIMS user acceptance testing is one aspect of this testing and enables the user to make a decision to accept or reject implementation of the system. This paper discusses key elements in facilitating the development and execution of a LIMS User Acceptance Test Plan (UATP).

  5. On Maximum FODO Acceptance

    SciTech Connect

    Batygin, Yuri Konstantinovich

    2014-12-24

This note illustrates the maximum acceptance of a FODO quadrupole focusing channel. Acceptance is the largest Floquet ellipse of a matched beam: A = a²/β_max, where a is the aperture of the channel and β_max is the largest value of the beta-function in the channel. If the aperture of the channel is restricted by a circle of radius a, the s-s acceptance is available for particles oscillating at the median plane, y=0. Particles outside the median plane will occupy a smaller phase space area. In the x-y plane, the cross section of the accepted beam has the shape of an ellipse with truncated boundaries.
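With the formula above, evaluating the acceptance for assumed channel parameters is a one-liner; both numbers below are invented for illustration:

```python
# Acceptance A = a^2 / beta_max for hypothetical channel parameters.
a = 0.01          # aperture radius (m), assumed value
beta_max = 2.5    # largest beta-function in the channel (m), assumed value

A = a**2 / beta_max
print(A)  # ~4e-05 m*rad, i.e. 40 mm*mrad
```

Because A scales as a², halving the aperture cuts the acceptance by a factor of four, while lowering β_max (stronger, more frequent focusing) increases it only linearly.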

  6. Comparison of analytical error and sampling error for contaminated soil.

    PubMed

    Gustavsson, Björn; Luthbom, Karin; Lagerkvist, Anders

    2006-11-16

Investigation of soil from contaminated sites requires several sample handling steps that, most likely, will induce uncertainties in the sample. The theory of sampling describes seven sampling errors that can be calculated, estimated or discussed in order to get an idea of the size of the sampling uncertainties. With the aim of comparing the size of the analytical error to the total sampling error, these seven errors were applied, estimated and discussed in a case study of a contaminated site. The manageable errors were summarized, showing a range of three orders of magnitude between the examples. The comparisons show that the quotient between the total sampling error and the analytical error is larger than 20 in most calculation examples. Exceptions were samples taken in hot spots, where some components of the total sampling error get small and the analytical error gets large in comparison. Low concentration of contaminant, small extracted sample size and large particles in the sample contribute to the extent of uncertainty.
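Independent error components combine in quadrature (their variances add), which is how a total sampling error and the quotient against the analytical error can be formed. The component values below are invented placeholders, not the study's estimates:

```python
import math

# Hypothetical relative standard deviations (as fractions) for
# sampling-error components; independent errors add in variance.
components = {
    "fundamental": 0.30,
    "grouping/segregation": 0.25,
    "delimitation": 0.20,
    "extraction": 0.15,
    "preparation": 0.10,
}
analytical = 0.02  # hypothetical relative analytical error

total_sampling = math.sqrt(sum(v**2 for v in components.values()))
print(round(total_sampling, 3))               # ~0.474
print(round(total_sampling / analytical, 1))  # quotient >> 20, as in the study
```

The quadrature sum is dominated by the largest components, which is why tightening the analytical method alone barely changes the total uncertainty.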

  7. North error estimation based on solar elevation errors in the third step of sky-polarimetric Viking navigation.

    PubMed

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor

    2016-07-01

    The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.

  8. North error estimation based on solar elevation errors in the third step of sky-polarimetric Viking navigation

    NASA Astrophysics Data System (ADS)

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor

    2016-07-01

    The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.

  9. Correcting numerical integration errors caused by small aliasing errors

    SciTech Connect

    Smallwood, D.O.

    1997-11-01

    Small sampling errors can have a large effect on numerically integrated waveforms. An example is the integration of acceleration to compute velocity and displacement waveforms. These large integration errors complicate checking the suitability of the acceleration waveform for reproduction on shakers. For waveforms typically used for shaker reproduction, the errors become significant when the frequency content of the waveform spans a large frequency range. It is shown that these errors are essentially independent of the numerical integration method used, and are caused by small aliasing errors from the frequency components near the Nyquist frequency. A method to repair the integrated waveforms is presented. The method involves using a model of the acceleration error, and fitting this model to the acceleration, velocity, and displacement waveforms to force the waveforms to fit the assumed initial and final values. The correction is then subtracted from the acceleration before integration. The method is effective where the errors are isolated to a small section of the time history. It is shown that the common method to repair these errors using a high pass filter is sometimes ineffective for this class of problem.
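The repair strategy, modeling the acceleration error and subtracting it before integrating, can be shown in miniature with a constant-bias error model (a toy illustration; the paper fits a more elaborate error model to the acceleration, velocity, and displacement waveforms):

```python
import math

N, dt = 1000, 0.001
t = [i * dt for i in range(N)]

# True acceleration is a pure 5 Hz sine; a small constant bias stands in
# for the low-frequency error introduced by aliasing near Nyquist.
bias = 0.05
acc = [math.sin(2 * math.pi * 5 * ti) + bias for ti in t]

def integrate(y, dt):
    """Cumulative trapezoidal integration."""
    out = [0.0]
    for i in range(1, len(y)):
        out.append(out[-1] + 0.5 * (y[i] + y[i - 1]) * dt)
    return out

vel_raw = integrate(acc, dt)          # drifts linearly because of the bias

# Fit the assumed error model (here just a constant) and subtract it
# from the acceleration before integrating.
mean_a = sum(acc) / len(acc)
vel_fixed = integrate([a - mean_a for a in acc], dt)

print(abs(vel_raw[-1]), abs(vel_fixed[-1]))  # ~0.05 drift vs near zero
```

The point mirrors the abstract: the integration method is not at fault; a tiny error in the integrand, once integrated, dominates the waveform unless it is modeled and removed first.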

  10. Newbery Medal Acceptance.

    ERIC Educational Resources Information Center

    Freedman, Russell

    1988-01-01

    Presents the Newbery Medal acceptance speech of Russell Freedman, writer of children's nonfiction. Discusses the place of nonfiction in the world of children's literature, the evolution of children's biographies, and the author's work on "Lincoln." (ARH)

  11. Effectiveness and relevance of MR acceptance testing: results of an 8 year audit.

    PubMed

    McRobbie, D W; Quest, R A

    2002-06-01

    The effectiveness and relevance of independent acceptance testing was assessed by means of an audit of acceptance procedures for 17 MRI systems, with field strengths in the range 0.5-1.5 T, acquired over 8 years. Signal-to-noise ratio and geometric linearity were found to be the image quality parameters most likely to fall below acceptable or expected standards. These received confirmed successful corrective action in 69% of instances. Non-uniformity, ghosting and poor fat suppression were the next most common non-compliant parameters, but yielded less satisfactory outcomes. Spatial resolution was not found to be a sensitive parameter in determining acceptability. 49% of all non-compliant parameters received verifiable corrective attention. A schedule of actual acceptance criteria is presented and shown to be reasonable. Parameter failure rates were shown not to have improved with time. A safety audit of 11 of the installations revealed the most common failings to be inadequate suite layout and poor use of signs. The mean number of safety issues per installation identified as requiring attention was 5, from a questionnaire of 100 points. A number of anecdotal errors and omissions are reported. The data support the importance of an appropriate acceptance procedure for new clinical MRI equipment and for the involvement of a suitably qualified safety adviser on the project team from the outset.

  12. Bayesian Error Estimation Functionals

    NASA Astrophysics Data System (ADS)

    Jacobsen, Karsten W.

The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous different approximations of varying accuracy on different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by the generation of not only a single functional but through the construction of a probability distribution of functionals represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heat of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.
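The ensemble idea is simple to illustrate: evaluate the same quantity with every member of the functional ensemble and take the spread as the error bar on the best-fit functional's prediction. The energies below are invented placeholders, not BEEF results:

```python
# Hypothetical adsorption energies (eV) predicted for one system by
# members of a BEEF-style functional ensemble; the ensemble spread
# serves as the error estimate.
ensemble = [0.52, 0.48, 0.61, 0.44, 0.55, 0.50, 0.58, 0.47]

n = len(ensemble)
mean = sum(ensemble) / n
std = (sum((e - mean) ** 2 for e in ensemble) / n) ** 0.5

print(f"{mean:.3f} +/- {std:.3f} eV")
```

A key design point is that the ensemble is constructed so its spread reproduces the actual deviations from the fitting data, so the error bar is calibrated rather than merely a sensitivity measure.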

  13. The Cline of Errors in the Writing of Japanese University Students

    ERIC Educational Resources Information Center

    French, Gary

    2005-01-01

In this study, errors in the English writing of students in the College of World Englishes at Chukyo University, Japan are examined to determine if there is a level of acceptance among teachers. If there is, are these errors becoming part of an accepted, standardized Japanese English? Results show there is little acceptance of third person…

  14. Accepting space radiation risks.

    PubMed

    Schimmerling, Walter

    2010-08-01

    The human exploration of space inevitably involves exposure to radiation. Associated with this exposure are multiple risks, i.e., probabilities that certain aspects of an astronaut's health or performance will be degraded. The management of these risks requires that such probabilities be accurately predicted, that the actual exposures be verified, and that comprehensive records be maintained. Implicit in these actions is the fact that, at some point, a decision has been made to accept a certain level of risk. This paper examines ethical and practical considerations involved in arriving at a determination that risks are acceptable, roles that the parties involved may play, and obligations arising out of reliance on the informed consent paradigm seen as the basis for ethical radiation risk acceptance in space.

  15. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests.

  16. Why was Relativity Accepted?

    NASA Astrophysics Data System (ADS)

    Brush, S. G.

Historians of science have published many studies of the reception of Einstein's special and general theories of relativity. Based on a review of these studies, and my own research on the role of the light-bending prediction in the reception of general relativity, I discuss the role of three kinds of reasons for accepting relativity: (1) empirical predictions and explanations; (2) social-psychological factors; and (3) aesthetic-mathematical factors. According to the historical studies, acceptance was a three-stage process. First, a few leading scientists adopted the special theory for aesthetic-mathematical reasons. In the second stage, their enthusiastic advocacy persuaded other scientists to work on the theory and apply it to problems currently of interest in atomic physics. The special theory was accepted by many German physicists by 1910 and had begun to attract some interest in other countries. In the third stage, the confirmation of Einstein's light-bending prediction attracted much public attention and forced all physicists to take the general theory of relativity seriously. In addition to light-bending, the explanation of the advance of Mercury's perihelion was considered strong evidence by theoretical physicists. The American astronomers who conducted successful tests of general relativity became defenders of the theory. There is little evidence that relativity was 'socially constructed' but its initial acceptance was facilitated by the prestige and resources of its advocates.

  17. Approaches to acceptable risk

    SciTech Connect

    Whipple, C.

    1997-04-30

Several alternative approaches to address the question "How safe is safe enough?" are reviewed and an attempt is made to apply the reasoning behind these approaches to the issue of acceptability of radiation exposures received in space. The approaches to the issue of the acceptability of technological risk described here are primarily analytical, and are drawn from examples in the management of environmental health risks. These include risk-based approaches, in which specific quantitative risk targets determine the acceptability of an activity, and cost-benefit and decision analysis, which generally focus on the estimation and evaluation of risks, benefits and costs, in a framework that balances these factors against each other. These analytical methods tend by their quantitative nature to emphasize the magnitude of risks, costs and alternatives, and to downplay other factors, especially those that are not easily expressed in quantitative terms, that affect acceptance or rejection of risk. Such other factors include the issues of risk perceptions and how and by whom risk decisions are made.

  18. Improved ranging systems

    NASA Technical Reports Server (NTRS)

    Young, Larry E.

    1989-01-01

    Spacecraft range measurements have provided the most accurate tests, to date, of some relativistic gravitational parameters, even though the measurements were made with ranging systems having error budgets of about 10 meters. Technology is now available to allow an improvement of two orders of magnitude in the accuracy of spacecraft ranging. The largest gains in accuracy result from the replacement of unstable analog components with high speed digital circuits having precisely known delays and phase shifts.

  19. Structure and dating errors in the geologic time scale and periodicity in mass extinctions

    NASA Technical Reports Server (NTRS)

    Stothers, Richard B.

    1989-01-01

    Structure in the geologic time scale reflects a partly paleontological origin. As a result, ages of Cenozoic and Mesozoic stage boundaries exhibit a weak 28-Myr periodicity that is similar to the strong 26-Myr periodicity detected in mass extinctions of marine life by Raup and Sepkoski. Radiometric dating errors in the geologic time scale, to which the mass extinctions are stratigraphically tied, do not necessarily lessen the likelihood of a significant periodicity in mass extinctions, but do spread the acceptable values of the period over the range 25-27 Myr for the Harland et al. time scale or 25-30 Myr for the DNAG time scale. If the Odin time scale is adopted, acceptable periods fall between 24 and 33 Myr, but are not robust against dating errors. Some indirect evidence from independently-dated flood-basalt volcanic horizons tends to favor the Odin time scale.

  20. Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs

    NASA Astrophysics Data System (ADS)

    Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken

    2015-09-01

    To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.

  1. A fourier analysis on the maximum acceptable grid size for discrete proton beam dose calculation.

    PubMed

    Li, Haisen S; Romeijn, H Edwin; Dempsey, James F

    2006-09-01

    The orientation of the beam with respect to the dose grid was also investigated. The maximum acceptable dose grid size depends on the gradient of the dose profile and, in turn, on the range of the proton beam. When only phantom scattering was considered and the beam was aligned with the dose grid, grid sizes from 0.4 to 6.8 mm were required for proton beams with ranges from 2 to 30 cm to keep the error at the Bragg peak point within 2%. A near-linear relation between the maximum acceptable grid size and beam range was observed. For this analysis model, the resolution requirement was not significantly related to the orientation of the beam with respect to the grid.

  2. Error Field Correction in ITER

    SciTech Connect

    Park, Jong-kyu; Boozer, Allen H.; Menard, Jonathan E.; Schaffer, Michael J.

    2008-05-22

    A new method for correcting magnetic field errors in the ITER tokamak is developed using the Ideal Perturbed Equilibrium Code (IPEC). The dominant external magnetic field for driving islands is shown to be localized to the outboard midplane for three ITER equilibria that represent the projected range of operational scenarios. The coupling matrices between the poloidal harmonics of the external magnetic perturbations and the resonant fields on the rational surfaces that drive islands are combined for different equilibria and used to determine an ordered list of the dominant errors in the external magnetic field. It is found that efficient and robust error field correction is possible with a fixed setting of the correction currents relative to the currents in the main coils across the range of ITER operating scenarios that was considered.

  3. Acceptability of human risk.

    PubMed

    Kasperson, R E

    1983-10-01

    This paper has three objectives: to explore the nature of the problem implicit in the term "risk acceptability," to examine the possible contributions of scientific information to risk standard-setting, and to argue that societal response is best guided by considerations of process rather than formal methods of analysis. Most technological risks are not accepted but are imposed. There is also little reason to expect consensus among individuals on their tolerance of risk. Moreover, debates about risk levels are often at base debates over the adequacy of the institutions which manage the risks. Scientific information can contribute three broad types of analyses to risk-setting deliberations: contextual analysis, equity assessment, and public preference analysis. More effective risk-setting decisions will involve attention to the process used, particularly in regard to the requirements of procedural justice and democratic responsibility.

  4. Acceptability of human risk.

    PubMed Central

    Kasperson, R E

    1983-01-01

    This paper has three objectives: to explore the nature of the problem implicit in the term "risk acceptability," to examine the possible contributions of scientific information to risk standard-setting, and to argue that societal response is best guided by considerations of process rather than formal methods of analysis. Most technological risks are not accepted but are imposed. There is also little reason to expect consensus among individuals on their tolerance of risk. Moreover, debates about risk levels are often at base debates over the adequacy of the institutions which manage the risks. Scientific information can contribute three broad types of analyses to risk-setting deliberations: contextual analysis, equity assessment, and public preference analysis. More effective risk-setting decisions will involve attention to the process used, particularly in regard to the requirements of procedural justice and democratic responsibility. PMID:6418541

  5. Acceptance Test Plan.

    DTIC Science & Technology

    2014-09-26

    Acceptance test plan for special reliability tests of a broadband microwave amplifier panel. Prepared by David C. Kraus, Reliability Engineer, Westinghouse Defense and Electronics Center (Development and Operations Division), Baltimore, MD; monitoring organization: Naval Research Laboratory.

  6. Leaf position error during conformal dynamic arc and intensity modulated arc treatments.

    PubMed

    Ramsey, C R; Spencer, K M; Alhakeem, R; Oliver, A L

    2001-01-01

    Conformal dynamic arc (CD-ARC) and intensity modulated arc treatments (IMAT) are both treatment modalities where the multileaf collimator (MLC) can change leaf position dynamically during gantry rotation. These treatment techniques can be used to generate complex isodose distributions, similar to those used in fixed-gantry intensity modulation. However, a beam-hold delay cannot be used during CD-ARC or IMAT treatments to reduce spatial error. Consequently, a certain amount of leaf position error must be accepted in order to make the treatment deliverable. Measurements of leaf position accuracy were taken with leaf velocities ranging from 0.3 to 3.0 cm/s. The average and maximum leaf position errors were measured, and a least-squares linear regression analysis was performed on the measured data to determine the MLC velocity error coefficient. The average position errors ranged from 0.03 to 0.21 cm, with the largest deviations occurring at the maximum achievable leaf velocity (3.0 cm/s). The measured MLC velocity error coefficient was 0.0674 s for a collimator rotation of 0 degrees and 0.0681 s for a collimator rotation of 90 degrees. The difference in leaf position error between the 0 degree and 90 degree collimator rotations was within statistical uncertainty. A simple formula based on these results was developed for estimating the velocity-dependent dosimetric error. Using this technique, a dosimetric error index for plan evaluation can be calculated from the treatment time and the dynamic MLC leaf controller file.
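    The velocity-dependent error model described above can be sketched as a least-squares fit through the origin. The measurements below are hypothetical values chosen only to be consistent with the reported 0.03-0.21 cm error range, not the paper's data:

```python
import numpy as np

# Hypothetical measurements (not the paper's data): leaf velocities (cm/s)
# and average observed position errors (cm), spanning the reported
# 0.03-0.21 cm range over 0.3-3.0 cm/s.
velocity = np.array([0.3, 0.6, 1.0, 1.5, 2.0, 2.5, 3.0])
pos_error = np.array([0.03, 0.05, 0.08, 0.11, 0.14, 0.17, 0.21])

# Least-squares fit of error = k * velocity through the origin; the slope
# k plays the role of the MLC velocity error coefficient (seconds).
k, _, _, _ = np.linalg.lstsq(velocity.reshape(-1, 1), pos_error, rcond=None)
k = float(k[0])

def estimated_error(v_cm_per_s, coeff=k):
    """Estimated position error (cm) at a given leaf velocity, as might be
    read from a dynamic MLC leaf controller file."""
    return coeff * v_cm_per_s
```

    With these illustrative numbers the fitted coefficient comes out near 0.07 s, the same order as the 0.0674 s and 0.0681 s values reported in the abstract.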

  7. Age and Acceptance of Euthanasia.

    ERIC Educational Resources Information Center

    Ward, Russell A.

    1980-01-01

    Study explores relationship between age (and sex and race) and acceptance of euthanasia. Women and non-Whites were less accepting because of religiosity. Among older people less acceptance was attributable to their lesser education and greater religiosity. Results suggest that quality of life in old age affects acceptability of euthanasia. (Author)

  8. Passive infrared ranging

    NASA Astrophysics Data System (ADS)

    Leonpacher, N. K.

    1983-12-01

    The range of an infrared source was estimated by analyzing the atmospheric absorption by CO2 in several wavelength intervals of its spectrum. These bandpasses were located at the edge of the CO2 absorption band near 2300 cm⁻¹ (4.3 μm). A specific algorithm to predict range was determined based on numerous computer-generated spectra. When tested with these spectra, range estimates within 0.8 km were obtained for ranges between 0 and 18 km. Accuracy decreased when actual source spectra were tested. Although actual spectra were available only for ranges to 5 km, 63% of these spectra resulted in range estimates that were within 1.6 km of the actual range. Specific spectral conditions that affected the range predictions were found. Methods to correct the deficiencies were discussed. Errors from atmospheric variations, and the effects of background noise, were also investigated. Limits on accuracy and range resolution were determined.

  9. Field error lottery

    SciTech Connect

    Elliott, C. J.; McVey, B.; Quimby, D. C.

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  10. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  11. Baby-Crying Acceptance

    NASA Astrophysics Data System (ADS)

    Martins, Tiago; de Magalhães, Sérgio Tenreiro

    A baby's crying is its most important means of communication. The crying-monitoring devices developed to date do not ensure the complete safety of the child. These technological resources must therefore be coupled with means of communicating the results to the caregiver, which would involve digital processing of the information available in the crying. The survey carried out made it possible to assess the level of adoption, in the continental territory of Portugal, of a technology able to perform such digital processing. The Technology Acceptance Model (TAM) was used as the theoretical framework. The statistical analysis showed that there is a good probability of acceptance of such a system.

  12. Programming Errors in APL.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  13. Acceptability of Emission Offsets

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  14. Reduction of Maintenance Error Through Focused Interventions

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  15. Empathy and error processing.

    PubMed

    Larson, Michael J; Fair, Joseph E; Good, Daniel A; Baldwin, Scott A

    2010-05-01

    Recent research suggests a relationship between empathy and error processing. Error processing is an evaluative control function that can be measured using post-error response time slowing and the error-related negativity (ERN) and post-error positivity (Pe) components of the event-related potential (ERP). Thirty healthy participants completed two measures of empathy, the Interpersonal Reactivity Index (IRI) and the Empathy Quotient (EQ), and a modified Stroop task. Post-error slowing was associated with increased empathic personal distress on the IRI. ERN amplitude was related to overall empathy score on the EQ and the fantasy subscale of the IRI. The Pe and measures of empathy were not related. Results remained consistent when negative affect was controlled via partial correlation, with an additional relationship between ERN amplitude and empathic concern on the IRI. Findings support a connection between empathy and error processing mechanisms.

  16. Burst error correction extensions for large Reed Solomon codes

    NASA Technical Reports Server (NTRS)

    Owsley, P.

    1990-01-01

    Reed Solomon codes are powerful error correcting codes that include some of the best random and burst correcting codes currently known. It is well known that an (n,k) Reed Solomon code can correct up to (n - k)/2 errors. Many applications utilizing Reed Solomon codes require corrections of errors consisting primarily of bursts. In this paper, it is shown that the burst correcting ability of Reed Solomon codes can be increased beyond (n - k)/2 with an acceptable probability of miscorrection.

  17. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  18. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
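    The run-twice-and-compare idea can be sketched in a few lines. This is an illustration of the concept only, not the patented implementation; the SHA-256 loop is an assumed stand-in for a compute-heavy, deterministic stress algorithm:

```python
import hashlib

def stress_kernel(seed: int, iterations: int = 100_000) -> str:
    """A deterministic, compute-heavy loop intended to keep the processor
    busy (and warm); returns a digest of the final state."""
    h = hashlib.sha256(str(seed).encode())
    for _ in range(iterations):
        h = hashlib.sha256(h.digest())
    return h.hexdigest()

def detect_hardware_error(seed: int = 42) -> bool:
    """Run the same algorithm twice and compare outputs. On fault-free
    hardware the digests must match, so any mismatch flags a hardware
    error that occurred somewhere during one of the runs."""
    return stress_kernel(seed) != stress_kernel(seed)
```

    Because the algorithm is deterministic, a single differing bit anywhere in either run propagates into the final digest, which is what makes the end-of-run comparison sensitive to transient faults.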

  19. Predictability in the extended range

    NASA Technical Reports Server (NTRS)

    Roads, John O.

    1987-01-01

    This paper describes the results of extended range predictability experiments using an efficient two-level spherical quasi-geostrophic model. The experiments have an initial rms-error doubling time of about two days. This growth rate, along with an initial error of about one-half the initial error of present operational models, produces an rms error equal to the climatological rms error and a correlation of 0.5 on about day 12 of the forecast. On the largest scales, this limiting point is reached shortly thereafter. The error continues to grow at a decreasing rate until, at about 30 days, the forecast skill is extremely small and comparable to the skill of a persistence forecast. Various time averages at various lags are examined for skill in the extended range. Filters that weight the initial forecast days most strongly provide increased skill.
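    The growth-then-saturation behavior described above can be illustrated with a toy logistic error-growth model. This is an illustration only, not the paper's quasi-geostrophic model, and the parameter values (initial error, doubling time, climatological limit) are assumptions:

```python
import math

def rms_error(t_days, e0=0.1, doubling_days=2.0, e_clim=1.0):
    """Toy logistic error-growth curve: small errors double every
    `doubling_days`, then growth slows as the error saturates at the
    climatological rms `e_clim` (all errors in units of e_clim)."""
    a = math.log(2.0) / doubling_days          # initial exponential rate
    return e_clim / (1.0 + (e_clim / e0 - 1.0) * math.exp(-a * t_days))
```

    For small t this reduces to e0·2^(t/2) (pure doubling), while by day 30 the curve is indistinguishable from the climatological limit, mirroring the persistence-level skill reported in the abstract.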

  20. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high-performance space systems.

  1. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
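    The 16-bit CRC used for CCSDS error detection is the CCITT polynomial 0x1021 with an all-ones initial value. A bitwise sketch (for illustration; this is not the simulation software described above):

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16-CCITT: polynomial 0x1021, initial value 0xFFFF,
    no bit reflection. Returns the 16-bit check value."""
    for byte in data:
        crc ^= byte << 8                      # fold next byte into the register
        for _ in range(8):
            if crc & 0x8000:                  # shift out a 1: apply polynomial
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

    The standard check value for this configuration is crc16_ccitt(b"123456789") == 0x29B1; any single-bit corruption of the message yields a different CRC, which is the error-detection property the recommendation relies on.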

  2. Sonic boom acceptability studies

    NASA Technical Reports Server (NTRS)

    Shepherd, Kevin P.; Sullivan, Brenda M.; Leatherwood, Jack D.; Mccurdy, David A.

    1992-01-01

    The determination of the magnitude of sonic boom exposure which would be acceptable to the general population requires, as a starting point, a method to assess and compare individual sonic booms. There is no consensus within the scientific and regulatory communities regarding an appropriate sonic boom assessment metric. Loudness, being a fundamental and well-understood attribute of human hearing, was chosen as a means of comparing sonic booms of differing shapes and amplitudes. The figure illustrates the basic steps which yield a calculated value of loudness. Based upon the aircraft configuration and its operating conditions, the sonic boom pressure signature which reaches the ground is calculated. This pressure-time history is transformed to the frequency domain and converted into a one-third octave band spectrum. The essence of the loudness method is to account for the frequency response and integration characteristics of the auditory system. The result of the calculation procedure is a numerical description (perceived level, dB) which represents the loudness of the sonic boom waveform.
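    The first part of that chain (pressure signature → spectrum → one-third octave bands) can be sketched as follows. The N-wave amplitude, duration, sample rate, and band set are assumed values, and the final auditory-weighting step of the loudness model is omitted:

```python
import numpy as np

# Idealized 50 Pa, 0.3 s N-wave pressure signature (assumed parameters).
fs = 48_000
t = np.arange(0, 0.4, 1 / fs)
dur = 0.3
p = np.where(t < dur, 50.0 * (1 - 2 * t / dur), 0.0)

# Transform to the frequency domain (single-sided power spectrum).
spec = np.fft.rfft(p) / len(p)
freqs = np.fft.rfftfreq(len(p), 1 / fs)
power = 2.0 * np.abs(spec) ** 2

# Sum power into one-third octave bands (centers 10 Hz .. ~10 kHz).
centers = 10.0 * (2.0 ** (np.arange(31) / 3.0))
edges = centers * 2.0 ** (np.array([-1.0, 1.0]) / 6.0)[:, None]
band_spl = []
for lo, hi in zip(edges[0], edges[1]):
    band_power = power[(freqs >= lo) & (freqs < hi)].sum()
    band_spl.append(10 * np.log10(band_power / (20e-6) ** 2 + 1e-300))
```

    A loudness model would then weight these band levels by the ear's frequency response and integrate them into a single perceived level in dB; for an N-wave, most of the energy falls in the lowest bands.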

  3. Spaceborne scanner imaging system errors

    NASA Technical Reports Server (NTRS)

    Prakash, A.

    1982-01-01

    The individual sensor system design elements, which are the a priori components in the registration and rectification process, and the potential impact of error budgets on multitemporal registration and side-lap registration are analyzed. The properties of scanner, MLA, and SAR imaging systems are reviewed. Each sensor displays internal distortion properties which to varying degrees make it difficult to generate an orthophoto projection of the data acceptable for multiple-pass registration or meeting national map accuracy standards, and each is also affected to varying degrees by relief displacements in moderate to hilly terrain. Nonsensor-related distortions, associated with the accuracy of ephemeris determination and platform stability, have a major impact on local geometric distortions. Platform stability improvements expected from the new multimission spacecraft series, and improved ephemeris and ground control point determination from the NAVSTAR/Global Positioning Satellite systems, are reviewed.

  4. Error margin for antenna gain measurements

    NASA Technical Reports Server (NTRS)

    Cable, V.

    2002-01-01

    The specification of measured antenna gain is incomplete without knowing the error of the measurement. Also, unless gain is measured many times for a single antenna or over many identical antennas, the uncertainty or error in a single measurement is only an estimate. In this paper, we will examine in detail a typical error budget for common antenna gain measurements. We will also compute the gain uncertainty for a specific UHF horn test that was recently performed on the Jet Propulsion Laboratory (JPL) antenna range. The paper concludes with comments on these results and how they compare with the 'unofficial' JPL range standard of +/- ?.

  5. Fast processing techniques for accurate ultrasonic range measurements

    NASA Astrophysics Data System (ADS)

    Barshan, Billur

    2000-01-01

    Four methods of range measurement for airborne ultrasonic systems - namely simple thresholding, curve fitting, sliding window, and correlation detection - are compared on the basis of bias error, standard deviation, total error, robustness to noise, and the difficulty/complexity of implementation. Whereas correlation detection is theoretically optimal, the other three methods can offer acceptable performance at much lower cost. Performances of all methods have been investigated as a function of target range, azimuth, and signal-to-noise ratio. Curve fitting, sliding window, and thresholding follow correlation detection in order of decreasing complexity. Apart from correlation detection, minimum bias and total error are most consistently obtained with the curve-fitting method. On the other hand, the sliding-window method is always better than the thresholding and curve-fitting methods in terms of minimizing the standard deviation. The experimental results are in close agreement with the corresponding simulation results. Overall, the three simple and fast processing methods provide a variety of attractive compromises between measurement accuracy and system complexity. Although this paper concentrates on ultrasonic range measurement in air, the techniques described may also find application in underwater acoustics.
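    The simplest and the optimal of the four methods can be contrasted in a short simulation. This is a sketch with assumed parameters (40 kHz burst, 1 MHz sampling, additive Gaussian noise), not the experimental setup of the paper:

```python
import numpy as np

fs = 1_000_000                      # sample rate, Hz (assumed)
c = 343.0                           # speed of sound in air, m/s
t = np.arange(2048) / fs
pulse = np.sin(2 * np.pi * 40e3 * t[:64]) * np.hanning(64)  # 40 kHz burst

# Simulated echo: attenuated pulse delayed by `true_delay` samples + noise.
true_delay = 600
echo = np.zeros_like(t)
echo[true_delay:true_delay + 64] = 0.5 * pulse
rng = np.random.default_rng(0)
echo += 0.01 * rng.standard_normal(echo.size)

# Method 1: simple thresholding -- first sample above a fixed level.
thresh_idx = int(np.argmax(np.abs(echo) > 0.1))

# Method 2: correlation detection -- peak of the cross-correlation with
# the transmitted pulse (the matched filter, theoretically optimal).
corr = np.correlate(echo, pulse, mode="valid")
corr_idx = int(np.argmax(corr))

range_m = c * corr_idx / (2 * fs)   # two-way delay converted to range
```

    Thresholding fires only once the echo envelope has risen above the level, so it is biased late by a fraction of the pulse rise time, whereas the correlation peak sits at the true delay; this is the bias-versus-complexity trade-off discussed above.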

  6. Twenty Questions about Student Errors.

    ERIC Educational Resources Information Center

    Fisher, Kathleen M.; Lipson, Joseph Isaac

    1986-01-01

    Discusses the value of studying errors made by students in the process of learning science. Addresses 20 research questions dealing with student learning errors. Attempts to characterize errors made by students and clarify some terms used in error research. (TW)

  7. Simulation of Error in Optical Radar Range Measurements

    DTIC Science & Technology

    1998-01-01


  8. Mid-Range Spatial Frequency Errors in Optical Components.

    DTIC Science & Technology

    1983-01-01

    Malacara (1978, pp. 356-359) describes the diffraction intensity distribution on either side of the focal plane and presents a diagram of the …

  9. Refractive error blindness.

    PubMed Central

    Dandona, R.; Dandona, L.

    2001-01-01

    Recent data suggest that a large number of people are blind in different parts of the world due to high refractive error because they are not using appropriate refractive correction. Refractive error as a cause of blindness has been recognized only recently with the increasing use of presenting visual acuity for defining blindness. In addition to blindness due to naturally occurring high refractive error, inadequate refractive correction of aphakia after cataract surgery is also a significant cause of blindness in developing countries. Blindness due to refractive error in any population suggests that eye care services in general in that population are inadequate since treatment of refractive error is perhaps the simplest and most effective form of eye care. Strategies such as vision screening programmes need to be implemented on a large scale to detect individuals suffering from refractive error blindness. Sufficient numbers of personnel to perform reasonable quality refraction need to be trained in developing countries. Also adequate infrastructure has to be developed in underserved areas of the world to facilitate the logistics of providing affordable reasonable-quality spectacles to individuals suffering from refractive error blindness. Long-term success in reducing refractive error blindness worldwide will require attention to these issues within the context of comprehensive approaches to reduce all causes of avoidable blindness. PMID:11285669

  10. Teacher-Induced Errors.

    ERIC Educational Resources Information Center

    Richmond, Kent C.

    Students of English as a second language (ESL) often come to the classroom with little or no experience in writing in any language and with inaccurate assumptions about writing. Rather than correct these assumptions, teachers often seem to unwittingly reinforce them, actually inducing errors into their students' work. Teacher-induced errors occur…

  11. Reduced discretization error in HZETRN

    SciTech Connect

    Slaba, Tony C.; Blattnig, Steve R.; Tweed, John

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm² exposed to both solar particle event and galactic cosmic ray environments.

  12. Interpolation Errors in Thermistor Calibration Equations

    NASA Astrophysics Data System (ADS)

    White, D. R.

    2017-04-01

    Thermistors are widely used temperature sensors capable of measurement uncertainties approaching those of standard platinum resistance thermometers. However, the extreme nonlinearity of thermistors means that complicated calibration equations are required to minimize the effects of interpolation errors and achieve low uncertainties. This study investigates the magnitude of interpolation errors as a function of temperature range and the number of terms in the calibration equation. Approximation theory is used to derive an expression for the interpolation error and indicates that the temperature range and the number of terms in the calibration equation are the key influence variables. Numerical experiments based on published resistance-temperature data confirm these conclusions and additionally give guidelines on the maximum and minimum interpolation error likely to occur for a given temperature range and number of terms in the calibration equation.
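    The two key influence variables identified above, temperature range and number of terms, can be reproduced in a small numerical experiment. The sketch below uses an assumed beta-model thermistor (R0 = 10 kΩ at 25 °C, B = 3950 K), not the resistance-temperature data analyzed in the paper:

```python
import numpy as np

R0, T0, B = 10e3, 298.15, 3950.0    # assumed beta-model parameters

def resistance(T_kelvin):
    """Beta-model thermistor resistance in ohms."""
    return R0 * np.exp(B * (1.0 / T_kelvin - 1.0 / T0))

def max_interpolation_error(t_lo_c, t_hi_c, n_terms):
    """Fit T as a polynomial of degree n_terms-1 in ln(R) over the given
    range (deg C) and return the worst-case residual in kelvin."""
    T = np.linspace(t_lo_c + 273.15, t_hi_c + 273.15, 200)
    x = np.log(resistance(T))
    coeffs = np.polyfit(x, T, n_terms - 1)
    return float(np.max(np.abs(np.polyval(coeffs, x) - T)))

# Interpolation error grows with range and shrinks as terms are added:
err_narrow = max_interpolation_error(0, 50, 3)      # 3 terms, 50 K span
err_wide = max_interpolation_error(-40, 120, 3)     # 3 terms, 160 K span
err_wide_4 = max_interpolation_error(-40, 120, 4)   # 4 terms, 160 K span
```

    Comparing these residuals shows the trade-off the study quantifies: a calibration equation that is more than adequate over a narrow range needs extra terms to hold the same error over a wide one.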

  13. The effect of ocular aberrations on steady-state errors of accommodative response.

    PubMed

    Plainis, Sotiris; Ginis, Harilaos S; Pallikaris, Aristophanis

    2005-05-23

    It is well accepted that the accommodation system is characterized by steady-state errors in focus. The purpose of this study was to correlate these errors with changes in ocular wavefront aberration and corresponding image quality when accommodating. A wavefront analyzing system, the Complete Ophthalmic Analysis System (COAS), was used in conjunction with a Badal optometer to allow continuous recording of the aberration structure of the eye for a range of accommodative demands (up to 8 D). Fifty consecutive recordings from seven subjects were taken. Monocular accommodative response was calculated as (i) the equivalent refraction minimizing wavefront error and (ii) the defocus needed to optimize the modulation transfer function at high spatial frequencies. Previously reported changes in ocular aberrations with accommodation (e.g., the shift of spherical aberration to negative values) were confirmed. Increased accommodation errors for near targets (lags) were evident for all subjects, although their magnitude showed a significant intersubject variability. It is concluded that the one-to-one stimulus/response slope in accommodation function should not always be considered as ideal, because higher order aberrations, especially changes of spherical aberration, may influence the actual accommodative demand. Fluctuations may serve to preserve image quality when errors of accommodation are moderate, by temporarily searching for the best focus.

  14. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of these, 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  15. Cognitive illusions of authorship reveal hierarchical error detection in skilled typists.

    PubMed

    Logan, Gordon D; Crump, Matthew J C

    2010-10-29

    The ability to detect errors is an essential component of cognitive control. Studies of error detection in humans typically use simple tasks and propose single-process theories of detection. We examined error detection by skilled typists and found illusions of authorship that provide evidence for two error-detection processes. We corrected errors that typists made and inserted errors in correct responses. When asked to report errors, typists took credit for corrected errors and accepted blame for inserted errors, claiming authorship for the appearance of the screen. However, their typing rate showed no evidence of these illusions, slowing down after corrected errors but not after inserted errors. This dissociation suggests two error-detection processes: one sensitive to the appearance of the screen and the other sensitive to keystrokes.

  16. Identifying subset errors in multiple sequence alignments.

    PubMed

    Roy, Aparna; Taddese, Bruck; Vohra, Shabana; Thimmaraju, Phani K; Illingworth, Christopher J R; Simpson, Lisa M; Mukherjee, Keya; Reynolds, Christopher A; Chintapalli, Sree V

    2014-01-01

Multiple sequence alignment (MSA) accuracy is important, but there is no widely accepted method of judging the accuracy that different alignment algorithms give. We present a simple approach to detecting two types of error, namely block shifts and the misplacement of residues within a gap. Given an MSA, subsets of very similar sequences are generated through the use of a redundancy filter, typically using a 70-90% sequence identity cut-off. Subsets thus produced are typically small and degenerate, and errors can be easily detected even by manual examination. The errors, albeit minor, are inevitably associated with gaps in the alignment, and so the procedure is particularly relevant to homology modelling of protein loop regions. The usefulness of the approach is illustrated in the context of the universal but little-known [K/R]KLH motif that occurs in intracellular loop 1 of G protein-coupled receptors (GPCR); other issues relevant to GPCR modelling are also discussed.
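
The redundancy-filter step can be sketched as follows. The toy alignment, the identity measure, and the greedy single-linkage grouping are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the redundancy-filter idea on a toy alignment; the sequences,
# identity measure, and greedy single-linkage grouping are illustrative
# assumptions, not the authors' implementation.
def pairwise_identity(a, b):
    """Fraction of matching columns, ignoring columns where both are gaps."""
    pairs = [(x, y) for x, y in zip(a, b) if not (x == "-" and y == "-")]
    return sum(x == y for x, y in pairs) / len(pairs)

def redundancy_subsets(msa, cutoff=0.8):
    """Greedily group sequences whose identity to some group member >= cutoff."""
    subsets = []
    for name, seq in msa.items():
        for group in subsets:
            if any(pairwise_identity(seq, msa[m]) >= cutoff for m in group):
                group.append(name)
                break
        else:
            subsets.append([name])
    return subsets

msa = {  # toy alignment; sequence D shifts one residue across the gap
    "A": "MKV-LLTA",
    "B": "MKV-LLTA",
    "C": "MKI-LLSA",
    "D": "MK-VLLTA",
    "E": "QRSTWYEH",
}
print(redundancy_subsets(msa, cutoff=0.75))
```

Within the high-identity subset {A, B, D}, the misplaced residue in D (a gap-adjacent shift) stands out on inspection, which is the situation the paper exploits.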

  17. CO2 laser ranging systems study

    NASA Technical Reports Server (NTRS)

    Filippi, C. A.

    1975-01-01

    The conceptual design and error performance of a CO2 laser ranging system are analyzed. Ranging signal and subsystem processing alternatives are identified, and their comprehensive evaluation yields preferred candidate solutions which are analyzed to derive range and range rate error contributions. The performance results are presented in the form of extensive tables and figures which identify the ranging accuracy compromises as a function of the key system design parameters and subsystem performance indexes. The ranging errors obtained are noted to be within the high accuracy requirements of existing NASA/GSFC missions with a proper system design.

  18. Human Factors Process Task Analysis: Liquid Oxygen Pump Acceptance Test Procedure at the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.; Voska, Ned (Technical Monitor)

    2002-01-01

This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: Describe mission; Define system; Identify human-machine; List human actions; Identify potential errors; Identify factors that affect error; Determine likelihood of error; Determine potential effects of errors; Evaluate risk; Generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.

  19. Radar range measurements in the atmosphere.

    SciTech Connect

    Doerry, Armin Walter

    2013-02-01

The earth's atmosphere affects the velocity of propagation of microwave signals. This imparts a range error to radar range measurements that assume the typical simplistic model for propagation velocity. This range error is a function of atmospheric constituents, such as water vapor, as well as of the geometry of the radar data collection, notably altitude and range. Models are presented for calculating atmospheric effects on radar range measurements, and compared against more elaborate atmospheric models.
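
As a rough illustration of the effect, the sketch below integrates an assumed exponential refractivity profile along a flat-earth slant path; the profile constants and geometry are textbook-style assumptions, not the report's models.

```python
import math

# Rough illustration only: one-way tropospheric range error from an assumed
# exponential refractivity profile N(h) = Ns*exp(-h/H) (Ns = 313 N-units,
# H = 6.95 km) integrated along a flat-earth slant path. The extra apparent
# range is 1e-6 * integral of N dh / sin(elevation).
def range_error_m(Ns=313.0, H=6950.0, alt_radar=8000.0, alt_target=0.0,
                  elevation_deg=30.0, steps=10000):
    sin_el = math.sin(math.radians(elevation_deg))
    dh = (alt_radar - alt_target) / steps
    N = [Ns * math.exp(-(alt_target + i * dh) / H) for i in range(steps + 1)]
    integral = sum((N[i] + N[i + 1]) / 2.0 for i in range(steps)) * dh  # trapezoid
    return 1e-6 * integral / sin_el

print(f"one-way range error at 30 deg elevation: {range_error_m():.2f} m")
```

Even this crude model yields a meters-level error that depends on altitude and look geometry, which is why the simplistic constant-velocity assumption needs correction.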

  20. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect customer service or profitability.

  1. 14 CFR 415.35 - Acceptable flight risk.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Launch Range § 415.35 Acceptable flight risk. (a) Flight risk through orbital insertion or impact. Acceptable flight risk through orbital insertion for an orbital launch vehicle, and through impact for a... collective members of the public exposed to debris hazards from any one launch. To obtain safety approval,...

  2. 14 CFR 415.35 - Acceptable flight risk.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Launch Range § 415.35 Acceptable flight risk. (a) Flight risk through orbital insertion or impact. Acceptable flight risk through orbital insertion for an orbital launch vehicle, and through impact for a... collective members of the public exposed to debris hazards from any one launch. To obtain safety approval,...

  3. 14 CFR 415.35 - Acceptable flight risk.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Launch Range § 415.35 Acceptable flight risk. (a) Flight risk through orbital insertion or impact. Acceptable flight risk through orbital insertion for an orbital launch vehicle, and through impact for a... collective members of the public exposed to debris hazards from any one launch. To obtain safety approval,...

  4. 14 CFR 415.35 - Acceptable flight risk.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Launch Range § 415.35 Acceptable flight risk. (a) Flight risk through orbital insertion or impact. Acceptable flight risk through orbital insertion for an orbital launch vehicle, and through impact for a... collective members of the public exposed to debris hazards from any one launch. To obtain safety approval,...

  5. 14 CFR 415.35 - Acceptable flight risk.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Launch Range § 415.35 Acceptable flight risk. (a) Flight risk through orbital insertion or impact. Acceptable flight risk through orbital insertion for an orbital launch vehicle, and through impact for a... collective members of the public exposed to debris hazards from any one launch. To obtain safety approval,...

  6. Common Risk Criteria for National Test Ranges

    DTIC Science & Technology

    2010-12-01

Applies to range operations, excluding aviation operations. Subject terms: Range Safety Group; debris injury thresholds; debris hazard thresholds; acceptable risk criteria. Noted changes: uncertainty applies to all hazards, not just debris; the description of catastrophic risk in Chapter 4 and Chapter 7 of the Supplement is modified.

  7. Dose error analysis for a scanned proton beam delivery system

    NASA Astrophysics Data System (ADS)

    Coutrakon, G.; Wang, N.; Miller, D. W.; Yang, Y.

    2010-12-01

    All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm3 target of uniform water equivalent density with 8 cm spread out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
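
The repeated-delivery calculation the paper describes can be sketched in one dimension. The numbers below (5 mm spot pitch, 5 mm pencil-beam sigma, 0.5 mm rms spot-position error, 1% rms intensity error) are assumptions for illustration, not the measured Loma Linda beam model.

```python
import numpy as np

# Monte Carlo sketch with assumed numbers, not the paper's beam model: repeat
# a 1D spot-scanned delivery many times with random spot-position and
# intensity errors, then compute the per-voxel rms dose error relative to the
# error-free delivery.
rng = np.random.default_rng(0)
voxels = np.linspace(0.0, 80.0, 33)    # 2.5 mm voxel centres over 8 cm
spots = np.linspace(0.0, 80.0, 17)     # spot positions, 5 mm pitch
sigma = 5.0                            # pencil-beam sigma, mm (assumed)

def deliver(pos_err=0.5, intens_err=0.01):
    dose = np.zeros_like(voxels)
    for s in spots:
        centre = s + rng.normal(0.0, pos_err)      # spot position error
        weight = 1.0 + rng.normal(0.0, intens_err) # spot intensity error
        dose += weight * np.exp(-0.5 * ((voxels - centre) / sigma) ** 2)
    return dose

nominal = np.zeros_like(voxels)        # error-free delivery
for s in spots:
    nominal += np.exp(-0.5 * ((voxels - s) / sigma) ** 2)

runs = np.array([deliver() for _ in range(300)])
rel_rms = runs.std(axis=0) / nominal
central = slice(6, 27)                 # ignore field-edge voxels
print(f"max central rms dose error: {100 * rel_rms[central].max():.2f} %")
```

With these assumed error magnitudes the voxel-level rms dose error comes out at the few-percent level, the same order as the clinical limits cited in the abstract.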

  8. Cone penetrometer acceptance test report

    SciTech Connect

    Boechler, G.N.

    1996-09-19

This Acceptance Test Report (ATR) documents the results of acceptance test procedure WHC-SD-WM-ATR-151. Included in this report are a summary of the tests, the results and issues, the signature and sign-off ATP pages, and a table mapping each specification to the ATP section that satisfied it.

  9. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are applicable to virtually any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  10. The surveillance error grid.

    PubMed

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. 
Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  11. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  12. Telemetry location error in a forested habitat

    USGS Publications Warehouse

    Chu, D.S.; Hoover, B.A.; Fuller, M.R.; Geissler, P.H.; Amlaner, Charles J.

    1989-01-01

The error associated with locations estimated by radio-telemetry triangulation can be large and variable in a hardwood forest. We assessed the magnitude and cause of telemetry location errors in a mature hardwood forest by using a 4-element Yagi antenna and compass bearings toward four transmitters, from 21 receiving sites. The distance error from the azimuth intersection to known transmitter locations ranged from 0 to 9251 meters. Ninety-five percent of the estimated locations were within 16 to 1963 meters, and 50% were within 99 to 416 meters of actual locations. Angles within 20° of parallel had larger distance errors than other angles. While angle appeared most important, greater distances and the amount of vegetation between receivers and transmitters also contributed to distance error.
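
The sensitivity to near-parallel bearings can be shown with a pure-geometry sketch. The coordinates and the 2-degree bearing error below are made up for illustration.

```python
import math

# Geometry-only sketch with made-up coordinates: estimate a transmitter
# location from two compass bearings, add a 2-degree error to each bearing,
# and watch the distance error grow as the bearings approach parallel.
def intersect(p1, brg1, p2, brg2):
    """Intersection of two bearing lines (bearings in degrees east of north)."""
    d1 = (math.sin(math.radians(brg1)), math.cos(math.radians(brg1)))
    d2 = (math.sin(math.radians(brg2)), math.cos(math.radians(brg2)))
    det = d2[0] * d1[1] - d1[0] * d2[1]
    t = ((p2[0] - p1[0]) * (-d2[1]) + d2[0] * (p2[1] - p1[1])) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

def bearing(frm, to):
    return math.degrees(math.atan2(to[0] - frm[0], to[1] - frm[1]))

sites = ((0.0, 0.0), (1000.0, 0.0))    # two receiving sites, 1 km apart
errors = []
for tx in ((500.0, 1000.0), (500.0, 4000.0)):
    b1, b2 = bearing(sites[0], tx), bearing(sites[1], tx)
    est = intersect(sites[0], b1 + 2.0, sites[1], b2 - 2.0)
    errors.append(math.hypot(est[0] - tx[0], est[1] - tx[1]))
    print(f"transmitter at {tx}: distance error = {errors[-1]:.0f} m")
```

The same angular error produces an error roughly ten times larger for the distant transmitter, whose bearings from the two sites are nearly parallel, mirroring the study's finding that intersection angle dominates.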

  13. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-04-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by the so-called smoothing error. In this paper it is shown that the concept of the smoothing error is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state. The idea of a sufficiently fine sampling of this reference atmospheric state is untenable because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully talk about temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the involved a priori covariance matrix has been evaluated on the comparison grid rather than resulting from interpolation. This is, because the undefined component of the smoothing error, which is the effect of smoothing implied by the finite grid on which the measurements are compared, cancels out when the difference is calculated.
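
The grid-dependence the paper objects to can be demonstrated with a toy linear retrieval. The boxcar averaging kernel and the multi-scale profile below are assumed for illustration; this is not the paper's formalism.

```python
import numpy as np

# Toy linear-retrieval sketch (boxcar averaging kernel of fixed ~0.1 physical
# width, made-up multi-scale profile; all numbers assumed): the smoothing term
# (A - I)(x_true - x_a) is evaluated with the same true state sampled on grids
# of different fineness, and its size changes with the grid.
def rms_smoothing_term(n_levels):
    z = np.linspace(0.0, 1.0, n_levels)
    x_true = np.sin(6 * np.pi * z) + 0.3 * np.sin(40 * np.pi * z)
    x_a = np.zeros(n_levels)                 # zero a priori profile
    half = max(1, n_levels // 10)            # kernel half-width in levels
    A = np.zeros((n_levels, n_levels))
    for i in range(n_levels):
        lo, hi = max(0, i - half), min(n_levels, i + half + 1)
        A[i, lo:hi] = 1.0 / (hi - lo)
    return float(np.sqrt(np.mean(((A - np.eye(n_levels)) @ (x_true - x_a)) ** 2)))

for n in (20, 100, 500):
    print(f"{n:4d} levels: rms smoothing term = {rms_smoothing_term(n):.3f}")
```

Because the coarse grid aliases the fine-scale component of the true state, the computed "smoothing error" depends on the arbitrary sampling grid rather than only on the retrieval, which is the crux of the argument above.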

  14. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  15. Effect of Field Errors in Muon Collider IR Magnets on Beam Dynamics

    SciTech Connect

    Alexahin, Y.; Gianfelice-Wendt, E.; Kapin, V.V.; /Fermilab

    2012-05-01

    In order to achieve peak luminosity of a Muon Collider (MC) in the 10{sup 35} cm{sup -2}s{sup -1} range very small values of beta-function at the interaction point (IP) are necessary ({beta}* {le} 1 cm) while the distance from IP to the first quadrupole can not be made shorter than {approx}6 m as dictated by the necessity of detector protection from backgrounds. In the result the beta-function at the final focus quadrupoles can reach 100 km making beam dynamics very sensitive to all kind of errors. In the present report we consider the effects on momentum acceptance and dynamic aperture of multipole field errors in the body of IR dipoles as well as of fringe-fields in both dipoles and quadrupoles in the ase of 1.5 TeV (c.o.m.) MC. Analysis shows these effects to be strong but correctable with dedicated multipole correctors.

  16. Performance Errors in Weight Training and Their Correction.

    ERIC Educational Resources Information Center

    Downing, John H.; Lander, Jeffrey E.

    2002-01-01

    Addresses general performance errors in weight training, also discussing each category of error separately. The paper focuses on frequency and intensity, incorrect training velocities, full range of motion, and symmetrical training. It also examines specific errors related to the bench press, squat, military press, and bent- over and seated row…

  17. Tracking errors in 2D multiple particle tracking microrheology

    NASA Astrophysics Data System (ADS)

    Kowalczyk, Anne; Oelschlaeger, Claude; Willenbacher, Norbert

    2015-01-01

Tracking errors due to particles moving in and out of the focal plane are a fundamental problem of multiple particle tracking microrheology. Here, we present a new approach to treat these errors so that a statistically significant number of particle trajectories with reasonable length are received, which is important for an unbiased analysis of multiple particle tracking data from inhomogeneous fluids. Starting from Crocker and Grier's tracking algorithm, we identify particle displacements between subsequent images as artificial jumps; if this displacement deviates more than four standard deviations from the mean value, trajectories are terminated at such positions. In a further processing step, trajectories separated by a time gap Δτ_max are merged based on an adaptive search radius criterion accounting for individual particle mobility. For a series of Newtonian fluids covering the viscosity range 6-1300 mPa s, this approach yields the correct viscosity but also results in a viscosity-independent number of trajectories equal to the average number of particles in an image, with a minimum length covering at least two orders of magnitude in time. This allows for an unbiased characterization of heterogeneous fluids. For a Carbopol ETD 2050 solution we recover the expected broad variation of particle mobility. Consistent with the widely accepted structural model of highly swollen microgel particles suspended in a polymer solution, we find that about 2/3 of the tracers are elastically trapped.
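
The trajectory-termination step can be sketched on synthetic data; the particle statistics and the simulated jump below are assumptions for illustration, not the authors' measurement data.

```python
import numpy as np

# Sketch of the trajectory-splitting step on synthetic data: a step length
# deviating by more than four standard deviations from the mean step length is
# treated as an artificial jump (e.g. a particle leaving the focal plane), and
# the trajectory is cut there.
def split_on_jumps(track, n_sigma=4.0):
    """track: (N, 2) array of x, y positions; returns a list of segments."""
    steps = np.linalg.norm(np.diff(track, axis=0), axis=1)
    cuts = np.where(steps > steps.mean() + n_sigma * steps.std())[0]
    segments, start = [], 0
    for c in cuts:
        segments.append(track[start:c + 1])
        start = c + 1
    segments.append(track[start:])
    return segments

rng = np.random.default_rng(2)
track = np.cumsum(rng.normal(0.0, 0.05, size=(200, 2)), axis=0)  # Brownian walk
track[120:] += 5.0           # simulated out-of-focus jump at frame 120
parts = split_on_jumps(track)
print([len(p) for p in parts])
```

The artificial 5-unit jump is cut, leaving two physically plausible segments; in the full method such segments would then be re-merged across the time gap using the adaptive search radius.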

  18. Phase and amplitude errors in FM radars

    NASA Astrophysics Data System (ADS)

    Griffiths, Hugh D.

    The constraints on phase and amplitude errors are determined for various types of FM radar by calculating the range sidelobe levels on the point target response due to the phase and amplitude modulation of the target echo. It is shown that under certain circumstances the constraints on phase linearity appropriate for conventional pulse compression radars are unnecessarily stringent, and quite large phase errors can be tolerated provided the relative delay of the local oscillator with respect to the target echo is small compared with the periodicity of the phase error characteristic. The constraints on amplitude flatness, however, are severe under almost all circumstances.
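
The paired-echo mechanism behind these sidelobe constraints can be demonstrated numerically. The waveform below (a full-band discrete LFM pulse with an added sinusoidal phase ripple; all parameters assumed) produces range sidelobes at roughly 20·log10(a/2) dB for a ripple of amplitude a radians, the classic paired-echo rule of thumb.

```python
import numpy as np

# Assumed waveform parameters, for illustration only: a full-band discrete LFM
# (Chu) pulse is circularly matched-filtered with and without a small
# sinusoidal phase ripple; the ripple creates paired-echo range sidelobes.
N, m, a = 4096, 50, 0.1      # pulse length, ripple cycles across pulse, ripple amplitude (rad)
n = np.arange(N)
s = np.exp(1j * np.pi * n ** 2 / N)                      # full-band LFM pulse
ripple = np.exp(1j * a * np.sin(2 * np.pi * m * n / N))  # phase error

def compressed(x):
    """Circular matched filtering against the clean pulse via the FFT."""
    return np.abs(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(s))))

clean, distorted = compressed(s), compressed(s * ripple)
psl_db = 20 * np.log10(np.max(np.delete(distorted, 0)) / distorted[0])
print(f"peak range sidelobe from a {a} rad ripple = {psl_db:.1f} dB "
      f"(paired-echo prediction {20 * np.log10(a / 2):.1f} dB)")
```

The sidelobes appear offset from the main response by the ripple periodicity, which is why the abstract notes that large phase errors are tolerable when the local-oscillator delay is small compared with the period of the phase error characteristic.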

  19. Error monitoring in musicians

    PubMed Central

    Maidhof, Clemens

    2013-01-01

To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e., the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. Electroencephalography (EEG) studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e., attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domain during error monitoring. Finally, outstanding questions and future directions in this context will be discussed. PMID:23898255

  20. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  1. Passive Ranging

    DTIC Science & Technology

    1988-08-01

R. Courant and D. Hilbert, Methods of Mathematical Physics, Vol. I, English ed., Interscience, New York, 1953. Appendix A: Calculation... Appendix B: Ranging Accuracy in...

  2. Design of Large Momentum Acceptance Transport Systems

    SciTech Connect

    D.R. Douglas

    2005-05-01

The use of energy recovery to enable high power linac operation often gives rise to an attendant challenge--the transport of high power beams subtending large phase space volumes. In particular applications--such as FEL driver accelerators--this manifests itself as a requirement for beam transport systems with large momentum acceptance. We will discuss the design, implementation, and operation of such systems. Though at times counterintuitive in behavior (perturbative descriptions may, for example, be misleading), large acceptance systems have been successfully utilized for generations as spectrometers and accelerator recirculators [1]. Such systems are in fact often readily designed using appropriate geometric descriptions of beam behavior; insight provided using such a perspective may in addition reveal inherent symmetries that simplify construction and improve operability. Our discussion will focus on two examples: the Bates-clone recirculator used in the Jefferson Lab 10 kW IR Upgrade FEL (which has an observed acceptance of 10% or more) and a compaction-managed mirror-bend achromat concept with an acceptance ranging from 50 to 150 MeV.

  3. Extending the Technology Acceptance Model: Policy Acceptance Model (PAM)

    NASA Astrophysics Data System (ADS)

    Pierce, Tamra

There has been extensive research on how new ideas and technologies are accepted in society. This has resulted in the creation of many models that are used to discover and assess the contributing factors. The Technology Acceptance Model (TAM) is one that is a widely accepted model. This model examines people's acceptance of new technologies based on variables that directly correlate to how the end user views the product. This paper introduces the Policy Acceptance Model (PAM), an expansion of TAM, which is designed for the analysis and evaluation of acceptance of new policy implementation. PAM includes the traditional constructs of TAM and adds the variables of age, ethnicity, and family. The model is demonstrated using a survey of 72 respondents' attitudes toward the upcoming healthcare reform in the United States (US). The aim is for the theory behind this model to serve as a framework applicable to studies of the introduction of any new or modified policies.

  4. An error control system with multiple-stage forward error corrections

    NASA Technical Reports Server (NTRS)

    Takata, Toyoo; Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1990-01-01

    A robust error-control coding system is presented. This system is a cascaded FEC (forward error correction) scheme supported by parity retransmissions for further error correction in the erroneous data words. The error performance and throughput efficiency of the system are analyzed. Two specific examples of the error-control system are studied. The first example does not use an inner code, and the outer code, which is not interleaved, is a shortened code of the NASA standard RS code over GF(2^8). The second example, as proposed for NASA, uses the same shortened RS code as the base outer code C2, except that it is interleaved to a depth of 2. It is shown that both examples provide high reliability and throughput efficiency even for high channel bit-error rates in the range of 0.01.
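    The depth-2 interleaving described above can be sketched in a few lines of Python. This is only an illustration of the interleaving step (the RS encoding itself is omitted, and the function names are ours, not NASA's): symbols from two outer codewords are transmitted alternately, so a burst of channel errors is split between two codewords instead of falling entirely on one.

```python
def interleave(codewords, depth=2):
    """Block-interleave codewords to the given depth: symbols are sent
    column-by-column, so a burst of channel errors is spread across
    several codewords instead of overwhelming a single one."""
    assert len(codewords) % depth == 0
    out = []
    for i in range(0, len(codewords), depth):
        block = codewords[i:i + depth]
        n = len(block[0])
        out.extend(block[r][c] for c in range(n) for r in range(depth))
    return out

def deinterleave(symbols, n, depth=2):
    """Invert interleave(): regroup the symbol stream into codewords
    of length n."""
    out = []
    for i in range(0, len(symbols), depth * n):
        chunk = symbols[i:i + depth * n]
        for r in range(depth):
            out.append([chunk[c * depth + r] for c in range(n)])
    return out
```

    A burst of two adjacent symbol errors in the interleaved stream then lands in different codewords, each of which the outer decoder can correct independently.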

  5. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the
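    In the standard retrieval formalism this abstract builds on, the smoothing-error covariance is computed from the averaging-kernel matrix A and an a-priori covariance S_a as S_s = (A - I) S_a (A - I)^T. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def smoothing_error_cov(A, S_a):
    """Smoothing-error covariance S_s = (A - I) S_a (A - I)^T, where A
    is the averaging-kernel matrix and S_a the a-priori covariance of
    the atmospheric state on the retrieval grid."""
    I = np.eye(A.shape[0])
    return (A - I) @ S_a @ (A - I).T
```

    The paper's point is precisely that S_a here must describe the real variability of the true state on the grid where A is evaluated; a covariance interpolated from a coarser grid mischaracterizes the error this formula reports.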

  6. Market Acceptance of Smart Growth

    EPA Pesticide Factsheets

    This report finds that smart growth developments enjoy market acceptance because of stability in prices over time. Housing resales in smart growth developments often have greater appreciation than their conventional suburban counterparts.

  7. L-286 Acceptance Test Record

    SciTech Connect

    HARMON, B.C.

    2000-01-14

    This document provides a detailed account of how the acceptance testing was conducted for Project L-286, ''200E Area Sanitary Water Plant Effluent Stream Reduction''. The testing of the L-286 instrumentation system was conducted under the direct supervision

  8. Acceptance criteria for urban dispersion model evaluation

    NASA Astrophysics Data System (ADS)

    Hanna, Steven; Chang, Joseph

    2012-05-01

    The authors suggested acceptance criteria for rural dispersion models' performance measures in this journal in 2004. The current paper suggests modified values of acceptance criteria for urban applications and tests them with tracer data from four urban field experiments. For the arc-maximum concentrations, the fractional bias should have a magnitude <0.67 (i.e., the relative mean bias is less than a factor of 2); the normalized mean-square error should be <6 (i.e., the random scatter is less than about 2.4 times the mean); and the fraction of predictions that are within a factor of two of the observations (FAC2) should be >0.3. For all data paired in space, for which a threshold concentration must always be defined, the normalized absolute difference should be <0.50, when the threshold is three times the instrument's limit of quantification (LOQ). An overall criterion is then applied that the total set of acceptance criteria should be satisfied in at least half of the field experiments. These acceptance criteria are applied to evaluations of the US Department of Defense's Joint Effects Model (JEM) with tracer data from US urban field experiments in Salt Lake City (U2000), Oklahoma City (JU2003), and Manhattan (MSG05 and MID05). JEM includes the SCIPUFF dispersion model with the urban canopy option and the urban dispersion model (UDM) option. In each set of evaluations, three or four likely options are tested for meteorological inputs (e.g., a local building top wind speed, the closest National Weather Service airport observations, or outputs from numerical weather prediction models). It is found that, due to large natural variability in the urban data, there is not a large difference between the performance measures for the two model options and the three or four meteorological input options. The more detailed UDM and the state-of-the-art numerical weather models do provide a slight improvement over the other options. The proposed urban dispersion model acceptance
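    The quoted performance measures are straightforward to compute. The sketch below uses the definitions common in the dispersion-model evaluation literature (fractional bias, normalized mean-square error, FAC2, normalized absolute difference); the paper's exact formulas may differ in detail.

```python
import numpy as np

def performance_measures(obs, pred):
    """Standard dispersion-model evaluation measures (Hanna & Chang
    style definitions; illustrative, not copied from the paper)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    fb = (obs.mean() - pred.mean()) / (0.5 * (obs.mean() + pred.mean()))
    nmse = ((obs - pred) ** 2).mean() / (obs.mean() * pred.mean())
    fac2 = np.mean((pred >= 0.5 * obs) & (pred <= 2.0 * obs))
    nad = np.abs(obs - pred).mean() / (obs.mean() + pred.mean())
    return {"FB": fb, "NMSE": nmse, "FAC2": fac2, "NAD": nad}

def meets_urban_criteria(m):
    """Apply the urban acceptance thresholds quoted in the abstract."""
    return (abs(m["FB"]) < 0.67 and m["NMSE"] < 6
            and m["FAC2"] > 0.3 and m["NAD"] < 0.50)
```

    A model whose predictions match the observations exactly passes every criterion; one that overpredicts by a factor of three fails on fractional bias alone.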

  9. Error Sensitivity Model.

    DTIC Science & Technology

    1980-04-01

    Philosophy: The Positioning/Error Model has been defined in three distinct phases: I - Error Sensitivity Model; II - Operational Positioning Model; III - ...

  10. Error Free Software

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free and which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  11. Grazing function g and collimation angular acceptance

    SciTech Connect

    Peggs, S.G.; Previtali, V.

    2009-11-02

    The grazing function g is introduced - a synchrobetatron optical quantity that is analogous (and closely connected) to the Twiss and dispersion functions β, α, η, and η'. It parametrizes the rate of change of total angle with respect to synchrotron amplitude for grazing particles, which just touch the surface of an aperture when their synchrotron and betatron oscillations are simultaneously (in time) at their extreme displacements. The grazing function can be important at collimators with limited acceptance angles. For example, it is important in both modes of crystal collimation operation - in channeling and in volume reflection. The grazing function is independent of the collimator type - crystal or amorphous - but can depend strongly on its azimuthal location. The rigorous synchrobetatron condition g = 0 is solved, by invoking the close connection between the grazing function and the slope of the normalized dispersion. Propagation of the grazing function is described, through drifts, dipoles, and quadrupoles. Analytic expressions are developed for g in perfectly matched periodic FODO cells, and in the presence of β or η error waves. These analytic approximations are shown to be, in general, in good agreement with realistic numerical examples. The grazing function is shown to scale linearly with FODO cell bend angle, but to be independent of FODO cell length. The ideal value is g = 0 at the collimator, but finite nonzero values are acceptable. Practically achievable grazing functions are described and evaluated, for both amorphous and crystal primary collimators, at RHIC, the SPS (UA9), the Tevatron (T-980), and the LHC.

  12. Validation and acceptance of synthetic infrared imagery

    NASA Astrophysics Data System (ADS)

    Smith, Moira I.; Bernhardt, Mark; Angell, Christopher R.; Hickman, Duncan; Whitehead, Philip; Patel, Dilip

    2004-08-01

    This paper describes the use of an image query database (IQ-DB) tool as a means of implementing a validation strategy for synthetic long-wave infrared images of sea clutter. Specifically it was required to determine the validity of the synthetic imagery for use in developing and testing automatic target detection algorithms. The strategy adopted for exploiting synthetic imagery is outlined and the key issues of validation and acceptance are discussed in detail. A wide range of image metrics has been developed to achieve pre-defined validation criteria. A number of these metrics, which include post processing algorithms, are presented. Furthermore, the IQ-DB provides a robust mechanism for configuration management and control of the large volume of data used. The implementation of the IQ-DB is reviewed in terms of its cardinal point specification and its central role in synthetic imagery validation and EOSS progressive acceptance.

  13. Orwell's Instructive Errors

    ERIC Educational Resources Information Center

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  14. Correcting for sequencing error in maximum likelihood phylogeny inference.

    PubMed

    Kuhner, Mary K; McGill, James

    2014-11-04

    Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue.
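    The kind of correction tested can be sketched at the level of a single site: with per-base error rate e, the tip likelihood for each possible true base mixes the observed data over the error model (an observed base matches the truth with probability 1 - e, and is any particular other base with probability e/3). This is an illustrative reconstruction, not the authors' code.

```python
def error_corrected_partials(site_likelihoods, error_rate):
    """Fold a simple sequencing-error model into the tip partial
    likelihoods of a phylogenetic likelihood computation.
    site_likelihoods maps each base to its tip likelihood (1.0 for the
    observed base, 0.0 otherwise, in the error-free case)."""
    bases = "ACGT"
    e = error_rate
    return {
        true: sum(
            site_likelihoods[obs] * ((1 - e) if obs == true else e / 3)
            for obs in bases
        )
        for true in bases
    }
```

    With e = 0, this reduces to the usual indicator-vector tips; with e > 0, some likelihood is spread onto the three unobserved bases, which is what softens the penalty that singleton "mutations" caused by sequencing error would otherwise impose on branch lengths.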

  15. Report of the Subpanel on Error Characterization and Error Budgets

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The state of knowledge of both user positioning requirements and error models of current and proposed satellite systems is reviewed. In particular the error analysis models for LANDSAT D are described. Recommendations are given concerning the geometric error model for the thematic mapper; interactive user involvement in system error budgeting and modeling and verification on real data sets; and the identification of a strawman mission for modeling key error sources.

  16. Errors in thermochromic liquid crystal thermometry

    NASA Astrophysics Data System (ADS)

    Wiberg, Roland; Lior, Noam

    2004-09-01

    This article experimentally investigates and assesses the errors that may be incurred in the hue-based thermochromic liquid crystal (TLC) thermometry method, and their causes. The errors include response time, hysteresis, aging, surrounding illumination disturbance, direct illumination and viewing angle, amount of light into the camera, TLC thickness, digital resolution of the image conversion system, and measurement noise. Some of the main conclusions are that: (1) the 3×8 bits digital representation of the red, green, and blue TLC color values produces a temperature measurement error of typically 1% of the TLC effective temperature range, (2) an eight-fold variation of the light intensity into the camera produced variations which were not discernable from the digital resolution error, (3) the indicated temperature depends on the TLC film thickness, and (4) thicker films are less susceptible to aging and thickness nonuniformities.

  17. Determination of Laser Tracker Angle Encoder Errors

    NASA Astrophysics Data System (ADS)

    Nasr, Karim M.; Hughes, Ben; Forbes, Alistair; Lewis, Andrew

    2014-08-01

    Errors in the angle encoders of a laser tracker may potentially produce large errors in long range coordinate measurements. To determine the azimuth angle encoder errors and verify their values stored in the tracker's internal error map, several methodologies were evaluated, differing in complexity, measurement time and the need for specialised measuring equipment. These methodologies are: an artefact-based technique developed by NIST; a multi-target network technique developed by NPL; and the classical precision angular indexing table technique. It is shown that the three methodologies agree within their respective measurement uncertainties and that the NPL technique has the advantages of a short measurement time and no reliance on specialised measurement equipment or artefacts.

  18. Errors in thermochromic liquid crystal thermometry

    SciTech Connect

    Wiberg, Roland; Lior, Noam

    2004-09-01

    This article experimentally investigates and assesses the errors that may be incurred in the hue-based thermochromic liquid crystal (TLC) thermometry method, and their causes. The errors include response time, hysteresis, aging, surrounding illumination disturbance, direct illumination and viewing angle, amount of light into the camera, TLC thickness, digital resolution of the image conversion system, and measurement noise. Some of the main conclusions are that: (1) the 3×8 bits digital representation of the red, green, and blue TLC color values produces a temperature measurement error of typically 1% of the TLC effective temperature range, (2) an eight-fold variation of the light intensity into the camera produced variations which were not discernable from the digital resolution error, (3) the indicated temperature depends on the TLC film thickness, and (4) thicker films are less susceptible to aging and thickness nonuniformities.

  19. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
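    The idea can be sketched with a minimal interval type (INTLAB itself is a MATLAB toolbox; this plain-Python analogue is only illustrative). Each arithmetic operation returns an interval guaranteed to contain every value the exact result could take, so measurement and rounding errors propagate without manual bookkeeping.

```python
class Interval:
    """Minimal interval-arithmetic type: every operation returns an
    interval containing all possible results of the exact operation."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # Endpoint products cover all sign combinations.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def width(self):
        return self.hi - self.lo
```

    For example, (2.0 ± 0.1) × (3.0 ± 0.1) yields the interval [5.51, 6.51], an automatic bound that would otherwise require a first-order propagation formula.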

  20. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  1. Imagery of Errors in Typing

    ERIC Educational Resources Information Center

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  2. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  3. Type I Error Control for Tree Classification

    PubMed Central

    Jung, Sin-Ho; Chen, Yong; Ahn, Hongshik

    2014-01-01

    Binary tree classification has been useful for classifying the whole population based on the levels of an outcome variable associated with chosen predictors. Often we start a classification with a large number of candidate predictors, and each predictor takes a number of different cutoff values. Because of these types of multiplicity, the binary tree classification method is subject to a severely inflated type I error probability. Nonetheless, there have not been many publications addressing this issue. In this paper, we propose a binary tree classification method that controls the probability of accepting a predictor below a certain level, say 5%. PMID:25452689

  4. Optical range and range rate estimation for teleoperator systems

    NASA Technical Reports Server (NTRS)

    Shields, N. L., Jr.; Kirkpatrick, M., III; Malone, T. B.; Huggins, C. T.

    1974-01-01

    Range and range rate are crucial parameters which must be available to the operator during remote controlled orbital docking operations. A method was developed for the estimation of both these parameters using an aided television system. An experiment was performed to determine the human operator's capability to measure displayed image size using a fixed reticle or movable cursor as the television aid. The movable cursor was found to yield mean image size estimation errors on the order of 2.3 per cent of the correct value. This error rate was significantly lower than that for the fixed reticle. Performance using the movable cursor was found to be less sensitive to signal-to-noise ratio variation than was that for the fixed reticle. The mean image size estimation errors for the movable cursor correspond to an error of approximately 2.25 per cent in range suggesting that the system has some merit. Determining the accuracy of range rate estimation using a rate controlled cursor will require further experimentation.
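    The reported numbers are consistent with a simple pinhole-camera relation in which apparent image size scales inversely with range, so a small fractional error in measured image size maps to a nearly equal fractional error in range. A sketch under that assumption (the function and parameter values are illustrative, not from the experiment):

```python
def range_from_image_size(true_size, focal_len, image_size):
    """Pinhole model: range = focal_len * true_size / image_size, so a
    fractional image-size error maps to a nearly equal (opposite-sign)
    fractional range error to first order."""
    return focal_len * true_size / image_size
```

    Overestimating the image size by 2.3% underestimates the range by 1 - 1/1.023, about 2.25%, matching the abstract's figures.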

  5. From requirements to acceptance tests

    NASA Technical Reports Server (NTRS)

    Baize, Lionel; Pasquier, Helene

    1993-01-01

    From user requirements definition to accepted software system, the software project management wants to be sure that the system will meet the requirements. For the development of a telecommunication satellites Control Centre, C.N.E.S. has used new rules to make the use of tracing matrix easier. From Requirements to Acceptance Tests, each item of a document must have an identifier. A unique matrix traces the system and allows the tracking of the consequences of a change in the requirements. A tool has been developed, to import documents into a relational data base. Each record of the data base corresponds to an item of a document, the access key is the item identifier. Tracing matrix is also processed, providing automatically links between the different documents. It enables the reading on the same screen of traced items. For example one can read simultaneously the User Requirements items, the corresponding Software Requirements items and the Acceptance Tests.

  6. Defining acceptable conditions in wilderness

    NASA Astrophysics Data System (ADS)

    Roggenbuck, J. W.; Williams, D. R.; Watson, A. E.

    1993-03-01

    The limits of acceptable change (LAC) planning framework recognizes that forest managers must decide what indicators of wilderness conditions best represent resource naturalness and high-quality visitor experiences and how much change from the pristine is acceptable for each indicator. Visitor opinions on the aspects of the wilderness that have great impact on their experience can provide valuable input to selection of indicators. Cohutta, Georgia; Caney Creek, Arkansas; Upland Island, Texas; and Rattlesnake, Montana, wilderness visitors have high shared agreement that littering and damage to trees in campsites, noise, and seeing wildlife are very important influences on wilderness experiences. Camping within sight or sound of other people influences experience quality more than do encounters on the trails. Visitors’ standards of acceptable conditions within wilderness vary considerably, suggesting a potential need to manage different zones within wilderness for different clientele groups and experiences. Standards across wildernesses, however, are remarkably similar.

  7. Propagation of atmospheric density errors to satellite orbits

    NASA Astrophysics Data System (ADS)

    Emmert, J. T.; Warren, H. P.; Segerman, A. M.; Byers, J. M.; Picone, J. M.

    2017-01-01

    We develop and test approximate analytic expressions relating time-dependent atmospheric density errors to errors in the mean motion and mean anomaly orbital elements. The mean motion and mean anomaly errors are proportional to the first and second integrals, respectively, of the density error. This means that the mean anomaly (and hence the in-track position) error variance grows with time as t^3 for a white noise density error process and as t^5 for a Brownian motion density error process. Our approximate expressions are accurate over a wide range of orbital configurations, provided the perigee altitude change is less than ∼0.2 atmospheric scale heights. For orbit prediction, density forecasts are driven in large part by forecasts of solar extreme ultraviolet (EUV) irradiance; we show that errors in EUV ten-day forecasts (and consequently in the density forecasts) approximately follow a Brownian motion process.
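    The t^3 growth for a white-noise density error can be checked numerically by propagating the covariance of a state (x1, x2), where x1 is the first integral of the noise (the mean motion error) and x2 the second (the mean anomaly error). This is a generic sketch of that scaling, not the authors' formulation:

```python
def in_track_variance_growth(q=1.0, dt=0.001, t_end=1.0):
    """Discrete covariance propagation for x1' = w (white noise of
    spectral density q) and x2' = x1.  Returns Var(x2) at t_end;
    theory predicts q * t^3 / 3 for this double integral."""
    p11 = p12 = p22 = 0.0          # Var(x1), Cov(x1,x2), Var(x2)
    for _ in range(int(t_end / dt)):
        p22 += 2 * p12 * dt + p11 * dt * dt   # x2 <- x2 + x1*dt
        p12 += p11 * dt
        p11 += q * dt                          # x1 <- x1 + w
    return p22
```

    With q = 1 and t_end = 1 the result is close to 1/3, confirming the cubic growth; repeating the exercise with w itself a random walk gives the quintic (t^5) behavior quoted for Brownian-motion density errors.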

  8. Laser Ranging Simulation Program

    NASA Technical Reports Server (NTRS)

    Piazolla, Sabino; Hemmati, Hamid; Tratt, David

    2003-01-01

    Laser Ranging Simulation Program (LRSP) is a computer program that predicts selected aspects of the performances of a laser altimeter or other laser ranging or remote-sensing systems and is especially applicable to a laser-based system used to map terrain from a distance of several kilometers. Designed to run in a more recent version (5 or higher) of the MATLAB programming language, LRSP exploits the numerical and graphical capabilities of MATLAB. LRSP generates a graphical user interface that includes a pop-up menu that prompts the user for the input of data that determine the performance of a laser ranging system. Examples of input data include duration and energy of the laser pulse, the laser wavelength, the width of the laser beam, and several parameters that characterize the transmitting and receiving optics, the receiving electronic circuitry, and the optical properties of the atmosphere and the terrain. When the input data have been entered, LRSP computes the signal-to-noise ratio as a function of range, signal and noise currents, and ranging and pointing errors.

  9. Hyponatremia: management errors.

    PubMed

    Seo, Jang Won; Park, Tae Jin

    2006-11-01

    Rapid correction of hyponatremia is frequently associated with increased morbidity and mortality. Therefore, it is important to estimate the proper volume and type of infusate required to increase the serum sodium concentration predictably. The major common management errors during the treatment of hyponatremia are inadequate investigation, treatment with fluid restriction for diuretic-induced hyponatremia and treatment with fluid restriction plus intravenous isotonic saline simultaneously. We present two cases of management errors. One is about the problem of rapid correction of hyponatremia in a patient with sepsis and acute renal failure during continuous renal replacement therapy in the intensive care unit. The other is the case of hypothyroidism in which hyponatremia was aggravated by intravenous infusion of dextrose water and isotonic saline infusion was erroneously used to increase serum sodium concentration.

  10. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  11. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
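    One way to read the error-reduction claim: instead of overwriting the low-order bits outright, choose the host value nearest the original whose residue modulo m equals the auxiliary symbol. The sketch below illustrates that modular idea only; it is not the patented algorithm, and the permutation and keying steps are omitted.

```python
def embed_modular(sample, aux, m=4):
    """Minimally adjust sample so that sample % m == aux (0 <= aux < m).
    Picking the nearer of the two candidates (moving up or down) keeps
    the distortion at most m/2, versus up to m - 1 for plain
    low-order-bit replacement."""
    delta = (aux - sample) % m
    up = sample + delta
    down = sample - (m - delta)
    return up if delta <= m - delta else down

def extract_modular(sample, m=4):
    """Recover the embedded auxiliary symbol."""
    return sample % m
```

    For example, embedding the symbol 3 (m = 4) into the host value 12 yields 11 (a change of -1), whereas bit replacement would force 15 (a change of +3).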

  12. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  13. Surface temperature measurement errors

    SciTech Connect

    Keltner, N.R.; Beck, J.V.

    1983-05-01

    Mathematical models are developed for the response of surface mounted thermocouples on a thick wall. These models account for the significant causes of errors in both the transient and steady-state response to changes in the wall temperature. In many cases, closed form analytical expressions are given for the response. The cases for which analytical expressions are not obtained can be easily evaluated on a programmable calculator or a small computer.

  14. Acceptance Criteria Framework for Autonomous Biological Detectors

    SciTech Connect

    Dzenitis, J M

    2006-12-12

    The purpose of this study was to examine a set of user acceptance criteria for autonomous biological detection systems for application in high-traffic, public facilities. The test case for the acceptance criteria was the Autonomous Pathogen Detection System (APDS) operating in high-traffic facilities in New York City (NYC). However, the acceptance criteria were designed to be generally applicable to other biological detection systems in other locations. For such detection systems, ''users'' will include local authorities (e.g., facility operators, public health officials, and law enforcement personnel) and national authorities [including personnel from the Department of Homeland Security (DHS), the BioWatch Program, the Centers for Disease Control and Prevention (CDC), and the Federal Bureau of Investigation (FBI)]. The panel members brought expertise from a broad range of backgrounds to complete this picture. The goals of this document are: (1) To serve as informal guidance for users in considering the benefits and costs of these systems. (2) To serve as informal guidance for developers in understanding the needs of users. In follow-up work, this framework will be used to systematically document the APDS for appropriateness and readiness for use in NYC.

  15. Clock error, jitter, phase error, and differential time of arrival in satellite communications

    NASA Astrophysics Data System (ADS)

    Sorace, Ron

    The maintenance of synchronization in satellite communication systems is critical in contemporary systems, since many signal processing and detection algorithms depend on ascertaining time references. Unfortunately, proper synchronism becomes more difficult to maintain at higher frequencies. Factors such as clock error or jitter, noise, and phase error at a coherent receiver may corrupt a transmitted signal and degrade synchronism at the terminations of a communication link. Further, in some systems an estimate of propagation delay is necessary, but this delay may vary stochastically with the range of the link. This paper presents a model of the components of synchronization error including a simple description of clock error and examination of recursive estimation of the propagation delay time for messages between elements in a satellite communication system. Attention is devoted to jitter, the sources of which are considered to be phase error in coherent reception and jitter in the clock itself.

  16. Development of quantitative risk acceptance criteria

    SciTech Connect

    Griesmeyer, J. M.; Okrent, D.

    1981-01-01

    Some of the major considerations for effective management of risk are discussed, with particular emphasis on risks due to nuclear power plant operations. Although there are impacts associated with the rest of the fuel cycle, they are not addressed here. Several previously published proposals for quantitative risk criteria are reviewed. They range from a simple acceptance criterion on individual risk of death to a quantitative risk management framework. The final section discusses some of the problems in the establishment of a framework for the quantitative management of risk.

  17. The Location of Error: Reflections on a Research Project

    ERIC Educational Resources Information Center

    Cook, Devan

    2010-01-01

    Andrea Lunsford and Karen Lunsford conclude "Mistakes Are a Fact of Life: A National Comparative Study," a discussion of their research project exploring patterns of formal grammar and usage error in first-year writing, with an invitation to "conduct a local version of this study." The author was eager to accept their invitation; learning and…

  18. GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS

    EPA Science Inventory

    Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...

  19. Acceptance Probability (Pa) Analysis for Process Validation Lifecycle Stages.

    PubMed

    Alsmeyer, Daniel; Pazhayattil, Ajay; Chen, Shu; Munaretto, Francesco; Hye, Maksuda; Sanghvi, Pradeep

    2016-04-01

    This paper introduces an innovative statistical approach towards understanding how variation impacts the acceptance criteria of quality attributes. Because of more complex stage-wise acceptance criteria, traditional process capability measures are inadequate for general application in the pharmaceutical industry. The probability of acceptance concept provides a clear measure, derived from specific acceptance criteria for each quality attribute. In line with the 2011 FDA Guidance, this approach systematically evaluates data and scientifically establishes evidence that a process is capable of consistently delivering quality product. The probability of acceptance provides a direct and readily understandable indication of product risk. As with traditional capability indices, the acceptance probability approach assumes that underlying data distributions are normal. The computational solutions for dosage uniformity and dissolution acceptance criteria are readily applicable. For dosage uniformity, the expected AV range may be determined using the s_lo and s_hi values along with the worst-case estimates of the mean. This approach permits a risk-based assessment of future batch performance of the critical quality attributes. The concept is also readily applicable to sterile/non-sterile liquid dose products. Quality attributes such as deliverable volume and assay per spray have stage-wise acceptance criteria that can be converted into an acceptance probability. Accepted statistical guidelines indicate processes with Cpk > 1.33 as performing well within statistical control and those with Cpk < 1.0 as "incapable" (1). A Cpk > 1.33 is associated with a centered process that will statistically produce less than 63 defective units per million. This is equivalent to an acceptance probability of >99.99%.
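    The Cpk figures quoted above can be checked directly for a centered normal process, where the specification limits sit at ±3·Cpk standard deviations from the mean. This is generic capability arithmetic, not the paper's stage-wise AV computation, and the function name is ours:

```python
import math

def acceptance_probability(cpk: float) -> float:
    """P(unit within spec) for a centered normal process: the two-sided
    tail probability beyond +/- 3*Cpk sigma is erfc(3*Cpk / sqrt(2))."""
    two_sided_defect_rate = math.erfc(3.0 * cpk / math.sqrt(2.0))
    return 1.0 - two_sided_defect_rate

# Cpk = 4/3 corresponds to roughly 63 defective units per million,
# i.e., an acceptance probability above 99.99%.
defects_ppm = (1.0 - acceptance_probability(4.0 / 3.0)) * 1e6
```

    This reproduces the rule of thumb in the abstract: a centered process at Cpk = 4/3 yields about 63 ppm defective.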

  20. Design Optimization for the Measurement Accuracy Improvement of a Large Range Nanopositioning Stage.

    PubMed

    Torralba, Marta; Yagüe-Fabra, José Antonio; Albajez, José Antonio; Aguilar, Juan José

    2016-01-11

    Both an accurate machine design and an adequate metrology loop definition are critical factors when precision positioning represents a key issue for the final system performance. This article discusses the error budget methodology as an advantageous technique to improve the measurement accuracy of a 2D long-range stage during its design phase. The nanopositioning platform NanoPla is presented here. Its specifications, e.g., an XY-travel range of 50 mm × 50 mm and sub-micrometric accuracy, and some novel design solutions, e.g., a three-layer and two-stage architecture, are described. Once the prototype is defined, an error analysis is performed to propose design improvements. Then, the metrology loop of the system is mathematically modelled to define the propagation of the different error sources. Several simplifications and design hypotheses are justified and validated, including the assumption of rigid-body behavior, which is demonstrated by a finite element analysis verification. The different error sources and their estimated contributions are enumerated in order to conclude with the final error values obtained from the error budget. The measurement deviations obtained demonstrate the important influence of the working environmental conditions, the flatness error of the plane mirror reflectors, and the accurate manufacture and assembly of the components forming the metrological loop. Thus, a temperature control of ±0.1 °C results in an acceptable maximum positioning error for the developed NanoPla stage, i.e., 41 nm, 36 nm and 48 nm in the X-, Y- and Z-axis, respectively.
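    As a minimal sketch of the error budget technique the abstract describes, independent zero-mean error contributions combine by root-sum-square. The contribution names and values below are illustrative placeholders, not the NanoPla figures:

```python
import math

def rss_budget(contributions_nm):
    """Combine independent, zero-mean error contributions by root-sum-square."""
    return math.sqrt(sum(c * c for c in contributions_nm))

# Hypothetical single-axis contributions (nm): thermal drift, mirror flatness,
# interferometer electronics, Abbe offset. Values are illustrative only.
x_axis_error = rss_budget([30.0, 20.0, 10.0, 15.0])  # combined: ~40 nm
```

    The RSS combination shows why the largest single contributor (here, thermal drift) dominates the budget, which is consistent with the abstract's emphasis on temperature control.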

  1. Design Optimization for the Measurement Accuracy Improvement of a Large Range Nanopositioning Stage

    PubMed Central

    Torralba, Marta; Yagüe-Fabra, José Antonio; Albajez, José Antonio; Aguilar, Juan José

    2016-01-01

    Both an accurate machine design and an adequate metrology loop definition are critical factors when precision positioning represents a key issue for the final system performance. This article discusses the error budget methodology as an advantageous technique to improve the measurement accuracy of a 2D long-range stage during its design phase. The nanopositioning platform NanoPla is presented here. Its specifications, e.g., an XY-travel range of 50 mm × 50 mm and sub-micrometric accuracy, and some novel design solutions, e.g., a three-layer and two-stage architecture, are described. Once the prototype is defined, an error analysis is performed to propose design improvements. Then, the metrology loop of the system is mathematically modelled to define the propagation of the different error sources. Several simplifications and design hypotheses are justified and validated, including the assumption of rigid-body behavior, which is demonstrated by a finite element analysis verification. The different error sources and their estimated contributions are enumerated in order to conclude with the final error values obtained from the error budget. The measurement deviations obtained demonstrate the important influence of the working environmental conditions, the flatness error of the plane mirror reflectors, and the accurate manufacture and assembly of the components forming the metrological loop. Thus, a temperature control of ±0.1 °C results in an acceptable maximum positioning error for the developed NanoPla stage, i.e., 41 nm, 36 nm and 48 nm in the X-, Y- and Z-axis, respectively. PMID:26761014

  2. "Let it be and keep on going! Acceptance and daily occupational well-being in relation to negative work events": Correction to Kuba and Scheibe (2017).

    PubMed

    2017-01-01

    Reports an error in "Let It Be and Keep on Going! Acceptance and Daily Occupational Well-Being in Relation to Negative Work Events" by Katharina Kuba and Susanne Scheibe (Journal of Occupational Health Psychology, Advanced Online Publication, Feb 25, 2016, np). In the article, there were errors in the Participants subsection in the Method section. The last three sentences should read "Job tenure ranged from less than 1 year to 32 years, with an average of 8.83 years (SD 7.80). Participants interacted with clients on average 5.44 hr a day (SD 2.41). The mean working time was 7.36 hr per day (SD 1.91)." (The following abstract of the original article appeared in record 2016-09702-001.) Negative work events can diminish daily occupational well-being, yet the degree to which they do so depends on the way in which people deal with their emotions. The aim of the current study was to examine the role of acceptance in the link between daily negative work events and occupational well-being. We hypothesized that acceptance would be associated with better daily occupational well-being, operationalized as low end-of-day negative emotions and fatigue, and high work engagement. Furthermore, we predicted that acceptance would buffer the adverse impact of negative work events on daily well-being. A microlongitudinal study across 10 work days was carried out with 92 employees of the health care sector, yielding a total of 832 daily observations. As expected, acceptance was associated with lower end-of-day negative emotions and fatigue (though there was no association with work engagement) across the 10-day period. Furthermore, acceptance moderated the effect of negative event occurrence on daily well-being: Highly accepting employees experienced less increase in negative emotions and less reduction in work engagement (though comparable end-of-day fatigue) on days with negative work events, relative to days without negative work events, than did less accepting employees. These

  3. Agriculture, forestry, range resources

    NASA Technical Reports Server (NTRS)

    Crea, W. J.

    1974-01-01

    In the area of crop species identification, it has been found that temporal data analysis, preliminary stratification, and unequal probability analysis were several of the factors that contributed to high identification accuracies. Single data set accuracies on fields of greater than 80,000 sq m (20 acres) are in the 70- to 90-percent range; however, with the use of temporal data, accuracies of 95 percent have been reported. Identification accuracy drops off significantly on areas of less than 80,000 sq m (20 acres), as does measurement accuracy. Forest stratification into coniferous and deciduous areas has been accomplished to a 90- to 95-percent accuracy level. Using multistage sampling techniques, the timber volume of a national forest district has been estimated to a confidence level and standard deviation acceptable to the Forest Service at a very favorable cost-benefit time ratio. Range species/plant community vegetation mapping has been accomplished at various levels of success (69- to 90-percent accuracy). However, several investigators have obtained encouraging initial results in range biomass (forage production) estimation and range readiness predictions. Soil association map correction and soil association mapping in new areas appear to have been proven feasible on large areas; however, testing in a complex soil area should be undertaken.

  4. Further Conceptualization of Treatment Acceptability

    ERIC Educational Resources Information Center

    Carter, Stacy L.

    2008-01-01

    A review and extension of previous conceptualizations of treatment acceptability is provided in light of progress within the area of behavior treatment development and implementation. Factors including legislation, advances in research, and service delivery models are examined as to their relationship with a comprehensive conceptualization of…

  5. Acceptance and Commitment Therapy: Introduction

    ERIC Educational Resources Information Center

    Twohig, Michael P.

    2012-01-01

    This is the introductory article to a special series in Cognitive and Behavioral Practice on Acceptance and Commitment Therapy (ACT). Instead of each article herein reviewing the basics of ACT, this article contains that review. This article provides a description of where ACT fits within the larger category of cognitive behavior therapy (CBT):…

  6. Nitrogen trailer acceptance test report

    SciTech Connect

    Kostelnik, A.J.

    1996-02-12

    This Acceptance Test Report documents compliance with the requirements of specification WHC-S-0249. The equipment was tested according to WHC-SD-WM-ATP-108 Rev. 0. The equipment being tested is a portable contained nitrogen supply. The test was conducted at Norco's facility.

  7. Imaginary Companions and Peer Acceptance

    ERIC Educational Resources Information Center

    Gleason, Tracy R.

    2004-01-01

    Early research on imaginary companions suggests that children who create them do so to compensate for poor social relationships. Consequently, the peer acceptance of children with imaginary companions was compared to that of their peers. Sociometrics were conducted on 88 preschool-aged children; 11 had invisible companions, 16 had personified…

  8. Euthanasia Acceptance: An Attitudinal Inquiry.

    ERIC Educational Resources Information Center

    Klopfer, Fredrick J.; Price, William F.

    The study presented was conducted to examine potential relationships between attitudes regarding the dying process, including acceptance of euthanasia, and other attitudinal or demographic attributes. The data of the survey was comprised of responses given by 331 respondents to a door-to-door interview. Results are discussed in terms of preferred…

  9. Helping Our Children Accept Themselves.

    ERIC Educational Resources Information Center

    Gamble, Mae

    1984-01-01

    Parents of a child with muscular dystrophy recount their reactions to learning of the diagnosis, their gradual acceptance, and their son's resistance, which was gradually lessened when he was provided with more information and treated more normally as a member of the family. (CL)

  10. Ultra-high degree spherical harmonic analysis and synthesis using extended-range arithmetic

    NASA Astrophysics Data System (ADS)

    Wittwer, Tobias; Klees, Roland; Seitz, Kurt; Heck, Bernhard

    2008-04-01

    We present software for spherical harmonic analysis (SHA) and spherical harmonic synthesis (SHS), which can be used for essentially arbitrary degrees and all co-latitudes in the interval (0°, 180°). The routines use extended-range floating-point arithmetic, in particular for the computation of the associated Legendre functions. The price to be paid is an increased computation time; for degree 3,000, the extended-range arithmetic SHS program takes 49 times longer than its standard arithmetic counterpart. The extended-range SHS and SHA routines allow us to test existing routines for SHA and SHS. A comparison with the publicly available SHS routine GEOGFG18 by Wenzel and HARMONIC SYNTH by Holmes and Pavlis confirms what is known about the stability of these programs. GEOGFG18 gives errors <1 mm for latitudes [-89°57.5', 89°57.5'] and maximum degree 1,800. Higher degrees significantly limit the range of acceptable latitudes for a given accuracy. HARMONIC SYNTH gives good results up to degree 2,700 for almost the whole latitude range. The errors increase towards the North pole and exceed 1 mm at latitude 82° for degree 2,700. For a maximum degree 3,000, HARMONIC SYNTH produces errors exceeding 1 mm at latitudes of about 60°, whereas GEOGFG18 is limited to latitudes below 45°. Further extending the latitudinal band towards the poles may produce errors of several metres for both programs. A SHA of a uniform random signal on the sphere shows significant errors beyond degree 1,700 for the SHA program SHA by Heck and Seitz.
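    A quick illustration of why extended-range arithmetic is needed: the sectorial associated Legendre seed value scales like (sin θ)^m, which underflows IEEE double precision long before degree 3,000 at low co-latitudes. The log-domain workaround sketched below conveys the idea only; it is not the scaling scheme the authors implemented:

```python
import math

colat_deg, m = 10.0, 3000              # co-latitude and spherical-harmonic order
s = math.sin(math.radians(colat_deg))  # ~0.174

direct = s ** m                  # underflows double precision to exactly 0.0
log_magnitude = m * math.log(s)  # ~ -5.25e3 in natural-log units: easily representable
```

    Extended-range implementations typically carry a separate integer exponent alongside the mantissa (equivalent to working with log_magnitude above), which is also why they run markedly slower than plain double-precision code.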

  11. Biasing errors and corrections

    NASA Technical Reports Server (NTRS)

    Meyers, James F.

    1991-01-01

    The dependence of laser velocimeter measurement rate on flow velocity is discussed. Investigations outlining that any dependence is purely statistical, and is nonstationary both spatially and temporally, are described. Main conclusions drawn are that the times between successive particle arrivals should be routinely measured and the calculation of the velocity data rate correlation coefficient should be performed to determine if a dependency exists. If none is found, accept the data ensemble as an independent sample of the flow. If a dependency is found, the data should be modified to obtain an independent sample. Universal correcting procedures should never be applied because their underlying assumptions are not valid.

  12. Modeling error analysis of stationary linear discrete-time filters

    NASA Technical Reports Server (NTRS)

    Patel, R.; Toda, M.

    1977-01-01

    The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter, when only the range of errors in the elements of the model matrices is available.

  13. Let it be and keep on going! Acceptance and daily occupational well-being in relation to negative work events.

    PubMed

    Kuba, Katharina; Scheibe, Susanne

    2017-01-01

    [Correction Notice: An Erratum for this article was reported in Vol 22(1) of Journal of Occupational Health Psychology (see record 2016-25216-001). In the article, there were errors in the Participants subsection in the Method section. The last three sentences should read "Job tenure ranged from less than 1 year to 32 years, with an average of 8.83 years (SD 7.80). Participants interacted with clients on average 5.44 hr a day (SD 2.41). The mean working time was 7.36 hr per day (SD 1.91)."] Negative work events can diminish daily occupational well-being, yet the degree to which they do so depends on the way in which people deal with their emotions. The aim of the current study was to examine the role of acceptance in the link between daily negative work events and occupational well-being. We hypothesized that acceptance would be associated with better daily occupational well-being, operationalized as low end-of-day negative emotions and fatigue, and high work engagement. Furthermore, we predicted that acceptance would buffer the adverse impact of negative work events on daily well-being. A microlongitudinal study across 10 work days was carried out with 92 employees of the health care sector, yielding a total of 832 daily observations. As expected, acceptance was associated with lower end-of-day negative emotions and fatigue (though there was no association with work engagement) across the 10-day period. Furthermore, acceptance moderated the effect of negative event occurrence on daily well-being: Highly accepting employees experienced less increase in negative emotions and less reduction in work engagement (though comparable end-of-day fatigue) on days with negative work events, relative to days without negative work events, than did less accepting employees. 
These findings highlight affective, resource-saving, and motivational benefits of acceptance for daily occupational well-being and demonstrate that acceptance is associated with enhanced resilience to daily

  14. Self-Acceptance: The Evaluative Component of the Self-concept Contruct.

    ERIC Educational Resources Information Center

    Shepard, Lorrie A.

    1979-01-01

    Seven methods were used to measure the self-acceptance, self-description, and acceptance of others in 137 people ranging in age from 14 to 82. Results indicated that self-acceptance and self-description could not be clearly distinguished. (MH)

  15. Wavefront error sensing

    NASA Technical Reports Server (NTRS)

    Tubbs, Eldred F.

    1986-01-01

    A two-step approach to wavefront sensing for the Large Deployable Reflector (LDR) was examined as part of an effort to define wavefront-sensing requirements and to determine particular areas for more detailed study. A Hartmann test for coarse alignment, particularly segment tilt, seems feasible if LDR can operate at 5 microns or less. The direct measurement of the point spread function in the diffraction limited region may be a way to determine piston error, but this can only be answered by a detailed software model of the optical system. The question of suitable astronomical sources for either test must also be addressed.

  16. Detecting Errors in Programs

    DTIC Science & Technology

    1979-02-01

    Detecting Errors in Programs, Lloyd D. Fosdick. ...from a finite set of tests [35, 36]. Recently Howden [37] presented a result showing that for a particular class of Lindenmayer grammars it was possible... [37] Howden, W.E.: Lindenmayer grammars and symbolic testing. Information Processing Letters 7(1) (Jan. 1978), 36-39.

  17. Speech Errors, Error Correction, and the Construction of Discourse.

    ERIC Educational Resources Information Center

    Linde, Charlotte

    Speech errors have been used in the construction of production models of the phonological and semantic components of language, and for a model of interactional processes. Errors also provide insight into how speakers plan discourse and syntactic structure. Different types of discourse exhibit different types of error. The present data are taken…

  18. Errors in CT colonography.

    PubMed

    Trilisky, Igor; Ward, Emily; Dachman, Abraham H

    2015-10-01

    CT colonography (CTC) is a colorectal cancer screening modality which is becoming more widely implemented and has shown polyp detection rates comparable to those of optical colonoscopy. CTC has the potential to improve population screening rates due to its minimal invasiveness, no sedation requirement, potential for reduced cathartic examination, faster patient throughput, and cost-effectiveness. Proper implementation of a CTC screening program requires careful attention to numerous factors, including patient preparation prior to the examination, the technical aspects of image acquisition, and post-processing of the acquired data. A CTC workstation with dedicated software is required with integrated CTC-specific display features. Many workstations include computer-aided detection software which is designed to decrease errors of detection by detecting and displaying polyp candidates to the reader for evaluation. There are several pitfalls which may result in false-negative and false-positive reader interpretation. We present an overview of the potential errors in CTC and a systematic approach to avoid them.

  19. Inborn Errors in Immunity

    PubMed Central

    Lionakis, M.S.; Hajishengallis, G.

    2015-01-01

    In recent years, the study of genetic defects arising from inborn errors in immunity has resulted in the discovery of new genes involved in the function of the immune system and in the elucidation of the roles of known genes whose importance was previously unappreciated. With the recent explosion in the field of genomics and the increasing number of genetic defects identified, the study of naturally occurring mutations has become a powerful tool for gaining mechanistic insight into the functions of the human immune system. In this concise perspective, we discuss emerging evidence that inborn errors in immunity constitute real-life models that are indispensable both for the in-depth understanding of human biology and for obtaining critical insights into common diseases, such as those affecting oral health. In the field of oral mucosal immunity, through the study of patients with select gene disruptions, the interleukin-17 (IL-17) pathway has emerged as a critical element in oral immune surveillance and susceptibility to inflammatory disease, with disruptions in the IL-17 axis now strongly linked to mucosal fungal susceptibility, whereas overactivation of the same pathways is linked to inflammatory periodontitis. PMID:25900229

  20. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
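    The heteroscedasticity argument can be reproduced with synthetic data: when the true error is multiplicative, additive residuals spread out with rain amount, while log-domain (multiplicative) residuals keep a constant spread. A self-contained sketch with assumed, illustrative parameters (uniform "truth", 30% log-normal error):

```python
import math
import random

random.seed(0)
truth = [random.uniform(0.1, 50.0) for _ in range(10000)]      # "true" daily precip, mm
# Multiplicative measurement error: Y = X * exp(eps), eps ~ N(0, 0.3^2)
meas = [x * math.exp(random.gauss(0.0, 0.3)) for x in truth]

def sd(values):
    mu = sum(values) / len(values)
    return math.sqrt(sum((v - mu) ** 2 for v in values) / len(values))

# Additive residuals (Y - X): their spread grows with precipitation amount...
light = [m - x for x, m in zip(truth, meas) if x < 5.0]
heavy = [m - x for x, m in zip(truth, meas) if x > 40.0]
# ...while log-domain residuals log(Y/X) have the same spread in both regimes.
log_light = [math.log(m / x) for x, m in zip(truth, meas) if x < 5.0]
log_heavy = [math.log(m / x) for x, m in zip(truth, meas) if x > 40.0]
```

    Fitting an additive model to such data would fold the amount-dependent spread into its "random" error term, which is the variance leakage the letter criticizes.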

  1. Acceptability of reactors in space

    SciTech Connect

    Buden, D.

    1981-01-01

    Reactors are the key to our future expansion into space. However, there has been some confusion in the public as to whether they are a safe and acceptable technology for use in space. The answer to these questions is explored. The US position is that when reactors are the preferred technical choice, that they can be used safely. In fact, it does not appear that reactors add measurably to the risk associated with the Space Transportation System.

  2. Acceptability of reactors in space

    SciTech Connect

    Buden, D.

    1981-04-01

    Reactors are the key to our future expansion into space. However, there has been some confusion in the public as to whether they are a safe and acceptable technology for use in space. The answer to these questions is explored. The US position is that when reactors are the preferred technical choice, that they can be used safely. In fact, it does not appear that reactors add measurably to the risk associated with the Space Transportation System.

  3. Error Analysis in Mathematics Education.

    ERIC Educational Resources Information Center

    Rittner, Max

    1982-01-01

    The article reviews the development of mathematics error analysis as a means of diagnosing students' cognitive reasoning. Errors specific to addition, subtraction, multiplication, and division are described, and suggestions for remediation are provided. (CL)

  4. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.

  5. Prospective issues for error detection.

    PubMed

    Blavier, Adélaïde; Rouy, Emmanuelle; Nyssen, Anne-Sophie; de Keyser, Véronique

    2005-06-10

    From the literature on error detection, the authors select several concepts relating error detection mechanisms to prospective memory features. They emphasize the central role of intention in the classification of errors into slips/lapses/mistakes, in the error handling process, and in the usual distinction between action-based and outcome-based detection. Intention is again a core concept in their investigation of prospective memory theory, where they point out the contribution of intention retrieval, intention persistence, and output monitoring to an individual's ability to detect their own errors. The involvement of the frontal lobes in prospective memory and in error detection is also analysed. From the chronology of a prospective memory task, the authors finally suggest a model for error detection that also accounts for neural mechanisms highlighted by studies on error-related brain activity.

  6. Drug Administration Errors in Hospital Inpatients: A Systematic Review

    PubMed Central

    Berdot, Sarah; Gillaizeau, Florence; Caruba, Thibaut; Prognon, Patrice; Durieux, Pierre; Sabatier, Brigitte

    2013-01-01

    Context Drug administration in the hospital setting is the last barrier before a possible error reaches the patient. Objectives We aimed to analyze the prevalence and nature of administration error rate detected by the observation method. Data Sources Embase, MEDLINE, Cochrane Library from 1966 to December 2011 and reference lists of included studies. Study Selection Observational studies, cross-sectional studies, before-and-after studies, and randomized controlled trials that measured the rate of administration errors in inpatients were included. Data Extraction Two reviewers (senior pharmacists) independently identified studies for inclusion. One reviewer extracted the data; the second reviewer checked the data. The main outcome was the error rate calculated as being the number of errors without wrong time errors divided by the Total Opportunity for Errors (TOE, sum of the total number of doses ordered plus the unordered doses given), and multiplied by 100. For studies that reported it, clinical impact was reclassified into four categories from fatal to minor or no impact. Due to a large heterogeneity, results were expressed as median values (interquartile range, IQR), according to their study design. Results Among 2088 studies, a total of 52 reported TOE. Most of the studies were cross-sectional studies (N=46). The median error rate without wrong time errors for the cross-sectional studies using TOE was 10.5% [IQR: 7.3%-21.7%]. No fatal error was observed and most errors were classified as minor in the 18 studies in which clinical impact was analyzed. We did not find any evidence of publication bias. Conclusions Administration errors are frequent among inpatients. The median error rate without wrong time errors for the cross-sectional studies using TOE was about 10%. A standardization of administration error rate using the same denominator (TOE), numerator and types of errors is essential for further publications. PMID:23818992
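    The review's error-rate definition reduces to a one-line formula over the Total Opportunity for Errors (TOE). The audit numbers in the example are hypothetical, chosen to land on the reported median of about 10%:

```python
def administration_error_rate(errors_excl_wrong_time: int,
                              doses_ordered: int,
                              unordered_doses_given: int) -> float:
    """Error rate (%) = errors (wrong-time errors excluded) / TOE * 100,
    where TOE = total doses ordered + unordered doses given."""
    toe = doses_ordered + unordered_doses_given
    return 100.0 * errors_excl_wrong_time / toe

# Hypothetical ward audit: 21 qualifying errors over 180 ordered + 20 unordered doses.
rate = administration_error_rate(21, 180, 20)  # 10.5%
```

    Standardizing on this denominator is exactly what the authors recommend, since studies that divide by doses ordered alone (omitting unordered doses) report inflated rates.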

  7. Analysis and comparison of range-range positioning mode and hyperbolic positioning mode

    NASA Astrophysics Data System (ADS)

    Chen, Shi-Ru; Xu, Ding-Jie; Sun, Yao

    2002-06-01

    Three key factors are discussed which affect the positioning accuracy of the range-range positioning mode and the hyperbolic positioning mode. Based on error ellipse theory, the expressions for the positioning error and the positioning geometric factor of the range-range and hyperbolic positioning modes are derived, and the positioning error and blind positioning area of the two modes are analyzed. According to the requirements of the navigation area, an optimum configuration of the navigation stations for the hyperbolic positioning mode is provided. Several noteworthy conclusions are obtained, and distribution graphs are presented, which are important for the study and design of a reasonable, precise radio navigation system.
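    The geometric-factor idea can be made concrete with the standard 2-D dilution-of-precision calculation for a two-station range-range fix. This is a generic textbook construction, not the authors' derivation; station bearings are arbitrary:

```python
import math

def range_range_dop(bearing1_deg: float, bearing2_deg: float) -> float:
    """2-D dilution of precision for a fix from two range measurements.
    Rows of H are unit line-of-sight vectors to the stations; the position
    covariance is sigma_r^2 * (H^T H)^-1, so DOP = sqrt(trace((H^T H)^-1))."""
    h = [[math.cos(math.radians(b)), math.sin(math.radians(b))]
         for b in (bearing1_deg, bearing2_deg)]
    # Form the 2x2 normal matrix H^T H and invert it analytically.
    a = h[0][0] ** 2 + h[1][0] ** 2
    b = h[0][0] * h[0][1] + h[1][0] * h[1][1]
    d = h[0][1] ** 2 + h[1][1] ** 2
    det = a * d - b * b
    return math.sqrt((a + d) / det)  # trace of a 2x2 inverse = (a + d) / det

good_geometry = range_range_dop(0.0, 90.0)  # orthogonal lines of position: DOP = sqrt(2)
poor_geometry = range_range_dop(0.0, 20.0)  # nearly collinear: DOP blows up
```

    For two unit line-of-sight vectors crossing at angle γ, this reduces to DOP = √2 / |sin γ|, which is why positioning error degrades sharply (and a blind area appears) along the baseline extension where γ approaches 0° or 180°.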

  8. Error Sources in Asteroid Astrometry

    NASA Technical Reports Server (NTRS)

    Owen, William M., Jr.

    2000-01-01

    Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.

  9. Impacts of frequency increment errors on frequency diverse array beampattern

    NASA Astrophysics Data System (ADS)

    Gao, Kuandong; Chen, Hui; Shao, Huaizong; Cai, Jingye; Wang, Wen-Qin

    2015-12-01

    Different from conventional phased array, which provides only angle-dependent beampattern, frequency diverse array (FDA) employs a small frequency increment across the antenna elements and thus results in a range-angle-dependent beampattern. However, due to imperfect electronic devices, it is difficult to ensure accurate frequency increments, and consequently, the array performance will be degraded by unavoidable frequency increment errors. In this paper, we investigate the impacts of frequency increment errors on FDA beampattern. We derive the beampattern errors caused by deterministic frequency increment errors. For stochastic frequency increment errors, the corresponding upper and lower bounds of FDA beampattern error are derived. They are verified by numerical results. Furthermore, the statistical characteristics of FDA beampattern with random frequency increment errors, which obey Gaussian distribution and uniform distribution, are also investigated.
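    The range-angle dependence described above can be sketched with the common first-order FDA array-factor approximation (a minimal illustration assuming a uniform linear array with half-wavelength spacing, evaluated at t = 0; all parameter values are made up, and the paper's error analysis is not reproduced here):

    ```python
    import numpy as np

    def fda_array_factor(theta_rad, r_m, n_elems=16, f0=10e9, delta_f=30e3,
                         c=3e8):
        """Normalized first-order FDA array-factor magnitude at t = 0.

        Element n radiates at f0 + n*delta_f, so the per-element phase
        depends on angle (via the element spacing) and on range (via the
        frequency increment).  With delta_f = 0 this reduces to an ordinary
        phased array whose pattern depends on angle alone.
        """
        lam = c / f0
        d = lam / 2.0                    # half-wavelength element spacing
        n = np.arange(n_elems)
        phase = 2 * np.pi * n * (d * np.sin(theta_rad) / lam
                                 - delta_f * r_m / c)
        return float(np.abs(np.exp(1j * phase).sum())) / n_elems
    ```

    Note that the pattern is periodic in range with period c/delta_f (10 km for these illustrative values), so a frequency increment error shifts where the pattern peaks fall in range.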

  10. Rotational error in path integration: encoding and execution errors in angle reproduction.

    PubMed

    Chrastil, Elizabeth R; Warren, William H

    2017-03-16

    Path integration is fundamental to human navigation. When a navigator leaves home on a complex outbound path, they are able to keep track of their approximate position and orientation and return to their starting location on a direct homebound path. However, there are several sources of error during path integration. Previous research has focused almost exclusively on encoding error-the error in registering the outbound path in memory. Here, we also consider execution error-the error in the response, such as turning and walking a homebound trajectory. In two experiments conducted in ambulatory virtual environments, we examined the contribution of execution error to the rotational component of path integration using angle reproduction tasks. In the reproduction tasks, participants rotated once and then rotated again to face the original direction, either reproducing the initial turn or turning through the supplementary angle. One outstanding difficulty in disentangling encoding and execution error during a typical angle reproduction task is that as the encoding angle increases, so does the required response angle. In Experiment 1, we dissociated these two variables by asking participants to report each encoding angle using two different responses: by turning to walk on a path parallel to the initial facing direction in the same (reproduction) or opposite (supplementary angle) direction. In Experiment 2, participants reported the encoding angle by turning both rightward and leftward onto a path parallel to the initial facing direction, over a larger range of angles. The results suggest that execution error, not encoding error, is the predominant source of error in angular path integration. These findings also imply that the path integrator uses an intrinsic (action-scaled) rather than an extrinsic (objective) metric.

  11. Error Patterns in Problem Solving.

    ERIC Educational Resources Information Center

    Babbitt, Beatrice C.

    Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…

  12. Measurement Error. For Good Measure....

    ERIC Educational Resources Information Center

    Johnson, Stephen; Dulaney, Chuck; Banks, Karen

    No test, however well designed, can measure a student's true achievement because numerous factors interfere with the ability to measure achievement. These factors are sources of measurement error, and the goal in creating tests is to have as little measurement error as possible. Error can result from the test design, factors related to individual…

  13. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  14. Feature Referenced Error Correction Apparatus.

    DTIC Science & Technology

    A feature referenced error correction apparatus utilizing the multiple images of the interstage level image format to compensate for positional...images and by the generation of an error correction signal in response to the sub-frame registration errors. (Author)

  15. On typographical errors.

    PubMed

    Hamilton, J W

    1993-09-01

    In his overall assessment of parapraxes in 1901, Freud included typographical mistakes but did not elaborate on or study this subject nor did he have anything to say about it in his later writings. This paper lists textual errors from a variety of current literary sources and explores the dynamic importance of their execution and the failure to make necessary corrections during the editorial process. While there has been a deemphasis of the role of unconscious determinants in the genesis of all slips as a result of recent findings in cognitive psychology, the examples offered suggest that, with respect to motivation, lapses in compulsivity contribute to their original commission while thematic compliance and voyeuristic issues are important in their not being discovered prior to publication.

  16. Correction of subtle refractive error in aviators.

    PubMed

    Rabin, J

    1996-02-01

    Optimal visual acuity is a requirement for piloting aircraft in military and civilian settings. While acuity can be corrected with glasses, spectacle wear can limit or even prohibit use of certain devices such as night vision goggles, helmet mounted displays, and/or chemical protective masks. Although current Army policy is directed toward selection of pilots who do not require spectacle correction for acceptable vision, refractive error can become manifest over time, making optical correction necessary. In such cases, contact lenses have been used quite successfully. Another approach is to neglect small amounts of refractive error, provided that vision is at least 20/20 without correction. This report describes visual findings in an aviator who was fitted with a contact lens to correct moderate astigmatism in one eye, while the other eye, with lesser refractive error, was left uncorrected. Advanced methods of testing visual resolution, including high and low contrast visual acuity and small letter contrast sensitivity, were used to compare vision achieved with full spectacle correction to that attained with the habitual, contact lens correction. Although the patient was pleased with his habitual correction, vision was significantly better with full spectacle correction, particularly on the small letter contrast test. Implications of these findings are considered.

  17. How perioperative nurses define, attribute causes of, and react to intraoperative nursing errors.

    PubMed

    Chard, Robin

    2010-01-01

    Errors in nursing practice pose a continuing threat to patient safety. A descriptive, correlational study was conducted to examine the definitions, circumstances, and perceived causes of intraoperative nursing errors; reactions of perioperative nurses to intraoperative nursing errors; and the relationships among coping with intraoperative nursing errors, emotional distress, and changes in practice made as a result of error. The results indicate that strategies of accepting responsibility and using self-control are significant predictors of emotional distress. Seeking social support and planful problem solving emerged as significant predictors of constructive changes in practice. Most predictive of defensive changes was the strategy of escape/avoidance.

  18. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy to execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.
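    The three steps above can be sketched for the simplest possible error model (per-axis linear scale errors only; the real method models many more error terms, and all numbers here are synthetic):

    ```python
    import numpy as np

    # (1) Model: assume each machine axis has only a small linear scale
    #     error s_x, s_y, s_z.
    # (2) Data: lengths of a calibrated 100 mm artifact measured along
    #     several orientations in the work volume (synthetic values here).
    # (3) Fit: to first order, the relative length error along a unit
    #     direction d is d_x^2*s_x + d_y^2*s_y + d_z^2*s_z, so fitting the
    #     model is a linear least-squares problem.

    directions = np.array([[1.0, 0.0, 0.0],
                           [0.0, 1.0, 0.0],
                           [0.0, 0.0, 1.0],
                           [0.6, 0.8, 0.0]])       # unit orientation vectors
    nominal = 100.0                                # mm, artifact length
    true_scale = np.array([50e-6, -20e-6, 10e-6])  # made-up scale errors
    measured = nominal * (1.0 + (directions**2) @ true_scale)

    A = directions**2
    b = measured / nominal - 1.0
    scale_hat, *_ = np.linalg.lstsq(A, b, rcond=None)  # recovered errors
    ```

    With more measurement directions than model parameters, the same least-squares machinery averages down random measurement noise, which is what makes length measurements an efficient way to map a work volume.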

  19. Register file soft error recovery

    DOEpatents

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
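    A software analogy of the mirrored-register scheme above (a sketch, not the patented circuitry; the single-bit parity check standing in for the error detection circuitry is an assumption):

    ```python
    class MirroredRegisterFile:
        """Primary register file plus a mirror copy; reads are checked
        against a stored parity bit, and on a detected soft error the
        mirrored value is used and written back, mimicking the inserted
        error recovery instruction."""

        def __init__(self, n_regs: int):
            self.primary = [0] * n_regs
            self.mirror = [0] * n_regs
            self.parity = [0] * n_regs

        def write(self, i: int, value: int) -> None:
            self.primary[i] = value
            self.mirror[i] = value
            self.parity[i] = bin(value).count("1") % 2

        def read(self, i: int) -> int:
            value = self.primary[i]
            if bin(value).count("1") % 2 != self.parity[i]:
                value = self.mirror[i]      # recover from the mirror copy
                self.primary[i] = value     # repair the corrupted register
            return value
    ```

    Note that single-bit parity detects only odd numbers of flipped bits; real designs use stronger detection codes.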

  20. Mechanisms of mindfulness training: Monitor and Acceptance Theory (MAT).

    PubMed

    Lindsay, Emily K; Creswell, J David

    2017-02-01

    Despite evidence linking trait mindfulness and mindfulness training with a broad range of effects, still little is known about its underlying active mechanisms. Mindfulness is commonly defined as (1) the ongoing monitoring of present-moment experience (2) with an orientation of acceptance. Building on conceptual, clinical, and empirical work, we describe a testable theoretical account to help explain mindfulness effects on cognition, affect, stress, and health outcomes. Specifically, Monitor and Acceptance Theory (MAT) posits that (1), by enhancing awareness of one's experiences, the skill of attention monitoring explains how mindfulness improves cognitive functioning outcomes, yet this same skill can increase affective reactivity. Second (2), by modifying one's relation to monitored experience, acceptance is necessary for reducing affective reactivity, such that attention monitoring and acceptance skills together explain how mindfulness improves negative affectivity, stress, and stress-related health outcomes. We discuss how MAT contributes to mindfulness science, suggest plausible alternatives to the account, and offer specific predictions for future research.

  1. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    SciTech Connect

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A.

    2011-02-15

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as ''simulated measurements'' (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called ''false negatives.'' The results also show numerous instances of false positives or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa. Conclusions: There is a lack of correlation between per-beam, planar IMRT QA passing rates and clinically relevant patient dose errors.

  2. Computer-socket manufacturing error: How much before it is clinically apparent?

    PubMed Central

    Sanders, Joan E.; Severance, Michael R.; Allyn, Kathryn J.

    2015-01-01

    The purpose of this research was to pursue quality standards for computer-manufacturing of prosthetic sockets for people with transtibial limb loss. Thirty-three duplicates of study participants’ normally used sockets were fabricated using central fabrication facilities. Socket-manufacturing errors were compared with clinical assessments of socket fit. Of the 33 sockets tested, 23 were deemed clinically to need modification. All 13 sockets with mean radial error (MRE) greater than 0.25 mm were clinically unacceptable, and 11 of those were deemed in need of sizing reduction. Of the remaining 20 sockets, 5 sockets with interquartile range (IQR) greater than 0.40 mm were deemed globally or regionally oversized and in need of modification. Of the remaining 15 sockets, 5 sockets with closed contours of elevated surface normal angle error (SNAE) were deemed clinically to need shape modification at those closed contour locations. The remaining 10 sockets were deemed clinically acceptable and not in need of modification. MRE, IQR, and SNAE may serve as effective metrics to characterize quality of computer-manufactured prosthetic sockets, helping facilitate the development of quality standards for the socket manufacturing industry. PMID:22773260
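    The first two screening metrics above reduce to simple statistics over per-point radial errors; a minimal sketch (the 0.25 mm and 0.40 mm cut-offs come from the abstract, while the error samples and function name are made up):

    ```python
    import numpy as np

    def socket_quality_flags(radial_errors_mm):
        """Summarize per-point radial errors between a manufactured socket
        surface and its reference shape, using the abstract's two scalar
        metrics and cut-offs (MRE > 0.25 mm, IQR > 0.40 mm)."""
        mre = float(np.mean(radial_errors_mm))            # mean radial error
        q1, q3 = np.percentile(radial_errors_mm, [25, 75])
        iqr = float(q3 - q1)                              # interquartile range
        return {"MRE_mm": mre, "IQR_mm": iqr,
                "needs_modification": mre > 0.25 or iqr > 0.40}

    # Hypothetical error samples: one well-made socket, one oversized one.
    good = socket_quality_flags([0.05, 0.10, 0.08, 0.12, 0.06])
    bad = socket_quality_flags([0.30, 0.45, 0.38, 0.52, 0.41])
    ```

    The SNAE check is geometric (closed contours of elevated surface-normal angle error) and does not reduce to a single scalar in the same way.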

  3. Computer-socket manufacturing error: how much before it is clinically apparent?

    PubMed

    Sanders, Joan E; Severance, Michael R; Allyn, Kathryn J

    2012-01-01

    The purpose of this research was to pursue quality standards for computer-manufacturing of prosthetic sockets for people with transtibial limb loss. Thirty-three duplicates of study participants' normally used sockets were fabricated using central fabrication facilities. Socket-manufacturing errors were compared with clinical assessments of socket fit. Of the 33 sockets tested, 23 were deemed clinically to need modification. All 13 sockets with mean radial error (MRE) greater than 0.25 mm were clinically unacceptable, and 11 of those were deemed in need of sizing reduction. Of the remaining 20 sockets, 5 sockets with interquartile range (IQR) greater than 0.40 mm were deemed globally or regionally oversized and in need of modification. Of the remaining 15 sockets, 5 sockets with closed contours of elevated surface normal angle error (SNAE) were deemed clinically to need shape modification at those closed contour locations. The remaining 10 sockets were deemed clinically acceptable and not in need of modification. MRE, IQR, and SNAE may serve as effective metrics to characterize quality of computer-manufactured prosthetic sockets, helping facilitate the development of quality standards for the socket manufacturing industry.

  4. Optimal input design for aircraft instrumentation systematic error estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1991-01-01

    A new technique for designing optimal flight test inputs for accurate estimation of instrumentation systematic errors was developed and demonstrated. A simulation model of the F-18 High Angle of Attack Research Vehicle (HARV) aircraft was used to evaluate the effectiveness of the optimal input compared to input recorded during flight test. Instrumentation systematic error parameter estimates and their standard errors were compared. It was found that the optimal input design improved error parameter estimates and their accuracies for a fixed time input design. Pilot acceptability of the optimal input design was demonstrated using a six degree-of-freedom fixed base piloted simulation of the F-18 HARV. The technique described in this work provides a practical, optimal procedure for designing inputs for data compatibility experiments.

  5. Multipoint Lods Provide Reliable Linkage Evidence Despite Unknown Limiting Distribution: Type I Error Probabilities Decrease with Sample Size for Multipoint Lods and Mods

    PubMed Central

    Hodge, Susan E.; Rodriguez-Murillo, Laura; Strug, Lisa J.; Greenberg, David A.

    2009-01-01

    We investigate the behavior of type I error rates in model-based multipoint (MP) linkage analysis, as a function of sample size (N). We consider both MP lods (i.e., MP linkage analysis that uses the correct genetic model) and MP mods (maximizing MP lods over 18 dominant and recessive models). Following Xing & Elston [2006], we first consider MP linkage analysis limited to a single position; then we enlarge the scope and maximize the lods and mods over a span of positions. In all situations we examined, type I error rates decrease with increasing sample size, apparently approaching zero. We show: (a) For MP lods analyzed only at a single position, well-known statistical theory predicts that type I error rates approach zero. (b) For MP lods and mods maximized over position, this result has a different explanation, related to the fact that one maximizes the scores over only a finite portion of the parameter range. The implications of these findings may be far-reaching: Although it is widely accepted that fixed nominal critical values for MP lods and mods are not known, this study shows that whatever the nominal error rates are, the actual error rates appear to decrease with increasing sample size. Moreover, the actual (observed) type I error rate may be quite small for any given study. We conclude that multipoint lod and mod scores provide reliable linkage evidence for complex diseases, despite the unknown limiting distributions of these multipoint scores. PMID:18613118

  6. Precise Orbit Determination for GEOSAT Follow-On Using Satellite Laser Ranging Data and Intermission Altimeter Crossovers

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Rowlands, David D.; Luthcke, Scott B.; Zelensky, Nikita P.; Chinn, Douglas S.; Pavlis, Despina E.; Marr, Gregory

    2001-01-01

    The US Navy's GEOSAT Follow-On Spacecraft was launched on February 10, 1998 with the primary objective of the mission to map the oceans using a radar altimeter. Following an extensive set of calibration campaigns in 1999 and 2000, the US Navy formally accepted delivery of the satellite on November 29, 2000. Satellite laser ranging (SLR) and Doppler (Tranet-style) beacons track the spacecraft. Although limited amounts of GPS data were obtained, the primary mode of tracking remains satellite laser ranging. The GFO altimeter measurements are highly precise, with orbit error the largest component in the error budget. We have tuned the non-conservative force model for GFO and the gravity model using SLR, Doppler and altimeter crossover data sampled over one year. Gravity covariance projections to 70x70 show the radial orbit error on GEOSAT was reduced from 2.6 cm in EGM96 to 1.3 cm with the addition of SLR, GFO/GFO and TOPEX/GFO crossover data. Evaluation of the gravity fields using SLR and crossover data support the covariance projections and also show a dramatic reduction in geographically-correlated error for the tuned fields. In this paper, we report on progress in orbit determination for GFO using GFO/GFO and TOPEX/GFO altimeter crossovers. We will discuss improvements in satellite force modeling and orbit determination strategy, which allows reduction in GFO radial orbit error from 10-15 cm to better than 5 cm.

  7. Axelrod model: accepting or discussing

    NASA Astrophysics Data System (ADS)

    Dybiec, Bartlomiej; Mitarai, Namiko; Sneppen, Kim

    2012-10-01

    Agents building social systems are characterized by complex states, and interactions among individuals can align their opinions. The Axelrod model describes how local interactions can result in emergence of cultural domains. We propose two variants of the Axelrod model where local consensus is reached either by listening and accepting one of neighbors' opinion or two agents discuss their opinion and achieve an agreement with mixed opinions. We show that the local agreement rule affects the character of the transition between the single culture and the multiculture regimes.
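    The classic Axelrod interaction rule underlying both variants can be sketched as follows (a minimal illustration using the standard "accept one differing trait" update; the grid size and numbers of features and traits are arbitrary choices, and the paper's "discuss" variant with mixed opinions is not implemented here):

    ```python
    import copy
    import random

    def axelrod_step(grid, rng):
        """One interaction of the classic Axelrod model on an LxL grid.

        Each site holds a list of cultural traits.  A random site interacts
        with a random neighbour with probability equal to their cultural
        overlap; on interaction it accepts (copies) one of the neighbour's
        differing traits, driving local consensus.
        """
        size = len(grid)
        i, j = rng.randrange(size), rng.randrange(size)
        di, dj = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        ni, nj = (i + di) % size, (j + dj) % size
        a, b = grid[i][j], grid[ni][nj]
        overlap = sum(x == y for x, y in zip(a, b)) / len(a)
        if 0 < overlap < 1 and rng.random() < overlap:
            k = rng.choice([f for f in range(len(a)) if a[f] != b[f]])
            a[k] = b[k]  # accept the neighbour's trait

    # 10x10 grid, 5 cultural features, 3 traits each (illustrative sizes).
    rng = random.Random(0)
    grid = [[[rng.randrange(3) for _ in range(5)] for _ in range(10)]
            for _ in range(10)]
    before = copy.deepcopy(grid)
    for _ in range(5000):
        axelrod_step(grid, rng)
    ```

    Sites with zero overlap never interact, which is what allows multiculture regimes to freeze in.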

  8. Parental Reports of Children's Scale Errors in Everyday Life

    ERIC Educational Resources Information Center

    Rosengren, Karl S.; Gutierrez, Isabel T.; Anderson, Kathy N.; Schein, Stevie S.

    2009-01-01

    Scale errors refer to behaviors where young children attempt to perform an action on an object that is too small to effectively accommodate the behavior. The goal of this study was to examine the frequency and characteristics of scale errors in everyday life. To do so, the researchers collected parental reports of children's (age range = 13-21…

  9. Improved Error Thresholds for Measurement-Free Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^{-3} to 10^{-4}, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  10. Improved Error Thresholds for Measurement-Free Error Correction.

    PubMed

    Crow, Daniel; Joynt, Robert; Saffman, M

    2016-09-23

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^{-3} to 10^{-4}-comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  11. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
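    The binarization step described above can be sketched as follows (a minimal illustration; the east-lying ocean and the dot-product onshore test are assumptions for the sketch, not details of the CEM preprocessing):

    ```python
    import numpy as np

    def binarize_onshore(u, v, sea_to_land=(-1.0, 0.0)):
        """Binarize a gridded wind field into onshore (1) / offshore (0).

        u, v: 2D arrays of west-to-east and south-to-north wind components.
        sea_to_land points from the ocean onto the coast; here the ocean is
        assumed to lie east of a north-south coastline, so onshore flow has
        a negative u component.  The result plays the role of the D(i,j;n)
        and d(i,j;n) fields described in the text.
        """
        nx, ny = sea_to_land
        return (u * nx + v * ny > 0).astype(np.uint8)
    ```

    CEM then compares the forecast and observed binary fields by locating and matching sea-breeze boundary contours, rather than cell by cell.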

  12. Error correction for IFSAR

    DOEpatents

    Doerry, Armin W.; Bickel, Douglas L.

    2002-01-01

    IFSAR images of a target scene are generated by compensating for variations in vertical separation between collection surfaces defined for each IFSAR antenna by adjusting the baseline projection during image generation. In addition, height information from all antennas is processed before processing range and azimuth information in a normal fashion to create the IFSAR image.

  13. Some mathematical refinements concerning error minimization in the genetic code.

    PubMed

    Buhrman, Harry; van der Gulik, Peter T S; Kelk, Steven M; Koolen, Wouter M; Stougie, Leen

    2011-01-01

    The genetic code is known to have a high level of error robustness and has been shown to be very error robust compared to randomly selected codes, but to be significantly less error robust than a certain code found by a heuristic algorithm. We formulate this optimization problem as a Quadratic Assignment Problem and use this to formally verify that the code found by the heuristic algorithm is the global optimum. We also argue that it is strongly misleading to compare the genetic code only with codes sampled from the fixed block model, because the real code space is orders of magnitude larger. We thus enlarge the space from which random codes can be sampled from approximately 2.433 × 10^18 codes to approximately 5.908 × 10^45 codes. We do this by leaving the fixed block model, and using the wobble rules to formulate the characteristics acceptable for a genetic code. By relaxing more constraints, three larger spaces are also constructed. Using a modified error function, the genetic code is found to be more error robust compared to a background of randomly generated codes with increasing space size. We point out that these results do not necessarily imply that the code was optimized during evolution for error minimization, but that other mechanisms could be the reason for this error robustness.

  14. Critical evidence for the prediction error theory in associative learning.

    PubMed

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-03-10

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by competing theories of blocking. This study unambiguously demonstrates the validity of the prediction error theory in associative learning.

  15. Dopamine reward prediction error coding.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
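    The three response profiles described above (positive error, zero error for fully predicted reward, negative error for omission) fall out of a standard Rescorla-Wagner-style update; a textbook sketch, not the neuronal implementation, with an arbitrary learning rate:

    ```python
    def reward_prediction_errors(rewards, alpha=0.2):
        """Track a reward prediction V and emit the per-trial error.

        delta = received - predicted: positive for a surprising reward,
        near zero once the reward is fully predicted, and negative when a
        predicted reward is omitted.
        """
        v, deltas = 0.0, []
        for r in rewards:
            delta = r - v          # reward prediction error
            deltas.append(delta)
            v += alpha * delta     # learning is driven by the error
        return v, deltas

    # A constant reward is learned: early errors are large and positive,
    # late errors shrink toward zero, and an omission trial (r = 0) after
    # learning produces a strongly negative error.
    ```

    The nonlinear value coding and utility scaling described in the abstract are properties of the dopamine signal itself and are not captured by this linear sketch.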

  16. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377

  17. Processor register error correction management

    SciTech Connect

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.

  18. Acceptance and Commitment Therapy (ACT): An Overview for Practitioners

    ERIC Educational Resources Information Center

    Bowden, Tim; Bowden, Sandra

    2012-01-01

    Acceptance and Commitment Therapy (ACT) offers school counsellors a practical and meaningful approach to helping students deal with a range of issues. This is achieved by encouraging psychological flexibility through the application of six key principles. This article describes our introduction to ACT, ACT's application to children and…

  19. Studying Student Teachers' Acceptance of Role Responsibility.

    ERIC Educational Resources Information Center

    Davis, Michael D.; Davis, Concetta M.

    1980-01-01

    There is variance in the way in which student teachers accept responsibility for the teaching act. This study explains why some variables may affect student teachers' acceptance of role responsibilities. (CM)

  20. [Subjective well-being and self acceptance].

    PubMed

    Makino, Y; Tagami, F

    1998-06-01

    The purpose of the present study was to examine the relationship between subjective well-being and self-acceptance, and to design a happiness self-writing program to increase the self-acceptance and subjective well-being of adolescents. In Study 1, we examined the relationship between social interaction and self-acceptance. In Study 2, we created a happiness self-writing program based on a cognitive-behavioral approach and examined whether the program promoted self-acceptance and subjective well-being. Results indicated that acceptance of self-openness, an aspect of self-acceptance, was related to subjective well-being. The happiness self-writing program increased subjective well-being, but it was not found to have increased self-acceptance. We discuss why the program promoted subjective well-being but not self-acceptance.

  1. Compensating For GPS Ephemeris Error

    NASA Technical Reports Server (NTRS)

    Wu, Jiun-Tsong

    1992-01-01

    Method of computing position of user station receiving signals from Global Positioning System (GPS) of navigational satellites compensates for most of GPS ephemeris error. Present method enables user station to reduce error in its computed position substantially. User station must have access to two or more reference stations at precisely known positions several hundred kilometers apart and must be in neighborhood of reference stations. Method based on fact that, when GPS data are used to compute baseline between reference station and user station, vector error in computed baseline is proportional to ephemeris error and to length of baseline.
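
    The stated proportionality lends itself to a back-of-the-envelope sketch; the ephemeris error, satellite range, and baseline length below are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope sketch of the proportionality described above:
# baseline error ~ ephemeris error * (baseline length / satellite range).
# All numbers are illustrative assumptions.
ephemeris_error_m = 5.0       # assumed satellite position error
satellite_range_m = 2.0e7     # roughly the GPS orbital radius
baseline_m = 3.0e5            # "several hundred kilometers"
baseline_error_m = ephemeris_error_m * baseline_m / satellite_range_m
```

    For these numbers the induced baseline error is a few centimeters, which is why nearby reference stations at precisely known positions can cancel most of the ephemeris error.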

  2. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  3. Confidence limits and their errors

    SciTech Connect

    Rajendran Raja

    2002-03-22

    Confidence limits are commonplace in physics analysis. Great care must be taken in their calculation and use, especially in cases of limited statistics. We introduce the concept of statistical errors of confidence limits and argue that not only should limits be calculated but also their errors, in order to represent the results of the analysis to the fullest. We show that comparison of two different limits from two different experiments becomes easier when their errors are also quoted. Use of errors of confidence limits will lead to abatement of the debate on which method is best suited to calculate confidence limits.
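
    The notion of an error on a limit can be made concrete with a bootstrap: recompute the limit on resampled data and take the spread as its statistical error. A sketch under an assumed dataset and an assumed limit definition (not the methods compared in the paper):

```python
import random
import statistics

# Sketch: estimate a one-sided upper confidence limit on a mean, then use a
# bootstrap to assign a statistical error to the limit itself. The data,
# seed, and limit definition are illustrative assumptions.
random.seed(0)
data = [random.gauss(10.0, 2.0) for _ in range(50)]

def upper_limit(sample):
    m = statistics.mean(sample)
    s = statistics.stdev(sample)
    return m + 1.645 * s / len(sample) ** 0.5   # ~95% one-sided limit on the mean

limit = upper_limit(data)
boot = [upper_limit([random.choice(data) for _ in data]) for _ in range(500)]
limit_error = statistics.stdev(boot)            # statistical error of the limit
```

    Quoting `limit ± limit_error` rather than the bare limit is the practice the abstract argues for.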

  4. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  5. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    ERIC Educational Resources Information Center

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  6. Error Analysis and Propagation in Metabolomics Data Analysis.

    PubMed

    Moseley, Hunter N B

    2013-01-01

    Error analysis plays a fundamental role in describing the uncertainty in experimental results. It has several fundamental uses in metabolomics including experimental design, quality control of experiments, the selection of appropriate statistical methods, and the determination of uncertainty in results. Furthermore, the importance of error analysis has grown with the increasing number, complexity, and heterogeneity of measurements characteristic of 'omics research. The increase in data complexity is particularly problematic for metabolomics, which has more heterogeneity than other omics technologies due to the much wider range of molecular entities detected and measured. This review introduces the fundamental concepts of error analysis as they apply to a wide range of metabolomics experimental designs and it discusses current methodologies for determining the propagation of uncertainty in appropriate metabolomics data analysis. These methodologies include analytical derivation and approximation techniques, Monte Carlo error analysis, and error analysis in metabolic inverse problems. Current limitations of each methodology with respect to metabolomics data analysis are also discussed.
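
    Monte Carlo error analysis, one of the methodologies reviewed, can be sketched for a simple derived quantity such as a ratio of two measured values (e.g. a metabolite abundance ratio); the measurements and uncertainties below are illustrative assumptions.

```python
import random
import statistics

# Monte Carlo error propagation sketch: push measurement uncertainty through
# a derived quantity (here a ratio). Values and uncertainties are illustrative.
random.seed(1)
a, sa = 5.0, 0.2     # measurement one: value and standard deviation
b, sb = 2.0, 0.1     # measurement two
samples = [random.gauss(a, sa) / random.gauss(b, sb) for _ in range(100_000)]
ratio = statistics.mean(samples)
ratio_sd = statistics.stdev(samples)
# First-order analytical propagation, for comparison:
approx_sd = (a / b) * ((sa / a) ** 2 + (sb / b) ** 2) ** 0.5
```

    For small relative uncertainties the Monte Carlo spread agrees closely with the first-order analytical formula; the Monte Carlo route remains usable when the derived quantity is too nonlinear for the analytical approximation.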

  7. Inductively Coupled Plasma Mass Spectrometry Uranium Error Propagation

    SciTech Connect

    Hickman, D P; Maclean, S; Shepley, D; Shaw, R K

    2001-07-01

    The Hazards Control Department at Lawrence Livermore National Laboratory (LLNL) uses Inductively Coupled Plasma Mass Spectrometer (ICP/MS) technology to analyze uranium in urine. The ICP/MS used by the Hazards Control Department is a Perkin-Elmer Elan 6000 ICP/MS. The Department of Energy Laboratory Accreditation Program requires that the total error be assessed for bioassay measurements. A previous evaluation of the errors associated with the ICP/MS measurement of uranium demonstrated a ±9.6% error in the range of 0.01 to 0.02 µg/l. However, the propagation of total error for concentrations above and below this level has heretofore been undetermined. This document is an evaluation of the errors associated with the current LLNL ICP/MS method for a more expanded range of uranium concentrations.
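
    Total propagated error for a bioassay result is conventionally obtained by combining independent relative error components in quadrature. A minimal sketch; the component names and values are illustrative assumptions, not the LLNL budget.

```python
# Quadrature combination of independent relative error components, the usual
# basis for a total propagated error. Component values are illustrative
# assumptions, not figures from the LLNL evaluation.
components = {"calibration": 0.05, "counting": 0.07, "sample_volume": 0.03}
total_rel_error = sum(e ** 2 for e in components.values()) ** 0.5
```

    For these inputs the combined relative error is about 9.1%, dominated by the largest single component, which is characteristic of quadrature addition.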

  8. Error Estimation for Reduced Order Models of Dynamical Systems

    SciTech Connect

    Homescu, C; Petzold, L; Serban, R

    2004-01-22

    The use of reduced order models to describe a dynamical system is pervasive in science and engineering. Often these models are used without an estimate of their error or range of validity. In this paper we consider dynamical systems and reduced models built using proper orthogonal decomposition. We show how to compute estimates and bounds for these errors, by a combination of small sample statistical condition estimation and error estimation using the adjoint method. Most importantly, the proposed approach allows the assessment of regions of validity for reduced models, i.e., ranges of perturbations in the original system over which the reduced model is still appropriate. Numerical examples validate our approach: the error norm estimates approximate well the forward error while the derived bounds are within an order of magnitude.
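
    Proper orthogonal decomposition, the reduction method considered above, can be sketched with an SVD of a snapshot matrix; the synthetic snapshots below are an illustrative assumption, and the error shown is the reduction error, not the paper's adjoint-based estimate.

```python
import numpy as np

# POD reduced-model sketch: build a basis from snapshots via SVD, project the
# snapshots onto a truncated basis, and measure the reduction error.
t = np.linspace(0.0, 1.0, 40)
snapshots = np.column_stack([np.sin(2 * np.pi * k * t) for k in (1, 2, 3)])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :2]                               # keep two POD modes
reduced = basis @ (basis.T @ snapshots)        # rank-2 reconstruction
error_norm = np.linalg.norm(snapshots - reduced)
```

    The Frobenius norm of the reduction error equals the discarded singular value, which is the quantity an error estimator for the reduced model must track.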

  9. Error Models of the Analog to Digital Converters

    NASA Astrophysics Data System (ADS)

    Michaeli, Linus; Šaliga, Ján

    2014-04-01

    Error models of analog-to-digital converters (ADCs) describe the metrological properties of the signal conversion from the analog to the digital domain in a concise form, using a few dominant error parameters. Knowledge of the error models allows the end user to perform fast testing at the crucial points of the full input signal range and to use the identified error models for post-correction in the digital domain. The imperfections of the internal ADC structure determine the error characteristics, represented by the nonlinearities as a function of the output code. Progress in microelectronics, missing information about circuit details, and the lack of knowledge about interfering effects caused by ADC installation favor another modeling approach, based on behavioral characterization by an input-output error box. Internal links in the ADC structure mean that the input-output error function can be described in concise form by a suitable function. The modeled functional parameters allow the integral error parameters of the ADC to be determined. The paper is a survey of error models, starting from structural models for the most common architectures and their linkage with behavioral models represented by a simple look-up table or a functional description of the nonlinear errors for the output codes.
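
    The standard behavioral nonlinearity measures for an ADC, differential and integral nonlinearity (DNL and INL), can be sketched directly from measured code transition levels; the transition levels below are illustrative values for a hypothetical 3-bit converter with an ideal 1 V step.

```python
# Behavioral ADC error-model sketch: DNL and INL from measured code
# transition levels, a minimal look-up-table style characterization.
# The transition levels are illustrative assumptions.
transitions = [0.9, 2.1, 3.0, 4.2, 5.0, 5.9, 7.1]   # measured, in volts
ideal_step = 1.0                                    # ideal code width
dnl = [(hi - lo) / ideal_step - 1.0                 # per-code width error, in LSB
       for lo, hi in zip(transitions, transitions[1:])]
inl = []
acc = 0.0
for d in dnl:                                       # INL is the running sum of DNL
    acc += d
    inl.append(acc)
```

    A table of DNL/INL per output code is exactly the kind of concise behavioral model the survey describes, and it is what a digital post-correction stage would consume.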

  10. Error studies for SNS Linac. Part 1: Transverse errors

    SciTech Connect

    Crandall, K.R.

    1998-12-31

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).

  11. 48 CFR 2911.103 - Market acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 7 2011-10-01 2011-10-01 false Market acceptance. 2911... DESCRIBING AGENCY NEEDS Selecting And Developing Requirements Documents 2911.103 Market acceptance. The... offered have either achieved commercial market acceptance or been satisfactorily supplied to an...

  12. 48 CFR 11.103 - Market acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Market acceptance. 11.103... DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 11.103 Market acceptance. (a) Section... either— (i) Achieved commercial market acceptance; or (ii) Been satisfactorily supplied to an...

  13. Older Adults' Acceptance of Information Technology

    ERIC Educational Resources Information Center

    Wang, Lin; Rau, Pei-Luen Patrick; Salvendy, Gavriel

    2011-01-01

    This study investigated variables contributing to older adults' information technology acceptance through a survey, which was used to find factors explaining and predicting older adults' information technology acceptance behaviors. Four factors, including needs satisfaction, perceived usability, support availability, and public acceptance, were…

  14. 46 CFR 28.73 - Accepted organizations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 1 2011-10-01 2011-10-01 false Accepted organizations. 28.73 Section 28.73 Shipping... INDUSTRY VESSELS General Provisions § 28.73 Accepted organizations. An organization desiring to be designated by the Commandant as an accepted organization must request such designation in writing. As...

  15. 46 CFR 28.73 - Accepted organizations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 1 2014-10-01 2014-10-01 false Accepted organizations. 28.73 Section 28.73 Shipping... INDUSTRY VESSELS General Provisions § 28.73 Accepted organizations. An organization desiring to be designated by the Commandant as an accepted organization must request such designation in writing. As...

  16. 46 CFR 28.73 - Accepted organizations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 1 2013-10-01 2013-10-01 false Accepted organizations. 28.73 Section 28.73 Shipping... INDUSTRY VESSELS General Provisions § 28.73 Accepted organizations. An organization desiring to be designated by the Commandant as an accepted organization must request such designation in writing. As...

  17. 46 CFR 28.73 - Accepted organizations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 1 2012-10-01 2012-10-01 false Accepted organizations. 28.73 Section 28.73 Shipping... INDUSTRY VESSELS General Provisions § 28.73 Accepted organizations. An organization desiring to be designated by the Commandant as an accepted organization must request such designation in writing. As...

  18. 46 CFR 28.73 - Accepted organizations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Accepted organizations. 28.73 Section 28.73 Shipping... INDUSTRY VESSELS General Provisions § 28.73 Accepted organizations. An organization desiring to be designated by the Commandant as an accepted organization must request such designation in writing. As...

  19. 48 CFR 2911.103 - Market acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... offered have either achieved commercial market acceptance or been satisfactorily supplied to an agency... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Market acceptance. 2911... DESCRIBING AGENCY NEEDS Selecting And Developing Requirements Documents 2911.103 Market acceptance....

  20. 21 CFR 820.86 - Acceptance status.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Acceptance status. 820.86 Section 820.86 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES QUALITY SYSTEM REGULATION Acceptance Activities § 820.86 Acceptance status. Each manufacturer...

  1. 48 CFR 11.103 - Market acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Market acceptance. 11.103... DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 11.103 Market acceptance. (a) Section... either— (i) Achieved commercial market acceptance; or (ii) Been satisfactorily supplied to an...

  2. 48 CFR 11.103 - Market acceptance.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Market acceptance. 11.103... DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 11.103 Market acceptance. (a) 41 U.S...) Achieved commercial market acceptance; or (ii) Been satisfactorily supplied to an agency under current...

  3. 48 CFR 2911.103 - Market acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 7 2013-10-01 2012-10-01 true Market acceptance. 2911.103... DESCRIBING AGENCY NEEDS Selecting And Developing Requirements Documents 2911.103 Market acceptance. The... offered have either achieved commercial market acceptance or been satisfactorily supplied to an...

  4. 48 CFR 11.103 - Market acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Market acceptance. 11.103... DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 11.103 Market acceptance. (a) Section... either— (i) Achieved commercial market acceptance; or (ii) Been satisfactorily supplied to an...

  5. 48 CFR 2911.103 - Market acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 7 2012-10-01 2012-10-01 false Market acceptance. 2911... DESCRIBING AGENCY NEEDS Selecting And Developing Requirements Documents 2911.103 Market acceptance. The... offered have either achieved commercial market acceptance or been satisfactorily supplied to an...

  6. 48 CFR 2911.103 - Market acceptance.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 7 2014-10-01 2014-10-01 false Market acceptance. 2911... DESCRIBING AGENCY NEEDS Selecting And Developing Requirements Documents 2911.103 Market acceptance. The... offered have either achieved commercial market acceptance or been satisfactorily supplied to an...

  7. The relation between remembered parental acceptance in childhood and self-acceptance among young Turkish adults.

    PubMed

    Kuyumcu, Behire; Rohner, Ronald P

    2016-05-11

    This study examined the relation between young adults' age and remembrances of parental acceptance in childhood, and their current self-acceptance. The study was based on a sample of 236 young adults in Turkey (139 women and 97 men). The adult version of the Parental Acceptance-Rejection/Control Questionnaire for mothers and fathers along with the Self-Acceptance subscale of the Psychological Well-Being Scale, and the Personal Information Form were used as measures. Results showed that both men and women tended to remember having been accepted in childhood by both their mothers and fathers. Women, however, reported more maternal and paternal acceptance in childhood than did men. Similarly, the level of self-acceptance was high among both men and women. However, women's self-acceptance was higher than men's. Correlational analyses showed that self-acceptance was positively related to remembrances of maternal and paternal acceptance among both women and men. Results indicated that age and remembered paternal acceptance significantly predicted women's self-acceptance. Age and remembered maternal acceptance made significant and independent contributions to men's self-acceptance. Men's remembrances of paternal acceptance in childhood did not make a significant contribution to their self-acceptance. Finally, the relation between women's age and self-acceptance was significantly moderated by remembrances of paternal acceptance in childhood.

  8. Error begat error: design error analysis and prevention in social infrastructure projects.

    PubMed

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is proposed and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and so ensure that safety and project performance are ameliorated.

  9. Children's Scale Errors with Tools

    ERIC Educational Resources Information Center

    Casler, Krista; Eshleman, Angelica; Greene, Kimberly; Terziyan, Treysi

    2011-01-01

    Children sometimes make "scale errors," attempting to interact with tiny object replicas as though they were full size. Here, we demonstrate that instrumental tools provide special insight into the origins of scale errors and, moreover, into the broader nature of children's purpose-guided reasoning and behavior with objects. In Study 1, 1.5- to…

  10. Dual Processing and Diagnostic Errors

    ERIC Educational Resources Information Center

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  11. Explaining Errors in Children's Questions

    ERIC Educational Resources Information Center

    Rowland, Caroline F.

    2007-01-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that,…

  12. Error Estimates for Mixed Methods.

    DTIC Science & Technology

    1979-03-01

    This paper presents abstract error estimates for mixed methods for the approximate solution of elliptic boundary value problems. These estimates are...then applied to obtain quasi-optimal error estimates in the usual Sobolev norms for four examples: three mixed methods for the biharmonic problem and a mixed method for 2nd order elliptic problems. (Author)

  13. Error Correction, Revision, and Learning

    ERIC Educational Resources Information Center

    Truscott, John; Hsu, Angela Yi-ping

    2008-01-01

    Previous research has shown that corrective feedback on an assignment helps learners reduce their errors on that assignment during the revision process. Does this finding constitute evidence that learning resulted from the feedback? Differing answers play an important role in the ongoing debate over the effectiveness of error correction,…

  14. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition or how to prevent human error. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

  15. Twenty questions about student errors

    NASA Astrophysics Data System (ADS)

    Fisher, Kathleen M.; Lipson, Joseph Isaac

    Errors in science learning (errors in expression of organized, purposeful thought within the domain of science) provide a window through which glimpses of mental functioning can be obtained. Errors are valuable and normal occurrences in the process of learning science. A student can use his/her errors to develop a deeper understanding of a concept as long as the error can be recognized and appropriate, informative feedback can be obtained. A safe, non-threatening, and nonpunitive environment which encourages dialogue helps students to express their conceptions and to risk making errors. Pedagogical methods that systematically address common student errors produce significant gains in student learning. Just as the nature-nurture interaction is integral to the development of living things, so the individual-environment interaction is basic to thought processes. At a minimum, four systems interact: (1) the individual problem solver (who has a worldview, relatively stable cognitive characteristics, relatively malleable mental states and conditions, and aims or intentions), (2) the task to be performed (including the relative importance and nature of the task), (3) the knowledge domain in which the task is contained, and (4) the environment (including orienting conditions and the social and physical context). Several basic assumptions underlie research on errors and alternative conceptions. Among these are: Knowledge and thought involve active, constructive processes; there are many ways to acquire, organize, store, retrieve, and think about a given concept or event; and understanding is achieved by successive approximations. Application of these ideas will require a fundamental change in how science is taught.

  16. Airborne 2 color ranging experiment

    NASA Technical Reports Server (NTRS)

    Millar, Pamela S.; Abshire, James B.; Mcgarry, Jan F.; Zagwodzki, Thomas W.; Pacini, Linda K.

    1993-01-01

    Horizontal variations in the atmospheric refractivity are a limiting error source for many precise laser and radio space geodetic techniques. This experiment was designed to directly measure horizontal variations in atmospheric refractivity, for the first time, by using 2 color laser ranging measurements to an aircraft. The 2 color laser system at the Goddard Optical Research Facility (GORF) ranged to a cooperative laser target package on a T-39 aircraft. Circular patterns which extended from the southern edge of the Washington D.C. Beltway to the southern edge of Baltimore, MD were flown counter clockwise around Greenbelt, MD. Successful acquisition, tracking, and ranging for 21 circular paths were achieved on three flights in August 1992, resulting in over 20,000 two color ranging measurements.

  17. Role of memory errors in quantum repeaters

    NASA Astrophysics Data System (ADS)

    Hartmann, L.; Kraus, B.; Briegel, H.-J.; Dür, W.

    2007-03-01

    We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication.

  18. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
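
    The parameter errors discussed above can be sketched for the linear (polynomial) case: fit a line to noisy data and read the standard errors off the diagonal of the parameter covariance matrix. The model, noise level, and seed below are illustrative assumptions.

```python
import numpy as np

# Sketch of standard errors for least-squares fitted parameters: generate a
# line plus random error, fit, and extract the parameter covariance.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)   # truth plus random error
coef, cov = np.polyfit(x, y, 1, cov=True)          # least-squares fit
slope_err, intercept_err = np.sqrt(np.diag(cov))   # standard errors of parameters
```

    The same covariance matrix also propagates to the standard error of the fitted function at any value of the independent variable, which is the quantity derived in closed form in the paper.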

  19. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) were combined to form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.

  20. Angle interferometer cross axis errors

    SciTech Connect

    Bryan, J.B.; Carter, D.L.; Thompson, S.L.

    1994-01-01

    Angle interferometers are commonly used to measure surface plate flatness. An error can exist when the centerline of the double corner cube mirror assembly is not square to the surface plate and the guide bar for the mirror sled is curved. Typical errors can be one to two microns per meter. A similar error can exist in the calibration of rotary tables when the centerline of the double corner cube mirror assembly is not square to the axes of rotation of the angle calibrator and the calibrator axis is not parallel to the rotary table axis. Commercial double corner cube assemblies typically have non-parallelism errors of ten milliradians between their centerlines and their sides and similar values for non-squareness between their centerlines and end surfaces. The authors have developed a simple method for measuring these errors and correcting them by remachining the reference surfaces.

  1. Angle interferometer cross axis errors

    NASA Astrophysics Data System (ADS)

    Bryan, J. B.; Carter, D. L.; Thompson, S. L.

    1994-01-01

    Angle interferometers are commonly used to measure surface plate flatness. An error can exist when the centerline of the double corner cube mirror assembly is not square to the surface plate and the guide bar for the mirror sled is curved. Typical errors can be one to two microns per meter. A similar error can exist in the calibration of rotary tables when the centerline of the double corner cube mirror assembly is not square to the axes of rotation of the angle calibrator and the calibrator axis is not parallel to the rotary table axis. Commercial double corner cube assemblies typically have non-parallelism errors of ten milliradians between their centerlines and their sides and similar values for non-squareness between their centerlines and end surfaces. The authors have developed a simple method for measuring these errors and correcting them.

  2. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  3. Discretization errors in particle tracking

    NASA Astrophysics Data System (ADS)

    Carmon, G.; Mamman, N.; Feingold, M.

    2007-03-01

High precision video tracking of microscopic particles is limited by systematic and random errors. Systematic errors are partly due to the discretization process both in position and in intensity. We study the behavior of such errors in a simple tracking algorithm designed for the case of symmetric particles. This symmetry algorithm uses interpolation to estimate the value of the intensity at arbitrary points in the image plane. We show that the discretization error is composed of two parts: (1) the error due to the discretization of the intensity, bD, and (2) that due to interpolation, bI. While bD behaves asymptotically like 1/N, where N is the number of intensity gray levels, bI is small when using cubic spline interpolation.
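The reported decay of the intensity-discretization error with the number of gray levels N can be illustrated with a toy simulation. This uses a hypothetical Gaussian spot and a plain centroid estimator, not the authors' symmetry algorithm; the spot width and grid are assumed values:

```python
import numpy as np

def mean_centroid_error(N, width=2.0, grid=np.arange(-8, 9)):
    """Quantize a Gaussian spot to N intensity gray levels and return
    the centroid-position error averaged over sub-pixel positions."""
    errs = []
    for true_x in np.linspace(-0.5, 0.5, 21):
        I = np.exp(-((grid - true_x) / width) ** 2)
        Iq = np.round(I * (N - 1)) / (N - 1)   # intensity discretization
        est = (grid * Iq).sum() / Iq.sum()     # centroid estimate
        errs.append(abs(est - true_x))
    return float(np.mean(errs))

# the position error shrinks as the number of gray levels grows
for N in (8, 64, 512):
    print(N, mean_centroid_error(N))
```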

  4. Acceptability of GM foods among Pakistani consumers.

    PubMed

    Ali, Akhter; Rahut, Dil Bahadur; Imtiaz, Muhammad

    2016-04-02

In Pakistan, the majority of consumers do not have information about genetically modified (GM) foods. In developing countries, and particularly in Pakistan, few studies have focused on consumers' acceptability of GM foods. Using a comprehensive primary dataset collected from 320 consumers in Pakistan in 2013, this study analyzes the determinants of consumers' acceptability of GM foods. The data were analyzed by employing bivariate probit and censored least absolute deviation (CLAD) models. The empirical results indicated that urban consumers are more aware of GM foods than rural consumers. Acceptance of GM foods was higher among female consumers than among male consumers, and older consumers were more willing to accept GM foods than younger consumers. The acceptability of GM foods was also higher among wealthier households. Low price is the key factor leading to the acceptability of GM foods, and the acceptability of GM foods also reduces the perceived risks among Pakistani consumers.

  5. Designing to Control Flight Crew Errors

    NASA Technical Reports Server (NTRS)

    Schutte, Paul C.; Willshire, Kelli F.

    1997-01-01

It is widely accepted that human error is a major contributing factor in aircraft accidents. There has been a significant amount of research into why these errors occur, and many reports state that the design of the flight deck can actually predispose humans to err. This research has led to calls for design changes based on human factors and human-centered principles. The National Aeronautics and Space Administration's (NASA) Langley Research Center has initiated an effort to design a human-centered flight deck from a clean slate (i.e., without the constraints of existing designs). The effort will be based on recent research in human-centered design philosophy and mission management categories. This design will match the human's model of the mission and function of the aircraft to reduce unnatural or non-intuitive interfaces. The products of this effort will be a flight deck design description, including training and procedures, a cross-reference or paper trail back to design hypotheses, and an evaluation of the design. The present paper discusses the philosophy, process, and status of this design effort.

  6. Consumer Acceptability of Intramuscular Fat

    PubMed Central

    Frank, Damian; Joo, Seon-Tea

    2016-01-01

Fat in meat greatly improves eating quality, yet many consumers avoid visible fat, mainly because of health concerns. Generations of consumers, especially in the English-speaking world, have been convinced by health authorities that animal fat, particularly saturated or solid fat, should be reduced or avoided to maintain a healthy diet. Decades of negative messages regarding animal fats have resulted in general avoidance of fatty cuts of meat. Paradoxically, low-fat or lean meat tends to have poor eating quality and flavor and low consumer acceptability. The failure of low-fat high-carbohydrate diets to curb “globesity” has prompted many experts to re-evaluate the place of fat in human diets, including animal fat. Attitudes towards fat vary dramatically between and within cultures. Previous generations of humans sought out fatty cuts of meat for their superior sensory properties. Many consumers in East and Southeast Asia have traditionally valued more fatty meat cuts. As nutritional messages around dietary fat change, there is evidence that attitudes towards animal fat are changing and many consumers are rediscovering and embracing fattier cuts of meat, including marbled beef. The present work provides a short overview of the unique sensory characteristics of marbled beef and changing consumer preferences for fat in meat in general. PMID:28115880

  7. Lunar orbiter ranging data: initial results.

    PubMed

    Mulholland, J D; Sjogren, W L

    1967-01-06

Data from two Lunar Orbiter spacecraft have been used to test the significance of corrections to the lunar ephemeris. Range residuals of up to 1700 meters were reduced by an order of magnitude by application of the corrections, with most of the residuals reduced to less than 100 meters. Removal of gross errors in the ephemeris reveals residual patterns that may indicate errors in location of observing stations, as well as the expected effects of lunar nonsphericity.

  8. Error detection for genetic data, using likelihood methods

    SciTech Connect

    Ehm, M.G.; Kimmel, M.; Cottingham, R.W. Jr.

    1996-01-01

As genetic maps become denser, the effect of laboratory typing errors becomes more serious. We review a general method for detecting errors in pedigree genotyping data that is a variant of the likelihood-ratio test statistic. It pinpoints individuals and loci with relatively unlikely genotypes. Power and significance studies using Monte Carlo methods are shown by using simulated data with pedigree structures similar to the CEPH pedigrees and a larger experimental pedigree used in the study of idiopathic dilated cardiomyopathy (DCM). The studies show the index detects errors for small values of θ with high power and an acceptable false positive rate. The method was also used to check for errors in DCM laboratory pedigree data and to estimate the error rate in CEPH chromosome 6 data. The errors flagged by our method in the DCM pedigree were confirmed by the laboratory. The results are consistent with estimated false-positive and false-negative rates obtained using simulation. 21 refs., 5 figs., 2 tabs.
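As a generic analog of the likelihood-based error index described above (not the pedigree-specific statistic of the paper), one can score each observation by how unlikely it is under a model fitted to the remaining data and flag large drops in likelihood; the Gaussian model, leave-one-out scheme, and cutoff here are illustrative assumptions:

```python
import numpy as np

def flag_unlikely(x, z_cut=3.5):
    """Score each observation by its standardized deviation from a
    Gaussian fitted to the rest of the sample (leave-one-out), and
    flag the indices of relatively unlikely values."""
    x = np.asarray(x, float)
    flags = []
    for i in range(len(x)):
        rest = np.delete(x, i)
        mu, sd = rest.mean(), rest.std(ddof=1)
        if abs(x[i] - mu) / sd > z_cut:
            flags.append(i)
    return flags

data = [10.1, 9.8, 10.3, 9.9, 10.0, 14.2, 10.2]
print(flag_unlikely(data))  # [5]: the 14.2 entry stands out
```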

  9. Altimeter error sources at the 10-cm performance level

    NASA Technical Reports Server (NTRS)

    Martin, C. F.

    1977-01-01

Error sources affecting the calibration and operational use of a 10 cm altimeter are examined to determine the magnitudes of current errors and the investigations necessary to reduce them to acceptable bounds. Errors considered include those affecting operational data pre-processing and those affecting altitude bias determination, with error budgets developed for both. The most significant error sources affecting pre-processing are bias calibration, propagation corrections for the ionosphere, and measurement noise. No ionospheric models are currently validated at the required 10-25% accuracy level. The optimum smoothing to reduce the effects of measurement noise is investigated and found to be on the order of one second, based on the TASC model of geoid undulations. Calibration at the 10 cm level is found to be feasible only with altimeter passes at very high elevation relative to a tracking station that tracks very close to the time of the altimeter pass, such as a high-elevation pass over the island of Bermuda. By far the largest error source, based on the current state-of-the-art, is the location of the island tracking station relative to mean sea level in the surrounding ocean areas.

  10. Acceptance in Romantic Relationships: The Frequency and Acceptability of Partner Behavior Inventory

    ERIC Educational Resources Information Center

    Doss, Brian D.; Christensen, Andrew

    2006-01-01

    Despite the recent emphasis on acceptance in romantic relationships, no validated measure of relationship acceptance presently exists. To fill this gap, the 20-item Frequency and Acceptability of Partner Behavior Inventory (FAPBI; A. Christensen & N. S. Jacobson, 1997) was created to assess separately the acceptability and frequency of both…

  11. 24 CFR 203.202 - Plan acceptability and acceptance renewal criteria-general.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... HUD acceptance of such change or modification, except that changes mandated by other applicable laws... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Plan acceptability and acceptance... Underwriting Procedures Insured Ten-Year Protection Plans (plan) § 203.202 Plan acceptability and...

  12. Disentangling timing and amplitude errors in streamflow simulations

    NASA Astrophysics Data System (ADS)

    Seibert, Simon Paul; Ehret, Uwe; Zehe, Erwin

    2016-09-01

This article introduces an improvement to the Series Distance (SD) approach for the discrimination and visualization of timing and magnitude uncertainties in streamflow simulations. SD emulates visual hydrograph comparison by distinguishing periods of low flow and periods of rise and recession in hydrological events. Within these periods, it determines the distance of two hydrographs not between points of equal time but between points that are hydrologically similar. The improvement comprises an automated procedure to emulate visual pattern matching, i.e., the determination of an optimal level of generalization when comparing two hydrographs; a scaled error model which is better applicable across large discharge ranges than its non-scaled counterpart; and "error dressing", a concept to construct uncertainty ranges around deterministic simulations or forecasts. Error dressing includes an approach to sample empirical error distributions by increasing variance contribution, which can be extended from standard one-dimensional distributions to the two-dimensional distributions of combined time and magnitude errors provided by SD. In a case study we apply both the SD concept and a benchmark model (BM) based on standard magnitude errors to a 6-year time series of observations and simulations from a small alpine catchment. Time-magnitude error characteristics for low flow and rising and falling limbs of events were substantially different. Their separate treatment within SD therefore preserves useful information which can be used for differentiated model diagnostics, and which is not contained in standard criteria like the Nash-Sutcliffe efficiency. Construction of uncertainty ranges based on the magnitude errors of the BM approach and the combined time and magnitude errors of the SD approach revealed that the BM-derived ranges were visually narrower and statistically superior to the SD ranges. This suggests that the combined use of time and magnitude errors to…
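For comparison with the standard criteria the abstract contrasts SD against, a minimal Nash-Sutcliffe efficiency calculation looks like this (toy data, not the study's catchment series):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the
    simulation is no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = [1.0, 3.0, 5.0, 4.0, 2.0]
print(nse(obs, obs))        # 1.0: a perfect simulation
print(nse(obs, [3.0] * 5))  # 0.0: no better than predicting the mean
```

Note that NSE compresses timing and magnitude mismatch into one number, which is exactly the information loss the SD approach is designed to avoid.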

  13. Spatial sampling errors for a satellite-borne scanning radiometer

    NASA Technical Reports Server (NTRS)

    Manalo, Natividad D.; Smith, G. L.

    1991-01-01

    The Clouds and Earth's Radiant Energy System (CERES) scanning radiometer is planned as the Earth radiation budget instrument for the Earth Observation System, to be flown in the late 1990's. In order to minimize the spatial sampling errors of the measurements, it is necessary to select design parameters for the instrument such that the resulting point spread function will minimize spatial sampling errors. These errors are described as aliasing and blurring errors. Aliasing errors are due to presence in the measurements of spatial frequencies beyond the Nyquist frequency, and blurring errors are due to attenuation of frequencies below the Nyquist frequency. The design parameters include pixel shape and dimensions, sampling rate, scan period, and time constants of the measurements. For a satellite-borne scanning radiometer, the pixel footprint grows quickly at large nadir angles. The aliasing errors thus decrease with increasing scan angle, but the blurring errors grow quickly. The best design minimizes the sum of these two errors over a range of scan angles. Results of a parameter study are presented, showing effects of data rates, pixel dimensions, spacecraft altitude, and distance from the spacecraft track.
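The frequency folding that produces aliasing error can be illustrated in one dimension: any frequency above the Nyquist limit of the sampling pattern reappears at a lower frequency after sampling. This is a textbook sketch, not the CERES point-spread-function model:

```python
def aliased_frequency(f_signal, f_sample):
    """Frequency at which a pure tone reappears after sampling at
    rate f_sample; tones above the Nyquist limit (f_sample / 2)
    fold back below it."""
    f = f_signal % f_sample
    return min(f, f_sample - f)

print(aliased_frequency(0.9, 1.0))  # 0.1: energy beyond Nyquist folds back
print(aliased_frequency(0.3, 1.0))  # 0.3: below Nyquist, unchanged
```

Blurring works in the opposite direction: a larger footprint attenuates frequencies below Nyquist, which is why the design must balance the two error types across scan angles.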

  14. Sensitivity of planetary cruise navigation to earth orientation calibration errors

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Folkner, W. M.

    1995-01-01

    A detailed analysis was conducted to determine the sensitivity of spacecraft navigation errors to the accuracy and timeliness of Earth orientation calibrations. Analyses based on simulated X-band (8.4-GHz) Doppler and ranging measurements acquired during the interplanetary cruise segment of the Mars Pathfinder heliocentric trajectory were completed for the nominal trajectory design and for an alternative trajectory with a longer transit time. Several error models were developed to characterize the effect of Earth orientation on navigational accuracy based on current and anticipated Deep Space Network calibration strategies. The navigational sensitivity of Mars Pathfinder to calibration errors in Earth orientation was computed for each candidate calibration strategy with the Earth orientation parameters included as estimated parameters in the navigation solution. In these cases, the calibration errors contributed 23 to 58% of the total navigation error budget, depending on the calibration strategy being assessed. Navigation sensitivity calculations were also performed for cases in which Earth orientation calibration errors were not adjusted in the navigation solution. In these cases, Earth orientation calibration errors contributed from 26 to as much as 227% of the total navigation error budget. The final analysis suggests that, not only is the method used to calibrate Earth orientation vitally important for precision navigation of Mars Pathfinder, but perhaps equally important is the method for inclusion of the calibration errors in the navigation solutions.

  15. Error analysis of large aperture static interference imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Li, Fan; Zhang, Guo

    2015-12-01

The Large Aperture Static Interference Imaging Spectrometer (LASIS) is a new type of spectrometer with a light structure, high spectral linearity, high luminous flux, and a wide spectral range, which overcomes the contradiction between high flux and high stability and therefore has important value in scientific studies and applications. However, because LASIS images differently from traditional imaging spectrometers, its errors follow different laws and its data processing is correspondingly complicated. In order to improve the accuracy of spectrum detection and to support quantitative analysis and monitoring of topographical surface features, the error laws of LASIS imaging must be understood. In this paper, LASIS errors are classified as interferogram error, radiometric correction error, and spectral inversion error, and each type of error is analyzed and studied. Finally, a case study of Yaogan-14 is presented, in which the interferogram error of LASIS under combined time and space modulation is experimentally analyzed, along with the errors from radiometric correction and spectral inversion.

  16. Reducing number entry errors: solving a widespread, serious problem

    PubMed Central

    Thimbleby, Harold; Cairns, Paul

    2010-01-01

Number entry is ubiquitous: it is required in many fields including science, healthcare, education, government, mathematics and finance. People entering numbers can be expected to make errors, but shockingly few systems make any effort to detect, block or otherwise manage errors. Worse, errors may be ignored but processed in arbitrary ways, with unintended results. A standard class of error (defined in the paper) is an ‘out by 10 error’, which is easily made by miskeying a decimal point or a zero. In safety-critical domains, such as drug delivery, out by 10 errors generally have adverse consequences. Here, we expose the extent of the problem of numeric errors in a very wide range of systems. An analysis of better error management is presented: under reasonable assumptions, we show that the probability of out by 10 errors can be halved by better user interface design. We provide a demonstration user interface to show that the approach is practical. To kill an error is as good a service as, and sometimes even better than, the establishing of a new truth or fact. (Charles Darwin 1879 [2008], p. 229) PMID:20375037
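A minimal sketch of the kind of error management the abstract argues for: flag an entry whose ratio to an expected value is close to a nonzero power of ten, the signature of a misplaced decimal point or an extra/missing zero. The tolerance and the expected-value comparison are illustrative assumptions, not the paper's interface design:

```python
import math

def out_by_ten(entered, expected, tol=0.15):
    """Return True if the entered value looks like an 'out by 10'
    error relative to an expected value: the log10 of their ratio
    is close to a nonzero integer."""
    if entered <= 0 or expected <= 0:
        return False
    ratio = math.log10(entered / expected)
    nearest = round(ratio)
    return nearest != 0 and abs(ratio - nearest) < tol

print(out_by_ten(50.0, 5.0))  # True: 10x the expected dose
print(out_by_ten(0.5, 5.0))   # True: one tenth of it
print(out_by_ten(5.5, 5.0))   # False: within normal variation
```

A real interface would combine such a check with blocking of malformed input (e.g. two decimal points) rather than silently reinterpreting it.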

  17. Neural markers of errors as endophenotypes in neuropsychiatric disorders

    PubMed Central

    Manoach, Dara S.; Agam, Yigal

    2013-01-01

    Learning from errors is fundamental to adaptive human behavior. It requires detecting errors, evaluating what went wrong, and adjusting behavior accordingly. These dynamic adjustments are at the heart of behavioral flexibility and accumulating evidence suggests that deficient error processing contributes to maladaptively rigid and repetitive behavior in a range of neuropsychiatric disorders. Neuroimaging and electrophysiological studies reveal highly reliable neural markers of error processing. In this review, we evaluate the evidence that abnormalities in these neural markers can serve as sensitive endophenotypes of neuropsychiatric disorders. We describe the behavioral and neural hallmarks of error processing, their mediation by common genetic polymorphisms, and impairments in schizophrenia, obsessive-compulsive disorder, and autism spectrum disorders. We conclude that neural markers of errors meet several important criteria as endophenotypes including heritability, established neuroanatomical and neurochemical substrates, association with neuropsychiatric disorders, presence in syndromally-unaffected family members, and evidence of genetic mediation. Understanding the mechanisms of error processing deficits in neuropsychiatric disorders may provide novel neural and behavioral targets for treatment and sensitive surrogate markers of treatment response. Treating error processing deficits may improve functional outcome since error signals provide crucial information for flexible adaptation to changing environments. Given the dearth of effective interventions for cognitive deficits in neuropsychiatric disorders, this represents a potentially promising approach. PMID:23882201

  18. Acceptability of reductive interventions for the control of inappropriate child behavior.

    PubMed

    Witt, J C; Robbins, J R

    1985-03-01

    Teacher attitudes about the acceptability of classroom intervention strategies were evaluated in two experiments. In both, teachers read descriptions of an intervention that was applied to a child with a behavior problem. In Experiment 1, an evaluation of six interventions for reducing inappropriate behavior suggested that one was highly acceptable (DRO), one was highly unacceptable (corporal punishment), and four ranged from mildly acceptable to mildly unacceptable (DRL, reprimands, time-out, and staying after school). In Experiment 2, the acceptability of the same intervention (staying after school) was evaluated as a function of who implemented it (teacher vs. principal). Analyses suggested that the teacher-implemented intervention was perceived as more acceptable. In both experiments, interventions were rated as less acceptable by highly experienced teachers versus those newer to the teaching profession. In addition, there was a trend for the acceptability of an intervention to vary as a function of the severity of the behavior problem to which it was applied.

  19. Controlling type-1 error rates in whole effluent toxicity testing

    SciTech Connect

    Smith, R.; Johnson, S.C.

    1995-12-31

    A form of variability, called the dose x test interaction, has been found to affect the variability of the mean differences from control in the statistical tests used to evaluate Whole Effluent Toxicity Tests for compliance purposes. Since the dose x test interaction is not included in these statistical tests, the assumed type-1 and type-2 error rates can be incorrect. The accepted type-1 error rate for these tests is 5%. Analysis of over 100 Ceriodaphnia, fathead minnow and sea urchin fertilization tests showed that when the test x dose interaction term was not included in the calculations the type-1 error rate was inflated to as high as 20%. In a compliance setting, this problem may lead to incorrect regulatory decisions. Statistical tests are proposed that properly incorporate the dose x test interaction variance.
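The inflation described above can be reproduced with a small Monte Carlo sketch: under the null of no dose effect, a random dose x test interaction shifts each test's treated group as a whole, and a pooled t-test that ignores that variance component rejects far more often than the nominal 5%. The variance values are illustrative assumptions, not estimates from the paper's toxicity data:

```python
import numpy as np

rng = np.random.default_rng(0)
T_CRIT = 2.101   # two-sided 5% critical value of t, df = 18

def reject_null(n_rep=10, sigma_rep=1.0, sigma_inter=0.7):
    """One simulated test under the null: no true dose effect, but a
    random dose x test interaction shifts the whole treated group.
    The pooled t-test ignores that extra variance component."""
    control = rng.normal(0.0, sigma_rep, n_rep)
    shift = rng.normal(0.0, sigma_inter)           # interaction term
    treated = rng.normal(shift, sigma_rep, n_rep)
    sp2 = (control.var(ddof=1) + treated.var(ddof=1)) / 2.0
    t = (treated.mean() - control.mean()) / np.sqrt(2.0 * sp2 / n_rep)
    return abs(t) > T_CRIT

rate = np.mean([reject_null() for _ in range(4000)])
print(f"empirical type-1 rate: {rate:.3f}")  # well above the nominal 0.05
```

Including the interaction variance in the denominator of the test statistic restores the intended error rate, which is the remedy the abstract proposes.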

  20. Mars gravitational field estimation error

    NASA Technical Reports Server (NTRS)

    Compton, H. R.; Daniels, E. F.

    1972-01-01

The error covariance matrices associated with a weighted least-squares differential correction process have been analyzed for accuracy in determining the gravitational coefficients through degree and order five in the Mars gravitational potential function. The results are presented in terms of standard deviations for the assumed estimated parameters. The covariance matrices were calculated by assuming Doppler tracking data from a Mars orbiter, a priori statistics for the estimated parameters, and model error uncertainties for tracking-station locations, the Mars ephemeris, the astronomical unit, the Mars gravitational constant (G sub M), and the gravitational coefficients of degrees six and seven. Model errors were treated by using the concept of consider parameters.
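The formal covariance of a weighted least-squares estimate with a priori statistics on the parameters can be sketched as follows. This is a generic textbook formulation on a toy line-fit problem, not the study's Doppler setup, and the consider-parameter treatment is omitted:

```python
import numpy as np

def wls_covariance(A, sigma, prior_sigma=None):
    """Formal covariance of a weighted least-squares estimate:
    P = (A^T W A + P0^-1)^-1, with W = diag(1 / sigma^2) and an
    optional diagonal a priori covariance P0 on the parameters."""
    W = np.diag(1.0 / np.asarray(sigma, float) ** 2)
    info = A.T @ W @ A
    if prior_sigma is not None:
        info = info + np.diag(1.0 / np.asarray(prior_sigma, float) ** 2)
    return np.linalg.inv(info)

# toy problem: straight-line fit y = a + b*t from 4 observations
t = np.array([0.0, 1.0, 2.0, 3.0])
A = np.column_stack([np.ones_like(t), t])
P = wls_covariance(A, sigma=[0.1] * 4)
print(np.sqrt(np.diag(P)))  # standard deviations of a and b
```

Consider parameters would add a term that inflates P for the effect of unadjusted model errors; that extension is not shown here.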

  1. Stochastic Models of Human Errors

    NASA Technical Reports Server (NTRS)

    Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)

    2002-01-01

Humans play an important role in the overall reliability of engineering systems. More often than not, accidents and system failures are traced to human errors. Therefore, in order to have a meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process by mathematical models is a key to analyzing contributing factors. Therefore, the objective of this research effort is to establish stochastic models, substantiated by a sound theoretical foundation, to address the occurrence of human errors in the processing of the space shuttle.

  2. Error bounds in cascading regressions

    USGS Publications Warehouse

    Karlinger, M.R.; Troutman, B.M.

    1985-01-01

Cascading regressions is a technique for predicting a value of a dependent variable when no paired measurements exist to perform a standard regression analysis. Biases in coefficients of a cascaded-regression line as well as error variance of points about the line are functions of the correlation coefficient between dependent and independent variables. Although this correlation cannot be computed because of the lack of paired data, bounds can be placed on errors through the required properties of the correlation coefficient. The potential mean-squared error of a cascaded-regression prediction can be large, as illustrated through an example using geomorphologic data. © 1985 Plenum Publishing Corporation.
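A minimal sketch of a cascaded prediction and a first-order error propagation through the cascade. The variance formula here is an illustrative propagation under independence assumptions, not the exact bounds derived in the paper:

```python
def cascade_predict(y_on_x, x_on_z, z):
    """Cascaded regression: predict y from z via an intermediate x
    when no (y, z) pairs exist. Each argument is an (intercept, slope)
    pair from an ordinary regression fitted on its own sample."""
    a, b = y_on_x   # y = a + b*x
    c, d = x_on_z   # x = c + d*z
    return a + b * (c + d * z)

def mse_propagated(b, resid_var_y_on_x, resid_var_x_on_z):
    """First-order prediction error: the residual variance of each
    stage propagates through the cascade (coefficient-estimation
    error and y-z correlation effects are ignored here)."""
    return resid_var_y_on_x + b ** 2 * resid_var_x_on_z

print(cascade_predict((1.0, 2.0), (0.5, 3.0), z=2.0))  # 1 + 2*(0.5 + 6) = 14.0
print(mse_propagated(2.0, 0.2, 0.3))                   # 0.2 + 4 * 0.3
```

The paper's point is that the unobservable y-z correlation bounds how much worse than this the true mean-squared error can be, and those bounds can be wide.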

  3. Algorithmic Error Correction of Impedance Measuring Sensors

    PubMed Central

    Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira

    2009-01-01

This paper describes novel design concepts and some advanced techniques proposed for increasing the accuracy of low-cost impedance measuring devices without reduction of operational speed. The proposed structural method for algorithmic error correction and the iterating correction method provide linearization of the transfer functions of the measuring sensor and signal-conditioning converter, which contribute the principal additive and relative measurement errors. Some measuring systems have been implemented in order to estimate in practice the performance of the proposed methods. In particular, a measuring system for analysis of C-V, G-V characteristics has been designed and constructed. It has been tested during technological process control of charge-coupled device (CCD) manufacturing. The obtained results are discussed in order to define a reasonable range of application for the proposed methods, their utility, and their performance. PMID:22303177

  4. Error-detective one-dimensional mapping

    NASA Astrophysics Data System (ADS)

    Zhang, Yun; Zhou, Shihao

    2017-02-01

    The 1-D mapping is an intensity-based method used to estimate a projective transformation between two images. However, it lacks an intensity-invariant criterion for deciding whether two images can be aligned or not. The paper proposes a novel decision criterion and, thus, develops an error-detective 1-D mapping method. First, a multiple 1-D mapping scheme is devised for yielding redundant estimates of an image transformation. Then, a voting scheme is proposed for verifying these multiple estimates, in which at least one estimate without receiving all the votes is taken as a decision criterion for false-match rejection. Based on the decision criterion, an error-detective 1-D mapping algorithm is also constructed. Finally, the proposed algorithm is evaluated in registering real image pairs with a large range of projective transformations.

  5. Modelling non-Gaussianity of background and observational errors by the Maximum Entropy method

    NASA Astrophysics Data System (ADS)

    Pires, Carlos; Talagrand, Olivier; Bocquet, Marc

    2010-05-01

The Best Linear Unbiased Estimator (BLUE) has widely been used in atmospheric-oceanic data assimilation. However, when data errors have non-Gaussian pdfs, the BLUE differs from the absolute Minimum Variance Unbiased Estimator (MVUE), which minimizes the mean square analysis error. The non-Gaussianity of errors can be due to the statistical skewness and positiveness of some physical observables (e.g. moisture, chemical species) or due to the nonlinearity of the data assimilation models and observation operators acting on Gaussian errors. Non-Gaussianity of assimilated data errors can be justified from a priori hypotheses or inferred from statistical diagnostics of innovations (observation minus background). Following this rationale, we compute measures of innovation non-Gaussianity, namely its skewness and kurtosis, relating them to: a) the non-Gaussianity of the individual errors themselves, b) the correlation between nonlinear functions of errors, and c) the heteroscedasticity of errors within diagnostic samples. Those relationships impose bounds for the skewness and kurtosis of errors which are critically dependent on the error variances, thus leading to a necessary tuning of error variances in order to accomplish consistency with innovations. We evaluate the sub-optimality of the BLUE as compared to the MVUE, in terms of excess of error variance, under the presence of non-Gaussian errors. The error pdfs are obtained by the maximum entropy method constrained by error moments up to fourth order, from which the Bayesian probability density function and the MVUE are computed. The impact is higher for skewed extreme innovations and grows on average with the skewness of data errors, especially if those skewnesses have the same sign. Application has been performed to the quality-accepted ECMWF innovations of brightness temperatures of a set of High Resolution Infrared Sounder channels. In this context, the MVUE has led in some extreme cases to a potential reduction of 20-60% error…
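The innovation diagnostics named above, skewness and excess kurtosis, can be computed directly from a sample (toy synthetic samples here, not ECMWF innovations):

```python
import numpy as np

def sample_skew_kurt(x):
    """Sample skewness and excess kurtosis, the two innovation
    diagnostics used to measure departure from Gaussianity
    (both are zero for a Gaussian)."""
    x = np.asarray(x, float)
    d = x - x.mean()
    m2 = np.mean(d ** 2)
    skew = np.mean(d ** 3) / m2 ** 1.5
    kurt = np.mean(d ** 4) / m2 ** 2 - 3.0
    return skew, kurt

rng = np.random.default_rng(1)
g = rng.normal(size=100_000)             # Gaussian: both near zero
e = rng.exponential(size=100_000) - 1.0  # skewed: skewness near 2, excess kurtosis near 6
print(sample_skew_kurt(g))
print(sample_skew_kurt(e))
```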

  6. The impact of observation errors on analysis error and forecast skill investigated with an Observing System Simulation Experiment

    NASA Astrophysics Data System (ADS)

    Prive, N.; Errico, R. M.; Tai, K.

    2012-12-01

    A global observing system simulation experiment (OSSE) has been developed at the NASA Global Modeling and Assimilation Office using the Global Earth Observing System (GEOS-5) forecast model and Gridpoint Statistical Interpolation data assimilation. A 13-month integration of the European Centre for Medium-Range Weather Forecasts operational forecast model is used as the Nature Run. Synthetic observations for conventional and radiance data types are interpolated from the Nature Run, with calibrated observation errors added to reproduce realistic statistics of analysis increment and observation innovation. It is found that correlated observation errors are necessary in order to replicate the statistics of analysis increment and observation innovation found with real data. The impact of these observation errors is explored in a series of OSSE experiments in which the magnitude of the applied observation error is varied from zero to double the calibrated values while the observation error covariances of the GSI are held fixed. Increased observation error has a strong effect on the variance of the analysis increment and observation innovation fields, but a much weaker impact on the root mean square (RMS) analysis error. For the 120 hour forecast, only slight degradation of forecast skill in terms of anomaly correlation and RMS forecast error is observed in the midlatitudes, and there is no appreciable impact of observation error on forecast skill in the tropics.

  7. Distraction-induced driving error: an on-road examination of the errors made by distracted and undistracted drivers.

    PubMed

    Young, Kristie L; Salmon, Paul M; Cornelissen, Miranda

    2013-09-01

    This study explored the nature of errors made by drivers when distracted versus not distracted. Participants drove an instrumented vehicle around an urban test route both while distracted (performing a visual detection task) and while not distracted. Two in-vehicle observers recorded the driving errors made, and a range of other data were collected, including driver verbal protocols, forward, cockpit and driver video, and vehicle data (speed, braking, steering wheel angle, etc.). Classification of the errors revealed that drivers were significantly more likely to make errors when distracted; although driving errors were prevalent even when not distracted. Interestingly, the nature of the errors made when distracted did not differ substantially from those made when not distracted, suggesting that, rather than making different types of errors, distracted drivers simply make a greater number of the same error types they make when not distracted. Avenues for broadening our understanding of the relationship between distraction and driving errors are discussed along with the advantages of using a multi-method framework for studying driver behaviour.

  8. An Empirical State Error Covariance Matrix Orbit Determination Example

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model making use of gravity with spherical, J2 and J4 terms plus a standard exponential-type atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors, and are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses examine the chi-square values of the difference between the estimated state and the true modeled state, using both the empirical and theoretical error covariance matrices for each scenario.
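
The consistency test described here (a chi-square statistic formed from the state error and a candidate covariance matrix) can be sketched as follows; the function name and the toy numbers are illustrative, not from the paper:

```python
import numpy as np

def chi_square_consistency(est_states, true_states, covariances):
    """For each epoch, compute the chi-square statistic e^T P^{-1} e,
    where e is the error between the estimated and true state and P is
    the (empirical or theoretical) state error covariance matrix. For a
    consistent covariance the values should follow a chi-square
    distribution with degrees of freedom equal to the state dimension."""
    stats = []
    for x_est, x_true, P in zip(est_states, true_states, covariances):
        e = x_est - x_true
        stats.append(float(e @ np.linalg.solve(P, e)))
    return stats

# Toy check: an identity covariance reduces the statistic to |e|^2.
vals = chi_square_consistency(
    [np.array([1.0, 2.0])], [np.array([0.0, 0.0])], [np.eye(2)])
print(vals[0])  # 1.0^2 + 2.0^2 = 5.0
```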

  9. Aging transition by random errors

    PubMed Central

    Sun, Zhongkui; Ma, Ning; Xu, Wei

    2017-01-01

    In this paper, the effects of random errors on oscillating behavior have been studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noise are employed to represent measurement errors in the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It is demonstrated that when the random errors are uniform random noise, increasing the noise intensity can effectively increase the robustness of the system. When the random errors are normal random noise, increasing the variance can also enhance the robustness of the system, provided the probability that an aging transition occurs reaches a certain threshold; the opposite conclusion is obtained when the probability is below the threshold. These findings provide an alternative means of controlling the critical value of the aging transition in coupled oscillator systems composed, in practice, of active and inactive oscillators. PMID:28198430
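
A minimal numerical sketch of the kind of system studied, mean-field-coupled Stuart-Landau oscillators with a fraction of inactive elements, without noise; all parameter values and the function name are illustrative, not taken from the paper:

```python
import numpy as np

def simulate(N=50, p_inactive=0.0, a=2.0, b=-1.0, omega=1.0,
             K=0.5, dt=0.01, steps=5000, seed=0):
    """Euler integration of N mean-field-coupled Stuart-Landau
    oscillators: dz/dt = (alpha + i*omega - |z|^2) z + K (Z - z),
    with Z the population mean. A fraction p_inactive has alpha = b < 0
    (inactive); the rest have alpha = a > 0 (active). Returns the final
    mean amplitude |Z|, which collapses toward 0 past the aging
    transition as p_inactive grows."""
    rng = np.random.default_rng(seed)
    n_off = int(p_inactive * N)
    alpha = np.full(N, a)
    alpha[:n_off] = b
    z = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    for _ in range(steps):
        Z = z.mean()
        z = z + dt * ((alpha + 1j * omega - np.abs(z) ** 2) * z + K * (Z - z))
    return abs(z.mean())

# With every oscillator active, the amplitude settles near sqrt(a).
print(simulate(p_inactive=0.0))
```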

  10. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.

  11. Static Detection of Disassembly Errors

    SciTech Connect

    Krishnamoorthy, Nithya; Debray, Saumya; Fligg, Alan K

    2009-10-13

    Static disassembly is a crucial first step in reverse engineering executable files, and there is a considerable body of work in reverse engineering of binaries, as well as in areas such as semantics-based security analysis, that assumes the input executable has been correctly disassembled. However, disassembly errors, e.g., arising from binary obfuscations, can render this assumption invalid. This work describes a machine-learning-based approach, using decision trees, for statically identifying possible errors in a static disassembly; such potential errors may then be examined more closely, e.g., using dynamic analyses. Experimental results using a variety of input executables indicate that our approach performs well, correctly identifying most disassembly errors with relatively few false positives.

  12. Prospective errors determine motor learning

    PubMed Central

    Takiyama, Ken; Hirashima, Masaya; Nozaki, Daichi

    2015-01-01

    Diverse features of motor learning have been reported by numerous studies, but no single theoretical framework concurrently accounts for these features. Here, we propose a model for motor learning to explain these features in a unified way by extending a motor primitive framework. The model assumes that the recruitment pattern of motor primitives is determined by the predicted movement error of an upcoming movement (prospective error). To validate this idea, we perform a behavioural experiment to examine the model’s novel prediction: after experiencing an environment in which the movement error is more easily predictable, subsequent motor learning should become faster. The experimental results support our prediction, suggesting that the prospective error might be encoded in the motor primitives. Furthermore, we demonstrate that this model has a strong explanatory power to reproduce a wide variety of motor-learning-related phenomena that have been separately explained by different computational models. PMID:25635628

  13. Aging transition by random errors

    NASA Astrophysics Data System (ADS)

    Sun, Zhongkui; Ma, Ning; Xu, Wei

    2017-02-01

    In this paper, the effects of random errors on oscillating behavior have been studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noise are employed to represent measurement errors in the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It is demonstrated that when the random errors are uniform random noise, increasing the noise intensity can effectively increase the robustness of the system. When the random errors are normal random noise, increasing the variance can also enhance the robustness of the system, provided the probability that an aging transition occurs reaches a certain threshold; the opposite conclusion is obtained when the probability is below the threshold. These findings provide an alternative means of controlling the critical value of the aging transition in coupled oscillator systems composed, in practice, of active and inactive oscillators.

  14. Interpolation Errors in Spectrum Analyzers

    NASA Technical Reports Server (NTRS)

    Martin, J. L.

    1996-01-01

    To obtain the proper measurement amplitude with a spectrum analyzer, the correct frequency-dependent transducer factor must be added to the voltage measured by the transducer. This report examines how entering transducer factors into a spectrum analyzer can cause significant errors in field amplitude due to the misunderstanding of the analyzer's interpolation methods. It also discusses how to reduce these errors to obtain a more accurate field amplitude reading.

  15. Error protection capability of space shuttle data bus designs

    NASA Technical Reports Server (NTRS)

    Proch, G. E.

    1974-01-01

    The role of error protection in assuring the reliability of digital data communications is discussed. The need for error protection on the space shuttle data bus system has been recognized and specified as a hardware requirement. The error protection techniques of particular concern are those designed into the Shuttle Main Engine Interface (MEI) and the Orbiter Multiplex Interface Adapter (MIA). The techniques and circuit design details proposed for this hardware are analyzed in this report to determine their error protection capability. The capability is calculated in terms of the probability of an undetected word error. Calculated results are reported for a noise environment that ranges from the nominal noise level stated in the hardware specifications to burst levels which may occur in extreme or anomalous conditions.
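
The quantity evaluated here, the probability of an undetected word error, can be illustrated for the simplest scheme, a single even-parity bit over a binary symmetric channel; this is a generic textbook sketch, not the MEI/MIA coding scheme itself:

```python
from math import comb

def p_undetected_parity(n, p):
    """Probability that a single even-parity check fails to detect
    errors in an n-bit word over a binary symmetric channel with bit
    error probability p: the error pattern must contain an even,
    nonzero number of flipped bits."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(2, n + 1, 2))

# Cross-check against the closed form (1 + (1-2p)^n)/2 - (1-p)^n.
n, p = 24, 1e-3
closed = (1 + (1 - 2 * p) ** n) / 2 - (1 - p) ** n
print(p_undetected_parity(n, p))
```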

  16. Estimation of rod scale errors in geodetic leveling

    USGS Publications Warehouse

    Craymer, Michael R.; Vaníček, Petr; Castle, Robert O.

    1995-01-01

    Comparisons among repeated geodetic levelings have often been used for detecting and estimating residual rod scale errors in leveled heights. Individual rod-pair scale errors are estimated by a two-step procedure using a model based on either differences in heights, differences in section height differences, or differences in section tilts. It is shown that the estimated rod-pair scale errors derived from each model are identical only when the data are correctly weighted, and the mathematical correlations are accounted for in the model based on heights. Analyses based on simple regressions of changes in height versus height can easily lead to incorrect conclusions. We also show that the statistically estimated scale errors are not a simple function of height, height difference, or tilt. The models are valid only when terrain slope is constant over adjacent pairs of setups (i.e., smoothly varying terrain). In order to discriminate between rod scale errors and vertical displacements due to crustal motion, the individual rod-pairs should be used in more than one leveling, preferably in areas of contrasting tectonic activity. From an analysis of 37 separately calibrated rod-pairs used in 55 levelings in southern California, we found eight statistically significant coefficients that could be reasonably attributed to rod scale errors, only one of which was larger than the expected random error in the applied calibration-based scale correction. However, significant differences with other independent checks indicate that caution should be exercised before accepting these results as evidence of scale error. Further refinements of the technique are clearly needed if the results are to be routinely applied in practice.

  17. High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link

    NASA Technical Reports Server (NTRS)

    Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli

    2016-01-01

    We present a high-precision ranging and range-rate measurement system via an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built, comprising a ground terminal and a space terminal. Ranging and range-rate tests were conducted in two configurations. In the communication configuration with a 622 Mbps data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 x 10^-15 with 10 second averaging time. Ranging and range-rate performance as a function of the bit error rate of the communication link is reported; neither is sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range-rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 x 10^-15 with 10 second averaging time. We identified the major noise sources in the current system as transmitter modulation injected noise and receiver electronics generated noise. A new improved system will be constructed to further improve performance in both operating modes.
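
The modified Allan deviation quoted above is a standard stability statistic computed from time-error (phase) samples; a sketch of its usual definition (e.g. as given in NIST SP 1065) is below, with illustrative test data rather than data from the paper:

```python
import numpy as np

def mod_adev(x, tau0, m):
    """Modified Allan deviation from time-error samples x taken at
    interval tau0, for averaging factor m (tau = m*tau0):
      Mod sigma_y^2(tau) = mean_j [ (1/m) sum of m consecutive
      second differences d2_i ]^2 / (2 tau^2),
    with d2_i = x[i+2m] - 2 x[i+m] + x[i]."""
    x = np.asarray(x, dtype=float)
    d2 = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]      # length N - 2m
    c = np.cumsum(np.concatenate(([0.0], d2)))
    avg = (c[m:] - c[:-m]) / m                     # moving average, N - 3m + 1
    tau = m * tau0
    return np.sqrt(np.mean(avg ** 2) / (2 * tau ** 2))

# Sanity check: pure linear frequency drift d gives exactly d*tau/sqrt(2).
tau0, d = 1.0, 1e-6
t = np.arange(1000) * tau0
x = 0.5 * d * t ** 2
print(mod_adev(x, tau0, m=10))  # d*10*tau0/sqrt(2) ~ 7.07e-06
```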

  18. Quantum error correction for beginners.

    PubMed

    Devitt, Simon J; Munro, William J; Nemoto, Kae

    2013-07-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation now constitute a much larger field, and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future.

  19. Error image aware content restoration

    NASA Astrophysics Data System (ADS)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard in quality demanded by consumers has posed a new challenge in today's context, where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality-check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), a familiar tool for quality-control agents.

  20. Dominant modes via model error

    NASA Technical Reports Server (NTRS)

    Yousuff, A.; Breida, M.

    1992-01-01

    Obtaining a reduced model of a stable mechanical system with proportional damping is considered. Such systems can be conveniently represented in modal coordinates. Two popular schemes, the modal cost analysis and the balancing method, offer simple means of identifying dominant modes for retention in the reduced model. The dominance is measured via the modal costs in the case of modal cost analysis and via the singular values of the Gramian-product in the case of balancing. Though these measures do not exactly reflect the more appropriate model error, which is the H2 norm of the output-error between the full and the reduced models, they do lead to simple computations. Normally, the model error is computed after the reduced model is obtained, since it is believed that, in general, the model error cannot be easily computed a priori. The authors point out that the model error can also be calculated a priori, just as easily as the above measures. Hence, the model error itself can be used to determine the dominant modes. Moreover, the simplicity of the computations does not presume any special properties of the system, such as small damping, orthogonal symmetry, etc.
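
The a priori computation the authors describe, evaluating the H2 norm of the output error between the full and reduced models before committing to a reduction, can be sketched with a small numpy-only Lyapunov solver; the function names and the scalar test system are illustrative:

```python
import numpy as np

def lyap(A, Q):
    """Solve A W + W A^T + Q = 0 via the Kronecker/vec identity
    (numpy only; adequate for small systems)."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    w = np.linalg.solve(K, -Q.flatten(order="F"))
    return w.reshape((n, n), order="F")

def h2_norm(A, B, C):
    """H2 norm of a stable LTI system (A, B, C): sqrt(trace(C W C^T)),
    with W the controllability Gramian."""
    W = lyap(A, B @ B.T)
    return float(np.sqrt(max(np.trace(C @ W @ C.T), 0.0)))

def model_error(A, B, C, Ar, Br, Cr):
    """A priori model error ||G - Gr||_H2: the H2 norm of the stacked
    output-error system formed from the full and reduced models."""
    n, r = A.shape[0], Ar.shape[0]
    Ae = np.block([[A, np.zeros((n, r))], [np.zeros((r, n)), Ar]])
    Be = np.vstack([B, Br])
    Ce = np.hstack([C, -Cr])
    return h2_norm(Ae, Be, Ce)

# Scalar check: dx = -2x + u, y = 3x has H2 norm sqrt(9/(2*2)) = 1.5.
print(h2_norm(np.array([[-2.0]]), np.array([[1.0]]), np.array([[3.0]])))  # 1.5
```

Retaining the full model as the "reduced" model drives `model_error` to zero, which is a convenient sanity check of the stacked-system construction.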

  1. 7 CFR 1207.323 - Acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan National Potato Promotion Board § 1207.323 Acceptance. Each...

  2. 7 CFR 1207.323 - Acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan National Potato Promotion Board § 1207.323 Acceptance. Each...

  3. 7 CFR 1207.323 - Acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan National Potato Promotion Board § 1207.323 Acceptance. Each...

  4. 7 CFR 1207.323 - Acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan National Potato Promotion Board § 1207.323 Acceptance. Each...

  5. 7 CFR 1207.323 - Acceptance.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan National Potato Promotion Board § 1207.323 Acceptance. Each...

  6. 7 CFR 932.32 - Acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Olive Administrative Committee § 932.32 Acceptance. Any person selected by the...

  7. 7 CFR 932.32 - Acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Olive Administrative Committee § 932.32 Acceptance. Any person selected by the...

  8. 7 CFR 932.32 - Acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Olive Administrative Committee § 932.32 Acceptance. Any person selected by the...

  9. 7 CFR 932.32 - Acceptance.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Olive Administrative Committee § 932.32 Acceptance. Any person selected by the...

  10. 7 CFR 932.32 - Acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE OLIVES GROWN IN CALIFORNIA Order Regulating Handling Olive Administrative Committee § 932.32 Acceptance. Any person selected by the...

  11. Acceptance Criteria for Aerospace Structural Adhesives.

    DTIC Science & Technology

    ADHESIVES, *AIRFRAMES, PRIMERS, STRUCTURAL ENGINEERING, CHEMICAL COMPOSITION, MECHANICAL PROPERTIES, INDUSTRIAL PRODUCTION, DATA ACQUISITION, PARTICLE SIZE, ACCEPTANCE TESTS, ELASTOMERS, BONDING, QUALITY CONTROL.

  12. 2013 SYR Accepted Poster Abstracts.

    PubMed

    2013-01-01

    SYR 2013 Accepted Poster abstracts: 1. Benefits of Yoga as a Wellness Practice in a Veterans Affairs (VA) Health Care Setting: If You Build It, Will They Come? 2. Yoga-based Psychotherapy Group With Urban Youth Exposed to Trauma. 3. Embodied Health: The Effects of a Mind-Body Course for Medical Students. 4. Interoceptive Awareness and Vegetable Intake After a Yoga and Stress Management Intervention. 5. Yoga Reduces Performance Anxiety in Adolescent Musicians. 6. Designing and Implementing a Therapeutic Yoga Program for Older Women With Knee Osteoarthritis. 7. Yoga and Life Skills Eating Disorder Prevention Among 5th Grade Females: A Controlled Trial. 8. A Randomized, Controlled Trial Comparing the Impact of Yoga and Physical Education on the Emotional and Behavioral Functioning of Middle School Children. 9. Feasibility of a Multisite, Community-based Randomized Study of Yoga and Wellness Education for Women With Breast Cancer Undergoing Chemotherapy. 10. A Delphi Study for the Development of Protocol Guidelines for Yoga Interventions in Mental Health. 11. Impact Investigation of Breathwalk Daily Practice: Canada-India Collaborative Study. 12. Yoga Improves Distress, Fatigue, and Insomnia in Older Veteran Cancer Survivors: Results of a Pilot Study. 13. Assessment of Kundalini Mantra and Meditation as an Adjunctive Treatment With Mental Health Consumers. 14. Kundalini Yoga Therapy Versus Cognitive Behavior Therapy for Generalized Anxiety Disorder and Co-Occurring Mood Disorder. 15. Baseline Differences in Women Versus Men Initiating Yoga Programs to Aid Smoking Cessation: Quitting in Balance Versus QuitStrong. 16. Pranayam Practice: Impact on Focus and Everyday Life of Work and Relationships. 17. Participation in a Tailored Yoga Program is Associated With Improved Physical Health in Persons With Arthritis. 18. Effects of Yoga on Blood Pressure: Systematic Review and Meta-analysis. 19. A Quasi-experimental Trial of a Yoga-based Intervention to Reduce Stress and

  13. In acceptance we trust? Conceptualising acceptance as a viable approach to NGO security management.

    PubMed

    Fast, Larissa A; Freeman, C Faith; O'Neill, Michael; Rowley, Elizabeth

    2013-04-01

    This paper documents current understanding of acceptance as a security management approach and explores issues and challenges non-governmental organisations (NGOs) confront when implementing an acceptance approach to security management. It argues that the failure of organisations to systematise and clearly articulate acceptance as a distinct security management approach, and a lack of organisational policies and procedures concerning acceptance, hinder its efficacy as a security management approach. The paper identifies key and cross-cutting components of acceptance that are critical to its effective implementation, in order to advance a comprehensive and systematic concept of acceptance. The key components of acceptance illustrate how organisational and staff functions positively or negatively affect an organisation's acceptance, and include: an organisation's principles and mission, communications, negotiation, programming, relationships and networks, stakeholder and context analysis, staffing, and image. The paper contends that acceptance is linked not only to good programming, but also to overall organisational management and structures.

  14. Phonologic Error Distributions in the Iowa-Nebraska Articulation Norms Project: Word-Initial Consonant Clusters.

    ERIC Educational Resources Information Center

    Smit, Ann Bosma

    1993-01-01

    The errors on word-initial consonant clusters made by children (ages 2-9) in the Iowa-Nebraska Articulation Norms Project were tabulated by age range and frequency. Error data showed support for previous research in the acquisition of clusters. Cluster errors are discussed in terms of theories of phonologic development. (Author/JDD)

  15. Enhanced orbit determination filter sensitivity analysis: Error budget development

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Burkhart, P. D.

    1994-01-01

    An error budget analysis is presented which quantifies the effects of different error sources in the orbit determination process when the enhanced orbit determination filter, recently developed, is used to reduce radio metric data. The enhanced filter strategy differs from more traditional filtering methods in that nearly all of the principal ground system calibration errors affecting the data are represented as filter parameters. Error budget computations were performed for a Mars Observer interplanetary cruise scenario for cases in which only X-band (8.4-GHz) Doppler data were used to determine the spacecraft's orbit, X-band ranging data were used exclusively, and a combined set in which the ranging data were used in addition to the Doppler data. In all three cases, the filter model was assumed to be a correct representation of the physical world. Random nongravitational accelerations were found to be the largest source of error contributing to the individual error budgets. Other significant contributors, depending on the data strategy used, were solar-radiation pressure coefficient uncertainty, random earth-orientation calibration errors, and Deep Space Network (DSN) station location uncertainty.

  16. Research on calibration error of carrier phase against antenna arraying

    NASA Astrophysics Data System (ADS)

    Sun, Ke; Hou, Xiaomin

    2016-11-01

    A key technical difficulty of uplink antenna arraying is that signals transmitted from separate ground antennas cannot be aligned automatically at a deep-space target. The far-field power-combining gain is directly determined by the accuracy of carrier phase calibration, so the entire arraying system must be analyzed in order to improve that accuracy. This paper analyzes the factors affecting the carrier phase calibration error of an uplink antenna arraying system, including phase measurement and equipment error, uplink channel phase-shift error, position errors of the ground antennas, calibration receiver and target spacecraft, and atmospheric turbulence disturbances, and discusses spatial and temporal autocorrelation models of the atmospheric disturbance. Because the antennas of an uplink array share no common reference signal for continuous calibration, the system must be calibrated periodically, with calibration referred to communication with one or more spacecraft over a given period; since deep-space targets cannot align the combined received signal automatically, the alignment must be established on the ground in advance. The data show that, using existing technology, the error can be controlled within the range demanded by the required carrier phase calibration accuracy, and the total error can be kept within a reasonable range.

  17. Masking of errors in transmission of VAPC-coded speech

    NASA Technical Reports Server (NTRS)

    Cox, Neil B.; Froese, Edwin L.

    1990-01-01

    A subjective evaluation is provided of the bit error sensitivity of the message elements of a Vector Adaptive Predictive (VAPC) speech coder, along with an indication of the amenability of these elements to a popular error masking strategy (cross frame hold over). As expected, a wide range of bit error sensitivity was observed. The most sensitive message components were the short term spectral information and the most significant bits of the pitch and gain indices. The cross frame hold over strategy was found to be useful for pitch and gain information, but it was not beneficial for the spectral information unless severe corruption had occurred.
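
The cross-frame hold-over strategy evaluated above can be sketched as a simple decoder-side rule; the frame representation and field names here are illustrative, not the VAPC bitstream format:

```python
def mask_errors(frames):
    """Cross-frame hold-over masking: when a frame's pitch or gain
    index arrives corrupted, reuse the value from the previous frame
    rather than decode the damaged one. Spectral parameters are passed
    through unchanged, reflecting the finding that hold-over did not
    help them. Each frame is a dict like
    {'pitch': 52, 'gain': 7, 'pitch_bad': False, 'gain_bad': False}."""
    masked, prev = [], None
    for f in frames:
        out = dict(f)
        if prev is not None:
            if f.get("pitch_bad"):
                out["pitch"] = prev["pitch"]
            if f.get("gain_bad"):
                out["gain"] = prev["gain"]
        masked.append(out)
        prev = out
    return masked

frames = [{"pitch": 50, "gain": 7, "pitch_bad": False, "gain_bad": False},
          {"pitch": 99, "gain": 7, "pitch_bad": True,  "gain_bad": False}]
print(mask_errors(frames)[1]["pitch"])  # 50: corrupted pitch held over
```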

  18. Microdensitometer errors: Their effect on photometric data reduction

    NASA Technical Reports Server (NTRS)

    Bozyan, E. P.; Opal, C. B.

    1984-01-01

    The performance of densitometers used for photometric data reduction of high dynamic range electrographic plate material is analyzed. Densitometer repeatability is tested by comparing two scans of one plate. Internal densitometer errors are examined by constructing histograms of digitized densities and finding inoperative bits and differential nonlinearity in the analog to digital converter. Such problems appear common to the four densitometers used in this investigation and introduce systematic algorithm dependent errors in the results. Strategies to improve densitometer performance are suggested.

  19. Verification of the Forecast Errors Based on Ensemble Spread

    NASA Astrophysics Data System (ADS)

    Vannitsem, S.; Van Schaeybroeck, B.

    2014-12-01

    The use of ensemble prediction systems allows for an uncertainty estimation of the forecast. Most end users do not require all the information contained in an ensemble and prefer a single uncertainty measure. This measure is the ensemble spread, which serves to forecast the forecast error. It is, however, unclear how the quality of these forecasts can best be assessed based on spread and forecast error only. Spread-error verification is intricate for two reasons: first, for each probabilistic forecast only one verifying observation is available, and second, the spread is not meant to provide an exact prediction of the error. Despite these facts, several advances were recently made, all based on traditional deterministic verification of the error forecast. In particular, Grimit and Mass (2007) and Hopson (2014) considered in detail the strengths and weaknesses of the spread-error correlation, while Christensen et al. (2014) developed a proper-score extension of the mean squared error. However, due to the strong variance of the error given a certain spread, the error forecast should preferably be considered probabilistic in nature. In the present work, different probabilistic error models are proposed depending on the spread-error metrics used. Most of these models allow for the discrimination of a perfect forecast from an imperfect one, independent of the underlying ensemble distribution. The new spread-error scores are tested on the ensemble prediction system of the European Centre for Medium-Range Weather Forecasts (ECMWF) over Europe and Africa. References: Christensen, H. M., Moroz, I. M. and Palmer, T. N., 2014, Evaluation of ensemble forecast uncertainty using a new proper score: application to medium-range and seasonal forecasts. In press, Quarterly Journal of the Royal Meteorological Society. Grimit, E. P., and C. F. Mass, 2007: Measuring the ensemble spread-error relationship with a probabilistic approach: Stochastic ensemble results. Mon. Wea. Rev., 135, 203
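
The traditional deterministic diagnostic mentioned above, the spread-error correlation, can be sketched as follows; the toy arrays are constructed so the relationship is exact, purely for illustration:

```python
import numpy as np

def spread_error_correlation(ensemble, obs):
    """Correlation, across forecast cases, between ensemble spread
    (standard deviation over members) and the absolute error of the
    ensemble mean. ensemble: array (cases, members); obs: (cases,)."""
    spread = ensemble.std(axis=1, ddof=1)
    error = np.abs(ensemble.mean(axis=1) - obs)
    return float(np.corrcoef(spread, error)[0, 1])

# Toy case where the mean error grows in lockstep with the spread,
# so the correlation is 1.
s = np.array([1.0, 2.0, 3.0, 4.0])
ensemble = np.stack([np.zeros(4), 2.0 * s], axis=1)  # members 0 and 2s
obs = np.zeros(4)
print(round(spread_error_correlation(ensemble, obs), 6))  # 1.0
```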

  20. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple-degree-of-freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off-diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off-diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off-diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two-observer triangulation problem with range-only measurements. Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).

  1. Error Cost Escalation Through the Project Life Cycle

    NASA Technical Reports Server (NTRS)

    Stecklein, Jonette M.; Dabney, Jim; Dick, Brandon; Haskins, Bill; Lovell, Randy; Moroney, Gregory

    2004-01-01

    It is well known that the costs to fix errors increase as the project matures, but how fast do those costs build? A study was performed to determine the relative cost of fixing errors discovered during various phases of a project life cycle. This study used three approaches to determine the relative costs: the bottom-up cost method, the total cost breakdown method, and the top-down hypothetical project method. The approaches and results described in this paper presume development of a hardware/software system having project characteristics similar to those used in the development of a large, complex spacecraft, a military aircraft, or a small communications satellite. The results show the degree to which costs escalate as errors are discovered and fixed at later and later phases in the project life cycle. If the cost of fixing a requirements error discovered during the requirements phase is defined to be 1 unit, the cost to fix that error if found during the design phase increases to 3-8 units; at the manufacturing/build phase, the cost to fix the error is 7-16 units; at the integration and test phase, the cost to fix the error becomes 21-78 units; and at the operations phase, the cost to fix the requirements error ranges from 29 units to more than 1500 units.
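The phase-dependent multipliers above can be captured in a small lookup table (a sketch; the function name and phase labels are our own, and the ranges are the study's reported unit multipliers):

```python
# Relative cost multipliers for fixing a requirements error, keyed by the
# phase in which it is discovered (ranges from the study summarized above).
COST_ESCALATION = {
    "requirements":     (1, 1),
    "design":           (3, 8),
    "manufacturing":    (7, 16),
    "integration_test": (21, 78),
    "operations":       (29, 1500),
}

def fix_cost_range(base_cost: float, phase: str) -> tuple:
    """Return the (low, high) estimated cost of fixing an error found in `phase`."""
    lo, hi = COST_ESCALATION[phase]
    return (base_cost * lo, base_cost * hi)

print(fix_cost_range(10_000, "integration_test"))  # (210000, 780000)
```

A requirements error that costs 10,000 units to fix when caught early can thus cost hundreds of thousands if it survives to integration and test.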

  2. The impact of sound speed errors on medical ultrasound imaging.

    PubMed

    Anderson, M E; McKeag, M S; Trahey, G E

    2000-06-01

    The results of a quantitative study of the impact of sound speed errors on the spatial resolution and amplitude sensitivity of a commercial medical ultrasound scanner are presented in the context of their clinical significance. The beamforming parameters of the scanner were manipulated to produce sound speed errors ranging over +/-8% while imaging a wire target and an attenuating, speckle-generating phantom. For the wire target, these errors produced increases in lateral beam width of up to 320% and reductions in peak echo amplitude of up to 10.5 dB. In the speckle-generating phantom, these errors produced increases in speckle intensity correlation cell area of up to 92% and reductions in mean speckle brightness of up to 5.6 dB. These results are applied in statistical analyses of two detection tasks of clinical relevance. The first is of low contrast lesion detectability, predicting the changes in the correct decision probability as a function of lesion size, contrast, and sound speed error. The second is of point target detectability, predicting the changes in the correct decision probability as a function of point target reflectivity and sound speed error. Representative results of these analyses are presented and their implications for clinical imaging are discussed. In general, sound speed errors have a more significant impact on point target detectability than on lesion detectability by these analyses, producing up to a 22% reduction in correct decisions for a typical error.

  3. Error-associated behaviors and error rates for robotic geology

    NASA Technical Reports Server (NTRS)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill-based decisions require the least cognitive effort and knowledge-based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  4. An estimation error bound for pixelated sensing

    NASA Astrophysics Data System (ADS)

    Kreucher, Chris; Bell, Kristine

    2016-05-01

    This paper considers the ubiquitous problem of estimating the state (e.g., position) of an object based on a series of noisy measurements. The standard approach is to formulate this problem as one of measuring the state (or a function of the state) corrupted by additive Gaussian noise. This model assumes both (i) the sensor provides a measurement of the true target (or, alternatively, a separate signal processing step has eliminated false alarms), and (ii) the error source in the measurement is accurately described by a Gaussian model. In reality, however, sensor measurements are often formed on a grid of pixels - e.g., Ground Moving Target Indication (GMTI) measurements are formed for a discrete set of (angle, range, velocity) voxels, and EO imagery is formed on (x, y) grids. When a target is present in a pixel, therefore, uncertainty is not Gaussian (instead it is a boxcar function) and unbiased estimation is not generally possible as the location of the target within the pixel defines the bias of the estimator. It turns out that this small modification to the measurement model makes traditional bounding approaches not applicable. This paper discusses pixelated sensing in more detail and derives the minimum mean squared error (MMSE) bound for estimation in the pixelated scenario. We then use this error calculation to investigate the utility of using non-thresholded measurements.
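The boxcar (uniform-in-pixel) uncertainty described above has a simple consequence: when only the pixel index is known, the best constant estimate is the pixel center, whose mean squared error is the uniform-distribution variance delta^2/12. A minimal Monte Carlo check (illustrative only, not the paper's bound derivation):

```python
import numpy as np

rng = np.random.default_rng(2)
delta = 1.0      # pixel width
n = 500_000

# Targets are uniformly distributed within a pixel; the sensor reports
# only the pixel index, so the natural estimate is the pixel center.
positions = rng.uniform(0.0, delta, n)
estimates = np.full(n, delta / 2)

mse = np.mean((positions - estimates) ** 2)
print(round(mse, 4))  # close to delta**2 / 12, i.e. about 0.0833
```

This is the floor that pixelation imposes regardless of signal-to-noise ratio, which is why the Gaussian-error bounds no longer apply.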

  5. Heavy Metal, Religiosity, and Suicide Acceptability.

    ERIC Educational Resources Information Center

    Stack, Steven

    1998-01-01

    Reports on data taken from the General Social Survey that found a link between "heavy metal" rock fanship and suicide acceptability. Finds that relationship becomes nonsignificant once level of religiosity is controlled. Heavy metal fans are low in religiosity, which contributes to greater suicide acceptability. (Author/JDM)

  6. Hanford Site liquid waste acceptance criteria

    SciTech Connect

    LUECK, K.J.

    1999-09-11

    This document provides the waste acceptance criteria for liquid waste managed by Waste Management Federal Services of Hanford, Inc. (WMH). These waste acceptance criteria address the various requirements to operate a facility in compliance with applicable environmental, safety, and operational requirements. This document also addresses the sitewide miscellaneous streams program.

  7. 48 CFR 411.103 - Market acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Market acceptance. 411.103... ACQUISITION PLANNING DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 411.103 Market... accordance with FAR 11.103(a), the market acceptability of their items to be offered. (b) The...

  8. 48 CFR 3011.103 - Market acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 7 2011-10-01 2011-10-01 false Market acceptance. 3011.103 Section 3011.103 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND SECURITY, HOMELAND... Developing Requirements Documents 3011.103 Market acceptance. (a) Contracting officers may act on behalf...

  9. 48 CFR 411.103 - Market acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Market acceptance. 411.103... ACQUISITION PLANNING DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 411.103 Market... accordance with FAR 11.103(a), the market acceptability of their items to be offered. (b) The...

  10. 48 CFR 3011.103 - Market acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Market acceptance. 3011.103 Section 3011.103 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND SECURITY, HOMELAND... Developing Requirements Documents 3011.103 Market acceptance. (a) Contracting officers may act on behalf...

  11. Nevada Test Site Waste Acceptance Criteria (NTSWAC)

    SciTech Connect

    NNSA /NSO Waste Management Project

    2008-06-01

    This document establishes the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office, Nevada Test Site Waste Acceptance Criteria (NTSWAC). The NTSWAC provides the requirements, terms, and conditions under which the Nevada Test Site will accept low-level radioactive waste (LLW) and LLW Mixed Waste (MW) for disposal.

  12. Consumer acceptance of ginseng food products.

    PubMed

    Chung, Hee Sook; Lee, Young-Chul; Rhee, Young Kyung; Lee, Soo-Yeun

    2011-01-01

    Ginseng has been utilized less in food products than in dietary supplements in the United States. Sensory acceptance of ginseng food products by U.S. consumers has not been reported. The objectives of this study were to: (1) determine the sensory acceptance of commercial ginseng food products and (2) assess the influence of the addition of sweeteners to ginseng tea and of ginseng extract to chocolate on consumer acceptance. A total of 126 consumers participated in 3 sessions for (1) 7 commercial red ginseng food products, (2) 10 ginseng teas varying in levels of sugar or honey, and (3) 10 ginseng milk or dark chocolates varying in levels of ginseng extract. Ginseng candy with vitamin C and ginseng crunchy white chocolate were the most highly accepted, while the sliced ginseng root product was the least accepted among the seven commercial products. Sensory acceptance increased in proportion to the content of sugar and honey in ginseng tea, whereas acceptance decreased with increasing content of ginseng extract in milk and dark chocolates. Findings demonstrate that ginseng food product types with which consumers are already familiar, such as candy and chocolate, will have potential for success in the U.S. market. Chocolate could be suggested as a food matrix into which ginseng can be incorporated, as it contains more bioactive compounds than ginseng tea at a similar acceptance level. Future research may include a descriptive analysis with ginseng-based products to identify the key drivers of liking and disliking for successful new product development.

  13. Genres Across Cultures: Types of Acceptability Variation

    ERIC Educational Resources Information Center

    Shaw, Philip; Gillaerts, Paul; Jacobs, Everett; Palermo, Ofelia; Shinohara, Midori; Verckens, J. Piet

    2004-01-01

    One can ask four questions about genre validity across cultures. Does a certain form or configuration occur in the culture in question? Is it acceptable? If acceptable, is it in practice preferred? Is it recommended by prescriptive authorities? This paper reports the results of an attempt to answer these questions empirically by testing the…

  14. 48 CFR 11.103 - Market acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Market acceptance. 11.103... DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 11.103 Market acceptance. (a) Section... may, under appropriate circumstances, require offerors to demonstrate that the items offered— (1)...

  15. 48 CFR 2811.103 - Market acceptance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Market acceptance. 2811.103... Planning DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 2811.103 Market acceptance... offerors to demonstrate that the items offered meet the criteria set forth in FAR 11.103(a)....

  16. 5 CFR 1655.11 - Loan acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 3 2013-01-01 2013-01-01 false Loan acceptance. 1655.11 Section 1655.11 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD LOAN PROGRAM § 1655.11 Loan acceptance. The TSP record keeper will reject a loan application if: (a) The participant is not qualified to apply...

  17. 5 CFR 1655.11 - Loan acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 3 2011-01-01 2011-01-01 false Loan acceptance. 1655.11 Section 1655.11 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD LOAN PROGRAM § 1655.11 Loan acceptance. The TSP record keeper will reject a loan application if: (a) The participant is not qualified to apply...

  18. 5 CFR 1655.11 - Loan acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 3 2012-01-01 2012-01-01 false Loan acceptance. 1655.11 Section 1655.11 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD LOAN PROGRAM § 1655.11 Loan acceptance. The TSP record keeper will reject a loan application if: (a) The participant is not qualified to apply...

  19. 48 CFR 3011.103 - Market acceptance.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 7 2014-10-01 2014-10-01 false Market acceptance. 3011.103 Section 3011.103 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND SECURITY, HOMELAND... Developing Requirements Documents 3011.103 Market acceptance. (a) Contracting officers may act on behalf...

  20. 48 CFR 411.103 - Market acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Market acceptance. 411.103... ACQUISITION PLANNING DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 411.103 Market... accordance with FAR 11.103(a), the market acceptability of their items to be offered. (b) The...

  1. 48 CFR 411.103 - Market acceptance.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Market acceptance. 411.103... ACQUISITION PLANNING DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 411.103 Market... accordance with FAR 11.103(a), the market acceptability of their items to be offered. (b) The...

  2. 48 CFR 3011.103 - Market acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 7 2013-10-01 2012-10-01 true Market acceptance. 3011.103 Section 3011.103 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND SECURITY, HOMELAND SECURITY... Requirements Documents 3011.103 Market acceptance. (a) Contracting officers may act on behalf of the head...

  3. 48 CFR 3011.103 - Market acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 7 2012-10-01 2012-10-01 false Market acceptance. 3011.103 Section 3011.103 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND SECURITY, HOMELAND... Developing Requirements Documents 3011.103 Market acceptance. (a) Contracting officers may act on behalf...

  4. 48 CFR 411.103 - Market acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Market acceptance. 411.103... ACQUISITION PLANNING DESCRIBING AGENCY NEEDS Selecting and Developing Requirements Documents 411.103 Market... accordance with FAR 11.103(a), the market acceptability of their items to be offered. (b) The...

  5. How psychotherapists handle treatment errors – an ethical analysis

    PubMed Central

    2013-01-01

    Background Dealing with errors in psychotherapy is challenging, both ethically and practically. There is almost no empirical research on this topic. We aimed (1) to explore psychotherapists’ self-reported ways of dealing with an error made by themselves or by colleagues, and (2) to reconstruct their reasoning according to the two principle-based ethical approaches that are dominant in the ethics discourse of psychotherapy, Beauchamp & Childress (B&C) and Lindsay et al. (L). Methods We conducted 30 semi-structured interviews with 30 psychotherapists (physicians and non-physicians) and analysed the transcripts using qualitative content analysis. Answers were deductively categorized according to the two principle-based ethical approaches. Results Most psychotherapists reported that they preferred to disclose an error to the patient. They justified this by spontaneous intuitions and common values in psychotherapy, rarely using explicit ethical reasoning. The answers were attributed to the following categories with descending frequency: 1. Respect for patient autonomy (B&C; L), 2. Non-maleficence (B&C) and Responsibility (L), 3. Integrity (L), 4. Competence (L) and Beneficence (B&C). Conclusions Psychotherapists need specific ethical and communication training to complement and articulate their moral intuitions as a support when disclosing their errors to the patients. Principle-based ethical approaches seem to be useful for clarifying the reasons for disclosure. Further research should help to identify the most effective and acceptable ways of error disclosure in psychotherapy. PMID:24321503

  6. Increased taxon sampling greatly reduces phylogenetic error.

    PubMed

    Zwickl, Derrick J; Hillis, David M

    2002-08-01

    Several authors have argued recently that extensive taxon sampling has a positive and important effect on the accuracy of phylogenetic estimates. However, other authors have argued that there is little benefit of extensive taxon sampling, and so phylogenetic problems can or should be reduced to a few exemplar taxa as a means of reducing the computational complexity of the phylogenetic analysis. In this paper we examined five aspects of study design that may have led to these different perspectives. First, we considered the measurement of phylogenetic error across a wide range of taxon sample sizes, and conclude that the expected error based on randomly selecting trees (which varies by taxon sample size) must be considered in evaluating error in studies of the effects of taxon sampling. Second, we addressed the scope of the phylogenetic problems defined by different samples of taxa, and argue that phylogenetic scope needs to be considered in evaluating the importance of taxon-sampling strategies. Third, we examined the claim that fast and simple tree searches are as effective as more thorough searches at finding near-optimal trees that minimize error. We show that a more complete search of tree space reduces phylogenetic error, especially as the taxon sample size increases. Fourth, we examined the effects of simple versus complex simulation models on taxonomic sampling studies. Although benefits of taxon sampling are apparent for all models, data generated under more complex models of evolution produce higher overall levels of error and show greater positive effects of increased taxon sampling. Fifth, we asked if different phylogenetic optimality criteria show different effects of taxon sampling. Although we found strong differences in effectiveness of different optimality criteria as a function of taxon sample size, increased taxon sampling improved the results from all the common optimality criteria. Nonetheless, the method that showed the lowest overall

  7. Understanding diversity: the importance of social acceptance.

    PubMed

    Chen, Jacqueline M; Hamilton, David L

    2015-04-01

    Two studies investigated how people define and perceive diversity in the historically majority-group dominated contexts of business and academia. We hypothesized that individuals construe diversity as both the numeric representation of racial minorities and the social acceptance of racial minorities within a group. In Study 1, undergraduates' (especially minorities') perceptions of campus diversity were predicted by perceived social acceptance on a college campus, above and beyond perceived minority representation. Study 2 showed that increases in a company's representation and social acceptance independently led to increases in perceived diversity of the company among Whites. Among non-Whites, representation and social acceptance only increased perceived diversity of the company when both qualities were high. Together these findings demonstrate the importance of both representation and social acceptance to the achievement of diversity in groups and that perceiver race influences the relative importance of these two components of diversity.

  8. Heavy metal, religiosity, and suicide acceptability.

    PubMed

    Stack, S

    1998-01-01

    There has been little work at the national level on the subject of musical subcultures and suicide acceptability. The present work explores the link between "heavy metal" rock fanship and suicide acceptability. Metal fanship is thought to elevate suicide acceptability through such means as exposure to a culture of personal and societal chaos marked by hopelessness, and through its associations with demographic risk factors such as gender, socioeconomic status, and education. Data are taken from the General Social Survey. A link between heavy metal fanship and suicide acceptability is found. However, this relationship becomes nonsignificant once level of religiosity is controlled. Metal fans are low in religiosity, which contributes, in turn, to greater suicide acceptability.

  9. Monte Carlo determination of Phoswich Array acceptance

    SciTech Connect

    Costales, J.B.; E859 Collaboration

    1992-07-01

    The purpose of this memo is to describe the means by which the acceptance of the E859 Phoswich Array is determined. By acceptance, two things are meant: first, the geometrical acceptance (the angular size of the modules); second, the detection acceptance (the probability that a particle of a given 4-momentum initially in the detector line-of-sight is detected as such). In particular, this memo will concentrate on those particles for which the energy of the particle can be sufficiently measured; that is to say, protons, deuterons and tritons. In principle, the phoswich array can measure the low end of the pion energy spectrum, but with a poor resolution. The detection acceptance of pions and baryon clusters heavier than tritons will be neglected in this memo.
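A geometric acceptance of the kind described can be estimated by sampling isotropic directions and counting the fraction that fall within a module's angular window. The module boundaries below are hypothetical, chosen only so the analytic solid-angle fraction is easy to verify against the Monte Carlo estimate:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# Isotropic directions: cos(theta) uniform in [-1, 1], phi uniform in [0, 2*pi).
cos_theta = rng.uniform(-1.0, 1.0, n)
phi = rng.uniform(0.0, 2 * np.pi, n)

# Hypothetical module covering theta in [60, 90] degrees and a 30-degree phi window.
in_theta = (cos_theta > np.cos(np.radians(90.0))) & (cos_theta < np.cos(np.radians(60.0)))
in_phi = phi < np.radians(30.0)
acceptance = np.mean(in_theta & in_phi)

# Analytic solid-angle fraction: (delta cos_theta / 2) * (delta phi / 2 pi)
expected = (np.cos(np.radians(60.0)) - np.cos(np.radians(90.0))) / 2 * (30.0 / 360.0)
print(round(acceptance, 4), round(expected, 4))
```

The detection acceptance (the probability that a particle headed at the module is actually identified) would multiply this geometric fraction by an energy- and species-dependent efficiency, which is what the full simulation described in the memo provides.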

  10. Flight Test Results of an Angle of Attack and Angle of Sideslip Calibration Method Using Output-Error Optimization

    NASA Technical Reports Server (NTRS)

    Siu, Marie-Michele; Martos, Borja; Foster, John V.

    2013-01-01

    As part of a joint partnership between the NASA Aviation Safety Program (AvSP) and the University of Tennessee Space Institute (UTSI), research on advanced air data calibration methods has been in progress. This research was initiated to expand a novel pitot-static calibration method that was developed to allow rapid in-flight calibration for the NASA Airborne Subscale Transport Aircraft Research (AirSTAR) facility. This approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure error models over a range of airspeed with defined confidence bounds. Subscale flight tests demonstrated small 2-sigma error bounds with significant reduction in test time compared to other methods. Recent UTSI full scale flight tests have shown airspeed calibrations with the same accuracy as, or better than, the Federal Aviation Administration (FAA) accepted GPS 'four-leg' method in a smaller test area and in less time. The current research was motivated by the desire to extend this method for in-flight calibration of angle of attack (AOA) and angle of sideslip (AOS) flow vanes. An instrumented Piper Saratoga research aircraft from the UTSI was used to collect the flight test data and evaluate flight test maneuvers. Results showed that the output-error approach produces good results for flow vane calibration. In addition, maneuvers for pitot-static and flow vane calibration can be integrated to enable simultaneous and efficient testing of each system.
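The core idea of fitting a pressure (airspeed) error model against GPS-derived truth can be sketched as an ordinary least-squares polynomial fit; the data, model order, and coefficient values below are invented for illustration and stand in for the paper's full output-error formulation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical data: GPS-derived true airspeed vs. pitot-derived indicated airspeed.
v_true = np.linspace(40.0, 120.0, 60)           # knots
true_coeffs = (2.0, -0.05, 0.0004)              # assumed quadratic error model
err = true_coeffs[0] + true_coeffs[1] * v_true + true_coeffs[2] * v_true**2
v_indicated = v_true - err + rng.normal(0.0, 0.2, v_true.size)

# Least-squares estimate of the quadratic pressure-error model coefficients.
A = np.vander(v_true, 3, increasing=True)       # columns [1, V, V^2]
coeffs, *_ = np.linalg.lstsq(A, v_true - v_indicated, rcond=None)
print(np.round(coeffs, 3))
```

A full output-error implementation would additionally propagate measurement noise statistics to obtain the confidence bounds the abstract mentions; the least-squares residual covariance is the simplest proxy for that step.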

  11. Force matching errors following eccentric exercise.

    PubMed

    Proske, U; Gregory, J E; Morgan, D L; Percival, P; Weerakkody, N S; Canny, B J

    2004-10-01

    During eccentric exercise contracting muscles are forcibly lengthened, to act as a brake to control motion of the body. A consequence of eccentric exercise is damage to muscle fibres. It has been reported that following the damage there is disturbance to proprioception, in particular, the senses of force and limb position. Force sense was tested in an isometric force-matching task using the elbow flexor muscles of both arms before and after the muscles in one arm had performed 50 eccentric contractions at a strength of 30% of a maximum voluntary contraction (MVC). The exercise led to an immediate reduction of about 40%, in the force generated during an MVC followed by a slow recovery over the next four days, and to the development of delayed onset muscle soreness (DOMS) lasting about the same time. After the exercise, even though participants believed they were making an accurate match, they made large matching errors, in a direction where the exercised arm developed less force than the unexercised arm. This was true whichever arm was used to generate the reference forces, which were in a range of 5-30% of the reference arm's MVC, with visual feedback of the reference arm's force levels provided to the participant. The errors were correlated with the fall in MVC following the exercise, suggesting that participants were not matching force, but the subjective effort needed to generate the force: the same effort producing less force in a muscle weakened by eccentric exercise. The errors were, however, larger than predicted from the measured reduction in MVC, suggesting that factors other than effort might also be contributing. One factor may be DOMS. To test this idea, force matches were done in the presence of pain, induced in unexercised muscles by injection of hypertonic (5%) saline or by the application of noxious heat to the skin over the muscle. Both procedures led to errors in the same direction as those seen after eccentric exercise.

  12. POSITION ERROR IN STATION-KEEPING SATELLITE

    DTIC Science & Technology

    of an error in satellite orientation and the sun being in a plane other than the equatorial plane may result in errors in position determination. The nature of the errors involved is described and their magnitudes estimated.

  13. Orbit IMU alignment: Error analysis

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    A comprehensive accuracy analysis of orbit inertial measurement unit (IMU) alignments using the shuttle star trackers was completed and the results are presented. Monte Carlo techniques were used in a computer simulation of the IMU alignment hardware and software systems to: (1) determine the expected Space Transportation System 1 Flight (STS-1) manual mode IMU alignment accuracy; (2) investigate the accuracy of alignments in later shuttle flights when the automatic mode of star acquisition may be used; and (3) verify that an analytical model previously used for estimating the alignment error is a valid model. The analysis results do not differ significantly from expectations. The standard deviation in the IMU alignment error for STS-1 alignments was determined to be 68 arc seconds per axis. This corresponds to a 99.7% probability that the magnitude of the total alignment error is less than 258 arc seconds.
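The two quoted numbers are mutually consistent: for three independent Gaussian axis errors with a 68 arc second standard deviation each, the 99.7% quantile of the total error magnitude lands near the quoted 258 arc seconds. A quick Monte Carlo check (a sketch, not the original simulation):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma = 68.0     # per-axis alignment error, arc seconds (from the analysis above)
n = 1_000_000

# Total alignment error magnitude for three independent Gaussian axis errors.
err = rng.normal(0.0, sigma, (n, 3))
magnitude = np.linalg.norm(err, axis=1)

q997 = np.quantile(magnitude, 0.997)
print(round(q997))  # close to the quoted 258 arc second bound
```

Equivalently, the magnitude follows a Maxwell (chi, 3 degrees of freedom) distribution scaled by sigma, whose 99.7% quantile is roughly 3.7 sigma.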

  14. Error analysis using organizational simulation.

    PubMed Central

    Fridsma, D. B.

    2000-01-01

    Organizational simulations have been used by project organizations in civil and aerospace industries to identify work processes and organizational structures that are likely to fail under certain conditions. Using a simulation system based on Galbraith's information-processing theory and Simon's notion of bounded-rationality, we retrospectively modeled a chemotherapy administration error that occurred in a hospital setting. Our simulation suggested that when there is a high rate of unexpected events, the oncology fellow was differentially backlogged with work when compared with other organizational members. Alternative scenarios suggested that providing more knowledge resources to the oncology fellow improved her performance more effectively than adding additional staff to the organization. Although it is not possible to know whether this might have prevented the error, organizational simulation may be an effective tool to prospectively evaluate organizational "weak links", and explore alternative scenarios to correct potential organizational problems before they generate errors. PMID:11079885

  15. Sensation seeking and error processing.

    PubMed

    Zheng, Ya; Sheng, Wenbin; Xu, Jing; Zhang, Yuanyuan

    2014-09-01

    Sensation seeking is defined by a strong need for varied, novel, complex, and intense stimulation, and a willingness to take risks for such experience. Several theories propose that the insensitivity to negative consequences incurred by risks is one of the hallmarks of sensation-seeking behaviors. In this study, we investigated the time course of error processing in sensation seeking by recording event-related potentials (ERPs) while high and low sensation seekers performed an Eriksen flanker task. Whereas there were no group differences in ERPs to correct trials, sensation seeking was associated with a blunted error-related negativity (ERN), which was female-specific. Further, different subdimensions of sensation seeking were related to ERN amplitude differently. These findings indicate that the relationship between sensation seeking and error processing is sex-specific.

  16. Constraint checking during error recovery

    NASA Technical Reports Server (NTRS)

    Lutz, Robyn R.; Wong, Johnny S. K.

    1993-01-01

    The system-level software onboard a spacecraft is responsible for recovery from communication, power, thermal, and computer-health anomalies that may occur. The recovery must occur without disrupting any critical scientific or engineering activity that is executing at the time of the error. Thus, the error-recovery software may have to execute concurrently with the ongoing acquisition of scientific data or with spacecraft maneuvers. This work provides a technique by which the rules that constrain the concurrent execution of these processes can be modeled in a graph. An algorithm is described that uses this model to validate that the constraints hold for all concurrent executions of the error-recovery software with the software that controls the science and engineering activities of the spacecraft. The results are applicable to a variety of control systems with critical constraints on the timing and ordering of the events they control.
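The constraint graph described above can be sketched as a set of "must not run concurrently" edges between activities, with validation checking every step of a concurrent execution against those edges (all names are hypothetical; the paper's algorithm validates all interleavings, which this sketch reduces to checking one schedule):

```python
# Constraints modeled as edges in a graph whose nodes are activities;
# an edge means the two activities must not execute concurrently.
CONFLICTS = {
    frozenset({"error_recovery", "spacecraft_maneuver"}),
    frozenset({"power_switch", "science_readout"}),
}

def execution_valid(concurrent_sets):
    """Check that no step of the execution runs two conflicting activities."""
    for active in concurrent_sets:
        for a in active:
            for b in active:
                if a != b and frozenset({a, b}) in CONFLICTS:
                    return False
    return True

# One candidate schedule: error recovery overlaps science readout (allowed),
# then the maneuver runs alone.
schedule = [{"error_recovery", "science_readout"}, {"spacecraft_maneuver"}]
print(execution_valid(schedule))  # True
```

Validating the constraints over all possible interleavings, as the paper's algorithm does, amounts to running this check over every reachable set of simultaneously active processes rather than over a single fixed schedule.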

  17. Meditation, mindfulness and executive control: the importance of emotional acceptance and brain-based performance monitoring.

    PubMed

    Teper, Rimma; Inzlicht, Michael

    2013-01-01

    Previous studies have documented the positive effects of mindfulness meditation on executive control. What has been lacking, however, is an understanding of the mechanism underlying this effect. Some theorists have described mindfulness as embodying two facets-present moment awareness and emotional acceptance. Here, we examine how the effect of meditation practice on executive control manifests in the brain, suggesting that emotional acceptance and performance monitoring play important roles. We investigated the effect of meditation practice on executive control and measured the neural correlates of performance monitoring, specifically, the error-related negativity (ERN), a neurophysiological response that occurs within 100 ms of error commission. Meditators and controls completed a Stroop task, during which we recorded ERN amplitudes with electroencephalography. Meditators showed greater executive control (i.e. fewer errors), a higher ERN and more emotional acceptance than controls. Finally, mediation pathway models further revealed that meditation practice relates to greater executive control and that this effect can be accounted for by heightened emotional acceptance, and to a lesser extent, increased brain-based performance monitoring.

  18. Relationship between behavioural coping strategies and acceptance in patients with fibromyalgia syndrome: Elucidating targets of interventions

    PubMed Central

    2011-01-01

    Background Previous research has found that acceptance of pain is more successful than cognitive coping variables for predicting adjustment to pain. This research has a limitation because measures of cognitive coping rely on observations and reports of thoughts, or attempts to change thoughts, rather than on overt behaviours. The purpose of the present study, therefore, was to compare the influence of acceptance measures and the influence of different behavioural coping strategies on adjustment to chronic pain. Methods A sample of 167 individuals diagnosed with fibromyalgia syndrome completed the Chronic Pain Coping Inventory (CPCI) and the Chronic Pain Acceptance Questionnaire (CPAQ). Results Correlational analyses indicated that the acceptance variables were more related to distress and functioning than were the behavioural coping variables. The average magnitudes of the coefficients for activity engagement and pain willingness (both subscales of pain acceptance) across the measures of distress and functioning were r = 0.42 and 0.25, respectively, while the average magnitude of the correlation between coping and functioning was r = 0.17. Regression analyses examined the independent, relative contributions of coping and acceptance to adjustment indicators and demonstrated that acceptance accounted for more variance than did the coping variables. The variance contributed by acceptance scores ranged from 4.0 to 40%; the variance contributed by the coping variables ranged from 0 to 9%. Conclusions This study extends the findings of previous work, supporting the adoption of acceptance-based interventions for maintaining adequate functioning in fibromyalgia patients. PMID:21714918
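The comparison in this abstract rests on correlation coefficients and the share of variance they explain (r squared). A minimal sketch with invented scores (not the study's data):

```python
# Pearson correlation and variance explained (r**2), computed from
# scratch on hypothetical acceptance/functioning scores.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented scores for five participants.
acceptance = [2, 4, 5, 7, 9]
functioning = [1, 3, 6, 6, 10]
r = pearson_r(acceptance, functioning)
variance_explained = r ** 2  # fraction of variance accounted for
```

A correlation of r = 0.42, as reported for activity engagement, corresponds to roughly 18% of variance explained, which is how the regression percentages in the abstract relate to the correlations.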

  19. Medication errors: definitions and classification.

    PubMed

    Aronson, Jeffrey K

    2009-06-01

    1. To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. 2. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey-Lewis method (based on an understanding of theory and practice). 3. A medication error is 'a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient'. 4. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is 'a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient'. The converse of this, 'balanced prescribing' is 'the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm'. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. 5. A prescription error is 'a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription'. The 'normal features' include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. 6. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies.

  20. Automatic-repeat-request error control schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.; Miller, M. J.

    1983-01-01

    Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
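As a rough sketch of the ARQ idea (a toy additive checksum stands in for the linear block codes the survey discusses; the frame format and names are invented):

```python
# Stop-and-wait ARQ sketch: the receiver verifies a checksum and the
# sender retransmits until the frame arrives intact. A real system
# would use a CRC, not this toy modular sum.

import random

def checksum(data):
    return sum(data) % 256

def noisy_channel(frame, error_rate, rng):
    """Corrupt one byte with probability error_rate."""
    data = bytearray(frame)
    if rng.random() < error_rate:
        i = rng.randrange(len(data))
        data[i] ^= 0xFF
    return bytes(data)

def send_with_arq(payload, error_rate=0.3, max_tries=50, seed=1):
    rng = random.Random(seed)
    frame = payload + bytes([checksum(payload)])
    for attempt in range(1, max_tries + 1):
        received = noisy_channel(frame, error_rate, rng)
        body, chk = received[:-1], received[-1]
        if checksum(body) == chk:   # checksum passes: ACK, stop resending
            return body, attempt
        # checksum fails: NAK, fall through and retransmit
    raise RuntimeError("retry limit exceeded")

data, tries = send_with_arq(b"telemetry block")
```

With a properly chosen error-detecting code, the probability of an undetected error becomes negligible, which is what makes the "virtually error-free" transmission mentioned above attainable.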

  1. Public Acceptance for Geological CO2-Storage

    NASA Astrophysics Data System (ADS)

    Schilling, F.; Ossing, F.; Würdemann, H.; Co2SINK Team

    2009-04-01

    Public acceptance is one of the fundamental prerequisites for geological CO2 storage. In highly populated areas like central Europe, especially in the vicinity of metropolitan areas like Berlin, underground operations are in the focus of the people living next to the site, the media, and politics. To gain acceptance, all these groups - the people in the neighbourhood, journalists, and authorities - need to be confident of the security of the planned storage operation as well as the long-term security of storage. A very important point is to show that the technical risks of CO2 storage can be managed with the help of a proper short- and long-term monitoring concept, as well as appropriate mitigation technologies, e.g. adequate abandonment procedures for leaking wells. To better explain the possible risks, example leakage scenarios help the public to assess and to accept the technical risks of CO2 storage. At Ketzin we tried an approach that can be summed up as: always tell the truth! This might be self-evident, but it has to be stressed that credibility is of vital importance. Suspicion and distrust are the best friends of fear, and undefined fear seems to be the major risk to public acceptance of geological CO2 storage. Misinformation and missing communication further enhance the denial of geological CO2 storage. When we started to plan and establish the Ketzin storage site, we ensured forward-directed communication. Proactive information activities, an information centre on site, active media politics, and open information about the activities taking place are basics. Some of the measures were: - information of the competent authorities through meetings (mayor, governmental authorities) - information of the local public, e.g. hearings (while also inviting local, regional and nationwide media) - we always treated the local people and press first! - organizing bigger events to inform the public on site, e.g. 
start of drilling activities (open

  2. ASTP ranging system mathematical model

    NASA Technical Reports Server (NTRS)

    Ellis, M. R.; Robinson, L. H.

    1973-01-01

    A mathematical model of the VHF ranging system is presented to analyze the performance of the Apollo-Soyuz Test Project (ASTP). The system was adapted for use in the ASTP. The ranging system mathematical model is presented in block diagram form, and a brief description of the overall model is included. A procedure for implementing the math model is presented, along with a discussion of the validation of the math model and the overall summary and conclusions of the study effort. Detailed appendices cover the five study tasks: early/late gate model development, unlock probability development, system error model development, probability of acquisition model development, and math model validation testing.

  3. A Positive View of Peer Acceptance in Aggressive Youth: Risk for Future Peer Acceptance.

    ERIC Educational Resources Information Center

    Hughes, Jan N.; Cavell, Timothy A.; Prasad-Gaur, Archna

    2001-01-01

    Uses longitudinal data to determine whether a positive view of perceived peer acceptance is a risk factor for continued aggression and social rejection for aggressive children. Results indicate that perceived peer acceptance did not predict aggression. However, children who reported higher levels of perceived peer acceptance received lower actual…

  4. Hybrid Projectile Body Angle Estimation for Selectable Range Increase

    NASA Astrophysics Data System (ADS)

    Gioia, Christopher J.

    A Hybrid Projectile (HP) is a tube launched munition that transforms into a gliding UAV, and is currently being researched at West Virginia University. A simple launch timer was first envisioned to control the transformation point in order to achieve maximum distance. However, this timer would need to be reprogrammed for any distance less than maximum range due to the nominal time to deployment varying with launch angle. A method was sought for automatic wing deployment that would not require reprogramming the round. A body angle estimation system was used to estimate the pitch of the HP relative to the Earth to determine when the HP is properly oriented for the designed glide slope angle. It was also necessary to filter out noise from a simulated inertial measurement unit (IMU), GPS receiver, and magnetometer. An Extended Kalman Filter (EKF) was chosen to estimate the Euler angles, position and velocity of the HP while an algorithm determined when to deploy the wings. A parametric study was done to verify the optimum deployment condition using a Simulink aerodynamic model. Because range is directly related to launch angle, various launch angles were simulated in the model. By fixing the glide slope angle to -10° as a deployment condition for all launch angles, the range differed only by a maximum of 6.1% from the maximum possible range. Based on these findings, the body angle deployment condition provides the most flexible option to maintain maximum distance without the need of reprogramming. Position and velocity estimates were also determined from the EKF using the GPS measurements. Simulations showed that the EKF estimates exhibited low root mean squared error values, corresponding to less than 3% of the total position values. Because the HP was in flight for less than a minute in this experiment, the drift encountered was acceptable.
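The thesis's filter estimates Euler angles, position, and velocity with an EKF; the predict/update cycle it relies on can be sketched with a 1-D linear Kalman filter tracking position from noisy GPS-like fixes (all dynamics and noise values below are illustrative, not the HP's):

```python
# 1-D constant-velocity Kalman filter: state [position, velocity],
# noisy position measurements. Same predict/update structure as the
# EKF in the text, without the nonlinear attitude model.

import random

def kalman_1d(measurements, dt=0.1, q=0.01, r=4.0):
    x = [measurements[0], 0.0]            # initial state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]          # initial covariance
    estimates = []
    for z in measurements:
        # Predict: x = F x, P = F P F^T + Q, with F = [[1, dt], [0, 1]].
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with position measurement z (H = [1, 0]).
        S = P[0][0] + r                   # innovation covariance
        K = [P[0][0] / S, P[1][0] / S]    # Kalman gain
        y = z - x[0]                      # innovation
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        estimates.append(x[0])
    return estimates

rng = random.Random(0)
truth = [5.0 * 0.1 * k for k in range(200)]        # 5 m/s true motion
meas = [p + rng.gauss(0, 2.0) for p in truth]      # noisy "GPS" fixes
est = kalman_1d(meas)
rmse_meas = (sum((m - t) ** 2 for m, t in zip(meas, truth)) / 200) ** 0.5
rmse_est = (sum((e - t) ** 2 for e, t in zip(est, truth)) / 200) ** 0.5
```

The filtered RMSE comes out well below the raw measurement RMSE, the same qualitative behavior as the sub-3% position errors reported above.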

  5. Radar ranging to Ganymede and Callisto

    NASA Astrophysics Data System (ADS)

    Harmon, J. K.; Ostro, S. J.; Chandler, J. F.; Hudson, R. S.

    1994-03-01

    Arecibo observations from 1992 February to March have yielded the first successful radar range measurements to the Galilean satellites. Round-trip time delays were measured for Ganymede and Callisto with accuracies of 20 to 50 microseconds (3 to 7 km) and 90 microseconds (14 km), respectively. Both satellites showed round-trip delay residuals (relative to the E-3 ephemeris) of about a millisecond, most of which can be attributed to errors in the predicted along-track positions (orbital phases). Using a simple model that assumed that all of the ephemeris error was due to constant orbital-phase and Jupiter range errors, we estimate that Ganymede was leading its ephemeris by 122 +/- 4 km, Callisto was lagging its ephemeris by 307 +/- 14 km, and Jupiter was 11 +/- 4 km more distant than predicted by the PEP740 planetary ephemeris.
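The delay-to-distance figures quoted here follow directly from round-trip timing, range = c * delay / 2. A quick check (standard physics, not taken from the paper):

```python
# Convert a round-trip delay error (seconds) to the implied one-way
# range error (km): the signal covers the distance twice.

C_KM_PER_S = 299_792.458  # speed of light in km/s

def round_trip_delay_to_range_km(delay_s):
    return C_KM_PER_S * delay_s / 2.0

# Delay accuracies of 20 and 50 microseconds map to roughly 3 and
# 7.5 km of range, consistent with the quoted 3-7 km figure.
lo_km = round_trip_delay_to_range_km(20e-6)
hi_km = round_trip_delay_to_range_km(50e-6)
```

The same conversion gives about 13.5 km for a 90-microsecond delay accuracy, matching the quoted 14 km for Callisto.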

  6. Management of human error by design

    NASA Technical Reports Server (NTRS)

    Wiener, Earl

    1988-01-01

    Design-induced errors and error prevention as well as the concept of lines of defense against human error are discussed. The concept of human error prevention, whose main focus has been on hardware, is extended to other features of the human-machine interface vulnerable to design-induced errors. In particular, it is pointed out that human factors and human error prevention should be part of the process of transport certification. Also, the concept of error tolerant systems is considered as a last line of defense against error.

  7. Consumer Acceptance of Dry Dog Food Variations

    PubMed Central

    Donfrancesco, Brizio Di; Koppel, Kadri; Swaney-Stueve, Marianne; Chambers, Edgar

    2014-01-01

    Simple Summary The objectives of this study were to compare the acceptance of different dry dog food products by consumers, determine consumer clusters for acceptance, and identify the characteristics of dog food that drive consumer acceptance. Pet owners evaluated dry dog food samples available in the US market. The results indicated that appearance of the sample, especially the color, influenced pet owners’ overall liking more than the aroma of the product. Abstract The objectives of this study were to compare the acceptance of different dry dog food products by consumers, determine consumer clusters for acceptance, and identify the characteristics of dog food that drive consumer acceptance. Eight dry dog food samples available in the US market were evaluated by pet owners. In this study, consumers evaluated overall liking, aroma, and appearance liking of the products. Consumers were also asked to predict their purchase intent, their dog’s liking, and cost of the samples. The results indicated that appearance of the sample, especially the color, influenced pet owners’ overall liking more than the aroma of the product. Overall liking clusters were not related to income, age, gender, or education, indicating that general consumer demographics do not appear to play a main role in individual consumer acceptance of dog food products. PMID:26480043

  8. Sampling Errors in Satellite-derived Infrared Sea Surface Temperatures

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Minnett, P. J.

    2014-12-01

    Sea Surface Temperature (SST) measured from satellites has been playing a crucial role in understanding geophysical phenomena. Generating SST Climate Data Records (CDRs) is considered the application that imposes the most stringent requirements on data accuracy. For infrared SSTs, sampling uncertainties caused by cloud presence and persistence generate errors. In addition, for sensors with narrow swaths, the swath gap acts as another source of sampling error. This study is concerned with quantifying and understanding such sampling errors, which are important for SST CDR generation and for a wide range of satellite SST users. In order to quantify these errors, a reference Level 4 SST field (Multi-scale Ultra-high Resolution SST) is sampled using realistic swath and cloud masks of the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Advanced Along Track Scanning Radiometer (AATSR). Global and regional SST uncertainties are studied by assessing the sampling error at different temporal and spatial resolutions (7 spatial resolutions from 4 km to 5.0° at the equator and 5 temporal resolutions from daily to monthly). Global annual and seasonal mean sampling errors are large in the high-latitude regions, especially the Arctic, and have geographical distributions that are most likely related to stratus cloud occurrence and persistence. The region between 30°N and 30°S has smaller errors than higher latitudes, except for the Tropical Instability Wave area, where persistent negative errors are found. Important differences in sampling errors are also found between the broad and narrow swath scan patterns and between day and night fields. This is the first time that realistic magnitudes of the sampling errors have been quantified. Future improvement in the accuracy of SST products will benefit from this quantification.
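The quantification strategy described (sample a complete reference field through a realistic cloud mask, then compare against the full-field statistic) can be sketched in miniature; the field, mask, and numbers below are invented, not MODIS/AATSR data:

```python
# Toy sampling-error estimate: hide some cells of a "truth" field behind
# a synthetic cloud mask and compare the visible-cell mean with the true
# mean of the full field.

import random

def sampling_error(field, mask):
    """Mean of unmasked cells minus the true mean of the full field."""
    visible = [v for v, m in zip(field, mask) if not m]
    true_mean = sum(field) / len(field)
    return sum(visible) / len(visible) - true_mean

rng = random.Random(42)
# Truth: SST-like values with a 1-degree warm patch in the first 30 cells.
field = [20.0 + (1.0 if i < 30 else 0.0) for i in range(100)]
# Synthetic clouds that preferentially sit over the warm patch.
mask = [i < 30 and rng.random() < 0.8 for i in range(100)]
bias = sampling_error(field, mask)  # negative: warm cells under-sampled
```

Because the synthetic clouds correlate with the warm patch, the visible-cell mean is biased low, mirroring how persistent cloud cover produces the persistent negative errors reported for some regions.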

  9. Hybrid Models for Trajectory Error Modelling in Urban Environments

    NASA Astrophysics Data System (ADS)

    Angelatsa, E.; Parés, M. E.; Colomina, I.

    2016-06-01

    This paper tackles the first step of any strategy aiming to improve the trajectory of terrestrial mobile mapping systems in urban environments. We present an approach to model the error of terrestrial mobile mapping trajectories, combining deterministic and stochastic models. Owing to the specifics of the urban environment, the deterministic component is modelled with non-continuous functions composed of linear shifts, drifts, or polynomial functions. In addition, we introduce a stochastic error component to model the residual noise of the trajectory error function. The first step in error modelling is to determine the actual trajectory error values for several representative environments. In order to determine the trajectory errors as accurately as possible, (almost) error-free reference trajectories are estimated using non-semantic features extracted from a sequence of images collected with the terrestrial mobile mapping system, together with a full set of ground control points. Once the references are estimated, they are used to determine the actual errors in the terrestrial mobile mapping trajectory. The rigorous analysis of these data sets allows us to characterize the errors of a terrestrial mobile mapping system for a wide range of environments. This information will be of great use in future campaigns to improve the results of 3D point cloud generation. The proposed approach has been evaluated using real data. The data originate from a mobile mapping campaign over an urban and controlled area of Dortmund (Germany), with harmful GNSS conditions. The mobile mapping system, which includes two laser scanners and two cameras, was mounted on a van and driven around a controlled area for about three hours. The results show the suitability of decomposing the trajectory error into non-continuous deterministic and stochastic components.

  10. Negotiating vaccine acceptance in an era of reluctance.

    PubMed

    Larson, Heidi J

    2013-08-01

    Studies to better understand the determinants of vaccine acceptance have expanded to include more investigation into the dynamics of individual decision-making as well as the influences of peers and social networks. Vaccine acceptance is determined by a range of factors, from structural issues of supply, cost, and access to services to more demand-side determinants. The term vaccine hesitancy is increasingly used in the investigation of demand-side determinants, moving away from the more polarized framing of pro- and anti-vaccine groups to recognizing the importance of understanding and engaging those who are delaying vaccination, accepting only some vaccines, or who are as yet undecided but reluctant. As hesitancy is a state of indecision it is difficult to measure, but the stage of indecision is a critical time to engage and support the decision-making process. This article suggests modes of investigating the determinants of vaccine confidence and levers of vaccine acceptance toward better engagement and dialogue early in the process of decision-making. Pressure to vaccinate can be counter-productive. Listening and dialogue can support individual decision-making and more effectively inform the public health community of the issues and concerns influencing vaccine hesitancy.

  11. Intrinsic and extrinsic influences on children's acceptance of new foods.

    PubMed

    Blissett, Jackie; Fogel, Anna

    2013-09-10

    The foods that tend to be rejected by children include those which may have the greatest importance for later health. This paper reviews some of the intrinsic and extrinsic influences on preschool children's eating behavior, with particular reference to their acceptance of new foods into their diet. Factors conceptualized as intrinsic to the child in this review include sensory processing, taste perception, neophobia, and temperament. The important extrinsic determinants of children's food acceptance which are reviewed include parental and peer modeling, the family food environment, infant feeding practices including breastfeeding and age at weaning, concurrent feeding practices including restriction, pressure to eat, prompting and reward, and the taste and energy content of foods. Children's willingness to accept new foods is influenced by a wide range of factors that likely have both individual and interactive effects on children's willingness to taste, and then continue to eat, new foods. The literature lacks longitudinal and experimental studies, which will be particularly important in determining the interventions most likely to be effective in facilitating children's acceptance of healthy foods.

  12. Prospective, multidisciplinary recording of perioperative errors in cerebrovascular surgery: is error in the eye of the beholder?

    PubMed

    Michalak, Suzanne M; Rolston, John D; Lawton, Michael T

    2016-06-01

    OBJECT Surgery requires careful coordination of multiple team members, each playing a vital role in mitigating errors. Previous studies have focused on eliciting errors from only the attending surgeon, likely missing events observed by other team members. METHODS Surveys were administered to the attending surgeon, resident surgeon, anesthesiologist, and nursing staff immediately following each of 31 cerebrovascular surgeries; participants were instructed to record any deviation from optimal course (DOC). DOCs were categorized and sorted by reporter and perioperative timing, then correlated with delays and outcome measures. RESULTS Errors were recorded in 93.5% of the 31 cases surveyed. The number of errors recorded per case ranged from 0 to 8, with an average of 3.1 ± 2.1 errors (± SD). Overall, technical errors were most common (24.5%), followed by communication (22.4%), management/judgment (16.0%), and equipment (11.7%). The resident surgeon reported the most errors (52.1%), followed by the circulating nurse (31.9%), the attending surgeon (26.6%), and the anesthesiologist (14.9%). The attending and resident surgeons were most likely to report technical errors (52% and 30.6%, respectively), while anesthesiologists and circulating nurses mostly reported anesthesia errors (36%) and communication errors (50%), respectively. The overlap in reported errors was 20.3%. If this study had used only the surveys completed by the attending surgeon, as in prior studies, 72% of equipment errors, 90% of anesthesia and communication errors, and 100% of nursing errors would have been missed. In addition, it would have been concluded that errors occurred in only 45.2% of cases (rather than 93.5%) and that errors resulting in a delay occurred in 3.2% of cases instead of the 74.2% calculated using data from 4 team members. 
Compiled results from all team members yielded significant correlations between technical DOCs and prolonged hospital stays and reported and actual delays (p = 0

  13. Approaches to acceptable risk: a critical guide

    SciTech Connect

    Fischhoff, B.; Lichtenstein, S.; Slovic, P.; Keeney, R.; Derby, S.

    1980-12-01

    Acceptable-risk decisions are an essential step in the management of technological hazards. In many situations, they constitute the weak (or missing) link in the management process. The absence of an adequate decision-making methodology often produces indecision, inconsistency, and dissatisfaction. The result is neither good for hazard management nor good for society. This report offers a critical analysis of the viability of various approaches as guides to acceptable-risk decisions. This report seeks to define acceptable-risk decisions and to examine some frequently proposed, but inappropriate, solutions. 255 refs., 22 figs., 25 tabs.

  14. Hanford Site Solid Waste Acceptance Criteria

    SciTech Connect

    Not Available

    1993-11-17

    This manual defines the Hanford Site radioactive, hazardous, and sanitary solid waste acceptance criteria. Criteria in the manual represent a guide for meeting state and federal regulations; DOE Orders; Hanford Site requirements; and other rules, regulations, guidelines, and standards as they apply to acceptance of radioactive and hazardous solid waste at the Hanford Site. It is not the intent of this manual to be all inclusive of the regulations; rather, it is intended that the manual provide the waste generator with only the requirements that waste must meet in order to be accepted at Hanford Site TSD facilities.

  15. Multichannel error correction code decoder

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Ivancic, William D.

    1993-01-01

    A brief overview of a processing satellite for a mesh very-small-aperture (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described. The testing is discussed. Applications of the time-shared decoder are recommended.

  16. Typical errors of ESP users

    NASA Astrophysics Data System (ADS)

    Eremina, Svetlana V.; Korneva, Anna A.

    2004-07-01

    The paper presents an analysis of the errors made by ESP (English for specific purposes) users which can be considered typical. They occur as a result of misuse of the resources of English grammar and tend to persist. Their origins and places of occurrence are also discussed.

  17. Error Analysis and Remedial Teaching.

    ERIC Educational Resources Information Center

    Corder, S. Pit

    The purpose of this paper is to analyze the role of error analysis in specifying and planning remedial treatment in second language learning. Part 1 discusses situations that demand remedial action. This is a quantitative assessment that requires measurement of the varying degrees of disparity between the learner's knowledge and the demands of the…

  18. Sampling Errors of Variance Components.

    ERIC Educational Resources Information Center

    Sanders, Piet F.

    A study on sampling errors of variance components was conducted within the framework of generalizability theory by P. L. Smith (1978). The study used an intuitive approach for solving the problem of how to allocate the number of conditions to different facets in order to produce the most stable estimate of the universe score variance. Optimization…

  19. The error of our ways

    NASA Astrophysics Data System (ADS)

    Swartz, Clifford E.

    1999-10-01

    In Victorian literature it was usually some poor female who came to see the error of her ways. How prescient of her! How I wish that all writers of manuscripts for The Physics Teacher would come to similar recognition of this centerpiece of measurement. For, Brothers and Sisters, we all err.

  20. Amplify Errors to Minimize Them

    ERIC Educational Resources Information Center

    Stewart, Maria Shine

    2009-01-01

    In this article, the author offers her experience of modeling mistakes and writing spontaneously in the computer classroom to get students' attention and elicit their editorial response. She describes how she taught her class about major sentence errors--comma splices, run-ons, and fragments--through her Sentence Meditation exercise, a rendition…

  1. Having Fun with Error Analysis

    ERIC Educational Resources Information Center

    Siegel, Peter

    2007-01-01

    We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…

  2. RM2: rms error comparisons

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1976-01-01

    The root-mean-square error performance measure is used to compare the relative performance of several widely known source coding algorithms with the RM2 image data compression system. The results demonstrate that RM2 has a uniformly significant performance advantage.

  3. The Zero Product Principle Error.

    ERIC Educational Resources Information Center

    Padula, Janice

    1996-01-01

    Argues that the challenge for teachers of algebra in Australia is to find ways of making the structural aspects of algebra accessible to a greater percentage of students. Uses the zero product principle to provide an example of a common student error grounded in the difficulty of understanding the structure of algebra. (DDR)

  4. Competing Criteria for Error Gravity.

    ERIC Educational Resources Information Center

    Hughes, Arthur; Lascaratou, Chryssoula

    1982-01-01

    Presents study in which native-speaker teachers of English, Greek teachers of English, and English native-speakers who were not teachers judged seriousness of errors made by Greek-speaking students of English in their last year of high school. Results show native English speakers were more lenient than Greek teachers, and three groups differed in…

  5. What Is a Reading Error?

    ERIC Educational Resources Information Center

    Labov, William; Baker, Bettina

    2010-01-01

    Early efforts to apply knowledge of dialect differences to reading stressed the importance of the distinction between differences in pronunciation and mistakes in reading. This study develops a method of estimating the probability that a given oral reading that deviates from the text is a true reading error by observing the semantic impact of the…

  6. Error Processing in Huntington's Disease

    PubMed Central

    Andrich, Jürgen; Gold, Ralf; Falkenstein, Michael

    2006-01-01

    Background Huntington's disease (HD) is a genetic disorder expressed as a degeneration of the basal ganglia accompanied, inter alia, by dopaminergic alterations. These dopaminergic alterations are related to genetic factors, i.e., CAG-repeat expansion. The error(-related) negativity (Ne/ERN), a cognitive event-related potential related to performance monitoring, is generated in the anterior cingulate cortex (ACC) and is supposed to depend on the dopaminergic system. The Ne is reduced in Parkinson's disease (PD). Owing to a dopaminergic deficit in HD, a reduction of the Ne is also likely. Furthermore, it is assumed that movement dysfunction emerges as a consequence of dysfunctional error-feedback processing. Since dopaminergic alterations are related to the CAG repeat, a Ne reduction may also be related to the genetic disease load. Methodology/Principal Findings We assessed the error negativity (Ne) in a speeded reaction task under consideration of the underlying genetic abnormalities. HD patients showed a specific reduction in the Ne, which suggests impaired error processing in these patients. Furthermore, the Ne was closely related to CAG-repeat expansion. Conclusions/Significance The reduction of the Ne is likely to be an effect of the dopaminergic pathology. The result resembles findings in Parkinson's disease. As such, the Ne might be a measure of the integrity of striatal dopaminergic output function. The relation to the CAG-repeat expansion indicates that the Ne could serve as a gene-associated “cognitive” biomarker in HD. PMID:17183717

  7. ISMP Medication Error Report Analysis.

    PubMed

    2013-10-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  8. ISMP Medication Error Report Analysis.

    PubMed

    2014-01-01

  9. ISMP Medication Error Report Analysis.

    PubMed

    2013-05-01

  10. ISMP Medication Error Report Analysis.

    PubMed

    2013-12-01

  11. ISMP Medication Error Report Analysis.

    PubMed

    2013-11-01

  12. ISMP Medication error report analysis.

    PubMed

    2013-04-01

  13. ISMP Medication Error Report Analysis.

    PubMed

    2013-06-01

  14. ISMP Medication Error Report Analysis.

    PubMed

    2013-01-01

  15. ISMP Medication Error Report Analysis.

    PubMed

    2013-02-01

  16. ISMP Medication Error Report Analysis.

    PubMed

    2013-03-01

  17. ISMP Medication Error Report Analysis.

    PubMed

    2013-09-01

  18. ISMP Medication Error Report Analysis.

    PubMed

    2013-07-01

  19. The Errors of Our Ways

    ERIC Educational Resources Information Center

    Kane, Michael

    2011-01-01

    Errors don't exist in our data, but they serve a vital function. Reality is complicated, but our models need to be simple in order to be manageable. We assume that attributes are invariant over some conditions of observation, and once we do that we need some way of accounting for the variability in observed scores over these conditions of…

  20. THE SIGNIFICANCE OF LEARNER'S ERRORS.

    ERIC Educational Resources Information Center

    CORDER, S.P.

    Errors (not mistakes) made in both second language learning and child language acquisition provide evidence that a learner uses a definite system of language at every point in his development. This system, or "built-in syllabus," may yield a more efficient sequence than the instructor-generated sequence because it is more meaningful to the…

  1. Theory of Test Translation Error

    ERIC Educational Resources Information Center

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  2. Chinese Nurses' Acceptance of PDA: A Cross-Sectional Survey Using a Technology Acceptance Model.

    PubMed

    Wang, Yanling; Xiao, Qian; Sun, Liu; Wu, Ying

    2016-01-01

    This study explores Chinese nurses' acceptance of PDAs, using a questionnaire based on the framework of the Technology Acceptance Model (TAM). 357 nurses were involved in the study. The results reveal that nurses' acceptance scores averaged 3.18~3.36 across the four dimensions. Younger nurses, those with higher titles, those with longer previous usage time, and those with more experience using PDAs showed greater acceptance. Therefore, hospital administrators may adjust strategies to enhance nurses' acceptance of PDAs and promote their wide application.

  3. Toward a cognitive taxonomy of medical errors.

    PubMed

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2002-01-01

    One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of error. Based on Reason's (1992) definition of human errors and Norman's (1986) cognitive theory of human action, we have developed a preliminary action-based cognitive taxonomy of errors that largely satisfies these four criteria in the domain of medicine. We discuss initial steps for applying this taxonomy to develop an online medical error reporting system that not only categorizes errors but also identifies problems and generates solutions.

  4. Error and its meaning in forensic science.

    PubMed

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes.

  5. Resources and Long-Range Forecasts

    ERIC Educational Resources Information Center

    Smith, Waldo E.

    1973-01-01

    The author argues that forecasts of quick depletion of resources in the environment as a result of overpopulation and increased usage may not be free from error. Ignorance still exists in understanding the recovery mechanisms of nature. Long-range forecasts are likely to be wrong in such situations. (PS)

  6. Least squares evaluations for form and profile errors of ellipse using coordinate data

    NASA Astrophysics Data System (ADS)

    Liu, Fei; Xu, Guanghua; Liang, Lin; Zhang, Qing; Liu, Dan

    2016-09-01

    To improve the measurement and evaluation of the form error of an elliptic section, an evaluation method based on least-squares fitting is investigated to analyze the form and profile errors of an ellipse using coordinate data. Two error indicators for defining ellipticity are discussed, namely the form error and the profile error, and the difference between them is considered the main parameter for evaluating the machining quality of surface and profile. Because the form error and the profile error rely on different evaluation benchmarks, the major axis and the foci, rather than the centre of the ellipse, are used as the evaluation benchmarks; this allows a tolerance range to be evaluated accurately, with the form error and the profile error of the workpiece separated. Additionally, an evaluation program based on the LS model is developed to extract the form error and the profile error of the elliptic section, and it is well suited for separating the two errors in a standard program. Finally, the evaluation method is applied to the measurement of the skirt line of a piston, and the results indicate its effectiveness. This approach provides new evaluation indicators for measuring the form and profile errors of an ellipse, is found to have better accuracy, and can thus be used to address the difficulty of measuring and evaluating pistons in industrial production.
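    The least-squares idea in this abstract can be illustrated with a simple algebraic conic fit. This is only a sketch: the paper's own LS model uses the major axis and foci as evaluation benchmarks, which is more elaborate, and the function name and sample data below are hypothetical.

    ```python
    import numpy as np

    def fit_ellipse(x, y):
        """Algebraic least-squares fit of the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
        to coordinate data; returns the conic coefficients (a, b, c, d, e)."""
        D = np.column_stack([x**2, x * y, y**2, x, y])
        coeffs, *_ = np.linalg.lstsq(D, np.ones_like(x), rcond=None)
        return coeffs

    # Sample points on an ellipse with semi-axes 3 and 2, plus small measurement noise
    rng = np.random.default_rng(0)
    t = np.linspace(0, 2 * np.pi, 200)
    x = 3 * np.cos(t) + rng.normal(0, 0.01, t.size)
    y = 2 * np.sin(t) + rng.normal(0, 0.01, t.size)
    a, b, c, d, e = fit_ellipse(x, y)

    # A crude form-error indicator: worst algebraic residual over the measured points
    residual = np.abs(a * x**2 + b * x * y + c * y**2 + d * x + e * y - 1)
    print(a, c, residual.max())
    ```

    For the noiseless ellipse x²/9 + y²/4 = 1 the exact coefficients are a = 1/9, c = 1/4, b = d = e = 0; the fit recovers them closely from noisy data.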

  7. On Logical Error Underlying Classical Mechanics

    NASA Astrophysics Data System (ADS)

    Kalanov, Temur Z.

    2012-03-01

    A logical analysis of the generally accepted description of the mechanical motion of a material point M in classical mechanics is proposed. The key idea of the analysis is as follows. Let point M move in the positive direction of the axis Ox. Motion is characterized by a change of the coordinate x(t), a continuous function of time t (because motion is a change in general). If Δt → 0, then Δx → 0, i.e., according to practice and formal logic, the value of the coordinate does not change and, hence, motion does not exist. But, contrary to practice and formal logic, differential calculus and classical mechanics contain the assertion that the velocity lim(Δt→0) Δx/Δt exists without motion. Then this velocity is not a real (i.e., physical) quantity but a fictitious quantity. Therefore, the use of a non-physical (unreal) quantity (i.e., the first and second derivatives of a function) in classical mechanics is a logical error.
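    For reference, the quantity the abstract disputes is the standard limit definition of instantaneous velocity:

    ```latex
    v(t) \;=\; \lim_{\Delta t \to 0} \frac{\Delta x}{\Delta t}
         \;=\; \lim_{\Delta t \to 0} \frac{x(t+\Delta t) - x(t)}{\Delta t}
    ```

    The abstract's argument is that because Δx → 0 whenever Δt → 0, this limit ascribes a "velocity" to a point whose coordinate undergoes no finite change.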

  8. Medication Errors in Cardiopulmonary Arrest and Code-Related Situations.

    PubMed

    Flannery, Alexander H; Parli, Sara E

    2016-01-01

    PubMed/MEDLINE (1966-November 2014) was searched to identify relevant published studies on the overall frequency, types, and examples of medication errors during medical emergencies involving cardiopulmonary resuscitation and related situations, and the breakdown by type of error. The overall frequency of medication errors during medical emergencies, specifically situations related to resuscitation, is highly variable. Medication errors during such emergencies, particularly cardiopulmonary resuscitation and surrounding events, are not well characterized in the literature but may be more frequent than previously thought. Depending on whether research methods included database mining, simulation, or prospective observation of clinical practice, reported occurrence of medication errors during cardiopulmonary resuscitation and surrounding events has ranged from less than 1% to 50%. Because of the chaos of the resuscitation environment, errors in prescribing, dosing, preparing, labeling, and administering drugs are prone to occur. System-based strategies, such as infusion pump policies and code cart management, as well as personal strategies exist to minimize medication errors during emergency situations.

  9. Error in the honeybee waggle dance improves foraging flexibility.

    PubMed

    Okada, Ryuichi; Ikeno, Hidetoshi; Kimura, Toshifumi; Ohashi, Mizue; Aonuma, Hitoshi; Ito, Etsuro

    2014-02-26

    The honeybee waggle dance communicates the location of profitable food sources, usually with a certain degree of error in the directional information ranging from 10-15° at the lower margin. We simulated one-day colonial foraging to address the biological significance of information error in the waggle dance. When the error was 30° or larger, the waggle dance was not beneficial. If the error was 15°, the waggle dance was beneficial when the food sources were scarce. When the error was 10° or smaller, the waggle dance was beneficial under all the conditions tested. Our simulation also showed that precise information (0-5° error) yielded great success in finding feeders, but also caused failures at finding new feeders, i.e., a high-risk high-return strategy. The observation that actual bees perform the waggle dance with an error of 10-15° might reflect, at least in part, the maintenance of a successful yet risky foraging trade-off.
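    The precision/flexibility trade-off the authors simulate can be caricatured in a few lines. This is a toy model, not the authors' simulation: the Gaussian error model, the 10° feeder window, and the function name are all assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def hit_rate(error_deg, target_width_deg=10.0, n_trips=10000):
        """Fraction of recruited foragers whose flight bearing falls inside the
        angular window of the advertised feeder, given Gaussian dance error."""
        # Bearings scatter around the true direction (taken as 0 degrees)
        headings = rng.normal(0.0, error_deg, n_trips)
        return np.mean(np.abs(headings) <= target_width_deg / 2)

    # Small error: almost every recruit finds the advertised feeder (high return,
    # but little chance of stumbling onto new feeders). Large error: many recruits
    # miss it, which is wasteful when food is scarce.
    for err in (0, 5, 10, 15, 30):
        print(err, hit_rate(err))
    ```

    With zero error every recruit hits the advertised feeder; the hit rate then falls monotonically as the dance error grows, which is the "high-risk high-return" axis the abstract describes.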

  10. What Are Acceptable Limits of Radiation?

    NASA Video Gallery

    Brad Gersey, lead research scientist at the Center for Radiation Engineering and Science for Space Exploration, or CRESSE, at Prairie View A&M University, describes the legal and acceptable limits ...

  11. Behavioral genetics: scientific and social acceptance.

    PubMed

    Lorenz, David R

    2003-01-01

    Human behavioral genetics can be broadly defined as the attempt to characterize and define the genetic or hereditary basis for human behavior. Examination of the history of these scientific enterprises reveals episodes of controversy, and an apparent distinction between scientific and social acceptance of the genetic nature of such complex behaviors. This essay will review the history and methodology of behavioral genetics research, including a more detailed look at case histories involving behavioral genetic research for aggressive behavior and alcoholism. It includes a discussion of the scientific versus social qualities of the acceptance of behavioral genetics research, as well as the development of a general model for scientific acceptance involving the researchers, the scientific literature, the scientific peer group, the mainstream media, and the public at large. From this model follows a discussion of the means and complications by which behavioral genetics research may be accepted by society, and an analysis of how future studies might be conducted.

  12. 7 CFR 1205.326 - Acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE COTTON RESEARCH AND PROMOTION Cotton Research and Promotion Order Cotton Board § 1205.326 Acceptance. Any person selected by the Secretary as...

  13. 7 CFR 1205.326 - Acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE COTTON RESEARCH AND PROMOTION Cotton Research and Promotion Order Cotton Board § 1205.326 Acceptance. Any person selected by the Secretary as...

  14. 7 CFR 1205.326 - Acceptance.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE COTTON RESEARCH AND PROMOTION Cotton Research and Promotion Order Cotton Board § 1205.326 Acceptance. Any person selected by the Secretary as...

  15. Integrated Model for E-Learning Acceptance

    NASA Astrophysics Data System (ADS)

    Ramadiani; Rodziah, A.; Hasan, S. M.; Rusli, A.; Noraini, C.

    2016-01-01

    E-learning will not work if the system is not used in accordance with user needs, and the user interface is very important for encouraging use of the application. Many theories have discussed user-interface usability evaluation and technology acceptance separately; correlating interface usability evaluation with user acceptance could enhance the e-learning process. An evaluation model for e-learning interface acceptance is therefore considered important to investigate. The aim of this study is to propose an integrated e-learning user interface acceptance evaluation model. The model combines several theories of e-learning interface measurement, such as user learning style, usability evaluation, and user benefit. We formulated these into questionnaires, which were distributed to 125 English Language School (ELS) students. The statistical analysis used Structural Equation Modeling with LISREL v8.80 and MANOVA.

  16. 49 CFR 193.2303 - Construction acceptance.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...: FEDERAL SAFETY STANDARDS Construction § 193.2303 Construction acceptance. No person may place in service any component until it passes all applicable inspections and tests prescribed by this subpart and...

  17. 49 CFR 193.2303 - Construction acceptance.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...: FEDERAL SAFETY STANDARDS Construction § 193.2303 Construction acceptance. No person may place in service any component until it passes all applicable inspections and tests prescribed by this subpart and...

  18. 49 CFR 193.2303 - Construction acceptance.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...: FEDERAL SAFETY STANDARDS Construction § 193.2303 Construction acceptance. No person may place in service any component until it passes all applicable inspections and tests prescribed by this subpart and...

  19. 49 CFR 193.2303 - Construction acceptance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...: FEDERAL SAFETY STANDARDS Construction § 193.2303 Construction acceptance. No person may place in service any component until it passes all applicable inspections and tests prescribed by this subpart and...

  20. Gas characterization system software acceptance test report

    SciTech Connect

    Vo, C.V.

    1996-03-28

    This document details the results of software acceptance testing of gas characterization systems. The gas characterization systems will be used to monitor the vapor spaces of waste tanks known to contain measurable concentrations of flammable gases.

  1. Nevada Test Site Waste Acceptance Criteria

    SciTech Connect

    U.S. Department of Energy, Nevada Operations Office, Waste Acceptance Criteria

    1999-05-01

    This document provides the requirements, terms, and conditions under which the Nevada Test Site will accept low-level radioactive and mixed waste for disposal; and transuranic and transuranic mixed waste for interim storage at the Nevada Test Site.

  2. Quantifying geocode location error using GIS methods

    PubMed Central

    Strickland, Matthew J; Siffel, Csaba; Gardner, Bennett R; Berzen, Alissa K; Correa, Adolfo

    2007-01-01

    Background The Metropolitan Atlanta Congenital Defects Program (MACDP) collects maternal address information at the time of delivery for infants and fetuses with birth defects. These addresses have been geocoded by two independent agencies: (1) the Georgia Division of Public Health Office of Health Information and Policy (OHIP) and (2) a commercial vendor. Geographic information system (GIS) methods were used to quantify uncertainty in the two sets of geocodes using orthoimagery and tax parcel datasets. Methods We sampled 599 infants and fetuses with birth defects delivered during 1994–2002 with maternal residence in either Fulton or Gwinnett County. Tax parcel datasets were obtained from the tax assessor's offices of Fulton and Gwinnett County. High-resolution orthoimagery for these counties was acquired from the U.S. Geological Survey. For each of the 599 addresses we attempted to locate the tax parcel corresponding to the maternal address. If the tax parcel was identified the distance and the angle between the geocode and the residence were calculated. We used simulated data to characterize the impact of geocode location error. In each county 5,000 geocodes were generated and assigned their corresponding Census 2000 tract. Each geocode was then displaced at a random angle by a random distance drawn from the distribution of observed geocode location errors. The census tract of the displaced geocode was determined. We repeated this process 5,000 times and report the percentage of geocodes that resolved into incorrect census tracts. Results Median location error was less than 100 meters for both OHIP and commercial vendor geocodes; the distribution of angles appeared uniform. Median location error was approximately 35% larger in Gwinnett (a suburban county) relative to Fulton (a county with urban and suburban areas). Location error occasionally caused the simulated geocodes to be displaced into incorrect census tracts; the median percentage of geocodes resolving
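    The displacement procedure described in the Methods can be sketched as a small Monte Carlo. This is a toy version: the square-grid "tracts", the exponential distance distribution, and all names below are assumptions, not the study's actual data or boundaries.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def tract_of(points, tract_size=500.0):
        """Toy 'census tract' id: the square grid cell (in metres) a point falls in."""
        return np.floor(points / tract_size).astype(int)

    def misclassification_rate(n=5000, tract_size=500.0, median_error=100.0):
        """Displace each simulated geocode at a uniform random angle by a random
        distance (exponential, scaled so its median matches median_error), and
        report the fraction landing in a different cell ('incorrect tract')."""
        pts = rng.uniform(0, 10 * tract_size, size=(n, 2))
        dist = rng.exponential(median_error / np.log(2), size=n)
        ang = rng.uniform(0, 2 * np.pi, size=n)
        moved = pts + np.column_stack([dist * np.cos(ang), dist * np.sin(ang)])
        return np.mean(np.any(tract_of(pts, tract_size) != tract_of(moved, tract_size), axis=1))

    print(misclassification_rate())
    ```

    As in the study, the rate of tract misclassification grows with the typical location error: repeating the calculation with a larger median displacement produces a larger fraction of geocodes resolving into incorrect cells.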

  3. Analysis of ionospheric refraction error corrections for GRARR systems

    NASA Technical Reports Server (NTRS)

    Mallinckrodt, A. J.; Parker, H. C.; Berbert, J. H.

    1971-01-01

    A determination is presented of the ionospheric refraction correction requirements for the Goddard range and range rate (GRARR) S-band, modified S-band, very high frequency (VHF), and modified VHF systems. The relationships within these four systems are analyzed to show that the refraction corrections are the same for all four systems and to clarify the group and phase nature of these corrections. The analysis is simplified by recognizing that the range rate is equivalent to a carrier-phase range-change measurement. The equations for the range errors are given.

  4. Report on errors in pretransfusion testing from a tertiary care center: A step toward transfusion safety

    PubMed Central

    Sidhu, Meena; Meenia, Renu; Akhter, Naveen; Sawhney, Vijay; Irm, Yasmeen

    2016-01-01

    Introduction: Errors in the process of pretransfusion testing for blood transfusion can occur at any stage, from collection of the sample to administration of the blood component. The present study was conducted to analyze the errors that threaten patients' transfusion safety and the actual harm/serious adverse events that occurred to patients because of these errors. Materials and Methods: The prospective study was conducted in the Department of Transfusion Medicine, Shri Maharaja Gulab Singh Hospital, Government Medical College, Jammu, India from January 2014 to December 2014, a period of 1 year. Errors were defined as any deviation from established policies and standard operating procedures. A near-miss event was defined as an error that did not reach the patient. Location and time of occurrence of the events/errors were also noted. Results: A total of 32,672 requisitions for the transfusion of blood and blood components were received for typing and cross-matching. Of these, 26,683 products were issued to the various clinical departments. A total of 2,229 errors were detected over the 1-year period. Near-miss events constituted 53% of the errors, and actual harmful events due to errors occurred in 0.26% of the patients. The most frequent errors in clinical services were sample labeling errors (2.4% of all requisitions received), inappropriate requests for blood components (2%), and information on requisition forms not matching that on the sample (1.5%). In transfusion services, the most common event was accepting a sample in error, with a frequency of 0.5% of all requisitions. ABO-incompatible hemolytic reactions were the most frequent harmful event, with a frequency of 2.2/10,000 transfusions. Conclusion: Sample labeling, inappropriate requests, and samples received in error were the most frequent high-risk errors. PMID:27011670

  5. Study of an instrument for sensing errors in a telescope wavefront

    NASA Technical Reports Server (NTRS)

    Golden, L. J.; Shack, R. V.; Slater, D. N.

    1973-01-01

    Partial results are presented of theoretical and experimental investigations of different focal plane sensor configurations for determining the error in a telescope wavefront. The coarse range sensor and fine range sensors are used in the experimentation. The design of a wavefront error simulator is presented along with the Hartmann test, the shearing polarization interferometer, the Zernike test, and the Zernike polarization test.

  6. Acceptance Test Plan for ANSYS Software

    SciTech Connect

    CREA, B.A.

    2000-10-25

This plan governs the acceptance testing of the ANSYS software (Full Mechanical Release 5.5) for use on Project Hanford Management Contract (PHMC) computer systems (either UNIX or Microsoft Windows/NT). There are two phases to the acceptance testing covered by this test plan: executing the program in accordance with the guidance provided in the installation manuals; and ensuring that the results of the execution are consistent with the expected physical behavior of the system being modeled.

  7. Error-disturbance uncertainty relations studied in neutron optics

    NASA Astrophysics Data System (ADS)

    Sponar, Stephan; Sulyok, Georg; Demirel, Bulent; Hasegawa, Yuji

    2016-09-01

Heisenberg's uncertainty principle is probably the most famous statement of quantum physics, and its essential aspects are well described by formulations in terms of standard deviations. However, a naive Heisenberg-type error-disturbance relation is not valid. An alternative, universally valid relation was derived by Ozawa in 2003. Though universally valid, Ozawa's relation is not optimal. Recently, Branciard derived a tight error-disturbance uncertainty relation (EDUR) describing the optimal trade-off between error and disturbance. Here, we report a neutron-optical experiment that records the error of a spin-component measurement, as well as the disturbance caused on another spin component, to test EDURs. Applying a new measurement procedure referred to as the two-state method, we demonstrate that Heisenberg's original EDUR is violated while Ozawa's and Branciard's EDURs hold over a wide range of experimental parameters.
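The relations under test can be stated compactly. As a sketch (with ε the measurement error on A, η the disturbance on B, σ the standard deviation, and A, B the two spin components; Branciard's tight relation has a more involved form and is omitted here):

```latex
% Naive Heisenberg-type error-disturbance relation (violated in the experiment):
\varepsilon(A)\,\eta(B) \;\ge\; \tfrac{1}{2}\bigl|\langle[A,B]\rangle\bigr|

% Ozawa's universally valid relation (2003):
\varepsilon(A)\,\eta(B) \;+\; \varepsilon(A)\,\sigma(B) \;+\; \sigma(A)\,\eta(B)
  \;\ge\; \tfrac{1}{2}\bigl|\langle[A,B]\rangle\bigr|
```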

  8. Estimation of projection errors in human ocular fundus imaging.

    PubMed

    Doelemeyer, A; Petrig, B L

    2000-03-01

    Photogrammetric analysis of features in human ocular fundus images is affected by various sources of errors, for example aberrations of the camera and eye optics. Another--usually disregarded--type of distortion arises from projecting the near spherical shape of the fundus onto a planar imaging device. In this paper we quantify such projection errors based on geometrical analysis of the reduced model eye imaged by a pinhole camera. The projection error found near the edge of a 50 degrees fundus image is 24%. In addition, the influence of axial ametropia is investigated for both myopia and hyperopia. The projection errors found in hyperopia are similar to those in emmetropia, but decrease in myopia. Spherical as well as ellipsoidal eye shapes were used in the above calculation and their effect was compared. Our results suggest that the simple spherical eye shape is sufficient for correcting projection distortions within a range of ametropia from -5 to +5 diopters.

  9. Ranging/tracking system for proximity operations

    NASA Technical Reports Server (NTRS)

    Nilsen, P.; Udalov, S.

    1982-01-01

The hardware development and testing phase of a hand-held radar for ranging and tracking in Shuttle proximity operations is considered. The radar is to measure range to a 3-sigma accuracy of 1 m (3.28 ft) out to a maximum range of 1850 m (6000 ft), and velocity to a 3-sigma accuracy of 0.03 m/s (0.1 ft/s). Its size and weight are similar to those of the hand-held radars frequently used by motorcycle police officers. Meeting these goals for a target in free space proved very difficult in the testing program; however, at a range of approximately 700 m, the 3-sigma range error was found to be 0.96 m. Much of this error is believed to be due to clutter in the test environment. As an example of the velocity accuracy, at a range of 450 m a 3-sigma velocity error of 0.02 m/s was measured. The principles of the radar and recommended changes to its design are given. Analyses performed in support of the design process, the actual circuit diagrams, and the software listing are included.

  10. Reducing Error in Mail Surveys. ERIC Digest.

    ERIC Educational Resources Information Center

    Cui, Weiwei

    This Digest describes four types of errors in mail surveys and summarizes the ways they can be reduced. Any one of these sources of error can make survey results unacceptable. Sampling error is examined through inferential statistics applied to sample survey results. In general, increasing sample size will decrease sampling error when simple…
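The Digest's point that increasing sample size decreases sampling error can be sketched with the standard error of a sample proportion, a textbook formula not taken from the Digest itself; the sample sizes below are purely illustrative.

```python
import math

def standard_error(p: float, n: int) -> float:
    """Standard error of a sample proportion p under simple random sampling of size n."""
    return math.sqrt(p * (1 - p) / n)

# Quadrupling the sample size halves the standard error of a 50% proportion.
for n in (100, 400, 1600):
    print(n, round(standard_error(0.5, n), 4))  # 0.05, 0.025, 0.0125
```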

  11. Error Correction in Oral Classroom English Teaching

    ERIC Educational Resources Information Center

    Jing, Huang; Xiaodong, Hao; Yu, Liu

    2016-01-01

    As is known to all, errors are inevitable in the process of language learning for Chinese students. Should we ignore students' errors in learning English? In common with other questions, different people hold different opinions. All teachers agree that errors students make in written English are not allowed. For the errors students make in oral…

  12. 5 CFR 1601.34 - Error correction.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 3 2011-01-01 2011-01-01 false Error correction. 1601.34 Section 1601.34... Contribution Allocations and Interfund Transfer Requests § 1601.34 Error correction. Errors in processing... in the wrong investment fund, will be corrected in accordance with the error correction...

  13. 5 CFR 1601.34 - Error correction.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Error correction. 1601.34 Section 1601.34... Contribution Allocations and Interfund Transfer Requests § 1601.34 Error correction. Errors in processing... in the wrong investment fund, will be corrected in accordance with the error correction...

  14. GP-B error modeling and analysis

    NASA Technical Reports Server (NTRS)

    Hung, J. C.

    1982-01-01

    Individual source errors and their effects on the accuracy of the Gravity Probe B (GP-B) experiment were investigated. Emphasis was placed on: (1) the refinement of source error identification and classifications of error according to their physical nature; (2) error analysis for the GP-B data processing; and (3) measurement geometry for the experiment.

  15. Discretization vs. Rounding Error in Euler's Method

    ERIC Educational Resources Information Center

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…

  16. The Sources of Error in Spanish Writing.

    ERIC Educational Resources Information Center

    Justicia, Fernando; Defior, Sylvia; Pelegrina, Santiago; Martos, Francisco J.

    1999-01-01

Determines the pattern of errors in Spanish spelling. Analyzes and proposes a classification system for the errors made by children in the initial stages of the acquisition of spelling skills. Finds that the diverse forms of only 20 Spanish words produce 36% of the spelling errors in Spanish, and that substitution is the most frequent type of error. (RS)

  17. Internal Correction Of Errors In A DRAM

    NASA Technical Reports Server (NTRS)

    Zoutendyk, John A.; Watson, R. Kevin; Schwartz, Harvey R.; Nevill, Leland R.; Hasnain, Zille

    1989-01-01

    Error-correcting Hamming code built into circuit. A 256 K dynamic random-access memory (DRAM) circuit incorporates Hamming error-correcting code in its layout. Feature provides faster detection and correction of errors at less cost in amount of equipment, operating time, and software. On-chip error-correcting feature also makes new DRAM less susceptible to single-event upsets.
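The abstract does not give the layout of the on-chip code; as an illustrative sketch of the single-error correction a Hamming code provides, here is a minimal Hamming(7,4) encoder/corrector (the bit layout and function names are our own, not the DRAM's actual design):

```python
def hamming74_encode(d):
    """Encode 4 data bits (0/1) into a 7-bit Hamming codeword.
    Codeword layout (1-indexed positions): p1 p2 d1 p4 d2 d3 d4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4  # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parity checks, flip the single bit the syndrome points at
    (if any), and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4  # 1-indexed error position, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

# Flip any single bit (e.g. a single-event upset): the decoder recovers the data.
data = [1, 0, 1, 1]
word = hamming74_encode(data)
word[5] ^= 1
assert hamming74_correct(word) == data
```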

  18. Error-Related Psychophysiology and Negative Affect

    ERIC Educational Resources Information Center

    Hajcak, G.; McDonald, N.; Simons, R.F.

    2004-01-01

    The error-related negativity (ERN/Ne) and error positivity (Pe) have been associated with error detection and response monitoring. More recently, heart rate (HR) and skin conductance (SC) have also been shown to be sensitive to the internal detection of errors. An enhanced ERN has consistently been observed in anxious subjects and there is some…

  19. Calculation of magnetic error fields in hybrid insertion devices

    NASA Astrophysics Data System (ADS)

    Savoy, R.; Halbach, K.; Hassenzahl, W.; Hoyer, E.; Humphries, D.; Kincaid, B.

    1990-05-01

    The Advanced Light Source (ALS) at the Lawrence Berkeley Laboratory requires insertion devices with fields sufficiently accurate to take advantage of the small emittance of the ALS electron beam. To maintain the spectral performance of the synchrotron radiation and to limit steering effects on the electron beam these errors must be smaller than 0.25%. This paper develops a procedure for calculating the steering error due to misalignment of the easy axis of the permanent-magnet material. The procedure is based on a three-dimensional theory of the design of hybrid insertion devices developed by one of us. The acceptable tolerance for easy axis misalignment is found for a 5-cm-period undulator proposed for the ALS.

  20. A Posteriori Correction of Forecast and Observation Error Variances

    NASA Technical Reports Server (NTRS)

    Rukhovets, Leonid

    2005-01-01

The proposed method of total observation and forecast error variance correction is based on the assumption that the "observed-minus-forecast" residuals (O-F) are normally distributed, where O is an observed value and F is usually a short-term model forecast. This assumption can be accepted for several types of observations (except humidity) which are not grossly in error. The degree of nearness to a normal distribution can be estimated by the skewness (lack of symmetry) a3 = mu3/sigma^3 and the kurtosis a4 = mu4/sigma^4 - 3, where mu_i is the i-th central moment and sigma is the standard deviation. It is well known that for a normal distribution a3 = a4 = 0.
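The two normality diagnostics can be computed directly from a residual series; the sketch below applies the standard moment formulas to synthetic O-F residuals (the data are simulated, not from the paper).

```python
import random
import statistics

def skewness_kurtosis(xs):
    """Sample skewness a3 = mu3/sigma^3 and excess kurtosis a4 = mu4/sigma^4 - 3."""
    m = statistics.fmean(xs)
    sigma = statistics.pstdev(xs)
    mu3 = statistics.fmean([(x - m) ** 3 for x in xs])
    mu4 = statistics.fmean([(x - m) ** 4 for x in xs])
    return mu3 / sigma ** 3, mu4 / sigma ** 4 - 3

# Synthetic normally distributed O-F residuals: both statistics should be near zero.
random.seed(0)
residuals = [random.gauss(0.0, 1.5) for _ in range(100_000)]
a3, a4 = skewness_kurtosis(residuals)
```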

  1. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, W. S.; Burkhart, J. F.; Kylling, A.

    2015-08-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can respectively introduce up to 2.6, 7.7, and 12.8 % error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.
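The dominance of the direct component can be illustrated with a simplified direct-beam-only model. This is our own sketch, not the paper's simulation: it ignores diffuse light entirely and takes the worst-case azimuth (sensor tilted toward the sun), so it yields somewhat larger errors than the paper's quoted 2.6, 7.7, and 12.8%.

```python
import math

def direct_beam_tilt_error(sza_deg: float, tilt_deg: float) -> float:
    """Relative error (%) in measured direct irradiance for a sensor tilted
    toward the sun: 100 * (cos(sza - tilt)/cos(sza) - 1). Diffuse light ignored."""
    sza, tilt = math.radians(sza_deg), math.radians(tilt_deg)
    return 100.0 * (math.cos(sza - tilt) / math.cos(sza) - 1.0)

# At a 60 degree solar zenith angle the error grows quickly with tilt.
for tilt in (1, 3, 5):
    print(tilt, round(direct_beam_tilt_error(60, tilt), 1))
```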

  2. An investigation of error correcting techniques for OMV data

    NASA Technical Reports Server (NTRS)

    Ingels, Frank; Fryer, John

    1992-01-01

    Papers on the following topics are presented: considerations of testing the Orbital Maneuvering Vehicle (OMV) system with CLASS; OMV CLASS test results (first go around); equivalent system gain available from R-S encoding versus a desire to lower the power amplifier from 25 watts to 20 watts for OMV; command word acceptance/rejection rates for OMV; a memo concerning energy-to-noise ratio for the Viterbi-BSC Channel and the impact of Manchester coding loss; and an investigation of error correcting techniques for OMV and Advanced X-ray Astrophysics Facility (AXAF).

  3. Writing biomedical manuscripts part II: standard elements and common errors.

    PubMed

    Ohwovoriole, A E

    2011-01-01

It is incumbent on, satisfying to, and rewarding for researchers to have their work published. Many workers are denied this satisfaction because of their inability to secure acceptance after what they consider good research. Several reasons account for the rejection or delay of manuscripts submitted to biomedical journals. Research that is poorly conceptualised and/or conducted will fail to fly, but poor writing up of the completed work accounts for the greater majority of manuscripts that get rejected. The chances of acceptance can be increased by paying attention to the standard elements and avoiding the common errors that lead to rejection. Cultivating the habit of structuring every part of the manuscript greatly improves the chances of acceptance. The final paper should follow the universally accepted pattern of aim, introduction, methods, results, and discussion. The sequence of putting the paper together differs from the order of the final form: start with the tables and figures for the results section, followed by the final version of the methods section. The title and abstract should be about the last parts to be written. You need to have the results sorted out early, as the rest of what you write is largely dictated by them. Revise the work several times and get co-authors and third parties to help read it over. To succeed, follow the universal rules of writing and those of the target journal, while avoiding the errors that are easily amenable to correction before you submit your manuscript.

  4. ERROR ANALYSIS OF COMPOSITE SHOCK INTERACTION PROBLEMS.

    SciTech Connect

Lee, T.; Mu, Y.; Zhao, M.; Glimm, J.; Li, X.; Ye, K.

    2004-07-26

    We propose statistical models of uncertainty and error in numerical solutions. To represent errors efficiently in shock physics simulations we propose a composition law. The law allows us to estimate errors in the solutions of composite problems in terms of the errors from simpler ones as discussed in a previous paper. In this paper, we conduct a detailed analysis of the errors. One of our goals is to understand the relative magnitude of the input uncertainty vs. the errors created within the numerical solution. In more detail, we wish to understand the contribution of each wave interaction to the errors observed at the end of the simulation.

  5. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Kasami, Tadao; Fujiwara, Toru; Takata, Toyoo; Lin, Shu

    1988-01-01

A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error-correcting codes, called the inner and outer codes. Its error performance is analyzed for a binary symmetric channel with bit-error rate epsilon less than 1/2. It is shown that, if the inner and outer codes are chosen properly, high reliability can be attained even for a high channel bit-error rate. Specific examples with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities evaluated. They all provide high reliability even for high bit-error rates, say 0.1-0.01. Several example schemes are being considered for satellite and spacecraft downlink error control.
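The cascading idea can be illustrated with the simplest possible component code: an n-fold repetition code with majority decoding over a binary symmetric channel. The paper's actual schemes use far stronger codes (e.g. Reed-Solomon outer codes); this toy stand-in only shows how each stage of a cascade drives the residual error rate down.

```python
from math import comb

def bsc_repetition_error(p: float, n: int) -> float:
    """Probability that majority decoding of an n-fold repetition code fails
    on a binary symmetric channel with bit-error rate p (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range((n + 1) // 2, n + 1))

# Toy cascade on a very noisy channel (p = 0.1): the inner stage cleans up the
# channel, and a second stage (standing in for a stronger outer code) cleans up
# the inner stage's residual errors.
raw = 0.1
after_inner = bsc_repetition_error(raw, 5)          # ~8.6e-3
after_outer = bsc_repetition_error(after_inner, 5)  # orders of magnitude smaller
```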

  6. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo

    1986-01-01

A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error-correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit-error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit-error rates. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.

  7. Improving visual estimates of cervical spine range of motion.

    PubMed

    Hirsch, Brandon P; Webb, Matthew L; Bohl, Daniel D; Fu, Michael; Buerba, Rafael A; Gruskay, Jordan A; Grauer, Jonathan N

    2014-11-01

    Cervical spine range of motion (ROM) is a common measure of cervical conditions, surgical outcomes, and functional impairment. Although ROM is routinely assessed by visual estimation in clinical practice, visual estimates have been shown to be unreliable and inaccurate. Reliable goniometers can be used for assessments, but the associated costs and logistics generally limit their clinical acceptance. To investigate whether training can improve visual estimates of cervical spine ROM, we asked attending surgeons, residents, and medical students at our institution to visually estimate the cervical spine ROM of healthy subjects before and after a training session. This training session included review of normal cervical spine ROM in 3 planes and demonstration of partial and full motion in 3 planes by multiple subjects. Estimates before, immediately after, and 1 month after this training session were compared to assess reliability and accuracy. Immediately after training, errors decreased by 11.9° (flexion-extension), 3.8° (lateral bending), and 2.9° (axial rotation). These improvements were statistically significant. One month after training, visual estimates remained improved, by 9.5°, 1.6°, and 3.1°, respectively, but were statistically significant only in flexion-extension. Although the accuracy of visual estimates can be improved, clinicians should be aware of the limitations of visual estimates of cervical spine ROM. Our study results support scrutiny of visual assessment of ROM as a criterion for diagnosing permanent impairment or disability.

  8. Ordering genes: controlling the decision-error probabilities.

    PubMed Central

    Rogatko, A; Zacks, S

    1993-01-01

Determination of the relative gene order on chromosomes is of critical importance in the construction of human gene maps. In this paper we develop a sequential algorithm for gene ordering. We start by comparing three sequential procedures to order three genes on the basis of Bayesian posterior probabilities, maximum-likelihood ratio, and minimal recombinant class. In the second part of the paper we extend the sequential procedure based on the posterior probabilities to the general case of g genes. We present a theorem stating that the predicted average probability of committing a decision error, associated with a Bayesian sequential procedure that accepts the hypothesis of a gene-order configuration with posterior probability equal to or greater than pi*, is smaller than 1 - pi*. This theorem holds irrespective of the number of genes, the genetic model, and the source of genetic information. The theorem is an extension of a classical result of Wald concerning the sum of the actual and the nominal error probabilities in the sequential probability ratio test of two hypotheses. A stepwise strategy for ordering a large number of genes, with control over the decision-error probabilities, is discussed. An asymptotic approximation is provided that facilitates the calculation, with existing computer software for gene mapping, of the posterior probabilities of an order and of the error probabilities. We illustrate with some simulations that the stepwise ordering is an efficient procedure. PMID:8488844
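The posterior-threshold rule in the theorem can be sketched with a toy sequential decision among Bernoulli models standing in for gene-order hypotheses. The model, parameter values, and pi* below are our own illustration, not the paper's genetic setup; the point is only that accepting at posterior >= pi* keeps the average decision-error rate below 1 - pi*.

```python
import random

def sequential_decide(sample, thetas, pi_star=0.95, max_n=10_000):
    """Observe Bernoulli draws one at a time, update posteriors over the
    candidate parameters (uniform prior), and accept the hypothesis whose
    posterior first reaches pi_star."""
    post = [1.0 / len(thetas)] * len(thetas)
    for _ in range(max_n):
        x = sample()
        post = [p * (t if x else 1 - t) for p, t in zip(post, thetas)]
        z = sum(post)
        post = [p / z for p in post]
        best = max(range(len(post)), key=post.__getitem__)
        if post[best] >= pi_star:
            return best
    return best

# Empirical check of the bound: with truth drawn from the prior, the decision
# error rate should stay below 1 - pi_star = 0.05 on average.
random.seed(1)
thetas, trials, errors = [0.1, 0.25, 0.4], 500, 0
for _ in range(trials):
    true_idx = random.randrange(3)  # truth drawn from the uniform prior
    decision = sequential_decide(lambda: random.random() < thetas[true_idx], thetas)
    errors += decision != true_idx
error_rate = errors / trials
```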

  9. Diagnostic errors in interactive telepathology.

    PubMed

    Stauch, G; Schweppe, K W; Kayser, K

    2000-01-01

Telepathology (TP), a service providing pathology at a distance, is now widely used and is integrated into the daily workflow of numerous pathologists. In Germany, 15 departments of pathology currently use the telepathology technique for frozen section service; however, a commonly recognised quality standard for diagnostic accuracy is still missing. As a first step, the Aurich working group uses a TP system for frozen section service in order to analyse the frequency and sources of errors in TP frozen section diagnoses, evaluating the quality of frozen section slides, the important components of image quality, and their influence on diagnostic accuracy. The authors point to the necessity of an optimal training program for all participants in this service in order to reduce the risk of diagnostic errors. In addition, there is a need for optimal cooperation among all partners involved in the TP service.

  10. Negligence, genuine error, and litigation

    PubMed Central

    Sohn, David H

    2013-01-01

    Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment into more rational oversight systems, such as health courts or no-fault systems may reap both quantitative and qualitative benefits for a less costly and safer health system. PMID:23426783

  11. [Criminal prosecution for medical errors].

    PubMed

    Legemaate, J

    2011-01-01

    A policy document providing instructions on the decision to prosecute in medical errors came into effect on November 1st 2010. In this document the Dutch Public Prosecution Service has attempted to make clear which criteria should be adopted when deciding to prosecute in the case of a medical error. There have also been other recent developments in this context: the public prosecutor can now demand access to medical files in certain, highly exceptional circumstances, such as when patients are themselves suspected of committing a criminal offence; and the Dutch Health Care Inspectorate may only pass on a patient's medical file to the public prosecutor if the prosecutor is already in possession of a copy of it. The new policy document leaves several questions unanswered. It does not consider the criminal liability of health care institutions, for example, and there is too much focus on the responsibilities of individual health care workers.

  12. Managing human error in aviation.

    PubMed

    Helmreich, R L

    1997-05-01

    Crew resource management (CRM) programs were developed to address team and leadership aspects of piloting modern airplanes. The goal is to reduce errors through team work. Human factors research and social, cognitive, and organizational psychology are used to develop programs tailored for individual airlines. Flight crews study accident case histories, group dynamics, and human error. Simulators provide pilots with the opportunity to solve complex flight problems. CRM in the simulator is called line-oriented flight training (LOFT). In automated cockpits CRM promotes the idea of automation as a crew member. Cultural aspects of aviation include professional, business, and national culture. The aviation CRM model has been adapted for training surgeons and operating room staff in human factors.

  13. Robot learning and error correction

    NASA Technical Reports Server (NTRS)

    Friedman, L.

    1977-01-01

    A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot in a pre-existing structure whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process and learning may be applied to avoiding the errors.

  14. Evaluation of the Regional Atmospheric Modeling System in the Eastern Range Dispersion Assessment System

    NASA Technical Reports Server (NTRS)

    Case, Jonathan

    2000-01-01

    The Applied Meteorology Unit is conducting an evaluation of the Regional Atmospheric Modeling System (RAMS) contained within the Eastern Range Dispersion Assessment System (ERDAS). ERDAS provides emergency response guidance for operations at the Cape Canaveral Air Force Station and the Kennedy Space Center in the event of an accidental hazardous material release or aborted vehicle launch. The prognostic data from RAMS is available to ERDAS for display and is used to initialize the 45th Range Safety (45 SW/SE) dispersion model. Thus, the accuracy of the 45 SW/SE dispersion model is dependent upon the accuracy of RAMS forecasts. The RAMS evaluation task consists of an objective and subjective component for the Florida warm and cool seasons of 1999-2000. The objective evaluation includes gridded and point error statistics at surface and upper-level observational sites, a comparison of the model errors to a coarser grid configuration of RAMS, and a benchmark of RAMS against the widely accepted Eta model. The warm-season subjective evaluation involves a verification of the onset and movement of the Florida east coast sea breeze and RAMS forecast precipitation. This interim report provides a summary of the RAMS objective and subjective evaluation for the 1999 Florida warm season only.

  15. Developing an Acceptability Assessment of Preventive Dental Treatments

    PubMed Central

    Hyde, Susan; Gansky, Stuart A.; Gonzalez-Vargas, Maria J.; Husting, Sheila R.; Cheng, Nancy F.; Millstein, Susan G.; Adams, Sally H.

    2012-01-01

    Objectives Early childhood caries (ECC) is very prevalent among young Hispanic children. ECC is amenable to a variety of preventive procedures, yet many Hispanic families underutilize dental services. Acceptability research may assist in health care planning and resource allocation by identifying patient preferences among efficacious treatments with the goal of improving their utilization. The purposes of this study were (a) to develop a culturally competent acceptability assessment instrument, directed toward the caregivers of young Hispanic children, for five preventive dental treatments for ECC and (b) to test the instrument's reliability and validity. Methods An instrument of five standard treatments known to prevent ECC was developed, translated, reviewed by focus groups, and pilot tested, then tested for reliability. The instrument included illustrated cards, brief video clips, and samples of the treatments and was culturally appropriate for low-income Hispanic caregivers. In addition to determining the acceptability of the five treatments individually, the treatments were also presented as paired comparisons. Results Focus groups and debriefing interviews following the pilot tests established that the instrument has good face validity. The illustrated cards, product samples, and video demonstrations of the five treatments resulted in an instrument possessing good content validity. The instrument has good to excellent test–retest reliability, with identical time 1–time 2 responses for each of the five treatments 92 percent of the time (range 87 to 97 percent), and the same treatment of the paired comparisons preferred 75 percent of the time (range 61 to 90 percent). Conclusions The acceptability instrument described is reliable and valid and may be useful in program planning efforts to identify and increase the utilization of preferred ECC preventive treatments for target populations. PMID:18662256

  16. Language comprehension errors: A further investigation

    NASA Astrophysics Data System (ADS)

    Clarkson, Philip C.

    1991-06-01

    Comprehension errors made when attempting mathematical word problems have been noted as one of the high frequency categories in error analysis. This error category has been assumed to be language based. The study reported here provides some support for the linkage of comprehension errors to measures of language competency. Further, there is evidence that the frequency of such errors is related to competency in both the mother tongue and the language of instruction for bilingual students.

  17. Modeling the glucose sensor error.

    PubMed

    Facchinetti, Andrea; Del Favero, Simone; Sparacino, Giovanni; Castle, Jessica R; Ward, W Kenneth; Cobelli, Claudio

    2014-03-01

Continuous glucose monitoring (CGM) sensors are portable devices, employed in the treatment of diabetes, able to measure glucose concentration in the interstitium almost continuously for several days. However, CGM sensors are not as accurate as standard blood glucose (BG) meters. Studies comparing CGM versus BG demonstrated that CGM is affected by distortion due to diffusion processes and by time-varying systematic under/overestimations due to calibrations and sensor drifts. In addition, measurement noise is also present in CGM data. A reliable model of the different components of CGM inaccuracy with respect to BG (briefly, "sensor error") is important in several applications, e.g., design of optimal digital filters for denoising of CGM data, real-time glucose prediction, insulin dosing, and artificial pancreas control algorithms. The aim of this paper is to propose an approach to describe CGM sensor error by exploiting n multiple simultaneous CGM recordings. The model of sensor error includes a model of the blood-to-interstitial glucose diffusion process, a linear time-varying model to account for calibration and sensor drift in time, and an autoregressive model to describe the additive measurement noise. Model orders and parameters are identified from the n simultaneous CGM sensor recordings and BG references. While the model is applicable to any CGM sensor, here it is used on a database of 36 datasets of type 1 diabetic adults in which n = 4 Dexcom SEVEN Plus CGM time series and frequent BG references were available simultaneously. Results demonstrate that multiple simultaneous sensor data and proper modeling allow dissecting the sensor error into its different components, distinguishing those related to physiology from those related to technology.
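As a toy sketch of the decomposition the paper describes, the simulation below layers a slowly varying calibration gain (drift) and AR(1) measurement noise on a reference BG series. All parameter values are illustrative, and the blood-to-interstitium diffusion stage of the paper's model is omitted for brevity.

```python
import random

def simulate_cgm(bg, gain=1.05, drift_per_step=0.0002, phi=0.8, noise_sd=2.0, seed=0):
    """Toy CGM trace (mg/dL): time-varying gain (calibration error plus drift)
    and additive AR(1) noise applied to a reference BG series.
    NOTE: illustrative parameters; the diffusion stage is intentionally omitted."""
    rng = random.Random(seed)
    noise, out = 0.0, []
    for k, g in enumerate(bg):
        noise = phi * noise + rng.gauss(0.0, noise_sd)  # AR(1) measurement noise
        out.append((gain + drift_per_step * k) * g + noise)
    return out

# Synthetic one-day reference trace at 5-minute sampling (288 points).
reference = [100 + 20 * (k % 12) / 12 for k in range(288)]
cgm = simulate_cgm(reference)
```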

  18. Technical errors in MR arthrography.

    PubMed

    Hodler, Juerg

    2008-01-01

    This article discusses potential technical problems of MR arthrography. It starts with contraindications, followed by problems relating to injection technique, contrast material and MR imaging technique. For some of the aspects discussed, there is only little published evidence. Therefore, the article is based on the personal experience of the author and on local standards of procedures. Such standards, as well as medico-legal considerations, may vary from country to country. Contraindications for MR arthrography include pre-existing infection, reflex sympathetic dystrophy and possibly bleeding disorders, avascular necrosis and known allergy to contrast media. Errors in injection technique may lead to extra-articular collection of contrast agent or to contrast agent leaking from the joint space, which may cause diagnostic difficulties. Incorrect concentrations of contrast material influence image quality and may also lead to non-diagnostic examinations. Errors relating to MR imaging include delays between injection and imaging and inadequate choice of sequences. Potential solutions to the various possible errors are presented.

  19. Error control in the GCF: An information-theoretic model for error analysis and coding

    NASA Technical Reports Server (NTRS)

    Adeyemi, O.

    1974-01-01

    The structure of data-transmission errors within the Ground Communications Facility is analyzed in order to provide error control (both forward error correction and feedback retransmission) for improved communication. Emphasis is placed on constructing a theoretical model of errors and obtaining from it all the relevant statistics for error control. No specific coding strategy is analyzed, but references to the significance of certain error pattern distributions, as predicted by the model, to error correction are made.

  20. 5 CFR 846.724 - Belated elections and correction of administrative errors.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... administrative errors. (a) Belated elections. The employing office may accept a belated election of FERS coverage if it determines that— (1) The employing office did not provide adequate notice to the employee in a timely manner; (2) The agency did not provide access to the FERS Transfer Handbook to the employee in...