Sample records for eliminate errors due

  1. Elimination of Emergency Department Medication Errors Due To Estimated Weights.

    PubMed

    Greenwalt, Mary; Griffen, David; Wilkerson, Jim

    2017-01-01

    From 7/2014 through 6/2015, 10 emergency department (ED) medication dosing errors were reported through the electronic incident reporting system of an urban academic medical center. Analysis of these medication errors identified inaccurate estimated patient weights as the root cause. The goal of this project was to reduce weight-based dosing medication errors due to inaccurate estimated weights on patients presenting to the ED. Chart review revealed that 13.8% of estimated weights documented on admitted ED patients varied more than 10% from the actual admission weights subsequently recorded. A random sample of 100 charts containing estimated weights revealed 2 previously unreported significant medication dosage errors (a significant-error rate of 0.02). Key improvements included removing barriers to weighing ED patients, storytelling to engage staff and change culture, and removal of the estimated weight documentation field from the ED electronic health record (EHR) forms. With these improvements, estimated weights on ED patients, and the resulting medication errors, were eliminated.

  2. Stem revenue losses with effective CDM management.

    PubMed

    Alwell, Michael

    2003-09-01

    Effective CDM management not only minimizes revenue losses due to denied claims, but also helps eliminate administrative costs associated with correcting coding errors. Accountability for CDM management should be assigned to a single individual, who ideally reports to the CFO or high-level finance director. If your organization is prone to making billing errors due to CDM deficiencies, you should consider purchasing CDM software to help you manage your CDM.

  3. Automatic readout micrometer

    DOEpatents

    Lauritzen, Ted

    1982-01-01

    A measuring system is disclosed for surveying and very accurately positioning objects with respect to a reference line. A principal use of this surveying system is for accurately aligning the electromagnets which direct a particle beam emitted from a particle accelerator. Prior art surveying systems require highly skilled surveyors. Prior art systems include, for example, optical surveying systems which are susceptible to operator reading errors, and celestial navigation-type surveying systems, with their inherent complexities. The present invention provides an automatic readout micrometer which can very accurately measure distances. The invention has a simplicity of operation which practically eliminates the possibilities of operator optical reading error, owing to the elimination of traditional optical alignments for making measurements. The invention has an extendable arm which carries a laser surveying target. The extendable arm can be continuously positioned over its entire length of travel by either a coarse or fine adjustment without having the fine adjustment outrun the coarse adjustment until a reference laser beam is centered on the target as indicated by a digital readout. The length of the micrometer can then be accurately and automatically read by a computer and compared with a standardized set of alignment measurements. Due to its construction, the micrometer eliminates any errors due to temperature changes when the system is operated within a standard operating temperature range.

  4. Automatic readout micrometer

    DOEpatents

    Lauritzen, T.

    A measuring system is described for surveying and very accurately positioning objects with respect to a reference line. A principal use of this surveying system is for accurately aligning the electromagnets which direct a particle beam emitted from a particle accelerator. Prior art surveying systems require highly skilled surveyors. Prior art systems include, for example, optical surveying systems which are susceptible to operator reading errors, and celestial navigation-type surveying systems, with their inherent complexities. The present invention provides an automatic readout micrometer which can very accurately measure distances. The invention has a simplicity of operation which practically eliminates the possibilities of operator optical reading error, owing to the elimination of traditional optical alignments for making measurements. The invention has an extendable arm which carries a laser surveying target. The extendable arm can be continuously positioned over its entire length of travel by either a coarse or fine adjustment without having the fine adjustment outrun the coarse adjustment until a reference laser beam is centered on the target as indicated by a digital readout. The length of the micrometer can then be accurately and automatically read by a computer and compared with a standardized set of alignment measurements. Due to its construction, the micrometer eliminates any errors due to temperature changes when the system is operated within a standard operating temperature range.

  5. Flux Sampling Errors for Aircraft and Towers

    NASA Technical Reports Server (NTRS)

    Mahrt, Larry

    1998-01-01

    Various errors and influences leading to differences between tower- and aircraft-measured fluxes are surveyed. This survey is motivated by reports in the literature that aircraft fluxes are sometimes smaller than tower-measured fluxes. Both tower and aircraft flux errors are larger with surface heterogeneity due to several independent effects. Surface heterogeneity may cause tower flux errors to increase with decreasing wind speed. Techniques to assess flux sampling error are reviewed. Such error estimates suffer various degrees of inapplicability in real geophysical time series due to nonstationarity of tower time series (or inhomogeneity of aircraft data). A new measure for nonstationarity is developed that eliminates assumptions on the form of the nonstationarity inherent in previous methods. When this nonstationarity measure becomes large, the surface energy imbalance increases sharply. Finally, strategies for obtaining adequate flux sampling using repeated aircraft passes and grid patterns are outlined.

  6. Application of the phase shifting diffraction interferometer for measuring convex mirrors and negative lenses

    DOEpatents

    Sommargren, Gary E.; Campbell, Eugene W.

    2004-03-09

    To measure a convex mirror, a reference beam and a measurement beam are both provided through a single optical fiber. A positive auxiliary lens is placed in the system to give a converging wavefront onto the convex mirror under test. A measurement is taken that includes the aberrations of the convex mirror as well as the errors due to two transmissions through the positive auxiliary lens. A second measurement provides the information to eliminate this error. A negative lens can also be measured in a similar way. Again, there are two measurement set-ups. A reference beam is provided from a first optical fiber and a measurement beam is provided from a second optical fiber. A positive auxiliary lens is placed in the system to provide a converging wavefront from the reference beam onto the negative lens under test. The measurement beam is combined with the reference wavefront and is analyzed by standard methods. This measurement includes the aberrations of the negative lens, as well as the errors due to a single transmission through the positive auxiliary lens. A second measurement provides the information to eliminate this error.

  7. Application Of The Phase Shifting Diffraction Interferometer For Measuring Convex Mirrors And Negative Lenses

    DOEpatents

    Sommargren, Gary E.; Campbell, Eugene W.

    2005-06-21

    To measure a convex mirror, a reference beam and a measurement beam are both provided through a single optical fiber. A positive auxiliary lens is placed in the system to give a converging wavefront onto the convex mirror under test. A measurement is taken that includes the aberrations of the convex mirror as well as the errors due to two transmissions through the positive auxiliary lens. A second measurement provides the information to eliminate this error. A negative lens can also be measured in a similar way. Again, there are two measurement set-ups. A reference beam is provided from a first optical fiber and a measurement beam is provided from a second optical fiber. A positive auxiliary lens is placed in the system to provide a converging wavefront from the reference beam onto the negative lens under test. The measurement beam is combined with the reference wavefront and is analyzed by standard methods. This measurement includes the aberrations of the negative lens, as well as the errors due to a single transmission through the positive auxiliary lens. A second measurement provides the information to eliminate this error.

  8. Interferometric rotation sensor

    NASA Technical Reports Server (NTRS)

    Walsh, T. M.

    1972-01-01

    Sensor generates interference fringes varying in number (horizontally and vertically) as a function of the total angular deviation relative to the line-of-sight axis. Device eliminates zero- or null-shift errors due to lack of electrical circuitry stability.

  9. A path planning method used in fluid jet polishing eliminating lightweight mirror imprinting effect

    NASA Astrophysics Data System (ADS)

    Li, Wenzong; Fan, Bin; Shi, Chunyan; Wang, Jia; Zhuo, Bin

    2014-08-01

    With the development of space technology, optical system designs tend toward large-aperture lightweight mirrors with high dimension-to-thickness ratios. However, when the lightweight mirror's PV value is less than λ/10, the surface obviously shows a wavy imprinting effect. The imprinting effect introduced by head-tool pressure has become a technological barrier in high-precision lightweight mirror manufacturing. Fluid jet polishing can exclude outside pressure. The machining tracks commonly used at present are grating-type, screw-type, and pseudo-random paths. At the edges of the imprinting error, the speed of adjacent path points changes too quickly, which makes it hard for the machine to respond promptly, brings about new path errors, and increases the polishing time due to superfluous path segments. This paper presents a new path planning method to eliminate the imprinting effect. Simulation results show that the improved grating path eliminates the imprinting effect better than the general path.

  10. Contributions to the problem of piezoelectric accelerometer calibration. [using lock-in voltmeter

    NASA Technical Reports Server (NTRS)

    Jakab, I.; Bordas, A.

    1974-01-01

    After discussing the principal calibration methods for piezoelectric accelerometers, an experimental setup for accelerometer calibration by the reciprocity method is described. It is shown how the use of a lock-in voltmeter eliminates errors due to viscous damping and electrical loading.
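
    The report credits a lock-in voltmeter with removing viscous-damping and electrical-loading errors; the generic capability it relies on is phase-sensitive detection, sketched below in Python as a minimal software analogue. The sampling grid, reference frequency, and signal names are illustrative assumptions, not details from the report. Mixing with quadrature references at the excitation frequency recovers amplitude and phase while rejecting components at other frequencies.

    ```python
    import numpy as np

    def lock_in(signal, t, f_ref):
        """Software lock-in: mix with quadrature references at f_ref and
        average; components away from f_ref are strongly attenuated."""
        i = (signal * np.cos(2 * np.pi * f_ref * t)).mean()
        q = (-signal * np.sin(2 * np.pi * f_ref * t)).mean()
        return 2 * np.hypot(i, q), np.arctan2(q, i)   # amplitude, phase

    # A 1 kHz tone buried in broadband noise is recovered cleanly.
    t = np.linspace(0, 1, 100_000)
    sig = 0.1 * np.cos(2 * np.pi * 1e3 * t + 0.3) + np.random.normal(0, 1, t.size)
    print(lock_in(sig, t, 1e3))   # approximately (0.1, 0.3)
    ```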

  11. Development of a new instrument for direct skin friction measurements

    NASA Technical Reports Server (NTRS)

    Vakili, A. D.; Wu, J. M.

    1986-01-01

    A device developed for the direct measurement of wall shear stress generated by flows is described. Simple and symmetric in design, with an optionally small moving mass and no internal friction, the device eliminates most of the difficulties associated with traditional floating-element balances. The device is basically small and can be made in various sizes. Vibration problems associated with floating-element skin friction balances were found to be minimized due to the design symmetry and the optional damping provided. The design eliminates or reduces the errors associated with conventional floating-element devices, such as errors due to gaps, pressure gradient, acceleration, heat transfer, and temperature change. The instrument is equipped with various sensing systems, and the output signal is a linear function of the wall shear stress. Dynamic measurements can be made in a limited range, and measurements in liquids can be performed readily. Measurements made in three different tunnels show excellent agreement with data obtained by floating-element devices and other techniques.

  12. Procedures for dealing with certain types of noise and systematic errors common to many Hadamard transform optical systems

    NASA Technical Reports Server (NTRS)

    Harwit, M.

    1977-01-01

    Sources of noise and error correcting procedures characteristic of Hadamard transform optical systems were investigated. Reduction of spectral noise due to noise spikes in the data, the effect of random errors, the relative performance of Fourier and Hadamard transform spectrometers operated under identical detector-noise-limited conditions, and systematic means for dealing with mask defects are among the topics discussed. The distortion in Hadamard transform optical instruments caused by moving masks, incorrect mask alignment, missing measurements, and diffraction is analyzed and techniques for reducing or eliminating this distortion are described.

  13. Canceling the momentum in a phase-shifting algorithm to eliminate spatially uniform errors.

    PubMed

    Hibino, Kenichi; Kim, Yangjin

    2016-08-10

    In phase-shifting interferometry, phase modulation nonlinearity causes both spatially uniform and nonuniform errors in the measured phase. Conventional linear-detuning error-compensating algorithms only eliminate the spatially variable error component. The uniform error is proportional to the inertial momentum of the data-sampling weight of a phase-shifting algorithm. This paper proposes a design approach to cancel the momentum by using characteristic polynomials in the Z-transform space and shows that an arbitrary M-frame algorithm can be modified to a new (M+2)-frame algorithm that acquires new symmetry to eliminate the uniform error.
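
    The detuning behaviour that the abstract contrasts against can be reproduced numerically. The sketch below is my own construction, not the paper's momentum-canceling design: it compares the plain 4-frame algorithm with the detuning-compensating 5-frame Schwider-Hariharan algorithm under a 5% phase-shift calibration error, showing how the extra frames suppress the error component that varies with the measured phase.

    ```python
    import numpy as np

    # Phase steps detuned by 5%: nominal pi/2, actual pi/2 * (1 + eps).
    eps, gamma = 0.05, 0.8
    delta = (np.pi / 2) * (1 + eps)
    phi = np.linspace(0, 2 * np.pi, 400, endpoint=False)   # phase across the field
    I = lambda k: 1 + gamma * np.cos(phi + k * delta)      # frame at step index k

    # Plain 4-frame algorithm: no detuning compensation.
    err4 = np.angle(np.exp(1j * (np.arctan2(I(3) - I(1), I(0) - I(2)) - phi)))
    # 5-frame Schwider-Hariharan algorithm: compensates linear detuning.
    err5 = np.angle(np.exp(1j * (np.arctan2(2 * (I(-1) - I(1)),
                                            2 * I(0) - I(-2) - I(2)) - phi)))

    print("4-frame peak-valley phase error: %.5f rad" % np.ptp(err4))
    print("5-frame peak-valley phase error: %.5f rad" % np.ptp(err5))
    ```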

  14. Physiological pseudomyopia.

    PubMed

    Jones, R

    1990-08-01

    Objective refraction through plus fogging lenses and base-in prisms revealed that normally accommodation is not completely relaxed when the stimulus to accommodation is zero. The myopic shift in the refractive error due to this focus error of accommodation was defined as physiological pseudomyopia. Two previously established features of accommodation are responsible for this behavior: (1) accommodation acts as a proportional control system for steady-state responses; and (2) the rest focus of accommodation is nonzero. It is proposed that the hyperopic shift in refraction observed in cycloplegia is the result of elimination of physiological pseudomyopia.
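
    A one-line steady-state model makes the claimed behaviour concrete. Treating accommodation as a proportional controller with gain G acting on the error between stimulus S and response R, and with a nonzero rest focus (the symbols and this minimal formulation are mine, chosen for illustration):

    ```latex
    % Proportional control with a rest focus: R = R_rest + G (S - R)
    \[
      R \;=\; \frac{R_{\mathrm{rest}} + G\,S}{1 + G},
      \qquad\text{so}\qquad
      S = 0 \;\Longrightarrow\; R \;=\; \frac{R_{\mathrm{rest}}}{1 + G} \;>\; 0 .
    \]
    ```

    The residual response at zero stimulus is the myopic shift the abstract calls physiological pseudomyopia; removing it, as cycloplegia does, produces the observed hyperopic shift in refraction.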

  15. Improved astigmatic focus error detection method

    NASA Technical Reports Server (NTRS)

    Bernacki, Bruce E.

    1992-01-01

    All easy-to-implement focus- and track-error detection methods presently used in magneto-optical (MO) disk drives using pre-grooved media suffer from a side effect known as feedthrough. Feedthrough is the unwanted focus error signal (FES) produced when the optical head is seeking a new track, and light refracted from the pre-grooved disk produces an erroneous FES. Some focus and track-error detection methods are more resistant to feedthrough, but tend to be complicated and/or difficult to keep in alignment as a result of environmental insults. The astigmatic focus/push-pull tracking method is an elegant, easy-to-align focus- and track-error detection method. Unfortunately, it is also highly susceptible to feedthrough when astigmatism is present, with the worst effects caused by astigmatism oriented such that the tangential and sagittal foci are at 45 deg to the track direction. This disclosure outlines a method to nearly completely eliminate the worst-case form of feedthrough due to astigmatism oriented 45 deg to the track direction. Feedthrough due to other primary aberrations is not improved, but performance is identical to the unimproved astigmatic method.

  16. Multitasking simulation: Present application and future directions.

    PubMed

    Adams, Traci Nicole; Rho, Jason C

    2017-02-01

    The Accreditation Council for Graduate Medical Education lists multitasking as a core competency in several medical specialties due to increasing demands on providers to manage the care of multiple patients simultaneously. Trainees often learn multitasking on the job without any formal curriculum, leading to high error rates. Multitasking simulation training has demonstrated success in reducing error rates among trainees. Studies of multitasking simulation demonstrate that this type of simulation is feasible, does not hinder the acquisition of procedural skill, and leads to better performance during subsequent periods of multitasking. Although some healthcare agencies have discouraged multitasking due to higher error rates among multitasking providers, it cannot be eliminated entirely in settings such as the emergency department, in which providers care for more than one patient simultaneously. Simulation can help trainees to identify situations in which multitasking is inappropriate, while preparing them for situations in which multitasking is inevitable.

  17. An ionospheric occultation inversion technique based on epoch difference

    NASA Astrophysics Data System (ADS)

    Lin, Jian; Xiong, Jing; Zhu, Fuying; Yang, Jian; Qiao, Xuejun

    2013-09-01

    Of the ionospheric radio occultation (IRO) electron density profile (EDP) retrieval techniques, the Abel-based calibrated TEC inversion (CTI) is the most widely used. In order to eliminate the contribution from altitudes above the RO satellite, it is necessary to use the calibrated TEC to retrieve the EDP, which introduces an error due to the coplanarity assumption. In this paper, a new technique based on epoch difference inversion (EDI) is proposed for the first time to eliminate this error. Comparisons between CTI and EDI have been made using simulated and real COSMIC data. The following conclusions can be drawn: the EDI technique can successfully retrieve EDPs without non-occultation side measurements and shows better performance than the CTI method, especially for lower-orbit missions; no matter which technique is used, the inversion results at higher altitudes are better than those at lower altitudes, which can be explained theoretically.

  18. Wall shear stress measurements using a new transducer

    NASA Technical Reports Server (NTRS)

    Vakili, A. D.; Wu, J. M.; Lawing, P. L.

    1986-01-01

    A new instrument has been developed for direct measurement of wall shear stress. This instrument is simple and symmetric in design, with a small moving mass and no internal friction. Features employed in the design of this instrument eliminate most of the difficulties associated with traditional floating-element balances. Vibration problems associated with floating-element skin friction balances have been found to be minimized by the design features and the optional damping provided. The unique design of this instrument eliminates or reduces the errors associated with conventional floating-element devices, such as errors due to gaps, pressure gradient, acceleration, heat transfer and temperature change. The instrument is equipped with various sensing systems, and the output signal is a linear function of the wall shear stress. Measurements made in three different tunnels show good agreement with theory and with data obtained by floating-element devices.

  19. 75 FR 11889 - Request for Comments on Proposed NIH, AHRQ and CDC Process Change for Electronic Submission of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-12

    ... impact of eliminating the correction window from the electronic grant application submission process on... process a temporary error correction window to ensure a smooth and successful transition for applicants. This window provides applicants a period of time beyond the grant application due date to correct any...

  20. Errors Made by Elementary Fourth Grade Students When Modelling Word Problems and the Elimination of Those Errors through Scaffolding

    ERIC Educational Resources Information Center

    Ulu, Mustafa

    2017-01-01

    This study aims to identify errors made by primary school students when modelling word problems and to eliminate those errors through scaffolding. A 10-question problem-solving achievement test was used in the research. The qualitative and quantitative designs were utilized together. The study group of the quantitative design comprises 248…

  1. Resolution-enhancement and sampling error correction based on molecular absorption line in frequency scanning interferometry

    NASA Astrophysics Data System (ADS)

    Pan, Hao; Qu, Xinghua; Shi, Chunzhao; Zhang, Fumin; Li, Yating

    2018-06-01

    The non-uniform interval resampling method has been widely used in frequency-modulated continuous-wave (FMCW) laser ranging. In large-bandwidth, long-distance measurements, the range peak deteriorates due to fiber dispersion mismatch. In this study, we analyze the frequency-sampling error caused by the mismatch and measure it using the spectroscopy of a molecular frequency reference line. By using an adjacent-point replacement and spline interpolation technique, the sampling errors can be eliminated. The results demonstrate that the proposed method is suitable for resolution enhancement and high-precision measurement. Moreover, using the proposed method, we achieved an absolute-distance precision better than 45 μm within 8 m.
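
    A numerical sketch of the non-uniform interval resampling idea (my own illustration; the paper's molecular-reference spectroscopy and dispersion-mismatch specifics are not reproduced): the beat signal of a nonlinear frequency sweep is spline-interpolated onto a uniform optical-frequency grid, after which the FFT range peak becomes sharp and the distance can be read off.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    c, B, T, N = 3e8, 1e11, 1e-3, 8192           # 100 GHz sweep over 1 ms
    t = np.linspace(0, T, N)
    # Optical frequency with a (known) sweep nonlinearity.
    nu = B * (t / T + 0.03 * (t / T) * np.sin(2 * np.pi * 3e3 * t))
    R = 2.0                                      # target distance [m]
    beat = np.cos(2 * np.pi * nu * 2 * R / c)    # beat phase = 2*pi*nu*tau

    # Resample the beat signal onto equal optical-frequency intervals.
    nu_u = np.linspace(nu[0], nu[-1], N)
    beat_u = CubicSpline(nu, beat)(nu_u)

    spec = np.abs(np.fft.rfft(beat_u * np.hanning(N)))
    k = np.argmax(spec[1:]) + 1                  # range peak bin
    tau = k / (nu_u[-1] - nu_u[0])               # approximate delay [s]
    print("estimated distance: %.3f m" % (c * tau / 2))
    ```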

  2. Anthropometric data error detecting and correction with a computer

    NASA Technical Reports Server (NTRS)

    Chesak, D. D.

    1981-01-01

    Data obtained with automated anthropometric data acquisition equipment were examined for short-term errors. The least-squares curve-fitting technique was used to ascertain which data values were erroneous and to replace them, if possible, with corrected values. Errors were due to random reflections of light, masking of the light rays, and other types of optical and electrical interference. It was found that these signals were impossible to eliminate from the initial data produced by the television cameras, and that this was primarily a software problem requiring a digital computer to refine the data off-line. The specific data of interest were related to the arm-reach envelope of a human being.
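
    The least-squares screening described can be sketched in a few lines (the polynomial degree and threshold are illustrative assumptions, not values from the report): fit a smooth curve to the record, flag samples with anomalous residuals, and substitute the fitted values.

    ```python
    import numpy as np

    def correct_short_term_errors(t, y, degree=3, n_sigma=3.0):
        """Fit a least-squares polynomial, flag samples deviating by more
        than n_sigma residual standard deviations, replace them by the fit."""
        fit = np.polyval(np.polyfit(t, y, degree), t)
        resid = y - fit
        bad = np.abs(resid) > n_sigma * resid.std()
        y_fixed = y.copy()
        y_fixed[bad] = fit[bad]
        return y_fixed, bad

    # Example: a smooth reach profile with two spikes from stray reflections.
    t = np.linspace(0, 1, 200)
    y = 0.5 + 0.3 * t - 0.2 * t ** 2
    y[[50, 140]] += 0.4
    print(np.flatnonzero(correct_short_term_errors(t, y)[1]))   # [ 50 140]
    ```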

  3. How accurate are lexile text measures?

    PubMed

    Stenner, A Jackson; Burdick, Hal; Sanford, Eleanor E; Burdick, Donald S

    2006-01-01

    The Lexile Framework for Reading models comprehension as the difference between a reader measure and a text measure. Uncertainty in comprehension rates results from unreliability in reader measures and inaccuracy in text readability measures. Whole-text processing eliminates sampling error in text measures. However, Lexile text measures are imperfect due to misspecification of the Lexile theory. The standard deviation component associated with theory misspecification is estimated at 64L for a standard-length passage (approximately 125 words). A consequence is that standard errors for longer texts (2,500 to 150,000 words) are measured on the Lexile scale with uncertainties in the single digits. Uncertainties in expected comprehension rates are largely due to imprecision in reader ability and not inaccuracies in text readabilities.

  4. Why do we miss rare targets? Exploring the boundaries of the low prevalence effect

    PubMed Central

    Rich, Anina N.; Kunar, Melina A.; Van Wert, Michael J.; Hidalgo-Sotelo, Barbara; Horowitz, Todd S.; Wolfe, Jeremy M.

    2011-01-01

    Observers tend to miss a disproportionate number of targets in visual search tasks with rare targets. This ‘prevalence effect’ may have practical significance since many screening tasks (e.g., airport security, medical screening) are low prevalence searches. It may also shed light on the rules used to terminate search when a target is not found. Here, we use perceptually simple stimuli to explore the sources of this effect. Experiment 1 shows a prevalence effect in inefficient spatial configuration search. Experiment 2 demonstrates this effect occurs even in a highly efficient feature search. However, the two prevalence effects differ. In spatial configuration search, misses seem to result from ending the search prematurely, while in feature search, they seem due to response errors. In Experiment 3, a minimum delay before response eliminated the prevalence effect for feature but not spatial configuration search. In Experiment 4, a target was present on each trial in either two (2AFC) or four (4AFC) orientations. With only two response alternatives, low prevalence produced elevated errors. Providing four response alternatives eliminated this effect. Low target prevalence puts searchers under pressure that tends to increase miss errors. We conclude that the specific source of those errors depends on the nature of the search. PMID:19146299

  5. Potential and Limitations of an Improved Method to Produce Dynamometric Wheels

    PubMed Central

    García de Jalón, Javier

    2018-01-01

    A new methodology for the estimation of tyre-contact forces is presented. The new procedure is an evolution of a previous method based on harmonic elimination techniques developed with the aim of producing low-cost dynamometric wheels. While the original method required stress measurement along many rim radial lines and the fulfillment of rigid symmetry conditions, the new methodology described in this article significantly reduces the number of required measurement points and greatly relaxes the symmetry constraints. This can be done without compromising the estimation error level. The reduction in the number of measuring radial lines increases the ripple of the demodulated signals due to non-eliminated higher-order harmonics. Therefore, it is necessary to adapt the calibration procedure to this new scenario. A new calibration procedure that takes into account the angular position of the wheel is completely described. This new methodology is tested on a standard commercial five-spoke car wheel. The results obtained are qualitatively compared with those derived from the application of the former methodology, leading to the conclusion that the new method is both simpler and more robust due to the reduction in the number of measuring points, while the contact-force estimation error remains at an acceptable level. PMID:29439427

  6. Global Warming Estimation from MSU

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, Robert; Yoo, Jung-Moon

    1998-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz) from sequential, sun-synchronous, polar-orbiting NOAA satellites contain small systematic errors. Some of these errors are time-dependent and some are time-independent. Small errors in Ch 2 data of successive satellites arise from calibration differences. Also, successive NOAA satellites tend to have different Local Equatorial Crossing Times (LECT), which introduce differences in Ch 2 data due to the diurnal cycle. These two sources of systematic error are largely time-independent. However, because of atmospheric drag, there can be a drift in the LECT of a given satellite, which introduces time-dependent systematic errors. One of these errors is due to the progressive change in the diurnal cycle, and the other is due to associated changes in instrument heating by the sun. In order to infer the global temperature trend from these MSU data, we have eliminated explicitly the time-independent systematic errors. Neither of the time-dependent errors can be assessed from a single satellite. For this reason, their cumulative effect on the global temperature trend is evaluated implicitly. Christy et al. (1998) (CSL), based on their method of analysis of the MSU Ch 2 data, infer a global temperature cooling trend (-0.046 K per decade) from 1979 to 1997, although their near-nadir measurements yield a near-zero trend (0.003 K/decade). Utilising an independent method of analysis, we infer that global temperature warmed by 0.12 +/- 0.06 C per decade from the observations of MSU Ch 2 during the period 1980 to 1997.

  7. To err is human nature. Can transfusion errors due to human factors ever be eliminated?

    PubMed

    Lau, F Y; Cheng, G

    2001-11-01

    Fatal hemolytic transfusion reactions due to ABO incompatibility occur mainly as a result of clerical errors. A blood sample drawn from the wrong patient and labeled as another patient's specimen will not be detected by the blood bank unless there is a previous ABO grouping result. In Hong Kong, we designed a transfusion wristband system with a portable barcode scanner to detect such clerical errors. The system was well accepted by the house staff and had prevented two ABO-mismatched transfusions. Other current systems of patient identification may achieve similar results, but the wristband system has the advantages of being simple, inexpensive and easy to implement. The Hong Kong Government is planning to replace the personal identity card for all citizens with an electronic smart card by 2003. If the new card contains the person's detailed red cell phenotypes in digital code, then the phenotypes of all blood donors and admitted patients will be readily available. It is feasible to issue phenotype-matched blood to patients without any need for pre-transfusion testing, therefore eliminating mismatched transfusions for most patients. Our pilot study of 474 patients showed that the system was safe and that up to 98% of admitted patients could be transfused without delays. Patients with rare phenotypes, visitors or illegal immigrants may still need a pre-transfusion antibody screen, but if most patients can be issued blood units without testing, the potential savings in health care amount to US$14 million/year.

  8. Skull Defects in Finite Element Head Models for Source Reconstruction from Magnetoencephalography Signals

    PubMed Central

    Lau, Stephan; Güllmar, Daniel; Flemming, Lars; Grayden, David B.; Cook, Mark J.; Wolters, Carsten H.; Haueisen, Jens

    2016-01-01

    Magnetoencephalography (MEG) signals are influenced by skull defects. However, there is a lack of evidence of this influence during source reconstruction. Our objectives are to characterize errors in source reconstruction from MEG signals due to ignoring skull defects and to assess the ability of an exact finite element head model to eliminate such errors. A detailed finite element model of the head of a rabbit used in a physical experiment was constructed from magnetic resonance and co-registered computer tomography imaging that differentiated nine tissue types. Sources of the MEG measurements above intact skull and above skull defects respectively were reconstructed using a finite element model with the intact skull and one incorporating the skull defects. The forward simulation of the MEG signals reproduced the experimentally observed characteristic magnitude and topography changes due to skull defects. Sources reconstructed from measured MEG signals above intact skull matched the known physical locations and orientations. Ignoring skull defects in the head model during reconstruction displaced sources under a skull defect away from that defect. Sources next to a defect were reoriented. When skull defects, with their physical conductivity, were incorporated in the head model, the location and orientation errors were mostly eliminated. The conductivity of the skull defect material non-uniformly modulated the influence on MEG signals. We propose concrete guidelines for taking into account conducting skull defects during MEG coil placement and modeling. Exact finite element head models can improve localization of brain function, specifically after surgery. PMID:27092044

  9. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work using the image of the reflection of the absorber finds the reflector slope errors from the reflection of the absorber and an independent measurement of the absorber location. The accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  10. Modified slanted-edge method for camera modulation transfer function measurement using nonuniform fast Fourier transform technique

    NASA Astrophysics Data System (ADS)

    Duan, Yaxuan; Xu, Songbo; Yuan, Suochao; Chen, Yongquan; Li, Hongguang; Da, Zhengshang; Gao, Limin

    2018-01-01

    The ISO 12233 slanted-edge method experiences errors in camera modulation transfer function (MTF) measurement when using the fast Fourier transform (FFT), because tilt-angle errors in the knife edge result in nonuniform sampling of the edge spread function (ESF). In order to resolve this problem, a modified slanted-edge method using the nonuniform fast Fourier transform (NUFFT) for camera MTF measurement is proposed. Theoretical simulations for noisy images at different nonuniform sampling rates of the ESF are performed using the proposed modified slanted-edge method. It is shown that the proposed method successfully eliminates the error due to the nonuniform sampling of the ESF. An experimental setup for camera MTF measurement is established to verify the accuracy of the proposed method. The experimental results show that, under different nonuniform sampling rates of the ESF, the proposed modified slanted-edge method has improved accuracy for camera MTF measurement compared to the ISO 12233 slanted-edge method.
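
    An illustrative reduction chain (not the paper's NUFFT implementation; here plain interpolation stands in for the nonuniform transform): resample the nonuniformly sampled ESF onto a uniform grid, differentiate to get the line-spread function, and take the Fourier magnitude.

    ```python
    import numpy as np

    def mtf_from_esf(x, esf, n=1024):
        """MTF from a nonuniformly sampled edge-spread function: resample
        to a uniform grid (stand-in for NUFFT), differentiate, transform."""
        xu = np.linspace(x.min(), x.max(), n)
        lsf = np.gradient(np.interp(xu, x, esf), xu)   # line-spread function
        lsf *= np.hanning(n)                           # taper truncation
        mtf = np.abs(np.fft.rfft(lsf))
        return np.fft.rfftfreq(n, xu[1] - xu[0]), mtf / mtf[0]

    # Smooth edge sampled at jittered (nonuniform) positions.
    rng = np.random.default_rng(1)
    x = np.sort(rng.uniform(-5, 5, 400))
    esf = 0.5 * (1 + np.tanh(x))
    freq, mtf = mtf_from_esf(x, esf)
    ```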

  11. Reduction of Orifice-Induced Pressure Errors

    NASA Technical Reports Server (NTRS)

    Plentovich, Elizabeth B.; Gloss, Blair B.; Eves, John W.; Stack, John P.

    1987-01-01

    Use of porous-plug orifice reduces or eliminates errors, induced by orifice itself, in measuring static pressure on airfoil surface in wind-tunnel experiments. Piece of sintered metal press-fitted into static-pressure orifice so it matches surface contour of model. Porous material reduces orifice-induced pressure error associated with conventional orifice of same or smaller diameter. Also reduces or eliminates additional errors in pressure measurement caused by orifice imperfections. Provides more accurate measurements in regions with very thin boundary layers.

  12. Landmark-Based Drift Compensation Algorithm for Inertial Pedestrian Navigation

    PubMed Central

    Munoz Diaz, Estefania; Caamano, Maria; Fuentes Sánchez, Francisco Javier

    2017-01-01

    The navigation of pedestrians based on inertial sensors, i.e., accelerometers and gyroscopes, has experienced great growth over the last years. However, the noise of medium- and low-cost sensors causes a high error in the orientation estimation, particularly in the yaw angle. This error, called drift, is due to the bias of the z-axis gyroscope and other slowly changing errors, such as temperature variations. We propose a seamless landmark-based drift compensation algorithm that only uses inertial measurements. The proposed algorithm adds great value to the state of the art, because the vast majority of drift elimination algorithms apply corrections to the estimated position, but not to the yaw angle estimation. Instead, the presented algorithm computes the drift value and uses it to prevent yaw errors and therefore position errors. In order to achieve this goal, a detector of landmarks, i.e., corners and stairs, and an association algorithm have been developed. The results of the experiments show that it is possible to reliably detect corners and stairs using only inertial measurements, eliminating the need for the user to take any action, e.g., pressing a button. Associations between re-visited landmarks are successfully made taking into account the uncertainty of the position. After that, the drift is computed from all associations and used during a post-processing stage to obtain a low-drift yaw angle estimation, which leads to successfully drift-compensated trajectories. The proposed algorithm has been tested with quasi-error-free turn rate measurements introducing known biases and with medium-cost gyroscopes in 3D indoor and outdoor scenarios. PMID:28671622
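
    The essence of the correction can be sketched briefly (the interface and numbers below are hypothetical, not from the paper): when a corner landmark is re-visited, the change in estimated heading at that corner is attributed to gyroscope drift, converted into a drift rate, and removed from the yaw estimate in post-processing.

    ```python
    import numpy as np

    def drift_rate_from_revisit(yaw1, t1, yaw2, t2):
        """Drift rate implied by a re-visited corner whose true heading is
        unchanged: the wrapped heading difference over the elapsed time."""
        dyaw = np.angle(np.exp(1j * (yaw2 - yaw1)))   # wrap to (-pi, pi]
        return dyaw / (t2 - t1)

    def debias_yaw(yaw, t, drift_rate, t0):
        """Post-processing: remove the linear drift accumulated since t0."""
        return yaw - drift_rate * (t - t0)

    # Corner seen at t = 12 s with heading 1.570 rad, re-visited at
    # t = 312 s with heading 1.641 rad: drift of about 0.24 mrad/s.
    print(drift_rate_from_revisit(1.570, 12.0, 1.641, 312.0))
    ```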

  13. A Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree

    NASA Astrophysics Data System (ADS)

    Kang, Q.; Huang, G.; Yang, S.

    2018-04-01

    Point cloud data has become one of the most widely used data sources in the field of remote sensing. Key steps in the pre-processing of point cloud data focus on gross error elimination and quality control. Owing to the volume of point cloud data, existing gross error elimination methods consume massive resources in both space and time. This paper employs a new method that constructs a kd-tree, searches it with the k-nearest neighbor algorithm, and applies an appropriate threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm helps delete gross errors in point cloud data while decreasing memory consumption and improving efficiency.
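
    A compact version of such a test using SciPy (the thresholding rule is an assumption; the abstract does not give the paper's exact criterion): points whose mean distance to their k nearest neighbours is anomalously large are judged outliers and removed.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def remove_gross_errors(points, k=8, n_sigma=3.0):
        """Drop points whose mean k-nearest-neighbour distance exceeds the
        cloud-wide mean by more than n_sigma standard deviations."""
        tree = cKDTree(points)
        d, _ = tree.query(points, k=k + 1)   # first neighbour is the point itself
        mean_d = d[:, 1:].mean(axis=1)
        keep = mean_d < mean_d.mean() + n_sigma * mean_d.std()
        return points[keep]

    # A dense cloud plus five implanted gross errors far from the rest.
    rng = np.random.default_rng(0)
    cloud = rng.normal(0, 1, (5000, 3))
    cloud[:5] += 50.0
    print(len(remove_gross_errors(cloud)))   # about 4995
    ```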

  14. Adaptive elimination of optical fiber transmission noise in fiber ocean bottom seismic system

    NASA Astrophysics Data System (ADS)

    Zhong, Qiuwen; Hu, Zhengliang; Cao, Chunyan; Dong, Hongsheng

    2017-10-01

    In this paper, a pressure- and acceleration-insensitive reference interferometer is used to obtain the laser and common-mode noise introduced by the transmission fiber and the laser. Using direct subtraction and adaptive filtering, this paper attempts to estimate and eliminate the transmission noise of the sensing probe. The paper compares the noise suppression effect of four methods: direct subtraction (DS), least-mean-square adaptive cancellation (LMS), normalized least-mean-square adaptive cancellation (NLMS) and recursive-least-squares (RLS) adaptive filtering. The experimental results show that the noise reduction of RLS and NLMS is almost the same, better than LMS and DS, and can reach 8 dB (@100 Hz). But considering the computational workload, RLS is not well suited to a real-time operating system; for the same performance, NLMS is more practical than RLS. The noise reduction of LMS is slightly worse than that of RLS and NLMS, about 6 dB (@100 Hz), but its computational complexity is small, which is beneficial for real-time system implementation. It can also be seen that the DS method has the least computational complexity, but its noise suppression is worse than that of the adaptive filters due to the difference in noise amplitude between the RI and the SI; only 4 dB (@100 Hz) can be reached. The adaptive filters can essentially eliminate the influence of the transmission noise while keeping the sensor's signal intact.
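
    Of the four methods, NLMS is singled out as the best practical trade-off; a minimal NLMS canceller is sketched below (the tap count, step size, and simulated channel are illustrative assumptions). The reference-interferometer channel predicts the common-mode noise in the sensing channel, and the prediction residual is the cleaned signal.

    ```python
    import numpy as np

    def nlms_cancel(ref, primary, taps=32, mu=0.5, eps=1e-8):
        """NLMS adaptive canceller: predict the noise in `primary` from the
        reference channel `ref` and return the residual (cleaned) signal."""
        w = np.zeros(taps)
        out = np.zeros_like(primary)
        for n in range(taps, len(primary)):
            x = ref[n - taps:n][::-1]        # most recent reference samples
            e = primary[n] - w @ x           # cancellation residual
            w += mu * e * x / (x @ x + eps)  # normalised LMS update
            out[n] = e
        return out

    # 100 Hz sensor tone plus transmission noise correlated with the reference.
    t = np.arange(0, 2, 1 / 8000)
    rng = np.random.default_rng(2)
    ref = rng.normal(0, 1, t.size)                        # reference channel
    noise = np.convolve(ref, [0.6, -0.3, 0.1], mode="same")
    clean = nlms_cancel(ref, np.sin(2 * np.pi * 100 * t) + noise)
    ```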

  15. Irradiance measurement errors due to the assumption of a Lambertian reference panel

    NASA Technical Reports Server (NTRS)

    Kimes, D. S.; Kirchner, J. A.

    1982-01-01

    A technique is presented for determining the error in diurnal irradiance measurements that results from the non-Lambertian behavior of a reference panel under various irradiance conditions. Spectral biconical reflectance factors of a spray-painted barium sulfate panel, along with simulated sky radiance data for clear and hazy skies at six solar zenith angles, were used to calculate the estimated panel irradiances and true irradiances for a nadir-looking sensor in two wavelength bands. The inherent errors in total spectral irradiance (0.68 microns) for a clear sky were 0.60, 6.0, 13.0, and 27.0% for solar zenith angles of 0, 45, 60, and 75 deg, respectively. The technique can be used to characterize the error of a specific panel used in field measurements, and thus eliminate any ambiguity of the effects of the type, preparation, and aging of the paint.

  16. Out-of-plane ultrasonic velocity measurement

    DOEpatents

    Hall, M.S.; Brodeur, P.H.; Jackson, T.G.

    1998-07-14

    A method for improving the accuracy of measuring the velocity and time of flight of ultrasonic signals through moving web-like materials such as paper, paperboard and the like, includes a pair of ultrasonic transducers disposed on opposing sides of a moving web-like material. In order to provide acoustical coupling between the transducers and the web-like material, the transducers are disposed in fluid-filled wheels. Errors due to variances in the wheel thicknesses about their circumference which can affect time of flight measurements and ultimately the mechanical property being tested are compensated by averaging the ultrasonic signals for a predetermined number of revolutions. The invention further includes a method for compensating for errors resulting from the digitization of the ultrasonic signals. More particularly, the invention includes a method for eliminating errors known as trigger jitter inherent with digitizing oscilloscopes used to digitize the signals for manipulation by a digital computer. In particular, rather than cross-correlate ultrasonic signals taken during different sample periods as is known in the art in order to determine the time of flight of the ultrasonic signal through the moving web, a pulse echo box is provided to enable cross-correlation of predetermined transmitted ultrasonic signals with predetermined reflected ultrasonic or echo signals during the sample period. By cross-correlating ultrasonic signals in the same sample period, the error associated with trigger jitter is eliminated. 20 figs.
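
    The cross-correlation step itself is short to sketch (signal names and rates are illustrative assumptions): with the transmitted burst and its echo digitised in the same sample period, the lag of the correlation peak gives the time of flight, and trigger jitter common to both records cancels out.

    ```python
    import numpy as np

    def time_of_flight(tx, echo, fs):
        """Delay of `echo` relative to `tx` from the cross-correlation peak."""
        xc = np.correlate(echo, tx, mode="full")
        return (np.argmax(xc) - (len(tx) - 1)) / fs

    fs = 50e6                                   # 50 MHz digitiser
    t = np.arange(0, 20e-6, 1 / fs)
    burst = np.sin(2 * np.pi * 1e6 * t) * np.exp(-((t - 3e-6) / 1e-6) ** 2)
    echo = 0.4 * np.roll(burst, 180)            # true flight time: 3.6 us
    print(time_of_flight(burst, echo, fs) * 1e6, "us")   # -> 3.6
    ```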

  17. Out-of-plane ultrasonic velocity measurement

    DOEpatents

    Hall, Maclin S.; Brodeur, Pierre H.; Jackson, Theodore G.

    1998-01-01

    A method for improving the accuracy of measuring the velocity and time of flight of ultrasonic signals through moving web-like materials such as paper, paperboard and the like, includes a pair of ultrasonic transducers disposed on opposing sides of a moving web-like material. In order to provide acoustical coupling between the transducers and the web-like material, the transducers are disposed in fluid-filled wheels. Errors due to variances in the wheel thicknesses about their circumference which can affect time of flight measurements and ultimately the mechanical property being tested are compensated by averaging the ultrasonic signals for a predetermined number of revolutions. The invention further includes a method for compensating for errors resulting from the digitization of the ultrasonic signals. More particularly, the invention includes a method for eliminating errors known as trigger jitter inherent with digitizing oscilloscopes used to digitize the signals for manipulation by a digital computer. In particular, rather than cross-correlate ultrasonic signals taken during different sample periods as is known in the art in order to determine the time of flight of the ultrasonic signal through the moving web, a pulse echo box is provided to enable cross-correlation of predetermined transmitted ultrasonic signals with predetermined reflected ultrasonic or echo signals during the sample period. By cross-correlating ultrasonic signals in the same sample period, the error associated with trigger jitter is eliminated.

  18. [From the concept of guilt to the value-free notification of errors in medicine. Risks, errors and patient safety].

    PubMed

    Haller, U; Welti, S; Haenggi, D; Fink, D

    2005-06-01

    Not only the number of liability cases but also the size of individual claims due to alleged treatment errors is increasing steadily. Spectacular sentences, especially in the USA, encourage this trend. Wherever human beings work, errors happen. The health care system is particularly susceptible and shows a high potential for errors. Therefore, risk management has to be given top priority in hospitals. Preparing the introduction of critical incident reporting (CIR) as the means to notify errors is time-consuming and calls for a change in attitude, because in many places the necessary base of trust has to be created first. CIR is not made to find the guilty and punish them but to uncover the origins of errors in order to eliminate them. The Department of Anesthesiology of the University Hospital of Basel has developed an electronic error notification system which, in collaboration with the Swiss Medical Association, allows each specialist society to participate electronically in a CIR system (CIRS), in order to create the largest database possible and thereby to allow statements concerning the extent and type of error sources in medicine. After a pilot project in 2000-2004, the Swiss Society of Gynecology and Obstetrics is now progressively introducing the 'CIRS Medical' of the Swiss Medical Association. In our country, such programs are vulnerable to judicial intervention due to the lack of explicit legal guarantees of protection. High-quality data registration and skillful counseling are all the more important. Hospital directors and managers are called upon to examine those incidents which are based on errors inherent in the system.

  19. Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach

    NASA Astrophysics Data System (ADS)

    Bähr, Hermann; Hanssen, Ramon F.

    2012-12-01

    An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping and a less reliable gridsearch method handling the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency on the millimetre level have been inferred from 163 interferograms. The method itself qualifies by reliability and rigorous geometric modelling of the orbital error signal but does not consider interfering large scale deformation effects. However, a separation may be feasible in a combined processing with persistent scatterer approaches or by temporal filtering of the estimates.
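
    The network principle reduces to a small least-squares problem; the sketch below is a deliberately simplified rendering with one error parameter per image rather than the paper's two per interferogram. Each interferogram observes a difference of per-image errors, the design matrix is rank-deficient by the free datum, and the minimum-norm solution gives quasi-absolute, zero-mean corrections.

    ```python
    import numpy as np

    def network_adjust(pairs, d_obs, n_images):
        """Minimum-norm least squares for per-image errors b from
        interferometric observations d_ij ~ b_j - b_i."""
        A = np.zeros((len(pairs), n_images))
        for r, (i, j) in enumerate(pairs):
            A[r, i], A[r, j] = -1.0, 1.0
        b, *_ = np.linalg.lstsq(A, np.asarray(d_obs, float), rcond=None)
        return b - b.mean()                  # fix the free datum at zero mean

    # Overdetermined network over 4 images with inconsistent observations.
    pairs = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3), (0, 3)]
    truth = np.array([0.0, 2.0, -1.0, 0.5])
    d_obs = np.array([truth[j] - truth[i] for i, j in pairs])
    d_obs = d_obs + np.array([0.10, -0.05, 0.00, 0.02, 0.00, -0.03])  # noise
    print(network_adjust(pairs, d_obs, 4))   # close to truth - truth.mean()
    ```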

  20. Prevalence of visual impairment due to uncorrected refractive error: Results from Delhi-Rapid Assessment of Visual Impairment Study.

    PubMed

    Senjam, Suraj Singh; Vashist, Praveen; Gupta, Noopur; Malhotra, Sumit; Misra, Vasundhara; Bhardwaj, Amit; Gupta, Vivek

    2016-05-01

    To estimate the prevalence of visual impairment (VI) due to uncorrected refractive error (URE) and to assess the barriers to utilization of services in the adult urban population of Delhi. A population-based rapid assessment of VI was conducted among people aged 40 years and above in 24 randomly selected clusters of East Delhi district. Presenting visual acuity (PVA) was assessed in each eye using Snellen's "E" chart. Pinhole examination was done if PVA was <20/60 in either eye and ocular examination to ascertain the cause of VI. Barriers to utilization of services for refractive error were recorded with questionnaires. Of 2421 individuals enumerated, 2331 (96%) individuals were examined. Females were 50.7% among them. The mean age of all examined subjects was 51.32 ± 10.5 years (standard deviation). VI in either eye due to URE was present in 275 individuals (11.8%, 95% confidence interval [CI]: 10.5-13.1). URE was identified as the most common cause (53.4%) of VI. The overall prevalence of VI due to URE in the study population was 6.1% (95% CI: 5.1-7.0). The elder population as well as females were more likely to have VI due to URE (odds ratio [OR] = 12.3; P < 0.001 and OR = 1.5; P < 0.02). Lack of felt need was the most common reported barrier (31.5%). The prevalence of VI due to URE among the urban adult population of Delhi is still high despite the availability of abundant eye care facilities. The majority of reported barriers are related to human behavior and attitude toward the refractive error. Understanding these aspects will help in planning appropriate strategies to eliminate VI due to URE.

  1. Refractive eye surgery in treating functional amblyopia in children.

    PubMed

    Levenger, Samuel; Nemet, Pinhas; Hirsh, Ami; Kremer, Israel; Nemet, Arie

    2006-01-01

    While excimer laser refractive surgery is recommended and highly successful for correcting refractive errors in adults, its use in children has not been extensively practiced or studied. We report our experience treating children with amblyopia due to high anisometropia, high astigmatism, high myopia, and with associated developmental delay. A review of the patient records of our refractive clinic was performed. A retrospective review was made of all 11 children with stable refractive errors who were unsuccessfully treated non-surgically and then underwent corneal refractive surgery and, in one case, lenticular surgery. Seven had high myopic anisometropia, two had high astigmatism, and two had high myopia, one with Down's Syndrome and one with agenesis of the corpus callosum. The surgical refractive treatment eliminated or reduced the anisometropia, reduced the astigmatic error, improved vision, and improved the daily function of the children with developmental delay. There were no complications or untoward results. Refractive surgery is safe and effective in treating children with high myopic anisometropia, high astigmatism, high myopia and developmental delay due to the resulting poor vision. Surgery can improve visual acuity in amblyopia not responding to routine treatment by correcting the refractive error and refractive aberrations.

  2. On the equivalence of Gaussian elimination and Gauss-Jordan reduction in solving linear equations

    NASA Technical Reports Server (NTRS)

    Tsao, Nai-Kuan

    1989-01-01

    A novel general approach to round-off error analysis using the error complexity concepts is described. This is applied to the analysis of the Gaussian Elimination and Gauss-Jordan scheme for solving linear equations. The results show that the two algorithms are equivalent in terms of our error complexity measures. Thus the inherently parallel Gauss-Jordan scheme can be implemented with confidence if parallel computers are available.
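
    For concreteness, both schemes in plain NumPy (a textbook rendering, not the paper's error-complexity formalism): Gaussian elimination triangularises and then back-substitutes, while Gauss-Jordan reduces all the way to the identity and needs no back substitution; the cited analysis finds the two equivalent under its round-off error complexity measures.

    ```python
    import numpy as np

    def gauss_eliminate(A, b):
        """Forward elimination with partial pivoting, then back substitution."""
        A, b = A.astype(float).copy(), b.astype(float).copy()
        n = len(b)
        for k in range(n - 1):
            p = k + np.argmax(np.abs(A[k:, k]))          # partial pivoting
            A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):                   # back substitution
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    def gauss_jordan(A, b):
        """Full reduction to the identity; the solution appears in b."""
        A, b = A.astype(float).copy(), b.astype(float).copy()
        n = len(b)
        for k in range(n):
            p = k + np.argmax(np.abs(A[k:, k]))          # partial pivoting
            A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
            b[k] /= A[k, k]
            A[k] /= A[k, k]
            for i in range(n):
                if i != k:
                    b[i] -= A[i, k] * b[k]
                    A[i] -= A[i, k] * A[k]
        return b

    A = np.array([[4.0, 3.0, 0.0], [3.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
    b = np.array([24.0, 30.0, -24.0])
    print(gauss_eliminate(A, b), gauss_jordan(A, b))     # identical results
    ```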

  3. The RATIO method for time-resolved Laue crystallography

    PubMed Central

    Coppens, Philip; Pitak, Mateusz; Gembicky, Milan; Messerschmidt, Marc; Scheins, Stephan; Benedict, Jason; Adachi, Shin-ichi; Sato, Tokushi; Nozawa, Shunsuke; Ichiyanagi, Kohei; Chollet, Matthieu; Koshihara, Shin-ya

    2009-01-01

    A RATIO method for analysis of intensity changes in time-resolved pump–probe Laue diffraction experiments is described. The method eliminates the need for scaling the data with a wavelength curve representing the spectral distribution of the source and removes the effect of possible anisotropic absorption. It does not require relative scaling of series of frames and removes errors due to all but very short term fluctuations in the synchrotron beam. PMID:19240334
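
    The heart of the method is that per-reflection ratios between laser-ON and laser-OFF intensities cancel every multiplicative factor the two frames share, including the source spectral curve and anisotropic absorption. A trivial sketch (array names and values are assumed for illustration):

    ```python
    import numpy as np

    def response_ratios(i_on, i_off):
        """Per-reflection ratios R = I_ON / I_OFF. Factors common to both
        frames (source spectrum, absorption, scale) divide out."""
        return np.asarray(i_on, float) / np.asarray(i_off, float)

    # An unknown per-reflection spectral scale s cancels in the ratio.
    s = np.array([0.7, 1.3, 0.9])
    i_off = np.array([100.0, 250.0, 80.0])
    i_on = i_off * np.array([1.02, 0.95, 1.00])   # pump-induced changes
    print(response_ratios(s * i_on, s * i_off))   # [1.02 0.95 1.  ]
    ```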

  4. Cost effectiveness of ergonomic redesign of electronic motherboard.

    PubMed

    Sen, Rabindra Nath; Yeow, Paul H P

    2003-09-01

    A case study is presented to illustrate the cost effectiveness of the ergonomic redesign of an electronic motherboard. The factory was running at a loss due to the high costs of rejects and poor quality and productivity. Subjective assessments and direct observations were made at the factory. Investigation revealed that, due to motherboard design errors, the machine had difficulty placing integrated circuits onto the pads, the operators had much difficulty in manually soldering certain components, and much unproductive manual cleaning (MC) was required. Consequently, there were high reject rates and occupational health and safety (OHS) problems, such as boredom and work discomfort. Also, much labour and machine cost was spent on repairs. The motherboard was redesigned to correct the design errors, to allow more components to be machine-soldered and to reduce MC. This eliminated rejects, reduced repairs, saved US$581,495/year and improved operators' OHS. The customer also saved US$142,105/year on loss of business.

  5. Calibration Method to Eliminate Zeroth Order Effect in Lateral Shearing Interferometry

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Xiang, Yang; Qi, Keqi; Chen, Dawei

    2018-04-01

    In this paper, a calibration method is proposed which eliminates the zeroth-order effect in lateral shearing interferometry. An analytical expression for the calibration error function is deduced, and the relationship between the phase-restoration error and the calibration error is established. The analytical results show that the phase-restoration error introduced by the calibration error is proportional to the phase-shifting error and the zeroth-order effect. The calibration method is verified using simulations and experiments. The simulation results show that the phase-restoration error is approximately proportional to the phase-shifting error and the zeroth-order effect when the phase-shifting error is less than 2° and the zeroth-order effect is less than 0.2. The experimental result shows that, compared with the conventional method with 9-frame interferograms, the calibration method with 5-frame interferograms achieves nearly the same restoration accuracy.

  6. The effects of training on errors of perceived direction in perspective displays

    NASA Technical Reports Server (NTRS)

    Tharp, Gregory K.; Ellis, Stephen R.

    1990-01-01

    An experiment was conducted to determine the effects of training on the characteristic direction errors that are observed when subjects estimate exocentric directions on perspective displays. Changes in five subjects' perceptual errors were measured during a training procedure designed to eliminate the error. The training was provided by displaying to each subject both the sign and the direction of his judgment error. The feedback provided by the error display was found to decrease but not eliminate the error. A lookup-table model of the source of the error was developed in which the judgment errors were attributed to overestimates of both the pitch and the yaw of the viewing direction used to produce the perspective projection. The model predicts the quantitative characteristics of the data somewhat better than previous models did. A mechanism is proposed for the observed learning, and further tests of the model are suggested.

  7. An Elimination Method of Temperature-Induced Linear Birefringence in a Stray Current Sensor

    PubMed Central

    Xu, Shaoyi; Li, Wei; Xing, Fangfang; Wang, Yuqiao; Wang, Ruilin; Wang, Xianghui

    2017-01-01

    In this work, an elimination method for the temperature-induced linear birefringence (TILB) in a stray current sensor is proposed using a cylindrical spiral fiber (CSF), which produces a large amount of circular birefringence to eliminate the TILB based on the geometric rotation effect. First, the differential equations that describe the polarization evolution of the CSF element are derived, and the output error model is built based on the Jones matrix calculus. Then, an accurate search method is proposed to obtain the key parameters of the CSF, including the length of the cylindrical silica rod and the number of curve spirals. The optimized results are 302 mm and 11, respectively. Moreover, an effective factor is proposed to analyze the elimination of the TILB; it should be greater than 7.42 to meet the output error requirement of no more than 0.5%. Finally, temperature experiments are conducted to verify the feasibility of the elimination method. The results indicate that the output error caused by the TILB can be kept below 0.43% with this elimination method within the range from −20 °C to 40 °C. PMID:28282953

  8. PREVALENCE OF REFRACTIVE ERRORS IN MADRASSA STUDENTS OF HARIPUR DISTRICT.

    PubMed

    Atta, Zoia; Arif, Abdus Salam; Ahmed, Iftikhar; Farooq, Umer

    2015-01-01

    Visual impairment due to refractive errors is one of the most common problems among school-age children and is the second leading cause of treatable blindness. The Right to Sight, a global initiative launched by a coalition of non-government organizations and the World Health Organization (WHO), aims to eliminate avoidable visual impairment and blindness at a global level. In order to achieve this goal, it is important to know the prevalence of different refractive errors in a community. Children and teenagers are the groups most susceptible to refractive errors, so this population needs to be screened for the different types of refractive error. The objective of this study was to find the frequency of different types of refractive errors in madrassa students between the ages of 5 and 20 years in Haripur. This cross-sectional study was done with 300 students aged 5-20 years in madrassas of Haripur. The students were screened for refractive errors and the types of error were noted; after screening, glasses were prescribed to the students. Myopia (52.6%) was the most frequent refractive error in students, followed by hyperopia (28.4%) and astigmatism (19%). This study showed that myopia is an important problem in the madrassa population. Females and males are almost equally affected. Spectacle correction of refractive errors is the cheapest and easiest solution to this problem.

  9. Optical voltage reference

    DOEpatents

    Rankin, Richard; Kotter, Dale

    1994-01-01

    An optical voltage reference for providing an alternative to a battery source. The optical reference apparatus provides a temperature stable, high precision, isolated voltage reference through the use of optical isolation techniques to eliminate current and impedance coupling errors. Pulse rate frequency modulation is employed to eliminate errors in the optical transmission link while phase-lock feedback is employed to stabilize the frequency to voltage transfer function.

  10. Sub-nanometer periodic nonlinearity error in absolute distance interferometers

    NASA Astrophysics Data System (ADS)

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

    Periodic nonlinearity, which can produce errors at the nanometer scale, has become a main problem limiting the accuracy of absolute distance measurement. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed, which eliminates frequency and/or polarization mixing. Furthermore, the strict requirement on the polarization of the laser source is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated so that their optical paths do not overlap. Thus the main cause of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and leakage of the beams, is eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.

  11. Distance Measurement Error in Time-of-Flight Sensors Due to Shot Noise

    PubMed Central

    Illade-Quinteiro, Julio; Brea, Víctor M.; López, Paula; Cabello, Diego; Doménech-Asensi, Gines

    2015-01-01

    Unlike other noise sources, which can be reduced or eliminated by different signal processing techniques, shot noise is an ever-present noise component in any imaging system. In this paper, we present an in-depth study of the impact of shot noise on time-of-flight sensors in terms of the error introduced in the distance estimation. The paper addresses the effect of parameters such as the size of the photosensor, the background and signal power, and the integration time, and the resulting design trade-offs. The study is demonstrated with different numerical examples, which show that, in general, the phase-shift determination technique with two background measurements is the most suitable approach for pixel arrays of large resolution. PMID:25723141
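
    The role of shot noise is easy to reproduce in simulation. The sketch below, a simplified Python model with assumed signal and background electron counts, applies Poisson noise to the four phase taps of a continuous-wave time-of-flight pixel and propagates it into a distance error; it follows the generic four-phase estimator, not the specific sensor models analyzed in the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        c, f_mod = 3.0e8, 20e6                  # light speed (m/s), modulation (Hz)
        d_true = 2.5                            # true distance (m)
        phi = 4 * np.pi * f_mod * d_true / c    # true phase shift (rad)

        A, B = 500.0, 2000.0                    # signal / background electrons per tap (assumed)
        taps = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
        mean_counts = B + 0.5 * A * (1 + np.cos(phi - taps))

        # shot noise: each tap's electron count is Poisson distributed
        counts = rng.poisson(mean_counts, size=(100000, 4)).astype(float)
        C0, C1, C2, C3 = counts.T

        phi_hat = np.mod(np.arctan2(C1 - C3, C0 - C2), 2 * np.pi)
        d_hat = c * phi_hat / (4 * np.pi * f_mod)
        print(f"shot-noise distance std: {100 * d_hat.std():.2f} cm")

    Raising the background B increases the variance of every tap without adding signal, illustrating the background-power trade-off the paper quantifies.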

  12. Six reasons why thermospheric measurements and models disagree

    NASA Technical Reports Server (NTRS)

    Moe, Kenneth

    1987-01-01

    The differences between thermospheric measurements and models are discussed. Sometimes the model is in error and at other times the measurements are, but it is also possible for both to be correct yet have the comparison result in an apparent disagreement. These reasons for disagreement are collected and, whenever possible, methods of reducing or eliminating them are suggested. The six causes of disagreement discussed are: actual errors caused by the limited knowledge of gas-surface interactions and by in-track winds; limitations of the thermospheric general circulation models due to incomplete knowledge of the energy sources and sinks, as well as incompleteness of the parameterization which must be employed; and limitations imposed on the empirical models by the conceptual framework and by transient waves.

  13. A continuous quality improvement project to reduce medication error in the emergency department.

    PubMed

    Lee, Sara Bc; Lee, Larry Ly; Yeung, Richard Sd; Chan, Jimmy Ts

    2013-01-01

    Medication errors are a common source of adverse healthcare incidents, particularly in the emergency department (ED), which has a number of factors that make it prone to medication errors. This project aimed to reduce medication errors and improve the health and economic outcomes of clinical care in a Hong Kong ED. In 2009, a task group was formed to identify problems that potentially endanger medication safety and to develop strategies to eliminate these problems. Responsible officers were assigned to look after seven error-prone areas. Strategies were proposed, discussed, endorsed and promulgated to eliminate the problems identified. Medication incidents (MIs) fell from 16 before the improvement work to 6 after. This project successfully established a concrete organizational structure to safeguard error-prone areas of medication safety in a sustainable manner.

  14. Application of 3-signal coherence to core noise transmission

    NASA Technical Reports Server (NTRS)

    Krejsa, E. A.

    1983-01-01

    A method for determining transfer functions across turbofan engine components and from the engine to the far-field is developed. The method is based on the three-signal coherence technique used previously to obtain far-field core noise levels. This method eliminates the bias error in transfer function measurements due to contamination of measured pressures by nonpropagating pressure fluctuations. Measured transfer functions from the engine to the far-field, across the tailpipe, and across the turbine are presented for three turbofan engines.
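
    The bias-removal idea generalizes readily: if a third signal is coherent with the propagating source but its contamination is uncorrelated with that of the input sensor, transfer-function estimates formed from cross-spectra with that third signal are free of the noise bias that afflicts the usual single-pair estimate. A minimal synthetic Python demonstration (with an assumed FIR transfer path and assumed noise levels) follows; it illustrates the principle only, not the engine data processing of the report.

        import numpy as np
        from scipy.signal import csd, lfilter

        rng = np.random.default_rng(2)
        fs, n = 1024, 2**16
        s = rng.standard_normal(n)                 # coherent core-noise source
        h = [0.5, 0.3, 0.2]                        # "true" transfer path (assumed FIR)

        x = s + 0.8 * rng.standard_normal(n)       # input sensor + contamination
        y = lfilter(h, 1, s) + 0.3 * rng.standard_normal(n)   # output sensor
        z = s + 0.8 * rng.standard_normal(n)       # third coherent signal

        f, Sxy = csd(x, y, fs=fs, nperseg=1024)
        _, Sxx = csd(x, x, fs=fs, nperseg=1024)
        _, Szy = csd(z, y, fs=fs, nperseg=1024)
        _, Szx = csd(z, x, fs=fs, nperseg=1024)

        H_true = np.fft.rfft(h, 1024)
        print("mean |H| true     : %.3f" % np.mean(np.abs(H_true)))
        print("mean |H| ordinary : %.3f" % np.mean(np.abs(Sxy / Sxx)))   # biased low
        print("mean |H| 3-signal : %.3f" % np.mean(np.abs(Szy / Szx)))   # bias removed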

  15. Face matching in a long task: enforced rest and desk-switching cannot maintain identification accuracy

    PubMed Central

    Alenezi, Hamood M.; Fysh, Matthew C.; Johnston, Robert A.

    2015-01-01

    In face matching, observers have to decide whether two photographs depict the same person or different people. This task is not only remarkably difficult but accuracy declines further during prolonged testing. The current study investigated whether this decline in long tasks can be eliminated with regular rest-breaks (Experiment 1) or room-switching (Experiment 2). Both experiments replicated the accuracy decline for long face-matching tasks and showed that this could not be eliminated with rest or room-switching. These findings suggest that person identification in applied settings, such as passport control, might be particularly error-prone due to the long and repetitive nature of the task. The experiments also show that it is difficult to counteract these problems. PMID:26312179

  16. Optical voltage reference

    DOEpatents

    Rankin, R.; Kotter, D.

    1994-04-26

    An optical voltage reference for providing an alternative to a battery source is described. The optical reference apparatus provides a temperature stable, high precision, isolated voltage reference through the use of optical isolation techniques to eliminate current and impedance coupling errors. Pulse rate frequency modulation is employed to eliminate errors in the optical transmission link while phase-lock feedback is employed to stabilize the frequency to voltage transfer function. 2 figures.

  17. Error sources affecting thermocouple thermometry in RF electromagnetic fields.

    PubMed

    Chakraborty, D P; Brezovich, I A

    1982-03-01

    Thermocouple thermometry errors in radiofrequency (typically 13.56 MHz) electromagnetic fields such as are encountered in hyperthermia are described. RF currents capacitively or inductively coupled into the thermocouple-detector circuit produce errors which are a combination of interference, i.e., 'pick-up' error, and genuine rf-induced temperature changes at the junction of the thermocouple. The former can be eliminated by adequate filtering and shielding; the latter is due to (a) junction current heating, in which the generally unequal resistances of the thermocouple wires cause a net current flow from the higher- to the lower-resistance wire across the junction, (b) heating in the surrounding resistive material (tissue in hyperthermia), and (c) eddy current heating of the thermocouple wires in the oscillating magnetic field. Low-frequency theories are used to estimate these errors under given operating conditions, and relevant experiments demonstrating these effects and the precautions necessary to minimize the errors are described. It is shown that at 13.56 MHz and voltage levels below 100 V rms these errors do not exceed 0.1 degrees C if the precautions are observed and thermocouples with adequate insulation (e.g., Bailey IT-18) are used. Results of this study are currently being used in our clinical work with good success.

  18. Human Reliability and the Cost of Doing Business

    NASA Technical Reports Server (NTRS)

    DeMott, Diana

    2014-01-01

    Most businesses recognize that people will make mistakes and assume errors are just part of the cost of doing business, but does it need to be? Companies with high risk, or major consequences, should consider the effect of human error. Human errors have caused costly failures and workplace injuries in a variety of industries: airline mishaps, medical malpractice, errors in the administration of medication, and major oil spills have all been blamed on human error. A technique to mitigate or even eliminate some of these costly human errors is the use of Human Reliability Analysis (HRA). Various methodologies are available to perform Human Reliability Assessments, ranging from identifying the most likely areas for concern to detailed assessments in which human error failure probabilities are calculated. Which methodology to use depends on a variety of factors, including: 1) how people react and act in different industries, and differing expectations based on industry standards; 2) factors that influence how human errors could occur, such as tasks, tools, environment, workplace, support, training and procedure; 3) the type and availability of data; and 4) how the industry views risk and reliability influences (types of emergencies, contingencies and routine tasks versus cost-based concerns). A Human Reliability Assessment should be the first step to reduce, mitigate or eliminate costly mistakes or catastrophic failures. Using Human Reliability techniques to identify and classify human error risks gives a company more opportunities to mitigate or eliminate these risks and prevent costly failures.

  19. Rapid estimation of frequency response functions by close-range photogrammetry

    NASA Technical Reports Server (NTRS)

    Tripp, J. S.

    1985-01-01

    The accuracy of a rapid method which estimates the frequency response function from stereoscopic dynamic data is computed. It is shown that reversal of the order of the operations of coordinate transformation and Fourier transformation, which provides a significant increase in computational speed, introduces error. A portion of the error, proportional to the perturbation components normal to the camera focal planes, cannot be eliminated. The remaining error may be eliminated by proper scaling of frequency data prior to coordinate transformation. Methods are developed for least squares estimation of the full 3x3 frequency response matrix for a three dimensional structure.
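
    For the final step, a least-squares estimate of a full 3x3 frequency response matrix can be written compactly per frequency bin as H = (Y X^H)(X X^H)^{-1}, given many records of the three input and three output spectra. The short Python sketch below (with synthetic random spectra; all values are placeholders) shows the batched linear algebra; it is a generic MIMO estimator, not the paper's photogrammetric pipeline.

        import numpy as np

        rng = np.random.default_rng(3)
        n_freq, n_rec = 64, 200

        def crandn(*shape):
            return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

        H_true = crandn(n_freq, 3, 3)               # target 3x3 FRF per frequency
        X = crandn(n_freq, 3, n_rec)                # input spectra, many records
        Y = H_true @ X + 0.1 * crandn(n_freq, 3, n_rec)   # outputs + noise

        XH = X.conj().transpose(0, 2, 1)
        H_est = (Y @ XH) @ np.linalg.inv(X @ XH)    # least squares per frequency

        print("max |H_est - H_true| =", np.abs(H_est - H_true).max().round(3))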

  20. Multipath calibration in GPS pseudorange measurements

    NASA Technical Reports Server (NTRS)

    Kee, Changdon (Inventor); Parkinson, Bradford W. (Inventor)

    1998-01-01

    Novel techniques are disclosed for eliminating multipath errors, including mean bias errors, in pseudorange measurements made by conventional global positioning system receivers. By correlating the multipath signals of different satellites at their cross-over points in the sky, multipath mean bias errors are effectively eliminated. By then taking advantage of the geometrical dependence of multipath, a linear combination of spherical harmonics are fit to the satellite multipath data to create a hemispherical model of the multipath. This calibration model can then be used to compensate for multipath in subsequent measurements and thereby obtain GPS positioning to centimeter accuracy.
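
    The geometrical-dependence step lends itself to a compact numerical sketch: build a design matrix of low-degree spherical harmonics evaluated at each satellite's azimuth and elevation, and fit the multipath observations by least squares. The Python fragment below does exactly that for a made-up hemispherical multipath pattern; the pattern, noise level and truncation degree are all assumptions, not values from the patent.

        import numpy as np
        from scipy.special import sph_harm

        rng = np.random.default_rng(4)
        az = rng.uniform(0, 2 * np.pi, 500)         # satellite azimuths (rad)
        el = rng.uniform(0, np.pi / 2, 500)         # elevations (rad)
        colat = np.pi / 2 - el                      # polar angle on the hemisphere

        # hypothetical multipath pattern (meters) plus measurement noise
        mp = (0.3 * np.cos(colat)**2 + 0.1 * np.sin(2 * az) * np.sin(colat)
              + 0.01 * rng.standard_normal(az.size))

        def design(az, colat, lmax=4):
            # real-valued spherical-harmonic basis up to degree lmax
            cols = []
            for l in range(lmax + 1):
                for m in range(-l, l + 1):
                    Y = sph_harm(m, l, az, colat)
                    cols.append(Y.real if m >= 0 else Y.imag)
            return np.column_stack(cols)

        A = design(az, colat)
        coef, *_ = np.linalg.lstsq(A, mp, rcond=None)
        resid = mp - A @ coef
        print(f"rms multipath before/after calibration: {mp.std():.3f} / {resid.std():.3f} m")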

  1. High-fidelity target sequencing of individual molecules identified using barcode sequences: de novo detection and absolute quantitation of mutations in plasma cell-free DNA from cancer patients.

    PubMed

    Kukita, Yoji; Matoba, Ryo; Uchida, Junji; Hamakawa, Takuya; Doki, Yuichiro; Imamura, Fumio; Kato, Kikuya

    2015-08-01

    Circulating tumour DNA (ctDNA) is an emerging field of cancer research. However, current ctDNA analysis is usually restricted to one or a few mutation sites due to technical limitations. In the case of massively parallel DNA sequencers, the number of false positives caused by a high read error rate is a major problem. In addition, the final sequence reads do not represent the original DNA population, due to the global amplification step during template preparation. We established a high-fidelity target sequencing system for individual molecules identified in plasma cell-free DNA using barcode sequences; this system consists of the following two steps. (i) A novel target sequencing method that adds barcode sequences by adaptor ligation. This method uses linear amplification to eliminate the errors introduced during the early cycles of polymerase chain reaction. (ii) The monitoring and removal of erroneous barcode tags. This process identifies the individual molecules that have been sequenced and allows absolute quantitation of the number of mutations. Using plasma cell-free DNA from patients with gastric or lung cancer, we demonstrated that the system achieved near-complete elimination of false positives and enabled de novo detection and absolute quantitation of mutations in plasma cell-free DNA. © The Author 2015. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.
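
    The core error-suppression idea (reads sharing a barcode come from one original molecule, so a per-position vote removes sequencer errors) fits in a few lines of Python. The toy reads and the minimum family size below are illustrative assumptions, not the paper's pipeline parameters.

        from collections import Counter, defaultdict

        reads = [                                   # (barcode, read) toy data
            ("AAGT", "ACGTACGT"), ("AAGT", "ACGTACGT"), ("AAGT", "ACGAACGT"),
            ("CCTA", "ACGTTCGT"), ("CCTA", "ACGTTCGT"), ("CCTA", "ACGTTCGT"),
        ]

        families = defaultdict(list)
        for barcode, seq in reads:
            families[barcode].append(seq)

        consensus = {}
        for barcode, seqs in families.items():
            if len(seqs) < 3:                       # too few reads to vote reliably
                continue
            consensus[barcode] = "".join(
                Counter(column).most_common(1)[0][0] for column in zip(*seqs))

        for barcode, seq in consensus.items():
            print(barcode, seq)     # one error-corrected sequence per molecule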

  2. Self-calibration method without joint iteration for distributed small satellite SAR systems

    NASA Astrophysics Data System (ADS)

    Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan

    2013-12-01

    The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to the unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions under large position errors, since it requires joint iteration between gain-phase error estimation and position error estimation. In addition, it is also found that phase errors can be estimated well, regardless of position errors, when the zero Doppler bin is chosen. In this article, we propose a method obtained by modifying the conventional one, based on these two observations. In this modified method, gain errors are first estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, by using the zero-Doppler-bin data, the phase error estimation can be performed well independent of position errors. Finally, position errors are estimated based on a Taylor-series expansion. Meanwhile, the joint iteration between gain-phase error estimation and position error estimation is not required, so the problem of suboptimal convergence, which occurs in the conventional method, is avoided at low computational cost. The modified method has the merits of faster convergence and lower estimation error compared to the conventional one. Theoretical analysis and computer simulation results verify the effectiveness of the modified method.

  3. Robust LOD scores for variance component-based linkage analysis.

    PubMed

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  4. Analysis of phase error effects in multishot diffusion-prepared turbo spin echo imaging

    PubMed Central

    Cervantes, Barbara; Kooijman, Hendrik; Karampinos, Dimitrios C.

    2017-01-01

    Background To characterize the effect of phase errors on the magnitude and the phase of the diffusion-weighted (DW) signal acquired with diffusion-prepared turbo spin echo (dprep-TSE) sequences. Methods Motion and eddy currents were identified as the main sources of phase errors. An analytical expression for the effect of phase errors on the acquired signal was derived and verified using Bloch simulations, phantom, and in vivo experiments. Results Simulations and experiments showed that phase errors during the diffusion preparation cause both magnitude and phase modulation on the acquired data. When motion-induced phase error (MiPe) is accounted for (e.g., with motion-compensated diffusion encoding), the signal magnitude modulation due to the leftover eddy-current-induced phase error cannot be eliminated by the conventional phase cycling and sum-of-squares (SOS) method. By employing magnitude stabilizers, the phase-error-induced magnitude modulation, regardless of its cause, was removed but the phase modulation remained. The in vivo comparison between pulsed gradient and flow-compensated diffusion preparations showed that MiPe needed to be addressed in multi-shot dprep-TSE acquisitions employing magnitude stabilizers. Conclusions A comprehensive analysis of phase errors in dprep-TSE sequences showed that magnitude stabilizers are mandatory in removing the phase error induced magnitude modulation. Additionally, when multi-shot dprep-TSE is employed the inconsistent signal phase modulation across shots has to be resolved before shot-combination is performed. PMID:28516049

  5. C-band radar pulse Doppler error: Its discovery, modeling, and elimination

    NASA Technical Reports Server (NTRS)

    Krabill, W. B.; Dempsey, D. J.

    1978-01-01

    The discovery of a C Band radar pulse Doppler error is discussed and use of the GEOS 3 satellite's coherent transponder to isolate the error source is described. An analysis of the pulse Doppler tracking loop is presented and a mathematical model for the error was developed. Error correction techniques were developed and are described including implementation details.

  6. 3D measurement using combined Gray code and dual-frequency phase-shifting approach

    NASA Astrophysics Data System (ADS)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Liu, Xin

    2018-04-01

    The combined Gray code and phase-shifting approach is a commonly used 3D measurement technique. In this technique, an error that equals integer multiples of the phase-shifted fringe period, i.e. period jump error, often exists in the absolute analog code, which can lead to gross measurement errors. To overcome this problem, the present paper proposes 3D measurement using a combined Gray code and dual-frequency phase-shifting approach. Based on 3D measurement using the combined Gray code and phase-shifting approach, one set of low-frequency phase-shifted fringe patterns with an odd-numbered multiple of the original phase-shifted fringe period is added. Thus, the absolute analog code measured value can be obtained by the combined Gray code and phase-shifting approach, and the low-frequency absolute analog code measured value can also be obtained by adding low-frequency phase-shifted fringe patterns. Then, the corrected absolute analog code measured value can be obtained by correcting the former by the latter, and the period jump errors can be eliminated, resulting in reliable analog code unwrapping. For the proposed approach, we established its measurement model, analyzed its measurement principle, expounded the mechanism of eliminating period jump errors by error analysis, and determined its applicable conditions. Theoretical analysis and experimental results show that the proposed approach can effectively eliminate period jump errors, reliably perform analog code unwrapping, and improve the measurement accuracy.
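
    The period-jump correction itself reduces to one rounding step: if the low-frequency absolute phase is jump-free and its period is r times the high-frequency period, the integer disagreement between the two recovers the lost fringe orders. A hedged numerical sketch (with assumed noise levels and an assumed ratio r = 9) is given below.

        import numpy as np

        rng = np.random.default_rng(5)
        r = 9                                        # odd period ratio (assumed)
        true = np.linspace(0, 40 * np.pi, 1000)      # true absolute high-freq phase

        # high-frequency absolute phase with occasional period-jump errors
        jump = 2 * np.pi * rng.integers(-1, 2, true.size) * (rng.random(true.size) < 0.05)
        phase_h = true + 0.02 * rng.standard_normal(true.size) + jump

        # low-frequency absolute phase: noisier, but free of period jumps;
        # r times its noise must stay below half a high-frequency period
        phase_l = true / r + 0.05 * rng.standard_normal(true.size)

        k = np.round((r * phase_l - phase_h) / (2 * np.pi))   # lost fringe orders
        phase_corr = phase_h + 2 * np.pi * k

        print("points off by a period before:", int(np.sum(np.abs(phase_h - true) > np.pi)),
              " after:", int(np.sum(np.abs(phase_corr - true) > np.pi)))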

  7. Temperature-dependent spectral mismatch corrections

    DOE PAGES

    Osterwald, Carl R.; Campanelli, Mark; Moriarty, Tom; ...

    2015-11-01

    This study develops the mathematical foundation for a translation of solar cell short-circuit current from one thermal and spectral irradiance operating condition to another without the use of ill-defined and error-prone temperature coefficients typically employed in solar cell metrology. Using the partial derivative of quantum efficiency with respect to temperature, the conventional isothermal expression for spectral mismatch corrections is modified to account for changes of current due to temperature; this modification completely eliminates the need for short-circuit-current temperature coefficients. An example calculation is provided to demonstrate use of the new translation.
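
    In essence, the translation folds the temperature dependence into the quantum efficiency inside the spectral integral, so a single correction factor covers both the spectral and the thermal change. The Python sketch below integrates an entirely made-up QE curve and derivative over two made-up spectra to produce such a combined factor; every curve and number is an illustrative assumption.

        import numpy as np

        wl = np.linspace(300e-9, 1200e-9, 400)       # wavelength grid (m)
        q, h, c = 1.602e-19, 6.626e-34, 2.998e8

        qe25 = 0.9 * np.sin(np.pi * (wl - 300e-9) / 900e-9)   # QE at 25 C (assumed)
        dqe_dT = 2e-4 * (wl / 1200e-9)               # dQE/dT in 1/K (assumed)

        def isc(spectrum, T):
            """Short-circuit current from spectral irradiance (W m^-2 m^-1) at
            cell temperature T, with QE(wl, T) = QE25 + dQE/dT * (T - 25)."""
            qe = qe25 + dqe_dT * (T - 25.0)
            return q * np.trapz(qe * spectrum * wl / (h * c), wl)

        ref = 1.0e9 * np.exp(-((wl - 550e-9) / 200e-9) ** 2)   # reference spectrum
        test = 0.9e9 * np.exp(-((wl - 600e-9) / 220e-9) ** 2)  # operating spectrum

        # translate a current measured at (test spectrum, 45 C) to (ref, 25 C)
        M = isc(ref, 25.0) / isc(test, 45.0)
        print(f"combined spectro-thermal correction factor M = {M:.4f}")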

  8. Factors affecting the sticking of insects on modified aircraft wings

    NASA Technical Reports Server (NTRS)

    Yi, O.; Chitsaz-Z, M. R.; Eiss, N. S.; Wightman, J. P.

    1988-01-01

    Previous work showed that the total number of insects sticking to an aluminum surface was reduced by coating the aluminum surface with elastomers. Due to a large number of possible experimental errors, no correlation between the modulus of elasticity of the elastomer and the total number of insects sticking to a given elastomer was obtained. One of the errors assumed to be introduced during the road test is a variable insect flux, so that the number of insects striking one surface might differ from that striking another sample. To eliminate this source of error, the road test used to collect insects was simulated in a laboratory by developing an insect-impacting technique using a pipe and high-pressure compressed air. The insects are accelerated by a compressed-air gun to high velocities and are then impacted onto a stationary target on which the sample is mounted. The velocity of an object exiting from the pipe was determined, and the technique was further improved to obtain a uniform air velocity distribution.

  9. Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality.

    PubMed

    Bishara, Anthony J; Hittner, James B

    2015-10-01

    It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared with its major alternatives, including the Spearman rank-order correlation, the bootstrap estimate, the Box-Cox transformation family, and a general normalizing transformation (i.e., rankit), as well as to various bias adjustments. Nonnormality caused the correlation coefficient to be inflated by up to +.14, particularly when the nonnormality involved heavy-tailed distributions. Traditional bias adjustments worsened this problem, further inflating the estimate. The Spearman and rankit correlations eliminated this inflation and provided conservative estimates. Rankit also minimized random error for most sample sizes, except for the smallest samples ( n = 10), where bootstrapping was more effective. Overall, results justify the use of carefully chosen alternatives to the Pearson correlation when normality is violated.

  10. Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality

    PubMed Central

    Hittner, James B.

    2014-01-01

    It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared with its major alternatives, including the Spearman rank-order correlation, the bootstrap estimate, the Box–Cox transformation family, and a general normalizing transformation (i.e., rankit), as well as to various bias adjustments. Nonnormality caused the correlation coefficient to be inflated by up to +.14, particularly when the nonnormality involved heavy-tailed distributions. Traditional bias adjustments worsened this problem, further inflating the estimate. The Spearman and rankit correlations eliminated this inflation and provided conservative estimates. Rankit also minimized random error for most sample sizes, except for the smallest samples (n = 10), where bootstrapping was more effective. Overall, results justify the use of carefully chosen alternatives to the Pearson correlation when normality is violated. PMID:29795841
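
    The rankit transform mentioned in both versions of this record is a one-liner: replace each value by the normal quantile of its (shifted) rank, then correlate as usual. A small Python comparison on heavy-tailed synthetic data is sketched below; the lognormal data generator is an assumption chosen only to make the estimators disagree.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)

        def rankit(x):
            # rank-based inverse-normal (rankit) transform
            r = stats.rankdata(x)
            return stats.norm.ppf((r - 0.5) / len(x))

        n = 200
        z = rng.standard_normal(n)                    # shared latent factor
        x = np.exp(z + 0.5 * rng.standard_normal(n))  # heavy-tailed observables
        y = np.exp(z + 0.5 * rng.standard_normal(n))

        print("Pearson  r = %.3f" % stats.pearsonr(x, y)[0])
        print("Spearman r = %.3f" % stats.spearmanr(x, y)[0])
        print("rankit   r = %.3f" % stats.pearsonr(rankit(x), rankit(y))[0])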

  11. Accurate Micro-Tool Manufacturing by Iterative Pulsed-Laser Ablation

    NASA Astrophysics Data System (ADS)

    Warhanek, Maximilian; Mayr, Josef; Dörig, Christian; Wegener, Konrad

    2017-12-01

    Iterative processing solutions, including multiple cycles of material removal and measurement, are capable of achieving higher geometric accuracy by compensating for most deviations manifesting directly on the workpiece. Remaining error sources are the measurement uncertainty and the repeatability of the material-removal process including clamping errors. Due to the lack of processing forces, process fluids and wear, pulsed-laser ablation has proven high repeatability and can be realized directly on a measuring machine. This work takes advantage of this possibility by implementing an iterative, laser-based correction process for profile deviations registered directly on an optical measurement machine. This way efficient iterative processing is enabled, which is precise, applicable for all tool materials including diamond and eliminates clamping errors. The concept is proven by a prototypical implementation on an industrial tool measurement machine and a nanosecond fibre laser. A number of measurements are performed on both the machine and the processed workpieces. Results show production deviations within 2 μm diameter tolerance.

  12. Eliminating US hospital medical errors.

    PubMed

    Kumar, Sameer; Steinebach, Marc

    2008-01-01

    Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and to present a closed-loop, mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams, and devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate to the six-sigma level. Additionally, designing as many redundancies as possible into the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing, and improving their education related to the quality of service delivery, to minimize clinical errors. This will lead to an increase in fixed costs, especially in the shorter time frame. This paper focuses the additional attention needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors, which will enable hospital managers to increase their hospital's profitability in the long run and also ensure patient safety.

  13. Age-Related Eye Diseases and Visual Impairment Among U.S. Adults

    PubMed Central

    Chou, Chiu-Fang; Cotch, Mary Frances; Vitale, Susan; Zhang, Xinzhi; Klein, Ronald; Friedman, David S.; Klein, Barbara E.K.; Saaddine, Jinan B.

    2014-01-01

    Background Visual impairment is a common health-related disability in the U.S. The association between clinical measurements of age-related eye diseases and visual impairment in data from a national survey has not been reported. Purpose To examine common eye conditions and other correlates associated with visual impairment in the U.S. Methods Data from the 2005–2008 National Health and Nutrition Examination Survey of 5222 Americans aged ≥40 years were analyzed in 2012 for visual impairment (presenting distance visual acuity worse than 20/40 in the better-seeing eye), and visual impairment not due to refractive error (distance visual acuity worse than 20/40 after refraction). Diabetic retinopathy (DR) and age-related macular degeneration (AMD) were assessed from retinal fundus images; glaucoma was assessed from two successive frequency-doubling tests and a cup-to-disc ratio measurement. Results Prevalence of visual impairment and of visual impairment not due to refractive error was 7.5% (95% CI=6.9%, 8.1%) and 2.0% (1.7%, 2.3%), respectively. The prevalence of visual impairment not due to refractive error was significantly higher among people with AMD (2.2%) compared to those without AMD (0.8%), or with DR (3.5%) compared to those without DR (1.2%). Independent predictive factors of visual impairment not due to refractive error were AMD (OR=4.52, 95% CI=2.50, 8.17); increasing age (OR=1.09 per year, 95% CI=1.06, 1.13); and less than a high school education (OR=2.99, 95% CI=1.18, 7.55). Conclusions Visual impairment is a public health problem in the U.S. Visual impairment in two thirds of adults could be eliminated with refractive correction. Screening of the older population may identify adults at increased risk of visual impairment due to eye diseases. PMID:23790986

  14. Speech therapy for errors secondary to cleft palate and velopharyngeal dysfunction.

    PubMed

    Kummer, Ann W

    2011-05-01

    Individuals with a history of cleft lip/palate or velopharyngeal dysfunction may demonstrate any combination of speech sound errors, hypernasality, and nasal emission. Speech sound distortion can also occur due to other structural anomalies, including malocclusion. Whenever there are structural anomalies, speech can be affected by obligatory distortions or compensatory errors. Obligatory distortions (including hypernasality due to velopharyngeal insufficiency) are caused by abnormal structure and not by abnormal function. Therefore, surgery or other forms of physical management are needed for correction. In contrast, speech therapy is indicated for compensatory articulation productions where articulation placement is changed in response to the abnormal structure. Speech therapy is much more effective if it is done after normalization of the structure. When speech therapy is appropriate, the techniques involve methods to change articulation placement using standard articulation therapy principles. Oral-motor exercises, including the use of blowing and sucking, are never indicated to improve velopharyngeal function. The purpose of this article is to provide information regarding when speech therapy is appropriate for individuals with a history of cleft palate or other structural anomalies and when physical management is needed. In addition, some specific therapy techniques are offered for the elimination of common compensatory articulation productions. © Thieme Medical Publishers.

  15. Eddy-covariance data with low signal-to-noise ratio: time-lag determination, uncertainties and limit of detection

    NASA Astrophysics Data System (ADS)

    Langford, B.; Acton, W.; Ammann, C.; Valach, A.; Nemitz, E.

    2015-10-01

    All eddy-covariance flux measurements are associated with random uncertainties which are a combination of sampling error due to natural variability in turbulence and sensor noise. The former is the principal error for systems where the signal-to-noise ratio of the analyser is high, as is usually the case when measuring fluxes of heat, CO2 or H2O. Where signal is limited, which is often the case for measurements of other trace gases and aerosols, instrument uncertainties dominate. Here, we are applying a consistent approach based on auto- and cross-covariance functions to quantify the total random flux error and the random error due to instrument noise separately. As with previous approaches, the random error quantification assumes that the time lag between wind and concentration measurement is known. However, if combined with commonly used automated methods that identify the individual time lag by looking for the maximum in the cross-covariance function of the two entities, analyser noise additionally leads to a systematic bias in the fluxes. Combining data sets from several analysers and using simulations, we show that the method of time-lag determination becomes increasingly important as the magnitude of the instrument error approaches that of the sampling error. The flux bias can be particularly significant for disjunct data, whereas using a prescribed time lag eliminates these effects (provided the time lag does not fluctuate unduly over time). We also demonstrate that when sampling at higher elevations, where low frequency turbulence dominates and covariance peaks are broader, both the probability and magnitude of bias are magnified. We show that the statistical significance of noisy flux data can be increased (limit of detection can be decreased) by appropriate averaging of individual fluxes, but only if systematic biases are avoided by using a prescribed time lag. Finally, we make recommendations for the analysis and reporting of data with low signal-to-noise and their associated errors.

  16. Eddy-covariance data with low signal-to-noise ratio: time-lag determination, uncertainties and limit of detection

    NASA Astrophysics Data System (ADS)

    Langford, B.; Acton, W.; Ammann, C.; Valach, A.; Nemitz, E.

    2015-03-01

    All eddy-covariance flux measurements are associated with random uncertainties which are a combination of sampling error due to natural variability in turbulence and sensor noise. The former is the principal error for systems where the signal-to-noise ratio of the analyser is high, as is usually the case when measuring fluxes of heat, CO2 or H2O. Where signal is limited, which is often the case for measurements of other trace gases and aerosols, instrument uncertainties dominate. Here, we apply a consistent approach based on auto- and cross-covariance functions to quantify the total random flux error and the random error due to instrument noise separately. As with previous approaches, the random error quantification assumes that the time-lag between wind and concentration measurement is known. However, if combined with commonly used automated methods that identify the individual time-lag by looking for the maximum in the cross-covariance function of the two entities, analyser noise additionally leads to a systematic bias in the fluxes. Combining datasets from several analysers and using simulations, we show that the method of time-lag determination becomes increasingly important as the magnitude of the instrument error approaches that of the sampling error. The flux bias can be particularly significant for disjunct data, whereas using a prescribed time-lag eliminates these effects (provided the time-lag does not fluctuate unduly over time). We also demonstrate that when sampling at higher elevations, where low frequency turbulence dominates and covariance peaks are broader, both the probability and magnitude of bias are magnified. We show that the statistical significance of noisy flux data can be increased (limit of detection can be decreased) by appropriate averaging of individual fluxes, but only if systematic biases are avoided by using a prescribed time-lag. Finally, we make recommendations for the analysis and reporting of data with low signal-to-noise and their associated errors.
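
    The key claim, that searching the cross-covariance function for its maximum biases noisy fluxes while a prescribed time lag does not, can be checked with a toy Monte Carlo experiment. In the Python sketch below the true flux, noise level, lag window and sampling rate are all invented for illustration; the effect, not the numbers, is the point.

        import numpy as np

        rng = np.random.default_rng(7)
        n, true_lag, runs = 18000, 12, 100            # 30 min at 10 Hz; lag 1.2 s
        lags = np.arange(0, 41)

        def lag_cov(w, c, k):
            # covariance of w(t) with the concentration delayed by k samples
            return np.mean(w[:w.size - k] * c[k:]) if k else np.mean(w * c)

        searched, prescribed = [], []
        for _ in range(runs):
            w = rng.standard_normal(n)                            # vertical wind
            c = np.roll(0.05 * w, true_lag) + 5.0 * rng.standard_normal(n)
            covs = np.array([lag_cov(w, c, k) for k in lags])
            searched.append(covs[np.argmax(np.abs(covs))])        # automated search
            prescribed.append(lag_cov(w, c, true_lag))            # fixed time lag

        print("true flux 0.0500 | lag search %.4f | prescribed %.4f"
              % (np.mean(searched), np.mean(prescribed)))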

  17. The High-Resolution Wave-Propagation Method Applied to Meso- and Micro-Scale Flows

    NASA Technical Reports Server (NTRS)

    Ahmad, Nashat N.; Proctor, Fred H.

    2012-01-01

    The high-resolution wave-propagation method for computing nonhydrostatic atmospheric flows on meso- and micro-scales is described. The design and implementation of the Riemann solver used for computing the Godunov fluxes is discussed in detail. The method uses a flux-based wave decomposition in which the flux differences are written directly as a linear combination of the right eigenvectors of the hyperbolic system. The two advantages of the technique are: 1) the need for an explicit definition of the Roe matrix is eliminated, and 2) the inclusion of the source term due to gravity does not result in discretization errors. The resulting flow solver is conservative and able to resolve regions of large gradients without introducing dispersion errors. The methodology is validated against exact analytical solutions and benchmark cases for non-hydrostatic atmospheric flows.

  18. Better band gaps for wide-gap semiconductors from a locally corrected exchange-correlation potential that nearly eliminates self-interaction errors

    DOE PAGES

    Singh, Prashant; Harbola, Manoj K.; Johnson, Duane D.

    2017-09-08

    Here, this work constitutes a comprehensive and improved account of the electronic-structure and mechanical properties of silicon-nitride (Si3N4) polymorphs via the van Leeuwen and Baerends (LB) exchange-corrected local density approximation (LDA), which enforces the exact asymptotic behavior of the exchange potential. The calculated lattice constant, bulk modulus, and electronic band structure of Si3N4 polymorphs are in good agreement with experimental results. We also show that, for a single electron in a hydrogen atom, spherical well, or harmonic oscillator, the LB-corrected LDA reduces the (self-interaction) error in the total energy to ~10%, a factor of three to four lower than standard LDA, due to a dramatically improved representation of the exchange potential.

  19. Errors in radiation oncology: A study in pathways and dosimetric impact

    PubMed Central

    Drzymala, Robert E.; Purdy, James A.; Michalski, Jeff

    2005-01-01

    As complexity for treating patients increases, so does the risk of error. Some publications have suggested that record and verify (R&V) systems may contribute in propagating errors. Direct data transfer has the potential to eliminate most, but not all, errors. And although the dosimetric consequences may be obvious in some cases, a detailed study does not exist. In this effort, we examined potential errors in terms of scenarios, pathways of occurrence, and dosimetry. Our goal was to prioritize error prevention according to likelihood of event and dosimetric impact. For conventional photon treatments, we investigated errors of incorrect source‐to‐surface distance (SSD), energy, omitted wedge (physical, dynamic, or universal) or compensating filter, incorrect wedge or compensating filter orientation, improper rotational rate for arc therapy, and geometrical misses due to incorrect gantry, collimator or table angle, reversed field settings, and setup errors. For electron beam therapy, errors investigated included incorrect energy, incorrect SSD, along with geometric misses. For special procedures we examined errors for total body irradiation (TBI, incorrect field size, dose rate, treatment distance) and LINAC radiosurgery (incorrect collimation setting, incorrect rotational parameters). Likelihood of error was determined and subsequently rated according to our history of detecting such errors. Dosimetric evaluation was conducted by using dosimetric data, treatment plans, or measurements. We found geometric misses to have the highest error probability. They most often occurred due to improper setup via coordinate shift errors or incorrect field shaping. The dosimetric impact is unique for each case and depends on the proportion of fields in error and volume mistreated. These errors were short‐lived due to rapid detection via port films. The most significant dosimetric error was related to a reversed wedge direction. This may occur due to incorrect collimator angle or wedge orientation. For parallel‐opposed 60° wedge fields, this error could be as high as 80% to a point off‐axis. Other examples of dosimetric impact included the following: SSD, ~2%/cm for photons or electrons; photon energy (6 MV vs. 18 MV), on average 16% depending on depth, electron energy, ~0.5cm of depth coverage per MeV (mega‐electron volt). Of these examples, incorrect distances were most likely but rapidly detected by in vivo dosimetry. Errors were categorized by occurrence rate, methods and timing of detection, longevity, and dosimetric impact. Solutions were devised according to these criteria. To date, no one has studied the dosimetric impact of global errors in radiation oncology. Although there is heightened awareness that with increased use of ancillary devices and automation, there must be a parallel increase in quality check systems and processes, errors do and will continue to occur. This study has helped us identify and prioritize potential errors in our clinic according to frequency and dosimetric impact. For example, to reduce the use of an incorrect wedge direction, our clinic employs off‐axis in vivo dosimetry. To avoid a treatment distance setup error, we use both vertical table settings and optical distance indicator (ODI) values to properly set up fields. As R&V systems become more automated, more accurate and efficient data transfer will occur. This will require further analysis. 
Finally, we have begun examining potential intensity‐modulated radiation therapy (IMRT) errors according to the same criteria. PMID:16143793
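
    The quoted sensitivity of roughly 2% per centimeter of source-to-surface distance (SSD) error for photons follows directly from the inverse-square law, as the back-of-envelope Python check below shows (nominal SSD and depth are assumed values).

        ssd, depth = 100.0, 10.0                # nominal SSD and depth (cm, assumed)
        for err in (1.0, 2.0, 3.0):             # SSD set too long by err cm
            ratio = ((ssd + depth) / (ssd + err + depth)) ** 2
            print(f"SSD off by {err:.0f} cm -> dose change {100 * (ratio - 1):+.1f} %")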

  20. Flexible Automatic Discretization for Finite Differences: Eliminating the Human Factor

    NASA Astrophysics Data System (ADS)

    Pranger, Casper

    2017-04-01

    In the geophysical numerical modelling community, finite differences are (in part due to their small footprint) a popular spatial discretization method for PDEs in the regular-shaped continuum that is the earth. However, they rapidly become prone to programming mistakes as the physics increases in complexity. To eliminate opportunities for human error, we have designed an automatic discretization algorithm using Wolfram Mathematica, in which the user supplies symbolic PDEs, the number of spatial dimensions, and a choice of symbolic boundary conditions, and the script transforms this information into matrix- and right-hand-side rules ready for use in a C++ code that will accept them. The symbolic PDEs are further used to automatically develop and perform manufactured-solution benchmarks, ensuring physical fidelity at all stages while providing pragmatic targets for numerical accuracy. We find that this procedure greatly accelerates code development and provides a great deal of flexibility in one's choice of physics.
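
    A small taste of the same idea is available in Python: SymPy can derive finite-difference weights symbolically from a stencil, which is the kind of step the authors' Mathematica script automates (their actual pipeline, boundary handling and C++ interface are of course far more extensive).

        import sympy as sp

        x, h = sp.symbols("x h")
        stencil = [x - h, x, x + h]

        # finite_diff_weights(derivative_order, points, evaluation_point)
        w = sp.finite_diff_weights(2, stencil, x)[2][-1]
        print(w)    # classic 3-point second-derivative weights: 1/h**2, -2/h**2, 1/h**2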

  1. [Statistical Process Control (SPC) can help prevent treatment errors without increasing costs in radiotherapy].

    PubMed

    Govindarajan, R; Llueguera, E; Melero, A; Molero, J; Soler, N; Rueda, C; Paradinas, C

    2010-01-01

    Statistical Process Control (SPC) was applied to monitor patient set-up in radiotherapy and, when the measured set-up error values indicated a loss of process stability, its root cause was identified and eliminated to prevent set-up errors. Set-up errors were measured for the medial-lateral (ml), cranial-caudal (cc) and anterior-posterior (ap) dimensions and then the upper control limits were calculated. Once the control limits were known and the range variability was acceptable, treatment set-up errors were monitored using subgroups of 3 patients, three times each shift. These values were plotted on a control chart in real time. Control limit values showed that the existing variation was acceptable. Set-up errors, measured and plotted on an X chart, helped monitor the set-up process stability; if and when stability was lost, treatment was interrupted, the particular cause responsible for the non-random pattern was identified, and corrective action was taken before proceeding with the treatment. The SPC protocol focuses on controlling the variability due to assignable causes instead of focusing on patient-to-patient variability, which normally does not exist. Compared to weekly sampling of set-up error in each and every patient, which may only ensure that just those sampled sessions were set up correctly, the SPC method enables set-up error prevention in all treatment sessions for all patients and, at the same time, reduces the control costs. Copyright © 2009 SECA. Published by Elsevier España. All rights reserved.
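
    For readers unfamiliar with the charting mechanics: with subgroups of three patients, X-bar control limits follow from the mean subgroup range via the standard constant A2 = 1.023. The Python sketch below builds such a chart from simulated set-up errors and flags the loss of stability introduced half-way through; all numbers are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(8)
        A2 = 1.023                                       # X-bar chart constant, n = 3

        subgroups = rng.normal(0.0, 2.0, size=(30, 3))   # set-up errors (mm, assumed)
        subgroups[20:] += 3.5                            # drift from an assignable cause

        baseline = subgroups[:15]                        # in-control data for limits
        center = baseline.mean(axis=1).mean()
        rbar = np.ptp(baseline, axis=1).mean()           # mean subgroup range
        ucl, lcl = center + A2 * rbar, center - A2 * rbar

        xbar = subgroups.mean(axis=1)
        flagged = np.where((xbar > ucl) | (xbar < lcl))[0]
        print(f"control limits: [{lcl:.2f}, {ucl:.2f}] mm; flagged subgroups: {flagged}")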

  2. Application of based on improved wavelet algorithm in fiber temperature sensor

    NASA Astrophysics Data System (ADS)

    Qi, Hui; Tang, Wenjuan

    2018-03-01

    Accurate temperature measurement is a crucial point in a distributed optical fiber temperature sensor. In order to solve the problem of temperature measurement error due to the weak Raman scattering signal and strong noise in the system, a new algorithm based on an improved wavelet method is presented. On the basis of the traditional modulus-maxima wavelet algorithm, signal correlation is considered to improve the ability to separate signal from noise; meanwhile, this is combined with an adaptive wavelet decomposition scale method to eliminate the signal loss or unfiltered noise caused by a mismatched scale. The filtering performance of the algorithm is compared with that of other methods in Matlab. Finally, a 3 km distributed optical fiber temperature sensing system is used for verification. Experimental results show that the accuracy of the measured temperature is generally improved by 0.5233.
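
    As background, the sketch below shows a plain wavelet-threshold denoiser (PyWavelets) applied to a simulated noisy fiber temperature trace; it uses a fixed decomposition level and the universal threshold, i.e., exactly the kind of baseline the correlation- and scale-adaptive refinements described above are meant to improve. All signal parameters are assumptions.

        import numpy as np
        import pywt

        rng = np.random.default_rng(9)
        z = np.linspace(0, 3000, 4096)                    # position along fiber (m)
        temp = 25 + 15 * np.exp(-((z - 1800) / 60) ** 2)  # hot spot (degC, assumed)
        noisy = temp + 2.0 * rng.standard_normal(z.size)  # weak Raman-derived trace

        coeffs = pywt.wavedec(noisy, "db4", level=6)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise from finest scale
        thr = sigma * np.sqrt(2 * np.log(noisy.size))     # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
        denoised = pywt.waverec(coeffs, "db4")[:z.size]

        print(f"rms error: raw {np.std(noisy - temp):.2f} degC, "
              f"denoised {np.std(denoised - temp):.2f} degC")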

  3. Entropy-based gene ranking without selection bias for the predictive classification of microarray data.

    PubMed

    Furlanello, Cesare; Serafini, Maria; Merler, Stefano; Jurman, Giuseppe

    2003-11-06

    We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too small gene subsets (an effect known as the selection bias, in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process). With E-RFE, we speed up the recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weights distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme, and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Without a decrease of classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias, and providing additional diagnostic indicators of gene importance.
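
    A loose sketch of the chunk-elimination idea is given below: fit a linear SVM, measure how flat the distribution of absolute weights is with a normalized entropy, and discard a chunk whose size grows with that entropy, falling back toward one-at-a-time RFE as the ranking sharpens. The entropy heuristic, chunk rule and all parameters here are an illustrative reading of E-RFE, not the published algorithm.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.svm import LinearSVC

        def weight_entropy(w, bins=20):
            p, _ = np.histogram(np.abs(w), bins=bins)
            p = p[p > 0] / p.sum()
            return -np.sum(p * np.log(p)) / np.log(bins)   # normalized to [0, 1]

        X, y = make_classification(n_samples=80, n_features=2000,
                                   n_informative=20, random_state=0)
        alive = np.arange(X.shape[1])

        while alive.size > 20:
            w = LinearSVC(max_iter=5000).fit(X[:, alive], y).coef_.ravel()
            H = weight_entropy(w)
            n_drop = max(1, int(alive.size * 0.4 * H))     # big chunks while H is high
            order = np.argsort(np.abs(w))                  # least important first
            alive = alive[order[n_drop:]]

        print("selected features:", np.sort(alive))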

  4. Noninvasive liver iron measurements with a room-temperature susceptometer

    PubMed Central

    Avrin, W F; Kumar, S

    2011-01-01

    Magnetic susceptibility measurements on the liver can quantify iron overload accurately and noninvasively. However, established susceptometer designs, using Superconducting QUantum Interference Devices (SQUIDs) that work in liquid helium, have been too expensive for widespread use. This paper presents a less expensive liver susceptometer that works at room temperature. This system uses oscillating magnetic fields, which are produced and detected by copper coils. The coil design cancels the signal from the applied field, eliminating noise from fluctuations of the source-coil current and sensor gain. The coil unit moves toward and away from the patient at 1 Hz, cancelling drifts due to thermal expansion of the coils. Measurements on a water phantom indicated instrumental errors less than 30 μg of iron per gram of wet liver tissue, which is small compared with other errors due to the response of the patient’s body. Liver iron measurements on eight thalassemia patients yielded a correlation coefficient r=0.98 between the room-temperature susceptometer and an existing SQUID. These results indicate that the fundamental accuracy limits of the room-temperature susceptometer are similar to those of the SQUID. PMID:17395991

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko

    Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
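
    To make the coding step concrete, the fragment below applies a systematic Hamming(7,4) code to the small encrypted portion of a frame only. The XOR keystream is a stand-in for the real cipher, the frame split is invented, and the paper's signaling transformation of the remaining bits is only indicated by a comment.

        import numpy as np

        G = np.array([[1, 0, 0, 0, 1, 1, 0],    # systematic Hamming(7,4) generator
                      [0, 1, 0, 0, 1, 0, 1],
                      [0, 0, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])

        def hamming74_encode(bits):
            return (np.asarray(bits).reshape(-1, 4) @ G) % 2

        rng = np.random.default_rng(10)
        frame = rng.integers(0, 2, 32)           # plaintext frame (toy size)
        head, tail = frame[:8], frame[8:]        # only the head gets encrypted

        key = rng.integers(0, 2, 8)
        cipher_head = head ^ key                 # stand-in for the real cipher

        coded_head = hamming74_encode(cipher_head).ravel()   # FEC on encrypted part
        # ... the tail would go through the signaling transformation instead ...
        tx = np.concatenate([coded_head, tail])
        print(f"{coded_head.size} coded bits protect {head.size} encrypted bits")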

  6. Shape of the ocean surface and implications for the Earth's interior: GEOS-3 results

    NASA Technical Reports Server (NTRS)

    Chapman, M. E.; Talwani, M.; Kahle, H.; Bodine, J. H.

    1979-01-01

    A new set of 1 deg x 1 deg mean free air anomalies was used to construct a gravimetric geoid by Stokes' formula for the Indian Ocean. Using this 1 deg x 1 deg gravimetric geoid, comparisons were made with GEOS-3 radar altimeter estimates of geoid height. Most commonly there were constant offsets and long wavelength discrepancies between the two data sets; probable causes included radial orbit error, scale errors in the geoid, and bias errors in altitude determination. Across the Aleutian Trench the 1 deg x 1 deg gravimetric geoids did not capture the entire depth of the geoid anomaly, due to averaging over 1 deg squares and subsequent aliasing of the data. After adjustment of the GEOS-3 data to eliminate long wavelength discrepancies, agreement between the altimeter geoid and the gravimetric geoid was between 1.7 and 2.7 meters in rms error. For purposes of geological interpretation, techniques were developed to directly compute the geoid anomaly over models of density within the Earth. In observing the results from satellite altimetry it was possible to identify geoid anomalies over different geologic features in the ocean. Examples and significant results are reported.

  7. Refraction error correction for deformation measurement by digital image correlation at elevated temperature

    NASA Astrophysics Data System (ADS)

    Su, Yunquan; Yao, Xuefeng; Wang, Shen; Ma, Yinji

    2017-03-01

    An effective correction model is proposed to eliminate the refraction error caused by the optical window of a furnace in digital image correlation (DIC) deformation measurement in a high-temperature environment. First, a theoretical correction model with the corresponding error correction factor is established to eliminate the refraction error induced by double-deck optical glass in DIC deformation measurement. Second, a high-temperature DIC experiment using a chromium-nickel austenite stainless steel specimen is performed to verify the effectiveness of the correction model through correlation calculations under two different conditions (with and without the optical glass). Finally, both the full-field and the divisional displacement results with refraction influence are corrected by the theoretical model and then compared to the displacement results extracted from the images without refraction influence. The experimental results demonstrate that the proposed theoretical correction model can effectively improve the measurement accuracy of the DIC method by reducing the refraction errors in measured full-field displacements in a high-temperature environment.
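
    The size of the effect being corrected is easy to estimate: a flat window of thickness t and index n displaces an obliquely viewed ray laterally by t*sin(theta)*(1 - cos(theta)/sqrt(n^2 - sin^2(theta))). The Python helper below evaluates this textbook single-plate formula for assumed window parameters; the paper's model for double-deck glass is more detailed.

        import numpy as np

        def lateral_shift(theta, t, n):
            """Lateral ray displacement (units of t) through a flat plate of
            thickness t and index n at incidence angle theta (rad)."""
            return t * np.sin(theta) * (1 - np.cos(theta) / np.sqrt(n**2 - np.sin(theta)**2))

        theta = np.deg2rad(5.0)                      # viewing 5 deg off-normal (assumed)
        shift_mm = lateral_shift(theta, 10.0, 1.5)   # 10 mm window, n = 1.5 (assumed)
        print(f"apparent in-plane shift: {1000 * shift_mm:.0f} um")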

  8. Temperature-dependent errors in nuclear lattice simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Dean; Thomson, Richard

    2007-06-15

    We study the temperature dependence of discretization errors in nuclear lattice simulations. We find that for systems with strong attractive interactions the predominant error arises from the breaking of Galilean invariance. We propose a local 'well-tempered' lattice action which eliminates much of this error. The well-tempered action can be readily implemented in lattice simulations for nuclear systems as well as cold atomic Fermi systems.

  9. Generalized Variance Function Applications in Forestry

    Treesearch

    James Alegria; Charles T. Scott; Charles T. Scott

    1991-01-01

    Adequately predicting the sampling errors of tabular data can reduce printing costs by eliminating the need to publish separate sampling error tables. Two generalized variance functions (GVFs) found in the literature and three GVFs derived for this study were evaluated for their ability to predict the sampling error of tabular forestry estimates. The recommended GVFs...
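
    For background, a common GVF form in the survey literature (given here as an illustration; the study's own functions are not reproduced) models the relative variance of a tabular estimate \hat{X} as

        V^{2}(\hat{X}) = a + \frac{b}{\hat{X}} ,

    with the constants a and b fitted across many published estimates, so that a table's sampling error can be predicted from the estimate itself instead of being looked up in a separate table.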

  10. Fixing Stellarator Magnetic Surfaces

    NASA Astrophysics Data System (ADS)

    Hanson, James D.

    1999-11-01

    Magnetic surfaces are a perennial issue for stellarators. The design heuristic of finding a magnetic field with zero perpendicular component on a specified outer surface often yields inner magnetic surfaces with very small resonant islands. However, magnetic fields in the laboratory are not design fields. Island-causing errors can arise from coil placement errors, stray external fields, and design inadequacies such as ignoring coil leads and incomplete characterization of current distributions within the coil pack. The problem addressed is how to eliminate such error-caused islands. I take a perturbation approach, where the zero order field is assumed to have good magnetic surfaces, and comes from a VMEC equilibrium. The perturbation field consists of error and correction pieces. The error correction method is to determine the correction field so that the sum of the error and correction fields gives zero island size at specified rational surfaces. It is particularly important to correctly calculate the island size for a given perturbation field. The method works well with many correction knobs, and a Singular Value Decomposition (SVD) technique is used to determine minimal corrections necessary to eliminate islands.

  11. Method to suppress DDFS spurious signals in a frequency-hopping synthesizer with DDFS-driven PLL architecture.

    PubMed

    Kwon, Kun-Sup; Yoon, Won-Sang

    2010-01-01

    In this paper we propose a method of removing from the synthesizer output those spurious signals caused by quasi-amplitude modulation and the superposition effect in a frequency-hopping synthesizer with direct digital frequency synthesizer (DDFS)-driven phase-locked loop (PLL) architecture, which has the advantages of high frequency resolution, fast transition time, and small size. These spurious signals depend on the normalized frequency of the DDFS and can be dominant if they fall within the PLL loop bandwidth. We suggest that such signals can be eliminated by purposefully creating frequency errors in the developed synthesizer.

  12. The discrete-time compensated Kalman filter

    NASA Technical Reports Server (NTRS)

    Lee, W. H.; Athans, M.

    1978-01-01

    A suboptimal dynamic compensator to be used in conjunction with the ordinary discrete-time Kalman filter was derived. The resulting compensated Kalman filter has the property that steady-state bias estimation errors, resulting from modelling errors, are eliminated.
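
    One standard way to obtain such bias-free estimates is to append the unknown measurement bias to the state vector so the filter estimates it alongside the signal. The Python sketch below shows this state-augmentation idea under assumed scalar dynamics; it is a hedged illustration, not necessarily the authors' compensator.

      import numpy as np

      # State [signal, measurement bias]; the signal decays with known
      # dynamics while the bias is constant, which makes them separable.
      F = np.array([[0.9, 0.0],
                    [0.0, 1.0]])
      Hm = np.array([[1.0, 1.0]])        # measurement = signal + bias + noise
      Q = np.diag([1e-3, 1e-8])          # tiny process noise on the bias
      R = np.array([[0.09]])

      def kf_step(x, P, z):
          x = F @ x                      # predict
          P = F @ P @ F.T + Q
          S = Hm @ P @ Hm.T + R          # update
          K = P @ Hm.T @ np.linalg.inv(S)
          x = x + K @ (z - Hm @ x)
          P = (np.eye(2) - K @ Hm) @ P
          return x, P

      rng = np.random.default_rng(0)
      x, P, sig, bias = np.zeros((2, 1)), np.eye(2), 5.0, 0.5
      for _ in range(300):
          sig = 0.9 * sig + rng.normal(0.0, 0.03)
          x, P = kf_step(x, P, np.array([[sig + bias + rng.normal(0.0, 0.3)]]))
      # x[0] tracks the signal and x[1] converges toward the 0.5 bias, so the
      # bias no longer shows up as a steady-state estimation error.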

  13. High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu

    2017-05-01

    The inertial navigation system is a core component of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements on all three axes, improving system accuracy, but it cannot eliminate the errors caused by misalignment angles and scale factor error. Moreover, discrete calibration methods cannot meet the requirements for high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the effect of calibration error over one modulation period and presents a new systematic self-calibration method for a dual-axis rotation-modulating RLG-INS, together with a procedure for carrying it out. The results of a self-calibration simulation experiment show that this scheme can estimate all the errors in the calibration error model: the calibration precision of the inertial sensors' scale factor error is better than 1 ppm and that of the misalignment is better than 5″. These results validate the systematic self-calibration method and demonstrate its importance for improving the accuracy of a dual-axis rotation inertial navigation system with mechanically dithered ring laser gyroscopes.

  14. Blind system identification of two-thermocouple sensor based on cross-relation method.

    PubMed

    Li, Yanfeng; Zhang, Zhijie; Hao, Xiaojian

    2018-03-01

    In dynamic temperature measurement, the dynamic characteristics of the sensor affect the accuracy of the measurement results. Thermocouples are widely used for temperature measurement in harsh conditions due to their low cost, robustness, and reliability, but because of their thermal inertia there is a dynamic error in dynamic temperature measurement. To eliminate this dynamic error, a two-thermocouple sensor was used in this paper to measure dynamic gas temperature in constant-velocity flow environments. Blind system identification of the two-thermocouple sensor based on a cross-relation method was carried out. A particle swarm optimization algorithm was used to estimate the time constants of the two thermocouples and compared with a grid-based search method. The method was validated on experimental equipment built around a high-temperature furnace, and the input dynamic temperature was reconstructed using the output data of the thermocouple with the smaller time constant.
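
    The cross-relation idea is easy to sketch: if both thermocouples see the same gas temperature, then filtering the first sensor's output through the second sensor's model must equal filtering the second output through the first's model, and this holds only at the true pair of time constants. The Python sketch below uses a plain grid search and assumed first-order sensor models; the paper's particle swarm optimization and experimental details are omitted.

      import numpy as np
      from itertools import product

      def lag(y, tau, dt):
          """First-order sensor model y' = (u - y)/tau, Euler discretized."""
          a = dt / (tau + dt)
          out = np.zeros_like(y)
          for k in range(1, len(y)):
              out[k] = out[k-1] + a * (y[k] - out[k-1])
          return out

      def cr_cost(y1, y2, t1, t2, dt):
          # cross-relation residual: h2 * y1 - h1 * y2 vanishes at the truth
          return np.sum((lag(y1, t2, dt) - lag(y2, t1, dt)) ** 2)

      dt, rng = 0.01, np.random.default_rng(0)
      u = rng.normal(size=2000).cumsum()           # stand-in gas temperature
      y1, y2 = lag(u, 0.05, dt), lag(u, 0.20, dt)  # the two thermocouples
      grid = np.linspace(0.01, 0.5, 50)
      t1, t2 = min(product(grid, grid), key=lambda p: cr_cost(y1, y2, *p, dt))
      # (t1, t2) lands on the true constants (0.05, 0.20)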

  15. Blind system identification of two-thermocouple sensor based on cross-relation method

    NASA Astrophysics Data System (ADS)

    Li, Yanfeng; Zhang, Zhijie; Hao, Xiaojian

    2018-03-01

    In dynamic temperature measurement, the dynamic characteristics of the sensor affect the accuracy of the measurement results. Thermocouples are widely used for temperature measurement in harsh conditions due to their low cost, robustness, and reliability, but because of their thermal inertia there is a dynamic error in dynamic temperature measurement. To eliminate this dynamic error, a two-thermocouple sensor was used in this paper to measure dynamic gas temperature in constant-velocity flow environments. Blind system identification of the two-thermocouple sensor based on a cross-relation method was carried out. A particle swarm optimization algorithm was used to estimate the time constants of the two thermocouples and compared with a grid-based search method. The method was validated on experimental equipment built around a high-temperature furnace, and the input dynamic temperature was reconstructed using the output data of the thermocouple with the smaller time constant.

  16. pKa prediction of monoprotic small molecules the SMARTS way.

    PubMed

    Lee, Adam C; Yu, Jing-Yu; Crippen, Gordon M

    2008-10-01

    Realizing favorable absorption, distribution, metabolism, elimination, and toxicity profiles is a necessity due to the high attrition rate of lead compounds in drug development today. The ability to accurately predict bioavailability can help save time and money during the screening and optimization processes. As several robust programs already exist for predicting logP, we have turned our attention to the fast and robust prediction of pKa for small molecules. Using curated data from the Beilstein Database and Lange's Handbook of Chemistry, we have created a decision tree based on a novel set of SMARTS strings that can accurately predict the pKa for monoprotic compounds with R² of 0.94 and root mean squared error of 0.68. Leave-some-out (10%) cross-validation achieved Q² of 0.91 and root mean squared error of 0.80.
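
    The core mechanism, matching SMARTS substructure patterns to route a molecule through a decision tree, can be sketched with RDKit. The patterns and pKa values below are illustrative placeholders, not the paper's fitted tree, and a flat first-match rule list stands in for the tree structure.

      from rdkit import Chem

      # (SMARTS pattern, predicted pKa): hypothetical rules for illustration.
      RULES = [
          ("[CX3](=O)[OX2H1]", 4.2),    # carboxylic acid
          ("c1ccccc1[OX2H1]",  10.0),   # phenol
          ("[NX3;H2][CX4]",    10.6),   # primary aliphatic amine
      ]

      def predict_pka(smiles):
          mol = Chem.MolFromSmiles(smiles)
          for smarts, pka in RULES:
              if mol.HasSubstructMatch(Chem.MolFromSmarts(smarts)):
                  return pka            # first matching leaf wins
          return None                   # no ionizable group recognized

      print(predict_pka("CC(=O)O"))     # acetic acid -> 4.2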

  17. Immortalization of normal human mammary epithelial cells in two steps by direct targeting of senescence barriers does not require gross genomic alterations

    DOE PAGES

    Garbe, James C.; Vrba, Lukas; Sputova, Klara; ...

    2014-10-29

    Telomerase reactivation and immortalization are critical for human carcinoma progression. However, little is known about the mechanisms controlling this crucial step, due in part to the paucity of experimentally tractable model systems that can examine human epithelial cell immortalization as it might occur in vivo. We achieved efficient non-clonal immortalization of normal human mammary epithelial cells (HMEC) by directly targeting the 2 main senescence barriers encountered by cultured HMEC. The stress-associated stasis barrier was bypassed using shRNA to p16INK4; replicative senescence due to critically shortened telomeres was bypassed in post-stasis HMEC by c-MYC transduction. Thus, 2 pathologically relevant oncogenic agents are sufficient to immortally transform normal HMEC. The resultant non-clonal immortalized lines exhibited normal karyotypes. Most human carcinomas contain genomically unstable cells, with widespread instability first observed in vivo in pre-malignant stages; in vitro, instability is seen as finite cells with critically shortened telomeres approach replicative senescence. Our results support our hypotheses that: (1) telomere-dysfunction induced genomic instability in pre-malignant finite cells may generate the errors required for telomerase reactivation and immortalization, as well as many additional “passenger” errors carried forward into resulting carcinomas; (2) genomic instability during cancer progression is needed to generate errors that overcome tumor suppressive barriers, but not required per se; bypassing the senescence barriers by direct targeting eliminated a need for genomic errors to generate immortalization. Achieving efficient HMEC immortalization, in the absence of “passenger” genomic errors, should facilitate examination of telomerase regulation during human carcinoma progression, and exploration of agents that could prevent immortalization.

  18. Reducing diagnostic errors in medicine: what's the goal?

    PubMed

    Graber, Mark; Gordon, Ruthanna; Franklin, Nancy

    2002-10-01

    This review considers the feasibility of reducing or eliminating the three major categories of diagnostic errors in medicine: "No-fault errors" occur when the disease is silent, presents atypically, or mimics something more common. These errors will inevitably decline as medical science advances, new syndromes are identified, and diseases can be detected more accurately or at earlier stages. These errors can never be eradicated, unfortunately, because new diseases emerge, tests are never perfect, patients are sometimes noncompliant, and physicians will inevitably, at times, choose the most likely diagnosis over the correct one, illustrating the concept of necessary fallibility and the probabilistic nature of choosing a diagnosis. "System errors" play a role when diagnosis is delayed or missed because of latent imperfections in the health care system. These errors can be reduced by system improvements, but can never be eliminated because these improvements lag behind and degrade over time, and each new fix creates the opportunity for novel errors. Tradeoffs also guarantee system errors will persist, when resources are just shifted. "Cognitive errors" reflect misdiagnosis from faulty data collection or interpretation, flawed reasoning, or incomplete knowledge. The limitations of human processing and the inherent biases in using heuristics guarantee that these errors will persist. Opportunities exist, however, for improving the cognitive aspect of diagnosis by adopting system-level changes (e.g., second opinions, decision-support systems, enhanced access to specialists) and by training designed to improve cognition or cognitive awareness. Diagnostic error can be substantially reduced, but never eradicated.

  19. Common errors of drug administration in infants: causes and avoidance.

    PubMed

    Anderson, B J; Ellis, J F

    1999-01-01

    Drug administration errors are common in infants. Although the infant population has a high exposure to drugs, there are few data concerning pharmacokinetics or pharmacodynamics, or the influence of paediatric diseases on these processes. Children remain therapeutic orphans. Formulations are often suitable only for adults; in addition, the lack of maturation of drug elimination processes, alteration of body composition and influence of size render the calculation of drug doses complex in infants. The commonest drug administration error in infants is one of dose, and the commonest hospital site for this error is the intensive care unit. Drug errors are a consequence of system error, and preventive strategies are possible through system analysis. The goal of a zero drug error rate should be aggressively sought, with systems in place that aim to eliminate the effects of inevitable human error. This involves review of the entire system from drug manufacture to drug administration. The nuclear industry, telecommunications and air traffic control services all practise error reduction policies with zero error as a clear goal, not by finding fault in the individual, but by identifying faults in the system and building into that system mechanisms for picking up faults before they occur. Such policies could be adapted to medicine using interventions both specific (the production of formulations which are for children only and clearly labelled, regular audit by pharmacists, legible prescriptions, standardised dose tables) and general (paediatric drug trials, education programmes, nonpunitive error reporting) to reduce the number of errors made in giving medication to infants.

  20. Mapping DNA methylation by transverse current sequencing: Reduction of noise from neighboring nucleotides

    NASA Astrophysics Data System (ADS)

    Alvarez, Jose; Massey, Steven; Kalitsov, Alan; Velev, Julian

    Nanopore sequencing via transverse current has emerged as a competitive candidate for mapping DNA methylation without the need for bisulfite treatment, fluorescent tags, or PCR amplification. By eliminating the error-producing amplification step, long read lengths become feasible, which greatly simplifies the assembly process and reduces the time and cost inherent in current technologies. However, due to the large error rates of nanopore sequencing, single-base resolution has not been reached. A very important source of noise is the intrinsic structural noise in the electric signature of the nucleotide arising from the influence of neighboring nucleotides. In this work we perform calculations of the tunneling current through DNA molecules in nanopores using the non-equilibrium electron transport method within an effective multi-orbital tight-binding model derived from first-principles calculations. We develop a base-calling algorithm accounting for the correlations of the current through neighboring bases, which in principle can reduce the error rate below any desired precision. Using this method we show that we can clearly distinguish DNA methylation and other base modifications based on the reading of the tunneling current.

  1. Application of adaptive Kalman filter in vehicle laser Doppler velocimetry

    NASA Astrophysics Data System (ADS)

    Fan, Zhe; Sun, Qiao; Du, Lei; Bai, Jie; Liu, Jingyun

    2018-03-01

    Variations in road conditions and in the motor characteristics of the vehicle can cause large root-mean-square (rms) errors and outliers, so application of a Kalman filter in laser Doppler velocimetry (LDV) is important for improving velocity measurement accuracy. In this paper, the state-space model is built using the current statistical model. A two-step strategy is adopted to make the filter adaptive and robust. First, the acceleration variance is adaptively adjusted using the difference between the predicted and measured observations. Second, outliers are identified and the measurement noise variance is adjusted according to the orthogonality property of the innovation, reducing the impact of outliers. Laboratory rotating-table experiments show that the adaptive Kalman filter greatly reduces the rms error from 0.59 cm/s to 0.22 cm/s and eliminates all outliers. Road experiments compared against a microwave radar show that the rms error of the LDV is 0.0218 m/s, which proves that adaptive Kalman filtering is suitable for vehicle speed signal processing.
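
    A common innovation-based recipe for the outlier step is sketched below in Python; the gating constant and the inflation rule are assumptions for illustration, and the paper's current-statistical-model details are omitted.

      import numpy as np

      def adaptive_update(x, P, z, H, R, gate=9.0):
          """Measurement update that de-weights suspected outliers.

          The normalized innovation squared (NIS) is approximately
          chi-square distributed for consistent measurements; when it
          exceeds the gate, R is inflated so the outlier barely moves
          the state estimate.
          """
          v = z - H @ x                       # innovation
          S = H @ P @ H.T + R
          nis = float(v.T @ np.linalg.inv(S) @ v)
          if nis > gate:                      # likely outlier
              R = R * (nis / gate)            # inflate measurement noise
              S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ v
          P = (np.eye(len(x)) - K @ H) @ P
          return x, P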

  2. Treatment-refractory anxiety; definition, risk factors, and treatment challenges

    PubMed Central

    Roy-Byrne, Peter

    2015-01-01

    A sizable proportion of psychiatric patients will seek clinical evaluation and treatment for anxiety symptoms reportedly refractory to treatment. This apparent lack of response is either due to “pseudo-resistance” (a failure to have received and adhered to a recognized and effective treatment or treatments for their condition) or to true “treatment resistance.” Pseudo-resistance can be due to clinician errors in selecting and delivering an appropriate treatment effectively, or to patient nonadherence to a course of treatment. True treatment resistance can be due to unrecognized exogenous anxiogenic factors (eg, caffeine overuse, sleep deprivation, use of alcohol or marijuana) or an incorrect diagnosis (eg, atypical bipolar illness, occult substance abuse, attention deficit-hyperactivity disorder). Once the above factors are eliminated, treatment should focus on combining effective medications and cognitive behavioral therapy, combining several medications (augmentation), or employing novel medications or psychotherapies not typically indicated as first-line evidence-based anxiety treatments. PMID:26246793

  3. Treatment-refractory anxiety; definition, risk factors, and treatment challenges.

    PubMed

    Roy-Byrne, Peter

    2015-06-01

    A sizable proportion of psychiatric patients will seek clinical evaluation and treatment for anxiety symptoms reportedly refractory to treatment. This apparent lack of response is either due to "pseudo-resistance" (a failure to have received and adhered to a recognized and effective treatment or treatments for their condition) or to true "treatment resistance." Pseudo-resistance can be due to clinician errors in selecting and delivering an appropriate treatment effectively, or to patient nonadherence to a course of treatment. True treatment resistance can be due to unrecognized exogenous anxiogenic factors (eg, caffeine overuse, sleep deprivation, use of alcohol or marijuana) or an incorrect diagnosis (eg, atypical bipolar illness, occult substance abuse, attention deficit-hyperactivity disorder). Once the above factors are eliminated, treatment should focus on combining effective medications and cognitive behavioral therapy, combining several medications (augmentation), or employing novel medications or psychotherapies not typically indicated as first-line evidence-based anxiety treatments.

  4. A Physiologically Based Pharmacokinetic Model to Predict the Pharmacokinetics of Highly Protein-Bound Drugs and Impact of Errors in Plasma Protein Binding

    PubMed Central

    Ye, Min; Nagar, Swati; Korzekwa, Ken

    2015-01-01

    Predicting the pharmacokinetics of highly protein-bound drugs is difficult. Also, since historical plasma protein binding data were often collected using unbuffered plasma, the resulting inaccurate binding data could contribute to incorrect predictions. This study uses a generic physiologically based pharmacokinetic (PBPK) model to predict human plasma concentration-time profiles for 22 highly protein-bound drugs. Tissue distribution was estimated from in vitro drug lipophilicity data, plasma protein binding, and blood: plasma ratio. Clearance was predicted with a well-stirred liver model. Underestimated hepatic clearance for acidic and neutral compounds was corrected by an empirical scaling factor. Predicted values (pharmacokinetic parameters, plasma concentration-time profile) were compared with observed data to evaluate model accuracy. Of the 22 drugs, less than a 2-fold error was obtained for terminal elimination half-life (t1/2, 100% of drugs), peak plasma concentration (Cmax, 100%), area under the plasma concentration-time curve (AUC0–t, 95.4%), clearance (CLh, 95.4%), mean residence time (MRT, 95.4%), and steady state volume (Vss, 90.9%). The impact of fup errors on CLh and Vss prediction was evaluated. Errors in fup resulted in proportional errors in clearance prediction for low-clearance compounds, and in Vss prediction for high-volume neutral drugs. For high-volume basic drugs, errors in fup did not propagate to errors in Vss prediction. This is due to the cancellation of errors in the calculations for tissue partitioning of basic drugs. Overall, plasma profiles were well simulated with the present PBPK model. PMID:26531057

  5. High stability integrated Tri-axial fluxgate sensor with suspended technology

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Teng, Yuntian; Wang, Xiaomei; Fan, Xiaoyong; Wu, Qiong

    2017-04-01

    The relative geomagnetic recording of the Geomagnetic Network of China (GNC) has been digitized and networked, achieving one-second data acquisition and storage through upgrades carried out under the 9th and 10th five-year plans. Currently, relative recording at geomagnetic observatories generally relies on two sets of the same type of instrument observing in parallel, which makes it possible to distinguish instrument failures from environmental interference and ensures the continuity and integrity of the observation data. The fluxgate magnetometer has become the mainstream equipment for relative geomagnetic recording because of its low noise, high sensitivity, and fast response. Analysis of several years of observation data, however, reveals inconsistencies between instruments of the same type at the same station. Extensive experiments have identified three main error sources: (1) instrument performance, since limitations of the manufacturing and assembly process make it difficult to ensure the orthogonality of the instrument and also affect the scale factor, zero offset, and temperature coefficient; (2) horizontal error, introduced by leveling during initial installation and by pillar tilting over long-term observation; and (3) the observation environment, including temperature, humidity, and the power supply system. The new fluxgate magnetometer uses a special nonmagnetic gimbal (made of beryllium bronze) for suspension, so the fluxgate sensor mounted on the suspended platform automatically remains level. The advantage of this design is that it eliminates the horizontal error introduced by leveling during initial installation and by pillar tilting during long-term observation. The signal-processing circuit board is fixed above the suspended platform at a distance chosen so that the static and dynamic magnetic fields produced by the board do not affect the sensor, while keeping the signal transmission cable short enough to avoid signal attenuation.

  6. SU-E-T-195: Gantry Angle Dependency of MLC Leaf Position Error

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ju, S; Hong, C; Kim, M

    Purpose: The aim of this study was to investigate the gantry angle dependency of the multileaf collimator (MLC) leaf position error. Methods: An automatic MLC quality assurance system (AutoMLCQA) was developed to evaluate the gantry angle dependency of the MLC leaf position error using an electronic portal imaging device (EPID). To eliminate the EPID position error due to gantry rotation, we designed a reference maker (RM) that could be inserted into the wedge mount. After setting up the EPID, a reference image was taken of the RM using an open field. Next, an EPID-based picket-fence test (PFT) was performed without the RM. These procedures were repeated at every 45° intervals of the gantry angle. A total of eight reference images and PFT image sets were analyzed using in-house software. The average MLC leaf position error was calculated at five pickets (-10, -5, 0, 5, and 10 cm) in accordance with general PFT guidelines using in-house software. This test was carried out for four linear accelerators. Results: The average MLC leaf position errors were within the set criterion of <1 mm (actual errors ranged from -0.7 to 0.8 mm) for all gantry angles, but significant gantry angle dependency was observed in all machines. The error was smaller at a gantry angle of 0° but increased toward the positive direction with gantry angle increments in the clockwise direction. The error reached a maximum value at a gantry angle of 90° and then gradually decreased until 180°. In the counter-clockwise rotation of the gantry, the same pattern of error was observed but the error increased in the negative direction. Conclusion: The AutoMLCQA system was useful to evaluate the MLC leaf position error for various gantry angles without the EPID position error. The gantry angle dependency should be considered during MLC leaf position error analysis.

  7. UNDERSTANDING OR NURSES' REACTIONS TO ERRORS AND USING THIS UNDERSTANDING TO IMPROVE PATIENT SAFETY.

    PubMed

    Taifoori, Ladan; Valiee, Sina

    2015-09-01

    The operating room can be home to many different types of nursing errors due to the invasiveness of OR procedures, and nurses' reactions toward errors can be a key factor in patient safety. This article is based on a study, conducted at Kurdistan University of Medical Sciences in Sanandaj, Iran in 2014, investigating nurses' reactions toward nursing errors and the various contributing and resulting factors. The goal of the study was to determine how OR nurses react to nursing errors, with the aim of using this information to improve patient safety. The research was conducted as a cross-sectional descriptive study. The participants were all nurses employed in the operating rooms of the teaching hospitals of Kurdistan University of Medical Sciences, selected by a consensus method (170 persons). Information was gathered through questionnaires covering demographic information, error definition, reasons for error occurrence, and emotional reactions toward errors. In total, 153 questionnaires were completed and analyzed with SPSS software version 16.0. "Not following sterile technique" (82.4 percent) was the most reported nursing error, "tiredness" (92.8 percent) was the most reported reason for error occurrence, and "being upset at having harmed the patient" (85.6 percent) was the most reported emotional reaction after an error, with "deciding on a better approach to tasks the next time" (97.7 percent) the most common goal and "paying more attention to details" (98 percent) the most reported strategy for improved future outcomes. While healthcare facilities focus on planning for the prevention and elimination of errors, the study showed that nurses can also benefit from support after an error occurs. Their reactions and coping strategies need guidance and, with both individual and organizational support, can be a factor in improving patient safety.

  8. Production and detection of atomic hexadecapole at Earth's magnetic field.

    PubMed

    Acosta, V M; Auzinsh, M; Gawlik, W; Grisins, P; Higbie, J M; Jackson Kimball, D F; Krzemien, L; Ledbetter, M P; Pustelny, S; Rochester, S M; Yashchuk, V V; Budker, D

    2008-07-21

    Optical magnetometers measure magnetic fields with extremely high precision and without cryogenics. However, at geomagnetic fields, important for applications from landmine removal to archaeology, they suffer from nonlinear Zeeman splitting, leading to systematic dependence on sensor orientation. We present experimental results on a method of eliminating this systematic error, using the hexadecapole atomic polarization moment. In particular, we demonstrate selective production of the atomic hexadecapole moment at Earth's magnetic field and verify its immunity to nonlinear Zeeman splitting. This technique promises to eliminate directional errors in all-optical atomic magnetometers, potentially improving their measurement accuracy by several orders of magnitude.

  9. Development of a computer-assisted personal interview software system for collection of tribal fish consumption data.

    PubMed

    Kissinger, Lon; Lorenzana, Roseanne; Mittl, Beth; Lasrado, Merwyn; Iwenofu, Samuel; Olivo, Vanessa; Helba, Cynthia; Capoeman, Pauline; Williams, Ann H

    2010-12-01

    The authors developed a computer-assisted personal interviewing (CAPI) seafood consumption survey tool from existing Pacific NW Native American seafood consumption survey methodology. The software runs on readily available hardware and software, and is easily configured for different cultures and seafood resources. The CAPI is used with a booklet of harvest location maps and species and portion size images. The use of a CAPI facilitates tribal administration of seafood consumption surveys, allowing cost-effective collection of scientifically defensible data and tribal management of data and data interpretation. Use of tribal interviewers reduces potential bias and discomfort that may be associated with nontribal interviewers. The CAPI contains a 24-hour recall and food frequency questionnaire, and assesses seasonal seafood consumption and temporal changes in consumption. EPA's methodology for developing ambient water quality criteria for tribes assigns a high priority to local data. The CAPI will satisfy this guidance objective. Survey results will support development of tribal water quality standards on their lands and assessment of seafood consumption-related contaminant risks and nutritional benefits. CAPI advantages over paper surveys include complex question branching without raising respondent burden, more complete interviews due to answer error and range checking, data transcription error elimination, printing and mailing cost elimination, and improved data storage. The survey instrument was pilot tested among the Quinault Nation in 2006. © 2010 Society for Risk Analysis.

  10. Mass-balance measurements in Alaska and suggestions for simplified observation programs

    USGS Publications Warehouse

    Trabant, D.C.; March, R.S.

    1999-01-01

    US Geological Survey glacier fieldwork in Alaska includes repetitious measurements, corrections for leaning or bending stakes, an ability to reliably measure seasonal snow as deep as 10 m, absolute identification of summer surfaces in the accumulation area, and annual evaluation of internal accumulation, internal ablation, and glacier-thickness changes. Prescribed field measurement and note-taking techniques help eliminate field errors and expedite the interpretative process. In the office, field notes are transferred to computerized spread-sheets for analysis, release on the World Wide Web, and archival storage. The spreadsheets have error traps to help eliminate note-taking and transcription errors. Rigorous error analysis ends when mass-balance measurements are extrapolated and integrated with area to determine glacier and basin mass balances. Unassessable errors in the glacier and basin mass-balance data reduce the value of the data set for correlations with climate change indices. The minimum glacier mass-balance program has at least three measurement sites on a glacier and the measurements must include the seasonal components of mass balance as well as the annual balance.

  11. Subnanosecond GPS-based clock synchronization and precision deep-space tracking

    NASA Technical Reports Server (NTRS)

    Dunn, C. E.; Lichten, S. M.; Jefferson, D. C.; Border, J. S.

    1992-01-01

    Interferometric spacecraft tracking is accomplished by the Deep Space Network (DSN) by comparing the arrival time of electromagnetic spacecraft signals at ground antennas separated by baselines on the order of 8000 km. Clock synchronization errors within and between DSN stations directly impact the attainable tracking accuracy, with a 0.3-nsec error in clock synchronization resulting in an 11-nrad angular position error. This level of synchronization is currently achieved by observing a quasar which is angularly close to the spacecraft just after the spacecraft observations. By determining the differential arrival times of the random quasar signal at the stations, clock offsets and propagation delays within the atmosphere and within the DSN stations are calibrated. Recent developments in time transfer techniques may allow medium accuracy (50-100 nrad) spacecraft tracking without near-simultaneous quasar-based calibrations. Solutions are presented for a worldwide network of Global Positioning System (GPS) receivers in which the formal errors for DSN clock offset parameters are less than 0.5 nsec. Comparisons of clock rate offsets derived from GPS measurements and from very long baseline interferometry (VLBI), as well as the examination of clock closure, suggest that these formal errors are a realistic measure of GPS-based clock offset precision and accuracy. Incorporating GPS-based clock synchronization measurements into a spacecraft differential ranging system would allow tracking without near-simultaneous quasar observations. The impact on individual spacecraft navigation-error sources due to elimination of quasar-based calibrations is presented. System implementation, including calibration of station electronic delays, is discussed.
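
    The quoted sensitivity follows from simple geometry: a differential arrival-time error \Delta t across a baseline B maps to an angular error

        \Delta\theta \approx \frac{c\,\Delta t}{B}
                    = \frac{(3\times10^{8}\ \mathrm{m/s})(0.3\times10^{-9}\ \mathrm{s})}{8\times10^{6}\ \mathrm{m}}
                    \approx 11\ \mathrm{nrad} ,

    which is the 0.3-nsec to 11-nrad conversion stated above.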

  12. Sub-nanosecond clock synchronization and precision deep space tracking

    NASA Technical Reports Server (NTRS)

    Dunn, Charles; Lichten, Stephen; Jefferson, David; Border, James S.

    1992-01-01

    Interferometric spacecraft tracking is accomplished at the NASA Deep Space Network (DSN) by comparing the arrival time of electromagnetic spacecraft signals to ground antennas separated by baselines on the order of 8000 km. Clock synchronization errors within and between DSN stations directly impact the attainable tracking accuracy, with a 0.3 ns error in clock synchronization resulting in an 11 nrad angular position error. This level of synchronization is currently achieved by observing a quasar which is angularly close to the spacecraft just after the spacecraft observations. By determining the differential arrival times of the random quasar signal at the stations, clock synchronization and propagation delays within the atmosphere and within the DSN stations are calibrated. Recent developments in time transfer techniques may allow medium accuracy (50-100 nrad) spacecraft observations without near-simultaneous quasar-based calibrations. Solutions are presented for a global network of GPS receivers in which the formal errors in clock offset parameters are less than 0.5 ns. Comparisons of clock rate offsets derived from GPS measurements and from very long baseline interferometry and the examination of clock closure suggest that these formal errors are a realistic measure of GPS-based clock offset precision and accuracy. Incorporating GPS-based clock synchronization measurements into a spacecraft differential ranging system would allow tracking without near-simultaneous quasar observations. The impact on individual spacecraft navigation error sources due to elimination of quasar-based calibrations is presented. System implementation, including calibration of station electronic delays, is discussed.

  13. Image Reconstruction for Interferometric Imaging of Geosynchronous Satellites

    NASA Astrophysics Data System (ADS)

    DeSantis, Zachary J.

    Imaging distant objects at high resolution has always presented a challenge due to the diffraction limit. Larger apertures improve the resolution, but at some point the costs of engineering, building, and correcting phase aberrations of large apertures become prohibitive. Interferometric imaging uses the Van Cittert-Zernike theorem to form an image from measurements of spatial coherence, effectively allowing the synthesis of a large aperture from two or more smaller telescopes to improve the resolution. We apply this method to imaging geosynchronous satellites with a ground-based system. Imaging a dim object from the ground presents unique challenges. The atmosphere creates errors in the phase measurements, and the measurements are taken simultaneously across a large bandwidth of light; the atmospheric piston error therefore manifests as a linear phase error across the spectral measurements. Because the objects are faint, many of the measurements are expected to have a poor signal-to-noise ratio (SNR). This eliminates the possibility of using common techniques such as closure phase, a standard technique in astronomical interferometric imaging for making partial phase measurements in the presence of atmospheric error. The bulk of our work has been focused on forming an image, using sub-Nyquist sampled data, in the presence of these linear phase errors without relying on closure phase techniques. We present an image reconstruction algorithm that successfully forms an image in the presence of these linear phase errors, and we demonstrate the algorithm’s success in both simulation and laboratory experiments.

  14. Multiple symbol partially coherent detection of MPSK

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Divsalar, D.

    1992-01-01

    It is shown that by using the known (or estimated) value of the carrier tracking loop signal-to-noise ratio (SNR) in the decision metric, it is possible to improve the error probability performance of a partially coherent multiple phase-shift-keying (MPSK) system relative to that of the commonly used ideal coherent decision rule. Using a maximum-likelihood approach, an optimum decision metric is derived and shown to take the form of a weighted sum of the ideal coherent decision metric (i.e., correlation) and the noncoherent decision metric that is optimum for differential detection of MPSK. The performance of a receiver based on this optimum decision rule is derived and shown to provide continued improvement with increasing length of the observation interval (data symbol sequence length). Unfortunately, increasing the observation length does not eliminate the error floor associated with the finite loop SNR. Nevertheless, in the limit of infinite observation length, the average error probability approaches the algebraic sum of the error floor and the performance of ideal coherent detection; i.e., at any error probability above the error floor, there is no degradation due to the partial coherence. It is shown that this limiting behavior is virtually achievable with practical observation lengths. Furthermore, the performance is quite insensitive to mismatch between the estimate of loop SNR (e.g., obtained from measurement) fed to the decision metric and its true value. These results may be of use in low-cost Earth-orbiting or deep-space missions employing coded modulations.
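
    Schematically, following the abstract's own description rather than the paper's exact derivation, the metric for a candidate symbol sequence over an N-symbol observation takes the form

        \lambda = w_{1}(\rho)\,\mathrm{Re}\!\left\{\sum_{k=1}^{N} r_{k}\,\tilde{s}_{k}^{*}\right\}
                + w_{2}(\rho)\,\left|\sum_{k=1}^{N} r_{k}\,\tilde{s}_{k}^{*}\right| ,

    where r_k are the received matched-filter samples, \tilde{s}_k the hypothesized MPSK symbols, and the weights w_1, w_2 depend on the loop SNR \rho: as \rho grows the coherent (correlation) term dominates, and as \rho shrinks the noncoherent (magnitude) term does.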

  15. Statistical analysis of global horizontal solar irradiation GHI in Fez city, Morocco

    NASA Astrophysics Data System (ADS)

    Bounoua, Z.; Mechaqrane, A.

    2018-05-01

    Accurate knowledge of the solar energy reaching the ground is necessary for sizing and optimizing the performance of solar installations. This paper describes a statistical analysis of the global horizontal solar irradiation (GHI) at Fez city, Morocco. For better reliability, we first applied a set of check procedures to test the quality of the hourly GHI measurements. We then eliminated the erroneous values, which are generally due to measurement errors or the cosine effect. The statistical analysis shows that the annual mean daily value of GHI is approximately 5 kWh/m²/day. Monthly mean daily values and other parameters are also calculated.
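
    A typical physical-limits screen of the kind described can be sketched as follows; the thresholds are illustrative BSRN-style values and not necessarily those used in the study.

      import numpy as np

      def qc_ghi(ghi, toa, sza_deg):
          """Replace hourly GHI values failing basic quality checks with NaN.

          ghi: measured global horizontal irradiance (W/m^2)
          toa: top-of-atmosphere irradiance on a horizontal surface (W/m^2)
          sza_deg: solar zenith angle (degrees)
          """
          ghi, toa = np.asarray(ghi, float), np.asarray(toa, float)
          bad = (ghi < -4.0) | (ghi > 1.1 * toa + 50.0)        # physical limits
          bad |= (np.asarray(sza_deg) >= 90.0) & (ghi > 50.0)  # sun below horizon
          return np.where(bad, np.nan, ghi)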

  16. A vacuum gauge based on an ultracold gas

    NASA Astrophysics Data System (ADS)

    Makhalov, V. B.; Turlapov, A. V.

    2017-06-01

    We report the design and application of a primary vacuum gauge based on an ultracold gas of atoms in an optical dipole trap. The pressure is calculated from the confinement time of the atoms in the trap. The relationship between pressure and confinement time is established from first principles, owing to the elimination of all loss channels except the ejection of atoms from the trap by collisions with residual gas particles. The method requires knowledge of the chemical composition of the gas in the vacuum chamber, and, in the absence of this information, the systematic error is less than that of an ionisation sensor.
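
    The underlying relation is the standard background-collision loss law, stated here in a simplified single-species form: with residual-gas number density n = P / (k_B T), the confinement time \tau obeys

        \frac{1}{\tau} = n\,\langle\sigma v\rangle
        \quad\Longrightarrow\quad
        P = \frac{k_{B} T}{\langle\sigma v\rangle\,\tau} ,

    where \langle\sigma v\rangle is the thermally averaged product of the loss cross-section and relative velocity for the residual-gas species, which is why the chemical composition must be known.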

  17. Methods for Addressing Technology-induced Errors: The Current State.

    PubMed

    Borycki, E; Dexheimer, J W; Hullin Lucay Cossio, C; Gong, Y; Jensen, S; Kaipio, J; Kennebeck, S; Kirkendall, E; Kushniruk, A W; Kuziemsky, C; Marcilly, R; Röhrig, R; Saranto, K; Senathirajah, Y; Weber, J; Takeda, H

    2016-11-10

    The objectives of this paper are to review and discuss the methods that are being used internationally to report on, mitigate, and eliminate technology-induced errors. The IMIA Working Group for Health Informatics for Patient Safety worked together to review and synthesize some of the main methods and approaches associated with technology-induced error reporting, reduction, and mitigation. The work involved a review of the evidence-based literature as well as guideline publications specific to health informatics. The paper presents a rich overview of current approaches, issues, and methods associated with: (1) safe HIT design, (2) safe HIT implementation, (3) reporting on technology-induced errors, (4) technology-induced error analysis, and (5) health information technology (HIT) risk management. The work is based on research from around the world. Internationally, researchers have been developing methods that can be used to identify, report on, mitigate, and eliminate technology-induced errors. Although there remain issues and challenges associated with the methodologies, they have been shown to improve the quality and safety of HIT. Since the first publications documenting technology-induced errors in healthcare in 2005, we have seen researchers develop, in a short 10 years, ways of identifying and addressing these types of errors. We have also seen organizations begin to use these approaches. Knowledge has been translated into practice in a short ten years, whereas the norm for other research areas is 20 years.

  18. Chlorine measurement in the jet singlet oxygen generator considering the effects of the droplets.

    PubMed

    Goodarzi, Mohamad S; Saghafifar, Hossein

    2016-09-01

    A new method is presented to measure chlorine concentration more accurately than the conventional method in the exhaust gases of a jet-type singlet oxygen generator. One problem in this measurement is the presence of micrometer-sized droplets. In this article, an empirical method is reported to eliminate the effects of the droplets. Two wavelengths from a fiber-coupled LED are adopted and the measurement is made at both selected wavelengths. Chlorine is measured more accurately by the two-wavelength method than by the one-wavelength method because the droplet term is eliminated from the equations. The method is validated without basic hydrogen peroxide injection in the reactor; in this case, the pressure meter value in the diagnostic cell is compared with the optically calculated pressure obtained by the one-wavelength and two-wavelength methods. Chlorine measurements by the two-wavelength method and the pressure meter are found to be nearly the same, while the one-wavelength method has a significant error due to the droplets.
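
    The cancellation can be written as an idealized two-wavelength Beer-Lambert subtraction (the paper's empirical droplet term may be more elaborate): if the droplet extinction D is nearly flat across the two wavelengths,

        A(\lambda_{1}) = \varepsilon(\lambda_{1})\,C\,L + D , \qquad
        A(\lambda_{2}) = \varepsilon(\lambda_{2})\,C\,L + D
        \;\Longrightarrow\;
        C = \frac{A(\lambda_{1}) - A(\lambda_{2})}{\left[\varepsilon(\lambda_{1}) - \varepsilon(\lambda_{2})\right] L} ,

    so the droplet contribution drops out of the difference and only the chlorine absorption remains.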

  19. Assessment of a model for achieving competency in administration and scoring of the WAIS-IV in post-graduate psychology students.

    PubMed

    Roberts, Rachel M; Davis, Melissa C

    2015-01-01

    There is a need for an evidence-based approach to training professional psychologists in the administration and scoring of standardized tests such as the Wechsler Adult Intelligence Scale (WAIS), given substantial evidence that these tasks are associated with numerous errors that can significantly affect clients' lives. Twenty-three post-graduate psychology students underwent training in using the WAIS-IV according to a best-practice teaching model that involved didactic teaching, independent study of the test manual, and in-class practice with teacher supervision and feedback. Video recordings and test protocols from a role-played test administration were analyzed for errors according to a comprehensive checklist, with self, peer, and faculty-member reviews. Overall, 91.3% of students were rated as having demonstrated competency in administration and scoring. All students were found to make errors, with substantially more errors detected by the faculty member than by self- or peer-review. Across all subtests, the most frequent errors related to failure to deliver standardized instructions verbatim from the manual. The failure of peer and self-reviews to detect the majority of the errors suggests that novice feedback (self or peers) may be ineffective at eliminating errors, and the use of more senior peers may be preferable. It is suggested that involving senior trainees, recent graduates, and/or experienced practitioners in the training of post-graduate students may benefit both parties, promoting a peer-learning and continuous professional development approach to the development and maintenance of skills in psychological assessment.

  20. Error in telemetry studies: Effects of animal movement on triangulation

    USGS Publications Warehouse

    Schmutz, Joel A.; White, Gary C.

    1990-01-01

    We used Monte Carlo simulations to investigate the effects of animal movement on error of estimated animal locations derived from radio-telemetry triangulation of sequentially obtained bearings. Simulated movements of 0-534 m resulted in up to 10-fold increases in average location error but <10% decreases in location precision when observer-to-animal distances were <1,000 m. Location error and precision were minimally affected by censorship of poor locations with Chi-square goodness-of-fit tests. Location error caused by animal movement can only be eliminated by taking simultaneous bearings.
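
    A minimal version of such a simulation is sketched below in Python; the geometry, bearing noise, and movement scale are illustrative choices, not the study's settings.

      import numpy as np

      rng = np.random.default_rng(1)

      def intersect(p1, b1, p2, b2):
          """Location estimate from two bearings (radians, clockwise from north)."""
          d1 = np.array([np.sin(b1), np.cos(b1)])
          d2 = np.array([np.sin(b2), np.cos(b2)])
          t = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
          return p1 + t[0] * d1

      obs1, obs2 = np.array([0.0, 0.0]), np.array([1000.0, 0.0])
      animal = np.array([400.0, 800.0])
      errs = []
      for _ in range(10_000):
          b1 = np.arctan2(*(animal - obs1)) + rng.normal(0, np.radians(3))
          moved = animal + rng.normal(0, 100.0, 2)   # movement between bearings
          b2 = np.arctan2(*(moved - obs2)) + rng.normal(0, np.radians(3))
          errs.append(np.linalg.norm(intersect(obs1, b1, obs2, b2) - animal))
      print(f"mean location error: {np.mean(errs):.0f} m")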

  1. A Comprehensive Quality Assurance Program for Personnel and Procedures in Radiation Oncology: Value of Voluntary Error Reporting and Checklists

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalapurakal, John A., E-mail: j-kalapurakal@northwestern.edu; Zafirovski, Aleksandar; Smith, Jeffery

    Purpose: This report describes the value of a voluntary error reporting system and the impact of a series of quality assurance (QA) measures including checklists and timeouts on reported error rates in patients receiving radiation therapy. Methods and Materials: A voluntary error reporting system was instituted with the goal of recording errors, analyzing their clinical impact, and guiding the implementation of targeted QA measures. In response to errors committed in relation to treatment of the wrong patient, wrong treatment site, and wrong dose, a novel initiative involving the use of checklists and timeouts for all staff was implemented. The impact of these and other QA initiatives was analyzed. Results: From 2001 to 2011, a total of 256 errors in 139 patients after 284,810 external radiation treatments (0.09% per treatment) were recorded in our voluntary error database. The incidence of errors related to patient/tumor site, treatment planning/data transfer, and patient setup/treatment delivery was 9%, 40.2%, and 50.8%, respectively. The compliance rate for the checklists and timeouts initiative was 97% (P<.001). These and other QA measures resulted in a significant reduction in many categories of errors. The introduction of checklists and timeouts has been successful in eliminating errors related to wrong patient, wrong site, and wrong dose. Conclusions: A comprehensive QA program that regularly monitors staff compliance together with a robust voluntary error reporting system can reduce or eliminate errors that could result in serious patient injury. We recommend the adoption of these relatively simple QA initiatives including the use of checklists and timeouts for all staff to improve the safety of patients undergoing radiation therapy in the modern era.

  2. Error "Reflection": Embracing Growth Mindset in the General Music Classroom

    ERIC Educational Resources Information Center

    Davis, Virginia Wayman

    2017-01-01

    For music teachers, part of the job description involves detecting student errors and using experience and education to eliminate them. This article is an exploration of the role of error in the learning process, with the goal of recognizing mistakes not as an enemy to be vanquished but as a friend with much to teach us. Carol…

  3. Ulysses, one year after the launch

    NASA Astrophysics Data System (ADS)

    Petersen, H.

    1991-12-01

    Ulysses has now been underway for one year in a huge heliocentric orbit. A late change in some of the blankets' external material was required to prevent electrical charging due to contamination by nozzle outgassing products. Test results are shown covering various ranges of plasma parameters and sample temperatures. Even clean materials show a few volts of charging due to imperfections in the conductive film. The thermal environment in the Shuttle cargo bay proved to be slightly different from prelaunch predictions: less warm with the doors closed, and less cold with the doors open. Temperatures experienced in orbit are nominal. A problem was caused by a complex interaction between a Sun-induced thermal gradient in a sensitive boom and the dynamic stability of the spacecraft. A user interface program was an invaluable tool for easing computations with the mathematical models, eliminating error risk, and providing configuration control.

  4. A physiologically based pharmacokinetic model to predict the pharmacokinetics of highly protein-bound drugs and the impact of errors in plasma protein binding.

    PubMed

    Ye, Min; Nagar, Swati; Korzekwa, Ken

    2016-04-01

    Predicting the pharmacokinetics of highly protein-bound drugs is difficult. Also, since historical plasma protein binding data were often collected using unbuffered plasma, the resulting inaccurate binding data could contribute to incorrect predictions. This study uses a generic physiologically based pharmacokinetic (PBPK) model to predict human plasma concentration-time profiles for 22 highly protein-bound drugs. Tissue distribution was estimated from in vitro drug lipophilicity data, plasma protein binding and the blood: plasma ratio. Clearance was predicted with a well-stirred liver model. Underestimated hepatic clearance for acidic and neutral compounds was corrected by an empirical scaling factor. Predicted values (pharmacokinetic parameters, plasma concentration-time profile) were compared with observed data to evaluate the model accuracy. Of the 22 drugs, less than a 2-fold error was obtained for the terminal elimination half-life (t1/2 , 100% of drugs), peak plasma concentration (Cmax , 100%), area under the plasma concentration-time curve (AUC0-t , 95.4%), clearance (CLh , 95.4%), mean residence time (MRT, 95.4%) and steady state volume (Vss , 90.9%). The impact of fup errors on CLh and Vss prediction was evaluated. Errors in fup resulted in proportional errors in clearance prediction for low-clearance compounds, and in Vss prediction for high-volume neutral drugs. For high-volume basic drugs, errors in fup did not propagate to errors in Vss prediction. This is due to the cancellation of errors in the calculations for tissue partitioning of basic drugs. Overall, plasma profiles were well simulated with the present PBPK model. Copyright © 2016 John Wiley & Sons, Ltd.

  5. Experimental investigation of false positive errors in auditory species occurrence surveys

    USGS Publications Warehouse

    Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.

    2012-01-01

    False positive errors are a significant component of many ecological data sets, which in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in abilities from novice to expert, that recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work that demonstrates false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to only report detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor for among-observer variation in observation error rates.

  6. An outlet breaching algorithm for the treatment of closed depressions in a raster DEM

    NASA Astrophysics Data System (ADS)

    Martz, Lawrence W.; Garbrecht, Jurgen

    1999-08-01

    Automated drainage analysis of raster DEMs typically begins with the simulated filling of all closed depressions and the imposition of a drainage pattern on the resulting flat areas. The elimination of closed depressions by filling implicitly assumes that all depressions are caused by elevation underestimation. This assumption is difficult to support, as depressions can be produced by overestimation as well as by underestimation of DEM values. This paper presents a new algorithm that is applied in conjunction with conventional depression filling to provide a more realistic treatment of those depressions that are likely due to overestimation errors. The algorithm lowers the elevation of selected cells on the edge of closed depressions to simulate breaching of the depression outlets. Application of this breaching algorithm prior to depression filling can substantially reduce the number and size of depressions that need to be filled, especially in low relief terrain. Removing or reducing the size of a depression by breaching implicitly assumes that the depression is due to a spurious flow blockage caused by elevation overestimation. Removing a depression by filling, on the other hand, implicitly assumes that the depression is a direct artifact of elevation underestimation. Although the breaching algorithm cannot distinguish between overestimation and underestimation errors in a DEM, a constraining parameter for breaching length can be used to restrict breaching to closed depressions caused by narrow blockages along well-defined drainage courses. These are considered the depressions most likely to have arisen from overestimation errors. Applying the constrained breaching algorithm prior to a conventional depression-filling algorithm allows both positive and negative elevation adjustments to be used to remove depressions. The breaching algorithm was incorporated into the DEM pre-processing operations of the TOPAZ software system. The effect of the algorithm is illustrated by the application of TOPAZ to a DEM of a low-relief landscape. The use of the breaching algorithm during DEM pre-processing substantially reduced the number of cells that needed to be subsequently raised in elevation to remove depressions. The number and kind of depression cells that were eliminated by the breaching algorithm suggested that the algorithm effectively targeted those topographic situations for which it was intended. A detailed inspection of a portion of the DEM that was processed using the breaching algorithm in conjunction with depression-filling also suggested the effects of the algorithm were as intended. The breaching algorithm provides an empirically satisfactory and robust approach to treating closed depressions in a raster DEM. It recognises that depressions in certain topographic settings are as likely to be due to elevation overestimation as to elevation underestimation errors. The algorithm allows a more realistic treatment of depressions in these situations than conventional methods that rely solely on depression-filling.
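
    In outline, constrained breaching lowers at most a short run of cells across the blockage so the depression can drain, and otherwise defers to filling. The Python sketch below is a simplified illustration, not the TOPAZ implementation; the path selection and the gradient increment are assumptions.

      import numpy as np

      def breach(dem, pit, path, max_len=2, step=0.01):
          """Lower the cells in `path` (across the blockage) so that water
          can drain from `pit`. Refuses when the blockage is wider than
          max_len cells, leaving that depression for conventional filling."""
          if len(path) > max_len:
              return False
          z = dem[pit]
          for i, cell in enumerate(path, start=1):
              # impose a small downhill gradient leading away from the pit
              dem[cell] = min(dem[cell], z - step * i)
          return True

      dem = np.array([[5.0, 5.0, 5.0, 5.0],
                      [5.0, 1.0, 4.0, 0.5],   # pit at (1,1) blocked by (1,2)
                      [5.0, 5.0, 5.0, 5.0]])
      breach(dem, (1, 1), [(1, 2)])           # ridge cut below the pit level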

  7. Minimizing Accidents and Risks in High Adventure Outdoor Pursuits.

    ERIC Educational Resources Information Center

    Meier, Joel

    The fundamental dilemma in adventure programming is eliminating unreasonable risks to participants without also reducing levels of excitement, challenge, and stress. Most accidents are caused by a combination of unsafe conditions, unsafe acts, and errors in judgment. The best and only way to minimize critical human error in adventure programs is…

  8. Managerial process improvement: a lean approach to eliminating medication delivery errors.

    PubMed

    Hussain, Aftab; Stewart, LaShonda M; Rivers, Patrick A; Munchus, George

    2015-01-01

    Statistical evidence shows that medication errors are a major cause of injuries that concern all health care organizations. Despite all the efforts to improve the quality of care, the lack of understanding and the inability of management to design a robust system that will strategically target those factors is a major cause of distress. The paper aims to discuss these issues. Achieving optimum organizational performance requires two key variables: work process factors and human performance factors. The approach is that healthcare administrators must take into account both variables in designing a strategy to reduce medication errors. However, strategies that will combat such phenomena require that managers and administrators understand the key factors that are causing medication delivery errors. The authors recommend that healthcare organizations implement the Toyota Production System (TPS) combined with human performance improvement (HPI) methodologies to eliminate medication delivery errors in hospitals. Despite all the efforts to improve the quality of care, there continues to be a lack of understanding and ability of management to design a robust system that will strategically target those factors associated with medication errors. This paper proposes a solution to an ambiguous workflow process using the TPS combined with the HPI system.

  9. Methods for Addressing Technology-Induced Errors: The Current State

    PubMed Central

    Dexheimer, J. W.; Hullin Lucay Cossio, C.; Gong, Y.; Jensen, S.; Kaipio, J.; Kennebeck, S.; Kirkendall, E.; Kushniruk, A. W.; Kuziemsky, C.; Marcilly, R.; Röhrig, R.; Saranto, K.; Senathirajah, Y.; Weber, J.; Takeda, H.

    2016-01-01

    Objectives: The objectives of this paper are to review and discuss the methods that are being used internationally to report on, mitigate, and eliminate technology-induced errors. Methods: The IMIA Working Group for Health Informatics for Patient Safety worked together to review and synthesize some of the main methods and approaches associated with technology-induced error reporting, reduction, and mitigation. The work involved a review of the evidence-based literature as well as guideline publications specific to health informatics. Results: The paper presents a rich overview of current approaches, issues, and methods associated with: (1) safe HIT design, (2) safe HIT implementation, (3) reporting on technology-induced errors, (4) technology-induced error analysis, and (5) health information technology (HIT) risk management. The work is based on research from around the world. Conclusions: Internationally, researchers have been developing methods that can be used to identify, report on, mitigate, and eliminate technology-induced errors. Although there remain issues and challenges associated with the methodologies, they have been shown to improve the quality and safety of HIT. Since the first publications documenting technology-induced errors in healthcare in 2005, researchers have, in a short 10 years, developed ways of identifying and addressing these types of errors, and organizations have begun to use these approaches. Knowledge has been translated into practice within ten years, whereas the norm for other research areas is closer to 20 years. PMID:27830228

  10. Improvement of the grid-connect current quality using novel proportional-integral controller for photovoltaic inverters.

    PubMed

    Cheng, Yuhua; Chen, Kai; Bai, Libing; Yang, Jing

    2014-02-01

    Precise control of the grid-connected current is a challenge in photovoltaic inverter research. Traditional Proportional-Integral (PI) control technology cannot eliminate the steady-state error when tracking the sinusoidal signal from the grid, which results in a very high total harmonic distortion in the grid-connected current. A novel PI controller has been developed in this paper, in which the sinusoidal wave is discretized into an N-step input signal, with N determined by the control frequency, to eliminate the steady-state error of the system. The effect of the periodic error caused by the dead zone of the power switch and the conduction voltage drop can be avoided; the current tracking accuracy and current harmonic content can also be improved. Based on the proposed PI controller, a 700 W photovoltaic grid-connected inverter is developed and validated. The improvement has been demonstrated through experimental results.
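
    As a minimal sketch of the discretization step described above (assuming N is simply the ratio of control frequency to grid frequency; names and values are illustrative, not from the paper):

```python
import numpy as np

def staircase_reference(amplitude, grid_freq=50.0, control_freq=10000.0):
    """Discretize one grid-voltage period into N constant set-points.

    N is fixed by the ratio of control frequency to grid frequency, so
    within each control period the PI loop tracks a constant reference,
    for which integral action removes the steady-state error.
    """
    n_steps = int(control_freq / grid_freq)   # N set-points per period
    phase = 2 * np.pi * np.arange(n_steps) / n_steps
    return amplitude * np.sin(phase)          # piecewise-constant targets
```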

  11. Interspecies scaling and prediction of human clearance: comparison of small- and macro-molecule drugs

    PubMed Central

    Huh, Yeamin; Smith, David E.; Feng, Meihau Rose

    2014-01-01

    Human clearance prediction for small- and macro-molecule drugs was evaluated and compared using various scaling methods and statistical analysis. Human clearance is generally well predicted using single or multiple species simple allometry for macro- and small-molecule drugs excreted renally. The prediction error is higher for hepatically eliminated small molecules using single or multiple species simple allometry scaling, and it appears that the prediction error is mainly associated with drugs with low hepatic extraction ratio (Eh). The error in human clearance prediction for hepatically eliminated small molecules was reduced using scaling methods with a correction for maximum life span (MLP) or brain weight (BRW). Human clearance of both small- and macro-molecule drugs is well predicted using the monkey liver blood flow method. Predictions using liver blood flow from other species did not work as well, especially for the small-molecule drugs. PMID:21892879
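
    A hedged illustration of simple allometry as described above: fit log CL against log BW across species, then extrapolate to a 70-kg human. The species values below are hypothetical placeholders, not data from the study.

```python
import numpy as np

# Illustrative (hypothetical) preclinical data: body weight (kg) vs. CL (mL/min)
bw = np.array([0.25, 5.0, 10.0])    # rat, monkey, dog
cl = np.array([0.9, 12.0, 20.0])

# Fit log CL = log a + b * log BW (simple allometry CL = a * BW^b)
b, log_a = np.polyfit(np.log(bw), np.log(cl), 1)
cl_human = np.exp(log_a) * 70.0 ** b   # extrapolate to a 70-kg human
print(f"allometric exponent b = {b:.2f}, predicted human CL = {cl_human:.0f} mL/min")
```

    MLP or BRW corrections follow the same pattern, with clearance multiplied by maximum life span or brain weight before fitting.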

  12. Modified zirconium-eriochrome cyanine R determination of fluoride

    USGS Publications Warehouse

    Thatcher, L.L.

    1957-01-01

    The Eriochrome Cyanine R method for determining fluoride in natural water has been modified to provide a single, stable reagent solution, eliminate interference from oxidizing agents, extend the concentration range to 3 p.p.m., and extend the phosphate tolerance. Temperature effect was minimized; sulfate error was eliminated by precipitation. The procedure is sufficiently tolerant to interferences found in natural and polluted waters to permit the elimination of prior distillation for most samples. The method has been applied to 500 samples.

  13. International Round-Robin Testing of Bulk Thermoelectrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hsin; Porter, Wallace D; Bottner, Harold

    2011-11-01

    Two international round-robin studies were conducted on transport property measurements of bulk thermoelectric materials. The studies uncovered current measurement problems. To obtain the figure of merit ZT of a material, four separate transport measurements must be taken. The round-robin study showed that, among the four properties, the Seebeck coefficient is the one that can be measured consistently. Electrical resistivity shows ±4-9% scatter, and thermal diffusivity shows similar ±5-10% scatter. The reliability of these three properties can be improved by standardizing test procedures and enforcing system calibrations. The worst problem was found in specific heat measurements using DSC. The probability of making a measurement error is great due to the fact that three separate runs must be taken to determine Cp, and baseline shift is always an issue for commercial DSC instruments. It is suggested that the Dulong-Petit limit always be used as a guideline for Cp. Procedures have been developed to eliminate operator and system errors. The IEA-AMT annex is developing standard procedures for transport property testing.
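
    For context, ZT combines the four measured properties as ZT = S^2*T/(rho*kappa), so the quoted scatter compounds. A small sketch of that error propagation, with illustrative numbers rather than round-robin data:

```python
import numpy as np

def zt_with_uncertainty(S, rho, kappa, T, rel_err=(0.02, 0.065, 0.075, 0.01)):
    """Compute ZT = S^2 * T / (rho * kappa) and propagate relative errors.

    rel_err holds fractional uncertainties of (S, rho, kappa, T); the
    defaults are illustrative mid-range values in the spirit of the
    scatter quoted above, not measured ones.
    """
    zt = S**2 * T / (rho * kappa)
    dS, drho, dkappa, dT = rel_err
    rel = np.sqrt((2 * dS)**2 + drho**2 + dkappa**2 + dT**2)  # quadrature sum
    return zt, zt * rel

# Example: S = 200 uV/K, rho = 1e-5 ohm*m, kappa = 1.5 W/m/K at T = 300 K
zt, dzt = zt_with_uncertainty(200e-6, 1e-5, 1.5, 300.0)
print(f"ZT = {zt:.2f} +/- {dzt:.2f}")
```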

  14. 4.5-Gb/s RGB-LED based WDM visible light communication system employing CAP modulation and RLS based adaptive equalization.

    PubMed

    Wang, Yiguang; Huang, Xingxing; Tao, Li; Shi, Jianyang; Chi, Nan

    2015-05-18

    Inter-symbol interference (ISI) is one of the key problems that seriously limit the transmission data rate in high-speed VLC systems. To eliminate ISI and further improve system performance, a series of equalization schemes has been widely investigated. As an adaptive algorithm commonly used in wireless communication, recursive least squares (RLS) is also suitable for visible light communication due to its quick convergence and good performance. In this paper, for the first time we experimentally demonstrate a high-speed RGB-LED based WDM VLC system employing carrier-less amplitude and phase (CAP) modulation and RLS-based adaptive equalization. An aggregate data rate of 4.5 Gb/s is successfully achieved over 1.5-m indoor free-space transmission with the bit error rate (BER) below the 7% forward error correction (FEC) limit of 3.8×10⁻³. To the best of our knowledge, this is the highest data rate ever achieved in RGB-LED based VLC systems.
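
    A minimal sketch of RLS training for a linear feed-forward equalizer, in the spirit of the scheme described above (tap count, initialization, and forgetting factor are assumptions, not the paper's settings):

```python
import numpy as np

def rls_equalizer(received, training, n_taps=7, delta=0.01, lam=0.99):
    """Train a linear feed-forward equalizer with the RLS algorithm.

    received: distorted samples; training: known transmitted symbols
    aligned with `received`. Returns tap weights after one pass.
    """
    w = np.zeros(n_taps)
    P = np.eye(n_taps) / delta               # inverse-correlation estimate
    for n in range(n_taps, len(training)):
        u = received[n - n_taps:n][::-1]     # regressor, most recent first
        k = P @ u / (lam + u @ P @ u)        # RLS gain vector
        e = training[n] - w @ u              # a priori error
        w = w + k * e                        # weight update
        P = (P - np.outer(k, u @ P)) / lam   # inverse-correlation update
    return w
```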

  15. Medición de posiciones astrométricas con CCD en la zona de Rup 21

    NASA Astrophysics Data System (ADS)

    Bustos Fierro, I. H.; Calderón, J. H.

    We demonstrate the use of the block adjustment method for the measurement of astrometric positions from a mosaic of sixteen CCD images with partial overlap, taken with the Jorge Sahade telescope at CASLEO. The observations cover an area of 25' x 25' around the open cluster Rup 21. The source of reference positions was the ACT Reference Catalog. The internal error of the measured positions is analyzed, and the external error is estimated from comparison with the USNO-A catalog. In this comparison it is found that direct CCD images taken with a focal reducer can be distorted by severe field curvature. The effect of the distortion presumably introduced by the optics is eliminated with suitable corrections to the stellar positions measured on every frame, but a new systematic effect on the scale of the entire field is observed, which could be due to the distribution of the reference stars.

  16. Efficient computer algebra algorithms for polynomial matrices in control design

    NASA Technical Reports Server (NTRS)

    Baras, J. S.; Macenany, D. C.; Munach, R.

    1989-01-01

    The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. Matrices with entries from a field and Gaussian elimination play a fundamental role in understanding the triangularization process. In the case of polynomial matrices--matrices with entries from a ring--Gaussian elimination is not defined, and triangularization is accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating point approaches to Euclidean elimination are not very well understood. New algorithms are presented which circumvent entirely such numerical issues through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data--the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.
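
    To illustrate error-free Euclidean elimination, the sketch below reduces two polynomial entries with exact symbolic remainders using sympy; the surviving pivot is their gcd. This is a toy single-column step under stated assumptions, not the paper's algorithms:

```python
import sympy as sp

x = sp.symbols('x')

def euclidean_eliminate(a, b):
    """Zero out one column entry against another using exact polynomial
    remainders (the Euclidean analogue of a Gaussian pivot step)."""
    a, b = sp.Poly(a, x), sp.Poly(b, x)
    while not b.is_zero:
        a, b = b, a.rem(b)     # exact remainder; no floating-point round-off
    return a.as_expr()         # the surviving pivot is gcd(a, b)

# Two polynomial entries in the same column of a polynomial matrix
print(euclidean_eliminate(x**3 - 1, x**2 - 1))   # pivot: x - 1 (up to a unit)
```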

  17. Bidirectional optimization of the melting spinning process.

    PubMed

    Liang, Xiao; Ding, Yongsheng; Wang, Zidong; Hao, Kuangrong; Hone, Kate; Wang, Huaping

    2014-02-01

    A bidirectional optimizing approach for the melting spinning process based on an immune-enhanced neural network is proposed. The proposed bidirectional model can not only reveal the internal nonlinear relationship between the process configuration and the quality indices of the fibers as the final product, but also provide a tool for engineers to develop new fiber products with expected quality specifications. A neural network is taken as the basis for the bidirectional model, and an immune component is introduced to enlarge the searching scope of the solution field so that the neural network has a larger possibility of finding the appropriate and reasonable solution, and the error of prediction can therefore be eliminated. The proposed intelligent model can also help determine what kind of process configuration should be made in order to produce satisfactory fiber products. To make the proposed model practical for manufacturing, a software platform was developed. Simulation results show that the proposed model can eliminate the approximation error raised by the neural network-based optimizing model, owing to the extension of the focusing scope by the artificial immune mechanism. Meanwhile, the proposed model with the corresponding software can conduct optimization in two directions, namely process optimization and category development, and the corresponding results outperform those of an ordinary neural network-based intelligent model. It is also shown that the proposed model has the potential to act as a valuable tool from which the engineers and decision makers of the spinning process could benefit.

  18. Apparatus for and method of eliminating single event upsets in combinational logic

    NASA Technical Reports Server (NTRS)

    Gambles, Jody W. (Inventor); Hass, Kenneth J. (Inventor); Cameron, Kelly B. (Inventor)

    2001-01-01

    An apparatus for and method of eliminating single event upsets (or SEU) in combinational logic are used to prevent error propagation as a result of cosmic particle strikes to the combinational logic. The apparatus preferably includes a combinational logic block electrically coupled to a delay element, a latch and an output buffer. In operation, a signal from the combinational logic is electrically coupled to a first input of the latch. In addition, the signal is routed through the delay element to produce a delayed signal. The delayed signal is routed to a second input of the latch. The latch used in the apparatus for preventing SEU preferably includes latch outputs and a feature that the latch outputs will not change state unless both latch inputs are correct. For example, the latch outputs may not change state unless both latch inputs have the same logical state. When a cosmic particle strikes the combinational logic, a transient disturbance with a predetermined length may appear in the signal. However, a function of the delay element is to preferably provide a time delay greater than the length of the transient disturbance. Therefore, the transient disturbance will not reach both latch inputs simultaneously. As a result, the latch outputs will not permanently change state in error due to the transient disturbance. In addition, the output buffer preferably combines the latch outputs in such a way that the correct state is preserved at all times. Thus, combinational logic with protection from SEU is provided.
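
    A behavioral software sketch of the delay-plus-latch idea (purely illustrative; the invention itself is hardware): the output updates only when the direct and delayed inputs agree, so a transient shorter than the delay never propagates.

```python
def seu_filtered_latch(signal, delay):
    """Behavioral model of the delay-plus-latch SEU filter.

    signal: logic values sampled each time step, possibly containing a
    transient glitch shorter than `delay` steps. The latch output
    changes only when the direct and delayed inputs agree, so a short
    transient never reaches both latch inputs at once.
    """
    state = signal[0]
    out = []
    for t, a in enumerate(signal):
        b = signal[t - delay] if t >= delay else signal[0]
        if a == b:           # both latch inputs agree: safe to update
            state = a
        out.append(state)
    return out

# A 2-step glitch is rejected when the delay exceeds its length
print(seu_filtered_latch([1, 1, 0, 0, 1, 1, 1, 1, 1], delay=3))  # all 1s
```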

  19. Post implantation adjustable intraocular lenses.

    PubMed

    Schwartz, D M; Jethmalani, J M; Sandstedt, C A; Kornfield, J A; Grubbs, R H

    2001-06-01

    To eliminate persistent refractive errors after cataract and phakic IOL surgery, photosensitive silicone IOLs have been developed. These IOL formulations enable precise laser adjustment of IOL power to correct spherical and toric errors post-operatively, after wound and IOL stabilization. Initial experience with these laser-adjustable IOLs indicates excellent biocompatibility and adjustability of more than five diopters.

  20. Accurate optical vector network analyzer based on optical single-sideband modulation and balanced photodetection.

    PubMed

    Xue, Min; Pan, Shilong; Zhao, Yongjiu

    2015-02-15

    A novel optical vector network analyzer (OVNA) based on optical single-sideband (OSSB) modulation and balanced photodetection is proposed and experimentally demonstrated, which can eliminate the measurement error induced by the high-order sidebands in the OSSB signal. According to the analytical model of the conventional OSSB-based OVNA, if the optical carrier in the OSSB signal is fully suppressed, the measurement result is exactly the high-order-sideband-induced measurement error. By splitting the OSSB signal after the optical device-under-test (ODUT) into two paths, removing the optical carrier in one path, and then detecting the two signals in the two paths using a balanced photodetector (BPD), high-order-sideband-induced measurement error can be ideally eliminated. As a result, accurate responses of the ODUT can be achieved without complex post-signal processing. A proof-of-concept experiment is carried out. The magnitude and phase responses of a fiber Bragg grating (FBG) measured by the proposed OVNA with different modulation indices are superimposed, showing that the high-order-sideband-induced measurement error is effectively removed.

  1. Sensor failure detection for jet engines

    NASA Technical Reports Server (NTRS)

    Beattie, E. C.; Laprad, R. F.; Akhter, M. M.; Rock, S. M.

    1983-01-01

    Revisions to the advanced sensor failure detection, isolation, and accommodation (DIA) algorithm, developed under the sensor failure detection system program, were studied to eliminate the steady-state errors due to estimation filter biases. Three algorithm revisions were formulated, and one was chosen for detailed evaluation. The selected revision modifies the DIA algorithm to feed back the actual sensor outputs to the integral portion of the control for the no-failure case. In case of a failure, the estimate of the failed sensor output is fed back to the integral portion. The estimator outputs are fed back to the linear regulator portion of the control at all times. The revised algorithm is evaluated and compared to the baseline algorithm developed previously.

  2. Digital optical conversion module

    DOEpatents

    Kotter, D.K.; Rankin, R.A.

    1988-07-19

    A digital optical conversion module used to convert an analog signal to a computer compatible digital signal including a voltage-to-frequency converter, frequency offset response circuitry, and an electrical-to-optical converter. Also used in conjunction with the digital optical conversion module is an optical link and an interface at the computer for converting the optical signal back to an electrical signal. Suitable for use in hostile environments having high levels of electromagnetic interference, the conversion module retains high resolution of the analog signal while eliminating the potential for errors due to noise and interference. The module can be used to link analog output scientific equipment such as an electrometer used with a mass spectrometer to a computer. 2 figs.

  3. Digital optical conversion module

    DOEpatents

    Kotter, Dale K.; Rankin, Richard A.

    1991-02-26

    A digital optical conversion module used to convert an analog signal to a computer compatible digital signal including a voltage-to-frequency converter, frequency offset response circuitry, and an electrical-to-optical converter. Also used in conjunction with the digital optical conversion module is an optical link and an interface at the computer for converting the optical signal back to an electrical signal. Suitable for use in hostile environments having high levels of electromagnetic interference, the conversion module retains high resolution of the analog signal while eliminating the potential for errors due to noise and interference. The module can be used to link analog output scientific equipment such as an electrometer used with a mass spectrometer to a computer.

  4. Real-time image mosaicing for medical applications.

    PubMed

    Loewke, Kevin E; Camarillo, David B; Jobst, Christopher A; Salisbury, J Kenneth

    2007-01-01

    In this paper we describe the development of a robotically-assisted image mosaicing system for medical applications. The processing occurs in real-time due to a fast initial image alignment provided by robotic position sensing. Near-field imaging, defined by relatively large camera motion, requires translations as well as pan and tilt orientations to be measured. To capture these measurements we use 5-d.o.f. sensing along with a hand-eye calibration to account for sensor offset. This sensor-based approach speeds up the mosaicing, eliminates cumulative errors, and readily handles arbitrary camera motions. Our results have produced visually satisfactory mosaics on a dental model but can be extended to other medical images.

  5. Self-Nulling Eddy Current Probe for Surface and Subsurface Flaw Detection

    NASA Technical Reports Server (NTRS)

    Wincheski, B.; Fulton, J. P.; Nath, S.; Namkung, M.; Simpson, J. W.

    1994-01-01

    An eddy current probe which provides a null-signal in the presence of unflawed material without the need for any balancing circuitry has been developed at NASA Langley Research Center. Such a unique capability of the probe reduces set-up time, eliminates tester configuration errors, and decreases instrumentation requirements. The probe is highly sensitive to surface breaking fatigue cracks, and shows excellent resolution for the measurement of material thickness, including material loss due to corrosion damage. The presence of flaws in the material under test causes an increase in the extremely stable and reproducible output voltage of the probe. The design of the probe and some examples illustrating its flaw detection capabilities are presented.

  6. Studies of contamination of three broiler breeder houses with Salmonella enteritidis before and after cleansing and disinfection.

    PubMed

    Davies, R H; Wray, C

    1996-01-01

    Three broiler breeder houses on three different sites were sampled before and after cleansing and disinfection. None of the farms achieved total elimination of Salmonella enteritidis from the poultry house environment but substantial improvements were seen when errors in the cleansing and disinfection protocol in the first house had been corrected. Fundamental errors such as over-dilution and inconsistent application of disinfectants were observed despite supervision of the process by technical advisors. In each of the three poultry units failure to eliminate a mouse population that was infected with S. enteritidis was likely to be the most important hazard for the next flock.

  7. Human Error and the International Space Station: Challenges and Triumphs in Science Operations

    NASA Technical Reports Server (NTRS)

    Harris, Samantha S.; Simpson, Beau C.

    2016-01-01

    Any system with a human component is inherently risky. Studies in human factors and psychology have repeatedly shown that human operators will inevitably make errors, regardless of how well they are trained. Onboard the International Space Station (ISS) where crew time is arguably the most valuable resource, errors by the crew or ground operators can be costly to critical science objectives. Operations experts at the ISS Payload Operations Integration Center (POIC), located at NASA's Marshall Space Flight Center in Huntsville, Alabama, have learned that from payload concept development through execution, there are countless opportunities to introduce errors that can potentially result in costly losses of crew time and science. To effectively address this challenge, we must approach the design, testing, and operation processes with two specific goals in mind. First, a systematic approach to error and human centered design methodology should be implemented to minimize opportunities for user error. Second, we must assume that human errors will be made and enable rapid identification and recoverability when they occur. While a systematic approach and human centered development process can go a long way toward eliminating error, the complete exclusion of operator error is not a reasonable expectation. The ISS environment in particular poses challenging conditions, especially for flight controllers and astronauts. Operating a scientific laboratory 250 miles above the Earth is a complicated and dangerous task with high stakes and a steep learning curve. While human error is a reality that may never be fully eliminated, smart implementation of carefully chosen tools and techniques can go a long way toward minimizing risk and increasing the efficiency of NASA's space science operations.

  8. New efficient optimizing techniques for Kalman filters and numerical weather prediction models

    NASA Astrophysics Data System (ADS)

    Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis

    2016-06-01

    The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has been increasing in recent years due to the great number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazard early warning systems, and questions on global warming and climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of model bias and the reduction of error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work lies in the solid mathematical background adopted, making use of Information Geometry and statistical techniques, new versions of Kalman filters, and state-of-the-art numerical analysis tools.
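
    As one concrete example of the kind of Kalman-filter bias elimination mentioned above (a minimal scalar sketch assuming a random-walk bias model, not the authors' formulation):

```python
import numpy as np

def kalman_bias_filter(forecasts, observations, q=0.01, r=1.0):
    """Sequentially estimate and remove the systematic forecast bias.

    State: the unknown bias b_t, assumed to follow a random walk with
    process variance q; innovations have observation variance r.
    """
    b, p = 0.0, 1.0                  # initial bias estimate and its variance
    corrected = []
    for f, y in zip(forecasts, observations):
        p += q                       # predict: bias random walk
        k = p / (p + r)              # Kalman gain
        b += k * ((f - y) - b)       # update with the observed forecast error
        p *= (1 - k)
        corrected.append(f - b)      # bias-corrected forecast
    return np.array(corrected)
```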

  9. Transparent Flexible Active Faraday Cage Enables In Vivo Capacitance Measurement in Assembled Microsensor.

    PubMed

    Ahmadi, Mahdi; Rajamani, Rajesh; Sezen, Serdar

    2017-10-01

    Capacitive micro-sensors such as accelerometers, gyroscopes and pressure sensors are increasingly used in the modern electronic world. However, the in vivo use of capacitive sensing for measurement of pressure or other variables inside a human body suffers from significant errors due to stray capacitance. This paper proposes a solution consisting of a transparent thin flexible Faraday cage that surrounds the sensor. By supplying the active sensing voltage simultaneously to the deformable electrode of the capacitive sensor and to the Faraday cage, the stray capacitance during in vivo measurements can be largely eliminated. Due to the transparency of the Faraday cage, the top and bottom portions of a capacitive sensor can be accurately aligned and assembled together. Experimental results presented in the paper show that stray capacitance is reduced by a factor of 10 by the Faraday cage, when the sensor is subjected to a full immersion in water.

  10. Design of a novel passive flexure-based mechanism for microelectromechanical system optical switch assembly

    NASA Astrophysics Data System (ADS)

    Zhang, Jianbin; Sun, Xiantao; Chen, Weihai; Chen, Wenjie; Jiang, Lusha

    2014-12-01

    In microelectromechanical system (MEMS) optical switch assembly, collisions between the optical fiber and the edges of the U-groove can occur due to positioning errors between them. Such collisions can cause irreparable damage, since the optical fiber and the silicon U-groove are usually very fragile. The typical solution is first to detect the positioning errors by machine vision or high-resolution sensors and then to actively eliminate them with the aid of the motion of precision mechanisms. However, this method increases the cost and complexity of the system. In this paper, we present a passive compensation method to accommodate the positioning errors. First, we study the insertion process of the optical fiber into the U-groove to analyze all possible positioning errors as well as the conditions for successful insertion. Then, a novel passive flexure-based mechanism based on the remote center of compliance concept is designed to satisfy the required insertion condition. The pseudo-rigid-body-model method is utilized to calculate the stiffness of the mechanism along the different directions, which is verified by finite element analysis (FEA). Finally, a prototype of the passive flexure-based mechanism is fabricated for performance tests. Both FEA and experimental results indicate that the designed mechanism can be used in MEMS optical switch assembly.

  11. Design of a novel passive flexure-based mechanism for microelectromechanical system optical switch assembly.

    PubMed

    Zhang, Jianbin; Sun, Xiantao; Chen, Weihai; Chen, Wenjie; Jiang, Lusha

    2014-12-01

    In microelectromechanical system (MEMS) optical switch assembly, collisions between the optical fiber and the edges of the U-groove can occur due to positioning errors between them. Such collisions can cause irreparable damage, since the optical fiber and the silicon U-groove are usually very fragile. The typical solution is first to detect the positioning errors by machine vision or high-resolution sensors and then to actively eliminate them with the aid of the motion of precision mechanisms. However, this method increases the cost and complexity of the system. In this paper, we present a passive compensation method to accommodate the positioning errors. First, we study the insertion process of the optical fiber into the U-groove to analyze all possible positioning errors as well as the conditions for successful insertion. Then, a novel passive flexure-based mechanism based on the remote center of compliance concept is designed to satisfy the required insertion condition. The pseudo-rigid-body-model method is utilized to calculate the stiffness of the mechanism along the different directions, which is verified by finite element analysis (FEA). Finally, a prototype of the passive flexure-based mechanism is fabricated for performance tests. Both FEA and experimental results indicate that the designed mechanism can be used in MEMS optical switch assembly.

  12. Effect of bar-code technology on the safety of medication administration.

    PubMed

    Poon, Eric G; Keohane, Carol A; Yoon, Catherine S; Ditmore, Matthew; Bane, Anne; Levtzion-Korach, Osnat; Moniz, Thomas; Rothschild, Jeffrey M; Kachalia, Allen B; Hayes, Judy; Churchill, William W; Lipsitz, Stuart; Whittemore, Anthony D; Bates, David W; Gandhi, Tejal K

    2010-05-06

    Serious medication errors are common in hospitals and often occur during order transcription or administration of medication. To help prevent such errors, technology has been developed to verify medications by incorporating bar-code verification technology within an electronic medication-administration system (bar-code eMAR). We conducted a before-and-after, quasi-experimental study in an academic medical center that was implementing the bar-code eMAR. We assessed rates of errors in order transcription and medication administration on units before and after implementation of the bar-code eMAR. Errors that involved early or late administration of medications were classified as timing errors and all others as nontiming errors. Two clinicians reviewed the errors to determine their potential to harm patients and classified those that could be harmful as potential adverse drug events. We observed 14,041 medication administrations and reviewed 3082 order transcriptions. Observers noted 776 nontiming errors in medication administration on units that did not use the bar-code eMAR (an 11.5% error rate) versus 495 such errors on units that did use it (a 6.8% error rate)--a 41.4% relative reduction in errors (P<0.001). The rate of potential adverse drug events (other than those associated with timing errors) fell from 3.1% without the use of the bar-code eMAR to 1.6% with its use, representing a 50.8% relative reduction (P<0.001). The rate of timing errors in medication administration fell by 27.3% (P<0.001), but the rate of potential adverse drug events associated with timing errors did not change significantly. Transcription errors occurred at a rate of 6.1% on units that did not use the bar-code eMAR but were completely eliminated on units that did use it. Use of the bar-code eMAR substantially reduced the rate of errors in order transcription and in medication administration as well as potential adverse drug events, although it did not eliminate such errors. Our data show that the bar-code eMAR is an important intervention to improve medication safety. (ClinicalTrials.gov number, NCT00243373.) 2010 Massachusetts Medical Society

  13. Adiabatic leakage elimination operator in an experimental framework

    NASA Astrophysics Data System (ADS)

    Wang, Zhao-Ming; Byrd, Mark S.; Jing, Jun; Wu, Lian-Ao

    2018-06-01

    Adiabatic evolution is used in a variety of quantum information processing tasks. However, the elimination of errors is not as well developed as it is for circuit model processing. Here, we present a strategy to improve the performance of a quantum adiabatic process by adding leakage elimination operators (LEOs) to the evolution. These are a sequence of pulse controls acting in an adiabatic subspace to eliminate errors by suppressing unwanted transitions. Using the Feshbach P-Q partitioning technique, we obtain an analytical solution for a set of pulse controls. The effectiveness of the LEO is independent of the specific form of the pulse but depends on the average frequency of the control function. By observing that the evolution of the target eigenstate is governed by a periodic function appearing in the integral of the control function, we show that control parameters can be chosen in such a way that the instantaneous eigenstates of the system are unchanged, yet a speedup can be achieved by suppressing transitions. Furthermore, we give the exact expression of the control function for a counter-unitary transformation to be used in experiments, which provides a clear physical meaning for the LEO, aiding in its implementation.

  14. A constrained-gradient method to control divergence errors in numerical MHD

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2016-10-01

    In numerical magnetohydrodynamics (MHD), a major challenge is maintaining ∇·B = 0. Constrained transport (CT) schemes achieve this but have been restricted to specific methods. For more general (meshless, moving-mesh, ALE) methods, 'divergence-cleaning' schemes reduce the ∇·B errors; however, they can still be significant and can lead to systematic errors which converge away slowly. We propose a new constrained gradient (CG) scheme which augments these with a projection step, and can be applied to any numerical scheme with a reconstruction. This iteratively approximates the least-squares minimizing, globally divergence-free reconstruction of the fluid. Unlike 'locally divergence-free' methods, this actually minimizes the numerically unstable ∇·B terms, without affecting the convergence order of the method. We implement this in the mesh-free code GIZMO and compare various test problems. Compared to cleaning schemes, our CG method reduces the maximum ∇·B errors by ~1-3 orders of magnitude (~2-5 dex below typical errors if no ∇·B cleaning is used). By preventing large ∇·B at discontinuities, this eliminates systematic errors at jumps. Our CG results are comparable to CT methods; for practical purposes, the ∇·B errors are eliminated. The cost is modest, ~30 per cent of the hydro algorithm, and the CG correction can be implemented in a range of numerical MHD methods. While for many problems we find Dedner-type cleaning schemes are sufficient for good results, we identify a range of problems where using only Powell or '8-wave' cleaning can produce order-of-magnitude errors.
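
    For illustration, a plain Hodge projection (a simpler relative of the constrained-gradient idea, not the CG scheme itself) that removes the divergence of a periodic 2-D field spectrally:

```python
import numpy as np

def project_divergence_free(bx, by, dx=1.0):
    """Hodge-project a periodic 2-D field B to its divergence-free part.

    Solves laplacian(phi) = div(B) in Fourier space, then subtracts
    grad(phi), leaving div(B) = 0 to machine precision.
    """
    nx, ny = bx.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2j * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky, indexing='ij')
    div_hat = KX * np.fft.fft2(bx) + KY * np.fft.fft2(by)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                      # avoid dividing the mean mode by zero
    phi_hat = div_hat / k2
    phi_hat[0, 0] = 0.0
    bx_clean = bx - np.real(np.fft.ifft2(KX * phi_hat))
    by_clean = by - np.real(np.fft.ifft2(KY * phi_hat))
    return bx_clean, by_clean
```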

  15. Techniques for Down-Sampling a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to down-sample a measured surface height map for model validation, not only without introducing any re-sampling errors but also while eliminating existing measurement noise and measurement errors. The two new techniques in this software tool can be used in all optical model validation processes involving large space optical surfaces.

  16. On the accuracy of estimation of basic pharmacokinetic parameters by the traditional noncompartmental equations and the prediction of the steady-state volume of distribution in obese patients based upon data derived from normal subjects.

    PubMed

    Berezhkovskiy, Leonid M

    2011-06-01

    The steady-state and terminal volumes of distribution, as well as the mean residence time of drug in the body (V(ss), V(β), and MRT), are common pharmacokinetic parameters calculated using the drug plasma concentration-time profile C(p)(t) following intravenous (i.v. bolus or constant rate infusion) drug administration. These calculations are valid for a linear pharmacokinetic system with central elimination (i.e., an elimination rate proportional to the drug concentration in plasma). Formally, the assumption of central elimination is not normally met, because the rate of drug elimination is proportional to the unbound drug concentration at the elimination site, although equilibration between the systemic circulation and the site of clearance is fast for the majority of small molecule drugs. Thus, the assumption of central elimination is practically quite adequate. It appears reasonable to estimate the extent of possible errors in the determination of these pharmacokinetic parameters due to the absence of central elimination. The comparison of V(ss), V(β), and MRT calculated by exact equations and the commonly used ones was made considering a simplified physiologically based pharmacokinetic model. It was found that if the drug plasma concentration profile is measured accurately, determination of drug distribution volumes and MRT using the traditional noncompartmental calculations from C(p)(t) yields values very close to those obtained from the exact equations. In practice, though, the accurate measurement of C(p)(t), especially its terminal phase, may not always be possible. This is particularly applicable to obtaining the distribution volumes of lipophilic compounds in obese subjects, where a late terminal phase at low drug concentration is quite likely, specifically for compounds with high clearance. An accurate determination of V(ss) is much needed in clinical practice because it is critical for the proper selection of a drug treatment regimen. For that reason, we developed a convenient method for the calculation of V(ss) in obese (or underweight) subjects. It is based on using the V(ss) values obtained from pharmacokinetic studies in normal subjects and the physicochemical properties of the drug molecule. A simple criterion that determines either the increase or decrease of V(ss) (per unit body weight) due to obesity is obtained. An accurate determination of the adipose tissue-plasma partition coefficient is crucial for the practical application of the suggested method. Copyright © 2011 Wiley-Liss, Inc.
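
    A minimal sketch of the traditional noncompartmental calculation discussed above (linear trapezoidal rule, i.v. bolus, no terminal extrapolation; function and variable names are illustrative):

```python
import numpy as np

def noncompartmental_vss(t, cp, dose):
    """Estimate MRT and V(ss) from an i.v. bolus concentration-time profile.

    Uses the linear trapezoidal AUC/AUMC truncated at the last sample,
    so values are biased low if sampling stops before the terminal
    phase is fully captured; this is the situation discussed above.
    """
    trapz = lambda y: np.sum(np.diff(t) * (y[1:] + y[:-1]) / 2.0)
    auc = trapz(cp)          # area under C(t)
    aumc = trapz(t * cp)     # area under t*C(t)
    mrt = aumc / auc
    cl = dose / auc          # clearance
    vss = cl * mrt           # equivalently dose * AUMC / AUC**2
    return mrt, vss
```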

  17. Identifying and attributing common data quality problems: temperature and precipitation observations in Bolivia and Peru

    NASA Astrophysics Data System (ADS)

    Hunziker, Stefan; Gubler, Stefanie; Calle, Juan; Moreno, Isabel; Andrade, Marcos; Velarde, Fernando; Ticona, Laura; Carrasco, Gualberto; Castellón, Yaruska; Oria Rojas, Clara; Brönnimann, Stefan; Croci-Maspoli, Mischa; Konzelmann, Thomas; Rohrer, Mario

    2016-04-01

    Assessing climatological trends and extreme events requires high-quality data. However, for many regions of the world, observational data of the desired quality are not available. In order to eliminate errors in the data, quality control (QC) should be applied before data analysis. If the data still contain undetected errors and quality problems after QC, the consequence may be misleading and erroneous results. A region that is seriously affected by observational data quality problems is the Central Andes. At the same time, climatological information on ongoing climate change and climate risks is of utmost importance in this area due to its vulnerability to meteorological extreme events and climatic changes. Besides data quality issues, the lack of metadata and the low station network density complicate quality control and assessment, and hence appropriate application of the data. Errors and data problems may occur at any point of the data generation chain, e.g. due to unsuitable station configuration or siting, poor station maintenance, erroneous instrument reading, or inaccurate data digitalization and post-processing. Different measurement conditions in the predominantly conventional station networks in Bolivia and Peru, compared to the mostly automated networks in e.g. Europe or North America, may cause different types of errors. Hence, applying QC methods used on state-of-the-art networks to Bolivian and Peruvian climate observations may not be suitable or sufficient. A comprehensive set of Bolivian and Peruvian maximum and minimum temperature and precipitation in-situ measurements was analyzed to detect and describe common data quality problems. Furthermore, station visits and reviews of the original documents were carried out. Some of the errors could be attributed to a specific source. Such information is of great importance for data users, since it allows them to decide for which applications the data can still be used. In ideal cases, it may even allow the error to be corrected. Strategies on how to deal with data from the Central Andes are suggested; the approach may also be applicable to networks in other countries where conditions of climate observation are comparable.

  18. Effects of free convection and friction on heat-pulse flowmeter measurement

    NASA Astrophysics Data System (ADS)

    Lee, Tsai-Ping; Chia, Yeeping; Chen, Jiun-Szu; Chen, Hongey; Liu, Chen-Wuing

    2012-03-01

    Summary: The heat-pulse flowmeter can be used to measure low flow velocities in a borehole; however, bias in the results due to measurement error is often encountered. A carefully designed water circulation system was established in the laboratory to evaluate the accuracy and precision of flow velocity measured by a heat-pulse flowmeter in various conditions. Test results indicated that the coefficient of variation for repeated measurements, ranging from 0.4% to 5.8%, tends to increase with flow velocity. The measurement error increases from 4.6% to 94.4% as the average flow velocity decreases from 1.37 cm/s to 0.18 cm/s. We found that the error resulted primarily from free convection and frictional loss. Free convection plays an important role in heat transport at low flow velocities. The frictional effect varies with the position of measurement and the geometric shape of the inlet and flow-through cell of the flowmeter. Based on the laboratory test data, a calibration equation for the measured flow velocity was derived by least-squares regression analysis. When the flowmeter is used with a diverter, the range of measurable flow velocity can be extended, but the measurement error and the coefficient of variation due to friction increase significantly. At higher velocities under turbulent flow conditions, the measurement error is greater than 100%. Our laboratory experimental results suggest that, to avoid a large error, heat-pulse flowmeter measurement is best conducted in laminar flow, and the effect of free convection should be eliminated at all flow velocities. Field measurement of the vertical flow velocity using the heat-pulse flowmeter was tested in a monitoring well. The calibration of measured velocities not only improved the contrast in hydraulic conductivity between permeable and less permeable layers, but also corrected the inconsistency between the pumping rate and the measured flow rate. We identified two highly permeable sections where the horizontal hydraulic conductivity is 3.7-6.4 times the equivalent hydraulic conductivity obtained from the pumping test. The field test results indicated that, with proper calibration, the flowmeter measurement is capable of characterizing the vertical distribution of preferential flow or hydraulic conductivity.
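
    A hedged sketch of the least-squares calibration step: fit a line mapping flowmeter readings to reference velocities from the circulation system. The calibration pairs below are hypothetical, not the paper's data:

```python
import numpy as np

# Hypothetical calibration pairs (cm/s): flowmeter readings vs. reference
measured = np.array([0.21, 0.45, 0.70, 0.98, 1.31])
reference = np.array([0.18, 0.40, 0.66, 0.95, 1.37])

slope, intercept = np.polyfit(measured, reference, 1)   # least-squares fit

def calibrate(v):
    """Map a raw heat-pulse reading onto the calibrated velocity."""
    return slope * v + intercept

print(f"calibrated 0.60 cm/s reading: {calibrate(0.60):.2f} cm/s")
```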

  19. Efficiency, error and yield in light-directed maskless synthesis of DNA microarrays

    PubMed Central

    2011-01-01

    Background: Light-directed in situ synthesis of DNA microarrays using computer-controlled projection from a digital micromirror device--maskless array synthesis (MAS)--has proved to be successful at both commercial and laboratory scales. The chemical synthetic cycle in MAS is quite similar to that of conventional solid-phase synthesis of oligonucleotides, but the complexity of microarrays and unique synthesis kinetics on the glass substrate require a careful tuning of parameters and unique modifications to the synthesis cycle to obtain optimal deprotection and phosphoramidite coupling. In addition, unintended deprotection due to scattering and diffraction introduces insertion errors that contribute significantly to the overall error rate. Results: Stepwise phosphoramidite coupling yields have been greatly improved and are now comparable to those obtained in solid-phase synthesis of oligonucleotides. Extended chemical exposure in the synthesis of complex, long oligonucleotide arrays results in lower--but still high--final average yields which approach 99%. The new synthesis chemistry includes elimination of the standard oxidation until the final step, and improved coupling and light deprotection. Insertions due to stray light are the limiting factor in sequence quality for oligonucleotide synthesis for gene assembly. Diffraction and local flare are by far the largest contributors to loss of optical contrast. Conclusions: Maskless array synthesis is an efficient and versatile method for synthesizing high-density arrays of long oligonucleotides for hybridization- and other molecular binding-based experiments. For applications requiring high sequence purity, such as gene assembly, diffraction and flare remain significant obstacles, but can be significantly reduced with straightforward experimental strategies. PMID:22152062

  20. Kinematics Simulation of the Cardan Shaft for Investigation of the Cardan Error in Catia V5

    NASA Astrophysics Data System (ADS)

    Hajdu, Štefan; Rolník, Ladislav; Švoš, Juraj

    2016-12-01

    The goal of this paper is the creation of kinematic simulations of the cardan shaft in the CAD/CAM/CAE system CATIA V5 and the analysis of three assembly cases, in which the angular accelerations of the input driving shaft, the central cardan shaft, and the output driven shaft were observed. The scientific result of this paper is to confirm the presence of the cardan error and to show how this type of error can be successfully eliminated.

  1. [Establishment of model of traditional Chinese medicine injections post-marketing safety monitoring].

    PubMed

    Guo, Xin-E; Zhao, Yu-Bin; Xie, Yan-Ming; Zhao, Li-Cai; Li, Yan-Feng; Hao, Zhe

    2013-09-01

    To establish a nurse-based post-marketing safety surveillance model for traditional Chinese medicine injections (TCMIs), a TCMI safety monitoring team and a research hospital team engaged in the research, monitoring processes, and quality control processes were established, in order to achieve comprehensive, timely, accurate, and real-time access to research data and to eliminate errors in data collection. A triage system involving a study nurse as the first point of contact, clinicians, and clinical pharmacists was set up in a TCM hospital. Following the specified workflow, which involved labeling of TCM injections and using improved monitoring forms, it was found that there were no missing reports and that the error ratio was zero. A research nurse as the first and main point of contact in post-marketing safety monitoring of TCM, as part of a triage model, ensures that the research data collected are authentic, accurate, timely, and complete, and eliminates errors during data collection. Hospital-based monitoring is a robust and operable process.

  2. Population size estimation in Yellowstone wolves with error-prone noninvasive microsatellite genotypes.

    PubMed

    Creel, Scott; Spong, Goran; Sands, Jennifer L; Rotella, Jay; Zeigle, Janet; Joe, Lawrence; Murphy, Kerry M; Smith, Douglas

    2003-07-01

    Determining population sizes can be difficult, but is essential for conservation. By counting distinct microsatellite genotypes, DNA from noninvasive samples (hair, faeces) allows estimation of population size. Problems arise because genotypes from noninvasive samples are error-prone, but genotyping errors can be reduced by multiple polymerase chain reaction (PCR). For faecal genotypes from wolves in Yellowstone National Park, error rates varied substantially among samples, often above the 'worst-case threshold' suggested by simulation. Consequently, a substantial proportion of multilocus genotypes held one or more errors, despite multiple PCR. These genotyping errors created several genotypes per individual and caused overestimation (up to 5.5-fold) of population size. We propose a 'matching approach' to eliminate this overestimation bias.
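
    A greedy toy version of such a matching approach (illustrative only; the abstract does not specify the authors' algorithm at this level of detail): genotypes differing at no more than a threshold number of loci are collapsed into one individual, so single-locus genotyping errors no longer inflate the count.

```python
def count_individuals(genotypes, max_mismatch=1):
    """Greedy sketch of a 'matching approach' to error-prone genotypes.

    Genotypes differing at no more than max_mismatch loci are treated
    as the same individual; only genuinely distinct multilocus
    genotypes increment the population count.
    """
    individuals = []
    for g in genotypes:
        for rep in individuals:
            if sum(a != b for a, b in zip(g, rep)) <= max_mismatch:
                break                  # matches an existing individual
        else:
            individuals.append(g)      # genuinely new multilocus genotype
    return len(individuals)

# Two error-free copies plus one copy with a single-locus error: one wolf
print(count_individuals([(1, 4, 7, 9), (1, 4, 7, 9), (1, 4, 8, 9)]))  # 1
```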

  3. Reward system and temporal pole contributions to affective evaluation during a first person shooter video game.

    PubMed

    Mathiak, Krystyna A; Klasen, Martin; Weber, René; Ackermann, Hermann; Shergill, Sukhwinder S; Mathiak, Klaus

    2011-07-12

    Violent content in video games evokes many concerns but there is little research concerning its rewarding aspects. It was demonstrated that playing a video game leads to striatal dopamine release. It is unclear, however, which aspects of the game cause this reward system activation and if violent content contributes to it. We combined functional Magnetic Resonance Imaging (fMRI) with individual affect measures to address the neuronal correlates of violence in a video game. Thirteen male German volunteers played a first-person shooter game (Tactical Ops: Assault on Terror) during fMRI measurement. We defined success as eliminating opponents, and failure as being eliminated themselves. Affect was measured directly before and after game play using the Positive and Negative Affect Schedule (PANAS). Failure and success events evoked increased activity in visual cortex but only failure decreased activity in orbitofrontal cortex and caudate nucleus. A negative correlation between negative affect and responses to failure was evident in the right temporal pole (rTP). The deactivation of the caudate nucleus during failure is in accordance with its role in reward-prediction error: it occurred whenever subject missed an expected reward (being eliminated rather than eliminating the opponent). We found no indication that violence events were directly rewarding for the players. We addressed subjective evaluations of affect change due to gameplay to study the reward system. Subjects reporting greater negative affect after playing the game had less rTP activity associated with failure. The rTP may therefore be involved in evaluating the failure events in a social context, to regulate the players' mood.

  4. Issues associated with Galilean invariance on a moving solid boundary in the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Peng, Cheng; Geneva, Nicholas; Guo, Zhaoli; Wang, Lian-Ping

    2017-01-01

    In lattice Boltzmann simulations involving moving solid boundaries, the momentum exchange between the solid and fluid phases was recently found to be not fully consistent with the principle of local Galilean invariance (GI) when the bounce-back schemes (BBS) and the momentum exchange method (MEM) are used. In the past, this inconsistency was resolved by introducing modified MEM schemes so that the overall moving-boundary algorithm could be more consistent with GI. However, in this paper we argue that the true origin of this violation of Galilean invariance (VGI) in the presence of a moving solid-fluid interface is due to the BBS itself, as the VGI error not only exists in the hydrodynamic force acting on the solid phase, but also in the boundary force exerted on the fluid phase, according to Newton's Third Law. The latter, however, has so far gone unnoticed in previously proposed modified MEM schemes. Based on this argument, we conclude that the previous modifications to the momentum exchange method are incomplete solutions to the VGI error in the lattice Boltzmann method (LBM). An implicit remedy to the VGI error in the LBM and its limitation is then revealed. To address the VGI error for a case when this implicit remedy does not exist, a bounce-back scheme based on coordinate transformation is proposed. Numerical tests in both laminar and turbulent flows show that the proposed scheme can effectively eliminate the errors associated with the usual bounce-back implementations on a no-slip solid boundary, and it can maintain an accurate momentum exchange calculation with minimal computational overhead.

  5. Using Pipelined XNOR Logic to Reduce SEU Risks in State Machines

    NASA Technical Reports Server (NTRS)

    Le, Martin; Zheng, Xin; Katanyoutant, Sunant

    2008-01-01

    Single-event upsets (SEUs) pose great threats to avionic systems' state-machine control logic, which is frequently used to control sequences of events and to qualify protocols. The risks of SEUs manifest in two ways: (a) the state machine's state information is changed, causing the state machine to unexpectedly transition to another state; (b) due to the asynchronous nature of an SEU, the state machine's state registers become metastable, consequently causing any combinational logic associated with the metastable registers to malfunction temporarily. Effect (a) can be mitigated with methods such as triple-modular redundancy (TMR). However, effect (b) cannot be eliminated and can degrade the effectiveness of any mitigation method for effect (a). Although there is no way to completely eliminate the risk of SEU-induced errors, the risk can be made very small by use of a combination of very fast state-machine logic and error-detection logic. Therefore, one of the two main elements of the present method is to design the fastest state-machine logic circuitry by basing it on the fastest generic state-machine design, which is that of a one-hot state machine. The other main design element is to design fast error-detection logic circuitry and to optimize it for implementation in a field-programmable gate array (FPGA) architecture. In the resulting design, the one-hot state machine is fitted with a multiple-input XNOR gate for detection of illegal states. The XNOR gate is implemented with lookup tables and with pipelines for high speed. In this method, the task of designing all the logic must be performed manually because no currently available logic synthesis software tool can produce optimal solutions for design problems of this type. However, some assistance is provided by a script, written for this purpose in the Python language (an object-oriented interpretive computer language), to automatically generate hardware description language (HDL) code from state-transition rules.
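
    The legality check that the multiple-input XNOR network performs can be stated compactly; the Python below is an illustrative model of the check, not the FPGA logic itself:

```python
def is_legal_one_hot(state_bits):
    """True when exactly one state register is set.

    The pipelined multiple-input XNOR network performs the equivalent
    check in hardware: an SEU that flips any single register leaves
    zero or two bits hot, which is flagged as an illegal state.
    """
    return sum(state_bits) == 1

print(is_legal_one_hot([0, 0, 1, 0]))   # True: legal one-hot state
print(is_legal_one_hot([0, 1, 1, 0]))   # False: an SEU set a second bit
```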

  6. The impact of modelling errors on interferometer calibration for 21 cm power spectra

    NASA Astrophysics Data System (ADS)

    Ewall-Wice, Aaron; Dillon, Joshua S.; Liu, Adrian; Hewitt, Jacqueline

    2017-09-01

    We study the impact of sky-based calibration errors from source mismodelling on 21 cm power spectrum measurements with an interferometer and propose a method for suppressing their effects. While emission from faint sources that are not accounted for in calibration catalogues is believed to be spectrally smooth, deviations of true visibilities from model visibilities are not, due to the inherent chromaticity of the interferometer's sky response (the 'wedge'). Thus, unmodelled foregrounds, below the confusion limit of many instruments, introduce frequency structure into gain solutions on the same line-of-sight scales on which we hope to observe the cosmological signal. We derive analytic expressions describing these errors using linearized approximations of the calibration equations and estimate the impact of this bias on measurements of the 21 cm power spectrum during the epoch of reionization. Given our current precision in primary beam and foreground modelling, this noise will significantly impact the sensitivity of existing experiments that rely on sky-based calibration. Our formalism describes the scaling of calibration with array and sky-model parameters and can be used to guide future instrument design and calibration strategy. We find that sky-based calibration that downweights long baselines can eliminate contamination in most of the region outside of the wedge with only a modest increase in instrumental noise.
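    To make the proposed mitigation concrete, the sketch below gives one hedged reading of baseline-downweighted, sky-based calibration: a damped alternating least-squares solve of V_obs ~ g_i * conj(g_j) * V_model at one frequency channel, with a Gaussian taper suppressing the long, strongly chromatic baselines. The solver form, the weight shape, and all names (calibrate, sigma, blen) are illustrative assumptions, not the paper's exact formalism.

      import numpy as np

      def calibrate(V_obs, V_mod, blen, sigma=50.0, n_iter=50):
          """Per-antenna gains from observed/model visibilities (n_ant x n_ant)
          with a Gaussian downweight of baselines longer than ~sigma."""
          n = V_obs.shape[0]
          wgt = np.exp(-0.5 * (blen / sigma) ** 2)  # taper long baselines
          np.fill_diagonal(wgt, 0.0)                # ignore autocorrelations
          g = np.ones(n, dtype=complex)
          for _ in range(n_iter):
              M = np.conj(g)[None, :] * V_mod       # g_j* V_mod_ij
              num = (wgt * V_obs * np.conj(M)).sum(axis=1)
              den = (wgt * np.abs(M) ** 2).sum(axis=1)
              g = 0.5 * g + 0.5 * num / den         # damped update for stability
          return g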

  7. Hybrid Adaptive Flight Control with Model Inversion Adaptation

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2011-01-01

    This study investigates a hybrid adaptive flight control method as a design possibility for a flight control system that can enable an effective adaptation strategy to deal with off-nominal flight conditions. The hybrid adaptive control blends both direct and indirect adaptive control in a model inversion flight control architecture. The blending of both direct and indirect adaptive control provides a much more flexible and effective adaptive flight control architecture than that with either direct or indirect adaptive control alone. The indirect adaptive control is used to update the model inversion controller by an on-line parameter estimation of uncertain plant dynamics based on two methods. The first parameter estimation method is an indirect adaptive law based on the Lyapunov theory, and the second method is a recursive least-squares indirect adaptive law. The model inversion controller is therefore made to adapt to changes in the plant dynamics due to uncertainty. As a result, the modeling error is reduced, which directly leads to a decrease in the tracking error. In conjunction with the indirect adaptive control that updates the model inversion controller, a direct adaptive control is implemented as an augmented command to further reduce any residual tracking error that is not entirely eliminated by the indirect adaptive control.
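    Of the two estimation methods mentioned, the recursive least-squares law is the easier to state compactly. Below is a generic RLS updater for a plant written in regression form y = phi . theta, of the kind used for on-line estimation of uncertain plant parameters; the forgetting factor and initial covariance are illustrative choices, not values from the study.

      import numpy as np

      class RecursiveLeastSquares:
          """Textbook RLS with forgetting factor lam for y = phi . theta."""
          def __init__(self, n_params, lam=0.99, p0=1e3):
              self.theta = np.zeros(n_params)   # parameter estimate
              self.P = np.eye(n_params) * p0    # estimate covariance
              self.lam = lam

          def update(self, phi, y):
              phi = np.asarray(phi, dtype=float)
              err = y - phi @ self.theta                           # prediction error
              k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain vector
              self.theta = self.theta + k * err
              self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
              return self.theta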

  8. Global Digital Image Mosaics of Mars: Assessment of Geodetic Accuracy

    NASA Technical Reports Server (NTRS)

    Kirk, R.; Archinal, B. A.; Lee, E. M.; Davies, M. E.; Colvin, T. R.; Duxbury, T. C.

    2001-01-01

    A revised global image mosaic of Mars (MDIM 2.0) was recently completed by USGS. Comparison with high-resolution gridded Mars Orbiter Laser Altimeter (MOLA) digital image mosaics will allow us to quantify its geodetic errors; linking the next MDIM to the MOLA data will help eliminate those errors. Additional information is contained in the original extended abstract.

  9. Leveraging the Potential of Peer Feedback in an Academic Writing Activity through Sense-Making Support

    ERIC Educational Resources Information Center

    Wichmann, Astrid; Funk, Alexandra; Rummel, Nikol

    2018-01-01

    The act of revising is an important aspect of academic writing. Although revision is crucial for eliminating writing errors and producing high-quality texts, research on writing expertise shows that novices rarely engage in revision activities. Providing information on writing errors by means of peer feedback has become a popular method in writing…

  10. Does applying technology throughout the medication use process improve patient safety with antineoplastics?

    PubMed

    Bubalo, Joseph; Warden, Bruce A; Wiegel, Joshua J; Nishida, Tess; Handel, Evelyn; Svoboda, Leanne M; Nguyen, Lam; Edillo, P Neil

    2014-12-01

    Medical errors, in particular medication errors, continue to be a troublesome factor in the delivery of safe and effective patient care. Antineoplastic agents represent a group of medications highly susceptible to medication errors due to their complex regimens and narrow therapeutic indices. As the majority of these medication errors are frequently associated with breakdowns in poorly defined systems, developing technologies and evolving workflows seem to be a logical approach to provide added safeguards against medication errors. This article reviews both the pros and cons of today's technologies and their ability to simplify the medication use process, reduce medication errors, improve documentation, reduce healthcare costs and increase provider efficiency, as they relate to the use of antineoplastic therapy throughout the medication use process. Several technologies, mainly computerized provider order entry (CPOE), barcode medication administration (BCMA), smart pumps, the electronic medication administration record (eMAR), and telepharmacy, have been well described and proven to reduce medication errors, improve adherence to quality metrics, and/or reduce healthcare costs in a broad scope of patients. Evidence for the use of these technologies during antineoplastic therapy is weak at best and lacking for most. Specific to the antineoplastic medication use system, the only technology with data to adequately support a claim of reduced medication errors is CPOE. In addition to the benefits these technologies can provide, it is also important to recognize their potential to induce new types of errors and inefficiencies which can negatively impact patient care. The utilization of technology reduces but does not eliminate the potential for error. The evidence base to support technology in preventing medication errors is limited in general but even more deficient in the realm of antineoplastic therapy. Though CPOE has the best evidence to support its use in the antineoplastic population, benefit from many other technologies may have to be inferred from data in other patient populations. As health systems begin to widely adopt and implement new technologies, it is important to critically assess their effectiveness in improving patient safety.

  11. Characterization of identification errors and uses in localization of poor modal correlation

    NASA Astrophysics Data System (ADS)

    Martin, Guillaume; Balmes, Etienne; Chancelier, Thierry

    2017-05-01

    While modal identification is a mature subject, very few studies address the characterization of errors associated with components of a mode shape. This is particularly important in test/analysis correlation procedures, where the Modal Assurance Criterion (MAC) is used to pair modes and to localize the sensors at which discrepancies occur. Poor correlation is usually attributed to modeling errors, but identification errors clearly occur as well. In particular, with 3D Scanning Laser Doppler Vibrometer measurement, many transfer functions are measured. As a result, individual validation of each measurement cannot be performed manually in a reasonable time frame, and a notable fraction of measurements is expected to be fairly noisy, leading to poor identification of the associated mode shape components. The paper first addresses measurements and introduces multiple criteria. The error measures the difference between test and synthesized transfer functions around each resonance and can be used to localize poorly identified modal components. For intermediate error values, a diagnostic of the origin of the error is needed. The level evaluates the transfer-function amplitude in the vicinity of a given mode and can be used to eliminate sensors with low responses. A Noise Over Signal indicator, the product of error and level, is then shown to be relevant for detecting poorly excited modes and errors due to modal property shifts between test batches. Finally, a contribution is introduced to evaluate the visibility of a mode in each transfer function. Using tests on a drum brake component, these indicators are shown to provide relevant insight into the quality of measurements. In a second part, test/analysis correlation is addressed with a focus on the localization of sources of poor mode shape correlation. The MACCo algorithm, which sorts sensors by the impact of their removal on a MAC computation, is shown to be particularly relevant. Combined with the error, it avoids keeping erroneous modal components. Applied after removal of poor modal components, it provides spatial maps of poor correlation, which help localize mode-shape correlation errors and thus prepare the selection of model changes in updating procedures.
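    For readers unfamiliar with the quantities involved, the sketch below gives the standard Modal Assurance Criterion together with a hedged reading of the error, level, and Noise Over Signal indicators as simple per-transfer-function quantities evaluated over frequency points near a resonance; the paper's exact definitions may differ in detail.

      import numpy as np

      def mac(phi, psi):
          """Modal Assurance Criterion between two (possibly complex) mode shapes."""
          return np.abs(np.vdot(phi, psi)) ** 2 / (
              np.vdot(phi, phi).real * np.vdot(psi, psi).real)

      def error_level_nos(H_test, H_synth):
          """Illustrative indicators per transfer function; frequency on axis 0.
          error: test/synthesis mismatch near the resonance; level: response
          amplitude; NOS = error x level flags noisy, poorly excited channels."""
          error = (np.linalg.norm(H_test - H_synth, axis=0)
                   / np.linalg.norm(H_test, axis=0))
          level = np.abs(H_test).max(axis=0)
          level = level / level.max()
          return error, level, error * level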

  12. Rate, causes and reporting of medication errors in Jordan: nurses' perspectives.

    PubMed

    Mrayyan, Majd T; Shishani, Kawkab; Al-Faouri, Ibrahim

    2007-09-01

    The aim of the study was to describe Jordanian nurses' perceptions about various issues related to medication errors. This is the first nursing study about medication errors in Jordan. This was a descriptive study. A convenience sample of 799 nurses from 24 hospitals was obtained. Descriptive and inferential statistics were used for data analysis. Over the course of their nursing careers, the average number of recalled committed medication errors per nurse was 2.2. Using incident reports, the rate of medication errors reported to nurse managers was 42.1%. Medication errors occurred mainly when medication labels/packaging were of poor quality or damaged. Nurses failed to report medication errors because they were afraid that they might be subjected to disciplinary actions or even lose their jobs. In the stepwise regression model, gender was the only predictor of medication errors in Jordan. Strategies to reduce or eliminate medication errors are required.

  13. Determination of fluoride in water - A modified zirconium-alizarin method

    USGS Publications Warehouse

    Lamar, W.L.

    1945-01-01

    A convenient, rapid colorimetric procedure using the zirconium-alizarin indicator acidified with sulfuric acid for the determination of fluoride in water is described. Since this acid indicator is stable indefinitely, it is more useful than other zirconium-alizarin reagents previously reported. The use of sulfuric acid alone in acidifying the zirconium-alizarin reagent makes possible the maximum suppression of the interference of sulfate. Control of the pH of the samples eliminates errors due to the alkalinity of the samples. The fluoride content of waters containing less than 500 parts per million of sulfate and less than 1000 p.p.m. of chloride may be determined within a limit of 0.1 p.p.m. when a 100-ml. sample is used.

  14. A consideration of the use of optical fibers to remotely couple photometers to telescopes

    NASA Technical Reports Server (NTRS)

    Heacox, William D.

    1988-01-01

    The possible use of optical fibers to remotely couple photometers to telescopes is considered. Such an application offers the apparent prospect of enhancing photometric stability as a consequence of the benefits of remote operation and decreased sensitivity to image details. A properly designed fiber optic coupler will probably show no significant changes in optical transmission due to normal variations in the fiber configuration. It may be more difficult to eliminate configuration-dependent effects on the pupil of the transmitted beam, and thus achieve photometric stability against guiding and seeing errors. In addition, there is some evidence for significant changes in the optical throughputs of fibers over the temperature range normally encountered in astronomical observatories.

  15. Identifying sensitive areas of adaptive observations for prediction of the Kuroshio large meander using a shallow-water model

    NASA Astrophysics Data System (ADS)

    Zou, Guang'an; Wang, Qiang; Mu, Mu

    2016-09-01

    Sensitive areas for prediction of the Kuroshio large meander using a 1.5-layer, shallow-water ocean model were investigated using the conditional nonlinear optimal perturbation (CNOP) and first singular vector (FSV) methods. A series of sensitivity experiments were designed to test the sensitivity of sensitive areas within the numerical model. The following results were obtained: (1) the effect of initial CNOP and FSV patterns in their sensitive areas is greater than that of the same patterns in randomly selected areas, with the effect of the initial CNOP patterns in CNOP sensitive areas being the greatest; (2) both CNOP- and FSV-type initial errors grow more quickly than random errors; (3) the effect of random errors superimposed on the sensitive areas is greater than that of random errors introduced into randomly selected areas, and initial errors in the CNOP sensitive areas have greater effects on final forecasts. These results reveal that the sensitive areas determined using the CNOP are more sensitive than those of FSV and other randomly selected areas. In addition, ideal hindcasting experiments were conducted to examine the validity of the sensitive areas. The results indicate that reduction (or elimination) of CNOP-type errors in CNOP sensitive areas at the initial time has a greater forecast benefit than the reduction (or elimination) of FSV-type errors in FSV sensitive areas. These results suggest that the CNOP method is suitable for determining sensitive areas in the prediction of the Kuroshio large-meander path.

  16. Computerized Orders with Standardized Concentrations Decrease Dispensing Errors of Continuous Infusion Medications for Pediatrics

    PubMed Central

    Sowan, Azizeh K.; Vaidya, Vinay U.; Soeken, Karen L.; Hilmas, Elora

    2010-01-01

    OBJECTIVES The use of continuous infusion medications with individualized concentrations may increase the risk for errors in pediatric patients. The objective of this study was to evaluate the effect of computerized prescriber order entry (CPOE) for continuous infusions with standardized concentrations on the frequency of pharmacy processing errors. In addition, time to process handwritten versus computerized infusion orders was evaluated, and user satisfaction with CPOE as compared to handwritten orders was measured. METHODS Using a crossover design, 10 pharmacists in the pediatric satellite within a university teaching hospital were given test scenarios of handwritten and CPOE order sheets and asked to process infusion orders using the pharmacy system in order to generate infusion labels. Participants were given three groups of orders: five correct handwritten orders, four handwritten orders written with deliberate errors, and five correct CPOE orders. Label errors were analyzed and time to complete the task was recorded. RESULTS Using CPOE orders, participants required less processing time per infusion order (2 min, 5 sec ± 58 sec) compared with time per infusion order in the first handwritten order sheet group (3 min, 7 sec ± 1 min, 20 sec) and the second handwritten order sheet group (3 min, 26 sec ± 1 min, 8 sec), (p<0.01). CPOE eliminated all error types except wrong concentration. With CPOE, 4% of infusions processed contained errors, compared with 26% of the first group of handwritten orders and 45% of the second group of handwritten orders (p<0.03). Pharmacists were more satisfied with CPOE orders when compared with the handwritten method (p=0.0001). CONCLUSIONS CPOE orders saved pharmacists' time and greatly improved the safety of processing continuous infusions, although not all errors were eliminated. Pharmacists were overwhelmingly satisfied with the CPOE orders. PMID:22477811

  17. A simple differential steady-state method to measure the thermal conductivity of solid bulk materials with high accuracy.

    PubMed

    Kraemer, D; Chen, G

    2014-02-01

    Accurate measurements of thermal conductivity are of great importance for materials research and development. Steady-state methods determine thermal conductivity directly from the proportionality between heat flow and an applied temperature difference (Fourier Law). Although theoretically simple, in practice, achieving high accuracies with steady-state methods is challenging and requires rather complex experimental setups due to temperature sensor uncertainties and parasitic heat loss. We developed a simple differential steady-state method in which the sample is mounted between an electric heater and a temperature-controlled heat sink. Our method calibrates for parasitic heat losses from the electric heater during the measurement by maintaining a constant heater temperature close to the environmental temperature while varying the heat sink temperature. This enables a large signal-to-noise ratio which permits accurate measurements of samples with small thermal conductance values without an additional heater calibration measurement or sophisticated heater guards to eliminate parasitic heater losses. Additionally, the differential nature of the method largely eliminates the uncertainties of the temperature sensors, permitting measurements with small temperature differences, which is advantageous for samples with high thermal conductance values and/or with strongly temperature-dependent thermal conductivities. In order to accelerate measurements of more than one sample, the proposed method allows for measuring several samples consecutively at each temperature measurement point without adding significant error. We demonstrate the method by performing thermal conductivity measurements on commercial bulk thermoelectric Bi2Te3 samples in the temperature range of 30-150 °C with an error below 3%.
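    A hedged numeric sketch of the differential idea: because the heater is held at a fixed temperature, its parasitic loss is a constant offset, and the sample conductance K is simply the slope of heater power versus the heater-sink temperature difference (Fourier's law, Q = K dT). All numbers below are hypothetical, chosen only to give a Bi2Te3-like conductivity.

      import numpy as np

      dT = np.array([2.0, 4.0, 6.0, 8.0])              # heater minus sink, K
      Q = np.array([0.0076, 0.0135, 0.0196, 0.0255])   # heater power, W (hypothetical)

      K, Q_loss = np.polyfit(dT, Q, 1)   # slope = sample conductance, offset = parasitic loss
      A, L = 4e-6, 2e-3                  # assumed cross-section (m^2) and length (m)
      kappa = K * L / A                  # thermal conductivity, W/(m K)
      print(f"K = {K*1e3:.2f} mW/K, parasitic = {Q_loss*1e3:.1f} mW, kappa = {kappa:.2f} W/(m K)")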

  18. Skull registration for prone patient position using tracked ultrasound

    NASA Astrophysics Data System (ADS)

    Underwood, Grace; Ungi, Tamas; Baum, Zachary; Lasso, Andras; Kronreif, Gernot; Fichtinger, Gabor

    2017-03-01

    PURPOSE: Tracked navigation has become prevalent in neurosurgery. Problems with registration of a patient to a preoperative image arise when the patient is in a prone position, because surfaces accessible to optical tracking on the back of the head are unreliable for registration. We investigated the accuracy of surface-based registration using points accessible through tracked ultrasound, which can reach bone surfaces that are not available to optical tracking. Tracked ultrasound could eliminate the need to (i) work under the table during registration and (ii) adjust the tracker between registration and surgery. In addition, tracked ultrasound provides a non-invasive alternative to registration methods involving screw implantation. METHODS: A phantom study was performed to test the feasibility of tracked ultrasound for registration. An initial anatomical-landmark registration was performed to partially align the preoperative computed tomography data with the skull phantom. Surface points accessible by tracked ultrasound were then collected and used to run an iterative closest point (ICP) algorithm. RESULTS: When the surface registration was compared to a ground-truth landmark registration, the average TRE was 1.6 ± 0.1 mm and the average distance of points off the skull surface was 0.6 ± 0.1 mm. CONCLUSION: The use of tracked ultrasound is feasible for registration of patients in the prone position and eliminates the need to perform registration under the table. The translational component of error was minimal; the TRE is therefore dominated by a rotational component of error.
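    As background for the surface-registration step, the core of an iterative-closest-point refinement after a landmark initialization can be sketched as below: nearest-neighbour pairing followed by an SVD (Kabsch) rigid-transform fit, repeated. This is generic textbook ICP under simplifying assumptions (point sets in the same units, no outlier rejection), not the authors' implementation.

      import numpy as np
      from scipy.spatial import cKDTree

      def best_rigid_transform(P, Q):
          """Least-squares rotation R and translation t mapping points P onto Q."""
          cp, cq = P.mean(axis=0), Q.mean(axis=0)
          H = (P - cp).T @ (Q - cq)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
          R = Vt.T @ D @ U.T
          return R, cq - R @ cp

      def icp(source, target, n_iter=30):
          """Pair each source point with its nearest target point, refit, repeat."""
          tree = cKDTree(target)
          src = source.copy()
          for _ in range(n_iter):
              _, idx = tree.query(src)
              R, t = best_rigid_transform(src, target[idx])
              src = src @ R.T + t
          return src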

  19. Evaluating the Impact of Radio Frequency Identification Retained Surgical Instruments Tracking on Patient Safety: Literature Review.

    PubMed

    Schnock, Kumiko O; Biggs, Bonnie; Fladger, Anne; Bates, David W; Rozenblum, Ronen

    2017-02-22

    Retained surgical instruments (RSI) are one of the most serious preventable complications in operating room settings, potentially leading to profound adverse effects for patients, as well as costly legal and financial consequences for hospitals. Safety measures to eliminate RSIs have been widely adopted in the United States and abroad, but despite widespread efforts, medical errors with RSI have not been eliminated. Through a systematic review of recent studies, we aimed to identify the impact of radio frequency identification (RFID) technology on reducing RSI errors and improving patient safety. A literature search on the effects of RFID technology on RSI error reduction was conducted in PubMed and CINAHL (2000-2016). Relevant articles were selected and reviewed by 4 researchers. After the literature search, 385 articles were identified and the full texts of the 88 articles were assessed for eligibility. Of these, 5 articles were included to evaluate the benefits and drawbacks of using RFID for preventing RSI-related errors. The use of RFID resulted in rapid detection of RSI through body tissue with high accuracy rates, reducing risk of counting errors and improving workflow. Based on the existing literature, RFID technology seems to have the potential to substantially improve patient safety by reducing RSI errors, although the body of evidence is currently limited. Better designed research studies are needed to get a clear understanding of this domain and to find new opportunities to use this technology and improve patient safety.

  20. Intelligent magnetometer with photoelectric sampler

    NASA Astrophysics Data System (ADS)

    Wang, Defang; Xu, Yan; Zhu, Minjun

    1991-08-01

    The magnetometer described in this paper introduces a photoelectric sampler and a single-chip microcomputer, thus eliminating errors that cannot be removed in a purely analog circuit. The application of the photoelectric segregator and the voltage-to-frequency converter has suppressed interference significantly. To meet the requirements of magnetic field measurement, a function for automatically searching and latching the reading is added. The intelligent magnetometer has higher accuracy and good temperature stability.

  1. A Performance Evaluation of a Lean Reparable Pipeline in Various Demand Environments

    DTIC Science & Technology

    2004-03-23

    of defects (Dennis, 2002:90). Shingo espoused the true goal should be zero defects and to this end, invented the poka-yoke, or a simple, inexpensive...92). Despite the inability to eliminate human errors, poka-yoke devices can still enable the elimination of production defects (Dennis, 2002:91... Poka-yoke devices are essentially foolproofing mechanisms which incorporate automatic inspection into the production process. Despite the fact

  2. Optimizing Processes to Minimize Risk

    NASA Technical Reports Server (NTRS)

    Loyd, David

    2017-01-01

    NASA, like other hazardous industries, has suffered catastrophic losses. Human error will likely never be completely eliminated as a factor in our failures. When you cannot eliminate risk, focus on mitigating the worst consequences and recovering operations. Bolstering processes to emphasize the role of integration and problem solving is key to success. Building an effective Safety Culture bolsters skill-based performance that minimizes risk and encourages successful engagement.

  3. National Practitioner Data Bank; change in user fee and elimination of diskette queries--HRSA. Withdrawal.

    PubMed

    1998-02-13

    National Practitioner Data Bank; Change in User Fee and Elimination of Diskette Queries notice, document 98-2637, pages 5811-5812, Volume 63, Number 23, in the issue of Wednesday, February 4, 1998, was published in error and is withdrawn from publication. The correct version of the notice was published on Thursday, January 29, 1998, Document No. 98-2116, Volume 63, Number 19, page 4460.

  4. The (lack of) effect of dynamic visual noise on the concreteness effect in short-term memory.

    PubMed

    Castellà, Judit; Campoy, Guillermo

    2018-05-17

    It has been suggested that the concreteness effect in short-term memory (STM) is a consequence of concrete words having more distinctive and richer semantic representations. The generation and storage of visual codes in STM could also play a crucial role in the effect, because concrete words are more imaginable than abstract words. If this were the case, the introduction of a visual interference task would be expected to disrupt recall of concrete words. A Dynamic Visual Noise (DVN) display, which has been proven to eliminate the concreteness effect in long-term memory (LTM), was presented during encoding of concrete and abstract words in a STM serial recall task. Results showed a main effect of word type, with more item errors for abstract words, and a main effect of DVN, which impaired global performance due to more order errors, but no interaction, suggesting that DVN did not have any impact on the concreteness effect. These findings are discussed in terms of LTM participation through redintegration processes and in terms of the language-based models of verbal STM.

  5. Error free physically unclonable function with programmed resistive random access memory using reliable resistance states by specific identification-generation method

    NASA Astrophysics Data System (ADS)

    Tseng, Po-Hao; Hsu, Kai-Chieh; Lin, Yu-Yu; Lee, Feng-Min; Lee, Ming-Hsiu; Lung, Hsiang-Lan; Hsieh, Kuang-Yeu; Chung Wang, Keh; Lu, Chih-Yuan

    2018-04-01

    A high performance physically unclonable function (PUF) implemented with WO3 resistive random access memory (ReRAM) is presented in this paper. This robust ReRAM-PUF eliminates the bit-flipping problem at very high temperatures (up to 250 °C) owing to the plentiful read margin between the initial resistance state and the set resistance state. Ten-year retention is also promised at temperatures up to 210 °C. These two stable resistance states enable stable operation in automotive environments from -40 to 125 °C without the need for a temperature-compensation circuit. High PUF uniqueness can be achieved by implementing the proposed identification (ID) generation method: an optimized forming condition moves 50% of the cells to the low-resistance state while the remaining 50% stay in the initial high-resistance state. Inter- and intra-PUF evaluations with unlimited separation of Hamming distance (HD) are successfully demonstrated, even under corner conditions. The number of reproductions was measured to exceed 10^7 with a 0% bit error rate (BER) at read voltages from 0.4 to 0.7 V.

  6. Effects of Artificial Viscosity on the Accuracy of High-reynolds-number Kappa-epsilon Turbulence Model

    NASA Technical Reports Server (NTRS)

    Chitsomboon, Tawit

    1994-01-01

    Wall functions, as used in the typical high-Reynolds-number k-epsilon turbulence model, can be implemented in various ways. A least disruptive method (to the flow solver) is to directly solve for the flow variables at the grid point next to the wall while prescribing the values of k and epsilon. For the centrally-differenced finite-difference scheme employing artificial viscosity (AV) as a stabilizing mechanism, this methodology proved to be totally useless. This is because the AV gives rise to a large error at the wall due to too steep a velocity gradient resulting from the use of a coarse grid, as required by the wall-function methodology. This error can be eliminated simply by extrapolating velocities at the wall, instead of using the physical values of the no-slip velocities (i.e., the zero value). The applicability of the technique used in this paper is demonstrated by solving a flow over a flat plate and comparing the results with those of experiments. It was also observed that AV gives rise to a velocity overshoot (about 1 percent) near the edge of the boundary layer. This small velocity error, however, can yield as much as 10 percent error in the momentum thickness. A method which integrates the boundary layer only up to the edge of the boundary layer (instead of to infinity) was proposed and demonstrated to give better results than the standard method.
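    A quick numeric illustration of the momentum-thickness sensitivity claimed above: the integrand (u/U)(1 - u/U) should vanish outside the boundary layer, but a 1% overshoot makes it slightly negative over the whole far field, and stopping the integration at the boundary-layer edge removes the error. The profile and numbers below are invented purely for illustration.

      import numpy as np

      y = np.linspace(0.0, 10.0, 2001)          # wall-normal coordinate, delta ~ 1
      u = np.tanh(3.0 * y)                      # smooth boundary-layer-like profile
      u_over = np.where(y > 1.0, 1.01, u)       # 1% overshoot outside the layer

      def theta(u_profile, y_max):
          m = y <= y_max
          return np.trapz(u_profile[m] * (1.0 - u_profile[m]), y[m])

      print(theta(u, 10.0))        # clean profile, integrated "to infinity"
      print(theta(u_over, 10.0))   # overshoot poisons the far-field integral
      print(theta(u_over, 1.0))    # proposed fix: stop at the boundary-layer edge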

  7. The Computer Revolution and Physical Chemistry.

    ERIC Educational Resources Information Center

    O'Brien, James F.

    1989-01-01

    Describes laboratory-oriented software programs that are short and time-saving, eliminate computational errors, and are not found in public-domain courseware. Program availability for IBM and Apple microcomputers is included. (RT)

  8. Gradient, contact-free volume transfers minimize compound loss in dose-response experiments.

    PubMed

    Harris, David; Olechno, Joe; Datwani, Sammy; Ellson, Richard

    2010-01-01

    More accurate dose-response curves can be constructed by eliminating aqueous serial dilution of compounds. Traditional serial dilutions that use aqueous diluents can result in errors in dose-response values of up to 4 orders of magnitude for a significant percentage of a compound library. When DMSO is used as the diluent, the errors are reduced but not eliminated. The authors use acoustic drop ejection (ADE) to transfer different volumes of model library compounds, directly creating a concentration gradient series in the receiver assay plate. Sample losses and contamination associated with compound handling are therefore avoided or minimized, particularly in the case of less water-soluble compounds. ADE is particularly well suited for assay miniaturization, but gradient volume dispensing is not limited to miniaturized applications.

  9. Phase correction system for automatic focusing of synthetic aperture radar

    DOEpatents

    Eichel, Paul H.; Ghiglia, Dennis C.; Jakowatz, Jr., Charles V.

    1990-01-01

    A phase gradient autofocus system for use in synthetic aperture imaging accurately compensates for arbitrary phase errors in each imaged frame by locating highlighted areas and determining the phase disturbance, or image spread, associated with each of these highlighted areas. An estimate of the image spread is determined for each highlighted area in a line (in the case of one-dimensional processing) or in a sector (in the case of two-dimensional processing). The phase error is determined using phase gradient processing. The phase error is then removed from the uncorrected image, and the process is performed iteratively to substantially eliminate phase errors which can degrade the image.
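    For orientation, the kernel of a phase-gradient-autofocus pass in its one-dimensional (per-range-line) form can be sketched as follows. Conventions such as the transform direction between image and aperture domains, the window width, and the trend removal vary between implementations, so treat this as a schematic of the idea rather than the patented embodiment.

      import numpy as np

      def pga_iteration(img):
          """One PGA pass over a complex image (rows = range, cols = azimuth)."""
          n_rng, n_az = img.shape
          # 1. circularly shift the brightest scatterer of each range line to center
          g = np.empty_like(img)
          for r in range(n_rng):
              g[r] = np.roll(img[r], n_az // 2 - int(np.argmax(np.abs(img[r]))))
          # 2. window around the center to isolate the scatterer responses
          win = np.zeros(n_az)
          win[n_az // 2 - n_az // 8 : n_az // 2 + n_az // 8] = 1.0
          G = np.fft.fft(g * win, axis=1)   # back to the aperture domain
          # 3. phase-error gradient from the sum of conjugate products, integrated
          dphi = np.angle(np.sum(np.conj(G[:, :-1]) * G[:, 1:], axis=0))
          phi = np.concatenate(([0.0], np.cumsum(dphi)))
          phi -= np.linspace(0.0, phi[-1], n_az)   # drop the linear (shift) trend
          # 4. remove the estimated phase error and return to the image domain
          return np.fft.ifft(np.fft.fft(img, axis=1) * np.exp(-1j * phi), axis=1)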

  10. On low-frequency errors of uniformly modulated filtered white-noise models for ground motions

    USGS Publications Warehouse

    Safak, Erdal; Boore, David M.

    1988-01-01

    Low-frequency errors of a commonly used non-stationary stochastic model (the uniformly modulated filtered white-noise model) for earthquake ground motions are investigated. It is shown both analytically and by numerical simulation that uniformly modulated filtered white-noise-type models systematically overestimate the spectral response for periods longer than the effective duration of the earthquake, because of the built-in low-frequency errors in the model. The errors, which are significant for low-magnitude short-duration earthquakes, can be eliminated by using filtered shot-noise-type models (i.e., white noise modulated by the envelope first and then filtered).
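    The structural difference between the two model classes is small enough to show directly: the uniformly modulated model applies the envelope to already-filtered white noise, while the filtered shot-noise model applies the envelope before filtering, which avoids the spurious low-frequency content. The envelope and filter below are generic stand-ins, not the paper's calibrated ground-motion models.

      import numpy as np
      from scipy.signal import butter, lfilter

      rng = np.random.default_rng(0)
      dt = 0.01
      t = np.arange(0.0, 20.0, dt)
      wn = rng.standard_normal(t.size)                       # white noise
      env = (t / 2.0) * np.exp(1.0 - t / 2.0)                # simple envelope
      b, a = butter(2, [1.0, 10.0], btype="band", fs=1/dt)   # stand-in filter

      modulated_filtered = env * lfilter(b, a, wn)   # envelope applied last
      filtered_shot = lfilter(b, a, env * wn)        # envelope applied first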

  11. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix is derived using visual masking by luminance and contrast techniques and an error-pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
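    Per block, the step this patent revolves around reduces to an elementwise divide-and-round against the quantization matrix: coefficients judged invisible quantize to zero and drop out of the bit stream. The sketch below shows that generic step only, with the matrix left as an input, since its perceptual derivation is the invention itself.

      import numpy as np
      from scipy.fftpack import dct, idct

      def dct2(block):
          return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

      def idct2(block):
          return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

      def quantize_block(block, qmatrix):
          """Quantize each DCT coefficient by its quantization-matrix entry."""
          return np.round(dct2(block.astype(float)) / qmatrix)

      def dequantize_block(quantized, qmatrix):
          return idct2(quantized * qmatrix)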

  12. Application of parameter estimation to highly unstable aircraft

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Murray, J. E.

    1986-01-01

    This paper discusses the application of parameter estimation to highly unstable aircraft. It includes a discussion of the problems in applying the output error method to such aircraft and demonstrates that the filter error method eliminates these problems. The paper shows that the maximum likelihood estimator with no process noise does not reduce to the output error method when the system is unstable. It also proposes and demonstrates an ad hoc method that is similar in form to the filter error method, but applicable to nonlinear problems. Flight data from the X-29 forward-swept-wing demonstrator is used to illustrate the problems and methods discussed.

  13. Application of parameter estimation to highly unstable aircraft

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Murray, J. E.

    1986-01-01

    The application of parameter estimation to highly unstable aircraft is discussed. Included are a discussion of the problems in applying the output error method to such aircraft and a demonstration that the filter error method eliminates these problems. The paper shows that the maximum likelihood estimator with no process noise does not reduce to the output error method when the system is unstable. It also proposes and demonstrates an ad hoc method that is similar in form to the filter error method, but applicable to nonlinear problems. Flight data from the X-29 forward-swept-wing demonstrator is used to illustrate the problems and methods discussed.

  14. Theory and compensation method of axial magnetic error induced by axial magnetic field in a polarization-maintaining fiber optic gyro

    NASA Astrophysics Data System (ADS)

    Zhou, Yanru; Zhao, Yuxiang; Tian, Hui; Zhang, Dengwei; Huang, Tengchao; Miao, Lijun; Shu, Xiaowu; Che, Shuangliang; Liu, Cheng

    2016-12-01

    In an axial magnetic field (AMF) perpendicular to the plane of the fiber coil, a polarization-maintaining fiber optic gyro (PM-FOG) exhibits an axial magnetic error. This error is linearly related to the intensity of the AMF, the radius of the fiber coil, and the light wavelength, and is also influenced by the distribution of fiber twist. Once a PM-FOG is completely manufactured, this error depends only linearly on the AMF. A real-time compensation model is established to eliminate the error, and the experimental results show that the axial magnetic error of the PM-FOG is decreased from 5.83 to 0.09 deg/h in a 12 G AMF, an 18 dB suppression.

  15. Control techniques to improve Space Shuttle solid rocket booster separation

    NASA Technical Reports Server (NTRS)

    Tomlin, D. D.

    1983-01-01

    The present Space Shuttle's control system does not prevent the Orbiter's main engines from being in gimbal positions that are adverse to solid rocket booster separation. By eliminating the attitude error and attitude rate feedback just prior to solid rocket booster separation, the detrimental effects of the Orbiter's main engines can be reduced. In addition, if angular acceleration feedback is applied, the gimbal torques produced by the Orbiter's engines can reduce the detrimental effects of the aerodynamic torques. This paper develops these control techniques and compares the separation capability of the resulting control systems. Currently, with the worst-case initial conditions and each Shuttle system dispersion aligned in the worst direction (which is more conservative than will be experienced in flight), the solid rocket booster has an interference with the Shuttle's external tank of 30 in. Elimination of the attitude error and attitude rate feedback reduces that interference to 19 in. Substitution of angular acceleration feedback reduces the interference to 6 in. The latter two interferences can be eliminated by less conservative analysis techniques, that is, by using a root sum square of the system dispersions.

  16. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix is derived using visual masking by luminance and contrast techniques, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.

  17. Synchronization Design and Error Analysis of Near-Infrared Cameras in Surgical Navigation.

    PubMed

    Cai, Ken; Yang, Rongqian; Chen, Huazhou; Huang, Yizhou; Wen, Xiaoyan; Huang, Wenhua; Ou, Shanxing

    2016-01-01

    The accuracy of optical tracking systems is important to scientists. With the improvements reported in this regard, such systems have been applied to an increasing number of operations. To further enhance the accuracy of these systems and to reduce the effect of synchronization and visual-field errors, this study introduces a field-programmable gate array (FPGA)-based synchronization control method, a method for measuring synchronization errors, and an error distribution map for the field of view. Synchronization control maximizes the parallel processing capability of the FPGA, and synchronization error measurement can effectively detect the errors caused by asynchrony in an optical tracking system. The distribution of positioning errors across the field of view can be examined through the aforementioned error distribution map. Therefore, doctors can perform surgeries in areas with few positioning errors, and the accuracy of optical tracking systems is considerably improved. The system is analyzed and validated in this study through experiments that involve the proposed methods, which can eliminate positioning errors attributed to asynchronous cameras and differing fields of view.

  18. Improvement on post-OPC verification efficiency for contact/via coverage check by final CD biasing of metal lines and considering their location on the metal layout

    NASA Astrophysics Data System (ADS)

    Kim, Youngmi; Choi, Jae-Young; Choi, Kwangseon; Choi, Jung-Hoe; Lee, Sooryong

    2011-04-01

    As IC design complexity keeps increasing, it becomes more and more difficult to ensure pattern transfer after optical proximity correction (OPC), due to the continuous reduction of layout dimensions and the lithographic limit set by the k1 factor. To guarantee imaging fidelity, resolution enhancement technologies (RET) such as off-axis illumination (OAI), different types of phase-shift masks, and OPC techniques have been developed. For model-based OPC, post-OPC verification solutions that cross-check the contour image against the target layout continue to be developed: methods for generating contours and matching them to target structures, and methods for filtering and sorting patterns to eliminate false errors and duplicates. Detecting only real errors while excluding false ones is the most important requirement for an accurate and fast verification process, saving not only review time and engineering resources but also overall wafer process time. In the general case of post-OPC verification for metal-contact/via coverage (CC) checks, the verification tool reports a huge number of errors due to borderless design, making it impractical to review and correct all of them. This can cause OPC engineers to miss real defects and, at the least, delay time to market. In this paper, we studied methods for increasing the efficiency of post-OPC verification, especially for CC checks. For metal layers, the final CD after the etch process shows various CD biases depending on the distance to neighboring patterns, so it is more reasonable to consider the final metal shape when confirming contact/via coverage. By optimizing the biasing rule for different pitches and shapes of metal lines, we obtained more accurate and efficient verification results and decreased the review time needed to find real errors. A suggestion for increasing the efficiency of the OPC verification process by applying a simple biasing rule to the metal layout, instead of applying an etch model, is presented.

  19. Prevention of gross setup errors in radiotherapy with an efficient automatic patient safety system.

    PubMed

    Yan, Guanghua; Mittauer, Kathryn; Huang, Yin; Lu, Bo; Liu, Chihray; Li, Jonathan G

    2013-11-04

    Treatment of the wrong body part due to incorrect setup is among the leading types of errors in radiotherapy. The purpose of this paper is to report an efficient automatic patient safety system (PSS) to prevent gross setup errors. The system consists of a pair of charge-coupled device (CCD) cameras mounted in the treatment room, a single infrared reflective marker (IRRM) affixed to the patient or immobilization device, and a set of in-house developed software. Patients are CT scanned with a CT BB placed on their surface close to the intended treatment site. The coordinates of the CT BB relative to the treatment isocenter are used as the reference for tracking. The CT BB is replaced with an IRRM before treatment starts. The PSS evaluates setup accuracy by comparing the real-time IRRM position with the reference position. To automate the system workflow, the PSS synchronizes with the record-and-verify (R&V) system in real time and automatically loads the reference data for the patient under treatment. Special IRRMs, which can permanently stick to the patient's face mask or body mold throughout the course of treatment, were designed to minimize the therapist's workload. The accuracy of the system was examined on an anthropomorphic phantom with a designed end-to-end test. Its performance was also evaluated on head-and-neck as well as abdominal-pelvic patients using cone-beam CT (CBCT) as the standard. The PSS achieved a seamless clinical workflow by synchronizing with the R&V system. By permanently mounting specially designed IRRMs on patient immobilization devices, therapist intervention is eliminated or minimized. Overall results showed that the PSS has sufficient accuracy to catch gross setup errors greater than 1 cm in real time. An efficient automatic PSS with sufficient accuracy has been developed to prevent gross setup errors in radiotherapy. The system can be applied to all treatment sites for independent positioning verification. It can be an ideal complement to complex image-guidance systems due to its advantages of continuous tracking ability, zero radiation dose, and a fully automated clinical workflow.

  20. Instrumental variables vs. grouping approach for reducing bias due to measurement error.

    PubMed

    Batistatou, Evridiki; McNamee, Roseanne

    2008-01-01

    Attenuation of the exposure-response relationship due to exposure measurement error is often encountered in epidemiology. Given that error cannot be totally eliminated, bias correction methods of analysis are needed. Many methods require more than one exposure measurement per person to be made, but the 'group mean OLS method,' in which subjects are grouped into several a priori defined groups followed by ordinary least squares (OLS) regression on the group means, can be applied with one measurement. An alternative approach is to use an instrumental variable (IV) method in which both the single error-prone measure and an IV are used in IV analysis. In this paper we show that the 'group mean OLS' estimator is equal to an IV estimator with the group mean used as IV, but that the variance estimators for the two methods are different. We derive a simple expression for the bias in the common estimator, which is a simple function of group size, reliability and contrast of exposure between groups, and show that the bias can be very small when group size is large. We compare this method with a new proposal (the group mean ranking method), also applicable with a single exposure measurement, in which the IV is the rank of the group means. When there are two independent exposure measurements per subject, we propose a new IV method (EVROS IV) and compare it with Carroll and Stefanski's (CS IV) proposal in which the second measure is used as an IV; the new IV estimator combines aspects of the 'group mean' and 'CS' strategies. All methods are evaluated in terms of bias, precision and root mean square error via simulations and a dataset from occupational epidemiology. The 'group mean ranking method' does not offer much improvement over the 'group mean method.' Compared with the 'CS' method, the 'EVROS' method is less affected by low reliability of exposure. We conclude that the group IV methods we propose may provide a useful way to handle mismeasured exposures in epidemiology with or without replicate measurements. Our finding may also have implications for the use of aggregate variables in epidemiology to control for unmeasured confounding.

  1. Rapid Ice Loss at Vatnajokull,Iceland Since Late 1990s Constrained by Synthetic Aperture Radar Interferometry

    NASA Astrophysics Data System (ADS)

    Zhao, W.; Amelung, F.; Dixon, T. H.; Wdowinski, S.

    2012-12-01

    A synthetic aperture radar interferometry (InSAR) time series is applied over Vatnajokull, Iceland, using 15 years of ERS data. Ice loss at Vatnajokull has accelerated since the late 1990s, especially after the turn of the 21st century. A clear uplift signal due to ice mass loss is detected. The rebound signal is generally linear and increases slightly after 2000. The relative annual velocity (with GPS station 7485 as reference) is about 12 mm/yr at the ice cap edge, which matches previous studies using GPS. The standard deviation compared to 11 GPS stations in this area is about 2 mm/yr. A relative-value modeling method ignoring the effect of viscous flow is chosen, assuming an elastic half-space earth. The final ice-loss estimate of 83 cm/yr matches the climatology model constrained by ground observations. Small Baseline Subsets (SBAS) analysis is applied for the time series. Orbit error, coupled with the long-wavelength phase trend due to horizontal plate motion, is removed based on a second-order polynomial model. For simplicity, we do not consider atmospheric delay in this area because there is no complex topography, and small-scale turbulence averages out well in the long-term mean when calculating the annual mean velocity. Some unwrapping error still exists because of low coherence. Other uncertainties stem from the basic assumption about the ice-loss pattern and the spatial variation of the elastic parameters. This is the first time InSAR time series have been applied to an ice mass balance study with detailed error and uncertainty analysis. The success of this application establishes InSAR as an option for mass balance studies, and it is also important for validating different ice-loss estimation techniques.

  2. Quantum Devices Bonded Beneath a Superconducting Shield: Part 2

    NASA Astrophysics Data System (ADS)

    McRae, Corey Rae; Abdallah, Adel; Bejanin, Jeremy; Earnest, Carolyn; McConkey, Thomas; Pagel, Zachary; Mariantoni, Matteo

    The next-generation quantum computer will rely on physical quantum bits (qubits) organized into arrays to form error-robust logical qubits. In the superconducting quantum circuit implementation, this architecture will require the use of larger and larger chip sizes. In order for on-chip superconducting quantum computers to be scalable, various issues found in large chips must be addressed, including the suppression of box modes (due to the sample holder) and the suppression of slot modes (due to fractured ground planes). By bonding a metallized shield layer over a superconducting circuit using thin-film indium as a bonding agent, we have demonstrated proof of concept of an extensible circuit architecture that holds the key to the suppression of spurious modes. Microwave characterization of shielded transmission lines and measurement of superconducting resonators were compared to identical unshielded devices. The elimination of box modes was investigated, as well as bond characteristics including bond homogeneity and the presence of a superconducting connection.

  3. Application of image processing to calculate the number of fish seeds using raspberry-pi

    NASA Astrophysics Data System (ADS)

    Rahmadiansah, A.; Kusumawardhani, A.; Duanto, F. N.; Qoonita, F.

    2018-03-01

    Many fish cultivators in Indonesia suffer losses because the number of fish seeds bought and sold does not match the agreed amount. These losses arise because fish seeds are still counted manually. To overcome this problem, this study designed an automatic, real-time fish-seed counting system based on image processing on a Raspberry Pi. Image processing was used because it can count moving objects and suppress noise. The image processing method used to count moving objects is the virtual loop detector (virtual detector) method, with a "double difference image" approach. The "double difference" approach uses information from the previous frame and the next frame to estimate the shape and position of the object. Using this method and approach, the results obtained were quite good, with an average error of 1.0% for 300 individuals in a test with a virtual detector 96 pixels wide and a test plane sloped at 1 degree.
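    The "double difference" rule itself is compact: a pixel counts as moving only if it changed from the previous frame and changes again into the next frame, which suppresses the ghosting a single frame difference leaves behind. The threshold values and the region-based counting rule below are illustrative assumptions, not the authors' exact parameters.

      import numpy as np

      def double_difference(prev, curr, nxt, thr=25):
          """Binary motion mask from three consecutive grayscale frames."""
          d1 = np.abs(curr.astype(int) - prev.astype(int)) > thr
          d2 = np.abs(nxt.astype(int) - curr.astype(int)) > thr
          return d1 & d2

      def detector_active(mask, x0, x1, y0, y1, min_pixels=20):
          """A virtual detector region fires when enough moving pixels fall in it."""
          return mask[y0:y1, x0:x1].sum() >= min_pixels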

  4. Bias in the Wagner-Nelson estimate of the fraction of drug absorbed.

    PubMed

    Wang, Yibin; Nedelman, Jerry

    2002-04-01

    To examine and quantify bias in the Wagner-Nelson estimate of the fraction of drug absorbed resulting from the estimation error of the elimination rate constant (k), measurement error of the drug concentration, and the truncation error in the area under the curve (AUC). Bias in the Wagner-Nelson estimate was derived as a function of post-dosing time (t), k, the ratio of the absorption rate constant to k (r), and the coefficient of variation for estimates of k (CVk) or for the observed concentration (CVc), by assuming a one-compartment model and using an independent estimate of k. The derived functions were used for evaluating the bias with r = 0.5, 3, or 6; k = 0.1 or 0.2; CVk = 0.2 or 0.4; and CVc = 0.2 or 0.4; for t = 0 to 30 or 60. Estimation error of k resulted in an upward bias in the Wagner-Nelson estimate that could lead to the estimate of the fraction absorbed being greater than unity. The bias resulting from the estimation error of k inflates the fraction-of-absorption-vs.-time profiles mainly in the early post-dosing period. The magnitude of the bias in the Wagner-Nelson estimate resulting from estimation error of k was mainly determined by CVk. The bias in the Wagner-Nelson estimate resulting from estimation error in k can be dramatically reduced by use of the mean of several independent estimates of k, as in studies for development of an in vivo-in vitro correlation. The truncation error in the AUC can introduce a negative bias in the Wagner-Nelson estimate. This can partially offset the bias resulting from estimation error of k in the early post-dosing period. Measurement error of concentration does not introduce bias in the Wagner-Nelson estimate. Estimation error of k results in an upward bias in the Wagner-Nelson estimate, mainly in the early drug absorption phase. The truncation error in AUC can result in a downward bias, which may partially offset the upward bias due to estimation error of k in the early absorption phase. Measurement error of concentration does not introduce bias. The joint effect of estimation error of k and truncation error in AUC can result in a non-monotonic fraction-of-drug-absorbed-vs.-time profile. However, only estimation error of k can lead to a Wagner-Nelson estimate of the fraction of drug absorbed greater than unity.
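    For reference, the Wagner-Nelson estimate under a one-compartment model is F(t) = (C(t) + k AUC(0,t)) / (k AUC(0,inf)), which lets one probe numerically how errors in k propagate into F(t), the bias the paper quantifies analytically. The trapezoidal AUC and the C_last/k tail extrapolation in the sketch below are standard choices assumed here, not necessarily the paper's.

      import numpy as np

      def wagner_nelson(t, C, k):
          """Fraction of drug absorbed vs. time, one-compartment Wagner-Nelson."""
          auc_t = np.concatenate(
              ([0.0], np.cumsum(np.diff(t) * (C[1:] + C[:-1]) / 2.0)))
          auc_inf = auc_t[-1] + C[-1] / k   # tail extrapolation of the AUC
          return (C + k * auc_t) / (k * auc_inf)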

  5. On the path to genetic novelties: insights from programmed DNA elimination and RNA splicing.

    PubMed

    Catania, Francesco; Schmitz, Jürgen

    2015-01-01

    Understanding how genetic novelties arise is a central goal of evolutionary biology. To this end, programmed DNA elimination and RNA splicing deserve special consideration. While programmed DNA elimination reshapes genomes by eliminating chromatin during organismal development, RNA splicing rearranges genetic messages by removing intronic regions during transcription. Small RNAs help to mediate this class of sequence reorganization, which is not error-free. It is this imperfection that makes programmed DNA elimination and RNA splicing excellent candidates for generating evolutionary novelties. Leveraging a number of these two processes' mechanistic and evolutionary properties, which have been uncovered over the past years, we present recently proposed models and empirical evidence for how splicing can shape the structure of protein-coding genes in eukaryotes. We also chronicle a number of intriguing similarities between the processes of programmed DNA elimination and RNA splicing, and highlight the role that the variation in the population-genetic environment may play in shaping their target sequences. © 2015 Wiley Periodicals, Inc.

  6. Methods and Apparatus for Reducing Multipath Signal Error Using Deconvolution

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor); Lau, Kenneth H. (Inventor)

    1999-01-01

    A deconvolution approach to adaptive signal processing has been applied to the elimination of signal multipath errors as embodied in one preferred embodiment in a global positioning system receiver. The method and receiver of the present invention estimates then compensates for multipath effects in a comprehensive manner. Application of deconvolution, along with other adaptive identification and estimation techniques, results in completely novel GPS (Global Positioning System) receiver architecture.

  7. An oscillation-free flow solver based on flux reconstruction

    NASA Astrophysics Data System (ADS)

    Aguerre, Horacio J.; Pairetti, Cesar I.; Venier, Cesar M.; Márquez Damián, Santiago; Nigro, Norberto M.

    2018-07-01

    In this paper, a segregated algorithm is proposed to suppress high-frequency oscillations in the velocity field for incompressible flows. In this context, a new velocity formula based on a reconstruction of face fluxes is defined, eliminating high-frequency errors. In analogy to the Rhie-Chow interpolation, this approach is equivalent to including a flux-based pressure gradient with a velocity diffusion in the momentum equation. In order to guarantee second-order accuracy of the numerical solver, a set of conditions is defined for the reconstruction operator. To arrive at the final formulation, a review of the state of the art in velocity reconstruction procedures is presented, comparing them through an error analysis. A new operator is then obtained by means of a flux-difference minimization satisfying the required spatial accuracy. The accuracy of the new algorithm is analyzed by performing mesh convergence studies for unsteady Navier-Stokes problems with analytical solutions. The stabilization properties of the solver are then tested on a problem where spurious numerical oscillations arise in the velocity field. The results show a remarkable performance of the proposed technique, eliminating high-frequency errors without losing accuracy.

  8. Reward system and temporal pole contributions to affective evaluation during a first person shooter video game

    PubMed Central

    2011-01-01

    Background Violent content in video games evokes many concerns but there is little research concerning its rewarding aspects. It was demonstrated that playing a video game leads to striatal dopamine release. It is unclear, however, which aspects of the game cause this reward system activation and whether violent content contributes to it. We combined functional Magnetic Resonance Imaging (fMRI) with individual affect measures to address the neuronal correlates of violence in a video game. Results Thirteen male German volunteers played a first-person shooter game (Tactical Ops: Assault on Terror) during fMRI measurement. We defined success as eliminating opponents, and failure as being eliminated themselves. Affect was measured directly before and after game play using the Positive and Negative Affect Schedule (PANAS). Failure and success events evoked increased activity in visual cortex, but only failure decreased activity in orbitofrontal cortex and caudate nucleus. A negative correlation between negative affect and responses to failure was evident in the right temporal pole (rTP). Conclusions The deactivation of the caudate nucleus during failure is in accordance with its role in reward-prediction error: it occurred whenever subjects missed an expected reward (being eliminated rather than eliminating the opponent). We found no indication that violence events were directly rewarding for the players. We addressed subjective evaluations of affect change due to gameplay to study the reward system. Subjects reporting greater negative affect after playing the game had less rTP activity associated with failure. The rTP may therefore be involved in evaluating the failure events in a social context, to regulate the players' mood. PMID:21749711

  9. Risk behaviours for organism transmission in health care delivery-A two month unstructured observational study.

    PubMed

    Lindberg, Maria; Lindberg, Magnus; Skytt, Bernice

    2017-05-01

    Errors in infection control practices put patient safety at risk. The probability of errors can increase when care practices become more multifaceted. It is therefore fundamental to track risk behaviours and potential errors in various care situations. The aim of this study was to describe care situations involving risk behaviours for organism transmission that could lead to subsequent healthcare-associated infections. Unstructured nonparticipant observations were performed at three medical wards. Healthcare personnel (n=27) were shadowed, for 39 h in total, on randomly selected weekdays between 7:30 am and 12 noon. Content analysis was used to inductively categorize activities into tasks and, based on their character, into groups. Risk behaviours for organism transmission were deductively classified into types of errors. A multiple response crosstabs procedure was used to visualize the number and proportion of errors in tasks. One-way ANOVA with Bonferroni post hoc test was used to determine differences among the three groups of activities. The qualitative findings give an understanding that risk behaviours for organism transmission go beyond the five moments of hand hygiene and also include the handling and placement of materials and equipment. The tasks with the highest percentage of errors were 'personal hygiene', 'elimination' and 'dressing/wound care'. The most common types of errors in all identified tasks were 'hand disinfection', 'glove usage', and 'placement of materials'. Significantly more errors (p<0.0001) were observed the more multifaceted (single, combined or interrupted) the activity was. The numbers and types of errors, as well as the character of activities performed in care situations described in this study, confirm the need to improve current infection control practices. It is fundamental that healthcare personnel practice good hand hygiene; however, effective preventive hygiene is complex in healthcare activities due to the multifaceted care situations, especially when activities are interrupted. A deeper understanding of infection control practices that goes beyond the sense of security conferred by hand disinfection and use of gloves is needed, as materials and surfaces in the care environment might be contaminated and thus pose a risk for organism transmission. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Comparative study of two intraoral laser techniques for soft tissue surgery

    NASA Astrophysics Data System (ADS)

    Swick, Michael D.; Richter, Alexander

    2003-06-01

    Historically, 810 nm has been the predominant wavelength for diode lasers used in intraoral surgery, owing to the large number of such units in the marketplace. The techniques used intraorally with the 810 nm diode have been relatively similar in most cases: low powers, 1 or 2 W, in continuous wave mode. The purpose of this study is to compare the thermal damage produced by the technique of continuous wave at low powers with that of higher powers in a pulsed mode with water coolant, at the 980 nm diode wavelength. During the study the laser fiber was held immobile, eliminating surgical manipulation as a source of error. The resultant histology shows that, while the volume of vaporization dramatically increases, giving the clinician the ability to reduce the time of destructive conduction of excess heat for a given procedure, the amount of coagulation actually decreases in width and depth. As an added benefit, charring, which has been implicated in delayed healing, is virtually eliminated. This evidence, coupled with excellent clinical results, lends validity to the use of pulsed higher powers and water coolant for the 980 nm diode laser.

  11. Frozen-Orbital and Downfolding Calculations with Auxiliary-Field Quantum Monte Carlo.

    PubMed

    Purwanto, Wirawan; Zhang, Shiwei; Krakauer, Henry

    2013-11-12

    We describe the implementation of the frozen-orbital and downfolding approximations in the auxiliary-field quantum Monte Carlo (AFQMC) method. These approaches can provide significant computational savings, compared to fully correlating all of the electrons. While the many-body wave function is never explicit in AFQMC, its random walkers are Slater determinants, whose orbitals may be expressed in terms of any one-particle orbital basis. It is therefore straightforward to partition the full N-particle Hilbert space into active and inactive parts to implement the frozen-orbital method. In the frozen-core approximation, for example, the core electrons can be eliminated in the correlated part of the calculations, greatly increasing the computational efficiency, especially for heavy atoms. Scalar relativistic effects are easily included using the Douglas-Kroll-Hess theory. Using this method, we obtain a way to effectively eliminate the error due to single-projector, norm-conserving pseudopotentials in AFQMC. We also illustrate a generalization of the frozen-orbital approach that downfolds high-energy basis states to a physically relevant low-energy sector, which allows a systematic approach to produce realistic model Hamiltonians to further increase efficiency for extended systems.
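
    As a hedged illustration of the downfolding idea (projection onto a low-energy sector), the toy below builds a random one-particle Hamiltonian and projects it onto its lowest eigenvectors; the real AFQMC machinery is many-body and far richer, and every number here is invented.

    ```python
    import numpy as np

    # Toy downfolding: project a one-particle Hamiltonian onto its lowest
    # eigenvectors, giving a smaller "active space" Hamiltonian.
    rng = np.random.default_rng(0)
    H = rng.standard_normal((12, 12))
    H = 0.5 * (H + H.T)                       # Hermitian model Hamiltonian

    evals, evecs = np.linalg.eigh(H)
    V = evecs[:, :4]                          # keep the 4 lowest-energy orbitals
    H_eff = V.T @ H @ V                       # effective low-energy Hamiltonian

    print(np.allclose(np.linalg.eigvalsh(H_eff), evals[:4]))   # True
    ```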

  12. Patient identification error among prostate needle core biopsy specimens--are we ready for a DNA time-out?

    PubMed

    Suba, Eric J; Pfeifer, John D; Raab, Stephen S

    2007-10-01

    Patient identification errors in surgical pathology often involve switches of prostate or breast needle core biopsy specimens among patients. We assessed strategies for decreasing the occurrence of these uncommon and yet potentially catastrophic events. Root cause analyses were performed following 3 cases of patient identification error involving prostate needle core biopsy specimens. Patient identification errors in surgical pathology result from slips and lapses of automatic human action that may occur at numerous steps during pre-laboratory, laboratory and post-laboratory work flow processes. Patient identification errors among prostate needle biopsies may be difficult to entirely prevent through the optimization of work flow processes. A DNA time-out, whereby DNA polymorphic microsatellite analysis is used to confirm patient identification before radiation therapy or radical surgery, may eliminate patient identification errors among needle biopsies.

  13. AC orbit bump method of local impedance measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smaluk, Victor; Yang, Xi; Blednykh, Alexei

    A fast and precise technique of local impedance measurement has been developed and tested at NSLS-II. This technique is based on in-phase sine-wave (AC) excitation of four fast correctors adjacent to the vacuum chamber section whose impedance is measured. The beam position is measured using synchronous detection. Use of the narrow-band sine-wave signal allows us to significantly improve the accuracy of the orbit bump method. Beam excitation by fast correctors eliminates the systematic error caused by hysteresis effects. The systematic error caused by orbit drift is also eliminated, because the measured signal is not affected by the orbit motion outside the excitation frequency range. In this article, the measurement technique is described and the result of a proof-of-principle experiment carried out at NSLS-II is presented.
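
    A minimal sketch of the synchronous-detection step, under assumed numbers: a small beam response at the excitation frequency is buried in drift and noise, and mixing with quadrature references followed by averaging recovers its amplitude and phase while rejecting the drift outside the excitation frequency.

    ```python
    import numpy as np

    fs, f_exc, T = 10_000.0, 37.0, 2.0        # sample rate, excitation freq, duration
    t = np.arange(0, T, 1 / fs)

    # Simulated position reading: small response at f_exc buried in drift and noise.
    response = 2e-3 * np.sin(2 * np.pi * f_exc * t + 0.3)
    drift = 5e-3 * t                          # slow orbit drift
    noise = 1e-3 * np.random.default_rng(1).standard_normal(t.size)
    signal = response + drift + noise

    # Synchronous detection: mix with quadrature references, then average (low-pass).
    i_comp = 2 * np.mean(signal * np.sin(2 * np.pi * f_exc * t))
    q_comp = 2 * np.mean(signal * np.cos(2 * np.pi * f_exc * t))
    amplitude = np.hypot(i_comp, q_comp)
    phase = np.arctan2(q_comp, i_comp)

    print(f"amplitude = {amplitude:.4f}, phase = {phase:.3f} rad")  # ~0.002, ~0.3
    ```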

  14. Practical training framework for fitting a function and its derivatives.

    PubMed

    Pukrittayakamee, Arjpolson; Hagan, Martin; Raff, Lionel; Bukkapatnam, Satish T S; Komanduri, Ranga

    2011-06-01

    This paper describes a practical framework for using multilayer feedforward neural networks to simultaneously fit both a function and its first derivatives. This framework involves two steps. The first step is to train the network to optimize a performance index, which includes both the error in fitting the function and the error in fitting the derivatives. The second step is to prune the network by removing neurons that cause overfitting and then to retrain it. This paper describes two novel types of overfitting that are only observed when simultaneously fitting both a function and its first derivatives. A new pruning algorithm is proposed to eliminate these types of overfitting. Experimental results show that the pruning algorithm successfully eliminates the overfitting and produces the smoothest responses and the best generalization among all the training algorithms that we have tested.
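
    The paper trains neural networks; as a minimal stand-in for the combined performance index, the sketch below fits a polynomial by least squares to samples of both a function and its first derivative, weighting the two error terms with a factor rho. All values are illustrative.

    ```python
    import numpy as np

    # Target function and derivative, sampled on a grid.
    x = np.linspace(-1.0, 1.0, 30)
    f, df = np.sin(2 * x), 2 * np.cos(2 * x)

    # Design matrices for a degree-7 polynomial and its derivative.
    deg = 7
    A  = np.vander(x, deg + 1, increasing=True)                  # columns x^k
    dA = np.hstack([np.zeros((x.size, 1)),
                    A[:, :-1] * np.arange(1, deg + 1)])          # d/dx x^k = k x^(k-1)

    # Combined performance index: ||A c - f||^2 + rho * ||dA c - df||^2.
    rho = 1.0
    coef, *_ = np.linalg.lstsq(np.vstack([A, np.sqrt(rho) * dA]),
                               np.concatenate([f, np.sqrt(rho) * df]),
                               rcond=None)

    print(np.max(np.abs(A @ coef - f)), np.max(np.abs(dA @ coef - df)))
    ```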

  15. AC orbit bump method of local impedance measurement

    DOE PAGES

    Smaluk, Victor; Yang, Xi; Blednykh, Alexei; ...

    2017-08-04

    A fast and precise technique of local impedance measurement has been developed and tested at NSLS-II. This technique is based on in-phase sine-wave (AC) excitation of four fast correctors adjacent to the vacuum chamber section whose impedance is measured. The beam position is measured using synchronous detection. Use of the narrow-band sine-wave signal allows us to significantly improve the accuracy of the orbit bump method. Beam excitation by fast correctors eliminates the systematic error caused by hysteresis effects. The systematic error caused by orbit drift is also eliminated, because the measured signal is not affected by the orbit motion outside the excitation frequency range. In this article, the measurement technique is described and the result of a proof-of-principle experiment carried out at NSLS-II is presented.

  16. Modified ADALINE algorithm for harmonic estimation and selective harmonic elimination in inverters

    NASA Astrophysics Data System (ADS)

    Vasumathi, B.; Moorthi, S.

    2011-11-01

    In digital signal processing, algorithms are very well developed for the estimation of harmonic components. In power electronic applications, an objective like fast system response is of primary importance. An effective method for the estimation of instantaneous harmonic components, along with a conventional harmonic elimination technique, is presented in this article. The primary function is to eliminate undesirable higher harmonic components from the selected signal (current or voltage), and it requires only the knowledge of the frequency of the component to be eliminated. A signal processing technique using a modified ADALINE algorithm has been proposed for harmonic estimation. The proposed method remains effective as it converges to a minimum error and yields a finer estimation. A conventional control based on pulse width modulation for selective harmonic elimination is used to eliminate harmonic components after their estimation. This method can be applied to a wide range of equipment. The validity of the proposed method to estimate and eliminate voltage harmonics is proved with a dc/ac inverter as a simulation example. The results are then compared with the existing ADALINE algorithm to illustrate its effectiveness.
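
    A hedged sketch of ADALINE-style harmonic estimation, with invented signal parameters: the weights on a sin/cos basis are updated each sample by the Widrow-Hoff (LMS) rule, and the converged weights give the harmonic amplitudes.

    ```python
    import numpy as np

    f0, fs = 50.0, 5000.0                 # fundamental and sample rate (assumed)
    harmonics = [1, 3, 5, 7]              # orders to track
    t = np.arange(0, 0.2, 1 / fs)

    # Test signal: fundamental plus 3rd and 5th harmonics.
    y = (10 * np.sin(2 * np.pi * f0 * t)
         + 2.0 * np.sin(2 * np.pi * 3 * f0 * t + 0.5)
         + 1.0 * np.sin(2 * np.pi * 5 * f0 * t))

    # ADALINE: weights on a sin/cos basis, LMS update each sample.
    w, mu = np.zeros(2 * len(harmonics)), 0.02
    for k in range(t.size):
        x = np.concatenate([[np.sin(2 * np.pi * h * f0 * t[k]) for h in harmonics],
                            [np.cos(2 * np.pi * h * f0 * t[k]) for h in harmonics]])
        e = y[k] - w @ x                  # estimation error
        w += 2 * mu * e * x               # Widrow-Hoff rule

    amps = np.hypot(w[:len(harmonics)], w[len(harmonics):])
    print(dict(zip(harmonics, np.round(amps, 2))))   # ~ {1: 10, 3: 2, 5: 1, 7: 0}
    ```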

  17. Precision improvement of frequency-modulated continuous-wave laser ranging system with two auxiliary interferometers

    NASA Astrophysics Data System (ADS)

    Shi, Guang; Wang, Wen; Zhang, Fumin

    2018-03-01

    The measurement precision of frequency-modulated continuous-wave (FMCW) laser distance measurement should be proportional to the scanning range of the tunable laser. However, the commercial external cavity diode laser (ECDL) is not an ideal tunable laser source in practical applications. Due to the unavoidable mode hopping and scanning nonlinearity of the ECDL, the measurement precision of FMCW laser distance measurements can be substantially affected. Therefore, an FMCW laser ranging system with two auxiliary interferometers is proposed in this paper. Moreover, to eliminate the effects of the ECDL, the frequency-sampling method and a mode-hopping influence suppression method are employed. Compared with a fringe-counting interferometer, this FMCW laser ranging system has a measuring error of ±20 μm at a distance of 5.8 m.
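
    A simplified, idealized sketch of the frequency-sampling idea with two interferometers (mode hopping ignored, the auxiliary phase taken as known analytically rather than extracted from the fringe signal, and all parameters invented): resampling the measurement interferogram at equal steps of the auxiliary phase removes the sweep nonlinearity, after which a Fourier peak yields the distance.

    ```python
    import numpy as np

    c = 3e8
    t = np.linspace(0, 1e-3, 20000)
    u = t / t[-1]
    nu = 6.0e10 * u + 1.0e10 * u**2               # nonlinear 70 GHz sweep

    L_aux, L_meas = 1.0, 5.8                      # interferometer path differences (m)
    phi_aux = 2 * np.pi * nu * L_aux / c          # auxiliary fringe phase ~ nu
    meas = np.cos(2 * np.pi * nu * L_meas / c)    # measurement interferogram

    # Frequency-sampling: resample the measurement signal at equal steps of the
    # auxiliary phase (i.e., equal optical-frequency steps), which removes the
    # sweep nonlinearity.
    phi_grid = np.linspace(phi_aux[0], phi_aux[-1], t.size)
    meas_lin = np.interp(phi_grid, phi_aux, meas)

    # After resampling the beat is a pure sinusoid; its peak bin gives L_meas.
    spec = np.abs(np.fft.rfft(meas_lin * np.hanning(t.size)))
    k = np.argmax(spec[1:]) + 1
    print(k * 2 * np.pi / (phi_grid[-1] - phi_grid[0]) * L_aux)   # ~ 5.8 m
    ```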

  18. Effect of black point on accuracy of LCD displays colorimetric characterization

    NASA Astrophysics Data System (ADS)

    Li, Tong; Xie, Kai; He, Nannan; Ye, Yushan

    2018-03-01

    The black point is the point at which the digital drive value of each single RGB channel is 0. Due to light leakage in liquid-crystal displays (LCDs), the luminance of the black point is not 0; this phenomenon introduces errors into the colorimetric characterization of LCDs, with low-luminance drive values affected most strongly. This paper describes the characterization accuracy of the polynomial model method and the effect of the black point on that accuracy, and reports the resulting color-difference accuracy. When the black point is taken into account in the characterization equation, the maximum color difference is 3.246, which is 2.36 lower than when the black point is ignored. The experimental results show that the accuracy of LCD colorimetric characterization can be improved if the effect of the black point is properly eliminated.

  19. Low power arcjet system spacecraft impacts

    NASA Technical Reports Server (NTRS)

    Pencil, Eric J.; Sarmiento, Charles J.; Lichtin, D. A.; Palchefsky, J. W.; Bogorad, A. L.

    1993-01-01

    Potential plume contamination of spacecraft surfaces was investigated by positioning spacecraft material samples relative to an arcjet thruster. Samples in the simulated solar array region were exposed to the cold gas arcjet plume for 40 hrs to address concerns about contamination by backstreaming diffusion pump oil. Except for one sample, no significant changes were measured in absorptance and emittance within experimental error. Concerns about surface property degradation due to electrostatic discharges led to the investigation of the discharge phenomenon of charged samples during arcjet ignition. Short duration exposure of charged samples demonstrated that potential differences are consistently and completely eliminated within the first second of exposure to a weakly ionized plume. The spark discharge mechanism was not the discharge phenomenon. The results suggest that the arcjet could act as a charge control device on spacecraft.

  20. Success and High Predictability of Intraorally Welded Titanium Bar in the Immediate Loading Implants

    PubMed Central

    Fogli, Vaniel; Camerini, Michele; Carinci, Francesco

    2014-01-01

    Implant failure may be caused by micromotion and stress exerted on implants during the phase of bone healing. This is especially true for implants placed in atrophic ridges. Primary stabilization and fixation of implants are therefore important goals that can also allow immediate loading and oral rehabilitation on the same day as surgery. These goals may be achieved with the technique of welding titanium bars onto implant abutments. In fact, the procedure can be performed directly in the mouth, eliminating the possibility of errors or distortions due to impressions. This paper describes a case report and the most recent data about the long-term success and high predictability of intraorally welded titanium bars in immediate loading implants. PMID:24963419

  1. Effects of microgravity on tissue perfusion and the efficacy of astronaut denitrogenation for EVA

    NASA Technical Reports Server (NTRS)

    Gerth, Wayne A.; Vann, Richard D.; Leatherman, Nelson E.; Feezor, Michael D.

    1987-01-01

    A potentially flight-applicable, breath-by-breath method for measuring N2 elimination from human subjects breathing 100 percent O2 for 2-3 hr periods has been developed. The present report describes this development with particular emphasis on required methodological accuracy and its achievement in view of certain properties of mass spectrometer performance. A method for the breath-by-breath analysis of errors in measured N2 elimination profiles is also described.

  2. Prediction of final error level in learning and repetitive control

    NASA Astrophysics Data System (ADS)

    Levoci, Peter A.

    Repetitive control (RC) is a field that creates controllers to eliminate the effects of periodic disturbances on a feedback control system. The methods have applications in spacecraft problems, to isolate fine pointing equipment from periodic vibration disturbances such as slight imbalances in momentum wheels or cryogenic pumps. A closely related field of control design is iterative learning control (ILC) which aims to eliminate tracking error in a task that repeats, each time starting from the same initial condition. Experiments done on a robot at NASA Langley Research Center showed that the final error levels produced by different candidate repetitive and learning controllers can be very different, even when each controller is analytically proven to converge to zero error in the deterministic case. Real world plant and measurement noise and quantization noise (from analog to digital and digital to analog converters) in these control methods are acted on as if they were error sources that will repeat and should be cancelled, which implies that the algorithms amplify such errors. Methods are developed that predict the final error levels of general first order ILC, of higher order ILC including current cycle learning, and of general RC, in the presence of noise, using frequency response methods. The method involves much less computation than the corresponding time domain approach that involves large matrices. The time domain approach was previously developed for ILC and handles a certain class of ILC methods. Here methods are created to include zero-phase filtering that is very important in creating practical designs. Also, time domain methods are developed for higher order ILC and for repetitive control. Since RC and ILC must be implemented digitally, all of these methods predict final error levels at the sample times. It is shown here that RC can easily converge to small error levels between sample times, but that ILC in most applications will have large and diverging intersample error if in fact zero error is reached at the sample times. This is independent of the ILC law used, and is purely a property of the physical system. Methods are developed to address this issue.
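
    As a concrete toy of the first-order ILC update analyzed above (plant, gain, and trajectory all invented), the sketch below applies u_{j+1}[k] = u_j[k] + gamma*e_j[k+1] to a first-order lag and drives the sample-time tracking error toward zero over repeated trials.

    ```python
    import numpy as np

    # Toy repeated task: first-order lag plant, sampled; values are illustrative.
    def plant(u):
        y = np.zeros(u.size)
        for k in range(u.size - 1):
            y[k + 1] = 0.9 * y[k] + 0.1 * u[k]
        return y

    N = 50
    y_des = np.sin(np.linspace(0, np.pi, N))     # desired output, same every trial

    # First-order ILC law: u_{j+1}[k] = u_j[k] + gamma * e_j[k+1].
    u, gamma = np.zeros(N), 2.0                  # |1 - gamma*0.1| < 1 => converges
    for trial in range(40):
        e = y_des - plant(u)
        u[:-1] += gamma * e[1:]

    print(np.max(np.abs(y_des - plant(u))))      # tracking error at sample times
    ```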

  3. Investigation of technology needs for avoiding helicopter pilot error related accidents

    NASA Technical Reports Server (NTRS)

    Chais, R. I.; Simpson, W. E.

    1985-01-01

    Pilot error, which is cited as a cause or related factor in most rotorcraft accidents, was examined. Pilot-error-related accidents in helicopters were investigated to identify areas in which new technology could reduce or eliminate the underlying causes of these human errors. The aircraft accident data base at the U.S. Army Safety Center was used as the source of data on helicopter accidents. A randomly selected sample of 110 aircraft records was analyzed on a case-by-case basis to assess the nature of the problems which need to be resolved and the applicable technology implications. Six technology areas in which there appears to be a need for new or increased emphasis are identified.

  4. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also while eliminating existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density model, such re-sampling does not introduce any aliasing and interpolation errors as is done by the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering method. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method used for re-sampling a surface map has been two-dimensional interpolation. The main problem of this method is that the same pixel can take different values as the method of interpolation is changed among "nearest," "linear," "cubic," and "spline" fitting in Matlab. The conventional, FFT-based spatial filtering method used to eliminate the surface measurement noise or measurement errors can also suffer from aliasing effects. During re-sampling of a surface map, this software preserves the low spatial-frequency characteristic of a given surface map through the use of Zernike-polynomial fit coefficients, and maintains mid- and high-spatial-frequency characteristics of the given surface map by the use of a PSD model derived from the two-dimensional PSD data of the mid- and high-spatial-frequency components of the original surface map. Because this new method creates the new surface map in the desired sampling format from analytical expressions only, it does not encounter any aliasing effects and does not cause any discontinuity in the resultant surface map.
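
    A rough sketch of the low-spatial-frequency half of the approach (the PSD-based mid/high-frequency reconstruction is omitted, and the Zernike list is truncated to a few terms; all data are synthetic): coefficients are fitted by least squares on the measured grid and the polynomial expressions are then evaluated analytically on a finer grid, so no interpolation is involved.

    ```python
    import numpy as np

    def zernike_basis(x, y):
        """First few Zernike polynomials on the unit disk (piston excluded)."""
        r2 = x**2 + y**2
        return np.stack([x, y, 2*r2 - 1, x**2 - y**2, 2*x*y,
                         (3*r2 - 2)*x, (3*r2 - 2)*y], axis=-1)

    # "Measured" map on a coarse grid, with noise standing in for measurement error.
    n = 32
    xc, yc = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    mask = xc**2 + yc**2 <= 1
    surf = 0.8 * (2*(xc**2 + yc**2) - 1) + 0.3 * (xc**2 - yc**2)
    data = surf + 0.02 * np.random.default_rng(0).standard_normal(surf.shape)

    # Fit coefficients on valid pixels, then evaluate analytically on a finer
    # grid: no interpolation error, and the random noise is dropped by the fit.
    B = zernike_basis(xc[mask], yc[mask])
    coef, *_ = np.linalg.lstsq(B, data[mask], rcond=None)

    m = 128
    xf, yf = np.meshgrid(np.linspace(-1, 1, m), np.linspace(-1, 1, m))
    resampled = zernike_basis(xf, yf) @ coef
    print(resampled.shape, np.round(coef, 3))
    ```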

  5. Frequency and Severity of Parenteral Nutrition Medication Errors at a Large Children's Hospital After Implementation of Electronic Ordering and Compounding.

    PubMed

    MacKay, Mark; Anderson, Collin; Boehme, Sabrina; Cash, Jared; Zobell, Jeffery

    2016-04-01

    The Institute for Safe Medication Practices has stated that parenteral nutrition (PN) is considered a high-risk medication and has the potential of causing harm. Three organizations--American Society for Parenteral and Enteral Nutrition (A.S.P.E.N.), American Society of Health-System Pharmacists, and National Advisory Group--have published guidelines for ordering, transcribing, compounding and administering PN. These national organizations have published data on compliance to the guidelines and the risk of errors. The purpose of this article is to compare total compliance with ordering, transcription, compounding, and administration, and the error rate, at a large pediatric institution. A computerized prescriber order entry (CPOE) program was developed that incorporates dosing with soft and hard stop recommendations while simultaneously eliminating the need for paper transcription. A CPOE team prioritized and identified issues, then developed solutions and integrated innovative CPOE and automated compounding device (ACD) technologies and practice changes to minimize opportunities for medication errors in PN prescription, transcription, preparation, and administration. Thirty developmental processes were identified and integrated in the CPOE program, resulting in practices that were compliant with A.S.P.E.N. safety consensus recommendations. Data from 7 years of development and implementation were analyzed and compared with published literature on error rates, harm rates, and cost reductions to determine if our process showed lower error rates compared with national outcomes. The CPOE program developed was in total compliance with the A.S.P.E.N. guidelines for PN. The frequency of PN medication errors at our hospital over the 7 years was 230 errors/84,503 PN prescriptions, or 0.27%, compared with national data in which 74 of 4730 prescriptions (1.6%) over 1.5 years were associated with a medication error. Errors were categorized by steps in the PN process: prescribing, transcription, preparation, and administration. There were no transcription errors, and most (95%) errors occurred during administration. We conclude that the meaningful cost reduction and the lower error rate (2.7/1000 PN) than reported in the literature (15.6/1000 PN) are ascribable to the development and implementation of practices that conform to national PN guidelines and recommendations. Electronic ordering and compounding programs eliminated all transcription and related opportunities for errors. © 2015 American Society for Parenteral and Enteral Nutrition.

  6. Constructing the L2-Graph for Robust Subspace Learning and Subspace Clustering.

    PubMed

    Peng, Xi; Yu, Zhiding; Yi, Zhang; Tang, Huajin

    2017-04-01

    Under the framework of graph-based learning, the key to robust subspace clustering and subspace learning is to obtain a good similarity graph that eliminates the effects of errors and retains only connections between the data points from the same subspace (i.e., intrasubspace data points). Recent works achieve good performance by modeling errors into their objective functions to remove the errors from the inputs. However, these approaches face the limitations that the structure of the errors must be known a priori and a complex convex problem must be solved. In this paper, we present a novel method to eliminate the effects of the errors from the projection space (representation) rather than from the input space. We first prove that l1-, l2-, l∞-, and nuclear-norm-based linear projection spaces share the property of intrasubspace projection dominance, i.e., the coefficients over intrasubspace data points are larger than those over intersubspace data points. Based on this property, we introduce a method to construct a sparse similarity graph, called L2-graph. Subspace clustering and subspace learning algorithms are developed upon L2-graph. We conduct comprehensive experiments on subspace learning, image clustering, and motion segmentation, considering several quantitative benchmarks: classification/clustering accuracy, normalized mutual information, and running time. Results show that L2-graph outperforms many state-of-the-art methods in our experiments, including L1-graph, low-rank representation (LRR), latent LRR, least square regression, sparse subspace clustering, and locally linear representation.
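
    A hedged miniature of the self-representation step behind such graphs (not the authors' exact formulation; the regularization, neighbor count, and data are invented): each point is coded over the others by ridge regression and only the dominant coefficients are kept, exploiting the projection-dominance property described above.

    ```python
    import numpy as np

    def l2_graph(X, lam=0.1, k=5):
        """Sparse similarity graph from l2-regularized self-representation.

        Each column X[:, i] is coded over the remaining points (ridge
        regression); only the k largest-magnitude coefficients are kept.
        """
        d, n = X.shape
        W = np.zeros((n, n))
        for i in range(n):
            idx = [j for j in range(n) if j != i]
            D = X[:, idx]
            # ridge solution: c = (D^T D + lam I)^(-1) D^T x_i
            c = np.linalg.solve(D.T @ D + lam * np.eye(n - 1), D.T @ X[:, i])
            keep = np.argsort(np.abs(c))[-k:]          # dominant coefficients
            for j, cj in zip(np.array(idx)[keep], c[keep]):
                W[i, j] = abs(cj)
        return np.maximum(W, W.T)                      # symmetrize

    # Two 1-D subspaces (lines) in R^3 with small noise.
    rng = np.random.default_rng(0)
    A = np.outer([1, 0, 1], rng.uniform(1, 2, 20))
    B = np.outer([0, 1, -1], rng.uniform(1, 2, 20))
    X = np.hstack([A, B]) + 0.01 * rng.standard_normal((3, 40))
    W = l2_graph(X)
    print(W[:20, 20:].sum() / W.sum())  # cross-subspace weight share: near zero
    ```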

  7. Real-time precise orbit determination of LEO satellites using a single-frequency GPS receiver: Preliminary results of Chinese SJ-9A satellite

    NASA Astrophysics Data System (ADS)

    Sun, Xiucong; Han, Chao; Chen, Pei

    2017-10-01

    Spaceborne Global Positioning System (GPS) receivers are widely used for orbit determination of low-Earth-orbiting (LEO) satellites. With the improvement of measurement accuracy, single-frequency receivers are recently considered for low-cost small satellite missions. In this paper, a Schmidt-Kalman filter which processes single-frequency GPS measurements and broadcast ephemerides is proposed for real-time precise orbit determination of LEO satellites. The C/A code and L1 phase are linearly combined to eliminate the first-order ionospheric effects. Systematic errors due to ionospheric delay residual, group delay variation, phase center variation, and broadcast ephemeris errors, are lumped together into a noise term, which is modeled as a first-order Gauss-Markov process. In order to reduce computational complexity, the colored noise is considered rather than estimated in the orbit determination process. This ensures that the covariance matrix accurately represents the distribution of estimation errors without increasing the dimension of the state vector. The orbit determination algorithm is tested with actual flight data from the single-frequency GPS receiver onboard China's small satellite Shi Jian-9A (SJ-9A). Preliminary results using a 7-h data arc on October 25, 2012 show that the Schmidt-Kalman filter performs better than the standard Kalman filter in terms of accuracy.
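
    The single-frequency ionosphere elimination mentioned above is commonly the GRAPHIC combination; a minimal sketch with invented numbers shows why averaging code and carrier cancels the first-order term (the ionosphere delays the code and advances the phase by the same amount, leaving only half the ambiguity term).

    ```python
    # Simulated single-frequency observables (metres); numbers are illustrative.
    rho_true = 22_345_678.901          # geometric range plus clock terms
    iono     = 4.27                    # first-order ionospheric delay
    N_lambda = -1_234.567              # carrier ambiguity term (N * wavelength)

    code  = rho_true + iono            # C/A pseudorange: delayed by the ionosphere
    phase = rho_true - iono + N_lambda # L1 carrier: advanced by the same amount

    # GRAPHIC combination: the +iono and -iono terms cancel exactly.
    graphic = 0.5 * (code + phase)
    print(graphic - rho_true)          # = N_lambda / 2, no ionospheric term
    ```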

  8. Total elimination of sampling errors in polarization imagery obtained with integrated microgrid polarimeters.

    PubMed

    Tyo, J Scott; LaCasse, Charles F; Ratliff, Bradley M

    2009-10-15

    Microgrid polarimeters operate by integrating a focal plane array with an array of micropolarizers. The Stokes parameters are estimated by comparing polarization measurements from pixels in a neighborhood around the point of interest. The main drawback is that the measurements used to estimate the Stokes vector are made at different locations, leading to a false polarization signature owing to instantaneous field-of-view (IFOV) errors. We demonstrate for the first time, to our knowledge, that spatially band limited polarization images can be ideally reconstructed with no IFOV error by using a linear system framework.

  9. Cloud-free resolution element statistics program

    NASA Technical Reports Server (NTRS)

    Liley, B.; Martin, C. D.

    1971-01-01

    A computer program computes the number of cloud-free elements in the field-of-view and the percentage of the total field-of-view occupied by clouds. By replacing visual estimation in computing cloud statistics from aerial photographs, the program eliminates human error.

  10. Evolutionary Epistemology and the Educative Process.

    ERIC Educational Resources Information Center

    Perkinson, Henry J.

    2003-01-01

    Uses Karl Popper's theory that knowledge is produced through continual trial conjectures and error elimination to argue that students are fallible creators of knowledge and that the primary role of the teacher is as a critic. (EV)

  11. Term Cancellations in Computing Floating-Point Gröbner Bases

    NASA Astrophysics Data System (ADS)

    Sasaki, Tateaki; Kako, Fujio

    We discuss the term cancellation that makes floating-point Gröbner basis computation unstable, and show that error accumulation is never negligible in our previous method. We then present a new method, which removes accumulated errors as far as possible by reducing matrices constructed from coefficient vectors via Gaussian elimination. The method reveals the amount of term cancellation caused by the existence of approximately linearly dependent relations among the input polynomials.

  12. Higher-order ionospheric error at Arecibo, Millstone, and Jicamarca

    NASA Astrophysics Data System (ADS)

    Matteo, N. A.; Morton, Y. T.

    2010-12-01

    The ionosphere is a dominant source of Global Positioning System receiver range measurement error. Although dual-frequency receivers can eliminate the first-order ionospheric error, most second- and third-order errors remain in the range measurements. Higher-order ionospheric error is a function of both electron density distribution and the magnetic field vector along the GPS signal propagation path. This paper expands previous efforts by combining incoherent scatter radar (ISR) electron density measurements, the International Reference Ionosphere model, exponential decay extensions of electron densities, the International Geomagnetic Reference Field, and total electron content maps to compute higher-order error at ISRs in Arecibo, Puerto Rico; Jicamarca, Peru; and Millstone Hill, Massachusetts. Diurnal patterns, dependency on signal direction, seasonal variation, and geomagnetic activity dependency are analyzed. Higher-order error is largest at Arecibo with code phase maxima circa 7 cm for low-elevation southern signals. The maximum variation of the error over all angles of arrival is circa 8 cm.
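
    For reference, the first-order elimination that dual-frequency receivers perform is a simple linear combination; the sketch below applies it with invented ranges, leaving exactly the kind of higher-order residuals this paper studies.

    ```python
    # Dual-frequency ionosphere-free combination (first order only); GPS L1/L2.
    f1, f2 = 1575.42e6, 1227.60e6      # carrier frequencies, Hz

    def iono_free(p1, p2):
        """Remove the first-order (1/f^2) ionospheric delay from two pseudoranges."""
        return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

    rho, I1 = 20_000_000.0, 5.0        # true range and L1 first-order delay (made up)
    p1 = rho + I1                      # the delay scales as 1/f^2
    p2 = rho + I1 * (f1 / f2)**2
    print(iono_free(p1, p2) - rho)     # ~ 0: only higher-order terms would remain
    ```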

  13. Isolated core vs. superficial cooling effects on virtual maze navigation.

    PubMed

    Payne, Jennifer; Cheung, Stephen S

    2007-07-01

    Cold impairs cognitive performance and is a common occurrence in many survival situations. Altered behavior patterns due to impaired navigation abilities in cold environments are potential problems in lost-person situations. We investigated the separate effects of low core temperature and superficial cooling on a spatially demanding virtual navigation task. Twelve healthy men were passively cooled via 15 degrees C water immersion to a core temperature of 36.0 degrees C, then transferred to a warm (40 degrees C) water bath to eliminate superficial shivering while completing a series of 20 virtual computer mazes. In a control condition, subjects rested in a thermoneutral (approximately 35 degrees C) bath for a time-matched period before being transferred to a warm bath for testing. Superficial cooling and distraction were achieved by whole-body immersion in 35 degrees C water for a time-matched period, followed by lower-leg immersion in 10 degrees C water for the duration of the navigational tests. Mean completion time and mean error scores for the mazes were not significantly different (p > 0.05) across the core cooling (16.59 +/- 11.54 s, 0.91 +/- 1.86 errors), control (15.40 +/- 8.85 s, 0.82 +/- 1.76 errors), and superficial cooling (15.19 +/- 7.80 s, 0.77 +/- 1.40 errors) conditions. Separately reducing core temperature or increasing cold sensation in the lower extremities did not influence performance on virtual computer mazes, suggesting that navigation is more resistant to cooling than other, simpler cognitive tasks. Further research is warranted to explore navigational ability at progressively lower core and skin temperatures, and in different populations.

  14. Nondimensional parameter for conformal grinding: combining machine and process parameters

    NASA Astrophysics Data System (ADS)

    Funkenbusch, Paul D.; Takahashi, Toshio; Gracewski, Sheryl M.; Ruckman, Jeffrey L.

    1999-11-01

    Conformal grinding of optical materials with CNC (Computer Numerical Control) machining equipment can be used to achieve precise control over complex part configurations. However, complications can arise due to the need to fabricate complex geometrical shapes at reasonable production rates. For example, high machine stiffness is essential, but the need to grind 'inside' small or highly concave surfaces may require the use of tooling with less than ideal stiffness characteristics. If grinding generates loads sufficient for significant tool deflection, the programmed removal depth will not be achieved. Moreover, since the grinding load is a function of the volumetric removal rate, the amount of load deflection can vary with location on the part, potentially producing complex figure errors. In addition to machine/tool stiffness and removal rate, load generation is a function of the process parameters. For example, by reducing the feed rate of the tool into the part, both the load and the resultant deflection/removal error can be decreased. However, this must be balanced against the need for part throughput. In this paper a simple model which permits combination of machine stiffness and process parameters into a single non-dimensional parameter is adapted for a conformal grinding geometry. Errors in removal can be minimized by maintaining this parameter above a critical value. Moreover, since the value of this parameter depends on the local part geometry, it can be used to optimize process settings during grinding. For example, it may be used to guide adjustment of the feed rate as a function of location on the part to eliminate figure errors while minimizing the total grinding time required.

  15. Circular carrier squeezing interferometry: Suppressing phase shift error in simultaneous phase-shifting point-diffraction interferometer

    NASA Astrophysics Data System (ADS)

    Zheng, Donghui; Chen, Lei; Li, Jinpeng; Sun, Qinyuan; Zhu, Wenhua; Anderson, James; Zhao, Jian; Schülzgen, Axel

    2018-03-01

    Circular carrier squeezing interferometry (CCSI) is proposed and applied to suppress phase shift error in simultaneous phase-shifting point-diffraction interferometer (SPSPDI). By introducing a defocus, four phase-shifting point-diffraction interferograms with circular carrier are acquired, and then converted into linear carrier interferograms by a coordinate transform. Rearranging the transformed interferograms into a spatial-temporal fringe (STF), so the error lobe will be separated from the phase lobe in the Fourier spectrum of the STF, and filtering the phase lobe to calculate the extended phase, when combined with the corresponding inverse coordinate transform, exactly retrieves the initial phase. Both simulations and experiments validate the ability of CCSI to suppress the ripple error generated by the phase shift error. Compared with carrier squeezing interferometry (CSI), CCSI is effective on some occasions in which a linear carrier is difficult to introduce, and with the added benefit of eliminating retrace error.

  16. Reliable absolute analog code retrieval approach for 3D measurement

    NASA Astrophysics Data System (ADS)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Chen, Deyun

    2017-11-01

    The wrapped phase of the phase-shifting approach can be unwrapped by using Gray code, but both the wrapped-phase error and the Gray code decoding error can result in period jump errors, which lead to gross measurement error. Therefore, this paper presents a reliable absolute analog code retrieval approach. The combination of unequal-period Gray code and phase-shifting patterns at high frequencies is used to obtain a high-frequency absolute analog code, and at low frequencies the same unequal-period combination patterns are used to obtain the low-frequency absolute analog code. The difference between the two absolute analog codes is then employed to eliminate period jump errors, so that a reliable unwrapped result can be obtained. Error analysis was used to determine the applicable conditions, and the approach was verified through theoretical analysis. The proposed approach was further verified experimentally. Theoretical analysis and experimental results demonstrate that the proposed approach can perform reliable analog code unwrapping.
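
    A toy of the underlying Gray-code unwrapping step only (the paper's dual-frequency differencing that removes period jumps is not reproduced here, and all signal parameters are invented): the decoded Gray-code period index k restores the absolute phase as phi_wrapped + 2*pi*k.

    ```python
    import numpy as np

    def gray_to_binary(g):
        """Decode a Gray-coded integer array (up to 16 bits)."""
        b = g.copy()
        for shift in (8, 4, 2, 1):
            b ^= b >> shift
        return b

    true_phase = np.linspace(0, 6 * np.pi, 500, endpoint=False)  # three periods
    wrapped = np.mod(true_phase, 2 * np.pi)
    k_true = np.floor(true_phase / (2 * np.pi)).astype(np.uint16)
    k_coded = k_true ^ (k_true >> 1)                 # period index as Gray code

    k = gray_to_binary(k_coded)
    unwrapped = wrapped + 2 * np.pi * k              # absolute analog code
    print(np.max(np.abs(unwrapped - true_phase)))    # ~ 0
    ```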

  17. An adaptive modeling and simulation environment for combined-cycle data reconciliation and degradation estimation

    NASA Astrophysics Data System (ADS)

    Lin, Tsungpo

    Performance engineers face a major challenge in modeling and simulation for after-market power systems due to system degradation and measurement errors. Currently, the majority of the power generation industry utilizes deterministic data matching to calibrate the model and cascade system degradation, which causes significant calibration uncertainty and also the risk of providing performance guarantees. In this research work, a maximum-likelihood based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC one can reduce the calibration uncertainty and mitigate the error propagation to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and propagated to the performance simulation by using the principle of error propagation. System degradation is then quantified by performance comparison between the calibrated model and its expected new & clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated at the second stage, in which serial bias compensation or a robust M-estimator is engaged. To achieve better efficiency in the combined scheme of least-squares based data reconciliation and the GED technique based on hypothesis testing, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer. To reduce the computation time and stabilize the problem solving for a complex power system such as a combined cycle power plant, meta-modeling using response surface equations (RSE) and system/process decomposition are incorporated into the simultaneous scheme of SDRMC. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks of providing performance guarantees arising from uncertainties in performance simulation.
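
    The data reconciliation at the core of SDRMC is, in its simplest linear form, a constrained weighted least squares adjustment; a minimal sketch with an invented three-stream mass balance is shown below, using the classical closed-form solution.

    ```python
    import numpy as np

    # Measured flows (kg/s) around a splitter: in = out1 + out2 should hold.
    m     = np.array([100.4, 60.1, 41.2])          # raw measurements (illustrative)
    sigma = np.array([1.0, 0.8, 0.6])              # measurement standard deviations
    A     = np.array([[1.0, -1.0, -1.0]])          # mass-balance constraint A x = 0

    # Weighted least squares reconciliation, classical closed form:
    # x = m - S A^T (A S A^T)^(-1) A m, with S the measurement covariance.
    S = np.diag(sigma**2)
    x = m - S @ A.T @ np.linalg.solve(A @ S @ A.T, A @ m)

    print(x, A @ x)    # reconciled flows satisfy the balance exactly
    ```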

  18. Preventing medication errors in cancer chemotherapy.

    PubMed

    Cohen, M R; Anderson, R W; Attilio, R M; Green, L; Muller, R J; Pruemer, J M

    1996-04-01

    Recommendations for preventing medication errors in cancer chemotherapy are made. Before a health care provider is granted privileges to prescribe, dispense, or administer antineoplastic agents, he or she should undergo a tailored educational program and possibly testing or certification. Appropriate reference materials should be developed. Each institution should develop a dose-verification process with as many independent checks as possible. A detailed checklist covering prescribing, transcribing, dispensing, and administration should be used. Oral orders are not acceptable. All doses should be calculated independently by the physician, the pharmacist, and the nurse. Dosage limits should be established and a review process set up for doses that exceed the limits. These limits should be entered into pharmacy computer systems, listed on preprinted order forms, stated on the product packaging, placed in strategic locations in the institution, and communicated to employees. The prescribing vocabulary must be standardized. Acronyms, abbreviations, and brand names must be avoided and steps taken to avoid other sources of confusion in the written orders, such as trailing zeros. Preprinted antineoplastic drug order forms containing checklists can help avoid errors. Manufacturers should be encouraged to avoid or eliminate ambiguities in drug names and dosing information. Patients must be educated about all aspects of their cancer chemotherapy, as patients represent a last line of defense against errors. An interdisciplinary team at each practice site should review every medication error reported. Pharmacists should be involved at all sites where antineoplastic agents are dispensed. Although it may not be possible to eliminate all medication errors in cancer chemotherapy, the risk can be minimized through specific steps. Because of their training and experience, pharmacists should take the lead in this effort.

  19. Porous plug for reducing orifice induced pressure error in airfoils

    NASA Technical Reports Server (NTRS)

    Plentovich, Elizabeth B. (Inventor); Gloss, Blair B. (Inventor); Eves, John W. (Inventor); Stack, John P. (Inventor)

    1988-01-01

    A porous plug is provided for the reduction or elimination of positive error caused by the orifice during static pressure measurements of airfoils. The porous plug is press fitted into the orifice, thereby preventing the error caused either by fluid flow turning into the exposed orifice or by the fluid flow stagnating at the downstream edge of the orifice. In addition, the porous plug is made flush with the outer surface of the airfoil, by filing and polishing, to provide a smooth surface which alleviates the error caused by imperfections in the orifice. The porous plug is preferably made of sintered metal, which allows air to pass through the pores, so that the static pressure measurements can be made by remote transducers.

  20. Total absorption cross sections of several gases of aeronomic interest at 584 A.

    NASA Technical Reports Server (NTRS)

    Starr, W. L.; Loewenstein, M.

    1972-01-01

    Total photoabsorption cross sections have been measured at 584.3 A for N2, O2, Ar, CO2, CO, NO, N2O, NH3, CH4, H2, and H2S. A monochromator was used to isolate the He I 584 line produced in a helium resonance lamp, and thin aluminum filters were used as absorption cell windows, thereby eliminating possible errors associated with the use of undispersed radiation or windowless cells. Sources of error are examined, and limits of uncertainty are given. Previous relevant cross-sectional measurements and possible error sources are reviewed. Wall adsorption as a source of error in cross-sectional measurements has not previously been considered and is discussed briefly.

  1. Algorithm for repairing the damaged images of grain structures obtained from the cellular automata and measurement of grain size

    NASA Astrophysics Data System (ADS)

    Ramírez-López, A.; Romero-Romo, M. A.; Muñoz-Negron, D.; López-Ramírez, S.; Escarela-Pérez, R.; Duran-Valencia, C.

    2012-10-01

    Computational models are developed to create grain structures using mathematical algorithms based on chaos theory, such as cellular automata, geometrical models, fractals, and stochastic methods. Because of the chaotic nature of grain structures, some of the most popular routines are based on the Monte Carlo method, statistical distributions, and random walk methods, which can be easily programmed and included in nested loops. Nevertheless, grain structures are often not well defined, as a result of computational errors and numerical inconsistencies in the mathematical methods. Due to the finite precision of numbers and the numerical restrictions during the simulation of solidification, damaged images appear on the screen. These images must be repaired to obtain a good measurement of grain geometrical properties. In the present work, mathematical algorithms were developed to repair, measure, and characterize grain structures obtained from cellular automata. Appropriate measurement of grain size and correct identification of interfaces and lengths are very important topics in materials science, because they are the representation and validation of mathematical models with real samples. As a result, the developed algorithms are tested and proved to be appropriate and efficient for eliminating the errors and characterizing the grain structures.
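
    The paper's repair algorithms are not given in the abstract; as a hedged stand-in, the sketch below repairs isolated damaged cells in a toy two-grain map with a majority (mode) filter over each damaged cell's neighbourhood.

    ```python
    import numpy as np

    def repair(labels, passes=2):
        """Replace damaged cells (label 0) by the mode of their 8-neighbourhood;
        a crude stand-in for the repair algorithms described in the paper."""
        L = labels.copy()
        for _ in range(passes):
            for i, j in np.argwhere(L == 0):
                window = L[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].ravel()
                window = window[window > 0]            # ignore damaged neighbours
                if window.size:
                    vals, counts = np.unique(window, return_counts=True)
                    L[i, j] = vals[np.argmax(counts)]  # majority grain label
        return L

    # Toy grain map: two grains (1, 2) with scattered damaged pixels (0).
    rng = np.random.default_rng(3)
    img = np.where(np.add.outer(np.arange(20), np.arange(20)) < 20, 1, 2)
    img[rng.random(img.shape) < 0.05] = 0
    print((repair(img) == 0).sum())   # damaged pixels remaining after repair
    ```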

  2. MAGNIFICENT MAGNIFICATION: EXPLOITING THE OTHER HALF OF THE LENSING SIGNAL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huff, Eric M.; Graves, Genevieve J.

    2014-01-10

    We describe a new method for measuring galaxy magnification due to weak gravitational lensing. Our method makes use of a tight scaling relation between galaxy properties that are modified by gravitational lensing, such as apparent size, and other properties that are not, such as surface brightness. In particular, we use a version of the well-known fundamental plane relation for early-type galaxies. This modified ''photometric fundamental plane'' uses only photometric galaxy properties, eliminating the need for spectroscopic data. We present the first detection of magnification using this method by applying it to photometric catalogs from the Sloan Digital Sky Survey. This analysis shows that the derived magnification signal is within a factor of three of that available from conventional methods using gravitational shear. We suppress the dominant sources of systematic error and discuss modest improvements that may further enhance the lensing signal-to-noise available with this method. Moreover, some of the dominant sources of systematic error are substantially different from those of shear-based techniques. With this new technique, magnification becomes a useful measurement tool for the coming era of large ground-based surveys intending to measure gravitational lensing.

  3. A holistic calibration method with iterative distortion compensation for stereo deflectometry

    NASA Astrophysics Data System (ADS)

    Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian

    2018-07-01

    This paper presents a novel holistic calibration method for stereo deflectometry systems to improve system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error to the system due to an inaccurate imaging model and incomplete distortion elimination. The proposed calibration method compensates for system distortion with an iterative algorithm instead of the conventional mathematical distortion model. The initial values of the system parameters are calculated from the fringe patterns displayed on the systemic LCD screen through a reflection off a markless flat mirror. An iterative algorithm is proposed to compensate for system distortion and to optimize the camera imaging parameters and system geometrical relation parameters based on a cost function. Both simulation work and experimental results show the proposed calibration method can significantly improve the calibration and measurement accuracy of stereo deflectometry. The PV (peak-to-valley) measurement error on a flat mirror can be reduced to 69.7 nm by applying the proposed method, from 282 nm obtained with the conventional calibration approach.

  4. Modified Bat Algorithm for Feature Selection with the Wisconsin Diagnosis Breast Cancer (WDBC) Dataset

    PubMed

    Jeyasingh, Suganthi; Veluchamy, Malathi

    2017-05-01

    Early diagnosis of breast cancer is essential to save the lives of patients. Usually, medical datasets include a large variety of data that can lead to confusion during diagnosis. The Knowledge Discovery in Databases (KDD) process helps to improve efficiency. It requires elimination of inappropriate and repeated data from the dataset before final diagnosis. This can be done using any of the feature selection algorithms available in data mining. Feature selection is considered a vital step to increase classification accuracy. This paper proposes a Modified Bat Algorithm (MBA) for feature selection to eliminate irrelevant features from an original dataset. The Bat algorithm was modified using simple random sampling to select random instances from the dataset; ranking against the global best features then identifies the predominant features available in the dataset. The selected features are used to train a Random Forest (RF) classification algorithm. The MBA feature selection algorithm enhanced the classification accuracy of RF in identifying the occurrence of breast cancer. The Wisconsin Diagnosis Breast Cancer (WDBC) dataset was used for the performance analysis of the proposed MBA feature selection algorithm. The proposed algorithm achieved better performance in terms of Kappa statistic, Matthews Correlation Coefficient, Precision, F-measure, Recall, Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Relative Absolute Error (RAE) and Root Relative Squared Error (RRSE).
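
    A much-simplified stand-in for the MBA search (random candidate subsets with a global-best memory, rather than the full bat dynamics; the subset probability and trial count are invented) can be sketched with scikit-learn on the same WDBC data:

    ```python
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)   # the WDBC dataset
    rng = np.random.default_rng(0)

    best_score, best_mask = 0.0, None
    for _ in range(30):                          # 30 candidate "bats"
        mask = rng.random(X.shape[1]) < 0.5      # random feature subset
        if not mask.any():
            continue
        rf = RandomForestClassifier(n_estimators=50, random_state=0)
        score = cross_val_score(rf, X[:, mask], y, cv=3).mean()
        if score > best_score:                   # keep the global best
            best_score, best_mask = score, mask

    print(best_score, best_mask.sum(), "features kept")
    ```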

  5. AfterQC: automatic filtering, trimming, error removing and quality control for fastq data.

    PubMed

    Chen, Shifu; Huang, Tanxiao; Zhou, Yanqing; Han, Yue; Xu, Mingyan; Gu, Jia

    2017-03-14

    Some applications, especially those clinical applications requiring high accuracy of sequencing data, usually have to face the troubles caused by unavoidable sequencing errors. Several tools have been proposed to profile the sequencing quality, but few of them can quantify or correct the sequencing errors. This unmet requirement motivated us to develop AfterQC, a tool with functions to profile sequencing errors and correct most of them, plus highly automated quality control and data filtering features. Different from most tools, AfterQC analyses the overlapping of paired sequences for pair-end sequencing data. Based on overlapping analysis, AfterQC can detect and cut adapters, and furthermore it gives a novel function to correct wrong bases in the overlapping regions. Another new feature is to detect and visualise sequencing bubbles, which can be commonly found on the flowcell lanes and may raise sequencing errors. Besides normal per cycle quality and base content plotting, AfterQC also provides features like polyX (a long sub-sequence of a same base X) filtering, automatic trimming and K-MER based strand bias profiling. For each single or pair of FastQ files, AfterQC filters out bad reads, detects and eliminates sequencer's bubble effects, trims reads at front and tail, detects the sequencing errors and corrects part of them, and finally outputs clean data and generates HTML reports with interactive figures. AfterQC can run in batch mode with multiprocess support, it can run with a single FastQ file, a single pair of FastQ files (for pair-end sequencing), or a folder for all included FastQ files to be processed automatically. Based on overlapping analysis, AfterQC can estimate the sequencing error rate and profile the error transform distribution. The results of our error profiling tests show that the error distribution is highly platform dependent. Much more than just another new quality control (QC) tool, AfterQC is able to perform quality control, data filtering, error profiling and base correction automatically. Experimental results show that AfterQC can help to eliminate the sequencing errors for pair-end sequencing data to provide much cleaner outputs, and consequently help to reduce the false-positive variants, especially for the low-frequency somatic mutations. While providing rich configurable options, AfterQC can detect and set all the options automatically and require no argument in most cases.
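
    A toy version of the overlap-based correction idea (not AfterQC's actual algorithm; the reads, qualities, and overlap length are invented): where the paired reads disagree inside their overlap, the base with the higher quality score wins.

    ```python
    def complement(b):
        return {"A": "T", "T": "A", "C": "G", "G": "C", "N": "N"}[b]

    def revcomp(seq):
        return "".join(complement(b) for b in reversed(seq))

    def correct_overlap(r1, q1, r2, q2, overlap):
        """Where the paired reads disagree inside their overlap, keep the base
        with the higher quality score (a toy version of overlap correction)."""
        r2c, q2r = revcomp(r2), q2[::-1]       # view read 2 on read 1's strand
        out, q = list(r1), list(q1)
        for k in range(overlap):
            i = len(out) - overlap + k         # overlap sits at the tail of read 1
            if out[i] != r2c[k] and q2r[k] > q[i]:
                out[i] = r2c[k]                # trust the higher-quality call
        return "".join(out)

    # Read 1 carries one low-quality error ('#') in the 7-base overlap region.
    r1, q1 = "ACGTTGCACGGT", "IIIIIIII#III"
    r2, q2 = revcomp("GCAAGGT"), "IIIIIII"     # read 2 covers the true tail
    print(correct_overlap(r1, q1, r2, q2, overlap=7))   # -> ACGTTGCAAGGT
    ```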

  6. Measuring the Lense-Thirring precession using a second Lageos satellite

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Ciufolini, I.

    1989-01-01

    A complete numerical simulation and error analysis was performed for the proposed experiment with the objective of establishing an accurate assessment of the feasibility and the potential accuracy of the measurement of the Lense-Thirring precession. Consideration was given to identifying the error sources which limit the accuracy of the experiment and proposing procedures for eliminating or reducing the effect of these errors. Analytic investigations were conducted to study the effects of major error sources with the objective of providing error bounds on the experiment. The analysis of realistic simulated data is used to demonstrate that satellite laser ranging of two Lageos satellites, orbiting with supplemental inclinations, collected for a period of 3 years or more, can be used to verify the Lense-Thirring precession. A comprehensive covariance analysis for the solution was also developed.

  7. Reflections on human error - Matters of life and death

    NASA Technical Reports Server (NTRS)

    Wiener, Earl L.

    1989-01-01

    The last two decades have witnessed a rapid growth in the introduction of automatic devices into aircraft cockpits, and elsewhere in human-machine systems. This was motivated in part by the assumption that when human functioning is replaced by machine functioning, human error is eliminated. Experience to date shows that this is far from true, and that automation does not replace humans, but changes their role in the system, as well as the types and severity of the errors they make. This altered role may lead to fewer, but more critical, errors. Intervention strategies to prevent these errors, or ameliorate their consequences, include basic human factors engineering of the interface, enhanced warning and alerting systems, and more intelligent interfaces that understand the strategic intent of the crew and can detect and trap inconsistent or erroneous input before it affects the system.

  8. Structured methods for identifying and correcting potential human errors in aviation operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1997-10-01

    Human errors have been identified as the source of approximately 60% of the incidents and accidents that occur in commercial aviation. It can be assumed that a very large number of human errors occur in aviation operations, even though in most cases the redundancies and diversities built into the design of aircraft systems prevent the errors from leading to serious consequences. In addition, when it is acknowledged that many system failures have their roots in human errors that occur in the design phase, it becomes apparent that the identification and elimination of potential human errors could significantly decrease the risks of aviation operations. This will become even more critical during the design of advanced automation-based aircraft systems as well as next-generation systems for air traffic management. Structured methods to identify and correct potential human errors in aviation operations have been developed and are currently undergoing testing at the Idaho National Engineering and Environmental Laboratory (INEEL).

  9. Association of medication errors with drug classifications, clinical units, and consequence of errors: Are they related?

    PubMed

    Muroi, Maki; Shen, Jay J; Angosta, Alona

    2017-02-01

    Registered nurses (RNs) play an important role in safe medication administration and patient safety. This study examined a total of 1276 medication error (ME) incident reports made by RNs in hospital inpatient settings in the southwestern region of the United States. The most common drug class associated with MEs was cardiovascular drugs (24.7%). Within this class, anticoagulants had the most errors (11.3%). Antimicrobials were the second most common drug class associated with errors (19.1%), and vancomycin was the most common antimicrobial causing errors in this category (6.1%). MEs occurred more frequently in the medical-surgical and intensive care units than in any other hospital units. Ten percent of MEs reached the patient and caused harm, and 11% reached the patient and required increased monitoring. Understanding the contributing factors related to MEs, addressing and eliminating the risk of errors across hospital units, and providing education and resources for nurses may help reduce MEs. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Self-checking self-repairing computer nodes using the mirror processor

    NASA Technical Reports Server (NTRS)

    Tamir, Yuval

    1992-01-01

    Circuitry added to fault-tolerant systems for concurrent error detection usually reduces performance. Using a technique called micro rollback, it is possible to eliminate most of the performance penalty of concurrent error detection. Error detection is performed in parallel with intermodule communication, and erroneous state changes are later undone. The author reports on the design and implementation of a VLSI RISC microprocessor, called the Mirror Processor (MP), which is capable of micro rollback. In order to achieve concurrent error detection, two MP chips operate in lockstep, comparing external signals and a signature of internal signals every clock cycle. If a mismatch is detected, both processors roll back to the beginning of the cycle in which the error occurred. In some cases the erroneous state is corrected by copying a value from the fault-free processor to the faulty processor. The architecture, microarchitecture, and VLSI implementation of the MP are described, emphasizing its error-detection, error-recovery, and self-diagnosis capabilities.
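
    The compare-and-rollback cycle can be illustrated with a toy simulation (a hedged sketch only; the class names, rollback depth, and fault model are invented and bear no relation to the MP's actual microarchitecture):

    from collections import deque
    import copy

    ROLLBACK_DEPTH = 4                       # cycles of state each core can undo

    class Core:
        def __init__(self):
            self.state = {"pc": 0, "acc": 0}
            self.history = deque(maxlen=ROLLBACK_DEPTH)

        def step(self, fault=False):
            self.history.append(copy.deepcopy(self.state))   # checkpoint first
            self.state["pc"] += 1
            self.state["acc"] += self.state["pc"]
            if fault:                        # injected transient bit flip
                self.state["acc"] ^= 0x10
            return self.signature()

        def signature(self):                 # compared between cores every cycle
            return (self.state["pc"], self.state["acc"])

        def rollback(self):                  # undo the most recent cycle
            self.state = self.history.pop()

    def run_lockstep(cycles, faulty_cycle):
        a, b = Core(), Core()
        for c in range(cycles):
            sig_a = a.step(fault=(c == faulty_cycle))
            sig_b = b.step()
            if sig_a != sig_b:               # mismatch detected after the cycle
                a.rollback(); b.rollback()   # micro rollback of the bad cycle...
                a.step(); b.step()           # ...then re-execute it cleanly
        assert a.signature() == b.signature()

    run_lockstep(cycles=10, faulty_cycle=5)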

  11. Radiation and Scattering Compact Antenna Laboratory (RASCAL) Capabilities Brochure

    DTIC Science & Technology

    2016-09-06

    Array Measurements Integrated Measurement of Subsystems with Digital Backends RADIATION AND SCATTERING COMPACT ANTENNA LABORATORY...hardware gating to eliminate sources of error within the range itself. Processing is also available for multi-arm spiral antennas for the generation

  12. Wide-range radioactive-gas-concentration detector

    DOEpatents

    Anderson, D.F.

    1981-11-16

    A wide-range radioactive-gas-concentration detector and monitor capable of measuring radioactive-gas concentrations over a range of eight orders of magnitude is described. The device is designed to have an ionization chamber sufficiently small to give a fast response time for measuring radioactive gases but sufficiently large to provide accurate readings at low concentration levels. Closely spaced parallel-plate grids provide a uniform electric field in the active region to improve the accuracy of measurements and reduce ion migration time so as to virtually eliminate errors due to ion recombination. The parallel-plate grids are fabricated with a minimal surface area to reduce the effects of contamination resulting from absorption of contaminating materials on the surface of the grids. Additionally, the ionization-chamber wall is spaced a sufficient distance from the active region of the ionization chamber to minimize contamination effects.

  13. Effects of salt secretion on psychrometric determinations of water potential of cotton leaves.

    PubMed

    Klepper, B; Barrs, H D

    1968-07-01

    Thermocouple psychrometers gave lower estimates of water potential of cotton leaves than did a pressure chamber. This difference was considerable for turgid leaves, but progressively decreased for leaves with lower water potentials and fell to zero at water potentials below about -10 bars. The conductivity of washings from cotton leaves removed from the psychrometric equilibration chambers was related to the magnitude of this discrepancy in water potential, indicating that the discrepancy is due to salts on the leaf surface which make the psychrometric estimates too low. This error, which may be as great as 400 to 500%, cannot be eliminated by washing the leaves because salts may be secreted during the equilibration period. Therefore, a thermocouple psychrometer is not suitable for measuring the water potential of cotton leaves when it is above about -10 bars.

  14. Accurate RNA consensus sequencing for high-fidelity detection of transcriptional mutagenesis-induced epimutations.

    PubMed

    Reid-Bayliss, Kate S; Loeb, Lawrence A

    2017-08-29

    Transcriptional mutagenesis (TM) due to misincorporation during RNA transcription can result in mutant RNAs, or epimutations, that generate proteins with altered properties. TM has long been hypothesized to play a role in aging, cancer, and viral and bacterial evolution. However, inadequate methodologies have limited progress in elucidating a causal association. We present a high-throughput, highly accurate RNA sequencing method to measure epimutations with single-molecule sensitivity. Accurate RNA consensus sequencing (ARC-seq) uniquely combines RNA barcoding and generation of multiple cDNA copies per RNA molecule to eliminate errors introduced during cDNA synthesis, PCR, and sequencing. The stringency of ARC-seq can be scaled to accommodate the quality of input RNAs. We apply ARC-seq to directly assess transcriptome-wide epimutations resulting from RNA polymerase mutants and oxidative stress.
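
    The consensus step at the heart of such barcode-based schemes can be sketched as follows (illustrative Python, not the published ARC-seq pipeline; the thresholds and the assumption of equal-length, gap-free reads are simplifications):

    from collections import Counter, defaultdict

    def consensus(reads, min_copies=3, min_agreement=0.9):
        """Collapse equal-length reads from one barcode family into a consensus."""
        if len(reads) < min_copies:
            return None                          # too few copies to call at all
        out = []
        for column in zip(*reads):               # walk the alignment column by column
            base, count = Counter(column).most_common(1)[0]
            out.append(base if count / len(column) >= min_agreement else "N")
        return "".join(out)

    families = defaultdict(list)                 # barcode -> cDNA copies of one molecule
    for barcode, seq in [("AAGT", "ACGTAC"), ("AAGT", "ACGTAC"),
                         ("AAGT", "ACCTAC"), ("TTAG", "GGGTTT")]:
        families[barcode].append(seq)

    for bc, reads in families.items():
        print(bc, consensus(reads))              # AAGT ACNTAC (disputed base masked),
                                                 # TTAG None (too few copies)

    Because a true epimutation is present in every copy of the molecule while a cDNA, PCR, or sequencing artifact is not, only bases on which the copies agree survive the consensus.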

  15. Analysis of the “naming game” with learning errors in communications

    NASA Astrophysics Data System (ADS)

    Lou, Yang; Chen, Guanrong

    2015-07-01

    Naming game simulates the process of naming an object by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates in a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctively increase the requirement for memory of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning errors which impairs the convergence. The new findings may help to better understand the role of learning errors in the naming game as well as in human language development from a network science perspective.

  16. Analysis of the "naming game" with learning errors in communications.

    PubMed

    Lou, Yang; Chen, Guanrong

    2015-07-16

    Naming game simulates the process of naming an object by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates in a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctively increase the requirement for memory of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning errors which impairs the convergence. The new findings may help to better understand the role of learning errors in the naming game as well as in human language development from a network science perspective.
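
    A minimal mean-field version of the dynamics (a hedged sketch: the published NGLE model runs on explicit network topologies, whereas this toy picks random pairs and uses invented parameter values) shows how a learning-error rate inflates the word inventory before consensus:

    import random

    def naming_game(n_agents=50, error_rate=0.05, max_rounds=200000):
        lexicons = [set() for _ in range(n_agents)]
        next_word = 0                                # supply of brand-new word labels
        for t in range(max_rounds):
            s, l = random.sample(range(n_agents), 2)     # speaker, listener
            if not lexicons[s]:
                lexicons[s].add(next_word); next_word += 1
            word = random.choice(tuple(lexicons[s]))
            if random.random() < error_rate:         # learning error: the listener
                word = next_word; next_word += 1     # hears a distorted (new) word
            if word in lexicons[l]:                  # success: both collapse lexicons
                lexicons[s] = {word}; lexicons[l] = {word}
            else:
                lexicons[l].add(word)                # failure: memory load grows
            if all(len(x) == 1 for x in lexicons) and len(set.union(*lexicons)) == 1:
                return t                             # global consensus reached
        return None                                  # convergence impaired

    print(naming_game())

    Raising error_rate in this sketch reproduces the qualitative findings above: agents must hold more words in memory, and beyond some threshold consensus fails within the round budget.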

  17. Relative peripheral hyperopic defocus alters central refractive development in infant monkeys

    PubMed Central

    Smith, Earl L.; Hung, Li-Fang; Huang, Juan

    2009-01-01

    Understanding the role of peripheral defocus on central refractive development is critical because refractive errors can vary significantly with eccentricity and peripheral refractions have been implicated in the genesis of central refractive errors in humans. Two rearing strategies were used to determine whether peripheral hyperopia alters central refractive development in rhesus monkeys. In intact eyes, lens-induced relative peripheral hyperopia produced central axial myopia. Moreover, eliminating the fovea by laser photoablation did not prevent compensating myopic changes in response to optically imposed hyperopia. These results show that peripheral refractive errors can have a substantial impact on central refractive development in primates. PMID:19632261

  18. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts, or customizes, the quantization matrix to the image being compressed. The quantization matrix is constructed using visual masking by luminance and contrast techniques and an error pooling technique, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
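
    The basic quantization mechanism the invention builds on can be sketched with the generic JPEG-style procedure below (the standard example luminance table, not the patented image-adaptive matrix):

    import numpy as np
    from scipy.fft import dctn, idctn

    # Standard JPEG example luminance quantization matrix: larger entries at
    # high frequencies discard detail the eye is least sensitive to.
    Q = np.array([
        [16, 11, 10, 16, 24, 40, 51, 61],
        [12, 12, 14, 19, 26, 58, 60, 55],
        [14, 13, 16, 24, 40, 57, 69, 56],
        [14, 17, 22, 29, 51, 87, 80, 62],
        [18, 22, 37, 56, 68, 109, 103, 77],
        [24, 35, 55, 64, 81, 104, 113, 92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103, 99]])

    def compress_block(block):
        """Quantize one 8x8 block; the rounding is the lossy step."""
        coeff = dctn(block - 128.0, norm="ortho")
        return np.round(coeff / Q)               # invisible detail -> 0

    def decompress_block(q):
        return idctn(q * Q, norm="ortho") + 128.0

    block = np.tile(np.linspace(0, 255, 8), (8, 1))   # a smooth test block
    print(np.abs(block - decompress_block(compress_block(block))).max())

    The invention's contribution, per the abstract, is to derive the entries of such a matrix per image from luminance/contrast masking and error pooling rather than using a fixed table.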

  19. The pattern of the discovery of medication errors in a tertiary hospital in Hong Kong.

    PubMed

    Samaranayake, N R; Cheung, S T D; Chui, W C M; Cheung, B M Y

    2013-06-01

    The primary goal of reducing medication errors is to eliminate those that reach the patient. We aimed to study the pattern of interceptions that tackle medication errors along the medication use process, in a tertiary care hospital in Hong Kong. The 'Swiss Cheese Model' was used to explain the interceptions, based on medication error reports over 5 years (2006-2010). The measures were the proportions of prescribing, dispensing and drug administration errors intercepted by pharmacists and nurses, and the proportions of prescribing, dispensing and drug administration errors that reached the patient. Our analysis included 1,268 in-patient medication errors, of which 53.4% were related to prescribing, 29.0% to administration and 17.6% to dispensing. 34.1% of all medication errors (4.9% prescribing, 26.8% drug administration and 2.4% dispensing) were not intercepted. Pharmacy staff intercepted 85.4% of the prescribing errors. Nurses detected 83.0% of dispensing and 5.0% of prescribing errors. However, 92.4% of all drug administration errors reached the patient. Having a preventive measure at each stage of the medication use process helps to prevent most errors. Most drug administration errors reach the patient because there is no defense against them; therefore, more interventions to prevent drug administration errors are warranted.

  20. Optical Fiber Connection Navigation System Using Visible Light Communication in Central Office with Economic Evaluation

    NASA Astrophysics Data System (ADS)

    Waki, Masaki; Uruno, Shigenori; Ohashi, Hiroyuki; Manabe, Tetsuya; Azuma, Yuji

    We propose an optical fiber connection navigation system that uses visible light communication for an integrated distribution module in a central office. The system maintains an accurate database, requires less-skilled labor to operate, and eliminates human error. It can reduce working time by up to 88.0% compared with conventional practice for the connection/removal of optical fiber cords, without human error, and is economical as regards installation and operation.

  1. Defining the Relationship Between Human Error Classes and Technology Intervention Strategies

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.; Rantanen, Eas M.

    2003-01-01

    The modus operandi in addressing human error in aviation systems is predominantly that of technological interventions or fixes. Such interventions exhibit considerable variability both in terms of sophistication and application. Some technological interventions address human error directly while others do so only indirectly. Some attempt to eliminate the occurrence of errors altogether whereas others look to reduce the negative consequences of these errors. In any case, technological interventions add to the complexity of the systems and may interact with other system components in unforeseeable ways and often create opportunities for novel human errors. Consequently, there is a need to develop standards for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested to produce the biggest benefit to flight safety as well as to mitigate any adverse ramifications. The purpose of this project was to help define the relationship between human error and technological interventions, with the ultimate goal of developing a set of standards for evaluating or measuring the potential benefits of new human error fixes.

  2. Research on effects of phase error in phase-shifting interferometer

    NASA Astrophysics Data System (ADS)

    Wang, Hongjun; Wang, Zhao; Zhao, Hong; Tian, Ailing; Liu, Bingcai

    2007-12-01

    In phase-shifting interferometry, the phase-shifting error introduced by the phase shifter is the main factor that directly affects the measurement accuracy of the interferometer. In this paper, the sources and types of phase-shifting error are introduced, and methods to eliminate them are discussed. Based on the theory of phase-shifting interferometry, the effects of phase-shifting error are analyzed in detail. A liquid crystal display (LCD) used as a new type of shifter has the advantage that the phase shift can be controlled digitally, without any moving or rotating mechanical elements. The phase shift in the measuring system is induced by changing the coded image displayed on the LCD. The LCD's phase-modulation characteristic is analyzed theoretically and tested. Based on the Fourier transform, a model of the effect of the phase error originating from the LCD is established for four-step phase-shifting interferometry, and the error range is obtained. To reduce the error, a new error-compensation algorithm is put forward: the error is obtained by processing the interferogram, the interferogram is compensated, and the measurement results are then obtained from the four-step phase-shifting interferograms. Theoretical analysis and simulation results demonstrate the feasibility of this approach to improve measurement accuracy.
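
    For reference, the standard four-step relations that such an error model perturbs (textbook form; the paper's LCD-specific error model is not reproduced here) are:

    I_k = I_0\,\bigl[1 + \gamma\cos(\varphi + k\pi/2)\bigr], \quad k = 0,1,2,3,
    \qquad
    \varphi = \arctan\frac{I_3 - I_1}{I_0 - I_2}.

    A miscalibrated step (an actual increment of π/2 + δ per frame) leaves a residual phase error oscillating at twice the fringe frequency, which is the characteristic signature usually exploited to diagnose and compensate shifter error.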

  3. Ozone Profile Retrievals from the OMPS on Suomi NPP

    NASA Astrophysics Data System (ADS)

    Bak, J.; Liu, X.; Kim, J. H.; Haffner, D. P.; Chance, K.; Yang, K.; Sun, K.; Gonzalez Abad, G.

    2017-12-01

    We verify and correct the Ozone Mapping and Profiler Suite (OMPS) Nadir Mapper (NM) L1B v2.0 data with the aim of producing accurate ozone profile retrievals, using an optimal-estimation-based inversion method in the 302.5-340 nm fitting window. The evaluation of available slit functions demonstrates that preflight-measured slit functions represent OMPS measurements better than derived Gaussian slit functions. Our OMPS fitting residuals contain significant wavelength- and cross-track-dependent biases, and thereby serious cross-track striping errors are found in preliminary retrievals, especially in the troposphere. To eliminate the systematic component of the fitting residuals, we apply a "soft calibration" to OMPS radiances. With the soft calibration, the amplitude of fitting residuals decreases from 1% to 0.2% over low/mid latitudes, and thereby the consistency of tropospheric ozone retrievals between OMPS and the Ozone Monitoring Instrument (OMI) is substantially improved. A common-mode correction is implemented for additional radiometric calibration, which improves retrievals especially at high latitudes, where the amplitude of fitting residuals decreases by a factor of 2. We estimate the floor noise error of OMPS measurements from standard deviations of the fitting residuals. The derived error in the Huggins band (~0.1%) is 2 times smaller than the OMI floor noise error and 2 times larger than the OMPS L1B measurement error. The OMPS floor noise errors better constrain our retrievals, maximizing measurement information and stabilizing our fitting residuals. The final precision of the fitting residuals is less than 0.1% at low/mid latitudes, with ~1 degree of freedom for signal for tropospheric ozone, so that we meet the general requirements for successful tropospheric ozone retrievals. To assess whether the quality of OMPS ozone retrievals is acceptable for scientific use, we will characterize OMPS ozone profile retrievals, present an error analysis, and validate the retrievals against a reference dataset. The useful information on the vertical distribution of ozone is limited to below 40 km when using OMPS NM measurements alone, due to the absence of the Hartley-band ozone wavelengths. This shortcoming will be addressed with a joint ozone profile retrieval using Nadir Profiler (NP) measurements covering the 250 to 310 nm range.

  4. A method of treating the non-grey error in total emittance measurements

    NASA Technical Reports Server (NTRS)

    Heaney, J. B.; Henninger, J. H.

    1971-01-01

    In techniques for the rapid determination of total emittance, the sample is generally exposed to surroundings that are at a different temperature than the sample's surface. When the infrared spectral reflectance of the surface is spectrally selective, these techniques introduce an error into the total emittance values. Surfaces of aluminum overcoated with oxides of various thicknesses fall into this class. Because they are often used as temperature control coatings on satellites, their emittances must be accurately known. The magnitude of the error was calculated for Alzak and silicon oxide-coated aluminum and was shown to be dependent on the thickness of the oxide coating. The results demonstrate that, because the magnitude of the error is thickness-dependent, it is generally impossible or impractical to eliminate it by calibrating the measuring device.

  5. Structural power flow measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falter, K.J.; Keltie, R.F.

    Previous investigations of structural power flow through beam-like structures resulted in some unexplained anomalies in the calculated data. In order to develop structural power flow measurement as a viable technique for machine tool design, the causes of these anomalies needed to be found. Once found, techniques for eliminating the errors could be developed. Error sources were found in the experimental apparatus itself as well as in the instrumentation. Although flexural waves are the carriers of power in the experimental apparatus, at some frequencies longitudinal waves were excited which were picked up by the accelerometers and altered power measurements. Errors were found in the phase and gain response of the sensors and amplifiers used for measurement. A transfer function correction technique was employed to compensate for these instrumentation errors.

  6. Reliable before-fabrication forecasting of normal and touch mode MEMS capacitive pressure sensor: modeling and simulation

    NASA Astrophysics Data System (ADS)

    Jindal, Sumit Kumar; Mahajan, Ankush; Raghuwanshi, Sanjeev Kumar

    2017-10-01

    An analytical model and numerical simulation of the performance of MEMS capacitive pressure sensors in both normal and touch modes are required to predict the behavior of the sensor prior to fabrication. Obtaining such information should be based on a complete analysis of performance parameters such as the deflection of the diaphragm, the change of capacitance when the diaphragm deflects, and the sensitivity of the sensor. In the literature, limited work has been carried out on this issue; moreover, the tolerance error introduced by polynomial approximation factors cannot be overlooked. Reliable before-fabrication forecasting requires exact mathematical calculation of the parameters involved. A second-order polynomial equation is derived mathematically for the key performance parameters of both modes. This eliminates the approximation factor, and an exact result can be studied while maintaining high accuracy. The elimination of approximation factors and the approach to exact results are based on a new design parameter (δ) that we propose. The design parameter gives designers an initial hint of how the sensor will behave once it is fabricated. The complete work is supported by extensive mathematical detailing of all the parameters involved. We then verified our claims using MATLAB® simulation. Since MATLAB® effectively provides the simulation theory for the design approach, the more complicated finite element method is not used.
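
    As context for the normal-mode analysis, the standard small-deflection plate model can be evaluated numerically as below (a hedged sketch: the material constants and geometry are invented, and this textbook model is not the paper's second-order polynomial or its design parameter δ):

    import numpy as np

    eps0 = 8.854e-12                 # vacuum permittivity, F/m
    E, nu = 170e9, 0.28              # assumed silicon Young's modulus, Poisson ratio
    a, h, g = 500e-6, 10e-6, 5e-6    # assumed diaphragm radius, thickness, gap (m)

    D = E * h**3 / (12 * (1 - nu**2))          # flexural rigidity

    def capacitance(p, n=5000):
        """Normal-mode capacitance with w(r) = w0*(1-(r/a)^2)^2, w0 = p*a^4/(64*D)."""
        w0 = p * a**4 / (64 * D)
        r = (np.arange(n) + 0.5) * (a / n)     # midpoint integration over radius
        w = w0 * (1 - (r / a) ** 2) ** 2
        return np.sum(2 * np.pi * r * eps0 / (g - w)) * (a / n)

    p_touch = 64 * D * g / a**4                # pressure at which w0 reaches the gap
    print(f"touch-mode onset near {p_touch/1e3:.0f} kPa")
    for p in (0.0, 20e3, 40e3):                # stay in normal mode (w0 < g)
        print(f"{p/1e3:5.0f} kPa -> {capacitance(p)*1e12:.2f} pF")

    Above p_touch the center of the diaphragm contacts the insulated electrode and the sensor enters touch mode, which requires a different deflection profile than the one sketched here.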

  7. Improving CMD Areal Density Analysis: Algorithms and Strategies

    NASA Astrophysics Data System (ADS)

    Wilson, R. E.

    2014-06-01

    Essential ideas, successes, and difficulties of Areal Density Analysis (ADA) for color-magnitude diagrams (CMDs) of resolved stellar populations are examined, with explanation of various algorithms and strategies for optimal performance. A CMD-generation program computes theoretical datasets with simulated observational error, and a solution program inverts the problem by the method of Differential Corrections (DC) so as to compute parameter values from observed magnitudes and colors, with standard error estimates and correlation coefficients. ADA promises not only impersonal results, but also significant saving of labor, especially where a given dataset is analyzed with several evolution models. Observational errors and multiple star systems, along with various single star characteristics and phenomena, are modeled directly via the Functional Statistics Algorithm (FSA). Unlike Monte Carlo, FSA is not dependent on a random number generator. Discussions include difficulties and overall requirements, such as the need for fast evolutionary computation and realization of goals within machine memory limits. Degradation of results due to the influence of pixelization on derivatives, Initial Mass Function (IMF) quantization, IMF steepness, low Areal Densities (A), and large variation in A are reduced or eliminated through a variety of schemes that are explained sufficiently for general application. The Levenberg-Marquardt and MMS algorithms for improvement of solution convergence are contained within the DC program. An example of convergence, which typically is very good, is shown in tabular form. A number of theoretical and practical solution issues are discussed, as are prospects for further development.
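
    The DC update step itself is the standard weighted least-squares correction (textbook form, stated here for orientation; ADA's particular weighting scheme is in the paper):

    \Delta\mathbf{p} = \bigl(A^{\mathsf{T}} W A + \lambda I\bigr)^{-1} A^{\mathsf{T}} W\, \mathbf{r},

    with design matrix A_{mk} = ∂(model_m)/∂p_k, weight matrix W, residual vector r between observed and model areal densities, and the update p ← p + Δp iterated to convergence; λ = 0 gives plain DC, while λ > 0 gives the Levenberg-Marquardt damping mentioned above.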

  8. Genotype-Based Association Mapping of Complex Diseases: Gene-Environment Interactions with Multiple Genetic Markers and Measurement Error in Environmental Exposures

    PubMed Central

    Lobach, Iryna; Fan, Ruzong; Carroll, Raymond J.

    2011-01-01

    With the advent of dense single nucleotide polymorphism genotyping, population-based association studies have become the major tools for identifying human disease genes and for fine gene mapping of complex traits. We develop a genotype-based approach for association analysis of case-control studies of gene-environment interactions in the case when environmental factors are measured with error and genotype data are available on multiple genetic markers. To directly use the observed genotype data, we propose two genotype-based models: genotype effect and additive effect models. Our approach offers several advantages. First, the proposed risk functions can directly incorporate the observed genotype data while modeling the linkage disequilibrium information in the regression coefficients, thus eliminating the need to infer haplotype phase. Compared with the haplotype-based approach, an estimating procedure based on the proposed methods can be much simpler and significantly faster. In addition, there is no potential risk due to haplotype phase estimation. Further, by fitting the proposed models, it is possible to analyze the risk alleles/variants of complex diseases, including their dominant or additive effects. To model measurement error, we adopt the pseudo-likelihood method by Lobach et al. [2008]. Performance of the proposed method is examined using simulation experiments. An application of our method is illustrated using a population-based case-control study of association between calcium intake and the risk of colorectal adenoma development. PMID:21031455

  9. Sustained attention performance during sleep deprivation: evidence of state instability

    NASA Technical Reports Server (NTRS)

    Doran, S. M.; Van Dongen, H. P.; Dinges, D. F.

    2001-01-01

    Nathaniel Kleitman was the first to observe that sleep deprivation in humans did not eliminate the ability to perform neurobehavioral functions, but it did make it difficult to maintain stable performance for more than a few minutes. To investigate variability in performance as a function of sleep deprivation, n = 13 subjects were tested every 2 hours on a 10-minute, sustained-attention, psychomotor vigilance task (PVT) throughout 88 hours of total sleep deprivation (TSD condition), and compared to a control group of n = 15 subjects who were permitted a 2-hour nap every 12 hours (NAP condition) throughout the 88-hour period. PVT reaction time means and standard deviations increased markedly among subjects and within each individual subject in the TSD condition relative to the NAP condition. TSD subjects also had increasingly greater performance variability as a function of time on task after 18 hours of wakefulness. During sleep deprivation, variability in PVT performance reflected a combination of normal timely responses, errors of omission (i.e., lapses), and errors of commission (i.e., responding when no stimulus was present). Errors of omission and errors of commission were highly intercorrelated across deprivation in the TSD condition (r = 0.85, p = 0.0001), suggesting that performance instability is more likely to include compensatory effort than a lack of motivation. The marked increases in PVT performance variability as sleep loss continued supports the "state instability" hypothesis, which posits that performance during sleep deprivation is increasingly variable due to the influence of sleep initiating mechanisms on the endogenous capacity to maintain attention and alertness, thereby creating an unstable state that fluctuates within seconds and that cannot be characterized as either fully awake or asleep.

  10. Elimination of single-beam substitution error in diffuse reflectance measurements using an integrating sphere.

    PubMed

    Vidovic, Luka; Majaron, Boris

    2014-02-01

    Diffuse reflectance spectra (DRS) of biological samples are commonly measured using an integrating sphere (IS). To account for the incident light spectrum, measurement begins by placing a highly reflective white standard against the IS sample opening and collecting the reflected light. After replacing the white standard with the test sample of interest, DRS of the latter is determined as the ratio of the two values at each involved wavelength. However, such a substitution may alter the fluence rate inside the IS. This leads to distortion of measured DRS, which is known as single-beam substitution error (SBSE). Barring the use of more complex experimental setups, the literature states that only approximate corrections of the SBSE are possible, e.g., by using look-up tables generated with calibrated low-reflectivity standards. We present a practical method for elimination of SBSE when using IS equipped with an additional reference port. Two additional measurements performed at this port enable a rigorous elimination of SBSE. Our experimental characterization of SBSE is replicated by theoretical derivation. This offers an alternative possibility of computational removal of SBSE based on advance characterization of a specific DRS setup. The influence of SBSE on quantitative analysis of DRS is illustrated in one application example.
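
    One plausible shape of the two-port normalization (a hedged reconstruction consistent with the abstract, not the paper's actual equations; S and R denote sample-port and reference-port signals, with subscripts w and s for the white-standard and sample configurations, and R_std the calibrated reflectance of the white standard) is:

    R_{\mathrm{sample}} = R_{\mathrm{std}}\,\frac{S_{s}/R_{s}}{S_{w}/R_{w}},

    i.e., dividing each sample-port reading by the simultaneous reference-port reading cancels the change in sphere fluence caused by the substitution, which is exactly the term a single-beam measurement omits.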

  11. [Investigating phonological planning processes in speech production through a speech-error induction technique].

    PubMed

    Nakayama, Masataka; Saito, Satoru

    2015-08-01

    The present study investigated principles of phonological planning, a common serial ordering mechanism for speech production and phonological short-term memory. Nakayama and Saito (2014) have investigated the principles by using a speech-error induction technique, in which participants were exposed to an auditory distractor word immediately before an utterance of a target word. They demonstrated within-word adjacent mora exchanges and serial position effects on error rates. These findings support, respectively, the temporal distance and the edge principles at a within-word level. As this previous study induced errors using word distractors created by exchanging adjacent morae in the target words, it is possible that the speech errors are expressions of lexical intrusions reflecting interactive activation of phonological and lexical/semantic representations. To eliminate this possibility, the present study used nonword distractors that had no lexical or semantic representations. This approach successfully replicated the error patterns identified in the abovementioned study, further confirming that the temporal distance and edge principles are organizing precepts in phonological planning.

  12. Radar error statistics for the space shuttle

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1979-01-01

    C-band and S-band radar error statistics recommended for use with ground-tracking programs to process space shuttle tracking data are presented. The statistics are divided into two parts: bias error statistics, using the subscript B, and high-frequency error statistics, using the subscript q. Bias errors may be slowly varying to constant. High-frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in the correction for atmospheric refraction effects. High-frequency noise was mainly due to hardware and to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line-of-sight scintillations were identified.
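
    A compact way to write such two-part statistics (an illustrative form using the abstract's subscripts; the AR(1) term is this editor's example of sample-to-sample correlation, and the report's actual numbers are not reproduced) is:

    \varepsilon(t) = b_B + q(t), \qquad
    \sigma^{2} = \sigma_B^{2} + \sigma_q^{2}, \qquad
    q_k = \rho\, q_{k-1} + w_k,

    where b_B is a slowly varying (to constant) bias, q(t) is rapidly varying noise, and |ρ| < 1 models correlation between successive samples (ρ = 0 recovering uncorrelated noise).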

  13. Colour and spatial cueing in low-prevalence visual search.

    PubMed

    Russell, Nicholas C C; Kunar, Melina A

    2012-01-01

    In visual search, 30-40% of targets with a prevalence rate of 2% are missed, compared to 7% of targets with a prevalence rate of 50% (Wolfe, Horowitz, & Kenner, 2005). This "low-prevalence" (LP) effect is thought to occur as participants are making motor errors, changing their response criteria, and/or quitting their search too soon. We investigate whether colour and spatial cues, known to improve visual search when the target has a high prevalence (HP), benefit search when the target is rare. Experiments 1 and 2 showed that although knowledge of the target's colour reduces miss errors overall, it does not eliminate the LP effect as more targets were missed at LP than at HP. Furthermore, detection of a rare target is significantly impaired if it appears in an unexpected colour-more so than if the prevalence of the target is high (Experiment 2). Experiment 3 showed that, if a rare target is exogenously cued, target detection is improved but still impaired relative to high-prevalence conditions. Furthermore, if the cue is absent or invalid, the percentage of missed targets increases. Participants were given the option to correct motor errors in all three experiments, which reduced but did not eliminate the LP effect. The results suggest that although valid colour and spatial cues improve target detection, participants still miss more targets at LP than at HP. Furthermore, invalid cues at LP are very costly in terms of miss errors. We discuss our findings in relation to current theories and applications of LP search.

  14. Improved thermal lattice Boltzmann model for simulation of liquid-vapor phase change

    NASA Astrophysics Data System (ADS)

    Li, Qing; Zhou, P.; Yan, H. J.

    2017-12-01

    In this paper, an improved thermal lattice Boltzmann (LB) model is proposed for simulating liquid-vapor phase change, which is aimed at improving an existing thermal LB model for liquid-vapor phase change [S. Gong and P. Cheng, Int. J. Heat Mass Transfer 55, 4923 (2012), 10.1016/j.ijheatmasstransfer.2012.04.037]. First, we emphasize that the replacement of ∇·(λ∇T)/(ρc_V) with ∇·(χ∇T) is an inappropriate treatment for diffuse interface modeling of liquid-vapor phase change. Furthermore, the error terms ∂_{t0}(Tv) + ∇·(Tvv), which exist in the macroscopic temperature equation recovered from the previous model, are eliminated in the present model in a way that is consistent with the philosophy of the LB method. Moreover, the discrete effect of the source term is also eliminated in the present model. Numerical simulations are performed for droplet evaporation and bubble nucleation to validate the capability of the model for simulating liquid-vapor phase change. It is shown that the numerical results of the improved model agree well with those of a finite-difference scheme. Meanwhile, it is found that the replacement of ∇·(λ∇T)/(ρc_V) with ∇·(χ∇T) leads to significant numerical errors, and the error terms in the recovered macroscopic temperature equation also result in considerable errors.

  15. Self-referenced locking of optical coherence by single-detector electronic-frequency tagging

    NASA Astrophysics Data System (ADS)

    Shay, T. M.; Benham, Vincent; Spring, Justin; Ward, Benjamin; Ghebremichael, F.; Culpepper, Mark A.; Sanchez, Anthony D.; Baker, J. T.; Pilkington, D.; Berdine, Richard

    2006-02-01

    We report a novel coherent beam combining technique. This is the first actively phase-locked optical fiber array that eliminates the need for a separate reference beam; in addition, only a single photodetector is required. The far-field central spot of the array is imaged onto the photodetector to produce the phase-control loop signals. Each leg of the fiber array is phase modulated with a separate RF frequency, thus tagging the optical phase shift of each leg with its own RF frequency. The optical phase errors for the individual array legs are then separated in the electronic domain. In contrast with previous active phase-locking techniques, in our system the reference beam is spatially overlapped with all the RF-modulated fiber leg beams on a single detector. The phase shift between the optical wave in the reference leg and in each RF-modulated leg is measured separately in the electronic domain, and the phase error signal is fed back to the LiNbO3 phase modulator for that leg to minimize its phase error relative to the reference leg. The advantages of this technique are 1) the elimination of the reference beam and beam combination optics and 2) the electronic separation of the phase error signals without any degradation of the phase-locking accuracy. We present the first theoretical model for self-referenced LOCSET and describe experimental results for a 3 x 3 array.
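
    The frequency-tagged demodulation can be illustrated with a toy closed-loop simulation (a hedged sketch only: the tag frequencies, dither depth, loop gain, and three-leg geometry are invented, and real LOCSET hardware details are not modeled):

    import numpy as np

    fs = 2.0e6                                   # sample rate, Hz
    t = np.arange(0, 5e-3, 1 / fs)               # one averaging window
    tags = np.array([17e3, 23e3, 31e3])          # RF tag frequency per leg, Hz
    beta = 0.1                                   # phase-dither depth, rad
    phases = np.array([0.40, -0.75, 1.10])       # unknown leg phases, rad

    def detector(phases):
        """Far-field central spot intensity on the single photodetector."""
        field = np.ones_like(t, dtype=complex)   # un-dithered reference leg
        for f, phi in zip(tags, phases):
            field += np.exp(1j * (phi + beta * np.sin(2 * np.pi * f * t)))
        return np.abs(field) ** 2

    for _ in range(30):                          # closed phase-control loop
        intensity = detector(phases)
        # Lock-in demodulation at each tag isolates that leg's error signal
        # (about -2*beta*sin(phase) for small dither, plus small cross terms).
        err = np.array([2 * np.mean(intensity * np.sin(2 * np.pi * f * t))
                        for f in tags])
        phases = phases + 2.0 * err              # drive each phase modulator
    print(np.round(phases, 4))                   # all legs near 0, the reference phase

    Each mean-with-sine product acts as a lock-in amplifier at one tag frequency, so a single detector yields a separate, signed error signal per leg; pushing every modulator against its own error drives all legs to the reference phase.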

  16. Computerized pharmaceutical intervention to reduce reconciliation errors at hospital discharge in Spain: an interrupted time-series study.

    PubMed

    García-Molina Sáez, C; Urbieta Sanz, E; Madrigal de Torres, M; Vicente Vera, T; Pérez Cárceles, M D

    2016-04-01

    It is well known that medication reconciliation at discharge is a key strategy to ensure proper drug prescription and the effectiveness and safety of any treatment. Different types of interventions to reduce reconciliation errors at discharge have been tested, many of which are based on the use of electronic tools as they are useful to optimize the medication reconciliation process. However, not all countries are progressing at the same speed in this task and not all tools are equally effective. So it is important to collate updated country-specific data in order to identify possible strategies for improvement in each particular region. Our aim therefore was to analyse the effectiveness of a computerized pharmaceutical intervention to reduce reconciliation errors at discharge in Spain. A quasi-experimental interrupted time-series study was carried out in the cardio-pneumology unit of a general hospital from February to April 2013. The study consisted of three phases: pre-intervention, intervention and post-intervention, each involving 23 days of observations. At the intervention period, a pharmacist was included in the medical team and entered the patient's pre-admission medication in a computerized tool integrated into the electronic clinical history of the patient. The effectiveness was evaluated by the differences between the mean percentages of reconciliation errors in each period using a Mann-Whitney U test accompanied by Bonferroni correction, eliminating autocorrelation of the data by first using an ARIMA analysis. In addition, the types of error identified and their potential seriousness were analysed. A total of 321 patients (119, 105 and 97 in each phase, respectively) were included in the study. For the 3966 medicaments recorded, 1087 reconciliation errors were identified in 77·9% of the patients. The mean percentage of reconciliation errors per patient in the first period of the study was 42·18%, falling to 19·82% during the intervention period (P = 0·000). When the intervention was withdrawn, the mean percentage of reconciliation errors increased again to 27·72% (P = 0·008). The difference between the percentages of pre- and post-intervention periods was statistically significant (P = 0·000). Most reconciliation errors were due to omission (46·7%) or incomplete prescription (43·8%), and 35·3% of which could have caused harm to the patient. A computerized pharmaceutical intervention is shown to reduce reconciliation errors in the context of a high incidence of such errors. © 2016 John Wiley & Sons Ltd.

  17. Increased instrument intelligence--can it reduce laboratory error?

    PubMed

    Jekelis, Albert W

    2005-01-01

    Recent literature has focused on the reduction of laboratory errors and the potential impact on patient management. This study assessed the intelligent, automated preanalytical process-control abilities of newer-generation analyzers, as compared with older analyzers, and the impact on error reduction. Three generations of immuno-chemistry analyzers were challenged with pooled human serum samples for a 3-week period. One of the three analyzers had an intelligent process of fluidics checks, including bubble detection. Bubbles can cause erroneous results due to incomplete sample aspiration. This variable was chosen because it is the most easily controlled sample defect that can be introduced. Traditionally, lab technicians have had to visually inspect each sample for the presence of bubbles. This is time consuming and introduces the possibility of human error. Instruments with bubble detection may be able to eliminate the human factor and reduce errors associated with the presence of bubbles. Specific samples were vortexed daily to introduce a visible quantity of bubbles, then immediately placed in the daily run. Errors were defined as a reported result greater than three standard deviations below the mean and associated with incomplete sample aspiration of the analyte on the individual analyzer. Three standard deviations represented the target limits of proficiency testing. The results of the assays were examined for accuracy and precision. Efficiency, measured as process throughput, was also measured to associate a cost factor and the potential impact of error detection on the overall process. The analyzers' performance stratified according to their level of internal process control. The older analyzers without bubble detection reported 23 erred results. The newest analyzer, with bubble detection, reported one specimen incorrectly. The precision and accuracy of the nonvortexed specimens were excellent and acceptable for all three analyzers. No errors were found in the nonvortexed specimens. There were no significant differences in overall process time for any of the analyzers when tests were arranged in an optimal configuration. The analyzer with advanced fluidic intelligence demonstrated the greatest ability to deal appropriately with an incomplete aspiration by not processing and reporting a result for the sample. This study suggests that preanalytical process-control capabilities can reduce errors. By association, it implies that similar intelligent process controls could favorably impact the error rate and, in the case of this instrument, do so without negatively impacting process throughput. Other improvements may be realized as a result of having an intelligent error-detection process, including further reduction in misreported results, fewer repeats, less operator intervention, and less reagent waste.

  18. Human error and human factors engineering in health care.

    PubMed

    Welch, D L

    1997-01-01

    Human error is inevitable. It happens in health care systems as it does in all other complex systems, and no measure of attention, training, dedication, or punishment is going to stop it. The discipline of human factors engineering (HFE) has been dealing with the causes and effects of human error since the 1940's. Originally applied to the design of increasingly complex military aircraft cockpits, HFE has since been effectively applied to the problem of human error in such diverse systems as nuclear power plants, NASA spacecraft, the process control industry, and computer software. Today the health care industry is becoming aware of the costs of human error and is turning to HFE for answers. Just as early experimental psychologists went beyond the label of "pilot error" to explain how the design of cockpits led to air crashes, today's HFE specialists are assisting the health care industry in identifying the causes of significant human errors in medicine and developing ways to eliminate or ameliorate them. This series of articles will explore the nature of human error and how HFE can be applied to reduce the likelihood of errors and mitigate their effects.

  19. Continental-scale, data-driven predictive assessment of eliminating the vector-borne disease, lymphatic filariasis, in sub-Saharan Africa by 2020.

    PubMed

    Michael, Edwin; Singh, Brajendra K; Mayala, Benjamin K; Smith, Morgan E; Hampton, Scott; Nabrzyski, Jaroslaw

    2017-09-27

    There are growing demands for predicting the prospects of achieving the global elimination of neglected tropical diseases as a result of the institution of large-scale nation-wide intervention programs by the WHO-set target year of 2020. Such predictions will be uncertain due to the impacts that spatial heterogeneity and scaling effects will have on parasite transmission processes, which will introduce significant aggregation errors into any attempt aiming to predict the outcomes of interventions at the broader spatial levels relevant to policy making. We describe a modeling platform that addresses this problem of upscaling from local settings to facilitate predictions at regional levels by the discovery and use of locality-specific transmission models, and we illustrate the utility of using this approach to evaluate the prospects for eliminating the vector-borne disease, lymphatic filariasis (LF), in sub-Saharan Africa by the WHO target year of 2020 using currently applied or newly proposed intervention strategies. We show how a computational platform that couples site-specific data discovery with model fitting and calibration can allow both learning of local LF transmission models and simulations of the impact of interventions that take a fuller account of the fine-scale heterogeneous transmission of this parasitic disease within endemic countries. We highlight how such a spatially hierarchical modeling tool that incorporates actual data regarding the roll-out of national drug treatment programs and spatial variability in infection patterns into the modeling process can produce more realistic predictions of timelines to LF elimination at coarse spatial scales, ranging from district to country to continental levels. Our results show that when locally applicable extinction thresholds are used, only three countries are likely to meet the goal of LF elimination by 2020 using currently applied mass drug treatments, and that switching to more intensive drug regimens, increasing the frequency of treatments, or switching to new triple drug regimens will be required if LF elimination is to be accelerated in Africa. The proportion of countries that would meet the goal of eliminating LF by 2020 may, however, reach up to 24/36 if the WHO 1% microfilaremia prevalence threshold is used and sequential mass drug deliveries are applied in countries. We have developed and applied a data-driven spatially hierarchical computational platform that uses the discovery of locally applicable transmission models in order to predict the prospects for eliminating the macroparasitic disease, LF, at the coarser country level in sub-Saharan Africa. We show that fine-scale spatial heterogeneity in local parasite transmission and extinction dynamics, as well as the exact nature of intervention roll-outs in countries, will impact the timelines to achieving national LF elimination on this continent.

  20. MkMRCC, APUCC and APUBD approaches to 1,n-didehydropolyene diradicals: the nature of through-bond exchange interactions

    NASA Astrophysics Data System (ADS)

    Nishihara, Satomichi; Saito, Toru; Yamanaka, Shusuke; Kitagawa, Yasutaka; Kawakami, Takashi; Okumura, Mitsutaka; Yamaguchi, Kizashi

    2010-10-01

    Mukherjee-type (Mk) state-specific (SS) multi-reference (MR) coupled-cluster (CC) calculations of 1,n-didehydropolyene diradicals were carried out to elucidate singlet-triplet energy gaps via through-bond coupling between terminal radicals. Spin-unrestricted Hartree-Fock (UHF) based coupled-cluster (CC) computations of these diradicals were also performed. Comparison between symmetry-adapted MkMRCC and broken-symmetry (BS) UHF-CC computational results indicated that spin-contamination error remains in UHF-CC solutions at the SD level, although this error had been thought to be negligible for the CC scheme in general. In order to eliminate the spin-contamination error, an approximate spin-projection (AP) scheme was applied to UCC, and the AP procedure indeed eliminated the error, yielding good agreement with MRCC in energy. CCD with spin-unrestricted Brueckner orbitals (UB) was also employed for these polyene diradicals, showing that the large spin-contamination errors of UHF solutions are dramatically reduced, so that the AP scheme for UBD easily removed the remaining spin contamination. Pure- and hybrid-density functional theory (DFT) calculations of the species were also performed. Three different computational schemes for total spin angular momenta were examined for the AP correction of the hybrid DFT. The AP DFT calculations yielded singlet-triplet energy gaps that were in good agreement with those of MRCC, AP UHF-CC and AP UB-CC. Chemical indices such as the diradical character were calculated with all these methods. Implications of the present computational results are discussed in relation to previous RMRCC calculations of diradical species and BS calculations of large exchange-coupled systems.

  1. Study of a Solar Sensor for use in Space Vehicle Orientation Control Systems

    NASA Technical Reports Server (NTRS)

    Spencer, Paul R.

    1961-01-01

    The solar sensor described herein may be used for a variety of space operations requiring solar orientation. The use of silicon solar cells as the sensing elements provides the sensor with sufficient capability to withstand the hazards of a space environment. A method of arranging the cells in a sensor consists simply of mounting them at a large angle to the base. The use of an opaque shield placed between the cells and perpendicular to the base enhances the small-angle sensitivity while adding slightly to the bulk of the sensor. The difference in illumination of these cells as the result of an oblique incidence of the light rays from the reference source causes an electrical error signal which, when used in a battery-bridge circuit, requires a minimum of electrical processing for use in a space-vehicle orientation control system. An error which could occur after prolonged operation of the sensor is that resulting from asymmetrical aging of opposite cells. This could be periodically corrected with a balance potentiometer. A more routine error in the sensor is that produced by reflected earth radiation. This error may be eliminated over a large portion of the operation time by restricting the field of view and, consequently, the capture capability. A more sophisticated method of eliminating this error is to use separate sensors, for capture and fine pointing, along with a switching device. An experimental model has been constructed and tested to yield an output sensitivity of 1.2 millivolts per second of arc with a load resistance of 1,000 ohms and a reference light source of approximately 1,200 foot-candles delivered at the sensor.

  2. Optical cell tracking analysis using a straight-forward approach to minimize processing time for high frame rate data

    NASA Astrophysics Data System (ADS)

    Seeto, Wen Jun; Lipke, Elizabeth Ann

    2016-03-01

    Tracking of rolling cells via in vitro experiment is now commonly performed using customized computer programs. In most cases, two critical challenges continue to limit analysis of cell rolling data: long computation times due to the complexity of tracking algorithms and difficulty in accurately correlating a given cell with itself from one frame to the next, which is typically due to errors caused by cells that either come close in proximity to each other or come in contact with each other. In this paper, we have developed a sophisticated, yet simple and highly effective, rolling cell tracking system to address these two critical problems. This optical cell tracking analysis (OCTA) system first employs ImageJ for cell identification in each frame of a cell rolling video. A custom MATLAB code was written to use the geometric and positional information of all cells as the primary parameters for matching each individual cell with itself between consecutive frames and to avoid errors when tracking cells that come within close proximity to one another. Once the cells are matched, rolling velocity can be obtained for further analysis. The use of ImageJ for cell identification eliminates the need for high level MATLAB image processing knowledge. As a result, only fundamental MATLAB syntax is necessary for cell matching. OCTA has been implemented in the tracking of endothelial colony forming cell (ECFC) rolling under shear. The processing time needed to obtain tracked cell data from a 2 min ECFC rolling video recorded at 70 frames per second with a total of over 8000 frames is less than 6 min using a computer with an Intel® Core™ i7 CPU 2.80 GHz (8 CPUs). This cell tracking system benefits cell rolling analysis by substantially reducing the time required for post-acquisition data processing of high frame rate video recordings and preventing tracking errors when individual cells come in close proximity to one another.
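
    The frame-to-frame matching step can be sketched as a greedy nearest-neighbor assignment with a geometric gate (illustrative Python; the distance and area thresholds are invented, and the published MATLAB implementation is not reproduced here):

    import math

    MAX_DIST = 15.0        # px: assumed per-frame motion bound
    MAX_AREA_RATIO = 1.3   # geometric gate against identity swaps

    def match_frames(prev, curr):
        """prev/curr: lists of dicts {'id','x','y','area'}; map curr index -> prev id."""
        pairs = sorted(
            (math.hypot(c['x'] - p['x'], c['y'] - p['y']), pi, ci)
            for pi, p in enumerate(prev) for ci, c in enumerate(curr))
        used_prev, mapping = set(), {}
        for dist, pi, ci in pairs:               # greedy: closest pairs first
            if dist > MAX_DIST:
                break                            # remaining pairs are farther still
            p, c = prev[pi], curr[ci]
            ratio = max(p['area'], c['area']) / min(p['area'], c['area'])
            if pi in used_prev or ci in mapping or ratio > MAX_AREA_RATIO:
                continue
            mapping[ci] = p['id']                # carry the identity forward
            used_prev.add(pi)
        return mapping

    prev = [{'id': 1, 'x': 10, 'y': 5, 'area': 80},
            {'id': 2, 'x': 40, 'y': 6, 'area': 95}]
    curr = [{'id': None, 'x': 14, 'y': 5, 'area': 82},
            {'id': None, 'x': 47, 'y': 7, 'area': 97}]
    print(match_frames(prev, curr))              # {0: 1, 1: 2}

    Gating on simple geometry (here, area) in addition to distance is what prevents identity swaps when two cells pass close to one another; it also keeps per-frame cost low enough for high-frame-rate recordings.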

  3. How smart is your BEOL? productivity improvement through intelligent automation

    NASA Astrophysics Data System (ADS)

    Schulz, Kristian; Egodage, Kokila; Tabbone, Gilles; Garetto, Anthony

    2017-07-01

    The back end of line (BEOL) workflow in the mask shop still has crucial issues throughout all standard steps which are inspection, disposition, photomask repair and verification of repair success. All involved tools are typically run by highly trained operators or engineers who setup jobs and recipes, execute tasks, analyze data and make decisions based on the results. No matter how experienced operators are and how good the systems perform, there is one aspect that always limits the productivity and effectiveness of the operation: the human aspect. Human errors can range from seemingly rather harmless slip-ups to mistakes with serious and direct economic impact including mask rejects, customer returns and line stops in the wafer fab. Even with the introduction of quality control mechanisms that help to reduce these critical but unavoidable faults, they can never be completely eliminated. Therefore the mask shop BEOL cannot run in the most efficient manner as unnecessary time and money are spent on processes that still remain labor intensive. The best way to address this issue is to automate critical segments of the workflow that are prone to human errors. In fact, manufacturing errors can occur for each BEOL step where operators intervene. These processes comprise of image evaluation, setting up tool recipes, data handling and all other tedious but required steps. With the help of smart solutions, operators can work more efficiently and dedicate their time to less mundane tasks. Smart solutions connect tools, taking over the data handling and analysis typically performed by operators and engineers. These solutions not only eliminate the human error factor in the manufacturing process but can provide benefits in terms of shorter cycle times, reduced bottlenecks and prediction of an optimized workflow. In addition such software solutions consist of building blocks that seamlessly integrate applications and allow the customers to use tailored solutions. To accommodate for the variability and complexity in mask shops today, individual workflows can be supported according to the needs of any particular manufacturing line with respect to necessary measurement and production steps. At the same time the efficiency of assets is increased by avoiding unneeded cycle time and waste of resources due to the presence of process steps that are very crucial for a given technology. In this paper we present details of which areas of the BEOL can benefit most from intelligent automation, what solutions exist and the quantification of benefits to a mask shop with full automation by the use of a back end of line model.

  4. Strong diffusion formulation of Markov chain ensembles and its optimal weaker reductions

    NASA Astrophysics Data System (ADS)

    Güler, Marifi

    2017-10-01

    Two self-contained diffusion formulations, in the form of coupled stochastic differential equations, are developed for the temporal evolution of state densities over an ensemble of Markov chains evolving independently under a common transition rate matrix. Our first formulation derives from Kurtz's strong approximation theorem of density-dependent Markov jump processes [Stoch. Process. Their Appl. 6, 223 (1978), 10.1016/0304-4149(78)90020-0] and, therefore, strongly converges with an error bound of the order of ln N / N for ensemble size N. The second formulation eliminates some fluctuation variables, and correspondingly some noise terms, within the governing equations of the strong formulation, with the objective of achieving a simpler analytic formulation and a faster computation algorithm when the transition rates are constant or slowly varying. There, the reduction of the structural complexity is optimal in the sense that the elimination of any given set of variables takes place with the lowest attainable increase in the error bound. The resultant formulations are supported by numerical simulations.
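
    A hedged sketch of the strong formulation's structure (the standard density-dependent diffusion form; the paper's exact equations and its reduced variants are not reproduced here): for state densities x_i over an ensemble of N chains with transition rates q_{ij},

    \mathrm{d}x_i = \sum_{j\neq i}\bigl(q_{ji}x_j - q_{ij}x_i\bigr)\,\mathrm{d}t
    + \frac{1}{\sqrt{N}}\sum_{j\neq i}\Bigl(\sqrt{q_{ji}x_j}\;\mathrm{d}W_{ji} - \sqrt{q_{ij}x_i}\;\mathrm{d}W_{ij}\Bigr),

    with an independent Wiener process W_{ij} per directed transition. Kurtz's theorem bounds the deviation of such a diffusion from the underlying jump process at the stated ln N / N order; the weaker reductions correspond to dropping selected noise terms W_{ij}.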

  5. Phonological and Motor Errors in Individuals with Acquired Sound Production Impairment

    ERIC Educational Resources Information Center

    Buchwald, Adam; Miozzo, Michele

    2012-01-01

    Purpose: This study aimed to compare sound production errors arising due to phonological processing impairment with errors arising due to motor speech impairment. Method: Two speakers with similar clinical profiles who produced similar consonant cluster simplification errors were examined using a repetition task. We compared both overall accuracy…

  6. Velocity structure of the shallow lunar crust

    NASA Technical Reports Server (NTRS)

    Gangi, A. F.; Yen, T. E.

    1979-01-01

    Data from the thumper shots of the Apollo 14 and Apollo 16 active seismic experiments are analyzed to test whether the velocity variation in the shallow lunar crust (depths less than or equal to 10 m) can be represented by a self-compacting power-law model or by a constant-velocity-layer model. Although filtering and stacking improved the S/N ratios, it was found that measuring the arrival times or amplitudes of arrivals beyond 32 m was not possible. The data quality precluded a definitive distinction between the power-law velocity variation and the layered-velocity model. It was found, however, that the shallow lunar regolith is made up of fine particles, which supports the idea of a 1/6-power velocity model. Analysis of the amplitudes of first arrivals revealed large errors in the data due to variations in the geophone sensitivities and shot strengths; a least-squares method that uses data redundancy was employed to eliminate them.
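    The least-squares use of redundancy can be sketched generically: if each observed log-amplitude is modeled as the sum of a shot-strength term and a geophone-sensitivity term, the redundant shot-geophone pairs overdetermine both sets of unknowns. This is a reconstruction of the idea, not the authors' code; the additive log-amplitude model and all values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_shots, n_geo = 8, 12
true_shot = rng.normal(0, 0.3, n_shots)   # log shot-strength variations
true_geo = rng.normal(0, 0.3, n_geo)      # log geophone-sensitivity variations
# Observed log-amplitude for every (shot, geophone) pair, plus noise.
logA = true_shot[:, None] + true_geo[None, :] + rng.normal(0, 0.05, (n_shots, n_geo))

# Build the linear system logA_ij = s_i + g_j and solve it in least squares.
rows = []
for i in range(n_shots):
    for j in range(n_geo):
        r = np.zeros(n_shots + n_geo)
        r[i] = 1.0
        r[n_shots + j] = 1.0
        rows.append(r)
A = np.array(rows)
sol, *_ = np.linalg.lstsq(A, logA.ravel(), rcond=None)

# The split between shots and geophones has one free constant; remove each
# group's mean before comparing the estimates with the truth.
s_est = sol[:n_shots] - sol[:n_shots].mean()
print(np.round(s_est - (true_shot - true_shot.mean()), 3))   # ~ 0
```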

  7. Development of a miniature autofluorescence device for the early diagnosis of squamous cell carcinoma

    NASA Astrophysics Data System (ADS)

    Patil, Ajeetkumar; Rao K., Swati; V. K., Unnikrishnan; Pai, Keerthilatha M.; Kartha, V. B.; Chidangil, Santhosh

    2017-07-01

    Autofluorescence spectroscopy offers a noninvasive and promising tool for detecting alterations in the biochemical composition of tissues and cells in the presence of disease. It has the added advantage of being highly objective, because diagnostic evaluation is performed by statistical methods, eliminating errors arising from lack of experience, fatigue, and the subjectivity of visual perception. The present work involves the design and assembly of a low-cost, miniature oral cancer screening device for routine clinical applications. A miniature system was assembled from small, cost-effective components, such as a compact light source and a miniature spectrometer, in a hand-held unit configuration. The performance of the system was evaluated using an animal (mouse) SCC model. The system can be used in handheld operation, which makes it very useful for applications such as screening of populations susceptible to squamous cell carcinoma.

  8. Wide range radioactive gas concentration detector

    DOEpatents

    Anderson, David F.

    1984-01-01

    A wide range radioactive gas concentration detector and monitor which is capable of measuring radioactive gas concentrations over a range of eight orders of magnitude. The device of the present invention is designed to have an ionization chamber which is sufficiently small to give a fast response time for measuring radioactive gases but sufficiently large to provide accurate readings at low concentration levels. Closely spaced parallel plate grids provide a uniform electric field in the active region to improve the accuracy of measurements and reduce ion migration time so as to virtually eliminate errors due to ion recombination. The parallel plate grids are fabricated with a minimal surface area to reduce the effects of contamination resulting from absorption of contaminating materials on the surface of the grids. Additionally, the ionization chamber wall is spaced a sufficient distance from the active region of the ionization chamber to minimize contamination effects.

  9. Effects of Salt Secretion on Psychrometric Determinations of Water Potential of Cotton Leaves

    PubMed Central

    Klepper, Betty; Barrs, H. D.

    1968-01-01

    Thermocouple psychrometers gave lower estimates of water potential of cotton leaves than did a pressure chamber. This difference was considerable for turgid leaves, but progressively decreased for leaves with lower water potentials and fell to zero at water potentials below about −10 bars. The conductivity of washings from cotton leaves removed from the psychrometric equilibration chambers was related to the magnitude of this discrepancy in water potential, indicating that the discrepancy is due to salts on the leaf surface which make the psychrometric estimates too low. This error, which may be as great as 400 to 500%, cannot be eliminated by washing the leaves because salts may be secreted during the equilibration period. Therefore, a thermocouple psychrometer is not suitable for measuring the water potential of cotton leaves when it is above about −10 bars. PMID:16656895

  10. Design of pulse waveform for waveform division multiple access UWB wireless communication system.

    PubMed

    Yin, Zhendong; Wang, Zhirui; Liu, Xiaohui; Wu, Zhilu

    2014-01-01

    A new multiple access scheme, Waveform Division Multiple Access (WDMA) based on orthogonal wavelet functions, is presented. After studying the correlation properties of different categories of single wavelet functions, the one with the best correlation property is chosen as the foundation for the combined waveform. In the communication system, each user is assigned a different combined orthogonal waveform. Simulations demonstrate that the combined waveform is more suitable than a single wavelet function as a communication medium in a WDMA system. Due to the excellent orthogonality, the bit error rate (BER) for multiple users with combined waveforms is very close to that of a single user in a synchronous system; that is, the multiple access interference (MAI) is almost eliminated. Furthermore, even in an asynchronous system without multiuser detection after the matched filters, the results remain satisfactory when using the third combination mode described in the study.

  11. Non-intrusive high voltage measurement using slab coupled optical sensors

    NASA Astrophysics Data System (ADS)

    Stan, Nikola; Chadderdon, Spencer; Selfridge, Richard H.; Schultz, Stephen M.

    2014-03-01

    We present a non-intrusive optical fiber sensor for measuring high-voltage transients. The sensor converts the unknown voltage to an electric field, which is then measured using a slab-coupled optical fiber sensor (SCOS). Because everything in the sensor except the electrodes is made of dielectric materials, and because the field sensor is small, the sensor is minimally perturbing to the measured voltage. We present the details of the sensor design, which eliminates arcing and minimizes local dielectric breakdown using Teflon blocks and insulation of the whole structure with transformer oil. The structure has a capacitance of less than 3 pF and a resistance greater than 10 GΩ. We show the measurement of a 66.5 kV pulse with a 32.6 μs time constant. The measurement matches the expected value of 67.8 kV with less than 2% error.

  12. Measurement of surface microtopography

    NASA Technical Reports Server (NTRS)

    Wall, S. D.; Farr, T. G.; Muller, J.-P.; Lewis, P.; Leberl, F. W.

    1991-01-01

    Acquisition of ground truth data for use in microwave interaction modeling requires measurement of surface roughness sampled at intervals comparable to a fraction of the microwave wavelength and extensive enough to adequately represent the statistics of a surface unit. Sub-centimetric measurement accuracy is thus required over large areas, and existing techniques are usually inadequate. A technique is discussed for acquiring the necessary photogrammetric data using twin film cameras mounted on a helicopter. In an attempt to eliminate tedious data reduction, an automated technique was applied to the helicopter photographs, and the results were compared to those produced by conventional stereogrammetry. Derived root-mean-square (RMS) roughness for the same stereo pair was 7.5 cm for the automated technique versus 6.5 cm for the manual method. The principal source of error is probably vegetation in the scene, which affects the automated technique but is ignored by a human operator.

  13. Geodesy and metrology with a transportable optical clock

    NASA Astrophysics Data System (ADS)

    Grotti, Jacopo; Koller, Silvio; Vogt, Stefan; Häfner, Sebastian; Sterr, Uwe; Lisdat, Christian; Denker, Heiner; Voigt, Christian; Timmen, Ludger; Rolland, Antoine; Baynes, Fred N.; Margolis, Helen S.; Zampaolo, Michel; Thoumany, Pierre; Pizzocaro, Marco; Rauf, Benjamin; Bregolin, Filippo; Tampellini, Anna; Barbieri, Piero; Zucco, Massimo; Costanzo, Giovanni A.; Clivati, Cecilia; Levi, Filippo; Calonico, Davide

    2018-05-01

    Optical atomic clocks, due to their unprecedented stability [1-3] and uncertainty [3-6], are already being used to test physical theories [7,8] and herald a revision of the International System of Units [9,10]. However, to unlock their potential for cross-disciplinary applications such as relativistic geodesy [11], a major challenge remains: their transformation from highly specialized instruments restricted to national metrology laboratories into flexible devices deployable in different locations [12-14]. Here, we report the first field measurement campaign with a transportable 87Sr optical lattice clock [12]. We use it to determine the gravity potential difference between the middle of a mountain and a location 90 km away, exploiting both local and remote clock comparisons to eliminate potential clock errors. A local comparison with a 171Yb lattice clock [15] also serves as an important check on the international consistency of independently developed optical clocks. This campaign demonstrates the exciting prospects for transportable optical clocks.

  14. Uncertain behaviours of integrated circuits improve computational performance.

    PubMed

    Yoshimura, Chihiro; Yamaoka, Masanao; Hayashi, Masato; Okuyama, Takuya; Aoki, Hidetaka; Kawarabayashi, Ken-ichi; Mizuno, Hiroyuki

    2015-11-20

    Improvements to the performance of conventional computers have mainly been achieved through semiconductor scaling; however, scaling is reaching its limits. Natural phenomena, such as quantum superposition and stochastic resonance, have been introduced into new computing paradigms to improve performance beyond these limits. Here, we explain how the uncertain behaviours of devices caused by semiconductor scaling can improve the performance of computers. We prototyped an integrated circuit that performs a ground-state search of the Ising model. Bit errors in the memory cells holding the current search state occur probabilistically when fluctuations are inserted into the dynamic device characteristics, as is expected to happen naturally in future scaled chips. As a result, we observed better solution accuracy than without fluctuations. Although uncertain device behaviour has traditionally been something to eliminate, we demonstrate that it can become the key to improving computational performance.
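    The mechanism described, random bit flips helping a ground-state search escape local minima, can be illustrated with a toy software sketch (not the authors' circuit; all parameters here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def ising_energy(s, J):
    """Energy E = -1/2 * s^T J s of a spin vector s in {-1,+1}^n."""
    return -0.5 * s @ J @ s

def local_search(J, n_steps=20000, flip_prob=0.0):
    """Greedy single-spin descent; flip_prob emulates probabilistic
    bit errors in the memory cells that hold the search state."""
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)
    E = ising_energy(s, J)
    best = E
    for _ in range(n_steps):
        i = rng.integers(n)
        dE = 2 * s[i] * (J[i] @ s)   # energy change if spin i flips
        if dE < 0 or rng.random() < flip_prob:
            s[i] = -s[i]             # accept: downhill, or a random "bit error"
            E += dE
            best = min(best, E)
    return best

n = 64
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)
print("no fluctuations  :", local_search(J, flip_prob=0.0))
print("with fluctuations:", local_search(J, flip_prob=0.02))
```

    With a small flip probability the search typically reaches lower energies on rough landscapes, which is the qualitative effect the abstract reports.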

  15. A median-Gaussian filtering framework for Moiré pattern noise removal from X-ray microscopy image.

    PubMed

    Wei, Zhouping; Wang, Jian; Nichol, Helen; Wiebe, Sheldon; Chapman, Dean

    2012-02-01

    Moiré pattern noise in Scanning Transmission X-ray Microscopy (STXM) imaging introduces significant errors into qualitative and quantitative image analysis. Due to the complex origin of the noise, it is difficult to avoid Moiré pattern noise during the image data acquisition stage. In this paper, we introduce a post-processing method for filtering Moiré pattern noise from STXM images. The method includes semi-automatic detection of the spectral peaks in the Fourier amplitude spectrum using a local median filter, and elimination of the spectral noise peaks using a Gaussian notch filter. The proposed median-Gaussian filtering framework gives good results for STXM images whose dimensions are powers of two, provided that parameters such as the threshold, the sizes of the median and Gaussian filters, and the size of the low-frequency window are properly selected. Copyright © 2011 Elsevier Ltd. All rights reserved.
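    A minimal sketch of this kind of median-plus-notch pipeline, reconstructed from the abstract with invented parameter values (not the authors' code):

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_moire(img, med_size=15, thresh=4.0, sigma=2.0, dc_win=8):
    """Detect spectral peaks against a local-median background and
    suppress them with Gaussian notches in the Fourier domain."""
    F = np.fft.fftshift(np.fft.fft2(img))
    amp = np.abs(F)
    background = median_filter(amp, size=med_size)
    peaks = amp > thresh * background          # candidate Moire peaks
    # Protect the low-frequency window around DC (the image content).
    cy, cx = np.array(img.shape) // 2
    peaks[cy - dc_win:cy + dc_win, cx - dc_win:cx + dc_win] = False
    # Build a multiplicative Gaussian notch mask, one notch per peak.
    mask = np.ones_like(amp)
    yy, xx = np.indices(img.shape)
    for py, px in zip(*np.nonzero(peaks)):
        d2 = (yy - py) ** 2 + (xx - px) ** 2
        mask *= 1.0 - np.exp(-d2 / (2 * sigma ** 2))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```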

  16. Exocentric direction judgements in computer-generated displays and actual scenes

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Smith, Stephen; Mcgreevy, Michael W.; Grunwald, Arthur J.

    1989-01-01

    One of the most remarkable perceptual properties of common experience is that the perceived shapes of known objects are constant despite movements about them which transform their projections on the retina. This perceptual ability is one aspect of shape constancy (Thouless, 1931; Metzger, 1953; Borresen and Lichte, 1962). It requires that the viewer be able to sense and discount his or her relative position and orientation with respect to a viewed object. This discounting of relative position may be derived directly from the ranging information provided by stereopsis, from motion parallax, from vestibularly sensed rotation and translation, or from corollary information associated with voluntary movement. It is argued that: (1) errors in exocentric judgements of the azimuth of a target generated on an electronic perspective display are not viewpoint-independent, but are influenced by the specific geometry of their perspective projection; (2) eliminating binocular conflict by replacing electronic displays with actual scenes removes a previously reported equidistance tendency in azimuth error, but the viewpoint dependence remains; (3) the pattern of exocentrically judged azimuth error in real scenes viewed with a viewing direction depressed 22 deg and rotated ±22 deg with respect to a reference direction could not be explained by overestimation of the depression angle, i.e., a slant overestimation.

  17. Children's identification of faces from lineups: the effects of lineup presentation and instructions on accuracy.

    PubMed

    Beresford, Jayne; Blades, Mark

    2006-09-01

    The authors investigated whether the type of lineup and instructions given to children 6-7 or 9-10 years of age affected their identification accuracy. Children witnessed a man stealing property and were later asked to identify him in either photo or video lineups. Some lineups contained the target and some did not. Two lineup procedures were used (standard or elimination), and 2 types of instruction were used (standard or cautioning about false identification). Standard lineups with cautioning instructions decreased target-absent errors with no associated reduction in correct identifications, but elimination lineups did not. Lineup media had an interaction effect whereby correct identifications were reduced in video but not photo elimination lineups. The results are discussed in a forensic context. (c) 2006 APA, all rights reserved

  18. A Novel Multi-Camera Calibration Method based on Flat Refractive Geometry

    NASA Astrophysics Data System (ADS)

    Huang, S.; Feng, M. C.; Zheng, T. X.; Li, F.; Wang, J. Q.; Xiao, L. F.

    2018-03-01

    Multi-camera calibration plays an important role in many fields. In this paper, we present a novel multi-camera calibration method based on flat refractive geometry. All cameras can acquire calibration images of a transparent glass calibration board (TGCB) at the same time. The use of the TGCB leads to a refractive phenomenon that can generate calibration error; the theory of flat refractive geometry is employed to eliminate this error, so the new method accounts for the refraction of the TGCB. Moreover, the bundle adjustment method is used to minimize the reprojection error and obtain optimized calibration results. Finally, four-camera calibration results on real data show that the mean value and standard deviation of the reprojection error of our method are 4.3411e-05 and 0.4553 pixel, respectively. The experimental results show that the proposed method is accurate and reliable.

  19. Precision analysis of autonomous orbit determination using star sensor for Beidou MEO satellite

    NASA Astrophysics Data System (ADS)

    Shang, Lin; Chang, Jiachao; Zhang, Jun; Li, Guotong

    2018-04-01

    This paper focuses on the accuracy of autonomous orbit determination for Beidou MEO satellites using onboard observations from the star sensors and an infrared horizon sensor. A polynomial fitting method is proposed to calibrate the periodic error in the observations of the infrared horizon sensor, which otherwise greatly degrades the accuracy of autonomous orbit determination. Test results show that the periodic error can be eliminated using the polynomial fitting method. The User Range Error (URE) of a Beidou MEO satellite is less than 2 km when the star sensor and infrared horizon sensor observations are used for autonomous orbit determination. The error of the Right Ascension of the Ascending Node (RAAN) is less than 60 μrad, and the star sensor observations can be used as a spatial basis for the Beidou MEO navigation constellation.
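    As a rough illustration of calibration by polynomial fitting (the signal model, fitting order, and all values below are invented; the paper's actual observation model is more involved): fit a low-order polynomial to capture the slowly varying component, so that the residual isolates the periodic error, which can then be subtracted.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 500)               # time over several orbits (arbitrary units)
truth = 0.002 * t                             # slowly varying geometric signal
periodic_err = 0.05 * np.sin(2 * np.pi * t)   # sensor's periodic error (one cycle/orbit)
obs = truth + periodic_err + rng.normal(0, 0.005, t.size)

coeffs = np.polyfit(t, obs, deg=3)            # low-order fit to the slow component
slow = np.polyval(coeffs, t)
residual = obs - slow                         # ~ periodic error + noise
calibrated = obs - residual                   # periodic component removed

print("RMS before:", np.sqrt(np.mean((obs - truth) ** 2)))
print("RMS after :", np.sqrt(np.mean((calibrated - truth) ** 2)))
```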

  20. Do CAS measurements correlate with EOS 3D alignment measurements in primary TKA?

    PubMed

    Meijer, Marrigje F; Boerboom, Alexander L; Bulstra, Sjoerd K; Reininga, Inge H F; Stevens, Martin

    2017-09-01

    The objective of this study was to compare intraoperative computer-assisted surgery (CAS) alignment measurements during total knee arthroplasty (TKA) with pre- and postoperative coronal alignment measurements using EOS 3D reconstructions. In a prospective study, 56 TKAs using imageless CAS were performed, and coronal alignment measurements were recorded twice: before the bone cuts were made and after implantation of the prosthesis. Pre- and postoperative coronal alignment measurements were performed using EOS 3D reconstructions; thanks to the EOS stereoradiography system, measurement errors due to malpositioning and deformity during acquisition are eliminated. CAS measurements were compared with the EOS 3D reconstructions. The varus/valgus angle (VV), mechanical lateral distal femoral angle (mLDFA) and mechanical medial proximal tibial angle (mMPTA) were measured. Significantly different VV angles were measured pre- and postoperatively with CAS compared to EOS. For preoperative measurements, mLDFA did not differ significantly, but a significantly larger mMPTA in valgus was measured with CAS. The results of this study indicate that differences between CAS measurements and pre- and postoperative EOS 3D measurements are due mainly to the difference between weight-bearing and non-weight-bearing positions and to potential errors in the validity and reliability of the CAS system. EOS 3D measurements overestimate the VV angle in lower limbs with substantial mechanical axis deviation. For lower limbs with minor mechanical axis deviation, as well as for mMPTA measurements, CAS measures more valgus than EOS. Ultimately, the results of this study are of clinical relevance, since they raise concerns regarding the validity and reliability of CAS systems in TKA. Level of evidence: IIb.

  1. The impact of noisy and misaligned attenuation maps on human-observer performance at lesion detection in SPECT

    NASA Astrophysics Data System (ADS)

    Wells, R. G.; Gifford, H. C.; Pretorius, P. H.; Farncombe, T. H.; Narayanan, M. V.; King, M. A.

    2002-06-01

    We have demonstrated an improvement due to attenuation correction (AC) at the task of lesion detection in thoracic SPECT images. However, increased noise in the transmission data due to aging sources or very large patients, and misregistration of the emission and transmission maps, can reduce the accuracy of the AC and may result in a loss of lesion detectability. We investigated the impact of noise in and misregistration of transmission data, on the detection of simulated Ga-67 thoracic lesions. Human-observer localization-receiver-operating-characteristic (LROC) methodology was used to assess performance. Both emission and transmission data were simulated using the MCAT computer phantom. Emission data were reconstructed using OSEM incorporating AC and detector resolution compensation. Clinical noise levels were used in the emission data. The transmission-data noise levels ranged from zero (noise-free) to 32 times the measured clinical levels. Transaxial misregistrations of 0.32, 0.63, and 1.27 cm between emission and transmission data were also examined. Three different algorithms were considered for creating the attenuation maps: filtered backprojection (FBP), unbounded maximum-likelihood (ML), and block-iterative transmission AB (BITAB). Results indicate that a 16-fold increase in the noise was required to eliminate the benefit afforded by AC, when using FBP or ML to reconstruct the attenuation maps. When using BITAB, no significant loss in performance was observed for a 32-fold increase in noise. Misregistration errors are also a concern as even small errors here reduce the performance gains of AC.

  2. Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollingsworth, Jeff

    2014-07-31

    The purpose of this project was to develop tools and techniques to improve the ability of computational scientists to investigate and correct problems (bugs) in their programs. Specifically, the University of Maryland component of this project focused on the problems associated with the finite number of bits available in a computer to represent numeric values. In large-scale scientific computation, numbers are frequently added to and multiplied with each other billions of times, so even small errors due to the representation of numbers can accumulate into big errors. However, using too many bits to represent a number results in additional computation, memory, and energy costs. Thus it is critical to find the right size for numbers. This project focused on several aspects of this general problem. First, we developed a tool to look for cancellations: the catastrophic loss of precision that occurs when two numbers of nearly equal magnitude are subtracted (or numbers of nearly opposite value are added), so that most of their leading digits cancel. Second, we developed a suite of tools to allow programmers to identify exactly how much precision is required for each operation in their program. This tool allows programmers to verify that enough precision is available and, more importantly, to find cases where extra precision could be eliminated so that the program uses less memory, computer time, or energy. These tools use advanced binary modification techniques to allow the analysis of actual optimized code. The system, called Craft, has been applied to a number of benchmarks and real applications.
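    To make the cancellation problem concrete, here is a small illustration of the phenomenon such a tool detects (this is not the Craft tool itself; the detection heuristic is a simplification):

```python
import math

def cancellation_bits(a, b):
    """Rough count of significand bits lost when computing a + b,
    estimated by comparing operand and result magnitudes."""
    s = a + b
    if s == 0 or a == 0 or b == 0:
        return 0
    big = max(abs(a), abs(b))
    return max(0, int(math.log2(big / abs(s))))

# Two numbers that agree in their leading digits: adding the negation
# cancels those digits and leaves mostly rounding noise.
a = 1.0000000123456789
b = -1.0
print(a + b)                                    # ~1.2345e-8, few correct digits
print(cancellation_bits(a, b), "bits of precision lost")
```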

  3. Scatter correction for x-ray conebeam CT using one-dimensional primary modulation

    NASA Astrophysics Data System (ADS)

    Zhu, Lei; Gao, Hewei; Bennett, N. Robert; Xing, Lei; Fahrig, Rebecca

    2009-02-01

    Recently, we developed an efficient scatter correction method for x-ray imaging using primary modulation. A two-dimensional (2D) primary modulator with spatially variant attenuating materials is inserted between the x-ray source and the object to separate primary and scatter signals in the Fourier domain. Due to the high modulation frequency in both directions, the 2D primary modulator has a strong scatter correction capability for objects with arbitrary geometries. However, signal processing on the modulated projection data requires knowledge of the modulator position and attenuation. In practical systems, mainly due to system gantry vibration, beam hardening effects and the ramp-filtering in the reconstruction, the insertion of the 2D primary modulator results in artifacts such as rings in the CT images if no post-processing is applied. In this work, we eliminate the source of artifacts in the primary modulation method by using a one-dimensional (1D) modulator. The modulator is aligned parallel to the ramp-filtering direction to avoid error magnification, while sufficient primary modulation is still achieved for scatter correction on a quasi-cylindrical object, such as a human body. The scatter correction algorithm is also greatly simplified for convenience and stability in practical implementations. The method is evaluated on a clinical CBCT system using the Catphan® 600 phantom. The result shows effective scatter suppression without introducing additional artifacts. In the selected regions of interest, the reconstruction error is reduced from 187.2 HU to 10.0 HU when the proposed method is used.

  4. A Case Study of the Impact of AIRS Temperature Retrievals on Numerical Weather Prediction

    NASA Technical Reports Server (NTRS)

    Reale, O.; Atlas, R.; Jusem, J. C.

    2004-01-01

    Large errors in numerical weather prediction are often associated with explosive cyclogenesis. Most studies focus on the under-forecasting error, i.e., cases of rapidly developing cyclones which are poorly predicted in numerical models. However, the over-forecasting error (i.e., predicting an explosively developing cyclone which does not occur in reality) is a very common error that severely impacts the forecasting skill of all models and may also carry economic costs when associated with operational forecasting. Unnecessary precautions taken by marine activities can result in severe economic loss. Moreover, frequent over-forecasting can undermine reliance on operational weather forecasting. Therefore, it is important to understand and reduce the prediction of extreme weather associated with explosive cyclones which do not actually develop. In this study we choose a very prominent case of over-forecasting error in the northwestern Pacific. A 960 hPa cyclone develops in less than 24 hours in the 5-day forecast, with a deepening rate of about 30 hPa in one day. The cyclone is not observed in the analyses and is thus a case of severe over-forecasting. By assimilating AIRS data, the error is largely eliminated. By following the propagation of the anomaly that generates the spurious cyclone, it is found that a small mid-tropospheric negative geopotential height anomaly over the northern part of the Indian subcontinent in the initial conditions propagates westward, is amplified by orography, and generates a very intense jet streak in the subtropical jet stream, with consequent explosive cyclogenesis over the Pacific. The AIRS assimilation eliminates this anomaly, which may have been caused by erroneous upper-air data, and represents the jet stream more correctly. The energy associated with the jet is distributed over a much broader area, and as a consequence a multiple but much more moderate cyclogenesis is observed.

  5. ON NONSTATIONARY STOCHASTIC MODELS FOR EARTHQUAKES.

    USGS Publications Warehouse

    Safak, Erdal; Boore, David M.

    1986-01-01

    A seismological stochastic model for earthquake ground-motion description is presented. Seismological models are based on the physical properties of the source and the medium and have significant advantages over the widely used empirical models. The model discussed here provides a convenient form for estimating structural response by using random vibration theory. A commonly used random process for ground acceleration, filtered white noise multiplied by an envelope function, introduces some errors in response calculations for structures whose periods are longer than the faulting duration. An alternative random process, the filtered shot-noise process, eliminates these errors.
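    The difference between the two processes can be sketched as follows (simplified; the paper's filter is a seismological spectrum, replaced here by a generic second-order filter, and all parameter values are invented):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(2)
fs, dur = 100.0, 20.0                        # sample rate (Hz), duration (s)
n = int(fs * dur)
t = np.arange(n) / fs
envelope = (t / 2.0) * np.exp(1 - t / 2.0)   # simple build-up/decay shape

# Filtered white noise x envelope: nonstationarity imposed AFTER filtering.
white = rng.normal(size=n)

# Shot noise: Poisson-timed impulses whose rate follows the envelope,
# so nonstationarity enters BEFORE the filter instead.
rate = 20.0 * envelope                       # impulses per second
impulses = rng.poisson(rate / fs) * rng.normal(size=n)

# A generic stable second-order IIR filter standing in for the ground-motion filter.
b, a = [0.1], [1.0, -1.6, 0.81]
fwn = lfilter(b, a, white) * envelope        # filtered white noise, enveloped
fsn = lfilter(b, a, impulses)                # filtered shot noise
print(fwn[:3], fsn[:3])
```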

  6. Instrument Attitude Precision Control

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    2004-01-01

    A novel approach is presented in this paper to analyze attitude precision and control for an instrument gimbaled to a spacecraft subject to an internal disturbance caused by a moving component inside the instrument. Nonlinear differential equations of motion for some sample cases are derived and solved analytically to gain insight into the influence of the disturbance on the attitude pointing error. A simple control law is developed to eliminate the instrument pointing error caused by the internal disturbance. Several cases are presented to demonstrate and verify the concept presented in this paper.

  7. Crosstalk eliminating and low-density parity-check codes for photochromic dual-wavelength storage

    NASA Astrophysics Data System (ADS)

    Wang, Meicong; Xiong, Jianping; Jian, Jiqi; Jia, Huibo

    2005-01-01

    Multi-wavelength storage is an approach to increasing memory density, with the problem of crosstalk to be dealt with. We apply Low Density Parity Check (LDPC) codes as error-correcting codes in photochromic dual-wavelength optical storage, based on an investigation of LDPC codes in optical data storage. A proper method is applied to reduce the crosstalk, and simulation results show that this operation improves Bit Error Rate (BER) performance. At the same time, we can conclude that LDPC codes outperform RS codes in the crosstalk channel.

  8. Data Assimilation in the Presence of Forecast Bias: The GEOS Moisture Analysis

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.; Todling, Ricardo

    1999-01-01

    We describe the application of the unbiased sequential analysis algorithm developed by Dee and da Silva (1998) to the GEOS DAS moisture analysis. The algorithm estimates the persistent component of model error using rawinsonde observations and adjusts the first-guess moisture field accordingly. Results of two seasonal data assimilation cycles show that the moisture analysis bias is almost completely eliminated in all observed regions. The improved analyses cause a sizable reduction in the 6-h forecast bias and a marginal improvement in the error standard deviations.
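    A toy scalar version of bias-aware analysis in the spirit of that algorithm (a didactic reduction, not the GEOS DAS implementation; the gains and noise levels are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
truth, bias = 0.0, 0.8          # true state and persistent forecast bias
gamma, K = 0.2, 0.5             # bias-update and analysis gains (invented)

b_hat = 0.0                     # running estimate of the forecast bias
for k in range(200):
    x_f = truth + bias + rng.normal(0, 0.3)    # biased, noisy forecast
    y = truth + rng.normal(0, 0.2)             # unbiased observation
    # 1) Update the bias estimate from the innovation of the debiased forecast.
    innov = y - (x_f - b_hat)                  # ~ b_hat - bias + noise
    b_hat = b_hat - gamma * innov              # drives b_hat toward the true bias
    # 2) Standard analysis update applied to the debiased first guess.
    x_db = x_f - b_hat
    x_a = x_db + K * (y - x_db)

print("estimated bias:", round(b_hat, 2))      # should approach 0.8
print("last analysis :", round(x_a, 2))        # should hover near the truth, 0.0
```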

  9. Strategic Use of Microscrews for Enhancing the Accuracy of Computer-Guided Implant Surgery in Fully Edentulous Arches: A Case History Report.

    PubMed

    Lee, Du-Hyeong

    Implant guide systems can be classified by their supporting structure as tooth-, mucosa-, or bone-supported. Mucosa-supported guides for fully edentulous arches show lower accuracy in implant placement because of errors in image registration and guide positioning. This article introduces the application of a novel microscrew system for computer-aided implant surgery. This technique can markedly improve the accuracy of computer-guided implant surgery in fully edentulous arches by eliminating errors from image fusion and guide positioning.

  10. Acquisition Theory and Experimental Design: A Critique of Tomasello and Herron.

    ERIC Educational Resources Information Center

    Beck, Maria-Luise; Eubank, Lynn

    1991-01-01

    Caution should be taken in viewing previous research indicating that negative evidence, a special type of error correction to eliminate overgeneralizations, could be crucial to second-language learning, because the underlying theories adopted for that research possibly could be flawed. (10 references) (CB)

  11. Review of Research on Sight Word Instruction.

    ERIC Educational Resources Information Center

    Browder, Diane M.; Lalli, Joseph S.

    1991-01-01

    This review of 20 years of literature on sight word instruction for individuals with handicaps examines effectiveness data for procedures teaching word recognition and comprehension. Covered are "errorless procedures," prompt elimination, stimulus fading, time delay, easy to hard discrimination, and trial and error with feedback. Two tables…

  12. MAGSAT and aeromagnetic data in the North American continent

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Problems were encountered in deriving a proper reference field to be subtracted from the aeromagnetic data obtained from Project MAGNET. Field models tried thus far do not seem to properly eliminate the main field. The MAGSAT data over the North American continent for the period November 1 to December 22, 1979 are being compiled and compared with MAGNET data. Efforts are being made to eliminate the orbital bias errors. A computer program was developed and successfully tested which computes the topographic profile of the Curie-depth isotherm that best fits the observed vector or scalar magnetic field data.

  13. Automated food microbiology: potential for the hydrophobic grid-membrane filter.

    PubMed Central

    Sharpe, A N; Diotte, M P; Dudas, I; Michaud, G L

    1978-01-01

    Bacterial counts obtained on hydrophobic grid-membrane filters were comparable to conventional plate counts for Pseudomonas aeruginosa, Escherichia coli, and Staphylococcus aureus in homogenates from a range of foods. The wide numerical operating range of the hydrophobic grid-membrane filters allowed sequential diluting to be reduced or even eliminated, making them attractive as components in automated systems of analysis. Food debris could be rinsed completely from the unincubated hydrophobic grid-membrane filter surface without affecting the subsequent count, thus eliminating the possibility of counting food particles, a common source of error in electronic counting systems. PMID:100054

  14. Recent progress on air-bearing slumping of segmented thin-shell mirrors for x-ray telescopes: experiments and numerical analysis

    NASA Astrophysics Data System (ADS)

    Zuo, Heng E.; Yao, Youwei; Chalifoux, Brandon D.; DeTienne, Michael D.; Heilmann, Ralf K.; Schattenburg, Mark L.

    2017-08-01

    Slumping (or thermal-shaping) of thin glass sheets onto high precision mandrels was used successfully by NASA Goddard Space Flight Center to fabricate the NuSTAR telescope. But this process requires long thermal cycles and produces mid-range spatial frequency errors due to the anti-stick mandrel coatings. Over the last few years, we have designed and tested non-contact horizontal slumping of round flat glass sheets floating on thin layers of nitrogen between porous air-bearings using fast position control algorithms and precise fiber sensing techniques during short thermal cycles. We recently built a finite element model with ADINA to simulate the viscoelastic behavior of glass during the slumping process. The model utilizes fluid-structure interaction (FSI) to understand the deformation and motion of glass under the influence of air flow. We showed that for the 2D axisymmetric model, experimental and numerical approaches have comparable results. We also investigated the impact of bearing permeability on the resulting shape of the wafers. A novel vertical slumping set-up is also under development to eliminate the undesirable influence of gravity. Progress towards generating mirrors for good angular resolution and low mid-range spatial frequency errors is reported.

  15. Automatic liver segmentation on Computed Tomography using random walkers for treatment planning

    PubMed Central

    Moghbel, Mehrdad; Mashohor, Syamsiah; Mahmud, Rozi; Saripan, M. Iqbal Bin

    2016-01-01

    Segmentation of the liver from Computed Tomography (CT) volumes plays an important role in the choice of treatment strategies for liver diseases. Despite much attention, liver segmentation remains a challenging task due to the lack of visible edges on most boundaries of the liver, coupled with high variability of both intensity patterns and anatomical appearance; all of these difficulties become more prominent in pathological livers. To achieve a more accurate segmentation, a random-walker-based framework is proposed that can segment contrast-enhanced liver CT images with great accuracy and speed. Based on the location of the right lung lobe, the liver dome is automatically detected, eliminating the need for manual initialization. The computational requirements are further minimized by segmenting the rib-caged area; the liver is then extracted using the random walker method. The proposed method achieved one of the highest accuracies reported in the literature on a mixed healthy and pathological liver dataset compared with other segmentation methods, with an overlap error of 4.47% and a Dice similarity coefficient of 0.94, and it showed exceptional accuracy in segmenting pathological livers, with an overlap error of 5.95% and a Dice similarity coefficient of 0.91. PMID:28096782
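    The core step, seeded random walker segmentation, is available off the shelf in scikit-image; a minimal sketch follows (the synthetic image and manual seed placement are invented here, unlike the paper's automatic lung-based initialization):

```python
import numpy as np
from skimage.segmentation import random_walker

rng = np.random.default_rng(4)
# Synthetic "CT slice": a bright blob (organ) on a darker, noisy background.
img = np.zeros((128, 128))
yy, xx = np.indices(img.shape)
img[(yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2] = 1.0
img += rng.normal(0, 0.25, img.shape)

# Seed labels: 1 = organ, 2 = background, 0 = unlabeled (to be decided).
labels = np.zeros(img.shape, dtype=int)
labels[60:68, 60:68] = 1          # interior seed
labels[:8, :] = 2                 # background seed

seg = random_walker(img, labels, beta=50)
print("organ pixels:", np.sum(seg == 1))
```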

  16. Risk of Performance and Behavioral Health Decrements Due to Inadequate Cooperation, Coordination, Communication, and Psychosocial Adaptation within a Team

    NASA Technical Reports Server (NTRS)

    Landon, Lauren Blackwell; Vessey, William B.; Barrett, Jamie D.

    2015-01-01

    A team is defined as: "two or more individuals who interact socially and adaptively, have shared or common goals, and hold meaningful task interdependences; it is hierarchically structured and has a limited life span; in it expertise and roles are distributed; and it is embedded within an organization/environmental context that influences and is influenced by ongoing processes and performance outcomes" (Salas, Stagl, Burke, & Goodwin, 2007, p. 189). From the NASA perspective, a team is commonly understood to be a collection of individuals assigned to support and achieve a particular mission. Thus, depending on context, this definition can encompass both the spaceflight crew and the individuals and teams in the larger multi-team system who are assigned to support that crew during a mission. The Team Risk outcomes of interest are predominantly performance related, with a secondary emphasis on long-term health; this is somewhat unique within the NASA HRP, in that most Risk areas are medically related and primarily focused on long-term health consequences. In many operational environments (e.g., aviation), performance is assessed as the avoidance of errors. However, the research on performance errors is ambiguous. It implies that actions may be dichotomized into "correct" or "incorrect" responses, where incorrect responses or errors are always undesirable. Researchers have argued that this dichotomy is a harmful oversimplification and that it would be more productive to focus on the variability of human performance and how organizations can manage that variability (Hollnagel, Woods, & Leveson, 2006) (Category III). Two problems occur when focusing on performance errors: (1) the errors are infrequent and, therefore, difficult to observe and record; and (2) the errors do not directly correspond to failure. Research reveals that humans are fairly adept at correcting or compensating for performance errors before such errors result in recognizable or recordable failures. Astronauts are notably adept high performers. Most failures are recorded only when multiple small errors occur and humans are unable to recognize and correct or compensate for them in time to prevent a failure (Dismukes, Berman, & Loukopoulos, 2007) (Category III). More commonly, observers record variability in levels of performance. Some teams commit no observable errors but fail to achieve performance objectives or perform only adequately, while other teams commit some errors but perform spectacularly. Successful performance, therefore, cannot be viewed as simply the absence of errors or the avoidance of failure (Johnson Space Center (JSC) Joint Leadership Team, 2008). While failure is commonly attributed to making a major error, focusing solely on the elimination of errors does not significantly reduce the risk of failure. Failure may also occur when performance is simply insufficient or an effort is incapable of adjusting sufficiently to a contextual change (e.g., changing levels of autonomy).

  17. Propagation of stage measurement uncertainties to streamflow time series

    NASA Astrophysics Data System (ADS)

    Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary

    2016-04-01

    Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric errors and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and to non-stationary waves, and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is overall satisfactory. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented; results contrast strongly depending on site characteristics. In some cases, streamflow uncertainty is dominated by stage measurement errors. The results also show the importance of discriminating between systematic and non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are finally discussed.
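    A Monte Carlo sketch of the idea, propagating systematic and non-systematic stage errors through a power-law rating curve (the paper uses a full Bayesian treatment; the curve parameters and error magnitudes below are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
a, b, c = 15.0, 0.2, 1.8                      # rating curve Q = a * (h - b)^c (invented)
h_obs = np.array([0.8, 1.1, 1.5, 2.0, 2.6])   # observed stages (m)

n_mc = 10000
sig_sys, sig_rand = 0.02, 0.01                # systematic / non-systematic stage sd (m)
Q = np.empty((n_mc, h_obs.size))
for m in range(n_mc):
    eps_sys = rng.normal(0, sig_sys)                  # ONE offset per series
    eps_rand = rng.normal(0, sig_rand, h_obs.size)    # independent per time step
    h_true = h_obs + eps_sys + eps_rand
    Q[m] = a * np.clip(h_true - b, 0, None) ** c

q50 = np.percentile(Q, 50, axis=0)
q05, q95 = np.percentile(Q, [5, 95], axis=0)
print(np.c_[q50, q05, q95])                   # median and 90% interval per time step
```

    Sampling the systematic error once per realization, rather than per time step, is what makes it matter for long-term averages, as the abstract emphasizes.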

  18. [Epidemiology of refractive errors].

    PubMed

    Wolfram, C

    2017-07-01

    Refractive errors are very common and can lead to severe pathological changes in the eye. This article analyzes the epidemiology of refractive errors in the general population in Germany and worldwide, and describes common definitions of refractive errors and the clinical characteristics of pathological changes. Refractive errors differ between age groups due to refractive changes over the lifetime and also due to generation-specific factors. Current research on the etiology of refractive errors has strengthened the case for the influence of environmental factors, which has led to new strategies for the prevention of refractive pathologies.

  19. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  20. Evaluating the design of an earth radiation budget instrument with system simulations. Part 2: Minimization of instantaneous sampling errors for CERES-I

    NASA Technical Reports Server (NTRS)

    Stowe, Larry; Hucek, Richard; Ardanuy, Philip; Joyce, Robert

    1994-01-01

    Much of the new record of broadband earth radiation budget satellite measurements to be obtained during the late 1990s and early twenty-first century will come from the dual-radiometer Clouds and the Earth's Radiant Energy System Instrument (CERES-I) flown aboard sun-synchronous polar orbiters. Simulation studies conducted in this work for an early afternoon satellite orbit indicate that spatial root-mean-square (rms) sampling errors of instantaneous CERES-I shortwave flux estimates will range from about 8.5 to 14.0 W/sq m on a 2.5 deg latitude and longitude grid resolution. Rms errors in longwave flux estimates are only about 20% as large and range from 1.5 to 3.5 W/sq m. These results are based on an optimal cross-track scanner design that includes 50% footprint overlap to eliminate gaps in the top-of-the-atmosphere coverage, and a 'smallest' footprint size to increase the ratio of the number of observations lying within grid-area boundaries to the number lying on them. Total instantaneous measurement error also depends on the variability of anisotropic reflectance and emission patterns and on the retrieval methods used to generate target-area fluxes. Three retrieval procedures using both CERES-I scanners (cross-track and rotating azimuth plane) are considered: (1) the baseline Earth Radiation Budget Experiment (ERBE) procedure, which assumes that errors due to the use of mean angular dependence models (ADMs) in the radiance-to-flux inversion process nearly cancel when averaged over grid areas; (2) a collocation approach in which, to estimate N, instantaneous ADMs are estimated from the multiangular, collocated observations of the two scanners, and these observed models replace the mean models in the computation of satellite flux estimates; and (3) the scene flux approach, which conducts separate target-area retrievals for each ERBE scene category and combines their results using area weighting by scene type. The ERBE retrieval performs best when the simulated radiance field departs from the ERBE mean models by less than 10%. For larger perturbations, both the scene flux and collocation methods produce less error than the ERBE retrieval. The scene flux technique is preferable, however, because it involves fewer restrictive assumptions.

  1. Pharmacokinetic design optimization in children and estimation of maturation parameters: example of cytochrome P450 3A4.

    PubMed

    Bouillon-Pichault, Marion; Jullien, Vincent; Bazzoli, Caroline; Pons, Gérard; Tod, Michel

    2011-02-01

    The aim of this work was to determine whether optimizing the study design in terms of ages and sampling times for a drug eliminated solely via cytochrome P450 3A4 (CYP3A4) would allow us to accurately estimate the pharmacokinetic parameters throughout the entire childhood timespan, while taking into account age- and weight-related changes. A linear monocompartmental model with first-order absorption was used successively with three different residual error models and previously published pharmacokinetic parameters ("true values"). The optimal ages were established by D-optimization using the CYP3A4 maturation function to create "optimized demographic databases." The post-dose times for each previously selected age were determined by D-optimization using the pharmacokinetic model to create "optimized sparse sampling databases." We simulated concentrations by applying the population pharmacokinetic model to the optimized sparse sampling databases to create optimized concentration databases. The latter were modeled to estimate population pharmacokinetic parameters. We then compared true and estimated parameter values. The established optimal design comprised four age ranges: 0.008 years old (i.e., around 3 days), 0.192 years old (i.e., around 2 months), 1.325 years old, and adults, with the same number of subjects per group and three or four samples per subject, in accordance with the error model. The population pharmacokinetic parameters that we estimated with this design were precise and unbiased (root mean square error [RMSE] and mean prediction error [MPE] less than 11% for clearance and distribution volume and less than 18% for ka), whereas the maturation parameters were unbiased but less precise (MPE < 6% and RMSE < 37%). Based on our results, taking growth and maturation into account a priori in a pediatric pharmacokinetic study is theoretically feasible. However, it requires that very early ages be included in studies, which may present an obstacle to the use of this approach. First-pass effects, alternative elimination routes, and combined elimination pathways should also be investigated.
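    For intuition, here is a sketch of the underlying model: a one-compartment oral-absorption model whose clearance carries an allometric weight term and a sigmoid CYP3A4 maturation term. The structure is standard in the field, but every numeric value below is invented, not the paper's estimates.

```python
import numpy as np

def conc(t, dose, wt, pma_weeks,
         cl_adult=10.0, v_per_kg=0.7, ka=1.0, tm50=45.0, hill=3.0):
    """Concentration-time profile, one compartment, first-order absorption.
    Clearance = adult value x allometric size term x maturation fraction."""
    mf = pma_weeks ** hill / (pma_weeks ** hill + tm50 ** hill)   # maturation
    cl = cl_adult * (wt / 70.0) ** 0.75 * mf                      # L/h
    v = v_per_kg * wt                                             # L
    ke = cl / v
    # Standard one-compartment solution (assumes ka != ke).
    return (dose * ka / (v * (ka - ke))) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0.1, 24, 50)
print(conc(t, dose=50.0, wt=4.0, pma_weeks=44.0)[:5])   # neonate example
```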

  2. Driving range estimation for electric vehicles based on driving condition identification and forecast

    NASA Astrophysics Data System (ADS)

    Pan, Chaofeng; Dai, Wei; Chen, Liao; Chen, Long; Wang, Limei

    2017-10-01

    With serious environmental pollution in our cities and the ongoing depletion of oil resources, electric vehicles are becoming highly favored as a means of transport, not only for their low noise but also for their high energy efficiency and zero tailpipe emissions. The power battery is the energy source of electric vehicles, but it still has a few shortcomings, notably low energy density, high cost, and short cycle life, which result in limited mileage compared with conventional passenger vehicles. Vehicle energy consumption rates differ greatly under different environments and driving conditions, and the estimation error of current driving-range methods is relatively large because the effects of environmental temperature and driving conditions are not considered. The development of a driving range estimation method will therefore have a great impact on electric vehicles. A new driving range estimation model based on the combination of driving cycle identification and prediction is proposed and investigated. This model can effectively eliminate mileage errors and shows good convergence and robustness. Initially, driving cycles are identified using kernel principal component feature parameters and a fuzzy C-means clustering algorithm. Secondly, a fuzzy rule between the characteristic parameters and energy consumption is established in the MATLAB/Simulink environment. Furthermore, a Markov algorithm and a BP (back propagation) neural network are utilized to predict future driving conditions and improve the accuracy of the remaining-range estimation. Finally, the driving range estimation method is tested under the ECE 15 condition on a rotary drum test bench, and the experimental results are compared with the estimates. The results show that the proposed method can not only estimate the remaining mileage but also eliminate the fluctuation of the residual range under different driving conditions.
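    A toy sketch of the Markov piece: predict the distribution over driving-condition classes a few steps ahead and convert it into a remaining-range estimate. The classes, transition matrix, and consumption rates are invented; the paper combines this with fuzzy identification and a BP network.

```python
import numpy as np

# Three driving-condition classes: 0 = urban, 1 = suburban, 2 = highway.
P = np.array([[0.70, 0.25, 0.05],     # transition matrix (invented)
              [0.20, 0.60, 0.20],
              [0.05, 0.25, 0.70]])
kwh_per_km = np.array([0.18, 0.14, 0.20])   # consumption per class (invented)

def remaining_range(current_class, battery_kwh, horizon=20):
    """Average the predicted consumption rate over a rolling horizon of
    segments, then divide the remaining energy by that expected rate."""
    p = np.eye(3)[current_class]
    rates = []
    for _ in range(horizon):
        p = p @ P                      # one-step-ahead class distribution
        rates.append(p @ kwh_per_km)   # expected consumption for that step
    return battery_kwh / np.mean(rates)

print(round(remaining_range(current_class=0, battery_kwh=30.0), 1), "km")
```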

  3. Crystal Genetics, Inc.

    PubMed

    Kermani, Bahram G

    2016-07-01

    Crystal Genetics, Inc. is an early-stage genetic test company, focused on achieving the highest possible clinical-grade accuracy and comprehensiveness for detecting germline (e.g., in hereditary cancer) and somatic (e.g., in early cancer detection) mutations. Crystal's mission is to significantly improve the health status of the population, by providing high accuracy, comprehensive, flexible and affordable genetic tests, primarily in cancer. Crystal's philosophy is that when it comes to detecting mutations that are strongly correlated with life-threatening diseases, the detection accuracy of every single mutation counts: a single false-positive error could cause severe anxiety for the patient. And, more importantly, a single false-negative error could potentially cost the patient's life. Crystal's objective is to eliminate both of these error types.

  4. 3 CFR - Finding and Recapturing Improper Payments

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    Memorandum of March 10, 2010: Finding and Recapturing Improper Payments. Memorandum for the Heads of Executive Departments and Agencies. My Administration is committed to reducing payment errors and eliminating...

  5. Total Quality Management in Libraries: A Sourcebook.

    ERIC Educational Resources Information Center

    O'Neil, Rosanna M., Comp.

    Total Quality Management (TQM) brings together the best aspects of organizational excellence by driving out fear, offering customer-driven products and services, doing it right the first time by eliminating error, and maintaining inventory control without waste. Libraries are service organizations which are constantly trying to improve service.…

  6. Eliminating ambiguity in digital signals

    NASA Technical Reports Server (NTRS)

    Weber, W. J., III

    1979-01-01

    A multiamplitude minimum shift keying (MAMSK) transmission system with a method of differential encoding overcomes the problem of ambiguity associated with advanced digital-transmission techniques, with little or no penalty in transmission rate, error rate, or system complexity. The principle of the method is that, if signal points are properly encoded and decoded, bits are detected correctly regardless of phase ambiguities.

  7. Phase modulation for reduced vibration sensitivity in laser-cooled clocks in space

    NASA Technical Reports Server (NTRS)

    Klipstein, W.; Dick, G.; Jefferts, S.; Walls, F.

    2001-01-01

    The standard interrogation technique in atomic beam clocks is square-wave frequency modulation (SWFM), which suffers a first-order sensitivity to vibrations, as changes in the transit time of the atoms translate into perceived frequency errors. Square-wave phase modulation (SWPM) interrogation eliminates sensitivity to this noise.

  8. Semiannual Report to Congress, No. 49. April 1, 2004-September 30, 2004

    ERIC Educational Resources Information Center

    US Department of Education, 2004

    2004-01-01

    This report highlights significant work of the U.S. Department of Education's Office of Inspector General for the 6-month period ending September 30, 2004. Sections include: Activities and Accomplishments; Elimination of Fraud and Error in Student Aid Programs; Budget and Performance Integration; Financial Management; Expanded Electronic…

  9. An analysis of the adaptability of Loran-C to air navigation

    NASA Technical Reports Server (NTRS)

    Littlefield, J. A.

    1981-01-01

    The sources of position error characteristic of the Loran-C navigation system were identified. Particular emphasis was given to their point of entry as well as their elimination. It is shown that the ratio of realized accuracy to theoretical accuracy of Loran-C is highly receiver dependent.

  10. Error and Error Mitigation in Low-Coverage Genome Assemblies

    PubMed Central

    Hubisz, Melissa J.; Lin, Michael F.; Kellis, Manolis; Siepel, Adam

    2011-01-01

    The recent release of twenty-two new genome sequences has dramatically increased the data available for mammalian comparative genomics, but twenty of these new sequences are currently limited to ∼2× coverage. Here we examine the extent of sequencing error in these 2× assemblies, and its potential impact in downstream analyses. By comparing 2× assemblies with high-quality sequences from the ENCODE regions, we estimate the rate of sequencing error to be 1–4 errors per kilobase. While this error rate is fairly modest, sequencing error can still have surprising effects. For example, an apparent lineage-specific insertion in a coding region is more likely to reflect sequencing error than a true biological event, and the length distribution of coding indels is strongly distorted by error. We find that most errors are contributed by a small fraction of bases with low quality scores, in particular, by the ends of reads in regions of single-read coverage in the assembly. We explore several approaches for automatic sequencing error mitigation (SEM), making use of the localized nature of sequencing error, the fact that it is well predicted by quality scores, and information about errors that comes from comparisons across species. Our automatic methods for error mitigation cannot replace the need for additional sequencing, but they do allow substantial fractions of errors to be masked or eliminated at the cost of modest amounts of over-correction, and they can reduce the impact of error in downstream phylogenomic analyses. Our error-mitigated alignments are available for download. PMID:21340033
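    One of the mitigation ideas, masking the low-quality base calls that contribute most of the errors, can be sketched very simply (the threshold is invented, and the paper's SEM methods additionally use cross-species comparison):

```python
def mask_low_quality(seq, quals, min_q=20):
    """Replace base calls whose Phred quality is below min_q with 'N'
    so that downstream analyses skip them instead of trusting errors."""
    return "".join(b if q >= min_q else "N"
                   for b, q in zip(seq, quals))

seq = "ACGTACGTAC"
quals = [38, 35, 12, 40, 8, 33, 30, 15, 37, 36]   # Phred scores
print(mask_low_quality(seq, quals))               # -> "ACNTNCGNAC"
```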

  11. Speech Enhancement of Mobile Devices Based on the Integration of a Dual Microphone Array and a Background Noise Elimination Algorithm.

    PubMed

    Chen, Yung-Yue

    2018-05-08

    Mobile devices are often used in our daily lives for speech and communication, and their speech quality is always degraded by the environmental noises surrounding their users. Regretfully, an effective background noise reduction solution cannot easily be developed for this speech enhancement problem. For these reasons, a methodology is systematically proposed to eliminate the effects of background noises on the speech communication of mobile devices. This methodology integrates a dual microphone array with a background noise elimination algorithm. The proposed background noise elimination algorithm includes a whitening process, a speech modelling method and an H₂ estimator. Owing to the adoption of the dual microphone array, a low-cost design can be obtained for the speech enhancement of mobile devices. Practical tests have proven that the proposed method is immune to random background noises, and noiseless speech can be obtained after executing the denoising process.

  12. Streamlining the medication process improves safety in the intensive care unit.

    PubMed

    Benoit, E; Eckert, P; Theytaz, C; Joris-Frasseren, M; Faouzi, M; Beney, J

    2012-09-01

    Multiple interventions were made to optimize the medication process in our intensive care unit (ICU): (1) transcriptions from the medical order form to the administration plan were eliminated by merging both into a single document; (2) the new form was built in a logical sequence and was highly structured to promote completeness and standardization of information; (3) frequently used drug names, approved units, and fixed routes were pre-printed; and (4) physicians and nurses were trained in the correct use of the new form. This study was aimed at evaluating the impact of these interventions on clinically significant types of medication errors. Eight types of medication errors were measured by a prospective chart review before and after the interventions in the ICU of a public tertiary care hospital. We used an interrupted time-series design to control for secular trends. Over 85 days, 9298 lines of drug prescription and/or administration to 294 patients, corresponding to 754 patient-days, were collected and analysed for the three series before and the three series following the intervention. The global error rate decreased from 4.95 to 2.14% (-56.8%, P < 0.001). The safety of the medication process in our ICU was improved by simple and inexpensive interventions. In addition to the optimization of the prescription writing process, the documentation of intravenous preparation, and the scheduling of administration, the elimination of transcription in combination with the training of users contributed to reducing errors and carried an interesting potential to increase safety. © 2012 The Authors. Acta Anaesthesiologica Scandinavica © 2012 The Acta Anaesthesiologica Scandinavica Foundation.

  13. An interactive framework for acquiring vision models of 3-D objects from 2-D images.

    PubMed

    Motai, Yuichi; Kak, Avinash

    2004-02-01

    This paper presents a human-computer interaction (HCI) framework for building vision models of three-dimensional (3-D) objects from their two-dimensional (2-D) images. Our framework is based on two guiding principles of HCI: 1) provide the human with as much visual assistance as possible to help the human make a correct input; and 2) verify each input provided by the human for its consistency with the inputs previously provided. For example, when stereo correspondence information is elicited from a human, his/her job is facilitated by superimposing epipolar lines on the images. Although that reduces the possibility of error in the human-marked correspondences, such errors are not entirely eliminated because there can be multiple candidate points close together for complex objects. For another example, when pose-to-pose correspondence is sought from a human, his/her job is made easier by allowing the human to rotate the partial model constructed in the previous pose in relation to the partial model for the current pose. While this facility reduces the incidence of human-supplied pose-to-pose correspondence errors, such errors cannot be eliminated entirely because of confusion created when multiple candidate features exist close together. Each input provided by the human is therefore checked against the previous inputs by invoking situation-specific constraints. Different types of constraints (and different human-computer interaction protocols) are needed for the extraction of polygonal features and for the extraction of curved features. We show results on both polygonal objects and objects containing curved features.

  14. A Method for Oscillation Errors Restriction of SINS Based on Forecasted Time Series.

    PubMed

    Zhao, Lin; Li, Jiushun; Cheng, Jianhua; Jia, Chun; Wang, Qiufan

    2015-07-17

    Continuity, real-time performance, and accuracy are the key technical indexes for evaluating the comprehensive performance of a strapdown inertial navigation system (SINS). However, Schuler, Foucault, and Earth periodic oscillation errors significantly cut down the real-time accuracy of SINS. A method for oscillation error restriction of SINS based on forecasted time series is proposed by analyzing the characteristics of the periodic oscillation errors. The method gains multiple sets of navigation solutions with different phase delays by virtue of the forecasted time series acquired from the measurement data of the inertial measurement unit (IMU). With the help of curve fitting based on the least-squares method, the forecasted time series is obtained while distinguishing and removing small angular motion interference in the process of initial alignment. Finally, the periodic oscillation errors are restricted using the principle that a periodic oscillation signal is eliminated by averaging it with a copy delayed by half a period. Simulation and test results show that the method has good performance in restricting the Schuler, Foucault, and Earth oscillation errors of SINS.
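
    The cancellation principle is easy to verify numerically: averaging a signal with a copy of itself delayed by half the oscillation period removes that periodic component, since sin(wt) + sin(w(t - T/2)) = 0. A minimal sketch, assuming 1 Hz sampling and the roughly 84.4 min Schuler period:

```python
import numpy as np

def half_wave_filter(x, period_samples):
    """Average the signal with a copy delayed by half the given period,
    cancelling that period's oscillatory component."""
    d = period_samples // 2
    y = x.copy()
    y[d:] = 0.5 * (x[d:] + x[:-d])   # first d samples are left unfiltered
    return y

T = 5064                                  # Schuler period ~84.4 min, in seconds
t = np.arange(0, 6 * T)                   # 1 Hz samples (assumed)
err = 0.3 * np.sin(2 * np.pi * t / T)     # synthetic Schuler oscillation error
print(np.abs(half_wave_filter(err, T)[T:]).max())   # effectively 0: removed
```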

  15. A Method for Oscillation Errors Restriction of SINS Based on Forecasted Time Series

    PubMed Central

    Zhao, Lin; Li, Jiushun; Cheng, Jianhua; Jia, Chun; Wang, Qiufan

    2015-01-01

    Continuity, real-time performance, and accuracy are the key technical indexes for evaluating the comprehensive performance of a strapdown inertial navigation system (SINS). However, Schuler, Foucault, and Earth periodic oscillation errors significantly cut down the real-time accuracy of SINS. A method for oscillation error restriction of SINS based on forecasted time series is proposed by analyzing the characteristics of the periodic oscillation errors. The method gains multiple sets of navigation solutions with different phase delays by virtue of the forecasted time series acquired from the measurement data of the inertial measurement unit (IMU). With the help of curve fitting based on the least-squares method, the forecasted time series is obtained while distinguishing and removing small angular motion interference in the process of initial alignment. Finally, the periodic oscillation errors are restricted using the principle that a periodic oscillation signal is eliminated by averaging it with a copy delayed by half a period. Simulation and test results show that the method has good performance in restricting the Schuler, Foucault, and Earth oscillation errors of SINS. PMID:26193283

  16. Effects of Reynolds number on orifice induced pressure error

    NASA Technical Reports Server (NTRS)

    Plentovich, E. B.; Gloss, B. B.

    1982-01-01

    Data previously reported for orifice-induced pressure errors are extended to the case of higher Reynolds number flows, and a remedy is presented in the form of a porous metal plug for the orifice. Test orifices with apertures 0.330, 0.660, and 1.321 cm in diameter were fabricated on a flat plate for trials in the NASA Langley wind tunnel at Mach numbers 0.40-0.72. A boundary layer survey rake was also mounted on the flat plate to allow measurement of the total boundary layer pressures at the orifices. At the high Reynolds number flows studied, the orifice-induced pressure error was found to be a function of the ratio of the orifice diameter to the boundary layer thickness. The error was effectively eliminated by the insertion of a porous metal disc set flush with the orifice outside surface.

  17. Real-Time Point Positioning Performance Evaluation of Single-Frequency Receivers Using NASA's Global Differential GPS System

    NASA Technical Reports Server (NTRS)

    Muellerschoen, Ronald J.; Iijima, Byron; Meyer, Robert; Bar-Sever, Yoaz; Accad, Elie

    2004-01-01

    This paper evaluates the performance of a single-frequency receiver using the 1-Hz differential corrections provided by NASA's global differential GPS system. While the dual-frequency user has the ability to eliminate the ionosphere error by taking a linear combination of observables, the single-frequency user must remove or calibrate this error by other means. To remove the ionosphere error, we take advantage of the fact that the ionospheric group delay in the range observable and the carrier phase advance have the same magnitude but opposite sign. A way to calibrate this error is to use a real-time database of grid points computed by JPL's RTI (Real-Time Ionosphere) software. In both cases we evaluate the positional accuracy of a kinematic carrier-phase-based point positioning method on a global scale.
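
    The equal-and-opposite property makes the half-sum of pseudorange and carrier phase ionosphere-free (commonly called the GRAPHIC combination). A minimal numeric sketch with invented values:

```python
# P = rho + I (group delay adds), L = rho - I + B (phase advance subtracts;
# B is the carrier-phase ambiguity in meters), so (P + L)/2 = rho + B/2:
# the ionospheric term I cancels. All numbers below are made up.
rho, iono, ambiguity = 21_500_000.0, 4.2, 7.5    # meters
P = rho + iono                                    # pseudorange observable
L = rho - iono + ambiguity                        # carrier phase (meters)
graphic = 0.5 * (P + L)
print(graphic - rho)   # 3.75 = B/2: ionosphere eliminated, half-ambiguity left
```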

  18. Adaptive control of theophylline therapy: importance of blood sampling times.

    PubMed

    D'Argenio, D Z; Khakmahd, K

    1983-10-01

    A two-observation protocol for estimating theophylline clearance during a constant-rate intravenous infusion is used to examine the importance of blood sampling schedules with regard to the information content of resulting concentration data. Guided by a theory for calculating maximally informative sample times, population simulations are used to assess the effect of specific sampling times on the precision of resulting clearance estimates and subsequent predictions of theophylline plasma concentrations. The simulations incorporated noise terms for intersubject variability, dosing errors, sample collection errors, and assay error. Clearance was estimated using Chiou's method, least squares, and a Bayesian estimation procedure. The results of these simulations suggest that clinically significant estimation and prediction errors may result when using the above two-point protocol for estimating theophylline clearance if the time separating the two blood samples is less than one population mean elimination half-life.
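
    For reference, Chiou's two-sample clearance estimate compared in the study can be sketched as follows. The formula is the standard constant-rate-infusion form (not taken from this abstract), and the numbers are illustrative only.

```python
def chiou_clearance(r0, vd, c1, c2, t1, t2):
    """Chiou's two-point estimate of clearance during a constant-rate IV
    infusion: CL = 2*R0/(C1+C2) + 2*Vd*(C1-C2)/((C1+C2)*(t2-t1)).
    r0: infusion rate, vd: assumed volume of distribution,
    c1, c2: plasma concentrations at times t1, t2."""
    return 2 * r0 / (c1 + c2) + 2 * vd * (c1 - c2) / ((c1 + c2) * (t2 - t1))

# 40 mg/h infusion, Vd = 35 L, samples at 2 h and 10 h (made-up values);
# per the study, the two samples should be separated by at least one
# population mean elimination half-life.
print(f"{chiou_clearance(40.0, 35.0, 8.0, 10.0, 2.0, 10.0):.2f} L/h")
```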

  19. Common medial frontal mechanisms of adaptive control in humans and rodents

    PubMed Central

    Frank, Michael J.; Laubach, Mark

    2013-01-01

    In this report, we describe how common brain networks within the medial frontal cortex facilitate adaptive behavioral control in rodents and humans. We demonstrate that low frequency oscillations below 12 Hz are dramatically modulated after errors in humans over mid-frontal cortex and in rats within prelimbic and anterior cingulate regions of medial frontal cortex. These oscillations were phase-locked between medial frontal cortex and motor areas in both rats and humans. In rats, single neurons that encoded prior behavioral outcomes were phase-coherent with low-frequency field oscillations particularly after errors. Inactivating medial frontal regions in rats led to impaired behavioral adjustments after errors, eliminated the differential expression of low frequency oscillations after errors, and increased low-frequency spike-field coupling within motor cortex. Our results describe a novel mechanism for behavioral adaptation via low-frequency oscillations and elucidate how medial frontal networks synchronize brain activity to guide performance. PMID:24141310

  20. DtaRefinery: a software tool for elimination of systematic errors from parent ion mass measurements in tandem mass spectra datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.

    2009-12-16

    Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and MS/MS fragmentation have become widely available in recent years and have allowed for significantly better discrimination between true and false MS/MS peptide identifications by applying relatively narrow windows for maximum allowable deviations of parent ion mass measurements. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. The DtaRefinery software tool can correct systematic errors in parent ion masses by reading a set of fragmentation spectra, searching for MS/MS peptide identifications, then fitting a model that can estimate systematic errors, and removing them. This results in a new fragmentation spectrum file with updated parent ion masses.
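
    The recalibration idea can be sketched in a few lines: fit a smooth model of observed mass error versus m/z over confident identifications, then subtract the fitted trend. This toy uses a quadratic fit on synthetic data; DtaRefinery's actual error models are richer.

```python
import numpy as np

rng = np.random.default_rng(1)
mz = rng.uniform(400, 2000, 500)                     # parent ion m/z values
bias_ppm = 2.0 + 0.003 * mz                          # synthetic systematic error
observed = bias_ppm + rng.normal(0, 1.0, mz.size)    # measured errors (ppm)

coeffs = np.polyfit(mz, observed, deg=2)             # smooth error model
recalibrated = observed - np.polyval(coeffs, mz)     # systematic part removed

print(f"mean error before: {observed.mean():+.2f} ppm, "
      f"after: {recalibrated.mean():+.2f} ppm")
```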

  1. ERROR COMPENSATOR FOR A POSITION TRANSDUCER

    DOEpatents

    Fowler, A.H.

    1962-06-12

    A device is designed for eliminating the effect of leadscrew errors in positioning machines in which linear motion of a slide is effected from rotary motion of a leadscrew. This is accomplished by providing a corrector cam mounted on the slide, a cam follower, and a transducer housing rotatable by the follower to compensate for all the reproducible errors in the transducer signal which can be related to the slide position. The transducer has an inner part which is movable with respect to the transducer housing. The transducer inner part is coupled to the means for rotating the leadscrew such that relative movement between this part and its housing will provide an output signal proportional to the position of the slide. The corrector cam and its follower perform the compensation by changing the angular position of the transducer housing by an amount that is a function of the slide position and the error at that position. (AEC)

  2. Lagrangian numerical techniques for modelling multicomponent flow in the presence of large viscosity contrasts: Markers-in-bulk versus Markers-in-chain

    NASA Astrophysics Data System (ADS)

    Mulyukova, Elvira; Dabrowski, Marcin; Steinberger, Bernhard

    2015-04-01

    Many problems in geodynamic applications may be described as viscous flow of chemically heterogeneous materials. Examples include subduction of compositionally stratified lithospheric plates, folding of rheologically layered rocks, and thermochemical convection of the Earth's mantle. The associated time scales are significantly shorter than that of chemical diffusion, which justifies a phenomenon commonly featured in geodynamic flow models: contact discontinuities. These are spatially sharp interfaces separating regions of different material properties. Numerical modelling of the advection of fields with sharp interfaces is challenging. Typical errors include numerical diffusion, which arises due to the repeated action of numerical interpolation. Mathematically, a material field can be represented by discrete indicator functions, whose values are interpreted as logical statements (e.g. whether or not a location is occupied by a given material). Interpolation of a discrete function boils down to determining where, among the intermediate node positions, one material ends and the other begins. The numerical diffusion error thus manifests itself as an erroneous location of the material interface. Lagrangian advection schemes are known to be less prone to numerical diffusion errors than their Eulerian counterparts. The tracer-ratio method, where Lagrangian markers are used to discretize the bulk of the materials filling the entire domain, is a popular example of such methods. The Stokes equation in this case is solved on a separate, static grid, and in order to do so, material properties must be interpolated from the markers to the grid. This involves the difficulty related to interpolation of discrete fields: the material distribution, and thus material properties like viscosity and density, seen by the grid is polluted by the interpolation error, which enters the solution of the momentum equation. Errors due to the uncertainty of the interface location can be avoided by using interface-tracking methods for advection. The marker-chain method is one such approach, where rather than discretizing the volume of each material, only their interface is discretized by a connected set of markers. Together with the boundary of the domain, the marker chain constitutes closed polygon boundaries which enclose the regions spanned by each material. Communicating material properties to the static grid can be done by determining which polygon each grid node (or integration point) falls into, eliminating the need for interpolation. In our chosen implementation, an efficient parallelized algorithm for point-in-polygon location is used, so this part of the code takes up only a small fraction of the CPU time spent on each time step and allows for spatial resolution of the compositional field beyond that which is practical with markers-in-bulk methods. An additional advantage of using marker chains for material advection is that they offer the possibility of using some of the markers, or even edges, to generate an FEM grid. One can tailor a grid for obtaining a Stokes solution with optimal accuracy while controlling the quality and size of its elements. Where the geometry of the interface allows, element edges may be aligned with it, which is known to significantly improve the quality of the Stokes solution compared to when the interface cuts through the elements (Moresi et al., 1996; Deubelbeiss and Kaus, 2008). In more geometrically complex interface regions, the grid may simply be refined to reduce the error.
As materials get deformed in the course of a simulation, the interface may get stretched and entangled. Addition of new markers along the chain may be required in order to properly resolve the increasingly complicated geometry. Conversely, some markers may be removed from regions where they become clustered. Such resampling of the interface requires additional computational effort (although small compared to other parts of the code) and introduces an error in the interface location (similar to numerical diffusion). Our implementation of this procedure, which utilizes an auxiliary high-resolution structured grid, allows a high degree of control over the magnitude of this error, although it cannot eliminate it completely. We present our chosen numerical implementation of the markers-in-bulk and markers-in-chain methods outlined above, together with simulation results from specially designed benchmarks that demonstrate the relative successes and limitations of these methods.
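
    The grid-to-material assignment relies on a point-in-polygon query; a standard serial ray-casting version is sketched below (the authors use an efficient parallelized variant). The toy marker chain is hypothetical.

```python
def point_in_polygon(px, py, poly):
    """Ray casting: count how many polygon edges a horizontal ray from
    (px, py) crosses; an odd count means the point is inside."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):                   # edge straddles ray level
            if x1 + (py - y1) * (x2 - x1) / (y2 - y1) > px:
                inside = not inside
    return inside

# Assign a grid node to a material by testing the material's closed chain
chain = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)]   # toy marker chain
print(point_in_polygon(1.0, 0.5, chain))   # True: node lies in this material
```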

  3. A simple sample preparation for simultaneous determination of chloramphenicol and its succinate esters in food products using high-performance liquid chromatography/high-resolution mass spectrometry.

    PubMed

    Amelin, Vasiliy; Korotkov, Anton

    2017-02-01

    A simple method is described for the determination of chloramphenicol and its succinate esters in food products. Examination of food products using high-performance liquid chromatography/high-resolution mass spectrometry showed the presence not only of chloramphenicol but also of its succinate forms. A scheme is proposed for determining chloramphenicol and its succinate esters (calculated as chloramphenicol) in meat (beef, pork, poultry), milk, liver, kidney, eggs, fish and honey. Analytes are extracted from a 1.0 g sample with 5 ml acetonitrile. It was found that using the method of standard addition and diluting the extract with water leads to the elimination of matrix effects and also eliminates errors associated with peak splitting due to the separate elution of the differing forms of the analyte. Validation results were satisfactory, with recoveries from 85% to 111% (meat, milk, liver, kidney, eggs, fish and honey) and a relative standard deviation (RSD) lower than 13% for spiked levels of 0.3, 1.0 and 5 µg kg⁻¹. The limits of detection and quantification (calculated as chloramphenicol for all forms) were 0.1 and 0.3 µg kg⁻¹, respectively. The RSD of the results of the analysis was < 10%. The duration of the analysis was less than 1 h.

  4. Curvelet-domain multiple matching method combined with cubic B-spline function

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming

    2018-05-01

    Because the large number of surface-related multiples present in marine data can seriously influence the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination method was proposed based on data-driven theory. However, the elimination effect was unsatisfactory due to the existence of amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, a small number of unknowns are selected as the basis points of the matching coefficient; second, the cubic B-spline function is applied to these basis points to reconstruct the matching array; third, a constraint solving equation is built based on the relationships of the predicted multiples, the matching coefficients, and the actual data; finally, the BFGS algorithm is used to iterate and realize the fast-solving sparse constraint of the multiple matching algorithm. Moreover, the soft-threshold method is used to make the method perform better. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1-norm constraint. Applications to both synthetic and field data validate the practicability and validity of the method.
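
    The dimensionality reduction at the heart of the method can be sketched with SciPy: a full-length matching-coefficient array is reconstructed from a handful of cubic B-spline control values, so the inversion only has to solve for those few unknowns. Sizes and values below are illustrative.

```python
import numpy as np
from scipy.interpolate import BSpline

n_samples, n_ctrl, k = 1000, 8, 3                    # k = 3: cubic B-spline
# Clamped knot vector: length must equal n_ctrl + k + 1
knots = np.concatenate(([0.0] * k, np.linspace(0, 1, n_ctrl - k + 1), [1.0] * k))
ctrl = np.array([1.0, 0.9, 1.1, 1.0, 0.8, 1.2, 1.0, 0.95])   # the unknowns

matching = BSpline(knots, ctrl, k)(np.linspace(0, 1, n_samples))
print(matching.shape)   # (1000,): full coefficient array from 8 basis points
```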

  5. A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems

    PubMed Central

    Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua

    2013-01-01

    A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach can be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors in terms of kinematic parameter errors and TCP position errors. Based on the fixed-point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to conventional methods, the proposed method eliminates the need for robot base-frame and hand-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration for the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal a significant improvement in the measuring accuracy of the robotic visual inspection system. PMID:24300597

  6. Orifice-induced pressure error studies in Langley 7- by 10-foot high-speed tunnel

    NASA Technical Reports Server (NTRS)

    Plentovich, E. B.; Gloss, B. B.

    1986-01-01

    For some time it has been known that the presence of a static pressure measuring hole will disturb the local flow field in such a way that the sensed static pressure will be in error. The results of previous studies of the error induced by the pressure orifice were for relatively low Reynolds number flows. With the advent of high Reynolds number transonic wind tunnels, a study was undertaken to assess the magnitude of this error at higher Reynolds numbers than previously published and to study a possible method of eliminating the pressure error. This study was conducted in the Langley 7- by 10-Foot High-Speed Tunnel on a flat plate. The model was tested at Mach numbers from 0.40 to 0.72 and at Reynolds numbers from 7.7 × 10⁶ to 11 × 10⁶ per meter (2.3 × 10⁶ to 3.4 × 10⁶ per foot), respectively. The results indicated that as orifice size increased, the pressure error also increased, but that a porous metal (sintered metal) plug inserted in an orifice could greatly reduce the pressure error induced by the orifice.

  7. Use of units of measurement error in anthropometric comparisons.

    PubMed

    Lucas, Teghan; Henneberg, Maciej

    2017-09-01

    Anthropometrists attempt to minimise measurement errors; however, errors cannot be eliminated entirely. Currently, measurement errors are simply reported. Measurement errors should instead be incorporated into analyses of anthropometric data. This study proposes a method which incorporates measurement errors into reported values, replacing metric units with 'units of technical error of measurement (TEM)', and applies it to forensics, industrial anthropometry and biological variation. The USA armed forces anthropometric survey (ANSUR) contains 132 anthropometric dimensions of 3982 individuals. Concepts of duplication and Euclidean distance calculations were applied to the forensic-style identification of individuals in this survey. The National Size and Shape Survey of Australia contains 65 anthropometric measurements of 1265 women. This sample was used to show how a woman's body measurements expressed in TEM could be 'matched' to standard clothing sizes. Euclidean distances show that two sets of repeated anthropometric measurements of the same person cannot be matched (distance > 0) when measurements are expressed in millimetres but can be (distance = 0) in units of TEM. Only 81 women could fit into any standard clothing size when matched using centimetres; with units of TEM, 1944 women fit. The proposed method can be applied to all fields that use anthropometry. Units of TEM are considered a more reliable unit of measurement for comparisons.
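
    A minimal sketch of the unit change, assuming per-dimension TEMs are known: each measurement is rounded to whole TEMs before the Euclidean distance is computed, so two sessions differing only by measurement error can match exactly. All numbers are invented.

```python
import numpy as np

def to_tem_units(values_mm, tem_mm):
    """Express each dimension as a whole number of its TEM."""
    return np.round(np.asarray(values_mm) / np.asarray(tem_mm))

session1 = [345.0, 912.0, 210.0]   # repeated measurements of the same person
session2 = [344.0, 910.0, 212.0]   # differs only within measurement error
tem = [4.0, 6.0, 5.0]              # per-dimension TEMs (illustrative)

d = np.linalg.norm(to_tem_units(session1, tem) - to_tem_units(session2, tem))
print(d)   # 0.0: in TEM units the two sessions match; in mm they would not
```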

  8. A steep peripheral ring in irregular cornea topography, real or an instrument error?

    PubMed

    Galindo-Ferreiro, Alicia; Galvez-Ruiz, Alberto; Schellini, Silvana A; Galindo-Alonso, Julio

    2016-01-01

    To demonstrate that the steep peripheral ring (red zone) on corneal topography after myopic laser in situ keratomileusis (LASIK) could possibly be due to instrument error and not always to a real increase in corneal curvature. A spherical model of the corneal surface and modified topography software were used to analyze the cause of an error due to instrument design. This study involved modification of the software of a commercially available topographer. A small modification of the topography image results in a red zone on the corneal topography color map. Corneal modeling indicates that the red zone could be an artifact due to an instrument-induced error. The steep curvature change after LASIK, signified by the red zone, could therefore also be an error due to the plotting algorithms of the corneal topographer, rather than a real change in curvature.

  9. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and to differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.

  10. A general model for attitude determination error analysis

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Seidewitz, ED; Nicholson, Mark

    1988-01-01

    An overview is given of a comprehensive approach to filter and dynamics modeling for attitude determination error analysis. The models presented include both batch least-squares and sequential attitude estimation processes for both spin-stabilized and three-axis stabilized spacecraft. The discussion includes a brief description of a dynamics model of strapdown gyros, but it does not cover other sensor models. Model parameters can be chosen to be solve-for parameters, which are assumed to be estimated as part of the determination process, or consider parameters, which are assumed to have errors but not to be estimated. The only restriction on this choice is that the time evolution of the consider parameters must not depend on any of the solve-for parameters. The result of an error analysis is an indication of the contributions of the various error sources to the uncertainties in the determination of the spacecraft solve-for parameters. The model presented gives the uncertainty due to errors in the a priori estimates of the solve-for parameters, the uncertainty due to measurement noise, the uncertainty due to dynamic noise (also known as process noise), the uncertainty due to the consider parameters, and the overall uncertainty due to all these sources of error.

  11. Significant and Sustained Reduction in Chemotherapy Errors Through Improvement Science.

    PubMed

    Weiss, Brian D; Scott, Melissa; Demmel, Kathleen; Kotagal, Uma R; Perentesis, John P; Walsh, Kathleen E

    2017-04-01

    A majority of children with cancer are now cured with highly complex chemotherapy regimens incorporating multiple drugs and demanding monitoring schedules. The risk for error is high, and errors can occur at any stage in the process, from order generation to pharmacy formulation to bedside drug administration. Our objective was to describe a program to eliminate errors in chemotherapy use among children. To increase reporting of chemotherapy errors, we supplemented the hospital reporting system with a new chemotherapy near-miss reporting system. Following the Model for Improvement, we then implemented several interventions, including a daily chemotherapy huddle, improvements to the preparation and delivery of intravenous therapy, headphones for clinicians ordering chemotherapy, and standards for chemotherapy administration throughout the hospital. Twenty-two months into the project, we saw a centerline shift in our U chart of chemotherapy errors that reached the patient, from a baseline rate of 3.8 to 1.9 per 1,000 doses. This shift has been sustained for > 4 years. In Poisson regression analyses, we found an initial increase in error rates, followed by a significant decline in errors after 16 months of improvement work (P < .001). Our improvement efforts were associated with significant reductions in chemotherapy errors that reached the patient. Key drivers for our success included error vigilance through a huddle, standardization, and minimization of interruptions during ordering.

  12. Modeling uncertainties for tropospheric nitrogen dioxide columns affecting satellite-based inverse modeling of nitrogen oxides emissions

    NASA Astrophysics Data System (ADS)

    Lin, J.-T.; Liu, Z.; Zhang, Q.; Liu, H.; Mao, J.; Zhuang, G.

    2012-12-01

    Errors in chemical transport models (CTMs) interpreting the relation between space-retrieved tropospheric column densities of nitrogen dioxide (NO2) and emissions of nitrogen oxides (NOx) have important consequences for inverse modeling. They are, however, difficult to quantify due to a lack of adequate in situ measurements, particularly over China and other developing countries. This study proposes an alternative approach for model evaluation over East China, by analyzing the sensitivity of modeled NO2 columns to errors in meteorological and chemical parameters/processes important to the nitrogen abundance. As a demonstration, it evaluates the nested version of GEOS-Chem driven by the GEOS-5 meteorology and the INTEX-B anthropogenic emissions and used with retrievals from the Ozone Monitoring Instrument (OMI) to constrain emissions of NOx. The CTM has been used extensively for such applications. Errors are examined for a comprehensive set of meteorological and chemical parameters using measurements and/or uncertainty analysis based on current knowledge. The results are then exploited for sensitivity simulations perturbing the respective parameters, as the basis of a subsequent post-model, linearized and localized first-order modification. It is found that the model meteorology likely contains errors of various magnitudes in cloud optical depth, air temperature, water vapor, boundary layer height and many other parameters. Model errors also exist in gaseous and heterogeneous reactions, aerosol optical properties and emissions of non-nitrogen species affecting the nitrogen chemistry. Modifications accounting for quantified errors in 10 selected parameters increase the NO2 columns in most areas, with an average positive impact of 18% in July and 8% in January, the most important factor being modified uptake of the hydroperoxyl radical (HO2) on aerosols. This suggests a possible systematic model bias such that the top-down emissions would be overestimated by the same magnitude if the model were used for emission inversion without corrections. The modifications, however, cannot eliminate the large model underestimates in cities and other extremely polluted areas (particularly in the north) as compared to satellite retrievals, likely pointing to underestimates of the a priori emission inventory in these places, with important implications for understanding of atmospheric chemistry and air quality. Note that these modifications are simplified and should be interpreted with caution for error apportionment.

  13. Asteroid thermal modeling in the presence of reflected sunlight

    NASA Astrophysics Data System (ADS)

    Myhrvold, Nathan

    2018-03-01

    A new derivation of simple asteroid thermal models is presented, investigating the need to account correctly for Kirchhoff's law of thermal radiation when IR observations contain substantial reflected sunlight. The framework applies to both the NEATM and related thermal models. A new parameterization of these models eliminates the dependence of thermal modeling on visible absolute magnitude H, which is not always available. Monte Carlo simulations are used to assess the potential impact of violating Kirchhoff's law on estimates of physical parameters such as diameter and IR albedo, with an emphasis on NEOWISE results. The NEOWISE papers use ten different models, applied to 12 different combinations of WISE data bands, in 47 different combinations. The most prevalent combinations are simulated, and the accuracy of diameter estimates is found to depend critically on the model and data band combination. In the best case, full thermal modeling of all four bands, the errors of an idealized model have a 1σ (68.27%) confidence interval of -5% to +6%, but this combination represents just 1.9% of NEOWISE results. Other combinations, representing 42% of the NEOWISE results, have about twice that confidence interval, -10% to +12%, before accounting for errors due to irregular shape or other real-world effects that are not simulated. The model and data band combinations found for the majority of NEOWISE results have much larger systematic and random errors. Kirchhoff's law violation by NEOWISE models leads to errors in estimation accuracy that are strongest for asteroids with W1, W2 band emissivity ε12 in both the lowest (0.605 ≤ ε12 ≤ 0.780) and highest (0.969 ≤ ε12 ≤ 0.988) deciles, corresponding to the highest and lowest deciles of near-IR albedo pIR. The systematic accuracy error between deciles ranges from a low of 5% to as much as 45%, and there are also differences in the random errors. Kirchhoff's law effects also produce large errors in NEOWISE estimates of pIR, particularly for high values. IR observations of asteroids in bands that have substantial reflected sunlight can largely avoid these problems by adopting the Kirchhoff-law-compliant modeling framework presented here, which is conceptually straightforward and comes without computational cost.

  14. Self-Nulling Beam Combiner Using No External Phase Inverter

    NASA Technical Reports Server (NTRS)

    Bloemhof, Eric E.

    2010-01-01

    A self-nulling beam combiner is proposed that completely eliminates the phase inversion subsystem from the nulling interferometer and instead uses the intrinsic phase shifts in the beam splitters. Simplifying the flight instrument in this way would be a valuable enhancement of mission reliability. The tighter tolerances on R = T (R being the reflection and T the transmission coefficient) required by the self-nulling configuration actually impose no new constraints on the architecture, as two adaptive nullers must be situated between beam splitters to correct small errors in the coatings. The new feature is exploiting the natural phase shifts in beam combiners to achieve the 180° phase inversion necessary for nulling. The advantage over prior art is that an entire subsystem, the field-flipping optics, can be eliminated. For ultimate simplicity in the flight instrument, one might fabricate coatings to very high tolerances and dispense with the adaptive nullers altogether, with all their moving parts, along with the field-flipper subsystem. A single adaptive nuller upstream of the beam combiner may be required to correct beam train errors (systematic noise), but in some circumstances phase chopping reduces these errors substantially, and there may be ways to further reduce the chop residuals. Though such coatings are beyond the current state of the art, the mechanical simplicity and robustness of a flight system without a field flipper or adaptive nullers would perhaps justify considerable effort on coating fabrication.

  15. A Module for Assimilating Hyperspectral Infrared Retrieved Profiles into the Gridpoint Statistical Interpolation System for Unique Forecasting Applications

    NASA Technical Reports Server (NTRS)

    Berndt, Emily; Zavodsky, Bradley; Srikishen, Jayanthi; Blankenship, Clay

    2015-01-01

    Hyperspectral infrared sounder radiance data are assimilated into operational modeling systems; however, the process is computationally expensive, and only approximately 1% of the available data are assimilated, due to data thinning as well as the fact that radiances are restricted to cloud-free fields of view. In contrast, the number of hyperspectral infrared profiles assimilated is much higher, since retrieved profiles can be assimilated in some partly cloudy scenes thanks to the coupling of other data, such as microwave observations or neural networks, as first guesses in the retrieval process. As the operational data assimilation community attempts to assimilate cloud-affected radiances, it is possible that the use of retrieved profiles might offer an alternative methodology that is less complex and more computationally efficient for solving this problem. The NASA Short-term Prediction Research and Transition (SPoRT) Center has assimilated hyperspectral infrared retrieved profiles into Weather Research and Forecasting Model (WRF) simulations using the Gridpoint Statistical Interpolation (GSI) System. Early research at SPoRT demonstrated improved initial conditions when assimilating Atmospheric Infrared Sounder (AIRS) thermodynamic profiles into WRF (using WRF-Var and assigning more appropriate error weighting to the profiles) to improve regional analyses and heavy precipitation forecasts. Successful early work has led to more recent research utilizing WRF and GSI for applications including the assimilation of AIRS profiles to improve WRF forecasts of atmospheric rivers and the assimilation of AIRS, Cross-track Infrared and Microwave Sounding Suite (CrIMSS), and Infrared Atmospheric Sounding Interferometer (IASI) profiles to improve model representation of tropopause folds and associated non-convective wind events. Although more hyperspectral infrared retrieved profiles can be assimilated into model forecasts, one disadvantage is that the retrieved profiles have traditionally been assigned the same error values as rawinsonde observations when assimilated with GSI. Typically, satellite-derived profile errors are larger and more difficult to quantify than those of traditional rawinsonde observations (especially in the boundary layer), so it is important to appropriately assign observation errors within GSI to eliminate potential spurious innovations and analysis increments that can sometimes arise when using retrieved profiles. The goal of this study is to describe modifications to the GSI source code to more appropriately assimilate hyperspectral infrared retrieved profiles and to outline preliminary results that show the differences between a model simulation that assimilated the profiles as rawinsonde observations and one that assimilated the profiles in a module with the appropriate error values.

  16. Interference elimination in digital controllers of automation systems of oil and gas complex

    NASA Astrophysics Data System (ADS)

    Solomentsev, K. Yu; Fugarov, D. D.; Purchina, O. A.; Poluyan, A. Y.; Nesterchuk, V. V.; Petrenkova, S. B.

    2018-05-01

    This article considers problems arising in the development of digital controllers for automatic control systems. In the presence of interference, and also at high digitization frequencies, digital differentiation produces a large error: the derivative is calculated as the difference of two close values. A differentiation method is offered to reduce this error, in which the difference quotients of a series of values are averaged. A structure chart implementing this differentiation method in controller construction is presented.
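
    A minimal sketch of the averaging idea: instead of a single difference quotient, the derivative estimate is the mean of quotients taken over several spacings, which suppresses high-frequency interference. The spacing count and the data are illustrative.

```python
import numpy as np

def averaged_derivative(x, dt, m=4):
    """Mean of the m difference quotients (x[n] - x[n-k]) / (k*dt),
    k = 1..m, evaluated at each sample n >= m."""
    x = np.asarray(x, dtype=float)
    quotients = [(x[m:] - x[m - k: -k]) / (k * dt) for k in range(1, m + 1)]
    return np.mean(quotients, axis=0)

dt = 0.01
t = np.arange(0.0, 1.0, dt)
noisy = 2.0 * t + 0.02 * np.random.default_rng(2).standard_normal(t.size)
print(averaged_derivative(noisy, dt).mean())   # close to the true slope 2.0
```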

  17. Design and flight test of a differential GPS/inertial navigation system for approach/landing guidance

    NASA Technical Reports Server (NTRS)

    Vallot, Lawrence; Snyder, Scott; Schipper, Brian; Parker, Nigel; Spitzer, Cary

    1991-01-01

    NASA-Langley has conducted a flight test program evaluating a differential GPS/inertial navigation system's (DGPS/INS) utility as an approach/landing aid. The DGPS/INS airborne and ground components are based on off-the-shelf transport aircraft avionics, namely a global positioning/inertial reference unit (GPIRU) and two GPS sensor units (GPSSUs). Systematic GPS errors are measured by the ground GPSSU and transmitted to the aircraft GPIRU, allowing the errors to be eliminated or greatly reduced in the airborne equipment. Over 120 landings were flown; 36 of these were fully automatic DGPS/INS landings.
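
    The differential principle itself is simple enough to sketch: the ground unit at a surveyed position knows each satellite's true geometric range, so the difference from its measured pseudorange is broadcast as a correction that the airborne unit subtracts. All numbers below are invented.

```python
import math

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

ground = (0.0, 0.0, 6_378_137.0)                  # surveyed antenna position (m)
sat = (15_000_000.0, 10_000_000.0, 20_000_000.0)  # broadcast satellite position

geometric = dist(ground, sat)        # truth, known from the survey
measured_ground = geometric + 7.3    # pseudorange with a 7.3 m systematic error
correction = measured_ground - geometric

measured_air = 2.1e7 + 7.3           # aircraft sees the same systematic error
print(measured_air - correction)     # 21000000.0: the common error is removed
```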

  18. Direct model reference adaptive control with application to flexible robots

    NASA Technical Reports Server (NTRS)

    Steinvorth, Rodrigo; Kaufman, Howard; Neat, Gregory W.

    1992-01-01

    A modification to a direct command generator tracker-based model reference adaptive control (MRAC) system is suggested in this paper. This modification incorporates a feedforward into the reference model's output as well as the plant's output. Its purpose is to eliminate the bounded model following error present in steady state when previous MRAC systems were used. The algorithm was evaluated using the dynamics for a single-link flexible-joint arm. The results of these simulations show a response with zero steady state model following error. These results encourage further use of MRAC for various types of nonlinear plants.

  19. Note: Eddy current displacement sensors independent of target conductivity.

    PubMed

    Wang, Hongbo; Li, Wei; Feng, Zhihua

    2015-01-01

    Eddy current sensors (ECSs) are widely used for non-contact displacement measurement. In this note, the quantitative error of an ECS caused by target conductivity was analyzed using a complex image method. The response curves (L-x) of the ECS with different targets were similar and could be made to overlap by shifting the curves along the x direction by √2δ/2. Both finite element analysis and experiments match the theoretical analysis well, which indicates that the measurement error of high-precision ECSs caused by target conductivity can be completely eliminated, and that ECSs can measure different materials precisely without calibration.
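
    Assuming δ denotes the electromagnetic skin depth of the target at the sensor's excitation frequency, the material-dependent shift can be computed directly; a sketch with illustrative material constants:

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability (H/m)

def skin_depth(resistivity, freq, mu_r=1.0):
    """delta = sqrt(2*rho / (omega * mu)), the usual skin-depth formula."""
    return np.sqrt(2 * resistivity / (2 * np.pi * freq * MU0 * mu_r))

f = 1e6   # 1 MHz excitation (assumed)
for name, rho in [("copper", 1.7e-8), ("aluminium", 2.8e-8), ("titanium", 4.2e-7)]:
    d = skin_depth(rho, f)
    print(f"{name}: delta = {d * 1e6:5.1f} um, "
          f"curve shift sqrt(2)*delta/2 = {np.sqrt(2) * d / 2 * 1e6:5.1f} um")
```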

  20. [De-noising and measurement of pulse wave velocity of the wavelet].

    PubMed

    Liu, Baohua; Zhu, Honglian; Ren, Xiaohua

    2011-02-01

    Pulse wave velocity (PWV) is a vital index of cardiovascular pathology, so accurate measurement of PWV can benefit the prevention and treatment of cardiovascular diseases. Noise in the pulse wave measurement system, rounding error, and the selection of the recording site all cause errors in the measurement result. In this paper, wavelet transformation was used to eliminate the noise and raise the precision, and the point of maximum slope on the reconstructed pulse wave was chosen as the recording site, improving the accuracy of the measuring system.
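
    Both steps are easy to sketch with PyWavelets: denoise the waveform by soft-thresholding detail coefficients, then take the point of maximum slope as the recording site. The wavelet choice, threshold rule, and synthetic waveform are illustrative, not taken from the paper.

```python
import numpy as np
import pywt   # PyWavelets

def denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(signal)))    # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

fs = 500.0
t = np.arange(0, 1, 1 / fs)
pulse = np.exp(-((t - 0.3) ** 2) / 0.005)             # toy single pulse
noisy = pulse + 0.05 * np.random.default_rng(3).standard_normal(t.size)

clean = denoise(noisy)
site = np.argmax(np.gradient(clean))                  # maximum-slope point
print(f"recording site at t = {t[site]:.3f} s")       # on the rising edge
```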

  1. Calculations of axisymmetric vortex sheet roll-up using a panel and a filament model

    NASA Technical Reports Server (NTRS)

    Kantelis, J. P.; Widnall, S. E.

    1986-01-01

    A method for calculating the self-induced motion of a vortex sheet using discrete vortex elements is presented. Vortex panels and vortex filaments are used to simulate two-dimensional and axisymmetric vortex sheet roll-up. A straightforward application using vortex elements to simulate the motion of a disk of vorticity with an elliptic circulation distribution yields unsatisfactory results, with the vortex elements moving in a chaotic manner. The difficulty is attributed to the inability of a finite number of discrete vortex elements to model the singularity at the sheet edge and to large velocity calculation errors which result from uneven sheet stretching. A model of the inner portion of the spiral is introduced to eliminate the difficulty with the sheet edge singularity. The model replaces the outermost portion of the sheet with a single vortex of equivalent circulation and a number of higher order terms which account for the asymmetry of the spiral. The resulting discrete vortex model is applied to both two-dimensional and axisymmetric sheets. The two-dimensional roll-up is compared to the solution for a semi-infinite sheet, with good results.

  2. Evaluating the Impacts of Real-Time Pricing on the Cost and Value of Wind Generation

    DOE PAGES

    Siohansi, Ramteen

    2010-05-01

    One of the costs associated with integrating wind generation into a power system is the cost of redispatching the system in real-time due to day-ahead wind resource forecast errors. One possible way of reducing these redispatch costs is to introduce demand response in the form of real-time pricing (RTP), which could allow electricity demand to respond to actual real-time wind resource availability using price signals. A day-ahead unit commitment model with day-ahead wind forecasts and a real-time dispatch model with actual wind resource availability is used to estimate system operations in a high wind penetration scenario. System operations are compared to a perfect foresight benchmark, in which actual wind resource availability is known day-ahead. The results show that wind integration costs with fixed demands can be high, both due to real-time redispatch costs and lost load. It is demonstrated that introducing RTP can reduce redispatch costs and eliminate loss of load events. Finally, social surplus with wind generation and RTP is compared to a system with neither, and the results demonstrate that introducing wind and RTP into a market can result in superadditive surplus gains.

  3. Algorithms to eliminate the influence of non-uniform intensity distributions on wavefront reconstruction by quadri-wave lateral shearing interferometers

    NASA Astrophysics Data System (ADS)

    Chen, Xiao-jun; Dong, Li-zhi; Wang, Shuai; Yang, Ping; Xu, Bing

    2017-11-01

    In quadri-wave lateral shearing interferometry (QWLSI), when the intensity distribution of the incident light wave is non-uniform, part of the information of the intensity distribution will couple with the wavefront derivatives to cause wavefront reconstruction errors. In this paper, we propose two algorithms to reduce the influence of a non-uniform intensity distribution on wavefront reconstruction. Our simulation results demonstrate that the reconstructed amplitude distribution (RAD) algorithm can effectively reduce the influence of the intensity distribution on the wavefront reconstruction and that the collected amplitude distribution (CAD) algorithm can almost eliminate it.

  4. Correcting false memories: Errors must be noticed and replaced.

    PubMed

    Mullet, Hillary G; Marsh, Elizabeth J

    2016-04-01

    Memory can be unreliable. For example, after reading The new baby stayed awake all night, people often misremember that the new baby cried all night (Brewer, 1977); similarly, after hearing bed, rest, and tired, people often falsely remember that sleep was on the list (Roediger & McDermott, 1995). In general, such false memories are difficult to correct, persisting despite warnings and additional study opportunities. We argue that errors must first be detected to be corrected; consistent with this argument, two experiments showed that false memories were nearly eliminated when conditions facilitated comparisons between participants' errors and corrective feedback (e.g., immediate trial-by-trial feedback that allowed direct comparisons between their responses and the correct information). However, knowledge that they had made an error was insufficient; unless the feedback message also contained the correct answer, the rate of false memories remained relatively constant. On the one hand, there is nothing special about correcting false memories: simply labeling an error as "wrong" is also insufficient for correcting other memory errors, including misremembered facts or mistranslations. However, unlike these other types of errors--which often benefit from the spacing afforded by delayed feedback--false memories require a special consideration: Learners may fail to notice their errors unless the correction conditions specifically highlight them.

  5. Validation of prostate-specific antigen laboratory values recorded in Surveillance, Epidemiology, and End Results registries.

    PubMed

    Adamo, Margaret Peggy; Boten, Jessica A; Coyle, Linda M; Cronin, Kathleen A; Lam, Clara J K; Negoita, Serban; Penberthy, Lynne; Stevens, Jennifer L; Ward, Kevin C

    2017-02-15

    Researchers have used prostate-specific antigen (PSA) values collected by central cancer registries to evaluate tumors for potential aggressive clinical disease. An independent study collecting PSA values suggested a high error rate (18%) related to implied decimal points. To evaluate the error rate in the Surveillance, Epidemiology, and End Results (SEER) program, a comprehensive review of PSA values recorded across all SEER registries was performed. Consolidated PSA values for eligible prostate cancer cases in SEER registries were reviewed and compared with text documentation from abstracted records. Four types of classification errors were identified: implied decimal point errors, abstraction or coding implementation errors, nonsignificant errors, and changes related to "unknown" values. A total of 50,277 prostate cancer cases diagnosed in 2012 were reviewed. Approximately 94.15% of cases did not have meaningful changes (85.85% correct, 5.58% with a nonsignificant change of <1 ng/mL, and 2.80% with no clinical change). Approximately 5.70% of cases had meaningful changes (1.93% due to implied decimal point errors, 1.54% due to abstract or coding errors, and 2.23% due to errors related to unknown categories). Only 419 of the original 50,277 cases (0.83%) resulted in a change in disease stage due to a corrected PSA value. The implied decimal error rate was only 1.93% of all cases in the current validation study, with a meaningful error rate of 5.81%. The reasons for the lower error rate in SEER are likely due to ongoing and rigorous quality control and visual editing processes by the central registries. The SEER program currently is reviewing and correcting PSA values back to 2004 and will re-release these data in the public use research file. Cancer 2017;123:697-703. © 2016 American Cancer Society. © 2016 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.

  6. Determination and classification of the aerodynamic properties of wing sections

    NASA Technical Reports Server (NTRS)

    Munk, Max M

    1925-01-01

    The following note, prepared for the NACA, contains several remarks on the possible improvement of the experimental determination of the aerodynamic properties of wing sections. It shows how errors of observation can subsequently be partially eliminated, and how the computation of the maxima or minima of aerodynamic characteristics can be much improved.

  7. On Rater Agreement and Rater Training

    ERIC Educational Resources Information Center

    Wang, Binhong

    2010-01-01

    This paper first analyzes two studies on rater factors and rating criteria to raise the problem of rater agreement. It then reveals the causes of discrepancies in rating administration by discussing rater variability and rater bias. The author argues that rater bias cannot be eliminated completely; we can only reduce the error to a…

  8. Does Unit Analysis Help Students Construct Equations?

    ERIC Educational Resources Information Center

    Reed, Stephen K.

    2006-01-01

    Previous research has shown that students construct equations for word problems in which many of the terms have no referents. Experiment 1 attempted to eliminate some of these errors by providing instruction on canceling units. The failure of this method was attributed to the cognitive overload (Sweller, 2003) imposed by adding units to the…

  9. Improving NAVFAC's total quality management of construction drawings with CLIPS

    NASA Technical Reports Server (NTRS)

    Antelman, Albert

    1991-01-01

    A diagnostic expert system to improve the quality of Naval Facilities Engineering Command (NAVFAC) construction drawings and specification is described. C Language Integrated Production System (CLIPS) and computer aided design layering standards are used in an expert system to check and coordinate construction drawings and specifications to eliminate errors and omissions.

  10. Risk and Hazard Management in High Adventure Outdoor Pursuits.

    ERIC Educational Resources Information Center

    Meier, Joel

    The dilemma in adventure education is to eliminate unreasonable risks to participants without reducing the levels of excitement, challenge, and stress that are inherent in adventure programming. Most accidents in outdoor pursuits are caused by a combination of unsafe conditions; unsafe acts (usually on the part of the student); and error judgments…

  11. Turbulence excited frequency domain damping measurement and truncation effects

    NASA Technical Reports Server (NTRS)

    Soovere, J.

    1976-01-01

    Existing frequency domain modal frequency and damping analysis methods are discussed. The effects of truncation in the Laplace and Fourier transform data analysis methods are described. Methods for eliminating truncation errors from measured damping are presented. Implications of truncation effects in fast Fourier transform analysis are discussed. Limited comparison with test data is presented.

  12. NON-RANDOM CELL KILLING IN CRYOPRESERVATION: IMPLICATIONS FOR PERFORMANCE OF THE BATTERY OF LEUKOCYTE TESTS (BLT) - I. TOXIC AND IMMUNOTOXIC EFFECTS

    EPA Science Inventory

    To eliminate between-tests error in longitudinal studies, for specimen sharing, convenient scheduling, etc., it is necessary to freeze freshly separated leukocytes as well as non-transformed, continuous T lymphocyte (CTL) lines. To test the efficacy of a programmable freezer (tempe…

  13. Rounding Technique for High-Speed Digital Signal Processing

    NASA Technical Reports Server (NTRS)

    Wechsler, E. R.

    1983-01-01

    Arithmetic technique facilitates high-speed rounding of 2's complement binary data. Conventional rounding of 2's complement numbers presents problems in high-speed digital circuits. Proposed technique consists of truncating K + 1 bits and then attaching a bit in the least significant position. Mean output error is zero, eliminating the need to introduce a voltage offset at the input.
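
    A sketch of the rule as described above, assuming a two's-complement integer x: drop the K+1 least significant bits, then attach a 1 as the new least significant bit of the kept field. Python's arithmetic right shift matches two's-complement behavior, so negative values work unchanged.

```python
def round_truncate_attach(x, k):
    """Truncate k+1 bits, then attach a 1 bit in the least significant
    position of the kept field (the result is an odd multiple of 2**k)."""
    return (((x >> (k + 1)) << 1) | 1) << k

k = 3
errors = [round_truncate_attach(x, k) - x for x in range(-1000, 1000)]
print(sum(errors) / len(errors))   # 0.5: half an input LSB, i.e. effectively
                                   # zero, so no input voltage offset is needed
```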

  14. Eliminating the Blame Game

    ERIC Educational Resources Information Center

    Swanson, Kristen; Allen, Gayle; Mancabelli, Rob

    2015-01-01

    Even mentioning data analysis puts many educators on edge; they fear that in data discussions, their performance will be judged. And, the authors note, it's a human trait to look for the source of a problem in the behavior of people involved rather than the system surrounding those people--what some call the Fundamental Attribution Error. When…

  15. Using QR Codes to Differentiate Learning for Gifted and Talented Students

    ERIC Educational Resources Information Center

    Siegle, Del

    2015-01-01

    QR codes are two-dimensional square patterns capable of encoding information ranging from web addresses to links to YouTube videos. The codes save typing time and eliminate errors from incorrectly entered addresses. These codes make learning with technology easier for students and engage them motivationally in new ways.

  16. Time-division multiplexer uses digital gates

    NASA Technical Reports Server (NTRS)

    Myers, C. E.; Vreeland, A. E.

    1977-01-01

    Device eliminates errors caused by analog gates in multiplexing a large number of channels at high frequency. System was designed for use in aerospace work to multiplex signals for monitoring such variables as fuel consumption, pressure, temperature, strain, and stress. Circuit may be useful in monitoring variables in process control and medicine as well.

  17. System Measures Thermal Noise In A Microphone

    NASA Technical Reports Server (NTRS)

    Zuckerwar, Allan J.; Ngo, Kim Chi T.

    1994-01-01

    Vacuum provides acoustic isolation from environment. System for measuring thermal noise of microphone and its preamplifier eliminates some sources of error found in older systems. Isolation vessel and exterior suspension, acting together, enable measurement of thermal noise under realistic conditions while providing superior vibrational and acoustical isolation. System yields more accurate measurements of thermal noise.

  18. A Quick Test for the Highly Colored Ions of the Aluminum-Nickel Group.

    ERIC Educational Resources Information Center

    Grenda, Stanley C.

    1986-01-01

    Presents a technique for eliminating errors in the analysis of the nickel subgroup of the aluminum-nickel group cations. Describes the process of color and chemical changes that occur in this group as a result of ligand and coordination number changes. Discusses opportunities for student observations. (TW)

  19. Reflectance measurements

    NASA Technical Reports Server (NTRS)

    Brown, R. A.

    1982-01-01

    The productivity of spectroreflectometer equipment and operating personnel and the accuracy and sensitivity of the measurements were investigated. Optical sensitivity was increased, and the data collection and processing scheme was redesigned to eliminate some unnecessary operations. Two promising approaches to increased sensitivity were identified: conventional processing with error compensation, and detection of random noise modulation.

  20. High accurate interpolation of NURBS tool path for CNC machine tools

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Liu, Huan; Yuan, Songmei

    2016-09-01

    Feedrate fluctuation caused by approximation errors of interpolation methods has great effects on machining quality in NURBS interpolation, but few methods at present can efficiently eliminate or reduce it to a satisfactory level without sacrificing computing efficiency. In order to solve this problem, a highly accurate interpolation method for NURBS tool paths is proposed. The proposed method efficiently reduces feedrate fluctuation by forming a quartic equation with respect to the curve parameter increment, which can be solved by analytic methods in real time. Theoretically, the proposed method totally eliminates feedrate fluctuation for any 2nd degree NURBS curve and interpolates 3rd degree NURBS curves with minimal feedrate fluctuation. Moreover, a smooth feedrate planning algorithm is also proposed to generate smooth tool motion that considers multiple constraints and scheduling errors through an efficient planning strategy. Experiments are conducted to verify the feasibility and applicability of the proposed method. This research presents a novel NURBS interpolation method with not only high accuracy but also satisfactory computing efficiency.
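
    As a rough illustration of how a quartic in the parameter increment can arise, a second-order Taylor step that matches chord length to the commanded feed distance per period gives a quartic whose roots yield the increment. This is a hedged sketch under that Taylor assumption, not the paper's derivation; all names are ours.

    ```python
    import numpy as np

    def next_parameter(c1, c2, u, feed, T):
        # Require |C'(u)*du + 0.5*C''(u)*du^2| = feed*T. Squaring both sides gives
        # 0.25|C''|^2 du^4 + (C'.C'') du^3 + |C'|^2 du^2 - (feed*T)^2 = 0,
        # a quartic in du that admits closed-form (or np.roots) solution.
        coeffs = [0.25 * np.dot(c2, c2), np.dot(c1, c2),
                  np.dot(c1, c1), 0.0, -(feed * T) ** 2]
        roots = np.roots(coeffs)
        # Assumes feed*T is small enough that a positive real root exists.
        du = min(r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0)
        return u + du
    ```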

  1. 20 CFR 10.509 - If an employee's light-duty job is eliminated due to downsizing, what is the effect on compensation?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... to Work-Employer's Responsibilities § 10.509 If an employee's light-duty job is eliminated due to... experienced a compensable recurrence of disability as defined in § 10.5(x) merely because his or her employer... established physical limitations of the injured employee and for which the employer has already prepared a...

  2. Performance of concatenated Reed-Solomon/Viterbi channel coding

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Yuen, J. H.

    1982-01-01

    The concatenated Reed-Solomon (RS)/Viterbi coding system is reviewed. The performance of the system is analyzed and results are derived with a new simple approach. A functional model for the input RS symbol error probability is presented. Based on this new functional model, we compute the performance of a concatenated system in terms of RS word error probability, output RS symbol error probability, bit error probability due to decoding failure, and bit error probability due to decoding error. Finally we analyze the effects of the noisy carrier reference and the slow fading on the system performance.
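
    For readers who want the standard textbook counterparts of these quantities (the paper develops its own functional model for the input symbol error probability), a minimal sketch assuming independent symbol errors and the CCSDS RS(255,223) code with t = 16:

    ```python
    from math import comb

    def rs_word_error(p, n=255, t=16):
        # P(word error) for bounded-distance decoding: more than t of the
        # n symbols are in error, each independently with probability p.
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

    def rs_output_symbol_error(p, n=255, t=16):
        # Common approximation: undecodable words pass their symbol errors through.
        return sum(i * comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(t + 1, n + 1)) / n

    print(rs_word_error(1e-2), rs_output_symbol_error(1e-2))
    ```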

  3. Modeling and forecasting of KLCI weekly return using WT-ANN integrated model

    NASA Astrophysics Data System (ADS)

    Liew, Wei-Thong; Liong, Choong-Yeun; Hussain, Saiful Izzuan; Isa, Zaidi

    2013-04-01

    The forecasting of weekly returns is one of the most challenging tasks in investment, since the time series are volatile and non-stationary. In this study, an integrated wavelet transform and artificial neural network model, WT-ANN, is studied for modeling and forecasting the KLCI weekly return. First, the WT is applied to decompose the weekly return time series in order to eliminate noise. Then, a mathematical model of the time series is constructed using the ANN. The performance of the suggested model is evaluated by root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). The results show that the WT-ANN model can be considered a feasible and powerful model for time series modeling and prediction.
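
    A minimal sketch of the three evaluation metrics named above (array names and shapes are ours):

    ```python
    import numpy as np

    def forecast_metrics(y_true, y_pred):
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        err = y_true - y_pred
        rmse = np.sqrt(np.mean(err ** 2))           # root mean squared error
        mae = np.mean(np.abs(err))                  # mean absolute error
        mape = 100 * np.mean(np.abs(err / y_true))  # mean absolute percentage error
        return rmse, mae, mape
    ```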

  4. Eliminating time dispersion from seismic wave modeling

    NASA Astrophysics Data System (ADS)

    Koene, Erik F. M.; Robertsson, Johan O. A.; Broggini, Filippo; Andersson, Fredrik

    2018-04-01

    We derive an expression for the error introduced by the second-order accurate temporal finite-difference (FD) operator, as present in the FD, pseudospectral and spectral element methods for seismic wave modeling applied to time-invariant media. The `time-dispersion' error speeds up the signal as a function of frequency and time step only. Time dispersion is thus independent of the propagation path, medium or spatial modeling error. We derive two transforms to either add or remove time dispersion from synthetic seismograms after a simulation. The transforms are compared to previous related work and demonstrated on wave modeling in acoustic as well as elastic media. In addition, an application to imaging is shown. The transforms enable accurate computation of synthetic seismograms at reduced cost, benefitting modeling applications in both exploration and global seismology.
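
    A hedged sketch of the removal transform's core idea: second-order FD time stepping maps a true angular frequency w to a numerical one (2/dt)*arcsin(w*dt/2), so reading the simulated spectrum along that mapped axis undoes the frequency-dependent speed-up. This simplified version interpolates the spectrum directly and ignores refinements of the exact transform given in the paper.

    ```python
    import numpy as np

    def remove_time_dispersion(trace, dt):
        n = len(trace)
        U = np.fft.rfft(trace)
        w = 2 * np.pi * np.fft.rfftfreq(n, dt)      # true angular frequency axis
        # Mapped (numerical) frequencies at which to read the FD spectrum.
        w_num = (2 / dt) * np.arcsin(np.clip(w * dt / 2, -1.0, 1.0))
        U_fix = np.interp(w_num, w, U.real) + 1j * np.interp(w_num, w, U.imag)
        return np.fft.irfft(U_fix, n)
    ```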

  5. Linear discriminant analysis with misallocation in training samples

    NASA Technical Reports Server (NTRS)

    Chhikara, R. (Principal Investigator); Mckeon, J.

    1982-01-01

    Linear discriminant analysis for a two-class case is studied in the presence of misallocation in training samples. A general approach to modeling of misallocation is formulated, and the mean vectors and covariance matrices of the mixture distributions are derived. The asymptotic distribution of the discriminant boundary is obtained, and the asymptotic first two moments of the two types of error rate are given. Certain numerical results for the error rates are presented by considering the random and two non-random misallocation models. It is shown that when the allocation procedure for training samples is objectively formulated, the effect of misallocation on the error rates of the Bayes linear discriminant rule can almost be eliminated. If, however, this is not possible, the Fisher rule may be preferred over the Bayes rule.

  6. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total that a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates due to changes in rain statistics arising 1) from evolution of the official algorithms used to process the data, and 2) from differences from other remote sensing systems such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.

  7. Is the psychological refractory period effect for ideomotor compatible tasks eliminated by speed-stress instructions?

    PubMed

    Shin, Yun Kyoung; Cho, Yang Seok; Lien, Mei-Ching; Proctor, Robert W

    2007-09-01

    It has been argued that the psychological refractory period (PRP) effect is eliminated with two ideomotor compatible tasks when instructions stress fast and simultaneous responding. Three experiments were conducted to test this hypothesis. In all experiments, Task 1 required spatially compatible manual responses (left or right) to the direction of an arrow, and Task 2 required saying the name of the auditory letter A or B. In Experiments 1 and 3, the manual responses were keypresses made with the left and right hands, whereas in Experiment 2 they were left-right toggle-switch movements made with the dominant hand. Instructions that stressed response speed reduced reaction time and increased error rate compared to standard instructions to respond fast and accurately, but did not eliminate the PRP effect on Task 2 reaction time. These results imply that, even when response speed is emphasized, ideomotor compatible tasks do not bypass response selection.

  8. Reduction in chemotherapy order errors with computerized physician order entry.

    PubMed

    Meisenberg, Barry R; Wright, Robert R; Brady-Copertino, Catherine J

    2014-01-01

    To measure the number and type of errors associated with chemotherapy order composition under three sequential ordering methods: handwritten orders, preprinted orders, and computerized physician order entry (CPOE) embedded in the electronic health record. From 2008 to 2012, a sample of completed chemotherapy orders was reviewed by a pharmacist for the number and type of errors as part of routine performance improvement monitoring. Error frequencies for each of the three distinct methods of composing chemotherapy orders were compared using statistical methods. The rate of problematic order sets (those requiring significant rework for clarification) was reduced from 30.6% with handwritten orders to 12.6% with preprinted orders (preprinted v handwritten, P < .001) to 2.2% with CPOE (preprinted v CPOE, P < .001). The incidence of errors capable of causing harm was reduced from 4.2% with handwritten orders to 1.5% with preprinted orders (preprinted v handwritten, P < .001) to 0.1% with CPOE (CPOE v preprinted, P < .001). The number of problem- and error-containing chemotherapy orders was reduced sequentially, first by preprinted order sets and then by CPOE. CPOE is associated with low error rates, but it did not eliminate all errors, and the technology can introduce novel types of errors not seen with traditional handwritten or preprinted orders. Vigilance is still required even with CPOE to avoid patient harm.

  9. Eliminating cubic terms in the pseudopotential lattice Boltzmann model for multiphase flow

    NASA Astrophysics Data System (ADS)

    Huang, Rongzong; Wu, Huiying; Adams, Nikolaus A.

    2018-05-01

    It is well recognized that there exist additional cubic terms of velocity in the lattice Boltzmann (LB) model based on the standard lattice. In this work, elimination of these cubic terms in the pseudopotential LB model for multiphase flow is investigated, where the force term and density gradient are considered. By retaining high-order (≥3 ) Hermite terms in the equilibrium distribution function and the discrete force term, as well as introducing correction terms in the LB equation, the additional cubic terms of velocity are entirely eliminated. With this technique, the computational simplicity of the pseudopotential LB model is well maintained. Numerical tests, including stationary and moving flat and circular interface problems, are carried out to show the effects of such cubic terms on the simulation of multiphase flow. It is found that the elimination of additional cubic terms is beneficial to reduce the numerical error, especially when the velocity is relatively large. Numerical results also suggest that these cubic terms mainly take effect in the interfacial region and that the density-gradient-related cubic terms are more important than the other cubic terms for multiphase flow.

  10. C-Terminal End-Directed Protein Elimination by CRL2 Ubiquitin Ligases.

    PubMed

    Lin, Hsiu-Chuan; Yeh, Chi-Wei; Chen, Yen-Fu; Lee, Ting-Ting; Hsieh, Pei-Yun; Rusnac, Domnita V; Lin, Sung-Ya; Elledge, Stephen J; Zheng, Ning; Yen, Hsueh-Chi S

    2018-05-17

    The proteolysis-assisted protein quality control system guards the proteome from potentially detrimental aberrant proteins. How miscellaneous defective proteins are specifically eliminated and which molecular characteristics direct them for removal are fundamental questions. We reveal a mechanism, DesCEND (destruction via C-end degrons), by which CRL2 ubiquitin ligase uses interchangeable substrate receptors to recognize the unusual C termini of abnormal proteins (i.e., C-end degrons). C-end degrons are mostly less than ten residues in length and comprise a few indispensable residues along with some rather degenerate ones. The C-terminal end position is essential for C-end degron function. Truncated selenoproteins generated by translation errors and the USP1 N-terminal fragment from post-translational cleavage are eliminated by DesCEND. DesCEND also targets full-length proteins with naturally occurring C-end degrons. The C-end degron in DesCEND echoes the N-end degron in the N-end rule pathway, highlighting the dominance of protein "ends" as indicators for protein elimination. Copyright © 2018 Elsevier Inc. All rights reserved.

  11. The role of ensemble-based statistics in variational assimilation of cloud-affected observations from infrared imagers

    NASA Astrophysics Data System (ADS)

    Hacker, Joshua; Vandenberghe, Francois; Jung, Byoung-Jo; Snyder, Chris

    2017-04-01

    Effective assimilation of cloud-affected radiance observations from space-borne imagers, with the aim of improving cloud analysis and forecasting, has proven to be difficult. Large observation biases, nonlinear observation operators, and non-Gaussian innovation statistics present many challenges. Ensemble-variational data assimilation (EnVar) systems offer the benefits of flow-dependent background error statistics from an ensemble, and the ability of variational minimization to handle nonlinearity. The specific benefits of ensemble statistics, relative to the static background errors more commonly used in variational systems, have not been quantified for the problem of assimilating cloudy radiances. A simple experiment framework is constructed with a regional NWP model and an operational variational data assimilation system to provide a basis for understanding the importance of ensemble statistics in cloudy radiance assimilation. Restricting the observations to those corresponding to clouds in the background forecast leads to innovations that are more Gaussian. The number of large innovations is reduced compared to the more general case of all observations, but not eliminated. The Huber norm is investigated to handle the fat tails of the distributions and to allow more observations to be assimilated without the need for strict background checks that would eliminate them. Comparing assimilation using only ensemble background error statistics with assimilation using only static background error statistics elucidates the importance of the ensemble statistics. Although the cost functions in both experiments converge to similar values after sufficient outer-loop iterations, the resulting cloud water, ice, and snow content are greater in the ensemble-based analysis. The subsequent forecasts from the ensemble-based analysis also retain more condensed water species, indicating that the local environment is more supportive of clouds. In this presentation we provide details that explain the apparent benefit of using ensembles for cloudy radiance assimilation in an EnVar context.

  12. Galilean-invariant preconditioned central-moment lattice Boltzmann method without cubic velocity errors for efficient steady flow simulations

    NASA Astrophysics Data System (ADS)

    Hajabdollahi, Farzaneh; Premnath, Kannan N.

    2018-05-01

    Lattice Boltzmann (LB) models used for the computation of fluid flows represented by the Navier-Stokes (NS) equations on standard lattices can lead to non-Galilean-invariant (GI) viscous stress involving cubic velocity errors. This arises from the dependence of their third-order diagonal moments on the first-order moments for standard lattices, and strategies have recently been introduced to restore Galilean invariance without such errors using a modified collision operator involving corrections to either the relaxation times or the moment equilibria. Convergence acceleration in the simulation of steady flows can be achieved by solving the preconditioned NS equations, which contain a preconditioning parameter that can be used to tune the effective sound speed, and thereby alleviating the numerical stiffness. In the present paper, we present a GI formulation of the preconditioned cascaded central-moment LB method used to solve the preconditioned NS equations, which is free of cubic velocity errors on a standard lattice, for steady flows. A Chapman-Enskog analysis reveals the structure of the spurious non-GI defect terms and it is demonstrated that the anisotropy of the resulting viscous stress is dependent on the preconditioning parameter, in addition to the fluid velocity. It is shown that partial correction to eliminate the cubic velocity defects is achieved by scaling the cubic velocity terms in the off-diagonal third-order moment equilibria with the square of the preconditioning parameter. Furthermore, we develop additional corrections based on the extended moment equilibria involving gradient terms with coefficients dependent locally on the fluid velocity and the preconditioning parameter. Such parameter dependent corrections eliminate the remaining truncation errors arising from the degeneracy of the diagonal third-order moments and fully restore Galilean invariance without cubic defects for the preconditioned LB scheme on a standard lattice. Several conclusions are drawn from the analysis of the structure of the non-GI errors and the associated corrections, with particular emphasis on their dependence on the preconditioning parameter. The GI preconditioned central-moment LB method is validated for a number of complex flow benchmark problems and its effectiveness to achieve convergence acceleration and improvement in accuracy is demonstrated.

  13. Use of an identification system based on biometric data for patients requiring transfusions guarantees transfusion safety and traceability

    PubMed Central

    Bennardello, Francesco; Fidone, Carmelo; Cabibbo, Sergio; Calabrese, Salvatore; Garozzo, Giovanni; Cassarino, Grazia; Antolino, Agostino; Tavolino, Giuseppe; Zisa, Nuccio; Falla, Cadigia; Drago, Giuseppe; Di Stefano, Giovanna; Bonomo, Pietro

    2009-01-01

    Background One of the most serious risks of blood transfusions is an error in ABO blood group compatibility, which can cause a haemolytic transfusion reaction and, in the most severe cases, the death of the patient. The frequency and type of errors observed suggest that these are inevitable, in that mistakes are inherent to human nature, unless significant changes, including the use of computerised instruments, are made to procedures. Methods In order to identify patients who are candidates for the transfusion of blood components and to guarantee the traceability of the transfusion, the Securblood system (BBS srl) was introduced. This system records the various stages of the transfusion process, the health care workers involved and any immediate transfusion reactions. The patients and staff are identified by fingerprinting or a bar code. The system was implemented within Ragusa hospital in 16 operative units (ordinary wards, day hospital, operating theatres). Results In the period from August 2007 to July 2008, 7282 blood components were transfused within the hospital, of which 5606 (77%) using the Securblood system. Overall, 1777 patients were transfused. In this year of experience, no transfusion errors were recorded and each blood component was transfused to the right patient. We recorded 33 blocks of the terminals (involving 0.6% of the transfused blood components) which required the intervention of staff from the Service of Immunohaematology and Transfusion Medicine (SIMT). Most of the blocks were due to procedural errors. Conclusions The Securblood system guarantees complete traceability of the transfusion process outside the SIMT and eliminates the possibility of mistaken identification of patients or blood components. The use of fingerprinting to identify health care staff (nurses and doctors) and patients obliges the staff to carry out the identification procedures directly in the presence of the patient and guarantees the presence of the doctor at the start of the transfusion. PMID:19657483

  14. Harvesting tree biomass at the stand level to assess the accuracy of field and airborne biomass estimation in savannas.

    PubMed

    Colgan, Matthew S; Asner, Gregory P; Swemmer, Tony

    2013-07-01

    Tree biomass is an integrated measure of net growth and is critical for understanding, monitoring, and modeling ecosystem functions. Despite the importance of accurately measuring tree biomass, several fundamental barriers preclude direct measurement at large spatial scales, including the facts that trees must be felled to be weighed and that even modestly sized trees are challenging to maneuver once felled. Allometric methods allow for estimation of tree mass using structural characteristics, such as trunk diameter. Savanna trees present additional challenges, including limited available allometry and a prevalence of multiple stems per individual. Here we collected airborne lidar data over a semiarid savanna adjacent to the Kruger National Park, South Africa, and then harvested and weighed woody plant biomass at the plot scale to provide a standard against which field and airborne estimation methods could be compared. For an existing airborne lidar method, we found that half of the total error was due to averaging canopy height at the plot scale. This error was eliminated by instead measuring maximum height and crown area of individual trees from lidar data using an object-based method to identify individual tree crowns and estimate their biomass. The best object-based model approached the accuracy of field allometry at both the tree and plot levels, and it more than doubled the accuracy compared to existing airborne methods (17% vs. 44% deviation from harvested biomass). Allometric error accounted for less than one-third of the total residual error in airborne biomass estimates at the plot scale when using allometry with low bias. Airborne methods also gave more accurate predictions at the plot level than did field methods based on diameter-only allometry. These results provide a novel comparison of field and airborne biomass estimates using harvested plots and advance the role of lidar remote sensing in savanna ecosystems.

  15. Back-Propagation Operation for Analog Neural Network Hardware with Synapse Components Having Hysteresis Characteristics

    PubMed Central

    Ueda, Michihito; Nishitani, Yu; Kaneko, Yukihiro; Omote, Atsushi

    2014-01-01

    To realize an analog artificial neural network hardware, the circuit element for the synapse function is important because the number of synapse elements is much larger than that of neuron elements. One of the candidates for this synapse element is a ferroelectric memristor. This device functions as a voltage-controllable variable resistor, which can be applied as a synapse weight. However, its conductance shows hysteresis characteristics and dispersion with respect to the input voltage. Therefore, the conductance values vary according to the history of the height and the width of the applied pulse voltage. Due to the difficulty of controlling the conductance accurately, it is not easy to apply the back-propagation learning algorithm to neural network hardware having memristor synapses. To solve this problem, we proposed and simulated a learning operation procedure as follows. Employing a weight perturbation technique, we derived the error change. When the error decreased, the next pulse voltage was updated according to the back-propagation learning algorithm. If the error increased, the amplitude of the next voltage pulse was set in such a way as to cause a similar memristor conductance but in the opposite voltage-scanning direction. By this operation, we could eliminate the hysteresis, and we confirmed that the simulation of the learning operation converged. We also incorporated conductance dispersion numerically in the simulation. We examined the probability that the error decreased to a designated value within a predetermined loop number. The ferroelectric has the characteristic that the magnitude of polarization does not become smaller when voltages of the same polarity are applied. These characteristics greatly improved the probability, even when the learning rate was small, provided the magnitude of the dispersion is adequate. Because the dispersion of analog circuit elements is inevitable, this learning operation procedure is useful for analog neural network hardware. PMID:25393715

  16. Use of an identification system based on biometric data for patients requiring transfusions guarantees transfusion safety and traceability.

    PubMed

    Bennardello, Francesco; Fidone, Carmelo; Cabibbo, Sergio; Calabrese, Salvatore; Garozzo, Giovanni; Cassarino, Grazia; Antolino, Agostino; Tavolino, Giuseppe; Zisa, Nuccio; Falla, Cadigia; Drago, Giuseppe; Di Stefano, Giovanna; Bonomo, Pietro

    2009-07-01

    One of the most serious risks of blood transfusions is an error in ABO blood group compatibility, which can cause a haemolytic transfusion reaction and, in the most severe cases, the death of the patient. The frequency and type of errors observed suggest that these are inevitable, in that mistakes are inherent to human nature, unless significant changes, including the use of computerised instruments, are made to procedures. In order to identify patients who are candidates for the transfusion of blood components and to guarantee the traceability of the transfusion, the Securblood system (BBS srl) was introduced. This system records the various stages of the transfusion process, the health care workers involved and any immediate transfusion reactions. The patients and staff are identified by fingerprinting or a bar code. The system was implemented within Ragusa hospital in 16 operative units (ordinary wards, day hospital, operating theatres). In the period from August 2007 to July 2008, 7282 blood components were transfused within the hospital, of which 5606 (77%) using the Securblood system. Overall, 1777 patients were transfused. In this year of experience, no transfusion errors were recorded and each blood component was transfused to the right patient. We recorded 33 blocks of the terminals (involving 0.6% of the transfused blood components) which required the intervention of staff from the Service of Immunohaematology and Transfusion Medicine (SIMT). Most of the blocks were due to procedural errors. The Securblood system guarantees complete traceability of the transfusion process outside the SIMT and eliminates the possibility of mistaken identification of patients or blood components. The use of fingerprinting to identify health care staff (nurses and doctors) and patients obliges the staff to carry out the identification procedures directly in the presence of the patient and guarantees the presence of the doctor at the start of the transfusion.

  17. Triangulation Error Analysis for the Barium Ion Cloud Experiment. M.S. Thesis - North Carolina State Univ.

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1973-01-01

    The triangulation method developed specifically for the Barium Ion Cloud Project is discussed. Expressions for the four displacement errors, the three slope errors, and the curvature error in the triangulation solution due to a probable error in the lines-of-sight from the observation stations to points on the cloud are derived. The triangulation method is then used to determine the effect of the following on these different errors in the solution: the number and location of the stations, the observation duration, east-west cloud drift, the number of input data points, and the addition of extra cameras to one of the stations. The pointing displacement errors and the pointing slope errors are compared. The displacement errors in the solution due to a probable error in the position of a moving station, plus the weighting factors for the data from the moving station, are also determined.

  18. Star centroiding error compensation for intensified star sensors.

    PubMed

    Jiang, Jie; Xiong, Kun; Yu, Wenbo; Yan, Jinyun; Zhang, Guangjun

    2016-12-26

    A star sensor provides high-precision attitude information by capturing a stellar image; however, the traditional star sensor has poor dynamic performance, which is attributed to its low sensitivity. Regarding the intensified star sensor, the image intensifier is utilized to improve the sensitivity, thereby further improving the dynamic performance of the star sensor. However, the introduction of image intensifier results in star centroiding accuracy decrease, further influencing the attitude measurement precision of the star sensor. A star centroiding error compensation method for intensified star sensors is proposed in this paper to reduce the influences. First, the imaging model of the intensified detector, which includes the deformation parameter of the optical fiber panel, is established based on the orthographic projection through the analysis of errors introduced by the image intensifier. Thereafter, the position errors at the target points based on the model are obtained by using the Levenberg-Marquardt (LM) optimization method. Last, the nearest trigonometric interpolation method is presented to compensate for the arbitrary centroiding error of the image plane. Laboratory calibration result and night sky experiment result show that the compensation method effectively eliminates the error introduced by the image intensifier, thus remarkably improving the precision of the intensified star sensors.

  19. Evaluation of resolution and periodic errors of a flatbed scanner used for digitizing spectroscopic photographic plates

    PubMed Central

    Wyatt, Madison; Nave, Gillian

    2017-01-01

    We evaluated the use of a commercial flatbed scanner for digitizing photographic plates used for spectroscopy. The scanner has a bed size of 420 mm by 310 mm and a pixel size of about 0.0106 mm. Our tests show that the closest line pairs that can be resolved with the scanner are 0.024 mm apart, only slightly larger than the Nyquist resolution of 0.021 mm expected by the 0.0106 mm pixel size. We measured periodic errors in the scanner using both a calibrated length scale and a photographic plate. We find no noticeable periodic errors in the direction parallel to the linear detector in the scanner, but errors with an amplitude of 0.03 mm to 0.05 mm in the direction perpendicular to the detector. We conclude that large periodic errors in measurements of spectroscopic plates using flatbed scanners can be eliminated by scanning the plates with the dispersion direction parallel to the linear detector by placing the plate along the short side of the scanner. PMID:28463262

  20. A study and simulation of the impact of high-order aberrations to overlay error distribution

    NASA Astrophysics Data System (ADS)

    Sun, G.; Wang, F.; Zhou, C.

    2011-03-01

    With the reduction of design rules, a number of corresponding new technologies, such as i-HOPC, HOWA and DBO, have been proposed and applied to eliminate overlay error. When these technologies are in use, any high-order error distribution needs to be clearly distinguished in order to remove the underlying causes. Lens aberrations are normally thought to mainly impact the Matching Machine Overlay (MMO). However, when using Image-Based Overlay (IBO) measurement tools, aberrations become the dominant influence on Single Machine Overlay (SMO) and even on stage repeatability performance. In this paper, several measurements of the error distributions of the lens of the SMEE SSB600/10 prototype exposure tool are presented. Models that characterize the primary influence from lens magnification, high-order distortion, coma aberration and telecentricity are shown. The contribution to stage repeatability (as measured with IBO tools) from the above errors was predicted with a simulator and compared to experiments. Finally, the drift of each lens distortion affecting SMO was monitored over several days and matched with the measurement results.

  1. Hybrid ICA-Regression: Automatic Identification and Removal of Ocular Artifacts from Electroencephalographic Signals.

    PubMed

    Mannan, Malik M Naeem; Jeong, Myung Y; Kamran, Muhammad A

    2016-01-01

    Electroencephalography (EEG) is a portable brain-imaging technique with the advantage of high temporal resolution that can be used to record the electrical activity of the brain. However, EEG signals are difficult to analyze due to contamination by ocular artifacts, which potentially results in misleading conclusions. Moreover, contamination by ocular artifacts is known to reduce the classification accuracy of a brain-computer interface (BCI). It is therefore very important to remove or reduce these artifacts before EEG signals are analyzed for applications like BCI. In this paper, a hybrid framework that combines independent component analysis (ICA), regression, and high-order statistics is proposed to identify and eliminate artifactual activities from EEG data. We used simulated, experimental, and standard EEG signals to evaluate and analyze the effectiveness of the proposed method. Results demonstrate that the proposed method can effectively remove ocular artifacts while preserving the neuronal signals present in EEG data. A comparison with four methods from the literature, namely ICA, regression analysis, wavelet-ICA (wICA), and regression-ICA (REGICA), confirms the significantly enhanced performance and effectiveness of the proposed method for removing ocular activities from EEG, in terms of lower mean square error and mean absolute error values and higher mutual information between the reconstructed and original EEG.
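
    A minimal sketch of the general ICA-plus-regression idea, not the authors' exact pipeline; the thresholds, array shapes, and the use of kurtosis as the high-order statistic are our assumptions.

    ```python
    import numpy as np
    from scipy.stats import kurtosis
    from sklearn.decomposition import FastICA

    def remove_ocular(eeg, eog, corr_thresh=0.7, kurt_thresh=5.0):
        # eeg: (channels, samples); eog: (samples,) reference channel.
        ica = FastICA(n_components=eeg.shape[0], random_state=0)
        S = ica.fit_transform(eeg.T).T            # components x samples
        for i, s in enumerate(S):
            r = np.corrcoef(s, eog)[0, 1]
            if abs(r) > corr_thresh or kurtosis(s) > kurt_thresh:
                beta = np.dot(s, eog) / np.dot(eog, eog)
                S[i] = s - beta * eog             # regress EOG out of the component
        return ica.inverse_transform(S.T).T       # remix back to channel space
    ```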

  2. Hybrid ICA—Regression: Automatic Identification and Removal of Ocular Artifacts from Electroencephalographic Signals

    PubMed Central

    Mannan, Malik M. Naeem; Jeong, Myung Y.; Kamran, Muhammad A.

    2016-01-01

    Electroencephalography (EEG) is a portable brain-imaging technique with the advantage of high temporal resolution that can be used to record the electrical activity of the brain. However, EEG signals are difficult to analyze due to contamination by ocular artifacts, which potentially results in misleading conclusions. Moreover, contamination by ocular artifacts is known to reduce the classification accuracy of a brain-computer interface (BCI). It is therefore very important to remove or reduce these artifacts before EEG signals are analyzed for applications like BCI. In this paper, a hybrid framework that combines independent component analysis (ICA), regression, and high-order statistics is proposed to identify and eliminate artifactual activities from EEG data. We used simulated, experimental, and standard EEG signals to evaluate and analyze the effectiveness of the proposed method. Results demonstrate that the proposed method can effectively remove ocular artifacts while preserving the neuronal signals present in EEG data. A comparison with four methods from the literature, namely ICA, regression analysis, wavelet-ICA (wICA), and regression-ICA (REGICA), confirms the significantly enhanced performance and effectiveness of the proposed method for removing ocular activities from EEG, in terms of lower mean square error and mean absolute error values and higher mutual information between the reconstructed and original EEG. PMID:27199714

  3. The Impact of Environmental and Endogenous Damage on Somatic Mutation Load in Human Skin Fibroblasts

    PubMed Central

    Saini, Natalie; Chan, Kin; Grimm, Sara A.; Dai, Shuangshuang; Fargo, David C.; Kaufmann, William K.; Taylor, Jack A.; Lee, Eunjung; Cortes-Ciriano, Isidro; Park, Peter J.; Schurman, Shepherd H.; Malc, Ewa P.; Mieczkowski, Piotr A.

    2016-01-01

    Accumulation of somatic changes, due to environmental and endogenous lesions, in the human genome is associated with aging and cancer. Understanding the impacts of these processes on mutagenesis is fundamental to understanding the etiology, and improving the prognosis and prevention of cancers and other genetic diseases. Previous methods relying on either the generation of induced pluripotent stem cells, or sequencing of single-cell genomes were inherently error-prone and did not allow independent validation of the mutations. In the current study we eliminated these potential sources of error by high coverage genome sequencing of single-cell derived clonal fibroblast lineages, obtained after minimal propagation in culture, prepared from skin biopsies of two healthy adult humans. We report here accurate measurement of genome-wide magnitude and spectra of mutations accrued in skin fibroblasts of healthy adult humans. We found that every cell contains at least one chromosomal rearrangement and 600–13,000 base substitutions. The spectra and correlation of base substitutions with epigenomic features resemble many cancers. Moreover, because biopsies were taken from body parts differing by sun exposure, we can delineate the precise contributions of environmental and endogenous factors to the accrual of genetic changes within the same individual. We show here that UV-induced and endogenous DNA damage can have a comparable impact on the somatic mutation loads in skin fibroblasts. Trial Registration ClinicalTrials.gov NCT01087307 PMID:27788131

  4. Enhanced control of a flexure-jointed micromanipulation system using a vision-based servoing approach

    NASA Astrophysics Data System (ADS)

    Chuthai, T.; Cole, M. O. T.; Wongratanaphisan, T.; Puangmali, P.

    2018-01-01

    This paper describes a high-precision motion control implementation for a flexure-jointed micromanipulator. A desktop experimental motion platform has been created based on a 3RUU parallel kinematic mechanism, driven by rotary voice coil actuators. The three arms supporting the platform have rigid links with compact flexure joints as integrated parts and are made by single-process 3D printing. The mechanism's overall size is approximately 250 x 250 x 100 mm. The workspace is relatively large for a flexure-jointed mechanism, approximately 20 x 20 x 6 mm. A servo-control implementation based on pseudo-rigid-body models (PRBM) of the kinematic behavior, combined with nonlinear PID control, has been developed. This achieves fast response with good noise rejection and platform stability. However, large errors in absolute positioning occur due to deficiencies in the PRBM kinematics, which cannot accurately capture flexure compliance behavior. To overcome this problem, visual servoing is employed, where a digital microscopy system directly measures the platform position by image processing. By adopting nonlinear PID feedback of the measured angles of the actuated joints as inner control loops, combined with auxiliary feedback of vision-based measurements, the absolute positioning error can be eliminated. With controller gain tuning, fast dynamic response and low residual vibration of the end platform can be achieved, with absolute positioning accuracy within ±1 micron.

  5. RMP Enhanced Transport and Rotation Screening in DIII-D Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izzo, V; Joseph, I; Moyer, R

    The application of resonant magnetic perturbations (RMP) to DIII-D plasmas at low collisionality has achieved ELM suppression, primarily due to a pedestal density reduction. The mechanism of the enhanced particle transport is investigated in 3D MHD simulations with the NIMROD code. The simulations apply realistic vacuum fields from the DIII-D I-coils, C-coils and measured intrinsic error fields to an EFIT-reconstructed DIII-D equilibrium, and allow the plasma to respond to the applied fields while the fields are fixed at the boundary, which lies in the vacuum region. A non-rotating plasma amplifies the resonant components of the applied fields by factors of 2-5. The poloidal velocity forms E x B convection cells crossing the separatrix, which push particles into the vacuum region and reduce the pedestal density. Low toroidal rotation at the separatrix reduces the resonant field amplitudes, but does not strongly affect the particle pumpout. At higher separatrix rotation, the poloidal E x B velocity is reduced by half, while the enhanced particle transport is entirely eliminated. A high-collisionality DIII-D equilibrium with an experimentally measured rotation profile serves as the starting point for a simulation with odd-parity I-coil fields that can ultimately be compared with experimental results. All of the NIMROD results are compared with analytic error field theory.

  6. A five-collector system for the simultaneous measurement of argon isotope ratios in a static mass spectrometer

    USGS Publications Warehouse

    Stacey, J.S.; Sherrill, N.D.; Dalrymple, G.B.; Lanphere, M.A.; Carpenter, N.V.

    1981-01-01

    A system is described that utilizes five separate Faraday-cup collector assemblies, aligned along the focal plane of a mass spectrometer, to collect simultaneous argon ion beams at masses 36-40. Each collector has its own electrometer amplifier and analog-to-digital measuring channel, the outputs of which are processed by a minicomputer that also controls the mass spectrometer. The mass spectrometer utilizes a 90° sector magnetic analyzer with a radius of 23 cm, in which some degree of z-direction focussing is provided for all the ion beams by the fringe field of the magnet. Simultaneous measurement of the ion beams helps to eliminate mass-spectrometer memory as a significant source of measurement error during an analysis. Isotope ratios stabilize between 7 and 9 s after sample admission into the spectrometer, and thereafter changes in the measured ratios are linear, typically to within ±0.02%. Thus the multi-collector arrangement permits very short extrapolation times for computation of initial ratios, and also provides the advantages of simultaneous measurement of the ion currents in that errors due to variations in ion beam intensity are minimized. A complete analysis takes less than 10 min, so that sample throughput can be greatly enhanced. In this instrument, the factor limiting analytical precision now lies in short-term apparent variations in the interchannel calibration factors. © 1981.
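
    Since the measured ratios become linear roughly 7-9 s after sample admission, the initial ratio can be recovered by a short linear extrapolation back to the admission time. A minimal sketch of that step; the function name and the 8 s cutoff are our assumptions.

    ```python
    import numpy as np

    def initial_ratio(times_s, ratios, t_linear=8.0):
        # Fit only the linear portion of the ratio record, then extrapolate
        # back to t = 0 (sample admission) to obtain the initial ratio.
        t = np.asarray(times_s, float)
        r = np.asarray(ratios, float)
        mask = t >= t_linear
        slope, intercept = np.polyfit(t[mask], r[mask], 1)
        return intercept
    ```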

  7. Comparing errors in ED computer-assisted vs conventional pediatric drug dosing and administration.

    PubMed

    Yamamoto, Loren; Kanemori, Joan

    2010-06-01

    Compared to fixed-dose single-vial drug administration in adults, pediatric drug dosing and administration requires a series of calculations, all of which are potentially error prone. The purpose of this study is to compare error rates and task completion times for common pediatric medication scenarios using computer program assistance vs conventional methods. Two versions of a 4-part paper-based test were developed. Each part consisted of a set of medication administration and/or dosing tasks. Emergency department and pediatric intensive care unit nurse volunteers completed these tasks using both methods (with the sequence assigned to start with either the conventional or the computer-assisted approach). Completion times, errors, and the reason for each error were recorded. Thirty-eight nurses completed the study. Summing the completion times of all 4 parts, the mean conventional total time was 1243 seconds vs the mean computer program total time of 879 seconds (P < .001). The conventional manual method had a mean of 1.8 errors vs the computer program with a mean of 0.7 errors (P < .001). Of the 97 total errors, 36 were due to misreading the drug concentration on the label, 34 were due to calculation errors, and 8 were due to misplaced decimals. Of the 36 label interpretation errors, 18 (50%) occurred with digoxin or insulin. Computerized assistance reduced errors and the time required for drug administration calculations. A pattern of errors emerged, noting that reading/interpreting certain drug labels was more error prone. Optimizing the layout of drug labels could reduce the error rate for error-prone labels. Copyright (c) 2010 Elsevier Inc. All rights reserved.
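
    The chain of calculations the computer assistant replaces is short but error-prone by hand. A minimal sketch; the function name, values, and cap are illustrative only, not clinical guidance.

    ```python
    def dose_and_volume(weight_kg, dose_mg_per_kg, conc_mg_per_ml, max_dose_mg=None):
        dose_mg = weight_kg * dose_mg_per_kg        # weight-based dose
        if max_dose_mg is not None:
            dose_mg = min(dose_mg, max_dose_mg)     # apply a maximum-dose cap
        volume_ml = dose_mg / conc_mg_per_ml        # volume to draw up
        return dose_mg, volume_ml

    # e.g. an 18 kg child, 15 mg/kg, vial labeled 100 mg/mL:
    print(dose_and_volume(18, 15, 100))             # (270.0 mg, 2.7 mL)
    ```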

  8. Defense Logistics Agency Disposition Services Afghanistan Disposal Process Needed Improvement

    DTIC Science & Technology

    2013-11-08

    audit, and management was proactive in correcting the deficiencies we identified. DLA DS eliminated backlogs, identified and corrected system ... problems, provided additional system training, corrected coding errors, added personnel to key positions, addressed scale issues, submitted debit ... Service Automated Information System to the Reutilization Business Integration (RBI) solution. The implementation of RBI in Afghanistan occurred in

  9. Multi-saline sample distillation apparatus for hydrogen isotope analyses : design and accuracy

    USGS Publications Warehouse

    Hassan, Afifa Afifi

    1981-01-01

    A distillation apparatus for saline water samples was designed and tested. Six samples may be distilled simultaneously. The temperature was maintained at 400 °C to ensure complete dehydration of the precipitating salts. Consequently, the error in the measured ratio of stable hydrogen isotopes resulting from incomplete dehydration of hydrated salts during distillation was eliminated. (USGS)

  10. Full Wave Analysis of Passive Microwave Monolithic Integrated Circuit Devices Using a Generalized Finite Difference Time Domain (GFDTD) Algorithm

    NASA Technical Reports Server (NTRS)

    Lansing, Faiza S.; Rascoe, Daniel L.

    1993-01-01

    This paper presents a modified Finite-Difference Time-Domain (FDTD) technique using a generalized conformed orthogonal grid. The use of the Conformed Orthogonal Grid, Finite Difference Time Domain (GFDTD) technique enables the designer to match all the circuit dimensions, hence eliminating a major source of error in the analysis.

  11. Dynamic modelling and estimation of the error due to asynchronism in a redundant asynchronous multiprocessor system

    NASA Technical Reports Server (NTRS)

    Huynh, Loc C.; Duval, R. W.

    1986-01-01

    The use of Redundant Asynchronous Multiprocessor Systems to achieve ultrareliable Fault Tolerant Control Systems shows great promise. The development has been hampered by the inability to determine whether differences in the outputs of redundant CPU's are due to failures or to accrued error built up by slight differences in CPU clock intervals. This study derives an analytical dynamic model of the difference between redundant CPU's due to differences in their clock intervals and uses this model with on-line parameter identification to identify the differences in the clock intervals. The methodology can accurately track errors due to asynchronism and generate an error signal with the effect of asynchronism removed; this signal may be used to detect and isolate actual system failures.
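
    A hedged sketch of the on-line identification step: if the accrued divergence between two CPUs' outputs is modeled as a linear drift proportional to the clock-interval difference, a scalar recursive least squares filter tracks the skew. The paper's dynamic model is more detailed; all names here are ours.

    ```python
    def estimate_clock_skew(times, divergence, lam=0.99):
        # Recursive least squares fit of divergence[k] ~ skew * times[k],
        # with forgetting factor lam to follow slow changes in the skew.
        theta, P = 0.0, 1e6
        for t, d in zip(times, divergence):
            K = P * t / (lam + t * P * t)   # gain
            theta += K * (d - t * theta)    # update skew estimate
            P = (P - K * t * P) / lam       # update covariance
        return theta
    ```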

  12. Bioelimination of ⁵¹Cr and ⁸⁵Sr by cockroaches, Gromphadorhina portentosa (Orthoptera: Blaberidae), as affected by mites, Gromphadorholaelaps schaeferi (Parasitiformes: Laelapidae)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schowalter, T.D.; Crossley, D.A. Jr.

    1982-03-01

    This paper describes rates of Chromium-51 and Strontium-85 assimilation and bioelimination by the hissing cockroach, Gromphadorhina portentosa (Schaum), when the symbiotic mite, Gromphadorholaelaps schaeferi Till, was present or removed. Mite-infested cockroaches had significantly higher rates of ⁵¹Cr elimination relative to mite-free cockroaches, implying more rapid gut clearance times. We did not find a significant mite effect on ⁸⁵Sr elimination by the host, but mite effects could have been masked by the apparently unique process of nutrient assimilation and elimination by G. portentosa. Conventional models of radioactive tracer bioelimination predict a rapid initial loss of tracer due to gut clearance, followed by a slower loss due to excretion of assimilated tracer. Our results indicated that assimilated ⁸⁵Sr was eliminated earlier than unassimilated ⁸⁵Sr was lost by defecation.

  13. Visuomotor adaptation needs a validation of prediction error by feedback error

    PubMed Central

    Gaveau, Valérie; Prablanc, Claude; Laurent, Damien; Rossetti, Yves; Priot, Anne-Emmanuelle

    2014-01-01

    The processes underlying short-term plasticity induced by visuomotor adaptation to a shifted visual field are still debated. Two main sources of error can induce motor adaptation: reaching feedback errors, which correspond to visually perceived discrepancies between hand and target positions, and errors between predicted and actual visual reafferences of the moving hand. These two sources of error are closely intertwined and difficult to disentangle, as both the target and the reaching limb are simultaneously visible. Accordingly, the goal of the present study was to clarify the relative contributions of these two types of errors during a pointing task under prism-displaced vision. In the "terminal feedback error" condition, subjects were allowed to view their hand only at movement end, simultaneously with viewing of the target. In the "movement prediction error" condition, viewing of the hand was limited to movement duration, in the absence of any visual target, and error signals arose solely from comparisons between predicted and actual reafferences of the hand. In order to prevent intentional corrections of errors, a subthreshold, progressive stepwise increase in prism deviation was used, so that subjects remained unaware of the visual deviation applied in both conditions. An adaptive aftereffect was observed in the "terminal feedback error" condition only. As long as subjects remained unaware of the optical deviation and self-assigned their pointing errors, prediction error alone was insufficient to induce adaptation. These results indicate a critical role of hand-to-target feedback error signals in visuomotor adaptation; consistent with recent neurophysiological findings, they suggest that a combination of feedback and prediction error signals is necessary for eliciting aftereffects. They also suggest that feedback error updates the prediction of reafferences when a visual perturbation is introduced gradually and cognitive factors are eliminated or strongly attenuated. PMID:25408644

  14. The application of interference fits for overcoming limitations in clamping methodologies for cryo-cooling first crystal configurations in x-ray monochromators

    NASA Astrophysics Data System (ADS)

    Stimson, J.; Docker, P.; Ward, M.; Kay, J.; Chapon, L.; Diaz-Moreno, S.

    2017-12-01

    The work detailed here describes how a novel approach has been applied to overcome the challenging task of cryo-cooling the first monochromator crystals of many of the world's synchrotrons' more challenging beam lines. The beam line configuration investigated in this work requires the crystal to diffract 15 watts of 4-34 keV X-rays and dissipate the additional 485 watts of redundant X-ray power without significant deformation of the crystal surface. In this case the beam footprint is 25 mm by 25 mm on a crystal surface measuring 38 mm by 25 mm, and the crystal must maintain a radius of curvature of more than 50 km. Currently the crystal is clamped between two copper heat exchangers which have LN2 flowing through them. Two conditions must be met simultaneously in this scenario: the crystal needs to be clamped strongly enough to prevent thermal deformation from developing, while being loose enough not to mechanically deform the diffracting surface. An additional source of error arises because the configuration is assembled by hand, introducing human error into the assembly procedure. The new approach explores making the first crystal cylindrical with a sleeve heat exchanger. By manufacturing the copper sleeve to be slightly larger than the silicon crystal at room temperature, the sleeve can be slid over the silicon and, when cooled, will form an interference fit. This has the additional advantage that the crystal and its heat exchanger become a single entity and will always perform the same way each time it is used, eliminating error due to assembly. Various fits have been explored to investigate the associated crystal surface deformations under such a regime.

  15. Generating standardized image data for testing and calibrating quantification of volumes, surfaces, lengths, and object counts in fibrous and porous materials using X-ray microtomography.

    PubMed

    Jiřík, Miroslav; Bartoš, Martin; Tomášek, Petr; Malečková, Anna; Kural, Tomáš; Horáková, Jana; Lukáš, David; Suchý, Tomáš; Kochová, Petra; Hubálek Kalbáčová, Marie; Králíčková, Milena; Tonar, Zbyněk

    2018-06-01

    Quantification of the structure and composition of biomaterials using micro-CT requires image segmentation due to the low contrast and overlapping radioopacity of biological materials. The amount of bias introduced by segmentation procedures is generally unknown. We aim to develop software that generates three-dimensional models of fibrous and porous structures with known volumes, surfaces, lengths, and object counts in fibrous materials and to provide a software tool that calibrates quantitative micro-CT assessments. Virtual image stacks were generated using the newly developed software TeIGen, enabling the simulation of micro-CT scans of unconnected tubes, connected tubes, and porosities. A realistic noise generator was incorporated. Forty image stacks were evaluated using micro-CT, and the error between the true known and estimated data was quantified. Starting with geometric primitives, the error of the numerical estimation of surfaces and volumes was eliminated, thereby enabling the quantification of volumes and surfaces of colliding objects. Analysis of the sensitivity of the thresholding upon parameters of generated testing image sets revealed the effects of decreasing resolution and increasing noise on the accuracy of the micro-CT quantification. The size of the error increased with decreasing resolution when the voxel size exceeded 1/10 of the typical object size, which simulated the effect of the smallest details that could still be reliably quantified. Open-source software for calibrating quantitative micro-CT assessments by producing and saving virtually generated image data sets with known morphometric data was made freely available to researchers involved in morphometry of three-dimensional fibrillar and porous structures in micro-CT scans. © 2018 Wiley Periodicals, Inc.
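
    In the spirit of TeIGen, a minimal sketch of calibrating a thresholding pipeline against a phantom of known volume; the geometry, noise model, and threshold are our assumptions.

    ```python
    import numpy as np

    def tube_phantom_check(shape=(64, 64, 64), radius=10.0, noise=0.05, thr=0.5):
        # Build a straight tube (cylinder along z) with exactly known voxel volume.
        _, y, x = np.indices(shape)
        cy, cx = shape[1] / 2, shape[2] / 2
        tube = ((x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2).astype(float)
        true_vol = tube.sum()
        # Add noise and segment by a simple threshold, as a micro-CT pipeline might.
        noisy = tube + noise * np.random.default_rng(0).standard_normal(shape)
        est_vol = (noisy > thr).sum()
        return true_vol, est_vol, (est_vol - true_vol) / true_vol

    print(tube_phantom_check())  # (known volume, estimated volume, relative error)
    ```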

  16. Neural Network Burst Pressure Prediction in Graphite/Epoxy Pressure Vessels from Acoustic Emission Amplitude Data

    NASA Technical Reports Server (NTRS)

    Hill, Eric v. K.; Walker, James L., II; Rowell, Ginger H.

    1995-01-01

    Acoustic emission (AE) data were taken during hydroproof for three sets of ASTM standard 5.75 inch diameter filament wound graphite/epoxy bottles. All three sets of bottles had the same design and were wound from the same graphite fiber; the only difference was in the epoxies used. Two of the epoxies had similar mechanical properties, and because the acoustic properties of materials are a function of their stiffnesses, it was thought that the AE data from the two sets might also be similar; however, this was not the case. Therefore, the three resin types were categorized using dummy variables, which allowed the prediction of burst pressures for all three sets of bottles using a single neural network. Three bottles from each set were used to train the network. The resin category, the AE amplitude distribution data taken up to 25% of the expected burst pressure, and the actual burst pressures were used as inputs. Architecturally, the network consisted of a forty-three neuron input layer (a single categorical variable defining the resin type plus forty-two continuous variables for the AE amplitude frequencies), a fifteen neuron hidden layer for mapping, and a single output neuron for burst pressure prediction. The network trained on all three bottle sets was able to predict burst pressures in the remaining bottles with a worst-case error of +6.59%, slightly greater than the desired goal of ±5%. This larger than desired error was due to poor resolution in the amplitude data for the third bottle set. When the third set of bottles was eliminated from consideration, only four hidden layer neurons were necessary to generate a worst-case prediction error of -3.43%, well within the desired goal.
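
    For illustration only, the following sketch reproduces the 43-15-1 architecture described (one categorical resin variable plus 42 AE amplitude bins, a 15-neuron hidden layer, one burst-pressure output). The weights are random placeholders; the trained weights and data of the paper are not reproduced.

        # Hedged sketch of the network structure, not the trained model.
        import numpy as np

        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(size=(15, 43)) * 0.1, np.zeros(15)
        W2, b2 = rng.normal(size=(1, 15)) * 0.1, np.zeros(1)

        def predict_burst_pressure(resin_code, ae_amplitude_hist):
            """resin_code: scalar dummy variable; ae_amplitude_hist: 42 amplitude bins."""
            x = np.concatenate([[resin_code], ae_amplitude_hist])
            h = np.tanh(W1 @ x + b1)          # hidden layer "for mapping"
            return (W2 @ h + b2)[0]           # single output neuron

        print(predict_burst_pressure(1.0, rng.random(42)))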

  17. Harmonics rejection in pixelated interferograms using spatio-temporal demodulation.

    PubMed

    Padilla, J M; Servin, M; Estrada, J C

    2011-09-26

    Pixelated phase-mask interferograms have become an industry standard in spatial phase-shifting interferometry. These pixelated interferograms allow full wavefront encoding using a single interferogram. This allows the study of fast dynamic events in hostile mechanical environments. Recently an error-free demodulation method for ideal pixelated interferograms was proposed. However, non-ideal conditions in interferometry may arise due to non-linear response of the CCD camera, multiple light paths in the interferometer, etc. These conditions generate non-sinusoidal fringes containing harmonics which degrade the phase estimation. Here we show that two-dimensional Fourier demodulation of pixelated interferograms rejects most harmonics except the complex ones at {-3rd, +5th, -7th, +9th, -11th, …}. We propose temporal phase-shifting to remove these remaining harmonics. In particular, a 2-step phase-shifting algorithm is used to eliminate the -3rd and +5th complex harmonics, while a 3-step one is used to remove the -3rd, +5th, -7th and +9th complex harmonics. © 2011 Optical Society of America

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldsmith, John

    High Spectral Resolution Lidar (HSRL) systems provide vertical profiles of optical depth, backscatter cross-section, depolarization, and backscatter phase function. All HSRL measurements are absolutely calibrated by reference to molecular scattering, which is measured at each point in the lidar profile. Like the Raman lidar but unlike simple backscatter lidars such as the micropulse lidar, the HSRL can measure backscatter cross-sections and optical depths without prior assumptions about the scattering properties of the atmosphere. The depolarization observations also allow robust discrimination between ice and water clouds. In addition, rigorous error estimates can be computed for all measurements. A very narrow angular field of view reduces multiple scattering contributions. The small field of view, coupled with a narrow optical bandwidth, nearly eliminates noise due to scattered sunlight. There are two operational U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility HSRL systems, one at the Barrow North Slope of Alaska (NSA) site and the other in the second ARM Mobile Facility (AMF2) collection of instrumentation.

  19. State feedback integral control for a rotary direct drive servo valve using a Lyapunov function approach.

    PubMed

    Yu, Jue; Zhuang, Jian; Yu, Dehong

    2015-01-01

    This paper concerns a state feedback integral control using a Lyapunov function approach for a rotary direct drive servo valve (RDDV) in the presence of parameter uncertainties. Modeling of the RDDV servo valve reveals that its mechanical performance is strongly influenced by friction torques and flow torques; however, these torques are uncertain and variable due to the nature of fluid flow. To counteract the load resistance and to achieve satisfactory position responses, this paper develops a state feedback control that integrates an integral action and a Lyapunov function. The integral action is introduced to address the nonzero steady-state error; in particular, the Lyapunov function is employed to improve control robustness by adjusting the varying parameters within their value ranges. This new controller also has the advantages of a simple structure and ease of implementation. Simulation and experimental results demonstrate that the proposed controller can achieve higher control accuracy and stronger robustness. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
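
    As a hedged sketch of the control structure described (state feedback plus an integral action on the position error), the simulation below uses a generic second-order plant, not the RDDV model; the plant parameters and gains are illustrative values tuned by hand.

        # Minimal sketch: integral state feedback removing steady-state error.
        dt, T = 1e-3, 5.0
        a1, a2, b = 8.0, 25.0, 50.0          # placeholder plant: x'' = -a1*x' - a2*x + b*u
        Kx, Kv, Ki = 4.0, 0.4, 60.0          # feedback and integral gains (illustrative)

        x = v = integ = 0.0
        x_ref = 1.0                          # step position command
        for _ in range(int(T / dt)):
            e = x_ref - x
            integ += e * dt                  # integral state drives steady-state error to zero
            u = Kx * e - Kv * v + Ki * integ # state feedback + integral action
            acc = -a1 * v - a2 * x + b * u
            v += acc * dt
            x += v * dt
        print(f"final position ~ {x:.4f} (reference {x_ref})")

    Without the Ki term, any constant load torque would leave a residual offset; the integral state is what the abstract's "nonzero steady-state error" remedy refers to.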

  20. Improvement of short-term numerical wind predictions

    NASA Astrophysics Data System (ADS)

    Bedard, Joel

    Geophysical Model Output Statistics (GMOS) are developed to optimize the use of NWP for complex sites. GMOS differs from other MOS widely used by meteorological centers in the following respects: it takes into account surrounding geophysical parameters such as surface roughness and terrain height, along with wind direction; and it can be applied directly without any training, although training further improves the results. GMOS was applied to improve the Environment Canada GEM-LAM 2.5 km forecasts at North Cape (PEI, Canada): it improves the prediction RMSE by 25-30% for all time horizons and almost all meteorological conditions; the topographic signature of the forecast error due to insufficient grid refinement is eliminated; and the NWP combined with GMOS outperforms persistence from a 2 h horizon onward, instead of 4 h without GMOS. Finally, GMOS was applied at another site (Bouctouche, NB, Canada): similar improvements were observed, demonstrating its general applicability. Keywords: wind energy, wind power forecast, numerical weather prediction, complex sites, model output statistics

  1. Computer-aided target tracking in motion analysis studies

    NASA Astrophysics Data System (ADS)

    Burdick, Dominic C.; Marcuse, M. L.; Mislan, J. D.

    1990-08-01

    Motion analysis studies require the precise tracking of reference objects in sequential scenes. In a typical situation, events of interest are captured at high frame rates using special cameras, and selected objects or targets are tracked on a frame by frame basis to provide necessary data for motion reconstruction. Tracking is usually done using manual methods which are slow and prone to error. A computer based image analysis system has been developed that performs tracking automatically. The objective of this work was to eliminate the bottleneck due to manual methods in high volume tracking applications such as the analysis of crash test films for the automotive industry. The system has proven to be successful in tracking standard fiducial targets and other objects in crash test scenes. Over 95 percent of target positions which could be located using manual methods can be tracked by the system, with a significant improvement in throughput over manual methods. Future work will focus on the tracking of clusters of targets and on tracking deformable objects such as airbags.
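
    The core operation behind such automated tracking is typically template matching; as a rough illustration (not the system's code), the sketch below locates a fiducial template in a frame by normalized cross-correlation, taking the peak as the target position. Real trackers add search windows around the previous position and subpixel refinement.

        # Hedged sketch: brute-force normalized cross-correlation tracking.
        import numpy as np

        def ncc_track(frame, template):
            th, tw = template.shape
            t = template - template.mean()
            best, pos = -2.0, (0, 0)
            for i in range(frame.shape[0] - th + 1):
                for j in range(frame.shape[1] - tw + 1):
                    w = frame[i:i+th, j:j+tw]
                    wz = w - w.mean()
                    denom = np.sqrt((wz**2).sum() * (t**2).sum())
                    score = (wz * t).sum() / denom if denom > 0 else -1.0
                    if score > best:
                        best, pos = score, (i, j)
            return pos, best

        rng = np.random.default_rng(1)
        frame = rng.random((60, 60)); frame[20:28, 30:38] += 2.0   # bright "target"
        pos, score = ncc_track(frame, frame[20:28, 30:38].copy())
        print(pos, round(score, 3))                                # -> (20, 30), 1.0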

  2. High accuracy electronic material level sensor

    DOEpatents

    McEwan, T.E.

    1997-03-11

    The High Accuracy Electronic Material Level Sensor (electronic dipstick) is a sensor based on time domain reflectometry (TDR) of very short electrical pulses. Pulses are propagated along a transmission line or guide wire that is partially immersed in the material being measured; a launcher plate is positioned at the beginning of the guide wire. Reflected pulses are produced at the material interface due to the change in dielectric constant. The time difference of the reflections at the launcher plate and at the material interface is used to determine the material level. Improved performance is obtained by the incorporation of: (1) a high accuracy time base that is referenced to a quartz crystal, (2) an ultrawideband directional sampler to allow operation without an interconnect cable between the electronics module and the guide wire, (3) constant fraction discriminators (CFDs) that allow accurate measurements regardless of material dielectric constants, and reduce or eliminate errors induced by triple-transit or "ghost" reflections on the interconnect cable. These improvements make the dipstick accurate to better than 0.1%. 4 figs.
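
    The underlying TDR arithmetic is simple; as a toy calculation (values illustrative, not from the patent), the material depth below the launcher plate is the round-trip delay between the two echoes times the pulse speed, halved:

        # Minimal sketch of the TDR level computation.
        C = 299_792_458.0                    # pulse speed on the wire in air (~ c)

        def material_level(t_launcher_s, t_interface_s, guide_length_m):
            """Fill level on the guide wire from the two echo arrival times."""
            depth = (t_interface_s - t_launcher_s) * C / 2.0   # one-way distance
            return guide_length_m - depth                      # material height

        # Example: interface echo 4.0 ns after the launcher echo on a 1 m wire
        print(f"fill level ~ {material_level(0.0, 4.0e-9, 1.0):.3f} m")   # ~0.400 m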

  3. High accuracy electronic material level sensor

    DOEpatents

    McEwan, Thomas E.

    1997-01-01

    The High Accuracy Electronic Material Level Sensor (electronic dipstick) is a sensor based on time domain reflectometry (TDR) of very short electrical pulses. Pulses are propagated along a transmission line or guide wire that is partially immersed in the material being measured; a launcher plate is positioned at the beginning of the guide wire. Reflected pulses are produced at the material interface due to the change in dielectric constant. The time difference of the reflections at the launcher plate and at the material interface are used to determine the material level. Improved performance is obtained by the incorporation of: 1) a high accuracy time base that is referenced to a quartz crystal, 2) an ultrawideband directional sampler to allow operation without an interconnect cable between the electronics module and the guide wire, 3) constant fraction discriminators (CFDs) that allow accurate measurements regardless of material dielectric constants, and reduce or eliminate errors induced by triple-transit or "ghost" reflections on the interconnect cable. These improvements make the dipstick accurate to better than 0.1%.

  4. First measurement of Lyman alpha x-ray lines in hydrogen-like vanadium: results and implications for precision wavelength metrology and tests of QED

    NASA Astrophysics Data System (ADS)

    Gillaspy, J. D.; Chantler, C. T.; Paterson, D.; Hudson, L. T.; Serpa, F. G.; Takács, E.

    2010-04-01

    The first measurement of hydrogen-like vanadium x-ray Lyman alpha transitions has been made. The measurement was made on an absolute scale, fully independent of atomic structure calculations. Sufficient signal was obtained to reduce the statistical uncertainty to a small fraction of the total uncertainty budget. Potential sources of systematic error due to Doppler shifts were eliminated by performing the measurement on trapped ions. The energies for Ly α1 (1s-2p3/2) and Ly α2 (1s-2p1/2) are found to be 5443.95(25) eV and 5431.10(25) eV, respectively. These results are within approximately 1.5 σ (experimental) of the theoretical values 5443.63 eV and 5430.70 eV. The results are discussed in terms of their relation to the Lamb shift and the development of an x-ray wavelength standard based on a compact source of trapped highly charged ions.

  5. High-power, null-type, inverted pendulum thrust stand.

    PubMed

    Xu, Kunning G; Walker, Mitchell L R

    2009-05-01

    This article presents the theory and operation of a null-type, inverted pendulum thrust stand. The thrust stand design supports thrusters having a total mass up to 250 kg and measures thrust over a range of 1 mN to 5 N. The design uses a conventional inverted pendulum to increase sensitivity, coupled with a null-type feature to eliminate thrust alignment error due to deflection of thrust. The thrust stand position serves as the input to the null-circuit feedback control system and the output is the current to an electromagnetic actuator. Mechanical oscillations are actively damped with an electromagnetic damper. A closed-loop inclination system levels the stand while an active cooling system minimizes thermal effects. The thrust stand incorporates an in situ calibration rig. The thrust of a 3.4 kW Hall thruster is measured for thrust levels up to 230 mN. The uncertainty of the thrust measurements in this experiment is ±0.6%, determined by examination of the hysteresis, drift of the zero offset and calibration slope variation.

  6. High-Throughput Bit-Serial LDPC Decoder LSI Based on Multiple-Valued Asynchronous Interleaving

    NASA Astrophysics Data System (ADS)

    Onizawa, Naoya; Hanyu, Takahiro; Gaudet, Vincent C.

    This paper presents a high-throughput bit-serial low-density parity-check (LDPC) decoder that uses an asynchronous interleaver. Since consecutive log-likelihood message values on the interleaver are similar, node computations are continuously performed by using the most recently arrived messages without significantly affecting bit-error rate (BER) performance. In the asynchronous interleaver, each message's arrival rate is based on the delay due to the wire length, so that the decoding throughput is not restricted by the worst-case latency, which results in a higher average rate of computation. Moreover, the use of a multiple-valued data representation makes it possible to multiplex control signals and data from mutual nodes, thus minimizing the number of handshaking steps in the asynchronous interleaver and eliminating the clock signal entirely. As a result, the decoding throughput becomes 1.3 times faster than that of a bit-serial synchronous decoder under a 90 nm CMOS technology, at a comparable BER.

  7. Sliding mode disturbance observer-based control of a twin rotor MIMO system.

    PubMed

    Rashad, Ramy; El-Badawy, Ayman; Aboudonia, Ahmed

    2017-07-01

    This work proposes a robust tracking controller for a helicopter laboratory setup known as the twin rotor MIMO system (TRMS) using an integral sliding mode controller. To eliminate the discontinuity in the control signal, the controller is augmented by a sliding mode disturbance observer. The actuator dynamics is handled using a backstepping approach which is applicable due to the continuous chattering-free nature of the command signals generated using the disturbance observer based controller. To avoid the complexity of analytically differentiating the command signals, a first order sliding mode differentiator is used. Stability analysis of the closed loop system and the ultimate boundedness of the tracking error is proved using Lyapunov stability arguments. The proposed controller is validated by several simulation studies and is compared to other schemes in the literature. Experimental results using a hardware-in-the-loop system validate the robustness and effectiveness of the proposed controller. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
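
    The first-order sliding mode differentiator mentioned is commonly of the Levant robust-exact-differentiator type; the sketch below is a hedged, generic implementation (gains lam0/lam1 are illustrative and in practice must dominate the Lipschitz constant of the signal's derivative), not the paper's exact scheme.

        # Minimal sketch: first-order sliding mode differentiator (Euler discretized).
        import numpy as np

        dt = 1e-4
        t = np.arange(0.0, 2.0, dt)
        f = np.sin(3.0 * t)                  # signal whose derivative we estimate

        lam0, lam1 = 6.0, 50.0               # illustrative gains
        z0, z1 = f[0], 0.0                   # estimates of f and f'
        dhat = np.empty_like(f)
        for k, fk in enumerate(f):
            e = z0 - fk
            v = z1 - lam0 * np.sqrt(abs(e)) * np.sign(e)
            z0 += v * dt
            z1 += -lam1 * np.sign(e) * dt
            dhat[k] = z1

        true_d = 3.0 * np.cos(3.0 * t)
        print(f"max |error| after transient: {np.abs(dhat[5000:] - true_d[5000:]).max():.3f}")

    The appeal in the TRMS context is exactly what the abstract states: the command derivative is obtained without analytic differentiation of a complicated closed-form expression.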

  8. 77 FR 41699 - Transportation of Household Goods in Interstate Commerce; Consumer Protection Regulations...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-16

    ... [Flattened tables:] Revisions of estimates of annual burden due to agency error (collection, old burden, revision due to error (old - error)), and Revisions of Estimates of Annual Costs to Respondents (collection, new cost, old cost, cost reduction (new - old)): IC1 "Ready to Move?" — $288,000, $720,000, -$432,000; "Rights & Responsibilities" — 3,264,000, 8,160...

  9. USGS Blind Sample Project: monitoring and evaluating laboratory analytical quality

    USGS Publications Warehouse

    Ludtke, Amy S.; Woodworth, Mark T.

    1997-01-01

    The U.S. Geological Survey (USGS) collects and disseminates information about the Nation's water resources. Surface- and ground-water samples are collected and sent to USGS laboratories for chemical analyses. The laboratories identify and quantify the constituents in the water samples. Random and systematic errors occur during sample handling, chemical analysis, and data processing. Although all errors cannot be eliminated from measurements, the magnitude of their uncertainty can be estimated and tracked over time. Since 1981, the USGS has operated an independent, external, quality-assurance project called the Blind Sample Project (BSP). The purpose of the BSP is to monitor and evaluate the quality of laboratory analytical results through the use of double-blind quality-control (QC) samples. The information provided by the BSP assists the laboratories in detecting and correcting problems in the analytical procedures. The information also can aid laboratory users in estimating the extent that laboratory errors contribute to the overall errors in their environmental data.

  10. Clinical measuring system for the form and position errors of circular workpieces using optical fiber sensors

    NASA Astrophysics Data System (ADS)

    Tan, Jiubin; Qiang, Xifu; Ding, Xuemei

    1991-08-01

    Optical sensors have two notable advantages in modern precision measurement. One is that they can be used in nondestructive measurement, because the sensors need not touch the surfaces of workpieces during measuring. The other is that they strongly resist electromagnetic interference, vibrations, and noise, so they are suitable for use at machining sites. However, the drift of light intensity and the change of the reflection coefficient at different measuring positions of a workpiece may greatly influence the measured results. To solve this problem, a spectroscopic differential characteristic compensating method is put forward. The method can be used effectively not only to compensate for the measuring errors resulting from the drift of light intensity but also to eliminate the influence on measured results caused by changes in the reflection coefficient. The article also analyzes the possibility of, and the means for, separating data errors in a clinical measuring system for form and position errors of circular workpieces.

  11. Topological analysis of polymeric melts: chain-length effects and fast-converging estimators for entanglement length.

    PubMed

    Hoy, Robert S; Foteinopoulou, Katerina; Kröger, Martin

    2009-09-01

    Primitive path analyses of entanglements are performed over a wide range of chain lengths for both bead-spring and atomistic polyethylene polymer melts. Estimators for the entanglement length N_e which operate on results for a single chain length N are shown to produce systematic O(1/N) errors. The mathematical roots of these errors are identified as (a) treating chain ends as entanglements and (b) neglecting non-Gaussian corrections to chain and primitive path dimensions. The prefactors for the O(1/N) errors may be large; in general their magnitude depends both on the polymer model and on the method used to obtain primitive paths. We propose, derive, and test new estimators which eliminate these systematic errors using information obtainable from the variation in entanglement characteristics with chain length. The new estimators produce accurate results for N_e from marginally entangled systems. Formulas based on direct enumeration of entanglements appear to converge faster and are simpler to apply.
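
    The extrapolation idea can be illustrated numerically: if single-N estimators behave as N_e(N) ≈ N_e∞ (1 + c/N), a linear fit against 1/N recovers the asymptotic value from its intercept. The sketch below uses synthetic data (N_e∞ = 85, c = 40 are made-up values), not the paper's results.

        # Hedged sketch: removing the O(1/N) bias by extrapolation in 1/N.
        import numpy as np

        N = np.array([50.0, 100.0, 200.0, 350.0, 500.0, 700.0])
        ne_inf_true, c_true = 85.0, 40.0
        rng = np.random.default_rng(2)
        Ne_est = ne_inf_true * (1.0 + c_true / N) + rng.normal(0.0, 0.5, N.size)

        # Linear least squares: Ne(N) = Ne_inf + (Ne_inf * c) * (1/N)
        A = np.vstack([np.ones_like(N), 1.0 / N]).T
        coef, *_ = np.linalg.lstsq(A, Ne_est, rcond=None)
        print(f"extrapolated Ne = {coef[0]:.1f} (truth {ne_inf_true})")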

  12. Control method and system for hydraulic machines employing a dynamic joint motion model

    DOEpatents

    Danko, George [Reno, NV]

    2011-11-22

    A control method and system for controlling a hydraulically actuated mechanical arm to perform a task, the mechanical arm optionally being a hydraulically actuated excavator arm. The method can include determining a dynamic model of the motion of the hydraulic arm for each hydraulic arm link by relating the input signal vector for each respective link to the output signal vector for the same link. Also the method can include determining an error signal for each link as the weighted sum of the differences between a measured position and a reference position and between the time derivatives of the measured position and the time derivatives of the reference position for each respective link. The weights used in the determination of the error signal can be determined from the constant coefficients of the dynamic model. The error signal can be applied in a closed negative feedback control loop to diminish or eliminate the error signal for each respective link.
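
    As a minimal illustration of the claimed error signal (a weighted sum of position and velocity tracking errors per link, fed back negatively), the sketch below uses placeholder weights w0, w1 standing in for the constants derived from the identified dynamic model:

        # Hedged sketch of the per-link composite error signal.
        def link_error_signal(x, x_ref, xdot, xdot_ref, w0=1.0, w1=0.2):
            """Weighted sum of position error and its time derivative."""
            return w0 * (x - x_ref) + w1 * (xdot - xdot_ref)

        # Closed-loop usage: command proportional to the negative of the error
        u = -5.0 * link_error_signal(x=0.30, x_ref=0.25, xdot=0.0, xdot_ref=0.1)
        print(u)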

  13. Can utilizing a computerized provider order entry (CPOE) system prevent hospital medical errors and adverse drug events?

    PubMed

    Charles, Krista; Cannon, Margaret; Hall, Robert; Coustasse, Alberto

    2014-01-01

    Computerized provider order entry (CPOE) systems allow physicians to prescribe patient services electronically. In hospitals, CPOE essentially eliminates the need for handwritten paper orders and achieves cost savings through increased efficiency. The purpose of this research study was to examine the benefits of and barriers to CPOE adoption in hospitals to determine the effects on medical errors and adverse drug events (ADEs) and examine cost and savings associated with the implementation of this newly mandated technology. This study followed a methodology using the basic principles of a systematic review and referenced 50 sources. CPOE systems in hospitals were found to be capable of reducing medical errors and ADEs, especially when CPOE systems are bundled with clinical decision support systems designed to alert physicians and other healthcare providers of pending lab or medical errors. However, CPOE systems face major barriers associated with adoption in a hospital system, mainly high implementation costs and physicians' resistance to change.

  14. Geolocation and Pointing Accuracy Analysis for the WindSat Sensor

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank J.; Purdy, William E.; Gaiser, Peter W.; Poe, Gene; Uliana, Enzo A.

    2006-01-01

    Geolocation and pointing accuracy analyses of the WindSat flight data are presented. The two topics were intertwined in the flight data analysis and will be addressed together. WindSat has no unusual geolocation requirements relative to other sensors, but its beam pointing knowledge accuracy is especially critical to support accurate polarimetric radiometry. Pointing accuracy was improved and verified using geolocation analysis in conjunction with scan bias analysis. nvo methods were needed to properly identify and differentiate between data time tagging and pointing knowledge errors. Matchups comparing coastlines indicated in imagery data with their known geographic locations were used to identify geolocation errors. These coastline matchups showed possible pointing errors with ambiguities as to the true source of the errors. Scan bias analysis of U, the third Stokes parameter, and of vertical and horizontal polarizations provided measurement of pointing offsets resolving ambiguities in the coastline matchup analysis. Several geolocation and pointing bias sources were incfementally eliminated resulting in pointing knowledge and geolocation accuracy that met all design requirements.

  15. Collaborated measurement of three-dimensional position and orientation errors of assembled miniature devices with two vision systems

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Zhang, Wei; Luo, Yi; Yang, Weimin; Chen, Liang

    2013-01-01

    In assembly of miniature devices, the position and orientation of the parts to be assembled should be guaranteed during or after assembly. In some cases, the relative position or orientation errors among the parts can not be measured from only one direction using visual method, because of visual occlusion or for the features of parts located in a three-dimensional way. An automatic assembly system for precise miniature devices is introduced. In the modular assembly system, two machine vision systems were employed for measurement of the three-dimensionally distributed assembly errors. High resolution CCD cameras and high position repeatability precision stages were integrated to realize high precision measurement in large work space. The two cameras worked in collaboration in measurement procedure to eliminate the influence of movement errors of the rotational or translational stages. A set of templates were designed for calibration of the vision systems and evaluation of the system's measurement accuracy.

  16. MERIT DEM: A new high-accuracy global digital elevation model and its merit to global hydrodynamic modeling

    NASA Astrophysics Data System (ADS)

    Yamazaki, D.; Ikeshima, D.; Neal, J. C.; O'Loughlin, F.; Sampson, C. C.; Kanae, S.; Bates, P. D.

    2017-12-01

    Digital Elevation Models (DEM) are fundamental data for flood modelling. While precise airborne DEMs are available in developed regions, most parts of the world rely on spaceborne DEMs which include non-negligible height errors. Here we show the most accurate global DEM to date at 90m resolution by eliminating major error components from the SRTM and AW3D DEMs. Using multiple satellite data and multiple filtering techniques, we addressed absolute bias, stripe noise, speckle noise and tree height bias from spaceborne DEMs. After the error removal, significant improvements were found in flat regions where height errors were larger than topography variability, and landscapes features such as river networks and hill-valley structures became clearly represented. We found the topography slope of the previous DEMs was largely distorted in most of world major floodplains (e.g. Ganges, Nile, Niger, Mekong) and swamp forests (e.g. Amazon, Congo, Vasyugan). The developed DEM will largely reduce the uncertainty in both global and regional flood modelling.

  17. Electronic implementation of associative memory based on neural network models

    NASA Technical Reports Server (NTRS)

    Moopenn, A.; Lambe, John; Thakoor, A. P.

    1987-01-01

    An electronic embodiment of a neural network based associative memory in the form of a binary connection matrix is described. The nature of false memory errors, their effect on the information storage capacity of binary connection matrix memories, and a novel technique to eliminate such errors with the help of asymmetrical extra connections are discussed. The stability of the matrix memory system incorporating a unique local inhibition scheme is analyzed in terms of local minimization of an energy function. The memory's stability, dynamic behavior, and recall capability are investigated using a 32-'neuron' electronic neural network memory with a 1024-element programmable binary connection matrix.
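
    For orientation, the sketch below shows a generic binary connection matrix memory (Willshaw-style clipped Hebb storage with a k-winners-take-all recall standing in for local inhibition); it is not the paper's circuit, and the asymmetric extra connections it describes are not modeled.

        # Hedged sketch: binary connection matrix associative recall.
        import numpy as np

        n = 32
        patterns = []
        for start in (0, 8, 16):                        # three disjoint sparse memories
            p = np.zeros(n, dtype=int); p[start:start+4] = 1
            patterns.append(p)

        W = np.zeros((n, n), dtype=int)
        for p in patterns:
            W |= np.outer(p, p)                          # clipped (binary) Hebb rule

        def recall(cue, k=4):
            s = W @ cue                                  # dendritic sums
            out = np.zeros(n, dtype=int)
            out[np.argsort(s)[-k:]] = 1                  # k-winners-take-all inhibition
            return out

        cue = patterns[0].copy(); cue[0] = 0             # degraded cue (one bit lost)
        print("recovered:", np.array_equal(recall(cue), patterns[0]))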

  18. Gallium Compounds: A Possible Problem for the G2 Approaches

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Melius, Carl F.; Allendorf, Mark D.; Arnold, James (Technical Monitor)

    1998-01-01

    The G2 atomization energies of fluorine and oxygen containing Ga compounds are greatly in error. This arises from an inversion of the Ga 3d core orbital and the F 2s or O 2s valence orbitals. Adding the Ga 3d orbital to the correlation treatment or removing the F 2s orbitals from the correlation treatment are shown to eliminate the problem. Removing the O 2s orbital from the correlation treatment reduces the error, but it can still be more than 6 kcal/mol. It is concluded that the experimental atomization energy of GaF2 is too large.

  19. Criticality of Adaptive Control Dynamics

    NASA Astrophysics Data System (ADS)

    Patzelt, Felix; Pawelzik, Klaus

    2011-12-01

    We show that stabilization of a dynamical system can annihilate observable information about its structure. This mechanism induces critical points as attractors in locally adaptive control. It also reveals that previously reported criticality in simple controllers is caused by adaptation and not by other controller details. We apply these results to a real-system example: human balancing behavior. A model of predictive adaptive closed-loop control subject to some realistic constraints is introduced and shown to reproduce experimental observations in unprecedented detail. Our results suggest that observed error distributions in between the Lévy and Gaussian regimes may reflect a nearly optimal compromise between the elimination of random local trends and rare large errors.

  20. NATO Phase Zero Contracting - A Proposed Strategic and Operational Planning Construct Within the NATO Framework for Defense Planning and Standardization

    DTIC Science & Technology

    2015-02-12

    become better stewards of scarce resources, to eliminate potential waste, and to reduce abuse of taxpayer money due to poor management, operational redundancy and... Conference—Best Practices and Lessons Learned and Training and Educating for Acquisition, Procurement and Contracting in Defense Institutions

  1. Error analysis for the ground-based microwave ozone measurements during STOIC

    NASA Technical Reports Server (NTRS)

    Connor, Brian J.; Parrish, Alan; Tsou, Jung-Jung; McCormick, M. Patrick

    1995-01-01

    We present a formal error analysis and characterization of the microwave measurements made during the Stratospheric Ozone Intercomparison Campaign (STOIC). The most important error sources are found to be the determination of the tropospheric opacity, the pressure-broadening coefficient of the observed line, and systematic variations in instrument response as a function of frequency ('baseline'). Net precision is 4-6% between 55 and 0.2 mbar, while accuracy is 6-10%. Resolution is 8-10 km below 3 mbar and increases to 17 km at 0.2 mbar. We show the 'blind' microwave measurements from STOIC and make limited comparisons to other measurements. We use the averaging kernels of the microwave measurement to eliminate resolution and a priori effects from a comparison to SAGE II. The STOIC results and comparisons are broadly consistent with the formal analysis.
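
    The averaging-kernel comparison works by mapping the higher-resolution profile to what the microwave retrieval would see, x_s = x_a + A (x_h - x_a). The sketch below applies this standard formula on a synthetic grid; the profiles, kernel width, and a priori are placeholders, not STOIC data.

        # Hedged sketch of the averaging-kernel smoothing comparison.
        import numpy as np

        z = np.linspace(15.0, 55.0, 41)                   # altitude grid, km
        x_a = np.full(z.size, 4.0)                        # a priori ozone (ppmv)
        x_h = 4.0 + 2.5 * np.exp(-((z - 32.0) / 5.0) ** 2)  # "high-res" profile

        # Gaussian-row averaging kernel matrix with ~8 km resolution
        width_km = 8.0
        A = np.exp(-((z[:, None] - z[None, :]) / (width_km / 2.355)) ** 2)
        A /= A.sum(axis=1, keepdims=True)

        x_smoothed = x_a + A @ (x_h - x_a)                # profile for comparison
        print(f"peak: high-res {x_h.max():.2f}, smoothed {x_smoothed.max():.2f} ppmv")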

  2. Application of Energy Function as a Measure of Error in the Numerical Solution for Online Transient Stability Assessment

    NASA Astrophysics Data System (ADS)

    Sarojkumar, K.; Krishna, S.

    2016-08-01

    Online dynamic security assessment (DSA) is a computationally intensive task. In order to reduce the amount of computation, screening of contingencies is performed. Screening involves analyzing the contingencies with the system described by a simpler model so that the computational requirement is reduced. Screening identifies those contingencies which are sure not to cause instability and hence can be eliminated from further scrutiny. The numerical method and the step size used for screening should be chosen as a compromise between speed and accuracy. This paper proposes the use of an energy function as a measure of error in the numerical solution used for screening contingencies. The proposed measure of error can be used to determine the most accurate numerical method satisfying the time constraint of online DSA. Case studies on a 17-generator system are reported.

  3. Secondary analysis of national survey datasets.

    PubMed

    Boo, Sunjoo; Froelicher, Erika Sivarajan

    2013-06-01

    This paper describes the methodological issues associated with secondary analysis of large national survey datasets. Issues about survey sampling, data collection, and non-response and missing data in terms of methodological validity and reliability are discussed. Although reanalyzing large national survey datasets is an expedient and cost-efficient way of producing nursing knowledge, successful investigations require a methodological consideration of the intrinsic limitations of secondary survey analysis. Nursing researchers using existing national survey datasets should understand potential sources of error associated with survey sampling, data collection, and non-response and missing data. Although it is impossible to eliminate all potential errors, researchers using existing national survey datasets must be aware of the possible influence of errors on the results of the analyses. © 2012 The Authors. Japan Journal of Nursing Science © 2012 Japan Academy of Nursing Science.

  4. A radiation tolerant Data link board for the ATLAS Tile Cal upgrade

    NASA Astrophysics Data System (ADS)

    Åkerstedt, H.; Bohm, C.; Muschter, S.; Silverstein, S.; Valdes, E.

    2016-01-01

    This paper describes the latest, full-functionality revision of the high-speed data link board developed for the Phase-2 upgrade of the ATLAS hadronic Tile Calorimeter. The link board design is highly redundant, with digital functionality implemented in two Xilinx Kintex-7 FPGAs, and two Molex QSFP+ electro-optic modules with uplinks running at 10 Gbps. The FPGAs are remotely configured through two radiation-hard CERN GBTx deserialisers, which also provide the LHC-synchronous system clock. The redundant design eliminates virtually all single-point error modes, and a combination of triple modular redundancy (TMR), internal and external scrubbing will provide adequate protection against radiation-induced errors. The small portion of the FPGA design that cannot be protected by TMR will be the dominant source of radiation-induced errors, even though that area is small.
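
    The TMR principle itself is a bitwise 2-of-3 majority vote over triplicated logic; the toy below illustrates it in software (the real design implements voters in FPGA fabric):

        # Toy illustration of triple modular redundancy masking one upset copy.
        def tmr_vote(a: int, b: int, c: int) -> int:
            """Bitwise 2-of-3 majority; a single corrupted replica is out-voted."""
            return (a & b) | (b & c) | (a & c)

        word = 0b1011_0010
        upset = word ^ 0b0000_0100        # single-event upset flips one bit in one copy
        assert tmr_vote(word, word, upset) == word
        print(bin(tmr_vote(word, upset, word)))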

  5. The method for measuring the groove density of variable-line-space gratings with elimination of the eccentricity effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qingbo; Liu, Zhengkun, E-mail: zhkliu@ustc.edu.cn; Chen, Huoyao

    2015-02-15

    To eliminate the eccentricity effect, a new method for measuring the groove density of a variable-line-space grating was adopted. Based on the grating equation, the groove density is calculated by measuring the internal angles between the zeroth-order and first-order diffracted light for two different wavelengths with the same angle of incidence. The measurement system mainly includes two laser sources, a phase plate, a plane mirror, and a charge coupled device. The measurement results for a variable-line-space grating demonstrate that the experimental data agree well with theoretical values, and the measurement error (ΔN/N) is less than 2.72 × 10⁻⁴.
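
    A hedged reconstruction of the principle: for a reflection grating the zeroth order leaves at the specular angle, and with the convention sin(th_i) - sin(th_1) = N*lambda, measuring the internal angle phi between the two orders for two wavelengths at the same (unknown) incidence gives two equations in (N, th_i). The values below are illustrative, not from the paper.

        # Minimal sketch: solve for groove density and incidence simultaneously.
        import numpy as np
        from scipy.optimize import fsolve

        lam1, lam2 = 632.8e-9, 532.0e-9              # two laser wavelengths, m

        def make_phi(N, th_i, lam):
            th_1 = np.arcsin(np.sin(th_i) - N * lam)
            return th_i - th_1                        # angle between 0th and 1st order

        N_true, th_i_true = 600e3, np.deg2rad(10.0)   # 600 lines/mm, hidden incidence
        phi1 = make_phi(N_true, th_i_true, lam1)
        phi2 = make_phi(N_true, th_i_true, lam2)

        def residuals(p):
            N, th_i = p
            return [make_phi(N, th_i, lam1) - phi1,
                    make_phi(N, th_i, lam2) - phi2]

        N_fit, th_fit = fsolve(residuals, x0=[500e3, np.deg2rad(5.0)])
        print(f"groove density {N_fit/1e3:.1f} lines/mm, "
              f"incidence {np.rad2deg(th_fit):.2f} deg")

    Because the incidence angle is solved for rather than assumed, an eccentric mounting of the grating does not bias the recovered groove density, which is the stated point of the method.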

  6. Patient and nurse safety: how information technology makes a difference.

    PubMed

    Simpson, Roy L

    2005-01-01

    The Institute of Medicine's landmark report asserted medical error is seldom the fault of individuals, but the result of faulty healthcare policy/procedure systems. Numerous studies have shown that information technology can shore up weak systems. For nursing, information technology plays a key role in protecting patients by eliminating nursing mistakes and protecting nurses by reducing their negative exposure. However, managing information technology is a function of managing the people who use it. This article examines critical issues that impact patient and nurse safety, both physical and professional. It discusses the importance of eliminating the culture of blame, the requirements of process change, how to implement technology in harmony with the organization and the significance of vision.

  7. Medication errors in anesthesia: unacceptable or unavoidable?

    PubMed

    Dhawan, Ira; Tewari, Anurag; Sehgal, Sankalp; Sinha, Ashish Chandra

    Medication errors are a common cause of patient morbidity and mortality, and they add a financial burden to the institution as well. Though the impact varies from no harm to serious adverse effects including death, the issue demands attention on a priority basis, since medication errors are preventable. In today's world, where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication errors alone might not be successful until a change in the existing protocols and systems is incorporated. Often, drug errors that occur cannot be reversed. The best way to 'treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse and dilution error), incorrect administration route, underdosing and omission are common causes of medication error that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame game, accept mistakes and develop a safe and 'just' culture in order to prevent medication errors. Newly devised systems like VEINROM, a fluid delivery system, are a novel approach to preventing drug errors due to the most commonly used medications in anesthesia. Similar developments, along with vigilant doctors, a safe workplace culture and organizational support, can together help prevent these errors. Copyright © 2016. Published by Elsevier Editora Ltda.

  8. A novel artificial fish swarm algorithm for recalibration of fiber optic gyroscope error parameters.

    PubMed

    Gao, Yanbin; Guan, Lianwu; Wang, Tingjun; Sun, Yunlong

    2015-05-05

    The artificial fish swarm algorithm (AFSA) is one of the state-of-the-art swarm intelligence techniques, widely utilized for optimization purposes. Fiber optic gyroscope (FOG) error parameters such as scale factors, biases and misalignment errors are relatively unstable, especially under environmental disturbances and with the aging of fiber coils. These uncalibrated error parameters are the main reason that the precision of FOG-based strapdown inertial navigation systems (SINS) degrades. This research focuses on the application of a novel artificial fish swarm algorithm (NAFSA) to FOG error coefficient recalibration/identification. First, the NAFSA avoids the demerits of the standard AFSA during the optimization process (e.g., failure to use the artificial fishes' previous experiences, lack of balance between exploration and exploitation, and high computational cost). To address these weak points, the functional behaviors and the overall procedure of AFSA have been improved, with some parameters eliminated and several supplementary parameters added. Second, a hybrid FOG error coefficient recalibration algorithm is proposed based on the NAFSA and Monte Carlo simulation (MCS) approaches. This combination leads to maximum utilization of the involved approaches for FOG error coefficient recalibration. The NAFSA is then verified in simulation and experiments, and its advantages are compared with those of the conventional calibration method and the optimal AFSA. Results demonstrate the high efficiency of the NAFSA for FOG error coefficient recalibration.

  9. Remediating Common Math Errors.

    ERIC Educational Resources Information Center

    Wagner, Rudolph F.

    1981-01-01

    Explanations and remediation suggestions for five types of mathematics errors due either to perceptual or cognitive difficulties are given. Error types include directionality problems, mirror writing, visually misperceived signs, diagnosed directionality problems, and mixed process errors. (CL)

  10. Intimate Partner Violence, 1993-2010

    MedlinePlus

    ... appendix table 2 for standard errors. *Due to methodological changes, use caution when comparing 2006 NCVS criminal ...

  11. Reflection of medical error highlighted on media in Turkey: A retrospective study

    PubMed Central

    Isik, Oguz; Bayin, Gamze; Ugurluoglu, Ozgur

    2016-01-01

    Objective: This study was performed with the aim of identifying how news about medical errors is transmitted, and how the types, causes, and outcomes of medical errors are reflected by the media in Turkey. Methods: A content analysis method was used in the study; in this context, the data were acquired by scanning the five newspapers with the highest national circulation between the years 2012 and 2015 for news about medical errors. Specific selection criteria were used for screening the resulting news items, and 116 items remained after all exclusions. Results: According to the results of the study, the vast majority of the medical errors reported in the news (40.5%) resulted from the negligence of medical staff. The medical errors were caused by physicians in 74.1% of cases, and they most commonly occurred in state hospitals (31.9%). Another important result of the research was that medical errors largely resulted either in patient death (51.7%) or in permanent damage and disability to patients (25.0%). Conclusion: The news concerning medical errors provided information about the types, causes, and results of these medical errors, and it reflected the media's point of view on the issue. Examining the content of the medical errors reported by the media is important and calls for appropriate interventions to avoid and minimize the occurrence of medical errors by improving the healthcare delivery system. PMID:27882026

  12. Defining near misses: towards a sharpened definition based on empirical data about error handling processes.

    PubMed

    Kessels-Habraken, Marieke; Van der Schaaf, Tjerk; De Jonge, Jan; Rutte, Christel

    2010-05-01

    Medical errors in health care still occur frequently. Unfortunately, errors cannot be completely prevented and 100% safety can never be achieved. Therefore, in addition to error reduction strategies, health care organisations could also implement strategies that promote timely error detection and correction. Reporting and analysis of so-called near misses - usually defined as incidents without adverse consequences for patients - are necessary to gather information about successful error recovery mechanisms. This study establishes the need for a clearer and more consistent definition of near misses to enable large-scale reporting and analysis in order to obtain such information. Qualitative incident reports and interviews were collected on four units of two Dutch general hospitals. Analysis of the 143 accompanying error handling processes demonstrated that different incident types each provide unique information about error handling. Specifically, error handling processes underlying incidents that did not reach the patient differed significantly from those of incidents that reached the patient, irrespective of harm, because of successful countermeasures that had been taken after error detection. We put forward two possible definitions of near misses and argue that, from a practical point of view, the optimal definition may be contingent on organisational context. Both proposed definitions could yield large-scale reporting of near misses. Subsequent analysis could enable health care organisations to improve the safety and quality of care proactively by (1) eliminating failure factors before real accidents occur, (2) enhancing their ability to intercept errors in time, and (3) improving their safety culture. Copyright 2010 Elsevier Ltd. All rights reserved.

  13. Intrusion errors in visuospatial working memory performance.

    PubMed

    Cornoldi, Cesare; Mammarella, Nicola

    2006-02-01

    This study tested the hypothesis that failure in active visuospatial working memory tasks involves a difficulty in avoiding intrusions due to information that is already activated. Two experiments are described, in which participants were required to process several series of locations on a 4 x 4 matrix and then to produce only the final location of each series. Results revealed a higher number of errors due to already activated locations (intrusions) compared with errors due to new locations (inventions). Moreover, when participants were required to pay extra attention to some irrelevant (non-final) locations by tapping on the table, intrusion errors increased. Results are discussed in terms of current models of working memory functioning.

  14. Goldmann tonometer error correcting prism: clinical evaluation.

    PubMed

    McCafferty, Sean; Lim, Garrett; Duncan, William; Enikov, Eniko T; Schwiegerling, Jim; Levine, Jason; Kew, Corin

    2017-01-01

    Clinically evaluate a modified applanating surface Goldmann tonometer prism designed to substantially negate errors due to patient variability in biomechanics. A modified Goldmann prism with a correcting applanation tonometry surface (CATS) was mathematically optimized to minimize the intraocular pressure (IOP) measurement error due to patient variability in corneal thickness, stiffness, curvature, and tear film adhesion force. A comparative clinical study of 109 eyes measured IOP with CATS and Goldmann prisms. The IOP measurement differences between the CATS and Goldmann prisms were correlated to corneal thickness, hysteresis, and curvature. The CATS tonometer prism in correcting for Goldmann central corneal thickness (CCT) error demonstrated a reduction to <±2 mmHg in 97% of a standard CCT population. This compares to only 54% with CCT error <±2 mmHg using the Goldmann prism. Equal reductions of ~50% in errors due to corneal rigidity and curvature were also demonstrated. The results validate the CATS prism's improved accuracy and expected reduced sensitivity to Goldmann errors without IOP bias as predicted by mathematical modeling. The CATS replacement for the Goldmann prism does not change Goldmann measurement technique or interpretation.

  15. 26 CFR 301.6621-3 - Higher interest rate payable on large corporate underpayments.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... resulting from a math error on Y's return. Y did not request an abatement of the assessment pursuant to...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...

  16. 26 CFR 301.6621-3 - Higher interest rate payable on large corporate underpayments.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... resulting from a math error on Y's return. Y did not request an abatement of the assessment pursuant to...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...

  17. 26 CFR 301.6621-3 - Higher interest rate payable on large corporate underpayments.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... resulting from a math error on Y's return. Y did not request an abatement of the assessment pursuant to...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...

  18. 26 CFR 301.6621-3 - Higher interest rate payable on large corporate underpayments.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... resulting from a math error on Y's return. Y did not request an abatement of the assessment pursuant to...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...

  19. 26 CFR 301.6621-3 - Higher interest rate payable on large corporate underpayments.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... resulting from a math error on Y's return. Y did not request an abatement of the assessment pursuant to...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...

  20. High density scintillating glass proton imaging detector

    NASA Astrophysics Data System (ADS)

    Wilkinson, C. J.; Goranson, K.; Turney, A.; Xie, Q.; Tillman, I. J.; Thune, Z. L.; Dong, A.; Pritchett, D.; McInally, W.; Potter, A.; Wang, D.; Akgun, U.

    2017-03-01

    In recent years, proton therapy has achieved remarkable precision in delivering doses to cancerous cells while avoiding healthy tissue. However, in order to utilize this high precision treatment, greater accuracy in patient positioning is needed. An accepted approximate uncertainty of ±3% exists in the current practice of proton therapy due to conversions between x-ray and proton stopping power. The use of protons in imaging would eliminate this source of error and lessen the radiation exposure of the patient. To this end, this study focuses on developing a novel proton-imaging detector built with high-density glass scintillator. The model described herein contains a compact homogeneous proton calorimeter composed of scintillating, high density glass as the active medium. The unique geometry of this detector allows for the measurement of both the position and residual energy of protons, eliminating the need for a separate set of position trackers in the system. The average position and energy of a pencil beam of 10⁶ protons is used to reconstruct the image rather than by analyzing individual proton data. Simplicity and efficiency were major objectives in this model in order to present an imaging technique that is compact, cost-effective, and precise, as well as practical for a clinical setting with pencil-beam scanning proton therapy equipment. In this work, the development of novel high-density glass scintillator and the unique conceptual design of the imager are discussed; a proof-of-principle Monte Carlo simulation study is performed; preliminary two-dimensional images reconstructed from the Geant4 simulation are presented.

  1. A global/local analysis method for treating details in structural design

    NASA Technical Reports Server (NTRS)

    Aminpour, Mohammad A.; Mccleary, Susan L.; Ransom, Jonathan B.

    1993-01-01

    A method for analyzing global/local behavior of plate and shell structures is described. In this approach, a detailed finite element model of the local region is incorporated within a coarser global finite element model. The local model need not be nodally compatible (i.e., need not have a one-to-one nodal correspondence) with the global model at their common boundary; therefore, the two models may be constructed independently. The nodal incompatibility of the models is accounted for by introducing appropriate constraint conditions into the potential energy in a hybrid variational formulation. The primary advantage of this method is that the need for transition modeling between global and local models is eliminated. Eliminating transition modeling has two benefits. First, modeling efforts are reduced since tedious and complex transitioning need not be performed. Second, errors due to mesh distortion, often unavoidable in mesh transitioning, are minimized by avoiding distorted elements beyond what is needed to represent the geometry of the component. The method is applied to a plate loaded in tension and transverse bending. The plate has a central hole, and various hole sizes and shapes are studied. The method is also applied to a composite laminated fuselage panel with a crack emanating from a window in the panel. While this method is applied herein to global/local problems, it is also applicable to the coupled analysis of independently modeled components as well as adaptive refinement.

  2. Motorcycle waste heat energy harvesting

    NASA Astrophysics Data System (ADS)

    Schlichting, Alexander D.; Anton, Steven R.; Inman, Daniel J.

    2008-03-01

    Environmental concerns coupled with the depletion of fuel sources have led to research on ethanol, fuel cells, and even generating electricity from vibrations. Much of the research in these areas is stalling due to expensive or environmentally contaminating processes; however, recent breakthroughs in materials and production have created a surge in research on waste heat energy harvesting devices. The thermoelectric generators (TEGs) used in waste heat energy harvesting are governed by the thermoelectric, or Seebeck, effect, generating electricity from a temperature gradient. Research to date has featured platforms such as heavy-duty diesel trucks, model airplanes, and automobiles, attempting to eliminate either heavy batteries or the alternator. A motorcycle is another platform with very promising characteristics for waste heat energy harvesting, mainly because its exhaust pipes are exposed to significant amounts of air flow. A 1995 Kawasaki Ninja 250R was used for these trials. The module used in these experiments, the Melcor HT3-12-30, produced an average of 0.4694 W from an average temperature gradient of 48.73 °C. The mathematical model created from the thermoelectric effect equation and the mean Seebeck coefficient displayed by the module produced an average error from the experimental data of 1.75%. Although the module proved insufficient to practically eliminate the alternator on a standard motorcycle, the temperature data gathered, as well as the examination of a simple yet accurate model, represent significant steps toward creating a TEG capable of doing so.
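
    As a back-of-envelope sketch of the kind of Seebeck-effect model described, open-circuit voltage scales as V = S_eff · ΔT and matched-load power as P = V²/(4·R_int). The effective Seebeck coefficient and internal resistance below are made-up values chosen only so the example roughly reproduces the reported operating point (0.4694 W at ΔT = 48.73 °C); they are not the Melcor module's datasheet figures.

        # Hedged sketch of a module-level Seebeck power estimate.
        def teg_power(delta_T_K, S_eff_V_per_K=0.05, R_int_ohm=3.16):
            V = S_eff_V_per_K * delta_T_K        # module open-circuit voltage
            return V * V / (4.0 * R_int_ohm)     # matched-load power transfer

        print(f"P ~ {teg_power(48.73):.3f} W at dT = 48.73 K")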

  3. Laboratory errors and patient safety.

    PubMed

    Miligy, Dawlat A

    2015-01-01

    Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the commonly encountered laboratory errors in our laboratory practice, their hazards for patient health care, and some measures and recommendations to minimize or eliminate these errors. Laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percentage distribution) in the laboratory department of a private hospital in Egypt. Errors were classified according to the laboratory phases and according to their implication for patient health. Data obtained from 1,600 testing procedures revealed a total of 14 errors (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent of total errors, respectively), while errors in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implication for patient health, being detected before test reports were submitted to patients. On the other hand, errors that had already been submitted to patients and reached the physician represented 14.3 percent of total errors; only 7.1 percent of the errors could have had an impact on patient diagnosis. The findings of this study are concomitant with those published from the USA and other countries, which shows that laboratory problems are universal and need general standardization and benchmarking measures. This is the first such data published from an Arab country evaluating encountered laboratory errors, and it highlights the great need for universal standardization and benchmarking measures to control laboratory work.

  4. CE: Original Research: Exploring How Nursing Schools Handle Student Errors and Near Misses.

    PubMed

    Disch, Joanne; Barnsteiner, Jane; Connor, Susan; Brogren, Fabiana

    2017-10-01

    : Background: Little attention has been paid to how nursing students learn about quality and safety, and to the tools and policies that guide nursing schools in helping students respond to errors and near misses. This study sought to determine whether prelicensure nursing programs have a policy for reporting and following up on student clinical errors and near misses, a tool for such reporting, a tool or process (or both) for identifying trends, strategies for follow-up with students after errors and near misses, and strategies for follow-up with clinical agencies and individual faculty members. A national electronic survey of 1,667 schools of nursing with a prelicensure registered nursing program was conducted. Data from 494 responding schools (30%) were analyzed. Of the responding schools, 245 (50%) reported having no policy for managing students following a clinical error or near miss, and 272 (55%) reported having no tool for reporting student errors or near misses. Significant work is needed if the principles of a fair and just culture are to shape the response to nursing student errors and near misses. For nursing schools, some essential first steps are to understand the tools and policies a school has in place; the school's philosophy regarding errors and near misses; the resources needed to establish a fair and just culture; and how faculty can work together to create learning environments that eliminate or minimize the negative consequences of errors and near misses for patients, students, and faculty.

  5. Decay of motor memories in the absence of error

    PubMed Central

    Vaswani, Pavan A.; Shadmehr, Reza

    2013-01-01

    When motor commands are accompanied by an unexpected outcome, the resulting error induces changes in subsequent commands. However, when errors are artificially eliminated, changes in motor commands are not sustained, but show decay. Why does the adaptation-induced change in motor output decay in the absence of error? A prominent idea is that decay reflects the stability of the memory. We show results that challenge this idea and instead suggest that motor output decays because the brain actively disengages a component of the memory. Humans adapted their reaching movements to a perturbation and were then introduced to a long period of trials in which errors were absent (error-clamp). We found that, in some subjects, motor output did not decay at the onset of the error-clamp block, but a few trials later. We manipulated the kinematics of movements in the error-clamp block and found that as movements became more similar to subjects’ natural movements in the perturbation block, the lag to decay onset became longer and eventually reached hundreds of trials. Furthermore, when there was decay in the motor output, the endpoint of decay was not zero, but a fraction of the motor memory that was last acquired. Therefore, adaptation to a perturbation installed two distinct kinds of memories: one that was disengaged when the brain detected a change in the task, and one that persisted despite it. Motor memories showed little decay in the absence of error if the brain was prevented from detecting a change in task conditions. PMID:23637163

  6. Effects of tropospheric and ionospheric refraction errors in the utilization of GEOS-C altimeter data

    NASA Technical Reports Server (NTRS)

    Goad, C. C.

    1977-01-01

    The effects of tropospheric and ionospheric refraction errors are analyzed for the GEOS-C altimeter project in terms of their resultant effects on C-band orbits and the altimeter measurement itself. Operational procedures using surface meteorological measurements at ground stations and monthly means for ocean surface conditions are assumed, with no corrections made for ionospheric effects. Effects on the orbit height due to tropospheric errors are approximately 15 cm for single pass short arcs (such as for calibration) and 10 cm for global orbits of one revolution. Orbit height errors due to neglect of the ionosphere have an amplitude of approximately 40 cm when the orbits are determined from C-band range data with predominantly daylight tracking. Altimeter measurement errors are approximately 10 cm due to residual tropospheric refraction correction errors. Ionospheric effects on the altimeter range measurement are also on the order of 10 cm during the GEOS-C launch and early operation period.

  7. Geographically correlated orbit error

    NASA Technical Reports Server (NTRS)

    Rosborough, G. W.

    1989-01-01

    The dominant error source in estimating the orbital position of a satellite from ground-based tracking data is the modeling of the Earth's gravity field. The resulting orbit error due to gravity field model errors is predominantly long-wavelength in nature. This results in an orbit error signature that is strongly correlated over distances on the scale of ocean basins. Anderle and Hoskin (1977) have shown that the orbit error along a given ground track is also correlated to some degree with the orbit error along adjacent ground tracks. This cross-track correlation is verified here and is found to be significant out to nearly 1000 kilometers in the case of TOPEX/POSEIDON when using the GEM-T1 gravity model. Finally, it was determined that even the orbit error at points where ascending and descending ground traces cross is somewhat correlated. The implication of these various correlations is that the orbit error due to gravity error is geographically correlated. Such correlations have direct implications when using altimetry to recover oceanographic signals.

  8. Transperineal prostate biopsy under magnetic resonance image guidance: a needle placement accuracy study.

    PubMed

    Blumenfeld, Philip; Hata, Nobuhiko; DiMaio, Simon; Zou, Kelly; Haker, Steven; Fichtinger, Gabor; Tempany, Clare M C

    2007-09-01

    To quantify needle placement accuracy of magnetic resonance image (MRI)-guided core needle biopsy of the prostate. A total of 10 biopsies were performed with an 18-gauge (G) core biopsy needle via a percutaneous transperineal approach. Needle placement error was assessed by comparing the coordinates of preplanned targets with the needle tip position measured from the intraprocedural coherent gradient echo images. The source of these errors was subsequently investigated by measuring displacement caused by needle deflection and needle susceptibility artifact shift in controlled phantom studies. Needle placement error due to misalignment of the needle template guide was also evaluated. The mean and standard deviation (SD) of error in targeted biopsies was 6.5 +/- 3.5 mm. Phantom experiments showed significant placement error due to needle deflection with a needle with an asymmetrically beveled tip (3.2-8.7 mm depending on tissue type) but significantly smaller error with a symmetrical bevel (0.6-1.1 mm). The needle susceptibility artifact showed a shift of 1.6 +/- 0.4 mm from the true needle axis. Misalignment of the needle template guide contributed an error of 1.5 +/- 0.3 mm. Needle placement error was clinically significant in MRI-guided biopsy for diagnosis of prostate cancer. Needle placement error due to needle deflection was the most significant cause of error, especially for needles with an asymmetrical bevel. (c) 2007 Wiley-Liss, Inc.

  9. Automated Welding System

    NASA Technical Reports Server (NTRS)

    Bayless, E. O.; Lawless, K. G.; Kurgan, C.; Nunes, A. C.; Graham, B. F.; Hoffman, D.; Jones, C. S.; Shepard, R.

    1993-01-01

    Fully automated variable-polarity plasma arc VPPA welding system developed at Marshall Space Flight Center. System eliminates defects caused by human error. Integrates many sensors with mathematical model of the weld and computer-controlled welding equipment. Sensors provide real-time information on geometry of weld bead, location of weld joint, and wire-feed entry. Mathematical model relates geometry of weld to critical parameters of welding process.

  10. Calibration Of Partial-Pressure-Of-Oxygen Sensors

    NASA Technical Reports Server (NTRS)

    Yount, David W.; Heronimus, Kevin

    1995-01-01

    Report and analysis of, and discussion of improvements in, procedure for calibrating partial-pressure-of-oxygen sensors to satisfy Spacelab calibration requirements released. Sensors exhibit fast drift, which results in short calibration period not suitable for Spacelab. By assessing complete process of determining total drift range available, calibration procedure modified to eliminate errors and still satisfy requirements without compromising integrity of system.

  11. A Piece of Paper Falling Faster than Free Fall

    ERIC Educational Resources Information Center

    Vera, F.; Rivera, R.

    2011-01-01

    We report a simple experiment that clearly demonstrates a common error in the explanation of the classic experiment where a small piece of paper is put over a book and the system is let fall. This classic demonstration is used in introductory physics courses to show that after eliminating the friction force with the air, the piece of paper falls…

  12. Guidewire retention following central venous catheterisation: a human factors and safe design investigation.

    PubMed

    Horberry, Tim; Teng, Yi-Chun; Ward, James; Patil, Vishal; Clarkson, P John

    2014-01-01

    Central Venous Catheterisation (CVC) has occasionally been associated with cases of retained guidewires in patients after surgery. In theory, this is a completely avoidable complication; however, as with any human procedure, operator error leading to guidewires occasionally being retained cannot be fully eliminated. The work described here investigated the issue in an attempt to better understand it from both an operator and a systems perspective, and ultimately to recommend appropriate safe design solutions that reduce guidewire retention errors. Nine distinct methods were used: observations of the procedure, a literature review, interviews with CVC end-users, task analysis construction, CVC procedural audits, two human reliability assessments, usability heuristics and a comprehensive solution survey with CVC end-users. The three solutions that operators rated most highly, in terms of both practicality and effectiveness, were: making trainees better aware of potential guidewire complications and strongly emphasising guidewire removal in CVC training; actively checking that the guidewire is present in the waste tray for disposal; and standardising the purchase of central line sets so that differences that may affect the chance of guidewire loss are minimised. Further work to eliminate or engineer out the possibility of guidewires being retained is proposed.

  13. Estimation bias from using nonlinear Fourier plane correlators for sub-pixel image shift measurement and implications for the binary joint transform correlator

    NASA Astrophysics Data System (ADS)

    Grycewicz, Thomas J.; Florio, Christopher J.; Franz, Geoffrey A.; Robinson, Ross E.

    2007-09-01

    When using Fourier plane digital algorithms or an optical correlator to measure the correlation between digital images, interpolation by center-of-mass or quadratic estimation techniques can be used to estimate image displacement to the sub-pixel level. However, this can lead to a bias in the correlation measurement. This bias shifts the sub-pixel output measurement to be closer to the nearest pixel center than the actual location. The paper investigates the bias in the outputs of both digital and optical correlators, and proposes methods to minimize this effect. We use digital studies and optical implementations of the joint transform correlator to demonstrate optical registration with accuracies better than 0.1 pixels. We use both simulations of image shift and movies of a moving target as inputs. We demonstrate bias error for both center-of-mass and quadratic interpolation, and discuss the reasons that this bias is present. Finally, we suggest measures to reduce or eliminate the bias effects. We show that when sub-pixel bias is present, it can be eliminated by modifying the interpolation method. By removing the bias error, we improve registration accuracy by thirty percent.
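
    Both estimators the abstract compares are three-point interpolators around the correlation peak; the sketch below, on an invented 1-D Gaussian peak with a known sub-pixel shift, shows the pull-toward-pixel-center bias. The function names are ours, and a real correlator would operate on the full 2-D correlation surface.

    ```python
    import numpy as np

    def quadratic_subpixel(c_m1, c_0, c_p1):
        """Fit a parabola through the peak sample and its two neighbours;
        return the sub-pixel offset of the vertex from the peak sample."""
        return 0.5 * (c_m1 - c_p1) / (c_m1 - 2.0 * c_0 + c_p1)

    def center_of_mass_subpixel(c_m1, c_0, c_p1):
        """Intensity-weighted centroid of the three samples around the peak."""
        return (c_p1 - c_m1) / (c_m1 + c_0 + c_p1)

    # A Gaussian correlation peak shifted by a known sub-pixel amount:
    true_shift = 0.3
    x = np.arange(-3, 4)
    c = np.exp(-((x - true_shift) ** 2) / 2.0)
    i = np.argmax(c)
    print(quadratic_subpixel(c[i - 1], c[i], c[i + 1]))       # ~0.25: biased toward 0
    print(center_of_mass_subpixel(c[i - 1], c[i], c[i + 1]))  # ~0.16: biased more strongly
    ```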

  14. Lean six sigma methodologies improve clinical laboratory efficiency and reduce turnaround times.

    PubMed

    Inal, Tamer C; Goruroglu Ozturk, Ozlem; Kibar, Filiz; Cetiner, Salih; Matyar, Selcuk; Daglioglu, Gulcin; Yaman, Akgun

    2018-01-01

    Organizing workflow is a major task of laboratory management. Recently, clinical laboratories have started to adopt methodologies such as Lean Six Sigma, and some successful implementations have been reported. This study used Lean Six Sigma to simplify the laboratory work process and decrease turnaround time by eliminating non-value-adding steps. The five-stage Six Sigma system known as define, measure, analyze, improve, and control (DMAIC) was used to identify and solve problems. The laboratory turnaround time for individual tests, the total delay time in the sample reception area, and the percentage of steps involving risks of medical errors and biological hazards in the overall process were measured. The pre-analytical process in the reception area was improved by eliminating 3 h and 22.5 min of non-value-adding work. Turnaround time for stat samples also improved, from 68 to 59 min, after applying Lean. Steps prone to medical errors and posing potential biological hazards to receptionists were reduced from 30% to 3%. Successful implementation of Lean Six Sigma significantly improved all of the selected performance metrics. This quality-improvement methodology has the potential to significantly improve clinical laboratories. © 2017 Wiley Periodicals, Inc.

  15. Astrometry of Pluto from 1930-1951 observations: The Lampland plate collection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buie, Marc W.; Folkner, William M., E-mail: buie@boulder.swri.edu, E-mail: william.m.folkner@jpl.nasa.gov

    We present a new analysis of 843 photographic plates of Pluto taken by Carl Lampland at Lowell Observatory from 1930–1951. This large collection of plates contains useful astrometric information that improves our knowledge of Pluto's orbit. This improvement provides critical support to the impending flyby of Pluto by New Horizons. New Horizons can do inbound navigation of the system to improve its targeting. This navigation is capable of nearly eliminating the sky-plane errors but can do little to constrain the time of closest approach. Thus the focus of this work was to better determine Pluto's heliocentric distance and to determine the uncertainty on that distance, with a particular eye to eliminating systematic errors that might have been previously unrecognized. This work adds 596 new astrometric measurements based on the USNO CCD Astrograph Catalog 4. With the addition of these data, the uncertainty of the estimated heliocentric position of Pluto in Developmental Ephemerides 432 (DE432) is at the level of 1000 km. This new analysis gives us more confidence that these estimations are accurate and are sufficient to support a successful flyby of Pluto by New Horizons.

  16. A study on the applications of AI in finishing of additive manufacturing parts

    NASA Astrophysics Data System (ADS)

    Fathima Patham, K.

    2017-06-01

    Artificial intelligence and computer simulation are powerful technological tools for solving complex problems in the manufacturing industries. Additive Manufacturing is one of the powerful manufacturing techniques that provide design flexibility to products. Products with complex shapes are manufactured directly, without the need for machining and tooling, using Additive Manufacturing. However, the main drawback of components produced using Additive Manufacturing processes is the quality of their surfaces. This study aims to minimize the defects caused during Additive Manufacturing with the aid of Artificial Intelligence. The developed AI system has three layers, each of which tries to eliminate or minimize production errors. The first layer optimizes the digitization of the 3D CAD model of the product and hence reduces staircase errors. The second layer optimizes the 3D printing machine parameters in order to eliminate the warping effect. The third layer helps to choose a surface finishing technique suitable for the printed component based on the degree of complexity of the product and the material. The efficiency of the developed AI system was examined on functional parts such as gears.

  17. Astrometry of Pluto from 1930-1951 Observations: the Lampland Plate Collection

    NASA Astrophysics Data System (ADS)

    Buie, Marc W.; Folkner, William M.

    2015-01-01

    We present a new analysis of 843 photographic plates of Pluto taken by Carl Lampland at Lowell Observatory from 1930-1951. This large collection of plates contains useful astrometric information that improves our knowledge of Pluto's orbit. This improvement provides critical support to the impending flyby of Pluto by New Horizons. New Horizons can do inbound navigation of the system to improve its targeting. This navigation is capable of nearly eliminating the sky-plane errors but can do little to constrain the time of closest approach. Thus the focus of this work was to better determine Pluto's heliocentric distance and to determine the uncertainty on that distance, with a particular eye to eliminating systematic errors that might have been previously unrecognized. This work adds 596 new astrometric measurements based on the USNO CCD Astrograph Catalog 4. With the addition of these data, the uncertainty of the estimated heliocentric position of Pluto in Developmental Ephemerides 432 (DE432) is at the level of 1000 km. This new analysis gives us more confidence that these estimations are accurate and are sufficient to support a successful flyby of Pluto by New Horizons.

  18. TH-B-BRC-00: How to Identify and Resolve Potential Clinical Errors Before They Impact Patients Treatment: Lessons Learned

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2016-06-15

    Radiation treatment consists of a chain of events influenced by the quality of machine operation, beam data commissioning, machine calibration, patient-specific data, simulation, treatment planning, imaging and treatment delivery. There is always a chance that the clinical medical physicist may make, or fail to detect, an error in one of these events that may impact the patient’s treatment. In the clinical scenario, errors may be systematic and, without peer review, may have low detectability because they are not part of routine QA procedures. During treatment, there may be machine errors that need attention. External reviews of some of the treatment delivery components by independent reviewers, such as IROC, can detect errors, but may not be timely. The goal of this session is to help junior clinical physicists identify potential errors, as well as the quality assurance approach of performing a root cause analysis to find and eliminate an error and to continually monitor for errors. A compilation of potential errors will be presented through examples of the thought process required to spot the error and determine the root cause. Examples may include unusual machine operation, erratic electrometer readings, consistently low electron output, variation in photon output, body parts inadvertently left in the beam, unusual treatment plans, poor normalization, hot spots, etc. Awareness of the possibility and detection of error in any link of the treatment process chain will help improve the safe and accurate delivery of radiation to patients. Four experts will discuss how to identify errors in four areas of clinical treatment. D. Followill, NIH grant CA 180803.

  19. TH-B-BRC-01: How to Identify and Resolve Potential Clinical Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, I.

    2016-06-15

    Radiation treatment consists of a chain of events influenced by the quality of machine operation, beam data commissioning, machine calibration, patient-specific data, simulation, treatment planning, imaging and treatment delivery. There is always a chance that the clinical medical physicist may make, or fail to detect, an error in one of these events that may impact the patient’s treatment. In the clinical scenario, errors may be systematic and, without peer review, may have low detectability because they are not part of routine QA procedures. During treatment, there may be machine errors that need attention. External reviews of some of the treatment delivery components by independent reviewers, such as IROC, can detect errors, but may not be timely. The goal of this session is to help junior clinical physicists identify potential errors, as well as the quality assurance approach of performing a root cause analysis to find and eliminate an error and to continually monitor for errors. A compilation of potential errors will be presented through examples of the thought process required to spot the error and determine the root cause. Examples may include unusual machine operation, erratic electrometer readings, consistently low electron output, variation in photon output, body parts inadvertently left in the beam, unusual treatment plans, poor normalization, hot spots, etc. Awareness of the possibility and detection of error in any link of the treatment process chain will help improve the safe and accurate delivery of radiation to patients. Four experts will discuss how to identify errors in four areas of clinical treatment. D. Followill, NIH grant CA 180803.

  20. A Dual Frequency Carrier Phase Error Difference Checking Algorithm for the GNSS Compass.

    PubMed

    Liu, Shuo; Zhang, Lei; Li, Jian

    2016-11-24

    The performance of the Global Navigation Satellite System (GNSS) compass is related to the quality of carrier phase measurement, so processing the carrier phase error properly is important for improving GNSS compass accuracy. In this work, we propose a dual frequency carrier phase error difference checking algorithm for the GNSS compass. The algorithm aims at eliminating large carrier phase errors in dual frequency double-differenced carrier phase measurements according to the error difference between the two frequencies. The advantage of the proposed algorithm is that it does not need additional environment information and performs well in the presence of multiple large errors compared with previous research. The core of the proposed algorithm is to remove the geometrical distance from the dual frequency carrier phase measurement, after which the carrier phase error is separated and detectable. We generate the Double Differenced Geometry-Free (DDGF) measurement according to the fact that the carrier phase measurements at the two frequencies contain the same geometrical distance. We then propose the DDGF detection to detect a large carrier phase error difference between the two frequencies, and analyze its theoretical performance. An open sky test, a man-made multipath test and an urban vehicle test were carried out to evaluate the performance of the proposed algorithm. The results show that the proposed DDGF detection is able to detect large errors in dual frequency carrier phase measurement by checking the error difference between the two frequencies. After the DDGF detection, the accuracy of the baseline vector is improved in the GNSS compass.
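
    A minimal sketch of the geometry-free idea: converting the dual frequency double-differenced phases to metres and differencing them cancels the common geometric range, leaving only errors (plus a small differential ionosphere term over short baselines). The wavelengths are the standard GPS L1/L2 values; the epoch-difference test and its threshold are our simplification of the paper's detector, not its exact formulation.

    ```python
    import numpy as np

    C = 299792458.0
    LAMBDA_L1 = C / 1575.42e6   # ~0.190 m
    LAMBDA_L2 = C / 1227.60e6   # ~0.244 m

    def ddgf_flags(dd_l1_cycles, dd_l2_cycles, threshold_m=0.05):
        """Flag epochs whose double-differenced geometry-free combination
        jumps by more than `threshold_m` metres between epochs (a sketch;
        a real detector would also model the slowly varying ionosphere)."""
        gf = dd_l1_cycles * LAMBDA_L1 - dd_l2_cycles * LAMBDA_L2  # geometry cancels
        return np.abs(np.diff(gf, prepend=gf[0])) > threshold_m
    ```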

  1. Eliminating Hospital Acquired Infections: Is It Possible? Is It Sustainable? Is It Worth It?

    PubMed Central

    Shannon, Richard P.

    2011-01-01

    An estimated 2 million hospital-acquired infections (HAI) are now reported annually in the US, and are associated with an estimated $5 billion in additional health care costs. With this, the growing incidence of HAI has become “ground zero” in the campaign to improve patient safety and eliminate waste in health care. We studied the characteristics of high-performing organizations and their leaders outside of health care to determine how such organizations become “best in class.” We then sought to apply the principles that led to this status to eliminating HAI associated with central venous catheters. Observations of the current condition of health care revealed multiple defects in various processes that were breeding grounds for error. Redesign of these processes by the people involved in them, under the guidance of a leader, resulted in an 86% reduction in bloodstream infections. Overall, financial performance improved by $5.1 million over a 2-year period. Mortality in intensive care units declined by 29%. Using methods borrowed from highly reliable industries and engaging workers at the point of care can have profound and sustainable effects in nearly eliminating HAI, with significant clinical and financial benefits. PMID:21686213

  2. Eliminating hospital acquired infections: is it possible? Is it sustainable? Is it worth it?

    PubMed

    Shannon, Richard P

    2011-01-01

    An estimated 2 million hospital-acquired infections (HAI) are now reported annually in the US, and are associated with an estimated $5 billion in additional health care costs. With this, the growing incidence of HAI has become "ground zero" in the campaign to improve patient safety and eliminate waste in health care. We studied the characteristics of high-performing organizations and their leaders outside of health care to determine how such organizations become "best in class." We then sought to apply the principles that led to this status to eliminating HAI associated with central venous catheters. Observations of the current condition of health care revealed multiple defects in various processes that were breeding grounds for error. Redesign of these processes by the people involved in them, under the guidance of a leader, resulted in an 86% reduction in bloodstream infections. Overall, financial performance improved by $5.1 million over a 2-year period. Mortality in intensive care units declined by 29%. Using methods borrowed from highly reliable industries and engaging workers at the point of care can have profound and sustainable effects in nearly eliminating HAI, with significant clinical and financial benefits.

  3. Simplifications in analyzing positron emission tomography data: effects on outcome measures.

    PubMed

    Logan, Jean; Alexoff, David; Kriplani, Aarti

    2007-10-01

    Initial validation studies of new radiotracers generally involve kinetic models that require a measured arterial input function. This allows for the separation of tissue binding from delivery and blood flow effects. However, when using a tracer in a clinical setting, it is necessary to eliminate arterial blood sampling due to its invasiveness and the extra burden of counting and analyzing the blood samples for metabolites. In some cases, it may also be necessary to replace dynamic scanning with a shortened scanning period some time after tracer injection, as is done with FDG (F-18 fluorodeoxyglucose). These approximations represent loss of information. In this work, we considered several questions related to this: (1) Do differences in experimental conditions (drug treatments) or populations affect the input function, and what effect, if any, does this have on the final outcome measure? (2) How do errors in metabolite measurements enter into results? (3) What errors are incurred if the uptake ratio is used in place of the distribution volume ratio? (4) Is one- or two-point blood sampling any better for FDG data than the standardized uptake value? and (5) If blood sampling is necessary, what alternatives are there to arterial blood sampling? The first three questions were considered in terms of data from human dynamic positron emission tomography (PET) studies under conditions of baseline and drug pretreatment. Data from [11C]raclopride studies and those from the norepinephrine transporter tracer (S,S)-[11C]O-methyl reboxetine were used. Calculation of a metabolic rate for FDG using the operational equation requires a measured input function. We tested a procedure based on two blood samples to estimate the plasma integral and convolution that occur in the operational equation. There are some tracers for which blood sampling is necessary. Strategies for brain studies involve using the internal carotids in estimating the radioactivity after correcting for partial volume and spillover in order to eliminate arterial sampling. Some venous blood samples are still required for metabolite measurements. The ultimate solution to the problem of arterial sampling may be a wrist scanner, which acts as a small PET camera for imaging the arteries in the wrist. This is currently under development.

  4. Toward Continuous GPS Carrier-Phase Time Transfer: Eliminating the Time Discontinuity at an Anomaly

    PubMed Central

    Yao, Jian; Levine, Judah; Weiss, Marc

    2015-01-01

    The wide application of Global Positioning System (GPS) carrier-phase (CP) time transfer is limited by the problem of boundary discontinuity (BD). The discontinuity has two categories. One is “day boundary discontinuity,” which has been studied extensively and can be solved by multiple methods [1–8]. The other category of discontinuity, called “anomaly boundary discontinuity (anomaly-BD),” comes from a GPS data anomaly. The anomaly can be a data gap (i.e., missing data), a GPS measurement error (i.e., bad data), or a cycle slip. An initial study of the anomaly-BD showed that we can fix the discontinuity if the anomaly lasts no more than 20 min, using a polynomial curve-fitting strategy to repair the anomaly [9]. However, sometimes the data anomaly lasts longer than 20 min, so a better curve-fitting strategy is needed. In addition, a cycle slip, another type of data anomaly, can occur and lead to an anomaly-BD. To solve these problems, this paper proposes a new strategy: satellite-clock-aided curve fitting with cycle slip detection. Basically, this new strategy applies the satellite clock correction to the GPS data and then performs the polynomial curve fitting for the code and phase data, as before. Our study shows that the phase-data residual is only ~3 mm for all GPS satellites. The new strategy also detects cycle slips and finds their number by searching for the minimum curve-fitting residual. Extensive examples show that this new strategy enables us to repair up to a 40-min GPS data anomaly, regardless of whether the anomaly is due to a data gap, a cycle slip, or a combination of the two. We also find that interference with the GPS signal, known as “jamming,” can possibly lead to a time-transfer error, and that this new strategy can compensate for jamming outages. Thus, the new strategy can eliminate the impact of jamming on time transfer. As a whole, we greatly improve the robustness of GPS CP time transfer. PMID:26958451
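
    The minimum-residual search at the heart of the strategy is easy to sketch: for each candidate integer cycle slip, undo it after the anomaly, fit one polynomial across the whole span, and keep the slip that minimizes the residual. The function name, polynomial order, slip range, and the assumption that the satellite-clock correction has already been applied are ours, not the paper's exact implementation.

    ```python
    import numpy as np

    L1_WAVELENGTH = 0.1903  # m; illustrative single-frequency case

    def find_cycle_slip(t, phase_m, split, max_slip=10, order=3):
        """Search the integer cycle slip at index `split` that makes one
        polynomial fit the pre- and post-anomaly phase best (a sketch of
        the minimum-residual idea; the paper adds satellite-clock aiding).
        `t`, `phase_m` hold the good epochs; the anomaly sits at `split`."""
        best_slip, best_rms = 0, np.inf
        for slip in range(-max_slip, max_slip + 1):
            trial = phase_m.copy()
            trial[split:] -= slip * L1_WAVELENGTH        # undo candidate slip
            resid = trial - np.polyval(np.polyfit(t, trial, order), t)
            rms = np.sqrt(np.mean(resid ** 2))
            if rms < best_rms:
                best_slip, best_rms = slip, rms
        return best_slip, best_rms
    ```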

  5. Study of atmospheric plasma spray process with the emphasis on gas-shrouded nozzles

    NASA Astrophysics Data System (ADS)

    Jankovic, Miodrag M.

    An atmospheric plasma spraying process is investigated in this work using an experimental approach and mathematical modelling. Emphasis was put on gas-shrouded nozzles, their design, and the protection they give the plasma jet against mixing with the surrounding air. The first part of the thesis is dedicated to the analysis of the enthalpy probe method, the major diagnostic tool in this work. The systematic error in measuring the stagnation pressure, due to the large temperature difference between the plasma and the water-cooled probe, is investigated here. Parallel measurements with the enthalpy probe and an uncooled ceramic probe were performed. Numerical experiments were also conducted, using the k-ɛ model of turbulence. Based on the obtained results, a compensating algorithm for the above error is suggested. The major objective of the thesis was to study the plasma spraying process and the potential benefits of using gas-shrouded nozzles. Mathematical modelling was used to perform a parametric study of the flow pattern inside these nozzles. Two nozzles were used: a commercial conical nozzle, and a custom-made curvilinear nozzle. The latter is aimed at eliminating the cold-air entrainment recorded for the conical nozzle. A parametric study of the shrouding gas and its interaction with the plasma jet was also carried out. Two modes of shrouding-gas injection were tested: through sixteen injection ports, and through a continuous slot surrounding the plasma jet. Both nozzles and both injection modes were thoroughly tested, experimentally and numerically. The curvilinear nozzle completely eliminates the cold-air entrainment and yields significantly higher plasma temperature. Injection through the continuous slot also resulted in much better protection of the plasma jet. Both nozzles were used to perform spraying tests. The obtained coatings were tested for porosity, adhesion strength, and microstructure. These tests indicated better microstructure of the coatings sprayed by the curvilinear nozzle. Their porosity was also significantly lower, and their adhesion strength was higher by more than 25%. The overall results suggest that curvilinear nozzles represent a much better solution for gas-shrouded plasma spraying.

  6. An arbitrary-order staggered time integrator for the linear acoustic wave equation

    NASA Astrophysics Data System (ADS)

    Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo

    2018-02-01

    We suggest a staggered time integrator whose order of accuracy can arbitrarily be extended to solve the linear acoustic wave equation. A strategy to select the appropriate order of accuracy is also proposed based on the error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost several times, but also allows us to flexibly set the modelling parameters such as the time step length, grid interval and P-wave speed. It is demonstrated that the proposed method can almost eliminate temporal dispersive errors during long term simulations regardless of the heterogeneity of the media and time step lengths. The method can also be successfully applied to the source problem with an absorbing boundary condition, which is frequently encountered in the practical usage for the imaging algorithms or the inverse problems.
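
    The paper's integrator extends the staggered structure to arbitrary order; as a point of reference, a minimal second-order staggered (leapfrog) step for the 1-D acoustic system looks like the sketch below. The grid size, material constants and CFL factor are illustrative, and the boundary treatment is deliberately crude.

    ```python
    import numpy as np

    # Second-order staggered leapfrog for the 1-D acoustic system
    #   dp/dt = -rho*c^2 * dv/dx,   dv/dt = -(1/rho) * dp/dx
    # (a sketch of the staggered structure the paper extends to higher order)
    nx, nt = 200, 400
    dx, c, rho = 10.0, 1500.0, 1000.0
    dt = 0.4 * dx / c                       # CFL-stable time step

    p = np.zeros(nx)                        # pressure at integer grid points
    v = np.zeros(nx - 1)                    # velocity at half grid points
    p[nx // 2] = 1.0                        # initial pressure pulse

    for _ in range(nt):
        v -= dt / (rho * dx) * np.diff(p)            # staggered velocity update
        p[1:-1] -= dt * rho * c**2 / dx * np.diff(v) # staggered pressure update
    ```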

  7. New Abstraction Networks and a New Visualization Tool in Support of Auditing the SNOMED CT Content

    PubMed Central

    Geller, James; Ochs, Christopher; Perl, Yehoshua; Xu, Junchuan

    2012-01-01

    Medical terminologies are large and complex. Frequently, errors are hidden in this complexity. Our objective is to find such errors, which can be aided by deriving abstraction networks from a large terminology. Abstraction networks preserve important features but eliminate many minor details, which are often not useful for identifying errors. Providing visualizations for such abstraction networks aids auditors by allowing them to quickly focus on elements of interest within a terminology. Previously we introduced area taxonomies and partial area taxonomies for SNOMED CT. In this paper, two advanced, novel kinds of abstraction networks, the relationship-constrained partial area subtaxonomy and the root-constrained partial area subtaxonomy are defined and their benefits are demonstrated. We also describe BLUSNO, an innovative software tool for quickly generating and visualizing these SNOMED CT abstraction networks. BLUSNO is a dynamic, interactive system that provides quick access to well organized information about SNOMED CT. PMID:23304293
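
    As a rough illustration of the area-taxonomy idea (not the authors' algorithm or BLUSNO itself): concepts that share the same set of defining relationship types collapse into one "area," hiding minor detail while preserving structure. The toy data below is invented.

    ```python
    from collections import defaultdict

    # Toy sketch of an "area taxonomy": concepts sharing the same set of
    # defining relationship types are grouped into one area (invented data).
    concepts = {
        "Appendicitis": {"finding-site", "associated-morphology"},
        "Gastritis":    {"finding-site", "associated-morphology"},
        "Fracture":     {"finding-site"},
    }

    areas = defaultdict(list)
    for name, rel_types in concepts.items():
        areas[frozenset(rel_types)].append(name)

    for rels, members in areas.items():
        print(sorted(rels), "->", sorted(members))
    ```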

  8. New abstraction networks and a new visualization tool in support of auditing the SNOMED CT content.

    PubMed

    Geller, James; Ochs, Christopher; Perl, Yehoshua; Xu, Junchuan

    2012-01-01

    Medical terminologies are large and complex. Frequently, errors are hidden in this complexity. Our objective is to find such errors, which can be aided by deriving abstraction networks from a large terminology. Abstraction networks preserve important features but eliminate many minor details, which are often not useful for identifying errors. Providing visualizations for such abstraction networks aids auditors by allowing them to quickly focus on elements of interest within a terminology. Previously we introduced area taxonomies and partial area taxonomies for SNOMED CT. In this paper, two advanced, novel kinds of abstraction networks, the relationship-constrained partial area subtaxonomy and the root-constrained partial area subtaxonomy are defined and their benefits are demonstrated. We also describe BLUSNO, an innovative software tool for quickly generating and visualizing these SNOMED CT abstraction networks. BLUSNO is a dynamic, interactive system that provides quick access to well organized information about SNOMED CT.

  9. Error and objectivity: cognitive illusions and qualitative research.

    PubMed

    Paley, John

    2005-07-01

    Psychological research has shown that cognitive illusions, of which visual illusions are just a special case, are systematic and pervasive, raising epistemological questions about how error in all forms of research can be identified and eliminated. The quantitative sciences make use of statistical techniques for this purpose, but it is not clear what the qualitative equivalent is, particularly in view of widespread scepticism about validity and objectivity. I argue that, in the light of cognitive psychology, the 'error question' cannot be dismissed as a positivist obsession, and that the concepts of truth and objectivity are unavoidable. However, they constitute only a 'minimal realism', which does not necessarily bring a commitment to 'absolute' truth, certainty, correspondence, causation, reductionism, or universal laws in its wake. The assumption that it does reflects a misreading of positivism and, ironically, precipitates a 'crisis of legitimation and representation', as described by constructivist authors.

  10. Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.; Van Meter, James R.

    2005-01-01

    A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.

  11. Uncertainty modelling and analysis of volume calculations based on a regular grid digital elevation model (DEM)

    NASA Astrophysics Data System (ADS)

    Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi

    2018-05-01

    The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), caused by the algorithm adopted for the VC model; 2) discrete error (DE), usually caused by DEM resolution and terrain complexity; and 3) propagation error (PE), caused by errors in the input variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a way to quantify the uncertainty of VCs through a confidence interval based on the truncation error (TE) is proposed. In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated from a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich the uncertainty modelling and analysis-related theories of geographic information science.
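
    The two composite rules named here are straightforward to state; the sketch below implements both over a regular grid and checks them against a Gauss surface whose analytic volume is 2π. Function names and grid parameters are ours, not the paper's.

    ```python
    import numpy as np

    def composite_weights(n, rule):
        """1-D composite quadrature weights (unit spacing factored out)."""
        w = np.ones(n)
        if rule == "trapezoid":              # 1, 2, ..., 2, 1 over 2
            w[1:-1] = 2.0
            return w / 2.0
        if rule == "simpson":                # 1, 4, 2, ..., 4, 1 over 3; odd n
            w[1:-1:2], w[2:-1:2] = 4.0, 2.0
            return w / 3.0
        raise ValueError(rule)

    def grid_volume(z, dx, dy, rule="trapezoid"):
        """Composite double rule over a regular grid of elevations z."""
        wy = composite_weights(z.shape[0], rule)
        wx = composite_weights(z.shape[1], rule)
        return dx * dy * (wy @ z @ wx)

    # Synthetic Gauss surface with known analytic volume 2*pi ~ 6.2832:
    n, half = 201, 10.0                      # odd n, as Simpson requires
    x = np.linspace(-half, half, n)
    X, Y = np.meshgrid(x, x)
    z = np.exp(-(X**2 + Y**2) / 2.0)
    d = x[1] - x[0]
    print(grid_volume(z, d, d, "trapezoid"), grid_volume(z, d, d, "simpson"))
    ```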

  12. Detrimental Effect Elimination of Laser Frequency Instability in Brillouin Optical Time Domain Reflectometer by Using Self-Heterodyne Detection

    PubMed Central

    Li, Yongqian; Li, Xiaojuan; An, Qi; Zhang, Lixin

    2017-01-01

    A useful method for eliminating the detrimental effect of laser frequency instability on Brillouin signals by employing the self-heterodyne detection of Rayleigh and Brillouin scattering is presented. From the analysis of Brillouin scattering spectra from fibers of different lengths measured by heterodyne detection, the maximum usable pulse width immune to laser frequency instability is found to be about 4 µs in a self-heterodyne detection Brillouin optical time domain reflectometer (BOTDR) system using a broad-band laser with low frequency stability. Applying the self-heterodyne detection of Rayleigh and Brillouin scattering in the BOTDR system, we successfully demonstrate that the detrimental effect of laser frequency instability on Brillouin signals can be eliminated effectively. Employing the broad-band laser modulated by an electro-optic modulator driven with 130-ns-wide pulses, the observed maximum errors in temperatures measured by the local heterodyne and self-heterodyne detection BOTDR systems are 7.9 °C and 1.2 °C, respectively. PMID:28335508

  13. a Generic Probabilistic Model and a Hierarchical Solution for Sensor Localization in Noisy and Restricted Conditions

    NASA Astrophysics Data System (ADS)

    Ji, S.; Yuan, X.

    2016-06-01

    A generic probabilistic model, based on the fundamental Bayes rule and the Markov assumption, is introduced to integrate the process of mobile platform localization with optical sensors. Based on it, three relatively independent solutions, bundle adjustment, Kalman filtering and particle filtering, are derived under different additional restrictions. We want to show, first, that Kalman filtering may be a better initial-value supplier for bundle adjustment than traditional relative orientation in irregular strips and networks, or when tie-point extraction fails. Second, in highly noisy conditions, particle filtering can act as a bridge for gap binding when a large number of gross errors cause a Kalman filter or a bundle adjustment to fail. Third, both filtering methods, which help reduce error propagation and eliminate gross errors, safeguard a global and static bundle adjustment, which requires the strictest initial values and control conditions. The main innovation concerns the integrated processing of stochastic errors and gross errors in sensor observations, and the integration of the three most-used solutions, bundle adjustment, Kalman filtering and particle filtering, into a generic probabilistic localization model. Tests in noisy and restricted situations are designed and examined to prove these points.
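
    As a reference point for the filtering side of the model (not the authors' formulation), a minimal 1-D Kalman filter looks like the sketch below; the noise levels are invented, and in the paper's setting the filtered estimates would serve as initial values for a subsequent bundle adjustment.

    ```python
    import numpy as np

    def kalman_1d(z, q=1e-3, r=0.25, x0=0.0, p0=1.0):
        """Minimal 1-D constant-state Kalman filter (invented noise levels);
        q is process noise variance, r is measurement noise variance."""
        x, p, out = x0, p0, []
        for zi in z:
            p += q                   # predict (identity state transition)
            k = p / (p + r)          # Kalman gain
            x += k * (zi - x)        # update with measurement zi
            p *= (1.0 - k)
            out.append(x)
        return np.array(out)

    rng = np.random.default_rng(0)
    truth = 5.0
    estimates = kalman_1d(truth + rng.normal(0.0, 0.5, 100))
    print(estimates[-1])             # converges toward `truth`
    ```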

  14. Cost-effectiveness of the stream-gaging program in Nebraska

    USGS Publications Warehouse

    Engel, G.B.; Wahl, K.L.; Boohar, J.A.

    1984-01-01

    This report documents the results of a study of the cost-effectiveness of the streamflow information program in Nebraska. Presently, 145 continuous surface-water stations are operated in Nebraska on a budget of $908,500. Data uses and funding sources are identified for each of the 145 stations. Data from most stations have multiple uses. All stations have sufficient justification for continuation, but two stations primarily are used in short-term research studies; their continued operation needs to be evaluated when the research studies end. The present measurement frequency produces an average standard error for instantaneous discharges of about 12 percent, including periods when stage data are missing. Altering the travel routes and the measurement frequency will allow a reduction in standard error of about 1 percent with the present budget. Standard error could be reduced to about 8 percent if lost record could be eliminated. A minimum budget of $822,000 is required to operate the present network, but operations at that funding level would result in an increase in standard error to about 16 percent. The maximum budget analyzed was $1,363,000, which would result in an average standard error of 6 percent. (USGS)

  15. Bioelimination of /sup 51/Cr and /sup 85/Sr by cockroaches, Gromphadorhina portentosa (orthoptera: blaberidae), as affected by mites, Gromphadorholaelaps schaeferi (parasitiformes: laelapidae)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schowalter, T.D.; Crossley, D.A. Jr.

    1982-03-01

    The rates of Chromium-51 and Strontium-85 assimilation and bioelimination by the hissing cockroach, Gromphadorhina portentosa (Schaum), are described when the symbiotic mite, Gromphadorholaelaps schaeferi Till, was present or removed. Mite-infested cockroaches had significantly higher rates of /sup 51/Cr elimination relative to mite-free cockroaches, implying more rapid gut clearance times. The authors did not find a significant mite effect on /sup 85/Sr elimination by the host, but mite effects could have been masked by the apparently unique process of nutrient assimilation and elimination in G. portentosa. Conventional models of radioactive tracer bioelimination predict a rapid initial loss of tracer due to gut clearance, followed by a slower loss due to excretion of assimilated tracer. The results indicated that assimilated /sup 85/Sr was eliminated earlier than unassimilated /sup 85/Sr, which was lost by defecation.

  16. Note: A dual-channel sensor for dew point measurement based on quartz crystal microbalance.

    PubMed

    Li, Ning; Meng, Xiaofeng; Nie, Jing

    2017-05-01

    A new dual-channel sensor was designed to eliminate the temperature effect on the frequency measurement of the quartz crystal microbalance (QCM) in dew point detection. The sensor uses active temperature control to produce condensation on the surface of the QCM and then detects the dew point. Both the single-channel and the dual-channel methods were tested on the device. The measurement error of the single-channel method was less than 0.5 °C over the dew point range of -2 °C to 10 °C, while that of the dual-channel method was 0.3 °C. The results showed that the dual-channel method was able to eliminate the temperature effect and yield better measurement accuracy.
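
    The cancellation at work can be stated in two lines: both crystals share the temperature-induced frequency shift, and only the sensing channel sees the mass loading from condensation, so the channel difference isolates the condensation signal. The coefficients below are invented purely to illustrate the differencing; this is not the authors' calibration.

    ```python
    # Sketch of the dual-channel idea (invented coefficients): both QCM
    # channels share the temperature-induced frequency shift; only the
    # sensing channel sees the mass loading from condensation.
    K_TEMP = -8.0      # Hz per degC, common to both crystals (assumed)
    K_MASS = -2.26e6   # Hz per (g/cm^2), Sauerbrey-like sensitivity (assumed)

    def channel_freq_shift(d_temp, d_mass):
        return K_TEMP * d_temp + K_MASS * d_mass

    d_temp, d_mass = 3.0, 1e-6
    sensing = channel_freq_shift(d_temp, d_mass)
    reference = channel_freq_shift(d_temp, 0.0)   # no condensation
    print(sensing - reference)   # temperature term cancels; mass term remains
    ```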

  17. Note: A dual-channel sensor for dew point measurement based on quartz crystal microbalance

    NASA Astrophysics Data System (ADS)

    Li, Ning; Meng, Xiaofeng; Nie, Jing

    2017-05-01

    A new dual-channel sensor was designed to eliminate the temperature effect on the frequency measurement of the quartz crystal microbalance (QCM) in dew point detection. The sensor uses active temperature control to produce condensation on the surface of the QCM and then detects the dew point. Both the single-channel and the dual-channel methods were tested on the device. The measurement error of the single-channel method was less than 0.5 °C over the dew point range of -2 °C to 10 °C, while that of the dual-channel method was 0.3 °C. The results showed that the dual-channel method was able to eliminate the temperature effect and yield better measurement accuracy.

  18. GRACE star camera noise

    NASA Astrophysics Data System (ADS)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
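
    The record only names the quantities involved, so the following is a hedged toy sketch of forming the inter-camera quaternion and the autocovariance in which a twice-per-rev signal would appear. The Hamilton (w, x, y, z) convention, the small-angle conversion, and all function names are our assumptions, not the GRACE Level-1B processing.

    ```python
    import numpy as np

    def qmul(a, b):
        """Hamilton product, (w, x, y, z) convention."""
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def qconj(q):
        return q * np.array([1.0, -1.0, -1.0, -1.0])

    def inter_camera_angles(qa, qb):
        """Small-angle components (rad) of the camera-A-to-camera-B rotation
        per epoch; a periodic error shows up in their autocovariance."""
        rel = np.array([qmul(qconj(a), b) for a, b in zip(qa, qb)])
        return 2.0 * rel[:, 1:] * np.sign(rel[:, :1])   # vector part -> angles

    def autocovariance(x, max_lag):
        x = x - x.mean()
        return np.array([np.mean(x[:len(x) - k] * x[k:])
                         for k in range(max_lag)])
    ```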

  19. Development and Characterization of a Low-Pressure Calibration System for Hypersonic Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Green, Del L.; Everhart, Joel L.; Rhode, Matthew N.

    2004-01-01

    Minimization of uncertainty is essential for accurate ESP measurements at the very low free-stream static pressures found in hypersonic wind tunnels. Statistical characterization of environmental error sources requires a well-defined and controlled calibration method. A calibration system has been constructed, and environmental-control software has been developed to automate experimentation and eliminate human-induced error sources. The initial stability study of the calibration system shows a high degree of measurement accuracy and precision in temperature and pressure control. Control manometer drift and reference pressure instabilities introduce uncertainty into the repeatability of voltage responses measured from the PSI System 8400 between calibrations. Further improvements in repeatability are possible through software programming and further experimentation.

  20. Research on control strategy based on fuzzy PR for grid-connected inverter

    NASA Astrophysics Data System (ADS)

    Zhang, Qian; Guan, Weiguo; Miao, Wen

    2018-04-01

    With a traditional PI controller, there is a steady-state error when tracking AC signals. To solve this problem, a control strategy combining a fuzzy PR controller with grid-voltage feed-forward is proposed. The fuzzy PR controller eliminates the steady-state error of the system and adjusts the parameters of the PR controller in real time, avoiding the drawback of fixed parameters. The grid-voltage feed-forward control ensures current quality and improves the system's anti-interference ability when the grid voltage is distorted. Finally, simulation results show that the system can output grid current of good quality and also has good dynamic and steady-state performance.
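
    For reference, the resonant core of a PR controller (without the fuzzy adaptation described here) can be discretized and exercised in a few lines. The gains, resonance bandwidth and sample time below are illustrative, not taken from the paper.

    ```python
    import numpy as np
    from scipy import signal

    # Quasi-resonant PR controller  G(s) = Kp + 2*Kr*wc*s / (s^2 + 2*wc*s + w0^2)
    Kp, Kr = 1.0, 100.0
    wc = 2 * np.pi * 1.0             # resonance bandwidth (rad/s), assumed
    w0 = 2 * np.pi * 50.0            # resonant at the grid frequency
    Ts = 1e-4                        # controller sample time (s), assumed

    numd, dend, _ = signal.cont2discrete(([2 * Kr * wc, 0.0],
                                          [1.0, 2 * wc, w0 ** 2]),
                                         Ts, method='bilinear')
    numd = numd.ravel()

    t = np.arange(0.0, 0.2, Ts)
    err = np.sin(w0 * t)             # sinusoidal tracking error, worst case
    u = Kp * err + signal.lfilter(numd, dend, err)   # controller output
    # Near w0 the resonant term gives very high open-loop gain, so in closed
    # loop the steady-state error at the grid frequency is driven toward
    # zero, unlike with a plain PI controller.
    ```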
