Sample records for the search term "include errors due"

  1. Remediating Common Math Errors.

    ERIC Educational Resources Information Center

    Wagner, Rudolph F.

    1981-01-01

    Explanations and remediation suggestions for five types of mathematics errors due either to perceptual or cognitive difficulties are given. Error types include directionality problems, mirror writing, visually misperceived signs, diagnosed directionality problems, and mixed process errors. (CL)

  2. A general model for attitude determination error analysis

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Seidewitz, ED; Nicholson, Mark

    1988-01-01

    An overview is given of a comprehensive approach to filter and dynamics modeling for attitude determination error analysis. The models presented include both batch least-squares and sequential attitude estimation processes for both spin-stabilized and three-axis stabilized spacecraft. The discussion includes a brief description of a dynamics model of strapdown gyros, but it does not cover other sensor models. Model parameters can be chosen to be solve-for parameters, which are assumed to be estimated as part of the determination process, or consider parameters, which are assumed to have errors but not to be estimated. The only restriction on this choice is that the time evolution of the consider parameters must not depend on any of the solve-for parameters. The result of an error analysis is an indication of the contributions of the various error sources to the uncertainties in the determination of the spacecraft solve-for parameters. The model presented gives the uncertainty due to errors in the a priori estimates of the solve-for parameters, the uncertainty due to measurement noise, the uncertainty due to dynamic noise (also known as process noise), the uncertainty due to the consider parameters, and the overall uncertainty due to all these sources of error.
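The solve-for/consider split described above can be illustrated with a toy linear least-squares problem (a minimal sketch, not the NTRS tool itself; the model y = Hx + Gc + v and all parameter values below are illustrative assumptions):

```python
import numpy as np

# Measurements: y = H x + G c + v, with v ~ N(0, sigma^2 I).
# x: solve-for parameters (estimated); c: consider parameters (not
# estimated, but their a-priori covariance C_c contributes error).
rng = np.random.default_rng(0)
H = rng.normal(size=(20, 3))        # sensitivity to solve-for params
G = rng.normal(size=(20, 2))        # sensitivity to consider params
sigma = 0.05                        # measurement noise std
C_c = np.diag([0.1**2, 0.2**2])     # consider-parameter covariance

N_inv = np.linalg.inv(H.T @ H)      # normal-matrix inverse
P_noise = sigma**2 * N_inv          # uncertainty due to measurement noise
S = N_inv @ H.T @ G                 # sensitivity of the estimate to c
P_consider = S @ C_c @ S.T          # uncertainty due to consider params
P_total = P_noise + P_consider      # overall solve-for uncertainty

print(np.sqrt(np.diag(P_total)))    # 1-sigma solve-for uncertainties
```

Each additive term in `P_total` plays the role of one of the error-source contributions the abstract enumerates.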

  3. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total that a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates due to changes in rain statistics, arising 1) from evolution of the official algorithms used to process the data and 2) from differences from other remote sensing systems such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.
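The core idea, comparing an intermittently sampled monthly mean against the mean a permanently stationed satellite would see, can be sketched with a Monte Carlo toy model (the rain-rate process, revisit interval, and all numbers below are illustrative assumptions, not the paper's statistics):

```python
import numpy as np

rng = np.random.default_rng(1)
n_hours = 720                  # one 30-day month of hourly rain rates
revisit = 12                   # hypothetical revisit interval (hours)
n_months = 200                 # Monte Carlo realizations

def rain_series(n):
    """Toy correlated, intermittent rain-rate series (arbitrary units)."""
    z = np.empty(n)
    z[0] = 0.0
    for t in range(1, n):
        z[t] = 0.95 * z[t - 1] + rng.normal()
    return np.maximum(z, 0.0)  # "raining" only while z > 0

errs = []
for _ in range(n_months):
    r = rain_series(n_hours)
    true_mean = r.mean()                # permanently stationed satellite
    sampled_mean = r[::revisit].mean()  # intermittent overpasses
    errs.append(sampled_mean - true_mean)

rms_sampling_error = np.sqrt(np.mean(np.square(errs)))
print(f"rms sampling error: {rms_sampling_error:.3f}")
```

The rms of `sampled_mean - true_mean` over many synthetic months is the quantity the abstract calls the sampling error.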

  4. Absolute calibration of optical flats

    DOEpatents

    Sommargren, Gary E.

    2005-04-05

    The invention uses the phase shifting diffraction interferometer (PSDI) to provide a true point-by-point measurement of absolute flatness over the surface of optical flats. Beams exiting the fiber optics in a PSDI have perfect spherical wavefronts. The measurement beam is reflected from the optical flat and passed through an auxiliary optic to then be combined with the reference beam on a CCD. The combined beams include phase errors due to both the optic under test and the auxiliary optic. Standard phase extraction algorithms are used to calculate this combined phase error. The optical flat is then removed from the system and the measurement fiber is moved to recombine the two beams. The newly combined beams include only the phase errors due to the auxiliary optic. When the second phase measurement is subtracted from the first phase measurement, the absolute phase error of the optical flat is obtained.
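The two-measurement subtraction described above reduces to a per-pixel phase difference; a minimal numerical sketch with synthetic phase maps (not actual PSDI data; the quadratic phase shapes are made-up stand-ins):

```python
import numpy as np

# Synthetic wavefront phase maps, in radians, on a 64x64 CCD grid.
n = 64
y, x = np.mgrid[0:n, 0:n] / n - 0.5
phase_flat = 0.3 * (x**2 - y**2)       # "true" error of the optic under test
phase_aux  = 0.1 * (x**2 + y**2)       # error of the auxiliary optic

measure_1 = phase_flat + phase_aux     # flat + auxiliary optic in the path
measure_2 = phase_aux                  # flat removed: auxiliary optic only

recovered = measure_1 - measure_2      # absolute phase error of the flat
print(np.max(np.abs(recovered - phase_flat)))
```

Subtracting the second measurement cancels the auxiliary-optic term exactly, which is the point of the calibration procedure.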

  5. Medication errors in anesthesia: unacceptable or unavoidable?

    PubMed

    Dhawan, Ira; Tewari, Anurag; Sehgal, Sankalp; Sinha, Ashish Chandra

    Medication errors are common causes of patient morbidity and mortality, and they add a financial burden to the institution as well. Though the impact varies from no harm to serious adverse effects including death, the issue needs attention on a priority basis, since medication errors are preventable. In today's world, where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication error alone might not be successful until a change in the existing protocols and system is incorporated. Often drug errors that occur cannot be reversed. The best way to 'treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse and dilution error), incorrect administration route, under-dosing and omission are common causes of medication error that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame game, accept mistakes and develop a safe and 'just' culture in order to prevent medication errors. Newly devised systems such as VEINROM, a fluid delivery system, are a novel approach to preventing drug errors due to the most commonly used medications in anesthesia. Such developments, along with vigilant doctors, a safe workplace culture and organizational support, can together help prevent these errors. Copyright © 2016. Published by Elsevier Editora Ltda.

  6. Elimination of Emergency Department Medication Errors Due To Estimated Weights.

    PubMed

    Greenwalt, Mary; Griffen, David; Wilkerson, Jim

    2017-01-01

    From 7/2014 through 6/2015, 10 emergency department (ED) medication dosing errors were reported through the electronic incident reporting system of an urban academic medical center. Analysis of these medication errors identified inaccurate estimated weights on patients as the root cause. The goal of this project was to reduce weight-based dosing medication errors due to inaccurate estimated weights on patients presenting to the ED. Chart review revealed that 13.8% of estimated weights documented on admitted ED patients varied more than 10% from subsequent actual admission weights recorded. A random sample of 100 charts containing estimated weights revealed 2 previously unreported significant medication dosage errors (a 0.02 significant error rate). Key improvements included removing barriers to weighing ED patients, storytelling to engage staff and change culture, and removal of the estimated weight documentation field from the ED electronic health record (EHR) forms. With these improvements, estimated weights on ED patients, and the resulting medication errors, were eliminated.
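The chart-review check described above amounts to flagging estimated weights that deviate more than 10% from the actual admission weight; a minimal sketch (the records below are made-up illustrations, not study data):

```python
# Flag estimated weights deviating more than 10% from actual weight.
records = [
    {"estimated_kg": 80.0, "actual_kg": 78.5},
    {"estimated_kg": 95.0, "actual_kg": 110.0},   # >10% off
    {"estimated_kg": 60.0, "actual_kg": 62.0},
]

def deviates(rec, tol=0.10):
    """True if the estimate is off by more than `tol` of actual weight."""
    return abs(rec["estimated_kg"] - rec["actual_kg"]) / rec["actual_kg"] > tol

flagged = [r for r in records if deviates(r)]
rate = len(flagged) / len(records)
print(f"{rate:.1%} of estimated weights deviate by more than 10%")
```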

  7. Measurements of the toroidal torque balance of error field penetration locked modes

    DOE PAGES

    Shiraki, Daisuke; Paz-Soldan, Carlos; Hanson, Jeremy M.; ...

    2015-01-05

    Detailed measurements from the DIII-D tokamak of the toroidal dynamics of error field penetration locked modes under the influence of slowly evolving external fields enable study of the toroidal torques on the mode, including interaction with the intrinsic error field. The error field in these low density Ohmic discharges is well known based on the mode penetration threshold, allowing resonant and non-resonant torque effects to be distinguished. These m/n = 2/1 locked modes are found to be well described by a toroidal torque balance between the resonant interaction with n = 1 error fields, and a viscous torque in the electron diamagnetic drift direction which is observed to scale as the square of the perturbed field due to the island. Fitting to this empirical torque balance allows a time-resolved measurement of the intrinsic error field of the device, providing evidence for a time-dependent error field in DIII-D due to ramping of the Ohmic coil current.

  8. Uncertainties of predictions from parton distributions II: theoretical errors

    NASA Astrophysics Data System (ADS)

    Martin, A. D.; Roberts, R. G.; Stirling, W. J.; Thorne, R. S.

    2004-06-01

    We study the uncertainties in parton distributions, determined in global fits to deep inelastic and related hard scattering data, due to so-called theoretical errors. Amongst these, we include potential errors due to the change of perturbative order (NLO to NNLO), ln(1/x) and ln(1-x) effects, absorptive corrections and higher-twist contributions. We investigate these uncertainties both by including explicit corrections to our standard global analysis and by examining the sensitivity to changes of the x, Q 2, W 2 cuts on the data that are fitted. In this way we expose those kinematic regions where the conventional DGLAP description is inadequate. As a consequence we obtain a set of NLO, and of NNLO, conservative partons where the data are fully consistent with DGLAP evolution, but over a restricted kinematic domain. We also examine the potential effects of such issues as the choice of input parametrisation, heavy target corrections, assumptions about the strange quark sea and isospin violation. Hence we are able to compare the theoretical errors with those uncertainties due to errors on the experimental measurements, which we studied previously. We use W and Higgs boson production at the Tevatron and the LHC as explicit examples of the uncertainties arising from parton distributions. For many observables the theoretical error is dominant, but for the cross section for W production at the Tevatron both the theoretical and experimental uncertainties are small, and hence the NNLO prediction may serve as a valuable luminosity monitor.

  9. LANDSAT/coastal processes

    NASA Technical Reports Server (NTRS)

    James, W. P. (Principal Investigator); Hill, J. M.; Bright, J. B.

    1977-01-01

    The author has identified the following significant results. Correlations between the satellite radiance values and water color, Secchi disk visibility, turbidity, and attenuation coefficients were generally good. The residual was due to several factors, including systematic errors in the remotely sensed data, small time and space variations in the water quality measurements, and errors caused by experimental design. Satellite radiance values were closely correlated with the optical properties of the water.

  10. Precision and Error of Three-dimensional Phenotypic Measures Acquired from 3dMD Photogrammetric Images

    PubMed Central

    Aldridge, Kristina; Boyadjiev, Simeon A.; Capone, George T.; DeLeon, Valerie B.; Richtsmeier, Joan T.

    2015-01-01

    The genetic basis for complex phenotypes is currently of great interest for both clinical investigators and basic scientists. In order to acquire a thorough understanding of the translation from genotype to phenotype, highly precise measures of phenotypic variation are required. New technologies, such as 3D photogrammetry, are being implemented in phenotypic studies due to their ability to collect data rapidly and non-invasively. Before these systems can be broadly implemented, the error associated with data collected from images acquired using these technologies must be assessed. This study investigates the precision, error, and repeatability associated with anthropometric landmark coordinate data collected from 3D digital photogrammetric images acquired with the 3dMDface System. Precision, error due to the imaging system, error due to digitization of the images, and repeatability are assessed in a sample of children and adults (N=15). Results show that data collected from images with the 3dMDface System are highly repeatable and precise. The average error associated with the placement of landmarks is sub-millimeter; both the error due to digitization and to the imaging system are very low. The few measures showing a higher degree of error include those crossing the labial fissure, which are influenced by even subtle movement of the mandible. These results suggest that 3D anthropometric data collected using the 3dMDface System are highly reliable and therefore useful for evaluation of clinical dysmorphology and surgery, analyses of genotype-phenotype correlations, and inheritance of complex phenotypes. PMID:16158436
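Digitization precision of the kind assessed above is commonly quantified as the dispersion of repeated landmark placements about their centroid; a minimal sketch with synthetic coordinates (the landmark, trial count, and 0.3 mm scatter are illustrative assumptions):

```python
import numpy as np

# Ten repeated 3D placements of one landmark on the same image (mm).
rng = np.random.default_rng(2)
true_landmark = np.array([12.0, -3.5, 40.2])
trials = true_landmark + rng.normal(scale=0.3, size=(10, 3))

centroid = trials.mean(axis=0)                       # best estimate
deviations = np.linalg.norm(trials - centroid, axis=1)
print(f"mean placement error: {deviations.mean():.2f} mm")
```

Averaging these per-trial deviations over all landmarks and observers gives the sub-millimeter figures the abstract reports.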

  11. Postoperative Refractive Errors Following Pediatric Cataract Extraction with Intraocular Lens Implantation.

    PubMed

    Indaram, Maanasa; VanderVeen, Deborah K

    2018-01-01

    Advances in surgical techniques allow implantation of intraocular lenses (IOL) with cataract extraction, even in young children. However, there are several challenges unique to the pediatric population that result in greater degrees of postoperative refractive error compared to adults. Literature review of the techniques and outcomes of pediatric cataract surgery with IOL implantation. Pediatric cataract surgery is associated with several sources of postoperative refractive error. These include planned refractive error based on age or fellow eye status, loss of accommodation, and unexpected refractive errors due to inaccuracies in biometry technique, use of IOL power formulas based on adult normative values, and late refractive changes due to unpredictable eye growth. Several factors can preclude the achievement of optimal refractive status following pediatric cataract extraction with IOL implantation. There is a need for new technology to reduce postoperative refractive surprises and address refractive adjustment in a growing eye.
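One of the error sources named above, IOL power formulas built on adult normative values, can be illustrated with the classic SRK regression formula, P = A - 2.5*L - 0.9*K (a textbook formula used here as a sketch; modern practice uses newer formulas, and the pediatric biometry values below are hypothetical):

```python
# SRK regression formula for intraocular lens power (diopters):
#   P = A - 2.5 * axial_length - 0.9 * mean_keratometry
def srk_iol_power(a_constant, axial_length_mm, mean_k_diopters):
    return a_constant - 2.5 * axial_length_mm - 0.9 * mean_k_diopters

# Adult-like eye vs. a hypothetical short pediatric eye with steep corneas:
print(srk_iol_power(118.4, 23.5, 43.5))   # typical adult biometry
print(srk_iol_power(118.4, 20.0, 47.0))   # short axial length, steep K
```

The short, steep pediatric eye demands a much stronger implant, and small biometry errors in such eyes translate into large refractive surprises, which is the review's point.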

  12. Error Analysis for High Resolution Topography with Bi-Static Single-Pass SAR Interferometry

    NASA Technical Reports Server (NTRS)

    Muellerschoen, Ronald J.; Chen, Curtis W.; Hensley, Scott; Rodriguez, Ernesto

    2006-01-01

    We present a flow-down error analysis from the radar system to topographic height errors for bi-static single-pass SAR interferometry for a satellite tandem pair. Because the baseline length and baseline orientation evolve spatially and temporally due to orbital dynamics, the height accuracy of the system is modeled as a function of the spacecraft position and ground location. Vector sensitivity equations for the height and planar error components due to metrology, media effects, and radar system errors are derived and evaluated globally for a baseline mission. Included in the model are terrain effects that contribute to layover and shadow, as well as slope effects on height errors. The analysis also accounts for non-overlapping spectra and the non-overlapping bandwidth due to differences between the two platforms' viewing geometries. The model is applied to a 514 km altitude, 97.4 degree inclination tandem satellite mission with a 300 m baseline separation and X-band SAR. Results from our model indicate that global DTED level 3 can be achieved.
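One row of such a sensitivity analysis, height error per radian of interferometric phase error, can be sketched from the standard InSAR relation dh/dphi = lambda * R * sin(theta) / (2*pi * B_perp) for a single-pass bistatic pair (the look angle, flat-Earth range approximation, and phase-noise level below are illustrative assumptions, loosely keyed to the mission parameters quoted in the abstract):

```python
import math

wavelength = 0.031          # X-band wavelength (m)
altitude = 514e3            # orbit altitude (m)
look_angle = math.radians(35.0)
slant_range = altitude / math.cos(look_angle)   # flat-Earth approximation
b_perp = 300.0              # perpendicular baseline (m)

# Height sensitivity to interferometric phase (m per radian):
dh_dphi = wavelength * slant_range * math.sin(look_angle) / (2 * math.pi * b_perp)
phase_error = 0.5           # assumed radians of phase noise
print(f"height error: {dh_dphi * phase_error:.2f} m")
```

A full flow-down budget evaluates terms like this, plus baseline metrology and media terms, over the whole orbit and globe.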

  13. Coping with medical error: a systematic review of papers to assess the effects of involvement in medical errors on healthcare professionals' psychological well-being.

    PubMed

    Sirriyeh, Reema; Lawton, Rebecca; Gardner, Peter; Armitage, Gerry

    2010-12-01

    Previous research has established health professionals as secondary victims of medical error, with the identification of a range of emotional and psychological repercussions that may occur as a result of involvement in error. Due to the vast range of emotional and psychological outcomes, research to date has been inconsistent in the variables measured and tools used. Therefore, differing conclusions have been drawn as to the nature of the impact of error on professionals and the subsequent repercussions for their team, patients and healthcare institution. A systematic review was conducted. Data sources were identified using database searches, with additional reference and hand searching. Eligibility criteria were applied to all studies identified, resulting in a total of 24 included studies. Quality assessment was conducted with the included studies using a tool that was developed as part of this research, but due to the limited number and diverse nature of studies, no exclusions were made on this basis. Review findings suggest that there is consistent evidence for the widespread impact of medical error on health professionals. Psychological repercussions may include negative states such as shame, self-doubt, anxiety and guilt. Despite much attention devoted to the assessment of negative outcomes, the potential for positive outcomes resulting from error also became apparent, with increased assertiveness, confidence and improved colleague relationships reported. It is evident that involvement in a medical error can elicit a significant psychological response from the health professional involved. However, a lack of literature around coping and support, coupled with inconsistencies and weaknesses in methodology, may need to be addressed in future work.

  14. Three-dimensional FLASH Laser Radar Range Estimation via Blind Deconvolution

    DTIC Science & Technology

    2009-10-01

    scene can result in errors due to several factors including the optical spatial impulse response, detector blurring, photon noise, timing jitter, and ... estimation error include spatial blur, detector blurring, noise, timing jitter, and inter-sample targets. Unlike previous research, this paper accounts for pixel coupling by defining the range image mathematical model as a 2D convolution between the system spatial impulse response and the object (target

  15. Frequency synchronization of a frequency-hopped MFSK communication system

    NASA Technical Reports Server (NTRS)

    Huth, G. K.; Polydoros, A.; Simon, M. K.

    1981-01-01

    This paper presents the performance of fine-frequency synchronization. The performance degradation due to imperfect frequency synchronization is found in terms of the effect on bit error probability as a function of full-band or partial-band noise jamming levels and of the number of frequency hops used in the estimator. The effect of imperfect fine-time synchronization is also included in the calculation of fine-frequency synchronization performance to obtain the overall performance degradation due to synchronization errors.

  16. 49 CFR Appendix C to Part 236 - Safety Assurance Criteria and Processes

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... system (all its elements including hardware and software) must be designed to assure safe operation with... unsafe errors in the software due to human error in the software specification, design, or coding phases... (hardware or software, or both) are used in combination to ensure safety. If a common mode failure exists...

  17. Errors in radiation oncology: A study in pathways and dosimetric impact

    PubMed Central

    Drzymala, Robert E.; Purdy, James A.; Michalski, Jeff

    2005-01-01

    As complexity for treating patients increases, so does the risk of error. Some publications have suggested that record and verify (R&V) systems may contribute in propagating errors. Direct data transfer has the potential to eliminate most, but not all, errors. And although the dosimetric consequences may be obvious in some cases, a detailed study does not exist. In this effort, we examined potential errors in terms of scenarios, pathways of occurrence, and dosimetry. Our goal was to prioritize error prevention according to likelihood of event and dosimetric impact. For conventional photon treatments, we investigated errors of incorrect source‐to‐surface distance (SSD), energy, omitted wedge (physical, dynamic, or universal) or compensating filter, incorrect wedge or compensating filter orientation, improper rotational rate for arc therapy, and geometrical misses due to incorrect gantry, collimator or table angle, reversed field settings, and setup errors. For electron beam therapy, errors investigated included incorrect energy, incorrect SSD, along with geometric misses. For special procedures we examined errors for total body irradiation (TBI, incorrect field size, dose rate, treatment distance) and LINAC radiosurgery (incorrect collimation setting, incorrect rotational parameters). Likelihood of error was determined and subsequently rated according to our history of detecting such errors. Dosimetric evaluation was conducted by using dosimetric data, treatment plans, or measurements. We found geometric misses to have the highest error probability. They most often occurred due to improper setup via coordinate shift errors or incorrect field shaping. The dosimetric impact is unique for each case and depends on the proportion of fields in error and volume mistreated. These errors were short‐lived due to rapid detection via port films. The most significant dosimetric error was related to a reversed wedge direction. 
This may occur due to incorrect collimator angle or wedge orientation. For parallel‐opposed 60° wedge fields, this error could be as high as 80% to a point off‐axis. Other examples of dosimetric impact included the following: SSD, ~2%/cm for photons or electrons; photon energy (6 MV vs. 18 MV), on average 16% depending on depth; electron energy, ~0.5 cm of depth coverage per MeV (mega‐electron volt). Of these examples, incorrect distances were most likely but rapidly detected by in vivo dosimetry. Errors were categorized by occurrence rate, methods and timing of detection, longevity, and dosimetric impact. Solutions were devised according to these criteria. To date, no one has studied the dosimetric impact of global errors in radiation oncology. Although there is heightened awareness that with increased use of ancillary devices and automation, there must be a parallel increase in quality check systems and processes, errors do and will continue to occur. This study has helped us identify and prioritize potential errors in our clinic according to frequency and dosimetric impact. For example, to reduce the use of an incorrect wedge direction, our clinic employs off‐axis in vivo dosimetry. To avoid a treatment distance setup error, we use both vertical table settings and optical distance indicator (ODI) values to properly set up fields. As R&V systems become more automated, more accurate and efficient data transfer will occur. This will require further analysis. Finally, we have begun examining potential intensity‐modulated radiation therapy (IMRT) errors according to the same criteria. PACS numbers: 87.53.Xd, 87.53.St PMID:16143793
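The "~2%/cm" SSD figure quoted above follows from the inverse-square law: dose at depth d scales as ((SSD_ref + d)/(SSD_actual + d))^2 (a minimal sketch; the 100 cm reference SSD and 10 cm depth are assumed illustrative values):

```python
# Percent dose deficit caused by setting up 1 cm long on SSD.
def ssd_dose_error(ssd_ref=100.0, ssd_actual=101.0, depth=10.0):
    ratio = ((ssd_ref + depth) / (ssd_actual + depth)) ** 2
    return (1.0 - ratio) * 100.0   # percent underdose at depth (cm args)

print(f"{ssd_dose_error():.1f}% dose error per 1 cm SSD error")
```

The result is just under 2% per centimeter, consistent with the rule of thumb cited in the study.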

  18. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    PubMed

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allows inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.
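The autoregressive error idea above can be sketched in isolation: an AR(1) model with coefficient a turns correlated residuals r into approximately white innovations e[t] = r[t] - a*r[t-1], which a hierarchical likelihood can then treat as independent Gaussian (a toy demonstration; in the actual algorithm a and the noise level are sampled as hyper-parameters, not fixed):

```python
import numpy as np

# Generate synthetic AR(1)-correlated residuals.
rng = np.random.default_rng(3)
a_true, sigma, n = 0.8, 1.0, 5000
r = np.empty(n)
r[0] = rng.normal()
for t in range(1, n):
    r[t] = a_true * r[t - 1] + sigma * rng.normal()

def lag1(x):
    """Lag-1 autocorrelation."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

e = r[1:] - a_true * r[:-1]        # AR(1) whitening step
print(lag1(r), lag1(e))            # strongly correlated vs. near-white
```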

  19. [The German promille law--overview and guideline for legal traffic applications].

    PubMed

    Grohmann, P

    1996-07-01

    1. The alcohol level regulation affects everyone who participates in public road, rail, shipping and air traffic. In legal terms, a person participating in traffic is anyone who has a direct, physical influence on the traffic flow, i.e. as a pedestrian, a vehicle driver, an aircraft captain or a train driver. Participation in any kind of traffic requires a physical, mental and psychological joint effort which is controlled by the central nervous system. The influence of alcohol drastically deteriorates driver performance to the detriment of traffic safety. In cases of traffic law determination of dangerous driving, or in cases of the legal limitations of diminished or no responsibility, recognised discoveries in traffic medicine as well as blood alcohol research, psychiatry and statistics have played a significant role. The main medical-scientific findings used to determine the alcohol level regulation, in particular the borderline cases, essentially rely on experiments (mainly driving experiments). These experiments are carried out with drivers who are under the influence of alcohol and who drive road vehicles, particularly motor vehicles. At present, no comparable, scientifically convincing research is available for the other groups of participants in traffic. Therefore, the determination of a universal alcohol level regulation including pedestrians and train drivers, for example, cannot be justified. Blood alcohol effect has been thoroughly researched, the metabolic reaction is well known, alcohol is easily quantifiable, and its effect can be easily reviewed and is reproducible to a large extent. Therefore, due to the existence of certain alcohol level values, legal conclusions can be drawn which affect all participants in traffic. The main issue is that the blood alcohol level at the time of the accident is definite, regardless of whether it was determined by means of a blood sample or by means of a statement of the amount of alcohol consumed. 
Whether or not driving under the influence of alcohol falls under the category of "infringement of the law" or "criminal offence" depends largely on the abstract danger caused to the traffic. According to section 316 StGB, a motorized or non-motorized driver under the influence of alcohol is considered to be unsafe if he/she is incapable of driving the vehicle safely for a long span of time or when sudden difficulties arise. This would apply if the alcohol has caused a personality change which would not enable the driver to drive safely despite wilfully trying. This applies respectively to drivers of vehicles that do not circulate on roads. 2. Currently, the following alcohol limit regulations apply in Germany: Criminal offences section 316, section 315 c section 1 no. 1 a StGB 1. Drivers of motor vehicles on the road--0.3 to 1.09/1000 and an additional error due to the consumption of alcohol--1.10/1000 (including the body's alcohol resorption effect) and more, with or without errors due to the consumption of alcohol. 2. Cyclists--0.3 to 1.59/1000 and an additional error due to the consumption of alcohol--1.60/1000 (including the body's resorption effect) and more, with or without error due to the consumption of alcohol. 3. Carriage drivers, motorized wheelchair users--0.3/1000 and more and an additional error due to the consumption of alcohol. Section 316, Section 315 StGB 4. Train drivers, airplane pilots and persons in charge of ships--0.3/1000 and more and an additional error due to the consumption of alcohol. Author's opinion: airplane pilots, from 0.1/1000 even without an error due to the consumption of alcohol. Infringement of the law section 24 StVG--0.80 (including the body's alcohol resorption effect) to 1.09/1000 without error due to the consumption of alcohol. Sections 2, 69 a section 1 no. 
1 StVZO pedestrians with or without special means of transport, animal leaders/animal drovers, pillion rider or passenger on a motor-bike--0.3/1000 and more and an additional error due to the consumption of alcohol
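Back-calculating a blood alcohol level from "a statement of the amount of alcohol consumed", as mentioned above, is classically done with the Widmark relation (a textbook sketch; the distribution factor r = 0.7 and elimination rate beta = 0.15 per mille/h are conventional averages, not part of the statute):

```python
# Widmark relation: BAC (per mille, g/kg) = A / (r * W) - beta * t,
# floored at zero once the alcohol is fully eliminated.
def widmark_bac(alcohol_g, body_kg, r_factor=0.7, beta_per_h=0.15, hours=0.0):
    return max(alcohol_g / (r_factor * body_kg) - beta_per_h * hours, 0.0)

# e.g. 40 g of alcohol, 80 kg person, assessed 2 h after drinking ended:
print(f"{widmark_bac(40.0, 80.0, hours=2.0):.2f} per mille")
```

Forensic practice brackets such estimates with safety margins in the subject's favour, which is the role of the "additional error" allowances in the limits listed above.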

  20. Evaluation of Approaches to Deal with Low-Frequency Nuisance Covariates in Population Pharmacokinetic Analyses.

    PubMed

    Lagishetty, Chakradhar V; Duffull, Stephen B

    2015-11-01

    Clinical studies include occurrences of rare variables, like genotypes, which, due to their frequency and strength, render their effects difficult to estimate from a dataset. Variables that influence the estimated value of a model-based parameter are termed covariates. It is often difficult to determine if such an effect is significant, since type I error can be inflated when the covariate is rare. Their presence may have either an insubstantial effect on the parameters of interest, hence are ignorable, or conversely they may be influential and therefore non-ignorable. In the case that these covariate effects cannot be estimated due to limited power and are non-ignorable, they are considered nuisance effects, in that they have to be considered but due to type I error are of limited interest. This study assesses methods of handling nuisance covariate effects. The specific objectives include (1) calibrating the frequency of a covariate that is associated with type I error inflation, (2) calibrating the strength that renders it non-ignorable and (3) evaluating methods for handling these non-ignorable covariates in a nonlinear mixed effects model setting. Type I error was determined for the Wald test. Methods considered for handling the nuisance covariate effects were case deletion, Box-Cox transformation and inclusion of a specific fixed effects parameter. Non-ignorable nuisance covariates were found to be effectively handled through addition of a fixed effect parameter.
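The machinery behind objective (1), simulating a null rare covariate and measuring the empirical size of a Wald test, can be sketched with ordinary linear regression standing in for the nonlinear mixed-effects model (a toy sketch; frequency, sample size, and the Gaussian error assumption are illustrative, and with non-normal data or small effective samples the empirical rate can drift from nominal):

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_sims = 100, 2000
freq = 0.05                      # rare binary covariate (~5 carriers)

rejections = tested = 0
for _ in range(n_sims):
    x = (rng.random(n) < freq).astype(float)
    if x.sum() < 2:
        continue                 # effect inestimable in this replicate
    y = rng.normal(size=n)       # null: covariate has no true effect
    beta = np.cov(x, y, bias=True)[0, 1] / x.var()   # OLS slope
    resid = y - y.mean() - beta * (x - x.mean())
    se = np.sqrt(resid.var(ddof=2) / (n * x.var()))  # slope standard error
    tested += 1
    if abs(beta / se) > 1.96:    # Wald test at the normal cutoff
        rejections += 1

rate = rejections / tested
print(f"empirical type I error at nominal 0.05: {rate:.3f}")
```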

  1. Determination of Barometric Altimeter Errors for the Orion Exploration Flight Test-1 Entry

    NASA Technical Reports Server (NTRS)

    Brown, Denise L.; Munoz, Jean-Philippe; Gay, Robert

    2011-01-01

    The EFT-1 mission is the unmanned flight test for the upcoming Multi-Purpose Crew Vehicle (MPCV). During entry, the EFT-1 vehicle will trigger several Landing and Recovery System (LRS) events, such as parachute deployment, based on onboard altitude information. The primary altitude source is the filtered navigation solution updated with GPS measurement data. The vehicle also has three barometric altimeters that will be used to measure atmospheric pressure during entry. In the event that GPS data is not available during entry, the altitude derived from the barometric altimeter pressure will be used to trigger chute deployment for the drogues and main parachutes. Therefore it is important to understand the impact of error sources on the pressure measured by the barometric altimeters and on the altitude derived from that pressure. There are four primary error sources impacting the sensed pressure: sensor errors, Analog to Digital conversion errors, aerodynamic errors, and atmosphere modeling errors. This last error source is induced by the conversion from pressure to altitude in the vehicle flight software, which requires an atmosphere model such as the US Standard 1976 Atmosphere model. There are several secondary error sources as well, such as waves, tides, and latencies in data transmission. Typically, for error budget calculations it is assumed that all error sources are independent, normally distributed variables. Thus, the initial approach to developing the EFT-1 barometric altimeter altitude error budget was to create an itemized error budget under these assumptions. This budget was to be verified by simulation using high fidelity models of the vehicle hardware and software. The simulation barometric altimeter model includes hardware error sources and a data-driven model of the aerodynamic errors expected to impact the pressure in the midbay compartment in which the sensors are located. 
The aerodynamic model includes the pressure difference between the midbay compartment and the free stream pressure as a function of altitude, oscillations in sensed pressure due to wake effects, and an acoustics model capturing fluctuations in pressure due to motion of the passive vents separating the barometric altimeters from the outside of the vehicle.
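The pressure-to-altitude conversion described above can be illustrated by inverting the barometric formula for the troposphere layer of the US Standard 1976 Atmosphere; a minimal sketch (function and constant names are ours, and the EFT-1 flight software implementation may differ):

```python
import math

# Constants for the troposphere layer of the US Standard Atmosphere 1976.
P0 = 101325.0   # sea-level static pressure, Pa
T0 = 288.15     # sea-level standard temperature, K
L = 0.0065      # temperature lapse rate, K/m
G = 9.80665     # gravitational acceleration, m/s^2
R = 8.31446     # universal gas constant, J/(mol K)
M = 0.0289644   # molar mass of dry air, kg/mol

def pressure_to_altitude(p_pa):
    """Invert the barometric formula p = P0*(1 - L*h/T0)^(G*M/(R*L)).

    Valid only below the ~11 km tropopause; higher layers need the
    other segments of the standard atmosphere model.
    """
    exponent = R * L / (G * M)   # ~0.1903
    return (T0 / L) * (1.0 - (p_pa / P0) ** exponent)
```

For example, a sensed pressure of about 89 875 Pa maps to roughly 1000 m of geopotential altitude under this model.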

  2. Resistance foil strain gage technology as applied to composite materials

    NASA Technical Reports Server (NTRS)

    Tuttle, M. E.; Brinson, H. F.

    1985-01-01

    Existing strain gage technologies as applied to orthotropic composite materials are reviewed. The bonding procedures, transverse sensitivity effects, errors due to gage misalignment, and temperature compensation methods are addressed. Numerical examples are included where appropriate. It is shown that the orthotropic behavior of composites can result in experimental error which would not be expected based on practical experience with isotropic materials. In certain cases, the transverse sensitivity of strain gages and/or slight gage misalignment can result in strain measurement errors.
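The transverse-sensitivity correction referred to above is commonly written, for a pair of perpendicular gages with equal transverse sensitivity Kt, as eps_x = (1 - nu0*Kt)*(m_x - Kt*m_y)/(1 - Kt^2), where nu0 is the Poisson's ratio of the gage-calibration material. A minimal sketch of this standard textbook correction (not the paper's specific numerical examples):

```python
def correct_transverse(meas_x, meas_y, kt, nu0=0.285):
    """Correct indicated strains from two perpendicular gages for
    transverse sensitivity kt (nu0 = calibration-material Poisson's ratio,
    conventionally 0.285 for the steel calibration beam)."""
    factor = (1.0 - nu0 * kt) / (1.0 - kt * kt)
    eps_x = factor * (meas_x - kt * meas_y)
    eps_y = factor * (meas_y - kt * meas_x)
    return eps_x, eps_y
```

With kt = 0 the correction reduces to the identity, as expected; for a nonzero kt the corrected strain differs from the indicated value, which is the error source the abstract highlights for orthotropic materials.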

  3. Imaging phased telescope array study

    NASA Technical Reports Server (NTRS)

    Harvey, James E.

    1989-01-01

    The problems encountered in obtaining a wide field-of-view with large, space-based direct imaging phased telescope arrays were considered. After defining some of the critical systems issues, previous relevant work in the literature was reviewed and summarized. An extensive list was made of potential error sources and the error sources were categorized in the form of an error budget tree including optical design errors, optical fabrication errors, assembly and alignment errors, and environmental errors. After choosing a top level image quality requirement as a goal, a preliminary tops-down error budget allocation was performed; then, based upon engineering experience, detailed analysis, or data from the literature, a bottoms-up error budget reallocation was performed in an attempt to achieve an equitable distribution of difficulty in satisfying the various allocations. This exercise provided a realistic allocation for residual off-axis optical design errors in the presence of state-of-the-art optical fabrication and alignment errors. Three different computational techniques were developed for computing the image degradation of phased telescope arrays due to aberrations of the individual telescopes. Parametric studies and sensitivity analyses were then performed for a variety of subaperture configurations and telescope design parameters in an attempt to determine how the off-axis performance of a phased telescope array varies as the telescopes are scaled up in size. The Air Force Weapons Laboratory (AFWL) multipurpose telescope testbed (MMTT) configuration was analyzed in detail with regard to image degradation due to field curvature and distortion of the individual telescopes as they are scaled up in size.
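An error budget tree of the kind described above typically combines independent contributors by root-sum-square (RSS) and compares the total against the top-level requirement; a minimal sketch with hypothetical allocation values (not the study's actual budget):

```python
import math

def rss(terms):
    """Root-sum-square combination of independent error contributors."""
    return math.sqrt(sum(t * t for t in terms))

# Hypothetical wavefront-error allocations (nm RMS), for illustration only.
budget = {
    "optical design": 20.0,
    "optical fabrication": 30.0,
    "assembly and alignment": 25.0,
    "environmental": 15.0,
}
total = rss(budget.values())  # compare this against the top-level requirement
```

A tops-down allocation splits the requirement across branches; the bottoms-up pass replaces allocations with achievable values and re-checks the RSS total.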

  4. Preventability of Voluntarily Reported or Trigger Tool-Identified Medication Errors in a Pediatric Institution by Information Technology: A Retrospective Cohort Study.

    PubMed

    Stultz, Jeremy S; Nahata, Milap C

    2015-07-01

    Information technology (IT) has the potential to prevent medication errors. While many studies have analyzed specific IT technologies and preventable adverse drug events, no studies have identified risk factors for errors still occurring that are not preventable by IT. The objective of this study was to categorize reported or trigger tool-identified errors and adverse events (AEs) at a pediatric tertiary care institution. We also sought to identify medication errors preventable by IT, to determine why IT-preventable errors occurred, and to identify risk factors for errors that were not preventable by IT. This was a retrospective analysis of voluntarily reported or trigger tool-identified errors and AEs occurring from 1 July 2011 to 30 June 2012. Medication errors reaching the patients were categorized based on the origin, severity, and location of the error, the month in which they occurred, and the age of the patient involved. Error characteristics were included in a multivariable logistic regression model to determine independent risk factors for errors occurring that were not preventable by IT. A medication error was defined as a medication-related failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim. An IT-preventable error was defined as one with an IT system in place to aid in prevention of the error at the phase and location of its origin. There were 936 medication errors (identified by voluntary reporting or a trigger tool system) included and analyzed. Drug administration errors were identified most frequently (53.4%), but prescribing errors most frequently caused harm (47.2% of harmful errors). There were 470 (50.2%) errors that were IT preventable at their origin, including 155 due to IT system bypasses, 103 due to insensitivity of IT alerting systems, and 47 with IT alert overrides.
Dispensing, administration, and documentation errors had higher odds than prescribing errors of not being preventable by IT [odds ratio (OR) 8.0, 95% CI 4.4-14.6; OR 2.4, 95% CI 1.7-3.7; and OR 6.7, 95% CI 3.3-14.5, respectively; all p < 0.001]. Errors occurring in the operating room and in the outpatient setting had higher odds than those in intensive care units of not being preventable by IT (OR 10.4, 95% CI 4.0-27.2, and OR 2.6, 95% CI 1.3-5.0, respectively; all p ≤ 0.004). Despite extensive IT implementation at the studied institution, approximately one-half of the medication errors identified by voluntary reporting or a trigger tool system were not preventable by the utilized IT systems. Inappropriate use of IT systems was a common cause of errors. The identified risk factors represent areas where IT safety features were lacking.
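The odds ratios quoted above come from a multivariable logistic regression; for intuition, a univariate OR with a Wald 95% CI can be computed from a 2x2 table (the counts in the test below are illustrative, not the study's data):

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table with a Wald 95% CI on the log scale.

    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome.
    All cells must be nonzero for the standard error to be defined.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi
```

A CI that excludes 1.0 corresponds to the significant odds ratios reported in the abstract; the multivariable model additionally adjusts each OR for the other error characteristics.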

  5. Application of the phase shifting diffraction interferometer for measuring convex mirrors and negative lenses

    DOEpatents

    Sommargren, Gary E.; Campbell, Eugene W.

    2004-03-09

    To measure a convex mirror, a reference beam and a measurement beam are both provided through a single optical fiber. A positive auxiliary lens is placed in the system to give a converging wavefront onto the convex mirror under test. A measurement is taken that includes the aberrations of the convex mirror as well as the errors due to two transmissions through the positive auxiliary lens. A second measurement provides the information to eliminate this error. A negative lens can also be measured in a similar way. Again, there are two measurement set-ups. A reference beam is provided from a first optical fiber and a measurement beam is provided from a second optical fiber. A positive auxiliary lens is placed in the system to provide a converging wavefront from the reference beam onto the negative lens under test. The measurement beam is combined with the reference wavefront and is analyzed by standard methods. This measurement includes the aberrations of the negative lens, as well as the errors due to a single transmission through the positive auxiliary lens. A second measurement provides the information to eliminate this error.

  6. Application Of The Phase Shifting Diffraction Interferometer For Measuring Convex Mirrors And Negative Lenses

    DOEpatents

    Sommargren, Gary E.; Campbell, Eugene W.

    2005-06-21

    To measure a convex mirror, a reference beam and a measurement beam are both provided through a single optical fiber. A positive auxiliary lens is placed in the system to give a converging wavefront onto the convex mirror under test. A measurement is taken that includes the aberrations of the convex mirror as well as the errors due to two transmissions through the positive auxiliary lens. A second measurement provides the information to eliminate this error. A negative lens can also be measured in a similar way. Again, there are two measurement set-ups. A reference beam is provided from a first optical fiber and a measurement beam is provided from a second optical fiber. A positive auxiliary lens is placed in the system to provide a converging wavefront from the reference beam onto the negative lens under test. The measurement beam is combined with the reference wavefront and is analyzed by standard methods. This measurement includes the aberrations of the negative lens, as well as the errors due to a single transmission through the positive auxiliary lens. A second measurement provides the information to eliminate this error.

  7. Claims, errors, and compensation payments in medical malpractice litigation.

    PubMed

    Studdert, David M; Mello, Michelle M; Gawande, Atul A; Gandhi, Tejal K; Kachalia, Allen; Yoon, Catherine; Puopolo, Ann Louise; Brennan, Troyen A

    2006-05-11

    In the current debate over tort reform, critics of the medical malpractice system charge that frivolous litigation--claims that lack evidence of injury, substandard care, or both--is common and costly. Trained physicians reviewed a random sample of 1452 closed malpractice claims from five liability insurers to determine whether a medical injury had occurred and, if so, whether it was due to medical error. We analyzed the prevalence, characteristics, litigation outcomes, and costs of claims that lacked evidence of error. For 3 percent of the claims, there were no verifiable medical injuries, and 37 percent did not involve errors. Most of the claims that were not associated with errors (370 of 515 [72 percent]) or injuries (31 of 37 [84 percent]) did not result in compensation; most that involved injuries due to error did (653 of 889 [73 percent]). Payment of claims not involving errors occurred less frequently than did the converse form of inaccuracy--nonpayment of claims associated with errors. When claims not involving errors were compensated, payments were significantly lower on average than were payments for claims involving errors (313,205 dollars vs. 521,560 dollars, P=0.004). Overall, claims not involving errors accounted for 13 to 16 percent of the system's total monetary costs. For every dollar spent on compensation, 54 cents went to administrative expenses (including those involving lawyers, experts, and courts). Claims involving errors accounted for 78 percent of total administrative costs. Claims that lack evidence of error are not uncommon, but most are denied compensation. The vast majority of expenditures go toward litigation over errors and payment of them. The overhead costs of malpractice litigation are exorbitant. Copyright 2006 Massachusetts Medical Society.

  8. Characteristics of number transcoding errors of Chinese- versus English-speaking Alzheimer's disease patients.

    PubMed

    Ting, Simon Kang Seng; Chia, Pei Shi; Kwek, Kevin; Tan, Wilnard; Hameed, Shahul

    2016-10-01

    Number processing disorder is an acquired deficit in mathematical skills commonly observed in Alzheimer's disease (AD), usually as a consequence of neurological dysfunction. Common impairments include syntactic errors (800012 instead of 8012) and intrusion errors (8 thousand and 12 instead of eight thousand and twelve) in number transcoding tasks. This study aimed to characterize AD-related number processing disorder within an alphabetic language (English) and an ideographic language (Chinese), and to investigate the differences between alphabetic and ideographic language processing. Chinese-speaking AD patients were hypothesized to make significantly more intrusion errors than English-speaking ones, due to the ideographic nature of both Chinese characters and Arabic numbers. A simplified number transcoding test derived from the EC301 battery was administered to AD patients. Chinese-speaking AD patients made significantly more intrusion errors (p = 0.001) than English speakers. This demonstrates that number processing in an alphabetic language such as English does not function in the same manner as in Chinese. Impaired inhibition capability likely contributes to this observation, due to the competitive lexical representation of numbers in the brain for Chinese speakers.
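The syntactic error type named above (writing 800012 for 8012) can be modeled as literal concatenation of the digit strings of each lexical component ("eight thousand" and "twelve") instead of summing their place values; a toy illustration:

```python
def transcode_correct(components):
    """Correct transcoding sums the place-value components: [8000, 12] -> 8012."""
    return sum(components)

def transcode_syntactic_error(components):
    """A syntactic error concatenates each component's digits: [8000, 12] -> 800012."""
    return int("".join(str(c) for c in components))
```

The gap between the two functions' outputs is exactly the over-lengthened numeral pattern the transcoding task detects.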

  9. Time trend of injection drug errors before and after implementation of bar-code verification system.

    PubMed

    Sakushima, Ken; Umeki, Reona; Endoh, Akira; Ito, Yoichi M; Nasuhara, Yasuyuki

    2015-01-01

    Bar-code technology, used for verification of patients and their medication, could prevent medication errors in clinical practice. Retrospective analysis of electronically stored medical error reports was conducted in a university hospital. The number of reported medication errors of injected drugs, including wrong drug administration and administration to the wrong patient, was compared before and after implementation of the bar-code verification system for inpatient care. A total of 2867 error reports associated with injection drugs were extracted. Wrong patient errors decreased significantly after implementation of the bar-code verification system (17.4/year vs. 4.5/year, p < 0.05), although wrong drug errors did not decrease sufficiently (24.2/year vs. 20.3/year). The source of medication errors due to wrong drugs was drug preparation in hospital wards. Bar-code medication administration is effective for prevention of wrong patient errors. However, ordinary bar-code verification systems are limited in their ability to prevent incorrect drug preparation in hospital wards.

  10. Dual-mass vibratory rate gyroscope with suppressed translational acceleration response and quadrature-error correction capability

    NASA Technical Reports Server (NTRS)

    Clark, William A. (Inventor); Juneau, Thor N. (Inventor); Lemkin, Mark A. (Inventor); Roessig, Allen W. (Inventor)

    2001-01-01

    A microfabricated vibratory rate gyroscope to measure rotation includes two proof-masses mounted in a suspension system anchored to a substrate. The suspension has two principal modes of compliance, one of which is driven into oscillation. The driven oscillation combined with rotation of the substrate about an axis perpendicular to the substrate results in Coriolis acceleration along the other mode of compliance, the sense-mode. The sense-mode is designed to respond to Coriolis acceleration while suppressing the response to translational acceleration. This is accomplished using one or more rigid levers connecting the two proof-masses. The lever allows the proof-masses to move in opposite directions in response to Coriolis acceleration. The invention includes a means for canceling errors, termed quadrature error, due to imperfections in implementation of the sensor. Quadrature-error cancellation utilizes electrostatic forces to cancel out undesired sense-axis motion in phase with drive-mode position.

  11. Operational Data Reduction Procedure for Determining Density and Vertical Structure of the Martian Upper Atmosphere from Mars Global Surveyor Accelerometer Measurements

    NASA Technical Reports Server (NTRS)

    Cancro, George J.; Tolson, Robert H.; Keating, Gerald M.

    1998-01-01

    The success of aerobraking by the Mars Global Surveyor (MGS) spacecraft was partly due to the analysis of MGS accelerometer data. Accelerometer data was used to determine the effect of the atmosphere on each orbit, to characterize the nature of the atmosphere, and to predict the atmosphere for future orbits. To interpret the accelerometer data, a data reduction procedure was developed to produce density estimations utilizing inputs from the spacecraft, the Navigation Team, and pre-mission aerothermodynamic studies. This data reduction procedure was based on the calculation of aerodynamic forces from the accelerometer data by considering acceleration due to gravity gradient, solar pressure, angular motion of the MGS, instrument bias, thruster activity, and a vibration component due to the motion of the damaged solar array. Methods were developed to calculate all of the acceleration components, including a four-degree-of-freedom dynamics model used to gain a greater understanding of the damaged solar array. The total error inherent to the data reduction procedure was calculated as a function of altitude and density, considering contributions from ephemeris errors, errors in the force coefficients, and instrument errors due to bias and digitization. Comparing the results from this procedure to the data of other MGS Teams has demonstrated that this procedure can quickly and accurately describe the density and vertical structure of the Martian upper atmosphere.
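The core of such an accelerometer density reduction is inverting the drag equation, rho = 2*m*a_drag/(C_d*A*v^2), once the non-aerodynamic accelerations have been removed; a minimal sketch (the parameter values in the test are illustrative, not MGS flight data):

```python
def density_from_drag(mass, a_drag, cd, area, v_rel):
    """Freestream density from sensed aerodynamic deceleration.

    mass:   spacecraft mass, kg
    a_drag: along-track aerodynamic deceleration, m/s^2
    cd:     drag coefficient (from pre-mission aerothermodynamic studies)
    area:   reference area, m^2
    v_rel:  velocity relative to the atmosphere, m/s
    """
    return 2.0 * mass * a_drag / (cd * area * v_rel ** 2)
```

Errors in ephemeris (through v_rel), in the force coefficient cd, and in accelerometer bias and digitization (through a_drag) propagate directly into the recovered density, matching the error terms listed in the abstract.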

  12. Using integrated models to minimize environmentally induced wavefront error in optomechanical design and analysis

    NASA Astrophysics Data System (ADS)

    Genberg, Victor L.; Michels, Gregory J.

    2017-08-01

    The ultimate design goal of an optical system subjected to dynamic loads is to minimize system level wavefront error (WFE). In random response analysis, system WFE is difficult to predict from finite element results due to the loss of phase information. In the past, the use of system WFE was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for determining system level WFE using a linear optics model is presented. An error estimate is included in the analysis output based on fitting errors of mode shapes. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.

  13. Implementation of an experimental fault-tolerant memory system

    NASA Technical Reports Server (NTRS)

    Carter, W. C.; Mccarthy, C. E.

    1976-01-01

    The experimental fault-tolerant memory system described in this paper has been designed to enable the modular addition of spares, to validate the theoretical fault-secure and self-testing properties of the translator/corrector, to provide a basis for experiments using the new testing and correction processes for recovery, and to determine the practicality of such systems. The hardware design and implementation are described, together with methods of fault insertion. The hardware/software interface, including a restricted single error correction/double error detection (SEC/DED) code, is specified. Procedures are carefully described which (1) test for specified physical faults, (2) ensure that single error corrections are not miscorrections due to triple faults, and (3) enable recovery from double errors.
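A SEC/DED code of the kind specified here is typically an extended Hamming code: the syndrome locates a single error, and an overall parity bit distinguishes single errors (odd overall parity, correctable) from double errors (even overall parity, detectable only). A triple fault can alias to an apparent single error and be miscorrected, which is what the paper's checks guard against. A minimal Hamming(8,4) sketch (illustrative, not the paper's restricted code):

```python
def secded_encode(d):
    """Encode 4 data bits (list of 0/1) into an 8-bit SEC/DED codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                  # covers Hamming positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                  # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4                  # covers positions 4,5,6,7
    word = [p1, p2, d1, p3, d2, d3, d4]   # Hamming positions 1..7
    p0 = 0
    for b in word:                     # overall parity over the whole word
        p0 ^= b
    return [p0] + word                 # index 0 holds the overall parity bit

def secded_decode(w):
    """Return (data_bits, status): 'ok', 'corrected', or 'double_error'."""
    w = list(w)
    overall = 0
    for b in w:
        overall ^= b
    s1 = w[1] ^ w[3] ^ w[5] ^ w[7]
    s2 = w[2] ^ w[3] ^ w[6] ^ w[7]
    s3 = w[4] ^ w[5] ^ w[6] ^ w[7]
    syndrome = s1 + 2 * s2 + 4 * s3    # nonzero value = erring position 1..7
    if syndrome == 0 and overall == 0:
        status = 'ok'
    elif overall == 1:                 # odd parity: a single error, correctable
        if syndrome != 0:
            w[syndrome] ^= 1           # flip the located bit
        else:
            w[0] ^= 1                  # the overall parity bit itself erred
        status = 'corrected'
    else:                              # even parity but nonzero syndrome
        status = 'double_error'
    return [w[3], w[5], w[6], w[7]], status
```

Single-bit flips anywhere in the 8-bit word are corrected; any two flips are flagged as uncorrectable, which is the recovery case the paper's procedures address.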

  14. A description of medication errors reported by pharmacists in a neonatal intensive care unit.

    PubMed

    Pawluk, Shane; Jaam, Myriam; Hazi, Fatima; Al Hail, Moza Sulaiman; El Kassem, Wessam; Khalifa, Hanan; Thomas, Binny; Abdul Rouf, Pallivalappila

    2017-02-01

    Background Patients in the Neonatal Intensive Care Unit (NICU) are at an increased risk for medication errors. Objective The objective of this study is to describe the nature and setting of medication errors occurring in patients admitted to an NICU in Qatar based on a standard electronic system reported by pharmacists. Setting Neonatal intensive care unit, Doha, Qatar. Method This was a retrospective cross-sectional study on medication errors reported electronically by pharmacists in the NICU between January 1, 2014 and April 30, 2015. Main outcome measure Data collected included patient information, and incident details including error category, medications involved, and follow-up completed. Results A total of 201 pharmacist-reported NICU medication errors were submitted during the study period. None of the reported errors reached the patient or caused harm. Of the errors reported, 98.5% occurred in the prescribing phase of the medication process, with 58.7% being due to calculation errors. Overall, 53 different medications were documented in error reports, with anti-infective agents being the most frequently cited. The majority of incidents indicated that the primary prescriber was contacted and the error was resolved before reaching the next phase of the medication process. Conclusion Medication errors reported by pharmacists occur most frequently in the prescribing phase of the medication process. Our data suggest that error reporting systems need to be specific to the population involved. Special attention should be paid to frequently used medications in the NICU as these were responsible for the greatest numbers of medication errors.

  15. Outpatient CPOE orders discontinued due to 'erroneous entry': prospective survey of prescribers' explanations for errors.

    PubMed

    Hickman, Thu-Trang T; Quist, Arbor Jessica Lauren; Salazar, Alejandra; Amato, Mary G; Wright, Adam; Volk, Lynn A; Bates, David W; Schiff, Gordon

    2018-04-01

    Computerised prescriber order entry (CPOE) system users often discontinue medications because the initial order was erroneous. To elucidate error types, we queried prescribers about their reasons for discontinuing outpatient medication orders that they had self-identified as erroneous. During a nearly 3-year retrospective data collection period, we identified 57 972 drugs discontinued with the reason 'Error (erroneous entry)'. Because chart reviews revealed limited information about these errors, we prospectively studied consecutive, discontinued erroneous orders by querying prescribers in near-real-time to learn more about the erroneous orders. From January 2014 to April 2014, we prospectively emailed prescribers about outpatient drug orders that they had discontinued due to erroneous initial order entry. Of 250 806 medication orders in these 4 months, 1133 (0.45%) were discontinued due to error. From these 1133, we emailed 542 unique prescribers to ask about their reason(s) for discontinuing these medication orders in error. We received 312 responses (58% response rate). We categorised these responses using a previously published taxonomy. The top reasons for these discontinued erroneous orders included: medication ordered for wrong patient (27.8%, n=60); wrong drug ordered (18.5%, n=40); and duplicate order placed (14.4%, n=31). Other common discontinued erroneous orders related to drug dosage and formulation (eg, extended release versus not). Oxycodone (3%) was the drug most frequently discontinued in error. Drugs are not infrequently discontinued 'in error.' Wrong patient and wrong drug errors constitute the leading types of erroneous prescriptions recognised and discontinued by prescribers. Data regarding erroneous medication entries represent an important source of intelligence about how CPOE systems are functioning and malfunctioning, providing important insights regarding areas for designing CPOE more safely in the future.
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  16. Design framework for spherical microphone and loudspeaker arrays in a multiple-input multiple-output system.

    PubMed

    Morgenstern, Hai; Rafaely, Boaz; Noisternig, Markus

    2017-03-01

    Spherical microphone arrays (SMAs) and spherical loudspeaker arrays (SLAs) facilitate the study of room acoustics due to the three-dimensional analysis they provide. More recently, systems that combine both arrays, referred to as multiple-input multiple-output (MIMO) systems, have been proposed due to the added spatial diversity they facilitate. The literature provides frameworks for designing SMAs and SLAs separately, including error analysis from which the operating frequency range (OFR) of an array is defined. However, such a framework does not exist for the joint design of an SMA and an SLA that comprise a MIMO system. This paper develops a design framework for MIMO systems based on a model that addresses errors and highlights the importance of a matched design. Expanding on a free-field assumption, errors are incorporated separately for each array and error bounds are defined, facilitating error analysis for the system. The dependency of the error bounds on the SLA and SMA parameters is studied and it is recommended that parameters should be chosen to assure matched OFRs of the arrays in MIMO system design. A design example is provided, demonstrating the superiority of a matched system over an unmatched system in the synthesis of directional room impulse responses.
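A common rule of thumb (not the error bounds derived in this paper) puts the upper edge of a spherical array's OFR near kr = N for spherical-harmonic order N, which gives one quick way to see why SMA and SLA parameters must be matched; a hedged sketch with hypothetical array parameters:

```python
import math

C = 343.0  # speed of sound in air, m/s

def f_upper(order, radius_m):
    """Rule-of-thumb upper OFR limit where kr = N, i.e. f = N*c/(2*pi*r)."""
    return order * C / (2.0 * math.pi * radius_m)

# Hypothetical example: a small order-4 SMA vs a larger order-3 SLA.
sma_limit = f_upper(4, 0.042)   # ~5.2 kHz
sla_limit = f_upper(3, 0.15)    # ~1.1 kHz: the mismatch shrinks the joint OFR
```

The joint system is only usable where both ranges overlap, so an unmatched pair wastes the bandwidth of the better array, consistent with the paper's recommendation of matched OFRs.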

  17. Error Modeling of Multi-baseline Optical Truss. Part II; Application to SIM Metrology Truss Field Dependent Error

    NASA Technical Reports Server (NTRS)

    Zhang, Liwei Dennis; Milman, Mark; Korechoff, Robert

    2004-01-01

    The current design of the Space Interferometry Mission (SIM) employs a 19 laser-metrology-beam system (also called L19 external metrology truss) to monitor changes of distances between the fiducials of the flight system's multiple baselines. The function of the external metrology truss is to aid in the determination of the time-variations of the interferometer baseline. The largest contributor to truss error occurs in SIM wide-angle observations when the articulation of the siderostat mirrors (in order to gather starlight from different sky coordinates) brings to light systematic errors due to offsets at levels of instrument components (which include corner-cube retro-reflectors, etc.). This error is labeled external metrology wide-angle field-dependent error. A physics-based model of the field-dependent error at the single-metrology-gauge level is developed and linearly propagated to errors in interferometer delay. In this manner delay error sensitivity to various error parameters or their combination can be studied using eigenvalue/eigenvector analysis. Also, validation of the physics-based field-dependent model on the SIM testbed lends support to the present approach. As a first example, a dihedral error model is developed for the corner cubes (CC) attached to the siderostat mirrors. Then the delay errors due to this effect can be characterized using the eigenvectors of composite CC dihedral error. The essence of the linear error model is contained in an error-mapping matrix. A corresponding Zernike component matrix approach is developed in parallel, first for convenience of describing the RMS of errors across the field-of-regard (FOR), and second for convenience of combining with additional models. Average and worst case residual errors are computed when various orders of field-dependent terms are removed from the delay error. Results of the residual errors are important in arriving at external metrology system component requirements.
Double CCs with ideally coincident vertices reside with the siderostat. The non-common vertex error (NCVE) is treated as a second example. Finally, the combination of models and various other errors are discussed.

  18. Patterns of technical error among surgical malpractice claims: an analysis of strategies to prevent injury to surgical patients.

    PubMed

    Regenbogen, Scott E; Greenberg, Caprice C; Studdert, David M; Lipsitz, Stuart R; Zinner, Michael J; Gawande, Atul A

    2007-11-01

    To identify the most prevalent patterns of technical errors in surgery, and evaluate commonly recommended interventions in light of these patterns. The majority of surgical adverse events involve technical errors, but little is known about the nature and causes of these events. We examined characteristics of technical errors and common contributing factors among closed surgical malpractice claims. Surgeon reviewers analyzed 444 randomly sampled surgical malpractice claims from four liability insurers. Among 258 claims in which injuries due to error were detected, 52% (n = 133) involved technical errors. These technical errors were further analyzed with a structured review instrument designed by qualitative content analysis. Forty-nine percent of the technical errors caused permanent disability; an additional 16% resulted in death. Two-thirds (65%) of the technical errors were linked to manual error, 9% to errors in judgment, and 26% to both manual and judgment error. A minority of technical errors involved advanced procedures requiring special training ("index operations"; 16%), surgeons inexperienced with the task (14%), or poorly supervised residents (9%). The majority involved experienced surgeons (73%), and occurred in routine, rather than index, operations (84%). Patient-related complexities, including emergencies, difficult or unexpected anatomy, and previous surgery, contributed to 61% of technical errors, and technology or systems failures contributed to 21%. Most technical errors occur in routine operations with experienced surgeons, under conditions of increased patient complexity or systems failure. Commonly recommended interventions, including restricting high-complexity operations to experienced surgeons, additional training for inexperienced surgeons, and stricter supervision of trainees, are likely to address only a minority of technical errors.
Surgical safety research should instead focus on improving decision-making and performance in routine operations for complex patients and circumstances.

  19. Diffraction analysis of sidelobe characteristics of optical elements with ripple error

    NASA Astrophysics Data System (ADS)

    Zhao, Lei; Luo, Yupeng; Bai, Jian; Zhou, Xiangdong; Du, Juan; Liu, Qun; Luo, Yujie

    2018-03-01

    The ripple errors of a lens can lead to optical damage in high energy laser systems. Analysis of the sidelobe on the focal plane caused by ripple error provides a reference for evaluating the error and the imaging quality. In this paper, we analyze the diffraction characteristics of the sidelobe of optical elements with ripple errors. First, we analyze the characteristics of ripple error and build a relationship between ripple error and sidelobe; the sidelobe results from the diffraction of ripple errors. The ripple error tends to be periodic due to the fabrication method used on the optical surface. Simulated experiments are carried out based on the angular spectrum method by characterizing ripple error as rotationally symmetric periodic structures. The influence of the two major ripple parameters, spatial frequency and peak-to-valley value, on the sidelobe is discussed. The results indicate that spatial frequency and peak-to-valley value both impact the sidelobe at the image plane: the peak-to-valley value is the major factor affecting the energy proportion of the sidelobe, while the spatial frequency is the major factor affecting the distribution of the sidelobe at the image plane.
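The link between ripple parameters and sidelobes can be checked in one dimension: a sinusoidal phase ripple of amplitude phi0 (set by the peak-to-valley value) and spatial frequency freq diffracts energy into orders at multiples of freq, with first-order energy about (phi0/2)^2 for small phi0. A minimal stdlib sketch (a 1D toy model, not the paper's angular-spectrum simulation):

```python
import cmath
import math

n = 2048     # samples across the aperture
freq = 16    # ripple spatial frequency, cycles per aperture (hypothetical)
phi0 = 0.1   # ripple phase amplitude, radians (hypothetical)

# Pupil field with a pure sinusoidal phase ripple.
field = [cmath.exp(1j * phi0 * math.cos(2 * math.pi * freq * k / n))
         for k in range(n)]

def dft_coeff(f, m):
    """m-th Fourier coefficient of the sampled field (one far-field order)."""
    n = len(f)
    return sum(f[k] * cmath.exp(-2j * math.pi * m * k / n) for k in range(n)) / n

main = abs(dft_coeff(field, 0)) ** 2      # central-lobe energy fraction
side = abs(dft_coeff(field, freq)) ** 2   # first-order sidelobe energy
```

The sidelobe appears at order freq (its position set by the spatial frequency) with energy governed by phi0, mirroring the abstract's conclusion about the two parameters' distinct roles.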

  20. High Precision Metrology on the Ultra-Lightweight W 50.8 cm f/1.25 Parabolic SHARPI Primary Mirror using a CGH Null Lens

    NASA Technical Reports Server (NTRS)

    Antonille, Scott

    2004-01-01

    For potential use on the SHARPI mission, Eastman Kodak has delivered a 50.8 cm CA f/1.25 ultra-lightweight UV parabolic mirror with a surface figure error requirement of 6 nm RMS. We address the challenges involved in verifying and mapping the surface error of this large lightweight mirror to +/-3 nm using a diffractive CGH null lens. Of main concern is removal of large systematic errors resulting from surface deflections of the mirror due to gravity as well as smaller contributions from system misalignment and reference optic errors. We present our efforts to characterize these errors and remove their wavefront error contribution in post-processing as well as minimizing the uncertainty these calculations introduce. Data from Kodak and preliminary measurements from NASA Goddard will be included.

  1. Radar error statistics for the space shuttle

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1979-01-01

    C-band and S-band radar error statistics recommended for use with the ground-tracking programs that process space shuttle tracking data are presented. The statistics are divided into two parts: bias error statistics, using the subscript B, and high frequency error statistics, using the subscript q. Bias errors may be slowly varying to constant. High frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in correction for atmospheric refraction effects. High frequency noise was mainly due to hardware and to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line of sight scintillations were identified.
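The two-part error model described above can be sketched as a constant bias plus a first-order Gauss-Markov (sample-correlated) high-frequency noise term; the parameters below are illustrative, not the recommended shuttle statistics:

```python
import math
import random

def simulate_range_errors(n, bias, sigma, rho, seed=1):
    """Generate n radar range errors: constant bias + correlated noise.

    bias:  constant (slowly varying) bias error
    sigma: steady-state standard deviation of the high-frequency noise
    rho:   sample-to-sample correlation (rho = 0 gives white noise)
    """
    rng = random.Random(seed)
    errs, noise = [], 0.0
    for _ in range(n):
        # First-order Gauss-Markov recursion with steady-state sigma.
        noise = rho * noise + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, sigma)
        errs.append(bias + noise)
    return errs
```

Separating the two parts this way is what lets a tracking filter estimate the bias (subscript B) while treating the rapidly varying noise (subscript q) statistically.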

  2. Speech therapy for errors secondary to cleft palate and velopharyngeal dysfunction.

    PubMed

    Kummer, Ann W

    2011-05-01

    Individuals with a history of cleft lip/palate or velopharyngeal dysfunction may demonstrate any combination of speech sound errors, hypernasality, and nasal emission. Speech sound distortion can also occur due to other structural anomalies, including malocclusion. Whenever there are structural anomalies, speech can be affected by obligatory distortions or compensatory errors. Obligatory distortions (including hypernasality due to velopharyngeal insufficiency) are caused by abnormal structure and not by abnormal function. Therefore, surgery or other forms of physical management are needed for correction. In contrast, speech therapy is indicated for compensatory articulation productions where articulation placement is changed in response to the abnormal structure. Speech therapy is much more effective if it is done after normalization of the structure. When speech therapy is appropriate, the techniques involve methods to change articulation placement using standard articulation therapy principles. Oral-motor exercises, including the use of blowing and sucking, are never indicated to improve velopharyngeal function. The purpose of this article is to provide information regarding when speech therapy is appropriate for individuals with a history of cleft palate or other structural anomalies and when physical management is needed. In addition, some specific therapy techniques are offered for the elimination of common compensatory articulation productions. © Thieme Medical Publishers.

  3. Restrictions on surgical resident shift length does not impact type of medical errors.

    PubMed

    Anderson, Jamie E; Goodman, Laura F; Jensen, Guy W; Salcedo, Edgardo S; Galante, Joseph M

    2017-05-15

In 2011, resident duty hours were restricted in an attempt to improve patient safety and resident education. While shorter shift lengths reduce fatigue, they lead to more patient handoffs, raising concerns about adverse effects on patient safety. This study seeks to determine whether differences in duty-hour restrictions influence the types of errors made by residents. This is a nested retrospective cohort study at a surgery department in an academic medical center. During 2013-14, standard 2011 duty hours were in place for residents. In 2014-15, duty-hour restrictions at the study site were relaxed ("flexible"), with no restrictions on shift length. We reviewed all morbidity and mortality submissions from July 1, 2013 to June 30, 2015 and compared differences in types of errors between these periods. A total of 383 patients experienced adverse events, including 59 deaths (15.4%). Comparing the standard versus flexible periods, there was no difference in mortality (15.7% versus 12.6%, P = 0.479) or complication rates (2.6% versus 2.5%, P = 0.696). There was no difference in types of errors between periods (P = 0.050-0.808). Errors were most often due to cognitive failures (229, 59.6%) and least often due to team failure (127, 33.2%). By subset, technical errors were the most numerous (169, 44.1%). There were no differences in types of errors for cases that were nonelective, occurred at night, or involved residents. Among adverse events reported in this department's surgical morbidity and mortality submissions, there were no differences in types of errors when resident duty hours were less restrictive. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve

    2016-03-01

We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can introduce up to 2.7%, 8.1%, and 13.5% error, respectively, into the measured irradiance, and similar errors into the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can also persist in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.
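As a rough check on the direct-beam numbers above, the worst-case tilt error (sensor tilted toward the sun) follows from the cosine response alone; this simplification ignores the diffuse component and therefore slightly overstates the paper's totals:

```python
import math

def direct_tilt_error(sza_deg, tilt_deg):
    """Worst-case relative error (%) for a sensor tilted toward the sun:
    direct beam only, cosine response, cos(SZA - tilt)/cos(SZA) - 1."""
    return (math.cos(math.radians(sza_deg - tilt_deg))
            / math.cos(math.radians(sza_deg)) - 1.0) * 100.0

for tilt in (1, 3, 5):
    print(f"SZA 60°, {tilt}° tilt: {direct_tilt_error(60, tilt):.1f}% error")
```

At SZA 60° this gives roughly 3.0%, 8.9%, and 14.7% for 1°, 3°, and 5° tilts, consistent with (and slightly above) the reported 2.7%, 8.1%, and 13.5%, which include diffuse irradiance.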

  5. Prevention of prescription errors by computerized, on-line, individual patient related surveillance of drug order entry.

    PubMed

    Oliven, A; Zalman, D; Shilankov, Y; Yeshurun, D; Odeh, M

    2002-01-01

Computerized prescription of drugs is expected to reduce the number of preventable drug ordering errors. In the present study we evaluated the usefulness of a computerized drug order entry (CDOE) system in reducing prescription errors. A department of internal medicine using a comprehensive CDOE system, which also included patient-related drug-laboratory, drug-disease and drug-allergy on-line surveillance, was compared to a similar department in which drug orders were handwritten. CDOE reduced prescription errors to 25-35%. The causes of errors remained similar, and most errors, in both departments, were associated with abnormal renal function and electrolyte balance. Residual errors remaining in the CDOE-using department were due to handwriting on the typed order, failure to enter patients' diseases, and system failures. The use of CDOE was associated with a significant reduction in mean hospital stay and in the number of changes made to the prescription. The findings of this study both quantify the impact of comprehensive CDOE on prescription errors and delineate the causes of the remaining errors.

  6. New dimension analyses with error analysis for quaking aspen and black spruce

    NASA Technical Reports Server (NTRS)

    Woods, K. D.; Botkin, D. B.; Feiveson, A. H.

    1987-01-01

Dimension analyses for black spruce in wetland stands and for trembling aspen are reported, including new approaches to error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the error in biomass, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost-effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and the significance for spruce of nutrient conditions.

  7. Phonological and Motor Errors in Individuals with Acquired Sound Production Impairment

    ERIC Educational Resources Information Center

    Buchwald, Adam; Miozzo, Michele

    2012-01-01

    Purpose: This study aimed to compare sound production errors arising due to phonological processing impairment with errors arising due to motor speech impairment. Method: Two speakers with similar clinical profiles who produced similar consonant cluster simplification errors were examined using a repetition task. We compared both overall accuracy…

  8. Multiple Cognitive Control Effects of Error Likelihood and Conflict

    PubMed Central

    Brown, Joshua W.

    2010-01-01

Recent work on cognitive control has suggested a variety of performance-monitoring functions of the anterior cingulate cortex, including the monitoring of errors, conflict, and error likelihood. Given the variety of monitoring effects, a corresponding variety of control effects on behavior might be expected. This paper explores whether conflict and error likelihood produce distinct cognitive control effects on behavior, as measured by response time. A change signal task (Brown & Braver, 2005) was modified to include conditions of likely errors due to tardy as well as premature responses, in conditions with and without conflict. The results discriminate between competing hypotheses of independent vs. interacting conflict and error likelihood control effects. Specifically, the results suggest that the likelihood of premature vs. tardy response errors can lead to multiple distinct control effects, which are independent of cognitive control effects driven by response conflict. As a whole, the results point to the existence of multiple distinct cognitive control mechanisms and challenge existing models of cognitive control that incorporate only a single control signal. PMID:19030873

  9. Development of an errorable car-following driver model

    NASA Astrophysics Data System (ADS)

    Yang, H.-H.; Peng, H.

    2010-06-01

An errorable car-following driver model is presented in this paper. An errorable driver model is one that emulates a human driver's functions and can generate both nominal (error-free) and devious (erroneous) behaviours. This model was developed for the evaluation and design of active safety systems. The car-following data used for developing and validating the model were obtained from a large-scale naturalistic driving database. The stochastic car-following behaviour was first analysed and modelled as a random process. Three error-inducing behaviours were then introduced. First, human perceptual limitations were studied and implemented. Distraction due to non-driving tasks was then identified based on statistical analysis of the driving data. Finally, the time delay of human drivers was estimated through a recursive least-squares identification process. By including these three error-inducing behaviours, rear-end collisions with the lead vehicle could occur. The simulated crash rate was found to be similar to, though somewhat higher than, that reported in traffic statistics.
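The delay-identification step can be illustrated with a much simpler stand-in for the paper's recursive least-squares procedure: scan candidate delays and keep the one that minimizes the least-squares residual of a linear car-following law a(t) = k·Δv(t−d). The data and gain below are synthetic, purely for illustration:

```python
import math

def estimate_delay(dv, accel, max_delay=30):
    """Grid-search least-squares estimate of the driver's reaction delay
    (in samples) for the model accel[t] = k * dv[t - d]."""
    best_d, best_err = 0, float("inf")
    for d in range(max_delay):
        pairs = [(dv[t - d], accel[t]) for t in range(d, len(accel))]
        sxx = sum(x * x for x, _ in pairs)
        k = sum(x * y for x, y in pairs) / sxx if sxx else 0.0  # LS gain
        err = sum((y - k * x) ** 2 for x, y in pairs)
        if err < best_err:
            best_d, best_err = d, err
    return best_d

# Synthetic driver reacting to relative speed with a 12-sample delay
dv = [math.sin(0.1 * t) for t in range(200)]
accel = [0.5 * dv[t - 12] if t >= 12 else 0.0 for t in range(200)]
print(estimate_delay(dv, accel))  # 12
```

A true recursive formulation updates the gain estimate sample by sample instead of re-fitting per candidate delay, but the identified quantity is the same.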

  10. Plan for Quality to Improve Patient Safety at the Point of Care

    PubMed Central

    Ehrmeyer, Sharon S.

    2011-01-01

The U.S. Institute of Medicine's (IOM) much-publicized report "To Err Is Human" (2000, National Academy Press) stated that as many as 98 000 hospitalized patients in the U.S. die each year due to preventable medical errors. This revelation about medical error and patient safety focused the public's and the medical community's attention on errors in healthcare delivery, including laboratory and point-of-care testing (POCT). Errors introduced anywhere in the POCT process can clearly impact quality and place patient safety at risk. While POCT performed by or near the patient reduces the potential for some errors, the process presents many challenges to quality, with its multiple test sites, test menus, testing devices and non-laboratory analysts, who often have little understanding of quality testing. Incoherent or absent regulations and the rapid availability of test results for immediate clinical intervention can further amplify errors. System planning and management of the entire POCT process are essential to reduce errors and improve quality and patient safety. PMID:21808107

  11. The causes of and factors associated with prescribing errors in hospital inpatients: a systematic review.

    PubMed

    Tully, Mary P; Ashcroft, Darren M; Dornan, Tim; Lewis, Penny J; Taylor, David; Wass, Val

    2009-01-01

Prescribing errors are common, they result in adverse events and harm to patients, and it is unclear how best to prevent them because recommendations are more often based on surmised rather than empirically collected data. The aim of this systematic review was to identify all informative published evidence concerning the causes of and factors associated with prescribing errors in specialist and non-specialist hospitals, collate it, analyse it qualitatively and synthesize conclusions from it. Seven electronic databases were searched for articles published between 1985 and July 2008. The reference lists of all informative studies were searched for additional citations. To be included, a study had to be of handwritten prescriptions for adult or child inpatients and to report empirically collected data on the causes of or factors associated with errors. Publications in languages other than English and studies that evaluated errors for only one disease, one route of administration or one type of prescribing error were excluded. Seventeen papers reporting 16 studies, selected from 1268 papers identified by the search, were included in the review. Studies from the US and the UK in university-affiliated hospitals predominated (10/16 [62%]). The definition of a prescribing error varied widely and the included studies were highly heterogeneous. Causes were grouped according to Reason's model of accident causation into active failures, error-provoking conditions and latent conditions. The active failure most frequently cited was a mistake due to inadequate knowledge of the drug or the patient. Skill-based slips and memory lapses were also common. Where error-provoking conditions were reported, there was at least one per error. These included lack of training or experience, fatigue, stress, high workload for the prescriber and inadequate communication between healthcare professionals. 
Latent conditions included reluctance to question senior colleagues and inadequate provision of training. Prescribing errors are often multifactorial, with several active failures and error-provoking conditions often acting together to cause them. In the face of such complexity, solutions addressing a single cause, such as lack of knowledge, are likely to have only limited benefit. Further rigorous study, seeking potential ways of reducing error, needs to be conducted. Multifactorial interventions across many parts of the system are likely to be required.

  12. Evaluation of MEMS-based in-place inclinometers in cold regions : [summary].

    DOT National Transportation Integrated Search

    2013-03-01

Inclinometer probes are used to measure ground movement. While an industry standard, this technology has drawbacks, including costly trips for manual measurements, operator error, and limited measurements due to casing deformation. Relatively new M...

  13. Evaluation of MEMS-based in-place inclinometers in cold regions.

    DOT National Transportation Integrated Search

    2013-03-01

Inclinometer probes are used to measure ground movement. While an industry standard, this technology has drawbacks, including costly trips for manual measurements, operator error, and limited measurements due to casing deformation. Relatively new M...

  14. Optimum Cyclic Redundancy Codes for Noisy Channels

    NASA Technical Reports Server (NTRS)

    Posner, E. C.; Merkey, P.

    1986-01-01

The capabilities and limitations of cyclic redundancy codes (CRCs) for detecting transmission errors in data sent over relatively noisy channels (e.g., voice-grade telephone lines or very-high-density storage media) are discussed in a 16-page report. Due to the prevalent use of bytes (multiples of 8 bits) in data transmission, the report is primarily concerned with cases in which both the block length and the number of redundant bits (check bits used for error detection) included in each block are multiples of 8 bits.
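The error-detection mechanism the report analyzes can be sketched with a generic bitwise CRC; the polynomial below (CRC-16/CCITT-FALSE, 0x1021) is our choice for illustration, not necessarily one the report recommends:

```python
def crc16(data: bytes, poly=0x1021, init=0xFFFF) -> int:
    """Bitwise CRC-16/CCITT-FALSE over a byte string."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            # Shift out the top bit; XOR in the generator polynomial if it was 1
            crc = ((crc << 1) ^ poly) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

block = b"example data block"
check = crc16(block)                      # 16 redundant bits appended by sender
corrupted = bytes([block[0] ^ 0x01]) + block[1:]
print(crc16(corrupted) != check)          # True: single-bit error detected
```

Any CRC whose generator polynomial has at least two terms detects all single-bit errors, which is why the corrupted block's checksum is guaranteed to differ here.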

  15. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE PAGES

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among the high-fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. The error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Among the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high-fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
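Polymerase fidelity in such comparisons is commonly normalized per base per template doubling; a minimal sketch of that normalization, using hypothetical counts rather than the study's data:

```python
import math

def pcr_error_rate(mutations, bases_sequenced, fold_amplification):
    """Errors per base per template doubling:
    rate = mutations / (bases sequenced * log2(fold amplification))."""
    doublings = math.log2(fold_amplification)
    return mutations / (bases_sequenced * doublings)

# Hypothetical: 20 mutations found in 1 Mb of sequenced clones
# after 1e5-fold PCR amplification (~16.6 template doublings)
rate = pcr_error_rate(20, 1_000_000, 1e5)
print(f"{rate:.2e} errors/base/doubling")
```

Normalizing by doublings rather than cycles is what makes rates comparable across studies with different amplification levels.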

  16. Clinical biochemistry laboratory rejection rates due to various types of preanalytical errors.

    PubMed

    Atay, Aysenur; Demir, Leyla; Cuhadar, Serap; Saglam, Gulcan; Unal, Hulya; Aksun, Saliha; Arslan, Banu; Ozkan, Asuman; Sutcu, Recep

    2014-01-01

Preanalytical errors, occurring along the process from the initial test request to the admission of the specimen to the laboratory, cause the rejection of samples. The aim of this study was to better explain the reasons for rejected samples with regard to their rates in certain test groups in our laboratory. This preliminary study was designed around the samples rejected over a one-year period, based on the rates and types of inappropriateness. Test requests and blood samples of the clinical chemistry, immunoassay, hematology, glycated hemoglobin, coagulation and erythrocyte sedimentation rate test units were evaluated. Types of inappropriateness were evaluated as follows: improperly labelled samples, hemolysed specimens, clotted specimens, insufficient specimen volume and total request errors. A total of 5,183,582 test requests from 1,035,743 blood collection tubes were considered. The total rejection rate was 0.65%. The rejection rate of the coagulation group was significantly higher (2.28%) than that of the other test groups (P < 0.001), with an insufficient-specimen-volume error rate of 1.38%. Rejection rates due to hemolysis, clotted specimens and insufficient specimen volume were found to be 8%, 24% and 34%, respectively. Total request errors, particularly unintelligible requests, made up 32% of the total for inpatients. The errors were especially attributable to unintelligible or inappropriate test requests, improperly labelled samples for inpatients, and blood drawing errors, especially insufficient specimen volume in the coagulation test group. Further studies should be performed after corrective and preventive actions to detect a possible decrease in rejected samples.
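The headline figure can be reproduced from the reported totals (a simple consistency check, nothing more):

```python
total_tubes = 1_035_743          # blood collection tubes considered
rejection_rate_pct = 0.65        # reported total rejection rate (%)
rejected_tubes = total_tubes * rejection_rate_pct / 100
print(round(rejected_tubes))     # about 6,732 rejected tubes over the year
```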

  17. Inducible DNA-repair systems in yeast: competition for lesions.

    PubMed

    Mitchel, R E; Morrison, D P

    1987-03-01

DNA lesions may be recognized and repaired by more than one DNA-repair process. If two repair systems with different error frequencies have overlapping lesion specificity and one or both is inducible, the resulting variable competition for the lesions can change the biological consequences of these lesions. This concept was demonstrated by observing mutation in yeast cells (Saccharomyces cerevisiae) exposed to combinations of mutagens under conditions which influenced the induction of error-free recombinational repair or error-prone repair. Total mutation frequency was reduced in a manner proportional to the dose of 60Co-gamma- or 254 nm UV radiation delivered prior to or subsequent to an MNNG exposure. Suppression was greater per unit radiation dose in cells gamma-irradiated in O2 as compared to N2. A rad3 (excision-repair) mutant gave results similar to wild-type, but mutation in a rad52 (rec-) mutant exposed to MNNG was not suppressed by radiation. Protein-synthesis inhibition with heat shock or cycloheximide indicated that it was the mutation due to MNNG and not that due to radiation which had changed. These results indicate that MNNG lesions are recognized by both the recombinational repair system and the inducible error-prone system, but that gamma-radiation induction of error-free recombinational repair resulted in increased competition for the lesions, thereby reducing mutation. Similarly, gamma-radiation exposure resulted in a radiation dose-dependent reduction in mutation due to MNU, EMS, ENU and 8-MOP + UVA, but no reduction in mutation due to MMS. These results suggest that the number of mutational MMS lesions recognizable by the recombinational repair system must be very small relative to those produced by the other agents. MNNG induction of the inducible error-prone systems, however, did not alter mutation frequencies due to ENU or MMS exposure but, in contrast to radiation, increased the mutagenic effectiveness of EMS. 
These experiments demonstrate that in this lower eukaryote, mutagen exposure does not necessarily result in a fixed risk of mutation, but that the risk can be markedly influenced by a variety of external stimuli including heat shock or exposure to other mutagens.

  18. Propagation of stage measurement uncertainties to streamflow time series

    NASA Astrophysics Data System (ADS)

    Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary

    2016-04-01

Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and to non-stationary waves, and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is satisfying overall. Moreover, the quantification of uncertainty is also satisfying, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results contrast markedly depending on site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating between systematic and non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are finally discussed.
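The propagation idea can be sketched with a minimal Monte Carlo pass through a hypothetical power-law rating curve Q = a(h−b)^c; the curve parameters and error magnitudes below are illustrative assumptions, not values from the paper:

```python
import random
import statistics

def rating_curve(h, a=30.0, b=0.2, c=1.7):
    """Hypothetical power-law rating curve: Q in m^3/s for stage h in m."""
    return a * max(h - b, 0.0) ** c

random.seed(0)
h_obs = 1.50          # observed stage (m)
sigma_sys = 0.010     # systematic stage error, e.g. gauge calibration (m)
sigma_ns = 0.005      # non-systematic stage error, e.g. resolution, waves (m)

flows = []
for _ in range(20_000):
    h = h_obs + random.gauss(0.0, sigma_sys) + random.gauss(0.0, sigma_ns)
    flows.append(rating_curve(h))

q_med = statistics.median(flows)
q_sd = statistics.stdev(flows)
print(f"Q ≈ {q_med:.1f} m3/s, stage-induced sd ≈ {q_sd:.2f} m3/s")
```

For a single observation the two error types are interchangeable; the distinction matters when many observations share the same systematic bias, which is why the paper stresses it for long-term flow averages.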

  19. [Epidemiology of refractive errors].

    PubMed

    Wolfram, C

    2017-07-01

Refractive errors are very common and can lead to severe pathological changes in the eye. This article analyzes the epidemiology of refractive errors in the general population in Germany and worldwide, describes common definitions for refractive errors and outlines the clinical characteristics of pathological changes. Refractive errors differ between age groups due to refractive changes during the lifetime and also due to generation-specific factors. Current research on the etiology of refractive errors has emphasized the influence of environmental factors, which has led to new strategies for the prevention of refractive pathologies.

  20. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  1. Influence of non-ideal performance of lasers on displacement precision in single-grating heterodyne interferometry

    NASA Astrophysics Data System (ADS)

    Wang, Guochao; Xie, Xuedong; Yan, Shuhua

    2010-10-01

The principle of a dual-wavelength single-grating nanometer displacement measuring system with long range, high precision, and good stability is presented. Because the displacement measurement requires nanometer-level precision, errors caused by a variety of adverse factors must be taken into account. In this paper, errors due to the non-ideal performance of the dual-frequency laser, including the linear error caused by wavelength instability and the non-linear error caused by elliptical polarization of the laser, are discussed and analyzed. On the basis of theoretical modeling, the corresponding error formulas are derived. Through simulation, the limit value of the linear error caused by wavelength instability is 2 nm, and on the assumption that Tx = 0.85 and Ty = 1 for the polarizing beam splitter (PBS), the limit values of the nonlinear error caused by elliptical polarization are 1.49 nm, 2.99 nm, and 4.49 nm for non-orthogonality angles of 1°, 2°, and 3°, respectively. The law of the error variation is analyzed for different values of Tx and Ty.

  2. Differing Air Traffic Controller Responses to Similar Trajectory Prediction Errors

    NASA Technical Reports Server (NTRS)

    Mercer, Joey; Hunt-Espinosa, Sarah; Bienert, Nancy; Laraway, Sean

    2016-01-01

A Human-In-The-Loop simulation was conducted in January 2013 in the Airspace Operations Laboratory at NASA's Ames Research Center. The simulation airspace included two en route sectors feeding the northwest corner of Atlanta's Terminal Radar Approach Control. The focus of this paper is on how uncertainties in the study's trajectory predictions impacted the controllers' ability to perform their duties. Of particular interest is how the controllers interacted with the delay information displayed in the meter list and data block while managing the arrival flows. Due to wind forecasts with 30-knot over-predictions and 30-knot under-predictions, delay value computations included errors of similar magnitude, albeit in opposite directions. However, when performing their duties in the presence of these errors, did the controllers issue clearances of similar magnitude, albeit in opposite directions?

  3. Classification-Based Spatial Error Concealment for Visual Communications

    NASA Astrophysics Data System (ADS)

    Chen, Meng; Zheng, Yefeng; Wu, Min

    2006-12-01

In an error-prone transmission environment, error concealment is an effective technique for reconstructing damaged visual content. Due to large variations in image characteristics, different concealment approaches are necessary to accommodate the differing nature of the lost image content. In this paper, we address this issue and propose using classification to integrate state-of-the-art error concealment techniques. The proposed approach takes advantage of multiple concealment algorithms and adaptively selects the suitable algorithm for each damaged image area. With growing awareness that the design of sender and receiver systems should be jointly considered for efficient and reliable multimedia communications, we propose a set of classification-based block concealment schemes, including receiver-side classification, sender-side attachment, and sender-side embedding. Our experimental results provide extensive performance comparisons and demonstrate that the proposed classification-based error concealment approaches outperform the conventional approaches.

  4. Comparisons of single event vulnerability of GaAs SRAMS

    NASA Astrophysics Data System (ADS)

    Weatherford, T. R.; Hauser, J. R.; Diehl, S. E.

    1986-12-01

    A GaAs MESFET/JFET model incorporated into SPICE has been used to accurately describe C-EJFET, E/D MESFET and D MESFET/resistor GaAs memory technologies. These cells have been evaluated for critical charges due to gate-to-drain and drain-to-source charge collection. Low gate-to-drain critical charges limit conventional GaAs SRAM soft error rates to approximately 1E-6 errors/bit-day. SEU hardening approaches including decoupling resistors, diodes, and FETs have been investigated. Results predict GaAs RAM cell critical charges can be increased to over 0.1 pC. Soft error rates in such hardened memories may approach 1E-7 errors/bit-day without significantly reducing memory speed. Tradeoffs between hardening level, performance and fabrication complexity are discussed.

  5. Patient safety in otolaryngology: a descriptive review.

    PubMed

    Danino, Julian; Muzaffar, Jameel; Metcalfe, Chris; Coulson, Chris

    2017-03-01

    Human evaluation and judgement may include errors that can have disastrous results. Within medicine and healthcare there has been slow progress towards major changes in safety. Healthcare lags behind other specialised industries, such as aviation and nuclear power, where there have been significant improvements in overall safety, especially in reducing risk of errors. Following several high profile cases in the USA during the 1990s, a report titled "To Err Is Human: Building a Safer Health System" was published. The report extrapolated that in the USA approximately 50,000 to 100,000 patients may die each year as a result of medical errors. Traditionally otolaryngology has always been regarded as a "safe specialty". A study in the USA in 2004 inferred that there may be 2600 cases of major morbidity and 165 deaths within the specialty. MEDLINE via PubMed interface was searched for English language articles published between 2000 and 2012. Each combined two or three of the keywords noted earlier. Limitations are related to several generic topics within patient safety in otolaryngology. Other areas covered have been current relevant topics due to recent interest or new advances in technology. There has been a heightened awareness within the healthcare community of patient safety; it has become a major priority. Focus has shifted from apportioning blame to prevention of the errors and implementation of patient safety mechanisms in healthcare delivery. Type of Errors can be divided into errors due to action and errors due to knowledge or planning. In healthcare there are several factors that may influence adverse events and patient safety. Although technology may improve patient safety, it also introduces new sources of error. The ability to work with people allows for the increase in safety netting. Team working has been shown to have a beneficial effect on patient safety. Any field of work involving human decision-making will always have a risk of error. 
Within otolaryngology, although patient safety has evolved along similar themes as in other surgical specialties, there are several specific high-risk areas. Medical error is a common problem and its human cost is of immense importance. Steps to reduce such errors require the identification of high-risk practice within a complex healthcare system. The commitment to patient safety and quality improvement in medicine depends on personal responsibility and professional accountability.

  6. Performance enhancement of wireless mobile adhoc networks through improved error correction and ICI cancellation

    NASA Astrophysics Data System (ADS)

    Sabir, Zeeshan; Babar, M. Inayatullah; Shah, Syed Waqar

    2012-12-01

    Mobile adhoc network (MANET) refers to an arrangement of wireless mobile nodes that dynamically and freely self-organize into temporary and arbitrary network topologies. Orthogonal frequency division multiplexing (OFDM) is the foremost choice for MANET system designers at the physical layer due to its inherent support for high-data-rate transmission, which corresponds to its high spectral efficiency. The downside of OFDM is its sensitivity to synchronization errors (frequency offsets and symbol timing). Most present-day techniques employing OFDM for data transmission support mobility as a primary feature. This mobility causes small frequency offsets due to the production of Doppler frequencies, resulting in intercarrier interference (ICI), which degrades signal quality through crosstalk between the subcarriers of the OFDM symbol. An efficient frequency-domain block-type pilot-assisted ICI mitigation scheme is proposed in this article, which nullifies the effect of channel frequency offsets on the received OFDM symbols. The second problem addressed in this article is the noise induced by different sources into the received symbol, increasing its bit error rate and making it unsuitable for many applications. Forward-error-correcting turbo codes have been employed in the proposed model, adding redundant bits that are later used for error detection and correction. At the receiver end, a maximum a posteriori (MAP) decoding algorithm is implemented using two component MAP decoders. These decoders exchange interleaved extrinsic soft information with each other in the form of log-likelihood ratios, improving the previous estimate of each decoded bit in each iteration.
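    The ICI mechanism the abstract describes can be illustrated numerically. The sketch below (not the authors' implementation; subcarrier count, modulation and offset values are illustrative assumptions) shows that a carrier frequency offset, expressed as a fraction of the subcarrier spacing, destroys OFDM subcarrier orthogonality:

    ```python
    import numpy as np

    # Illustrative sketch: a normalized carrier frequency offset (CFO)
    # attenuates each OFDM subcarrier and leaks energy into its
    # neighbours, producing intercarrier interference (ICI).
    N = 64                                        # number of subcarriers
    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, size=(2, N))
    data = ((1 - 2 * bits[0]) + 1j * (1 - 2 * bits[1])) / np.sqrt(2)  # QPSK symbols

    tx = np.fft.ifft(data) * np.sqrt(N)           # time-domain OFDM symbol

    def demodulate(eps):
        """Recover subcarriers after a frequency offset of eps subcarrier spacings."""
        n = np.arange(N)
        rx = tx * np.exp(2j * np.pi * eps * n / N)  # Doppler-like CFO rotation
        return np.fft.fft(rx) / np.sqrt(N)

    err_no_offset = np.max(np.abs(demodulate(0.0) - data))
    err_offset = np.max(np.abs(demodulate(0.1) - data))
    # With eps = 0 the symbols are recovered exactly; with eps = 0.1
    # every subcarrier suffers attenuation, rotation, and ICI crosstalk.
    print(err_no_offset < 1e-10, err_offset > 0.05)
    ```

    A pilot-assisted scheme like the one proposed estimates eps from known pilot subcarriers and removes the rotation before the FFT.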

  7. Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN

    NASA Astrophysics Data System (ADS)

    Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.

    2016-12-01

    In this study, the PERSIANN daily dataset is evaluated from 2000 to 2011 over 69 pixels covering the Urmia Lake basin in northwest Iran. Different analytical approaches and indexes are used to examine PERSIANN's precision in the detection and estimation of rainfall rate. The residuals are decomposed into hit, miss, and false alarm (FA) estimation biases, while continuous decomposition into systematic and random error components is also analyzed seasonally and categorically. A new interpretation of estimation accuracy, named "reliability on PERSIANN estimations", is introduced, and the behavior of existing categorical/statistical measures and error components is analyzed seasonally over different rainfall rate categories. This study yields new insights into the nature of PERSIANN errors over the Urmia Lake basin, a semi-arid region in the Middle East, including the following: - The analyzed contingency table indexes indicate better detection precision during spring and fall. - A relatively constant level of error is generally observed among different categories. The range of precipitation estimates at different rainfall rate categories is nearly invariant, a sign of systematic error. - A low level of reliability is observed for PERSIANN estimations at different categories, mostly associated with a high level of FA error. However, as the rate of precipitation increases, the ability and precision of PERSIANN in rainfall detection also increase. - The systematic and random error decomposition in this area shows that PERSIANN has more difficulty in modeling the system and pattern of rainfall than bias due to rainfall uncertainties. The level of systematic error also increases considerably in heavier rainfalls.
It is also important to note that PERSIANN error characteristics vary by season due to the conditions and rainfall patterns of that season, which shows the necessity of a seasonally differentiated approach to the calibration of this product. Overall, we believe that the error component analyses performed in this study can substantially help further local studies on the post-calibration and bias reduction of PERSIANN estimations.
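The systematic/random split referred to above can be sketched with one common decomposition (a least-squares fit of estimates against observations, in the spirit of Willmott's decomposition; the paper's exact method may differ, and the data here are synthetic stand-ins for PERSIANN estimates and gauge observations):

```python
import numpy as np

# Hedged sketch: decompose total mean squared error (MSE) into a
# systematic part (captured by a linear fit of estimates on
# observations) and a random part (scatter about that fit).
rng = np.random.default_rng(42)
obs = rng.gamma(shape=2.0, scale=5.0, size=500)        # "observed" rain (mm/day)
est = 0.6 * obs + 1.5 + rng.normal(0, 2.0, size=500)   # biased, noisy estimates

a, b = np.polyfit(obs, est, 1)               # linear fit: est ~ a*obs + b
fit = a * obs + b
mse_total = np.mean((est - obs) ** 2)
mse_systematic = np.mean((fit - obs) ** 2)   # error explained by the fit
mse_random = np.mean((est - fit) ** 2)       # scatter about the fit

# Because OLS residuals are orthogonal to the fit, the two components
# sum to the total MSE (up to floating-point error).
print(abs(mse_total - (mse_systematic + mse_random)) < 1e-6 * mse_total)
```

A large systematic component, as reported for PERSIANN here, is the part that post-calibration can in principle remove.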

  8. 49 CFR 193.2509 - Emergency procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... plant; (ii) Potential hazards at the plant, including fires; (iii) Communication and emergency control... plant due to operating malfunctions, structural collapse, personnel error, forces of nature, and activities adjacent to the plant. (b) To adequately handle each type of emergency identified under paragraph...

  9. Testing accelerometer rectification error caused by multidimensional composite inputs with double turntable centrifuge.

    PubMed

    Guan, W; Meng, X F; Dong, X M

    2014-12-01

    Rectification error is a critical characteristic of inertial accelerometers. Accelerometers working in operational situations are stimulated by composite inputs, including constant acceleration and vibration, from multiple directions. However, traditional methods for evaluating rectification error use only one-dimensional vibration. In this paper, a double turntable centrifuge (DTC) was utilized to produce constant acceleration and vibration simultaneously, and the rectification error due to the composite accelerations was tested. First, we deduced an expression for the rectification error from the output of the DTC and a static model of the single-axis pendulous accelerometer under test. Theoretical investigation and analysis were carried out in accordance with the rectification error model. A detailed experimental procedure and the testing results are then described. We measured the rectification error with various constant accelerations at different frequencies and amplitudes of the vibration. The experimental results showed the distinct characteristics of the rectification error caused by the composite accelerations, and the linear relation between the constant acceleration and the rectification error was proved. The experimental procedure and results presented here can be referenced for the investigation of the characteristics of accelerometers with multiple inputs.
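    The rectification effect itself follows from a simple static model. The sketch below uses the common simplification a_out = K1·a + K2·a², with illustrative coefficients that are assumptions, not values from the paper: averaging over a vibration cycle, the second-order term turns the AC input into a DC error of K2·B²/2.

    ```python
    import numpy as np

    # Hedged sketch: a second-order nonlinearity rectifies vibration into
    # a DC output shift. Coefficients and input levels are illustrative.
    K1, K2 = 1.0, 1e-4          # scale factor and second-order coefficient
    A, B, f = 5.0, 2.0, 100.0   # constant accel (g), vibration amplitude (g), freq (Hz)
    t = np.linspace(0.0, 1.0, 100_000, endpoint=False)   # 100 full cycles
    a_in = A + B * np.sin(2 * np.pi * f * t)             # composite input
    a_out = K1 * a_in + K2 * a_in**2

    # Mean output minus the static response to A alone is the rectification
    # error; since E[(A + B sin)^2] - A^2 = B^2/2, it equals K2*B^2/2.
    rectification = np.mean(a_out) - (K1 * A + K2 * A**2)
    print(abs(rectification - K2 * B**2 / 2) < 1e-8)
    ```

    The linear dependence on the constant acceleration reported in the paper enters through cross terms once the nonlinearity couples the two inputs.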

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high-fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Among the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high-fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.

  11. Segmented Mirror Image Degradation Due to Surface Dust, Alignment and Figure

    NASA Technical Reports Server (NTRS)

    Schreur, Julian J.

    1999-01-01

    In 1996 an algorithm was developed to include the effects of surface roughness in the calculation of the point spread function of a telescope mirror. This algorithm has been extended to include the effects of alignment errors and figure errors for the individual elements, and an overall contamination by surface dust. The final algorithm builds an array for a guard-banded pupil function of a mirror that may or may not have a central hole, a central reflecting segment, or an outer ring of segments. The central hole, central reflecting segment, and outer ring may be circular or polygonal, and the outer segments may have trimmed corners. The modeled point spread functions show that x-tilt and y-tilt, or the corresponding R-tilt and theta-tilt for a segment in an outer ring, are readily apparent for maximum wavefront errors of 0.1 lambda. A similarly sized piston error is also apparent, but integral-wavelength piston errors are not. Severe piston error introduces a focus error of the opposite sign, so piston could be adjusted to compensate for segments with varying focal lengths. Dust affects the image principally by decreasing the Strehl ratio, or peak intensity of the image. For an eight-meter telescope, a 25% coverage by dust produced a scattered-light intensity of 10(exp -9) of the peak intensity, a level well below detectability.
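    The pupil-to-PSF step underlying this kind of algorithm can be sketched in a few lines (array size, aperture geometry and the random dust mask below are illustrative assumptions, not the paper's parameters): the PSF is |FFT(pupil)|², and zeroing dusted pupil samples lowers the peak, i.e. the Strehl ratio.

    ```python
    import numpy as np

    # Minimal sketch: circular guard-banded pupil, PSF via FFT, and the
    # Strehl loss caused by randomly "dusting" 25% of the aperture.
    N = 256
    y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
    pupil = (np.hypot(x, y) < N // 4).astype(complex)   # circular aperture

    def psf_peak(p):
        """Peak intensity of the point spread function of pupil p."""
        return np.abs(np.fft.fft2(p)).max() ** 2

    rng = np.random.default_rng(1)
    dust = rng.random((N, N)) > 0.25        # True where clean; ~25% dusted
    strehl = psf_peak(pupil * dust) / psf_peak(pupil)

    # Blocking 25% of the collecting area scales the peak by roughly 0.75^2.
    print(0.5 < strehl < 0.62)
    ```

    Tilt and piston errors would enter the same sketch as phase factors multiplying the pupil before the FFT.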

  12. Accurate Micro-Tool Manufacturing by Iterative Pulsed-Laser Ablation

    NASA Astrophysics Data System (ADS)

    Warhanek, Maximilian; Mayr, Josef; Dörig, Christian; Wegener, Konrad

    2017-12-01

    Iterative processing solutions, including multiple cycles of material removal and measurement, can achieve higher geometric accuracy by compensating for most deviations manifesting directly on the workpiece. The remaining error sources are the measurement uncertainty and the repeatability of the material-removal process, including clamping errors. Due to the absence of processing forces, process fluids, and wear, pulsed-laser ablation has demonstrated high repeatability and can be realized directly on a measuring machine. This work takes advantage of this possibility by implementing an iterative, laser-based correction process for profile deviations registered directly on an optical measurement machine. This enables efficient iterative processing that is precise, applicable to all tool materials including diamond, and free of clamping errors. The concept is proven by a prototypical implementation on an industrial tool measurement machine and a nanosecond fibre laser. A number of measurements are performed on both the machine and the processed workpieces. Results show production deviations within a 2 μm diameter tolerance.

  13. Review of medication errors that are new or likely to occur more frequently with electronic medication management systems.

    PubMed

    Van de Vreede, Melita; McGrath, Anne; de Clifford, Jan

    2018-05-14

    Objective. The aim of the present study was to identify and quantify medication errors reportedly related to electronic medication management systems (eMMS) and those considered likely to occur more frequently with eMMS. This included developing a new classification system relevant to eMMS errors. Methods. Eight Victorian hospitals with eMMS participated in a retrospective audit of reported medication incidents from their incident reporting databases between May and July 2014. Site-appointed project officers submitted deidentified incidents they deemed new or likely to occur more frequently due to eMMS, together with the Incident Severity Rating (ISR). The authors reviewed and classified incidents. Results. There were 5826 medication-related incidents reported. In total, 93 (47 prescribing errors, 46 administration errors) were identified as new or potentially related to eMMS. Only one ISR2 (moderate) and no ISR1 (severe or death) errors were reported, so harm to patients in this 3-month period was minimal. The most commonly reported error types were 'human factors' and 'unfamiliarity or training' (70%) and 'cross-encounter or hybrid system errors' (22%). Conclusions. Although the results suggest that the errors reported were of low severity, organisations must remain vigilant to the risk of new errors and avoid the assumption that eMMS is the panacea to all medication error issues. What is known about the topic? eMMS have been shown to reduce some types of medication errors, but it has been reported that some new medication errors have been identified and some are likely to occur more frequently with eMMS. There are few published Australian studies that have reported on medication error types that are likely to occur more frequently with eMMS in more than one organisation and that include administration and prescribing errors. What does this paper add? 
This paper outlines the most commonly reported incident types and proposes a new, simple classification system for eMMS medication errors, which can inform organisations and vendors on possible eMMS improvements. What are the implications for practitioners? The results of the present study highlight to organisations the need for ongoing review of system design, refinement of workflow issues, staff education and training, and reporting and monitoring of errors.

  14. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s, a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables was often lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid-1990s, Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA-sponsored Advanced Concepts grant to assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included a method to identify and prioritize task and contextual characteristics affecting human reliability, along with comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant, FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling, are offered as a means to help direct useful data collection strategies.

  15. Medical errors; causes, consequences, emotional response and resulting behavioral change

    PubMed Central

    Bari, Attia; Khan, Rehan Ahmed; Rathore, Ahsan Waheed

    2016-01-01

    Objective: To determine the causes of medical errors, the emotional and behavioral response of pediatric medicine residents to their medical errors, and the behavior changes affecting their future training. Methods: One hundred thirty postgraduate residents were included in the study. Residents were asked to complete a questionnaire about their errors and responses to their errors in three domains: emotional response, learning behavior, and disclosure of the error. The names of the participants were kept confidential. Data were analyzed using SPSS version 20. Results: A total of 130 residents were included. The majority, 128 (98.5%), described some form of error: 24 (19%) serious errors, 63 (48%) minor errors, 24 (19%) near misses, 2 (2%) never encountered an error, and 17 (12%) did not mention the type of error but mentioned causes and consequences. Only 73 (57%) residents disclosed medical errors to their senior physician, and disclosure to the patient's family was negligible at 15 (11%). Fatigue due to long duty hours (85; 65%), inadequate experience (66; 52%), inadequate supervision (58; 48%) and complex cases (58; 45%) were common causes of medical errors. Negative emotions were common and were significantly associated with lack of knowledge (p=0.001), missing warning signs (p<0.001), not seeking advice (p=0.003) and procedural complications (p=0.001). Medical errors had a significant impact on residents' behavior: 119 (93%) residents became more careful, 109 (86%) increased advice-seeking from seniors, and 109 (86%) started paying more attention to details. Intrinsic causes of errors were significantly associated with increased information-seeking behavior and vigilance (p=0.003 and p=0.01, respectively). Conclusion: Medical errors committed by residents are inadequately disclosed to senior physicians and result in negative emotions, but there was a positive change in residents' behavior, which resulted in improvement in their future training and patient care. PMID:27375682

  16. A steep peripheral ring in irregular cornea topography, real or an instrument error?

    PubMed

    Galindo-Ferreiro, Alicia; Galvez-Ruiz, Alberto; Schellini, Silvana A; Galindo-Alonso, Julio

    2016-01-01

    To demonstrate that the steep peripheral ring (red zone) on corneal topography after myopic laser in situ keratomileusis (LASIK) could possibly be due to instrument error and not always to a real increase in corneal curvature. A spherical model of the corneal surface and modified topography software were used to analyze the cause of an error due to instrument design. This study involved modification of the software of a commercially available topographer. A small modification of the topography image results in a red zone on the corneal topography color map. Corneal modeling indicates that the red zone could be an artifact due to an instrument-induced error. The steep curvature change after LASIK signified by the red zone could thus also be an error due to the plotting algorithms of the corneal topographer, rather than only a real change in curvature.

  17. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and to differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.
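    The idea behind tau can be made concrete with a toy variance budget. The bookkeeping below (total forecast error variance = initial-condition part + model part) is an illustrative simplification, not the paper's two-lead-time derivation, and the numbers are synthetic:

    ```python
    # Hedged sketch of the Talagrand ratio: the fraction of forecast
    # error variance attributable to model error rather than initial
    # condition error, under an assumed additive variance budget.
    def talagrand_ratio(total_var: float, ic_var: float) -> float:
        """tau = (total forecast error var - initial condition error var) / total."""
        model_var = total_var - ic_var
        return model_var / total_var

    # Synthetic example: at some lead time the forecast error variance is
    # 2.5 units, of which 1.5 stems from initial-condition error growth.
    tau = talagrand_ratio(2.5, 1.5)
    print(tau)  # 0.4: 40% of the forecast error variance is model error
    ```

    In practice neither variance is directly observable, which is why the paper derives bounds for tau from error statistics at two lead times.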

  18. Computing in the presence of soft bit errors. [caused by single event upset on spacecraft

    NASA Technical Reports Server (NTRS)

    Rasmussen, R. D.

    1984-01-01

    It is shown that single-event upsets (SEUs) due to cosmic rays are a significant source of single-bit errors in spacecraft computers. The physical mechanism of SEU, electron-hole generation by means of linear energy transfer (LET), is discussed with reference to the results of a study of the environmental effects on the computer systems of the Galileo spacecraft. Techniques for making software more tolerant of cosmic ray effects are considered, including: reducing the number of registers used by the software; continuity testing of variables; redundant execution of major procedures for error detection; and encoding state variables to detect single-bit changes. Attention is also given to design modifications which may reduce the cosmic ray exposure of on-board hardware. These modifications include: shielding components operating in LEO; removing low-power Schottky parts; and the use of CMOS diodes. The SEU parameters of different electronic components are listed in a table.
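    Two of the software techniques listed above are easy to sketch. The code below is a hypothetical illustration, not Galileo flight code: redundant execution of a procedure with comparison of results, and parity-encoded state variables that expose any single-bit change.

    ```python
    # Hedged sketch of SEU-tolerant software techniques: parity-encoded
    # state variables and redundant execution with comparison.

    def parity_encode(value: int) -> int:
        """Append an even-parity bit so any single-bit flip becomes detectable."""
        parity = bin(value).count("1") & 1
        return (value << 1) | parity

    def parity_ok(word: int) -> bool:
        """An even-parity word has an even number of set bits."""
        return bin(word).count("1") % 2 == 0

    def redundant(fn, *args):
        """Run fn twice; a mismatch signals a transient (soft) error."""
        a, b = fn(*args), fn(*args)
        if a != b:
            raise RuntimeError("soft error detected: redundant runs disagree")
        return a

    state = parity_encode(0b1011)
    corrupted = state ^ (1 << 3)        # simulate an SEU flipping one bit
    print(parity_ok(state), parity_ok(corrupted))   # True False
    print(redundant(lambda v: v * v, 7))            # 49
    ```

    Parity detects but cannot locate a flipped bit; flight software would typically fall back to a known-good copy or recompute on detection.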

  19. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  20. Impacts of uncertainties in European gridded precipitation observations on regional climate analysis

    PubMed Central

    Prein, Andreas F; Gobiet, Andreas

    2016-01-01

    Gridded precipitation data sets are frequently used to evaluate climate models or to remove model output biases. Although precipitation data are error prone due to the high spatio‐temporal variability of precipitation and due to considerable measurement errors, relatively few attempts have been made to account for observational uncertainty in model evaluation or in bias correction studies. In this study, we compare three types of European daily data sets featuring two Pan‐European data sets and a set that combines eight very high‐resolution station‐based regional data sets. Furthermore, we investigate seven widely used, larger scale global data sets. Our results demonstrate that the differences between these data sets have the same magnitude as precipitation errors found in regional climate models. Therefore, including observational uncertainties is essential for climate studies, climate model evaluation, and statistical post‐processing. Following our results, we suggest the following guidelines for regional precipitation assessments. (1) Include multiple observational data sets from different sources (e.g. station, satellite, reanalysis based) to estimate observational uncertainties. (2) Use data sets with high station densities to minimize the effect of precipitation undersampling (may induce about 60% error in data sparse regions). The information content of a gridded data set is mainly related to its underlying station density and not to its grid spacing. (3) Consider undercatch errors of up to 80% in high latitudes and mountainous regions. (4) Analyses of small‐scale features and extremes are especially uncertain in gridded data sets. For higher confidence, use climate‐mean and larger scale statistics. In conclusion, neglecting observational uncertainties potentially misguides climate model development and can severely affect the results of climate change impact assessments. PMID:28111497


  2. Validation of prostate-specific antigen laboratory values recorded in Surveillance, Epidemiology, and End Results registries.

    PubMed

    Adamo, Margaret Peggy; Boten, Jessica A; Coyle, Linda M; Cronin, Kathleen A; Lam, Clara J K; Negoita, Serban; Penberthy, Lynne; Stevens, Jennifer L; Ward, Kevin C

    2017-02-15

    Researchers have used prostate-specific antigen (PSA) values collected by central cancer registries to evaluate tumors for potentially aggressive clinical disease. An independent study collecting PSA values suggested a high error rate (18%) related to implied decimal points. To evaluate the error rate in the Surveillance, Epidemiology, and End Results (SEER) program, a comprehensive review of PSA values recorded across all SEER registries was performed. Consolidated PSA values for eligible prostate cancer cases in SEER registries were reviewed and compared with text documentation from abstracted records. Four types of classification errors were identified: implied decimal point errors, abstraction or coding implementation errors, nonsignificant errors, and changes related to "unknown" values. A total of 50,277 prostate cancer cases diagnosed in 2012 were reviewed. Approximately 94.15% of cases did not have meaningful changes (85.85% correct, 5.58% with a nonsignificant change of <1 ng/mL, and 2.80% with no clinical change). Approximately 5.70% of cases had meaningful changes (1.93% due to implied decimal point errors, 1.54% due to abstraction or coding errors, and 2.23% due to errors related to unknown categories). Only 419 of the original 50,277 cases (0.83%) resulted in a change in disease stage due to a corrected PSA value. The implied decimal error rate was only 1.93% of all cases in the current validation study, with a meaningful error rate of 5.81%. The reasons for the lower error rate in SEER are likely the ongoing and rigorous quality control and visual editing processes of the central registries. The SEER program is currently reviewing and correcting PSA values back to 2004 and will re-release these data in the public-use research file. Cancer 2017;123:697-703. © 2016 American Cancer Society.
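    The kind of check described above, comparing a coded PSA value against the value documented in abstracted text, can be sketched as follows. The field layout, regex, and tenfold "implied decimal" heuristic are illustrative assumptions, not the SEER review procedure:

    ```python
    import re

    # Hedged sketch: classify a coded PSA value against the abstracted
    # text, flagging implied-decimal-point errors (e.g. "12.5" coded as 125).
    def classify_psa(coded: float, text: str) -> str:
        m = re.search(r"PSA[^0-9]*([0-9]+(?:\.[0-9]+)?)", text)
        if not m:
            return "unknown"
        documented = float(m.group(1))
        if coded == documented:
            return "correct"
        if abs(coded - 10 * documented) < 1e-9:      # decimal point dropped
            return "implied decimal point error"
        return "abstraction or coding error"

    print(classify_psa(125.0, "pre-op PSA 12.5 ng/mL"))  # implied decimal point error
    print(classify_psa(12.5, "pre-op PSA 12.5 ng/mL"))   # correct
    ```

    In practice, visual editing by registry staff, rather than a single regex, is what keeps the SEER error rate low.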

  3. EEG oscillatory patterns are associated with error prediction during music performance and are altered in musician's dystonia.

    PubMed

    Ruiz, María Herrojo; Strübing, Felix; Jabusch, Hans-Christian; Altenmüller, Eckart

    2011-04-15

    Skilled performance requires the ability to monitor ongoing behavior, detect errors in advance, and modify the performance accordingly. The acquisition of fast predictive mechanisms might be possible due to the extensive training characterizing expert performance. Recent EEG studies of piano performance reported a negative event-related potential (ERP) triggered in the ACC 70 ms before performance errors (pitch errors due to incorrect keypresses). This ERP component, termed pre-error related negativity (pre-ERN), was assumed to reflect processes of error detection in advance. However, some questions remained to be addressed: (i) Does the electrophysiological marker prior to errors reflect an error signal itself, or is it related instead to the implementation of control mechanisms? (ii) Does the posterior frontomedial cortex (pFMC, including ACC) interact with other brain regions to implement control adjustments following motor prediction of an upcoming error? (iii) Can we gain insight into the electrophysiological correlates of error prediction and control by assessing the local neuronal synchronization and phase interaction among neuronal populations? (iv) Finally, are error detection and control mechanisms defective in pianists with musician's dystonia (MD), a focal task-specific dystonia resulting from dysfunction of the basal ganglia-thalamic-frontal circuits? Consequently, we investigated the EEG oscillatory and phase synchronization correlates of error detection and control during piano performances in healthy pianists and in a group of pianists with MD. In healthy pianists, the main outcomes were increased pre-error theta and beta band oscillations over the pFMC and 13-15 Hz phase synchronization between the pFMC and the right lateral prefrontal cortex, which predicted corrective mechanisms. In MD patients, the pattern of phase synchronization appeared in a different frequency band (6-8 Hz) and correlated with the severity of the disorder.
The present findings shed new light on the neural mechanisms that might implement motor prediction by means of forward control processes, as they function in healthy pianists and in their altered form in patients with MD. Copyright © 2010 Elsevier Inc. All rights reserved.

  4. Performance of concatenated Reed-Solomon/Viterbi channel coding

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Yuen, J. H.

    1982-01-01

    The concatenated Reed-Solomon (RS)/Viterbi coding system is reviewed. The performance of the system is analyzed and results are derived with a new, simple approach. A functional model for the input RS symbol error probability is presented. Based on this new functional model, we compute the performance of a concatenated system in terms of RS word error probability, output RS symbol error probability, bit error probability due to decoding failure, and bit error probability due to decoding error. Finally, we analyze the effects of a noisy carrier reference and of slow fading on the system performance.
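The RS word error probability in such an analysis is commonly obtained from the input symbol error probability with a binomial model. A minimal sketch, assuming independent symbol errors (in practice, interleaving is used to break up the bursty errors a Viterbi decoder produces, which is what the paper's functional model refines):

```python
from math import comb

def rs_word_error_prob(n, k, p_sym):
    """Probability that an RS(n, k) word is undecodable, assuming
    independent symbol errors with probability p_sym.  The code
    corrects up to t = (n - k) // 2 symbol errors per word."""
    t = (n - k) // 2
    return sum(comb(n, i) * p_sym**i * (1 - p_sym)**(n - i)
               for i in range(t + 1, n + 1))

# RS(255, 223) over GF(256), the classic deep-space concatenated code
p_word = rs_word_error_prob(255, 223, 0.01)
```

With a 1% symbol error rate the 16-error-correcting RS(255, 223) word error probability is already vanishingly small, which is the point of the concatenation.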

  5. Minimization of the hole overcut and cylindricity errors during rotary ultrasonic drilling of Ti-6Al-4V

    NASA Astrophysics Data System (ADS)

    Nasr, M.; Anwar, S.; El-Tamimi, A.; Pervaiz, S.

    2018-04-01

    Titanium and its alloys, e.g. Ti6Al4V, have widespread applications in the aerospace, automotive and medical industries. At the same time, titanium and its alloys are regarded as difficult-to-machine materials due to their high strength and low thermal conductivity. Significant efforts have been expended to improve the accuracy of machining processes for Ti6Al4V. The current study presents the use of the rotary ultrasonic drilling (RUD) process for machining high-quality holes in Ti6Al4V. The study takes into account the effects of the main RUD input parameters, including spindle speed, ultrasonic power, feed rate and tool diameter, on the key output responses related to the accuracy of the drilled holes, including cylindricity and overcut errors. Analysis of variance (ANOVA) was employed to study the influence of the input parameters on cylindricity and overcut error. Later, regression models were developed to find the optimal set of input parameters to minimize the cylindricity and overcut errors.

  6. The pot calling the kettle black: the extent and type of errors in a computerized immunization registry and by parent report.

    PubMed

    MacDonald, Shannon E; Schopflocher, Donald P; Golonka, Richard P

    2014-01-04

    Accurate classification of children's immunization status is essential for clinical care, administration and evaluation of immunization programs, and vaccine program research. Computerized immunization registries have been proposed as a valuable alternative to provider paper records or parent report, but there is a need to better understand the challenges associated with their use. This study assessed the accuracy of immunization status classification in an immunization registry as compared to parent report and determined the number and type of errors occurring in both sources. This study was a sub-analysis of a larger study which compared the characteristics of children whose immunizations were up to date (UTD) at two years as compared to those not UTD. Children's immunization status was initially determined from a population-based immunization registry, and then compared to parent report of immunization status, as reported in a postal survey. Discrepancies between the two sources were adjudicated by review of immunization providers' hard-copy clinic records. Descriptive analyses included calculating proportions and confidence intervals for errors in classification and reporting of the type and frequency of errors. Among the 461 survey respondents, there were 60 discrepancies in immunization status. The majority of errors were due to parent report (n = 44), but the registry was not without fault (n = 16). Parents tended to erroneously report their child as UTD, whereas the registry was more likely to wrongly classify children as not UTD. Reasons for registry errors included failure to account for varicella disease history, variable number of doses required due to age at series initiation, and doses administered out of the region. These results confirm that parent report is often flawed, but also identify that registries are prone to misclassification of immunization status. 
Immunization program administrators and researchers need to institute measures to identify and reduce misclassification, in order for registries to play an effective role in the control of vaccine-preventable disease.
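The descriptive step, "calculating proportions and confidence intervals for errors," can be sketched on the study's own numbers. A minimal example using the normal-approximation (Wald) interval; the paper does not state which interval it used, so this is an assumption:

```python
from math import sqrt

def wald_ci(successes, n, z=1.96):
    """Point estimate and 95% Wald confidence interval for a proportion."""
    p = successes / n
    half = z * sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# 60 discrepancies among 461 survey respondents
p, lo, hi = wald_ci(60, 461)   # roughly 13%, CI about 10%-16%
```

For proportions near 0 or 1, or small samples, the Wilson score interval is the safer choice.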

  7. The pot calling the kettle black: the extent and type of errors in a computerized immunization registry and by parent report

    PubMed Central

    2014-01-01

    Background Accurate classification of children’s immunization status is essential for clinical care, administration and evaluation of immunization programs, and vaccine program research. Computerized immunization registries have been proposed as a valuable alternative to provider paper records or parent report, but there is a need to better understand the challenges associated with their use. This study assessed the accuracy of immunization status classification in an immunization registry as compared to parent report and determined the number and type of errors occurring in both sources. Methods This study was a sub-analysis of a larger study which compared the characteristics of children whose immunizations were up to date (UTD) at two years as compared to those not UTD. Children’s immunization status was initially determined from a population-based immunization registry, and then compared to parent report of immunization status, as reported in a postal survey. Discrepancies between the two sources were adjudicated by review of immunization providers’ hard-copy clinic records. Descriptive analyses included calculating proportions and confidence intervals for errors in classification and reporting of the type and frequency of errors. Results Among the 461 survey respondents, there were 60 discrepancies in immunization status. The majority of errors were due to parent report (n = 44), but the registry was not without fault (n = 16). Parents tended to erroneously report their child as UTD, whereas the registry was more likely to wrongly classify children as not UTD. Reasons for registry errors included failure to account for varicella disease history, variable number of doses required due to age at series initiation, and doses administered out of the region. Conclusions These results confirm that parent report is often flawed, but also identify that registries are prone to misclassification of immunization status. 
Immunization program administrators and researchers need to institute measures to identify and reduce misclassification, in order for registries to play an effective role in the control of vaccine-preventable disease. PMID:24387002

  8. In the Aftermath: Attitudes of Anesthesiologists to Supportive Strategies After an Unexpected Intraoperative Patient Death.

    PubMed

    Heard, Gaylene C; Thomas, Rowan D; Sanderson, Penelope M

    2016-05-01

    Although most anesthesiologists will have 1 catastrophic perioperative event or more during their careers, there has been little research on their attitudes to assistive strategies after the event. There are wide-ranging emotional consequences for anesthesiologists involved in an unexpected intraoperative patient death, particularly if the anesthesiologist made an error. We used a between-groups survey study design to ask whether there are different attitudes to assistive strategies when a hypothetical patient death is caused by a drug error versus not caused by an error. First, we explored attitudes to generalized supportive strategies. Second, we examined our hypothesis that the presence of an error causing the hypothetical patient death would increase the perceived social stigma and self-stigma of help-seeking. Finally, we examined the strategies to assist help-seeking. An anonymous, mailed, self-administered survey was conducted with 1600 consultant anesthesiologists in Australia on the mailing list of the Australian and New Zealand College of Anaesthetists. The participants were randomized into "error" versus "no-error" groups for the hypothetical scenario of patient death due to anaphylaxis. Nonparametric, descriptive, parametric, and inferential tests were used for data analysis. P' is used where P values were corrected for multiple comparisons. There was a usable response rate of 48.9%. When an error had caused the hypothetical patient death, participants were more likely to agree with 4 of the 5 statements about support, including need for time off (P' = 0.003), counseling (P' < 0.001), a formal strategy for assistance (P' < 0.001), and the anesthesiologist not performing further cases that day (P' = 0.047). There were no differences between groups in perceived self-stigma (P = 0.98) or social stigma (P = 0.15) of seeking counseling, whether or not an error had caused the hypothetical patient death. 
Finally, when an error had caused the patient death, participants were more likely to agree with 2 of the 5 statements about help-seeking, including the need for a formal, hospital-based process that provides information on where to obtain professional counseling (P' = 0.006) and the availability of after-hours counseling services (P' = 0.035). Our participants were more likely to agree with assistive strategies such as not performing further work that day, time off, counseling, formal support strategies, and availability of after-hours counseling services, when the hypothetical patient death from anaphylaxis was due to an error. The perceived stigma toward attending counseling was not affected by the presence or absence of an error as the cause of the patient death, disproving our hypothesis.

  9. Magnetic heading reference

    NASA Technical Reports Server (NTRS)

    Garner, H. D. (Inventor)

    1977-01-01

    This invention employs a magnetometer as a magnetic heading reference for a vehicle such as a small aircraft. The magnetometer is mounted on a directional dial in the aircraft in the vicinity of the pilot such that it is free to turn with the dial about the yaw axis of the aircraft. The invention includes a circuit for generating a signal proportional to the northerly turning error produced in the magnetometer due to the vertical component of the earth's magnetic field. This generated signal is then subtracted from the output of the magnetometer to compensate for the northerly turning error.

  10. Curated eutherian third party data gene data sets.

    PubMed

    Premzl, Marko

    2016-03-01

    The freely available eutherian genomic sequence data sets advanced the scientific field of genomics. Of note, future revisions of gene data sets were expected, due to the incompleteness of public eutherian genomic sequence assemblies and potential genomic sequence errors. The eutherian comparative genomic analysis protocol was proposed as guidance in protection against potential genomic sequence errors in public eutherian genomic sequences. The protocol was applicable in updates of 7 major eutherian gene data sets, including 812 complete coding sequences deposited in the European Nucleotide Archive as curated third party data gene data sets.

  11. Erratum: "Discovery of a Second Millisecond Accreting Pulsar: XTE J1751-305"

    NASA Technical Reports Server (NTRS)

    Markwardt, Craig; Swank, J. H.; Strohmayer, T. E.; in 'tZand, J. J. M.; Marshall, F. E.

    2007-01-01

    The original Table 1 ("Timing Parameters of XTE J1751-305") contains one error. The epoch of pulsar mean longitude 90deg is incorrect due to a numerical conversion error in the preparation of the original table text. A corrected version of Table 1 is shown. For reference, the epoch of the ascending node is also included. The correct value was used in all of the analysis leading up to the paper. As T(sub 90) is a purely fiducial reference time, the scientific conclusions of the paper are unchanged.

  12. Primer ID Validates Template Sampling Depth and Greatly Reduces the Error Rate of Next-Generation Sequencing of HIV-1 Genomic RNA Populations

    PubMed Central

    Zhou, Shuntai; Jones, Corbin; Mieczkowski, Piotr

    2015-01-01

    ABSTRACT Validating the sampling depth and reducing sequencing errors are critical for studies of viral populations using next-generation sequencing (NGS). We previously described the use of Primer ID to tag each viral RNA template with a block of degenerate nucleotides in the cDNA primer. We now show that low-abundance Primer IDs (offspring Primer IDs) are generated due to PCR/sequencing errors. These artifactual Primer IDs can be removed using a cutoff model for the number of reads required to make a template consensus sequence. We have modeled the fraction of sequences lost due to Primer ID resampling. For a typical sequencing run, less than 10% of the raw reads are lost to offspring Primer ID filtering and resampling. The remaining raw reads are used to correct for PCR resampling and sequencing errors. We also demonstrate that Primer ID reveals bias intrinsic to PCR, especially at low template input or utilization. cDNA synthesis and PCR convert ca. 20% of RNA templates into recoverable sequences, and 30-fold sequence coverage recovers most of these template sequences. We have directly measured the residual error rate to be around 1 in 10,000 nucleotides. We use this error rate and the Poisson distribution to define the cutoff to identify preexisting drug resistance mutations at low abundance in an HIV-infected subject. Collectively, these studies show that >90% of the raw sequence reads can be used to validate template sampling depth and to dramatically reduce the error rate in assessing a genetically diverse viral population using NGS. IMPORTANCE Although next-generation sequencing (NGS) has revolutionized sequencing strategies, it suffers from serious limitations in defining sequence heterogeneity in a genetically diverse population, such as HIV-1, due to PCR resampling and PCR/sequencing errors. The Primer ID approach reveals the true sampling depth and greatly reduces errors. 
Knowing the sampling depth allows the construction of a model of how to maximize the recovery of sequences from input templates and to reduce resampling of the Primer ID so that appropriate multiplexing can be included in the experimental design. With the defined sampling depth and measured error rate, we are able to assign cutoffs for the accurate detection of minority variants in viral populations. This approach allows the power of NGS to be realized without having to guess about sampling depth or to ignore the problem of PCR resampling, while also being able to correct most of the errors in the data set. PMID:26041299
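The cutoff idea, combining the measured residual error rate (about 1 in 10,000 nucleotides) with a Poisson model, can be sketched as follows. The specific decision rule here is an assumption for illustration, not the paper's published procedure:

```python
from math import exp, factorial

def poisson_cutoff(n_templates, error_rate=1e-4, alpha=0.05):
    """Smallest count of a variant at one position that is unlikely
    (tail probability < alpha) to arise from sequencing error alone,
    under a Poisson model with lam = n_templates * error_rate."""
    lam = n_templates * error_rate
    cdf, k = 0.0, 0
    while True:
        cdf += exp(-lam) * lam**k / factorial(k)
        if 1.0 - cdf < alpha:   # P(X > k) < alpha
            return k + 1        # seeing k+1 or more is unlikely by chance
        k += 1
```

With 10,000 template consensus sequences (lam = 1), a variant observed 4 or more times at a position exceeds this 5% error-only threshold.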

  13. Effects of Cloud on Goddard Lidar Observatory for Wind (GLOW) Performance and Analysis of Associated Errors

    NASA Astrophysics Data System (ADS)

    Bacha, Tulu

    The Goddard Lidar Observatory for Wind (GLOW), a mobile direct-detection Doppler lidar based on molecular backscattering for measurement of wind in the troposphere and lower stratosphere, was operated and its errors characterized. It was operated at the Howard University Beltsville Center for Climate Observation System (BCCOS) side by side with other instruments: the NASA/Langley Research Center Validation Lidar (VALIDAR), the Leosphere WLS70, and other standard wind-sensing instruments. The performance of GLOW is presented for various cloud conditions, including clear and cloudy sky regions, and compared to that of VALIDAR. The performance degradation due to the presence of cirrus clouds is quantified by comparing the wind speed error to cloud thickness, expressed in terms of aerosol backscatter ratio (ASR) and cloud optical depth (COD). ASR and COD are determined from the Howard University Raman Lidar (HURL) operating at the same station as GLOW. The wind speed error of GLOW was correlated with COD and ASR, yielding a weak linear relationship. Finally, the wind speed measurements of GLOW were corrected using this quantitative relation. Using ASR reduced the GLOW wind error from 19% to 8% in a thin cirrus cloud and from 58% to 28% in a relatively thick cloud. After correcting for cloud-induced error, the remaining error is due to shot noise and atmospheric variability. Shot-noise error, the statistical random error of the backscattered photons detected by the photomultiplier tube (PMT), can only be minimized by averaging a large number of recorded profiles. 
The atmospheric backscatter measured by GLOW along its line-of-sight direction is also used to analyze the error due to atmospheric variability within the measurement volume. GLOW scans in five directions (vertical and at elevation angles of 45° to the north, south, east, and west) to generate wind profiles. The non-uniformity of the atmosphere across these scanning directions is a factor contributing to the measurement error of GLOW, since atmospheric variability in the scanning region leads to differences in the intensity of the backscattered signals between directions. Taking the ratio of the north (east) to south (west) signals and comparing the statistical differences leads to a weak linear relation between atmospheric variability and line-of-sight wind speed differences. This relation was used to make a correction which reduced this error by about 50%.

  14. Triangulation Error Analysis for the Barium Ion Cloud Experiment. M.S. Thesis - North Carolina State Univ.

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1973-01-01

    The triangulation method developed specifically for the Barium Ion Cloud Project is discussed. Expression for the four displacement errors, the three slope errors, and the curvature error in the triangulation solution due to a probable error in the lines-of-sight from the observation stations to points on the cloud are derived. The triangulation method is then used to determine the effect of the following on these different errors in the solution: the number and location of the stations, the observation duration, east-west cloud drift, the number of input data points, and the addition of extra cameras to one of the stations. The pointing displacement errors, and the pointing slope errors are compared. The displacement errors in the solution due to a probable error in the position of a moving station plus the weighting factors for the data from the moving station are also determined.
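The geometry underlying the displacement-error analysis can be illustrated in two dimensions. A sketch (station positions and azimuth angles are hypothetical) of locating a point from two lines of sight, so that a small error in either azimuth displaces the solution:

```python
from math import tan, radians

def triangulate_2d(x1, x2, az1_deg, az2_deg):
    """Intersect two lines of sight from stations at (x1, 0) and
    (x2, 0), each defined by its angle from the x-axis.  A small
    angular error at either station shifts the intersection point,
    which is the displacement error the thesis propagates."""
    t1, t2 = tan(radians(az1_deg)), tan(radians(az2_deg))
    x = (x2 * t2 - x1 * t1) / (t2 - t1)
    y = t1 * (x - x1)
    return x, y

# stations at x = 0 and x = 10 sighting a point at (5, 5)
x, y = triangulate_2d(0.0, 10.0, 45.0, 135.0)
```

Differencing the solutions for az1_deg and az1_deg + delta gives exactly the kind of per-station sensitivity the error expressions capture.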

  15. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    NASA Technical Reports Server (NTRS)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used, as the CAST software is unavailable. The main source of spacecraft dynamics error in the higher fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. Then, this signal generation model is included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. This signal generation model has the same characteristics (mean, variance and power spectral density) as the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher fidelity spacecraft dynamics modeling from the CAST software.
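A signal generation model of this kind can, in its simplest form, be white Gaussian noise matched to the target statistics. A minimal sketch (matching the power spectral density as well, as the paper does, would additionally pass this white sequence through a shaping filter):

```python
import random

def estimation_error_model(n, mean, std, seed=0):
    """Synthetic estimation-error time series with a target mean and
    standard deviation; white (flat PSD) by construction."""
    rng = random.Random(seed)
    return [mean + std * rng.gauss(0.0, 1.0) for _ in range(n)]

# e.g. a zero-mean attitude estimation error sequence
samples = estimation_error_model(10000, 0.0, 0.05)
```

The generated sequence can then be injected at the estimator output of the lower-fidelity simulation, which is the substitution the paper describes.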

  16. Evaluating the prevalence and impact of examiner errors on the Wechsler scales of intelligence: A meta-analysis.

    PubMed

    Styck, Kara M; Walsh, Shana M

    2016-01-01

    The purpose of the present investigation was to conduct a meta-analysis of the literature on examiner errors for the Wechsler scales of intelligence. Results indicate that a mean of 99.7% of protocols contained at least 1 examiner error when studies that included a failure to record examinee responses as an error were combined and a mean of 41.2% of protocols contained at least 1 examiner error when studies that ignored errors of omission were combined. Furthermore, graduate student examiners were significantly more likely to make at least 1 error on Wechsler intelligence test protocols than psychologists. However, psychologists made significantly more errors per protocol than graduate student examiners regardless of the inclusion or exclusion of failure to record examinee responses as errors. On average, 73.1% of Full-Scale IQ (FSIQ) scores changed as a result of examiner errors, whereas 15.8%-77.3% of scores on the Verbal Comprehension Index (VCI), Perceptual Reasoning Index (PRI), Working Memory Index (WMI), and Processing Speed Index changed as a result of examiner errors. In addition, results suggest that examiners tend to overestimate FSIQ scores and underestimate VCI scores. However, no strong pattern emerged for the PRI and WMI. It can be concluded that examiner errors occur frequently and impact index and FSIQ scores. Consequently, current estimates for the standard error of measurement of popular IQ tests may not adequately capture the variance due to the examiner. (c) 2016 APA, all rights reserved.

  17. Negligence, genuine error, and litigation

    PubMed Central

    Sohn, David H

    2013-01-01

    Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment into more rational oversight systems, such as health courts or no-fault systems may reap both quantitative and qualitative benefits for a less costly and safer health system. PMID:23426783

  18. Method and apparatus for correcting eddy current signal voltage for temperature effects

    DOEpatents

    Kustra, Thomas A.; Caffarel, Alfred J.

    1990-01-01

    An apparatus and method for measuring physical characteristics of an electrically conductive material by the use of eddy-current techniques and compensating measurement errors caused by changes in temperature includes a switching arrangement connected between primary and reference coils of an eddy-current probe which allows the probe to be selectively connected between an eddy current output oscilloscope and a digital ohm-meter for measuring the resistances of the primary and reference coils substantially at the time of eddy current measurement. In this way, changes in resistance due to temperature effects can be completely taken into account in determining the true error in the eddy current measurement. The true error can consequently be converted into an equivalent eddy current measurement correction.
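The compensation step reduces to subtracting a resistance-derived temperature component from the raw error signal. A sketch, where the calibration factor mv_per_ohm is a hypothetical placeholder for a probe-specific constant:

```python
def compensated_eddy_error(measured_error_mv, r_coil_ohm, r_ref_ohm, mv_per_ohm):
    """Remove the temperature-induced component of an eddy-current
    error signal, inferred from the probe coil's resistance change
    relative to its reference value."""
    temperature_component_mv = (r_coil_ohm - r_ref_ohm) * mv_per_ohm
    return measured_error_mv - temperature_component_mv

# coil warmed by 0.2 ohm; 2.5 mV of apparent error per ohm of drift
true_error = compensated_eddy_error(5.0, 10.2, 10.0, 2.5)
```

Measuring the coil resistances at the time of the eddy-current reading, as the patent's switching arrangement allows, is what makes the subtraction valid.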

  19. The precision of a special purpose analog computer in clinical cardiac output determination.

    PubMed Central

    Sullivan, F J; Mroz, E A; Miller, R E

    1975-01-01

    Three hundred dye-dilution curves taken during our first year of clinical experience with the Waters CO-4 cardiac output computer were analyzed to estimate the errors involved in its use. Provided that calibration is accurate and 5.0 mg of dye are injected for each curve, the percentage standard deviation of measurement using this computer is about 8.7%. Included in this are the errors inherent in the computer, errors due to baseline drift, errors in the injection of dye, and actual variation of cardiac output over a series of successive determinations. The size of this error is comparable to that involved in manual calculation. The mean value of five successive curves will be within 10% of the real value in 99 cases out of 100. Advances in methodology and equipment are discussed which make calibration simpler and more accurate, and which should also improve the quality of computer determination. A list of suggestions is given to minimize the errors involved in the clinical use of this equipment. PMID:1089394
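The "within 10% in 99 cases out of 100" claim follows directly from the single-curve figure; a quick arithmetic check:

```python
from math import sqrt

sd_single = 8.7                    # percentage SD of a single curve
se_mean5 = sd_single / sqrt(5)     # SE of the mean of 5 successive curves
half_width_99 = 2.58 * se_mean5    # 99% normal-interval half-width, in %
```

The half-width comes out at almost exactly 10%, consistent with the stated precision of a five-curve average.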

  20. Object permanence in adult common marmosets (Callithrix jacchus): not everything is an "A-not-B" error that seems to be one.

    PubMed

    Kis, Anna; Gácsi, Márta; Range, Friederike; Virányi, Zsófia

    2012-01-01

    In this paper, we describe a behaviour pattern similar to the "A-not-B" error found in human infants and young apes in a monkey species, the common marmoset (Callithrix jacchus). In contrast to the classical explanation, recently it has been suggested that the "A-not-B" error committed by human infants is at least partially due to misinterpretation of the hider's ostensively communicated object hiding actions as potential 'teaching' demonstrations during the A trials. We tested whether this so-called Natural Pedagogy hypothesis would account for the A-not-B error that marmosets commit in a standard object permanence task, but found no support for the hypothesis in this species. Alternatively, we present evidence that lower-level mechanisms, such as attention and motivation, play an important role in committing the "A-not-B" error in marmosets. We argue that these simple mechanisms might contribute to the effect of undeveloped object representational skills in other species, including young non-human primates that commit the A-not-B error.

  1. Human factors in aircraft incidents - Results of a 7-year study (Andre Allard Memorial Lecture)

    NASA Technical Reports Server (NTRS)

    Billings, C. E.; Reynard, W. D.

    1984-01-01

    It is pointed out that nearly all fatal aircraft accidents are preventable, and that most such accidents are due to human error. The present discussion is concerned with the results of a seven-year study of the data collected by the NASA Aviation Safety Reporting System (ASRS). The Aviation Safety Reporting System was designed to stimulate as large a flow as possible of information regarding errors and operational problems in the conduct of air operations. It was implemented in April, 1976. In the following 7.5 years, 35,000 reports have been received from pilots, controllers, and the armed forces. Human errors are found in more than 80 percent of these reports. Attention is given to the types of events reported, possible causal factors in incidents, the relationship of incidents and accidents, and sources of error in the data. ASRS reports include sufficient detail to permit authorities to institute changes in the national aviation system designed to minimize the likelihood of human error, and to insulate the system against the effects of errors.

  2. Qualitative and quantitative assessment of Illumina's forensic STR and SNP kits on MiSeq FGx™.

    PubMed

    Sharma, Vishakha; Chow, Hoi Yan; Siegel, Donald; Wurmbach, Elisa

    2017-01-01

    Massively parallel sequencing (MPS) is a powerful tool transforming DNA analysis in multiple fields ranging from medicine, to environmental science, to evolutionary biology. In forensic applications, MPS offers the ability to significantly increase the discriminatory power of human identification as well as aid in mixture deconvolution. However, before the benefits of any new technology can be realized, its quality, consistency, sensitivity, and specificity must be rigorously evaluated in order to gain a detailed understanding of the technique, including sources of error, error rates, and other restrictions/limitations. This extensive study assessed the performance of Illumina's MiSeq FGx MPS system and ForenSeq™ kit in nine experimental runs including 314 reaction samples. In-depth data analysis evaluated the consequences of different assay conditions on test results. Variables included: sample numbers per run, targets per run, DNA input per sample, and replications. Results are presented as heat maps revealing patterns for each locus. Data analysis focused on read numbers (allele coverage), drop-outs, drop-ins, and sequence analysis. The study revealed that loci with high read numbers performed better and resulted in fewer drop-outs and well balanced heterozygous alleles. Several loci were prone to drop-outs, which led to falsely typed homozygotes and therefore to genotype errors. Sequence analysis of allele drop-in typically revealed a single nucleotide change (deletion, insertion, or substitution). Analyses of sequences, no template controls, and spurious alleles suggest no contamination during library preparation, pooling, and sequencing, but indicate that sequencing or PCR errors may have occurred due to DNA polymerase infidelities. 
Finally, we found that utilizing Illumina's FGx System at recommended conditions does not guarantee complete results for all samples tested, including the positive control, and required manual editing due to low read numbers and/or allele drop-in. These findings are important for progressing towards implementation of MPS in forensic DNA testing.

  3. Subaperture test of wavefront error of large telescopes: error sources and stitching performance simulations

    NASA Astrophysics Data System (ADS)

    Chen, Shanyong; Li, Shengyi; Wang, Guilin

    2014-11-01

    The wavefront error of large telescopes needs to be measured to check the system quality and also to estimate the misalignment of the telescope optics, including the primary, the secondary and so on. It is usually realized by a focal plane interferometer and an autocollimator flat (ACF) of the same aperture as the telescope. However, this is challenging for meter-class telescopes due to the high cost and technological challenges in producing the large ACF. Subaperture testing with a smaller ACF is hence proposed in combination with advanced stitching algorithms. Major error sources include the surface error of the ACF, misalignment of the ACF and measurement noise. Different error sources have different impacts on the wavefront error. Basically, the surface error of the ACF behaves like systematic error, and the astigmatism will be accumulated and enlarged if the azimuth of the subapertures remains fixed. It is difficult to accurately calibrate the ACF because it suffers considerable deformation induced by gravity or mechanical clamping force. Therefore a self-calibrated stitching algorithm is employed to separate the ACF surface error from the subaperture wavefront error. We suggest the ACF be rotated around the optical axis of the telescope for the subaperture test. The algorithm is also able to correct the subaperture tip-tilt based on the overlapping consistency. Since all subaperture measurements are obtained in the same imaging plane, the lateral shift of the subapertures is always known and the real overlapping points can be recognized in this plane. Therefore lateral positioning error of the subapertures has no impact on the stitched wavefront. In contrast, the angular positioning error changes the azimuth of the ACF and hence the systematic error. We propose an angularly uneven layout of subapertures to minimize the stitching error, which runs counter to conventional practice. 
Finally, measurement noise cannot be corrected, but it can be suppressed by averaging and environmental control. We simulate the performance of the stitching algorithm in dealing with the surface error and misalignment of the ACF, as well as noise suppression, which provides guidelines for the optomechanical design of the stitching test system.
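The azimuth-averaging argument above can be illustrated with a toy numerical sketch. The astigmatism model, amplitude, and angles below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Toy model: the ACF surface error is pure astigmatism,
# Z = a * cos(2*theta + 2*phi), where phi is the ACF azimuth for a subaperture.
# With a fixed azimuth the astigmatism adds coherently across subapertures;
# rotating the ACF evenly about the optical axis averages it toward zero.

def mean_astigmatism(azimuths_rad, a=1.0, theta=0.3):
    """Average astigmatism contribution at field angle theta over subapertures."""
    return np.mean([a * np.cos(2 * theta + 2 * phi) for phi in azimuths_rad])

fixed = mean_astigmatism([0.0] * 8)                                   # azimuth never changes
rotated = mean_astigmatism(np.linspace(0, np.pi, 8, endpoint=False))  # evenly rotated

print(abs(fixed) > 0.5, abs(rotated) < 1e-9)
```

With eight evenly rotated azimuths the astigmatism phases span a full period and cancel, while the fixed-azimuth case retains the full systematic contribution.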

  4. Comparing errors in ED computer-assisted vs conventional pediatric drug dosing and administration.

    PubMed

    Yamamoto, Loren; Kanemori, Joan

    2010-06-01

Compared to fixed-dose single-vial drug administration in adults, pediatric drug dosing and administration require a series of calculations, all of which are potentially error prone. The purpose of this study is to compare error rates and task completion times for common pediatric medication scenarios using computer program assistance vs conventional methods. Two versions of a 4-part paper-based test were developed. Each part consisted of a set of medication administration and/or dosing tasks. Emergency department and pediatric intensive care unit nurse volunteers completed these tasks using both methods (sequence assigned to start with a conventional or a computer-assisted approach). Completion times, errors, and the reason for each error were recorded. Thirty-eight nurses completed the study. Summing the completion of all 4 parts, the mean conventional total time was 1243 seconds vs the mean computer program total time of 879 seconds (P < .001). The conventional manual method had a mean of 1.8 errors vs the computer program with a mean of 0.7 errors (P < .001). Of the 97 total errors, 36 were due to misreading the drug concentration on the label, 34 were due to calculation errors, and 8 were due to misplaced decimals. Of the 36 label interpretation errors, 18 (50%) occurred with digoxin or insulin. Computerized assistance reduced errors and the time required for drug administration calculations. A pattern of errors emerged: reading and interpreting certain drug labels was more error prone. Optimizing the layout of drug labels could reduce the error rate for error-prone labels. Copyright (c) 2010 Elsevier Inc. All rights reserved.
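The calculation chain the study describes can be sketched as a minimal weight-based dose calculator. The drug, dose, and concentration values are hypothetical placeholders for illustration, not clinical guidance:

```python
# Minimal sketch of the kind of computer assistance described: compute a
# weight-based dose and the volume to draw up from a given concentration.

def dose_volume_ml(weight_kg, dose_per_kg, concentration_per_ml, max_dose=None):
    """Return (dose, volume in mL) for a weight-based pediatric dose."""
    dose = weight_kg * dose_per_kg
    if max_dose is not None:
        dose = min(dose, max_dose)          # cap at a maximum dose if supplied
    return dose, dose / concentration_per_ml

# e.g. a hypothetical drug dosed at 0.1 mg/kg, supplied as 0.4 mg/mL:
dose, volume = dose_volume_ml(weight_kg=12, dose_per_kg=0.1, concentration_per_ml=0.4)
print(round(dose, 2), round(volume, 2))   # 1.2 (mg) drawn up as 3.0 (mL)
```

Automating exactly this chain (weight × dose rate, cap, divide by concentration) removes the manual arithmetic and decimal-placement steps where most of the observed errors occurred.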

  5. Dynamic modelling and estimation of the error due to asynchronism in a redundant asynchronous multiprocessor system

    NASA Technical Reports Server (NTRS)

    Huynh, Loc C.; Duval, R. W.

    1986-01-01

The use of Redundant Asynchronous Multiprocessor Systems to achieve ultrareliable fault-tolerant control systems shows great promise. Development has been hampered by the inability to determine whether differences in the outputs of redundant CPUs are due to failures or to accrued error built up by slight differences in CPU clock intervals. This study derives an analytical dynamic model of the difference between redundant CPUs due to differences in their clock intervals and uses this model with on-line parameter identification to identify the differences in the clock intervals. This methodology accurately tracks errors due to asynchronism and generates an error signal with the effect of asynchronism removed; this signal may be used to detect and isolate actual system failures.
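The core idea, outputs diverging linearly because of a clock-interval mismatch, and identification of that mismatch so it can be subtracted out, can be sketched with a toy model. The signal, rates, and fitting approach below are illustrative assumptions, not the paper's dynamic model:

```python
import numpy as np

# Two redundant CPUs sample a ramp signal on slightly different clock
# intervals, so their outputs diverge linearly even with no failure.
# Fitting the divergence identifies the relative clock-interval difference,
# which can then be removed from the comparison signal.

T = 0.01                 # nominal clock interval (s)
delta = 1e-4             # relative clock-interval difference between CPUs
n = np.arange(1000)
ramp_rate = 2.0          # the sampled signal is s(t) = ramp_rate * t

out_a = ramp_rate * (n * T)
out_b = ramp_rate * (n * T * (1 + delta))
diff = out_b - out_a     # accrued error due to asynchronism, grows linearly

# identify delta from the slope of diff vs. time
slope = np.polyfit(n * T, diff, 1)[0]
delta_hat = slope / ramp_rate
residual = diff - ramp_rate * delta_hat * (n * T)   # asynchronism removed
print(round(delta_hat, 6), float(np.max(np.abs(residual))) < 1e-6)
```

After the identified asynchronism term is subtracted, the residual comparison signal is essentially zero, so any remaining difference would indicate an actual failure.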

  6. 13Check_RNA: A tool to evaluate 13C chemical shifts assignments of RNA.

    PubMed

    Icazatti, A A; Martin, O A; Villegas, M; Szleifer, I; Vila, J A

    2018-06-19

Chemical shifts (CS) are an important source of structural information for macromolecules such as RNA. In addition to the scarce availability of CS for RNA, the observed values are prone to errors due to wrong re-calibration or mis-assignments. Different groups have dedicated their efforts to correcting systematic CS errors in RNA. Despite this, there are no automated, freely available tools to correct the assignments of RNA 13C CS before their deposition to the BMRB, or to re-reference already deposited CS with systematic errors. Based on an existing method, we have implemented an open-source Python module to correct systematic errors in the 13C CS (from here on 13Cexp) of RNAs and return the results in three formats, including NMR-STAR. This software is available on GitHub at https://github.com/BIOS-IMASL/13Check_RNA under a MIT license. Supplementary data are available at Bioinformatics online.
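The re-referencing idea, estimating and removing a constant systematic offset in observed shifts, can be sketched as follows. The reference values are invented for illustration, and the actual 13Check_RNA algorithm is more involved:

```python
import numpy as np

# Observed 13C shifts offset by a constant calibration error can be corrected
# by a robust estimate (median) of the difference against expected values for
# the same nuclei. The "expected" numbers here are made up, not RNA statistics.

expected = np.array([92.5, 75.1, 72.3, 140.4, 152.8])   # hypothetical reference shifts (ppm)
observed = expected + 2.37 + np.random.default_rng(0).normal(0, 0.05, 5)

offset = np.median(observed - expected)     # estimate of the systematic error
corrected = observed - offset
print(round(offset, 2))
```

The median makes the offset estimate robust to a few genuinely mis-assigned nuclei, which would otherwise bias a simple mean difference.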

  7. 77 FR 41699 - Transportation of Household Goods in Interstate Commerce; Consumer Protection Regulations...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-16

    ... due Revision due to agency Collection Old burden to error error (old-- error) IC1: ``Ready to Move... Revisions of Estimates of Annual Costs to Respondents Total cost Collection New cost Old cost reduction (new--old) IC1: ``Ready to Move?'' $288,000 $720,000 -$432,000 ``Rights & Responsibilities'' 3,264,000 8,160...

  8. Self-calibration method without joint iteration for distributed small satellite SAR systems

    NASA Astrophysics Data System (ADS)

    Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan

    2013-12-01

The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to the unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions under large position errors, since it requires joint iteration between gain-phase error estimation and position error estimation. In addition, it is also found that phase errors can be estimated well regardless of position errors when the zero Doppler bin is chosen. In this article, we propose a method obtained by modifying the conventional one, based on these two observations. In this modified method, gain errors are first estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, by using the zero Doppler bin data, the phase error estimation can be performed well independently of position errors. Finally, position errors are estimated based on the Taylor-series expansion. Meanwhile, the joint iteration between gain-phase error estimation and position error estimation is not required. Therefore, the problem of suboptimal convergence, which occurs in the conventional method, can be avoided at low computational cost. The modified method converges faster and has lower estimation error compared to the conventional one. Theoretical analysis and computer simulation results verified the effectiveness of the modified method.

  9. Sensitivity analysis of periodic errors in heterodyne interferometry

    NASA Astrophysics Data System (ADS)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-03-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
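The variance-based (Sobol') sensitivity analysis described above can be sketched with a pick-and-freeze Monte Carlo estimator. The function `err` below is a toy stand-in for the paper's periodic-error model, built so that one parameter (standing in for frequency non-orthogonality) dominates:

```python
import numpy as np

# First-order Sobol' indices via the pick-and-freeze estimator: for parameter i,
# re-evaluate the model with parameter i copied from sample A into sample B and
# correlate with the original outputs. S_i ~ Cov(yA, yAB_i) / Var(yA).

rng = np.random.default_rng(1)
N = 100_000

def err(alpha, beta):
    # toy stand-in for the periodic-error model: alpha dominates
    return 3.0 * alpha + 0.3 * beta

A = rng.uniform(0, 1, (N, 2))
B = rng.uniform(0, 1, (N, 2))

def first_order(i):
    AB = B.copy()
    AB[:, i] = A[:, i]                  # freeze parameter i at sample A's values
    yA, yB, yAB = err(*A.T), err(*B.T), err(*AB.T)
    return np.mean(yA * (yAB - yB)) / np.var(yA)

S = [first_order(0), first_order(1)]
print([round(s, 2) for s in S])         # alpha's index near 0.99, beta's near 0.01
```

For this additive toy model the indices are known analytically (9/9.09 and 0.09/9.09), which makes it a convenient check of the estimator before applying it to the real periodic-error model.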

  10. Error monitoring issues for common channel signaling

    NASA Astrophysics Data System (ADS)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far, which include the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring, with a discussion of their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS), as well as for developing procedures for the high-speed SS7 links currently under consideration by standards bodies.

  11. Analysis of the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery☆

    PubMed Central

    Arba-Mosquera, Samuel; Aslanides, Ioannis M.

    2012-01-01

Purpose To analyze the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery. Methods A comprehensive model, which directly considers eye movements, including saccades, vestibular, optokinetic, vergence, and miniature movements, as well as eye-tracker acquisition rate, eye-tracker latency time, scanner positioning time, laser firing rate, and laser trigger delay, has been developed. Results Eye-tracker acquisition rates below 100 Hz correspond to pulse positioning errors above 1.5 mm. Eye-tracker latency times of up to about 15 ms correspond to pulse positioning errors of up to 3.5 mm. Scanner positioning times of up to about 9 ms correspond to pulse positioning errors of up to 2 mm. Laser firing rates faster than eye-tracker acquisition rates basically duplicate pulse-positioning errors. Laser trigger delays of up to about 300 μs have minor to no impact on pulse-positioning errors. Conclusions The proposed model can be used for comparison of laser systems used for ablation processes. Due to the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than the decentrations observed in clinical settings. There is no single parameter that alone minimizes the positioning error; it is the optimal combination of the several parameters that minimizes the error. The results of this analysis are important to understand the limitations of correcting very irregular ablation patterns.
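Why latency translates into pulse positioning error can be shown with a back-of-envelope sketch: during a saccade the eye surface moves at roughly v = omega * r, and every millisecond of latency displaces the pulse by v * t. The saccade velocity and eye radius below are rough illustrative values, not the parameters of the paper's comprehensive model:

```python
import math

# Back-of-envelope latency-to-error conversion for a saccading eye.

def positioning_error_mm(latency_ms, peak_velocity_deg_s=500.0, eye_radius_mm=12.0):
    omega = math.radians(peak_velocity_deg_s)          # angular velocity, rad/s
    surface_speed = omega * eye_radius_mm              # surface speed, mm/s
    return surface_speed * latency_ms / 1000.0

print(round(positioning_error_mm(15), 2))   # ~1.57 mm for 15 ms of latency
```

Even this crude estimate puts 15 ms of latency at millimeter-scale single-pulse errors, the same order as the model's results, which is why acquisition rate and latency dominate the error budget.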

  12. [Monitoring medication errors in personalised dispensing using the Sentinel Surveillance System method].

    PubMed

    Pérez-Cebrián, M; Font-Noguera, I; Doménech-Moral, L; Bosó-Ribelles, V; Romero-Boyero, P; Poveda-Andrés, J L

    2011-01-01

To assess the efficacy of a new quality control strategy based on daily randomised sampling and monitoring of a Sentinel Surveillance System (SSS) medication cart, in order to identify medication errors and their origin at different levels of the process. Prospective quality control study with one year of follow-up. An SSS medication cart was randomly selected once a week and double-checked before dispensing medication. Medication errors were recorded before it was taken to the relevant hospital ward. Information concerning complaints after receiving medication and 24-hour monitoring was also noted. Type and origin error data were assessed by a Unit Dose Quality Control Group, which proposed relevant improvement measures. Thirty-four SSS carts were assessed, including 5130 medication lines and 9952 dispensed doses, corresponding to 753 patients. Ninety erroneous lines (1.8%) and 142 mistaken doses (1.4%) were identified at the Pharmacy Department. The most frequent error was dose duplication (38%) and its main cause was inappropriate management and forgetfulness (69%). Fifty medication complaints (6.6% of patients) were mainly due to new treatment at admission (52%), and 41 (0.8% of all medication lines) did not completely match the prescription (0.6% lines) as recorded by the Pharmacy Department. Thirty-seven (4.9% of patients) medication complaints due to changes at admission and 32 matching errors (0.6% of medication lines) were recorded. The main cause was also inappropriate management and forgetfulness (24%). The simultaneous recording of incidents due to complaints and new medication coincided in 33.3%. In addition, 433 (4.3%) of the dispensed doses were returned to the Pharmacy Department. After the Unit Dose Quality Control Group conducted their feedback analysis, 64 improvement measures for Pharmacy Department nurses, 37 for pharmacists, and 24 for the hospital ward were introduced.
The SSS programme has proven to be useful as a quality control strategy to identify Unit Dose Distribution System errors at initial, intermediate and final stages of the process, improving the involvement of the Pharmacy Department and ward nurses. Copyright © 2009 SEFH. Published by Elsevier Espana. All rights reserved.

  13. Intimate Partner Violence, 1993-2010

    MedlinePlus

... appendix table 2 for standard errors. *Due to methodological changes, use caution when comparing 2006 NCVS criminal ...

  14. Intrusion errors in visuospatial working memory performance.

    PubMed

    Cornoldi, Cesare; Mammarella, Nicola

    2006-02-01

    This study tested the hypothesis that failure in active visuospatial working memory tasks involves a difficulty in avoiding intrusions due to information that is already activated. Two experiments are described, in which participants were required to process several series of locations on a 4 x 4 matrix and then to produce only the final location of each series. Results revealed a higher number of errors due to already activated locations (intrusions) compared with errors due to new locations (inventions). Moreover, when participants were required to pay extra attention to some irrelevant (non-final) locations by tapping on the table, intrusion errors increased. Results are discussed in terms of current models of working memory functioning.

  15. Goldmann tonometer error correcting prism: clinical evaluation.

    PubMed

    McCafferty, Sean; Lim, Garrett; Duncan, William; Enikov, Eniko T; Schwiegerling, Jim; Levine, Jason; Kew, Corin

    2017-01-01

To clinically evaluate a modified applanating-surface Goldmann tonometer prism designed to substantially negate errors due to patient variability in biomechanics. A modified Goldmann prism with a correcting applanation tonometry surface (CATS) was mathematically optimized to minimize the intraocular pressure (IOP) measurement error due to patient variability in corneal thickness, stiffness, curvature, and tear film adhesion force. A comparative clinical study of 109 eyes measured IOP with the CATS and Goldmann prisms. The IOP measurement differences between the CATS and Goldmann prisms were correlated to corneal thickness, hysteresis, and curvature. In correcting for Goldmann central corneal thickness (CCT) error, the CATS prism reduced the error to <±2 mmHg in 97% of a standard CCT population, compared with only 54% using the Goldmann prism. Roughly equal reductions of ~50% in errors due to corneal rigidity and curvature were also demonstrated. The results validate the CATS prism's improved accuracy and expected reduced sensitivity to Goldmann errors, without IOP bias, as predicted by mathematical modeling. The CATS replacement for the Goldmann prism does not change Goldmann measurement technique or interpretation.

  16. 26 CFR 301.6621-3 - Higher interest rate payable on large corporate underpayments.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

... resulting from a math error on Y's return. Y did not request an abatement of the assessment pursuant to...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...

  17. 26 CFR 301.6621-3 - Higher interest rate payable on large corporate underpayments.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

... resulting from a math error on Y's return. Y did not request an abatement of the assessment pursuant to...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...

  18. 26 CFR 301.6621-3 - Higher interest rate payable on large corporate underpayments.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

... resulting from a math error on Y's return. Y did not request an abatement of the assessment pursuant to...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...

  19. 26 CFR 301.6621-3 - Higher interest rate payable on large corporate underpayments.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

... resulting from a math error on Y's return. Y did not request an abatement of the assessment pursuant to...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...

  20. 26 CFR 301.6621-3 - Higher interest rate payable on large corporate underpayments.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

... resulting from a math error on Y's return. Y did not request an abatement of the assessment pursuant to...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...

  1. Effects of tropospheric and ionospheric refraction errors in the utilization of GEOS-C altimeter data

    NASA Technical Reports Server (NTRS)

    Goad, C. C.

    1977-01-01

    The effects of tropospheric and ionospheric refraction errors are analyzed for the GEOS-C altimeter project in terms of their resultant effects on C-band orbits and the altimeter measurement itself. Operational procedures using surface meteorological measurements at ground stations and monthly means for ocean surface conditions are assumed, with no corrections made for ionospheric effects. Effects on the orbit height due to tropospheric errors are approximately 15 cm for single pass short arcs (such as for calibration) and 10 cm for global orbits of one revolution. Orbit height errors due to neglect of the ionosphere have an amplitude of approximately 40 cm when the orbits are determined from C-band range data with predominantly daylight tracking. Altimeter measurement errors are approximately 10 cm due to residual tropospheric refraction correction errors. Ionospheric effects on the altimeter range measurement are also on the order of 10 cm during the GEOS-C launch and early operation period.

  2. Geographically correlated orbit error

    NASA Technical Reports Server (NTRS)

    Rosborough, G. W.

    1989-01-01

The dominant error source in estimating the orbital position of a satellite from ground-based tracking data is the modeling of the Earth's gravity field. The resulting orbit errors due to gravity field model errors are predominantly long wavelength in nature. This results in an orbit error signature that is strongly correlated over distances on the size of ocean basins. Anderle and Hoskin (1977) have shown that the orbit error along a given ground track is also correlated to some degree with the orbit error along adjacent ground tracks. This cross-track correlation is verified here and is found to be significant out to nearly 1000 kilometers in the case of TOPEX/POSEIDON when using the GEM-T1 gravity model. Finally, it was determined that even the orbit error at points where ascending and descending ground traces cross is somewhat correlated. The implication of these various correlations is that the orbit error due to gravity error is geographically correlated. Such correlations have direct implications when using altimetry to recover oceanographic signals.

  3. Transperineal prostate biopsy under magnetic resonance image guidance: a needle placement accuracy study.

    PubMed

    Blumenfeld, Philip; Hata, Nobuhiko; DiMaio, Simon; Zou, Kelly; Haker, Steven; Fichtinger, Gabor; Tempany, Clare M C

    2007-09-01

To quantify the needle placement accuracy of magnetic resonance image (MRI)-guided core needle biopsy of the prostate. A total of 10 biopsies were performed with an 18-gauge (G) core biopsy needle via a percutaneous transperineal approach. Needle placement error was assessed by comparing the coordinates of preplanned targets with the needle tip measured from the intraprocedural coherent gradient echo images. The source of these errors was subsequently investigated by measuring the displacement caused by needle deflection and needle susceptibility artifact shift in controlled phantom studies. Needle placement error due to misalignment of the needle template guide was also evaluated. The mean and standard deviation (SD) of errors in targeted biopsies was 6.5 +/- 3.5 mm. Phantom experiments showed significant placement error due to needle deflection with a needle with an asymmetrically beveled tip (3.2-8.7 mm depending on tissue type) but significantly smaller error with a symmetrical bevel (0.6-1.1 mm). The needle susceptibility artifact showed a shift of 1.6 +/- 0.4 mm from the true needle axis. Misalignment of the needle template guide contributed an error of 1.5 +/- 0.3 mm. Needle placement error was clinically significant in MRI-guided biopsy for diagnosis of prostate cancer. Needle placement error due to needle deflection was the most significant cause of error, especially for needles with an asymmetrical bevel. (c) 2007 Wiley-Liss, Inc.

  4. Geographically correlated errors observed from a laser-based short-arc technique

    NASA Astrophysics Data System (ADS)

    Bonnefond, P.; Exertier, P.; Barlier, F.

    1999-07-01

The laser-based short-arc technique has been developed in order to avoid local errors which affect the dynamical orbit computation, such as those due to mismodeling in the geopotential. It is based on a geometric method and consists in fitting short arcs (about 4000 km), issued from a global orbit, with satellite laser ranging tracking measurements from a ground station network. Ninety-two TOPEX/Poseidon (T/P) cycles of laser-based short-arc orbits were then compared to JGM-2 and JGM-3 T/P orbits computed by the Precise Orbit Determination (POD) teams (Service d'Orbitographie Doris/Centre National d'Etudes Spatiales and Goddard Space Flight Center/NASA) over two areas: (1) the Mediterranean area and (2) a part of the Pacific (including California and Hawaii) called hereafter the U.S. area. Geographically correlated orbit errors in these areas are clearly evidenced: for example, -2.6 cm and +0.7 cm for the Mediterranean and U.S. areas, respectively, relative to JGM-3 orbits. However, geographically correlated errors (GCE), which are commonly linked to errors in the gravity model, can also be due to systematic errors in the reference frame and/or to biases in the tracking measurements. Although the short-arc technique is very sensitive to such error sources, our analysis demonstrates that the induced geographical systematic effects are at the level of 1-2 cm on the radial orbit component. Results are also compared with those obtained with the GPS-based reduced-dynamic technique. The time-dependent part of GCE has also been studied. Over 6 years of T/P data, coherent signals in the radial component of the T/P Precise Orbit Ephemeris (POE) are clearly evidenced with a time period of about 6 months. In addition, the impact of time-varying error sources coming from the reference frame and the tracking data accuracy has been analyzed, showing a possible linear trend of about 0.5-1 mm/yr in the radial component of the T/P POE.

  5. Physician's error: medical or legal concept?

    PubMed

    Mujovic-Zornic, Hajrija M

    2010-06-01

This article deals with the common term covering the various physician's errors that often happen in the daily practice of health care. The author begins with the term medical malpractice, defined broadly as the practice of unjustified acts or failures to act on the part of a physician or other health care professional, which results in harm to the patient. It is a common term that includes many types of medical errors, especially physician's errors. The author also discusses the concept of the physician's error in particular, which is no longer understood only in the traditional way, as a classic error of doing something manually wrong without the necessary skills (the medical concept), but as an error which violates the patient's basic rights and which has a final legal consequence (the legal concept). In every case the essential element of liability is to establish this error as a breach of the physician's duty. The first point to note is that the standard of procedure and the standard of due care against which the physician will be judged is not going to be that of the ordinary reasonable man who enjoys no medical expertise. The court's decision should give the final answer and legal qualification in each concrete case. The author's conclusion is that higher protection of human rights in the area of health equally demands a broader concept of the physician's error, with emphasis on its legal subject matter.

  6. Error compensation of single-antenna attitude determination using GNSS for Low-dynamic applications

    NASA Astrophysics Data System (ADS)

    Chen, Wen; Yu, Chao; Cai, Miaomiao

    2017-04-01

The GNSS-based single-antenna pseudo-attitude determination method has attracted more and more attention in the field of high-dynamic navigation due to its low cost, low system complexity, and freedom from temporally accumulated errors. Related research indicates that this method can be an important complement or even an alternative to traditional sensors for general accuracy requirements (such as small UAV navigation). The application of the single-antenna attitude determination method to low-dynamic carriers has just started. Different from the traditional multi-antenna attitude measurement technique, the pseudo-attitude determination method calculates the rotation angle of the carrier trajectory relative to the earth. Thus it inevitably contains some deviations compared with the real attitude angle. In low-dynamic applications, these deviations are particularly noticeable and may not be ignored. The causes of the deviations can be roughly classified into three categories: the measurement error, the offset error, and the lateral error. Empirical correction strategies for the former two errors have been proposed in previous studies, but lack theoretical support. In this paper, we provide a quantitative description of the three types of errors and discuss the related error compensation methods. Vehicle and shipborne experiments were carried out to verify the feasibility of the proposed correction methods. Keywords: Error compensation; Single-antenna; GNSS; Attitude determination; Low-dynamic

  7. Sinusoidal Siemens star spatial frequency response measurement errors due to misidentified target centers

    DOE PAGES

    Birch, Gabriel Carisle; Griffin, John Clark

    2015-07-23

Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.
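The center-misidentification effect can be demonstrated numerically: sample a sinusoidal Siemens star along a circle drawn about the true center and about an offset center, then compare the recovered modulation at the spoke frequency. The star parameters and offset below are illustrative, not the paper's closed-form solution:

```python
import numpy as np

# Sinusoidal Siemens star: I(theta) = 0.5 + 0.5*cos(k*theta). A circle drawn
# about a mis-identified center sweeps distorted star angles, which lowers the
# recovered amplitude of the k-th harmonic (and hence the measured SFR).

k = 36                 # number of cycles per revolution (spokes)
r = 50.0               # sampling radius in pixels
phi = np.linspace(0, 2 * np.pi, 4096, endpoint=False)

def measured_amplitude(center_offset):
    x = r * np.cos(phi) + center_offset   # circle about a (possibly wrong) center
    y = r * np.sin(phi)
    theta = np.arctan2(y, x)              # true star angle at each sample
    I = 0.5 + 0.5 * np.cos(k * theta)
    # amplitude of the k-th harmonic along the assumed circle
    return 2 * abs(np.mean(I * np.exp(-1j * k * phi)))

amp_true = measured_amplitude(0.0)    # correct center: full amplitude 0.5
amp_off = measured_amplitude(2.0)     # 2-pixel center error: amplitude drops
print(round(amp_true, 3), round(amp_off, 3))
```

Even a 2-pixel center error at a 50-pixel radius roughly halves the recovered modulation here, which is the mechanism by which a misidentified center depresses the measured SFR.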

  8. Is the four-day rotation of Venus illusory? [includes systematic error in radial velocities of solar lines reflected from Venus]

    NASA Technical Reports Server (NTRS)

    Young, A. T.

    1974-01-01

    An overlooked systematic error exists in the apparent radial velocities of solar lines reflected from regions of Venus near the terminator, owing to a combination of the finite angular size of the Sun and its large (2 km/sec) equatorial velocity of rotation. This error produces an apparent, but fictitious, retrograde component of planetary rotation, typically on the order of 40 meters/sec. Spectroscopic, photometric, and radiometric evidence against a 4-day atmospheric rotation is also reviewed. The bulk of the somewhat contradictory evidence seems to favor slow motions, on the order of 5 m/sec, in the atmosphere of Venus; the 4-day rotation may be due to a traveling wave-like disturbance, not bulk motions, driven by the UV albedo differences.

  9. Multicollinearity and Regression Analysis

    NASA Astrophysics Data System (ADS)

    Daoud, Jamal I.

    2017-12-01

In regression analysis it is expected to have correlation between the response and the predictor(s), but correlation among the predictors themselves is undesired. The number of predictors included in the regression model depends on many factors, among which are historical data, experience, etc. In the end, the selection of the most important predictors is ultimately a subjective choice made by the researcher. Multicollinearity is a phenomenon in which two or more predictors are correlated; if this happens, the standard errors of the coefficients will increase [8]. Increased standard errors mean that the coefficients for some or all independent variables may be found not to be significantly different from zero. In other words, by overinflating the standard errors, multicollinearity makes some variables statistically insignificant when they should be significant. In this paper we focus on multicollinearity, its reasons, and its consequences for the reliability of the regression model.
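The standard diagnostic for this situation is the variance inflation factor, VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing predictor j on the remaining predictors. A minimal sketch, with `x2` deliberately built to be nearly collinear with `x1` (all data simulated):

```python
import numpy as np

# Detecting multicollinearity via variance inflation factors.

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)   # almost a copy of x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(X)), others])   # intercept + other predictors
    beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
    resid = X[:, j] - A @ beta
    r2 = 1 - resid.var() / X[:, j].var()
    return 1 / (1 - r2)

print([round(vif(X, j), 1) for j in range(3)])  # x1, x2 heavily inflated; x3 near 1
```

A common rule of thumb treats VIF above 5-10 as problematic; here `x1` and `x2` far exceed that while the independent `x3` stays near 1, matching the inflated-standard-error behavior the abstract describes.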

  10. Technology-related medication errors in a tertiary hospital: a 5-year analysis of reported medication incidents.

    PubMed

    Samaranayake, N R; Cheung, S T D; Chui, W C M; Cheung, B M Y

    2012-12-01

    Healthcare technology is meant to reduce medication errors. The objective of this study was to assess unintended errors related to technologies in the medication use process. Medication incidents reported from 2006 to 2010 in a main tertiary care hospital were analysed by a pharmacist and technology-related errors were identified. Technology-related errors were further classified as socio-technical errors and device errors. This analysis was conducted using data from medication incident reports, which may represent only a small proportion of the medication errors that actually take place in a hospital; hence, interpretation of results must be tentative. A total of 1538 medication incidents were reported. Of all incidents, 17.1% were technology-related; only 1.9% of these were device errors, whereas most (98.1%) were socio-technical errors. Of these, 61.2% were linked to computerised prescription order entry, 23.2% to bar-coded patient identification labels, 7.2% to infusion pumps, 6.8% to computer-aided dispensing label generation and 1.5% to other technologies. The immediate causes for technology-related errors included poor interface between user and computer (68.1%), improper procedures or rule violations (22.1%), poor interface between user and infusion pump (4.9%), technical defects (1.9%) and others (3.0%). In 11.4% of the technology-related incidents, the error was detected after the drug had been administered. A considerable proportion of all incidents were technology-related, and most errors were due to socio-technical issues. Unintended and unanticipated errors may happen when using technologies; therefore, system improvement, awareness, training and monitoring are needed to minimise medication errors. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  11. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    NASA Technical Reports Server (NTRS)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, which is also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of the current state of knowledge of the planet's mean precipitation.
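    The screening-and-spread procedure described above can be sketched as follows (an assumed reading of the abstract, not the authors' code; the input numbers are synthetic):

```python
# Sketch: retain products within +/-50% of the GPCP base estimate,
# take the spread s of the retained values as the bias-error estimate,
# and report s/m as the relative bias error.
import numpy as np

def bias_error(gpcp_mean, product_means):
    """gpcp_mean: base estimate (e.g. mm/day); product_means: other estimates."""
    products = np.asarray(product_means, dtype=float)
    keep = products[(products >= 0.5 * gpcp_mean) & (products <= 1.5 * gpcp_mean)]
    s = keep.std(ddof=1)   # sample std of accepted products (ddof is an assumption)
    m = gpcp_mean          # m taken here as the GPCP base value (assumption)
    return s, s / m        # absolute and relative (s/m) bias error

# 11.0 falls outside the +/-50% window around 5.0 and is rejected.
s, rel = bias_error(5.0, [4.6, 5.3, 5.9, 4.9, 11.0])
print(round(s, 2), round(100 * rel, 1))
```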

  12. Error due to unresolved scales in estimation problems for atmospheric data assimilation

    NASA Astrophysics Data System (ADS)

    Janjic, Tijana

    The error arising due to unresolved scales in data assimilation procedures is examined. The problem of estimating the projection of the state of a passive scalar undergoing advection at a sequence of times is considered. The projection belongs to a finite-dimensional function space and is defined on the continuum. Using the continuum projection of the state of a passive scalar, a mathematical definition is obtained for the error arising due to the presence, in the continuum system, of scales unresolved by the discrete dynamical model. This error affects the estimation procedure through point observations that include the unresolved scales. In this work, two approximate methods for taking into account the error due to unresolved scales and the resulting correlations are developed and employed in the estimation procedure. The resulting formulas resemble the Schmidt-Kalman filter and the usual discrete Kalman filter, respectively. For this reason, the newly developed filters are called the Schmidt-Kalman filter and the traditional filter. In order to test the assimilation methods, a two-dimensional advection model with nonstationary spectrum was developed for passive scalar transport in the atmosphere. An analytical solution on the sphere was found depicting the model dynamics evolution. Using this analytical solution the model error is avoided, and the error due to unresolved scales is the only error left in the estimation problem. It is demonstrated that the traditional and the Schmidt-Kalman filter work well provided the exact covariance function of the unresolved scales is known. However, this requirement is not satisfied in practice, and the covariance function must be modeled. The Schmidt-Kalman filter cannot be computed in practice without further approximations. Therefore, the traditional filter is better suited for practical use. Also, the traditional filter does not require modeling of the full covariance function of the unresolved scales, but only modeling of the covariance matrix obtained by evaluating the covariance function at the observation points. We first assumed that this covariance matrix is stationary and that the unresolved scales are not correlated between the observation points, i.e., the matrix is diagonal, and that the values along the diagonal are constant. Tests with these assumptions were unsuccessful, indicating that a more sophisticated model of the covariance is needed for assimilation of data with nonstationary spectrum. A new method for modeling the covariance matrix based on an extended set of modeling assumptions is proposed. First, it is assumed that the covariance matrix is diagonal, that is, that the unresolved scales are not correlated between the observation points. It is postulated that the values on the diagonal depend on a wavenumber that is characteristic for the unresolved part of the spectrum. It is further postulated that this characteristic wavenumber can be diagnosed from the observations and from the estimate of the projection of the state that is being estimated. It is demonstrated that the new method successfully overcomes previously encountered difficulties.
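    In its simplest form, the "traditional filter" described above amounts to a standard Kalman analysis step in which the covariance of the unresolved scales, evaluated at the observation points, is added to the observation-error covariance R. A minimal sketch under that assumption (toy dimensions; illustrative, not the thesis code):

```python
# Sketch: Kalman update with an extra representativeness term R_unresolved
# added to the instrument-error covariance, modeling the effect of
# unresolved scales on point observations.
import numpy as np

def kalman_update(x, P, y, H, R_instr, R_unresolved):
    """One analysis step; R_unresolved models error due to unresolved scales."""
    R = R_instr + R_unresolved          # total observation-error covariance
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_a = x + K @ (y - H @ x)           # analysis state
    P_a = (np.eye(len(x)) - K @ H) @ P  # analysis covariance
    return x_a, P_a

x = np.zeros(2)
P = np.eye(2)
H = np.array([[1.0, 0.0]])              # observe first state only
y = np.array([1.0])
# Diagonal, constant unresolved-scale variance -- the simple stationary
# assumption the text reports as insufficient for nonstationary spectra:
x_a, P_a = kalman_update(x, P, y, H, np.array([[0.1]]), np.array([[0.4]]))
print(x_a)
```

The larger the unresolved-scale variance, the smaller the weight the analysis gives to the observation, which is the intended effect of accounting for this error source.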

  13. A simulation study to determine the attenuation and bias in health risk estimates due to exposure measurement error in bi-pollutant models

    EPA Science Inventory

    To understand the combined health effects of exposure to ambient air pollutant mixtures, it is becoming more common to include multiple pollutants in epidemiologic models. However, the complex spatial and temporal pattern of ambient pollutant concentrations and related exposures ...

  14. Erratum: Correction to: Rearrangements of Open Magnetic Flux and Formation of Polar Coronal Holes in Cycle 24

    NASA Astrophysics Data System (ADS)

    Golubeva, E. M.; Mordvinov, A. V.

    2017-12-01

    Correction to: Solar Phys DOI 10.1007/s11207-017-1200-6 Due to an error during processing, the Electronic Supplementary Material (ESM) supplied by the author was not included with the article. Please find in this correction document the reference to the ESM.

  15. Robust Connectivity in Sensory and Ad Hoc Network

    DTIC Science & Technology

    2011-02-01

    as the prior probability is π0 = 0.8, the error probability should be capped at 0.2. This seemingly pathological result is due to the fact that the...publications and is the author of the book Multirate and Wavelet Signal Processing (Academic Press, 1998). His research interests include multiscale signal and

  16. Types of diagnostic errors in neurological emergencies in the emergency department.

    PubMed

    Dubosh, Nicole M; Edlow, Jonathan A; Lefton, Micah; Pope, Jennifer V

    2015-02-01

    Neurological emergencies often pose diagnostic challenges for emergency physicians because these patients often present with atypical symptoms and standard imaging tests are imperfect. Misdiagnosis occurs due to a variety of errors. These can be classified as knowledge gaps, cognitive errors, and systems-based errors. The goal of this study was to describe these errors through review of quality assurance (QA) records. This was a retrospective pilot study of patients with neurological emergency diagnoses that were missed or delayed at one urban, tertiary academic emergency department. Cases meeting inclusion criteria were identified through review of QA records. Three emergency physicians independently reviewed each case and determined the type of error that led to the misdiagnosis. Proportions, confidence intervals, and a reliability coefficient were calculated. During the study period, 1168 cases were reviewed. Forty-two cases were found to include a neurological misdiagnosis and twenty-nine were determined to be the result of an error. The distribution of error types was as follows: knowledge gap 45.2% (95% CI 29.2, 62.2), cognitive error 29.0% (95% CI 15.9, 46.8), and systems-based error 25.8% (95% CI 13.5, 43.5). Cerebellar strokes were the most common type of stroke misdiagnosed, accounting for 27.3% of missed strokes. All three error types contributed to the misdiagnosis of neurological emergencies. Misdiagnosis of cerebellar lesions and erroneous radiology resident interpretations of neuroimaging were the most common mistakes. Understanding the types of errors may enable emergency physicians to develop possible solutions and avoid them in the future.

  17. Residual Optically Stimulated Luminescent (OSL) Signals For Al2O3: C and a Readout System With Reproducible Partial Signal Clearance.

    PubMed

    Abraham, Sara A; Kearfott, Kimberlee J

    2018-06-15

    Optically stimulated luminescent dosimeters are devices that, when stimulated with light, emit light in proportion to the integrated ionizing radiation dose. The stimulation of optically stimulated luminescent material results in the loss of a small fraction of signal stored within the dosimetric traps. Previous studies have investigated the signal loss due to readout stimulation and the optical annealing of optically stimulated luminescent dosimeters. This study builds on former research by examining the behavior of optically stimulated luminescent signals after annealing, exploring the functionality of a previously developed signal loss model, and comparing uncertainties for dosimeters reused with or without annealing. For a completely annealed dosimeter, the minimum signal level was 56 ± 8 counts, and readings followed a Gaussian distribution. For dosimeters above this signal level, the fractional signal loss due to the reading process has a linear relationship with the calculated signal. At low signal levels (below 20,000 counts) in this optically stimulated luminescent dosimeter system, calculated signal percent errors increase significantly but otherwise are on average 0.72 ± 0.27%, 0.40 ± 0.19%, 0.33 ± 0.12%, and 0.24 ± 0.07% for 30, 75, 150, and 300 readings, respectively. Theoretical calculations of uncertainties showed that annealing before reusing dosimeters allows for dose errors below 1% with as few as 30 readings. Reusing dosimeters multiple times increases the dose errors especially with low numbers of readouts, so theoretically around 300 readings would be necessary to achieve errors around 1% or below in most scenarios. Note that these dose errors do not include the error associated with the signal-to-dose conversion factor.

  18. Covariate Measurement Error Correction Methods in Mediation Analysis with Failure Time Data

    PubMed Central

    Zhao, Shanshan

    2014-01-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This paper focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the ‘true’ mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling design. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469

  19. Covariate measurement error correction methods in mediation analysis with failure time data.

    PubMed

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  20. Dual-energy X-ray absorptiometry: analysis of pediatric fat estimate errors due to tissue hydration effects.

    PubMed

    Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B

    2000-12-01

    Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended phase 1 models and consisted of theoretical calculations to estimate the %fat errors emanating from previously reported pediatric hydration effects. Phase 1 experiments supported the two-compartment DXA soft tissue model and established that pixel ratios of low to high energy (R values) are a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors based on theoretical calculations is small and may not be of clinical or research significance.

  1. Frequency and types of the medication errors in an academic emergency department in Iran: The emergent need for clinical pharmacy services in emergency departments.

    PubMed

    Zeraatchi, Alireza; Talebian, Mohammad-Taghi; Nejati, Amir; Dashti-Khavidaki, Simin

    2013-07-01

    Emergency departments (EDs) are characterized by simultaneous care of multiple patients with various medical conditions. Due to the large number of patients with complex diseases, the speed and complexity of medication use, and an under-staffed and crowded working environment, medication errors are commonly perpetrated by emergency care providers. This study was designed to evaluate the incidence of medication errors among patients attending an ED in a teaching hospital in Iran. In this cross-sectional study, a total of 500 patients attending the ED were randomly assessed for the incidence and types of medication errors. Factors related to medication errors, such as working shift, weekday and the schedule of the trainees' educational program, were also evaluated. Nearly 22% of patients experienced at least one medication error. The rates of medication errors were 0.41 errors per patient and 0.16 errors per ordered medication. The frequency of medication errors was higher in men, middle-aged patients, the first weekdays, night-time work schedules and the first semester of the educational year of new junior emergency medicine residents. More than 60% of errors were prescription errors by physicians; the remainder were transcription or administration errors by nurses. More than 35% of the prescribing errors happened during the selection of drug dose and frequency. The most common medication errors by nurses during administration were omission errors (16.2%) followed by unauthorized drug (6.4%). Most of the medication errors happened with anticoagulants and thrombolytics (41.2%), followed by antimicrobial agents (37.7%) and insulin (7.4%). In this study, at least one-fifth of the patients attending the ED experienced medication errors resulting from multiple factors. The more common prescription errors happened during ordering of drug dose and frequency; the more common administration errors included drug omission or unauthorized drug.

  2. Observability Analysis of a MEMS INS/GPS Integration System with Gyroscope G-Sensitivity Errors

    PubMed Central

    Fan, Chen; Hu, Xiaoping; He, Xiaofeng; Tang, Kanghua; Luo, Bing

    2014-01-01

    Gyroscopes based on micro-electromechanical system (MEMS) technology suffer in high-dynamic applications due to pronounced g-sensitivity errors. These errors can induce large biases in the gyroscope, which directly affect the accuracy of attitude estimation in the integration of the inertial navigation system (INS) and the Global Positioning System (GPS). Observability determines whether solutions exist for compensating them. In this paper, we investigate the observability of the INS/GPS system with consideration of the g-sensitivity errors. For two forms of the g-sensitivity coefficient matrix, we add its elements as estimated states to the Kalman filter and analyze the observability of the three or nine elements of the coefficient matrix, respectively. A global observability condition for the system is presented and validated. Experimental results indicate that all the estimated states, which include position, velocity, attitude, gyro and accelerometer bias, and g-sensitivity coefficients, can be made observable by maneuvering based on these conditions. Compared with the integration system without compensation for the g-sensitivity errors, the attitude accuracy is noticeably improved. PMID:25171122

  3. Observability analysis of a MEMS INS/GPS integration system with gyroscope G-sensitivity errors.

    PubMed

    Fan, Chen; Hu, Xiaoping; He, Xiaofeng; Tang, Kanghua; Luo, Bing

    2014-08-28

    Gyroscopes based on micro-electromechanical system (MEMS) technology suffer in high-dynamic applications due to pronounced g-sensitivity errors. These errors can induce large biases in the gyroscope, which directly affect the accuracy of attitude estimation in the integration of the inertial navigation system (INS) and the Global Positioning System (GPS). Observability determines whether solutions exist for compensating them. In this paper, we investigate the observability of the INS/GPS system with consideration of the g-sensitivity errors. For two forms of the g-sensitivity coefficient matrix, we add its elements as estimated states to the Kalman filter and analyze the observability of the three or nine elements of the coefficient matrix, respectively. A global observability condition for the system is presented and validated. Experimental results indicate that all the estimated states, which include position, velocity, attitude, gyro and accelerometer bias, and g-sensitivity coefficients, can be made observable by maneuvering based on these conditions. Compared with the integration system without compensation for the g-sensitivity errors, the attitude accuracy is noticeably improved.
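    The observability analysis the abstract describes rests on a standard linear-systems test: a system (A, C) is observable when the matrix O = [C; CA; ...; CA^(n-1)] has full rank. A toy sketch with a single augmented bias state (hypothetical 3-state system, not the paper's g-sensitivity model):

```python
# Sketch: rank test of the observability matrix for a small linear
# system in which an augmented bias state becomes observable only
# through its coupling into the measured states.
import numpy as np

def observability_matrix(A, C):
    """Stack C, CA, ..., CA^(n-1) for an n-state system."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Toy discrete-time dynamics: state 3 is a bias that drives state 2,
# which in turn drives state 1; only state 1 is measured directly.
A = np.array([[1.0, 0.1, 0.0],
              [0.0, 1.0, 0.1],
              [0.0, 0.0, 1.0]])
C = np.array([[1.0, 0.0, 0.0]])

O = observability_matrix(A, C)
print(np.linalg.matrix_rank(O))  # full rank => all states observable
```

In the paper's setting the state vector is much larger and observability additionally depends on the vehicle maneuvers, but the rank criterion is the same underlying tool.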

  4. The role of visual spatial attention in adult developmental dyslexia.

    PubMed

    Collis, Nathan L; Kohnen, Saskia; Kinoshita, Sachiko

    2013-01-01

    The present study investigated the nature of visual spatial attention deficits in adults with developmental dyslexia, using a partial report task with five-letter, digit, and symbol strings. Participants responded by a manual key press to one of nine alternatives, which included other characters in the string, allowing an assessment of position errors as well as intrusion errors. The results showed that the dyslexic adults performed significantly worse than age-matched controls with letter and digit strings but not with symbol strings. Both groups produced W-shaped serial position functions with letter and digit strings. The dyslexics' deficits with letter string stimuli were limited to position errors, specifically at the string-interior positions 2 and 4. These errors correlated with letter transposition reading errors (e.g., reading slat as "salt"), but not with the Rapid Automatized Naming (RAN) task. Overall, these results suggest that the dyslexic adults have a visual spatial attention deficit; however, the deficit does not reflect a reduced span in visual-spatial attention, but a deficit in processing a string of letters in parallel, probably due to difficulty in the coding of letter position.

  5. Intrinsic errors in transporting a single-spin qubit through a double quantum dot

    NASA Astrophysics Data System (ADS)

    Li, Xiao; Barnes, Edwin; Kestner, J. P.; Das Sarma, S.

    2017-07-01

    Coherent spatial transport or shuttling of a single electron spin through semiconductor nanostructures is an important ingredient in many spintronic and quantum computing applications. In this work we analyze the possible errors in solid-state quantum computation due to leakage in transporting a single-spin qubit through a semiconductor double quantum dot. In particular, we consider three possible sources of leakage errors associated with such transport: finite ramping times, spin-dependent tunneling rates between quantum dots induced by finite spin-orbit couplings, and the presence of multiple valley states. In each case we present quantitative estimates of the leakage errors, and discuss how they can be minimized. The emphasis of this work is on how to deal with the errors intrinsic to the ideal semiconductor structure, such as leakage due to spin-orbit couplings, rather than on errors due to defects or noise sources. In particular, we show that in order to minimize leakage errors induced by spin-dependent tunnelings, it is necessary to apply pulses to perform certain carefully designed spin rotations. We further develop a formalism that allows one to systematically derive constraints on the pulse shapes and present a few examples to highlight the advantage of such an approach.

  6. High frequency observations of Iapetus on the Green Bank Telescope aided by improvements in understanding the telescope response to wind

    NASA Astrophysics Data System (ADS)

    Ries, Paul A.

    2012-05-01

    The Green Bank Telescope is a 100 m, fully steerable, single-dish radio telescope located in Green Bank, West Virginia, capable of making observations from meter wavelengths to 3 mm. However, observations at wavelengths shorter than 2 cm pose significant observational challenges due to pointing and surface errors. The first part of this thesis details efforts to combat wind-induced pointing errors, which reduce by half the amount of time available for high-frequency work on the telescope. The primary tool used for understanding these errors was an optical quadrant detector that monitored the motion of the telescope's feed arm. In this work, a calibration was developed that tied quadrant detector readings directly to telescope pointing error. These readings can be used for single-beam observations to determine whether the telescope was blown off-source at some point due to wind. In observations with the 3 mm MUSTANG bolometer array, pointing errors due to wind can mostly (> ⅔) be removed during data reduction. Iapetus is a moon known for its stark albedo dichotomy, with the leading hemisphere only a tenth as bright as the trailing. To investigate this dichotomy, Iapetus was observed repeatedly with the GBT at wavelengths between 3 and 11 mm, with the original intention of using the data to determine a thermal light-curve. Instead, the data showed remarkable wavelength-dependent deviation from a black-body curve, with an emissivity as low as 0.3 at 9 mm. Numerous techniques were used to demonstrate that this low emissivity is a physical phenomenon rather than an observational one, including some using the quadrant detector to verify that the low emissivities are not due to being blown off source. This emissivity is among the lowest ever detected in the solar system, but can be achieved using physically realistic ice models that are also used to model microwave emission from snowpacks and glaciers on Earth. These models indicate that the trailing hemisphere contains a scattering layer of depth 100 cm and grain size of 1-2 mm. The leading hemisphere is shown to exhibit a thermal depth effect.

  7. Complications: acknowledging, managing, and coping with human error.

    PubMed

    Helo, Sevann; Moulton, Carol-Anne E

    2017-08-01

    Errors are inherent in medicine due to the imperfectness of human nature. Health care providers may have a difficult time accepting their fallibility, acknowledging mistakes, and disclosing errors. Fear of litigation, shame, blame, and concern about reputation are just some of the barriers preventing physicians from being more candid with their patients, despite the supporting body of evidence that patients cite poor communication and lack of transparency as primary drivers to file a lawsuit in the wake of a medical complication. Proper error disclosure includes a timely explanation of what happened, who was involved, why the error occurred, and how it will be prevented in the future. Medical mistakes afford the opportunity for individuals and institutions to be candid about their weaknesses while improving patient care processes. When a physician takes the Hippocratic Oath they take on a tremendous sense of responsibility for the care of their patients, and often bear the burden of their mistakes in isolation. Physicians may struggle with guilt, shame, and a crisis of confidence, which may thwart efforts to identify areas for improvement that can lead to meaningful change. Coping strategies for providers include discussing the event with others, seeking professional counseling, and implementing quality improvement projects. Physicians and health care organizations need to find adaptive ways to deal with complications that will benefit patients, providers, and their institutions.

  8. Complications: acknowledging, managing, and coping with human error

    PubMed Central

    Moulton, Carol-Anne E.

    2017-01-01

    Errors are inherent in medicine due to the imperfectness of human nature. Health care providers may have a difficult time accepting their fallibility, acknowledging mistakes, and disclosing errors. Fear of litigation, shame, blame, and concern about reputation are just some of the barriers preventing physicians from being more candid with their patients, despite the supporting body of evidence that patients cite poor communication and lack of transparency as primary drivers to file a lawsuit in the wake of a medical complication. Proper error disclosure includes a timely explanation of what happened, who was involved, why the error occurred, and how it will be prevented in the future. Medical mistakes afford the opportunity for individuals and institutions to be candid about their weaknesses while improving patient care processes. When a physician takes the Hippocratic Oath they take on a tremendous sense of responsibility for the care of their patients, and often bear the burden of their mistakes in isolation. Physicians may struggle with guilt, shame, and a crisis of confidence, which may thwart efforts to identify areas for improvement that can lead to meaningful change. Coping strategies for providers include discussing the event with others, seeking professional counseling, and implementing quality improvement projects. Physicians and health care organizations need to find adaptive ways to deal with complications that will benefit patients, providers, and their institutions. PMID:28904910

  9. Out-of-plane ultrasonic velocity measurement

    DOEpatents

    Hall, M.S.; Brodeur, P.H.; Jackson, T.G.

    1998-07-14

    A method for improving the accuracy of measuring the velocity and time of flight of ultrasonic signals through moving web-like materials such as paper, paperboard and the like, includes a pair of ultrasonic transducers disposed on opposing sides of a moving web-like material. In order to provide acoustical coupling between the transducers and the web-like material, the transducers are disposed in fluid-filled wheels. Errors due to variances in the wheel thicknesses about their circumference which can affect time of flight measurements and ultimately the mechanical property being tested are compensated by averaging the ultrasonic signals for a predetermined number of revolutions. The invention further includes a method for compensating for errors resulting from the digitization of the ultrasonic signals. More particularly, the invention includes a method for eliminating errors known as trigger jitter inherent with digitizing oscilloscopes used to digitize the signals for manipulation by a digital computer. In particular, rather than cross-correlate ultrasonic signals taken during different sample periods as is known in the art in order to determine the time of flight of the ultrasonic signal through the moving web, a pulse echo box is provided to enable cross-correlation of predetermined transmitted ultrasonic signals with predetermined reflected ultrasonic or echo signals during the sample period. By cross-correlating ultrasonic signals in the same sample period, the error associated with trigger jitter is eliminated. 20 figs.

  10. Out-of-plane ultrasonic velocity measurement

    DOEpatents

    Hall, Maclin S.; Brodeur, Pierre H.; Jackson, Theodore G.

    1998-01-01

    A method for improving the accuracy of measuring the velocity and time of flight of ultrasonic signals through moving web-like materials such as paper, paperboard and the like, includes a pair of ultrasonic transducers disposed on opposing sides of a moving web-like material. In order to provide acoustical coupling between the transducers and the web-like material, the transducers are disposed in fluid-filled wheels. Errors due to variances in the wheel thicknesses about their circumference which can affect time of flight measurements and ultimately the mechanical property being tested are compensated by averaging the ultrasonic signals for a predetermined number of revolutions. The invention further includes a method for compensating for errors resulting from the digitization of the ultrasonic signals. More particularly, the invention includes a method for eliminating errors known as trigger jitter inherent with digitizing oscilloscopes used to digitize the signals for manipulation by a digital computer. In particular, rather than cross-correlate ultrasonic signals taken during different sample periods as is known in the art in order to determine the time of flight of the ultrasonic signal through the moving web, a pulse echo box is provided to enable cross-correlation of predetermined transmitted ultrasonic signals with predetermined reflected ultrasonic or echo signals during the sample period. By cross-correlating ultrasonic signals in the same sample period, the error associated with trigger jitter is eliminated.

  11. Error Budgets for the Exoplanet Starshade (exo-s) Probe-Class Mission Study

    NASA Technical Reports Server (NTRS)

    Shaklan, Stuart B.; Marchen, Luis; Cady, Eric; Ames, William; Lisman, P. Douglas; Martin, Stefan R.; Thomson, Mark; Regehr, Martin

    2015-01-01

    Exo-S is a probe-class mission study that includes the Dedicated mission, a 30-meter starshade co-launched with a 1.1-meter commercial telescope in an Earth-leading deep-space orbit, and the Rendezvous mission, a 34-meter starshade intended to work with a 2.4-meter telescope in an Earth-Sun L2 orbit. A third design, referred to as the Rendezvous Earth Finder mission, is based on a 40-meter starshade and is currently under study. This paper presents error budgets for the detection of Earth-like planets with each of these missions. The budgets include manufacture and deployment tolerances, the allowed thermal fluctuations and dynamic motions, formation flying alignment requirements, surface and edge reflectivity requirements, and the allowed transmission due to micrometeoroid damage.

  12. Error budgets for the Exoplanet Starshade (Exo-S) probe-class mission study

    NASA Astrophysics Data System (ADS)

    Shaklan, Stuart B.; Marchen, Luis; Cady, Eric; Ames, William; Lisman, P. Douglas; Martin, Stefan R.; Thomson, Mark; Regehr, Martin

    2015-09-01

    Exo-S is a probe-class mission study that includes the Dedicated mission, a 30 m starshade co-launched with a 1.1 m commercial telescope in an Earth-leading deep-space orbit, and the Rendezvous mission, a 34 m starshade intended to work with a 2.4 m telescope in an Earth-Sun L2 orbit. A third design, referred to as the Rendezvous Earth Finder mission, is based on a 40 m starshade and is currently under study. This paper presents error budgets for the detection of Earth-like planets with each of these missions. The budgets include manufacture and deployment tolerances, the allowed thermal fluctuations and dynamic motions, formation flying alignment requirements, surface and edge reflectivity requirements, and the allowed transmission due to micrometeoroid damage.
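
Budgets like these are conventionally rolled up by root-sum-squaring the contributions of independent error terms. The sketch below is illustrative only; the term names and contrast values are invented, not the actual Exo-S allocations:

```python
import math

# Hypothetical instrument-contrast contributions (dimensionless);
# the real Exo-S budget terms and values differ.
budget = {
    "manufacture_tolerance": 4e-11,
    "thermal_deformation": 3e-11,
    "formation_flying": 2e-11,
    "edge_scatter": 2e-11,
    "micrometeoroid_holes": 1e-11,
}

def rss(terms):
    """Root-sum-square combination of independent error terms."""
    return math.sqrt(sum(t * t for t in terms))

total = rss(budget.values())
print(f"total contrast allocation: {total:.2e}")
```

The RSS rule holds only when the terms are statistically independent; correlated terms must be summed linearly or combined with their covariance.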

  13. Optimized distortion correction technique for echo planar imaging.

    PubMed

    Chen, N K; Wyrwicz, A M

    2001-03-01

    A new phase-shifted EPI pulse sequence is described that encodes EPI phase errors due to all off-resonance factors, including B0 field inhomogeneity, eddy current effects, and gradient waveform imperfections. Combined with the previously proposed multichannel modulation postprocessing algorithm (Chen and Wyrwicz, MRM 1999;41:1206-1213), the encoded phase error information can be used to effectively remove geometric distortions in subsequent EPI scans. The proposed EPI distortion correction technique has been shown to be effective in removing distortions due to gradient waveform imperfections and phase gradient-induced eddy current effects. In addition, this new method retains advantages of the earlier method, such as simultaneous correction of different off-resonance factors without use of a complicated phase unwrapping procedure. The effectiveness of this technique is illustrated with EPI studies on phantoms and animal subjects. Implementation to different versions of EPI sequences is also described. Magn Reson Med 45:525-528, 2001. Copyright 2001 Wiley-Liss, Inc.

  14. Greenland ice sheet albedo variability and feedback: 2000-2015

    NASA Astrophysics Data System (ADS)

    Box, J. E.; van As, D.; Fausto, R. S.; Mottram, R.; Langen, P. P.; Steffen, K.

    2015-12-01

    Absorbed solar irradiance represents the dominant source of surface melt energy for Greenland ice. Surface melting has increased as part of a positive feedback amplifier due to surface darkening. The 16 most recent summers of observations from the NASA MODIS sensor indicate a darkening exceeding 6% in July when most melting occurs. Without the darkening, the increase in surface melting would be roughly half as large. A minority of the albedo decline signal may be from sensor degradation. So, in this study, MOD10A1 and MCD43 albedo products from MODIS are evaluated for sensor degradation and anisotropic reflectance errors. Errors are minimized through calibration to GC-Net and PROMICE Greenland snow and ice ground control data. The seasonal and spatial variability in Greenland snow and ice albedo over a 16 year period is presented, including quantifying changing absorbed solar irradiance and melt enhancement due to albedo feedback using the DMI HIRHAM5 5 km model.

  15. Active Mirror Predictive and Requirements Verification Software (AMP-ReVS)

    NASA Technical Reports Server (NTRS)

    Basinger, Scott A.

    2012-01-01

    This software is designed to predict large active mirror performance at various stages in the fabrication lifecycle of the mirror. It was developed for 1-meter class powered mirrors for astronomical purposes, but is extensible to other geometries. The package accepts finite element model (FEM) inputs and laboratory measured data for large optical-quality mirrors with active figure control. It computes phenomenological contributions to the surface figure error using several built-in optimization techniques. These phenomena include stresses induced in the mirror by the manufacturing process and the support structure, the test procedure, high spatial frequency errors introduced by the polishing process, and other process-dependent deleterious effects due to light-weighting of the mirror. Then, depending on the maturity of the mirror, it either predicts the best surface figure error that the mirror will attain, or it verifies that the requirements for the error sources have been met once the best surface figure error has been measured. The unique feature of this software is that it ties together physical phenomenology with wavefront sensing and control techniques and various optimization methods including convex optimization, Kalman filtering, and quadratic programming to both generate predictive models and to do requirements verification. This software combines three distinct disciplines: wavefront control, predictive models based on FEM, and requirements verification using measured data in a robust, reusable code that is applicable to any large optics for ground and space telescopes. The software also includes state-of-the-art wavefront control algorithms that allow closed-loop performance to be computed. It allows for quantitative trade studies to be performed for optical systems engineering, including computing the best surface figure error under various testing and operating conditions. 
After the mirror manufacturing process and testing have been completed, the software package can be used to verify that the underlying requirements have been met.

  16. Robust radio interferometric calibration using the t-distribution

    NASA Astrophysics Data System (ADS)

    Kazemi, S.; Yatawatta, S.

    2013-10-01

    A major stage of radio interferometric data processing is calibration or the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases, this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model would have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration such as the loss in flux or coherence, and the appearance of spurious sources, could be attributed to the deviations of the underlying noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t-distribution. Student's t-noise is a special case of Gaussian noise when the variance is unknown. Unlike Gaussian-noise-model-based calibration, traditional least-squares minimization would not directly extend to a case when we have a Student's t-noise model. Therefore, we use a variant of the expectation-maximization algorithm, called the expectation-conditional maximization either algorithm, when we have a Student's t-noise model and use the Levenberg-Marquardt algorithm in the maximization step. We give simulation results to show the robustness of the proposed calibration method as opposed to traditional Gaussian-noise-model-based calibration, especially in preserving the flux of weaker sources that are not included in the calibration model.
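
The ECM machinery reduces, in its simplest form, to iteratively reweighted estimation in which outliers are down-weighted according to the t-distribution. A toy sketch for a robust location estimate (not the paper's full Levenberg-Marquardt calibration; the degrees of freedom `nu` and the known noise scale `s2` are assumptions of this illustration):

```python
def robust_mean(data, nu=3.0, s2=1.0, iters=50):
    """Estimate a location parameter under Student's t noise with
    known scale s2, by iteratively reweighting residuals
    (an EM-style update)."""
    mu = sum(data) / len(data)            # start from the plain mean
    for _ in range(iters):
        # E-step: points with large residuals get small weights.
        w = [(nu + 1.0) / (nu + (x - mu) ** 2 / s2) for x in data]
        # M-step: weighted mean.
        mu = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
    return mu

data = [1.0, 1.1, 0.9, 1.05, 0.95, 8.0]   # one strong outlier
print(robust_mean(data))   # ≈ 1.08, while the plain mean is ≈ 2.17
```

This mirrors the qualitative point of the paper: under a Gaussian model the outlier would drag the estimate, while the t-based weights suppress it, preserving the contribution of the well-behaved data.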

  17. Estimate of higher order ionospheric errors in GNSS positioning

    NASA Astrophysics Data System (ADS)

    Hoque, M. Mainul; Jakowski, N.

    2008-10-01

    Precise navigation and positioning using GPS/GLONASS/Galileo require the ionospheric propagation errors to be accurately determined and corrected for. The current dual-frequency method of ionospheric correction ignores higher order ionospheric errors such as the second and third order ionospheric terms in the refractive index formula and errors due to bending of the signal. The total electron content (TEC) is assumed to be the same at the two GPS frequencies. All these assumptions lead to erroneous estimations and corrections of the ionospheric errors. In this paper a rigorous treatment of these problems is presented. Different approximation formulas have been proposed to correct errors due to excess path length in addition to the free space path length, TEC difference at two GNSS frequencies, and the third-order ionospheric term. The GPS dual-frequency residual range errors can be corrected within millimeter level accuracy using the proposed correction formulas.
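
The first-order correction the paper builds on is the standard dual-frequency ionosphere-free combination: to first order the ionospheric group delay is 40.3·TEC/f², so measurements at two frequencies allow it to cancel. A sketch with the GPS L1/L2 frequencies (the range and TEC values are synthetic):

```python
F1, F2 = 1575.42e6, 1227.60e6          # GPS L1 and L2 frequencies (Hz)

def iono_delay(tec, f):
    """First-order ionospheric group delay in meters
    (TEC in electrons/m^2)."""
    return 40.3 * tec / (f * f)

def iono_free(p1, p2):
    """Dual-frequency ionosphere-free pseudorange combination:
    cancels the first-order 40.3*TEC/f^2 term exactly."""
    return (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)

rho = 22_000_000.0                     # synthetic geometric range (m)
tec = 5e17                             # a plausible slant TEC value
p1 = rho + iono_delay(tec, F1)         # ≈ 8.1 m of delay on L1
p2 = rho + iono_delay(tec, F2)         # ≈ 13.4 m of delay on L2
print(iono_free(p1, p2) - rho)         # ≈ 0: first-order term cancels
```

What remains after this combination is exactly the higher-order residual (second- and third-order terms plus ray bending) that the paper proposes approximation formulas for.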

  18. Comparison of technique errors of intraoral radiographs taken on film v photostimulable phosphor (PSP) plates.

    PubMed

    Zhang, Wenjian; Huynh, Carolyn P; Abramovitch, Kenneth; Leon, Inga-Lill K; Arvizu, Liliana

    2012-06-01

    The objective of this study was to compare the technical errors of intraoral radiographs exposed on film v photostimulable phosphor (PSP) plates. The intraoral radiographic images exposed on phantoms from preclinical practical exams of dental and dental hygiene students were used. Each exam consisted of 10 designated periapical and bitewing views. A total of 107 film sets and 122 PSP sets were evaluated for technique errors, including placement, elongation, foreshortening, overlapping, cone cut, receptor bending, density, mounting, dot in apical area, and others. Some errors were further subcategorized as minor, major, or remake depending on the severity. The percentages of radiographs with various errors were compared between film and PSP by Fisher's Exact Test. Compared with film, there were significantly fewer PSP foreshortening, elongation, and bending errors, but significantly more placement and overlapping errors. Using a wrong-sized receptor due to the similarity of the color of the package sleeves is a unique PSP error. Optimum image quality is attainable with PSP plates as well as film. When switching from film to a PSP digital environment, more emphasis is necessary for placing the PSP plates, especially those with excessive packet edge, and then correcting the corresponding angulation for the beam alignment. Better design for improving intraoral visibility and easy identification of different-sized PSP plates will improve the clinician's technical performance with this receptor.

  19. Cirrus Cloud Retrieval Using Infrared Sounding Data: Multilevel Cloud Errors.

    NASA Astrophysics Data System (ADS)

    Baum, Bryan A.; Wielicki, Bruce A.

    1994-01-01

    In this study we perform an error analysis for cloud-top pressure retrieval using the High-Resolution Infrared Radiometric Sounder (HIRS/2) 15-µm CO2 channels for the two-layer case of transmissive cirrus overlying an overcast, opaque stratiform cloud. This analysis includes standard deviation and bias error due to instrument noise and the presence of two cloud layers, the lower of which is opaque. Instantaneous cloud pressure retrieval errors are determined for a range of cloud amounts (0.1-1.0) and cloud-top pressures (850-250 mb). Large cloud-top pressure retrieval errors are found to occur when a lower opaque layer is present underneath an upper transmissive cloud layer in the satellite field of view (FOV). Errors tend to increase with decreasing upper-cloud effective cloud amount and with decreasing cloud height (increasing pressure). Errors in retrieved upper-cloud pressure result in corresponding errors in derived effective cloud amount. For the case in which a HIRS FOV has two distinct cloud layers, the difference between the retrieved and actual cloud-top pressure is positive in all cases, meaning that the retrieved upper-cloud height is lower than the actual upper-cloud height. In addition, errors in retrieved cloud pressure are found to depend upon the lapse rate between the low-level cloud top and the surface. We examined which sounder channel combinations would minimize the total errors in derived cirrus cloud height caused by instrument noise and by the presence of a lower-level cloud. We find that while the sounding channels that peak between 700 and 1000 mb minimize random errors, the sounding channels that peak at 300-500 mb minimize bias errors. For a cloud climatology, the bias errors are most critical.

  20. A joint source-channel distortion model for JPEG compressed images.

    PubMed

    Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C

    2006-06-01

    The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-coding modulation, and run-length coding are included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
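
The quality metric used throughout, PSNR, is a simple function of the mean-squared distortion. For 8-bit images it can be computed as follows (toy four-pixel "images" for illustration):

```python
import math

def psnr(ref, test, peak=255):
    """Peak signal-to-noise ratio (dB) between two equal-size
    images given as flat lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak * peak / mse)

ref = [100, 120, 130, 140]
deg = [101, 119, 131, 139]        # every pixel off by one level -> MSE = 1
print(round(psnr(ref, deg), 2))   # → 48.13
```

The paper's claimed 2-dB maximum prediction error is measured on this logarithmic scale, so it is a multiplicative bound on the underlying MSE rather than an additive one.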

  1. Analysis of GRACE Range-rate Residuals with Emphasis on Reprocessed Star-Camera Datasets

    NASA Astrophysics Data System (ADS)

    Goswami, S.; Flury, J.; Naeimi, M.; Bandikova, T.; Guerr, T. M.; Klinger, B.

    2015-12-01

    Since March 2002 the two GRACE satellites orbit the Earth at relatively low altitude. Determination of the gravity field of the Earth, including its temporal variations, from the satellites' orbits and the inter-satellite measurements is the goal of the mission. Yet, the time-variable gravity signal has not been fully exploited. This can be seen in the computed post-fit range-rate residuals. The errors reflected in the range-rate residuals are due to different sources, such as systematic errors, mismodelling errors and tone errors. Here, we analyse the effect of three different star-camera data sets on the post-fit range-rate residuals. On the one hand, we consider the available attitude data; on the other hand, we take two data sets which have been reprocessed at the Institute of Geodesy, Hannover and the Institute of Theoretical Geodesy and Satellite Geodesy, TU Graz, Austria, respectively. The differences in the range-rate residuals computed from the different attitude datasets are analyzed in this study. Details will be given and results will be discussed.

  2. A Systems Modeling Approach for Risk Management of Command File Errors

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila

    2012-01-01

    Commanding errors are often (but not always) due to procedures: either a lack of maturity in the processes, incompleteness of requirements, or lack of compliance with these procedures. Other causes of commanding errors include lack of understanding of system states, inadequate communication, and making hasty changes in standard procedures in response to an unexpected event. In general, it is important to look at the big picture prior to making corrective actions. In the case of errors traced back to procedures, considering the reliability of the process as a metric during its design may help to reduce risk. This metric is obtained by using data from the nuclear industry regarding human reliability. A structured method for the collection of anomaly data will help the operator think systematically about the anomaly and facilitate risk management. Formal models can be used for risk-based design and risk management. A generic set of models can be customized for a broad range of missions.

  3. Irregular analytical errors in diagnostic testing - a novel concept.

    PubMed

    Vogeser, Michael; Seger, Christoph

    2018-02-23

    In laboratory medicine, routine periodic analyses for internal and external quality control measurements interpreted by statistical methods are mandatory for batch clearance. Data analysis of these process-oriented measurements allows for insight into random analytical variation and systematic calibration bias over time. However, in such a setting, any individual sample is not under individual quality control. The quality control measurements act only at the batch level. Quantitative or qualitative data derived for many effects and interferences associated with an individual diagnostic sample can compromise any analyte. It is obvious that a process for a quality-control-sample-based approach of quality assurance is not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term called the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample an irregular analytical error is defined as an inaccuracy (which is the deviation from a reference measurement procedure result) of a test result that is so high it cannot be explained by measurement uncertainty of the utilized routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be coined irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or an individual single sample associated processing error in the analytical process. 
Currently, the availability of reference measurement procedures is still highly limited, but LC-isotope-dilution mass spectrometry methods are increasingly used for pre-market validation of routine diagnostic assays (these tests also involve substantial sets of clinical validation samples). Based on this definition/terminology, we list recognized causes of irregular analytical error as a risk catalog for clinical chemistry in this article. These issues include reproducible individual analytical errors (e.g. caused by anti-reagent antibodies) and non-reproducible, sporadic errors (e.g. errors due to incorrect pipetting volume due to air bubbles in a sample), which can both lead to inaccurate results and risks for patients.

  4. On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)

    NASA Astrophysics Data System (ADS)

    Huffman, G. J.

    2013-12-01

    Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an on-going problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of the precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5°x2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to simple Gaussian expressions at the monthly 2.5°x2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes on the scatter diagram (one of the values is exactly or nearly zero, while the other value is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, only depend on the aggregated precipitation rate. 
Is this sufficient, or is it necessary to aggregate the precipitation error estimates across the time/space data cube used for averaging? At least for small time/space data cubes it would seem that the detailed variables that affect each precipitation error estimate in the aggregation, such as sensor type, land/ocean surface type, convective/stratiform type, and so on, drive variations that must be accounted for explicitly.
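
The qualitative claim that aggregation pulls scatter toward the 1:1 line follows from the 1/√N reduction of independent random error; a quick seeded simulation with a skewed synthetic field makes the scale of the effect concrete (the rates and noise level are invented for illustration):

```python
import math
import random

random.seed(42)

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def block_mean(xs, n=16):
    return [sum(xs[i:i + n]) / n for i in range(0, len(xs), n)]

# Skewed synthetic "rain rates" and noisy per-pixel retrievals.
truth = [random.expovariate(1.0) for _ in range(4096)]
noisy = [t + random.gauss(0.0, 1.0) for t in truth]

fine = rmse(truth, noisy)                            # ≈ 1 (the noise sigma)
coarse = rmse(block_mean(truth), block_mean(noisy))  # ≈ 1/4 for 16-pixel blocks
print(fine, coarse)
```

This captures only the random-error component; the correlated retrieval errors discussed above do not average down this fast, which is exactly why the aggregation question is non-trivial.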

  5. Airplane wing vibrations due to atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Pastel, R. L.; Caruthers, J. E.; Frost, W.

    1981-01-01

    The magnitude of error introduced due to wing vibration when measuring atmospheric turbulence with a wind probe mounted at the wing tip was studied. It was also determined whether accelerometers mounted on the wing tip are needed to correct this error. A spectrum analysis approach is used to determine the error. Estimates of the B-57 wing characteristics are used to simulate the airplane wing, and von Karman's cross spectrum function is used to simulate atmospheric turbulence. It was found that wing vibration introduces large error in measured spectra of turbulence in the frequency range close to the natural frequencies of the wing.

  6. Effects of holding time and measurement error on culturing Legionella in environmental water samples.

    PubMed

    Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G

    2014-10-01

    Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, this same study, without accounting for measurement error, reports more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess impact of holding/shipping time. A total of 2544 samples were analyzed including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. Decision aids for multiple-decision disease management as affected by weather input errors.

    PubMed

    Pfender, W F; Gent, D H; Mahaffee, W F; Coop, L B; Fox, A D

    2011-06-01

    Many disease management decision support systems (DSSs) rely, exclusively or in part, on weather inputs to calculate an indicator for disease hazard. Error in the weather inputs, typically due to forecasting, interpolation, or estimation from off-site sources, may affect model calculations and management decision recommendations. The extent to which errors in weather inputs affect the quality of the final management outcome depends on a number of aspects of the disease management context, including whether management consists of a single dichotomous decision, or of a multi-decision process extending over the cropping season(s). Decision aids for multi-decision disease management typically are based on simple or complex algorithms of weather data which may be accumulated over several days or weeks. It is difficult to quantify accuracy of multi-decision DSSs due to temporally overlapping disease events, existence of more than one solution to optimizing the outcome, opportunities to take later recourse to modify earlier decisions, and the ongoing, complex decision process in which the DSS is only one component. One approach to assessing importance of weather input errors is to conduct an error analysis in which the DSS outcome from high-quality weather data is compared with that from weather data with various levels of bias and/or variance from the original data. We illustrate this analytical approach for two types of DSS, an infection risk index for hop powdery mildew and a simulation model for grass stem rust. Further exploration of analysis methods is needed to address problems associated with assessing uncertainty in multi-decision DSSs.
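
The error-analysis recipe described here, run the DSS on clean weather data, then on data degraded with bias and variance, and compare the resulting decisions, can be sketched with a toy risk index. Everything below (the index formula, band, threshold, error magnitudes) is invented for illustration and is not the hop powdery mildew or stem rust model:

```python
import random

random.seed(1)

def risk_index(temps):
    """Toy infection-risk index: days with temperature in a
    favourable band (thresholds invented for illustration)."""
    return sum(1 for t in temps if 18.0 <= t <= 27.0)

def spray_decision(temps, threshold=5):
    return risk_index(temps) >= threshold

true_temps = [15.0 + 10.0 * random.random() for _ in range(14)]
baseline = spray_decision(true_temps)

def perturbed(temps, bias, sd):
    """Weather inputs degraded by forecast-like bias and noise."""
    return [t + bias + random.gauss(0.0, sd) for t in temps]

# Fraction of trials where the degraded inputs still yield the
# same management decision as the clean data.
agree = sum(spray_decision(perturbed(true_temps, 1.0, 2.0)) == baseline
            for _ in range(1000)) / 1000
print(f"decision agreement under input error: {agree:.0%}")
```

Sweeping `bias` and `sd` over a grid reproduces the paper's sensitivity-analysis idea for a single dichotomous decision; multi-decision DSSs need the same comparison applied over the whole season's decision sequence.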

  8. The Characteristics and Causes of Push-off During Screw Rotor Manufacture, and its Compensation by Multi-axis Machine Path Calculations

    NASA Astrophysics Data System (ADS)

    Wang, X.; Holmes, C. S.

    2015-08-01

    When grinding helical components, errors occur at the beginning and end of the contact path between the component and grinding wheel. This is due to the forces on the component changing as the grinding wheel comes into and out-of full contact with the component. In addition, shaft bending may add depth changes which vary along the length. This may result in an interrupted contact line and increased noise from the rotors. Using on-board scanning, software has been developed to calculate a compensated grinding path, which includes the adjustments of head angle, work rotation and infeed. This grinding path compensates not only lead errors, but also reduces the profile errors as well. The program has been tested in rotor production and the results are shown.

  9. A simple solution for model comparison in bold imaging: the special case of reward prediction error and reward outcomes.

    PubMed

    Erdeniz, Burak; Rohe, Tim; Done, John; Seidler, Rachael D

    2013-01-01

    Conventional neuroimaging techniques provide information about condition-related changes of the BOLD (blood-oxygen-level dependent) signal, indicating only where and when the underlying cognitive processes occur. Recently, with the help of a new approach called "model-based" functional neuroimaging (fMRI), researchers are able to visualize changes in the internal variables of a time varying learning process, such as the reward prediction error or the predicted reward value of a conditional stimulus. However, despite being extremely beneficial to the imaging community in understanding the neural correlates of decision variables, a model-based approach to brain imaging data is also methodologically challenging due to the multicollinearity problem in statistical analysis. There are multiple sources of multicollinearity in functional neuroimaging including investigations of closely related variables and/or experimental designs that do not account for this. The source of multicollinearity discussed in this paper occurs due to correlation between different subjective variables that are calculated very close in time. Here, we review methodological approaches to analyzing such data by discussing the special case of separating the reward prediction error signal from reward outcomes.
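
The multicollinearity problem can be quantified before model fitting by checking the correlation between the two parametric regressors (e.g. prediction error and outcome) and the resulting variance inflation factor, VIF = 1/(1 − r²). A minimal check on synthetic trial-by-trial values (the numbers are invented to mimic the closely-timed-regressor situation):

```python
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def vif(r):
    """Variance inflation factor for a pair of regressors."""
    return 1.0 / (1.0 - r * r)

# Synthetic trial-by-trial values: reward outcomes and prediction
# errors computed moments apart tend to be strongly correlated.
outcome = [1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0]
pred_err = [0.6, -0.4, 0.5, 0.4, -0.5, 0.6, -0.6, 0.3]

r = pearson_r(outcome, pred_err)
print(f"r = {r:.2f}, VIF = {vif(r):.1f}")
```

A large VIF means the GLM cannot cleanly attribute variance to either regressor, which is precisely the condition the reviewed orthogonalization and model-comparison strategies are designed to handle.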

  10. Asymmetric Memory Circuit Would Resist Soft Errors

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G.; Perlman, Marvin

    1990-01-01

    Some nonlinear error-correcting codes more efficient in presence of asymmetry. Combination of circuit-design and coding concepts expected to make integrated-circuit random-access memories more resistant to "soft" errors (temporary bit errors, also called "single-event upsets" due to ionizing radiation). Integrated circuit of new type made deliberately more susceptible to one kind of bit error than to other, and associated error-correcting code adapted to exploit this asymmetry in error probabilities.

  11. Robust Linear Models for Cis-eQTL Analysis.

    PubMed

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

    Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce the adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results than conventional linear models, particularly with respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as those generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
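
    The abstract does not fix a specific robust estimator, so as a hedged illustration the sketch below fits an eQTL-style dosage/expression regression with Huber-weighted iteratively reweighted least squares, one common robust choice; all data, names, and parameter values here are synthetic, not from the paper.

```python
import numpy as np

def huber_irls(X, y, k=1.345, n_iter=50):
    """Iteratively reweighted least squares with Huber weights.

    Down-weights observations with large residuals so a few outliers
    do not dominate the fit (the robust alternative to OLS)."""
    w = np.ones(len(y))
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        u = np.abs(r / s)
        w = np.where(u <= k, 1.0, k / u)            # Huber weight function
    return beta

rng = np.random.default_rng(0)
dosage = rng.integers(0, 3, 200).astype(float)      # 0/1/2 allelic dosage
expr = 1.0 + 0.5 * dosage + rng.normal(0, 0.3, 200) # true effect size 0.5
expr[:5] += 8.0                                     # a few gross outliers
X = np.column_stack([np.ones_like(dosage), dosage])
beta_rob = huber_irls(X, expr)
```

    With the outliers heavily down-weighted, the Huber fit stays close to the simulated effect size; in practice one would use a packaged implementation (e.g., an RLM routine) rather than hand-rolled IRLS.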

  12. Sensitivity analysis of brain morphometry based on MRI-derived surface models

    NASA Astrophysics Data System (ADS)

    Klein, Gregory J.; Teng, Xia; Schoenemann, P. T.; Budinger, Thomas F.

    1998-07-01

    Quantification of brain structure is important for evaluating changes in brain size with growth and aging and for characterizing neurodegenerative disorders. Previous quantification efforts using ex vivo techniques suffered considerable error due to shrinkage of the cerebrum after extraction from the skull, deformation of slices during sectioning, and numerous other factors. In vivo imaging studies of brain anatomy avoid these problems and allow repeated studies following the progression of brain structure changes due to disease or natural processes. We have developed a methodology for obtaining triangular mesh models of the cortical surface from MRI brain datasets. The cortex is segmented from non-brain tissue using a 2D region-growing technique combined with occasional manual edits. Once segmented, thresholding and image morphological operations (erosions and openings) are used to expose the regions between adjacent surfaces in deep cortical folds. A 2D region-following procedure is then used to find a set of contours outlining the cortical boundary on each slice. The contours on all slices are tiled together to form a closed triangular mesh model approximating the cortical surface. This model can be used for calculation of cortical surface area and volume, as well as other parameters of interest. Except for the initial segmentation of the cortex from the skull, the technique is automatic and requires only modest computation time on modern workstations. Though the use of image data avoids many of the pitfalls of ex vivo and sectioning techniques, our MRI-based technique is still vulnerable to errors that may impact the accuracy of estimated brain structure parameters. Potential inaccuracies include segmentation errors due to incorrect thresholding, missed deep sulcal surfaces, falsely segmented holes due to image noise, and surface tiling artifacts. The focus of this paper is the characterization of these errors and how they affect measurements of cortical surface area and volume.
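
    The surface area and volume measurements mentioned above follow from standard formulas on any closed triangle mesh; a minimal sketch (illustrative, not the authors' code) using a toy tetrahedron:

```python
import numpy as np

def mesh_area_volume(vertices, faces):
    """Surface area and enclosed volume of a closed triangular mesh.

    Area: half the norm of each triangle's edge cross product.
    Volume: sum of signed tetrahedron volumes via the divergence
    theorem; faces must be wound consistently outward."""
    v = np.asarray(vertices, dtype=float)
    tri = v[np.asarray(faces)]                 # (n_faces, 3, 3)
    e1 = tri[:, 1] - tri[:, 0]
    e2 = tri[:, 2] - tri[:, 0]
    cross = np.cross(e1, e2)
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    signed = np.einsum('ij,ij->i', tri[:, 0],
                       np.cross(tri[:, 1], tri[:, 2])).sum() / 6.0
    return area, abs(signed)

# Toy mesh: tetrahedron with three unit-length legs at the origin.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
area, vol = mesh_area_volume(verts, faces)
```

    For this tetrahedron the exact values are area = 1.5 + √3/2 and volume = 1/6, which the routine reproduces.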

  13. Are physicians' perceptions of healthcare quality and practice satisfaction affected by errors associated with electronic health record use?

    PubMed Central

    Wright, Adam; Simon, Steven R; Jenter, Chelsea A; Soran, Christine S; Volk, Lynn A; Bates, David W; Poon, Eric G

    2011-01-01

    Background Electronic health record (EHR) adoption is a national priority in the USA, and well-designed EHRs have the potential to improve quality and safety. However, physicians are reluctant to implement EHRs due to financial constraints, usability concerns, and apprehension about unintended consequences, including the introduction of medical errors related to EHR use. The goal of this study was to characterize and describe physicians' attitudes towards three consequences of EHR implementation: (1) the potential for EHRs to introduce new errors; (2) improvements in healthcare quality; and (3) changes in overall physician satisfaction. Methods Using data from a 2007 statewide survey of Massachusetts physicians, we conducted multivariate regression analysis to examine relationships between practice characteristics, perceptions of EHR-related errors, perceptions of healthcare quality, and overall physician satisfaction. Results 30% of physicians agreed that EHRs create new opportunities for error, but only 2% believed their EHR has created more errors than it prevented. With respect to perceptions of quality, there was no significant association between perceptions of EHR-associated errors and perceptions of EHR-associated changes in healthcare quality. Finally, physicians who believed that EHRs created new opportunities for error were less likely to be satisfied with their practice situation (adjusted OR 0.49, p=0.001). Conclusions Almost one third of physicians perceived that EHRs create new opportunities for error. This perception was associated with lower levels of physician satisfaction. PMID:22199017

  14. Anterior Cingulate Cortex and Cognitive Control: Neuropsychological and Electrophysiological Findings in Two Patients with Lesions to Dorsomedial Prefrontal Cortex

    ERIC Educational Resources Information Center

    Lovstad, M.; Funderud, I.; Meling, T.; Kramer, U. M.; Voytek, B.; Due-Tonnessen, P.; Endestad, T.; Lindgren, M.; Knight, R. T.; Solbakk, A. K.

    2012-01-01

    Whereas neuroimaging studies of healthy subjects have demonstrated an association between the anterior cingulate cortex (ACC) and cognitive control functions, including response monitoring and error detection, lesion studies are sparse and have produced mixed results. Due to largely normal behavioral test results in two patients with medial…

  15. Effects of Kindergarten Retention on Children's Social-Emotional Development: An Application of Propensity Score Method to Multivariate, Multilevel Data

    ERIC Educational Resources Information Center

    Hong, Guanglei; Yu, Bing

    2008-01-01

    This study examines the effects of kindergarten retention on children's social-emotional development in the early, middle, and late elementary years. Previous studies have generated mixed results partly due to some major methodological challenges, including selection bias, measurement error, and divergent perceptions of multiple respondents in…

  16. A Study on Mutil-Scale Background Error Covariances in 3D-Var Data Assimilation

    NASA Astrophysics Data System (ADS)

    Zhang, Xubin; Tan, Zhe-Min

    2017-04-01

    The construction of background error covariances is a key component of three-dimensional variational data assimilation. In numerical weather prediction there are background errors at different scales, along with interactions among them. However, the influence of these errors and their interactions cannot be represented in background error covariance statistics estimated by the leading methods. It is therefore necessary to construct background error covariances that account for multi-scale interactions among errors. Using the NMC method, this article first estimates the background error covariances at given model-resolution scales. Then information about errors whose scales are larger or smaller than the given ones is introduced, using different nesting techniques, to estimate the corresponding covariances. Comparison of the three background error covariance statistics, each influenced by error information at different scales, reveals that the background error variances are enhanced, particularly at large scales and higher levels, when larger-scale error information is introduced through the lateral boundary condition provided by a lower-resolution model. On the other hand, the variances are reduced at medium scales at higher levels, while showing slight improvement at lower levels in the nested domain, especially at medium and small scales, when smaller-scale error information is introduced by nesting a higher-resolution model. In addition, the introduction of larger- (smaller-) scale error information leads to larger (smaller) horizontal and vertical correlation scales of background errors. Considering the multivariate correlations, the Ekman coupling increases (decreases) when larger- (smaller-) scale error information is included, whereas the geostrophic coupling in the free atmosphere weakens in both situations.
The three covariances obtained above are then used in a data assimilation and model forecast system, and analysis-forecast cycles are conducted for a period of one month. Comparison of both analyses and forecasts from this system shows that the trends in analysis-increment variation, as error information at different scales is introduced, are consistent with the trends in the variances and correlations of background errors. In particular, the introduction of smaller-scale errors leads to larger-amplitude analysis increments for winds at medium scales at the heights of both the high- and low-level jets. Analysis increments for both temperature and humidity are also greater at the corresponding scales at middle and upper levels under this circumstance. These analysis increments improve the intensity of the jet-convection system, which includes jets at different levels and the coupling between them associated with latent heat release, and these changes in the analyses contribute to better forecasts for winds and temperature in the corresponding areas. When smaller-scale errors are included, analysis increments for humidity are enhanced significantly at large scales at lower levels, moistening the southern analyses. This humidification helps correct the dry bias there and eventually improves the forecast skill for humidity. Moreover, inclusion of larger- (smaller-) scale errors is beneficial for the forecast quality of heavy (light) precipitation at large (small) scales, due to the amplification (diminution) of intensity and area in the precipitation forecasts, but tends to overestimate (underestimate) light (heavy) precipitation.
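
    The NMC method referenced above estimates background error covariances from differences between pairs of forecasts valid at the same time (e.g., 24 h minus 12 h). A hedged sketch of that core step, with synthetic fields standing in for an NWP forecast archive (all sizes and noise levels invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n_state, n_pairs = 50, 400

# Hypothetical forecast pairs valid at the same time; in practice
# these come from an operational forecast archive, not simulation.
truth = rng.normal(size=(n_pairs, n_state))
f12 = truth + 0.5 * rng.normal(size=(n_pairs, n_state))  # 12 h forecasts
f24 = truth + 0.8 * rng.normal(size=(n_pairs, n_state))  # 24 h forecasts

d = f24 - f12                  # forecast-difference samples
d -= d.mean(axis=0)            # remove the mean difference
B = d.T @ d / (n_pairs - 1)    # proxy for the background error covariance

variances = np.diag(B)         # per-variable background error variances
```

    The resulting matrix B is symmetric positive semi-definite by construction; operational systems additionally filter, localize, and decompose it by scale, which this sketch omits.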

  17. Errors in clinical laboratories or errors in laboratory medicine?

    PubMed

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. 
In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes in pre- and post-examination steps must be minimized to guarantee the total quality of laboratory services.

  18. The Application of Social Characteristic and L1 Optimization in the Error Correction for Network Coding in Wireless Sensor Networks

    PubMed Central

    Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue

    2018-01-01

    One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently, given the energy limitations of sensor nodes. Network coding increases the network throughput of WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this special property of error propagation in network coding, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. What is more, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi’s model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Based on social network theory, the informative relay nodes are selected and marked with high trust values. The two methods, L1 optimization and the use of the social characteristic, coordinate with each other and can correct propagated errors whose fraction is even exactly 100% in WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments. PMID:29401668

  19. The Application of Social Characteristic and L1 Optimization in the Error Correction for Network Coding in Wireless Sensor Networks.

    PubMed

    Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue

    2018-02-03

    One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently, given the energy limitations of sensor nodes. Network coding increases the network throughput of WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this special property of error propagation in network coding, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. What is more, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Based on social network theory, the informative relay nodes are selected and marked with high trust values. The two methods, L1 optimization and the use of the social characteristic, coordinate with each other and can correct propagated errors whose fraction is even exactly 100% in WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments.
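
    The abstract does not spell out its L1 decoder; as a hedged illustration of the general idea under standard compressed-sensing assumptions, the sketch below recovers a sparse corruption from a syndrome by iterative soft-thresholding on the lasso relaxation. The matrix F, the sizes, and all parameters are invented for the example and are not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 60, 40                          # error length, syndrome length
F = rng.normal(size=(m, n)) / np.sqrt(m)   # hypothetical syndrome matrix

e_true = np.zeros(n)
support = rng.choice(n, 4, replace=False)
e_true[support] = rng.normal(0, 3, 4)  # sparse corruption on a few links
s = F @ e_true                         # observed syndrome s = F e

# ISTA: e <- soft(e - t * F^T (F e - s), t * lam), minimising
# 0.5 * ||F e - s||^2 + lam * ||e||_1 to favour sparse solutions.
t = 1.0 / np.linalg.norm(F, 2) ** 2
lam = 0.01
e = np.zeros(n)
for _ in range(5000):
    g = e - t * F.T @ (F @ e - s)
    e = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)
```

    With a 4-sparse corruption and 40 syndrome equations, the L1 relaxation recovers the error vector almost exactly in this noiseless toy setting.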

  20. Flux Sampling Errors for Aircraft and Towers

    NASA Technical Reports Server (NTRS)

    Mahrt, Larry

    1998-01-01

    Various errors and influences leading to differences between tower- and aircraft-measured fluxes are surveyed. This survey is motivated by reports in the literature that aircraft fluxes are sometimes smaller than tower-measured fluxes. Both tower and aircraft flux errors are larger with surface heterogeneity due to several independent effects. Surface heterogeneity may cause tower flux errors to increase with decreasing wind speed. Techniques to assess flux sampling error are reviewed. Such error estimates suffer various degrees of inapplicability in real geophysical time series due to nonstationarity of tower time series (or inhomogeneity of aircraft data). A new measure for nonstationarity is developed that eliminates assumptions on the form of the nonstationarity inherent in previous methods. When this nonstationarity measure becomes large, the surface energy imbalance increases sharply. Finally, strategies for obtaining adequate flux sampling using repeated aircraft passes and grid patterns are outlined.

  1. Errors in Bibliographic Citations: A Continuing Problem.

    ERIC Educational Resources Information Center

    Sweetland, James H.

    1989-01-01

    Summarizes studies examining citation errors and illustrates errors resulting from a lack of standardization, misunderstanding of foreign languages, failure to examine the document cited, and general lack of training in citation norms. It is argued that the failure to detect and correct citation errors is due to diffusion of responsibility in the…

  2. Management of failure after surgery for gastro-esophageal reflux disease.

    PubMed

    Gronnier, C; Degrandi, O; Collet, D

    2018-04-01

    Surgical treatment of gastro-esophageal reflux disease (ST-GERD) is well-codified and offers an alternative to long-term medical treatment, with better efficacy for short- and long-term outcomes. However, failure of ST-GERD is observed in 2-20% of patients; its management is challenging and not standardized. The aim of this study is to analyze the causes of failure and to provide a treatment algorithm. The clinical presentation of ST-GERD failure is variable, including persistent reflux, dysphagia or permanent discomfort leading to an important degradation of the quality of life. A morphological and functional pre-therapeutic evaluation is necessary to: (i) determine whether the symptoms are due to recurrence of reflux or to an error in the initial indication and (ii) understand the cause of the failure. The most frequent causes of failure of ST-GERD include errors in the initial indication, which often only need medical treatment, and surgical technical errors, for which redo surgery can be difficult. Multidisciplinary management is necessary in order to offer the best-adapted treatment. Copyright © 2018. Published by Elsevier Masson SAS.

  3. Effectively parameterizing dissipative particle dynamics using COSMO-SAC: A partition coefficient study

    NASA Astrophysics Data System (ADS)

    Saathoff, Jonathan

    2018-04-01

    Dissipative Particle Dynamics (DPD) provides a tool for studying phase behavior and interfacial phenomena for complex mixtures and macromolecules. Methods to quickly and automatically parameterize DPD greatly increase its effectiveness. One such method is to map activity coefficients predicted by COSMO-SAC onto DPD parameter sets. However, there are serious limitations to the accuracy of this mapping, including the inability of single DPD beads to reproduce asymmetric infinite-dilution activity coefficients, the loss of precision when reusing parameters for different molecular fragments, and the error due to bonding beads together. This report describes these effects in quantitative detail and provides methods to mitigate much of their deleterious impact, including a novel approach to remove errors caused by bonding DPD beads together. Using these methods, log hexane/water partition coefficients were calculated for 61 molecules. The root mean-squared error for these calculations was determined to be 0.14, a very low value, with respect to the final mapping procedure. Cognizance of the above limitations can greatly enhance the predictive power of DPD.

  4. Density Functional Calculations of Native Defects in CH 3 NH 3 PbI 3 : Effects of Spin–Orbit Coupling and Self-Interaction Error

    DOE PAGES

    Du, Mao-Hua

    2015-04-02

    We know that native point defects play an important role in the carrier transport properties of CH3NH3PbI3. However, the nature of many important defects remains controversial, due partly to conflicting results reported by recent density functional theory (DFT) calculations. In this Letter, we show that the self-interaction error and the neglect of spin–orbit coupling (SOC) in many previous DFT calculations resulted in incorrect positions of the valence and conduction band edges, although their difference, which is the band gap, is in good agreement with the experimental value. Moreover, this problem has led to incorrect predictions of defect-level positions. Hybrid density functional calculations, which partially correct the self-interaction error and include the SOC, show that, among native point defects (including vacancies, interstitials, and antisites), only the iodine vacancy and its complexes induce deep electron and hole trapping levels inside the band gap, acting as nonradiative recombination centers.

  5. Errors made by animals in memory paradigms are not always due to failure of memory.

    PubMed

    Wilkie, D M; Willson, R J; Carr, J A

    1999-01-01

    It is commonly assumed that errors in animal memory paradigms such as delayed matching to sample, radial mazes, and food-cache recovery are due to failures in memory for information necessary to perform the task successfully. A body of research, reviewed here, suggests that this is not always the case: animals sometimes make errors despite apparently being able to remember the appropriate information. In this paper a case study of this phenomenon is described, along with a demonstration of a simple procedural modification that successfully reduced these non-memory errors, thereby producing a better measure of memory.

  6. Two-dimensional confocal laser scanning microscopy image correlation for nanoparticle flow velocimetry

    NASA Astrophysics Data System (ADS)

    Jun, Brian; Giarra, Matthew; Golz, Brian; Main, Russell; Vlachos, Pavlos

    2016-11-01

    We present a methodology to mitigate the major sources of error associated with two-dimensional confocal laser scanning microscopy (CLSM) images of nanoparticles flowing through a microfluidic channel. The correlation-based velocity measurements from CLSM images are subject to random error due to the Brownian motion of nanometer-sized tracer particles, and a bias error due to the formation of images by raster scanning. Here, we develop a novel ensemble phase correlation with dynamic optimal filter that maximizes the correlation strength, which diminishes the random error. In addition, we introduce an analytical model of CLSM measurement bias error correction due to two-dimensional image scanning of tracer particles. We tested our technique using both synthetic and experimental images of nanoparticles flowing through a microfluidic channel. We observed that our technique reduced the error by up to a factor of ten compared to ensemble standard cross correlation (SCC) for the images tested in the present work. Subsequently, we will assess our framework further, by interrogating nanoscale flow in the cell culture environment (transport within the lacunar-canalicular system) to demonstrate our ability to accurately resolve flow measurements in a biological system.
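
    The building block behind the ensemble phase-correlation technique described above is ordinary phase correlation: correlating only the spectral phase of two frames yields a sharp displacement peak. A minimal single-pair sketch (the ensemble averaging and dynamic optimal filter of the paper are omitted; the frames here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(3)
frame_a = rng.random((64, 64))
dy, dx = 5, 3
frame_b = np.roll(frame_a, (dy, dx), axis=(0, 1))  # known displacement

Fa = np.fft.fft2(frame_a)
Fb = np.fft.fft2(frame_b)
R = Fa * np.conj(Fb)
R /= np.abs(R) + 1e-12             # keep only the spectral phase
corr = np.fft.ifft2(R).real        # delta-like peak at the shift
peak = np.unravel_index(np.argmax(corr), corr.shape)
shift = tuple(int((-p) % n) for p, n in zip(peak, frame_a.shape))
```

    For a pure circular shift the peak location recovers (dy, dx) exactly; with real particle images, noise and Brownian wander broaden the peak, which is what the ensemble correlation and filtering in the paper address.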

  7. Simulating the performance of a distance-3 surface code in a linear ion trap

    NASA Astrophysics Data System (ADS)

    Trout, Colin J.; Li, Muyuan; Gutiérrez, Mauricio; Wu, Yukai; Wang, Sheng-Tao; Duan, Luming; Brown, Kenneth R.

    2018-04-01

    We explore the feasibility of implementing a small surface code with 9 data qubits and 8 ancilla qubits, commonly referred to as surface-17, using a linear chain of 171Yb+ ions. Two-qubit gates can be performed between any two ions in the chain with gate time increasing linearly with ion distance. Measurement of the ion state by fluorescence requires that the ancilla qubits be physically separated from the data qubits to avoid errors on the data due to scattered photons. We minimize the time required to measure one round of stabilizers by optimizing the mapping of the two-dimensional surface code to the linear chain of ions. We develop a physically motivated Pauli error model that allows for fast simulation and captures the key sources of noise in an ion trap quantum computer including gate imperfections and ion heating. Our simulations showed a consistent requirement of a two-qubit gate fidelity of ≥99.9% for the logical memory to have a better fidelity than physical two-qubit operations. Finally, we perform an analysis of the error subsets from the importance sampling method used to bound the logical error rates to gain insight into which error sources are particularly detrimental to error correction.

  8. Spelling errors among children with ADHD symptoms: the role of working memory.

    PubMed

    Re, Anna Maria; Mirandola, Chiara; Esposito, Stefania Sara; Capodieci, Agnese

    2014-09-01

    Research has shown that children with attention deficit/hyperactivity disorder (ADHD) may present a series of academic difficulties, including spelling errors. Given that correct spelling is supported by the phonological component of working memory (PWM), the present study examined whether the spelling difficulties of children with ADHD are emphasized when children's PWM is overloaded. A group of 19 children with ADHD symptoms (between 8 and 11 years of age), and a group of typically developing children matched for age, schooling, gender, rated intellectual abilities, and socioeconomic status, were administered two dictation texts: one under typical conditions and one under a pre-load condition that required the participants to remember a series of digits while writing. The results confirmed that children with ADHD symptoms have spelling difficulties, producing a higher percentage of errors than the control group, and that these difficulties are enhanced under a higher load of PWM. An analysis of errors showed that this holds true especially for phonological errors. The increase in errors in the PWM condition was not due to a tradeoff between working memory and writing, as children with ADHD also performed more poorly on the PWM task. The theoretical and practical implications are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Investigation on synchronization of the offset printing process for fine patterning and precision overlay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Dongwoo; Lee, Eonseok; Kim, Hyunchang

    2014-06-21

    Offset printing processes are promising candidates for producing printed electronics due to their capacity for fine patterning and suitability for mass production. To print high-resolution patterns with good overlay using offset printing, the velocities of the two contact surfaces between which ink is transferred should be synchronized perfectly. However, the exact velocity of the contact surfaces is unknown due to several imperfections, including tolerances, blanket swelling, and velocity ripple, which prevents the system from being operated in the synchronized condition. In this paper, a novel measurement method based on the sticking model of friction force was proposed to determine the best synchronized condition, i.e., the condition in which the rate of synchronization error is minimized. It was verified by experiment that the friction force can accurately represent the rate of synchronization error. Based on the measurement results of the synchronization error, the allowable margin of synchronization error when printing high-resolution patterns was investigated experimentally using reverse offset printing. There is a region where the patterning performance is unchanged even though the synchronization error is varied, and this may be viewed as indirect evidence that printability is secured when there is no slip at the contact interface. To understand what happens at the contact surfaces during ink transfer, a deformation model of the blanket's surface was developed. The model estimates how much deformation of the blanket's surface can be borne by the synchronization error when there is no slip at the contact interface. In addition, the model shows that the synchronization error results in scale variation in the machine direction (MD), which means that the printing registration in the MD can be adjusted actively by controlling the synchronization if there is a sufficient margin of synchronization error to guarantee printability.
The effect of synchronization on the printing registration was verified experimentally using gravure offset printing. The variations in synchronization result in differences in the MD scale, and the measured MD scale matches exactly with the modeled MD scale.

  10. Error of semiclassical eigenvalues in the semiclassical limit - an asymptotic analysis of the Sinai billiard

    NASA Astrophysics Data System (ADS)

    Dahlqvist, Per

    1999-10-01

    We estimate the error in the semiclassical trace formula for the Sinai billiard under the assumption that the largest source of error is due to penumbra diffraction: namely, diffraction effects for trajectories passing within a distance R·O((kR)^(-2/3)) of the disc and trajectories being scattered in very forward directions. Here k is the momentum and R the radius of the scatterer. The semiclassical error is estimated by perturbing the Berry-Keating formula. The analysis necessitates an asymptotic analysis of very long periodic orbits. This is obtained within an approximation originally due to Baladi, Eckmann and Ruelle. We find that the average error, for sufficiently large values of kR, will exceed the mean level spacing.

  11. Modeling the Error of the Medtronic Paradigm Veo Enlite Glucose Sensor.

    PubMed

    Biagi, Lyvia; Ramkissoon, Charrise M; Facchinetti, Andrea; Leal, Yenny; Vehi, Josep

    2017-06-12

    Continuous glucose monitors (CGMs) are prone to inaccuracy due to time lags, sensor drift, calibration errors, and measurement noise. The aim of this study is to derive the model of the error of the second generation Medtronic Paradigm Veo Enlite (ENL) sensor and compare it with the Dexcom SEVEN PLUS (7P), G4 PLATINUM (G4P), and advanced G4 for Artificial Pancreas studies (G4AP) systems. An enhanced methodology to a previously employed technique was utilized to dissect the sensor error into several components. The dataset used included 37 inpatient sessions in 10 subjects with type 1 diabetes (T1D), in which CGMs were worn in parallel and blood glucose (BG) samples were analyzed every 15 ± 5 min. Calibration error and sensor drift of the ENL sensor were best described by a linear relationship related to the gain and offset. The mean time lag estimated by the model is 9.4 ± 6.5 min. The overall average mean absolute relative difference (MARD) of the ENL sensor was 11.68 ± 5.07%. Calibration error had the highest contribution to total error in the ENL sensor. This was also reported in the 7P, G4P, and G4AP. The model of the ENL sensor error will be useful to test the in silico performance of CGM-based applications, i.e., the artificial pancreas, employing this kind of sensor.
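
Two of the quantities in this abstract are simple to state in code: the MARD metric, and a linear gain/offset model of calibration error. The sketch below uses made-up sample data and an ordinary least-squares fit; it is not the study's enhanced decomposition methodology.

```python
# MARD and a linear gain/offset calibration-error model (illustrative data).

def mard(cgm, bg):
    """Mean absolute relative difference between CGM and reference BG, in %."""
    return 100.0 * sum(abs(c - b) / b for c, b in zip(cgm, bg)) / len(cgm)

def fit_gain_offset(cgm, bg):
    """Least-squares fit of cgm ~ gain * bg + offset (calibration error model)."""
    n = len(bg)
    mx, my = sum(bg) / n, sum(cgm) / n
    gain = sum((x - mx) * (y - my) for x, y in zip(bg, cgm)) / \
           sum((x - mx) ** 2 for x in bg)
    return gain, my - gain * mx

bg  = [90.0, 120.0, 150.0, 180.0, 210.0]   # reference samples (mg/dL), made up
cgm = [99.0, 126.0, 160.0, 185.0, 220.0]   # paired sensor readings, made up
print(f"MARD = {mard(cgm, bg):.1f}%")
print("gain, offset =", fit_gain_offset(cgm, bg))
```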

  12. Extending Moore's Law via Computationally Error Tolerant Computing.

    DOE PAGES

    Deng, Bobin; Srikanth, Sriseshan; Hein, Eric R.; ...

    2018-03-01

    Dennard scaling has ended. Lowering the supply voltage (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further scaling (down) no longer acceptable as a means to lower the power required by a processor core. However, it is possible to correct the occasional errors caused by lower Vdd in an efficient manner and effectively lower power. By deploying the right amount and kind of redundancy, we can strike a balance between the overhead incurred in achieving reliability and the energy savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error-correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling computational error correction at a fraction of the overhead of conventional approaches. We use the RRNS scheme to design a Computationally-Redundant, Energy-Efficient core, including the microarchitecture, Instruction Set Architecture (ISA) and RRNS-centered algorithms. Finally, from the simulation results, this RRNS system can reduce the energy-delay product by about 3× for multiplication-intensive workloads and by about 2× in general, when compared to a non-error-correcting binary core.
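
A toy sketch of the RRNS idea follows, with illustrative parameters (not the paper's design): three information moduli define the legitimate range, and two redundant moduli let a single corrupted residue be detected and corrected. The closure property means arithmetic can be done residue-wise before decoding.

```python
# Toy Redundant Residue Number System: detect/correct one bad residue.
from math import prod

MODULI = [3, 5, 7, 11, 13]          # pairwise coprime; last two are redundant
LEGIT = prod(MODULI[:3])            # legitimate range: [0, 105)

def encode(x):
    return [x % m for m in MODULI]

def crt(residues, moduli):
    """Chinese Remainder Theorem reconstruction."""
    M = prod(moduli)
    return sum(r * (M // m) * pow(M // m, -1, m)
               for r, m in zip(residues, moduli)) % M

def decode(residues):
    """Return the encoded value, correcting at most one bad residue."""
    x = crt(residues, MODULI)
    if x < LEGIT:
        return x                      # all residues consistent: no error
    for i in range(len(MODULI)):      # try discarding each residue in turn
        y = crt(residues[:i] + residues[i + 1:], MODULI[:i] + MODULI[i + 1:])
        if y < LEGIT:
            return y                  # remaining residues agree on a legal value
    raise ValueError("uncorrectable (more than one residue in error)")

r = encode(42)
r[1] = 4                              # corrupt one residue
print(decode(r))                      # recovers 42
```

Residue-wise addition illustrates closure: adding `encode(30)` and `encode(12)` component-wise (mod each modulus) decodes to 42 without ever leaving the residue domain.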

  14. High accuracy switched-current circuits using an improved dynamic mirror

    NASA Technical Reports Server (NTRS)

    Zweigle, G.; Fiez, T.

    1991-01-01

    The switched-current technique, a recently developed circuit approach to analog signal processing, has emerged as an alternative/complement to the well established switched-capacitor circuit technique. High speed switched-current circuits offer potential cost and power savings over slower switched-capacitor circuits. Accuracy improvements are a primary concern at this stage in the development of the switched-current technique. Use of the dynamic current mirror has produced circuits that are insensitive to transistor matching errors. The dynamic current mirror has been limited by other sources of error including clock-feedthrough and voltage transient errors. In this paper we present an improved switched-current building block using the dynamic current mirror. Utilizing current feedback, the errors due to current imbalance in the dynamic current mirror are reduced. Simulations indicate that this feedback can reduce total harmonic distortion by as much as 9 dB. Additionally, we have developed a clock-feedthrough reduction scheme for which simulations reveal a potential 10 dB total harmonic distortion improvement. The clock-feedthrough reduction scheme also significantly reduces offset errors and allows for cancellation with a constant current source. Experimental results confirm the simulated improvements.

  15. Characterizing the SWOT discharge error budget on the Sacramento River, CA

    NASA Astrophysics Data System (ADS)

    Yoon, Y.; Durand, M. T.; Minear, J. T.; Smith, L.; Merry, C. J.

    2013-12-01

    The Surface Water and Ocean Topography (SWOT) is an upcoming satellite mission (scheduled for 2020) that will provide surface-water elevation and surface-water extent globally. One goal of SWOT is the estimation of river discharge directly from SWOT measurements. SWOT discharge uncertainty is due to two sources. First, SWOT cannot directly measure the channel bathymetry and roughness coefficient necessary for discharge calculations; these parameters must be estimated from the measurements or from a priori information. Second, SWOT measurement errors directly impact the discharge estimate accuracy. This study focuses on characterizing parameter and measurement uncertainties for SWOT river discharge estimation. A Bayesian Markov Chain Monte Carlo scheme is used to calculate parameter estimates, given the measurements of river height, slope and width, and mass and momentum constraints. The algorithm is evaluated using both simulated SWOT and AirSWOT (the airborne version of SWOT) observations over seven reaches (about 40 km) of the Sacramento River. The SWOT and AirSWOT observations are simulated by corrupting the 'true' HEC-RAS hydraulic modeling results with the instrument error. This experiment shows how unknown bathymetry and roughness coefficients affect the accuracy of the river discharge algorithm. From the experiment, the discharge error budget is almost completely dominated by unknown bathymetry and roughness; 81% of the variance error is explained by uncertainties in bathymetry and roughness. Second, we show how the errors in water surface, slope, and width observations influence the accuracy of discharge estimates. Indeed, there is a significant sensitivity to water surface, slope, and width errors due to the sensitivity of bathymetry and roughness to measurement errors. Increasing water-surface error above 10 cm leads to a corresponding sharper increase of errors in bathymetry and roughness.
Increasing slope error above 1.5 cm/km leads to significant degradation due to direct error in the discharge estimates. As the width error increases past 20%, the discharge error budget is dominated by the width error. The two experiments above are based on AirSWOT scenarios. In addition, we explore the sensitivity of the algorithm to the SWOT scenarios.
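
The coupling the abstract describes can be illustrated with Manning's equation, which links the SWOT observables (width, slope, height) to discharge through the unobserved bathymetry and roughness. This is a generic sketch, not the study's algorithm, and all numeric values are made up.

```python
# Manning's equation for a wide rectangular channel (illustrative only):
# a relative error in roughness n propagates one-to-one into discharge Q.

def manning_discharge(n, A, W, S):
    """Q = (1/n) * A^(5/3) * W^(-2/3) * sqrt(S)."""
    return (1.0 / n) * A ** (5.0 / 3.0) * W ** (-2.0 / 3.0) * S ** 0.5

# Nominal values (made up): roughness, flow area (m^2), width (m), slope.
n, A, W, S = 0.03, 500.0, 100.0, 1e-4
q0 = manning_discharge(n, A, W, S)

# A 10% overestimate of roughness biases discharge low by the same factor.
q_bad_n = manning_discharge(n * 1.1, A, W, S)
print(f"Q = {q0:.1f} m^3/s; with 10% roughness error: {q_bad_n:.1f} m^3/s")
```

Because unobserved parameters enter multiplicatively like this, their uncertainty dominates the discharge error budget, which is consistent with the 81% figure reported above.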

  16. CMOS chip planarization by chemical mechanical polishing for a vertically stacked metal MEMS integration

    NASA Astrophysics Data System (ADS)

    Lee, Hocheol; Miller, Michele H.; Bifano, Thomas G.

    2004-01-01

    In this paper we present the planarization process of a CMOS chip for the integration of a microelectromechanical systems (MEMS) metal mirror array. The CMOS chip, which comes from a commercial foundry, has a bumpy passivation layer due to an underlying aluminum interconnect pattern (1.8 µm high), which is used for addressing individual micromirror array elements. To overcome the tendency for tilt error in the CMOS chip planarization, the approach is to sputter a thick layer of silicon nitride at low temperature and to surround the CMOS chip with dummy silicon pieces that define a polishing plane. The dummy pieces are first lapped down to the height of the CMOS chip, and then all pieces are polished. This process produced a chip surface with a root-mean-square flatness error of less than 100 nm, including tilt and curvature errors.

  17. Target Classification of Canonical Scatterers Using Classical Estimation and Dictionary Based Techniques

    DTIC Science & Technology

    2012-03-22

    shapes tested, when the objective parameter set was confined to a dictionary's defined parameter space. These physical characteristics included... Hypothesis Testing and Detection Theory... 3-D SAR Scattering Models... The basis pursuit de-noising (BPDN) algorithm is chosen to perform extraction due to inherent efficiency and error tolerance. Multiple shape dictionaries

  18. Stratospheric Assimilation of Chemical Tracer Observations Using a Kalman Filter. Pt. 2; Chi-Square Validated Results and Analysis of Variance and Correlation Dynamics

    NASA Technical Reports Server (NTRS)

    Menard, Richard; Chang, Lang-Ping

    1998-01-01

    A Kalman filter system designed for the assimilation of limb-sounding observations of stratospheric chemical tracers, which has four tunable covariance parameters, was developed in Part I (Menard et al. 1998). The assimilation results of CH4 observations from the Cryogenic Limb Array Etalon Sounder instrument (CLAES) and the Halogen Observation Experiment instrument (HALOE) on board the Upper Atmosphere Research Satellite are described in this paper. A robust chi-square criterion, which provides a statistical validation of the forecast and observational error covariances, was used to estimate the tunable variance parameters of the system. In particular, an estimate of the model error variance was obtained. The effect of model error on the forecast error variance became critical after only three days of assimilation of CLAES observations, although it took 14 days of forecast to double the initial error variance. We further found that the model error due to numerical discretization, as arising in the standard Kalman filter algorithm, is comparable in size to the physical model error due to wind and transport modeling errors together. Separate assimilations of CLAES and HALOE observations were compared to validate the state estimate away from the observed locations. A wave-breaking event that took place several thousand kilometers away from the HALOE observation locations was well captured by the Kalman filter due to highly anisotropic forecast error correlations. The forecast error correlation in the assimilation of the CLAES observations was found to have a structure similar to that in pure forecast mode except for smaller length scales. Finally, we have conducted an analysis of the variance and correlation dynamics to determine their relative importance in chemical tracer assimilation problems.
Results show that the optimality of a tracer assimilation system depends, for the most part, on having flow-dependent error correlation rather than on evolving the error variance.
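
The chi-square criterion mentioned above has a simple scalar form: for a well-tuned filter, the normalized innovation squared averages to one, which validates the assumed forecast and observation error variances. The sketch below uses synthetic innovations and assumed variance values; it illustrates the principle only, not the system of Part I.

```python
# Scalar chi-square consistency check for a Kalman filter (synthetic data).
import random

random.seed(1)
P, R = 4.0, 1.0          # assumed forecast and observation error variances
S = P + R                # innovation variance predicted by the filter

# Innovations drawn from the distribution the filter claims they follow.
innovations = [random.gauss(0.0, S ** 0.5) for _ in range(20000)]

# If P and R are tuned correctly, d^2 / S has mean 1 (chi-square, 1 dof).
chi2_mean = sum(d * d / S for d in innovations) / len(innovations)
print(f"mean normalized chi-square: {chi2_mean:.3f}  (~1 when tuned)")
```

In practice a running mean far from 1 signals that the forecast or observation error covariances (including the model error variance) need retuning.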

  19. A Novel Methodology to Validate the Accuracy of Extraoral Dental Scanners and Digital Articulation Systems.

    PubMed

    Ellakwa, A; Elnajar, S; Littlefair, D; Sara, G

    2018-05-03

    The aim of the current study is to develop a novel method to investigate the accuracy of 3D scanners and digital articulation systems. An upper and a lower poured stone model were created by taking an impression of a fifty-year-old, fully dentate male participant. Titanium spheres were added to the models to allow for an easily recognisable geometric shape for measurement after scanning and digital articulation. Measurements were obtained using a Coordinate Measuring Machine (CMM) to record volumetric error, articulation error and clinical effect error. Three scanners were compared, including the Imetric 3D iScan d104i, Shining 3D AutoScan-DS100 and 3Shape D800, as well as their respective digital articulation software packages. The Stoneglass Industries PDC digital articulation system was also applied to the Imetric scans for comparison with the CMM measurements. All the scans displayed low volumetric error (p > 0.05), indicating that the scanners themselves had a minor contribution to the articulation and clinical effect errors. The PDC digital articulation system was found to deliver the lowest average errors, with good repeatability of results. The new measuring technique in the current study was able to assess the scanning and articulation accuracy of the four systems investigated. The PDC digital articulation system using Imetric scans was recommended as it displayed the lowest articulation error and clinical effect error with good repeatability. The low errors from the PDC system may have been due to its use of a 3D axis for alignment rather than the use of a best fit. Copyright© 2018 Dennis Barber Ltd.

  20. Prevalence of vision impairment and refractive error in school children in Ba Ria – Vung Tau province, Vietnam

    PubMed Central

    Paudel, Prakash; Ramson, Prasidh; Naduvilath, Thomas; Wilson, David; Phuong, Ha Thanh; Ho, Suit M; Giap, Nguyen V

    2014-01-01

    Background: To assess the prevalence of vision impairment and refractive error in school children 12–15 years of age in Ba Ria – Vung Tau province, Vietnam. Design: Prospective, cross-sectional study. Participants: 2238 secondary school children. Methods: Subjects were selected based on stratified multistage cluster sampling of 13 secondary schools from urban, rural and semi-urban areas. The examination included visual acuity measurements, ocular motility evaluation, cycloplegic autorefraction, and examination of the external eye, anterior segment, media and fundus. Main Outcome Measures: Visual acuity and principal cause of vision impairment. Results: The prevalence of uncorrected and presenting visual acuity ≤6/12 in the better eye were 19.4% (95% confidence interval, 12.5–26.3) and 12.2% (95% confidence interval, 8.8–15.6), respectively. Refractive error was the cause of vision impairment in 92.7%, amblyopia in 2.2%, cataract in 0.7%, retinal disorders in 0.4%, other causes in 1.5% and unexplained causes in the remaining 2.6%. The prevalence of vision impairment due to myopia in either eye (–0.50 diopter or greater) was 20.4% (95% confidence interval, 12.8–28.0), hyperopia (≥2.00 D) was 0.4% (95% confidence interval, 0.0–0.7) and emmetropia with astigmatism (≥0.75 D) was 0.7% (95% confidence interval, 0.2–1.2). Vision impairment due to myopia was associated with higher school grade and increased time spent reading and working on a computer. Conclusions: Uncorrected refractive error, particularly myopia, among secondary school children in Vietnam is a major public health problem. School-based eye health initiatives such as refractive error screening are warranted to reduce vision impairment. PMID:24299145

  1. Prevalence of vision impairment and refractive error in school children in Ba Ria - Vung Tau province, Vietnam.

    PubMed

    Paudel, Prakash; Ramson, Prasidh; Naduvilath, Thomas; Wilson, David; Phuong, Ha Thanh; Ho, Suit M; Giap, Nguyen V

    2014-04-01

    To assess the prevalence of vision impairment and refractive error in school children 12-15 years of age in Ba Ria - Vung Tau province, Vietnam. Prospective, cross-sectional study. 2238 secondary school children. Subjects were selected based on stratified multistage cluster sampling of 13 secondary schools from urban, rural and semi-urban areas. The examination included visual acuity measurements, ocular motility evaluation, cycloplegic autorefraction, and examination of the external eye, anterior segment, media and fundus. Visual acuity and principal cause of vision impairment. The prevalence of uncorrected and presenting visual acuity ≤6/12 in the better eye were 19.4% (95% confidence interval, 12.5-26.3) and 12.2% (95% confidence interval, 8.8-15.6), respectively. Refractive error was the cause of vision impairment in 92.7%, amblyopia in 2.2%, cataract in 0.7%, retinal disorders in 0.4%, other causes in 1.5% and unexplained causes in the remaining 2.6%. The prevalence of vision impairment due to myopia in either eye (-0.50 diopter or greater) was 20.4% (95% confidence interval, 12.8-28.0), hyperopia (≥2.00 D) was 0.4% (95% confidence interval, 0.0-0.7) and emmetropia with astigmatism (≥0.75 D) was 0.7% (95% confidence interval, 0.2-1.2). Vision impairment due to myopia was associated with higher school grade and increased time spent reading and working on a computer. Uncorrected refractive error, particularly myopia, among secondary school children in Vietnam is a major public health problem. School-based eye health initiatives such as refractive error screening are warranted to reduce vision impairment. © 2013 The Authors. Clinical & Experimental Ophthalmology published by Wiley Publishing Asia Pty Ltd on behalf of Royal Australian and New Zealand College of Ophthalmologists.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellefson, S; Department of Human Oncology, University of Wisconsin, Madison, WI; Culberson, W

    Purpose: Discrepancies in absolute dose values have been detected between the ViewRay treatment planning system and ArcCHECK readings when performing delivery quality assurance on the ViewRay system with the ArcCHECK-MR diode array (SunNuclear Corporation). In this work, we investigate whether these discrepancies are due to errors in the ViewRay planning and/or delivery system or due to errors in the ArcCHECK’s readings. Methods: Gamma analysis was performed on 19 ViewRay patient plans using the ArcCHECK. Frequency analysis on the dose differences was performed. To investigate whether discrepancies were due to measurement or delivery error, 10 diodes in low-gradient dose regions were chosen to compare with ion chamber measurements in a PMMA phantom with the same size and shape as the ArcCHECK, provided by SunNuclear. The diodes chosen all had significant discrepancies in absolute dose values compared to the ViewRay TPS. Absolute doses to PMMA were compared between the ViewRay TPS calculations, ArcCHECK measurements, and measurements in the PMMA phantom. Results: Three of the 19 patient plans had 3%/3mm gamma passing rates less than 95%, and ten of the 19 plans had 2%/2mm passing rates less than 95%. Frequency analysis implied a non-random error process. Out of the 10 diode locations measured, ion chamber measurements were all within 2.2% error relative to the TPS and had a mean error of 1.2%. ArcCHECK measurements ranged from 4.5% to over 15% error relative to the TPS and had a mean error of 8.0%. Conclusion: The ArcCHECK performs well for quality assurance on the ViewRay under most circumstances. However, under certain conditions the absolute dose readings are significantly higher compared to the planned doses. As the ion chamber measurements consistently agree with the TPS, it can be concluded that the discrepancies are due to ArcCHECK measurement error and not TPS or delivery system error.
This work was funded by the Bhudatt Paliwal Professorship and the University of Wisconsin Medical Radiation Research Center.
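
The 3%/3mm and 2%/2mm passing rates above come from the gamma index, which combines a dose-difference tolerance with a distance-to-agreement tolerance. The following is a minimal 1-D sketch of that idea with synthetic profiles; it is not SunNuclear's implementation, and real gamma analysis is done in 3-D with interpolation.

```python
# Minimal 1-D gamma-index sketch (illustrative; not a clinical algorithm).

def gamma_point(x_e, d_e, ref, dd=0.03, dta=3.0, d_norm=None):
    """Gamma for one evaluated point (position mm, dose) vs. reference points.
    Passes when gamma <= 1, i.e. some reference point lies inside the
    combined dose-difference (dd, global) / distance (dta) ellipse."""
    if d_norm is None:
        d_norm = max(d for _, d in ref)          # global dose normalization
    return min(((x_e - x_r) / dta) ** 2 +
               ((d_e - d_r) / (dd * d_norm)) ** 2
               for x_r, d_r in ref) ** 0.5

ref  = [(x * 1.0, 100.0) for x in range(11)]     # flat reference, 1 mm grid
meas = [(x * 1.0, 102.0) for x in range(11)]     # uniform +2% measurement
gammas = [gamma_point(x, d, ref) for x, d in meas]
passing = sum(g <= 1.0 for g in gammas) / len(gammas)
print(f"3%/3mm passing rate: {passing:.0%}")
```

A uniform +2% offset passes 3%/3mm everywhere, while the 4.5-15% diode discrepancies reported above would fail, which is why the systematic ArcCHECK over-response shows up directly in the passing rates.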

  3. Identification of Matra Region and Overlapping Characters for OCR of Printed Bengali Scripts

    NASA Astrophysics Data System (ADS)

    Goswami, Subhra Sundar

    One of the important reasons for the poor recognition rate of optical character recognition (OCR) systems is error in character segmentation. In the case of Bangla scripts, errors occur for several reasons, including incorrect detection of the matra (headline), over-segmentation and under-segmentation. We have proposed a robust method for detecting the headline region. The existence of overlapping characters (in under-segmented parts) in scanned printed documents is a major problem in designing an effective character segmentation procedure for OCR systems. In this paper, a predictive algorithm is developed for effectively identifying overlapping characters and then selecting the cut-borders for segmentation. Our method can be used successfully to achieve high recognition accuracy.
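
One common way to locate the matra region, sketched below, is via the row-wise ink histogram: the headline is a solid horizontal stroke joining characters, so it appears as the densest row band in the upper part of a text line. This is an illustrative heuristic, not the paper's predictive algorithm.

```python
# Headline (matra) detection via a row-wise ink-projection profile.

def find_matra_row(bitmap):
    """bitmap: list of rows of 0/1 ink pixels; returns the index of the
    densest row in the upper half, a proxy for the matra position."""
    ink = [sum(row) for row in bitmap]
    upper = ink[: max(1, len(ink) // 2)]        # matra sits near the top
    return max(range(len(upper)), key=lambda r: upper[r])

# Tiny synthetic "line": row 1 is a solid headline, rows below are strokes.
line = [
    [0, 0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],   # matra
    [0, 1, 0, 0, 1, 0, 1, 0],
    [0, 1, 0, 0, 1, 0, 1, 0],
    [0, 1, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
]
print(find_matra_row(line))   # → 1
```

Once the matra band is known, character segmentation typically proceeds on the image below it, which is where the overlapping-character problem the paper addresses arises.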

  4. "Bed Side" Human Milk Analysis in the Neonatal Intensive Care Unit: A Systematic Review.

    PubMed

    Fusch, Gerhard; Kwan, Celia; Kotrri, Gynter; Fusch, Christoph

    2017-03-01

    Human milk analyzers can measure macronutrient content in native breast milk to tailor adequate supplementation with fortifiers. This article reviews all studies using milk analyzers, including (i) evaluation of devices, (ii) the impact of different conditions on the macronutrient analysis of human milk, and (iii) clinical trials to improve growth. Results lack consistency, potentially due to systematic errors in the validation of the device, or pre-analytical sample preparation errors like homogenization. It is crucial to introduce good laboratory and clinical practice when using these devices; otherwise a non-validated clinical usage can severely affect growth outcomes of infants. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Problems with small area surveys: lensing covariance of supernova distance measurements.

    PubMed

    Cooray, Asantha; Huterer, Dragan; Holz, Daniel E

    2006-01-20

    While luminosity distances from type Ia supernovae (SNe) are a powerful probe of cosmology, the accuracy with which these distances can be measured is limited by cosmic magnification due to gravitational lensing by the intervening large-scale structure. Spatial clustering of foreground mass leads to correlated errors in SNe distances. By including the full covariance matrix of SNe, we show that future wide-field surveys will remain largely unaffected by lensing correlations. However, "pencil beam" surveys, and those with narrow (but possibly long) fields of view, can be strongly affected. For a survey with 30 arcmin mean separation between SNe, lensing covariance leads to an approximately 45% increase in the expected errors in dark energy parameters.

  6. Picometer Level Modeling of a Shared Vertex Double Corner Cube in the Space Interferometry Mission Kite Testbed

    NASA Technical Reports Server (NTRS)

    Kuan, Gary M.; Dekens, Frank G.

    2006-01-01

    The Space Interferometry Mission (SIM) is a microarcsecond interferometric space telescope that requires picometer level precision measurements of its truss and interferometer baselines. Single-gauge metrology errors due to non-ideal physical characteristics of corner cubes reduce the angular measurement capability of the science instrument. Specifically, the non-common vertex error (NCVE) of a shared vertex, double corner cube introduces micrometer level single-gauge errors in addition to errors due to dihedral angles and reflection phase shifts. A modified SIM Kite Testbed containing an articulating double corner cube is modeled and the results are compared to the experimental testbed data. The results confirm modeling capability and viability of calibration techniques.

  7. General linear codes for fault-tolerant matrix operations on processor arrays

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Abraham, J. A.

    1988-01-01

    Various checksum codes have been suggested for fault-tolerant matrix computations on processor arrays. Use of these codes is limited due to potential roundoff and overflow errors. Numerical errors may also be misconstrued as errors due to physical faults in the system. In this paper, a set of linear codes is identified which can be used for fault-tolerant matrix operations such as matrix addition, multiplication, transposition, and LU-decomposition, with minimum numerical error. Encoding schemes are given for some of the example codes which fall under the general set of codes. With the help of experiments, a rule of thumb for the selection of a particular code for a given application is derived.
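
The classic checksum scheme these linear codes generalize (Huang-Abraham-style algorithm-based fault tolerance) can be sketched briefly: augment A with a column-checksum row and B with a row-checksum column; the product then carries checksums that expose a faulty element. The tolerance used below for the consistency test is one of the "numerical error vs. physical fault" judgment calls the abstract refers to.

```python
# Checksum-encoded matrix multiplication (ABFT sketch, pure Python lists).

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def col_checksum(A):              # append a row of column sums to A
    return A + [[sum(col) for col in zip(*A)]]

def row_checksum(B):              # append a column of row sums to B
    return [row + [sum(row)] for row in B]

def check(Cf, tol=1e-9):
    """Verify checksum consistency of the full-checksum product."""
    ok_rows = all(abs(sum(row[:-1]) - row[-1]) < tol for row in Cf[:-1])
    cols = list(zip(*Cf))
    ok_cols = all(abs(sum(col[:-1]) - col[-1]) < tol for col in cols[:-1])
    return ok_rows and ok_cols

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
Cf = matmul(col_checksum(A), row_checksum(B))   # full-checksum product
print(check(Cf))          # True: no fault
Cf[0][0] += 1.0           # inject a fault in one element
print(check(Cf))          # False: fault detected
```

The intersection of the failing row and column checks also locates the faulty element, enabling correction rather than just detection.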

  8. Voice Onset Time in Consonant Cluster Errors: Can Phonetic Accommodation Differentiate Cognitive from Motor Errors?

    ERIC Educational Resources Information Center

    Pouplier, Marianne; Marin, Stefania; Waltl, Susanne

    2014-01-01

    Purpose: Phonetic accommodation in speech errors has traditionally been used to identify the processing level at which an error has occurred. Recent studies have challenged the view that noncanonical productions may solely be due to phonetic, not phonological, processing irregularities, as previously assumed. The authors of the present study…

  9. Global land cover mapping: a review and uncertainty analysis

    USGS Publications Warehouse

    Congalton, Russell G.; Gu, Jianyu; Yadav, Kamini; Thenkabail, Prasad S.; Ozdogan, Mutlu

    2014-01-01

    Given the advances in remotely sensed imagery and associated technologies, several global land cover maps have been produced in recent times including IGBP DISCover, UMD Land Cover, Global Land Cover 2000 and GlobCover 2009. However, the utility of these maps for specific applications has often been hampered due to considerable amounts of uncertainties and inconsistencies. A thorough review of these global land cover projects including evaluating the sources of error and uncertainty is prudent and enlightening. Therefore, this paper describes our work in which we compared, summarized and conducted an uncertainty analysis of the four global land cover mapping projects using an error budget approach. The results showed that the classification scheme and the validation methodology had the highest error contribution and implementation priority. A comparison of the classification schemes showed that there are many inconsistencies between the definitions of the map classes. This is especially true for the mixed type classes for which thresholds vary for the attributes/discriminators used in the classification process. Examination of these four global mapping projects provided quite a few important lessons for the future global mapping projects including the need for clear and uniform definitions of the classification scheme and an efficient, practical, and valid design of the accuracy assessment.

  10. Physical Validation of TRMM TMI and PR Monthly Rain Products Over Oklahoma

    NASA Technical Reports Server (NTRS)

    Fisher, Brad L.

    2004-01-01

    The Tropical Rainfall Measuring Mission (TRMM) provides monthly rainfall estimates using data collected by the TRMM satellite. These estimates cover a substantial fraction of the earth's surface. The physical validation of TRMM estimates involves corroborating the accuracy of spaceborne estimates of areal rainfall by inferring errors and biases from ground-based rain estimates. The TRMM error budget consists of two major sources of error: retrieval and sampling. Sampling errors are intrinsic to the process of estimating monthly rainfall and occur because the satellite extrapolates monthly rainfall from a small subset of measurements collected only during satellite overpasses. Retrieval errors, on the other hand, are related to the process of collecting measurements while the satellite is overhead. One of the big challenges confronting the TRMM validation effort is how to best estimate these two main components of the TRMM error budget, which are not easily decoupled. This four-year study computed bulk sampling and retrieval errors for the TRMM microwave imager (TMI) and the precipitation radar (PR) by applying a technique that sub-samples gauge data at TRMM overpass times. Gridded monthly rain estimates are then computed from the monthly bulk statistics of the collected samples, providing a sensor-dependent gauge rain estimate that is assumed to include a TRMM-equivalent sampling error. The sub-sampled gauge rain estimates are then used in conjunction with the monthly satellite and gauge (without sub-sampling) estimates to decouple retrieval and sampling errors. The computed mean sampling errors for the TMI and PR were 5.9% and 7.7%, respectively, in good agreement with theoretical predictions. The PR year-to-year retrieval biases exceeded corresponding TMI biases, but it was found that these differences were partially due to negative TMI biases during cold months and positive TMI biases during warm months.
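
The sub-sampling technique can be illustrated with a toy version: estimate a monthly total from gauge values taken only at overpass times, and compare it with the full gauge record to isolate a satellite-equivalent sampling error. All data and the overpass cadence below are synthetic.

```python
# Toy gauge sub-sampling experiment for satellite sampling error (synthetic).
import random

random.seed(7)
HOURS = 30 * 24                                   # one 30-day month, hourly
rain = [max(0.0, random.gauss(0.1, 0.4)) for _ in range(HOURS)]  # mm/hour
monthly_true = sum(rain)                          # full gauge record

overpasses = range(0, HOURS, 16)                  # pretend overpass every 16 h
sampled = [rain[t] for t in overpasses]
monthly_est = sum(sampled) / len(sampled) * HOURS # scale mean rate to a month

sampling_error = 100.0 * (monthly_est - monthly_true) / monthly_true
print(f"true {monthly_true:.1f} mm, sub-sampled {monthly_est:.1f} mm, "
      f"error {sampling_error:+.1f}%")
```

Averaging this error over many months and grid boxes gives the bulk sampling-error statistic; subtracting it from the total satellite-gauge discrepancy is what allows the retrieval error to be decoupled.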

  11. tPA Prescription and Administration Errors within a Regional Stroke System

    PubMed Central

    Chung, Lee S; Tkach, Aleksander; Lingenfelter, Erin M; Dehoney, Sarah; Rollo, Jeannie; de Havenon, Adam; DeWitt, Lucy Dana; Grantz, Matthew Ryan; Wang, Haimei; Wold, Jana J; Hannon, Peter M; Weathered, Natalie R; Majersik, Jennifer J

    2015-01-01

    Background IV tPA utilization in acute ischemic stroke (AIS) requires weight-based dosing and a standardized infusion rate. In our regional network, we have tried to minimize tPA dosing errors. We describe the frequency and types of tPA administration errors made in our comprehensive stroke center (CSC) and at community hospitals (CHs) prior to transfer. Methods Using our stroke quality database, we extracted clinical and pharmacy information on all patients who received IV tPA from 2010–11 at the CSC or CH prior to transfer. All records were analyzed for the presence of inclusion/exclusion criteria deviations or tPA errors in prescription, reconstitution, dispensing, or administration, and analyzed for association with outcomes. Results We identified 131 AIS cases treated with IV tPA: 51% female; mean age 68; 32% treated at CSC, 68% at CH (including 26% by telestroke) from 22 CHs. tPA prescription and administration errors were present in 64% of all patients (41% CSC, 75% CH, p<0.001), the most common being incorrect dosage for body weight (19% CSC, 55% CH, p<0.001). Of the 27 overdoses, there were 3 deaths due to systemic hemorrhage or ICH. Nonetheless, outcomes (parenchymal hematoma, mortality, mRS) did not differ between CSC and CH patients nor between those with and without errors. Conclusion Despite focus on minimization of tPA administration errors in AIS patients, such errors were very common in our regional stroke system. Although an association between tPA errors and stroke outcomes was not demonstrated, quality assurance mechanisms are still necessary to reduce potentially dangerous, avoidable errors. PMID:26698642
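
Since the most common error found was incorrect dosage for body weight, a dose check is easy to state in code. The sketch below uses the standard alteplase protocol (0.9 mg/kg, capped at 90 mg, 10% as a bolus) as general background; it is not presented as this regional network's exact policy, and the 5% tolerance is an assumed illustration.

```python
# Weight-based IV tPA (alteplase) dose check, standard-protocol sketch.

MAX_DOSE_MG = 90.0
MG_PER_KG = 0.9

def tpa_dose(weight_kg):
    """Return (total, bolus, infusion) doses in mg for a patient weight."""
    total = min(MG_PER_KG * weight_kg, MAX_DOSE_MG)
    bolus = 0.1 * total
    return total, bolus, total - bolus

def is_overdose(prescribed_mg, weight_kg, tol=0.05):
    """Flag prescriptions more than tol (here 5%) above weight-based dose."""
    total, _, _ = tpa_dose(weight_kg)
    return prescribed_mg > total * (1.0 + tol)

print(tpa_dose(80.0))            # → (72.0, 7.2, 64.8)
print(is_overdose(90.0, 80.0))   # → True: capped dose given to 80 kg patient
```

An automated check like this at the pharmacy or order-entry step is one form of the quality-assurance mechanism the conclusion calls for.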

  12. Radiofrequency Electromagnetic Radiation and Memory Performance: Sources of Uncertainty in Epidemiological Cohort Studies.

    PubMed

    Brzozek, Christopher; Benke, Kurt K; Zeleke, Berihun M; Abramson, Michael J; Benke, Geza

    2018-03-26

    Uncertainty in experimental studies of exposure to radiation from mobile phones has in the past only been framed within the context of statistical variability. It is now becoming more apparent to researchers that epistemic or reducible uncertainties can also affect the total error in results. These uncertainties are derived from a wide range of sources including human error, such as data transcription, model structure, measurement and linguistic errors in communication. The issue of epistemic uncertainty is reviewed and interpreted in the context of the MoRPhEUS, ExPOSURE and HERMES cohort studies which investigate the effect of radiofrequency electromagnetic radiation from mobile phones on memory performance. Research into this field has found inconsistent results due to limitations from a range of epistemic sources. Potential analytic approaches are suggested based on quantification of epistemic error using Monte Carlo simulation. It is recommended that future studies investigating the relationship between radiofrequency electromagnetic radiation and memory performance pay more attention to treatment of epistemic uncertainties as well as further research into improving exposure assessment. Use of directed acyclic graphs is also encouraged to display the assumed covariate relationship.

  13. Statistical analysis of AFE GN&C aeropass performance

    NASA Technical Reports Server (NTRS)

    Chang, Ho-Pen; French, Raymond A.

    1990-01-01

    Performance of the guidance, navigation, and control (GN&C) system used on the Aeroassist Flight Experiment (AFE) spacecraft has been studied with Monte Carlo techniques. The performance of the AFE GN&C is investigated with a 6-DOF numerical dynamic model which includes a Global Reference Atmospheric Model (GRAM) and a gravitational model with oblateness corrections. The study considers all the uncertainties due to the environment and the system itself. In the AFE's aeropass phase, perturbations on the system performance are caused by an error space which has over 20 dimensions of the correlated/uncorrelated error sources. The goal of this study is to determine, in a statistical sense, how much flight path angle error can be tolerated at entry interface (EI) and still have acceptable delta-V capability at exit to position the AFE spacecraft for recovery. Assuming there is fuel available to produce 380 ft/sec of delta-V at atmospheric exit, a 3-sigma standard deviation in flight path angle error of 0.04 degrees at EI would result in a 98-percent probability of mission success.
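
    The 3-sigma figure can be read directly off the Gaussian; the sketch below shows that reading. Note the abstract's 98-percent mission-success probability additionally folds in the delta-V constraint, which this does not model.

```python
import math

def prob_within(k_sigma):
    """P(|X| <= k*sigma) for a zero-mean Gaussian error."""
    return math.erf(k_sigma / math.sqrt(2))

# A 3-sigma flight path angle error of 0.04 deg at EI means
# ~99.7% of dispersed entries fall within +/-0.04 deg.
print(round(prob_within(3.0), 4))  # 0.9973
```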

  14. Species-area relationships and extinction forecasts.

    PubMed

    Halley, John M; Sgardeli, Vasiliki; Monokrousos, Nikolaos

    2013-05-01

    The species-area relationship (SAR) predicts that smaller areas contain fewer species. This is the basis of the SAR method that has been used to forecast large numbers of species committed to extinction every year due to deforestation. The method has a number of issues that must be handled with care to avoid error. These include the functional form of the SAR, the choice of equation parameters, the sampling procedure used, extinction debt, and forest regeneration. Concerns about the accuracy of the SAR technique often cite errors not much larger than the natural scatter of the SAR itself. Such errors do not undermine the credibility of forecasts predicting large numbers of extinctions, although they may be a serious obstacle in other SAR applications. Very large errors can arise from misinterpretation of extinction debt, inappropriate functional form, and ignoring forest regeneration. Major challenges remain to understand better the relationship between sampling protocol and the functional form of SARs and the dynamics of relaxation, especially in continental areas, and to widen the testing of extinction forecasts. © 2013 New York Academy of Sciences.
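
    The back-of-envelope SAR forecast the abstract discusses can be sketched as follows; c and z are illustrative values, and the result is a committed (eventual) loss under extinction debt, not an annual rate.

```python
def sar_species(area, c=10.0, z=0.25):
    """Power-law species-area relationship S = c * A**z."""
    return c * area ** z

def fraction_extinct(area_before, area_after, z=0.25):
    """Fraction of species eventually lost when area shrinks,
    under the classical SAR back-of-envelope method."""
    return 1.0 - (area_after / area_before) ** z

# Halving habitat area with z = 0.25 commits ~16% of species
# to eventual extinction.
print(round(fraction_extinct(100.0, 50.0), 3))  # 0.159
```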

  15. Learning from Past Classification Errors: Exploring Methods for Improving the Performance of a Deep Learning-based Building Extraction Model through Quantitative Analysis of Commission Errors for Optimal Sample Selection

    NASA Astrophysics Data System (ADS)

    Swan, B.; Laverdiere, M.; Yang, L.

    2017-12-01

    In the past five years, deep Convolutional Neural Networks (CNNs) have been increasingly favored for computer vision applications due to their high accuracy and ability to generalize well in very complex problems; however, details of how they function, and in turn how they may be optimized, are still imperfectly understood. In particular, their complex and highly nonlinear network architecture, including many hidden layers and self-learned parameters, along with its mathematical implications, presents open questions about how to effectively select training data. Without knowledge of the exact ways the model processes and transforms its inputs, intuition alone may fail as a guide to selecting highly relevant training samples. Working in the context of improving a CNN-based building extraction model used for the LandScan USA gridded population dataset, we have approached this problem by developing a semi-supervised, highly scalable approach to select training samples from a dataset of identified commission errors. Due to the large scope of this project, tens of thousands of potential samples could be derived from identified commission errors. To efficiently trim those samples down to a manageable and effective set for creating additional training samples, we statistically summarized the spectral characteristics of areas with rates of commission errors at the image tile level and grouped these tiles using affinity propagation. Highly representative members of each commission-error cluster were then used to select sites for training-sample creation. The model will be incrementally re-trained with the new training data to allow for an assessment of how the addition of different types of samples affects model performance, such as precision and recall rates.
    By using quantitative analysis and data clustering techniques to select highly relevant training samples, we hope to improve model performance in a manner that is resource efficient, both in terms of the training process and sample creation.
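
    Affinity propagation itself is typically run with an off-the-shelf implementation (e.g. scikit-learn's AffinityPropagation); the "highly representative member" step can then be sketched as medoid selection over precomputed clusters of per-tile spectral summaries. All names and data below are illustrative, not from the LandScan pipeline.

```python
def medoid(tiles, dist):
    """Return the member minimizing total distance to the rest."""
    return min(tiles, key=lambda t: sum(dist(t, u) for u in tiles))

def representatives(clusters, dist):
    """Pick one highly representative tile (the medoid) per
    commission-error cluster for training-sample creation."""
    return [medoid(c, dist) for c in clusters]

# Toy spectral summaries: (mean, std) per tile, two clusters.
d = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
clusters = [[(0.1, 0.2), (0.0, 0.2), (0.3, 0.1)],
            [(0.9, 0.5), (1.0, 0.6), (1.1, 0.7)]]
print(representatives(clusters, d))  # [(0.1, 0.2), (1.0, 0.6)]
```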

  16. Experimental investigation of strain errors in stereo-digital image correlation due to camera calibration

    NASA Astrophysics Data System (ADS)

    Shao, Xinxing; Zhu, Feipeng; Su, Zhilong; Dai, Xiangjun; Chen, Zhenning; He, Xiaoyuan

    2018-03-01

    The strain errors in stereo-digital image correlation (DIC) due to camera calibration were investigated using precisely controlled numerical experiments and real experiments. Three-dimensional rigid body motion tests were conducted to examine the effects of camera calibration on the measured results. For a fully accurate calibration, rigid body motion causes negligible strain errors. However, for inaccurately calibrated camera parameters and a short working distance, rigid body motion will lead to more than 50-μɛ strain errors, which significantly affects the measurement. In practical measurements, it is impossible to obtain a fully accurate calibration; therefore, considerable attention should be focused on attempting to avoid these types of errors, especially for high-accuracy strain measurements. It is necessary to avoid large rigid body motions in both two-dimensional DIC and stereo-DIC.

  17. Global Precipitation Measurement Mission Launch and Commissioning

    NASA Technical Reports Server (NTRS)

    Davis, Nikesha; DeWeese, Keith; Vess, Melissa; O'Donnell, James R., Jr.; Welter, Gary

    2015-01-01

    During launch and early operation of the Global Precipitation Measurement (GPM) Mission, the Guidance, Navigation, and Control (GN&C) analysis team encountered four main on-orbit anomalies. These include: (1) unexpected shock from Solar Array deployment, (2) momentum buildup from Magnetic Torquer Bar (MTB) phasing errors, (3) transition into Safehold due to an albedo-induced Coarse Sun Sensor (CSS) anomaly, and (4) a flight software error that could cause a Safehold transition due to a Star Tracker occultation. This paper will discuss ways GN&C engineers identified the anomalies and tracked down the root causes. Flight data and GN&C on-board models will be shown to illustrate how each of these anomalies was investigated and mitigated before causing any harm to the spacecraft. On May 29, 2014, GPM was handed over to the Mission Flight Operations Team after a successful commissioning period. Currently, GPM is operating nominally on orbit, collecting meaningful scientific data that will significantly improve our understanding of the Earth's climate and water cycle.

  19. Automatic readout micrometer

    DOEpatents

    Lauritzen, Ted

    1982-01-01

    A measuring system is disclosed for surveying and very accurately positioning objects with respect to a reference line. A principal use of this surveying system is for accurately aligning the electromagnets which direct a particle beam emitted from a particle accelerator. Prior art surveying systems require highly skilled surveyors. Prior art systems include, for example, optical surveying systems which are susceptible to operator reading errors, and celestial navigation-type surveying systems, with their inherent complexities. The present invention provides an automatic readout micrometer which can very accurately measure distances. The invention has a simplicity of operation which practically eliminates the possibilities of operator optical reading error, owing to the elimination of traditional optical alignments for making measurements. The invention has an extendable arm which carries a laser surveying target. The extendable arm can be continuously positioned over its entire length of travel by either a coarse or fine adjustment without having the fine adjustment outrun the coarse adjustment until a reference laser beam is centered on the target as indicated by a digital readout. The length of the micrometer can then be accurately and automatically read by a computer and compared with a standardized set of alignment measurements. Due to its construction, the micrometer eliminates any errors due to temperature changes when the system is operated within a standard operating temperature range.

  1. Pair natural orbital and canonical coupled cluster reaction enthalpies involving light to heavy alkali and alkaline earth metals: the importance of sub-valence correlation.

    PubMed

    Minenkov, Yury; Bistoni, Giovanni; Riplinger, Christoph; Auer, Alexander A; Neese, Frank; Cavallo, Luigi

    2017-04-05

    In this work, we tested canonical and domain based pair natural orbital coupled cluster methods (CCSD(T) and DLPNO-CCSD(T), respectively) for a set of 32 ligand exchange and association/dissociation reaction enthalpies involving ionic complexes of Li, Be, Na, Mg, Ca, Sr, Ba and Pb(II). Two strategies were investigated: in the former, only valence electrons were included in the correlation treatment, giving rise to the computationally very efficient FC (frozen core) approach; in the latter, all non-ECP electrons were included in the correlation treatment, giving rise to the AE (all electron) approach. Apart from reactions involving Li and Be, the FC approach resulted in non-homogeneous performance. The FC approach leads to very small errors (<2 kcal mol⁻¹) for some reactions of Na, Mg, Ca, Sr, Ba and Pb, while for a few reactions of Ca and Ba deviations up to 40 kcal mol⁻¹ have been obtained. Large errors are both due to artificial mixing of the core (sub-valence) orbitals of metals and the valence orbitals of oxygen and halogens in the molecular orbitals treated as core, and due to neglecting core-core and core-valence correlation effects. These large errors are reduced to a few kcal mol⁻¹ if the AE approach is used or the sub-valence orbitals of metals are included in the correlation treatment. On the technical side, the CCSD(T) and DLPNO-CCSD(T) results differ by a fraction of a kcal mol⁻¹, indicating the latter method as the perfect choice when CPU efficiency is essential. For completely black-box applications, as requested in catalysis or thermochemical calculations, we recommend the DLPNO-CCSD(T) method with all electrons that are not covered by effective core potentials included in the correlation treatment and correlation-consistent polarized core valence basis sets of cc-pwCVQZ(-PP) quality.

  2. A Rasch Perspective

    ERIC Educational Resources Information Center

    Schumacker, Randall E.; Smith, Everett V., Jr.

    2007-01-01

    Measurement error is a common theme in classical measurement models used in testing and assessment. In classical measurement models, the definition of measurement error and the subsequent reliability coefficients differ on the basis of the test administration design. Internal consistency reliability specifies error due primarily to poor item…

  3. Global modeling of land water and energy balances. Part I: The land dynamics (LaD) model

    USGS Publications Warehouse

    Milly, P.C.D.; Shmakin, A.B.

    2002-01-01

    A simple model of large-scale land (continental) water and energy balances is presented. The model is an extension of an earlier scheme with a record of successful application in climate modeling. The most important changes from the original model include 1) introduction of non-water-stressed stomatal control of transpiration, in order to correct a tendency toward excessive evaporation; 2) conversion from globally constant parameters (with the exception of vegetation-dependent snow-free surface albedo) to more complete vegetation and soil dependence of all parameters, in order to provide more realistic representation of geographic variations in water and energy balances and to enable model-based investigations of land-cover change; 3) introduction of soil sensible heat storage and transport, in order to move toward realistic diurnal-cycle modeling; 4) a groundwater (saturated-zone) storage reservoir, in order to provide more realistic temporal variability of runoff; and 5) a rudimentary runoff-routing scheme for delivery of runoff to the ocean, in order to provide realistic freshwater forcing of the ocean general circulation model component of a global climate model. The new model is tested with forcing from the International Satellite Land Surface Climatology Project Initiative I global dataset and a recently produced observation-based water-balance dataset for major river basins of the world. Model performance is evaluated by comparing computed and observed runoff ratios from many major river basins of the world. Special attention is given to distinguishing between two components of the apparent runoff ratio error: the part due to intrinsic model error and the part due to errors in the assumed precipitation forcing. The pattern of discrepancies between modeled and observed runoff ratios is consistent with results from a companion study of precipitation estimation errors.
    The new model is tuned by adjustment of a globally constant scale factor for non-water-stressed stomatal resistance. After tuning, significant overestimation of runoff is found in environments where an overall arid climate includes a brief but intense wet season. It is shown that this error may be explained by the neglect of upward soil water diffusion from below the root zone during the dry season. With the exception of such basins, and in the absence of precipitation errors, it is estimated that annual runoff ratios simulated by the model would have a root-mean-square error of about 0.05. The new model matches observations better than its predecessor, which has a negative runoff bias and greater scatter.

  4. The variability of atmospheric equivalent temperature for radar altimeter range correction

    NASA Technical Reports Server (NTRS)

    Liu, W. Timothy; Mock, Donald

    1990-01-01

    Two sets of data were used to test the validity of the presently used approximation for radar altimeter range correction due to atmospheric water vapor. The approximation includes an assumption of constant atmospheric equivalent temperature. The first data set includes monthly, three-dimensional, gridded temperature and humidity fields over global oceans for a 10-year period, and the second is comprised of daily or semidaily rawinsonde data at 17 island stations for a 7-year period. It is found that the standard method underestimates the variability of the equivalent temperature, and the approximation could introduce errors of 2 cm for monthly means. The equivalent temperature is found to have a strong meridional gradient, and the highest temporal variabilities are found over western boundary currents. The study affirms that the atmospheric water vapor is a good predictor for both the equivalent temperature and the range correction. A relation is proposed to reduce the error.

  5. Efficient computation of the Grünwald-Letnikov fractional diffusion derivative using adaptive time step memory

    NASA Astrophysics Data System (ADS)

    MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.

    2015-09-01

    Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives, in which all previous time points contribute to the current iteration. In general, numerical approaches that truncate part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy but with fewer points actually calculated, greatly improving computational efficiency.
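
    A full-memory Grünwald-Letnikov evaluation, the baseline an adaptive-memory scheme would be compared against, can be sketched as below; the adaptive sub-sampling of older history described in the abstract is not implemented here.

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k),
    via the standard recurrence w_k = w_{k-1} * (k-1-alpha) / k."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def gl_derivative(f_hist, alpha, h):
    """Full-memory GL fractional derivative at the newest point.
    f_hist[0] is the newest sample, f_hist[k] is k steps back."""
    w = gl_weights(alpha, len(f_hist) - 1)
    return sum(wk * fk for wk, fk in zip(w, f_hist)) / h ** alpha

# Sanity check: alpha = 1 reduces to a backward difference,
# so the derivative of f(t) = t is ~1.
h = 0.01
hist = [1.0 - k * h for k in range(200)]  # f(t) = t, newest first
print(round(gl_derivative(hist, 1.0, h), 6))  # 1.0
```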

  6. Understanding sources of uncertainty and bias error in models of human response to low amplitude sonic booms

    NASA Astrophysics Data System (ADS)

    Collmar, M.; Cook, B. G.; Cowart, R.; Freund, D.; Gavin, J.

    2015-10-01

    A pool of 240 subjects was exposed to a library of waveforms consisting of example signatures of low boom aircraft. The signature library included intentional variations in both loudness and spectral content, and were auralized using the Gulfstream SASS-II sonic boom simulator. Post-processing was used to quantify the impacts of test design decisions on the "quality" of the resultant database. Specific lessons learned from this study include insight regarding potential for bias error due to variations in loudness or peak over-pressure, sources of uncertainty and their relative importance on objective measurements and robustness of individual metrics to wide variations in spectral content. Results provide clear guidance for design of future large scale community surveys, where one must optimize the complex tradeoffs between the size of the surveyed population, spatial footprint of those participants, and the fidelity/density of objective measurements.

  7. The Higgs transverse momentum distribution at NNLL and its theoretical errors

    DOE PAGES

    Neill, Duff; Rothstein, Ira Z.; Vaidya, Varun

    2015-12-15

    In this letter, we present the NNLL-NNLO transverse momentum Higgs distribution arising from gluon fusion. In the regime p⊥ << m_h we include the resummation of the large logs at next-to-next-to-leading order and then match onto the α_s^2 fixed-order result near p⊥ ~ m_h. By utilizing the rapidity renormalization group (RRG) we are able to smoothly match between the resummed, small-p⊥ regime and the fixed-order regime. We give a detailed discussion of the scale dependence of the result, including an analysis of the rapidity scale dependence. Our central value differs from previous results, in the transition region as well as the tail, by an amount which is outside the error band. This difference is due to the fact that the RRG profile allows us to smoothly turn off the resummation.

  8. A quantitative comparison of simultaneous BOLD fMRI and NIRS recordings during functional brain activation

    NASA Technical Reports Server (NTRS)

    Strangman, Gary; Culver, Joseph P.; Thompson, John H.; Boas, David A.; Sutton, J. P. (Principal Investigator)

    2002-01-01

    Near-infrared spectroscopy (NIRS) has been used to noninvasively monitor adult human brain function in a wide variety of tasks. While rough spatial correspondences with maps generated from functional magnetic resonance imaging (fMRI) have been found in such experiments, the amplitude correspondences between the two recording modalities have not been fully characterized. To do so, we simultaneously acquired NIRS and blood-oxygenation level-dependent (BOLD) fMRI data and compared Delta(1/BOLD) (approximately R(2)(*)) to changes in oxyhemoglobin, deoxyhemoglobin, and total hemoglobin concentrations derived from the NIRS data from subjects performing a simple motor task. We expected the correlation with deoxyhemoglobin to be strongest, due to the causal relation between changes in deoxyhemoglobin concentrations and BOLD signal. Instead we found highly variable correlations, suggesting the need to account for individual subject differences in our NIRS calculations. We argue that the variability resulted from systematic errors associated with each of the signals, including: (1) partial volume errors due to focal concentration changes, (2) wavelength dependence of this partial volume effect, (3) tissue model errors, and (4) possible spatial incongruence between oxy- and deoxyhemoglobin concentration changes. After such effects were accounted for, strong correlations were found between fMRI changes and all optical measures, with oxyhemoglobin providing the strongest correlation. Importantly, this finding held even when including scalp, skull, and inactive brain tissue in the average BOLD signal. This may reflect, at least in part, the superior contrast-to-noise ratio for oxyhemoglobin relative to deoxyhemoglobin (from optical measurements), rather than physiology related to BOLD signal interpretation.

  9. Comparison of PASCAL and FORTRAN for solving problems in the physical sciences

    NASA Technical Reports Server (NTRS)

    Watson, V. R.

    1981-01-01

    The paper compares PASCAL and FORTRAN for problem solving in the physical sciences, due to requests NASA has received to make PASCAL available on the Numerical Aerodynamic Simulator (scheduled to be operational in 1986). PASCAL disadvantages include the lack of scientific utility procedures equivalent to the IBM scientific subroutine package or the IMSL package, which are available in FORTRAN. Advantages include well-organized, easy-to-read and easy-to-maintain code, range checking to prevent errors, and a broad selection of data types. It is concluded that FORTRAN may be the better language, although ADA (patterned after PASCAL) may surpass FORTRAN due to its ability to add complex and vector math and to specify the precision and range of variables.

  10. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.

  11. Fusing metabolomics data sets with heterogeneous measurement errors

    PubMed Central

    Waaijenborg, Sandra; Korobko, Oksana; Willems van Dijk, Ko; Lips, Mirjam; Hankemeier, Thomas; Wilderjans, Tom F.; Smilde, Age K.

    2018-01-01

    Combining different metabolomics platforms can contribute significantly to the discovery of complementary processes expressed under different conditions. However, analysing the fused data might be hampered by the difference in their quality. In metabolomics data, one often observes that measurement errors increase with increasing measurement level and that different platforms have different measurement error variance. In this paper we compare three different approaches to correct for the measurement error heterogeneity, by transformation of the raw data, by weighted filtering before modelling and by a modelling approach using a weighted sum of residuals. For an illustration of these different approaches we analyse data from healthy obese and diabetic obese individuals, obtained from two metabolomics platforms. Concluding, the filtering and modelling approaches that both estimate a model of the measurement error did not outperform the data transformation approaches for this application. This is probably due to the limited difference in measurement error and the fact that estimation of measurement error models is unstable due to the small number of repeats available. A transformation of the data improves the classification of the two groups. PMID:29698490
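
    The data-transformation approach rests on a standard observation: under multiplicative error, x = level * (1 + e), a log transform makes the spread independent of the measurement level. A minimal sketch with hypothetical relative errors:

```python
import math

def log_spread(level, rel_err=0.1):
    """Spread of log-transformed values for a given level when
    the raw error is a fixed fraction of the measurement level."""
    hi, lo = level * (1 + rel_err), level * (1 - rel_err)
    return math.log(hi) - math.log(lo)

# The log-scale spread is the same at a 100x higher level.
print(round(log_spread(10.0), 6))    # 0.200671
print(round(log_spread(1000.0), 6))  # 0.200671
```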

  12. Exception handling for sensor fusion

    NASA Astrophysics Data System (ADS)

    Chavez, G. T.; Murphy, Robin R.

    1993-08-01

    This paper presents a control scheme for handling sensing failures (sensor malfunctions, significant degradations in performance due to changes in the environment, and errant expectations) in sensor fusion for autonomous mobile robots. The advantages of the exception handling mechanism are that it emphasizes a fast response to sensing failures, is able to use only a partial causal model of sensing failure, and leads to a graceful degradation of sensing if the sensing failure cannot be compensated for. The exception handling mechanism consists of two modules: error classification and error recovery. The error classification module in the exception handler attempts to classify the type and source(s) of the error using a modified generate-and-test procedure. If the source of the error is isolated, the error recovery module examines its cache of recovery schemes, which either repair or replace the current sensing configuration. If the failure is due to an error in expectation or cannot be identified, the planner is alerted. Experiments using actual sensor data collected by the CSM Mobile Robotics/Machine Perception Laboratory's Denning mobile robot demonstrate the operation of the exception handling mechanism.

  13. Correlation Attenuation Due to Measurement Error: A New Approach Using the Bootstrap Procedure

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Veprinsky, Anna

    2012-01-01

    Issues with correlation attenuation due to measurement error are well documented. More than a century ago, Spearman proposed a correction for attenuation. However, this correction has seen very little use since it can potentially inflate the true correlation beyond one. In addition, very little confidence interval (CI) research has been done for…
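
    Spearman's correction, and the inflation problem the abstract mentions, can be shown in a few lines; a bootstrap CI, as the article proposes, would resample cases around this point estimate. The reliabilities below are hypothetical.

```python
import math

def disattenuate(r_xy, rel_x, rel_y):
    """Spearman's correction for attenuation:
    r_true = r_xy / sqrt(rel_x * rel_y)."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Typical case: observed r = 0.6, reliabilities 0.8 and 0.9.
print(round(disattenuate(0.6, 0.8, 0.9), 3))  # 0.707

# The classical objection: the estimate can exceed 1.0.
print(round(disattenuate(0.8, 0.7, 0.7), 3))  # 1.143
```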

  14. LOOP- SIMULATION OF THE AUTOMATIC FREQUENCY CONTROL SUBSYSTEM OF A DIFFERENTIAL MINIMUM SHIFT KEYING RECEIVER

    NASA Technical Reports Server (NTRS)

    Davarian, F.

    1994-01-01

    The LOOP computer program was written to simulate the Automatic Frequency Control (AFC) subsystem of a Differential Minimum Shift Keying (DMSK) receiver with a bit rate of 2400 baud. The AFC simulated by LOOP is a first order loop configuration with a first order R-C filter. NASA has been investigating the concept of mobile communications based on low-cost, low-power terminals linked via geostationary satellites. Studies have indicated that low bit rate transmission is suitable for this application, particularly from the frequency and power conservation point of view. A bit rate of 2400 BPS is attractive due to its applicability to the linear predictive coding of speech. Input to LOOP includes the following: 1) the initial frequency error; 2) the double-sided loop noise bandwidth; 3) the filter time constants; 4) the amount of intersymbol interference; and 5) the bit energy to noise spectral density. LOOP output includes: 1) the bit number and the frequency error of that bit; 2) the computed mean of the frequency error; and 3) the standard deviation of the frequency error. LOOP is written in MS SuperSoft FORTRAN 77 for interactive execution and has been implemented on an IBM PC operating under PC DOS with a memory requirement of approximately 40K of 8 bit bytes. This program was developed in 1986.
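
    A first-order AFC loop of the kind LOOP simulates can be sketched as a noiseless discrete-time recursion; the gain and initial error below are illustrative, and the R-C filter and DMSK signal model are omitted.

```python
def afc_loop(freq_err_hz, gain, steps):
    """Residual frequency error of a first-order AFC loop:
    each update removes a fixed fraction of the current error,
    giving geometric decay e_k = e_0 * (1 - gain)**k."""
    errs = [freq_err_hz]
    for _ in range(steps):
        errs.append(errs[-1] * (1 - gain))
    return errs

# 100 Hz initial error, loop gain 0.2, 10 updates.
e = afc_loop(100.0, 0.2, 10)
print(round(e[-1], 3))  # 10.737
```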

  15. Global Warming Estimation from MSU

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, Robert; Yoo, Jung-Moon

    1998-01-01

Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz) from sequential, sun-synchronous, polar-orbiting NOAA satellites contain small systematic errors. Some of these errors are time-dependent and some are time-independent. Small errors in Ch 2 data of successive satellites arise from calibration differences. Also, successive NOAA satellites tend to have different Local Equatorial Crossing Times (LECT), which introduce differences in Ch 2 data due to the diurnal cycle. These two sources of systematic error are largely time independent. However, because of atmospheric drag, there can be a drift in the LECT of a given satellite, which introduces time-dependent systematic errors. One of these errors is due to the progressive change in the diurnal cycle and the other is due to associated changes in instrument heating by the sun. In order to infer the global temperature trend from these MSU data, we have explicitly eliminated the time-independent systematic errors. The two time-dependent errors cannot be assessed separately for each satellite; for this reason, their cumulative effect on the global temperature trend is evaluated implicitly. Christy et al. (1998) (CSL), based on their method of analysis of the MSU Ch 2 data, infer a global temperature cooling trend (-0.046 K per decade) from 1979 to 1997, although their near-nadir measurements yield a near-zero trend (0.003 K/decade). Utilising an independent method of analysis, we infer that global temperature warmed by 0.12 +/- 0.06 C per decade from the observations of MSU Ch 2 during the period 1980 to 1997.

  16. Selecting Statistical Quality Control Procedures for Limiting the Impact of Increases in Analytical Random Error on Patient Safety.

    PubMed

    Yago, Martín

    2017-05-01

QC planning based on risk management concepts can reduce the probability of harming patients due to an undetected out-of-control error condition. It does this by selecting appropriate QC procedures to decrease the number of erroneous results reported. The selection can be easily made by using published nomograms for simple QC rules when the out-of-control condition results in increased systematic error. However, increases in random error also occur frequently and are difficult to detect, which can result in erroneously reported patient results. A statistical model was used to construct charts for the 1ks and X/χ² rules. The charts relate the increase in the number of unacceptable patient results reported due to an increase in random error to the capability of the measurement procedure. They thus allow for QC planning based on the risk of patient harm due to the reporting of erroneous results. 1ks rules are simple, all-around rules. Their ability to deal with increases in within-run imprecision is minimally affected by the possible presence of significant, stable, between-run imprecision. X/χ² rules perform better when the number of controls analyzed during each QC event is increased to improve QC performance. Using nomograms simplifies the selection of statistical QC procedures to limit the number of erroneous patient results reported due to an increase in analytical random error. The selection largely depends on the presence or absence of stable between-run imprecision. © 2017 American Association for Clinical Chemistry.

  17. Design of a Pneumatic Tool for Manual Drilling Operations in Confined Spaces

    NASA Astrophysics Data System (ADS)

    Janicki, Benjamin

This master's thesis describes the design process and testing results for a pneumatically actuated, manually operated tool for confined-space drilling operations. The purpose of this device is to back-drill pilot holes inside a commercial airplane wing. It is lightweight, and a "locator pin" enables the operator to align the drill over a pilot hole. A suction pad stabilizes the system, and an air motor and flexible drive shaft power the drill. Two testing procedures were performed to determine the practicality of this prototype. The first was the "offset drill test", which quantified the exit-hole position error due to an initial position error relative to the original pilot hole. The results displayed a linear relationship, and it was determined that position errors of less than .060" would prevent the need for rework, with errors of up to .030" considered acceptable. For the second test, a series of holes were drilled with the pneumatic tool and analyzed for position error, diameter range, and cycle time. The position errors and hole diameter range were within the allowed tolerances. The average cycle time was 45 seconds, 73 percent of which was for drilling the hole and 27 percent of which was for positioning the device. Recommended improvements are discussed in the conclusion and include a more durable flexible drive shaft, a damper for drill feed control, and a more stable locator pin.

  18. Prevalence of refractive errors among school children in Gondar town, northwest Ethiopia.

    PubMed

    Yared, Assefa Wolde; Belaynew, Wasie Taye; Destaye, Shiferaw; Ayanaw, Tsegaw; Zelalem, Eshete

    2012-10-01

    Many children with poor vision due to refractive error remain undiagnosed and perform poorly in school. The situation is worse in the Sub-Saharan Africa, including Ethiopia, and current information is lacking. The objective of this study is to determine the prevalence of refractive error among children enrolled in elementary schools in Gondar town, Ethiopia. This was a cross-sectional study of 1852 students in 8 elementary schools. Subjects were selected by multistage random sampling. The study parameters were visual acuity (VA) evaluation and ocular examination. VA was measured by staff optometrists with the Snellen E-chart while students with subnormal vision were examined using pinhole, retinoscopy evaluation and subjective refraction by ophthalmologists. The study cohort was comprised of 45.8% males and 54.2% females from 8 randomly selected elementary schools with a response rate of 93%. Refractive errors in either eye were present in 174 (9.4%) children. Of these, myopia was diagnosed in 55 (31.6%) children in the right and left eyes followed by hyperopia in 46 (26.4%) and 39 (22.4%) in the right and left eyes respectively. Low myopia was the most common refractive error in 61 (49.2%) and 68 (50%) children for the right and left eyes respectively. Refractive error among children is a common problem in Gondar town and needs to be assessed at every health evaluation of school children for timely treatment.

  19. Cross Section Sensitivity and Propagated Errors in HZE Exposures

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.; Wilson, John W.; Blatnig, Steve R.; Qualls, Garry D.; Badavi, Francis F.; Cucinotta, Francis A.

    2005-01-01

It has long been recognized that galactic cosmic rays are of such high energy that they tend to pass through available shielding materials, resulting in exposure of astronauts and equipment within space vehicles and habitats. Any protection provided by shielding materials results not so much from stopping such particles as from changing their physical character in interaction with shielding material nuclei, forming, hopefully, less dangerous species. Clearly, the fidelity of the nuclear cross-sections is essential to correct specification of shield design, and sensitivity to cross-section error is important in guiding experimental validation of cross-section models and databases. We examine the Boltzmann transport equation, which is used to calculate dose equivalent during solar minimum, with units (cSv/yr), associated with various depths of shielding materials. The dose equivalent is a weighted sum of contributions from neutrons, protons, light ions, medium ions and heavy ions. We investigate the sensitivity of dose equivalent calculations to errors in nuclear fragmentation cross-sections. We do this error analysis for all possible projectile-fragment combinations (14,365 such combinations) to estimate the sensitivity of the shielding calculations to errors in the nuclear fragmentation cross-sections. Numerical differentiation with respect to the cross-sections will be evaluated in a broad class of materials including polyethylene, aluminum and copper. We will identify the most important cross-sections for further experimental study and evaluate their impact on propagated errors in shielding estimates.

  20. A variable acceleration calibration system

    NASA Astrophysics Data System (ADS)

    Johnson, Thomas H.

    2011-12-01

A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost-effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems are designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three-component calibration experiments with an approximate applied load error on the order of 1% of the full-scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable acceleration based system are shown to be potentially equivalent to current methods. The production quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long term research objectives include a demonstration of a six degree of freedom calibration, and a large capacity balance calibration.

  1. Retractions by Pakistan Journal of Medical Sciences due to Scientific Misconduct.

    PubMed

    Jawaid, Shaukat Ali; Jawaid, Masood

    2016-08-01

    Under pressure to publish, academicians and research scientists are increasingly indulging in scientific misconduct leading to retraction of such papers when identified. Other reasons of retraction include scientific error and problems related to ethics. Four published manuscripts (three from Turkey and one from Pakistan) had to be retracted from Pakistan Journal of Medical Sciences from January 2014 to July 2015 due to scientific misconduct. There is a need to search for effective measures which could help reduce the number of retractions and prevent scientific literature from being further polluted, which seems to be increasing every year.

  2. Accuracy of outpatient service data for activity-based funding in New South Wales, Australia.

    PubMed

    Munyisia, Esther N; Reid, David; Yu, Ping

    2017-05-01

Despite increasing research on activity-based funding (ABF), there is no empirical evidence on the accuracy of outpatient service data for payment. This study aimed to identify data entry errors affecting ABF in two drug and alcohol outpatient clinic services in Australia. An audit was carried out on healthcare workers' (doctors, nurses, psychologists, social workers, counsellors, and aboriginal health education officers) data entry errors in an outpatient electronic documentation system. Of the 6919 data entries in the electronic documentation system, 7.5% (518) had errors: 68.7% of the errors involved a wrong primary activity; 14.5% a wrong activity category; 14.5% a wrong combination of primary activity and modality of care; 1.9% inaccurate information on a client's presence during service delivery; and 0.4% a wrong modality of care. Data entry errors may affect the amount of funding received by a healthcare organisation, which in turn may affect the quality of treatment provided to clients due to the possibility of underfunding the organisation. To reduce errors or achieve an error-free environment, there is a need to improve the naming convention of data elements, their descriptions and alignment with the national standard classification of outpatient services. It is also important to support healthcare workers in their data entry by embedding safeguards in the electronic documentation system such as flags for inaccurate data elements.

  3. Carrier recovery methods for a dual-mode modem: A design approach

    NASA Technical Reports Server (NTRS)

    Richards, C. W.; Wilson, S. G.

    1984-01-01

A dual-mode modem with selectable QPSK or 16-QASK modulation schemes is discussed. The theoretical reasoning as well as the practical trade-offs made during the development of the modem are presented, with attention given to the carrier recovery method used for coherent demodulation. Particular attention is given to carrier recovery methods that can provide little degradation due to phase error for both QPSK and 16-QASK, while being insensitive to the amplitude characteristic of a 16-QASK modulation scheme. A computer analysis of the degradation in symbol error rate (SER) for QPSK and 16-QASK due to phase error is presented. Results show that an energy increase of roughly 4 dB is needed to maintain an SER of 1x10(-5) for QPSK with 20 deg of phase error and for 16-QASK with 7 deg of phase error.
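For QPSK, the degradation due to a static carrier phase error has a standard closed-form approximation (the textbook coherent-detection result; the paper's exact simulation model may differ). A sketch that gives a feel for the numbers quoted above:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def qpsk_ber_phase_error(ebn0_db, phi_deg):
    """Approximate QPSK bit error rate with a static carrier phase error
    phi: the in-phase and quadrature decisions see effective amplitudes
    cos(phi) -/+ sin(phi) (standard result for coherent detection)."""
    g = math.sqrt(2.0 * 10 ** (ebn0_db / 10.0))
    p = math.radians(phi_deg)
    return 0.5 * (q_func(g * (math.cos(p) - math.sin(p)))
                  + q_func(g * (math.cos(p) + math.sin(p))))

# A 20-degree phase error raises the error rate by orders of magnitude
# at a fixed Eb/N0, so extra signal energy is needed to hold the SER.
print(qpsk_ber_phase_error(9.6, 0.0), qpsk_ber_phase_error(9.6, 20.0))
```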

  4. Errors in the estimation of approximate entropy and other recurrence-plot-derived indices due to the finite resolution of RR time series.

    PubMed

    García-González, Miguel A; Fernández-Chimeno, Mireya; Ramos-Castro, Juan

    2009-02-01

    An analysis of the errors due to the finite resolution of RR time series in the estimation of the approximate entropy (ApEn) is described. The quantification errors in the discrete RR time series produce considerable errors in the ApEn estimation (bias and variance) when the signal variability or the sampling frequency is low. Similar errors can be found in indices related to the quantification of recurrence plots. An easy way to calculate a figure of merit [the signal to resolution of the neighborhood ratio (SRN)] is proposed in order to predict when the bias in the indices could be high. When SRN is close to an integer value n, the bias is higher than when near n - 1/2 or n + 1/2. Moreover, if SRN is close to an integer value, the lower this value, the greater the bias is.
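The sensitivity to resolution described above can be illustrated with a toy quantization. The `quantize` helper and the radius-to-resolution ratio below are hypothetical illustrations (ApEn typically uses a neighborhood radius r = 0.2 × SD), not the paper's exact SRN definition:

```python
import math
import random

def quantize(series, q):
    """Round each RR interval to the nearest multiple of the resolution q
    (e.g. one sampling period), mimicking a discrete RR time series."""
    return [q * round(v / q) for v in series]

# When the ApEn neighborhood radius is only a small multiple of q,
# neighbor counts change in coarse jumps and the estimate is biased.
random.seed(0)
rr = [0.8 + random.gauss(0.0, 0.02) for _ in range(300)]  # synthetic RR (s)
mean = sum(rr) / len(rr)
sd = math.sqrt(sum((v - mean) ** 2 for v in rr) / len(rr))
for q in (0.001, 0.004, 0.008):  # 1, 4 and 8 ms resolutions
    print(f"q={q}: radius/resolution = {0.2 * sd / q:.2f}")
```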

  5. High-precision pointing with the Sardinia Radio Telescope

    NASA Astrophysics Data System (ADS)

    Poppi, Sergio; Pernechele, Claudio; Pisanu, Tonino; Morsiani, Marco

    2010-07-01

We present here the systems designed to measure and minimize the pointing errors of the Sardinia Radio Telescope: an optical telescope to measure errors due to deformations of the mechanical structure, and a laser system for errors due to subreflector displacement. We show the results of tests performed on the Medicina 32-meter VLBI radio telescope. The measurements demonstrate that we can measure the pointing errors of the mechanical structure with an accuracy of about ~1 arcsec. Moreover, we show the technique for measuring the displacement of the subreflector, placed in the SRT at 22 meters from the main mirror, to within +/-0.1 mm of its optimal position. These measurements show that we can obtain the accuracy needed also to correct the non-repeatable pointing errors, which arise on time scales varying from seconds to minutes.

  6. Parent Preferences for Medical Error Disclosure: A Qualitative Study.

    PubMed

    Coffey, Maitreya; Espin, Sherry; Hahmann, Tara; Clairman, Hayyah; Lo, Lisha; Friedman, Jeremy N; Matlow, Anne

    2017-01-01

    According to disclosure guidelines, patients experiencing adverse events due to medical errors should be offered full disclosure, whereas disclosure of near misses is not traditionally expected. This may conflict with parental expectations; surveys reveal most parents expect full disclosure whether errors resulted in harm or not. Protocols regarding whether to include children in these discussions have not been established. This study explores parent preferences around disclosure and views on including children. Fifteen parents of hospitalized children participated in semistructured interviews. Three hypothetical scenarios of different severity were used to initiate discussion. Interviews were audiotaped, transcribed, and coded for emergent themes. Parents uniformly wanted disclosure if harm occurred, although fewer wanted their child informed. For nonharmful errors, most parents wanted disclosure for themselves but few for their children.With respect to including children in disclosure, parents preferred to assess their children's cognitive and emotional readiness to cope with disclosure, wishing to act as a "buffer" between the health care team and their children. Generally, as event severity decreased, they felt that risks of informing children outweighed benefits. Parents strongly emphasized needing reassurance of a good final outcome and anticipated difficulty managing their emotions. Parents have mixed expectations regarding disclosure. Although survey studies indicate a stronger desire for disclosure of nonharmful events than for adult patients, this qualitative study revealed a greater degree of hesitation and complexity. Parents have a great need for reassurance and consistently wish to act as a buffer between the health care team and their children. Copyright © 2017 by the American Academy of Pediatrics.

  7. Failure analysis and modeling of a multicomputer system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Subramani, Sujatha Srinivasan

    1990-01-01

This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).
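The structural part of a k-out-of-n comparison like the one above can be sketched with a simple binomial model. This is an assumption-laden simplification (independent machines sharing a single availability p, no reward weighting of states; the value p = 0.9 is illustrative, not from the thesis):

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    """Probability that at least k of n independent machines are up,
    each with availability p -- the structural skeleton of a
    k-out-of-n model (a full reward model also weights each state)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

# Relaxing the requirement from 7-of-7 toward 3-of-7 sharply raises the
# probability that the system counts as operational:
for k in (7, 6, 5, 3):
    print(k, round(k_out_of_n_reliability(k, 7, 0.9), 4))
```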

  8. Paediatric in-patient prescribing errors in Malaysia: a cross-sectional multicentre study.

    PubMed

    Khoo, Teik Beng; Tan, Jing Wen; Ng, Hoong Phak; Choo, Chong Ming; Bt Abdul Shukor, Intan Nor Chahaya; Teh, Siao Hean

    2017-06-01

Background There is a lack of large comprehensive studies in developing countries on paediatric in-patient prescribing errors in different settings. Objectives To determine the characteristics of in-patient prescribing errors among paediatric patients. Setting General paediatric wards, neonatal intensive care units and paediatric intensive care units in government hospitals in Malaysia. Methods This is a cross-sectional multicentre study involving 17 participating hospitals. Drug charts were reviewed in each ward to identify the prescribing errors. All prescribing errors identified were further assessed for their potential clinical consequences, likely causes and contributing factors. Main outcome measures Incidence, types, potential clinical consequences, causes and contributing factors of the prescribing errors. Results The overall prescribing error rate was 9.2% of 17,889 prescribed medications. There was no significant difference in the prescribing error rates between different types of hospitals or wards. The use of electronic prescribing had a higher prescribing error rate than manual prescribing (16.9 vs 8.2%, p < 0.05). Twenty-eight (1.7%) prescribing errors were deemed to have serious potential clinical consequences and 2 (0.1%) were judged to be potentially fatal. Most of the errors were attributed to human factors, i.e. performance or knowledge deficit. The most common contributing factors were lack of supervision and lack of knowledge. Conclusions Although electronic prescribing may potentially improve safety, it may conversely cause prescribing errors due to suboptimal interfaces and cumbersome work processes. Junior doctors need specific training in paediatric prescribing and close supervision to reduce prescribing errors in paediatric in-patients.

  9. Effect of horizontal displacements due to ocean tide loading on the determination of polar motion and UT1

    NASA Astrophysics Data System (ADS)

    Scherneck, Hans-Georg; Haas, Rüdiger

We show the influence of horizontal displacements due to ocean tide loading on the determination of polar motion and UT1 (PMU) on daily and subdaily timescales. So-called 'virtual PMU variations' due to modelling errors of ocean tide loading are predicted for geodetic Very Long Baseline Interferometry (VLBI) networks. This leads to errors in the subdaily determination of PMU. The predicted effects are confirmed by the analysis of geodetic VLBI observations.

  10. Accuracy Study of a Robotic System for MRI-guided Prostate Needle Placement

    PubMed Central

    Seifabadi, Reza; Cho, Nathan BJ.; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fichtinger, Gabor; Iordachita, Iulian

    2013-01-01

Background Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of a MRI-guided robot for prostate biopsy have been identified, quantified, and minimized to the possible extent. Methods and Materials The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called before-insertion error) and the error associated with needle-tissue interaction (called due-to-insertion error). The before-insertion error was measured directly in a soft phantom and different sources contributing into this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator’s error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator’s accuracy and repeatability was also studied. Results The average overall system error in the phantom study was 2.5 mm (STD=1.1mm). The average robotic system error in the super soft phantom was 1.3 mm (STD=0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was approximated to be 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator’s targeting accuracy was 0.71 mm (STD=0.21mm) after robot calibration. The robot’s repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot’s accuracy and repeatability. Conclusions The experimental methodology presented in this paper may help researchers to identify, quantify, and minimize different sources contributing into the overall needle placement error of an MRI-guided robotic system for prostate needle placement.
In the robotic system analyzed here, the overall error of the studied system remained within the acceptable range. PMID:22678990
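The indirect approximation of the due-to-insertion error follows from the orthogonality assumption stated in the abstract: the before-insertion error is subtracted from the overall error in quadrature. A one-line check of the reported numbers:

```python
import math

def due_to_insertion_error(overall, before_insertion):
    """Assuming the two error components are orthogonal (as the study
    does), the needle-tissue interaction error is the quadrature
    difference of the overall and before-insertion errors (mm)."""
    return math.sqrt(overall**2 - before_insertion**2)

# sqrt(2.5^2 - 1.3^2) -> prints 2.14 (the abstract reports 2.13 mm,
# consistent up to rounding of the inputs)
print(round(due_to_insertion_error(2.5, 1.3), 2))
```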

  11. Accuracy study of a robotic system for MRI-guided prostate needle placement.

    PubMed

    Seifabadi, Reza; Cho, Nathan B J; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M; Fichtinger, Gabor; Iordachita, Iulian

    2013-09-01

    Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of a MRI-guided robot for prostate biopsy have been identified, quantified and minimized to the possible extent. The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called 'before-insertion error') and the error associated with needle-tissue interaction (called 'due-to-insertion error'). Before-insertion error was measured directly in a soft phantom and different sources contributing into this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the Super Soft plastic phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was found to be approximately 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. The experimental methodology presented in this paper may help researchers to identify, quantify and minimize different sources contributing into the overall needle placement error of an MRI-guided robotic system for prostate needle placement. 
In the robotic system analysed here, the overall error of the studied system remained within the acceptable range. Copyright © 2012 John Wiley & Sons, Ltd.

  12. Statistics of the residual refraction errors in laser ranging data

    NASA Technical Reports Server (NTRS)

    Gardner, C. S.

    1977-01-01

    A theoretical model for the range error covariance was derived by assuming that the residual refraction errors are due entirely to errors in the meteorological data which are used to calculate the atmospheric correction. The properties of the covariance function are illustrated by evaluating the theoretical model for the special case of a dense network of weather stations uniformly distributed within a circle.

  13. Nonlinear robust control of hypersonic aircrafts with interactions between flight dynamics and propulsion systems.

    PubMed

    Li, Zhaoying; Zhou, Wenjie; Liu, Hao

    2016-09-01

    This paper addresses the nonlinear robust tracking controller design problem for hypersonic vehicles. This problem is challenging due to strong coupling between the aerodynamics and the propulsion system, and the uncertainties involved in the vehicle dynamics including parametric uncertainties, unmodeled model uncertainties, and external disturbances. By utilizing the feedback linearization technique, a linear tracking error system is established with prescribed references. For the linear model, a robust controller is proposed based on the signal compensation theory to guarantee that the tracking error dynamics is robustly stable. Numerical simulation results are given to show the advantages of the proposed nonlinear robust control method, compared to the robust loop-shaping control approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  14. Development of a Laboratory for Improving Communication between Air Traffic Controllers and Pilots

    NASA Technical Reports Server (NTRS)

    Brammer, Anthony

    2003-01-01

    Runway incursions and other surface incidents are known to be significant threats to aviation safety and efficiency. Though the number of near mid-air collisions in U.S. air space has remained unchanged during the last five years, the number of runway incursions has increased and they are almost all due to human error. The three most common factors contributing to air traffic controller and pilot error in airport operations include two that involve failed auditory communication. This project addressed the problems of auditory communication in air traffic control from an acoustical standpoint, by establishing an acoustics laboratory designed for this purpose and initiating research into selected topics that show promise for improving voice communications between air traffic controllers and pilots.

  15. IMPROVED SPECTROPHOTOMETRIC CALIBRATION OF THE SDSS-III BOSS QUASAR SAMPLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margala, Daniel; Kirkby, David; Dawson, Kyle

    2016-11-10

We present a model for spectrophotometric calibration errors in observations of quasars from the third generation of the Sloan Digital Sky Survey Baryon Oscillation Spectroscopic Survey (BOSS) and describe the correction procedure we have developed and applied to this sample. Calibration errors are primarily due to atmospheric differential refraction and guiding offsets during each exposure. The corrections potentially reduce the systematics for any studies of BOSS quasars, including the measurement of baryon acoustic oscillations using the Ly α forest. Our model suggests that, on average, the observed quasar flux in BOSS is overestimated by ∼19% at 3600 Å and underestimated by ∼24% at 10,000 Å. Our corrections for the entire BOSS quasar sample are publicly available.

  16. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    PubMed

    Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K

    2016-11-25

Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NOx, or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
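The attenuation mechanism described above can be reproduced with a minimal simulation of classical additive measurement error. This is a single-exposure sketch under assumed variances, far simpler than the study's correlated copollutant Poisson design:

```python
import random

def attenuation_sim(beta, var_x, var_u, n=20000, seed=7):
    """Classical additive measurement error: regressing y on the
    contaminated exposure w = x + u attenuates the estimated slope by
    roughly var_x / (var_x + var_u)."""
    random.seed(seed)
    xs = [random.gauss(0.0, var_x ** 0.5) for _ in range(n)]   # true exposure
    ws = [x + random.gauss(0.0, var_u ** 0.5) for x in xs]     # observed
    ys = [beta * x + random.gauss(0.0, 1.0) for x in xs]       # outcome
    mw = sum(ws) / n
    my = sum(ys) / n
    cov = sum((w - mw) * (y - my) for w, y in zip(ws, ys)) / n
    var = sum((w - mw) ** 2 for w in ws) / n
    return cov / var  # ordinary least-squares slope of y on w

# With equal signal and error variance, the estimated slope comes out
# near half the true slope -- the attenuation the study quantifies.
print(attenuation_sim(beta=1.0, var_x=1.0, var_u=1.0))
```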

  17. Correlation of Head Impacts to Change in Balance Error Scoring System Scores in Division I Men's Lacrosse Players.

    PubMed

    Miyashita, Theresa L; Diakogeorgiou, Eleni; Marrie, Kaitlyn

    Investigation into the effect of cumulative subconcussive head impacts has yielded various results in the literature, with many supporting a link to neurological deficits. Little research has been conducted on men's lacrosse and associated balance deficits from head impacts. Hypotheses: (1) athletes will commit more errors on the postseason Balance Error Scoring System (BESS) test; (2) there will be a positive correlation between change in BESS scores and head impact exposure data. Study design: prospective longitudinal study; level of evidence, 3. Thirty-four Division I men's lacrosse players (age, 19.59 ± 1.42 years) wore helmets instrumented with a sensor to collect head impact exposure data over the course of a competitive season. Players completed a BESS test at the start and end of the competitive season. The number of errors from pre- to postseason increased during the double-leg stance on foam (P < 0.001), tandem stance on foam (P = 0.009), total number of errors on a firm surface (P = 0.042), and total number of errors on a foam surface (P = 0.007). There were significant correlations only between the total errors on a foam surface and linear acceleration (P = 0.038, r = 0.36), head injury criteria (P = 0.024, r = 0.39), and Gadd Severity Index scores (P = 0.031, r = 0.37). Changes in the total number of errors on a foam surface may be considered a sensitive measure to detect balance deficits associated with cumulative subconcussive head impacts sustained over the course of one lacrosse season, as measured by average linear acceleration, head injury criteria, and Gadd Severity Index scores. If there is microtrauma to the vestibular system due to repetitive subconcussive impacts, only an assessment that highly stresses the vestibular system may be able to detect these changes. Cumulative subconcussive impacts may result in neurocognitive dysfunction, including balance deficits, which are associated with an increased risk for injury. The development of a strategy to reduce the total number of head impacts may curb the associated sequelae. A modified BESS test using a firm surface only may not be recommended, as it may not detect changes due to repetitive impacts over the course of a competitive season.

  18. Modeling human response errors in synthetic flight simulator domain

    NASA Technical Reports Server (NTRS)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling to integrate the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. The models will be verified experimentally in a flight handling-quality simulation.

  19. Short RNA indicator sequences are not completely degraded by autoclaving

    PubMed Central

    Unnithan, Veena V.; Unc, Adrian; Joe, Valerisa; Smith, Geoffrey B.

    2014-01-01

    Short indicator RNA sequences (<100 bp) persist after autoclaving and are recovered intact by molecular amplification. Primers targeting longer sequences are most likely to produce false positives due to amplification errors, which are easily verified by melting curve analyses. If short indicator RNA sequences are used for virus identification and quantification, then post-autoclave RNA degradation methodology should be employed, which may include further autoclaving. PMID:24518856

  20. Erratum: Erratum to: A Novel Approach to Determination of Threshold for Stress Corrosion Cracking (KISCC) using Round Tensile Specimens

    NASA Astrophysics Data System (ADS)

    Singh Raman, R. K.; Rihan, R.; Ibrahim, R. N.

    2017-11-01

    Due to an error by the authors, the reference R. Rihan, R.K. Singh Raman, and R.N. Ibrahim: Materials Science and Engineering A, 2006, vol. 425, pp. 272-77 should have been included in the list of references as well as cited as a source of the data in Figures 11, 12 and 16.

  1. Glass viscosity calculation based on a global statistical modelling approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fluegel, Alex

    2007-02-01

    A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E type glasses, low expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that within a measurement series from a specific laboratory the reported viscosity values are often over-estimated at higher temperatures due to alkali and boron oxide evaporation during the measurement and glass preparation, including data by Lakatos et al. (1972) and the recently published High temperature glass melt property database for process modeling by Seward et al. (2005). Similarly, in the glass transition range many experimental data of borosilicate glasses are reported too high due to phase separation effects. The developed global model corrects those errors. The model standard error was 9-17°C, with R^2 = 0.985-0.989. The prediction 95% confidence interval for glass in mass production largely depends on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights into the mixed-alkali effect are provided.

  2. Automated error correction in IBM quantum computer and explicit generalization

    NASA Astrophysics Data System (ADS)

    Ghosh, Debjit; Agarwal, Pratik; Pandey, Pratyush; Behera, Bikash K.; Panigrahi, Prasanta K.

    2018-06-01

    Construction of a fault-tolerant quantum computer remains a challenging problem due to unavoidable noise and fragile quantum states. However, this goal can be achieved by introducing quantum error-correcting codes. Here, we experimentally realize an automated error correction code and demonstrate the nondestructive discrimination of GHZ states in the IBM 5-qubit quantum computer. After performing quantum state tomography, we obtain the experimental results with a high fidelity. Finally, we generalize the investigated code to the maximally entangled n-qudit case, which can both detect and automatically correct any arbitrary phase-change error, any phase-flip error, any bit-flip error, or any combination of these errors.

  3. Medication errors as malpractice-a qualitative content analysis of 585 medication errors by nurses in Sweden.

    PubMed

    Björkstén, Karin Sparring; Bergqvist, Monica; Andersén-Karlsson, Eva; Benson, Lina; Ulfvarson, Johanna

    2016-08-24

    Many studies address the prevalence of medication errors, but few address medication errors serious enough to be regarded as malpractice. Other studies have analyzed the individual and system contributory factors leading to a medication error. Nurses have a key role in medication administration, and there are contradictory reports on nurses' work experience in relation to the risk and type of medication errors. All medication errors where a nurse was held responsible for malpractice (n = 585) during 11 years in Sweden were included. A qualitative content analysis and classification according to type and to individual and system contributory factors were made. To test for possible differences between nurses' work experience and associations within and between the errors and contributory factors, Fisher's exact test was used, and Cohen's kappa (k) was performed to estimate the magnitude and direction of the associations. There were a total of 613 medication errors in the 585 cases, the most common being "Wrong dose" (41 %), "Wrong patient" (13 %) and "Omission of drug" (12 %). In 95 % of the cases, an average of 1.4 individual contributory factors was found; the most common being "Negligence, forgetfulness or lack of attentiveness" (68 %), "Proper protocol not followed" (25 %), "Lack of knowledge" (13 %) and "Practice beyond scope" (12 %). In 78 % of the cases, an average of 1.7 system contributory factors was found; the most common being "Role overload" (36 %), "Unclear communication or orders" (30 %) and "Lack of adequate access to guidelines or unclear organisational routines" (30 %). The errors "Wrong patient due to mix-up of patients" and "Wrong route" and the contributory factors "Lack of knowledge" and "Negligence, forgetfulness or lack of attentiveness" were more common among less experienced nurses. The experienced nurses were more prone to "Practice beyond scope of practice" and to make errors in spite of "Lack of adequate access to guidelines or unclear organisational routines". Medication errors regarded as malpractice in Sweden were of the same character as medication errors worldwide. A complex interplay between individual and system factors often contributed to the errors.

  4. Factors that influence the generation of autobiographical memory conjunction errors

    PubMed Central

    Devitt, Aleea L.; Monk-Fromont, Edwin; Schacter, Daniel L.; Addis, Donna Rose

    2015-01-01

    The constructive nature of memory is generally adaptive, allowing us to efficiently store, process and learn from life events, and simulate future scenarios to prepare ourselves for what may come. However, the cost of a flexibly constructive memory system is the occasional conjunction error, whereby the components of an event are authentic, but the combination of those components is false. Using a novel recombination paradigm, it was demonstrated that details from one autobiographical memory may be incorrectly incorporated into another, forming autobiographical memory conjunction errors that elude typical reality monitoring checks. The factors that contribute to the creation of these conjunction errors were examined across two experiments. Conjunction errors were more likely to occur when the corresponding details were partially rather than fully recombined, likely due to increased plausibility and ease of simulation of partially recombined scenarios. Brief periods of imagination increased conjunction error rates, in line with the imagination inflation effect. Subjective ratings suggest that this inflation is due to similarity of phenomenological experience between conjunction and authentic memories, consistent with a source monitoring perspective. Moreover, objective scoring of memory content indicates that increased perceptual detail may be particularly important for the formation of autobiographical memory conjunction errors. PMID:25611492

  5. Factors that influence the generation of autobiographical memory conjunction errors.

    PubMed

    Devitt, Aleea L; Monk-Fromont, Edwin; Schacter, Daniel L; Addis, Donna Rose

    2016-01-01

    The constructive nature of memory is generally adaptive, allowing us to efficiently store, process and learn from life events, and simulate future scenarios to prepare ourselves for what may come. However, the cost of a flexibly constructive memory system is the occasional conjunction error, whereby the components of an event are authentic, but the combination of those components is false. Using a novel recombination paradigm, it was demonstrated that details from one autobiographical memory (AM) may be incorrectly incorporated into another, forming AM conjunction errors that elude typical reality monitoring checks. The factors that contribute to the creation of these conjunction errors were examined across two experiments. Conjunction errors were more likely to occur when the corresponding details were partially rather than fully recombined, likely due to increased plausibility and ease of simulation of partially recombined scenarios. Brief periods of imagination increased conjunction error rates, in line with the imagination inflation effect. Subjective ratings suggest that this inflation is due to similarity of phenomenological experience between conjunction and authentic memories, consistent with a source monitoring perspective. Moreover, objective scoring of memory content indicates that increased perceptual detail may be particularly important for the formation of AM conjunction errors.

  6. Prediction of transmission distortion for wireless video communication: analysis.

    PubMed

    Chen, Zhifeng; Wu, Dapeng

    2012-03-01

    Transmitting video over wireless is a challenging problem since video may be seriously distorted due to packet errors caused by wireless channels. The capability of predicting transmission distortion (i.e., video distortion caused by packet errors) can assist in designing video encoding and transmission schemes that achieve maximum video quality or minimum end-to-end video distortion. This paper is aimed at deriving formulas for predicting transmission distortion. The contribution of this paper is twofold. First, we identify the governing law that describes how the transmission distortion process evolves over time and analytically derive the transmission distortion formula as a closed-form function of video frame statistics, channel error statistics, and system parameters. Second, we identify, for the first time, two important properties of transmission distortion. The first property is that the clipping noise, which is produced by nonlinear clipping, causes decay of propagated error. The second property is that the correlation between motion-vector concealment error and propagated error is negative and has dominant impact on transmission distortion, compared with other correlations. Due to these two properties and elegant error/distortion decomposition, our formula provides not only more accurate prediction but also lower complexity than the existing methods.

  7. Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality.

    PubMed

    Bishara, Anthony J; Hittner, James B

    2015-10-01

    It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared with its major alternatives, including the Spearman rank-order correlation, the bootstrap estimate, the Box-Cox transformation family, and a general normalizing transformation (i.e., rankit), as well as to various bias adjustments. Nonnormality caused the correlation coefficient to be inflated by up to +.14, particularly when the nonnormality involved heavy-tailed distributions. Traditional bias adjustments worsened this problem, further inflating the estimate. The Spearman and rankit correlations eliminated this inflation and provided conservative estimates. Rankit also minimized random error for most sample sizes, except for the smallest samples (n = 10), where bootstrapping was more effective. Overall, results justify the use of carefully chosen alternatives to the Pearson correlation when normality is violated.
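
    The heavy-tail effect can be seen in a minimal sketch: a single extreme observation inflates the Pearson correlation of otherwise independent data, while rank-based estimators (Spearman, and the rankit transform mentioned in the abstract) remain near zero. The data and the outlier value are invented for illustration.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)

def pearson(a, b):
    a = a - a.mean(); b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def rank(v):
    # 1..n ranks (continuous data, so no ties to average)
    r = np.empty(len(v)); r[np.argsort(v)] = np.arange(1, len(v) + 1)
    return r

def rankit(v):
    # inverse-normal transform of (rank - 0.5)/n: a normalizing transformation
    n = len(v)
    return np.array([NormalDist().inv_cdf((r - 0.5) / n) for r in rank(v)])

x, y = rng.normal(size=50), rng.normal(size=50)      # independent: true r = 0
x_o, y_o = np.append(x, 100.0), np.append(y, 100.0)  # one heavy-tailed outlier

r_pearson = pearson(x_o, y_o)                # inflated by the single outlier
r_spearman = pearson(rank(x_o), rank(y_o))   # rank-based: nearly unaffected
r_rankit = pearson(rankit(x_o), rankit(y_o))
```

    Spearman is computed here as the Pearson correlation of the ranks, which is exact when there are no ties; rankit caps the outlier at a bounded z-score, which is why both stay close to the true value of zero.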

  8. Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality

    PubMed Central

    Hittner, James B.

    2014-01-01

    It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared with its major alternatives, including the Spearman rank-order correlation, the bootstrap estimate, the Box–Cox transformation family, and a general normalizing transformation (i.e., rankit), as well as to various bias adjustments. Nonnormality caused the correlation coefficient to be inflated by up to +.14, particularly when the nonnormality involved heavy-tailed distributions. Traditional bias adjustments worsened this problem, further inflating the estimate. The Spearman and rankit correlations eliminated this inflation and provided conservative estimates. Rankit also minimized random error for most sample sizes, except for the smallest samples (n = 10), where bootstrapping was more effective. Overall, results justify the use of carefully chosen alternatives to the Pearson correlation when normality is violated. PMID:29795841

  9. A novel rotational matrix and translation vector algorithm: geometric accuracy for augmented reality in oral and maxillofacial surgeries.

    PubMed

    Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C

    2018-06-01

    Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.

  10. Operator- and software-related post-experimental variability and source of error in 2-DE analysis.

    PubMed

    Millioni, Renato; Puricelli, Lucia; Sbrignadello, Stefano; Iori, Elisabetta; Murphy, Ellen; Tessari, Paolo

    2012-05-01

    In the field of proteomics, several approaches have been developed for separating proteins and analyzing their differential relative abundance. One of the oldest, yet still widely used, is 2-DE. Despite the continuous advance of new methods, which are less demanding from a technical standpoint, 2-DE is still compelling and has a lot of potential for improvement. The overall variability which affects 2-DE includes biological, experimental, and post-experimental (software-related) variance. It is important to highlight how much of the total variability of this technique is due to post-experimental variability, which, so far, has been largely neglected. In this short review, we have focused on this topic and explained that post-experimental variability and source of error can be further divided into those which are software-dependent and those which are operator-dependent. We discuss these issues in detail, offering suggestions for reducing errors that may affect the quality of results, summarizing the advantages and drawbacks of each approach.

  11. Multi-canister overpack project -- verification and validation, MCNP 4A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldmann, L.H.

    This supporting document contains the software verification and validation (V and V) package used for Phase 2 design of the Spent Nuclear Fuel Multi-Canister Overpack. V and V packages for both ANSYS and MCNP are included. Description of Verification Run(s): This software requires that it be compiled specifically for the machine it is to be used on. Therefore, to facilitate ease in the verification process, the software automatically runs 25 sample problems to ensure proper installation and compilation. Once the runs are completed, the software checks for verification by performing a file comparison on the new output file and the old output file. Any differences between any of the files will cause a verification error. Due to the manner in which the verification is completed, a verification error does not necessarily indicate a problem. This indicates that a closer look at the output files is needed to determine the cause of the error.

  12. Parenteral nutrition in patients with inborn errors of metabolism - a therapeutic problem.

    PubMed

    Kaluzny, L; Szczepanik, M; Siwinska-Mrozek, Z; Borkowska-Klos, M; Cichy, W; Walkowiak, J

    2014-06-01

    Parenteral nutrition is now a standard part of supportive treatment in pediatric departments. We describe four cases in which parenteral nutrition was extremely difficult due to coincidence with inborn errors of metabolism. The first two cases were fatty acid beta-oxidation disorders associated with necrotizing enterocolitis and congenital heart disease; here, limitations on intravenous lipid intake made it difficult to maintain a good nutritional status. The third case was phenylketonuria associated with a facial region tumour (rhabdomyosarcoma), in which parenteral nutrition was complicated because of the high phenylalanine content of the amino acid formulas for parenteral nutrition. The fourth patient was a child with late-diagnosed tyrosinemia type 1, complicated by encephalopathy; during intensive care treatment the patient needed nutritional support, including parenteral nutrition, and we observed amino acid formula problems similar to those in the phenylketonuria patient. Parenteral nutrition in children with inborn errors of metabolism is a rare but very important therapeutic problem. Total parenteral nutrition formulas are not prepared for this group of diseases.

  13. Errors in Computing the Normalized Protein Catabolic Rate due to Use of Single-pool Urea Kinetic Modeling or to Omission of the Residual Kidney Urea Clearance.

    PubMed

    Daugirdas, John T

    2017-07-01

    The protein catabolic rate normalized to body size (PCRn) often is computed in dialysis units to obtain information about protein ingestion. However, errors can manifest when inappropriate modeling methods are used. We used a variable volume 2-pool urea kinetic model to examine the percent errors in PCRn due to use of a 1-pool urea kinetic model or after omission of residual urea clearance (Kru). When a single-pool model was used, 2 sources of errors were identified. The first, dependent on the ratio of dialyzer urea clearance to urea distribution volume (K/V), resulted in a 7% inflation of the PCRn when K/V was in the range of 6 mL/min per L. A second, larger error appeared when Kt/V values were below 1.0 and was related to underestimation of urea distribution volume (due to overestimation of effective clearance) by the single-pool model. A previously reported prediction equation for PCRn was valid, but data suggest that it should be modified using 2-pool eKt/V and V coefficients instead of single-pool values. A third source of error, this one unrelated to use of a single-pool model, namely omission of Kru, was shown to result in an underestimation of PCRn, such that each ml/minute Kru per 35 L of V caused a 5.6% underestimate in PCRn. Marked overestimation of PCRn can result due to inappropriate use of a single-pool urea kinetic model, particularly when Kt/V <1.0 (as in short daily dialysis), or after omission of residual native kidney clearance. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
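
    The abstract's rule of thumb for the Kru omission error (a 5.6% underestimate per mL/min of Kru per 35 L of urea distribution volume) can be wrapped in a tiny helper; the linear scaling in both Kru and V is my reading of that sentence, not a formula quoted from the paper.

```python
def pcrn_underestimate_pct(kru_ml_min: float, v_liters: float) -> float:
    """Approximate % underestimate of PCRn when residual kidney urea
    clearance (Kru, mL/min) is omitted, scaling the abstract's stated
    5.6% per mL/min of Kru per 35 L of urea distribution volume V."""
    return 5.6 * kru_ml_min * (35.0 / v_liters)
```

    For example, a patient with Kru of 2 mL/min and V of 35 L would have PCRn underestimated by roughly 11% if Kru were omitted from the model.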

  14. Water equivalent thickness of immobilization devices in proton therapy planning - Modelling at treatment planning and validation by measurements with a multi-layer ionization chamber.

    PubMed

    Fellin, Francesco; Righetto, Roberto; Fava, Giovanni; Trevisan, Diego; Amelio, Dante; Farace, Paolo

    2017-03-01

    To investigate the range errors made in treatment planning due to the presence of immobilization devices along the proton beam path, the water equivalent thickness (WET) of selected devices was measured with a high-energy spot and a multi-layer ionization chamber and compared with that predicted by the treatment planning system (TPS). Two treatment couches, two thermoplastic masks (both un-stretched and stretched) and one headrest were selected. At the TPS, every immobilization device was modelled as being part of the patient. The following parameters were assessed: CT acquisition protocol, dose-calculation grid size (1.5 and 3.0 mm) and beam entrance with respect to the devices (coplanar and non-coplanar). Finally, the potential errors produced by an incorrect manual separation between the treatment couch and the CT table (not present during treatment) were investigated. In the thermoplastic masks, there was a clear effect due to beam entrance, a moderate effect due to the CT protocols and almost no effect due to TPS grid size, with 1 mm errors observed only when thick un-stretched portions were crossed by non-coplanar beams. In the treatment couches the WET errors were negligible (<0.3 mm) regardless of the grid size and CT protocol. The potential range errors produced by the manual separation between the treatment couch and the CT table were small with a 1.5 mm grid size, but could be >0.5 mm with a 3.0 mm grid size. In the headrest, WET errors were negligible (0.2 mm). With only one exception (un-stretched mask, non-coplanar beams), the WET of all the immobilization devices was properly modelled by the TPS. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
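
    The quantity being validated here, water equivalent thickness, is to first order the sum over slabs of physical thickness times relative (to water) stopping power. A minimal sketch, with invented layer thicknesses and RSP values rather than device data from the paper:

```python
def water_equivalent_thickness(layers):
    """First-order WET of a stack of slabs: sum of physical thickness (mm)
    times relative stopping power. Standard definition; the layer values
    used below are illustrative assumptions, not measured device data."""
    return sum(t_mm * rsp for t_mm, rsp in layers)

# Hypothetical thermoplastic mask: 2.0 mm at RSP 1.0 plus 1.5 mm at RSP 0.9
mask = [(2.0, 1.0), (1.5, 0.9)]
wet = water_equivalent_thickness(mask)  # mm of water-equivalent material
```

    A sub-millimetre WET error of the kind reported above corresponds directly to a sub-millimetre proton range error along that beam path.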

  15. Register file soft error recovery

    DOEpatents

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  16. Sources of error in the retracted scientific literature.

    PubMed

    Casadevall, Arturo; Steen, R Grant; Fang, Ferric C

    2014-09-01

    Retraction of flawed articles is an important mechanism for correction of the scientific literature. We recently reported that the majority of retractions are associated with scientific misconduct. In the current study, we focused on the subset of retractions for which no misconduct was identified, in order to identify the major causes of error. Analysis of the retraction notices for 423 articles indexed in PubMed revealed that the most common causes of error-related retraction are laboratory errors, analytical errors, and irreproducible results. The most common laboratory errors are contamination and problems relating to molecular biology procedures (e.g., sequencing, cloning). Retractions due to contamination were more common in the past, whereas analytical errors are now increasing in frequency. A number of publications that have not been retracted despite being shown to contain significant errors suggest that barriers to retraction may impede correction of the literature. In particular, few cases of retraction due to cell line contamination were found despite recognition that this problem has affected numerous publications. An understanding of the errors leading to retraction can guide practices to improve laboratory research and the integrity of the scientific literature. Perhaps most important, our analysis has identified major problems in the mechanisms used to rectify the scientific literature and suggests a need for action by the scientific community to adopt protocols that ensure the integrity of the publication process. © FASEB.

  17. An Introduction to Error Analysis for Quantitative Chemistry

    ERIC Educational Resources Information Center

    Neman, R. L.

    1972-01-01

    Describes two formulas for calculating errors due to instrument limitations that are usually encountered in gravimetric and volumetric analysis, and indicates their possible applications to other fields of science. (CC)

  18. SU-E-T-377: Inaccurate Positioning Might Introduce Significant MapCheck Calibration Error in Flatten Filter Free Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, S; Chao, C; Columbia University, NY, NY

    2014-06-01

    Purpose: This study investigates the calibration error of detector sensitivity for MapCheck due to inaccurate positioning of the device, which is not taken into account by the current commercial iterative calibration algorithm. We hypothesize that the calibration is more vulnerable to positioning error for flattening filter free (FFF) beams than for conventional flattened beams. Methods: MapCheck2 was calibrated with 10 MV conventional and FFF beams, with careful alignment and with a 1 cm positioning error during calibration, respectively. Open fields of 37 cm x 37 cm were delivered to gauge the impact of the resultant calibration errors. The local calibration error was modeled as a detector-independent multiplication factor, with which the propagation error was estimated for positioning errors from 1 mm to 1 cm. The calibrated sensitivities, without positioning error, were compared between the conventional and FFF beams to evaluate the dependence on beam type. Results: The 1 cm positioning error leads to 0.39% and 5.24% local calibration error in the conventional and FFF beams, respectively. After propagating to the edges of MapCheck, the calibration errors become 6.5% and 57.7%, respectively. The propagation error increases almost linearly with the positioning error. The difference of sensitivities between the conventional and FFF beams was small (0.11 ± 0.49%). Conclusion: The results demonstrate that positioning error is not handled by the current commercial calibration algorithm of MapCheck. In particular, the calibration errors for the FFF beams are ~9 times greater than those for the conventional beams with identical positioning error, and a small 1 mm positioning error might lead to up to 8% calibration error. Since the sensitivities are only slightly dependent on the beam type and the conventional beam is less affected by positioning error, it is advisable to cross-check the sensitivities between the conventional and FFF beams to detect potential calibration errors due to inaccurate positioning. This work was partially supported by DOD Grant No. W81XWH1010862.

  19. The distribution of probability values in medical abstracts: an observational study.

    PubMed

    Ginsel, Bastiaan; Aggarwal, Abhinav; Xuan, Wei; Harris, Ian

    2015-11-26

    A relatively high incidence of p values immediately below 0.05 (such as 0.047 or 0.04) compared to p values immediately above 0.05 (such as 0.051 or 0.06) has been noticed anecdotally in published medical abstracts. If p values immediately below 0.05 are over-represented, such a distribution may reflect the true underlying distribution of p values or may be due to error (a false distribution). If due to error, a consistent over-representation of p values immediately below 0.05 would be a systematic error due either to publication bias or to (overt or inadvertent) bias within studies. We searched the Medline 2012 database to identify abstracts containing a p value. Two thousand abstracts out of 80,649 were randomly selected. Two independent researchers extracted all p values. The p values were plotted and compared to a predicted curve. A chi-square test was used, and significance was set at 0.05. In total, 2798 p value ranges and 3236 exact p values were reported; 4973 of these (82%) were significant (<0.05). There was an over-representation of p values immediately below 0.05 (between 0.01 and 0.049) compared to those immediately above 0.05 (between 0.05 and 0.1) (p = 0.001). The distribution of p values in reported medical abstracts provides evidence for systematic error in the reporting of p values. This may be due to publication bias, methodological errors (underpowering, selective reporting and selective analyses) or fraud.
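
    The below-versus-above-0.05 comparison can be sketched as a two-bin chi-square test. The counts and the even expected split below are invented placeholders; the study itself compared observed counts against a fitted predicted curve, not an even split.

```python
# Hypothetical counts of p values just below vs. just above 0.05.
observed = {"0.01-0.049": 240, "0.05-0.10": 160}

# Assumed expected counts under a smooth, no-excess distribution
# (a 50/50 stand-in for the study's fitted predicted curve).
expected = {"0.01-0.049": 200, "0.05-0.10": 200}

chi2 = sum((observed[k] - expected[k]) ** 2 / expected[k] for k in observed)
significant = chi2 > 3.841  # chi-square critical value, df = 1, alpha = 0.05
```

    With these placeholder counts the statistic is 16.0, well past the df = 1 critical value, mirroring the kind of excess just below 0.05 that the study reports.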

  20. Chromosomal locus tracking with proper accounting of static and dynamic errors

    PubMed Central

    Backlund, Mikael P.; Joyner, Ryan; Moerner, W. E.

    2015-01-01

    The mean-squared displacement (MSD) and velocity autocorrelation (VAC) of tracked single particles or molecules are ubiquitous metrics for extracting parameters that describe the object’s motion, but they are both corrupted by experimental errors that hinder the quantitative extraction of underlying parameters. For the simple case of pure Brownian motion, the effects of localization error due to photon statistics (“static error”) and motion blur due to finite exposure time (“dynamic error”) on the MSD and VAC are already routinely treated. However, particles moving through complex environments such as cells, nuclei, or polymers often exhibit anomalous diffusion, for which the effects of these errors are less often sufficiently treated. We present data from tracked chromosomal loci in yeast that demonstrate the necessity of properly accounting for both static and dynamic error in the context of an anomalous diffusion that is consistent with a fractional Brownian motion (FBM). We compare these data to analytical forms of the expected values of the MSD and VAC for a general FBM in the presence of these errors. PMID:26172745
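
For the simpler pure-Brownian case that the authors describe as routinely treated, the observed MSD has a closed form combining both error types: static (localization) error adds a constant offset 2*d*sigma^2, while motion blur over a full-frame exposure dt subtracts a term proportional to dt. A minimal sketch assuming this standard corrected form (the FBM expressions in the paper itself are more involved):

```python
def msd_observed(tau, D, sigma, dt, d=2):
    """Observed MSD of a Brownian particle in d dimensions with diffusion
    coefficient D, per-coordinate localization error sigma (static error),
    and full-frame exposure time dt (dynamic error / motion blur).
    Valid for lag times tau >= dt."""
    # 2*d*D*tau is the true MSD; -2*d*D*dt/3 is the motion-blur correction;
    # +2*d*sigma**2 is the constant static-error offset.
    return 2 * d * D * (tau - dt / 3.0) + 2 * d * sigma ** 2
```

With sigma = 0 and dt = 0 this reduces to the familiar 2*d*D*tau; fitting the uncorrected form to blurred, noisy data biases the estimate of D.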

  1. Error-Based Design Space Windowing

    NASA Technical Reports Server (NTRS)

    Papila, Melih; Papila, Nilay U.; Shyy, Wei; Haftka, Raphael T.; Fitz-Coy, Norman

    2002-01-01

    Windowing of design space is considered in order to reduce the bias errors due to low-order polynomial response surfaces (RS). Standard design space windowing (DSW) uses a region of interest by setting a requirement on response level and checks it by a global RS predictions over the design space. This approach, however, is vulnerable since RS modeling errors may lead to the wrong region to zoom on. The approach is modified by introducing an eigenvalue error measure based on point-to-point mean squared error criterion. Two examples are presented to demonstrate the benefit of the error-based DSW.

  2. [Medication reconciliation: an innovative experience in an internal medicine unit to decrease errors due to inaccurate medication histories].

    PubMed

    Pérennes, Maud; Carde, Axel; Nicolas, Xavier; Dolz, Manuel; Bihannic, René; Grimont, Pauline; Chapot, Thierry; Granier, Hervé

    2012-03-01

    An inaccurate medication history may prevent the discovery of a pre-admission iatrogenic event or lead to interrupted drug therapy during hospitalization. Medication reconciliation is a process that ensures the transfer of medication information at admission to the hospital. The aims of this prospective study were to evaluate the value of this concept in clinical practice and the resources needed for its implementation. We included patients aged 65 years or over admitted to the internal medicine unit between June and October 2010. We obtained an accurate list of each patient's home medications, which was then compared with the admission medication orders. All medication variances were classified as intended or unintended, and an internist and a pharmacist rated the clinical importance of each unintended variance. Sixty-one patients (mean age: 78 ± 7.4 years) were included. We identified 38 unintended discrepancies, an average of 0.62 per patient. Twenty-five patients (41%) had one or more unintended discrepancies at admission. Contact with the community pharmacist enabled us to identify 21 (55%) of the unintended discrepancies. The most common errors were omission of a regularly used medication (76%) and incorrect dosage (16%). Our intervention resulted in order changes by the physician for 30 (79%) unintended discrepancies. Fifty percent of the unintended variances were judged clinically significant by the internist, and 76% by the pharmacist. Admission to the hospital is a critical transition point for the continuity of care in medication management. Medication reconciliation can identify and resolve errors due to inaccurate medication histories. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  3. Evaluation of Two Computational Techniques of Calculating Multipath Using Global Positioning System Carrier Phase Measurements

    NASA Technical Reports Server (NTRS)

    Gomez, Susan F.; Hood, Laura; Panneton, Robert J.; Saunders, Penny E.; Adkins, Antha; Hwu, Shian U.; Lu, Ba P.

    1996-01-01

    Two computational techniques are used to calculate differential phase errors on Global Positioning System (GPS) carrier phase measurements due to certain multipath-producing objects: a rigorous computational electromagnetics technique called the Geometrical Theory of Diffraction (GTD), and a simple ray tracing method. The GTD technique has been used successfully to predict microwave propagation characteristics by taking into account the dominant multipath components due to reflections and diffractions from scattering structures; the ray tracing technique only solves for reflected signals. The results from the two techniques are compared to GPS differential carrier phase measurements taken on the ground using a GPS receiver in the presence of typical International Space Station (ISS) interference structures. The calculations produced using the GTD code matched the measured results better than those of the ray tracing technique. The agreement was good, demonstrating that the phase errors due to multipath can be modeled and characterized using the GTD technique, and characterized to a lesser fidelity using the ray tracing (DECAT) technique. However, some discrepancies were observed; most occurred at lower elevations and were due either to phase center deviations of the antenna, the background multipath environment, or the receiver itself. Selected measured and predicted differential carrier phase error results are presented and compared. Results indicate that reflections and diffractions caused by the multipath producers located near the GPS antennas can produce phase shifts greater than 10 mm, and as high as 95 mm. It should be noted that the field test configuration was meant to simulate typical ISS structures, but the two environments are not identical. The GTD and DECAT techniques have been used to calculate phase errors due to multipath on the ISS configuration to quantify the expected attitude determination errors.
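
The paper's GTD computation is not reproduced here, but the order of magnitude of carrier-phase multipath error can be sketched with the textbook single-reflector model: a reflected ray of relative amplitude alpha and phase offset theta shifts the tracked carrier phase by psi = atan2(alpha*sin(theta), 1 + alpha*cos(theta)), i.e. by psi*lambda/(2*pi) in range units. All parameter values below are illustrative assumptions.

```python
import math

L1_WAVELENGTH = 0.1903  # GPS L1 carrier wavelength in metres

def multipath_phase_error(alpha, theta, wavelength=L1_WAVELENGTH):
    """Carrier-phase multipath error in metres for one reflected ray of
    relative amplitude alpha and reflection phase offset theta (rad)."""
    psi = math.atan2(alpha * math.sin(theta), 1.0 + alpha * math.cos(theta))
    return psi * wavelength / (2.0 * math.pi)

def max_error_mm(alpha):
    """Worst-case error (mm) over all reflection phases, by a fine scan."""
    return max(abs(multipath_phase_error(alpha, t / 1000.0)) * 1000.0
               for t in range(0, 6284))
```

For alpha = 0.5 the worst case is lambda*asin(alpha)/(2*pi), roughly 16 mm on L1, the same order as the smaller phase shifts reported above; the larger shifts reflect the more complex ISS scattering geometry.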

  4. Shape of the ocean surface and implications for the Earth's interior: GEOS-3 results

    NASA Technical Reports Server (NTRS)

    Chapman, M. E.; Talwani, M.; Kahle, H.; Bodine, J. H.

    1979-01-01

    A new set of 1 deg x 1 deg mean free air anomalies was used to construct a gravimetric geoid by Stokes' formula for the Indian Ocean. Utilizing this 1 deg x 1 deg geoid, comparisons were made with GEOS-3 radar altimeter estimates of geoid height. Most commonly there were constant offsets and long wavelength discrepancies between the two data sets, with several probable causes including radial orbit error, scale errors in the geoid, and bias errors in altitude determination. Across the Aleutian Trench the 1 deg x 1 deg gravimetric geoids did not capture the entire depth of the geoid anomaly, due to averaging over 1 deg squares and subsequent aliasing of the data. After adjustment of the GEOS-3 data to eliminate long wavelength discrepancies, agreement between the altimeter geoid and gravimetric geoid was between 1.7 and 2.7 meters rms. For purposes of geological interpretation, techniques were developed to directly compute the geoid anomaly over models of density within the Earth. From the satellite altimetry results it was possible to identify geoid anomalies over different geologic features in the ocean. Examples and significant results are reported.

  5. Uncertainty of the 20th century sea-level rise due to vertical land motion errors

    NASA Astrophysics Data System (ADS)

    Santamaría-Gómez, Alvaro; Gravelle, Médéric; Dangendorf, Sönke; Marcos, Marta; Spada, Giorgio; Wöppelmann, Guy

    2017-09-01

    Assessing the vertical land motion (VLM) at tide gauges (TG) is crucial to understanding global and regional mean sea-level changes (SLC) over the last century. However, estimating VLM with accuracy better than a few tenths of a millimeter per year is not a trivial undertaking, and many factors, including the reference frame uncertainty, must be considered. Using a novel reconstruction approach and updated geodetic VLM corrections, we found that the terrestrial reference frame and the estimated VLM uncertainty may contribute an error of ±0.2 mm yr-1 to the global SLC rate. In addition, a spurious global SLC acceleration of up to ±4.8 × 10-3 mm yr-2 may be introduced. Regional SLC rate and acceleration errors may be inflated by a factor of 3 compared to the global values. The difference in VLM from two independent Glacio-Isostatic Adjustment models introduces global SLC rate and acceleration biases at the level of ±0.1 mm yr-1 and 2.8 × 10-3 mm yr-2, increasing up to 0.5 mm yr-1 and 9 × 10-3 mm yr-2 for the regional SLC. Errors in VLM corrections need to be budgeted when considering past and future SLC scenarios.

  6. Error field assessment from driven rotation of stable external kinks at EXTRAP-T2R reversed field pinch

    NASA Astrophysics Data System (ADS)

    Volpe, F. A.; Frassinetti, L.; Brunsell, P. R.; Drake, J. R.; Olofsson, K. E. J.

    2013-04-01

    A new non-disruptive error field (EF) assessment technique not restricted to low density and thus low beta was demonstrated at the EXTRAP-T2R reversed field pinch. Stable and marginally stable external kink modes of toroidal mode number n = 10 and n = 8, respectively, were generated, and their rotation sustained, by means of rotating magnetic perturbations of the same n. Due to finite EFs, and in spite of the applied perturbations rotating uniformly and having constant amplitude, the kink modes were observed to rotate non-uniformly and be modulated in amplitude. This behaviour was used to precisely infer the amplitude and approximately estimate the toroidal phase of the EF. A subsequent scan permitted the toroidal phase to be optimized. The technique was tested against deliberately applied as well as intrinsic EFs of n = 8 and 10. Corrections equal and opposite to the estimated error fields were applied. The efficacy of the error compensation was indicated by the increased discharge duration and more uniform mode rotation in response to a uniformly rotating perturbation. The results are in good agreement with theory, and the extension to lower n, to tearing modes and to tokamaks, including ITER, is discussed.

  7. Radiofrequency Electromagnetic Radiation and Memory Performance: Sources of Uncertainty in Epidemiological Cohort Studies

    PubMed Central

    Zeleke, Berihun M.; Abramson, Michael J.; Benke, Geza

    2018-01-01

    Uncertainty in experimental studies of exposure to radiation from mobile phones has in the past only been framed within the context of statistical variability. It is now becoming more apparent to researchers that epistemic or reducible uncertainties can also affect the total error in results. These uncertainties are derived from a wide range of sources including human error, such as data transcription, model structure, measurement and linguistic errors in communication. The issue of epistemic uncertainty is reviewed and interpreted in the context of the MoRPhEUS, ExPOSURE and HERMES cohort studies which investigate the effect of radiofrequency electromagnetic radiation from mobile phones on memory performance. Research into this field has found inconsistent results due to limitations from a range of epistemic sources. Potential analytic approaches are suggested based on quantification of epistemic error using Monte Carlo simulation. It is recommended that future studies investigating the relationship between radiofrequency electromagnetic radiation and memory performance pay more attention to treatment of epistemic uncertainties as well as further research into improving exposure assessment. Use of directed acyclic graphs is also encouraged to display the assumed covariate relationship. PMID:29587425
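
One epistemic effect that such Monte Carlo quantification can expose is classical measurement error in the exposure, which attenuates an estimated exposure-outcome slope by roughly var(x) / (var(x) + var(u)). A minimal sketch with hypothetical numbers (this is a generic illustration, not the analysis of any of the cohorts named above):

```python
import random

random.seed(2)
N = 20000
TRUE_SLOPE = 1.0

x = [random.gauss(0, 1) for _ in range(N)]           # true exposure
w = [xi + random.gauss(0, 1) for xi in x]            # exposure measured with error
y = [TRUE_SLOPE * xi + random.gauss(0, 0.5) for xi in x]  # outcome

def ols_slope(a, b):
    """Ordinary least squares slope of b on a."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var = sum((ai - ma) ** 2 for ai in a)
    return cov / var

naive = ols_slope(w, y)  # attenuated towards ~0.5 here, not the true 1.0
```

Repeating such simulations over plausible error models gives a distribution of the bias, which is the kind of epistemic-uncertainty treatment recommended above.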

  8. Merging gauge and satellite rainfall with specification of associated uncertainty across Australia

    NASA Astrophysics Data System (ADS)

    Woldemeskel, Fitsum M.; Sivakumar, Bellie; Sharma, Ashish

    2013-08-01

    Accurate estimation of spatial rainfall is crucial for modelling hydrological systems and planning and management of water resources. While spatial rainfall can be estimated either using rain gauge-based measurements or using satellite-based measurements, such estimates are subject to uncertainties due to various sources of error in either case, including interpolation and retrieval errors. The purpose of the present study is twofold: (1) to investigate the benefit of merging rain gauge measurements and satellite rainfall data for Australian conditions and (2) to produce a database of retrospective rainfall along with a new uncertainty metric for each grid location at any timestep. The analysis involves four steps. First, a comparison of rain gauge measurements and the Tropical Rainfall Measuring Mission (TRMM) 3B42 data at the rain gauge locations is carried out. Second, gridded monthly rain gauge rainfall is determined using thin plate smoothing splines (TPSS) and the modified inverse distance weight (MIDW) method. Third, the gridded rain gauge rainfall is merged with the monthly accumulated TRMM 3B42 using a linearised weighting procedure, with the weights at each grid point calculated from the error variances of each dataset. Finally, cross validation (CV) errors at rain gauge locations and standard errors at gridded locations are estimated for each timestep. The CV error statistics indicate that merging the two datasets improves the estimation of spatial rainfall, and more so where the rain gauge network is sparse. The provision of spatio-temporal standard errors with the retrospective dataset is particularly useful for subsequent modelling applications, where knowledge of input error can help reduce the uncertainty associated with modelling outcomes.
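
The error-variance weighting step described above can be sketched as a standard inverse-variance merge; the rainfall values and variances below are hypothetical, and the study's actual procedure is more elaborate.

```python
def merge(gauge, sat, var_gauge, var_sat):
    """Inverse-variance weighted merge of two rainfall estimates.
    Returns the merged value and its (reduced) error variance."""
    w_gauge = var_sat / (var_gauge + var_sat)   # weight is inverse to own variance
    w_sat = var_gauge / (var_gauge + var_sat)
    merged = w_gauge * gauge + w_sat * sat
    merged_var = (var_gauge * var_sat) / (var_gauge + var_sat)
    return merged, merged_var

# Hypothetical monthly totals (mm) and error variances at one grid point.
value, var = merge(gauge=120.0, sat=100.0, var_gauge=25.0, var_sat=100.0)
```

The merged variance is always smaller than either input variance, which is why the merge helps most where the gauge network (and hence the gauge estimate) is weakest.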

  9. Unmodeled observation error induces bias when inferring patterns and dynamics of species occurrence via aural detections

    USGS Publications Warehouse

    McClintock, Brett T.; Bailey, Larissa L.; Pollock, Kenneth H.; Simons, Theodore R.

    2010-01-01

    The recent surge in the development and application of species occurrence models has been associated with an acknowledgment among ecologists that species are detected imperfectly due to observation error. Standard models now allow unbiased estimation of occupancy probability when false negative detections occur, but this is conditional on no false positive detections and sufficient incorporation of explanatory variables for the false negative detection process. These assumptions are likely reasonable in many circumstances, but there is mounting evidence that false positive errors and detection probability heterogeneity may be much more prevalent in studies relying on auditory cues for species detection (e.g., songbird or calling amphibian surveys). We used field survey data from a simulated calling anuran system of known occupancy state to investigate the biases induced by these errors in dynamic models of species occurrence. Despite the participation of expert observers in simplified field conditions, both false positive errors and site detection probability heterogeneity were extensive for most species in the survey. We found that even low levels of false positive errors, constituting as little as 1% of all detections, can cause severe overestimation of site occupancy, colonization, and local extinction probabilities. Further, unmodeled detection probability heterogeneity induced substantial underestimation of occupancy and overestimation of colonization and local extinction probabilities. Completely spurious relationships between species occurrence and explanatory variables were also found. Such misleading inferences would likely have deleterious implications for conservation and management programs. We contend that all forms of observation error, including false positive errors and heterogeneous detection probabilities, must be incorporated into the estimation framework to facilitate reliable inferences about occupancy and its associated vital rate parameters.
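
The severity of the false-positive bias can be seen analytically with a naive estimator that declares a site occupied after any detection across K surveys. Assuming independent surveys with detection probability p at occupied sites and false positive probability f at empty sites (the parameter values below are hypothetical, not the study's):

```python
def naive_occupancy(psi, p, f, K):
    """Expected value of a naive occupancy estimator: the probability a
    site yields at least one detection (true or false) over K surveys."""
    occupied_detected = 1 - (1 - p) ** K   # P(>=1 detection | occupied)
    empty_detected = 1 - (1 - f) ** K      # P(>=1 false positive | empty)
    return psi * occupied_detected + (1 - psi) * empty_detected

true_psi = 0.4
no_fp = naive_occupancy(true_psi, p=0.5, f=0.0, K=10)     # ~0.40 (slight underestimate)
with_fp = naive_occupancy(true_psi, p=0.5, f=0.01, K=10)  # inflated above 0.45
```

Even f = 0.01 per survey compounds over K = 10 visits to inflate the estimate well above the true 0.4, mirroring the overestimation reported above.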

  10. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that the use of samples no larger than ten is not uncommon in biomedical research, and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of the studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is provided by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes, and the method of their comparison, should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
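
The kind of simulation described above can be sketched for one cell of the design: the empirical Type I error of a pooled two-sample t test at the recommended sample size of 9. Normally distributed data and the critical value t(0.975, 16) ≈ 2.120 are assumptions of this sketch, not details taken from the paper.

```python
import math
import random

def t_statistic(x, y):
    """Pooled-variance two-sample t statistic."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    sx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    sy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp = math.sqrt(((nx - 1) * sx + (ny - 1) * sy) / (nx + ny - 2))
    return (mx - my) / (sp * math.sqrt(1 / nx + 1 / ny))

random.seed(0)
T_CRIT = 2.120   # two-sided 5% critical value, 16 degrees of freedom
trials = 2000

# Both samples drawn from the same population: every rejection is a Type I error.
rejections = sum(
    abs(t_statistic([random.gauss(0, 1) for _ in range(9)],
                    [random.gauss(0, 1) for _ in range(9)])) > T_CRIT
    for _ in range(trials))
type_i = rejections / trials   # should come out near the nominal 0.05
```

Repeating this with effect sizes added to one group would give the Type II error, completing the error pair the study analyses.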

  11. Influence of uncorrected refractive error and unmet refractive error on visual impairment in a Brazilian population.

    PubMed

    Ferraz, Fabio H; Corrente, José E; Opromolla, Paula; Schellini, Silvana A

    2014-06-25

    The World Health Organization (WHO) definitions of blindness and visual impairment are widely based on best-corrected visual acuity, excluding uncorrected refractive error (URE) as a cause of visual impairment. Recently, URE was included as a cause of visual impairment, emphasizing that the burden of visual impairment due to refractive error (RE) worldwide is substantially higher. The purpose of the present study is to determine the reversal of visual impairment and blindness in the population after correcting RE, and possible associations between RE and individual characteristics. A cross-sectional study was conducted in nine counties of the western region of the state of São Paulo, using systematic and random sampling of households between March 2004 and July 2005. Individuals aged over 1 year were included and evaluated for demographic data, eye complaints, history, and an eye exam, including non-corrected visual acuity (NCVA), best-corrected visual acuity (BCVA), and automatic and manual refractive examination. URE was defined as NCVA > 0.15 logMAR with BCVA ≤ 0.15 logMAR after refractive correction; unmet refractive error (UREN) was defined as visual impairment or blindness (NCVA > 0.5 logMAR) with BCVA ≤ 0.5 logMAR after optical correction. A total of 70.2% of subjects had normal NCVA. URE was detected in 13.8%. Prevalences of 4.6% for optically reversible low vision and 1.8% for blindness reversible by optical correction were found. UREN was detected in 6.5% of individuals, more frequently in women over the age of 50 and in carriers of higher RE. Visual impairment related to eye diseases is not reversible with spectacles. Using multivariate analysis, associations between URE and UREN with regard to sex, age and RE were observed. RE is an important cause of reversible blindness and low vision in the Brazilian population.

  12. Influence of uncorrected refractive error and unmet refractive error on visual impairment in a Brazilian population

    PubMed Central

    2014-01-01

    Background The World Health Organization (WHO) definitions of blindness and visual impairment are widely based on best-corrected visual acuity, excluding uncorrected refractive error (URE) as a cause of visual impairment. Recently, URE was included as a cause of visual impairment, emphasizing that the burden of visual impairment due to refractive error (RE) worldwide is substantially higher. The purpose of the present study is to determine the reversal of visual impairment and blindness in the population after correcting RE, and possible associations between RE and individual characteristics. Methods A cross-sectional study was conducted in nine counties of the western region of the state of São Paulo, using systematic and random sampling of households between March 2004 and July 2005. Individuals aged over 1 year were included and evaluated for demographic data, eye complaints, history, and an eye exam, including non-corrected visual acuity (NCVA), best-corrected visual acuity (BCVA), and automatic and manual refractive examination. URE was defined as NCVA > 0.15 logMAR with BCVA ≤ 0.15 logMAR after refractive correction; unmet refractive error (UREN) was defined as visual impairment or blindness (NCVA > 0.5 logMAR) with BCVA ≤ 0.5 logMAR after optical correction. Results A total of 70.2% of subjects had normal NCVA. URE was detected in 13.8%. Prevalences of 4.6% for optically reversible low vision and 1.8% for blindness reversible by optical correction were found. UREN was detected in 6.5% of individuals, more frequently in women over the age of 50 and in carriers of higher RE. Visual impairment related to eye diseases is not reversible with spectacles. Using multivariate analysis, associations between URE and UREN with regard to sex, age and RE were observed. Conclusion RE is an important cause of reversible blindness and low vision in the Brazilian population. PMID:24965318

  13. The intention to disclose medical errors among doctors in a referral hospital in North Malaysia.

    PubMed

    Hs, Arvinder-Singh; Rashid, Abdul

    2017-01-23

    In this study, medical errors are defined as unintentional patient harm caused by a doctor's mistake. Due to limited research, this topic is poorly understood in Malaysia. The objective of this study was to determine the proportion of doctors intending to disclose medical errors, and their attitudes/perceptions pertaining to medical errors. This cross-sectional study was conducted at a tertiary public hospital from July to December 2015 among 276 randomly selected doctors. Data were collected using a standardized and validated self-administered questionnaire measuring disclosure and attitudes/perceptions. The scale had four vignettes in total: two medical and two surgical. Each vignette consisted of five questions, and each question measured disclosure, categorised as "No Disclosure", "Partial Disclosure" or "Full Disclosure". Data were keyed in and analysed using STATA v13.0. Only 10.1% (n = 28) intended to disclose medical errors. Most respondents felt that they possessed an attitude/perception of adequately disclosing errors to patients; there was a statistically significant difference (p < 0.001) between the intention to disclose and perceived disclosure. Most respondents agreed that disclosing an error would make them less likely to be sued, that minor errors should be reported, and that they experienced relief from disclosing errors. Most doctors in this study would not disclose medical errors, although they perceived that the errors were serious and felt responsible for them. Poor disclosure could be due to the fear of litigation and the lack of proper mechanisms/procedures for disclosure.

  14. Trial-to-trial adaptation in control of arm reaching and standing posture

    PubMed Central

    Pienciak-Siewert, Alison; Horan, Dylan P.

    2016-01-01

    Classical theories of motor learning hypothesize that adaptation is driven by sensorimotor error; this is supported by studies of arm and eye movements that have shown that trial-to-trial adaptation increases with error. Studies of postural control have shown that anticipatory postural adjustments increase with the magnitude of a perturbation. However, differences in adaptation have been observed between the two modalities, possibly due to either the inherent instability or sensory uncertainty in standing posture. Therefore, we hypothesized that trial-to-trial adaptation in posture should be driven by error, similar to what is observed in arm reaching, but the nature of the relationship between error and adaptation may differ. Here we investigated trial-to-trial adaptation of arm reaching and postural control concurrently; subjects made reaching movements in a novel dynamic environment of varying strengths, while standing and holding the handle of a force-generating robotic arm. We found that error and adaptation increased with perturbation strength in both arm and posture. Furthermore, in both modalities, adaptation showed a significant correlation with error magnitude. Our results indicate that adaptation scales proportionally with error in the arm and near proportionally in posture. In posture only, adaptation was not sensitive to small error sizes, which were similar in size to errors experienced in unperturbed baseline movements due to inherent variability. This finding may be explained as an effect of uncertainty about the source of small errors. Our findings suggest that in rehabilitation, postural error size should be considered relative to the magnitude of inherent movement variability. PMID:27683888

  15. Trial-to-trial adaptation in control of arm reaching and standing posture.

    PubMed

    Pienciak-Siewert, Alison; Horan, Dylan P; Ahmed, Alaa A

    2016-12-01

    Classical theories of motor learning hypothesize that adaptation is driven by sensorimotor error; this is supported by studies of arm and eye movements that have shown that trial-to-trial adaptation increases with error. Studies of postural control have shown that anticipatory postural adjustments increase with the magnitude of a perturbation. However, differences in adaptation have been observed between the two modalities, possibly due to either the inherent instability or sensory uncertainty in standing posture. Therefore, we hypothesized that trial-to-trial adaptation in posture should be driven by error, similar to what is observed in arm reaching, but the nature of the relationship between error and adaptation may differ. Here we investigated trial-to-trial adaptation of arm reaching and postural control concurrently; subjects made reaching movements in a novel dynamic environment of varying strengths, while standing and holding the handle of a force-generating robotic arm. We found that error and adaptation increased with perturbation strength in both arm and posture. Furthermore, in both modalities, adaptation showed a significant correlation with error magnitude. Our results indicate that adaptation scales proportionally with error in the arm and near proportionally in posture. In posture only, adaptation was not sensitive to small error sizes, which were similar in size to errors experienced in unperturbed baseline movements due to inherent variability. This finding may be explained as an effect of uncertainty about the source of small errors. Our findings suggest that in rehabilitation, postural error size should be considered relative to the magnitude of inherent movement variability. Copyright © 2016 the American Physiological Society.

  16. HRA Aerospace Challenges

    NASA Technical Reports Server (NTRS)

    DeMott, Diana

    2013-01-01

    Compared to equipment designed to perform the same function over and over, humans are just not as reliable. Computers and machines perform the same action in the same way repeatedly, getting the same result, unless equipment fails or a human interferes. Humans who are supposed to perform the same actions repeatedly often perform them incorrectly due to a variety of issues including: stress, fatigue, illness, lack of training, distraction, acting at the wrong time, not acting when they should, not following procedures, misinterpreting information or inattention to detail. Why not use robots and automatic controls exclusively if human error is so common? In an emergency or off-normal situation that the computer, robotic element, or automatic control system is not designed to respond to, the result is failure unless a human can intervene. The human in the loop may be more likely to cause an error, but is also more likely to catch the error and correct it. When it comes to unexpected situations, or performing multiple tasks outside the defined mission parameters, humans are the only viable alternative. Human Reliability Assessment (HRA) identifies ways to improve human performance and reliability and can lead to improvements in systems designed to interact with humans. Understanding the context of the situations that can lead to human errors, which include taking the wrong action, taking no action, or making bad decisions, provides additional information to mitigate risks. With improved human reliability comes reduced risk for the overall operation or project.

  17. Efficiency degradation due to tracking errors for point focusing solar collectors

    NASA Technical Reports Server (NTRS)

    Hughes, R. O.

    1978-01-01

    An important parameter in the design of point focusing solar collectors is the intercept factor, which is a measure of efficiency and of the energy available for use in the receiver. Using statistical methods, an expression for the expected value of the intercept factor is derived for various configurations and control law implementations. The analysis assumes that a radially symmetric flux distribution (not necessarily Gaussian) is generated at the focal plane due to the sun's finite image and various reflector errors. The time-varying tracking errors are assumed to be uniformly distributed within the threshold limits, which allows the expected value to be calculated.
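
Under the stated assumptions, a radially symmetric focal-plane flux and tracking errors uniformly distributed within the threshold limits, the expected intercept factor can be estimated by Monte Carlo. The Gaussian flux and all parameter values below are illustrative assumptions (the paper explicitly does not require a Gaussian distribution).

```python
import math
import random

random.seed(1)
SIGMA = 1.0      # flux spread at the focal plane (illustrative units)
R = 2.0          # receiver aperture radius (same units)
ERR_MAX = 1.0    # tracking-error threshold (same units)

def intercept(offset, n=20000):
    """Fraction of flux landing inside the receiver disc when the flux
    centroid is displaced by `offset` (Monte Carlo over photon landings)."""
    hits = sum(
        (random.gauss(0, SIGMA) + offset) ** 2 + random.gauss(0, SIGMA) ** 2 <= R * R
        for _ in range(n))
    return hits / n

phi_centred = intercept(0.0)   # ~1 - exp(-R^2 / (2 sigma^2)) ≈ 0.865 here

# Expectation over tracking errors drawn uniformly within the threshold.
expected_phi = sum(intercept(random.uniform(-ERR_MAX, ERR_MAX))
                   for _ in range(50)) / 50
```

The expected intercept factor is strictly below the perfectly-pointed value, quantifying the efficiency degradation due to tracking errors that the paper derives analytically.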

  18. How Angular Velocity Features and Different Gyroscope Noise Types Interact and Determine Orientation Estimation Accuracy.

    PubMed

    Pasciuto, Ilaria; Ligorio, Gabriele; Bergamini, Elena; Vannozzi, Giuseppe; Sabatini, Angelo Maria; Cappozzo, Aurelio

    2015-09-18

    In human movement analysis, 3D body segment orientation can be obtained through the numerical integration of gyroscope signals. These signals, however, are affected by errors that, for the case of micro-electro-mechanical systems, are mainly due to: constant bias, scale factor, white noise, and bias instability. The aim of this study is to assess how the orientation estimation accuracy is affected by each of these disturbances, and whether it is influenced by the angular velocity magnitude and 3D distribution across the gyroscope axes. Reference angular velocity signals, either constant or representative of human walking, were corrupted with each of the four noise types within a simulation framework. The magnitude of the angular velocity affected the error in the orientation estimation due to each noise type, except for the white noise. Additionally, the error caused by the constant bias was also influenced by the angular velocity 3D distribution. As the orientation error depends not only on the noise itself but also on the signal it is applied to, different sensor placements could enhance or mitigate the error due to each disturbance, and special attention must be paid in providing and interpreting measures of accuracy for orientation estimation algorithms.
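
A one-dimensional sketch of the simplest disturbance, a constant bias, shows why it corrupts the integrated orientation: the error grows linearly with time regardless of the motion. Note that in 1-D the bias error is independent of the angular velocity; the dependence on magnitude and 3D distribution reported above arises in full 3D attitude integration. All values below are hypothetical.

```python
DT = 0.01     # sample interval, s
BIAS = 0.002  # constant gyroscope bias, rad/s

def integrate(omega_samples, bias=0.0, dt=DT):
    """Numerically integrate angular velocity samples (rectangular rule),
    optionally corrupted by a constant additive bias."""
    angle = 0.0
    for w in omega_samples:
        angle += (w + bias) * dt
    return angle

true_omega = [0.5] * 1000              # 10 s of constant rotation at 0.5 rad/s
clean = integrate(true_omega)          # 5.0 rad
biased = integrate(true_omega, bias=BIAS)
error = biased - clean                 # BIAS * 10 s = 0.02 rad, whatever the motion
```

White noise, scale factor, and bias instability each leave a different signature on this integral, which is why the study examines them separately.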

  19. How Angular Velocity Features and Different Gyroscope Noise Types Interact and Determine Orientation Estimation Accuracy

    PubMed Central

    Pasciuto, Ilaria; Ligorio, Gabriele; Bergamini, Elena; Vannozzi, Giuseppe; Sabatini, Angelo Maria; Cappozzo, Aurelio

    2015-01-01

    In human movement analysis, 3D body segment orientation can be obtained through the numerical integration of gyroscope signals. These signals, however, are affected by errors that, for the case of micro-electro-mechanical systems, are mainly due to: constant bias, scale factor, white noise, and bias instability. The aim of this study is to assess how the orientation estimation accuracy is affected by each of these disturbances, and whether it is influenced by the angular velocity magnitude and 3D distribution across the gyroscope axes. Reference angular velocity signals, either constant or representative of human walking, were corrupted with each of the four noise types within a simulation framework. The magnitude of the angular velocity affected the error in the orientation estimation due to each noise type, except for the white noise. Additionally, the error caused by the constant bias was also influenced by the angular velocity 3D distribution. As the orientation error depends not only on the noise itself but also on the signal it is applied to, different sensor placements could enhance or mitigate the error due to each disturbance, and special attention must be paid in providing and interpreting measures of accuracy for orientation estimation algorithms. PMID:26393606

  20. Jason-2 systematic error analysis in the GPS derived orbits

    NASA Astrophysics Data System (ADS)

    Melachroinos, S.; Lemoine, F. G.; Zelensky, N. P.; Rowlands, D. D.; Luthcke, S. B.; Chinn, D. S.

    2011-12-01

    Several results related to global or regional sea level changes still too often rely on the assumption that orbit errors coming from the adoption of station coordinates can be neglected in the total error budget (Cerri et al. 2010). In particular, instantaneous crust-fixed coordinates are obtained by adding the geophysical high-frequency variations to the linear ITRF model. In principle, geocenter motion should also be included in this computation, in order to reference these coordinates to the center of mass of the whole Earth. This correction is currently not applied when computing GDR orbits. Cerri et al. (2010) performed an analysis of systematic errors common to all coordinates along the North/South direction, as this type of bias, also known as a Z-shift, has a clear impact on MSL estimates due to the unequal distribution of continental surface in the northern and southern hemispheres. The goal of this paper is to specifically study the main source of error, which comes from the current imprecision in the Z-axis realization of the frame. We focus here on the time variability of this Z-shift, which we can decompose into a drift and a periodic component due to the presumably omitted geocenter motion. A series of Jason-2 GPS-only orbits has been computed at NASA GSFC, using both IGS05 and IGS08. These orbits have been shown to agree radially at less than 1 cm RMS with our SLR/DORIS std0905 and std1007 reduced-dynamic orbits and with orbits produced by other analysis centers (Melachroinos et al. 2011). Our GPS-only Jason-2 orbit accuracy is assessed using a number of tests, including analysis of independent SLR and altimeter crossover residuals, orbit overlap differences, and direct comparison to orbits generated at GSFC using SLR and DORIS tracking and to orbits generated externally at other centers. Tests based on SLR-crossover residuals provide the best performance indicator for independent validation of the NASA/GSFC GPS-only reduced-dynamic orbits. Reduced-dynamic versus dynamic orbit differences are used to characterize the remaining force model error and TRF instability. We first quantify the effect of a North/South displacement of the tracking reference points for each of the three techniques. We then compare these results to the studies of Morel and Willis (2005) and Cerri et al. (2010), and extend the analysis to the most recent Jason-2 cycles. Finally, we evaluate the GPS orbits against the SLR and DORIS orbits produced using GEODYN.

  1. Error baseline rates of five sample preparation methods used to characterize RNA virus populations.

    PubMed

    Kugelman, Jeffrey R; Wiley, Michael R; Nagle, Elyse R; Reyes, Daniel; Pfeffer, Brad P; Kuhn, Jens H; Sanchez-Lockhart, Mariano; Palacios, Gustavo F

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10⁻⁵) of all compared methods.

  2. A pharmacometric case study regarding the sensitivity of structural model parameter estimation to error in patient reported dosing times.

    PubMed

    Knights, Jonathan; Rohatagi, Shashank

    2015-12-01

    Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound, and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV % ranging from ~20 to 60 %, parameter estimation inaccuracy derived from error in reported dosing times was largely contained to around 10 % on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling for the variability and parameter estimates of the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times.
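
    A minimal simulation in the spirit of this case study is sketched below. All parameters (dose, ka, ke, volume, schedule, observation times, and the 0.5 h reporting-error SD) are hypothetical, and the model is a generic one-compartment first-order-absorption model rather than the authors' exact setup; the point is only to show how perturbed dose times propagate into predicted concentrations when the dosing interval (12 h) is shorter than the terminal half-life (ln 2 / ke ≈ 13.9 h).

```python
import numpy as np

def conc_one_cpt(t_obs, dose_times, dose=100.0, ka=1.5, ke=0.05, V=50.0):
    """Superposition of one-compartment, first-order-absorption doses (monoexponential elimination)."""
    c = np.zeros_like(t_obs)
    for td in dose_times:
        dt = t_obs - td
        m = dt > 0  # only doses given before the observation contribute
        c[m] += (dose * ka / (V * (ka - ke))) * (np.exp(-ke * dt[m]) - np.exp(-ka * dt[m]))
    return c

rng = np.random.default_rng(2)
tau = 12.0                                   # dosing interval (h), shorter than t1/2
true_times = np.arange(0.0, 7 * 24.0, tau)   # one week of q12h dosing
reported = true_times + rng.normal(0.0, 0.5, true_times.size)  # 0.5 h SD dosing-time error

t_obs = np.array([148.0, 152.0, 158.0])      # sparse sampling near steady state
c_true = conc_one_cpt(t_obs, true_times)
c_reported = conc_one_cpt(t_obs, reported)
rel_err_pct = np.abs(c_reported - c_true) / c_true * 100.0
print(rel_err_pct)   # modest percent errors in the predicted concentrations
```

    In an actual analysis these perturbed concentration profiles would feed a population PK estimation step; here the concentration error alone illustrates why the resulting parameter bias stayed bounded in the authors' simulations.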

  3. Subnanosecond GPS-based clock synchronization and precision deep-space tracking

    NASA Technical Reports Server (NTRS)

    Dunn, C. E.; Lichten, S. M.; Jefferson, D. C.; Border, J. S.

    1992-01-01

    Interferometric spacecraft tracking is accomplished by the Deep Space Network (DSN) by comparing the arrival time of electromagnetic spacecraft signals at ground antennas separated by baselines on the order of 8000 km. Clock synchronization errors within and between DSN stations directly impact the attainable tracking accuracy, with a 0.3-nsec error in clock synchronization resulting in an 11-nrad angular position error. This level of synchronization is currently achieved by observing a quasar which is angularly close to the spacecraft just after the spacecraft observations. By determining the differential arrival times of the random quasar signal at the stations, clock offsets and propagation delays within the atmosphere and within the DSN stations are calibrated. Recent developments in time transfer techniques may allow medium accuracy (50-100 nrad) spacecraft tracking without near-simultaneous quasar-based calibrations. Solutions are presented for a worldwide network of Global Positioning System (GPS) receivers in which the formal errors for DSN clock offset parameters are less than 0.5 nsec. Comparisons of clock rate offsets derived from GPS measurements and from very long baseline interferometry (VLBI), as well as the examination of clock closure, suggest that these formal errors are a realistic measure of GPS-based clock offset precision and accuracy. Incorporating GPS-based clock synchronization measurements into a spacecraft differential ranging system would allow tracking without near-simultaneous quasar observations. The impact on individual spacecraft navigation-error sources due to elimination of quasar-based calibrations is presented. System implementation, including calibration of station electronic delays, is discussed.
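
    The quoted sensitivity is easy to verify with small-angle geometry: a differential timing error Δt between stations separated by a baseline B corresponds to an angular error of roughly c·Δt/B.

```python
c = 299_792_458.0      # speed of light, m/s
baseline = 8_000e3     # DSN baseline length, m
clock_err = 0.3e-9     # clock synchronization error, s

# Small-angle approximation: a timing error maps to an angle of c * dt / B
angle_err_nrad = c * clock_err / baseline * 1e9
print(round(angle_err_nrad, 1))  # ≈ 11.2 nrad, matching the quoted 11 nrad
```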

  4. Sub-nanosecond clock synchronization and precision deep space tracking

    NASA Technical Reports Server (NTRS)

    Dunn, Charles; Lichten, Stephen; Jefferson, David; Border, James S.

    1992-01-01

    Interferometric spacecraft tracking is accomplished at the NASA Deep Space Network (DSN) by comparing the arrival time of electromagnetic spacecraft signals to ground antennas separated by baselines on the order of 8000 km. Clock synchronization errors within and between DSN stations directly impact the attainable tracking accuracy, with a 0.3 ns error in clock synchronization resulting in an 11 nrad angular position error. This level of synchronization is currently achieved by observing a quasar which is angularly close to the spacecraft just after the spacecraft observations. By determining the differential arrival times of the random quasar signal at the stations, clock synchronization and propagation delays within the atmosphere and within the DSN stations are calibrated. Recent developments in time transfer techniques may allow medium accuracy (50-100 nrad) spacecraft observations without near-simultaneous quasar-based calibrations. Solutions are presented for a global network of GPS receivers in which the formal errors in clock offset parameters are less than 0.5 ns. Comparisons of clock rate offsets derived from GPS measurements and from very long baseline interferometry and the examination of clock closure suggest that these formal errors are a realistic measure of GPS-based clock offset precision and accuracy. Incorporating GPS-based clock synchronization measurements into a spacecraft differential ranging system would allow tracking without near-simultaneous quasar observations. The impact on individual spacecraft navigation error sources due to elimination of quasar-based calibrations is presented. System implementation, including calibration of station electronic delays, is discussed.

  5. Error baseline rates of five sample preparation methods used to characterize RNA virus populations

    PubMed Central

    Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.

    2017-01-01

    Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10⁻⁵) of all compared methods. PMID:28182717

  6. The effect of the dynamic wet troposphere on VLBI measurements

    NASA Technical Reports Server (NTRS)

    Treuhaft, R. N.; Lanyi, G. E.

    1986-01-01

    Calculations using a statistical model of water vapor fluctuations yield the effect of the dynamic wet troposphere on Very Long Baseline Interferometry (VLBI) measurements. The statistical model arises from two primary assumptions: (1) the spatial structure of refractivity fluctuations can be closely approximated by elementary (Kolmogorov) turbulence theory, and (2) temporal fluctuations are caused by spatial patterns which are moved over a site by the wind. The consequences of these assumptions are outlined for the VLBI delay and delay rate observables. For example, wet troposphere induced rms delays for Deep Space Network (DSN) VLBI at 20-deg elevation are about 3 cm of delay per observation, which is smaller, on average, than other known error sources in the current DSN VLBI data set. At 20-deg elevation for 200-s time intervals, water vapor induces approximately 1.5 × 10⁻¹³ s/s in the Allan standard deviation of interferometric delay, which is a measure of the delay rate observable error. In contrast to the delay error, the delay rate measurement error is dominated by water vapor fluctuations. Water vapor induced VLBI parameter errors and correlations are calculated. For the DSN, baseline length parameter errors due to water vapor fluctuations are in the range of 3 to 5 cm. The above physical assumptions also lead to a method for including the water vapor fluctuations in the parameter estimation procedure, which is used to extract baseline and source information from the VLBI observables.

  7. Disambiguating ventral striatum fMRI-related bold signal during reward prediction in schizophrenia

    PubMed Central

    Morris, R W; Vercammen, A; Lenroot, R; Moore, L; Langton, J M; Short, B; Kulkarni, J; Curtis, J; O'Donnell, M; Weickert, C S; Weickert, T W

    2012-01-01

    Reward detection, surprise detection and prediction-error signaling have all been proposed as roles for the ventral striatum (vStr). Previous neuroimaging studies of striatal function in schizophrenia have found attenuated neural responses to reward-related prediction errors; however, as prediction errors represent a discrepancy in mesolimbic neural activity between expected and actual events, it is critical to examine responses to both expected and unexpected rewards (URs) in conjunction with expected and unexpected reward omissions in order to clarify the nature of ventral striatal dysfunction in schizophrenia. In the present study, healthy adults and people with schizophrenia were tested with a reward-related prediction-error task during functional magnetic resonance imaging to determine whether schizophrenia is associated with altered neural responses in the vStr to rewards, surprise, prediction errors, or all three factors. In healthy adults, we found neural responses in the vStr were correlated more specifically with prediction errors than with surprising events or reward stimuli alone. People with schizophrenia did not display the normal differential activation between expected and unexpected rewards, which was partially due to exaggerated ventral striatal responses to expected rewards (right vStr) but also included blunted responses to unexpected outcomes (left vStr). This finding shows that neural responses, which typically are elicited by surprise, can also occur to well-predicted events in schizophrenia and identifies aberrant activity in the vStr as a key node of dysfunction in the neural circuitry used to differentiate expected and unexpected feedback in schizophrenia. PMID:21709684

  8. Prevalence of Refractive Errors Among School Children in Gondar Town, Northwest Ethiopia

    PubMed Central

    Yared, Assefa Wolde; Belaynew, Wasie Taye; Destaye, Shiferaw; Ayanaw, Tsegaw; Zelalem, Eshete

    2012-01-01

    Purpose: Many children with poor vision due to refractive error remain undiagnosed and perform poorly in school. The situation is worse in Sub-Saharan Africa, including Ethiopia, and current information is lacking. The objective of this study is to determine the prevalence of refractive error among children enrolled in elementary schools in Gondar town, Ethiopia. Materials and Methods: This was a cross-sectional study of 1852 students in 8 elementary schools. Subjects were selected by multistage random sampling. The study parameters were visual acuity (VA) evaluation and ocular examination. VA was measured by staff optometrists with the Snellen E-chart, while students with subnormal vision were examined using pinhole, retinoscopy evaluation and subjective refraction by ophthalmologists. Results: The study cohort comprised 45.8% males and 54.2% females from 8 randomly selected elementary schools, with a response rate of 93%. Refractive errors in either eye were present in 174 (9.4%) children. Of these, myopia was diagnosed in 55 (31.6%) children in the right and left eyes, followed by hyperopia in 46 (26.4%) and 39 (22.4%) in the right and left eyes respectively. Low myopia was the most common refractive error, found in 61 (49.2%) and 68 (50%) children for the right and left eyes respectively. Conclusions: Refractive error among children is a common problem in Gondar town and needs to be assessed at every health evaluation of school children for timely treatment. PMID:23248538

  9. Error reduction study employing a pseudo-random binary sequence for use in acoustic pyrometry of gases

    NASA Astrophysics Data System (ADS)

    Ewan, B. C. R.; Ireland, S. N.

    2000-12-01

    Acoustic pyrometry uses the temperature dependence of sound speed in materials to measure temperature. This is normally achieved by measuring the transit time for a sound signal over a known path length and applying the material relation between temperature and velocity to extract an "average" temperature. Sources of error associated with the measurement of mean transit time are discussed in implementing the technique in gases, one of the principal causes being background noise in typical industrial environments. A number of transmitted signal and processing strategies which can be used in the area are examined and the expected error in mean transit time associated with each technique is quantified. Transmitted signals included pulses, pure frequencies, chirps, and pseudorandom binary sequences (PRBS), while processing involved edge detection and correlation. Errors arise through the misinterpretation of the positions of edge arrival or correlation peaks due to instantaneous deviations associated with background noise, and these become more severe as signal to noise amplitude ratios decrease. Population errors in the mean transit time are estimated for the different measurement strategies and it is concluded that PRBS combined with correlation can provide the lowest errors when operating in high noise environments. The operation of an instrument based on PRBS transmitted signals is described and test results under controlled noise conditions are presented. These confirm the value of the strategy and demonstrate that measurements can be made with signal to noise amplitude ratios down to 0.5.
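
    A minimal sketch of the PRBS-plus-correlation strategy (sequence length, sample rate, delay, and noise level are arbitrary choices for illustration, not the instrument's parameters): a ±1 pseudo-random sequence is buried in Gaussian noise at a signal-to-noise amplitude ratio of 0.5, and cross-correlation still recovers the transit time.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 10_000                                    # sample rate, Hz
prbs = rng.integers(0, 2, 2047) * 2.0 - 1.0    # ±1 pseudo-random binary sequence

true_delay = 137                               # transit time, in samples
received = np.zeros(prbs.size + 500)
received[true_delay:true_delay + prbs.size] += prbs
received += rng.normal(0.0, 2.0, received.size)  # S/N amplitude ratio of 0.5

# Cross-correlate: the peak location estimates the transit time even in heavy noise,
# because correlation gain grows with sequence length while noise averages out.
corr = np.correlate(received, prbs, mode="valid")
est_delay = int(np.argmax(corr))
print(est_delay / fs)                          # estimated transit time, s
```

    The same processing gain is what lets the instrument described above operate at amplitude ratios down to 0.5, where a single pulse edge would be undetectable.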

  10. Computational investigations and grid refinement study of 3D transient flow in a cylindrical tank using OpenFOAM

    NASA Astrophysics Data System (ADS)

    Mohd Sakri, F.; Mat Ali, M. S.; Sheikh Salim, S. A. Z.

    2016-10-01

    The fluid physics of liquid draining inside a tank is readily studied using numerical simulation. However, numerical simulation is expensive when the draining involves a multi-phase problem. Since an accurate numerical simulation can be obtained only if a proper method of error estimation is applied, this paper provides a systematic assessment of the error due to grid convergence using OpenFOAM. OpenFOAM is an open-source CFD toolbox that is well known among researchers and institutions because it is free and ready to use. In this study, three grid resolutions are used: coarse, medium and fine. The Grid Convergence Index (GCI) is applied to estimate the error due to grid sensitivity. A monotonic convergence condition is obtained, showing that the grid convergence error is progressively reduced. The fine grid has a GCI value below 1%. The value extrapolated by Richardson extrapolation lies within the range of the GCI obtained.
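
    For readers unfamiliar with the procedure, the GCI calculation referred to above (Roache's formulation with safety factor Fs) can be sketched as follows; the three drain-time values are hypothetical, chosen only to produce a monotonically converging triplet like the one the study reports.

```python
import math

def gci_study(f_coarse, f_medium, f_fine, r=2.0, Fs=1.25):
    """Grid Convergence Index (Roache) for three systematically refined grids (refinement ratio r)."""
    # Observed order of convergence from the three solutions
    p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
    # Relative difference between the two finest grids
    eps_fine = abs((f_medium - f_fine) / f_fine)
    gci_fine = Fs * eps_fine / (r**p - 1.0) * 100.0      # percent
    # Richardson extrapolation to zero grid spacing
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    return p, gci_fine, f_exact

# Hypothetical drain times (s) on coarse/medium/fine grids, converging monotonically
p, gci, f_ext = gci_study(98.0, 99.0, 99.5)
print(round(p, 3), round(gci, 3), round(f_ext, 3))   # order 1.0, GCI ≈ 0.628 %, extrapolated 100.0
```

    A fine-grid GCI below 1%, as in the abstract, indicates the solution is close to the grid-independent (Richardson-extrapolated) value.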

  11. On the accuracy and precision of numerical waveforms: effect of waveform extraction methodology

    NASA Astrophysics Data System (ADS)

    Chu, Tony; Fong, Heather; Kumar, Prayush; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela

    2016-08-01

    We present a new set of 95 numerical relativity simulations of non-precessing binary black holes (BBHs). The simulations comprehensively sample black-hole spins up to a spin magnitude of 0.9 and cover mass ratios 1-3. The simulations cover on average 24 inspiral orbits, plus merger and ringdown, with low initial orbital eccentricities e < 10⁻⁴. A subset of the simulations extends the coverage of non-spinning BBHs up to mass ratio q = 10. Gravitational waveforms at asymptotic infinity are computed with two independent techniques: extrapolation and Cauchy characteristic extraction. An error analysis based on noise-weighted inner products is performed. We find that numerical truncation error, error due to gravitational wave extraction, and error due to the Fourier transformation of the finite-length numerical waveforms are of similar magnitude, with gravitational wave extraction errors dominating at noise-weighted mismatches of ~3 × 10⁻⁴. This set of waveforms will serve to validate and improve aligned-spin waveform models for gravitational wave science.

  12. Multi-muscle FES force control of the human arm for arbitrary goals.

    PubMed

    Schearer, Eric M; Liao, Yu-Wei; Perreault, Eric J; Tresch, Matthew C; Memberg, William D; Kirsch, Robert F; Lynch, Kevin M

    2014-05-01

    We present a method for controlling a neuroprosthesis for a paralyzed human arm using functional electrical stimulation (FES) and characterize the errors of the controller. The subject has surgically implanted electrodes for stimulating muscles in her shoulder and arm. Using input/output data, a model mapping muscle stimulations to isometric endpoint forces measured at the subject's hand was identified. We inverted the model of this redundant and coupled multiple-input multiple-output system by minimizing muscle activations and used this inverse for feedforward control. The magnitude of the total root mean square error over a grid in the volume of achievable isometric endpoint force targets was 11% of the total range of achievable forces. Major sources of error were random error due to trial-to-trial variability and model bias due to nonstationary system properties. Because the muscles working collectively are the actuators of the skeletal system, the quantification of errors in force control guides designs of motion controllers for multi-joint, multi-muscle FES systems that can achieve arbitrary goals.

  13. Robust quantum logic in neutral atoms via adiabatic Rydberg dressing

    DOE PAGES

    Keating, Tyler; Cook, Robert L.; Hankin, Aaron M.; ...

    2015-01-28

    We study a scheme for implementing a controlled-Z (CZ) gate between two neutral-atom qubits based on the Rydberg blockade mechanism in a manner that is robust to errors caused by atomic motion. By employing adiabatic dressing of the ground electronic state, we can protect the gate from decoherence due to random phase errors that typically arise because of atomic thermal motion. In addition, the adiabatic protocol allows for a Doppler-free configuration that involves counterpropagating lasers in a σ+/σ− orthogonal polarization geometry that further reduces motional errors due to Doppler shifts. The residual motional error is dominated by dipole-dipole forces acting on doubly-excited Rydberg atoms when the blockade is imperfect. As a result, for reasonable parameters, with qubits encoded into the clock states of 133Cs, we predict that our protocol could produce a CZ gate in < 10 μs with error probability on the order of 10⁻³.

  14. Error Modelling for Multi-Sensor Measurements in Infrastructure-Free Indoor Navigation

    PubMed Central

    Ruotsalainen, Laura; Kirkko-Jaakkola, Martti; Rantanen, Jesperi; Mäkelä, Maija

    2018-01-01

    The long-term objective of our research is to develop a method for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness. Localization will be realized by propagating motion measurements obtained using a monocular camera, a foot-mounted Inertial Measurement Unit (IMU), sonar, and a barometer. Due to the size and weight requirements set by tactical applications, Micro-Electro-Mechanical (MEMS) sensors will be used. However, MEMS sensors suffer from biases and drift errors that may substantially decrease the position accuracy. Therefore, sophisticated error modelling and implementation of integration algorithms are key for providing a viable result. Algorithms used for multi-sensor fusion have traditionally been different versions of Kalman filters. However, Kalman filters are based on the assumptions that the state propagation and measurement models are linear with additive Gaussian noise. Neither of the assumptions is correct for tactical applications, especially for dismounted soldiers, or rescue personnel. Therefore, error modelling and implementation of advanced fusion algorithms are essential for providing a viable result. Our approach is to use particle filtering (PF), which is a sophisticated option for integrating measurements emerging from pedestrian motion having non-Gaussian error characteristics. This paper discusses the statistical modelling of the measurement errors from inertial sensors and vision based heading and translation measurements to include the correct error probability density functions (pdf) in the particle filter implementation. Then, model fitting is used to verify the pdfs of the measurement errors. Based on the deduced error models of the measurements, a particle filtering method is developed to fuse all this information, where the weights of each particle are computed based on the specific models derived. The performance of the developed method is tested via two experiments, one at a university’s premises and another in realistic tactical conditions. The results show significant improvement in the horizontal localization when the measurement errors are carefully modelled and their inclusion into the particle filtering implementation correctly realized. PMID:29443918

  15. Stem revenue losses with effective CDM management.

    PubMed

    Alwell, Michael

    2003-09-01

    Effective CDM management not only minimizes revenue losses due to denied claims, but also helps eliminate administrative costs associated with correcting coding errors. Accountability for CDM management should be assigned to a single individual, who ideally reports to the CFO or high-level finance director. If your organization is prone to making billing errors due to CDM deficiencies, you should consider purchasing CDM software to help you manage your CDM.

  16. Case report of a near medical event in stereotactic radiotherapy due to improper units of measure from a treatment planning system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gladstone, D. J.; Li, S.; Jarvis, L. A.

    2011-07-15

    Purpose: The authors hereby notify the Radiation Oncology community of a potentially lethal error due to improper implementation of linear units of measure in a treatment planning system. The authors report an incident in which a patient was nearly mistreated during a stereotactic radiotherapy procedure due to inappropriate reporting of stereotactic coordinates by the radiation therapy treatment planning system in units of centimeter rather than in millimeter. The authors suggest a method to detect such errors during treatment planning so they are caught and corrected prior to patient positioning on the treatment machine. Methods: Using pretreatment imaging, the authors found that stereotactic coordinates are reported with improper linear units by a treatment planning system. The authors have implemented a redundant, independent method of stereotactic coordinate calculation. Results: Implementation of a double check of stereotactic coordinates via redundant, independent calculation is simple and accurate. Use of this technique will avoid any future error in stereotactic treatment coordinates due to improper linear units, transcription, or other similar errors. Conclusions: The authors recommend an independent double check of stereotactic treatment coordinates during the treatment planning process in order to avoid potential mistreatment of patients.

  17. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    PubMed

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For CO modelled error amount, a range of error types were simulated and effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. 
By modelling instrument imprecision and spatial variability as different error types, we estimate direction and magnitude of the effects of error over a range of error types.
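The contrast between classical- and Berkson-type error described above can be illustrated with a short Monte Carlo sketch (not the authors' code; a plain linear model stands in for their log-linear Poisson model, and all variances and the slope value are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 5000, 0.5

# Classical error: measured = true + noise. The fitted slope is attenuated
# by the reliability ratio var(true)/(var(true)+var(noise)) = 0.5 here.
x_true = rng.normal(0.0, 1.0, n)
y = beta * x_true + rng.normal(0.0, 1.0, n)
x_meas = x_true + rng.normal(0.0, 1.0, n)
slope_classical = np.polyfit(x_meas, y, 1)[0]

# Berkson error: true = assigned + noise, with the noise independent of the
# assigned value (e.g. a central-site monitor). The slope stays unbiased,
# but its standard error grows.
x_assigned = rng.normal(0.0, 1.0, n)
x_true_b = x_assigned + rng.normal(0.0, 1.0, n)
y_b = beta * x_true_b + rng.normal(0.0, 1.0, n)
slope_berkson = np.polyfit(x_assigned, y_b, 1)[0]

print(slope_classical, slope_berkson)  # ~0.25 (attenuated) vs ~0.5
```

With these hypothetical variances the classical-error slope is pulled halfway toward zero, while the Berkson-error slope recovers the true coefficient, mirroring the per-unit behaviour reported in the abstract.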

  18. A correlated meta-analysis strategy for data mining "OMIC" scans.

    PubMed

    Province, Michael A; Borecki, Ingrid B

    2013-01-01

    Meta-analysis is becoming an increasingly popular and powerful tool to integrate findings across studies and OMIC dimensions. But there is a danger that hidden dependencies between putatively "independent" studies can cause inflation of type I error, due to reinforcement of the evidence from false-positive findings. We present here a simple method for conducting meta-analyses that automatically estimates the degree of any such non-independence between OMIC scans and corrects the inference for it, retaining the proper type I error structure. The method does not require the original data from the source studies, but operates only on summary analysis results from OMIC scans. The method is applicable in a wide variety of situations, including combining GWAS and/or sequencing scan results across studies with dependencies due to overlapping subjects, as well as combining scans of correlated traits in a meta-analysis scan for pleiotropic genetic effects. When scans are actually independent, the method correctly detects this and reduces to the traditional meta-analysis, so it may safely be used whenever there is even a suspicion of correlation amongst scans.
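The core idea, estimating between-scan correlation from the genome-wide sea of (mostly null) z-scores and deflating the combined statistic accordingly, can be sketched as follows (hypothetical simulated scans, not the authors' software):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 20000                          # markers in each scan

# Overlapping subjects induce correlation between the two scans' z-scores;
# here a shared component produces a correlation of about 0.4.
shared = rng.normal(size=m)
z1 = np.sqrt(0.6) * rng.normal(size=m) + np.sqrt(0.4) * shared
z2 = np.sqrt(0.6) * rng.normal(size=m) + np.sqrt(0.4) * shared

r = np.corrcoef(z1, z2)[0, 1]              # estimated, not assumed
z_naive = (z1 + z2) / np.sqrt(2.0)         # assumes independence: inflated
z_corr = (z1 + z2) / np.sqrt(2.0 + 2.0 * r)  # correct null variance

# Under the null the corrected statistic is ~N(0,1); the naive one is not.
print(np.var(z_naive), np.var(z_corr))     # ~1.4 vs ~1.0
```

The naive combined statistic has variance well above 1 under the null, which is exactly the type I error inflation the abstract warns about; dividing by the correlation-adjusted null variance restores it.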

  19. An Experimental Study of a Six Key Handprint Chord Keyboard.

    DTIC Science & Technology

    1986-05-01

    analysis: sequence time, list time, and errors, is better divided by group of tests, beginning or ending. This division forms a logical outline from which...accomplished pianists. Due to the limited amount of time at the keyboard that volunteers were willing to endure, asymptotic behavior was not reached...considerable attention, and it includes an idea of time quite different from that enunciated by Newton. According to this theory, there is no

  20. Review of current GPS methodologies for producing accurate time series and their error sources

    NASA Astrophysics Data System (ADS)

    He, Xiaoxing; Montillet, Jean-Philippe; Fernandes, Rui; Bos, Machiel; Yu, Kegen; Hua, Xianghong; Jiang, Weiping

    2017-05-01

    The Global Positioning System (GPS) is an important tool to observe and model geodynamic processes such as plate tectonics and post-glacial rebound. In the last three decades, GPS has seen tremendous advances in the precision of the measurements, which allow researchers to study geophysical signals through a careful analysis of daily time series of GPS receiver coordinates. However, the GPS observations contain errors and the time series can be described as the sum of a real signal and noise. The signal itself can again be divided into station displacements due to geophysical causes and to disturbing factors. Examples of the latter are errors in the realization and stability of the reference frame and corrections due to ionospheric and tropospheric delays and GPS satellite orbit errors. There is an increasing demand for detecting millimeter to sub-millimeter level ground displacement signals in order to further understand regional-scale geodetic phenomena, hence requiring further improvements in the sensitivity of the GPS solutions. This paper provides a review spanning over 25 years of advances in processing strategies, error mitigation methods and noise modeling for the processing and analysis of GPS daily position time series. The processing of the observations is described step-by-step and mainly with three different strategies in order to explain the weaknesses and strengths of the existing methodologies. In particular, we focus on the choice of the stochastic model in the GPS time series, which directly affects the estimation of the functional model including, for example, tectonic rates, seasonal signals and co-seismic offsets. Moreover, the geodetic community continues to develop computational methods to fully automate all phases of GPS time series analysis.
    This effort is largely motivated by the large number of GPS receivers installed around the world for diverse applications, ranging from surveying small deformations of civil engineering structures (e.g., subsidence of a highway bridge) to the detection of particular geophysical signals.
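The functional model mentioned above (a linear rate plus seasonal terms and offsets, estimated from a daily coordinate series) can be sketched with a minimal least-squares fit on synthetic data; the rate, amplitude, and noise level here are hypothetical, and a realistic analysis would also need a stochastic model (e.g. power-law plus white noise) to obtain honest rate uncertainties:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(3650) / 365.25          # 10 years of daily epochs, in years
rate_true, amp = 3.0, 2.0             # mm/yr tectonic rate, mm annual amplitude
y = rate_true * t + amp * np.sin(2 * np.pi * t) + rng.normal(0, 1.5, t.size)

# Design matrix: intercept, linear rate, annual sine/cosine pair.
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef[1])   # estimated rate, close to 3.0 mm/yr
```

With purely white noise this ordinary least-squares fit recovers the rate well; the review's point is that temporally correlated noise leaves the estimate similar but makes its formal uncertainty far too optimistic unless the stochastic model is chosen carefully.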

  1. Should Studies of Diabetes Treatment Stratification Correct for Baseline HbA1c?

    PubMed Central

    Jones, Angus G.; Lonergan, Mike; Henley, William E.; Pearson, Ewan R.; Hattersley, Andrew T.; Shields, Beverley M.

    2016-01-01

    Aims Baseline HbA1c is a major predictor of response to glucose lowering therapy and therefore a potential confounder in studies aiming to identify other predictors. However, baseline adjustment may introduce error if the association between baseline HbA1c and response is substantially due to measurement error and regression to the mean. We aimed to determine whether studies of predictors of response should adjust for baseline HbA1c. Methods We assessed the relationship between baseline HbA1c and glycaemic response in 257 participants treated with GLP-1R agonists and assessed whether it reflected measurement error and regression to the mean using duplicate ‘pre-baseline’ HbA1c measurements not included in the response variable. In this cohort and an additional 2659 participants treated with sulfonylureas we assessed the relationship between covariates associated with baseline HbA1c and treatment response with and without baseline adjustment, and with a bias correction using pre-baseline HbA1c to adjust for the effects of error in baseline HbA1c. Results Baseline HbA1c was a major predictor of response (R2 = 0.19, β = -0.44, p < 0.001). The association between pre-baseline HbA1c and response was similar, suggesting the greater response at higher baseline HbA1c is not mainly due to measurement error and subsequent regression to the mean. In unadjusted analysis in both cohorts, factors associated with baseline HbA1c were associated with response; however, these associations were weak or absent after adjustment for baseline HbA1c. Bias correction did not substantially alter associations. Conclusions Adjustment for the baseline HbA1c measurement is a simple and effective way to reduce bias in studies of predictors of response to glucose lowering therapy. PMID:27050911
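The regression-to-the-mean mechanism the study tests for, and its pre-baseline check, can be illustrated with a toy simulation (all numbers hypothetical): even when the true response is identical for everyone, measurement error makes the measured baseline predict the measured response, while an independent duplicate "pre-baseline" measurement does not.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
true_hba1c = rng.normal(75, 10, n)        # mmol/mol, hypothetical
noise = lambda: rng.normal(0, 5, n)       # assay/biological variability

baseline = true_hba1c + noise()
followup = true_hba1c - 10 + noise()      # uniform 10-unit true response
response = baseline - followup            # measured fall in HbA1c

# Spurious association: baseline error appears in both x and y.
slope = np.polyfit(baseline, response, 1)[0]

# Pre-baseline duplicate: its error is independent of the response, so the
# artefactual slope vanishes.
pre_baseline = true_hba1c + noise()
slope_pre = np.polyfit(pre_baseline, response, 1)[0]

print(slope, slope_pre)   # positive artefact vs ~0
```

Here the artefactual slope equals var(error)/var(baseline) = 25/125 = 0.2 despite a flat true response; a similar pre-baseline slope in the real data (as the authors found) argues that the observed baseline-response relationship is not mainly this artefact.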

  2. The Power of the Spectrum: Combining Numerical Proxy System Models with Analytical Error Spectra to Better Understand Timescale Dependent Proxy Uncertainty

    NASA Astrophysics Data System (ADS)

    Dolman, A. M.; Laepple, T.; Kunz, T.

    2017-12-01

    Understanding the uncertainties associated with proxy-based reconstructions of past climate is critical if they are to be used to validate climate models and contribute to a comprehensive understanding of the climate system. Here we present two related and complementary approaches to quantifying proxy uncertainty. The proxy forward model (PFM) "sedproxy" bitbucket.org/ecus/sedproxy numerically simulates the creation, archiving and observation of marine sediment archived proxies such as Mg/Ca in foraminiferal shells and the alkenone unsaturation index UK'37. It includes the effects of bioturbation, bias due to seasonality in the rate of proxy creation, aliasing of the seasonal temperature cycle into lower frequencies, and error due to cleaning, processing and measurement of samples. Numerical PFMs have the advantage of being very flexible, allowing many processes to be modelled and assessed for their importance. However, as more and more proxy-climate data become available, their use in advanced data products necessitates rapid estimates of uncertainties for both the raw reconstructions, and their smoothed/derived products, where individual measurements have been aggregated to coarser time scales or time-slices. To address this, we derive closed-form expressions for power spectral density of the various error sources. The power spectra describe both the magnitude and autocorrelation structure of the error, allowing timescale dependent proxy uncertainty to be estimated from a small number of parameters describing the nature of the proxy, and some simple assumptions about the variance of the true climate signal. We demonstrate and compare both approaches for time-series of the last millennia, Holocene, and the deglaciation. 
    While the numerical forward model can create pseudoproxy records driven by climate model simulations, the analytical model of proxy error allows a comprehensive exploration of parameter space and mapping of climate-signal reconstructability, conditional on the climate and sampling conditions.
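The timescale dependence at the heart of the analytical approach can be sketched simply: when proxy errors are autocorrelated, averaging measurements to a coarser timescale reduces the error variance much more slowly than the naive 1/n rule. Here an AR(1) process stands in for the paper's closed-form error spectra, and the persistence parameter is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
phi, n_series, n_avg = 0.7, 20000, 10     # AR(1) persistence, series length, averaging window

# Generate a long AR(1) error series.
e = np.zeros(n_series)
w = rng.normal(0, 1, n_series)
for i in range(1, n_series):
    e[i] = phi * e[i - 1] + w[i]

# Average over non-overlapping blocks of n_avg samples.
block_means = e[: n_series - n_series % n_avg].reshape(-1, n_avg).mean(axis=1)
var_white = np.var(e) / n_avg             # naive 1/n shrinkage
var_actual = np.var(block_means)          # larger: errors are correlated
print(var_actual / var_white)             # > 1
```

The ratio (about 4 for these parameters) is exactly the kind of quantity the analytical error spectra deliver in closed form, without simulation.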

  3. Prevalence of visual impairment due to uncorrected refractive error: Results from Delhi-Rapid Assessment of Visual Impairment Study.

    PubMed

    Senjam, Suraj Singh; Vashist, Praveen; Gupta, Noopur; Malhotra, Sumit; Misra, Vasundhara; Bhardwaj, Amit; Gupta, Vivek

    2016-05-01

    To estimate the prevalence of visual impairment (VI) due to uncorrected refractive error (URE) and to assess the barriers to utilization of services in the adult urban population of Delhi. A population-based rapid assessment of VI was conducted among people aged 40 years and above in 24 randomly selected clusters of East Delhi district. Presenting visual acuity (PVA) was assessed in each eye using Snellen's "E" chart. Pinhole examination was done if PVA was <20/60 in either eye, and an ocular examination was performed to ascertain the cause of VI. Barriers to utilization of services for refractive error were recorded with questionnaires. Of 2421 individuals enumerated, 2331 (96%) were examined, of whom 50.7% were female. The mean age of examined subjects was 51.32 ± 10.5 years (standard deviation). VI in either eye due to URE was present in 275 individuals (11.8%, 95% confidence interval [CI]: 10.5-13.1). URE was identified as the most common cause (53.4%) of VI. The overall prevalence of VI due to URE in the study population was 6.1% (95% CI: 5.1-7.0). Older people and females were more likely to have VI due to URE (odds ratio [OR] = 12.3; P < 0.001 and OR = 1.5; P < 0.02). Lack of felt need was the most common reported barrier (31.5%). The prevalence of VI due to URE among the urban adult population of Delhi is still high despite the availability of abundant eye care facilities. The majority of reported barriers are related to human behavior and attitudes toward refractive error. Understanding these aspects will help in planning appropriate strategies to eliminate VI due to URE.
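The abstract's headline interval can be reproduced with a standard Wald (normal-approximation) 95% confidence interval on a proportion, using the reported counts:

```python
import math

cases, n = 275, 2331                 # URE-related VI cases among examined
p = cases / n
se = math.sqrt(p * (1 - p) / n)      # standard error of the proportion
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"{100*p:.1f}% (95% CI: {100*lo:.1f}-{100*hi:.1f})")
# -> 11.8% (95% CI: 10.5-13.1), matching the reported figures
```

This ignores the cluster sampling design (design effects widen the interval slightly), but it confirms the arithmetic behind the published numbers.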

  4. Computerized pharmaceutical intervention to reduce reconciliation errors at hospital discharge in Spain: an interrupted time-series study.

    PubMed

    García-Molina Sáez, C; Urbieta Sanz, E; Madrigal de Torres, M; Vicente Vera, T; Pérez Cárceles, M D

    2016-04-01

    It is well known that medication reconciliation at discharge is a key strategy to ensure proper drug prescription and the effectiveness and safety of any treatment. Different types of interventions to reduce reconciliation errors at discharge have been tested, many of which are based on the use of electronic tools, as these are useful to optimize the medication reconciliation process. However, not all countries are progressing at the same speed in this task and not all tools are equally effective, so it is important to collate updated country-specific data in order to identify possible strategies for improvement in each particular region. Our aim therefore was to analyse the effectiveness of a computerized pharmaceutical intervention to reduce reconciliation errors at discharge in Spain. A quasi-experimental interrupted time-series study was carried out in the cardio-pneumology unit of a general hospital from February to April 2013. The study consisted of three phases: pre-intervention, intervention and post-intervention, each involving 23 days of observations. During the intervention period, a pharmacist was included in the medical team and entered the patient's pre-admission medication in a computerized tool integrated into the electronic clinical history of the patient. The effectiveness was evaluated by the differences between the mean percentages of reconciliation errors in each period using a Mann-Whitney U test accompanied by Bonferroni correction, eliminating autocorrelation of the data by first using an ARIMA analysis. In addition, the types of error identified and their potential seriousness were analysed. A total of 321 patients (119, 105 and 97 in each phase, respectively) were included in the study. For the 3966 medications recorded, 1087 reconciliation errors were identified in 77.9% of the patients.
    The mean percentage of reconciliation errors per patient in the first period of the study was 42.18%, falling to 19.82% during the intervention period (P < 0.001). When the intervention was withdrawn, the mean percentage of reconciliation errors increased again to 27.72% (P = 0.008). The difference between the percentages of the pre- and post-intervention periods was also statistically significant (P < 0.001). Most reconciliation errors were due to omission (46.7%) or incomplete prescription (43.8%), and 35.3% of the errors could have caused harm to the patient. A computerized pharmaceutical intervention is thus shown to reduce reconciliation errors in a context with a high incidence of such errors. © 2016 John Wiley & Sons Ltd.
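The period-to-period comparison described above rests on the Mann-Whitney U statistic, which can be computed without any library; the daily error percentages below are made-up stand-ins for the paper's day-level data:

```python
def mann_whitney_u(a, b):
    """U statistic for sample a versus sample b (midranks for ties)."""
    vals = list(a) + list(b)
    order = sorted(range(len(vals)), key=lambda k: vals[k])
    r = [0.0] * len(vals)
    i = 0
    while i < len(order):
        j = i
        # extend j over any run of tied values
        while j + 1 < len(order) and vals[order[j + 1]] == vals[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1     # midrank, 1-based
        i = j + 1
    rank_sum_a = sum(r[: len(a)])
    return rank_sum_a - len(a) * (len(a) + 1) / 2

pre = [44.0, 40.5, 41.9, 43.2, 42.0]       # hypothetical daily % errors
during = [20.1, 18.7, 21.4, 19.9, 19.0]
u = mann_whitney_u(pre, during)
print(u)   # 25.0 = n1*n2: complete separation of the two periods
```

A U equal to n1*n2 (here 25) means every pre-intervention day had a higher error percentage than every intervention day, the most extreme evidence the test can give for a shift between periods.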

  5. Increased instrument intelligence--can it reduce laboratory error?

    PubMed

    Jekelis, Albert W

    2005-01-01

    Recent literature has focused on the reduction of laboratory errors and the potential impact on patient management. This study assessed the intelligent, automated preanalytical process-control abilities in newer generation analyzers as compared with older analyzers and the impact on error reduction. Three generations of immuno-chemistry analyzers were challenged with pooled human serum samples for a 3-week period. One of the three analyzers had an intelligent process of fluidics checks, including bubble detection. Bubbles can cause erroneous results due to incomplete sample aspiration. This variable was chosen because it is the most easily controlled sample defect that can be introduced. Traditionally, lab technicians have had to visually inspect each sample for the presence of bubbles. This is time consuming and introduces the possibility of human error. Instruments with bubble detection may be able to eliminate the human factor and reduce errors associated with the presence of bubbles. Specific samples were vortexed daily to introduce a visible quantity of bubbles, then immediately placed in the daily run. Errors were defined as a reported result greater than three standard deviations below the mean and associated with incomplete sample aspiration of the analyte of the individual analyzer. Three standard deviations represented the target limits of proficiency testing. The results of the assays were examined for accuracy and precision. Efficiency, measured as process throughput, was also measured to associate a cost factor and potential impact of the error detection on the overall process. The analyzers' performance stratified according to their level of internal process control. The older analyzers without bubble detection reported 23 erred results. The newest analyzer with bubble detection reported one specimen incorrectly. The precision and accuracy of the nonvortexed specimens were excellent and acceptable for all three analyzers.
    No errors were found in the nonvortexed specimens. There were no significant differences in overall process time for any of the analyzers when tests were arranged in an optimal configuration. The analyzer with advanced fluidic intelligence demonstrated the greatest ability to deal appropriately with an incomplete aspiration by not processing and reporting a result for the sample. This study suggests that preanalytical process-control capabilities could reduce errors. By association, it implies that similar intelligent process controls could favorably impact the error rate and, in the case of this instrument, do so without negatively impacting process throughput. Other improvements may be realized as a result of having an intelligent error-detection process, including further reduction in misreported results, fewer repeats, less operator intervention, and less reagent waste.
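The study's error criterion is easy to state in code: a result counts as an error when it falls more than three standard deviations below the established mean for the pooled sample. A minimal sketch, with all values hypothetical:

```python
import statistics

# Established statistics for the pooled serum sample (hypothetical units).
baseline = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 10.0, 9.7, 10.2]
mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)

# A daily run: 4.2 mimics a short-sample result from incomplete aspiration.
daily_run = [10.0, 4.2, 9.9]
errors = [x for x in daily_run if x < mean - 3 * sd]
print(errors)   # [4.2]
```

Note that the reference mean and SD come from known-good results; computing them from a run that already contains the aberrant value would inflate the SD and could mask the very error being sought.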

  6. Pre-University Students' Errors in Integration of Rational Functions and Implications for Classroom Teaching

    ERIC Educational Resources Information Center

    Yee, Ng Kin; Lam, Toh Tin

    2008-01-01

    This paper reports on students' errors in performing integration of rational functions, a topic of calculus in the pre-university mathematics classrooms. Generally the errors could be classified as those due to the students' weak algebraic concepts and their lack of understanding of the concept of integration. With the students' inability to link…

  7. The Accuracy of Aggregate Student Growth Percentiles as Indicators of Educator Performance

    ERIC Educational Resources Information Center

    Castellano, Katherine E.; McCaffrey, Daniel F.

    2017-01-01

    Mean or median student growth percentiles (MGPs) are a popular measure of educator performance, but they lack rigorous evaluation. This study investigates the error in MGP due to test score measurement error (ME). Using analytic derivations, we find that errors in the commonly used MGP are correlated with average prior latent achievement: Teachers…

  8. 26 CFR 301.6213-1 - Restrictions applicable to deficiencies; petition to Tax Court.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... restrictions on assessment of deficiencies—(1) Mathematical errors. If a taxpayer is notified of an additional amount of tax due on account of a mathematical error appearing upon the return, such notice is not deemed... to a mathematical error appearing on the return. That is, the district director or the director of...

  9. Comparison of different methods to compute a preliminary orbit of Space Debris using radar observations

    NASA Astrophysics Data System (ADS)

    Ma, Hélène; Gronchi, Giovanni F.

    2014-07-01

    We present a new method of preliminary orbit determination for space debris using radar observations, which we call Infang. It performs a linkage of two sets of four observations collected at close times. The context is characterized by the accuracy of the range ρ, whereas the right ascension α and the declination δ are much more inaccurate due to observational errors. The method can correct α and δ, assuming exact knowledge of the range ρ. Considering no perturbations from the J2 effect, but including errors in the observations, we compare the new method, the classical method of Gibbs, and the more recent Keplerian integrals method. The development of Infang is ongoing, and it will be further improved and tested.
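The classical Gibbs method used as a comparison baseline recovers the velocity at the middle of three coplanar position vectors. A minimal sketch, checked against a circular test orbit (the radius and angles are arbitrary; mu is Earth's gravitational parameter in km^3/s^2):

```python
import numpy as np

mu = 398600.4418   # km^3/s^2

def gibbs(r1, r2, r3):
    """Velocity at r2 from three coplanar position vectors (Gibbs method)."""
    z12, z23, z31 = np.cross(r1, r2), np.cross(r2, r3), np.cross(r3, r1)
    n = (np.linalg.norm(r1) * z23 + np.linalg.norm(r2) * z31
         + np.linalg.norm(r3) * z12)
    d = z12 + z23 + z31
    s = (r1 * (np.linalg.norm(r2) - np.linalg.norm(r3))
         + r2 * (np.linalg.norm(r3) - np.linalg.norm(r1))
         + r3 * (np.linalg.norm(r1) - np.linalg.norm(r2)))
    lg = np.sqrt(mu / (np.linalg.norm(n) * np.linalg.norm(d)))
    return lg * (np.cross(d, r2) / np.linalg.norm(r2) + s)

# Circular equatorial test orbit: the speed must come out as sqrt(mu/R).
R = 7000.0
ang = np.radians([0.0, 30.0, 60.0])
r1, r2, r3 = (R * np.array([np.cos(a), np.sin(a), 0.0]) for a in ang)
v2 = gibbs(r1, r2, r3)
print(np.linalg.norm(v2))   # ~7.546 km/s
```

Gibbs is exact for two-body motion, so the circular-orbit check is a tight one; the abstract's point is that with noisy angles and accurate ranges, purely geometric methods like this degrade, motivating the linkage approach.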

  10. [Gene therapy for the treatment of inborn errors of metabolism].

    PubMed

    Pérez-López, Jordi

    2014-06-16

    Due to the enzymatic defect in inborn errors of metabolism, there is a blockage in the metabolic pathways and an accumulation of toxic metabolites. Currently available therapies include dietary restriction, empowering of alternative metabolic pathways, and the replacement of the deficient enzyme by cell transplantation, liver transplantation or administration of the purified enzyme. Gene therapy, using the transfer in the body of the correct copy of the altered gene by a vector, is emerging as a promising treatment. However, the difficulty of vectors currently used to cross the blood brain barrier, the immune response, the cellular toxicity and potential oncogenesis are some limitations that could greatly limit its potential clinical application in human beings. Copyright © 2013 Elsevier España, S.L. All rights reserved.

  11. VHF command system study. [spectral analysis of GSFC VHF-PSK and VHF-FSK Command Systems

    NASA Technical Reports Server (NTRS)

    Gee, T. H.; Geist, J. M.

    1973-01-01

    Solutions are provided to specific problems arising in the GSFC VHF-PSK and VHF-FSK Command Systems in support of establishment and maintenance of Data Systems Standards. Signal structures which incorporate transmission on the uplink of a clock along with the PSK or FSK data are considered. Strategies are developed for allocating power between the clock and data, and spectral analyses are performed. Bit error probability and other probabilities pertinent to correct transmission of command messages are calculated. Biphase PCM/PM and PCM/FM are considered as candidate modulation techniques on the telemetry downlink, with application to command verification. Comparative performance of PCM/PM and PSK systems is given special attention, including implementation considerations. Gain in bit error performance due to coding is also considered.
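For context on the bit error probabilities analyzed above, the textbook expression for coherent BPSK over an additive white Gaussian noise channel is Pb = Q(sqrt(2*Eb/N0)), which can be written with the complementary error function (this is the standard formula, not the report's specific link budget):

```python
import math

def bpsk_ber(ebn0_db):
    """Coherent BPSK bit error probability, Pb = 0.5*erfc(sqrt(Eb/N0))."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)   # dB -> linear
    return 0.5 * math.erfc(math.sqrt(ebn0))

print(bpsk_ber(10.0))   # ~3.9e-6 at Eb/N0 = 10 dB
```

Coding gain, as discussed in the report, shifts this curve left: the same bit error rate is achieved at a lower Eb/N0.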

  12. A comparison of methods for DPLL loop filter design

    NASA Technical Reports Server (NTRS)

    Aguirre, S.; Hurd, W. J.; Kumar, R.; Statman, J.

    1986-01-01

    Four design methodologies for loop filters for a class of digital phase-locked loops (DPLLs) are presented. The first design maps an optimum analog filter into the digital domain; the second approach designs a filter that minimizes in discrete time a weighted combination of the variance of the phase error due to noise and the sum of squares of the deterministic phase error component; the third method uses Kalman filter estimation theory to design a filter composed of a least-squares fading-memory estimator and a predictor. The last design relies on classical theory, including rules for the design of compensators. Linear analysis is used throughout the article to compare the different designs, including stability, steady-state performance and transient behavior of the loops. Design methodology is not critical when the loop update rate can be made high relative to the loop bandwidth, as the performance then approaches that of continuous time. For low update rates, however, the minimization method is significantly superior to the other methods.
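A minimal sketch of the kind of loop being designed: a second-order (type-2) DPLL with a proportional-plus-integral loop filter driving a numerically controlled oscillator. The gains below are illustrative, not taken from the article; the demonstration shows the integrator absorbing a constant frequency offset so the steady-state phase error goes to zero.

```python
import math

kp, ki = 0.2, 0.01            # proportional and integral gains (illustrative)
freq_offset = 0.01            # input frequency offset, cycles/sample
theta_in = theta_nco = integ = 0.0

for n in range(2000):
    theta_in += 2 * math.pi * freq_offset
    # Wrapped phase error between input and NCO.
    err = math.atan2(math.sin(theta_in - theta_nco),
                     math.cos(theta_in - theta_nco))
    integ += ki * err                 # integral path
    theta_nco += kp * err + integ    # filter output advances the NCO

print(abs(err))   # ~0: the integrator has absorbed the frequency offset
```

This is the discrete-time behavior all four design methodologies target; they differ in how kp and ki (and higher-order terms) are chosen to trade noise variance against transient error, which matters most when the update rate is low relative to the loop bandwidth.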

  13. DABAM: an open-source database of X-ray mirrors metrology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanchez del Rio, Manuel; Bianchi, Davide; Cocco, Daniele

    2016-04-20

    An open-source database containing metrology data for X-ray mirrors is presented. It makes available metrology data (mirror height and slope profiles) that can be used with simulation tools for calculating the effects of optical surface errors on the performance of an optical instrument, such as a synchrotron beamline. A typical case is the degradation of the intensity profile at the focal position in a beamline due to mirror surface errors. This database for metrology (DABAM) aims to provide users of simulation tools with data from real mirrors. The data included in the database are described in this paper, with details of how the mirror parameters are stored. An accompanying software tool is provided to allow simple access to and processing of these data, to calculate the most usual statistical parameters, and to create input files for the most-used simulation codes. Some optics simulations are presented and discussed to illustrate the real use of the profiles from the database.
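The basic quantities DABAM stores, height and slope profiles and their summary statistics, can be sketched as follows on a synthetic "mirror" (a single sinusoidal height error; real profiles come from the database files, and the amplitude and period here are hypothetical):

```python
import numpy as np

length, n = 0.1, 1000                 # 100 mm mirror, 1000 sample points
x = np.linspace(0.0, length, n)       # position along the mirror, m
amp, period = 5e-9, 0.02              # 5 nm height error, 20 mm period
h = amp * np.sin(2 * np.pi * x / period)   # height profile, m

slope = np.gradient(h, x)             # slope profile, rad
rms_slope = np.sqrt(np.mean(slope ** 2))
print(rms_slope)   # ~ amp*(2*pi/period)/sqrt(2) ~ 1.1e-6 rad
```

The RMS slope error (here about 1.1 microradians) is the statistic most often quoted for synchrotron mirrors, since it controls the blurring of the focal spot that the abstract describes.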

  14. DABAM: an open-source database of X-ray mirrors metrology

    PubMed Central

    Sanchez del Rio, Manuel; Bianchi, Davide; Cocco, Daniele; Glass, Mark; Idir, Mourad; Metz, Jim; Raimondi, Lorenzo; Rebuffi, Luca; Reininger, Ruben; Shi, Xianbo; Siewert, Frank; Spielmann-Jaeggi, Sibylle; Takacs, Peter; Tomasset, Muriel; Tonnessen, Tom; Vivo, Amparo; Yashchuk, Valeriy

    2016-01-01

    An open-source database containing metrology data for X-ray mirrors is presented. It makes available metrology data (mirror heights and slopes profiles) that can be used with simulation tools for calculating the effects of optical surface errors in the performances of an optical instrument, such as a synchrotron beamline. A typical case is the degradation of the intensity profile at the focal position in a beamline due to mirror surface errors. This database for metrology (DABAM) aims to provide to the users of simulation tools the data of real mirrors. The data included in the database are described in this paper, with details of how the mirror parameters are stored. An accompanying software is provided to allow simple access and processing of these data, calculate the most usual statistical parameters, and also include the option of creating input files for most used simulation codes. Some optics simulations are presented and discussed to illustrate the real use of the profiles from the database. PMID:27140145

  16. DABAM: An open-source database of X-ray mirrors metrology

    DOE PAGES

    Sanchez del Rio, Manuel; Bianchi, Davide; Cocco, Daniele; ...

    2016-05-01

    An open-source database containing metrology data for X-ray mirrors is presented. It makes available metrology data (mirror height and slope profiles) that can be used with simulation tools for calculating the effects of optical surface errors on the performance of an optical instrument, such as a synchrotron beamline. A typical case is the degradation of the intensity profile at the focal position in a beamline due to mirror surface errors. This database for metrology (DABAM) aims to provide users of simulation tools with data from real mirrors. The data included in the database are described in this paper, with details of how the mirror parameters are stored. An accompanying software tool is provided to allow simple access to and processing of these data, to calculate the most usual statistical parameters, and to create input files for the most-used simulation codes. In conclusion, some optics simulations are presented and discussed to illustrate the real use of the profiles from the database.

  19. Applicability of Infrared Photorefraction for Measurement of Accommodation in Awake-Behaving Normal and Strabismic Monkeys

    PubMed Central

    Bossong, Heather; Swann, Michelle; Glasser, Adrian; Das, Vallabh E.

    2010-01-01

    Purpose This study was designed to use infrared photorefraction to measure accommodation in awake-behaving normal and strabismic monkeys and to describe properties of photorefraction calibrations in these monkeys. Methods Ophthalmic trial lenses were used to calibrate the slope of pupil vertical pixel-intensity profile measurements made with a custom-built infrared photorefractor. Day-to-day variability in photorefraction calibration curves, variability in calibration coefficients due to misalignment of the photorefractor Purkinje image and the center of the pupil, and variability in refractive error due to off-axis measurements were evaluated. Results The linear range of calibration of the photorefractor was found for ophthalmic lenses ranging from –1 D to +4 D. Calibration coefficients differed across the monkeys tested (two strabismic, one normal) but were similar for each monkey over different experimental days. In both normal and strabismic monkeys, small misalignment of the photorefractor Purkinje image with the center of the pupil resulted in only small changes in calibration coefficients that were not statistically significant (P > 0.05). The error in off-axis measurement of refractive error was also small in the normal and strabismic monkeys (~1 D to 2 D) as long as the magnitude of misalignment was <10°. Conclusions Remote infrared photorefraction is suitable for measuring accommodation in awake-behaving normal and strabismic monkeys. Specific challenges posed by the strabismic monkeys, such as misalignment of the photorefractor Purkinje image and the center of the pupil during either calibration or measurement of accommodation (which may arise from unsteady fixation or small eye movements, including nystagmus), result in only small changes in measured refractive error. PMID:19029024
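The calibration step described above amounts to a linear regression of the measured pupil intensity-profile slope on trial-lens power over the linear range (-1 D to +4 D); the inverse fit then converts later readings into dioptres. A minimal sketch with hypothetical readings:

```python
import numpy as np

# Trial-lens powers (dioptres) and the corresponding measured pupil
# intensity-profile slopes (arbitrary units); all values hypothetical.
lens_power = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
pixel_slope = np.array([-0.8, 0.1, 1.05, 2.0, 2.95, 4.1])

gain, offset = np.polyfit(lens_power, pixel_slope, 1)

# Invert the calibration to turn a later raw reading into a refraction.
reading = 0.5
refraction = (reading - offset) / gain   # dioptres
print(gain, refraction)
```

The paper's finding that the gain and offset are stable across days for a given monkey is what justifies reusing one calibration across sessions.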

  20. Ophthalmologic abnormalities among students with cognitive impairment in eastern Taiwan: The special group with undetected visual impairment.

    PubMed

    Tsao, Wei-Shan; Hsieh, Hsi-Pao; Chuang, Yi-Ting; Sheu, Min-Muh

    2017-05-01

    Students with cognitive impairment are at increased risk of suffering from visual impairment due to refractive errors and ocular disease, which can adversely influence learning and daily activities. The purpose of this study was to evaluate the ocular and visual status among students at the special education school in Hualien. All students at the National Hualien Special Education School were evaluated. Full eye examinations were conducted by a skilled ophthalmologist. The students' medical records and disability types were reviewed. A total of 241 students, aged 7-18 years, were examined. Visual acuity could be assessed in 138 students. A total of 169/477 (35.4%) eyes were found to suffer from refractive errors, including 20 eyes with high myopia (≤-6.0 D) and 16 eyes with moderate hypermetropia (+3.0 D to +5.0 D). A total of 84/241 (34.8%) students needed spectacles to correct their vision, thus improving their daily activities and learning process, but only 15/241 (6.2%) students were wearing suitable corrective spectacles. A total of 55/241 students (22.8%) had ocular disorders, which influenced their visual function. The multiple disability group had a statistically significant higher prevalence of ocular disorders (32.9%) than the simple intellectual disability group (19.6%). Students with cognitive impairment in eastern Taiwan have a high risk of visual impairment due to refractive errors and ocular disorders. Importantly, many students have unrecognized correctable refractive errors. Regular ophthalmic examination should be administered to address this issue and prevent further disability in this already handicapped group. Copyright © 2016. Published by Elsevier B.V.

  1. Assessment of Satellite Surface Radiation Products in Highland Regions with Tibet Instrumental Data

    NASA Technical Reports Server (NTRS)

    Yang, Kun; Koike, Toshio; Stackhouse, Paul; Mikovitz, Colleen

    2006-01-01

    This study presents results of comparisons between instrumental radiation data in the elevated Tibetan Plateau and two global satellite products: the Global Energy and Water Cycle Experiment - Surface Radiation Budget (GEWEX-SRB) and the International Satellite Cloud Climatology Project - Flux Data (ISCCP-FD). In general, shortwave radiation (SW) is estimated better by ISCCP-FD while longwave radiation (LW) is estimated better by GEWEX-SRB, but all the radiation components in both products are under-estimated. Severe and systematic errors were found in monthly-mean SRB SW (on plateau average, -48 W/sq m for downward SW and -18 W/sq m for upward SW) and FD LW (on plateau average, -37 W/sq m for downward LW and -62 W/sq m for upward LW). Errors in monthly-mean diurnal variations are even larger than the monthly-mean errors. Though the LW errors can be reduced by about 10 W/sq m after a correction for the altitude difference between the sites and the SRB and FD grids, these errors are still higher than those for other regions. The large errors in SRB SW were mainly due to a processing mistake in the treatment of the elevation effect, while the errors in FD LW were mainly due to significant errors in the input data. We suggest reprocessing satellite surface radiation budget data, at least for highland areas like Tibet.

  2. Uses of tuberculosis mortality surveillance to identify programme errors and improve database reporting.

    PubMed

    Selig, L; Guedes, R; Kritski, A; Spector, N; Lapa E Silva, J R; Braga, J U; Trajman, A

    2009-08-01

    In 2006, 848 persons died from tuberculosis (TB) in Rio de Janeiro, Brazil, corresponding to a mortality rate of 5.4 per 100 000 population. No specific TB death surveillance actions are currently in place in Brazil. Two public general hospitals with large open emergency rooms in Rio de Janeiro City. To evaluate the contribution of TB death surveillance in detecting gaps in TB control. We conducted a survey of TB deaths from September 2005 to August 2006. Records of TB-related deaths and deaths due to undefined causes were investigated. Complementary data were gathered from the mortality and TB notification databases. Seventy-three TB-related deaths were investigated. Transmission hazards were identified among firefighters, health care workers and in-patients. Management errors included failure to isolate suspected cases, to confirm TB, to correct drug doses in underweight patients and to trace contacts. Following the survey, 36 cases that had not previously been notified were included in the national TB notification database and the outcome of 29 notified cases was corrected. TB mortality surveillance can contribute to TB monitoring and evaluation by detecting correctable and specific programme- and hospital-based care errors, and by improving the accuracy of TB database reporting. Specific local and programmatic interventions can be proposed as a result.

  3. [Responsibility due to medication errors in France: a study based on SHAM insurance data].

    PubMed

    Theissen, A; Orban, J-C; Fuz, F; Guerin, J-P; Flavin, P; Albertini, S; Maricic, S; Saquet, D; Niccolai, P

    2015-03-01

    Safe medication practices in hospitals constitute a major public health issue. The drug supply chain is a complex process and a potential source of errors and harm to patients. SHAM is the largest French provider of medical liability insurance and a relevant source of data on health care complications. The main objective of the study was to analyze the type and cause of medication errors declared to SHAM that led to a conviction by a court. We performed a retrospective study of insurance claims provided by SHAM involving a medication error and leading to a conviction over a 6-year period (2005 to 2010). Thirty-one cases were analysed, 21 for scheduled activity and 10 for emergency activity. The consequences of the claims were mostly serious (12 deaths, 14 serious complications, 5 simple complications). The types of medication errors were drug monitoring errors (11 cases), administration errors (5 cases), overdoses (6 cases), allergies (4 cases), contraindications (3 cases) and omissions (2 cases). The intravenous route of administration was involved in 19 of 31 cases (61%). The causes identified by the court expert were errors related to service organization (11), medical practice (11) or nursing practice (13). Only one claim was due to the hospital pharmacy. Claims related to the drug supply chain are infrequent but potentially serious. These data should help strengthen the quality approach in risk management. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  4. Dipole estimation errors due to not incorporating anisotropic conductivities in realistic head models for EEG source analysis

    NASA Astrophysics Data System (ADS)

    Hallez, Hans; Staelens, Steven; Lemahieu, Ignace

    2009-10-01

    EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10°. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.

  5. High performance interconnection between high data rate networks

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Maly, K.; Overstreet, C. M.; Zhang, L.; Sun, W.

    1992-01-01

    The bridge/gateway system needed to interconnect a wide range of computer networks in support of a wide range of user quality-of-service requirements is discussed. The bridge/gateway must handle a wide range of message types, including synchronous and asynchronous traffic, large bursty messages, short self-contained messages, time-critical messages, etc. It is shown that messages can be classified into three basic classes: synchronous messages and large and small asynchronous messages. The first two require call setup so that packet identification, buffer handling, etc. can be supported in the bridge/gateway; identification enables resequencing and changes in packet size. The third class is for messages which do not require call setup. Resequencing hardware designed to handle two types of resequencing problems is presented. The first handles a virtual parallel circuit, which can scramble channel bytes. The second system is effective in handling both synchronous and asynchronous traffic between networks with highly differing packet sizes and data rates. The two other major needs for the bridge/gateway are congestion and error control. A dynamic, lossless congestion-control scheme which can easily support effective error correction is presented. Results indicate that the congestion-control scheme provides close to optimal capacity under congested conditions. Under conditions where errors may develop due to intervening networks which are not lossless, intermediate error recovery and correction takes one-third less time than equivalent end-to-end error correction under similar conditions.

  6. Differential sea-state bias: A case study using TOPEX/POSEIDON data

    NASA Technical Reports Server (NTRS)

    Stewart, Robert H.; Devalla, B.

    1994-01-01

    We used selected data from the NASA altimeter TOPEX/POSEIDON to calculate differences in range measured by the C and Ku-band altimeters when the satellite overflew 5 to 15 m waves late at night. The range difference is due to free electrons in the ionosphere and to errors in sea-state bias. For the selected data the ionospheric influence on Ku range is less than 2 cm. Any difference in range over short horizontal distances is due only to a small along-track variability of the ionosphere and to errors in calculating the differential sea-state bias. We find that there is a barely detectable error in the bias in the geophysical data records. The wave-induced error in the ionospheric correction is less than 0.2% of significant wave height. The equivalent error in differential range is less than 1% of wave height. Errors in the differential sea-state bias calculations appear to be small even for extreme wave heights that greatly exceed the conditions on which the bias is based. The results also improved our confidence in the sea-state bias correction used for calculating the geophysical data records. Any error in the correction must influence Ku and C-band ranges almost equally.
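
    For background, the dual-frequency principle behind the Ku/C comparison can be sketched as follows. The ionospheric range delay scales as 1/f², so the difference of the two ranges isolates the ionospheric term once sea-state bias is accounted for. The frequencies below are the published TOPEX band frequencies, but the helper function and demo numbers are illustrative, not the paper's processing:

```python
F_KU = 13.6e9   # TOPEX Ku-band frequency, Hz
F_C = 5.3e9     # TOPEX C-band frequency, Hz

def iono_delay_ku(range_ku, range_c):
    """Ku-band ionospheric delay inferred from the Ku/C range difference.

    Each measured range is R + A/f^2 for some ionospheric constant A,
    so (range_ku - range_c) = A*(1/F_KU^2 - 1/F_C^2), and the Ku delay
    A/F_KU^2 follows by rescaling the difference.
    """
    return (range_ku - range_c) * F_C**2 / (F_C**2 - F_KU**2)

# demo: a synthetic delay A/f^2, with A chosen so the C-band delay is ~2 cm
A = 0.02 * F_C**2
print(iono_delay_ku(1e6 + A / F_KU**2, 1e6 + A / F_C**2))
```

    Any error that affects both bands equally, such as the sea-state bias discussed in the abstract, cancels only partially in this difference, which is why a bias error shows up directly in the inferred ionospheric correction.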

  7. Finite-time sliding surface constrained control for a robot manipulator with an unknown deadzone and disturbance.

    PubMed

    Ik Han, Seong; Lee, Jangmyung

    2016-11-01

    This paper presents finite-time sliding mode control (FSMC) with predefined constraints for the tracking error and sliding surface in order to obtain robust positioning of a robot manipulator with input nonlinearity due to an unknown deadzone and external disturbance. An assumed model feedforward FSMC was designed to avoid tedious identification procedures for the manipulator parameters and to obtain a fast response time. Two constraint switching control functions based on the tracking error and finite-time sliding surface were added to the FSMC to guarantee the predefined tracking performance despite the presence of an unknown deadzone and disturbance. The tracking error due to the deadzone and disturbance can be suppressed within the predefined error boundary simply by tuning the gain value of the constraint switching function and without the addition of an extra compensator. Therefore, the designed constraint controller has a simpler structure than conventional transformed error constraint methods and the sliding surface constraint scheme can also indirectly guarantee the tracking error constraint while being more stable than the tracking error constraint control. A simulation and experiment were performed on an articulated robot manipulator to validate the proposed control schemes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Modulation of error-sensitivity during a prism adaptation task in people with cerebellar degeneration

    PubMed Central

    Shadmehr, Reza; Ohminami, Shinya; Tsutsumi, Ryosuke; Shirota, Yuichiro; Shimizu, Takahiro; Tanaka, Nobuyuki; Terao, Yasuo; Tsuji, Shoji; Ugawa, Yoshikazu; Uchimura, Motoaki; Inoue, Masato; Kitazawa, Shigeru

    2015-01-01

    Cerebellar damage can profoundly impair human motor adaptation. For example, if reaching movements are perturbed abruptly, cerebellar damage impairs the ability to learn from the perturbation-induced errors. Interestingly, if the perturbation is imposed gradually over many trials, people with cerebellar damage may exhibit improved adaptation. However, this result is controversial, since the differential effects of gradual vs. abrupt protocols have not been observed in all studies. To examine this question, we recruited patients with pure cerebellar ataxia due to cerebellar cortical atrophy (n = 13) and asked them to reach to a target while viewing the scene through wedge prisms. The prisms were computer controlled, making it possible to impose the full perturbation abruptly in one trial, or build up the perturbation gradually over many trials. To control visual feedback, we employed shutter glasses that removed visual feedback during the reach, allowing us to measure trial-by-trial learning from error (termed error-sensitivity), and trial-by-trial decay of motor memory (termed forgetting). We found that the patients benefited significantly from the gradual protocol, improving their performance with respect to the abrupt protocol by exhibiting smaller errors during the exposure block, and producing larger aftereffects during the postexposure block. Trial-by-trial analysis suggested that this improvement was due to increased error-sensitivity in the gradual protocol. Therefore, cerebellar patients exhibited an improved ability to learn from error if they experienced those errors gradually. This improvement coincided with increased error-sensitivity and was present in both groups of subjects, suggesting that control of error-sensitivity may be spared despite cerebellar damage. PMID:26311179

  9. [From the concept of guilt to the value-free notification of errors in medicine. Risks, errors and patient safety].

    PubMed

    Haller, U; Welti, S; Haenggi, D; Fink, D

    2005-06-01

    Both the number of liability cases and the size of individual claims due to alleged treatment errors are increasing steadily. Spectacular sentences, especially in the USA, encourage this trend. Wherever human beings work, errors happen. The health care system is particularly susceptible and shows a high potential for errors. Therefore risk management has to be given top priority in hospitals. Preparing the introduction of critical incident reporting (CIR) as the means to notify errors is time-consuming and calls for a change in attitude, because in many places the necessary base of trust has to be created first. CIR is not made to find the guilty and punish them but to uncover the origins of errors in order to eliminate them. The Department of Anesthesiology of the University Hospital of Basel has developed an electronic error notification system which, in collaboration with the Swiss Medical Association, allows each specialist society to participate electronically in a CIR system (CIRS) in order to create the largest database possible and thereby to allow statements concerning the extent and type of error sources in medicine. After a pilot project in 2000-2004, the Swiss Society of Gynecology and Obstetrics is now progressively introducing the 'CIRS Medical' of the Swiss Medical Association. In our country, such programs are vulnerable to judicial intervention due to the lack of explicit legal guarantees of protection. High-quality data registration and skillful counseling are all the more important. Hospital directors and managers are called upon to examine those incidents which are based on errors inherent in the system.

  10. A method for sensitivity analysis to assess the effects of measurement error in multiple exposure variables using external validation data.

    PubMed

    Agogo, George O; van der Voet, Hilko; van 't Veer, Pieter; Ferrari, Pietro; Muller, David C; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A; Boshuizen, Hendriek C

    2016-10-13

    Measurement error in self-reported dietary intakes is known to bias the association between dietary intake and a health outcome of interest such as risk of a disease. The association can be distorted further by mismeasured confounders, leading to invalid results and conclusions. It is, however, difficult to adjust for the bias in the association when there is no internal validation data. We proposed a method to adjust for the bias in the diet-disease association (hereafter, association), due to measurement error in dietary intake and a mismeasured confounder, when there is no internal validation data. The method combines prior information on the validity of the self-report instrument with the observed data to adjust for the bias in the association. We compared the proposed method with the method that ignores the confounder effect, and with the method that ignores measurement errors completely. We assessed the sensitivity of the estimates to various magnitudes of measurement error, error correlations and uncertainty in the literature-reported validation data. We applied the methods to fruits and vegetables (FV) intakes, cigarette smoking (confounder) and all-cause mortality data from the European Prospective Investigation into Cancer and Nutrition study. Using the proposed method resulted in about four times increase in the strength of association between FV intake and mortality. For weakly correlated errors, measurement error in the confounder minimally affected the hazard ratio estimate for FV intake. The effect was more pronounced for strong error correlations. The proposed method permits sensitivity analysis on measurement error structures and accounts for uncertainties in the reported validity coefficients. The method is useful in assessing the direction and quantifying the magnitude of bias in the association due to measurement errors in the confounders.
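
    The attenuation problem the abstract describes, and the idea of adjusting it with externally supplied validity information, can be sketched in a few lines. This is a minimal regression-calibration-style illustration on simulated data with an assumed validity coefficient `lam`; it is not the paper's actual method, which also handles a mismeasured confounder and uncertainty in the reported validity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: true exposure x, outcome y = beta*x + noise, and a
# self-reported exposure x_obs attenuated by validity coefficient lam
# plus independent measurement error.
beta_true = 0.5
lam = 0.6                                   # assumed validity of the self-report
x = rng.normal(0.0, 1.0, n)
y = beta_true * x + rng.normal(0.0, 1.0, n)
x_obs = lam * x + rng.normal(0.0, 1.0, n)

# Naive estimate: regress y on the mismeasured exposure (biased toward 0).
beta_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs)

# Regression-calibration style correction using the assumed lam:
# E[x | x_obs] is linear in x_obs, so divide out the attenuation factor.
atten = lam * np.var(x) / np.var(x_obs)     # lam * Var(x) / Var(x_obs)
beta_adj = beta_naive / atten

print(round(beta_naive, 3), round(beta_adj, 3))
```

    The naive slope is attenuated toward zero while the adjusted slope recovers the true association, which mirrors the roughly fourfold strengthening of the FV-mortality association the authors report after adjustment.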

  11. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.

  12. Analysis of the impact of error detection on computer performance

    NASA Technical Reports Server (NTRS)

    Shin, K. C.; Lee, Y. H.

    1983-01-01

    Conventionally, reliability analyses either assume that a fault/error is detected immediately following its occurrence, or neglect damages caused by latent errors. Though unrealistic, this assumption was imposed in order to avoid the difficulty of determining the respective probabilities that a fault induces an error and the error is then detected in a random amount of time after its occurrence. As a remedy for this problem a model is proposed to analyze the impact of error detection on computer performance under moderate assumptions. Error latency, the time interval between occurrence and the moment of detection, is used to measure the effectiveness of a detection mechanism. This model is used to: (1) predict the probability of producing an unreliable result, and (2) estimate the loss of computation due to fault and/or error.
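
    The role of error latency can be illustrated with a toy Monte Carlo. All parameters below are hypothetical, not taken from the paper: a task of length T may suffer a fault; the resulting error is detected only after a random latency. If detection falls inside the task, the computation since the fault is lost and redone; otherwise the task silently delivers an unreliable result:

```python
import random

random.seed(1)

T = 100.0            # task length (arbitrary units)
FAULT_RATE = 0.002   # faults per unit time
MEAN_LATENCY = 20.0  # mean error latency
TRIALS = 200_000

unreliable = 0       # runs that finish with an undetected (latent) error
lost = 0.0           # computation lost to detected errors, summed over runs
for _ in range(TRIALS):
    t_fault = random.expovariate(FAULT_RATE)
    if t_fault >= T:
        continue                      # fault occurs after the task finished
    latency = random.expovariate(1.0 / MEAN_LATENCY)
    if t_fault + latency < T:
        lost += T - t_fault           # detected in time: redo the tail
    else:
        unreliable += 1               # latent error escapes detection

p_unreliable = unreliable / TRIALS
print(f"P(unreliable result) ~ {p_unreliable:.4f}")
print(f"mean lost computation ~ {lost / TRIALS:.2f}")
```

    Shrinking the mean latency drives the unreliable-result probability down while increasing how promptly lost work is rolled back, which is the trade-off the latency-based model quantifies.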

  13. Deep data fusion method for missile-borne inertial/celestial system

    NASA Astrophysics Data System (ADS)

    Zhang, Chunxi; Chen, Xiaofei; Lu, Jiazhen; Zhang, Hao

    2018-05-01

    Strap-down inertial-celestial integrated navigation systems have the advantages of autonomy and high precision and are very useful for ballistic missiles. The star sensor installation error and the inertial measurement error have a great influence on system performance. Based on deep data fusion, this paper establishes measurement equations that include the star sensor installation error and proposes a deep fusion filter method. Simulations including misalignment error, star sensor installation error, and IMU error are analyzed. Simulation results indicate that the deep fusion method can estimate the star sensor installation error and the IMU error. Meanwhile, the method can suppress the misalignment errors caused by instrument errors.

  14. Improving Planck calibration by including frequency-dependent relativistic corrections

    NASA Astrophysics Data System (ADS)

    Quartin, Miguel; Notari, Alessio

    2015-09-01

    The Planck satellite detectors are calibrated in the 2015 release using the "orbital dipole", which is the time-dependent dipole generated by the Doppler effect due to the motion of the satellite around the Sun. Such an effect has also relativistic time-dependent corrections of relative magnitude 10⁻³, due to coupling with the "solar dipole" (the motion of the Sun compared to the CMB rest frame), which are included in the data calibration by the Planck collaboration. We point out that such corrections are subject to a frequency-dependent multiplicative factor. This factor differs from unity especially at the highest frequencies, relevant for the HFI instrument. Since currently Planck calibration errors are dominated by systematics, to the point that polarization data is currently unreliable at large scales, such a correction can in principle be highly relevant for future data releases.

  15. Analysis of space telescope data collection system

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Schoggen, W. O.

    1982-01-01

    An analysis of the expected performance for the Multiple Access (MA) system is provided. The analysis covers the expected bit error rate performance, the effects of synchronization loss, the problem of self-interference, and the problem of phase ambiguity. The problem of false acceptance of a command word due to data inversion is discussed. A mathematical determination of the probability of accepting an erroneous command word due to a data inversion is presented. The problem is examined for three cases: (1) a data inversion only, (2) a data inversion and a random error within the same command word, and (3) a block (up to 256 48-bit words) containing both a data inversion and a random error.
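
    The flavor of the first two cases can be conveyed with a toy Monte Carlo. Everything below is hypothetical (a random 48-bit command dictionary, not the actual Space Telescope command format or the report's analytic treatment): a "data inversion" complements every bit of a valid word, and we estimate how often the result, alone or with one extra random bit error, still lands on a valid command and would be falsely accepted:

```python
import random

random.seed(2)

N_BITS = 48
MASK = (1 << N_BITS) - 1
# Hypothetical command dictionary: 256 random 48-bit words.
commands = {random.getrandbits(N_BITS) for _ in range(256)}
cmd_list = list(commands)

trials, hits_inv, hits_inv_err = 50_000, 0, 0
for _ in range(trials):
    w = random.choice(cmd_list)
    inv = w ^ MASK                                # data inversion flips every bit
    err = inv ^ (1 << random.randrange(N_BITS))   # plus one random bit error
    hits_inv += inv in commands                   # case (1): inversion only
    hits_inv_err += err in commands               # case (2): inversion + bit error

print(hits_inv / trials, hits_inv_err / trials)
```

    With 256 valid words scattered over 2^48 patterns both rates are effectively zero; the acceptance probability grows with the density and structure of the valid-command set, which is the quantity a careful analytic treatment has to pin down case by case.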

  16. Effects of various experimental parameters on errors in triangulation solution of elongated object in space. [barium ion cloud

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1975-01-01

    The effects of various experimental parameters on the displacement errors in the triangulation solution of an elongated object in space due to pointing uncertainties in the lines of sight have been determined. These parameters were the number and location of observation stations, the object's location in latitude and longitude, and the spacing of the input data points on the azimuth-elevation image traces. The displacement errors due to uncertainties in the coordinates of a moving station have been determined as functions of the number and location of the stations. The effects of incorporating the input data from additional cameras at one of the stations were also investigated.

  17. Treatable newborn and infant seizures due to inborn errors of metabolism.

    PubMed

    Campistol, Jaume; Plecko, Barbara

    2015-09-01

    About 25% of seizures in the neonatal period have causes other than asphyxia, ischaemia or intracranial bleeding. Among these are primary genetic epileptic encephalopathies with sometimes poor prognosis and high mortality. In addition, some forms of neonatal and infant seizures are due to inborn errors of metabolism that do not respond to common AEDs but are amenable to specific treatment. In this situation, early recognition can allow seizure control and will prevent neurological deterioration and long-term sequelae. We review the group of inborn errors of metabolism that lead to newborn/infant seizures and epilepsy, in which treatment with cofactors differs markedly from typical epilepsy management.

  18. Application of optimal data assimilation techniques in oceanography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, R.N.

    Application of optimal data assimilation methods in oceanography is, if anything, more important than it is in numerical weather prediction, due to the sparsity of data. Here, a general framework is presented and practical examples taken from the author's work are described, with the purpose of conveying to the reader some idea of the state of the art of data assimilation in oceanography. While no attempt is made to be exhaustive, references to other lines of research are included. Major challenges to the community include design of statistical error models and handling of strong nonlinearity.

  19. The Deference Due the Oracle: Computerized Text Analysis in a Basic Writing Class.

    ERIC Educational Resources Information Center

    Otte, George

    1989-01-01

    Describes how a computerized text analysis program can help students discover error patterns in their writing, and notes how students' responses to analyses can reduce errors and improve their writing. (MM)

  20. Horizon sensor errors calculated by computer models compared with errors measured in orbit

    NASA Technical Reports Server (NTRS)

    Ward, K. A.; Hogan, R.; Andary, J.

    1982-01-01

    Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-A). The predicted performance is compared with actual flight history.

  1. Wavefront-Error Performance Characterization for the James Webb Space Telescope (JWST) Integrated Science Instrument Module (ISIM) Science Instruments

    NASA Technical Reports Server (NTRS)

    Aronstein, David L.; Smith, J. Scott; Zielinski, Thomas P.; Telfer, Randal; Tournois, Severine C.; Moore, Dustin B.; Fienup, James R.

    2016-01-01

    The science instruments (SIs) comprising the James Webb Space Telescope (JWST) Integrated Science Instrument Module (ISIM) were tested in three cryogenic-vacuum test campaigns in the NASA Goddard Space Flight Center (GSFC)'s Space Environment Simulator (SES). In this paper, we describe the results of optical wavefront-error performance characterization of the SIs. The wavefront error is determined using image-based wavefront sensing (also known as phase retrieval), and the primary data used by this process are focus sweeps, a series of images recorded by the instrument under test in its as-used configuration, in which the focal plane is systematically changed from one image to the next. High-precision determination of the wavefront error also requires several sources of secondary data, including 1) spectrum, apodization, and wavefront-error characterization of the optical ground-support equipment (OGSE) illumination module, called the OTE Simulator (OSIM), 2) plate scale measurements made using a Pseudo-Nonredundant Mask (PNRM), and 3) pupil geometry predictions as a function of SI and field point, which are complicated because of a tricontagon-shaped outer perimeter and small holes that appear in the exit pupil due to the way that different light sources are injected into the optical path by the OGSE. One set of wavefront-error tests, for the coronagraphic channel of the Near-Infrared Camera (NIRCam) Longwave instruments, was performed using data from transverse translation diversity sweeps instead of focus sweeps, in which a sub-aperture is translated and/or rotated across the exit pupil of the system. Several optical-performance requirements that were verified during this ISIM-level testing are levied on the uncertainties of various wavefront-error-related quantities rather than on the wavefront errors themselves.
This paper also describes the methodology, based on Monte Carlo simulations of the wavefront-sensing analysis of focus-sweep data, used to establish the uncertainties of the wavefront error maps.

  2. Wavefront-Error Performance Characterization for the James Webb Space Telescope (JWST) Integrated Science Instrument Module (ISIM) Science Instruments

    NASA Technical Reports Server (NTRS)

    Aronstein, David L.; Smith, J. Scott; Zielinski, Thomas P.; Telfer, Randal; Tournois, Severine C.; Moore, Dustin B.; Fienup, James R.

    2016-01-01

    The science instruments (SIs) comprising the James Webb Space Telescope (JWST) Integrated Science Instrument Module (ISIM) were tested in three cryogenic-vacuum test campaigns in the NASA Goddard Space Flight Center (GSFC)'s Space Environment Simulator (SES) test chamber. In this paper, we describe the results of optical wavefront-error performance characterization of the SIs. The wavefront error is determined using image-based wavefront sensing, and the primary data used by this process are focus sweeps, a series of images recorded by the instrument under test in its as-used configuration, in which the focal plane is systematically changed from one image to the next. High-precision determination of the wavefront error also requires several sources of secondary data, including 1) spectrum, apodization, and wavefront-error characterization of the optical ground-support equipment (OGSE) illumination module, called the OTE Simulator (OSIM), 2) F-number and pupil-distortion measurements made using a pseudo-nonredundant mask (PNRM), and 3) pupil geometry predictions as a function of SI and field point, which are complicated because of a tricontagon-shaped outer perimeter and small holes that appear in the exit pupil due to the way that different light sources are injected into the optical path by the OGSE. One set of wavefront-error tests, for the coronagraphic channel of the Near-Infrared Camera (NIRCam) Longwave instruments, was performed using data from transverse translation diversity sweeps instead of focus sweeps, in which a sub-aperture is translated and/or rotated across the exit pupil of the system. Several optical-performance requirements that were verified during this ISIM-level testing are levied on the uncertainties of various wavefront-error-related quantities rather than on the wavefront errors themselves. 
This paper also describes the methodology, based on Monte Carlo simulations of the wavefront-sensing analysis of focus-sweep data, used to establish the uncertainties of the wavefront-error maps.
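    The Monte Carlo approach to establishing estimator uncertainties can be illustrated with a minimal sketch (a stand-in, not the authors' wavefront-sensing pipeline): a toy estimator is run over many synthetic noise realizations, and the spread of its outputs gives its 1-sigma uncertainty. The noise level, measurement counts and trial counts below are all hypothetical.

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 0.5    # hypothetical wavefront-error coefficient (waves rms)
NOISE_SIGMA = 0.05  # assumed measurement noise per image
N_MEAS = 25         # images per focus sweep
N_TRIALS = 2000     # Monte Carlo noise realizations

def estimate(sweep):
    # Stand-in for the wavefront-sensing analysis: here, just the sample mean
    return sum(sweep) / len(sweep)

estimates = []
for _ in range(N_TRIALS):
    sweep = [TRUE_VALUE + random.gauss(0.0, NOISE_SIGMA) for _ in range(N_MEAS)]
    estimates.append(estimate(sweep))

# Spread of the estimator over noise realizations = its 1-sigma uncertainty
mc_uncertainty = statistics.stdev(estimates)
```

    The same scheme scales to map-valued estimators: rerun the analysis per realization and take the per-pixel spread.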

  3. Decomposition of Sources of Errors in Seasonal Streamflow Forecasting over the U.S. Sunbelt

    NASA Technical Reports Server (NTRS)

    Mazrooei, Amirhossein; Sinah, Tusshar; Sankarasubramanian, A.; Kumar, Sujay V.; Peters-Lidard, Christa D.

    2015-01-01

    Seasonal streamflow forecasts, contingent on climate information, can be utilized to ensure water supply for multiple uses, including municipal demands, hydroelectric power generation, and planning of agricultural operations. However, uncertainties in the streamflow forecasts pose significant challenges to their utilization in real-time operations. In this study, we systematically decompose the various sources of error in developing seasonal streamflow forecasts from two Land Surface Models (LSMs), Noah3.2 and CLM2, which are forced with downscaled and disaggregated climate forecasts. In particular, the study quantifies the relative contributions of errors from the LSMs, the climate forecasts, and the downscaling/disaggregation techniques to the seasonal streamflow forecasts. For this purpose, three-month-ahead seasonal precipitation forecasts from the ECHAM4.5 general circulation model (GCM) were statistically downscaled from 2.8deg to 1/8deg spatial resolution using principal component regression (PCR) and then temporally disaggregated from a monthly to a daily time step using a kernel nearest-neighbor (K-NN) approach. For the other climatic forcings, excluding precipitation, we used the North American Land Data Assimilation System version 2 (NLDAS-2) hourly climatology over the years 1979 to 2010. The selected LSMs were then forced with the precipitation forecasts and the NLDAS-2 hourly climatology to develop retrospective seasonal streamflow forecasts over a period of 20 years (1991-2010). Finally, the performance of the LSMs in forecasting streamflow under the different schemes was analyzed to quantify the relative contribution of each source of error. Our results indicate that the most dominant source of error during the winter and fall seasons is the ECHAM4.5 precipitation forecasts, while the temporal disaggregation scheme contributes the most error during the summer season.
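    A K-NN style temporal disaggregation step can be sketched minimally (this is an illustration, not the authors' exact scheme): a forecast monthly total borrows the daily pattern of one of its k nearest historical months, rescaled to match the forecast exactly. The historical record here is synthetic.

```python
import random

random.seed(1)

# Hypothetical historical record: 20 months of daily precipitation (mm/day)
history = [[random.uniform(0.0, 10.0) for _ in range(30)] for _ in range(20)]

def knn_disaggregate(monthly_total, history, k=3):
    """Borrow the daily pattern of one of the k historical months whose
    totals are closest to the forecast total, then rescale it to match."""
    ranked = sorted(history, key=lambda days: abs(sum(days) - monthly_total))
    neighbor = random.choice(ranked[:k])   # sample one of the k nearest
    scale = monthly_total / sum(neighbor)
    return [d * scale for d in neighbor]

daily = knn_disaggregate(120.0, history)
```

    Sampling among the k nearest months (rather than always the single nearest) preserves some of the natural variability of daily patterns.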

  4. Improved Uncertainty Quantification in Groundwater Flux Estimation Using GRACE

    NASA Astrophysics Data System (ADS)

    Reager, J. T., II; Rao, P.; Famiglietti, J. S.; Turmon, M.

    2015-12-01

    Groundwater change is difficult to monitor over large scales. One of the most successful approaches is the remote sensing of time-variable gravity using NASA Gravity Recovery and Climate Experiment (GRACE) mission data, and successful case studies have created the opportunity to move towards a global groundwater monitoring framework for the world's largest aquifers. To achieve these estimates, several approximations are applied, including those in GRACE processing corrections, the formulation of the formal GRACE errors, destriping and signal recovery, and the numerical model estimation of the snow water, surface water and soil moisture storage states used to isolate a groundwater component. A major weakness in these approaches is inconsistency: different studies have used different sources of primary and ancillary data, and may achieve different results based on alternative choices in these approximations. In this study, we present two cases of groundwater change estimation, in California and the Colorado River basin, selected for their good data availability and varied climates. We achieve a robust numerical estimate of the post-processing uncertainties resulting from land-surface model structural shortcomings and model resolution errors. Groundwater variations should demonstrate less variability than the overlying soil moisture state does, as groundwater has a longer memory of past events due to buffering by infiltration and drainage rate limits. We apply a model ensemble approach in a Bayesian framework constrained by the assumption of decreasing signal variability with depth in the soil column. We also discuss time-variable vs. time-constant errors, across-scale vs. across-model errors, and error spectral content (across scales and across models).
More robust uncertainty quantification for GRACE-based groundwater estimates would take all of these issues into account, allowing fairer use in management applications and better integration of GRACE-based measurements with observations from other sources.

  5. 46 CFR Exhibit No. 1 to Subpart Q... - Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 9 2012-10-01 2012-10-01 false Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error No. Exhibit No. 1 to Subpart Q [§ 502.271(d)] of Part 502 Shipping... or Waiver of Freight Charges Pt. 502, Subpt. Q, Exh. 1 Exhibit No. 1 to Subpart Q [§ 502.271(d)] of...

  6. 46 CFR Exhibit No. 1 to Subpart Q... - Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 9 2013-10-01 2013-10-01 false Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error No. Exhibit No. 1 to Subpart Q [§ 502.271(d)] of Part 502 Shipping... or Waiver of Freight Charges Pt. 502, Subpt. Q, Exh. 1 Exhibit No. 1 to Subpart Q [§ 502.271(d)] of...

  7. 46 CFR Exhibit No. 1 to Subpart Q... - Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 9 2014-10-01 2014-10-01 false Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error No. Exhibit No. 1 to Subpart Q [§ 502.271(d)] of Part 502 Shipping... or Waiver of Freight Charges Pt. 502, Subpt. Q, Exh. 1 Exhibit No. 1 to Subpart Q [§ 502.271(d)] of...

  8. 46 CFR Exhibit No. 1 to Subpart Q... - Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 9 2010-10-01 2010-10-01 false Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error No. Exhibit No. 1 to Subpart Q [§ 502.271(d)] of Part 502 Shipping... or Waiver of Freight Charges Pt. 502, Subpt. Q, Exh. 1 Exhibit No. 1 to Subpart Q [§ 502.271(d)] of...

  9. 46 CFR Exhibit No. 1 to Subpart Q... - Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 9 2011-10-01 2011-10-01 false Application for Refund or Waiver of Freight Charges Due to Tariff or Quoting Error No. Exhibit No. 1 to Subpart Q [§ 502.271(d)] of Part 502 Shipping... or Waiver of Freight Charges Pt. 502, Subpt. Q, Exh. 1 Exhibit No. 1 to Subpart Q [§ 502.271(d)] of...

  10. PLATFORM DEFORMATION PHASE CORRECTION FOR THE AMiBA-13 COPLANAR INTERFEROMETER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Yu-Wei; Lin, Kai-Yang; Huang, Yau-De

    2013-05-20

    We present a new way to solve the platform deformation problem of coplanar interferometers. The platform of a coplanar interferometer can be deformed by driving forces and gravity. A deformed platform induces extra components into the geometric delay of each baseline and changes the phases of the observed visibilities; the reconstructed images are in turn diluted by these phase errors. The platform deformations of the Yuan-Tseh Lee Array for Microwave Background Anisotropy (AMiBA) were modeled based on photogrammetry data from about 20 mount pointing positions. We then used the differential optical pointing error between two optical telescopes to fit the model parameters over the entire horizontal coordinate space. With the platform deformation model, we can predict the errors of the geometric phase delays due to platform deformation for a given azimuth and elevation of the targets and calibrators. After correcting the phases of the radio point sources in the AMiBA interferometric data, we recover the 50%-70% flux loss due to phase errors, allowing us to restore more than 90% of a source's flux. The method outlined in this work is applicable not only to the correction of deformation for other coplanar telescopes but also to single-dish telescopes with deformation problems. This work also forms the basis of the upcoming science results of AMiBA-13.
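    The phase-correction step can be sketched as follows, under the simplifying assumptions of a unit point source and perfectly known per-baseline delay errors (the wavelength and delay statistics are hypothetical, not AMiBA values): a delay error d rotates a visibility's phase by 2*pi*d/lambda, so multiplying by the conjugate of the model-predicted phase restores the coherently averaged flux.

```python
import cmath
import math
import random

random.seed(2)

WAVELENGTH = 0.003  # 3 mm observing wavelength (assumed)
TRUE_FLUX = 1.0     # unit point source at the phase center

# Hypothetical per-baseline geometric-delay errors from deformation (metres)
delay_errors = [random.gauss(0.0, 4e-4) for _ in range(100)]

# Observed visibilities: each delay error rotates the visibility phase
raw = [TRUE_FLUX * cmath.exp(2j * math.pi * d / WAVELENGTH)
       for d in delay_errors]

# Coherent average with uncorrected phases: the recovered flux is diluted
diluted_flux = abs(sum(raw)) / len(raw)

# Multiply by the conjugate of the model-predicted phase, then re-average
corrected = [v * cmath.exp(-2j * math.pi * d / WAVELENGTH)
             for v, d in zip(raw, delay_errors)]
recovered_flux = abs(sum(corrected)) / len(corrected)
```

    In practice the correction uses modeled, not true, delays, so recovery is partial rather than exact.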

  11. Error Patterns in Ordering Fractions among At-Risk Fourth-Grade Students

    PubMed Central

    Malone, Amelia S.; Fuchs, Lynn S.

    2016-01-01

    The three purposes of this study were to: (a) describe fraction ordering errors among at-risk 4th-grade students; (b) assess the effect of part-whole understanding and accuracy of fraction magnitude estimation on the probability of committing errors; and (c) examine the effect of students' ability to explain comparing problems on the probability of committing errors. Students (n = 227) completed a 9-item ordering test. A high proportion (81%) of problems were completed incorrectly. Most errors (65%) were due to students misapplying whole-number logic to fractions. Fraction-magnitude estimation skill, but not part-whole understanding, significantly predicted the probability of committing this type of error. Implications for practice are discussed. PMID:26966153

  12. SBAR improves communication and safety climate and decreases incident reports due to communication errors in an anaesthetic clinic: a prospective intervention study.

    PubMed

    Randmaa, Maria; Mårtensson, Gunilla; Leo Swenne, Christine; Engström, Maria

    2014-01-21

    We aimed to examine staff members' perceptions of communication within and between different professions, safety attitudes and psychological empowerment, prior to and after implementation of the communication tool Situation-Background-Assessment-Recommendation (SBAR) at an anaesthetic clinic. The aim was also to study whether there was any change in the proportion of incident reports caused by communication errors. A prospective intervention study with comparison group using preassessments and postassessments. Questionnaire data were collected from staff in an intervention (n=100) and a comparison group (n=69) at the anaesthetic clinic in two hospitals prior to (2011) and after (2012) implementation of SBAR. The proportion of incident reports due to communication errors was calculated during a 1-year period prior to and after implementation. Anaesthetic clinics at two hospitals in Sweden. All licensed practical nurses, registered nurses and physicians working in the operating theatres, intensive care units and postanaesthesia care units at anaesthetic clinics in two hospitals were invited to participate. Implementation of SBAR in an anaesthetic clinic. The primary outcomes were staff members' perception of communication within and between different professions, as well as their perceptions of safety attitudes. Secondary outcomes were psychological empowerment and incident reports due to error of communication. In the intervention group, there were statistically significant improvements in the factors 'Between-group communication accuracy' (p=0.039) and 'Safety climate' (p=0.011). The proportion of incident reports due to communication errors decreased significantly (p<0.0001) in the intervention group, from 31% to 11%. 
Implementing the communication tool SBAR in anaesthetic clinics was associated with improvement in staff members' perception of communication between professionals and their perception of the safety climate as well as with a decreased proportion of incident reports related to communication errors. ISRCTN37251313.

  13. Errors in fluid therapy in medical wards.

    PubMed

    Mousavi, Maryam; Khalili, Hossein; Dashti-Khavidaki, Simin

    2012-04-01

    Intravenous fluid therapy remains an essential part of patients' care during hospitalization. There are only a few studies that focus on fluid therapy in hospitalized patients, and there is no consensus statement about fluid therapy in patients who are hospitalized in medical wards. The aim of the present study was to assess intravenous fluid therapy status and related errors in patients during the course of hospitalization in the infectious diseases wards of a referral teaching hospital. This study was conducted in the infectious diseases wards of Imam Khomeini Complex Hospital, Tehran, Iran. During a retrospective study, data related to intravenous fluid therapy were collected by two clinical pharmacists of infectious diseases from 2008 to 2010. Intravenous fluid therapy information, including indication, type, volume and rate of fluid administration, was recorded for each patient. An internal protocol for intravenous fluid therapy was designed based on a literature review and available recommendations. The data related to the patients' fluid therapy were compared with this protocol. The fluid therapy was considered appropriate if it was compatible with the protocol regarding the indication of intravenous fluid therapy and the type, electrolyte content and rate of fluid administration. Any mistake in the selection of fluid type, content, volume or rate of administration was considered an intravenous fluid therapy error. Five hundred and ninety-six medication errors were detected in the patients during the study period. The overall rate of fluid therapy errors was 1.3 errors per patient during hospitalization. Errors in the rate of fluid administration (29.8%), incorrect fluid volume calculation (26.5%) and incorrect type of fluid selection (24.6%) were the most common types of errors.
Male sex, old age, baseline renal disease, diabetes co-morbidity, and hospitalization due to endocarditis, HIV infection or sepsis were predisposing factors for the occurrence of fluid therapy errors. Our results showed that intravenous fluid therapy errors occurred commonly in hospitalized patients, especially in medical wards. Improving health-care workers' knowledge of and attention to these errors is essential for preventing medication errors in fluid therapy.

  14. Determination of the optical properties of semi-infinite turbid media from frequency-domain reflectance close to the source.

    PubMed

    Kienle, A; Patterson, M S

    1997-09-01

    We investigate theoretically the errors in determining the reduced scattering and absorption coefficients of semi-infinite turbid media from frequency-domain reflectance measurements made at small distances between the source and the detector(s). The errors are due to the uncertainties in the measurement of the phase, the modulation and the steady-state reflectance as well as to the diffusion approximation which is used as a theoretical model to describe light propagation in tissue. Configurations using one and two detectors are examined for the measurement of the phase and the modulation and for the measurement of the phase and the steady-state reflectance. Three solutions of the diffusion equation are investigated. We show that measurements of the phase and the steady-state reflectance at two different distances are best suited for the determination of the optical properties close to the source. For this arrangement the errors in the absorption coefficient due to typical uncertainties in the measurement are greater than those resulting from the application of the diffusion approximation at a modulation frequency of 200 MHz. A Monte Carlo approach is also examined; this avoids the errors due to the diffusion approximation.
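    The two-distance steady-state measurement can be illustrated with the far-field diffusion-approximation form R(rho) proportional to exp(-mu_eff*rho)/rho^2, a simplification of the full solutions examined in the paper; the optical properties below are hypothetical, and the inversion uses the common assumption mu_a << mu_s'.

```python
import math

MUA = 0.01   # absorption coefficient, 1/mm (assumed)
MUSP = 1.0   # reduced scattering coefficient, 1/mm (assumed)
MU_EFF = math.sqrt(3.0 * MUA * (MUA + MUSP))

def reflectance(rho):
    # Far-field diffusion-approximation shape: R ~ exp(-mu_eff*rho)/rho^2
    return math.exp(-MU_EFF * rho) / rho ** 2

r1, r2 = 5.0, 10.0  # two source-detector distances (mm)
R1, R2 = reflectance(r1), reflectance(r2)

# Two-distance inversion: the slope of ln(rho^2 * R) gives mu_eff ...
mu_eff_est = (math.log(r1 ** 2 * R1) - math.log(r2 ** 2 * R2)) / (r2 - r1)
# ... and mu_a follows from mu_eff^2 = 3*mu_a*(mu_a + mu_s'), here with
# the simplification mu_a << mu_s'
mu_a_est = mu_eff_est ** 2 / (3.0 * MUSP)
```

    Close to the source this far-field form breaks down, which is exactly why the paper analyzes the errors of more complete diffusion solutions and Monte Carlo.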

  15. Evaluation of a UMLS Auditing Process of Semantic Type Assignments

    PubMed Central

    Gu, Huanying; Hripcsak, George; Chen, Yan; Morrey, C. Paul; Elhanan, Gai; Cimino, James J.; Geller, James; Perl, Yehoshua

    2007-01-01

    The UMLS is a terminological system that integrates many source terminologies. Each concept in the UMLS is assigned one or more semantic types from the Semantic Network, an upper level ontology for biomedicine. Due to the complexity of the UMLS, errors exist in the semantic type assignments. Finding assignment errors may unearth modeling errors. Even with sophisticated tools, discovering assignment errors requires manual review. In this paper we describe the evaluation of an auditing project of UMLS semantic type assignments. We studied the performance of the auditors who reviewed potential errors. We found that four auditors, interacting according to a multi-step protocol, identified a high rate of errors (one or more errors in 81% of concepts studied) and that results were sufficiently reliable (0.67 to 0.70) for the two most common types of errors. However, reliability was low for each individual auditor, suggesting that review of potential errors is resource-intensive. PMID:18693845
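    Inter-rater reliability figures of the kind reported here are commonly summarized with Cohen's kappa; the sketch below (with invented judgments, not the study's data) shows the chance-corrected agreement computation for two auditors.

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' judgments."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    labels = set(rater_a) | set(rater_b)
    expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
                   for l in labels)
    return (observed - expected) / (1.0 - expected)

# Hypothetical error / no-error judgments by two auditors on 10 concepts
auditor_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
auditor_2 = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
kappa = cohens_kappa(auditor_1, auditor_2)
```

    Kappa discounts the agreement two raters would reach by chance alone, which raw percent agreement overstates when one label dominates.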

  16. Improved astigmatic focus error detection method

    NASA Technical Reports Server (NTRS)

    Bernacki, Bruce E.

    1992-01-01

    All easy-to-implement focus- and track-error detection methods presently used in magneto-optical (MO) disk drives using pre-grooved media suffer from a side effect known as feedthrough. Feedthrough is the unwanted focus error signal (FES) produced when the optical head is seeking a new track, and light refracted from the pre-grooved disk produces an erroneous FES. Some focus and track-error detection methods are more resistant to feedthrough, but tend to be complicated and/or difficult to keep in alignment as a result of environmental insults. The astigmatic focus/push-pull tracking method is an elegant, easy-to-align focus- and track-error detection method. Unfortunately, it is also highly susceptible to feedthrough when astigmatism is present, with the worst effects caused by astigmatism oriented such that the tangential and sagittal foci are at 45 deg to the track direction. This disclosure outlines a method to nearly completely eliminate the worst-case form of feedthrough due to astigmatism oriented 45 deg to the track direction. Feedthrough due to other primary aberrations is not improved, but performance is identical to the unimproved astigmatic method.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Batista, Antonio J. N.; Santos, Bruno; Fernandes, Ana

    The data acquisition and control instrumentation cubicle room of the ITER tokamak will be irradiated with neutrons during fusion reactor operation. A Virtex-6 FPGA from Xilinx (XC6VLX365T-1FFG1156C) is used on the ATCA-IO-PROCESSOR board, included in the ITER Catalog of I and C products - Fast Controllers. The Virtex-6 is a re-programmable logic device whose configuration is stored in Static RAM (SRAM), with functional data stored in dedicated Block RAM (BRAM) and functional state logic in flip-flops. Single Event Upsets (SEUs) due to the ionizing radiation of neutrons cause soft errors: unintended changes (bit-flips) to the values stored in state elements of the FPGA. SEU monitoring and, where possible, soft-error repair were explored in this work. An FPGA built-in Soft Error Mitigation (SEM) controller detects and corrects soft errors in the FPGA configuration memory. Novel SEU sensors with Error Correction Code (ECC) detect and repair the BRAM memories. Proper management of SEUs can increase the reliability and availability of control instrumentation hardware for nuclear applications. The results of tests performed using the SEM controller and the BRAM SEU sensors are presented for a Virtex-6 FPGA (XC6VLX240T-1FFG1156C) irradiated with neutrons from the Portuguese Research Reactor (RPI), a 1 MW nuclear fission reactor operated by IST in the neighborhood of Lisbon. Results show that the proposed SEU mitigation technique is able to repair the majority of the detected SEU errors in the configuration and BRAM memories. (authors)
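    Memory ECC of the kind described reduces, in its simplest form, to a Hamming single-error-correcting code. The sketch below implements the classic Hamming(7,4) code to show how a parity syndrome locates and repairs one bit-flip; it is a textbook illustration, not the authors' BRAM sensor design.

```python
def hamming74_encode(d1, d2, d3, d4):
    """Encode 4 data bits into 7, with parity bits at positions 1, 2 and 4."""
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]  # 1-based bit positions 1..7

def hamming74_correct(codeword):
    """Recompute the parity checks; the syndrome is the 1-based index
    of a single flipped bit (0 means the word is clean)."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:                 # SEU detected: repair the flipped bit
        c[syndrome - 1] ^= 1
    return c

word = hamming74_encode(1, 0, 1, 1)
upset = list(word)
upset[4] ^= 1                    # simulate a single-event upset
repaired = hamming74_correct(upset)
```

    Production BRAM ECC typically uses SECDED variants (an extra overall parity bit) so double-bit upsets are at least detected.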

  18. Prevalence of refractive error in Europe: the European Eye Epidemiology (E(3)) Consortium.

    PubMed

    Williams, Katie M; Verhoeven, Virginie J M; Cumberland, Phillippa; Bertelsen, Geir; Wolfram, Christian; Buitendijk, Gabriëlle H S; Hofman, Albert; van Duijn, Cornelia M; Vingerling, Johannes R; Kuijpers, Robert W A M; Höhn, René; Mirshahi, Alireza; Khawaja, Anthony P; Luben, Robert N; Erke, Maja Gran; von Hanno, Therese; Mahroo, Omar; Hogg, Ruth; Gieger, Christian; Cougnard-Grégoire, Audrey; Anastasopoulos, Eleftherios; Bron, Alain; Dartigues, Jean-François; Korobelnik, Jean-François; Creuzot-Garcher, Catherine; Topouzis, Fotis; Delcourt, Cécile; Rahi, Jugnoo; Meitinger, Thomas; Fletcher, Astrid; Foster, Paul J; Pfeiffer, Norbert; Klaver, Caroline C W; Hammond, Christopher J

    2015-04-01

    To estimate the prevalence of refractive error in adults across Europe. Refractive data (mean spherical equivalent) collected between 1990 and 2013 from fifteen population-based cohort and cross-sectional studies of the European Eye Epidemiology (E(3)) Consortium were combined in a random effects meta-analysis stratified by 5-year age intervals and gender. Participants were excluded if they were identified as having had cataract surgery, retinal detachment, refractive surgery or other factors that might influence refraction. Estimates of refractive error prevalence were obtained using the following classifications: myopia ≤-0.75 diopters (D), high myopia ≤-6D, hyperopia ≥1D and astigmatism ≥1D. Meta-analysis of refractive error was performed for 61,946 individuals from fifteen studies with median ages ranging from 44 to 81 years and minimal ethnic variation (98 % European ancestry). The age-standardised prevalences (using the 2010 European Standard Population, limited to those ≥25 and <90 years old) were: myopia 30.6 % [95 % confidence interval (CI) 30.4-30.9], high myopia 2.7 % (95 % CI 2.69-2.73), hyperopia 25.2 % (95 % CI 25.0-25.4) and astigmatism 23.9 % (95 % CI 23.7-24.1). Age-specific estimates revealed a high prevalence of myopia in younger participants [47.2 % (CI 41.8-52.5) in 25-29 year-olds]. Refractive error affects just over half of European adults. The greatest burden of refractive error is due to myopia, with high prevalence rates in young adults. Using the 2010 European population estimates, we estimate there are 227.2 million people with myopia across Europe.
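    Random-effects pooling of the kind used here can be sketched with the DerSimonian-Laird estimator; the per-study prevalences and sample sizes below are invented for illustration, not consortium data.

```python
import math

# Hypothetical per-study myopia prevalences and sample sizes
prev = [0.28, 0.33, 0.28, 0.35, 0.30]
size = [2000, 1500, 3000, 1200, 2500]

# Within-study variance of a proportion: p(1-p)/n
v = [p * (1.0 - p) / n for p, n in zip(prev, size)]
w = [1.0 / vi for vi in v]                     # fixed-effect weights

fixed = sum(wi * pi for wi, pi in zip(w, prev)) / sum(w)

# DerSimonian-Laird between-study variance tau^2
q = sum(wi * (pi - fixed) ** 2 for wi, pi in zip(w, prev))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(prev) - 1)) / c)

# Random-effects pooled prevalence with 95% confidence interval
w_re = [1.0 / (vi + tau2) for vi in v]
pooled = sum(wi * pi for wi, pi in zip(w_re, prev)) / sum(w_re)
se = math.sqrt(1.0 / sum(w_re))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
```

    Adding tau^2 to each study's variance down-weights large studies relative to a fixed-effect analysis when between-study heterogeneity is present.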

  19. Uncertainty issues in forest monitoring: All you wanted to know about uncertainties and never dared to ask

    Treesearch

    Michael Köhl; Charles Scott; Daniel Plugge

    2013-01-01

    Uncertainties are a composite of errors arising from observations and the appropriateness of models. An error budget approach can be used to identify and accumulate the sources of errors to estimate change in emissions between two points in time. Various forest monitoring approaches can be used to estimate the changes in emissions due to deforestation and forest...
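    Assuming independent error sources (the error budget approach itself need not assume this), accumulating them in quadrature can be sketched as follows; the source names and magnitudes are hypothetical.

```python
import math

# Hypothetical 1-sigma errors for an emissions-change estimate (t CO2e)
error_sources = {
    "sampling, time 1": 1200.0,
    "sampling, time 2": 1100.0,
    "allometric model": 800.0,
    "area estimation": 500.0,
}

# Independent error sources accumulate in quadrature
total_error = math.sqrt(sum(s ** 2 for s in error_sources.values()))

# Each source's share of the total error variance identifies where
# additional measurement effort pays off most
share = {k: (s / total_error) ** 2 for k, s in error_sources.items()}
```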

  20. Error processing in current and former cocaine users

    PubMed Central

    Castelluccio, Brian C.; Meda, Shashwath A.; Muska, Christine E.; Stevens, Michael C.; Pearlson, Godfrey D.

    2013-01-01

    Deficits in response inhibition and error processing can result in maladaptive behavior, including failure to use past mistakes to inform present decisions. A specific deficit in inhibiting a prepotent response represents one aspect of impulsivity and is a prominent feature of addictive behaviors in general, including cocaine abuse/dependence. Brain regions implicated in cognitive control exhibit reduced activation in cocaine abusers. The purposes of the present investigation were (1) to identify neural differences associated with error processing in current and former cocaine-dependent individuals compared to healthy controls and (2) to determine whether former, long-term abstinent cocaine users showed similar differences compared with current users. The present study used an fMRI Go/No-Go task to investigate differences in BOLD response to correct rejections and false alarms between current cocaine users (n=30), former cocaine users (n=29), and healthy controls (n=35). Impulsivity trait measures were also assessed and compared with BOLD activity. Nineteen regions of interest previously implicated in errors of disinhibition were queried. There were no group differences in the correct rejections condition, but both current and former users exhibited increased BOLD response relative to controls for false alarms. In current users, the pregenual cingulate gyrus and left angular/supramarginal gyri overactivated. In former users, the right middle frontal/precentral gyri, right inferior parietal lobule, and left angular/supramarginal gyri overactivated. Overall, our results support a hypothesis that neural activity in former users differs more from healthy controls than that of current users due to cognitive compensation that facilitates abstinence. PMID:23949893

  1. Three-dimensional modeling of the cochlea by use of an arc fitting approach.

    PubMed

    Schurzig, Daniel; Lexow, G Jakob; Majdani, Omid; Lenarz, Thomas; Rau, Thomas S

    2016-12-01

    A cochlea modeling approach is presented that allows for a user-defined degree of geometry simplification which automatically adjusts to the patient-specific anatomy. Model generation can be performed in a straightforward manner due to error estimation prior to the actual generation, thus minimizing modeling time. The presented technique is therefore well suited for a wide range of applications, including finite element analyses, where geometrical simplifications are often inevitable. The method is presented for n=5 cochleae, which were segmented using custom software for increased accuracy. The linear basilar membrane cross sections are expanded to areas while the scalae contours are reconstructed by a predefined number of arc segments. Prior to model generation, geometrical errors are evaluated locally for each cross section as well as globally for the resulting models and their basal turn profiles. The final combination of all reconditioned features into a 3D volume is performed in Autodesk Inventor using the loft feature. Because the volume generation is based on cubic splines, low errors could be achieved even for low numbers of arc segments and provided cross sections, both of which correspond to a strong degree of model simplification. Model generation could be performed in a time-efficient manner. The proposed simplification method was proven to be well suited for the helical cochlea geometry. The generated output data can be imported into commercial software tools for various analyses, representing a time-efficient way to create cochlea models optimally suited for the desired task.
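    Reconstructing a contour by arc segments rests on fitting circles to point runs; a minimal sketch using the algebraic Kasa least-squares fit (an illustrative choice, not necessarily the authors' method) recovers a circle's center and radius from points sampled along an arc.

```python
import math

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

def fit_circle(points):
    """Kasa fit: least-squares solution of x^2 + y^2 + D*x + E*y + F = 0."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    z = [x * x + y * y for x, y in points]
    sxz = sum(x * zi for (x, _), zi in zip(points, z))
    syz = sum(y * zi for (_, y), zi in zip(points, z))
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    b = [-sxz, -syz, -sum(z)]
    D, E, F = solve3(A, b)
    cx, cy = -D / 2.0, -E / 2.0
    return cx, cy, math.sqrt(cx * cx + cy * cy - F)

# Points sampled from a quarter-turn arc: center (2, -1), radius 3
arc = [(2 + 3 * math.cos(t / 20), -1 + 3 * math.sin(t / 20)) for t in range(31)]
cx, cy, r = fit_circle(arc)
```

    The Kasa fit is linear and fast but slightly biased toward small radii for short, noisy arcs; geometric (Taubin-style) fits trade speed for accuracy there.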

  2. Underestimating the effects of spatial heterogeneity due to individual movement and spatial scale: infectious disease as an example

    USGS Publications Warehouse

    Cross, Paul C.; Caillaud, Damien; Heisey, Dennis M.

    2013-01-01

    Many ecological and epidemiological studies occur in systems with mobile individuals and heterogeneous landscapes. Using a simulation model, we show that the accuracy of inferring an underlying biological process from observational data depends on movement and spatial scale of the analysis. As an example, we focused on estimating the relationship between host density and pathogen transmission. Observational data can result in highly biased inference about the underlying process when individuals move among sampling areas. Even without sampling error, the effect of host density on disease transmission is underestimated by approximately 50 % when one in ten hosts move among sampling areas per lifetime. Aggregating data across larger regions causes minimal bias when host movement is low, and results in less biased inference when movement rates are high. However, increasing data aggregation reduces the observed spatial variation, which would lead to the misperception that a spatially targeted control effort may not be very effective. In addition, averaging over the local heterogeneity will result in underestimating the importance of spatial covariates. Minimizing the bias due to movement is not just about choosing the best spatial scale for analysis, but also about reducing the error associated with using the sampling location as a proxy for an individual’s spatial history. This error associated with the exposure covariate can be reduced by choosing sampling regions with less movement, including longitudinal information of individuals’ movements, or reducing the window of exposure by using repeated sampling or younger individuals.
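    The attenuation caused by using the sampling location as a proxy for an individual's true exposure can be demonstrated with a toy errors-in-variables simulation, far simpler than the authors' spatial model (all parameters here are invented): when a fraction of hosts is sampled away from home, the regression slope shrinks toward that fraction's complement of the true effect.

```python
import random

random.seed(3)

BETA = 1.0   # true effect of local host density on transmission risk
N = 5000

def slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def simulate(move_prob):
    """Outcome depends on density at home; the covariate is density at
    the sampling location, which differs for hosts that have moved."""
    x_obs, y = [], []
    for _ in range(N):
        home = random.gauss(0.0, 1.0)
        moved = random.random() < move_prob
        sampled = random.gauss(0.0, 1.0) if moved else home
        x_obs.append(sampled)
        y.append(BETA * home + random.gauss(0.0, 0.3))
    return slope(x_obs, y)

no_movement = simulate(0.0)
with_movement = simulate(0.5)  # half the hosts sampled away from home
```

    This is classical regression attenuation: error in the exposure covariate biases the estimated effect toward zero, never away from it.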

  3. Importance of anthropogenic climate impact, sampling error and urban development in sewer system design.

    PubMed

    Egger, C; Maurer, M

    2015-04-15

    Urban drainage design relying on observed precipitation series neglects the uncertainties associated with current and indeed future climate variability. Urban drainage design is further affected by the large stochastic variability of precipitation extremes and sampling errors arising from the short observation periods of extreme precipitation. Stochastic downscaling addresses anthropogenic climate impact by allowing relevant precipitation characteristics to be derived from local observations and an ensemble of climate models. This multi-climate model approach seeks to reflect the uncertainties in the data due to structural errors of the climate models. An ensemble of outcomes from stochastic downscaling allows for addressing the sampling uncertainty. These uncertainties are clearly reflected in the precipitation-runoff predictions of three urban drainage systems. They were mostly due to the sampling uncertainty. The contribution of climate model uncertainty was found to be of minor importance. Under the applied greenhouse gas emission scenario (A1B) and within the period 2036-2065, the potential for urban flooding in our Swiss case study is slightly reduced on average compared to the reference period 1981-2010. Scenario planning was applied to consider urban development associated with future socio-economic factors affecting urban drainage. The impact of scenario uncertainty was to a large extent found to be case-specific, thus emphasizing the need for scenario planning in every individual case. The results represent a valuable basis for discussions of new drainage design standards aiming specifically to include considerations of uncertainty. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Surface characterization protocol for precision aspheric optics

    NASA Astrophysics Data System (ADS)

    Sarepaka, RamaGopal V.; Sakthibalan, Siva; Doodala, Somaiah; Panwar, Rakesh S.; Kotaria, Rajendra

    2017-10-01

    In advanced optical instrumentation, aspherics provide an effective performance alternative. Aspheric design, fabrication and surface metrology are complementary, iterative processes for precision aspheric development. As in fabrication, a holistic approach to aspheric surface characterization is adopted to evaluate the actual surface error and to deliver aspheric optics with the desired surface quality. Precision optical surfaces are characterized by profilometry or by interferometry. Aspheric profiles are characterized by contact profilometers, through linear surface scans, to analyze their form, figure and finish errors. One must ensure that the surface characterization procedure does not add to the resident profile errors (generated during aspheric surface fabrication). This presentation examines the errors introduced after surface generation and during profilometry of aspheric profiles. The effort aims to identify the sources of error and to optimize the metrology process. The sources of error during profilometry may include: profilometer settings, work-piece placement on the profilometer stage, selection of zenith/nadir points of the aspheric profile, metrology protocols, clear-aperture diameter analysis, computational limitations of the profiler, software issues, etc. At OPTICA, a PGI 1200 FTS contact profilometer (Taylor-Hobson make) is used for this study. Precision optics of various profiles are studied, with due attention to possible sources of error during characterization, using a multi-directional scan approach for uniformity and repeatability of error estimation. This study provides insight into aspheric surface characterization and helps in establishing an optimal aspheric surface production methodology.

  5. Disclosing harmful medical errors to patients: tackling three tough cases.

    PubMed

    Gallagher, Thomas H; Bell, Sigall K; Smith, Kelly M; Mello, Michelle M; McDonald, Timothy B

    2009-09-01

    A gap exists between recommendations to disclose errors to patients and current practice. This gap may reflect important, yet unanswered questions about implementing disclosure principles. We explore some of these unanswered questions by presenting three real cases that pose challenging disclosure dilemmas. The first case involves a pancreas transplant that failed due to the pancreas graft being discarded, an error that was not disclosed partly because the family did not ask clarifying questions. Relying on patient or family questions to determine the content of disclosure is problematic. We propose a standard of materiality that can help clinicians to decide what information to disclose. The second case involves a fatal diagnostic error that the patient's widower was unaware had happened. The error was not disclosed out of concern that disclosure would cause the widower more harm than good. This case highlights how institutions can overlook patients' and families' needs following errors and emphasizes that benevolent deception has little role in disclosure. Institutions should consider whether involving neutral third parties could make disclosures more patient centered. The third case presents an intraoperative cardiac arrest due to a large air embolism where uncertainty around the clinical event was high and complicated the disclosure. Uncertainty is common to many medical errors but should not deter open conversations with patients and families about what is and is not known about the event. Continued discussion within the medical profession about applying disclosure principles to real-world cases can help to better meet patients' and families' needs following medical errors.

  6. A 2 × 2 taxonomy of multilevel latent contextual models: accuracy-bias trade-offs in full and partial error correction models.

    PubMed

    Lüdtke, Oliver; Marsh, Herbert W; Robitzsch, Alexander; Trautwein, Ulrich

    2011-12-01

    In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data when estimating contextual effects are distinguished: unreliability that is due to measurement error and unreliability that is due to sampling error. The fact that studies may or may not correct for these 2 types of error can be translated into a 2 × 2 taxonomy of multilevel latent contextual models comprising 4 approaches: an uncorrected approach, partial correction approaches correcting for either measurement or sampling error (but not both), and a full correction approach that adjusts for both sources of error. It is shown mathematically and with simulated data that the uncorrected and partial correction approaches can result in substantially biased estimates of contextual effects, depending on the number of L1 individuals per group, the number of groups, the intraclass correlation, the number of indicators, and the size of the factor loadings. However, the simulation study also shows that partial correction approaches can outperform full correction approaches when the data provide only limited information in terms of the L2 construct (i.e., small number of groups, low intraclass correlation). A real-data application from educational psychology is used to illustrate the different approaches.
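    A minimal simulation of the two error sources distinguished above, with all parameters illustrative rather than taken from the article: aggregating n noisy L1 scores per group yields observed group means whose reliability is degraded by both measurement error and sampling error, which is what the partial and full correction approaches address.

```python
import numpy as np

rng = np.random.default_rng(0)
G, n = 200, 15            # number of groups, L1 individuals per group
icc = 0.2                 # intraclass correlation of the true scores

mu_g = rng.normal(0.0, np.sqrt(icc), G)                               # true L2 group means
y_true = mu_g[:, None] + rng.normal(0.0, np.sqrt(1.0 - icc), (G, n))  # L1 true scores
y_obs = y_true + rng.normal(0.0, 0.5, (G, n))                         # add measurement error

# The observed group mean carries sampling error (only n individuals per
# group) plus averaged measurement error on top of the true group mean.
group_mean = y_obs.mean(axis=1)
reliability = np.corrcoef(group_mean, mu_g)[0, 1] ** 2
print(f"squared correlation of observed group means with truth: {reliability:.2f}")
```

    With these settings the aggregated means recover only about three quarters of the true L2 variance, illustrating why an uncorrected contextual-effect estimate is biased.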

  7. Intervention strategies for the management of human error

    NASA Technical Reports Server (NTRS)

    Wiener, Earl L.

    1993-01-01

    This report examines the management of human error in the cockpit. The principles probably apply to other applications in the aviation realm (e.g., air traffic control, dispatch, weather) as well as to other high-risk systems outside of aviation (e.g., shipping, high-technology medical procedures, military operations, nuclear power production). Management of human error is distinguished from error prevention. It is a more encompassing term, which includes not only the prevention of error but also means of preventing an error, once made, from adversely affecting system output. Such techniques include: traditional human factors engineering; improvement of feedback and feedforward of information from system to crew; 'error-evident' displays, which make erroneous input more obvious to the crew; trapping of errors within a system; goal-sharing between humans and machines (also called 'intent-driven' systems); paperwork management; and behaviorally based approaches, including procedures, standardization, checklist design, training, and cockpit resource management. Fifteen guidelines for the design and implementation of intervention strategies are included.

  8. Straightening of a wavy strip: An elastic-plastic contact problem including snap-through

    NASA Technical Reports Server (NTRS)

    Fischer, D. F.; Rammerstorfer, F. G.

    1980-01-01

    The nonlinear behavior of a wave-like deformed metal strip during the levelling process was calculated. Elastic-plastic material behavior as well as nonlinearities due to large deformations were considered. The problem leads to a combined stability and contact problem. It is shown that, despite the initially concentrated loading, neglecting the change of loading conditions due to altered contact domains may lead to a significant error in the evaluation of the nonlinear behavior and particularly to an underestimation of the stability limit load. Stability was examined by considering the load-deflection path and the behavior of a load-dependent current stiffness parameter in combination with the determinant of the current stiffness matrix.

  9. A fiber Bragg grating sensor system for estimating the large deflection of a lightweight flexible beam

    NASA Astrophysics Data System (ADS)

    Peng, Te; Yang, Yangyang; Ma, Lina; Yang, Huayong

    2016-10-01

    A sensor system based on fiber Bragg gratings (FBGs) is presented to estimate the deflection of a lightweight flexible beam, including the tip position and the tip rotation angle. In this paper, the classical problem of the deflection of a lightweight flexible beam of linear elastic material is analysed. We present the differential equation governing the behavior of the physical system and show that this equation, although straightforward in appearance, is in fact rather difficult to solve due to the presence of a nonlinear term. The FBG sensors were attached with epoxy glue at specific locations on the upper and lower surfaces of the beam to obtain local strain measurements, and a quasi-distributed static FBG strain sensor network was designed and established. The estimates from the FBG sensors are compared with reference displacements from ANSYS simulations and with static experimental results obtained in the laboratory. The estimation errors of the FBG sensors are analysed for further error correction and design options. When the load weight is 20 g the precision is highest: the position errors ex and ey are 0.19% and 0.14%, respectively, and the rotation error eθ is 1.23%.
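    The nonlinear large-deflection (elastica) equation alluded to above can be solved numerically by a shooting method. The sketch below, with illustrative parameters that are not the beam from the paper, integrates EI·θ''(s) = -F·cos θ for a clamped cantilever with a transverse tip load and compares the tip deflection against small-deflection linear theory:

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid
from scipy.optimize import brentq

# Cantilever of length L and bending stiffness EI with a transverse tip load F.
# Along the arc length s, the large-deflection (elastica) equation is
#   EI * theta''(s) = -F * cos(theta),
# with theta(0) = 0 (clamped root) and theta'(L) = 0 (moment-free tip).
L, EI, F = 1.0, 2.0, 0.5   # illustrative values, not the beam from the paper

def rhs(s, y):
    return [y[1], -(F / EI) * np.cos(y[0])]

def tip_moment(k0):
    """Integrate with trial root curvature k0 and return theta'(L)."""
    sol = solve_ivp(rhs, (0.0, L), [0.0, k0], rtol=1e-9, atol=1e-12)
    return sol.y[1, -1]

# Shooting method: find the root curvature that zeroes the tip moment.
k0 = brentq(tip_moment, 0.0, 2.0 * F * L / EI)
sol = solve_ivp(rhs, (0.0, L), [0.0, k0], dense_output=True,
                rtol=1e-9, atol=1e-12)
s = np.linspace(0.0, L, 400)
theta = sol.sol(s)[0]
y_tip = trapezoid(np.sin(theta), s)          # transverse tip deflection
lin = F * L**3 / (3.0 * EI)                  # small-deflection (linear) theory
print(f"tip deflection: {y_tip:.5f}, linear theory: {lin:.5f}")
```

    At this moderate load the nonlinear deflection falls only slightly below the linear prediction FL³/3EI; the gap grows as the load parameter FL²/EI increases, which is why a purely linear model is inadequate for flexible beams.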

  10. After the Medication Error: Recent Nursing Graduates' Reflections on Adequacy of Education.

    PubMed

    Treiber, Linda A; Jones, Jackie H

    2018-05-01

    The purpose of this study was to better understand the individual- and system-level factors surrounding medication errors from the perspective of recent Bachelor of Science in Nursing graduates. An online mixed-methods survey included items on the perceived adequacy of preparatory nursing education, contributory variables, emotional responses, and treatment by the employer following the error. Of the 168 respondents, 55% had made a medication error. Errors resulted from inexperience, rushing, technology, staffing, and patient acuity. Twenty-four percent did not report their errors. Key themes for improving education included more practice in varied clinical areas, intensive pharmacological preparation, practical instruction in functioning within the health care environment, and coping after making medication errors. Errors generally caused emotional distress in the error maker. Overall, perceived treatment after the error reflected supportive environments, where nurses were generally treated with respect, fairness, and understanding. Opportunities for nursing education include second-victim awareness and reinforcing professional practice standards. [J Nurs Educ. 2018;57(5):275-280.]. Copyright 2018, SLACK Incorporated.

  11. Recovering area-to-mass ratio of resident space objects through data mining

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Bai, Xiaoli

    2018-01-01

    The area-to-mass ratio (AMR) of a resident space object (RSO) is an important parameter for improved space situational awareness due to its effect on the non-conservative forces, including atmospheric drag and solar radiation pressure. However, AMR information is often not provided in most space catalogs. The present paper investigates recovering the AMR from the consistency error, which refers to the difference between the orbit predicted from an earlier estimate and the orbit estimated at the current epoch. A data mining technique, specifically the random forest (RF) method, is used to discover the relationship between the consistency error and the AMR. Using a simulation-based space catalog environment as the testbed, this paper demonstrates that a classification RF model can determine the RSO's AMR category and a regression RF model can generate continuous AMR values, both with good accuracy. Furthermore, the paper reveals that by recording additional information besides the consistency error, the RF model can estimate the AMR with even higher accuracy.
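    The regression-RF idea can be sketched with synthetic data. The features below are hypothetical stand-ins for consistency-error components (the paper's actual feature set and simulation catalog are not reproduced here); the point is only that a random forest can learn the mapping from such errors to a continuous AMR value:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
amr = rng.uniform(0.005, 0.05, n)        # hypothetical AMR values, m^2/kg

# Hypothetical features standing in for the catalog consistency error:
# the drag/SRP mismatch grows with AMR, plus a pure-noise nuisance column.
features = np.column_stack([
    amr * rng.normal(1.0, 0.1, n),       # along-track consistency error proxy
    amr**2 * rng.normal(1.0, 0.2, n),    # radial consistency error proxy
    rng.normal(0.0, 1.0, n),             # irrelevant feature the RF should ignore
])

X_tr, X_te, y_tr, y_te = train_test_split(features, amr, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"test R^2: {model.score(X_te, y_te):.3f}")
```

    Replacing `RandomForestRegressor` with `RandomForestClassifier` on binned AMR values gives the category-prediction variant the abstract describes.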

  12. Universal Capacitance Model for Real-Time Biomass in Cell Culture.

    PubMed

    Konakovsky, Viktor; Yagtu, Ali Civan; Clemens, Christoph; Müller, Markus Michael; Berger, Martina; Schlatter, Stefan; Herwig, Christoph

    2015-09-02

    Capacitance probes have the potential to revolutionize bioprocess control due to their safe and robust use and their ability to detect even the smallest capacitors in the form of biological cells. Several techniques have evolved to model biomass statistically; however, model transfer between cell lines and process conditions remains problematic. Errors of transferred linear models in the declining phase of the culture are around +100% or worse, causing unnecessary delays with test runs during bioprocess development. The goal of this work was to develop a single universal model that can be adapted, by considering a potentially mechanistic factor, to estimate biomass in yet untested clones and scales. The novelty of this work is a methodology for selecting sensitive frequencies to build a statistical model that can be shared among fermentations, with an error between 9% and 38% (mean error around 20%) over the whole process, including the declining phase. A simple linear factor was found to be responsible for the transferability of biomass models between cell lines, indicating a link to their phenotype or physiology.

  13. An evaluation of water vapor radiometer data for calibration of the wet path delay in very long baseline interferometry experiments

    NASA Technical Reports Server (NTRS)

    Kuehn, C. E.; Himwich, W. E.; Clark, T. A.; Ma, C.

    1991-01-01

    The internal consistency of the baseline-length measurements derived from the analysis of several independent VLBI experiments is an estimate of the measurement precision. The paper investigates whether the inclusion of water vapor radiometer (WVR) data as an absolute calibration of the propagation delay due to water vapor improves the precision of VLBI baseline-length measurements. The paper analyzes 28 International Radio Interferometric Surveying runs between June 1988 and January 1989; WVR measurements were made during each session. The addition of WVR data decreased the scatter of the baseline-length measurements by 5-10 percent. The observed reduction in scatter is less than expected from the behavior of the formal errors, which suggests that the baseline-length measurement precision should improve by 10-20 percent if WVR data are included in the analysis. The discrepancy between the formal errors and the baseline-length results can be explained as the consequence of systematic errors in the dry-mapping-function parameters, instrumental biases in the WVR and the barometer, or both.

  14. Ranging/tracking system for proximity operations

    NASA Technical Reports Server (NTRS)

    Nilsen, P.; Udalov, S.

    1982-01-01

    The hardware development and testing phase of a hand-held radar for ranging and tracking in Shuttle proximity operations is considered. The radar is to measure range to a 3 sigma accuracy of 1 m (3.28 ft) at a maximum range of 1850 m (6000 ft), and velocity to a 3 sigma accuracy of 0.03 m/s (0.1 ft/s). Size and weight are similar to those of the hand-held radars frequently seen in use by motorcycle police officers. These goals proved very difficult to meet for a target in free space in the testing program; however, at a range of approximately 700 m, the 3 sigma range error was found to be 0.96 m. Much of this error is believed to be due to clutter in the test environment. As an example of the velocity accuracy, at a range of 450 m a 3 sigma velocity error of 0.02 m/s was measured. The principles of the radar and recommended changes to its design are given. Analyses performed in support of the design process, the actual circuit diagrams, and the software listing are included.

  15. Locked-mode avoidance and recovery without momentum input

    NASA Astrophysics Data System (ADS)

    Delgado-Aparicio, L.; Rice, J. E.; Wolfe, S.; Cziegler, I.; Gao, C.; Granetz, R.; Wukitch, S.; Terry, J.; Greenwald, M.; Sugiyama, L.; Hubbard, A.; Hugges, J.; Marmar, E.; Phillips, P.; Rowan, W.

    2015-11-01

    Error-field-induced locked-modes (LMs) have been studied in Alcator C-Mod at ITER-Bϕ, without NBI fueling and momentum input. Delay of the mode-onset and locked-mode recovery has been successfully obtained without external momentum input using Ion Cyclotron Resonance Heating (ICRH). The use of external heating in-sync with the error-field ramp-up resulted in a successful delay of the mode-onset when PICRH > 1 MW, which demonstrates the existence of a power threshold to ``unlock'' the mode; in the presence of an error field the L-mode discharge can transition into H-mode only when PICRH > 2 MW and at high densities, avoiding also the density pump-out. The effects of ion heating observed on unlocking the core plasma may be due to ICRH induced flows in the plasma boundary, or modifications of plasma profiles that changed the underlying turbulence. This work was performed under US DoE contracts including DE-FC02-99ER54512 and others at MIT, DE-FG03-96ER-54373 at University of Texas at Austin, and DE-AC02-09CH11466 at PPPL.

  16. Erratum to: Minimization of extracellular space as a driving force in prokaryote association and the origin of eukaryotes.

    PubMed

    Hooper, Scott L; Burstein, Helaine J

    2015-03-28

    Following the publication of this article [1], it was noticed that, due to an error on the part of the publisher, the 2nd round of comments submitted by Reviewer 1, Dr. López-García, was unintentionally omitted during the peer review process. As a consequence of this error, the authors were unable to reply to Dr. López-García's comments and revise their manuscript accordingly (where appropriate). In fairness to both the authors and the reviewer, Dr. López-García's (Reviewer 1) 2nd round of comments is now included below, and Scott L Hooper and Helaine J Burstein (authors) were given the opportunity to reply. Any consequent amendments to the research article [1] are outlined in the authors' replies.

  17. Model misspecification detection by means of multiple generator errors, using the observed potential map.

    PubMed

    Zhang, Z; Jewett, D L

    1994-01-01

    Due to model misspecification, currently-used Dipole Source Localization (DSL) methods may contain Multiple-Generator Errors (MulGenErrs) when fitting simultaneously-active dipoles. The size of the MulGenErr is a function of both the model used, and the dipole parameters, including the dipoles' waveforms (time-varying magnitudes). For a given fitting model, by examining the variation of the MulGenErrs (or the fit parameters) under different waveforms for the same generating-dipoles, the accuracy of the fitting model for this set of dipoles can be determined. This method of testing model misspecification can be applied to evoked potential maps even when the parameters of the generating-dipoles are unknown. The dipole parameters fitted in a model should only be accepted if the model can be shown to be sufficiently accurate.

  18. Dual-Pulse Pulse Position Modulation (DPPM) for Deep-Space Optical Communications: Performance and Practicality Analysis

    NASA Technical Reports Server (NTRS)

    Li, Jing; Hylton, Alan; Budinger, James; Nappier, Jennifer; Downey, Joseph; Raible, Daniel

    2012-01-01

    Due to its simplicity and robustness against wavefront distortion, pulse position modulation (PPM) with photon-counting detection has been seriously considered for long-haul optical wireless systems. This paper evaluates the dual-pulse case and compares it with the conventional single-pulse case. Analytical expressions for the symbol error rate and bit error rate are first derived and numerically evaluated for the strong, negative-exponential turbulent atmosphere; bandwidth efficiency and throughput are subsequently assessed. It is shown that, under a set of practical constraints including pulse width and pulse repetition frequency (PRF), dual-pulse PPM enables better channel utilization and hence a higher throughput than its single-pulse counterpart. This result is new and differs from previous idealistic studies, which showed that multi-pulse PPM provided no essential information-theoretic gains over single-pulse PPM.
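    The channel-utilization argument can be illustrated with a simple slot count (an idealized calculation that ignores the guard-time and PRF constraints the paper models): single-pulse M-ary PPM carries log2(M) bits per M-slot symbol, while a symbol with two pulses can address any of C(M,2) slot pairs.

```python
from math import comb, log2

# Single-pulse M-ary PPM encodes log2(M) bits in M slots; n-pulse PPM
# encodes log2(C(M, n)) bits in the same M slots.
def bits_per_slot(M, n):
    return log2(comb(M, n)) / M

M = 64
single = bits_per_slot(M, 1)
dual = bits_per_slot(M, 2)
print(f"M = {M}: single-pulse {single:.3f} bits/slot, "
      f"dual-pulse {dual:.3f} bits/slot")
```

    For M = 64 the dual-pulse alphabet has 2016 symbols, nearly doubling the bits carried per slot before error-rate penalties are accounted for.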

  19. When ottoman is easier than chair: an inverse frequency effect in jargon aphasia.

    PubMed

    Marshall, J; Pring, T; Chiat, S; Robson, J

    2001-02-01

    This paper presents evidence of an inverse frequency effect in jargon aphasia. The subject (JP) showed a predisposition for low-frequency word production on a range of tasks, including picture naming, sentence completion, and naming in categories. Her real-word errors were also striking, in that they tended to be lower in frequency than the target. Reading data suggested that the inverse frequency effect was present only when production was semantically mediated. It was therefore hypothesised that the effect was at least partly due to the semantic characteristics of low-frequency items. Some support for this was obtained from a comprehension task showing that JP's understanding of low-frequency terms, which she often produced as errors, was superior to her understanding of high-frequency terms. Possible explanations for these findings are considered.

  20. A comparison of registration errors with imageless computer navigation during MIS total knee arthroplasty versus standard incision total knee arthroplasty: a cadaveric study.

    PubMed

    Davis, Edward T; Pagkalos, Joseph; Gallie, Price A M; Macgroarty, Kelly; Waddell, James P; Schemitsch, Emil H

    2015-01-01

    Optimal component alignment in total knee arthroplasty has been associated with better functional outcome as well as improved implant longevity. The ability to align components optimally during minimally invasive (MIS) total knee replacement (TKR) has been a cause of concern. Computer navigation is a useful aid in achieving the desired alignment, although it is limited by the error introduced during the manual registration of landmarks. Our study aims to compare the registration process error between a standard and an MIS surgical approach. We hypothesized that performing the registration via an MIS approach would increase the registration process error. Five fresh-frozen lower limbs were routinely prepared and draped. The registration process was performed through an MIS approach. This was then extended to the standard approach and the registration was performed again. Two surgeons performed the registration process five times with each approach. Performing the registration through the MIS approach was not associated with higher error in the alignment parameters of interest compared with the standard approach, which rejects our hypothesis. Image-free navigated MIS TKR does not appear to carry a higher risk of component malalignment due to registration process error. Navigation can be used during MIS TKR to improve alignment without reduced accuracy due to the approach.

  1. Systematic error of diode thermometer.

    PubMed

    Iskrenovic, Predrag S

    2009-08-01

    Semiconductor diodes are often used for measuring temperature. The forward voltage across a diode decreases approximately linearly as the temperature increases. The method usually applied is the simplest one: a constant direct current flows through the diode, and the voltage across its terminals is measured. The direct current that puts the diode into its operating mode also heats it up. The resulting rise in the temperature of the diode sensor, i.e., the systematic error due to self-heating, depends predominantly on the current intensity, and also on other factors. This paper presents measurements of the systematic error due to heating by the forward-bias current, made on several diodes over a wide range of bias current intensities.
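    To first order, the self-heating error described above scales with the dissipated bias power times the junction-to-ambient thermal resistance. A back-of-the-envelope sketch, with thermal and electrical values assumed for illustration (they are not taken from the paper):

```python
# Self-heating of a diode temperature sensor: the bias power dissipated in
# the junction raises its temperature above ambient by approximately
#   dT = I_f * V_f * theta_JA,
# where theta_JA is the junction-to-ambient thermal resistance.
theta_ja = 300.0   # K/W, assumed for a small glass-package diode in still air
v_f = 0.65         # V, forward voltage near room temperature (assumed)

for i_f in (10e-6, 100e-6, 1e-3):     # bias current, A
    dt = i_f * v_f * theta_ja         # systematic error, K
    print(f"I_f = {i_f * 1e6:7.1f} uA -> self-heating error ~ {dt * 1e3:8.3f} mK")
```

    The cubic dependence on nothing but current (V_f being nearly constant) is why the abstract singles out the bias current as the dominant factor: going from 10 uA to 1 mA raises the error by two orders of magnitude.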

  2. Gain loss and noise temperature degradation due to subreflector rotations in a Cassegrain antenna

    NASA Astrophysics Data System (ADS)

    Lamb, J. W.; Olver, A. D.

    An evaluation of performance degradation due to subreflector rotations is reported for the 15 m UK/NL Millimetrewave Radio Telescope Cassegrain antenna. The analytical treatment of the phase errors shows that the optimum point for the center of rotation of the subreflector is the primary focus, indicating astigmatic error, and it is shown that a compromise must be made between mechanical and electrical performance. Gain deterioration due to spillover is only weakly dependent on the rotation-center position z_c, and this loss decreases as z_c moves towards the subreflector vertex. The associated spillover gives rise to a noise-temperature contribution calculated to be a few degrees K.

  3. Error analysis and correction of lever-type stylus profilometer based on Nelder-Mead Simplex method

    NASA Astrophysics Data System (ADS)

    Hu, Chunbing; Chang, Suping; Li, Bo; Wang, Junwei; Zhang, Zhongyu

    2017-10-01

    Due to its high measurement accuracy and wide range of applications, lever-type stylus profilometry is commonly used in industrial research. However, the error caused by the lever structure has a great influence on profile measurement, so this paper analyzes the errors of a high-precision, large-range lever-type stylus profilometer. The errors are corrected using the Nelder-Mead simplex method, and the results are verified by calibration against a spherical surface. The method is shown to effectively reduce the measurement error and improve the accuracy of the stylus profilometer in large-scale measurement.
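    The paper's exact error model is not reproduced in the abstract; as a generic illustration of the approach, the sketch below fits a correction polynomial to a synthetic sphere trace with a smooth systematic height distortion (the distortion form and all values are assumptions) using SciPy's Nelder-Mead simplex minimizer:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
# Synthetic calibration: trace a sphere of known radius R with a stylus whose
# lever arc injects a smooth systematic height distortion (quadratic + cubic
# in height; purely illustrative).
R = 10.0
x = np.linspace(-3.0, 3.0, 201)
z_true = R - np.sqrt(R**2 - x**2)                    # ideal spherical profile
z_meas = (z_true + 0.01 * z_true**2 - 0.002 * z_true**3
          + rng.normal(0.0, 1e-5, x.size))           # distorted + noisy trace

def cost(p):
    """Sum of squared residuals after applying the correction polynomial."""
    a, b = p
    z_corr = z_meas - a * z_meas**2 - b * z_meas**3
    return np.sum((z_corr - z_true) ** 2)

res = minimize(cost, x0=[0.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-12, "fatol": 1e-16})
rms_before = np.sqrt(np.mean((z_meas - z_true) ** 2))
rms_after = np.sqrt(cost(res.x) / x.size)
print(f"fitted (a, b) = ({res.x[0]:.4f}, {res.x[1]:.4f}), "
      f"RMS error {rms_before:.2e} -> {rms_after:.2e}")
```

    Nelder-Mead is attractive here because it needs no gradients of the cost function, which in the real instrument involves the lever geometry rather than a simple polynomial.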

  4. Error Estimation of Pathfinder Version 5.3 SST Level 3C Using Three-way Error Analysis

    NASA Astrophysics Data System (ADS)

    Saha, K.; Dash, P.; Zhao, X.; Zhang, H. M.

    2017-12-01

    Sea Surface Temperature (SST) is one of the essential climate variables for monitoring, detecting, and attributing climate change. A long-term record of global SSTs is available, from early ship observations to modern in-situ and space-based sensors (satellite/aircraft). Satellite-derived SSTs carry inaccuracies attributable to errors in spacecraft navigation, sensor calibration, sensor noise, retrieval algorithms, and leakage due to residual clouds, so it is important to estimate the errors in satellite-derived SST products accurately. For validation purposes, satellite-derived SST products are generally compared against in-situ SSTs, which themselves have inaccuracies, in addition to the spatio-temporal inhomogeneity between the in-situ and satellite measurements. The standard deviation of the difference field therefore has contributions from both the satellite and the in-situ measurements. A true validation of any geophysical variable requires knowledge of the "true" value of that variable, so a one-to-one comparison of satellite-based SST with in-situ data does not give the real error in the satellite SST; ambiguity remains due to errors in the in-situ measurements and collocation differences. Triple collocation (TC), or three-way error analysis, uses three mutually independent error-prone measurements to estimate the root-mean-square error (RMSE) associated with each measurement to a high level of accuracy, without treating any one system as a perfectly observed "truth". In this study we estimate the absolute random errors associated with the Pathfinder Version 5.3 Level-3C SST Climate Data Record.
    Along with the in-situ SST data, the third dataset used in this analysis is the AATSR Reprocessing for Climate (ARC) dataset for the corresponding period. All three SST observations are collocated, and the statistics of the differences between each pair are estimated. Instead of a traditional TC analysis, we implement the Extended Triple Collocation (ETC) approach, which estimates the correlation coefficient of each measurement system with respect to the unknown target variable along with its RMSE.
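    The TC/ETC estimators reduce to covariance algebra. A minimal sketch on synthetic data, assuming zero-mean independent errors and unit calibration (the values are illustrative, not the Pathfinder, in-situ, or ARC data): with three collocated systems x_i = t + e_i, the error variance of system 1 is Q11 - Q12*Q13/Q23, and ETC additionally yields each system's correlation with the unobserved truth.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000
truth = rng.normal(290.0, 1.0, N)            # unobserved "true" SST, K
sigmas = (0.15, 0.30, 0.25)                  # illustrative RMSEs for 3 systems
x1, x2, x3 = (truth + rng.normal(0.0, s, N) for s in sigmas)

def extended_triple_collocation(x1, x2, x3):
    """Per-system RMSE and truth-correlation from covariances alone."""
    Q = np.cov(np.vstack([x1, x2, x3]))
    rmse = np.sqrt([Q[0, 0] - Q[0, 1] * Q[0, 2] / Q[1, 2],
                    Q[1, 1] - Q[0, 1] * Q[1, 2] / Q[0, 2],
                    Q[2, 2] - Q[0, 2] * Q[1, 2] / Q[0, 1]])
    rho = np.sqrt([Q[0, 1] * Q[0, 2] / (Q[0, 0] * Q[1, 2]),
                   Q[0, 1] * Q[1, 2] / (Q[1, 1] * Q[0, 2]),
                   Q[0, 2] * Q[1, 2] / (Q[2, 2] * Q[0, 1])])
    return rmse, rho

rmse, rho = extended_triple_collocation(x1, x2, x3)
print("estimated RMSE:", np.round(rmse, 3))
print("correlation with truth:", np.round(rho, 3))
```

    Note that no system is treated as truth: each RMSE is recovered from pairwise covariances only, which is precisely what makes the three-way analysis preferable to a one-to-one satellite-versus-in-situ comparison.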

  5. On the timing problem in optical PPM communications.

    NASA Technical Reports Server (NTRS)

    Gagliardi, R. M.

    1971-01-01

    An investigation of the effects of imperfect timing in a direct-detection (noncoherent) optical system using pulse-position modulation. Special emphasis is placed on the specification of timing accuracy and an examination of system degradation when this accuracy is not attained. Bit error probabilities are shown as a function of timing errors, from which average error probabilities can be computed for specific synchronization methods. Of significant importance is the presence of a residual, or irreducible, error probability, due entirely to the timing system, that cannot be overcome by the data channel.

  6. Special Issue on Uncertainty Quantification in Multiscale System Design and Simulation

    DOE PAGES

    Wang, Yan; Swiler, Laura

    2017-09-07

    The importance of uncertainty has been recognized in various modeling, simulation, and analysis applications, where inherent assumptions and simplifications affect the accuracy of model predictions for physical phenomena. As model predictions are now heavily relied upon for simulation-based system design, which includes new materials, vehicles, mechanical and civil structures, and even new drugs, wrong model predictions could potentially cause catastrophic consequences. Therefore, uncertainty and associated risks due to model errors should be quantified to support robust systems engineering.

  8. Practical tolerancing and performance implications for XUV projection lithography reduction systems (Poster Paper)

    NASA Astrophysics Data System (ADS)

    Viswanathan, Vriddhachalam K.

    1992-07-01

    Practical considerations that will strongly affect the imaging capabilities of reflecting systems for extreme-ultraviolet (XUV) projection lithography include manufacturing tolerances and thermal distortion of the mirror surfaces due to absorption of a fraction of the incident radiation beam. We have analyzed the potential magnitudes of these effects for two types of reflective projection optical designs. We find that concentric, symmetric two-mirror systems are less sensitive to manufacturing errors and thermal distortion than off-axis, four-mirror systems.

  9. Analysis of Aerosols and Fallout from High-Explosive Dust Clouds. Volume 2

    DTIC Science & Technology

    1977-03-01

    to the situation at hand, where S is an absolute error. It is to be noted, however, that the Poisson formula specifies the...sphere TNT detonation near Grand Junction, Colorado, on November 13, 1972. Data from the resulting dust cloud were collected by two aircraft and include...variations. Measurements of carbon monoxide were inconclusive due to an unusually high noise level (6 to 8 ppm, considerably higher than the carbon

  10. Simulation of a navigator algorithm for a low-cost GPS receiver

    NASA Technical Reports Server (NTRS)

    Hodge, W. F.

    1980-01-01

    The analytical structure of an existing navigator algorithm for a low-cost Global Positioning System receiver is described in detail to facilitate its implementation on in-house digital computers and real-time simulators. The material presented includes a simulation of GPS pseudorange measurements, based on a two-body representation of the NAVSTAR spacecraft orbits, and a four-component model of the receiver bias errors. A simpler test for loss of pseudorange measurements due to spacecraft shielding is also noted.
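    The report's own shielding test is not given in the abstract; as a generic illustration of what such a test does, the sketch below performs a spherical-Earth line-of-sight check between receiver and satellite (all positions and the spherical-Earth assumption are illustrative):

```python
import numpy as np

R_EARTH = 6378.137e3   # m, equatorial radius (spherical-Earth approximation)

def is_shielded(receiver, satellite, r_body=R_EARTH):
    """True if the receiver-satellite line of sight intersects the Earth.

    Positions are ECEF vectors in metres. Segment-sphere test: find the
    point on the segment closest to the Earth's centre and check whether
    it lies inside the sphere.
    """
    p, q = np.asarray(receiver, float), np.asarray(satellite, float)
    d = q - p
    t = np.clip(-p.dot(d) / d.dot(d), 0.0, 1.0)   # closest-approach parameter
    return np.linalg.norm(p + t * d) < r_body

rx = np.array([R_EARTH + 400e3, 0.0, 0.0])        # receiver in low orbit
sat_visible = np.array([26_560e3, 0.0, 0.0])      # GPS satellite overhead
sat_blocked = np.array([-26_560e3, 0.0, 0.0])     # satellite behind the Earth
print(is_shielded(rx, sat_visible), is_shielded(rx, sat_blocked))
```

    A pseudorange measurement would simply be dropped from the navigation solution whenever this check reports the satellite as shielded.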

  11. On the use of flux limiters in the discrete ordinates method for 3D radiation calculations in absorbing and scattering media

    NASA Astrophysics Data System (ADS)

    Godoy, William F.; DesJardin, Paul E.

    2010-05-01

    The application of flux limiters to the discrete ordinates method (DOM), SN, for radiative transfer calculations is discussed and analyzed for 3D enclosures, for cases in which the intensities are strongly coupled to each other, such as radiative equilibrium and scattering media. A Newton-Krylov iterative method (GMRES) solves the final systems of linear equations, along with a domain decomposition strategy for parallel computation using message-passing libraries on a distributed-memory system. Ray effects due to angular discretization and errors due to domain decomposition are minimized, so that only small variations are introduced by these effects, in order to focus on the influence of flux limiters on errors due to spatial discretization, known as numerical diffusion, smearing, or false scattering. Results are presented for DOM-integrated quantities such as heat flux, irradiation, and emission. A variety of flux limiters are compared to "exact" solutions available in the literature, such as the integral solution of the RTE for pure absorbing-emitting media and isotropic scattering cases, and a Monte Carlo solution for a forward-scattering case. Additionally, a non-homogeneous 3D enclosure is included to extend the use of flux limiters to more practical cases. The overall balance of convergence, accuracy, speed, and stability using flux limiters is shown to be superior to that of step schemes for every test case.
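    For readers unfamiliar with flux limiters, three commonly used TVD limiter functions are sketched below (generic textbook definitions; the paper's own limiter selection is not reproduced here). The argument r is the ratio of consecutive solution gradients; psi = 0 everywhere recovers the diffusive first-order step scheme, and psi = 1 the second-order central slope.

```python
import numpy as np

# Common TVD flux limiters psi(r). In the second-order TVD region,
# 0 <= psi(r) <= 2 with psi(1) = 1.
def minmod(r):
    return np.maximum(0.0, np.minimum(1.0, r))

def vanleer(r):
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def superbee(r):
    return np.maximum.reduce([np.zeros_like(r),
                              np.minimum(2.0 * r, 1.0),
                              np.minimum(r, 2.0)])

r = np.array([-1.0, 0.0, 0.5, 1.0, 2.0, 10.0])
for name, f in [("minmod", minmod), ("van Leer", vanleer), ("superbee", superbee)]:
    print(f"{name:9s}", f(r))
```

    All three vanish for r <= 0 (local extrema), which is what suppresses the spurious oscillations of unlimited high-order schemes while keeping numerical diffusion well below the step scheme.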

  12. Clover: Compiler directed lightweight soft error resilience

    DOE PAGES

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; ...

    2015-05-01

This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpointing. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experiment results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  13. Goldmann Tonometer Prism with an Optimized Error Correcting Applanation Surface.

    PubMed

    McCafferty, Sean; Lim, Garrett; Duncan, William; Enikov, Eniko; Schwiegerling, Jim

    2016-09-01

We evaluate solutions for an applanating surface modification to the Goldmann tonometer prism, which substantially negates the errors due to patient variability in biomechanics. A modified Goldmann or correcting applanation tonometry surface (CATS) prism is presented which was optimized to minimize the intraocular pressure (IOP) error due to corneal thickness, stiffness, curvature, and tear film. Mathematical modeling with finite element analysis (FEA) and manometric IOP-referenced cadaver eyes were used to optimize and validate the design. Mathematical modeling of the optimized CATS prism indicates an approximate 50% reduction in each of the corneal biomechanical and tear film errors. Manometric IOP-referenced pressure in cadaveric eyes demonstrates substantial equivalence to Goldmann applanation tonometry (GAT) in nominal eyes with the CATS prism, as predicted by modeling theory. A CATS-modified Goldmann prism is theoretically able to significantly improve the accuracy of IOP measurement without changing Goldmann measurement technique or interpretation. Clinical validation is needed, but the analysis indicates a reduction in central corneal thickness (CCT) error alone to less than ±2 mm Hg using the CATS prism in 100% of a standard population, compared to only 54% with less than ±2 mm Hg error with the present Goldmann prism. This article presents an easily adopted novel approach and critical design parameters to improve the accuracy of a Goldmann applanating tonometer.

  14. Aging and the intrusion superiority effect in visuo-spatial working memory.

    PubMed

    Cornoldi, Cesare; Bassani, Chiara; Berto, Rita; Mammarella, Nicola

    2007-01-01

This study investigated the active component of visuo-spatial working memory (VSWM) in younger and older adults, testing the hypotheses that elderly individuals have a poorer performance than younger ones and that errors in active VSWM tasks depend, at least partially, on difficulties in avoiding intrusions (i.e., avoiding already activated information). In two experiments, participants were presented with sequences of matrices on which three positions were pointed out sequentially: their task was to process all the positions but indicate only the final position of each sequence. Results showed a poorer performance in the elderly compared to the younger group and a higher number of intrusion errors (due to activated but irrelevant positions) than invention errors (pointing out a position never indicated by the experimenter). The number of errors increased when a concurrent task was introduced (Experiment 1) and was affected by different patterns of matrices (Experiment 2). In general, results show that elderly people have an impaired VSWM and produce a large number of errors due to inhibition failures. However, both the younger and the older adults' visuo-spatial working memory was affected by the presence of activated irrelevant information, the reduction of the available resources, and task constraints.

  15. The nature of articulation errors in Egyptian Arabic-speaking children with velopharyngeal insufficiency due to cleft palate.

    PubMed

    Abou-Elsaad, Tamer; Baz, Hemmat; Afsah, Omayma; Mansy, Alzahraa

    2015-09-01

Even with early surgical repair, the majority of children with cleft palate demonstrate articulation errors and have typical cleft palate speech. The aim was to determine the nature of articulation errors of Arabic consonants in Egyptian Arabic-speaking children with velopharyngeal insufficiency (VPI). Thirty Egyptian Arabic-speaking children with VPI due to cleft palate (whether primary repaired or secondary repaired) were studied. Auditory perceptual assessment (APA) of the children's speech was conducted. Nasopharyngoscopy was done to assess the velopharyngeal port (VPP) movements while the child was repeating speech tasks. The Mansoura Arabic Articulation Test (MAAT) was performed to analyze the consonant articulation of these children. The most frequent type of articulatory error observed was substitution, more specifically, backing. Pharyngealization of anterior fricatives was the most frequent substitution, especially for the /s/ sound. The most frequent substituting sounds for other sounds were /ʔ/ followed by /k/ and /n/. Significant correlations were found between the degrees of open nasality and VPP closure and the articulation errors. On the other hand, the sounds (/ʔ/,/ħ/,/ʕ/,/n/,/w/,/j/) were normally articulated throughout the studied group. The determination of articulation errors in VPI children could guide therapists in designing appropriate speech therapy programs for these cases. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. Dosage uniformity problems which occur due to technological errors in extemporaneously prepared suppositories in hospitals and pharmacies

    PubMed Central

    Kalmár, Éva; Lasher, Jason Richard; Tarry, Thomas Dean; Myers, Andrea; Szakonyi, Gerda; Dombi, György; Baki, Gabriella; Alexander, Kenneth S.

    2013-01-01

    The availability of suppositories in Hungary, especially in clinical pharmacy practice, is usually provided by extemporaneous preparations. Due to the known advantages of rectal drug administration, its benefits are frequently utilized in pediatrics. However, errors during the extemporaneous manufacturing process can lead to non-homogenous drug distribution within the dosage units. To determine the root cause of these errors and provide corrective actions, we studied suppository samples prepared with exactly known errors using both cerimetric titration and HPLC technique. Our results show that the most frequent technological error occurs when the pharmacist fails to use the correct displacement factor in the calculations which could lead to a 4.6% increase/decrease in the assay in individual dosage units. The second most important source of error can occur when the molding excess is calculated solely for the suppository base. This can further dilute the final suppository drug concentration causing the assay to be as low as 80%. As a conclusion we emphasize that the application of predetermined displacement factors in calculations for the formulation of suppositories is highly important, which enables the pharmacist to produce a final product containing exactly the determined dose of an active substance despite the different densities of the components. PMID:25161378
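
    The displacement-factor arithmetic behind the study's main error source can be sketched as follows. The convention and the numbers are illustrative assumptions, not values taken from the paper.

```python
# A minimal sketch of suppository dosing arithmetic with a displacement
# factor. Convention assumed here (conventions vary between references):
# displacement_factor is the mass of base (g) displaced by 1 g of drug.

def base_mass_per_suppository(mold_capacity_g, drug_dose_g, displacement_factor):
    """Grams of base to use per suppository after accounting for the drug."""
    return mold_capacity_g - drug_dose_g * displacement_factor

# Ignoring the factor (implicitly treating it as 1.0) misstates the base
# mass per unit -- the kind of calculation error the study quantifies:
correct = base_mass_per_suppository(2.0, 0.3, 0.7)   # 2.0 - 0.3*0.7 = 1.79 g
naive = base_mass_per_suppository(2.0, 0.3, 1.0)     # 2.0 - 0.3*1.0 = 1.70 g
```

    With too little base per unit, the drug concentration per suppository drifts from the intended dose, consistent with the assay deviations reported above.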

  18. Latent Structure Agreement Analysis

    DTIC Science & Technology

    1989-11-01

    correct for bias in estimation of disease prevalence due to misclassification error [39]. Software Varying panel latent class agreement models can be...D., and L. M. Irwig, "Estimation of Test Error Rates, Disease Prevalence and Relative Risk from Misclassified Data: A Review," Journal of Clinical

  19. How common are cognitive errors in cases presented at emergency medicine resident morbidity and mortality conferences?

    PubMed

    Chu, David; Xiao, Jane; Shah, Payal; Todd, Brett

    2018-06-20

    Cognitive errors are a major contributor to medical error. Traditionally, medical errors at teaching hospitals are analyzed in morbidity and mortality (M&M) conferences. We aimed to describe the frequency of cognitive errors in relation to the occurrence of diagnostic and other error types, in cases presented at an emergency medicine (EM) resident M&M conference. We conducted a retrospective study of all cases presented at a suburban US EM residency monthly M&M conference from September 2011 to August 2016. Each case was reviewed using the electronic medical record (EMR) and notes from the M&M case by two EM physicians. Each case was categorized by type of primary medical error that occurred as described by Okafor et al. When a diagnostic error occurred, the case was reviewed for contributing cognitive and non-cognitive factors. Finally, when a cognitive error occurred, the case was classified into faulty knowledge, faulty data gathering or faulty synthesis, as described by Graber et al. Disagreements in error type were mediated by a third EM physician. A total of 87 M&M cases were reviewed; the two reviewers agreed on 73 cases, and 14 cases required mediation by a third reviewer. Forty-eight cases involved diagnostic errors, 47 of which were cognitive errors. Of these 47 cases, 38 involved faulty synthesis, 22 involved faulty data gathering and only 11 involved faulty knowledge. Twenty cases contained more than one type of cognitive error. Twenty-nine cases involved both a resident and an attending physician, while 17 cases involved only an attending physician. Twenty-one percent of the resident cases involved all three cognitive errors, while none of the attending cases involved all three. Forty-one percent of the resident cases and only 6% of the attending cases involved faulty knowledge. One hundred percent of the resident cases and 94% of the attending cases involved faulty synthesis. 
Our review of 87 EM M&M cases revealed that cognitive errors are commonly involved in cases presented, and that these errors are less likely due to deficient knowledge and more likely due to faulty synthesis. M&M conferences may therefore provide an excellent forum to discuss cognitive errors and how to reduce their occurrence.

  20. Using a Commercial Ethernet PHY Device in a Radiation Environment

    NASA Technical Reports Server (NTRS)

    Parks, Jeremy; Arani, Michael; Arroyo, Roberto

    2014-01-01

This work involved placing a commercial Ethernet PHY on its own power boundary, with limited current supply, and providing detection methods to determine when the device is not operating and when it needs either a reset or power-cycle. The device must be radiation-tested and free of destructive latchup errors. The commercial Ethernet PHY's own power boundary must be supplied by a current-limited power regulator that must have an enable (for power cycling), and its maximum power output must not exceed the PHY's input requirements, thus preventing damage to the device. A regulator with configurable output limits and short-circuit protection (such as the RHFL4913, rad hard positive voltage regulator family) is ideal. This will prevent a catastrophic failure due to radiation (such as a short between the commercial device's power and ground) from taking down the board's main power. Logic provided on the board will detect errors in the PHY. An FPGA (field-programmable gate array) with embedded Ethernet MAC (Media Access Control) will work well. The error detection includes monitoring the PHY's interrupt line, and the status of the Ethernet's switched power. When the PHY is determined to be non-functional, the logic device resets the PHY, which will often clear radiation-induced errors. If this doesn't work, the logic device power-cycles the PHY by toggling the regulator's enable input. This should clear almost all radiation-induced errors provided the device is not latched up.
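
    The reset-then-power-cycle recovery policy described above can be sketched as a small procedure. Here `phy_ok`, `pulse_reset` and `set_regulator_enable` are hypothetical stand-ins for board-specific register and GPIO accesses, not a real API.

```python
# Hypothetical sketch of the recovery policy: try a reset first, then
# power-cycle the PHY via the regulator's enable pin if the reset fails.

import time

def recover_phy(phy_ok, pulse_reset, set_regulator_enable, settle_s=0.01):
    """Return how the PHY was recovered: 'none', 'reset', or 'power_cycle'."""
    if phy_ok():
        return "none"
    pulse_reset()                 # a reset often clears radiation-induced upsets
    time.sleep(settle_s)
    if phy_ok():
        return "reset"
    set_regulator_enable(False)   # power-cycle clears latchup-like states
    time.sleep(settle_s)
    set_regulator_enable(True)
    time.sleep(settle_s)
    return "power_cycle"
```

    In the flight design this decision logic would live in the FPGA; the sketch only illustrates the escalation order, with the power-cycle as the last resort.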

  1. Water-quality models to assess algal community dynamics, water quality, and fish habitat suitability for two agricultural land-use dominated lakes in Minnesota, 2014

    USGS Publications Warehouse

    Smith, Erik A.; Kiesling, Richard L.; Ziegeweid, Jeffrey R.

    2017-07-20

Fish habitat can degrade in many lakes due to summer blue-green algal blooms. Predictive models are needed to better manage and mitigate loss of fish habitat due to these changes. The U.S. Geological Survey (USGS), in cooperation with the Minnesota Department of Natural Resources, developed predictive water-quality models for two agricultural land-use dominated lakes in Minnesota—Madison Lake and Pearl Lake, which are part of Minnesota’s sentinel lakes monitoring program—to assess algal community dynamics, water quality, and fish habitat suitability of these two lakes under recent (2014) meteorological conditions. The interactions of basin processes with these two lakes, through the delivery of nutrient loads, were simulated using CE-QUAL-W2, a carbon-based, laterally averaged, two-dimensional water-quality model that predicts distribution of temperature and oxygen from interactions between nutrient cycling, primary production, and trophic dynamics. The CE-QUAL-W2 models successfully predicted water temperature and dissolved oxygen on the basis of the two metrics of mean absolute error and root mean square error. For Madison Lake, the mean absolute error and root mean square error were 0.53 and 0.68 degree Celsius, respectively, for the vertical temperature profile comparisons; for Pearl Lake, the mean absolute error and root mean square error were 0.71 and 0.95 degree Celsius, respectively, for the vertical temperature profile comparisons. Temperature and dissolved oxygen were key metrics for calibration targets. These calibrated lake models also simulated algal community dynamics and water quality.
The model simulations presented potential explanations for persistently large total phosphorus concentrations in Madison Lake, key differences in nutrient concentrations between these lakes, and summer blue-green algal bloom persistence. Fish habitat suitability simulations for cool-water and warm-water fish indicated that, in general, both lakes contained a large proportion of good-growth habitat and a sustained period of optimal growth habitat in the summer, without any periods of lethal oxythermal habitat. For Madison and Pearl Lakes, examples of important cool-water fish, particularly game fish, include northern pike (Esox lucius), walleye (Sander vitreus), and black crappie (Pomoxis nigromaculatus); examples of important warm-water fish include bluegill (Lepomis macrochirus), largemouth bass (Micropterus salmoides), and smallmouth bass (Micropterus dolomieu). Sensitivity analyses were completed to understand lake response effects through the use of controlled departures on certain calibrated model parameters and input nutrient loads. These sensitivity analyses also operated as land-use change scenarios because alterations in agricultural practices, for example, could potentially increase or decrease nutrient loads.
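
    The two calibration metrics quoted for the temperature-profile comparisons have simple definitions; a minimal illustrative implementation (not the USGS model code) follows.

```python
# Mean absolute error and root mean square error over paired observed and
# simulated values (e.g. vertical temperature profiles in degrees Celsius).

import math

def mean_absolute_error(obs, sim):
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

def root_mean_square_error(obs, sim):
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))
```

    RMSE is always at least as large as MAE; a wide gap between the two indicates a few large mismatches rather than uniformly distributed error, which is why both are commonly reported together as calibration targets.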

  2. [Errors in Peruvian medical journals references].

    PubMed

    Huamaní, Charles; Pacheco-Romero, José

    2009-01-01

References are fundamental in our studies; an adequate selection is as important as an adequate description. The aim was to determine the number of errors in a sample of references found in Peruvian medical journals. We reviewed 515 scientific paper references selected by systematic randomized sampling and corroborated reference information with the original document or its citation in PubMed, LILACS or SciELO-Peru. We found errors in 47.6% (245) of the references, identifying 372 types of errors; the most frequent were errors in presentation style (120), authorship (100) and title (100), mainly due to spelling mistakes (91). The percentage of reference errors was high, and the errors were varied and multiple. We suggest systematic revision of references in the editorial process, as well as extending the discussion on this theme. Keywords: references, periodicals, research, bibliometrics.

  3. Sensitivity Analysis of Repeat Track Estimation Techniques for Detection of Elevation Change in Polar Ice Sheets

    NASA Astrophysics Data System (ADS)

    Harpold, R. E.; Urban, T. J.; Schutz, B. E.

    2008-12-01

    Interest in elevation change detection in the polar regions has increased recently due to concern over the potential sea level rise from the melting of the polar ice caps. Repeat track analysis can be used to estimate elevation change rate by fitting elevation data to model parameters. Several aspects of this method have been tested to improve the recovery of the model parameters. Elevation data from ICESat over Antarctica and Greenland from 2003-2007 are used to test several grid sizes and types, such as grids based on latitude and longitude and grids centered on the ICESat reference groundtrack. Different sets of parameters are estimated, some of which include seasonal terms or alternate types of slopes (linear, quadratic, etc.). In addition, the effects of including crossovers and other solution constraints are evaluated. Simulated data are used to infer potential errors due to unmodeled parameters.

  4. Improving Planck calibration by including frequency-dependent relativistic corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quartin, Miguel; Notari, Alessio, E-mail: mquartin@if.ufrj.br, E-mail: notari@ffn.ub.es

    2015-09-01

The Planck satellite detectors are calibrated in the 2015 release using the 'orbital dipole', which is the time-dependent dipole generated by the Doppler effect due to the motion of the satellite around the Sun. Such an effect also has relativistic time-dependent corrections of relative magnitude 10^−3, due to coupling with the 'solar dipole' (the motion of the Sun relative to the CMB rest frame), which are included in the data calibration by the Planck collaboration. We point out that such corrections are subject to a frequency-dependent multiplicative factor. This factor differs from unity especially at the highest frequencies, relevant for the HFI instrument. Since currently Planck calibration errors are dominated by systematics, to the point that polarization data is currently unreliable at large scales, such a correction can in principle be highly relevant for future data releases.

  5. Predicting protein concentrations with ELISA microarray assays, monotonic splines and Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; Anderson, Kevin K.; White, Amanda M.

Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors.
The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.
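
    The Monte Carlo error-propagation idea can be illustrated with a toy sketch (not the paper's PCLS spline code): measurement noise is pushed through an assumed, already-fitted monotone standard curve, and the spread of the resulting draws yields an asymmetric prediction interval.

```python
# Toy Monte Carlo prediction interval through a monotone standard curve.
# The log-shaped curve below is a hypothetical stand-in for a fitted spline.

import math
import random
import statistics

def predict_with_interval(intensity, inverse_curve, noise_sd,
                          n_draws=2000, seed=0):
    """Median concentration and a 95% Monte Carlo prediction interval."""
    rng = random.Random(seed)
    draws = sorted(inverse_curve(intensity + rng.gauss(0.0, noise_sd))
                   for _ in range(n_draws))
    lo, hi = draws[int(0.025 * n_draws)], draws[int(0.975 * n_draws)]
    return statistics.median(draws), (lo, hi)

# Toy monotone standard curve: intensity = log(1 + conc), so conc = e^i - 1.
inverse = lambda i: math.exp(i) - 1.0
median_conc, (low, high) = predict_with_interval(1.0, inverse, noise_sd=0.1)
# The interval is asymmetric about the median because the curve is nonlinear,
# unlike the symmetric intervals from linearized propagation of error.
```

    The asymmetry of the interval is exactly the behavior the abstract credits to Monte Carlo simulation over propagation of error near the curve's flat regions.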

  6. Estimation of attitude sensor timetag biases

    NASA Technical Reports Server (NTRS)

    Sedlak, J.

    1995-01-01

    This paper presents an extended Kalman filter for estimating attitude sensor timing errors. Spacecraft attitude is determined by finding the mean rotation from a set of reference vectors in inertial space to the corresponding observed vectors in the body frame. Any timing errors in the observations can lead to attitude errors if either the spacecraft is rotating or the reference vectors themselves vary with time. The state vector here consists of the attitude quaternion, timetag biases, and, optionally, gyro drift rate biases. The filter models the timetags as random walk processes: their expectation values propagate as constants and white noise contributes to their covariance. Thus, this filter is applicable to cases where the true timing errors are constant or slowly varying. The observability of the state vector is studied first through an examination of the algebraic observability condition and then through several examples with simulated star tracker timing errors. The examples use both simulated and actual flight data from the Extreme Ultraviolet Explorer (EUVE). The flight data come from times when EUVE had a constant rotation rate, while the simulated data feature large angle attitude maneuvers. The tests include cases with timetag errors on one or two sensors, both constant and time-varying, and with and without gyro bias errors. Due to EUVE's sensor geometry, the observability of the state vector is severely limited when the spacecraft rotation rate is constant. In the absence of attitude maneuvers, the state elements are highly correlated, and the state estimate is unreliable. The estimates are particularly sensitive to filter mistuning in this case. The EUVE geometry, though, is a degenerate case having coplanar sensors and rotation vector. Observability is much improved and the filter performs well when the rate is either varying or noncoplanar with the sensors, as during a slew. 
Even with bad geometry and constant rates, if gyro biases are independently known, the timetag error for a single sensor can be accurately estimated as long as its boresight is not too close to the spacecraft rotation axis.
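
    The random-walk timetag-bias model described above reduces to a trivial propagation step: the bias estimate is carried forward unchanged while its variance grows by the process-noise term. A one-dimensional illustrative sketch (not the flight filter):

```python
# Random-walk state propagation: expectation constant, covariance grows.

def propagate_random_walk(bias, variance, q, dt):
    """Propagate a random-walk bias state over a time step dt.

    bias     -- current estimate (propagates as a constant)
    variance -- current estimate variance
    q        -- white process-noise power (variance growth per unit time)
    """
    return bias, variance + q * dt

bias, var = 0.05, 1e-4           # e.g. a 50 ms timetag bias estimate
bias, var = propagate_random_walk(bias, var, q=1e-6, dt=10.0)
# bias is unchanged; var has grown, reflecting reduced confidence
# between measurement updates.
```

    This is what makes the filter suitable for constant or slowly varying timing errors: between updates the estimate does not drift, but the filter remains willing to revise it.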

  7. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models.
Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
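
    The information-criterion averaging weights referred to above are commonly computed as w_i ∝ exp(−ΔIC_i/2). A minimal sketch shows how quickly the best model saturates the weight, which is the pathology the study attributes to neglecting correlated model errors in the likelihood:

```python
# Model-averaging weights from information-criterion values (AIC/AICc/BIC/
# KIC-style); a standard formula, sketched here for illustration.

import math

def ic_weights(ic_values):
    """Averaging weights w_i proportional to exp(-0.5 * (IC_i - IC_min))."""
    best = min(ic_values)
    raw = [math.exp(-0.5 * (ic - best)) for ic in ic_values]
    total = sum(raw)
    return [r / total for r in raw]

# An IC gap of only 10 already pushes over 99% of the weight onto the best
# model, regardless of whether the data genuinely support that confidence:
weights = ic_weights([100.0, 110.0, 120.0])
```

    Because the weights depend exponentially on the criterion values, any bias in the estimated log likelihood (such as using the measurement-error covariance in place of the total-error covariance) is strongly amplified in the resulting weights.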

  8. Empirical ocean color algorithms and bio-optical properties of the western coastal waters of Svalbard, Arctic

    NASA Astrophysics Data System (ADS)

    Son, Young-Sun; Kim, Hyun-cheol

    2018-05-01

Chlorophyll (Chl) concentration is one of the key indicators identifying changes in the Arctic marine ecosystem. However, current Chl algorithms are not accurate in the Arctic Ocean due to bio-optical properties that differ from those of lower-latitude oceans. In this study, we evaluated the current Chl algorithms and analyzed the cause of the error in the western coastal waters of Svalbard, which are known to be sensitive to climate change. The NASA standard algorithms were shown to overestimate the Chl concentration in the region. This was due to the high non-algal particle (NAP) absorption and colored dissolved organic matter (CDOM) variability at the blue wavelengths. In addition, at lower Chl concentrations (0.1-0.3 mg m-3), chlorophyll-specific absorption coefficients were ∼2.3 times higher than those of other Arctic oceans. This was another reason for the overestimation of Chl concentration. The OC4-based, regionally tuned Svalbard Chl (SC4) algorithm, designed to retrieve more accurate Chl estimates, reduced the mean absolute percentage difference (APD) error from 215% to 49%, the mean relative percentage difference (RPD) error from 212% to 16%, and the normalized root mean square (RMS) error from 211% to 68%. This region has abundant suspended matter due to the melting of tidal glaciers. We also evaluated the performance of total suspended matter (TSM) algorithms. Previously published TSM algorithms generally overestimated the TSM concentration in this region. The Svalbard TSM single-band algorithm for the low TSM range (ST-SB-L) decreased the APD and RPD errors by 52% and 14%, respectively, but the RMS error remained high (105%).

  9. Quantifying the Contributions of Environmental Parameters to Ceres Surface Net Radiation Error in China

    NASA Astrophysics Data System (ADS)

    Pan, X.; Yang, Y.; Liu, Y.; Fan, X.; Shan, L.; Zhang, X.

    2018-04-01

Error source analyses are critical for satellite-retrieved surface net radiation (Rn) products. In this study, we evaluate the Rn error sources in the Clouds and the Earth's Radiant Energy System (CERES) project at 43 sites in China from July to December 2007. The results show that cloud fraction (CF), land surface temperature (LST), atmospheric temperature (AT) and algorithm error dominate the Rn error, with error contributions of -20, 15, 10 and 10 W/m2 (net shortwave (NSW)/longwave (NLW) radiation), respectively. For NSW, the dominant error source is algorithm error (more than 10 W/m2), particularly in spring and summer with abundant cloud. For NLW, because of the algorithm's high sensitivity and the large LST/CF errors, LST and CF are the largest error sources, especially in northern China. AT strongly influences the NLW error in southern China because of the large AT error there. Total precipitable water has only a weak influence on the Rn error, even though the algorithm is highly sensitive to it. To improve Rn quality, the CF and LST (AT) errors in northern (southern) China should be decreased.

  10. The Phylogeny of Rickettsia Using Different Evolutionary Signatures: How Tree-Like is Bacterial Evolution?

    PubMed Central

    Murray, Gemma G. R.; Weinert, Lucy A.; Rhule, Emma L.; Welch, John J.

    2016-01-01

    Rickettsia is a genus of intracellular bacteria whose hosts and transmission strategies are both impressively diverse, and this is reflected in a highly dynamic genome. Some previous studies have described the evolutionary history of Rickettsia as non-tree-like, due to incongruity between phylogenetic reconstructions using different portions of the genome. Here, we reconstruct the Rickettsia phylogeny using whole-genome data, including two new genomes from previously unsampled host groups. We find that a single topology, which is supported by multiple sources of phylogenetic signal, well describes the evolutionary history of the core genome. We do observe extensive incongruence between individual gene trees, but analyses of simulations over a single topology and interspersed partitions of sites show that this is more plausibly attributed to systematic error than to horizontal gene transfer. Some conflicting placements also result from phylogenetic analyses of accessory genome content (i.e., gene presence/absence), but we argue that these are also due to systematic error, stemming from convergent genome reduction, which cannot be accommodated by existing phylogenetic methods. Our results show that, even within a single genus, tests for gene exchange based on phylogenetic incongruence may be susceptible to false positives. PMID:26559010

  11. Supersonic Retropropulsion Experimental Results from the NASA Langley Unitary Plan Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Berry, Scott A.; Rhode, Matthew N.; Edquist, Karl T.; Player, Charles J.

    2011-01-01

    A new supersonic retropropulsion experimental effort, intended to provide code validation data, was recently completed in the Langley Research Center Unitary Plan Wind Tunnel Test Section 2 over the Mach number range from 2.4 to 4.6. The experimental model was designed using insights gained from pre-test computations, which were instrumental for sizing and refining the model to minimize tunnel wall interference and internal flow separation concerns. A 5-in diameter 70-deg sphere-cone forebody with a roughly 10-in long cylindrical aftbody was the baseline configuration selected for this study. The forebody was designed to accommodate up to four 4:1 area ratio supersonic nozzles. Primary measurements for this model were a large number of surface pressures on the forebody and aftbody. Supplemental data included high-speed Schlieren video and internal pressures and temperatures. The run matrix was developed to allow for the quantification of various sources of experimental uncertainty, such as random errors due to run-to-run variations and bias errors due to flow field or model misalignments. Preliminary results and observations from the test are presented, while detailed data and uncertainty analyses are ongoing.

  12. Refined energetic ordering for sulphate-water (n = 3-6) clusters using high-level electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Lambrecht, Daniel S.; McCaslin, Laura; Xantheas, Sotiris S.; Epifanovsky, Evgeny; Head-Gordon, Martin

    2012-10-01

    This work reports refinements of the energetic ordering of the known low-energy structures of sulphate-water clusters SO4^2-(H2O)n (n = 3-6) using high-level electronic structure methods. Coupled cluster singles and doubles with perturbative triples (CCSD(T)) is used in combination with an estimate of basis set effects up to the complete basis set limit using second-order Møller-Plesset theory. Harmonic zero-point energy (ZPE), included at the B3LYP/6-311++G(3df,3pd) level, was found to have a significant effect on the energetic ordering. In fact, we show that the energetic ordering is a result of a delicate balance between the electronic and vibrational energies. Limitations of the ZPE calculations, both due to electronic structure errors, and use of the harmonic approximation, probably constitute the largest remaining errors. Due to the often small energy differences between cluster isomers, and the significant role of ZPE, deuteration can alter the relative energies of low-lying structures, and, when it is applied in conjunction with calculated harmonic ZPEs, even alters the global minimum for n = 5. Experiments on deuterated clusters, as well as more sophisticated vibrational calculations, may therefore be quite interesting.

  13. Impairment assessment of orthogonal frequency division multiplexing over dispersion-managed links in backbone and backhaul networks

    NASA Astrophysics Data System (ADS)

    Tamilarasan, Ilavarasan; Saminathan, Brindha; Murugappan, Meenakshi

    2016-04-01

    The past decade has seen phenomenal use of orthogonal frequency division multiplexing (OFDM) in both the wired and wireless communication domains, and it has also been proposed in the literature as a future-proof technique for implementing flexible resource allocation in cognitive optical networks. Fiber impairment assessment and adaptive compensation become critical in such implementations. A comprehensive analytical model for impairments in OFDM-based fiber links is developed. The proposed model includes the combined impact of laser phase fluctuations, fiber dispersion, self-phase modulation, cross-phase modulation, four-wave mixing (FWM), the nonlinear phase noise due to the interaction of amplified spontaneous emission with fiber nonlinearities, and the photodetector noises. The bit error rate expression for the proposed model is derived based on error vector magnitude estimation. The performance of the proposed model is analysed and compared for dispersion-compensated and uncompensated backbone/backhaul links. The results suggest that OFDM would perform better for uncompensated links than for compensated links due to the negligible FWM effects, and that there is a need for flexible compensation. The proposed model can be employed in cognitive optical networks for accurate assessment of fiber-related impairments.
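The abstract derives a bit-error-rate expression from error vector magnitude (EVM) estimation. The paper's full model is not reproduced here; as a minimal sketch, the commonly used textbook approximation relating RMS EVM (normalized to average symbol power) to the BER of Gray-coded square M-QAM can be written as:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_from_evm(evm_rms, m):
    """Approximate BER of Gray-coded square M-QAM from RMS EVM.

    Standard approximation (an assumption here, not the paper's derivation):
    BER ~ (2 (1 - 1/sqrt(M)) / log2 M) * Q( sqrt( 3 / ((M - 1) EVM_rms^2) ) )
    """
    k = math.log2(m)
    arg = math.sqrt(3.0 / ((m - 1) * evm_rms ** 2))
    return (2.0 * (1.0 - 1.0 / math.sqrt(m)) / k) * q_func(arg)

# Example: 4-QAM at 30% RMS EVM gives a BER on the order of 1e-4
ber = ber_from_evm(0.30, 4)
```

As expected, the estimated BER grows with EVM and, at fixed EVM, with constellation order.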

  14. Sodium Absorption from the Exoplanetary Atmosphere of HD 189733b Detected in the Optical Transmission Spectrum

    NASA Astrophysics Data System (ADS)

    Redfield, Seth; Endl, Michael; Cochran, William D.; Koesterke, Lars

    2008-01-01

    We present the first ground-based detection of sodium absorption in the transmission spectrum of an extrasolar planet. Absorption due to the atmosphere of the extrasolar planet HD 189733b is detected in both lines of the Na I doublet. High spectral resolution observations were taken of 11 transits with the High Resolution Spectrograph (HRS) on the 9.2 m Hobby-Eberly Telescope (HET). The Na I absorption in the transmission spectrum due to HD 189733b is (-67.2 ± 20.7) × 10^-5 deeper in the "narrow" spectral band that encompasses both lines relative to adjacent bands. The 1σ error includes both random and systematic errors, and the detection is >3σ. This amount of relative absorption in Na I for HD 189733b is ~3 times larger than that detected for HD 209458b by Charbonneau et al. (2002) and indicates that these two hot Jupiters may have significantly different atmospheric properties. Based on observations obtained with the Hobby-Eberly Telescope, which is a joint project of the University of Texas at Austin, the Pennsylvania State University, Stanford University, Ludwig-Maximilians-Universität München, and Georg-August-Universität Göttingen.

  15. Malformation syndromes caused by disorders of cholesterol synthesis

    PubMed Central

    Porter, Forbes D.; Herman, Gail E.

    2011-01-01

    Cholesterol homeostasis is critical for normal growth and development. In addition to being a major membrane lipid, cholesterol has multiple biological functions. These roles include being a precursor molecule for the synthesis of steroid hormones, neuroactive steroids, oxysterols, and bile acids. Cholesterol is also essential for the proper maturation and signaling of hedgehog proteins, and thus cholesterol is critical for embryonic development. After birth, most tissues can obtain cholesterol from either endogenous synthesis or exogenous dietary sources, but prior to birth, the human fetal tissues are dependent on endogenous synthesis. Due to the blood-brain barrier, brain tissue cannot utilize dietary or peripherally produced cholesterol. Generally, inborn errors of cholesterol synthesis lead to both a deficiency of cholesterol and increased levels of potentially bioactive or toxic precursor sterols. Over the past couple of decades, a number of human malformation syndromes have been shown to be due to inborn errors of cholesterol synthesis. Herein, we will review clinical and basic science aspects of Smith-Lemli-Opitz syndrome, desmosterolosis, lathosterolosis, HEM dysplasia, X-linked dominant chondrodysplasia punctata, Congenital Hemidysplasia with Ichthyosiform erythroderma and Limb Defects Syndrome, sterol-C-4 methyloxidase-like deficiency, and Antley-Bixler syndrome. PMID:20929975

  16. Project Report: Reducing Color Rivalry in Imagery for Conjugated Multiple Bandpass Filter Based Stereo Endoscopy

    NASA Technical Reports Server (NTRS)

    Ream, Allen

    2011-01-01

    A pair of conjugated multiple bandpass filters (CMBF) can be used to create spatially separated pupils in a traditional lens and imaging sensor system, allowing for the passive capture of stereo video. This method is especially useful for surgical endoscopy, where smaller cameras are needed to provide ample room for manipulating tools while also granting improved visualizations of scene depth. The significant issue in this process is that, due to the complementary nature of the filters, the colors seen through each filter do not match each other, and also differ from colors as seen under a white illumination source. A color correction model was implemented that included optimized filter selection, such that the degree of necessary post-processing correction was minimized, and a chromatic adaptation transformation that attempted to correct the imaged colors' tristimulus values based on the principle of color constancy. Due to fabrication constraints, only dual bandpass filters were feasible. The theoretical average color error after correction between these filters was still above the fusion limit, meaning that rivalry conditions are possible during viewing. This error can be minimized further by designing the filters for a subset of colors corresponding to specific working environments.

  17. Mitigating leakage errors due to cavity modes in a superconducting quantum computer

    NASA Astrophysics Data System (ADS)

    McConkey, T. G.; Béjanin, J. H.; Earnest, C. T.; McRae, C. R. H.; Pagel, Z.; Rinehart, J. R.; Mariantoni, M.

    2018-07-01

    A practical quantum computer requires quantum bit (qubit) operations with low error probabilities in extensible architectures. We study a packaging method that makes it possible to address hundreds of superconducting qubits by means of coaxial Pogo pins. A qubit chip is housed in a superconducting box, where both box and chip dimensions lead to unwanted modes that can interfere with qubit operations. We analyze these interference effects in the context of qubit coherent leakage and qubit decoherence induced by damped modes. We propose two methods, half-wave fencing and antinode pinning, to mitigate the resulting errors by detuning the resonance frequency of the modes from the qubit frequency. We perform electromagnetic field simulations indicating that the resonance frequency of the modes increases with the number of installed pins and can be engineered to be significantly higher than the highest qubit frequency. We estimate that the error probabilities and decoherence rates due to suitably shifted modes in realistic scenarios can be up to two orders of magnitude lower than the state-of-the-art superconducting qubit error and decoherence rates. Our methods can be extended to different types of packages that do not rely on Pogo pins. Conductive bump bonds, for example, can serve the same purpose in qubit architectures based on flip chip technology. Metalized vias, instead, can be used to mitigate modes due to the increasing size of the dielectric substrate on which qubit arrays are patterned.

  18. Impact of temporal upscaling and chemical transport model horizontal resolution on reducing ozone exposure misclassification

    NASA Astrophysics Data System (ADS)

    Xu, Yadong; Serre, Marc L.; Reyes, Jeanette M.; Vizuete, William

    2017-10-01

    We have developed a Bayesian Maximum Entropy (BME) framework that integrates observations from a surface monitoring network and predictions from a Chemical Transport Model (CTM) to create improved exposure estimates that can be resolved into any spatial and temporal resolution. The flexibility of the framework allows for input of data in any choice of time scales and CTM predictions of any spatial resolution, with varying associated degrees of estimation error and cost in terms of implementation and computation. This study quantifies the impact of these choices on exposure estimation error, first by comparing estimation errors when BME relied on ozone concentration data as either an hourly average, the daily maximum 8-h average (DM8A), or the daily 24-h average (D24A). Our analysis found that the use of DM8A and D24A data, although less computationally intensive, reduced estimation error more when compared to the use of hourly data. This was primarily due to the poorer CTM model performance in the hourly average predicted ozone. Our second analysis compared spatial variability and estimation errors when BME relied on CTM predictions with a grid cell resolution of 12 × 12 km2 versus a coarser resolution of 36 × 36 km2. Our analysis found that integrating the finer grid resolution CTM predictions not only reduced estimation error, but also increased the spatial variability in daily ozone estimates by 5 times. This improvement was due to the improved spatial gradients and model performance found in the finer resolved CTM simulation. The integration of observational and model predictions that is permitted in a BME framework continues to be a powerful approach for improving exposure estimates of ambient air pollution. The results of this analysis demonstrate the importance of also understanding model performance variability and its implications on exposure error.

  19. Simulations of Dynamical Friction Including Spatially-Varying Magnetic Fields

    NASA Astrophysics Data System (ADS)

    Bell, G. I.; Bruhwiler, D. L.; Litvinenko, V. N.; Busby, R.; Abell, D. T.; Messmer, P.; Veitzer, S.; Cary, J. R.

    2006-03-01

    A proposed luminosity upgrade to the Relativistic Heavy Ion Collider (RHIC) includes a novel electron cooling section, which would use ~55 MeV electrons to cool fully-ionized 100 GeV/nucleon gold ions. We consider the dynamical friction force exerted on individual ions due to a relevant electron distribution. The electrons may be focused by a strong solenoid field, with sensitive dependence on errors, or by a wiggler field. In the rest frame of the relativistic co-propagating electron and ion beams, where the friction force can be simulated for nonrelativistic motion and electrostatic fields, the Lorentz transform of these spatially-varying magnetic fields includes strong, rapidly-varying electric fields. Previous friction force simulations for unmagnetized electrons or error-free solenoids used a 4th-order Hermite algorithm, which is not well-suited for the inclusion of strong, rapidly-varying external fields. We present here a new algorithm for friction force simulations, using an exact two-body collision model to accurately resolve close interactions between electron/ion pairs. This field-free binary-collision model is combined with a modified Boris push, using an operator-splitting approach, to include the effects of external fields. The algorithm has been implemented in the VORPAL code and successfully benchmarked.

  20. Micro-tensile testing system

    DOEpatents

    Wenski, Edward G [Lenexa, KS

    2007-08-21

    A micro-tensile testing system providing a stand-alone test platform for testing and reporting physical or engineering properties of test samples of materials having thicknesses of approximately between 0.002 inch and 0.030 inch, including, for example, LiGA engineered materials. The testing system is able to perform a variety of static, dynamic, and cyclic tests. The testing system includes a rigid frame and adjustable gripping supports to minimize measurement errors due to deflection or bending under load; serrated grips for securing the extremely small test sample; high-speed laser scan micrometers for obtaining accurate results; and test software for controlling the testing procedure and reporting results.

  1. Micro-tensile testing system

    DOEpatents

    Wenski, Edward G.

    2006-01-10

    A micro-tensile testing system providing a stand-alone test platform for testing and reporting physical or engineering properties of test samples of materials having thicknesses of approximately between 0.002 inch and 0.030 inch, including, for example, LiGA engineered materials. The testing system is able to perform a variety of static, dynamic, and cyclic tests. The testing system includes a rigid frame and adjustable gripping supports to minimize measurement errors due to deflection or bending under load; serrated grips for securing the extremely small test sample; high-speed laser scan micrometers for obtaining accurate results; and test software for controlling the testing procedure and reporting results.

  2. Micro-tensile testing system

    DOEpatents

    Wenski, Edward G [Lenexa, KS

    2007-07-17

    A micro-tensile testing system providing a stand-alone test platform for testing and reporting physical or engineering properties of test samples of materials having thicknesses of approximately between 0.002 inch and 0.030 inch, including, for example, LiGA engineered materials. The testing system is able to perform a variety of static, dynamic, and cyclic tests. The testing system includes a rigid frame and adjustable gripping supports to minimize measurement errors due to deflection or bending under load; serrated grips for securing the extremely small test sample; high-speed laser scan micrometers for obtaining accurate results; and test software for controlling the testing procedure and reporting results.

  3. Real-time auto-adaptive margin generation for MLC-tracked radiotherapy

    NASA Astrophysics Data System (ADS)

    Glitzner, M.; Fast, M. F.; de Senneville, B. Denis; Nill, S.; Oelfke, U.; Lagendijk, J. J. W.; Raaymakers, B. W.; Crijns, S. P. M.

    2017-01-01

    In radiotherapy, abdominal and thoracic sites are candidates for motion tracking. With real-time control it is possible to adjust the multileaf collimator (MLC) position to the target position. However, positions are not perfectly matched, and position errors arise from system delays and the complicated response of the electromechanical MLC system. Although it is possible to compensate for part of these errors by using predictors, residual errors remain and must be compensated to retain target coverage. This work presents a method to statistically describe tracking errors and to automatically derive a patient-specific, per-segment margin to compensate the arising underdosage on-line, i.e. during plan delivery. The statistics of the geometric error between intended and actual machine position are derived using kernel density estimators. Subsequently a margin is calculated on-line according to a selected coverage parameter, which determines the amount of accepted underdosage. The margin is then applied to the actual segment to accommodate the positioning errors in the enlarged segment. The proof-of-concept was tested in an on-line tracking experiment and showed the ability to recover underdosages for two test cases, increasing V90% in the underdosed area by about 47% and 41%, respectively. The dose model used was able to predict the loss of dose due to tracking errors and could be used to infer the necessary margins. The implementation had a running time of 23 ms, which is compatible with the real-time requirements of MLC tracking systems. Its auto-adaptivity to machine and patient characteristics makes the technique a generic yet intuitive candidate to avoid underdosages due to MLC tracking errors.
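The margin derivation described above (kernel density estimate of the tracking error, then a margin chosen to reach a selected coverage) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the Silverman bandwidth rule, the symmetric-margin assumption, and the synthetic error sample are all assumptions.

```python
import math
import numpy as np

_erf = np.vectorize(math.erf)  # scalar math.erf lifted to arrays

def kde_margin(errors, coverage=0.95, grid_pts=400):
    """Margin m such that the KDE-estimated probability of a tracking
    error falling in [-m, m] reaches the requested coverage.
    Uses a 1-D Gaussian KDE with Silverman's bandwidth rule (assumed)."""
    errors = np.asarray(errors, dtype=float)
    h = 1.06 * errors.std(ddof=1) * errors.size ** (-0.2)  # Silverman

    def mix_cdf(x):
        # CDF of the Gaussian-mixture density estimate at scalar x
        return float(np.mean(0.5 * (1.0 + _erf((x - errors) / (h * math.sqrt(2.0))))))

    grid = np.linspace(0.0, np.abs(errors).max() + 4.0 * h, grid_pts)
    probs = np.array([mix_cdf(m) - mix_cdf(-m) for m in grid])
    return float(grid[np.searchsorted(probs, coverage)])

# Synthetic tracking errors (mm); a real system would use measured residuals
rng = np.random.default_rng(0)
margin = kde_margin(rng.normal(0.0, 1.0, 500), coverage=0.95)
```

For roughly standard-normal errors the 95%-coverage margin comes out near 2 mm, slightly widened by the kernel bandwidth.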

  4. Medical error and related factors during internship and residency.

    PubMed

    Ahmadipour, Habibeh; Nahid, Mortazavi

    2015-01-01

    It is difficult to determine the real incidence of medical errors due to the lack of a precise definition of errors, as well as the failure to report them under certain circumstances. We carried out a cross-sectional study in Kerman University of Medical Sciences, Iran in 2013. The participants were selected through the census method. The data were collected using a self-administered questionnaire, which consisted of questions on the participants' demographic data and questions on the medical errors committed. The data were analysed by SPSS 19. It was found that 270 participants had committed medical errors. There was no significant difference in the frequency of errors committed by interns and residents. In the case of residents, the most common error was misdiagnosis; in that of interns, errors related to history-taking and physical examination. Considering that medical errors are common in the clinical setting, the education system should train interns and residents to prevent the occurrence of errors. In addition, the system should develop a positive attitude among them so that they can deal better with medical errors.

  5. Identification of Carbon loss in the production of pilot-scale Carbon nanotube using gauze reactor

    NASA Astrophysics Data System (ADS)

    Wulan, P. P. D. K.; Purwanto, W. W.; Yeni, N.; Lestari, Y. D.

    2018-03-01

    Carbon loss of more than 65% was the major obstacle in Carbon Nanotube (CNT) production using a pilot-scale gauze reactor. The results showed an initial carbon-loss calculation of 27.64%. The carbon-loss calculation was then corrected for several parameters: product flow-rate measurement error, feed flow-rate changes, gas product composition by Gas Chromatography Flame Ionization Detector (GC FID), and carbon particulate collected on glass fiber filters. Error in the product flow rate due to measurement with bubble soap gives a carbon-loss calculation error of about ±4.14%. Changes in the feed flow rate due to CNT growth in the reactor reduce the carbon loss by 4.97%. The detection of secondary hydrocarbons with GC FID during CNT production reduces the carbon loss by 5.14%. Particulates carried by the product stream are very few and correct the carbon loss by only about 0.05%. Taking all these factors into account, the carbon loss in this study is (17.21 ± 4.14)%. Assuming that 4.14% of the carbon loss is due to the measurement error of the product flow rate, the carbon loss is 13.07%. This means that more than 57% of the carbon loss in this study is identified.

  6. Evaluation of the medical malpractice cases concluded in the General Assembly of Council of Forensic Medicine.

    PubMed

    Yazıcı, Yüksel Aydın; Şen, Humman; Aliustaoğlu, Suheyla; Sezer, Yiğit; İnce, Cengiz Haluk

    2015-05-01

    Malpractice is harm that occurs due to defective treatment in the course of providing health services. Not all errors in medical practice are malpractice, nor do all instances of malpractice result in harm and a judicial process. Injuries occurring during treatment may result from a complication or from medical malpractice. This study aims to evaluate the reports on controversial cases brought to trial with a claim of medical malpractice, compiled by the Council of Forensic Medicine. Our study includes all of the cases brought to the Ministry of Justice, Council of Forensic Medicine General Assembly with a claim of medical malpractice over the 11-year period 2000-2011 (n=330). We found that 33.3% of the 330 cases were judged to be "medical malpractice" by the General Assembly. This 33.3% breaks down as follows. 14.2% of cases resulted from treatment errors, such as wrong or incomplete treatment or surgery, use of the wrong medication, delay in reaching a correct diagnosis after the necessary examination, inappropriate medical procedures, and treatments causally linked to an emergent injury to the patient. 9.7% arose from diagnostic errors, such as failure to diagnose, wrong diagnosis, failure to request a consultation, failure to transfer the patient to a higher-level centre, and failure to intervene because a postoperative complication was not recognized in time. 8.8% occurred because of careless intervention, such as lack of necessary care and attention, lack of postoperative follow-up, lack of essential informing, absence when called for a patient, and intervention under suboptimal conditions. 0.3% developed from errors due to inexperience, and 0.3% occurred because of administrative mistakes following malfunction of the healthcare system. Analysing errors properly is very important for getting medical malpractice under control. Examining at which stage of the health service errors occur and who commits them, and keeping regular and proper records of all examinations and treatments, will be a cornerstone for standardizing both occupational and forensic medicine practice.

  7. Optimal interpolation schemes to constrain PM2.5 in regional modeling over the United States

    NASA Astrophysics Data System (ADS)

    Sousan, Sinan Dhia Jameel

    This thesis presents the use of data assimilation with optimal interpolation (OI) to develop atmospheric aerosol concentration estimates for the United States at high spatial and temporal resolutions. Concentration estimates are highly desirable for a wide range of applications, including visibility, climate, and human health. OI is a viable data assimilation method that can be used to improve Community Multiscale Air Quality (CMAQ) model fine particulate matter (PM2.5) estimates. PM2.5 is the mass of solid and liquid particles with diameters less than or equal to 2.5 µm suspended in the gas phase. OI was employed by combining model estimates with satellite and surface measurements. The satellite data assimilation combined 36 x 36 km aerosol concentrations from CMAQ with aerosol optical depth (AOD) measured by MODIS and AERONET over the continental United States for 2002. Posterior model concentrations generated by the OI algorithm were compared with surface PM2.5 measurements to evaluate a number of possible data assimilation parameters, including model error, observation error, and temporal averaging assumptions. Evaluation was conducted separately for six geographic U.S. regions in 2002. Variability in model error and MODIS biases limited the effectiveness of a single data assimilation system for the entire continental domain. The best combinations of four settings and three averaging schemes led to a domain-averaged improvement in fractional error from 1.2 to 0.97 and from 0.99 to 0.89 at respective IMPROVE and STN monitoring sites. For 38% of OI results, MODIS OI degraded the forward model skill due to biases and outliers in MODIS AOD. Surface data assimilation combined 36 × 36 km aerosol concentrations from the CMAQ model with surface PM2.5 measurements over the continental United States for 2002. The model error covariance matrix was constructed by using the observational method. 
The observation error covariance matrix included site representation that scaled the observation error by land use (i.e. urban or rural locations). In theory, urban locations should have less effect on surrounding areas than rural sites, which can be controlled using site representation error. The annual evaluations showed substantial improvements in model performance with increases in the correlation coefficient from 0.36 (prior) to 0.76 (posterior), and decreases in the fractional error from 0.43 (prior) to 0.15 (posterior). In addition, the normalized mean error decreased from 0.36 (prior) to 0.13 (posterior), and the RMSE decreased from 5.39 µg m-3 (prior) to 2.32 µg m-3 (posterior). OI decreased model bias for both large spatial areas and point locations, and could be extended to more advanced data assimilation methods. The current work will be applied to a five year (2000-2004) CMAQ simulation aimed at improving aerosol model estimates. The posterior model concentrations will be used to inform exposure studies over the U.S. that relate aerosol exposure to mortality and morbidity rates. Future improvements for the OI techniques used in the current study will include combining both surface and satellite data to improve posterior model estimates. Satellite data have high spatial and temporal resolutions in comparison to surface measurements, which are scarce but more accurate than model estimates. The satellite data are subject to noise affected by location and season of retrieval. The implementation of OI to combine satellite and surface data sets has the potential to improve posterior model estimates for locations that have no direct measurements.
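The optimal-interpolation step that combines the CMAQ prior with surface or satellite observations is the standard analysis update. A minimal sketch with hypothetical two-cell numbers (the thesis's actual covariance construction via the observational method and the land-use representation scaling are not reproduced here):

```python
import numpy as np

def oi_update(x_b, B, y, H, R):
    """Optimal-interpolation analysis step.

    x_a = x_b + K (y - H x_b),  with gain  K = B H^T (H B H^T + R)^-1,
    where x_b is the prior state with error covariance B, y the observations
    with error covariance R, and H the observation operator.
    """
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return x_b + K @ (y - H @ x_b)

# Toy example: two grid cells, cell 0 observed directly (values hypothetical)
x_b = np.array([10.0, 12.0])              # prior PM2.5 estimates (ug/m3)
B = np.array([[4.0, 2.0], [2.0, 4.0]])    # prior error covariance
H = np.array([[1.0, 0.0]])                # observe cell 0 only
R = np.array([[1.0]])                     # observation error variance
y = np.array([14.0])                      # surface measurement
x_a = oi_update(x_b, B, y, H, R)
```

The observed cell is pulled toward the measurement, and the correlated unobserved cell is adjusted with it through the off-diagonal term of B.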

  8. The evolution of Crew Resource Management training in commercial aviation

    NASA Technical Reports Server (NTRS)

    Helmreich, R. L.; Merritt, A. C.; Wilhelm, J. A.

    1999-01-01

    In this study, we describe changes in the nature of Crew Resource Management (CRM) training in commercial aviation, including its shift from cockpit to crew resource management. Validation of the impact of CRM is discussed. Limitations of CRM, including lack of cross-cultural generality are considered. An overarching framework that stresses error management to increase acceptance of CRM concepts is presented. The error management approach defines behavioral strategies taught in CRM as error countermeasures that are employed to avoid error, to trap errors committed, and to mitigate the consequences of error.

  9. Evidence Report: Risk of Performance Errors Due to Training Deficiencies

    NASA Technical Reports Server (NTRS)

    Barshi, Immanuel

    2012-01-01

    The Risk of Performance Errors Due to Training Deficiencies is identified by the National Aeronautics and Space Administration (NASA) Human Research Program (HRP) as a recognized risk to human health and performance in space. The HRP Program Requirements Document (PRD) defines these risks. This Evidence Report provides a summary of the evidence that has been used to identify and characterize this risk. Given that training content, timing, intervals, and delivery methods must support crew task performance, and given that training paradigms will be different for long-duration missions with increased crew autonomy, there is a risk that operators will lack the skills or knowledge necessary to complete critical tasks, resulting in flight and ground crew errors and inefficiencies, failed mission and program objectives, and an increase in crew injuries.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Covington, E; Younge, K; Chen, X

    Purpose: To evaluate the effectiveness of an automated plan check tool to improve first-time plan quality as well as standardize and document performance of physics plan checks. Methods: The Plan Checker Tool (PCT) uses the Eclipse Scripting API to check and compare data from the treatment planning system (TPS) and treatment management system (TMS). PCT was created to improve first-time plan quality, reduce patient delays, increase efficiency of our electronic workflow, and standardize and partially automate plan checks in the TPS. A framework was developed which can be configured with different reference values and types of checks. One example is the prescribed dose check, where PCT flags the user when the planned dose and the prescribed dose disagree. PCT includes a comprehensive checklist of automated and manual checks that are documented when performed by the user. A PDF report is created and automatically uploaded into the TMS. Prior to and during PCT development, errors caught during plan checks and patient delays were tracked in order to prioritize which checks should be automated, and the most common and significant errors were determined. Results: Nineteen of 33 checklist items were automated, with data extracted by the PCT. These include checks for prescription, reference point, and machine scheduling errors, which are three of the top six causes of patient delays related to physics and dosimetry. Since the clinical roll-out, no delays have been due to errors that are automatically flagged by the PCT. Development continues to automate the remaining checks. Conclusion: With PCT, 57% of the physics plan checklist has been partially or fully automated. Treatment delays have declined since release of the PCT for clinical use. By tracking delays and errors, we have been able to measure the effectiveness of automating checks and are using this information to prioritize future development. This project was supported in part by P01CA059827.
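The prescribed-dose check mentioned above compares a value from the TPS against the prescription and flags disagreement. A plain-Python stand-in for that one check follows; the real PCT reads these values through the Eclipse Scripting API (C#), so the function name, tolerance, and return shape here are all illustrative assumptions.

```python
def check_prescribed_dose(planned_dose_gy, prescribed_dose_gy, rel_tol=1e-3):
    """Flag a disagreement between the plan's total dose and the prescription.

    Hypothetical stand-in for one PCT checklist item: returns ("PASS", "")
    when the values agree within rel_tol, else ("FLAG", message).
    """
    if abs(planned_dose_gy - prescribed_dose_gy) > rel_tol * prescribed_dose_gy:
        msg = f"planned {planned_dose_gy} Gy != prescribed {prescribed_dose_gy} Gy"
        return ("FLAG", msg)
    return ("PASS", "")
```

Each such check contributes one automated line to the checklist report; a mismatch is surfaced to the user instead of relying on a manual comparison.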

  11. Modeling misidentification errors in capture-recapture studies using photographic identification of evolving marks

    USGS Publications Warehouse

    Yoshizaki, J.; Pollock, K.H.; Brownie, C.; Webster, R.A.

    2009-01-01

    Misidentification of animals is potentially important when naturally existing features (natural tags) are used to identify individual animals in a capture-recapture study. Photographic identification (photoID) typically uses photographic images of animals' naturally existing features as tags (photographic tags) and is subject to two main causes of identification errors: those related to quality of photographs (non-evolving natural tags) and those related to changes in natural marks (evolving natural tags). The conventional methods for analysis of capture-recapture data do not account for identification errors, and to do so requires a detailed understanding of the misidentification mechanism. Focusing on the situation where errors are due to evolving natural tags, we propose a misidentification mechanism and outline a framework for modeling the effect of misidentification in closed population studies. We introduce methods for estimating population size based on this model. Using a simulation study, we show that conventional estimators can seriously overestimate population size when errors due to misidentification are ignored, and that, in comparison, our new estimators have better properties except in cases with low capture probabilities (<0.2) or low misidentification rates (<2.5%). © 2009 by the Ecological Society of America.
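The overestimation mechanism the abstract describes can be reproduced with a toy two-sample (Lincoln-Petersen) simulation: when an evolving mark causes a true recapture to go unrecognized, recaptures are undercounted and the conventional estimator inflates. All parameter values below are illustrative, not from the paper.

```python
import random

random.seed(1)

def lincoln_petersen(N=500, p=0.4, misid=0.15, reps=1000):
    """Mean two-sample abundance estimate when an evolving mark makes a true
    recapture go unrecognized with probability `misid` (illustrative values)."""
    ests = []
    for _ in range(reps):
        caught1 = [random.random() < p for _ in range(N)]
        caught2 = [random.random() < p for _ in range(N)]
        n1, n2 = sum(caught1), sum(caught2)
        # true recaptures, each recognized only if the mark did not change
        m2 = sum(1 for a, b in zip(caught1, caught2)
                 if a and b and random.random() >= misid)
        ests.append(n1 * n2 / max(m2, 1))
    return sum(ests) / len(ests)

biased = lincoln_petersen(misid=0.15)   # misidentification ignored -> too high
clean = lincoln_petersen(misid=0.0)     # no tag evolution -> near N = 500
```

With a 15% chance of an unrecognized recapture, the mean estimate lands near N/(1 - 0.15), roughly 590 animals for a true population of 500, matching the direction of bias the paper reports.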

  12. Uncertainty analysis of thermocouple measurements used in normal and abnormal thermal environment experiments at Sandia's Radiant Heat Facility and Lurance Canyon Burn Site.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakos, James Thomas

    2004-04-01

    It would not be possible to confidently qualify weapon systems performance or validate computer codes without knowing the uncertainty of the experimental data used. This report provides uncertainty estimates associated with thermocouple data for temperature measurements from two of Sandia's large-scale thermal facilities. These two facilities (the Radiant Heat Facility (RHF) and the Lurance Canyon Burn Site (LCBS)) routinely gather data from normal and abnormal thermal environment experiments. They are managed by Fire Science & Technology Department 09132. Uncertainty analyses were performed for several thermocouple (TC) data acquisition systems (DASs) used at the RHF and LCBS. These analyses apply to Type K, chromel-alumel thermocouples of various types: fiberglass sheathed TC wire, mineral-insulated, metal-sheathed (MIMS) TC assemblies, and are easily extended to other TC materials (e.g., copper-constantan). Several DASs were analyzed: (1) A Hewlett-Packard (HP) 3852A system, and (2) several National Instrument (NI) systems. The uncertainty analyses were performed on the entire system from the TC to the DAS output file. Uncertainty sources include TC mounting errors, ANSI standard calibration uncertainty for Type K TC wire, potential errors due to temperature gradients inside connectors, extension wire uncertainty, DAS hardware uncertainties including noise, common mode rejection ratio, digital voltmeter accuracy, mV to temperature conversion, analog to digital conversion, and other possible sources. Typical results for 'normal' environments (e.g., maximum of 300-400 K) showed the total uncertainty to be about ±1% of the reading in absolute temperature. In high temperature or high heat flux ('abnormal') thermal environments, total uncertainties range up to ±2-3% of the reading (maximum of 1300 K). The higher uncertainties in abnormal thermal environments are caused by increased errors due to the effects of imperfect TC attachment to the test item. 'Best practices' are provided in Section 9 to help the user to obtain the best measurements possible.
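The way independent uncertainty sources combine into a total of roughly ±1% of reading can be illustrated with a root-sum-square budget. The individual source magnitudes below are made-up placeholders, not the figures from the Sandia report.

```python
import math

# Root-sum-square (RSS) combination of independent 1-sigma uncertainty sources,
# expressed as percent of reading in absolute temperature. The source values
# below are illustrative placeholders, not the report's figures.
sources_pct = {
    "ANSI Type K wire calibration": 0.75,
    "TC mounting / attachment": 0.40,
    "extension wire": 0.20,
    "DAS hardware (noise, CMRR, DVM, A/D)": 0.30,
    "mV-to-temperature conversion": 0.10,
}

total_pct = math.sqrt(sum(u ** 2 for u in sources_pct.values()))
reading_K = 350.0                       # a 'normal' environment reading
uncertainty_K = reading_K * total_pct / 100.0   # about +/- 3.3 K here
```

With these placeholder magnitudes the RSS total comes out near 0.93% of reading, consistent in scale with the "about ±1%" the abstract quotes for normal environments.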

  13. MO-C-17A-04: Forecasting Longitudinal Changes in Oropharyngeal Tumor Morphology Throughout the Course of Head and Neck Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yock, A; UT Graduate School of Biomedical Sciences, Houston, TX; Rao, A

    2014-06-15

    Purpose: To generate, evaluate, and compare models that predict longitudinal changes in tumor morphology throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe the size, shape, and position of 35 oropharyngeal GTVs at each treatment fraction during intensity-modulated radiation therapy. The feature vectors comprised the coordinates of the GTV centroids and one of two shape descriptors. One shape descriptor was based on radial distances between the GTV centroid and 614 GTV surface landmarks. The other was based on a spherical harmonic decomposition of these distances. Feature vectors over the course of therapy were described using static, linear, and mean models. The error of these models in forecasting GTV morphology was evaluated with leave-one-out cross-validation, and their accuracy was compared using Wilcoxon signed-rank tests. The effect of adjusting model parameters at 1, 2, 3, or 5 time points (adjustment points) was also evaluated. Results: The addition of a single adjustment point to the static model decreased the median error in forecasting the position of GTV surface landmarks by 1.2 mm (p<0.001). Additional adjustment points further decreased forecast error by about 0.4 mm each. The linear model decreased forecast error compared to the static model for feature vectors based on both shape descriptors (0.2 mm), while the mean model did so only for those based on the inter-landmark distances (0.2 mm). The decrease in forecast error due to adding adjustment points was greater than that due to model selection. Both effects diminished with subsequent adjustment points. Conclusion: Models of tumor morphology that include information from prior patients and/or prior treatment fractions are able to predict the tumor surface at each treatment fraction during radiation therapy. 
    The predicted tumor morphology can be compared with patient anatomy or dose distributions, opening the possibility of anticipatory re-planning. American Legion Auxiliary Fellowship; The University of Texas Graduate School of Biomedical Sciences at Houston.
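A toy version of the static-versus-linear comparison, using a single scalar morphology feature (a mean centroid-to-surface distance) and leave-one-out evaluation. The shrinkage rate, noise levels, and cohort size are invented for illustration; the paper's models operate on full 614-landmark feature vectors.

```python
import random

random.seed(0)

# Toy stand-in for the static vs. linear morphology models: one scalar feature
# (mean centroid-to-surface distance, mm) per fraction, shrinking linearly
# with noise. Shrinkage rate and noise levels are assumptions.
FRACTIONS, TRUE_SLOPE = 35, -0.05       # mm lost per fraction (assumed)
patients = []
for _ in range(10):
    base = 20.0 + random.gauss(0.0, 1.0)    # patient-specific baseline, mm
    patients.append([base + TRUE_SLOPE * f + random.gauss(0.0, 0.2)
                     for f in range(FRACTIONS)])

def loo_error(model):
    """Leave-one-out mean absolute forecast error over all fractions."""
    errs = []
    for i, held_out in enumerate(patients):
        train = [p for j, p in enumerate(patients) if j != i]
        # population slope = mean per-fraction change over training patients
        slope = sum(p[-1] - p[0] for p in train) / (len(train) * (FRACTIONS - 1))
        for f in range(FRACTIONS):
            pred = held_out[0] + (slope * f if model == "linear" else 0.0)
            errs.append(abs(pred - held_out[f]))
    return sum(errs) / len(errs)

static_err = loo_error("static")   # no update after fraction 1
linear_err = loo_error("linear")   # population trend from the other patients
```

As in the paper, a model that borrows trend information from prior patients beats the static (fraction-1 only) forecast once the tumor actually changes over the course of therapy.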

  14. The Number of Patients and Events Required to Limit the Risk of Overestimation of Intervention Effects in Meta-Analysis—A Simulation Study

    PubMed Central

    Thorlund, Kristian; Imberger, Georgina; Walsh, Michael; Chu, Rong; Gluud, Christian; Wetterslev, Jørn; Guyatt, Gordon; Devereaux, Philip J.; Thabane, Lehana

    2011-01-01

    Background Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of overestimation, theoretical considerations suggest that random error may be an equal or more frequent cause. The independent impact of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been validated. Methods We simulated a comprehensive array of meta-analysis scenarios where no intervention effect existed (i.e., relative risk reduction (RRR) = 0%) or where a small but possibly unimportant effect existed (RRR = 10%). We constructed different scenarios by varying the control group risk, the degree of heterogeneity, and the distribution of trial sample sizes. For each scenario, we calculated the probability of observing overestimates of RRR>20% and RRR>30% for each cumulative 500 patients and 50 events. We calculated the cumulative number of patients and events required to reduce the probability of overestimation of intervention effect to 10%, 5%, and 1%. We calculated the optimal information size for each of the simulated scenarios and explored whether meta-analyses that surpassed their optimal information size had sufficient protection against overestimation of intervention effects due to random error. Results The risk of overestimation of intervention effects was usually high when the number of patients and events was small and this risk decreased exponentially over time as the number of patients and events increased. The number of patients and events required to limit the risk of overestimation depended considerably on the underlying simulation settings. 
Surpassing the optimal information size generally provided sufficient protection against overestimation. Conclusions Random errors are a frequent cause of overestimation of intervention effects in meta-analyses. Surpassing the optimal information size will provide sufficient protection against overestimation. PMID:22028777
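A minimal Monte Carlo sketch of the central point, that random error alone produces overestimates when patients and events are few: with a true RRR of 0%, small pooled samples frequently show an observed RRR above 20%, while large ones essentially never do. The risks, sample sizes, and continuity correction are illustrative simplifications of the paper's simulation design.

```python
import random

random.seed(42)

# True RRR = 0: control and intervention risk both 10%. Pool events across a
# meta-analysis and ask how often the observed RRR exceeds 20% by chance alone.
def prob_overestimate(n_per_arm, reps=500, risk=0.10, cutoff=0.20):
    hits = 0
    for _ in range(reps):
        e_c = sum(random.random() < risk for _ in range(n_per_arm))
        e_i = sum(random.random() < risk for _ in range(n_per_arm))
        rr = (e_i + 0.5) / (e_c + 0.5)      # continuity-corrected risk ratio
        if 1.0 - rr > cutoff:               # observed RRR > 20%
            hits += 1
    return hits / reps

p_small = prob_overestimate(500)    # few patients (~50 events per arm)
p_large = prob_overestimate(5000)   # many patients (~500 events per arm)
```

With roughly 50 events per arm the chance of a spurious RRR > 20% is in the neighborhood of 10-15%; with roughly 500 events per arm it is negligible, mirroring the exponential decay of risk with accumulating information described above.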

  15. Diffraction analysis and evaluation of several focus- and track-error detection schemes for magneto-optical disk systems

    NASA Technical Reports Server (NTRS)

    Bernacki, Bruce E.; Mansuripur, M.

    1992-01-01

    A commonly used tracking method on pre-grooved magneto-optical (MO) media is the push-pull technique, and the astigmatic method is a popular focus-error detection approach. These two methods are analyzed using DIFFRACT, a general-purpose scalar diffraction modeling program, to observe the effects on the error signals due to focusing lens misalignment, Seidel aberrations, and optical crosstalk (feedthrough) between the focusing and tracking servos. Using the results of the astigmatic/push-pull system as a basis for comparison, a novel focus/track-error detection technique that utilizes a ring toric lens is evaluated as well as the obscuration method (focus error detection only).
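The two error signals under analysis have standard textbook forms that can be written down directly from detector intensities. This sketch uses idealized normalized signals from a quadrant detector and a two-segment detector; it is not the DIFFRACT diffraction model, which is what actually captures the aberration and feedthrough effects studied in the paper.

```python
# Textbook error-signal definitions for the two detection schemes analyzed
# above, computed from detector segment intensities.
def astigmatic_fes(a, b, c, d):
    """Astigmatic focus-error signal from quadrant intensities A..D:
    diagonal difference normalized by total power."""
    return ((a + c) - (b + d)) / (a + b + c + d)

def push_pull_tes(left, right):
    """Push-pull track-error signal from the two detector halves."""
    return (left - right) / (left + right)

# In focus and on track, both normalized signals vanish; defocus elongates the
# astigmatic spot along one diagonal and the FES goes positive or negative.
fes_in_focus = astigmatic_fes(1.0, 1.0, 1.0, 1.0)
tes_on_track = push_pull_tes(0.5, 0.5)
fes_defocused = astigmatic_fes(1.2, 0.8, 1.2, 0.8)
```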

  16. A high accuracy magnetic heading system composed of fluxgate magnetometers and a microcomputer

    NASA Astrophysics Data System (ADS)

    Liu, Sheng-Wu; Zhang, Zhao-Nian; Hung, James C.

    The authors present a magnetic heading system consisting of two fluxgate magnetometers and a single-chip microcomputer. The system, when compared to gyro compasses, is smaller in size, lighter in weight, simpler in construction, quicker in reaction time, free from drift, and more reliable. Using a microcomputer in the system, heading error due to compass deviation, sensor offsets, scale factor uncertainty, and sensor tilts can be compensated with the help of an error model. The laboratory test of a typical system showed that the accuracy of the system was improved from more than 8 deg error without error compensation to less than 0.3 deg error with compensation.
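The offset and scale-factor compensation the microcomputer performs can be sketched as follows. The calibration coefficients are invented, and the error model is reduced to per-axis offset and scale terms (the tilt and compass-deviation terms in the real system are omitted).

```python
import math

# Sketch of the kind of error model a microcomputer can apply to fluxgate
# outputs: remove per-axis offsets and scale-factor errors before computing
# heading. Coefficients here are made-up calibration values.
def heading_deg(bx, by, offset=(0.03, -0.02), scale=(1.01, 0.99)):
    x = (bx - offset[0]) / scale[0]
    y = (by - offset[1]) / scale[1]
    return math.degrees(math.atan2(y, x)) % 360.0

# A true 30-degree heading measured through the distorted sensor:
true_hdg = 30.0
bx = math.cos(math.radians(true_hdg)) * 1.01 + 0.03
by = math.sin(math.radians(true_hdg)) * 0.99 - 0.02
compensated = heading_deg(bx, by)                  # recovers the true heading
raw = math.degrees(math.atan2(by, bx)) % 360.0     # off by a couple of degrees
```

Inverting the offset and scale model recovers the heading exactly in this noiseless sketch, which is the mechanism behind the 8-degree-to-0.3-degree improvement quoted in the abstract.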

  17. Towards a robust BCI: error potentials and online learning.

    PubMed

    Buttfield, Anna; Ferrez, Pierre W; Millán, José del R

    2006-06-01

    Recent advances in the field of brain-computer interfaces (BCIs) have shown that BCIs have the potential to provide a powerful new channel of communication, completely independent of muscular and nervous systems. However, while there have been successful laboratory demonstrations, there are still issues that need to be addressed before BCIs can be used by nonexperts outside the laboratory. At IDIAP Research Institute, we have been investigating several areas that we believe will allow us to improve the robustness, flexibility, and reliability of BCIs. One area is recognition of cognitive error states, that is, identifying errors through the brain's reaction to mistakes. The production of these error potentials (ErrP) in reaction to an error made by the user is well established. We have extended this work by identifying a similar but distinct ErrP that is generated in response to an error made by the interface (a misinterpretation of a command that the user has given). This ErrP can be satisfactorily identified in single trials and can be demonstrated to improve the theoretical performance of a BCI. A second area of research is online adaptation of the classifier. BCI signals change over time, both between sessions and within a single session, due to a number of factors. This means that a classifier trained on data from a previous session will probably not be optimal for a new session. In this paper, we present preliminary results from our investigations into supervised online learning that can be applied in the initial training phase. We also discuss the future direction of this research, including the combination of these two currently separate issues to create a potentially very powerful BCI.

  18. Truths, errors, and lies around "reflex sympathetic dystrophy" and "complex regional pain syndrome".

    PubMed

    Ochoa, J L

    1999-10-01

    The shifting paradigm of reflex sympathetic dystrophy-sympathetically maintained pains-complex regional pain syndrome is characterized by vestigial truths and understandable errors, but also unjustifiable lies. It is true that patients with organically based neuropathic pain harbor unquestionable and physiologically demonstrable evidence of nerve fiber dysfunction leading to a predictable clinical profile with stereotyped temporal evolution. In turn, patients with psychogenic pseudoneuropathy, sustained by conversion-somatization-malingering, not only lack physiological evidence of structural nerve fiber disease but display a characteristically atypical, half-subjective, psychophysical sensory-motor profile. The objective vasomotor signs may have any variety of neurogenic, vasogenic, and psychogenic origins. Neurological differential diagnosis of "neuropathic pain" versus pseudoneuropathy is straightforward provided that stringent requirements of neurological semeiology are not bypassed. Embarrassing conceptual errors explain the assumption that there exists a clinically relevant "sympathetically maintained pain" status. Errors include historical misinterpretation of vasomotor signs in symptomatic body parts, and misconstruing symptomatic relief after "diagnostic" sympathetic blocks, due to lack of consideration of the placebo effect, which explains the outcome. It is a lie that sympatholysis may specifically cure patients with unqualified "reflex sympathetic dystrophy." This was already stated by the father of sympathectomy, René Leriche, more than half a century ago. As extrapolated from observations in animals with gross experimental nerve injury, adducing hypothetical, untestable, secondary central neuron sensitization to explain psychophysical sensory-motor complaints displayed by patients with blatantly absent nerve fiber injury, is not an error, but a lie. 
    While conceptual errors are not only forgivable but natural to inexact medical science, lies, particularly when entrepreneurially inspired, are condemnable and call for peer intervention.

  19. Autoregressive Modeling of Drift and Random Error to Characterize a Continuous Intravascular Glucose Monitoring Sensor.

    PubMed

    Zhou, Tony; Dickson, Jennifer L; Geoffrey Chase, J

    2018-01-01

    Continuous glucose monitoring (CGM) devices have been effective in managing diabetes and offer potential benefits for use in the intensive care unit (ICU). Use of CGM devices in the ICU has been limited, primarily due to the higher point accuracy errors over currently used traditional intermittent blood glucose (BG) measures. General models of CGM errors, including drift and random errors, are lacking, but would enable better design of protocols to utilize these devices. This article presents an autoregressive (AR) based modeling method that separately characterizes the drift and random noise of the GlySure CGM sensor (GlySure Limited, Oxfordshire, UK). Clinical sensor data (n = 33) and reference measurements were used to generate 2 AR models to describe sensor drift and noise. These models were used to generate 100 Monte Carlo simulations based on reference blood glucose measurements. These were then compared to the original CGM clinical data using mean absolute relative difference (MARD) and a Trend Compass. The point accuracy MARD was very similar between simulated and clinical data (9.6% vs 9.9%). A Trend Compass was used to assess trend accuracy, and found simulated and clinical sensor profiles were similar (simulated trend index 11.4° vs clinical trend index 10.9°). The model and method accurately represents cohort sensor behavior over patients, providing a general modeling approach to any such sensor by separately characterizing each type of error that can arise in the data. Overall, it enables better protocol design based on accurate expected CGM sensor behavior, as well as enabling the analysis of what level of each type of sensor error would be necessary to obtain desired glycemic control safety and performance with a given protocol.
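The separate-drift-and-noise idea can be sketched with a single AR(1) model fitted to sensor-minus-reference residuals and then resampled by Monte Carlo, with MARD computed against the reference. The data here are synthetic, and this one-model sketch is a simplification of the paper's two-model (drift plus noise) method and Trend Compass evaluation.

```python
import random

random.seed(7)

# Synthetic reference glucose trace (mmol/L) and a sensor trace whose error is
# a true AR(1) process (phi = 0.9); the fit should recover something close.
ref = [6.0 + 2.0 * (i % 20) / 20 for i in range(200)]
resid = [0.0]
for _ in range(199):
    resid.append(0.9 * resid[-1] + random.gauss(0.0, 0.15))
cgm = [r + e for r, e in zip(ref, resid)]

# Least-squares AR(1) fit to the sensor-minus-reference residual series.
e = [c - r for c, r in zip(cgm, ref)]
phi = sum(a * b for a, b in zip(e[1:], e[:-1])) / sum(a * a for a in e[:-1])
sigma = (sum((e[i + 1] - phi * e[i]) ** 2 for i in range(len(e) - 1))
         / (len(e) - 1)) ** 0.5

def simulate_mard(reps=200):
    """Monte Carlo new residual traces from the fitted model; mean MARD."""
    mards = []
    for _ in range(reps):
        x, sim = 0.0, []
        for r in ref:
            x = phi * x + random.gauss(0.0, sigma)
            sim.append(r + x)
        mards.append(sum(abs(s - r) / r for s, r in zip(sim, ref)) / len(ref))
    return sum(mards) / len(mards)

mard_sim = simulate_mard()
```

Matching the simulated MARD to the clinical MARD (9.6% versus 9.9% in the paper) is the kind of check this sketch enables once the error model is fitted.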

  20. Medication administration errors from a nursing viewpoint: a formal consensus of definition and scenarios using a Delphi technique.

    PubMed

    Shawahna, Ramzi; Masri, Dina; Al-Gharabeh, Rawan; Deek, Rawan; Al-Thayba, Lama; Halaweh, Masa

    2016-02-01

    To develop and achieve formal consensus on a definition of medication administration errors and scenarios that should or should not be considered as medication administration errors in hospitalised patient settings. Medication administration errors occur frequently in hospitalised patient settings. Currently, there is no formal consensus on a definition of medication administration errors or scenarios that should or should not be considered as medication administration errors. This was a descriptive study using Delphi technique. A panel of experts (n = 50) recruited from major hospitals, nursing schools and universities in Palestine took part in the study. Three Delphi rounds were followed to achieve consensus on a proposed definition of medication administration errors and a series of 61 scenarios representing potential medication administration error situations formulated into a questionnaire. In the first Delphi round, key contact nurses' views on medication administration errors were explored. In the second Delphi round, consensus was achieved to accept the proposed definition of medication administration errors and to include 36 (59%) scenarios and exclude 1 (1·6%) as medication administration errors. In the third Delphi round, consensus was achieved to consider further 14 (23%) and exclude 2 (3·3%) as medication administration errors while the remaining eight (13·1%) were considered equivocal. Of the 61 scenarios included in the Delphi process, experts decided to include 50 scenarios as medication administration errors, exclude three scenarios and include or exclude eight scenarios depending on the individual clinical situation. Consensus on a definition and scenarios representing medication administration errors can be achieved using formal consensus techniques. 
    Researchers should be aware that using different definitions of medication administration errors, or including or excluding particular medication administration error situations, could significantly affect the rate of medication administration errors reported in their studies. Consensual definitions and medication administration error situations can be used in future epidemiology studies investigating medication administration errors in hospitalised patient settings, which may permit and promote direct comparisons of different studies. © 2015 John Wiley & Sons Ltd.

  1. QUANTIFYING UNCERTAINTY DUE TO RANDOM ERRORS FOR MOMENT ANALYSES OF BREAKTHROUGH CURVES

    EPA Science Inventory

    The uncertainty in moments calculated from breakthrough curves (BTCs) is investigated as a function of random measurement errors in the data used to define the BTCs. The method presented assumes moments are calculated by numerical integration using the trapezoidal rule, and is t...
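The approach can be sketched as follows: compute the first temporal moment of a BTC by the trapezoidal rule, then propagate random measurement error through the calculation by Monte Carlo. The curve shape and noise level are invented for illustration.

```python
import math
import random

random.seed(3)

def trapz(y, x):
    """Trapezoidal-rule integral of samples y over abscissae x."""
    return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2
               for i in range(len(y) - 1))

# Gaussian-shaped breakthrough curve (center 20, spread 4, arbitrary units).
t = [0.5 * i for i in range(101)]
c_true = [math.exp(-((ti - 20.0) ** 2) / (2 * 4.0 ** 2)) for ti in t]

def first_moment(c):
    """Mean arrival time: ratio of first to zeroth temporal moment."""
    return trapz([ci * ti for ci, ti in zip(c, t)], t) / trapz(c, t)

# Monte Carlo propagation of random (zero-mean) concentration error.
noise_sd = 0.02
samples = []
for _ in range(500):
    noisy = [ci + random.gauss(0.0, noise_sd) for ci in c_true]
    samples.append(first_moment(noisy))
mean_m1 = sum(samples) / len(samples)
sd_m1 = (sum((s - mean_m1) ** 2 for s in samples) / len(samples)) ** 0.5
```

The spread `sd_m1` is the moment uncertainty induced purely by random measurement error, the quantity the EPA abstract sets out to quantify.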

  2. 22 CFR 34.18 - Waivers of indebtedness.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... known through the exercise of due diligence that an error existed but failed to take corrective action... elapsed between the erroneous payment and discovery of the error and notification of the employee; (D... to duty because of disability (supported by an acceptable medical certificate); and (D) Whether...

  3. Working memory load impairs the evaluation of behavioral errors in the medial frontal cortex.

    PubMed

    Maier, Martin E; Steinhauser, Marco

    2017-10-01

    Early error monitoring in the medial frontal cortex enables error detection and the evaluation of error significance, which helps prioritize adaptive control. This ability has been assumed to be independent from central capacity, a limited pool of resources assumed to be involved in cognitive control. The present study investigated whether error evaluation depends on central capacity by measuring the error-related negativity (Ne/ERN) in a flanker paradigm while working memory load was varied on two levels. We used a four-choice flanker paradigm in which participants had to classify targets while ignoring flankers. Errors could be due to responding either to the flankers (flanker errors) or to none of the stimulus elements (nonflanker errors). With low load, the Ne/ERN was larger for flanker errors than for nonflanker errors-an effect that has previously been interpreted as reflecting differential significance of these error types. With high load, no such effect of error type on the Ne/ERN was observable. Our findings suggest that working memory load does not impair the generation of an Ne/ERN per se but rather impairs the evaluation of error significance. They demonstrate that error monitoring is composed of capacity-dependent and capacity-independent mechanisms. © 2017 Society for Psychophysiological Research.

  4. Ensemble Kalman filters for dynamical systems with unresolved turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grooms, Ian, E-mail: grooms@cims.nyu.edu; Lee, Yoonsang; Majda, Andrew J.

    Ensemble Kalman filters are developed for turbulent dynamical systems where the forecast model does not resolve all the active scales of motion. Coarse-resolution models are intended to predict the large-scale part of the true dynamics, but observations invariably include contributions from both the resolved large scales and the unresolved small scales. The error due to the contribution of unresolved scales to the observations, called ‘representation’ or ‘representativeness’ error, is often included as part of the observation error, in addition to the raw measurement error, when estimating the large-scale part of the system. It is here shown how stochastic superparameterization (a multiscale method for subgridscale parameterization) can be used to provide estimates of the statistics of the unresolved scales. In addition, a new framework is developed wherein small-scale statistics can be used to estimate both the resolved and unresolved components of the solution. The one-dimensional test problem from dispersive wave turbulence used here is computationally tractable yet is particularly difficult for filtering because of the non-Gaussian extreme event statistics and substantial small scale turbulence: a shallow energy spectrum proportional to k^(−5/6) (where k is the wavenumber) results in two-thirds of the climatological variance being carried by the unresolved small scales. Because the unresolved scales contain so much energy, filters that ignore the representation error fail utterly to provide meaningful estimates of the system state. Inclusion of a time-independent climatological estimate of the representation error in a standard framework leads to inaccurate estimates of the large-scale part of the signal; accurate estimates of the large scales are only achieved by using stochastic superparameterization to provide evolving, large-scale dependent predictions of the small-scale statistics. 
    Again, because the unresolved scales contain so much energy, even an accurate estimate of the large-scale part of the system does not provide an accurate estimate of the true state. By providing simultaneous estimates of both the large- and small-scale parts of the solution, the new framework is able to provide accurate estimates of the true system state.
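The role of representation error in the observation-error covariance can be illustrated with a scalar perturbed-observation ensemble Kalman update: inflating R by the variance of the unresolved contribution improves the state estimate relative to ignoring it. This is a deliberately simplified, time-independent stand-in for the paper's evolving superparameterization-based estimates; all variances are invented.

```python
import random

random.seed(5)

# Scalar illustration of representation error: the observation is the large
# scale plus an unresolved small-scale contribution. Inflate the observation-
# error variance R by the unresolved variance before the Kalman update.
SMALL_VAR, INSTR_VAR = 0.5, 0.1     # unresolved-scale and instrument variances

def enkf_rmse(r_used, n_ens=100, reps=1000):
    sq = 0.0
    for _ in range(reps):
        truth = random.gauss(0.0, 1.0)                 # drawn from the prior
        ens = [random.gauss(0.0, 1.0) for _ in range(n_ens)]
        obs = truth + random.gauss(0.0, (SMALL_VAR + INSTR_VAR) ** 0.5)
        pm = sum(ens) / n_ens
        pv = sum((x - pm) ** 2 for x in ens) / (n_ens - 1)
        k = pv / (pv + r_used)                         # Kalman gain
        post = [x + k * (obs + random.gauss(0.0, r_used ** 0.5) - x)
                for x in ens]                          # perturbed-obs update
        sq += (sum(post) / n_ens - truth) ** 2
    return (sq / reps) ** 0.5

rmse_inflated = enkf_rmse(INSTR_VAR + SMALL_VAR)  # accounts for repr. error
rmse_naive = enkf_rmse(INSTR_VAR)                 # ignores it, over-trusts obs
```

Using only the instrument variance makes the filter over-weight observations contaminated by the unresolved scales, which is the scalar analogue of the failure mode the abstract describes.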

  5. The economics of health care quality and medical errors.

    PubMed

    Andel, Charles; Davidow, Stephen L; Hollander, Mark; Moreno, David A

    2012-01-01

    Hospitals have been looking for ways to improve quality and operational efficiency and cut costs for nearly three decades, using a variety of quality improvement strategies. However, based on recent reports, approximately 200,000 Americans die from preventable medical errors, including facility-acquired conditions, and millions may experience errors. In 2008, medical errors cost the United States $19.5 billion. About 87 percent, or $17 billion, was directly associated with additional medical cost, including: ancillary services, prescription drug services, and inpatient and outpatient care, according to a study sponsored by the Society of Actuaries and conducted by Milliman in 2010. Additional costs of $1.4 billion were attributed to increased mortality rates with $1.1 billion or 10 million days of lost productivity from missed work based on short-term disability claims. The authors estimate that the economic impact is much higher, perhaps nearly $1 trillion annually when quality-adjusted life years (QALYs) are applied to those that die. Using the Institute of Medicine's (IOM) estimate of 98,000 deaths due to preventable medical errors annually in its 1998 report, To Err Is Human, and an average of ten lost years of life at $75,000 to $100,000 per year, there is a loss of $73.5 billion to $98 billion in QALYs for those deaths, conservatively. These numbers are much greater than those we cite from studies that explore the direct costs of medical errors. And if the estimate of a recent Health Affairs article is correct (preventable death being ten times the IOM estimate), the cost is $735 billion to $980 billion. Quality care is less expensive care. It is better, more efficient, and by definition, less wasteful. It is the right care, at the right time, every time. It should mean that far fewer patients are harmed or injured. Obviously, quality care is not being delivered consistently throughout U.S. hospitals. 
Whatever the measure, poor quality is costing payers and society a great deal. However, health care leaders and professionals are focusing on quality and patient safety in ways they never have before because the economics of quality have changed substantially.
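The QALY arithmetic quoted above checks out directly:

```python
# Verifying the QALY arithmetic quoted above: 98,000 preventable deaths per
# year (IOM estimate), an average of 10 lost life-years each, valued at
# $75,000 to $100,000 per quality-adjusted life year.
deaths, lost_years = 98_000, 10
low = deaths * lost_years * 75_000      # -> $73.5 billion
high = deaths * lost_years * 100_000    # -> $98.0 billion
# Health Affairs scenario: preventable death ten times the IOM estimate.
tenfold_low, tenfold_high = 10 * low, 10 * high   # -> $735B to $980B
```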

  6. On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl

    2016-09-01

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1] that includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques. 
The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, robust estimation of Representative Elementary Volume size for arbitrary physics.
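The multilevel idea can be sketched on a classic toy problem: couple coarse and fine discretizations through shared random inputs, so that most samples are taken at the cheap coarse levels and only a few at the expensive fine ones. This follows the standard Giles-style construction for an SDE with a known answer, not the pore-scale workflow of [1].

```python
import random

random.seed(11)

# Minimal multilevel Monte Carlo sketch: estimate E[S(1)] for dS = S dW,
# S(0) = 1 (exact answer 1), with Euler steps of size 2^-level and coarse/fine
# paths coupled through shared Brownian increments.
def level_sample(level):
    """One sample of the level-l correction (fine - coarse payoff)."""
    n_f = 2 ** level
    dw = [random.gauss(0.0, (1.0 / n_f) ** 0.5) for _ in range(n_f)]
    s_f = 1.0
    for w in dw:
        s_f *= (1.0 + w)
    if level == 0:
        return s_f
    s_c = 1.0
    for i in range(n_f // 2):            # coarse path: paired increments
        s_c *= (1.0 + dw[2 * i] + dw[2 * i + 1])
    return s_f - s_c

def mlmc(max_level=4, n_per_level=(4000, 2000, 1000, 500, 250)):
    """Telescoping sum E[P_0] + sum_l E[P_l - P_(l-1)]."""
    est = 0.0
    for lvl in range(max_level + 1):
        n = n_per_level[lvl]
        est += sum(level_sample(lvl) for _ in range(n)) / n
    return est

estimate = mlmc()    # close to the exact value 1.0
```

Because the coupled correction terms have small variance, the fine levels need far fewer samples than a single-level estimator of the same accuracy, which is the cost reduction the abstract claims for effective-parameter statistics.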

  7. A Probabilistic, Facility-Centric Approach to Lightning Strike Location

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William p.; Merceret, Francis J.

    2012-01-01

    A new probabilistic facility-centric approach to lightning strike location has been developed. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
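The facility-centric integral can be approximated by Monte Carlo: sample stroke locations from the error-ellipse Gaussian and count the fraction falling within the specified radius of the facility, which need not be at the ellipse center. All coordinates and ellipse sizes below are illustrative (the operational method evaluates the integral analytically or by quadrature rather than by sampling).

```python
import math
import random

random.seed(2)

# Monte Carlo version of the facility-centric calculation: integrate the
# bivariate Gaussian of the location error ellipse over a disk around the
# facility. Coordinates in km; values are illustrative.
def p_within(stroke_xy, sx, sy, facility_xy, radius, n=200_000):
    hits = 0
    for _ in range(n):
        x = random.gauss(stroke_xy[0], sx)
        y = random.gauss(stroke_xy[1], sy)
        if math.hypot(x - facility_xy[0], y - facility_xy[1]) <= radius:
            hits += 1
    return hits / n

# Stroke reported 1 km east of the facility, 0.5 km error-ellipse semi-axes:
p = p_within(stroke_xy=(1.0, 0.0), sx=0.5, sy=0.5,
             facility_xy=(0.0, 0.0), radius=1.0)
```

For this circular-error case the answer is about 0.40, i.e. a 40% chance the stroke actually fell within 1 km of the facility even though its most likely location is on the 1 km circle itself.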

  8. A Computational Intelligence (CI) Approach to the Precision Mars Lander Problem

    NASA Technical Reports Server (NTRS)

    Birge, Brian; Walberg, Gerald

    2002-01-01

    A Mars precision landing requires a landed footprint of no more than 100 meters. Obstacles to reducing the landed footprint include trajectory dispersions due to initial atmospheric entry conditions such as entry angle, parachute deployment height, environment parameters such as wind, atmospheric density, parachute deployment dynamics, unavoidable injection error or propagated error from launch, etc. Computational Intelligence (CI) techniques such as Artificial Neural Nets and Particle Swarm Optimization have been shown to have great success with other control problems. The research period extended previous work investigating the applicability of computational intelligence approaches. The focus of this investigation was on Particle Swarm Optimization and basic Neural Net architectures. The research investigating these issues was performed for the grant cycle from 5/15/01 to 5/15/02. Matlab 5.1 and 6.0 along with NASA's POST were the primary computational tools.
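A minimal particle swarm optimizer of the kind named above, applied to a stand-in cost function (squared distance of a landing point from a hypothetical aim point). This is a generic PSO sketch, not the grant's actual Matlab/POST setup, and all swarm parameters are common textbook defaults.

```python
import random

random.seed(9)

# Minimal particle swarm optimization: each particle moves under inertia plus
# attraction toward its personal best and the swarm's global best.
def pso(cost, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

target = (1.0, -2.0)   # hypothetical aim point
best, best_cost = pso(lambda p: (p[0] - target[0]) ** 2
                      + (p[1] - target[1]) ** 2)
```

On this smooth two-dimensional cost the swarm collapses onto the aim point within a couple hundred iterations; the footprint-minimization problem in the abstract swaps in a trajectory-simulation cost in place of the toy quadratic.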

  9. A Backscatter-Lidar Forward-Operator

    NASA Astrophysics Data System (ADS)

    Geisinger, Armin; Behrendt, Andreas; Wulfmeyer, Volker; Vogel, Bernhard; Mattis, Ina; Flentje, Harald; Förstner, Jochen; Potthast, Roland

    2015-04-01

    We have developed a forward-operator which is capable of calculating virtual lidar profiles from atmospheric state simulations. The operator allows us to compare lidar measurements and model simulations based on the same measurement parameter: the lidar backscatter profile. This method simplifies qualitative comparisons and also makes quantitative comparisons possible, including statistical error quantification. Implemented into an aerosol-capable model system, the operator will act as a component to assimilate backscatter-lidar measurements. As many weather services already maintain networks of backscatter lidars, such data are already acquired operationally. To estimate and quantify errors due to missing or uncertain aerosol information, we started sensitivity studies on several scattering parameters such as the aerosol size and both the real and imaginary parts of the complex index of refraction. Furthermore, quantitative and statistical comparisons between measurements and virtual measurements are shown in this study, i.e. applying the backscatter-lidar forward-operator to model output.
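The core of such a forward operator is the mapping from model profiles of backscatter and extinction to the attenuated backscatter a lidar measures. A minimal single-wavelength sketch (no multiple scattering, no overlap function, uniform aerosol layer for illustration) follows.

```python
import math

# Core of a backscatter-lidar forward operator: turn model profiles of
# backscatter beta(z) and extinction alpha(z) into attenuated backscatter,
#   beta_att(z) = beta(z) * exp(-2 * integral_0^z alpha(z') dz')
def attenuated_backscatter(z, beta, alpha):
    out, tau = [], 0.0
    for i in range(len(z)):
        if i > 0:                      # trapezoidal optical-depth accumulation
            tau += 0.5 * (alpha[i] + alpha[i - 1]) * (z[i] - z[i - 1])
        out.append(beta[i] * math.exp(-2.0 * tau))
    return out

z = [100.0 * i for i in range(50)]     # height grid, m
beta = [1e-6] * 50                     # backscatter, sr^-1 m^-1 (uniform layer)
alpha = [5e-5] * 50                    # extinction, m^-1
profile = attenuated_backscatter(z, beta, alpha)
```

The factor of two in the exponent accounts for the two-way path; comparing such virtual profiles against the measured ones is exactly the like-with-like comparison the abstract argues for.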

  10. Multi-spectral pyrometer for gas turbine blade temperature measurement

    NASA Astrophysics Data System (ADS)

    Gao, Shan; Wang, Lixin; Feng, Chi

    2014-09-01

    Achieving the highest possible turbine inlet temperature requires accurate measurement of the turbine blade temperature. If the blade temperature frequently exceeds the design limits, service life is seriously reduced. The accuracy of the temperature measurement is limited by the unknown and variable emissivity of the target surface and by the thermal radiation of the high-temperature environment. In this paper, a multi-spectral pyrometer designed mainly for the range 500-1000° is presented, together with a model that corrects for the error due to reflected radiation based only on the turbine geometry and the physical properties of the material. Under different working conditions, the method reduces the measurement error caused by radiation reflected from the vanes, bringing measurements closer to the actual blade temperature; the corresponding correction model is obtained through a genetic algorithm. Experiments show that this method yields higher-accuracy measurements.
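    The underlying principle of multi-wavelength pyrometry can be illustrated with two-color (ratio) pyrometry under a gray-body assumption in the Wien approximation; the wavelengths and temperature below are illustrative, and the paper's reflection-corrected, genetic-algorithm-fitted model is not reproduced here:

    ```python
    import numpy as np

    C2 = 1.4388e-2  # second radiation constant [m*K]

    def wien_radiance(lam, T, eps=1.0):
        """Spectral radiance in the Wien approximation (arbitrary units)."""
        return eps * lam**-5 * np.exp(-C2 / (lam * T))

    def ratio_temperature(L1, L2, lam1, lam2):
        """Gray-body two-color temperature from radiances at lam1, lam2 [m]."""
        num = C2 * (1.0 / lam2 - 1.0 / lam1)
        den = np.log(L1 / L2) - 5.0 * np.log(lam2 / lam1)
        return num / den

    # Round trip at a known temperature (illustrative channels)
    T_true = 1100.0                       # K
    lam1, lam2 = 0.9e-6, 1.05e-6          # two near-infrared channels [m]
    L1 = wien_radiance(lam1, T_true)
    L2 = wien_radiance(lam2, T_true)
    T_est = ratio_temperature(L1, L2, lam1, lam2)
    ```

    The ratio cancels an emissivity that is equal at both wavelengths; the paper's multi-spectral approach instead fits an emissivity model across several channels precisely because the gray-body assumption often fails on turbine blades.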

  11. Image motion compensation on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/

    NASA Technical Reports Server (NTRS)

    Tarbell, T. D.; Duncan, D. W.; Finch, M. L.; Spence, G.

    1981-01-01

    The SOUP experiment on Spacelab 2 includes a 30 cm visible light telescope and focal plane package mounted on the Instrument Pointing System (IPS). Scientific goals of the experiment dictate pointing stability requirements of less than 0.05 arcsecond jitter over periods of 5-20 seconds. Quantitative derivations of these requirements from two different aspects are presented: (1) avoidance of motion blurring of diffraction-limited images; (2) precise coalignment of consecutive frames to allow measurement of small image differences. To achieve this stability, a fine guider system capable of removing residual jitter of the IPS and image motions generated on the IPS cruciform instrument support structure has been constructed. This system uses solar limb detectors in the prime focal plane to derive an error signal. Image motion due to pointing errors is compensated by the agile secondary mirror mounted on piezoelectric transducers, controlled by a closed-loop servo system.

  12. Analytical minimization of synchronicity errors in stochastic identification

    NASA Astrophysics Data System (ADS)

    Bernal, D.

    2018-01-01

    An approach to minimize error due to synchronicity faults in stochastic system identification is presented. The scheme is based on shifting the time domain signals so the phases of the fundamental eigenvector estimated from the spectral density are zero. A threshold on the mean of the amplitude-weighted absolute value of these phases, above which signal shifting is deemed justified, is derived and found to be proportional to the first mode damping ratio. It is shown that synchronicity faults do not map precisely to phasor multiplications in subspace identification and that the accuracy of spectral density estimated eigenvectors, for inputs with arbitrary spectral density, decreases with increasing mode number. Selection of a corrective strategy based on signal alignment, instead of eigenvector adjustment using phasors, is shown to be the product of the foregoing observations. Simulations that include noise and non-classical damping suggest that the scheme can provide sufficient accuracy to be of practical value.
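    The core idea of correcting a synchronicity fault by shifting a signal by the lag recovered from spectral phase can be sketched for two channels. The paper works with the phase of the fundamental eigenvector of the spectral density matrix; this two-channel cross-spectrum version is a simplified stand-in with illustrative signals:

    ```python
    import numpy as np

    # Two synchronously sampled channels, one with a timing fault of tau seconds
    fs = 100.0
    t = np.arange(0.0, 10.0, 1.0 / fs)
    f0, tau = 2.0, 0.05
    x = np.sin(2 * np.pi * f0 * t)
    y = np.sin(2 * np.pi * f0 * (t - tau))      # channel lagging by tau

    # Phase of the cross-spectrum at the dominant frequency reveals the offset
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    C = X * np.conj(Y)
    k0 = np.argmax(np.abs(X))                   # dominant (fundamental) bin
    freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
    tau_est = np.angle(C[k0]) / (2 * np.pi * freqs[k0])

    # Shift the lagging channel back by the estimated lag to re-synchronize
    n = int(round(tau_est * fs))
    y_aligned = np.roll(y, -n)
    ```

    With the fault removed, the two channels line up sample for sample; in the paper's setting the shift is chosen so the fundamental eigenvector's phases vanish rather than from a pairwise cross-spectrum.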

  13. Economic impact of medication error: a systematic review.

    PubMed

    Walsh, Elaine K; Hansen, Christina Raae; Sahm, Laura J; Kearney, Patricia M; Doherty, Edel; Bradley, Colin P

    2017-05-01

    Medication error is a significant source of morbidity and mortality among patients. Clinical and cost-effectiveness evidence are required for the implementation of quality-of-care interventions. Reduction of error-related cost is a key potential benefit of interventions addressing medication error. The aim of this review was to describe and quantify the economic burden associated with medication error. PubMed, Cochrane, Embase, CINAHL, EconLit, ABI/INFORM, and Business Source Complete were searched. Studies published 2004-2016 assessing the economic impact of medication error were included. Cost values were expressed in 2015 euros. A narrative synthesis was performed. A total of 4572 articles were identified from database searching, and 16 were included in the review. One study met all applicable quality criteria. Fifteen studies expressed economic impact in monetary terms. Mean cost per error per study ranged from €2.58 to €111 727.08. Healthcare costs were used to measure economic impact in 15 of the included studies, with one study measuring litigation costs. Four studies included costs incurred in primary care, with the remaining 12 measuring hospital costs. Five studies looked at general medication error in a general population, with 11 studies reporting the economic impact of an individual type of medication error or error within a specific patient population. Considerable variability existed between studies in terms of financial cost, patients, settings and errors included. Many were of poor quality. Assessment of economic impact was conducted predominantly in the hospital setting, with little assessment of primary-care impact. Limited parameters were used to establish economic impact. Copyright © 2017 John Wiley & Sons, Ltd.

  14. NASA's New Orbital Debris Engineering Model, ORDEM2010

    NASA Technical Reports Server (NTRS)

    Krisko, Paula H.

    2010-01-01

    This paper describes the functionality and use of ORDEM2010, which replaces ORDEM2000, as the NASA Orbital Debris Program Office (ODPO) debris engineering model. Like its predecessor, ORDEM2010 serves the ODPO mission of providing spacecraft designers/operators and debris observers with a publicly available model to calculate orbital debris flux by current-state-of-knowledge methods. The key advance in ORDEM2010 is the input file structure of the yearly debris populations from 1995-2035 of sizes 10 micron - 1 m. These files include debris from low-Earth orbits (LEO) through geosynchronous orbits (GEO). Stable orbital elements (i.e., those that do not randomize on a sub-year timescale) are included in the files as are debris size, debris number, material density, random error and population error. Material density is implemented from ground-test data into the NASA breakup model and assigned to debris fragments accordingly. The random and population errors are due to machine error and uncertainties in debris sizes. These high-fidelity population files call for a much higher-level model analysis than what was possible with the populations of ORDEM2000. Population analysis in the ORDEM2010 model consists of mapping matrices that convert the debris population elements to debris fluxes. One output mode results in a spacecraft encompassing 3-D igloo of debris flux, compartmentalized by debris size, velocity, pitch, and yaw with respect to spacecraft ram direction. The second output mode provides debris flux through an Earth-based telescope/radar beam from LEO through GEO. This paper compares the new ORDEM2010 with ORDEM2000 in terms of processes and results with examples of specific orbits.

  15. A (72, 36; 15) box code

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1993-01-01

    A (72,36;15) box code is constructed as a 9 x 8 matrix whose columns add to form an extended BCH-Hamming (8,4;4) code and whose rows sum to odd or even parity. The newly constructed code, due to its matrix form, is easily decodable for all seven-error and many eight-error patterns. The code comes from a slight modification in the parity (eighth) dimension of the Reed-Solomon (8,4;5) code over GF(512). Error correction uses the row sum parity information to detect errors, which then become erasures in a Reed-Solomon correction algorithm.

  16. Two-Step Fair Scheduling of Continuous Media Streams over Error-Prone Wireless Channels

    NASA Astrophysics Data System (ADS)

    Oh, Soohyun; Lee, Jin Wook; Park, Taejoon; Jo, Tae-Chang

    In wireless cellular networks, streaming of continuous media (with strict QoS requirements) over wireless links is challenging due to their inherent unreliability, characterized by location-dependent, bursty errors. To address this challenge, we present a two-step scheduling algorithm for a base station to provide streaming of continuous media to wireless clients over error-prone wireless links. The proposed algorithm minimizes the packet loss rate of individual clients in the presence of error bursts by transmitting packets in a round-robin manner and adopting a mechanism for channel prediction and swapping.

  17. A posteriori error determination and grid adaptation for AMR and ALE computational fluid dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapenta, G. M.

    2002-01-01

    We discuss grid adaptation for application to AMR and ALE codes. Two new contributions are presented. First, a new method to locate the regions where truncation error is created due to insufficient accuracy: the operator recovery error origin (OREO) detector. The OREO detector is automatic, reliable, easy to implement, and extremely inexpensive. Second, a new grid motion technique is presented for application to ALE codes. The method is based on the Brackbill-Saltzman approach but is directly linked to the OREO detector and moves the grid automatically to minimize the error.

  18. Error in Airspeed Measurement Due to the Static-Pressure Field Ahead of an Airplane at Transonic Speeds

    NASA Technical Reports Server (NTRS)

    O'Bryan, Thomas C; Danforth, Edward C B; Johnston, J Ford

    1955-01-01

    The magnitude and variation of the static-pressure error for various distances ahead of sharp-nose bodies and open-nose air inlets and for a distance of 1 chord ahead of the wing tip of a swept wing are defined by a combination of experiment and theory. The mechanism of the error is discussed in some detail to show the contributing factors that make up the error. The information presented provides a useful means for choosing a proper location for measurement of static pressure for most purposes.

  19. Impact of an antiretroviral stewardship strategy on medication error rates.

    PubMed

    Shea, Katherine M; Hobbs, Athena Lv; Shumake, Jason D; Templet, Derek J; Padilla-Tolentino, Eimeira; Mondy, Kristin E

    2018-05-02

    The impact of an antiretroviral stewardship strategy on medication error rates was evaluated. This single-center, retrospective, comparative cohort study included patients at least 18 years of age infected with human immunodeficiency virus (HIV) who were receiving antiretrovirals and admitted to the hospital. A multicomponent approach was developed and implemented that included modifications to the order-entry and verification system, pharmacist education, and a pharmacist-led antiretroviral therapy checklist. Pharmacists performed prospective audits using the checklist at the time of order verification. To assess the impact of the intervention, a retrospective review was performed before and after implementation to assess antiretroviral errors. Totals of 208 and 24 errors were identified before and after the intervention, respectively, resulting in a significant reduction in the overall error rate (p < 0.001). In the postintervention group, significantly lower medication error rates were found both in patient admissions containing at least 1 medication error (p < 0.001) and in those with 2 or more errors (p < 0.001). Significant reductions were also identified in each error type, including incorrect/incomplete medication regimen, incorrect dosing regimen, incorrect renal dose adjustment, incorrect administration, and the presence of a major drug-drug interaction. A regression tree selected ritonavir as the only specific medication that best predicted more errors preintervention (p < 0.001); however, no antiretrovirals reliably predicted errors postintervention. An antiretroviral stewardship strategy for hospitalized HIV patients, including prospective audit by staff pharmacists through use of an antiretroviral medication therapy checklist at the time of order verification, decreased error rates. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  20. Dosimetric impact of geometric errors due to respiratory motion prediction on dynamic multileaf collimator-based four-dimensional radiation delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vedam, S.; Docef, A.; Fix, M.

    2005-06-15

    The synchronization of dynamic multileaf collimator (DMLC) response with respiratory motion is critical to ensure the accuracy of DMLC-based four-dimensional (4D) radiation delivery. In practice, however, a finite time delay (response time) between the acquisition of tumor position and multileaf collimator response necessitates predictive models of respiratory tumor motion to synchronize radiation delivery. Predicting a complex process such as respiratory motion introduces geometric errors, which have been reported in several publications. However, the dosimetric effect of such errors on 4D radiation delivery has not yet been investigated. Thus, our aim in this work was to quantify the dosimetric effects of geometric error due to prediction under several different conditions. Conformal and intensity modulated radiation therapy (IMRT) plans for a lung patient were generated for anterior-posterior/posterior-anterior (AP/PA) beam arrangements at 6 and 18 MV energies to provide planned dose distributions. Respiratory motion data was obtained from 60 diaphragm-motion fluoroscopy recordings from five patients. A linear adaptive filter was employed to predict the tumor position. The geometric error of prediction was defined as the absolute difference between predicted and actual positions at each diaphragm position. Distributions of geometric error of prediction were obtained for all of the respiratory motion data. Planned dose distributions were then convolved with distributions for the geometric error of prediction to obtain convolved dose distributions. The dosimetric effect of such geometric errors was determined as a function of several variables: response time (0-0.6 s), beam energy (6/18 MV), treatment delivery (3D/4D), treatment type (conformal/IMRT), beam direction (AP/PA), and breathing training type (free breathing/audio instruction/visual feedback). Dose difference and distance-to-agreement analysis was employed to quantify results. Based on our data, the dosimetric impact of prediction (a) increased with response time, (b) was larger for 3D radiation therapy as compared with 4D radiation therapy, (c) was relatively insensitive to change in beam energy and beam direction, (d) was greater for IMRT distributions as compared with conformal distributions, (e) was smaller than the dosimetric impact of latency, and (f) was greatest for respiratory motion with audio instructions, followed by visual feedback and free breathing. Geometric errors of prediction that occur during 4D radiation delivery introduce dosimetric errors that are dependent on several factors, such as response time, treatment-delivery type, and beam energy. Even for relatively small response times of 0.6 s into the future, dosimetric errors due to prediction could approach delivery errors when respiratory motion is not accounted for at all. To reduce the dosimetric impact, better predictive models and/or shorter response times are required.
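    The convolution step described above can be sketched in one dimension: a planned dose profile is blurred by the distribution of geometric prediction error. Here the error distribution is an assumed zero-mean Gaussian rather than the measured diaphragm-motion distributions, and the profile is illustrative:

    ```python
    import numpy as np

    # Illustrative 1-D planned dose profile on a 1 mm grid (arbitrary units)
    x = np.arange(-50, 51)                            # position [mm]
    planned = np.where(np.abs(x) <= 20, 1.0, 0.0)     # idealized flat field

    # Assumed distribution of geometric prediction error: zero-mean Gaussian,
    # sigma = 3 mm (the study uses measured prediction-error distributions)
    sigma = 3.0
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                            # normalize to unit probability

    # "Convolved dose distribution": planned dose blurred by the error distribution
    delivered = np.convolve(planned, kernel, mode="same")
    ```

    The blurring leaves the integral dose essentially unchanged but softens the field edges, which is why steep-gradient IMRT distributions are more sensitive to prediction error than conformal ones.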

  1. Probabilistic model of nonlinear penalties due to collision-induced timing jitter for calculation of the bit error ratio in wavelength-division-multiplexed return-to-zero systems

    NASA Astrophysics Data System (ADS)

    Sinkin, Oleg V.; Grigoryan, Vladimir S.; Menyuk, Curtis R.

    2006-12-01

    We introduce a fully deterministic, computationally efficient method for characterizing the effect of nonlinearity in optical fiber transmission systems that utilize wavelength-division multiplexing and return-to-zero modulation. The method accurately accounts for bit-pattern-dependent nonlinear distortion due to collision-induced timing jitter and for amplifier noise. We apply this method to calculate the error probability as a function of channel spacing in a prototypical multichannel return-to-zero undersea system.

  2. Counteracting structural errors in ensemble forecast of influenza outbreaks.

    PubMed

    Pei, Sen; Shaman, Jeffrey

    2017-10-13

    For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity and attack rate are substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models.

  3. Tolerance analysis of optical telescopes using coherent addition of wavefront errors

    NASA Technical Reports Server (NTRS)

    Davenport, J. W.

    1982-01-01

    A near diffraction-limited telescope requires that tolerance analysis be done on the basis of system wavefront error. One method of analyzing the wavefront error is to represent the wavefront error function in terms of its Zernike polynomial expansion. A Ramsey-Korsch ray-trace package, a computer program that simulates the tracing of rays through an optical telescope system, was expanded to include the Zernike polynomial expansion up through the fifth-order spherical term. An option to produce a 3-dimensional plot of the wavefront error function was also included in the Ramsey-Korsch package. Several simulation runs were analyzed to determine the particular set of coefficients in the Zernike expansion that are affected by various errors such as tilt, decenter and despace. A 3-dimensional plot of each error up through the fifth-order spherical term was also included in the study. Tolerance analysis data are presented.
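    A wavefront error expressed as a Zernike expansion, as in the expanded Ramsey-Korsch package, can be sketched as follows. The terms, normalization, and coefficients here are illustrative, not those of the actual package:

    ```python
    import numpy as np

    # A few low-order (unnormalized) Zernike terms in polar coordinates on the unit pupil
    def z_tilt_x(r, th):  return r * np.cos(th)                  # x-tilt
    def z_defocus(r, th): return 2.0 * r**2 - 1.0                # defocus
    def z_coma_x(r, th):  return (3.0 * r**3 - 2.0 * r) * np.cos(th)  # x-coma
    def z_sph(r, th):     return 6.0 * r**4 - 6.0 * r**2 + 1.0   # primary spherical

    def wavefront(r, th, c):
        """Wavefront error W(r, theta) as a Zernike expansion (in waves)."""
        return (c["tilt"] * z_tilt_x(r, th) + c["defocus"] * z_defocus(r, th)
                + c["coma"] * z_coma_x(r, th) + c["sph"] * z_sph(r, th))

    # Hypothetical coefficients (waves) such as might result from decenter/despace
    coeffs = {"tilt": 0.1, "defocus": 0.05, "coma": 0.02, "sph": 0.01}
    r = np.linspace(0.0, 1.0, 101)
    th = np.zeros_like(r)                 # radial cut along theta = 0
    W = wavefront(r, th, coeffs)
    ```

    In a tolerance analysis, a perturbation such as tilt or despace is applied in the ray trace and the resulting wavefront is fit to such terms; the coefficients then show which aberrations each misalignment excites.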

  4. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
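    The compare-two-runs idea in this patent can be sketched as follows; note that this Python loop only illustrates the output-comparison logic, not the carefully designed processor-heating algorithm the patent describes:

    ```python
    import hashlib
    import struct

    def stress_kernel(n=200_000, seed=12345):
        """Deterministic, arithmetic-heavy loop. A transient hardware fault
        (e.g. a bit flip) anywhere in the run perturbs the running digest."""
        h = hashlib.sha256()
        x = float(seed)
        for i in range(1, n + 1):
            x = (x * 1.000001 + i) % 1e9        # keep the FPU busy
            h.update(struct.pack("<d", x))       # fold every result into the digest
        return h.hexdigest()

    # Run-to-run comparison: any mismatch flags a hardware error during the run
    ref = stress_kernel()
    out = stress_kernel()
    hardware_error_detected = (out != ref)
    ```

    Because every intermediate value is folded into the digest, an error occurring anywhere during the run changes the final output, which is exactly the property the patent exploits to catch faults that cooling or insufficient stress would otherwise hide.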

  5. Space charge enhanced plasma gradient effects on satellite electric field measurements

    NASA Technical Reports Server (NTRS)

    Diebold, Dan; Hershkowitz, Noah; Dekock, J.; Intrator, T.; Hsieh, M-K.

    1991-01-01

    It has been recognized that plasma gradients can cause error in magnetospheric electric field measurements made by double probes. This paper discusses space-charge-enhanced plasma gradient induced error (PGIE) in general terms, presents the results of a laboratory experiment designed to demonstrate this error, and derives a simple expression that quantifies it. Experimental conditions were not identical to magnetospheric conditions, although efforts were made to ensure that the relevant physics applied to both cases. The experimental data demonstrate some of the possible errors in electric field measurements made by strongly emitting probes due to space charge effects in the presence of plasma gradients. Probe errors in space and laboratory conditions are discussed, as well as experimental error. In the final section, theoretical aspects are examined and an expression is derived for the maximum steady-state space-charge-enhanced PGIE measured by two identical current-biased probes.

  6. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    PubMed

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
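    The claimed dependence of propagated error on the dimensions of the flow domain can be demonstrated with a 1-D analogue: for a constant error delta added to the source term of p'' = f with homogeneous Dirichlet boundaries, the propagated pressure error is delta*x*(x-L)/2, whose maximum delta*L**2/8 grows with the domain length L. This is a constructed illustration, not the paper's analysis:

    ```python
    import numpy as np

    def solve_poisson_1d(f, L):
        """Finite-difference solve of p'' = f on (0, L) with p(0) = p(L) = 0."""
        n = len(f)                                   # number of interior points
        h = L / (n + 1)
        A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1)) / h**2   # discrete Laplacian
        return np.linalg.solve(A, f)

    n, delta = 199, 1e-3   # delta: assumed uniform error in the PIV-derived source term
    errs = {}
    for L in (1.0, 2.0):
        f_clean = np.zeros(n)                        # illustrative clean source term
        p_clean = solve_poisson_1d(f_clean, L)
        p_noisy = solve_poisson_1d(f_clean + delta, L)
        errs[L] = np.max(np.abs(p_noisy - p_clean))  # equals delta * L**2 / 8 here
    ```

    Doubling the domain quadruples the propagated pressure error for the same data error, consistent with the paper's finding that the error bound depends on the dimensions of the flow domain as well as the boundary conditions.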

  7. Efficiency, error and yield in light-directed maskless synthesis of DNA microarrays

    PubMed Central

    2011-01-01

    Background Light-directed in situ synthesis of DNA microarrays using computer-controlled projection from a digital micromirror device--maskless array synthesis (MAS)--has proved to be successful at both commercial and laboratory scales. The chemical synthetic cycle in MAS is quite similar to that of conventional solid-phase synthesis of oligonucleotides, but the complexity of microarrays and unique synthesis kinetics on the glass substrate require a careful tuning of parameters and unique modifications to the synthesis cycle to obtain optimal deprotection and phosphoramidite coupling. In addition, unintended deprotection due to scattering and diffraction introduces insertion errors that contribute significantly to the overall error rate. Results Stepwise phosphoramidite coupling yields have been greatly improved and are now comparable to those obtained in solid-phase synthesis of oligonucleotides. Extended chemical exposure in the synthesis of complex, long oligonucleotide arrays results in lower--but still high--final average yields, which approach 99%. The new synthesis chemistry includes deferring the standard oxidation until the final step, and improved coupling and light deprotection. Insertions due to stray light are the limiting factor in sequence quality for oligonucleotide synthesis for gene assembly. Diffraction and local flare are by far the largest contributors to loss of optical contrast. Conclusions Maskless array synthesis is an efficient and versatile method for synthesizing high-density arrays of long oligonucleotides for hybridization- and other molecular-binding-based experiments. For applications requiring high sequence purity, such as gene assembly, diffraction and flare remain significant obstacles, but can be significantly reduced with straightforward experimental strategies. PMID:22152062

  8. Data-quality measures for stakeholder-implemented watershed-monitoring programs

    USGS Publications Warehouse

    Greve, Adrienne I.

    2002-01-01

    Community-based watershed groups, many of which collect environmental data, have steadily increased in number over the last decade. The data generated by these programs are often underutilized due to uncertainty in the quality of data produced. The incorporation of data-quality measures into stakeholder monitoring programs lends statistical validity to data. Data-quality measures are divided into three steps: quality assurance, quality control, and quality assessment. The quality-assurance step attempts to control sources of error that cannot be directly quantified. This step is part of the design phase of a monitoring program and includes clearly defined, quantifiable objectives, sampling sites that meet the objectives, standardized protocols for sample collection, and standardized laboratory methods. Quality control (QC) is the collection of samples to assess the magnitude of error in a data set due to sampling, processing, transport, and analysis. In order to design a QC sampling program, a series of issues needs to be considered: (1) potential sources of error, (2) the type of QC samples, (3) inference space, (4) the number of QC samples, and (5) the distribution of the QC samples. Quality assessment is the process of evaluating quality-assurance measures and analyzing the QC data in order to interpret the environmental data. Quality assessment has two parts: one that is conducted on an ongoing basis as the monitoring program is running, and one that is conducted during the analysis of environmental data. The discussion of the data-quality measures is followed by an example of their application to a monitoring program in the Big Thompson River watershed of northern Colorado.

  9. Remote Sensing of Stratospheric Trace Gases by TELIS

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Schreier, Franz; Doicu, Adrian; Birk, Manfred; Wagner, Georg; Trautmann, Thomas

    2015-11-01

    TELIS (TErahertz and submillimeter LImb Sounder) is a balloon-borne cryogenic heterodyne spectrometer with two far-infrared and submillimeter channels (1.8 THz and 480-650 GHz, developed by DLR and SRON, respectively). The instrument was designed to investigate atmospheric chemistry and dynamics with a focus on the stratosphere. Between 2009 and 2011, TELIS participated in three winter campaigns in Kiruna, Sweden. The most recent campaign took place in 2014 over Timmins, Canada. During previous campaigns, TELIS shared a stratospheric balloon gondola with the balloon version of MIPAS (MIPAS-B) and mini-DOAS. The primary scientific goal of these campaigns has been to monitor the time-dependent chemistry of chlorine and bromine, and to achieve the closure of chemical families inside the polar vortex. In this work, we present retrieved profiles of ozone (O3), hydrogen chloride (HCl), carbon monoxide (CO), and hydroxyl radical (OH) obtained by the 1.8 THz channel from the polar winter flights during 2009-2011. Furthermore, the corresponding retrieval algorithm is briefly described. The quality of the retrieval products is analyzed in a quantitative manner, including error characterization, internal comparisons of the two different channels, and external comparisons with coincident spaceborne observations. The errors due to the instrument parameters and pressure dominate in the upper troposphere and lower stratosphere, while the errors at higher altitudes are mainly due to the spectroscopic parameters and the radiometric calibration. The comparisons with other limb sounders help us to assess the measurement capabilities and instrument characteristics of TELIS, thereby establishing the instrument as a valuable tool to study chemical interactions in the stratosphere.

  10. Bladder cancer diagnosis with CT urography: test characteristics and reasons for false-positive and false-negative results.

    PubMed

    Trinh, Tony W; Glazer, Daniel I; Sadow, Cheryl A; Sahni, V Anik; Geller, Nina L; Silverman, Stuart G

    2018-03-01

    To determine the test characteristics of CT urography for detecting bladder cancer in patients with hematuria and those undergoing surveillance, and to analyze reasons for false-positive and false-negative results. A HIPAA-compliant, IRB-approved retrospective review of reports from 1623 CT urograms performed between 10/2010 and 12/31/2013 was conducted. 710 examinations for hematuria or a history of bladder cancer were compared to cystoscopy performed within 6 months. The reference standard was surgical pathology or a minimum of 1 year of clinical follow-up. False-positive and false-negative examinations were reviewed to determine reasons for errors. Ninety-five bladder cancers were detected. CT urography accuracy was 91.5% (650/710), sensitivity 86.3% (82/95), specificity 92.4% (568/615), positive predictive value 63.6% (82/129), and negative predictive value 97.8% (568/581). Of 43 false positives, the majority of interpretation errors were due to benign prostatic hyperplasia (n = 12), trabeculated bladder (n = 9), and treatment changes (n = 8). Other false-positive results were due to blood clots, mistaken normal anatomy, or infectious/inflammatory changes, or had no cystoscopic correlate. Of 13 false negatives, 11 were due to technique, one to a large urinary residual, and one to artifact. There were no errors in perception. CT urography is an accurate test for diagnosing bladder cancer; however, in protocols relying predominantly on excretory-phase images, overall sensitivity remains insufficient to obviate cystoscopy. Awareness of bladder cancer mimics may reduce false-positive results. Improvements in CTU technique may reduce false-negative results.
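    The reported test characteristics follow from a standard 2x2 contingency table; the counts below are reconstructed from the stated denominators (95 cancers, 129 positive CTU calls, 710 examinations) and are consistent with every reported percentage:

    ```python
    # Contingency counts reconstructed from the reported denominators:
    # 82 true positives, 47 false positives, 13 false negatives, 568 true negatives
    tp, fp, fn, tn = 82, 47, 13, 568

    accuracy    = (tp + tn) / (tp + fp + fn + tn)   # 650/710
    sensitivity = tp / (tp + fn)                     # 82/95
    specificity = tn / (tn + fp)                     # 568/615
    ppv         = tp / (tp + fp)                     # 82/129
    npv         = tn / (tn + fn)                     # 568/581
    ```

    Note the high NPV alongside a modest PPV: with a cancer prevalence of 95/710, a negative CTU is reassuring, but nearly a third of positive calls are false, which is why cystoscopic confirmation is still advised.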

  11. Validation of the ICU-DaMa tool for automatically extracting variables for minimum dataset and quality indicators: The importance of data quality assessment.

    PubMed

    Sirgo, Gonzalo; Esteban, Federico; Gómez, Josep; Moreno, Gerard; Rodríguez, Alejandro; Blanch, Lluis; Guardiola, Juan José; Gracia, Rafael; De Haro, Lluis; Bodí, María

    2018-04-01

    Big data analytics promise insights into healthcare processes and management, improving outcomes while reducing costs. However, data quality is a major challenge for reliable results. Business process discovery techniques and an associated data model were used to develop a data management tool, ICU-DaMa, for extracting variables essential for overseeing the quality of care in the intensive care unit (ICU). To determine the feasibility of using ICU-DaMa to automatically extract variables for the minimum dataset and ICU quality indicators from the clinical information system (CIS). The Wilcoxon signed-rank test and Fisher's exact test were used to compare the values extracted from the CIS with ICU-DaMa for 25 variables from all patients attended in a polyvalent ICU during a two-month period against the gold standard of values manually extracted by two trained physicians. Discrepancies with the gold standard were classified into plausibility, conformance, and completeness errors. Data from 149 patients were included. Although there were no significant differences between the automatic method and the manual method, we detected differences in values for five variables, including one plausibility error and two conformance and completeness errors. Plausibility: 1) Sex, ICU-DaMa incorrectly classified one male patient as female (error generated by the Hospital's Admissions Department). Conformance: 2) Reason for isolation, ICU-DaMa failed to detect a human error in which a professional misclassified a patient's isolation. 3) Brain death, ICU-DaMa failed to detect another human error in which a professional likely entered two mutually exclusive values related to the death of the patient (brain death and controlled donation after circulatory death). Completeness: 4) Destination at ICU discharge, ICU-DaMa incorrectly classified two patients due to a professional failing to fill out the patient discharge form when the patients died.
5) Length of continuous renal replacement therapy, data were missing for one patient because the CRRT device was not connected to the CIS. Automatic generation of minimum dataset and ICU quality indicators using ICU-DaMa is feasible. The discrepancies were identified and can be corrected by improving CIS ergonomics, training healthcare professionals in the culture of the quality of information, and using tools for detecting and correcting data errors. Copyright © 2018 Elsevier B.V. All rights reserved.
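
    The statistical comparison described above can be sketched in a few lines. This is a hedged illustration, not the study's code: the paired values, variable names, and agreement counts below are hypothetical.

```python
# Sketch of the validation design: paired values from an automatic extractor
# vs. a manual gold standard, compared with the Wilcoxon signed-rank test
# (continuous variables) and Fisher's exact test (categorical agreement).
# All numbers are hypothetical, not study data.
from scipy.stats import wilcoxon, fisher_exact

# Continuous variable (e.g. ICU length of stay in days), automatic vs. manual
auto   = [3.0, 5.5, 2.0, 7.0, 4.6, 6.0, 1.5, 8.0]
manual = [3.2, 5.5, 2.5, 7.1, 4.5, 6.0, 1.4, 8.5]
stat, p_cont = wilcoxon(auto, manual)   # paired, non-parametric

# Categorical variable (e.g. sex): 2x2 agreement table, one misclassification
table = [[70, 1],    # manual "male":   70 matched, 1 mismatch
         [0, 78]]    # manual "female": 78 matched
odds, p_cat = fisher_exact(table)
```

    With real study data, one such test would be run per continuous variable, and an agreement table built per categorical variable.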

  12. Probabilities of Occurrence of Baro-Fuze System Errors More Than 1000 Feet Too Low Due to Ignoring the Meteorological Forecast

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charles, B.N.

    1955-05-12

    Charts of the geographical distribution of the annual and seasonal D-values and their standard deviations at altitudes of 4500, 6000, and 7000 feet over Eurasia are derived and used to estimate the frequency of baro-fuze system errors.

  13. Error sources in passive and active microwave satellite soil moisture over Australia

    USDA-ARS's Scientific Manuscript database

    Development of a long-term climate record of soil moisture (SM) involves combining historic and present satellite-retrieved SM data sets. This in turn requires a consistent characterization and deep understanding of the systematic differences and errors in the individual data sets, which vary due to...

  14. Decision Aids for Multiple-Decision Disease Management as Affected by Weather Input Errors

    USDA-ARS's Scientific Manuscript database

    Many disease management decision support systems (DSS) rely, exclusively or in part, on weather inputs to calculate an indicator for disease hazard. Error in the weather inputs, typically due to forecasting, interpolation or estimation from off-site sources, may affect model calculations and manage...

  15. Measurement variability error for estimates of volume change

    Treesearch

    James A. Westfall; Paul L. Patterson

    2007-01-01

    Using quality assurance data, measurement variability distributions were developed for attributes that affect tree volume prediction. Random deviations from the measurement variability distributions were applied to 19381 remeasured sample trees in Maine. The additional error due to measurement variation and measurement bias was estimated via a simulation study for...
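
    The simulation approach described above can be illustrated as a small Monte Carlo experiment: draw random measurement deviations and propagate them through a volume equation to estimate the added error. The volume equation, error magnitudes, and tree list below are hypothetical, not those of the study.

```python
# Monte Carlo sketch: perturb tree measurements (DBH, height) with random
# deviations and propagate through a volume model to estimate the added
# error in total volume. Coefficients and error SDs are assumed values.
import random

def tree_volume(dbh_cm, height_m):
    # Simple combined-variable volume model (hypothetical coefficients)
    return 0.00007854 * dbh_cm**2 * height_m * 0.42

random.seed(42)
trees = [(25.0, 18.0), (32.0, 22.0), (41.0, 26.0)]  # (dbh, height)
true_total = sum(tree_volume(d, h) for d, h in trees)

n_sims, errors = 2000, []
for _ in range(n_sims):
    perturbed = sum(
        tree_volume(d + random.gauss(0, 0.5),   # DBH measurement sd ~0.5 cm
                    h + random.gauss(0, 1.0))   # height measurement sd ~1.0 m
        for d, h in trees)
    errors.append(perturbed - true_total)

mean_err = sum(errors) / n_sims                  # bias contribution
sd_err = (sum((e - mean_err)**2 for e in errors) / (n_sims - 1)) ** 0.5
```

    The study instead drew deviations from empirically derived measurement variability distributions over 19381 remeasured trees; the propagation logic is analogous.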

  16. Quantitative analysis of the radiation error for aerial coiled-fiber-optic distributed temperature sensing deployments using reinforcing fabric as support structure

    NASA Astrophysics Data System (ADS)

    Sigmund, Armin; Pfister, Lena; Sayde, Chadi; Thomas, Christoph K.

    2017-06-01

    In recent years, the spatial resolution of fiber-optic distributed temperature sensing (DTS) has been enhanced in various studies by helically coiling the fiber around a support structure. While solid polyvinyl chloride tubes are an appropriate support structure under water, they can produce considerable errors in aerial deployments due to radiative heating or cooling. We used meshed reinforcing fabric as a novel support structure to measure high-resolution vertical temperature profiles with a height of several meters above a meadow and within and above a small lake. This study aimed to quantify the radiation error for the coiled DTS system and the contribution caused by the novel support structure via heat conduction. A quantitative and comprehensive energy balance model is proposed and tested, which includes the shortwave radiative, longwave radiative, convective, and conductive heat transfers and allows for modeling fiber temperatures as well as quantifying the radiation error. The sensitivity of the energy balance model to the conduction error caused by the reinforcing fabric is discussed in terms of its albedo, emissivity, and thermal conductivity. Modeled radiation errors amounted to -1.0 and 1.3 K at 2 m height but ranged up to 2.8 K for very high incoming shortwave radiation (1000 J s⁻¹ m⁻²) and very weak winds (0.1 m s⁻¹). After correcting for the radiation error by means of the presented energy balance, the root mean square error between DTS and reference air temperatures from an aspirated resistance thermometer or an ultrasonic anemometer was 0.42 and 0.26 K above the meadow and the lake, respectively. Conduction between reinforcing fabric and fiber cable had a small effect on fiber temperatures (< 0.18 K). Only for locations where the plastic rings that supported the reinforcing fabric touched the fiber-optic cable were significant temperature artifacts of up to 2.5 K observed. 
Overall, the reinforcing fabric offers several advantages over conventional support structures published to date in the literature as it minimizes both radiation and conduction errors.
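
    The energy balance idea can be illustrated with a minimal steady-state sketch (not the paper's full model): absorbed shortwave plus net longwave exchange is balanced by convective exchange, and the resulting fiber-air temperature offset is the radiation error. All coefficients below are assumed values.

```python
# Minimal steady-state fiber energy balance:
#   alpha*SW + eps*(LW_in - sigma*Tf^4) = h*(Tf - Ta)
# solved for the fiber temperature Tf by fixed-point iteration; the radiation
# error is Tf - Ta. Absorptivity, emissivity, and h are illustrative.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m-2 K-4

def radiation_error(t_air_k, sw_down, lw_down, h_conv,
                    alpha=0.4, emissivity=0.9):
    t_f = t_air_k
    for _ in range(50):   # contraction mapping; converges quickly here
        net_rad = alpha * sw_down + emissivity * (lw_down - SIGMA * t_f**4)
        t_f = t_air_k + net_rad / h_conv
    return t_f - t_air_k  # radiation error in K

# Strong sun, weak wind (small h): large positive error, as in the abstract
err_sunny_calm = radiation_error(293.15, 1000.0, 350.0, h_conv=20.0)
# Same radiation, strong wind (large h): the error shrinks
err_sunny_windy = radiation_error(293.15, 1000.0, 350.0, h_conv=200.0)
```

    The paper's model additionally resolves conduction from the support structure; this sketch only reproduces the radiation-convection trade-off that drives the reported error magnitudes.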

  17. Modeling and characterization of multipath in global navigation satellite system ranging signals

    NASA Astrophysics Data System (ADS)

    Weiss, Jan Peter

    The Global Positioning System (GPS) provides position, velocity, and time information to users anywhere near the Earth in real time, regardless of weather conditions. Since the system became operational, improvements in many areas have reduced systematic errors affecting GPS measurements such that multipath, defined as any signal taking a path other than the direct, has become a significant, if not dominant, error source for many applications. This dissertation utilizes several approaches to characterize and model multipath errors in GPS measurements. Multipath errors in GPS ranging signals are characterized for several receiver systems and environments. Experimental P(Y) code multipath data are analyzed for ground stations with multipath levels ranging from minimal to severe, a C-12 turboprop, an F-18 jet, and an aircraft carrier. Comparisons between receivers utilizing single patch antennas and multi-element arrays are also made. In general, the results show significant reductions in multipath with antenna array processing, although large errors can occur even with this kind of equipment. Analysis of airborne platform multipath shows that the errors tend to be small in magnitude because the size of the aircraft limits the geometric delay of multipath signals, and high in frequency because aircraft dynamics cause rapid variations in geometric delay. A comprehensive multipath model is developed and validated. The model integrates 3D structure models, satellite ephemerides, electromagnetic ray-tracing algorithms, and detailed antenna and receiver models to predict multipath errors. Validation is performed by comparing experimental and simulated multipath via overall error statistics, per satellite time histories, and frequency content analysis. The validation environments include two urban buildings, an F-18, an aircraft carrier, and a rural area where terrain multipath dominates. 
The validated models are used to identify multipath sources, characterize signal properties, evaluate additional antenna and receiver tracking configurations, and estimate the reflection coefficients of multipath-producing surfaces. Dynamic models for an F-18 landing on an aircraft carrier correlate aircraft dynamics to multipath frequency content; the model also characterizes the separate contributions of multipath due to the aircraft, ship, and ocean to the overall error statistics. Finally, reflection coefficients for multipath produced by terrain are estimated via a least-squares algorithm.
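
    The closing least-squares step can be sketched as follows, under the simplifying assumption that the observed multipath residual is a scaled copy of a simulated unit-coefficient template plus noise; all signals below are synthetic.

```python
# If the simulated multipath error for a unit reflection coefficient is m(t)
# and the observed residual is r(t) = gamma * m(t) + noise, the least-squares
# estimate of gamma has the closed form sum(m*r) / sum(m*m).
import math
import random

random.seed(1)
t = [i * 0.1 for i in range(200)]
template = [math.sin(0.7 * ti) for ti in t]   # unit-coefficient multipath model
gamma_true = 0.35                             # "true" terrain reflectivity
resid = [gamma_true * m + random.gauss(0, 0.05) for m in template]

gamma_hat = (sum(m * r for m, r in zip(template, resid))
             / sum(m * m for m in template))
```

    In the dissertation the template comes from the validated ray-tracing model rather than a sinusoid, but the estimator has the same normal-equation form.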

  18. Risk behaviours for organism transmission in health care delivery-A two month unstructured observational study.

    PubMed

    Lindberg, Maria; Lindberg, Magnus; Skytt, Bernice

    2017-05-01

    Errors in infection control practices put patient safety at risk. The probability of errors can increase when care practices become more multifaceted. It is therefore fundamental to track risk behaviours and potential errors in various care situations. The aim of this study was to describe care situations involving risk behaviours for organism transmission that could lead to subsequent healthcare-associated infections. Unstructured nonparticipant observations were performed at three medical wards. Healthcare personnel (n=27) were shadowed for a total of 39 h on randomly selected weekdays between 7:30 am and 12 noon. Content analysis was used to inductively categorize activities into tasks and, based on their character, into groups. Risk behaviours for organism transmission were deductively classified into types of errors. A multiple response crosstabs procedure was used to visualize the number and proportion of errors in tasks. One-way ANOVA with Bonferroni post hoc test was used to determine differences among the three groups of activities. The qualitative findings give an understanding that risk behaviours for organism transmission go beyond the five moments of hand hygiene and also include the handling and placement of materials and equipment. The tasks with the highest percentage of errors were: 'personal hygiene', 'elimination' and 'dressing/wound care'. The most common types of errors in all identified tasks were: 'hand disinfection', 'glove usage', and 'placement of materials'. Significantly more errors (p<0.0001) were observed the more multifaceted (single, combined or interrupted) the activity was. The numbers and types of errors as well as the character of activities performed in care situations described in this study confirm the need to improve current infection control practices. 
It is fundamental that healthcare personnel practice good hand hygiene; however, effective preventive hygiene is complex in healthcare activities due to the multifaceted care situations, especially when activities are interrupted. A deeper understanding of infection control practices that goes beyond the sense of security given by hand disinfection and the use of gloves is needed, as materials and surfaces in the care environment might be contaminated and thus pose a risk for organism transmission. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Measuring Two Decades of Ice Mass Loss using GRACE and SLR

    NASA Astrophysics Data System (ADS)

    Bonin, J. A.; Chambers, D. P.

    2016-12-01

    We use Satellite Laser Ranging (SLR) to extend the time series of ice mass change back in time to 1994. The SLR series has much coarser spatial resolution than GRACE, so we apply a constrained inversion technique to better localize the signal. We approximate the likely errors due to SLR's measurement errors combined with the inversion errors from using a low-resolution series, then estimate the interannual mass change over Greenland and Antarctica.

  20. A Novel Approach of Understanding and Incorporating Error of Chemical Transport Models into a Geostatistical Framework

    NASA Astrophysics Data System (ADS)

    Reyes, J.; Vizuete, W.; Serre, M. L.; Xu, Y.

    2015-12-01

    The EPA employs a vast monitoring network to measure ambient PM2.5 concentrations across the United States, with one of its goals being to quantify exposure within the population. However, there are several areas of the country with sparse monitoring, both spatially and temporally. One means to fill in these monitoring gaps is to use PM2.5 modeled estimates from Chemical Transport Models (CTMs), specifically the Community Multi-scale Air Quality (CMAQ) model. CMAQ is able to provide complete spatial coverage but is subject to systematic and random error due to model uncertainty. Because of the deterministic nature of CMAQ, these uncertainties are often not quantified. Much effort is devoted to quantifying the efficacy of these models through different metrics of model performance, but evaluation is currently specific only to locations with observed data. Multiyear studies across the United States are challenging because the error and model performance of CMAQ are not uniform over such large space/time domains. Error changes regionally and temporally. Because of the complex mix of species that constitute PM2.5, CMAQ error is also a function of increasing PM2.5 concentration. To address this issue we introduce a model performance evaluation for PM2.5 CMAQ that is regionalized and non-linear. This model performance evaluation leads to error quantification for each CMAQ grid cell, so that areas and time periods of error are better characterized. The regionalized error correction approach is non-linear and is therefore more flexible at characterizing model performance than approaches that rely on linearity assumptions and assume homoscedasticity of CMAQ prediction errors. Corrected CMAQ data are then incorporated into the modern geostatistical framework of Bayesian Maximum Entropy (BME). Through cross validation it is shown that incorporating error-corrected CMAQ data leads to more accurate estimates than using observed data by themselves.
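
    A minimal sketch of a regionalized, non-linear error correction (here, piecewise-linear within one region), assuming synthetic data; this illustrates the idea of a concentration-dependent bias correction, not the paper's actual method.

```python
# Synthetic example: CMAQ-like estimates whose error grows with concentration,
# corrected by a bin-wise mean bias (piecewise, hence non-linear overall).
import random

random.seed(0)
cmaq = [random.uniform(2, 30) for _ in range(300)]              # model estimates
obs  = [c - 0.02 * c**2 + random.gauss(0, 1.0) for c in cmaq]   # "monitors"

# Mean bias per concentration bin, for a single hypothetical region
bins = [(2, 10), (10, 20), (20, 30)]
bias = {}
for lo, hi in bins:
    pairs = [(c, o) for c, o in zip(cmaq, obs) if lo <= c < hi]
    bias[(lo, hi)] = sum(c - o for c, o in pairs) / len(pairs)

def correct(c):
    for (lo, hi), b in bias.items():
        if lo <= c < hi:
            return c - b   # subtract the concentration-dependent bias
    return c

corrected = [correct(c) for c in cmaq]
rmse_raw = (sum((c - o) ** 2 for c, o in zip(cmaq, obs)) / len(obs)) ** 0.5
rmse_cor = (sum((cc - o) ** 2 for cc, o in zip(corrected, obs)) / len(obs)) ** 0.5
```

    In the paper the corrected fields are then fed into the BME geostatistical estimator; the sketch stops at the error-correction step.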
