Unforced errors and error reduction in tennis
Brody, H
2006-01-01
Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568
Comparing errors in ED computer-assisted vs conventional pediatric drug dosing and administration.
Yamamoto, Loren; Kanemori, Joan
2010-06-01
Compared to fixed-dose single-vial drug administration in adults, pediatric drug dosing and administration requires a series of calculations, all of which are potentially error prone. The purpose of this study is to compare error rates and task completion times for common pediatric medication scenarios using computer program assistance vs conventional methods. Two versions of a 4-part paper-based test were developed. Each part consisted of a set of medication administration and/or dosing tasks. Emergency department and pediatric intensive care unit nurse volunteers completed these tasks using both methods (sequence assigned to start with a conventional or a computer-assisted approach). Completion times, errors, and the reason for each error were recorded. Thirty-eight nurses completed the study. Summing the completion of all 4 parts, the mean conventional total time was 1243 seconds vs the mean computer program total time of 879 seconds (P < .001). The conventional manual method had a mean of 1.8 errors vs the computer program with a mean of 0.7 errors (P < .001). Of the 97 total errors, 36 were due to misreading the drug concentration on the label, 34 were due to calculation errors, and 8 were due to misplaced decimals. Of the 36 label interpretation errors, 18 (50%) occurred with digoxin or insulin. Computerized assistance reduced errors and the time required for drug administration calculations. A pattern of errors emerged, indicating that reading/interpreting certain drug labels was more error prone. Optimizing the layout of drug labels could reduce the error rate for error-prone labels. Copyright (c) 2010 Elsevier Inc. All rights reserved.
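For readers unfamiliar with what "a series of calculations" entails, the sketch below illustrates the core weight-based dose arithmetic that such computer assistance automates. It is a minimal illustration, not the study's software; the weight, mg/kg dose, and vial concentration are hypothetical values chosen only for the example.

```python
# Minimal sketch of a weight-based pediatric dose calculation of the kind the
# computer assistance automates. The numbers below are hypothetical examples,
# not clinical guidance.

def dose_volume_ml(weight_kg: float, dose_mg_per_kg: float,
                   concentration_mg_per_ml: float) -> float:
    """Return the volume to draw up for a single weight-based dose."""
    dose_mg = weight_kg * dose_mg_per_kg
    return round(dose_mg / concentration_mg_per_ml, 2)

if __name__ == "__main__":
    # Example: a 12.5 kg child, a hypothetical 15 mg/kg dose, 100 mg/mL vial.
    print(dose_volume_ml(12.5, 15, 100))  # -> 1.88 mL
```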
Errors as a Means of Reducing Impulsive Food Choice.
Sellitto, Manuela; di Pellegrino, Giuseppe
2016-06-05
Nowadays, the increasing incidence of eating disorders due to poor self-control has given rise to increased obesity and other chronic weight problems, and ultimately, to reduced life expectancy. The capacity to refrain from automatic responses is usually high in situations in which making errors is highly likely. The protocol described here aims at reducing imprudent preference in women during hypothetical intertemporal choices about appetitive food by associating it with errors. First, participants undergo an error task where two different edible stimuli are associated with two different error likelihoods (high and low). Second, they make intertemporal choices about the two edible stimuli, separately. As a result, this method decreases the discount rate for future amounts of the edible reward that cued higher error likelihood, selectively. This effect is under the influence of the self-reported hunger level. The present protocol demonstrates that errors, well known as motivationally salient events, can induce the recruitment of cognitive control, thus being ultimately useful in reducing impatient choices for edible commodities.
The Deference Due the Oracle: Computerized Text Analysis in a Basic Writing Class.
ERIC Educational Resources Information Center
Otte, George
1989-01-01
Describes how a computerized text analysis program can help students discover error patterns in their writing, and notes how students' responses to analyses can reduce errors and improve their writing. (MM)
Zhou, Shuntai; Jones, Corbin; Mieczkowski, Piotr
2015-01-01
Validating the sampling depth and reducing sequencing errors are critical for studies of viral populations using next-generation sequencing (NGS). We previously described the use of Primer ID to tag each viral RNA template with a block of degenerate nucleotides in the cDNA primer. We now show that low-abundance Primer IDs (offspring Primer IDs) are generated due to PCR/sequencing errors. These artifactual Primer IDs can be removed using a cutoff model for the number of reads required to make a template consensus sequence. We have modeled the fraction of sequences lost due to Primer ID resampling. For a typical sequencing run, less than 10% of the raw reads are lost to offspring Primer ID filtering and resampling. The remaining raw reads are used to correct for PCR resampling and sequencing errors. We also demonstrate that Primer ID reveals bias intrinsic to PCR, especially at low template input or utilization. cDNA synthesis and PCR convert ca. 20% of RNA templates into recoverable sequences, and 30-fold sequence coverage recovers most of these template sequences. We have directly measured the residual error rate to be around 1 in 10,000 nucleotides. We use this error rate and the Poisson distribution to define the cutoff to identify preexisting drug resistance mutations at low abundance in an HIV-infected subject. Collectively, these studies show that >90% of the raw sequence reads can be used to validate template sampling depth and to dramatically reduce the error rate in assessing a genetically diverse viral population using NGS. IMPORTANCE Although next-generation sequencing (NGS) has revolutionized sequencing strategies, it suffers from serious limitations in defining sequence heterogeneity in a genetically diverse population, such as HIV-1, due to PCR resampling and PCR/sequencing errors. The Primer ID approach reveals the true sampling depth and greatly reduces errors. Knowing the sampling depth allows the construction of a model of how to maximize the recovery of sequences from input templates and to reduce resampling of the Primer ID so that appropriate multiplexing can be included in the experimental design. With the defined sampling depth and measured error rate, we are able to assign cutoffs for the accurate detection of minority variants in viral populations. This approach allows the power of NGS to be realized without having to guess about sampling depth or to ignore the problem of PCR resampling, while also being able to correct most of the errors in the data set. PMID:26041299
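The Poisson-based cutoff mentioned above can be illustrated with a short sketch. This is our own toy calculation, not the authors' pipeline; the template count and significance level are arbitrary, and only the residual error rate of roughly 1 in 10,000 nucleotides is taken from the abstract.

```python
# Sketch (our own illustration, not the authors' code) of using a residual
# per-base error rate and the Poisson distribution to set a detection cutoff
# for minority variants among template consensus sequences.
from scipy.stats import poisson

def variant_cutoff(n_templates: int, error_rate: float = 1e-4,
                   alpha: float = 0.05) -> int:
    """Smallest count k such that seeing >= k error-derived sequences at one
    position is unlikely (probability < alpha) under a Poisson model."""
    lam = n_templates * error_rate          # expected error-derived reads per position
    k = 0
    while poisson.sf(k - 1, lam) >= alpha:  # poisson.sf(k-1, lam) = P(X >= k)
        k += 1
    return k

# Example: with 5,000 template consensus sequences and a 1/10,000 error rate,
# a variant must appear in at least this many templates to be called.
print(variant_cutoff(5000))
```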
Oliven, A; Zalman, D; Shilankov, Y; Yeshurun, D; Odeh, M
2002-01-01
Computerized prescription of drugs is expected to reduce the number of preventable drug ordering errors. In the present study we evaluated the usefulness of a computerized drug order entry (CDOE) system in reducing prescription errors. A department of internal medicine using a comprehensive CDOE, which also included patient-related drug-laboratory, drug-disease and drug-allergy on-line surveillance, was compared to a similar department in which drug orders were handwritten. CDOE reduced prescription errors to 25-35%. The causes of errors remained similar, and most errors, in both departments, were associated with abnormal renal function and electrolyte balance. Residual errors remaining in the CDOE-using department were due to handwriting on the typed order, failure to enter patients' diseases, and system failures. The use of CDOE was associated with a significant reduction in mean hospital stay and in the number of changes made to the prescription. The findings of this study both quantify the impact of comprehensive CDOE on prescription errors and delineate the causes of the remaining errors.
Errors made by animals in memory paradigms are not always due to failure of memory.
Wilkie, D M; Willson, R J; Carr, J A
1999-01-01
It is commonly assumed that errors in animal memory paradigms such as delayed matching to sample, radial mazes, and food-cache recovery are due to failures in memory for information necessary to perform the task successfully. A body of research, reviewed here, suggests that this is not always the case: animals sometimes make errors despite apparently being able to remember the appropriate information. In this paper a case study of this phenomenon is described, along with a demonstration of a simple procedural modification that successfully reduced these non-memory errors, thereby producing a better measure of memory.
Errors in clinical laboratories or errors in laboratory medicine?
Plebani, Mario
2006-01-01
Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes in pre- and post-examination steps must be minimized to guarantee the total quality of laboratory services.
Error reduction by combining strapdown inertial measurement units in a baseball stitch
NASA Astrophysics Data System (ADS)
Tracy, Leah
A poor musical performance is rarely due to an inferior instrument. When a device is underperforming, the temptation is to find a better device or a new technology to achieve performance objectives; however, another solution may be to improve how existing technology is used through a better understanding of device characteristics, i.e., learning to play the instrument better. This thesis explores improving position and attitude estimates of inertial navigation systems (INS) through an understanding of inertial sensor errors, manipulating inertial measurement units (IMUs) to reduce that error, and multisensor fusion of multiple IMUs to reduce error in a GPS-denied environment.
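The error-reduction benefit of combining several IMUs can be previewed with the simplest possible fusion, straight averaging of independent sensors. The thesis considers more elaborate configurations and error models, so this is only a back-of-the-envelope sketch with made-up noise figures.

```python
# Minimal illustration of why fusing several IMUs reduces error: averaging N
# independent, equally noisy measurements cuts the noise standard deviation by
# roughly 1/sqrt(N). This is only the simplest fusion scheme; the thesis uses
# more elaborate arrangements (the "baseball stitch") and sensor error models.
import numpy as np

rng = np.random.default_rng(0)
true_rate = 1.0                      # deg/s, hypothetical angular rate
noise_sd = 0.05                      # per-gyro noise level, hypothetical
n_imus, n_samples = 8, 10_000

readings = true_rate + rng.normal(0, noise_sd, size=(n_imus, n_samples))
single = readings[0]                 # one IMU alone
fused = readings.mean(axis=0)        # naive average of all IMUs

print(f"single-IMU error SD : {single.std():.4f}")
print(f"fused (8 IMUs) SD   : {fused.std():.4f}  (about 1/sqrt(8) of single)")
```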
Information systems and human error in the lab.
Bissell, Michael G
2004-01-01
Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough. To the extent that the introduction of these systems results in operators having less practice in dealing with unexpected events, or becoming deskilled in problem-solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; an understanding of the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.
Goldmann tonometer error correcting prism: clinical evaluation.
McCafferty, Sean; Lim, Garrett; Duncan, William; Enikov, Eniko T; Schwiegerling, Jim; Levine, Jason; Kew, Corin
2017-01-01
Clinically evaluate a modified applanating surface Goldmann tonometer prism designed to substantially negate errors due to patient variability in biomechanics. A modified Goldmann prism with a correcting applanation tonometry surface (CATS) was mathematically optimized to minimize the intraocular pressure (IOP) measurement error due to patient variability in corneal thickness, stiffness, curvature, and tear film adhesion force. A comparative clinical study of 109 eyes measured IOP with CATS and Goldmann prisms. The IOP measurement differences between the CATS and Goldmann prisms were correlated to corneal thickness, hysteresis, and curvature. The CATS tonometer prism in correcting for Goldmann central corneal thickness (CCT) error demonstrated a reduction to <±2 mmHg in 97% of a standard CCT population. This compares to only 54% with CCT error <±2 mmHg using the Goldmann prism. Equal reductions of ~50% in errors due to corneal rigidity and curvature were also demonstrated. The results validate the CATS prism's improved accuracy and expected reduced sensitivity to Goldmann errors without IOP bias as predicted by mathematical modeling. The CATS replacement for the Goldmann prism does not change Goldmann measurement technique or interpretation.
NASA Technical Reports Server (NTRS)
Kuan, Gary M.; Dekens, Frank G.
2006-01-01
The Space Interferometry Mission (SIM) is a microarcsecond interferometric space telescope that requires picometer level precision measurements of its truss and interferometer baselines. Single-gauge metrology errors due to non-ideal physical characteristics of corner cubes reduce the angular measurement capability of the science instrument. Specifically, the non-common vertex error (NCVE) of a shared vertex, double corner cube introduces micrometer level single-gauge errors in addition to errors due to dihedral angles and reflection phase shifts. A modified SIM Kite Testbed containing an articulating double corner cube is modeled and the results are compared to the experimental testbed data. The results confirm modeling capability and viability of calibration techniques.
Poon, Eric G; Cina, Jennifer L; Churchill, William W; Mitton, Patricia; McCrea, Michelle L; Featherstone, Erica; Keohane, Carol A; Rothschild, Jeffrey M; Bates, David W; Gandhi, Tejal K
2005-01-01
We performed a direct observation pre-post study to evaluate the impact of barcode technology on medication dispensing errors and potential adverse drug events in the pharmacy of a tertiary-academic medical center. We found that barcode technology significantly reduced the rate of target dispensing errors leaving the pharmacy by 85%, from 0.37% to 0.06%. The rate of potential adverse drug events (ADEs) due to dispensing errors was also significantly reduced by 63%, from 0.19% to 0.069%. In a 735-bed hospital where 6 million doses of medications are dispensed per year, this technology is expected to prevent about 13,000 dispensing errors and 6,000 potential ADEs per year. PMID:16779372
NASA Astrophysics Data System (ADS)
Xu, Yadong; Serre, Marc L.; Reyes, Jeanette M.; Vizuete, William
2017-10-01
We have developed a Bayesian Maximum Entropy (BME) framework that integrates observations from a surface monitoring network and predictions from a Chemical Transport Model (CTM) to create improved exposure estimates that can be resolved into any spatial and temporal resolution. The flexibility of the framework allows for input of data at any choice of time scales and CTM predictions of any spatial resolution, with varying associated degrees of estimation error and cost in terms of implementation and computation. This study quantifies the impact of these choices on exposure estimation error by first comparing estimation errors when BME relied on ozone concentration data either as an hourly average, the daily maximum 8-h average (DM8A), or the daily 24-h average (D24A). Our analysis found that the use of DM8A and D24A data, although less computationally intensive, reduced estimation error more when compared to the use of hourly data. This was primarily due to the poorer CTM model performance in the hourly average predicted ozone. Our second analysis compared spatial variability and estimation errors when BME relied on CTM predictions with a grid cell resolution of 12 × 12 km2 versus a coarser resolution of 36 × 36 km2. Our analysis found that integrating the finer grid resolution CTM predictions not only reduced estimation error, but also increased the spatial variability in daily ozone estimates by 5 times. This improvement was due to the improved spatial gradients and model performance found in the finer resolved CTM simulation. The integration of observational and model predictions that is permitted in a BME framework continues to be a powerful approach for improving exposure estimates of ambient air pollution. The results of this analysis demonstrate the importance of also understanding model performance variability and its implications for exposure error.
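For concreteness, the sketch below shows how the DM8A and D24A metrics compared in this study can be derived from a day of hourly ozone values. It is a simplified single-day illustration with synthetic data; operational definitions include data-completeness and cross-midnight rules not shown here.

```python
# Sketch of deriving the daily 24-h average (D24A) and the daily maximum 8-h
# average (DM8A) from one day of hourly ozone concentrations. The hourly
# values are synthetic; real analyses apply completeness rules.
import numpy as np

hourly = 30 + 25 * np.sin(np.linspace(0, np.pi, 24))   # ppb, synthetic diurnal profile

d24a = hourly.mean()
running_8h = np.convolve(hourly, np.ones(8) / 8, mode="valid")  # 17 overlapping 8-h means
dm8a = running_8h.max()

print(f"D24A = {d24a:.1f} ppb, DM8A = {dm8a:.1f} ppb")
```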
Error-Based Design Space Windowing
NASA Technical Reports Server (NTRS)
Papila, Melih; Papila, Nilay U.; Shyy, Wei; Haftka, Raphael T.; Fitz-Coy, Norman
2002-01-01
Windowing of design space is considered in order to reduce the bias errors due to low-order polynomial response surfaces (RS). Standard design space windowing (DSW) uses a region of interest defined by setting a requirement on response level and checks it using global RS predictions over the design space. This approach, however, is vulnerable since RS modeling errors may lead to zooming in on the wrong region. The approach is modified by introducing an eigenvalue error measure based on a point-to-point mean squared error criterion. Two examples are presented to demonstrate the benefit of the error-based DSW.
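The motivation for windowing, that a low-order response surface is far less biased over a narrow region of interest than over the full design space, can be seen in a small toy example. This is our own illustration, not the paper's eigenvalue-based measure, and the "true" response is an arbitrary cubic.

```python
# Toy illustration of why design space windowing reduces bias error from a
# low-order response surface: a quadratic fit to a strongly cubic "true"
# response is badly biased over the full design space but much less so over a
# narrow window of interest.
import numpy as np

def true_response(x):
    return 1.0 + 0.5 * x + 2.0 * x**3          # higher-order than the quadratic RS

def max_bias(lo, hi, n=41):
    x = np.linspace(lo, hi, n)
    coef = np.polyfit(x, true_response(x), deg=2)   # low-order (quadratic) RS
    return np.max(np.abs(np.polyval(coef, x) - true_response(x)))

print(f"max bias over full space  [-1, 1]   : {max_bias(-1.0, 1.0):.3f}")
print(f"max bias over window      [0.6, 1.0]: {max_bias(0.6, 1.0):.3f}")
```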
Influence of measurement error on Maxwell's demon
NASA Astrophysics Data System (ADS)
Sørdal, Vegard; Bergli, Joakim; Galperin, Y. M.
2017-06-01
In any general cycle of measurement, feedback, and erasure, the measurement will reduce the entropy of the system when information about the state is obtained, while erasure, according to Landauer's principle, is accompanied by a corresponding increase in entropy due to the compression of logical and physical phase space. The total process can in principle be fully reversible. A measurement error reduces the information obtained and the entropy decrease in the system. The erasure still gives the same increase in entropy, and the total process is irreversible. Another consequence of measurement error is that a bad feedback is applied, which further increases the entropy production if the proper protocol adapted to the expected error rate is not applied. We consider the effect of measurement error on a realistic single-electron box Szilard engine, and we find the optimal protocol for the cycle as a function of the desired power P and error ɛ .
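One standard way to quantify the argument above is to compare the information gained by an imperfect binary measurement with the Landauer cost of erasing the one-bit memory. The sketch below assumes a symmetric error probability and equally likely states, which is a simplification of the single-electron-box engine analyzed in the paper.

```python
# Sketch of the information-theoretic bookkeeping described above, assuming a
# symmetric binary (Szilard-type) measurement with error probability eps and
# equally likely states. The information gained is that of a binary symmetric
# channel, I(eps) = ln2 + eps*ln(eps) + (1-eps)*ln(1-eps) in nats, while
# Landauer erasure of the one-bit memory still costs at least kT*ln2, so the
# cycle becomes increasingly irreversible as eps grows.
import numpy as np

def info_gain_nats(eps: float) -> float:
    if eps in (0.0, 1.0):
        return np.log(2)
    return np.log(2) + eps * np.log(eps) + (1 - eps) * np.log(1 - eps)

for eps in (0.0, 0.01, 0.05, 0.1, 0.25):
    i = info_gain_nats(eps)
    print(f"eps={eps:<5} info gained = {i:.3f} nats, "
          f"erasure cost minus information >= {np.log(2) - i:.3f} kT")
```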
Elimination of Emergency Department Medication Errors Due To Estimated Weights.
Greenwalt, Mary; Griffen, David; Wilkerson, Jim
2017-01-01
From 7/2014 through 6/2015, 10 emergency department (ED) medication dosing errors were reported through the electronic incident reporting system of an urban academic medical center. Analysis of these medication errors identified inaccurate estimated patient weights as the root cause. The goal of this project was to reduce weight-based dosing medication errors due to inaccurate estimated weights on patients presenting to the ED. Chart review revealed that 13.8% of estimated weights documented on admitted ED patients varied more than 10% from subsequent actual admission weights recorded. A random sample of 100 charts containing estimated weights revealed 2 previously unreported significant medication dosage errors (a significant error rate of 0.02). Key improvements included removing barriers to weighing ED patients, storytelling to engage staff and change culture, and removal of the estimated weight documentation field from the ED electronic health record (EHR) forms. With these improvements, estimated weights on ED patients, and the resulting medication errors, were eliminated.
Extending the impulse response in order to reduce errors due to impulse noise and signal fading
NASA Technical Reports Server (NTRS)
Webb, Joseph A.; Rolls, Andrew J.; Sirisena, H. R.
1988-01-01
A finite impulse response (FIR) digital smearing filter was designed to produce maximum intersymbol interference and maximum extension of the impulse response of the signal in a noiseless binary channel. A matched FIR desmearing filter at the receiver then reduced the intersymbol interference to zero. Signal fades were simulated by means of 100 percent signal blockage in the channel. Smearing and desmearing filters of length 256, 512, and 1024 were used for these simulations. Results indicate that impulse response extension by means of bit smearing appears to be a useful technique for correcting errors due to impulse noise or signal fading in a binary channel.
Identification of Carbon loss in the production of pilot-scale Carbon nanotube using gauze reactor
NASA Astrophysics Data System (ADS)
Wulan, P. P. D. K.; Purwanto, W. W.; Yeni, N.; Lestari, Y. D.
2018-03-01
Carbon loss of more than 65% was the major obstacle in Carbon Nanotube (CNT) production using a pilot-scale gauze reactor. The results showed that the initial carbon loss calculation is 27.64%. The carbon loss calculation was then refined with corrections for: product flow rate measurement error, feed flow rate changes, gas product composition by Gas Chromatography Flame Ionization Detector (GC FID), and carbon particulate captured by glass fiber filters. Error in the product flow rate due to measurement with bubble soap contributes a carbon loss calculation error of about ±4.14%. Changes in the feed flow rate due to CNT growth in the reactor reduce carbon loss by 4.97%. The detection of secondary hydrocarbons with GC FID during the CNT production process reduces carbon loss by 5.14%. Particulates carried by the product stream are very few and correct the carbon loss by only about 0.05%. Taking all these factors into account, the amount of carbon loss within this study is (17.21 ± 4.14)%. Assuming that 4.14% of the carbon loss is due to the error in measurement of product flow rate, the amount of carbon loss is 13.07%. This means that more than 57% of the carbon loss within this study is identified.
Plan for Quality to Improve Patient Safety at the Point of Care
Ehrmeyer, Sharon S.
2011-01-01
The U.S. Institute of Medicine's (IOM) much-publicized report "To Err Is Human" (2000, National Academy Press) stated that as many as 98,000 hospitalized patients in the U.S. die each year due to preventable medical errors. This revelation about medical error and patient safety focused the public's and the medical community's attention on errors in healthcare delivery, including laboratory and point-of-care testing (POCT). Errors introduced anywhere in the POCT process clearly can impact quality and place patient safety at risk. While POCT performed by or near the patient reduces the potential for some errors, the process presents many challenges to quality with its multiple test sites, test menus, testing devices and non-laboratory analysts, who often have little understanding of quality testing. Incoherent or absent regulations and the rapid availability of test results for immediate clinical intervention can further amplify errors. System planning and management of the entire POCT process are essential to reduce errors and improve quality and patient safety. PMID:21808107
Indaram, Maanasa; VanderVeen, Deborah K
2018-01-01
Advances in surgical techniques allow implantation of intraocular lenses (IOL) with cataract extraction, even in young children. However, there are several challenges unique to the pediatric population that result in greater degrees of postoperative refractive error compared to adults. Literature review of the techniques and outcomes of pediatric cataract surgery with IOL implantation. Pediatric cataract surgery is associated with several sources of postoperative refractive error. These include planned refractive error based on age or fellow eye status, loss of accommodation, and unexpected refractive errors due to inaccuracies in biometry technique, use of IOL power formulas based on adult normative values, and late refractive changes due to unpredictable eye growth. Several factors can preclude the achievement of optimal refractive status following pediatric cataract extraction with IOL implantation. There is a need for new technology to reduce postoperative refractive surprises and address refractive adjustment in a growing eye.
NASA Astrophysics Data System (ADS)
Jun, Brian; Giarra, Matthew; Golz, Brian; Main, Russell; Vlachos, Pavlos
2016-11-01
We present a methodology to mitigate the major sources of error associated with two-dimensional confocal laser scanning microscopy (CLSM) images of nanoparticles flowing through a microfluidic channel. The correlation-based velocity measurements from CLSM images are subject to random error due to the Brownian motion of nanometer-sized tracer particles, and a bias error due to the formation of images by raster scanning. Here, we develop a novel ensemble phase correlation with dynamic optimal filter that maximizes the correlation strength, which diminishes the random error. In addition, we introduce an analytical model of CLSM measurement bias error correction due to two-dimensional image scanning of tracer particles. We tested our technique using both synthetic and experimental images of nanoparticles flowing through a microfluidic channel. We observed that our technique reduced the error by up to a factor of ten compared to ensemble standard cross correlation (SCC) for the images tested in the present work. Subsequently, we will assess our framework further, by interrogating nanoscale flow in the cell culture environment (transport within the lacunar-canalicular system) to demonstrate our ability to accurately resolve flow measurements in a biological system.
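As background to the ensemble phase correlation described above, the following sketch shows a single-pair phase correlation recovering a known displacement between two images. The dynamic optimal filter and ensemble averaging that give the paper's method its accuracy are not reproduced here; this is only the standard building block.

```python
# Minimal sketch of single-pair phase correlation: the normalized cross-power
# spectrum of two images yields a sharp peak at their relative displacement.
import numpy as np

rng = np.random.default_rng(2)
img_a = rng.random((64, 64))
img_b = np.roll(img_a, shift=(3, -5), axis=(0, 1))   # known displacement (3, -5)

Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
cross_power = Fb * np.conj(Fa)
cross_power /= np.abs(cross_power) + 1e-12           # keep phase information only
corr = np.fft.ifft2(cross_power).real

peak = np.unravel_index(np.argmax(corr), corr.shape)
shift = [int(p) if p < s // 2 else int(p - s) for p, s in zip(peak, corr.shape)]
print(f"recovered shift: {shift}")                   # -> [3, -5]
```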
Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality
ERIC Educational Resources Information Center
Bishara, Anthony J.; Hittner, James B.
2015-01-01
It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared…
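A miniature version of such a Monte Carlo comparison is sketched below. The distributions, sample size, and replication count are arbitrary choices of ours and not those of the study; each estimator is compared against its own population correlation.

```python
# Small Monte Carlo comparing bias and spread of the sample Pearson correlation
# for normal vs. heavily skewed (lognormal) data, each measured against its own
# population value. All settings here are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
rho, n, reps = 0.5, 20, 20_000
rho_lognormal = (np.exp(rho) - 1) / (np.e - 1)   # population r of the exp-transformed pair

def sample_r(transform):
    xy = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
    return np.corrcoef(transform(xy[:, 0]), transform(xy[:, 1]))[0, 1]

normal = np.array([sample_r(lambda v: v) for _ in range(reps)])
skewed = np.array([sample_r(np.exp) for _ in range(reps)])

print(f"normal    : bias = {normal.mean() - rho:+.3f}, SD = {normal.std():.3f}")
print(f"lognormal : bias = {skewed.mean() - rho_lognormal:+.3f}, SD = {skewed.std():.3f}")
```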
Detecting and overcoming systematic errors in genome-scale phylogenies.
Rodríguez-Ezpeleta, Naiara; Brinkmann, Henner; Roure, Béatrice; Lartillot, Nicolas; Lang, B Franz; Philippe, Hervé
2007-06-01
Genome-scale data sets result in an enhanced resolution of the phylogenetic inference by reducing stochastic errors. However, there is also an increase of systematic errors due to model violations, which can lead to erroneous phylogenies. Here, we explore the impact of systematic errors on the resolution of the eukaryotic phylogeny using a data set of 143 nuclear-encoded proteins from 37 species. The initial observation was that, despite the impressive amount of data, some branches had no significant statistical support. To demonstrate that this lack of resolution is due to a mutual annihilation of phylogenetic and nonphylogenetic signals, we created a series of data sets with slightly different taxon sampling. As expected, these data sets yielded strongly supported but mutually exclusive trees, thus confirming the presence of conflicting phylogenetic and nonphylogenetic signals in the original data set. To decide on the correct tree, we applied several methods expected to reduce the impact of some kinds of systematic error. Briefly, we show that (i) removing fast-evolving positions, (ii) recoding amino acids into functional categories, and (iii) using a site-heterogeneous mixture model (CAT) are three effective means of increasing the ratio of phylogenetic to nonphylogenetic signal. Finally, our results allow us to formulate guidelines for detecting and overcoming phylogenetic artefacts in genome-scale phylogenetic analyses.
Error analysis and correction of lever-type stylus profilometer based on Nelder-Mead Simplex method
NASA Astrophysics Data System (ADS)
Hu, Chunbing; Chang, Suping; Li, Bo; Wang, Junwei; Zhang, Zhongyu
2017-10-01
Due to its high measurement accuracy and wide range of applications, lever-type stylus profilometry is commonly used in industrial research areas. However, the error caused by the lever structure has a great influence on the profile measurement; this paper therefore analyzes the errors of a high-precision, large-range lever-type stylus profilometer. The errors are corrected by the Nelder-Mead Simplex method, and the results are verified by spherical surface calibration. The results show that this method can effectively reduce the measurement error and improve the accuracy of the stylus profilometer in large-scale measurement.
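The correction strategy, fitting a parametric error model to calibration data with the Nelder-Mead simplex method, can be sketched generically as below. The error-model form and coefficients are illustrative assumptions of ours, not the paper's model.

```python
# Generic sketch of the correction idea: calibrate a parametric error model of
# a lever-type stylus against known reference heights (e.g. from a sphere
# calibration) by minimizing the residual with the Nelder-Mead simplex method.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
z_true = np.linspace(-0.5, 0.5, 101)                      # reference heights (mm)
# synthetic lever nonlinearity: offset + gain error + quadratic arc-type term + noise
z_meas = 0.015 + 0.97 * z_true + 0.08 * z_true**2 + rng.normal(0, 1e-3, z_true.size)

def residual(p):
    offset, gain, quad = p
    predicted = offset + gain * z_true + quad * z_true**2   # forward error model
    return np.sum((predicted - z_meas) ** 2)

result = minimize(residual, x0=[0.0, 1.0, 0.0], method="Nelder-Mead")
offset, gain, quad = result.x
print(f"fitted offset={offset:.4f}, gain={gain:.4f}, quad={quad:.4f}")
# The fitted model can then be inverted to correct subsequent measurements.
```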
Refractive eye surgery in treating functional amblyopia in children.
Levenger, Samuel; Nemet, Pinhas; Hirsh, Ami; Kremer, Israel; Nemet, Arie
2006-01-01
While excimer laser refractive surgery is recommended and highly successful for correcting refractive errors in adults, its use in children has not been extensively exercised or studied. We report our experience treating children with amblyopia due to high anisometropia, high astigmatism, high myopia, and associated developmental delay. A review of patient records from our refractive clinic was performed. A retrospective review was made of all 11 children with stable refractive errors who were unsuccessfully treated non-surgically and then underwent corneal refractive surgery and, in one case, lenticular surgery. Seven had high myopic anisometropia, two had high astigmatism, and two had high myopia: one with Down's Syndrome and one with agenesis of the corpus callosum. The surgical refractive treatment eliminated or reduced the anisometropia, reduced the astigmatic error, improved vision and improved the daily function of the children with developmental delay. There were no complications or untoward results. Refractive surgery is safe and effective in treating children with high myopic anisometropia, high astigmatism, high myopia, and developmental delay due to the resulting poor vision. Surgery can improve visual acuity in amblyopia not responding to routine treatment by correcting the refractive error and refractive aberrations.
Inducible DNA-repair systems in yeast: competition for lesions.
Mitchel, R E; Morrison, D P
1987-03-01
DNA lesions may be recognized and repaired by more than one DNA-repair process. If two repair systems with different error frequencies have overlapping lesion specificity and one or both is inducible, the resulting variable competition for the lesions can change the biological consequences of these lesions. This concept was demonstrated by observing mutation in yeast cells (Saccharomyces cerevisiae) exposed to combinations of mutagens under conditions which influenced the induction of error-free recombinational repair or error-prone repair. Total mutation frequency was reduced in a manner proportional to the dose of 60Co-gamma- or 254 nm UV radiation delivered prior to or subsequent to an MNNG exposure. Suppression was greater per unit radiation dose in cells gamma-irradiated in O2 as compared to N2. A rad3 (excision-repair) mutant gave results similar to wild-type but mutation in a rad52 (rec-) mutant exposed to MNNG was not suppressed by radiation. Protein-synthesis inhibition with heat shock or cycloheximide indicated that it was the mutation due to MNNG and not that due to radiation which had changed. These results indicate that MNNG lesions are recognized by both the recombinational repair system and the inducible error-prone system, but that gamma-radiation induction of error-free recombinational repair resulted in increased competition for the lesions, thereby reducing mutation. Similarly, gamma-radiation exposure resulted in a radiation dose-dependent reduction in mutation due to MNU, EMS, ENU and 8-MOP + UVA, but no reduction in mutation due to MMS. These results suggest that the number of mutational MMS lesions recognizable by the recombinational repair system must be very small relative to those produced by the other agents. MNNG induction of the inducible error-prone systems however, did not alter mutation frequencies due to ENU or MMS exposure but, in contrast to radiation, increased the mutagenic effectiveness of EMS. These experiments demonstrate that in this lower eukaryote, mutagen exposure does not necessarily result in a fixed risk of mutation, but that the risk can be markedly influenced by a variety of external stimuli including heat shock or exposure to other mutagens.
NASA Astrophysics Data System (ADS)
Mohd Sakri, F.; Mat Ali, M. S.; Sheikh Salim, S. A. Z.
2016-10-01
The fluid physics of a liquid draining inside a tank is easily studied using numerical simulation. However, numerical simulation is expensive when the draining involves a multi-phase problem. Since an accurate numerical simulation can be obtained if a proper method for error estimation is applied, this paper provides a systematic assessment of the error due to grid convergence using OpenFOAM. OpenFOAM is an open-source CFD toolbox that is well known among researchers and institutions because it is free and ready to use. In this study, three grid resolutions are used: coarse, medium and fine. The Grid Convergence Index (GCI) is applied to estimate the error due to grid sensitivity. A monotonic convergence condition is obtained in this study, showing that the grid convergence error has been progressively reduced. The fine grid has a GCI value below 1%. The value extrapolated with Richardson Extrapolation is within the range of the GCI obtained.
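The GCI calculation itself is compact. A sketch using Roache's common formulation for three systematically refined grids is shown below with made-up solution values; they are not results from the study.

```python
# Sketch of the Grid Convergence Index (Roache's formulation) for three
# systematically refined grids. The solution values and refinement ratio are
# hypothetical numbers for illustration only.
import math

f_fine, f_medium, f_coarse = 1.970, 1.960, 1.935   # e.g. a drained-volume quantity (made up)
r = 2.0                                             # grid refinement ratio
Fs = 1.25                                           # safety factor for three-grid studies

p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)  # observed order
eps = abs((f_medium - f_fine) / f_fine)                                        # relative difference
gci_fine = 100 * Fs * eps / (r**p - 1)                                         # percent
f_exact = f_fine + (f_fine - f_medium) / (r**p - 1)                            # Richardson extrapolation

print(f"observed order p = {p:.2f}")
print(f"GCI (fine grid)  = {gci_fine:.2f}%")
print(f"Richardson-extrapolated value = {f_exact:.4f}")
```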
Robust quantum logic in neutral atoms via adiabatic Rydberg dressing
Keating, Tyler; Cook, Robert L.; Hankin, Aaron M.; ...
2015-01-28
We study a scheme for implementing a controlled-Z (CZ) gate between two neutral-atom qubits based on the Rydberg blockade mechanism in a manner that is robust to errors caused by atomic motion. By employing adiabatic dressing of the ground electronic state, we can protect the gate from decoherence due to random phase errors that typically arise because of atomic thermal motion. In addition, the adiabatic protocol allows for a Doppler-free configuration that involves counterpropagating lasers in a σ+/σ- orthogonal polarization geometry that further reduces motional errors due to Doppler shifts. The residual motional error is dominated by dipole-dipole forces acting on doubly-excited Rydberg atoms when the blockade is imperfect. As a result, for reasonable parameters, with qubits encoded into the clock states of 133Cs, we predict that our protocol could produce a CZ gate in < 10 μs with error probability on the order of 10^-3.
Berwid, Olga G.; Halperin, Jeffrey M.; Johnson, Ray E.; Marks, David J.
2013-01-01
Background Attention-Deficit/Hyperactivity Disorder has been associated with deficits in self-regulatory cognitive processes, some of which are thought to lie at the heart of the disorder. Slowing of reaction times (RTs) for correct responses following errors made during decision tasks has been interpreted as an indication of intact self-regulatory functioning and has been shown to be attenuated in school-aged children with ADHD. This study attempted to examine whether ADHD symptoms are associated with an early-emerging deficit in post-error slowing. Method A computerized two-choice RT task was administered to an ethnically diverse sample of preschool-aged children classified as either ‘control’ (n = 120) or ‘hyperactive/inattentive’ (HI; n = 148) using parent- and teacher-rated ADHD symptoms. Analyses were conducted to determine whether HI preschoolers exhibit a deficit in this self-regulatory ability. Results HI children exhibited reduced post-error slowing relative to controls on the trials selected for analysis. Supplementary analyses indicated that this may have been due to a reduced proportion of trials following errors on which HI children slowed rather than to a reduction in the absolute magnitude of slowing on all trials following errors. Conclusions High levels of ADHD symptoms in preschoolers may be associated with a deficit in error processing as indicated by post-error slowing. The results of supplementary analyses suggest that this deficit is perhaps more a result of failures to perceive errors than of difficulties with executive control. PMID:23387525
Development of WRF-ROI system by incorporating eigen-decomposition
NASA Astrophysics Data System (ADS)
Kim, S.; Noh, N.; Song, H.; Lim, G.
2011-12-01
This study presents the development of the WRF-ROI system, which is the implementation of Retrospective Optimal Interpolation (ROI) in the Weather Research and Forecasting model (WRF). ROI is a new data assimilation algorithm introduced by Song et al. (2009) and Song and Lim (2009). The formulation of ROI is similar to that of Optimal Interpolation (OI), but ROI iteratively assimilates an observation set at a post-analysis time into a prior analysis, potentially providing high-quality reanalysis data. The ROI method assimilates data at the post-analysis time using the perturbation method (Errico and Raeder, 1999) without an adjoint model. In a previous study, the ROI method was applied to the Lorenz 40-variable model (Lorenz, 1996) to validate the algorithm and investigate its capability. It is therefore necessary to apply the ROI method to a more realistic and complicated model framework such as WRF. In this research, the reduced-rank formulation of ROI is used instead of a reduced-resolution method. The computational cost can be reduced due to the eigen-decomposition of the background error covariance in the reduced-rank method. When a single profile of observations is assimilated in the WRF-ROI system incorporating eigen-decomposition, the analysis error tends to be reduced compared with the background error. The difference between forecast errors with and without assimilation increases noticeably with time, which indicates that assimilation improves the forecast.
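The reduced-rank step can be illustrated by eigen-decomposing a toy background-error covariance and retaining only the leading modes. The covariance model below is purely illustrative and far smaller than an actual WRF background covariance.

```python
# Toy sketch of the reduced-rank idea: approximate a background-error
# covariance B with its leading eigenpairs so that analysis updates can be
# computed in a much smaller subspace. The covariance model (Gaussian
# correlation on a 1-D grid) is illustrative only.
import numpy as np

n = 200
x = np.arange(n)
B = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 10.0**2))   # correlation length ~10 grid points

eigval, eigvec = np.linalg.eigh(B)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

k = 20                                                          # retained modes (rank reduction)
B_k = (eigvec[:, :k] * eigval[:k]) @ eigvec[:, :k].T            # rank-k approximation of B

explained = eigval[:k].sum() / eigval.sum()
err = np.linalg.norm(B - B_k) / np.linalg.norm(B)
print(f"rank {k}/{n}: {explained:.1%} of variance retained, relative error {err:.2e}")
```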
Patient safety in otolaryngology: a descriptive review.
Danino, Julian; Muzaffar, Jameel; Metcalfe, Chris; Coulson, Chris
2017-03-01
Human evaluation and judgement may include errors that can have disastrous results. Within medicine and healthcare there has been slow progress towards major changes in safety. Healthcare lags behind other specialised industries, such as aviation and nuclear power, where there have been significant improvements in overall safety, especially in reducing the risk of errors. Following several high-profile cases in the USA during the 1990s, a report titled "To Err Is Human: Building a Safer Health System" was published. The report extrapolated that in the USA approximately 50,000 to 100,000 patients may die each year as a result of medical errors. Traditionally, otolaryngology has always been regarded as a "safe specialty"; however, a study in the USA in 2004 inferred that there may be 2600 cases of major morbidity and 165 deaths within the specialty. MEDLINE via the PubMed interface was searched for English language articles published between 2000 and 2012, each search combining two or three of the keywords noted earlier. The review is limited to several generic topics within patient safety in otolaryngology; other areas covered are currently relevant topics due to recent interest or new advances in technology. There has been a heightened awareness of patient safety within the healthcare community; it has become a major priority. The focus has shifted from apportioning blame to the prevention of errors and the implementation of patient safety mechanisms in healthcare delivery. Errors can be divided into errors due to action and errors due to knowledge or planning. In healthcare there are several factors that may influence adverse events and patient safety. Although technology may improve patient safety, it also introduces new sources of error. The ability to work with others allows for increased safety netting, and team working has been shown to have a beneficial effect on patient safety. Any field of work involving human decision-making will always carry a risk of error. Within otolaryngology, although patient safety has evolved along themes similar to other surgical specialties, there are several specific high-risk areas. Medical error is a common problem and its human cost is of immense importance. Steps to reduce such errors require the identification of high-risk practice within a complex healthcare system. The commitment to patient safety and quality improvement in medicine depends on personal responsibility and professional accountability.
An organizational approach to understanding patient safety and medical errors.
Kaissi, Amer
2006-01-01
Progress in patient safety, or lack thereof, is a cause for great concern. In this article, we argue that the patient safety movement has failed to reach its goals of eradicating or, at least, significantly reducing errors because of an inappropriate focus on provider and patient-level factors with no real attention to the organizational factors that affect patient safety. We describe an organizational approach to patient safety using different organizational theory perspectives and make several propositions to push patient safety research and practice in a direction that is more likely to improve care processes and outcomes. From a Contingency Theory perspective, we suggest that health care organizations, in general, operate under a misfit between contingencies and structures. This misfit is mainly due to lack of flexibility, cost containment, and lack of regulations, thus explaining the high level of errors committed in these organizations. From an organizational culture perspective, we argue that health care organizations must change their assumptions, beliefs, values, and artifacts to change their culture from a culture of blame to a culture of safety and thus reduce medical errors. From an organizational learning perspective, we discuss how reporting, analyzing, and acting on error information can result in reduced errors in health care organizations.
High accuracy switched-current circuits using an improved dynamic mirror
NASA Technical Reports Server (NTRS)
Zweigle, G.; Fiez, T.
1991-01-01
The switched-current technique, a recently developed circuit approach to analog signal processing, has emerged as an alternative/complement to the well-established switched-capacitor circuit technique. High-speed switched-current circuits offer potential cost and power savings over slower switched-capacitor circuits. Accuracy improvements are a primary concern at this stage in the development of the switched-current technique. Use of the dynamic current mirror has produced circuits that are insensitive to transistor matching errors. The dynamic current mirror has been limited by other sources of error, including clock-feedthrough and voltage transient errors. In this paper we present an improved switched-current building block using the dynamic current mirror. Utilizing current feedback, the errors due to current imbalance in the dynamic current mirror are reduced. Simulations indicate that this feedback can reduce total harmonic distortion by as much as 9 dB. Additionally, we have developed a clock-feedthrough reduction scheme for which simulations reveal a potential 10 dB total harmonic distortion improvement. The clock-feedthrough reduction scheme also significantly reduces offset errors and allows for cancellation with a constant current source. Experimental results confirm the simulated improvements.
Pyrometer with tracking balancing
NASA Astrophysics Data System (ADS)
Ponomarev, D. B.; Zakharenko, V. A.; Shkaev, A. G.
2018-04-01
Currently, one of the main metrological challenges in noncontact temperature measurement is emissivity uncertainty. This paper describes a pyrometer that diminishes the emissivity effect by using a measuring scheme with tracking balancing, in which the radiation receiver acts as a null indicator. The results of a study of the prototype pyrometer's absolute error in measuring the surface temperature of aluminum and nickel samples are presented. Absolute-error values calculated from tabulated emissivities are compared with the errors obtained in experimental measurements using the proposed method. The practical implementation of the proposed technical solution has reduced the error due to emissivity uncertainty by a factor of two.
Elliott, Rachel A; Putman, Koen D; Franklin, Matthew; Annemans, Lieven; Verhaeghe, Nick; Eden, Martin; Hayre, Jasdeep; Rodgers, Sarah; Sheikh, Aziz; Avery, Anthony J
2014-06-01
We recently showed that a pharmacist-led information technology-based intervention (PINCER) was significantly more effective in reducing medication errors in general practices than providing simple feedback on errors, with cost per error avoided at £79 (US$131). We aimed to estimate cost effectiveness of the PINCER intervention by combining effectiveness in error reduction and intervention costs with the effect of the individual errors on patient outcomes and healthcare costs, to estimate the effect on costs and QALYs. We developed Markov models for each of six medication errors targeted by PINCER. Clinical event probability, treatment pathway, resource use and costs were extracted from literature and costing tariffs. A composite probabilistic model combined patient-level error models with practice-level error rates and intervention costs from the trial. Cost per extra QALY and cost-effectiveness acceptability curves were generated from the perspective of NHS England, with a 5-year time horizon. The PINCER intervention generated £2,679 less cost and 0.81 more QALYs per practice [incremental cost-effectiveness ratio (ICER): -£3,037 per QALY] in the deterministic analysis. In the probabilistic analysis, PINCER generated 0.001 extra QALYs per practice compared with simple feedback, at £4.20 less per practice. Despite this extremely small set of differences in costs and outcomes, PINCER dominated simple feedback with a mean ICER of -£3,936 (standard error £2,970). At a ceiling 'willingness-to-pay' of £20,000/QALY, PINCER reaches 59 % probability of being cost effective. PINCER produced marginal health gain at slightly reduced overall cost. Results are uncertain due to the poor quality of data to inform the effect of avoiding errors.
Precise automatic differential stellar photometry
NASA Technical Reports Server (NTRS)
Young, Andrew T.; Genet, Russell M.; Boyd, Louis J.; Borucki, William J.; Lockwood, G. Wesley
1991-01-01
The factors limiting the precision of differential stellar photometry are reviewed. Errors due to variable atmospheric extinction can be reduced to below 0.001 mag at good sites by utilizing the speed of robotic telescopes. Existing photometric systems produce aliasing errors, which are several millimagnitudes in general but may be reduced to about a millimagnitude in special circumstances. Conventional differential photometry neglects several other important effects, which are discussed in detail. If all of these are properly handled, it appears possible to do differential photometry of variable stars with an overall precision of 0.001 mag with ground based robotic telescopes.
A Bayesian approach to model structural error and input variability in groundwater modeling
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.
2015-12-01
Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
Accuracy of outpatient service data for activity-based funding in New South Wales, Australia.
Munyisia, Esther N; Reid, David; Yu, Ping
2017-05-01
Despite increasing research on activity-based funding (ABF), there is no empirical evidence on the accuracy of outpatient service data for payment. This study aimed to identify data entry errors affecting ABF in two drug and alcohol outpatient clinic services in Australia. An audit was carried out on healthcare workers' (doctors, nurses, psychologists, social workers, counsellors, and aboriginal health education officers) data entry errors in an outpatient electronic documentation system. Of the 6919 data entries in the electronic documentation system, 7.5% (518) had errors, 68.7% of the errors were related to a wrong primary activity, 14.5% were due to a wrong activity category, 14.5% were as a result of a wrong combination of primary activity and modality of care, 1.9% were due to inaccurate information on a client's presence during service delivery and 0.4% were related to a wrong modality of care. Data entry errors may affect the amount of funding received by a healthcare organisation, which in turn may affect the quality of treatment provided to clients due to the possibility of underfunding the organisation. To reduce errors or achieve an error-free environment, there is a need to improve the naming convention of data elements, their descriptions and alignment with the national standard classification of outpatient services. It is also important to support healthcare workers in their data entry by embedding safeguards in the electronic documentation system such as flags for inaccurate data elements.
Gravity field recovery in the framework of a Geodesy and Time Reference in Space (GETRIS)
NASA Astrophysics Data System (ADS)
Hauk, Markus; Schlicht, Anja; Pail, Roland; Murböck, Michael
2017-04-01
The study "Geodesy and Time Reference in Space" (GETRIS), funded by the European Space Agency (ESA), evaluates the potential and opportunities of a global space-borne infrastructure for data transfer, clock synchronization and ranging. Gravity field recovery could be one of the first applications to benefit from such an infrastructure. This paper analyzes and evaluates two-way high-low satellite-to-satellite tracking as a novel method and long-term perspective for the determination of the Earth's gravitational field, using it as a synergy of one-way high-low combined with low-low satellite-to-satellite tracking in order to generate adequate de-aliasing products. Although first planned as a constellation of geostationary satellites, it turned out that integrating European Union Global Navigation Satellite System (Galileo) satellites (equipped with inter-Galileo links) into a Geostationary Earth Orbit (GEO) constellation would extend the capability of such a mission constellation remarkably. We report on simulations of different Galileo and Low Earth Orbiter (LEO) satellite constellations, computed using time-variable geophysical background models, to determine temporal changes in the Earth's gravitational field. Our work aims at an error analysis of this new satellite/instrument scenario by investigating the impact of different error sources. Compared to a low-low satellite-to-satellite tracking mission, results show reduced temporal aliasing errors due to a more isotropic error behavior caused by an improved observation geometry, predominantly in the near-radial direction of the inter-satellite links, as well as the potential of improved gravity recovery with higher spatial and temporal resolution. The major contributors to temporal gravity retrieval error are aliasing errors due to undersampling of high-frequency signals (mainly atmosphere, ocean and ocean tides). In this context, we investigate adequate methods to reduce these errors. We vary the number of Galileo and LEO satellites and show reduced errors in the temporal gravity field solutions for these enhanced inter-satellite links. Based on the GETRIS infrastructure, the multiplicity of satellites enables co-estimating short-period, long-wavelength gravity field signals, indicating this as a powerful method for non-tidal aliasing reduction.
Yago, Martín
2017-05-01
QC planning based on risk management concepts can reduce the probability of harming patients due to an undetected out-of-control error condition. It does this by selecting appropriate QC procedures to decrease the number of erroneous results reported. The selection can easily be made by using published nomograms for simple QC rules when the out-of-control condition results in increased systematic error. However, increases in random error also occur frequently and are difficult to detect, which can result in erroneously reported patient results. A statistical model was used to construct charts for the 1ks and X/χ2 rules. The charts relate the increase in the number of unacceptable patient results reported due to an increase in random error to the capability of the measurement procedure. They thus allow for QC planning based on the risk of patient harm due to the reporting of erroneous results. 1ks rules are simple, all-around rules. Their ability to deal with increases in within-run imprecision is minimally affected by the possible presence of significant, stable, between-run imprecision. X/χ2 rules perform better when the number of controls analyzed during each QC event is increased to improve QC performance. Using nomograms simplifies the selection of statistical QC procedures to limit the number of erroneous patient results reported due to an increase in analytical random error. The selection largely depends on the presence or absence of stable between-run imprecision. © 2017 American Association for Clinical Chemistry.
Paediatric in-patient prescribing errors in Malaysia: a cross-sectional multicentre study.
Khoo, Teik Beng; Tan, Jing Wen; Ng, Hoong Phak; Choo, Chong Ming; Bt Abdul Shukor, Intan Nor Chahaya; Teh, Siao Hean
2017-06-01
Background There is a lack of large comprehensive studies in developing countries on paediatric in-patient prescribing errors in different settings. Objectives To determine the characteristics of in-patient prescribing errors among paediatric patients. Setting General paediatric wards, neonatal intensive care units and paediatric intensive care units in government hospitals in Malaysia. Methods This is a cross-sectional multicentre study involving 17 participating hospitals. Drug charts were reviewed in each ward to identify the prescribing errors. All prescribing errors identified were further assessed for their potential clinical consequences, likely causes and contributing factors. Main outcome measures Incidence, types, potential clinical consequences, causes and contributing factors of the prescribing errors. Results The overall prescribing error rate was 9.2% of the 17,889 prescribed medications. There was no significant difference in the prescribing error rates between different types of hospitals or wards. The use of electronic prescribing had a higher prescribing error rate than manual prescribing (16.9% vs 8.2%, p < 0.05). Twenty-eight (1.7%) prescribing errors were deemed to have serious potential clinical consequences and 2 (0.1%) were judged to be potentially fatal. Most of the errors were attributed to human factors, i.e. performance or knowledge deficits. The most common contributing factors were lack of supervision and lack of knowledge. Conclusions Although electronic prescribing may potentially improve safety, it may conversely cause prescribing errors due to suboptimal interfaces and cumbersome work processes. Junior doctors need specific training in paediatric prescribing and close supervision to reduce prescribing errors in paediatric in-patients.
Retinal Image Quality During Accommodation
López-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Díaz-Muñoz, D.; Thibos, L.
2013-01-01
Purpose We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes the visual Strehl ratio. Results Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodation errors on visual acuity is mitigated by the pupillary constriction associated with accommodation and binocular convergence, and also by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of the increased depth-of-focus due to pupil constriction. When retinal image quality is close to the maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally significant accommodative errors indicating the need for therapeutic intervention. PMID:23786386
NASA Astrophysics Data System (ADS)
Hillman, B. R.; Marchand, R.; Ackerman, T. P.
2016-12-01
Satellite instrument simulators have emerged as a means to reduce errors in model evaluation by producing simulated or pseudo-retrievals from model fields, which account for limitations in the satellite retrieval process. Because of the mismatch in resolved scales between satellite retrievals and large-scale models, model cloud fields must first be downscaled to scales consistent with satellite retrievals. This downscaling is analogous to that required for model radiative transfer calculations. The assumption is often made in both model radiative transfer codes and satellite simulators that the unresolved clouds follow maximum-random overlap with horizontally homogeneous cloud condensate amounts. We examine errors in simulated MISR and CloudSat retrievals that arise due to these assumptions by applying the MISR and CloudSat simulators to cloud resolving model (CRM) output generated by the Super-parameterized Community Atmosphere Model (SP-CAM). Errors are quantified by comparing simulated retrievals performed directly on the CRM fields with those simulated by first averaging the CRM fields to approximately 2-degree resolution, applying a "subcolumn generator" to regenerate pseudo-resolved cloud and precipitation condensate fields, and then applying the MISR and CloudSat simulators on the regenerated condensate fields. We show that errors due to both assumptions of maximum-random overlap and homogeneous condensate are significant (relative to uncertainties in the observations and other simulator limitations). The treatment of precipitation is particularly problematic for CloudSat-simulated radar reflectivity. We introduce an improved subcolumn generator for use with the simulators, and show that these errors can be greatly reduced by replacing the maximum-random overlap assumption with the more realistic generalized overlap and incorporating a simple parameterization of subgrid-scale cloud and precipitation condensate heterogeneity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND NO. SAND2016-7485 A
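For readers unfamiliar with subcolumn generators, the sketch below illustrates the baseline assumption the abstract identifies as problematic: binary cloud subcolumns drawn from a profile of layer cloud fractions under maximum-random overlap with homogeneous condensate. It is a generic, textbook-style generator, not the SP-CAM or simulator code, and the profile values are invented for illustration.

```python
import numpy as np

def subcolumns_max_random(cloud_frac, n_sub=100, rng=None):
    """Generate binary cloud subcolumns (n_sub x n_lev) from a profile of layer
    cloud fractions under maximum-random overlap with homogeneous condensate:
    vertically contiguous cloudy layers overlap maximally, while layers
    separated by clear air overlap randomly."""
    rng = np.random.default_rng(rng)
    n_lev = len(cloud_frac)
    cloudy = np.zeros((n_sub, n_lev), dtype=bool)
    for i in range(n_sub):
        r = rng.random()                               # rank for the top layer
        for k in range(n_lev):
            if k > 0 and not cloudy[i, k - 1]:
                # clear above: draw a new rank restricted to the clear part,
                # giving random overlap across cloud-free gaps
                r = rng.random() * (1.0 - cloud_frac[k - 1])
            # else: keep the previous rank -> maximum overlap with the layer above
            cloudy[i, k] = r > 1.0 - cloud_frac[k]
    return cloudy

profile = np.array([0.0, 0.3, 0.3, 0.0, 0.5, 0.6])     # toy cloud-fraction profile
print(subcolumns_max_random(profile, n_sub=5, rng=0).astype(int))
```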
Linear and Order Statistics Combiners for Pattern Classification
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep; Lau, Sonie (Technical Monitor)
2001-01-01
Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that to a first order approximation, the error rate obtained over and above the Bayes error rate, is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the 'added' error. If N unbiased classifiers are combined by simple averaging. the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum and in general the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
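The key variance argument can be checked numerically. The snippet below is a small illustration of the statistics involved (not the chapter's derivation): averaging N unbiased, uncorrelated estimates of a decision boundary shrinks the boundary variance, and hence the "added" error proportional to it, by roughly a factor of N.

```python
import numpy as np

rng = np.random.default_rng(1)
true_boundary = 0.0
N, trials = 10, 100_000

# boundary estimate of a single classifier vs. the average of N independent ones
single = rng.normal(true_boundary, 1.0, size=trials)
combined = rng.normal(true_boundary, 1.0, size=(trials, N)).mean(axis=1)

print("var single   :", single.var())                  # ~1.0
print("var combined :", combined.var())                # ~1/N = 0.1
print("reduction    :", single.var() / combined.var()) # ~N
```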
Active full-shell grazing-incidence optics
NASA Astrophysics Data System (ADS)
Roche, Jacqueline M.; Elsner, Ronald F.; Ramsey, Brian D.; O'Dell, Stephen L.; Kolodziejczak, Jeffrey J.; Weisskopf, Martin C.; Gubarev, Mikhail V.
2016-09-01
MSFC has a long history of developing full-shell grazing-incidence x-ray optics for both narrow (pointed) and wide field (surveying) applications. The concept presented in this paper shows the potential to use active optics to switch between narrow and wide-field geometries, while maintaining large effective area and high angular resolution. In addition, active optics has the potential to reduce errors due to mounting and manufacturing lightweight optics. The design presented corrects low spatial frequency error and has significantly fewer actuators than other concepts presented thus far in the field of active x-ray optics. Using a finite element model, influence functions are calculated using active components on a full-shell grazing-incidence optic. Next, the ability of the active optic to effect a change of optical prescription and to correct for errors due to manufacturing and mounting is modeled.
Use of Earth's magnetic field for mitigating gyroscope errors regardless of magnetic perturbation.
Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard
2011-01-01
Most portable systems like smart-phones are equipped with low-cost consumer-grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements from these sensors are severely contaminated by errors caused by instrumentation and environmental issues, rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most research has been conducted on tackling and reducing the displacement errors, utilizing either Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth's magnetic field, even when perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth's magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in an urban canyon environment.
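The abstract does not give the detection criterion, but a quasi-static-field detector is commonly built as a sliding-window test on the stability of the magnetic field magnitude and direction; only samples that pass the test are handed to the attitude estimator as updates. The sketch below is such a generic detector with invented window size and thresholds, not the authors' algorithm.

```python
import numpy as np

def quasi_static_windows(mag, win=25, mag_tol=0.05, dir_tol_deg=2.0):
    """Flag samples where the magnetic field is locally quasi-static: the field
    magnitude and direction vary little over a sliding window. Only flagged
    samples would be used as EKF attitude / gyro-bias updates. Thresholds are
    illustrative, not the values used in the paper."""
    n = len(mag)
    ok = np.zeros(n, dtype=bool)
    norm = np.linalg.norm(mag, axis=1)
    unit = mag / norm[:, None]
    for i in range(win, n):
        m = norm[i - win:i]
        u = unit[i - win:i]
        mag_stable = (m.max() - m.min()) / m.mean() < mag_tol
        mean_dir = u.mean(axis=0)
        cosang = np.clip(u @ mean_dir / np.linalg.norm(mean_dir), -1.0, 1.0)
        dir_stable = np.degrees(np.arccos(cosang)).max() < dir_tol_deg
        ok[i] = mag_stable and dir_stable
    return ok

# tiny synthetic check: a slowly varying, nearly constant field is flagged quasi-static
t = np.linspace(0, 10, 500)
field = np.stack([20 + 0.1 * np.sin(t), 5 + 0.1 * np.cos(t), -40 + 0.05 * t], axis=1)
print(quasi_static_windows(field).sum(), "of", len(field), "samples flagged quasi-static")
```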
Comparison of two reconfigurable N×N interconnects for a recurrent neural network
NASA Astrophysics Data System (ADS)
Berger, Christoph; Collings, Neil; Pourzand, Ali R.; Volkel, Reinnard
1996-11-01
Two different methods of pattern replication (conventional and interlaced fan-out) have been investigated and experimentally tested in a reconfigurable 5×5 optical interconnect. Similar alignment problems due to imaging errors (field curvature) were observed in both systems. We conclude that of the two methods the interlaced fan-out is better suited to avoid these imaging errors, to reduce system size and to implement an optical feedback loop.
Cost effectiveness of ergonomic redesign of electronic motherboard.
Sen, Rabindra Nath; Yeow, Paul H P
2003-09-01
A case study was presented to illustrate the cost effectiveness of the ergonomic redesign of an electronic motherboard. The factory was running at a loss due to the high costs of rejects and poor quality and productivity. Subjective assessments and direct observations were made in the factory. Investigation revealed that, due to motherboard design errors, the machine had difficulty in placing integrated circuits onto the pads, the operators had much difficulty in manually soldering certain components, and much unproductive manual cleaning (MC) was required. Consequently, there were high rejects and occupational health and safety (OHS) problems, such as boredom and work discomfort. Also, much labour and machine cost was spent on repairs. The motherboard was redesigned to correct the design errors, to allow more components to be machine soldered and to reduce MC. This eliminated rejects, reduced repairs, saved US$581,495 per year and improved operators' OHS. The customer also saved US$142,105 per year on lost business.
NASA Technical Reports Server (NTRS)
Harwit, M.
1977-01-01
Sources of noise and error-correcting procedures characteristic of Hadamard transform optical systems were investigated. Reduction of spectral noise due to noise spikes in the data, the effect of random errors, the relative performance of Fourier and Hadamard transform spectrometers operated under identical detector-noise-limited conditions, and systematic means for dealing with mask defects are among the topics discussed. The distortion in Hadamard transform optical instruments caused by moving masks, incorrect mask alignment, missing measurements, and diffraction is analyzed, and techniques for reducing or eliminating this distortion are described.
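As a toy numerical illustration of the multiplexing idea (a generic S-matrix demo with invented sizes and noise levels, not Harwit's analysis), spectral elements can be measured in groups defined by an S-matrix derived from a Hadamard matrix and then recovered by solving the mask system. With detector-limited noise the recovered spectrum is less noisy than an element-by-element scan, while any single corrupted measurement (a noise spike) spreads across all recovered elements, which is one of the error sources discussed above.

```python
import numpy as np
from scipy.linalg import hadamard

N = 32                                   # Hadamard order (power of 2); S-matrix order is N-1
H = hadamard(N)
S = (1 - H[1:, 1:]) // 2                 # (N-1)x(N-1) matrix of 0/1 mask slots
rng = np.random.default_rng(0)
x = rng.random(N - 1)                    # "true" spectrum
sigma = 0.05                             # per-measurement detector noise

y_multiplex = S @ x + rng.normal(0, sigma, N - 1)   # multiplexed measurements
x_hat = np.linalg.solve(S, y_multiplex)             # decoded spectrum
x_direct = x + rng.normal(0, sigma, N - 1)          # element-by-element scan

print("rms error, direct scan :", np.sqrt(np.mean((x_direct - x) ** 2)))
print("rms error, Hadamard    :", np.sqrt(np.mean((x_hat - x) ** 2)))
```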
NASA Astrophysics Data System (ADS)
Bacha, Tulu
The Goddard Lidar Observatory for Wind (GLOW), a mobile direct-detection Doppler lidar based on molecular backscattering for the measurement of wind in the troposphere and lower stratosphere, was operated and its errors characterized. It was operated at the Howard University Beltsville Center for Climate Observation System (BCCOS) side by side with other operating instruments: the NASA/Langley Research Center Validation Lidar (VALIDAR), a Leosphere WLS70, and other standard wind-sensing instruments. The performance of GLOW is presented for various optical thicknesses of cloud, and it was also compared to VALIDAR under various conditions, including clear and cloudy sky regions. The performance degradation due to the presence of cirrus clouds is quantified by comparing the wind speed error to the cloud thickness. The cloud thickness is quantified in terms of the aerosol backscatter ratio (ASR) and cloud optical depth (COD), both determined from the Howard University Raman Lidar (HURL) operating at the same station as GLOW. The wind speed error of GLOW was correlated with COD and ASR; the correlation revealed a weak linear relationship. Finally, the wind speed measurements of GLOW were corrected using this quantitative relation. Using ASR reduced the GLOW wind error from 19% to 8% in a thin cirrus cloud and from 58% to 28% in a relatively thick cloud. After correcting for the cloud-induced error, the remaining error is due to shot noise and atmospheric variability. Shot-noise error, the statistical random error of the backscattered photons detected by the photomultiplier tube (PMT), can only be minimized by averaging a large number of recorded data. The atmospheric backscatter measured by GLOW along its line-of-sight direction is also used to analyze the error due to atmospheric variability within the measurement volume. GLOW scans in five different directions (vertical and at elevation angles of 45° to the north, south, east, and west) to generate wind profiles. The non-uniformity of the atmosphere across these scanning directions is a factor contributing to the measurement error of GLOW. The atmospheric variability in the scanning region leads to differences in the intensity of the backscattered signals between scanning directions. Taking the ratio of the north (east) to south (west) signals and comparing the statistical differences leads to a weak linear relation between atmospheric variability and line-of-sight wind speed differences. This relation was used to make a correction which reduced the error by about 50%.
Kienle, A; Patterson, M S
1997-09-01
We investigate theoretically the errors in determining the reduced scattering and absorption coefficients of semi-infinite turbid media from frequency-domain reflectance measurements made at small distances between the source and the detector(s). The errors are due to the uncertainties in the measurement of the phase, the modulation and the steady-state reflectance as well as to the diffusion approximation which is used as a theoretical model to describe light propagation in tissue. Configurations using one and two detectors are examined for the measurement of the phase and the modulation and for the measurement of the phase and the steady-state reflectance. Three solutions of the diffusion equation are investigated. We show that measurements of the phase and the steady-state reflectance at two different distances are best suited for the determination of the optical properties close to the source. For this arrangement the errors in the absorption coefficient due to typical uncertainties in the measurement are greater than those resulting from the application of the diffusion approximation at a modulation frequency of 200 MHz. A Monte Carlo approach is also examined; this avoids the errors due to the diffusion approximation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Viraganathan, H; Jiang, R; Chow, J
Purpose: We proposed a method to predict the change of the dose-volume histogram (DVH) for the PTV due to patient weight loss in prostate volumetric modulated arc therapy (VMAT). This method is based on a pre-calculated patient dataset and DVH curve fitting using the Gaussian error function (GEF). Methods: Pre-calculated dose-volume data from patients having weight loss in prostate VMAT were employed to predict the change of PTV coverage due to a reduced depth in the external contour. The effect of patient weight loss in treatment was described by a prostate dose-volume factor (PDVF), which was evaluated for the prostate PTV. Along with the PDVF, the GEF was used to fit the DVH curve for the PTV. To predict a new DVH due to weight loss, parameters of the GEF describing the shape of the DVH curve were determined. Since the parameters were related to the PDVF for the specific reduced depth, we could first predict the PDVF at a reduced depth based on the prostate size from the pre-calculated dataset. Parameters of the GEF could then be determined from the PDVF to plot the new DVH for the PTV corresponding to the reduced depth. Results: A MATLAB program was built based on the patient dataset with different prostate sizes. We input the prostate size and reduced depth of the patient into the program. The program then calculated the PDVF and DVH for the PTV considering the patient weight loss. The program was verified with different patient cases having various reduced depths. Conclusion: Our method can estimate the change of the DVH for the PTV due to patient weight loss quickly, without CT rescan and replanning. This would help the radiation staff to predict the change of PTV coverage when the patient's external contour is reduced in prostate VMAT.
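The core fitting step can be sketched as follows. This is a hedged illustration (not the authors' MATLAB code): a cumulative PTV DVH is described by a Gaussian-error-function model and its two parameters are obtained by least squares; in the proposed workflow these parameters would then be re-predicted from the PDVF for a given reduced depth and used to plot the new DVH. The dose values and parameters below are invented.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def gef_dvh(dose, d50, sigma):
    """Cumulative DVH (% volume receiving >= dose) modeled with the GEF."""
    return 50.0 * (1.0 - erf((dose - d50) / (np.sqrt(2.0) * sigma)))

# toy "measured" DVH points for a PTV prescribed to ~78 Gy
dose = np.linspace(70, 84, 30)
volume = gef_dvh(dose, 78.0, 1.2) + np.random.default_rng(0).normal(0, 0.5, dose.size)

(d50_fit, sigma_fit), _ = curve_fit(gef_dvh, dose, volume, p0=(78.0, 1.0))
print(f"fitted D50 = {d50_fit:.2f} Gy, sigma = {sigma_fit:.2f} Gy")
```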
Cross, Paul C.; Caillaud, Damien; Heisey, Dennis M.
2013-01-01
Many ecological and epidemiological studies occur in systems with mobile individuals and heterogeneous landscapes. Using a simulation model, we show that the accuracy of inferring an underlying biological process from observational data depends on movement and spatial scale of the analysis. As an example, we focused on estimating the relationship between host density and pathogen transmission. Observational data can result in highly biased inference about the underlying process when individuals move among sampling areas. Even without sampling error, the effect of host density on disease transmission is underestimated by approximately 50 % when one in ten hosts move among sampling areas per lifetime. Aggregating data across larger regions causes minimal bias when host movement is low, and results in less biased inference when movement rates are high. However, increasing data aggregation reduces the observed spatial variation, which would lead to the misperception that a spatially targeted control effort may not be very effective. In addition, averaging over the local heterogeneity will result in underestimating the importance of spatial covariates. Minimizing the bias due to movement is not just about choosing the best spatial scale for analysis, but also about reducing the error associated with using the sampling location as a proxy for an individual’s spatial history. This error associated with the exposure covariate can be reduced by choosing sampling regions with less movement, including longitudinal information of individuals’ movements, or reducing the window of exposure by using repeated sampling or younger individuals.
Assessment of Satellite Surface Radiation Products in Highland Regions with Tibet Instrumental Data
NASA Technical Reports Server (NTRS)
Yang, Kun; Koike, Toshio; Stackhouse, Paul; Mikovitz, Colleen
2006-01-01
This study presents results of comparisons between instrumental radiation data from the elevated Tibetan Plateau and two global satellite products: the Global Energy and Water Cycle Experiment - Surface Radiation Budget (GEWEX-SRB) and the International Satellite Cloud Climatology Project - Flux Data (ISCCP-FD). In general, shortwave radiation (SW) is estimated better by ISCCP-FD while longwave radiation (LW) is estimated better by GEWEX-SRB, but all the radiation components in both products are under-estimated. Severe and systematic errors were found in monthly-mean SRB SW (on plateau-average, -48 W/sq m for downward SW and -18 W/sq m for upward SW) and FD LW (on plateau-average, -37 W/sq m for downward LW and -62 W/sq m for upward LW). Errors in monthly-mean diurnal variations are even larger than the monthly-mean errors. Though the LW errors can be reduced by about 10 W/sq m after a correction for the altitude difference between the sites and the SRB and FD grids, these errors are still higher than those for other regions. The large errors in SRB SW were mainly due to a processing mistake in the treatment of the elevation effect, while the errors in SRB LW were mainly due to significant errors in the input data. We suggest reprocessing satellite surface radiation budget data, at least for highland areas like Tibet.
New architecture for dynamic frame-skipping transcoder.
Fung, Kai-Tat; Chan, Yui-Lam; Siu, Wan-Chi
2002-01-01
Transcoding is a key technique for reducing the bit rate of a previously compressed video signal. A high transcoding ratio may result in an unacceptable picture quality when the full frame rate of the incoming video bitstream is used. Frame skipping is often used as an efficient scheme to allocate more bits to the representative frames, so that an acceptable quality for each frame can be maintained. However, a skipped frame must still be decompressed completely, since it may act as a reference frame for the reconstruction of non-skipped frames. The newly quantized discrete cosine transform (DCT) coefficients of the prediction errors need to be re-computed for the non-skipped frame with reference to the previous non-skipped frame; this can create undesirable complexity as well as introduce re-encoding errors. In this paper, we propose new algorithms and a novel architecture for frame-rate reduction to improve picture quality and to reduce complexity. The proposed architecture operates mainly in the DCT domain to achieve a transcoder with low complexity. With the direct addition of DCT coefficients and an error compensation feedback loop, re-encoding errors are reduced significantly. Furthermore, we propose a frame-rate control scheme which can dynamically adjust the number of skipped frames according to the incoming motion vectors and the re-encoding errors due to transcoding, such that the decoded sequence has smooth motion as well as better transcoded pictures. Experimental results show that, compared to the conventional transcoder, the new frame-skipping transcoder architecture is more robust, produces fewer requantization errors, and has reduced computational complexity.
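The error-compensation idea can be sketched in a simplified, scalar form (our own assumptions, not the paper's exact architecture): whenever composed prediction-error DCT coefficients are requantized, the requantization error is stored and added back to the co-located coefficients of the next retained frame before that frame is requantized, so the error is not propagated along the prediction chain.

```python
import numpy as np

def requantize_with_feedback(residual_dct_frames, qstep):
    """residual_dct_frames: list of prediction-error DCT arrays, one per retained
    frame, already composed with respect to the previous retained frame.
    The requantization error of frame t is fed back into frame t+1 before it is
    requantized, limiting error accumulation (illustrative only)."""
    carried = np.zeros_like(residual_dct_frames[0], dtype=float)
    out = []
    for res in residual_dct_frames:
        compensated = res + carried
        quantized = np.round(compensated / qstep) * qstep
        carried = compensated - quantized     # error carried to the next frame
        out.append(quantized)
    return out

frames = [np.random.default_rng(i).normal(0, 10, (8, 8)) for i in range(3)]
requant = requantize_with_feedback(frames, qstep=4)
print([float(np.abs(f - q).max()) for f, q in zip(frames, requant)])
```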
NASA Astrophysics Data System (ADS)
Neulist, Joerg; Armbruster, Walter
2005-05-01
Model-based object recognition in range imagery typically involves matching the image data to the expected model data for each feasible model and pose hypothesis. Since the matching procedure is computationally expensive, the key to efficient object recognition is the reduction of the set of feasible hypotheses. This is particularly important for military vehicles, which may consist of several large moving parts such as the hull, turret, and gun of a tank, and hence require an eight- or higher-dimensional pose space to be searched. The presented paper outlines techniques for reducing the set of feasible hypotheses based on an estimation of target dimensions and orientation. Furthermore, the presence of a turret and a main gun and their orientations are determined. The vehicle part dimensions as well as their error estimates restrict the number of model hypotheses, whereas the position and orientation estimates and their error bounds reduce the number of pose hypotheses needing to be verified. The techniques are applied to several hundred laser radar images of eight different military vehicles with various part classifications and orientations. On-target resolution in azimuth, elevation and range is about 30 cm. The range images contain up to 20% dropouts due to atmospheric absorption. Additionally, some target retro-reflectors produce outliers due to signal crosstalk. The presented algorithms are extremely robust with respect to these and other error sources. The hypothesis space for hull orientation is reduced to about 5 degrees, as is the error for turret rotation and gun elevation, provided the main gun is visible.
Adaptive reduction of constitutive model-form error using a posteriori error estimation techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishop, Joseph E.; Brown, Judith Alice
In engineering practice, models are typically kept as simple as possible for ease of setup and use, computational efficiency, maintenance, and overall reduced complexity to achieve robustness. In solid mechanics, a simple and efficient constitutive model may be favored over one that is more predictive, but is difficult to parameterize, is computationally expensive, or is simply not available within a simulation tool. In order to quantify the modeling error due to the choice of a relatively simple and less predictive constitutive model, we adopt the use of a posteriori model-form error-estimation techniques. Based on local error indicators in the energy norm, an algorithm is developed for reducing the modeling error by spatially adapting the material parameters in the simpler constitutive model. The resulting material parameters are not material properties per se, but depend on the given boundary-value problem. As a first step to the more general nonlinear case, we focus here on linear elasticity in which the “complex” constitutive model is general anisotropic elasticity and the chosen simpler model is isotropic elasticity. As a result, the algorithm for adaptive error reduction is demonstrated using two examples: (1) a transversely-isotropic plate with a hole subjected to tension, and (2) a transversely-isotropic tube with two side holes subjected to torsion.
Bubalo, Joseph; Warden, Bruce A; Wiegel, Joshua J; Nishida, Tess; Handel, Evelyn; Svoboda, Leanne M; Nguyen, Lam; Edillo, P Neil
2014-12-01
Medical errors, in particular medication errors, continue to be a troublesome factor in the delivery of safe and effective patient care. Antineoplastic agents represent a group of medications highly susceptible to medication errors due to their complex regimens and narrow therapeutic indices. As the majority of these medication errors are frequently associated with breakdowns in poorly defined systems, developing technologies and evolving workflows seem to be a logical approach to providing added safeguards against medication errors. This article reviews both the pros and cons of today's technologies and their ability to simplify the medication use process, reduce medication errors, improve documentation, improve healthcare costs and increase provider efficiency as they relate to the use of antineoplastic therapy throughout the medication use process. Several technologies, mainly computerized provider order entry (CPOE), barcode medication administration (BCMA), smart pumps, the electronic medication administration record (eMAR), and telepharmacy, have been well described and proven to reduce medication errors, improve adherence to quality metrics, and/or improve healthcare costs in a broad scope of patients. The evidence for utilization of these technologies during antineoplastic therapy is weak at best and lacking for most. Specific to the antineoplastic medication use system, the only technology with data to adequately support a claim of reduced medication errors is CPOE. In addition to the benefits these technologies can provide, it is also important to recognize their potential to induce new types of errors and inefficiencies which can negatively impact patient care. The utilization of technology reduces but does not eliminate the potential for error. The evidence base to support technology in preventing medication errors is limited in general but even more deficient in the realm of antineoplastic therapy. Though CPOE has the best evidence to support its use in the antineoplastic population, benefit from many other technologies may have to be inferred based on data from other patient populations. As health systems begin to widely adopt and implement new technologies, it is important to critically assess their effectiveness in improving patient safety. © The Author(s) 2013 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Image stretching on a curved surface to improve satellite gridding
NASA Technical Reports Server (NTRS)
Ormsby, J. P.
1975-01-01
A method for substantially reducing gridding errors due to satellite roll, pitch and yaw is given. A gimbal-mounted curved screen, scaled to 1:7,500,000, is used to stretch the satellite image whereby visible landmarks coincide with a projected map outline. The resulting rms position errors averaged 10.7 km as compared with 25.6 and 34.9 km for two samples of satellite imagery upon which image stretching was not performed.
Optimization of 100-meter Green Bank Telescope
NASA Technical Reports Server (NTRS)
Strain, Douglas
1994-01-01
Candidate designs for NRAO's 100-m clear-aperture radio telescope were evaluated and optimized by JPL using JPL-developed structural optimization and analysis software. The weight of a non-optimum design was reduced from 9.4 million pounds to 9.2 million pounds. The half-pathlength error due to gravity deformations was reduced from 0.041-inch rms to 0.034-inch rms.
Multitasking simulation: Present application and future directions.
Adams, Traci Nicole; Rho, Jason C
2017-02-01
The Accreditation Council for Graduate Medical Education lists multi-tasking as a core competency in several medical specialties due to increasing demands on providers to manage the care of multiple patients simultaneously. Trainees often learn multitasking on the job without any formal curriculum, leading to high error rates. Multitasking simulation training has demonstrated success in reducing error rates among trainees. Studies of multitasking simulation demonstrate that this type of simulation is feasible, does not hinder the acquisition of procedural skill, and leads to better performance during subsequent periods of multitasking. Although some healthcare agencies have discouraged multitasking due to higher error rates among multitasking providers, it cannot be eliminated entirely in settings such as the emergency department in which providers care for more than one patient simultaneously. Simulation can help trainees to identify situations in which multitasking is inappropriate, while preparing them for situations in which multitasking is inevitable.
Computing in the presence of soft bit errors. [caused by single event upset on spacecraft
NASA Technical Reports Server (NTRS)
Rasmussen, R. D.
1984-01-01
It is shown that single-event upsets (SEUs) due to cosmic rays are a significant source of single-bit errors in spacecraft computers. The physical mechanism of SEU, electron-hole generation by means of Linear Energy Transfer (LET), is discussed with reference to the results of a study of the environmental effects on the computer systems of the Galileo spacecraft. Techniques for making software more tolerant of cosmic ray effects are considered, including: reducing the number of registers used by the software; continuity testing of variables; redundant execution of major procedures for error detection; and encoding state variables to detect single-bit changes. Attention is also given to design modifications which may reduce the cosmic ray exposure of on-board hardware. These modifications include: shielding components operating in LEO; removing low-power Schottky parts; and the use of CMOS diodes. The SEU parameters of different electronic components are listed in a table.
Davis, Edward T; Pagkalos, Joseph; Gallie, Price A M; Macgroarty, Kelly; Waddell, James P; Schemitsch, Emil H
2015-01-01
Optimal component alignment in total knee arthroplasty has been associated with better functional outcome as well as improved implant longevity. The ability to align components optimally during minimally invasive (MIS) total knee replacement (TKR) has been a cause of concern. Computer navigation is a useful aid in achieving the desired alignment, although it is limited by the error introduced during the manual registration of landmarks. Our study aims to compare the registration process error between a standard and an MIS surgical approach. We hypothesized that performing the registration process via an MIS approach would increase the registration process error. Five fresh frozen lower limbs were routinely prepared and draped. The registration process was performed through an MIS approach. This was then extended to the standard approach and the registration was performed again. Two surgeons performed the registration process five times with each approach. Performing the registration process through the MIS approach was not associated with higher error compared to the standard approach in the alignment parameters of interest. This rejects our hypothesis. Image-free navigated MIS TKR does not appear to carry a higher risk of component malalignment due to registration process error. Navigation can be used during MIS TKR to improve alignment without reduced accuracy due to the approach.
Effect of Pointing Error on the BER Performance of an Optical CDMA FSO Link with SIK Receiver
NASA Astrophysics Data System (ADS)
Nazrul Islam, A. K. M.; Majumder, S. P.
2017-12-01
An analytical approach is presented for an optical code division multiple access (OCDMA) system over a free space optical (FSO) channel considering the effect of pointing error between the transmitter and the receiver. The analysis is carried out with an optical sequence inverse keying (SIK) correlator receiver with intensity modulation and direct detection (IM/DD) to find the bit error rate (BER) in the presence of pointing error. The results are evaluated numerically in terms of the signal-to-noise plus multi-access interference (MAI) ratio, the BER and the power penalty due to pointing error. It is noticed that the OCDMA FSO system is highly affected by pointing error, with significant power penalties at BERs of 10^-6 and 10^-9. For example, the penalty at a BER of 10^-9 is found to be 9 dB, corresponding to a normalized pointing error of 1.4 for 16 users with a processing gain of 256, and is reduced to 6.9 dB when the processing gain is increased to 1,024.
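To see where such penalties come from, a back-of-the-envelope Gaussian-beam misalignment model can be used (this is a generic illustration with invented beam and aperture sizes, not the paper's analysis): the fraction of transmitted power collected by a circular receiver aperture is estimated for a given pointing offset and converted into a dB penalty.

```python
import numpy as np

def collected_fraction(offset, aperture_radius, beam_waist, n=200_000, seed=0):
    """Monte Carlo estimate of the fraction of a Gaussian beam's power falling
    inside a circular aperture displaced by `offset` (same units throughout).
    The intensity profile exp(-2 r^2 / w^2) corresponds to per-axis std = w/2."""
    rng = np.random.default_rng(seed)
    pts = rng.normal(0.0, beam_waist / 2.0, size=(n, 2))
    inside = (pts[:, 0] - offset) ** 2 + pts[:, 1] ** 2 <= aperture_radius ** 2
    return inside.mean()

f_aligned = collected_fraction(0.0, aperture_radius=1.0, beam_waist=2.0)
f_offset = collected_fraction(1.4, aperture_radius=1.0, beam_waist=2.0)   # illustrative offset
print("power penalty [dB]:", 10 * np.log10(f_aligned / f_offset))
```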
An error-based micro-sensor capture system for real-time motion estimation
NASA Astrophysics Data System (ADS)
Yang, Lin; Ye, Shiwei; Wang, Zhibo; Huang, Zhipei; Wu, Jiankang; Kong, Yongmei; Zhang, Li
2017-10-01
A wearable micro-sensor motion capture system with 16 IMUs and an error-compensating complementary filter algorithm for real-time motion estimation has been developed to acquire accurate 3D orientation and displacement in real-life activities. In the proposed filter algorithm, the gyroscope bias error, orientation error and magnetic disturbance error are estimated and compensated, significantly reducing the orientation estimation error due to sensor noise and drift. Displacement estimation, especially for activities such as jumping, has been a challenge in micro-sensor motion capture. An adaptive gait phase detection algorithm has been developed to accommodate accurate displacement estimation in different types of activities. The performance of the system is benchmarked against the results of a VICON optical capture system. The experimental results have demonstrated the effectiveness of the system in tracking daily activities, with estimation errors of 0.16 ± 0.06 m for normal walking and 0.13 ± 0.11 m for jumping motions. Research supported by the National Natural Science Foundation of China (Nos. 61431017, 81272166).
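The abstract does not spell out the filter, but the core idea of an error-compensating complementary filter can be sketched for a single tilt axis: the gyro-integrated angle is continuously pulled toward the accelerometer-derived gravity reference, which bounds the drift caused by gyroscope bias. The weights, axes and bias value below are illustrative assumptions, not the 16-IMU implementation.

```python
import numpy as np

def complementary_tilt(gyro_rate, acc, dt=0.01, alpha=0.98):
    """gyro_rate: angular rate about one axis [rad/s]; acc: (n, 2) array of the
    two accelerometer axes spanning the tilt plane. Returns the fused tilt [rad]."""
    theta = np.arctan2(acc[0, 0], acc[0, 1])
    out = []
    for w, a in zip(gyro_rate, acc):
        theta_acc = np.arctan2(a[0], a[1])                     # gravity reference
        theta = alpha * (theta + w * dt) + (1 - alpha) * theta_acc
        out.append(theta)
    return np.array(out)

# toy check: a biased gyro (0.02 rad/s) with the sensor actually at rest
n = 1000
rate = np.full(n, 0.02)
acc = np.tile([0.0, 1.0], (n, 1))
est = complementary_tilt(rate, acc)
print("drift with fusion [rad]:", est[-1], " pure integration [rad]:", 0.02 * n * 0.01)
```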
NASA Technical Reports Server (NTRS)
Ulvestad, J. S.
1989-01-01
Errors from a number of sources in astrometric very long baseline interferometry (VLBI) have been reduced in recent years through a variety of methods of calibration and modeling. Such reductions have led to a situation in which the extended structure of the natural radio sources used in VLBI is a significant error source in the effort to improve the accuracy of the radio reference frame. In the past, work has been done on individual radio sources to establish the magnitude of the errors caused by their particular structures. The results of calculations on 26 radio sources are reported in which an effort is made to determine the typical delay and delay-rate errors for a number of sources having different types of structure. It is found that for single observations of the types of radio sources present in astrometric catalogs, group-delay and phase-delay scatter in the 50 to 100 psec range due to source structure can be expected at 8.4 GHz on the intercontinental baselines available in the Deep Space Network (DSN). Delay-rate scatter of approx. 5 x 10^-15 sec/sec (or approx. 0.002 mm/sec) is also expected. If such errors mapped directly into source position errors, they would correspond to position uncertainties of approx. 2 to 5 nrad, similar to the best position determinations in the current JPL VLBI catalog. With the advent of wider bandwidth VLBI systems on the large DSN antennas, the system noise will be low enough so that the structure-induced errors will be a significant part of the error budget. Several possibilities for reducing the structure errors are discussed briefly, although it is likely that considerable effort will have to be devoted to the structure problem in order to reduce the typical error by a factor of two or more.
An approach to develop an algorithm to detect the climbing height in radial-axial ring rolling
NASA Astrophysics Data System (ADS)
Husmann, Simon; Hohmann, Magnus; Kuhlenkötter, Bernd
2017-10-01
Radial-axial ring rolling is the most widely used forming process for producing seamless rings, which are applied in various industries such as the energy sector, aerospace technology and the automotive industry. Due to the simultaneous forming in two opposite rolling gaps and the fact that ring rolling is a bulk forming process, different errors can occur during the rolling process. Ring climbing is one of the most frequently occurring process errors, leading to a distortion of the ring's cross-section and a deformation of the ring's geometry. The conventional sensors of a radial-axial rolling machine cannot detect this error. Therefore, a common strategy is to roll a slightly bigger ring, so that randomly occurring process errors can be reduced afterwards by removing the additional material. The LPS installed an image processing system at the radial rolling gap of its ring rolling machine to enable the recognition and measurement of climbing rings and, by this, to reduce the additional material. This paper presents the algorithm which enables the image processing system to detect the error of a climbing ring and ensures comparably reliable results for the measurement of the climbing height of the rings.
Dealing with systematic laser scanner errors due to misalignment at area-based deformation analyses
NASA Astrophysics Data System (ADS)
Holst, Christoph; Medić, Tomislav; Kuhlmann, Heiner
2018-04-01
The ability to acquire rapid, dense and high-quality 3D data has made terrestrial laser scanners (TLS) a desirable instrument for tasks demanding high geometrical accuracy, such as geodetic deformation analyses. However, TLS measurements are influenced by systematic errors due to internal misalignments of the instrument. The resulting errors in the point cloud might exceed the magnitude of random errors. Hence, it is important to ensure that the deformation analysis is not biased by these influences. In this study, we propose and evaluate several strategies for reducing the effect of TLS misalignments on deformation analyses. The strategies are based on the bundled in-situ self-calibration and on the exploitation of two-face measurements. The strategies are verified by analyzing the deformation of the main reflector of the Onsala Space Observatory's radio telescope. It is demonstrated that both two-face measurements and in-situ calibration of the laser scanner in a bundle adjustment improve the results of deformation analysis. The best solution is achieved by a combination of both strategies.
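Two-face exploitation relies on the fact that several misalignment errors change sign between the two faces of the instrument, so averaging corresponding face-one and face-two observations largely cancels them. The sketch below is a generic two-face reduction under the usual assumption that face two observes hz + π and 2π − v; it is not the authors' bundle-adjustment code.

```python
import numpy as np

def two_face_mean(hz1, v1, hz2, v2):
    """Average face-one and face-two horizontal directions and vertical angles
    (radians); sign-reversing misalignment errors largely cancel in the mean."""
    hz2_mapped = np.mod(hz2 - np.pi, 2.0 * np.pi)
    v2_mapped = 2.0 * np.pi - v2
    # circular mean of the two horizontal directions avoids wrap-around issues
    hz = np.angle(np.exp(1j * hz1) + np.exp(1j * hz2_mapped)) % (2.0 * np.pi)
    v = 0.5 * (v1 + v2_mapped)
    return hz, v

# toy check: a collimation-type error eps shifts face one by +eps and face two by -eps
eps = np.radians(0.01)
hz, v = two_face_mean(1.0 + eps, 1.2,
                      np.mod(1.0 - eps + np.pi, 2 * np.pi), 2 * np.pi - 1.2)
print(np.degrees(hz - 1.0), np.degrees(v - 1.2))    # both near zero
```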
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weston, Louise Marie
2007-09-01
A recent report on criticality accidents in nuclear facilities indicates that human error played a major role in a significant number of incidents with serious consequences and that some of these human errors may be related to the emotional state of the individual. A pre-shift test to detect a deleterious emotional state could reduce the occurrence of such errors in critical operations. The effectiveness of pre-shift testing is a challenge because of the need to gather predictive data in a relatively short test period and the potential occurrence of learning effects due to a requirement for frequent testing. This report reviews the different types of reliability and validity methods and testing and statistical analysis procedures to validate measures of emotional state. The ultimate value of a validation study depends upon the percentage of human errors in critical operations that are due to the emotional state of the individual. A review of the literature to identify the most promising predictors of emotional state for this application is highly recommended.
NASA Astrophysics Data System (ADS)
Chaves-Montero, Jonás; Angulo, Raúl E.; Hernández-Monteagudo, Carlos
2018-07-01
In the upcoming era of high-precision galaxy surveys, it becomes necessary to understand the impact of redshift uncertainties on cosmological observables. In this paper we explore the effect of sub-percent photometric redshift errors (photo-z errors) on galaxy clustering and baryonic acoustic oscillations (BAOs). Using analytic expressions and results from 1000 N-body simulations, we show how photo-z errors modify the amplitude of moments of the 2D power spectrum, their variances, the amplitude of BAOs, and the cosmological information in them. We find that (a) photo-z errors suppress the clustering on small scales, increasing the relative importance of shot noise, and thus reducing the interval of scales available for BAO analyses; (b) photo-z errors decrease the smearing of BAOs due to non-linear redshift-space distortions (RSDs) by giving less weight to line-of-sight modes; and (c) photo-z errors (and small-scale RSD) induce a scale dependence on the information encoded in the BAO scale, and that reduces the constraining power on the Hubble parameter. Using these findings, we propose a template that extracts unbiased cosmological information from samples with photo-z errors with respect to cases without them. Finally, we provide analytic expressions to forecast the precision in measuring the BAO scale, showing that spectro-photometric surveys will measure the expansion history of the Universe with a precision competitive to that of spectroscopic surveys.
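A rough way to see why photo-z errors give less weight to line-of-sight modes is the standard Gaussian-smearing picture, in which a redshift scatter σ_z maps to a comoving line-of-sight smearing σ_r = cσ_z/H(z) and damps the observed power as exp[-(kμσ_r)²]. The short sketch below uses this textbook model with illustrative cosmological parameters; it is not the paper's full treatment.

```python
import numpy as np

C_KMS = 299792.458

def E(z, om=0.31):                            # dimensionless Hubble rate, flat LCDM
    return np.sqrt(om * (1.0 + z) ** 3 + 1.0 - om)

z = 1.0
sigma_z = 0.003 * (1.0 + z)                   # sub-percent photo-z scatter
sigma_r = C_KMS * sigma_z / (100.0 * E(z))    # line-of-sight smearing in Mpc/h

for k in (0.05, 0.10, 0.20):                  # h/Mpc
    for mu in (0.2, 0.8):
        damping = np.exp(-(k * mu * sigma_r) ** 2)
        print(f"k={k:4.2f} h/Mpc  mu={mu:3.1f}  P_obs/P = {damping:5.3f}")
```

Even this crude estimate shows the effect described above: high-μ (line-of-sight) modes at small scales are strongly suppressed, which shrinks the range of scales usable for BAO analyses.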
Error reporting in transfusion medicine at a tertiary care centre: a patient safety initiative.
Elhence, Priti; Shenoy, Veena; Verma, Anupam; Sachan, Deepti
2012-11-01
Errors in the transfusion process can compromise patient safety. A study was undertaken at our center to identify the errors in the transfusion process and their causes, in order to reduce their occurrence through corrective and preventive actions. All near-miss, no-harm and adverse events reported in the transfusion process during the 1-year study period were recorded, classified and analyzed at a tertiary care teaching hospital in North India. In total, 285 transfusion-related events were reported during the study period. Of these, there were four adverse (1.5%), 10 no-harm (3.5%) and 271 (95%) near-miss events. The incorrect blood component transfusion rate was 1 in 6,031 component units. The ABO-incompatible transfusion rate was 1 in 15,077 component units issued (or 1 in 26,200 PRBC units issued), and the rate of acute hemolytic transfusion reactions due to ABO-incompatible transfusion was 1 in 60,309 component units issued. Fifty-three percent of the antecedent near-miss events were bedside events. Patient sample handling errors were the single largest category of errors (n=94, 33%), followed by errors in labeling and in blood component handling and storage in user areas. The actual and near-miss event data obtained through this initiative provided us with clear evidence about latent defects and critical points in the transfusion process, so that corrective and preventive actions could be taken to reduce errors and improve transfusion safety.
Correction of Pelvic Tilt and Pelvic Rotation in Cup Measurement after THA - An Experimental Study.
Schwarz, Timo Julian; Weber, Markus; Dornia, Christian; Worlicek, Michael; Renkawitz, Tobias; Grifka, Joachim; Craiovan, Benjamin
2017-09-01
Purpose Accurate assessment of cup orientation on postoperative pelvic radiographs is essential for evaluating outcome after THA. Here, we present a novel method for correcting measurement inaccuracies due to pelvic tilt and rotation. Method In an experimental setting, a cup was implanted into a dummy pelvis, and its final position was verified via CT. To show the effect of pelvic tilt and rotation on cup position, the dummy was fixed to a rack to achieve a tilt between +15° anterior and -15° posterior and 0° to 20° rotation to the contralateral side. According to Murray's definitions of anteversion and inclination, we created a novel corrective procedure to measure cup position in the pelvic reference frame (anterior pelvic plane) to compensate for measurement errors due to pelvic tilt and rotation. Results The cup anteversion measured on CT was 23.3°; on AP pelvic radiographs, however, variations in pelvic tilt (±15°) resulted in anteversion angles between 11.0° and 36.2° (mean error 8.3° ± 3.9°). The cup inclination was 34.1° on CT and ranged between 31.0° and 38.7° (mean error 2.3° ± 1.5°) on radiographs. Pelvic rotation between 0° and 20° showed high variation in radiographic anteversion (21.2°-31.2°, mean error 6.0° ± 3.1°) and inclination (34.1°-27.2°, mean error 3.4° ± 2.5°). Our novel correction algorithm for pelvic tilt reduced the mean error in anteversion measurements to 0.6° ± 0.2° and in inclination measurements to 0.7° (SD ± 0.2°). Similarly, the mean error due to pelvic rotation was reduced to 0.4° ± 0.4° for anteversion and to 1.3° ± 0.8° for inclination. Conclusion Pelvic tilt and pelvic rotation may lead to misinterpretation of cup position on anteroposterior pelvic radiographs. Mathematical correction concepts have the potential to significantly reduce these errors, and could be implemented in future radiological software tools. Key Points · Pelvic tilt and rotation influence cup orientation after THA. · Cup anteversion and inclination should be referenced to the pelvis. · Radiological measurement errors of cup position may be reduced by mathematical concepts. Citation Format · Schwarz TJ, Weber M, Dornia C et al. Correction of Pelvic Tilt and Pelvic Rotation in Cup Measurement after THA - An Experimental Study. Fortschr Röntgenstr 2017; 189: 864 - 873. © Georg Thieme Verlag KG Stuttgart · New York.
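To make the geometry concrete, a correction of Murray's radiographic angles for a known pelvic tilt can be sketched as a simple rotation of the measured cup axis about the transverse axis. This is our own illustrative construction; the coordinate conventions, the sign of the tilt rotation and the example values are assumptions, not the authors' algorithm.

```python
import numpy as np

def axis_from_angles(inclination_deg, anteversion_deg):
    """Cup axis for Murray's radiographic angles in a frame with
    x = lateral, y = anterior, z = superior."""
    ri, ra = np.radians([inclination_deg, anteversion_deg])
    return np.array([np.cos(ra) * np.sin(ri), np.sin(ra), np.cos(ra) * np.cos(ri)])

def angles_from_axis(n):
    ra = np.degrees(np.arcsin(n[1]))            # anteversion: angle to the coronal plane
    ri = np.degrees(np.arctan2(n[0], n[2]))     # inclination: coronal projection vs. long axis
    return ri, ra

def correct_for_tilt(ri_measured, ra_measured, tilt_deg):
    """Rotate the measured cup axis back about the transverse (x) axis by the
    pelvic tilt; the sign convention must be checked against the imaging setup."""
    t = np.radians(tilt_deg)
    rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(t), -np.sin(t)],
                   [0.0, np.sin(t),  np.cos(t)]])
    return angles_from_axis(rx @ axis_from_angles(ri_measured, ra_measured))

print(correct_for_tilt(36.0, 30.0, -15.0))      # illustrative values only
```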
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Heng, E-mail: hengli@mdanderson.org; Zhu, X. Ronald; Zhang, Xiaodong
Purpose: To develop and validate a novel delivery strategy for reducing the respiratory motion–induced dose uncertainty of spot-scanning proton therapy. Methods and Materials: The spot delivery sequence was optimized to reduce dose uncertainty. The effectiveness of the delivery sequence optimization was evaluated using measurements and patient simulation. One hundred ninety-one 2-dimensional measurements using different delivery sequences of a single-layer uniform pattern were obtained with a detector array on a 1-dimensional moving platform. Intensity modulated proton therapy plans were generated for 10 lung cancer patients, and dose uncertainties for different delivery sequences were evaluated by simulation. Results: Without delivery sequence optimization, the maximum absolute dose error can be up to 97.2% in a single measurement, whereas the optimized delivery sequence results in a maximum absolute dose error of ≤11.8%. In patient simulation, the optimized delivery sequence reduces the mean of fractional maximum absolute dose error compared with the regular delivery sequence by 3.3% to 10.6% (32.5-68.0% relative reduction) for different patients. Conclusions: Optimizing the delivery sequence can reduce dose uncertainty due to respiratory motion in spot-scanning proton therapy, assuming the 4-dimensional CT is a true representation of the patients' breathing patterns.
Nucleonic coal detector with independent, hydropneumatic suspension
NASA Technical Reports Server (NTRS)
Jones, E. W.; Handy, K.
1977-01-01
The design of a nucleonic, coal interface detector which measures the depth of coal on the roof and floor of a coal mine is presented. The nucleonic source and the nucleonic detector are on independent hydropneumatic suspensions to reduce the measurement errors due to air gap.
Comparisons of single event vulnerability of GaAs SRAMS
NASA Astrophysics Data System (ADS)
Weatherford, T. R.; Hauser, J. R.; Diehl, S. E.
1986-12-01
A GaAs MESFET/JFET model incorporated into SPICE has been used to accurately describe C-EJFET, E/D MESFET and D MESFET/resistor GaAs memory technologies. These cells have been evaluated for critical charges due to gate-to-drain and drain-to-source charge collection. Low gate-to-drain critical charges limit conventional GaAs SRAM soft error rates to approximately 1E-6 errors/bit-day. SEU hardening approaches including decoupling resistors, diodes, and FETs have been investigated. Results predict GaAs RAM cell critical charges can be increased to over 0.1 pC. Soft error rates in such hardened memories may approach 1E-7 errors/bit-day without significantly reducing memory speed. Tradeoffs between hardening level, performance and fabrication complexity are discussed.
Cryptographic robustness of a quantum cryptography system using phase-time coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N.
2008-01-15
A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In the absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.
The design and analysis of single flank transmission error tester for loaded gears
NASA Technical Reports Server (NTRS)
Houser, D. R.; Bassett, D. E.
1985-01-01
Due to geometrical imperfections in gears and finite tooth stiffnesses, the motion transmitted from an input gear shaft to an output gear shaft will not have conjugate action. In order to strengthen the understanding of transmission error and to verify mathematical models of gear transmission error, a test stand that will measure the transmission error of a gear pair at operating loads, but at reduced speeds would be desirable. This document describes the design and development of a loaded transmission error tester. For a gear box with a gear ratio of one, few tooth meshing combinations will occur during a single test. In order to observe the effects of different tooth mesh combinations and to increase the ability to load test gear pairs with higher gear ratios, the system was designed around a gear box with a gear ratio of two.
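For readers unfamiliar with the quantities involved, the short sketch below uses standard gear-metrology definitions (not the test stand's own software): single-flank transmission error compares the measured output rotation with the ideal conjugate rotation, and the number of distinct tooth-pair contacts before the meshing pattern repeats is lcm(N_pinion, N_gear), which is why a ratio-two gear box samples more combinations than a ratio-one box.

```python
import math

# Standard definitions (illustrative, requires Python 3.9+ for math.lcm):
# single-flank TE = actual output rotation minus ideal conjugate rotation.

def transmission_error(theta_in_deg, theta_out_deg, ratio):
    """TE expressed at the output shaft, in degrees (ideal output = input / ratio)."""
    return theta_out_deg - theta_in_deg / ratio

def distinct_tooth_pairs(n_pinion, n_gear):
    """Unique pinion/gear tooth combinations before the meshing pattern repeats."""
    return math.lcm(n_pinion, n_gear)

print(transmission_error(720.0, 359.95, 2.0))   # -0.05 deg of TE at the output
print(distinct_tooth_pairs(30, 30))             # ratio 1: only 30 tooth pairs ever mesh
print(distinct_tooth_pairs(30, 60))             # ratio 2: 60 pairs
print(distinct_tooth_pairs(29, 60))             # hunting-tooth design: 1740 pairs
```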
Robust Linear Models for Cis-eQTL Analysis.
Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C
2015-01-01
Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly with respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
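A minimal sketch of the comparison described above, on simulated data rather than the authors' pipeline, fits a single gene's expression on allelic dosage with ordinary least squares and with a robust Huber M-estimator (statsmodels' RLM), which down-weights heavy-tailed residuals and outliers:

```python
import numpy as np
import statsmodels.api as sm

# Simulated single-gene eQTL with heavy-tailed (Student-t) noise; compare the
# dosage effect estimated by OLS and by a robust linear model (Huber M-estimator).
rng = np.random.default_rng(0)
n = 500
dosage = rng.integers(0, 3, size=n)        # 0/1/2 copies of the minor allele
covariate = rng.normal(size=n)             # e.g. a technical covariate
noise = rng.standard_t(df=3, size=n)       # heavy-tailed, non-Gaussian noise
expression = 0.3 * dosage + 0.5 * covariate + noise

X = sm.add_constant(np.column_stack([dosage, covariate]))
ols_fit = sm.OLS(expression, X).fit()
rlm_fit = sm.RLM(expression, X, M=sm.robust.norms.HuberT()).fit()

print("OLS dosage effect:", ols_fit.params[1], "+/-", ols_fit.bse[1])
print("RLM dosage effect:", rlm_fit.params[1], "+/-", rlm_fit.bse[1])
```

Under the heavy-tailed noise assumed here, the robust fit typically gives a smaller standard error for the dosage effect, which is the mechanism behind the reduced type II error rate the abstract describes.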
Yang, Jie; Liu, Qingquan; Dai, Wei; Ding, Renhui
2016-08-01
Due to the solar radiation effect, current air temperature sensors inside a thermometer screen or radiation shield may produce measurement errors that are 0.8 °C or higher. To improve the observation accuracy, an aspirated temperature measurement platform is designed. A computational fluid dynamics (CFD) method is implemented to analyze and calculate the radiation error of the aspirated temperature measurement platform under various environmental conditions. Then, a radiation error correction equation is obtained by fitting the CFD results using a genetic algorithm (GA) method. In order to verify the performance of the temperature sensor, the aspirated temperature measurement platform, temperature sensors with a naturally ventilated radiation shield, and a thermometer screen are characterized in the same environment to conduct the intercomparison. The average radiation errors of the sensors in the naturally ventilated radiation shield and the thermometer screen are 0.44 °C and 0.25 °C, respectively. In contrast, the radiation error of the aspirated temperature measurement platform is as low as 0.05 °C. This aspirated temperature sensor reduces the radiation error by approximately 88.6% compared to the naturally ventilated radiation shield, and by approximately 80% compared to the thermometer screen. The mean absolute error and root mean square error between the correction equation and experimental results are 0.032 °C and 0.036 °C, respectively, which demonstrates the accuracy of the CFD and GA methods proposed in this research.
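The fitting step can be sketched as follows; the functional form, the synthetic "CFD" data and the coefficient values are placeholders rather than the paper's results, and scipy's differential evolution stands in for the genetic algorithm:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hedged sketch: fit an empirical radiation-error correction equation
# err(S, v) = a*S / (1 + b*v) to synthetic "CFD" results, where S is solar
# irradiance [W/m^2] and v is airflow speed [m/s]. Form and values are assumed.
rng = np.random.default_rng(1)
S = rng.uniform(100, 1000, 200)
v = rng.uniform(0.5, 6.0, 200)
true_err = 1.2e-3 * S / (1.0 + 0.8 * v)             # stand-in for CFD radiation error [deg C]
obs = true_err + rng.normal(0, 0.005, 200)

def sse(params):
    a, b = params
    model = a * S / (1.0 + b * v)
    return np.sum((model - obs) ** 2)

result = differential_evolution(sse, bounds=[(0, 1e-2), (0, 5)], seed=1)
a_fit, b_fit = result.x
residual = a_fit * S / (1.0 + b_fit * v) - obs
print("fitted a, b:", a_fit, b_fit)
print("MAE:", np.mean(np.abs(residual)), "RMSE:", np.sqrt(np.mean(residual**2)))
```

The mean absolute error and root mean square error printed at the end correspond to the accuracy metrics quoted in the abstract, though here they only describe the synthetic fit.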
Restrictions on surgical resident shift length do not impact type of medical errors.
Anderson, Jamie E; Goodman, Laura F; Jensen, Guy W; Salcedo, Edgardo S; Galante, Joseph M
2017-05-15
In 2011, resident duty hours were restricted in an attempt to improve patient safety and resident education. Although shorter shifts aim to reduce fatigue, they lead to more patient handoffs, raising concerns about adverse effects on patient safety. This study seeks to determine whether differences in duty-hour restrictions influence types of errors made by residents. This is a nested retrospective cohort study at a surgery department in an academic medical center. During 2013-14, standard 2011 duty hours were in place for residents. In 2014-15, duty-hour restrictions at the study site were relaxed ("flexible") with no restrictions on shift length. We reviewed all morbidity and mortality submissions from July 1, 2013-June 30, 2015 and compared differences in types of errors between these periods. A total of 383 patients experienced adverse events, including 59 deaths (15.4%). Comparing standard versus flexible periods, there was no difference in mortality (15.7% versus 12.6%, P = 0.479) or complication rates (2.6% versus 2.5%, P = 0.696). There was no difference in types of errors between periods (P = 0.050-0.808). The largest number of errors was due to cognitive failures (229, 59.6%), whereas the fewest were due to team failure (127, 33.2%). By subset, technical errors accounted for the highest number of errors (169, 44.1%). There were no differences in types of errors for cases that were nonelective, occurred at night, or involved residents. Among adverse events reported in this departmental surgical morbidity and mortality, there were no differences in types of errors when resident duty hours were less restrictive. Copyright © 2017 Elsevier Inc. All rights reserved.
Temperature and pressure effects on capacitance probe cryogenic liquid level measurement accuracy
NASA Technical Reports Server (NTRS)
Edwards, Lawrence G.; Haberbusch, Mark
1993-01-01
The inaccuracies of liquid nitrogen and liquid hydrogen level measurements by use of a coaxial capacitance probe were investigated as a function of fluid temperatures and pressures. Significant liquid level measurement errors were found to occur due to the changes in the fluids dielectric constants which develop over the operating temperature and pressure ranges of the cryogenic storage tanks. The level measurement inaccuracies can be reduced by using fluid dielectric correction factors based on measured fluid temperatures and pressures. The errors in the corrected liquid level measurements were estimated based on the reported calibration errors of the temperature and pressure measurement systems. Experimental liquid nitrogen (LN2) and liquid hydrogen (LH2) level measurements were obtained using the calibrated capacitance probe equations and also by the dielectric constant correction factor method. The liquid levels obtained by the capacitance probe for the two methods were compared with the liquid level estimated from the fluid temperature profiles. Results show that the dielectric constant corrected liquid levels agreed within 0.5 percent of the temperature profile estimated liquid level. The uncorrected dielectric constant capacitance liquid level measurements deviated from the temperature profile level by more than 5 percent. This paper identifies the magnitude of liquid level measurement error that can occur for LN2 and LH2 fluids due to temperature and pressure effects on the dielectric constants over the tank storage conditions from 5 to 40 psia. A method of reducing the level measurement errors by using dielectric constant correction factors based on fluid temperature and pressure measurements is derived. The improved accuracy by use of the correction factors is experimentally verified by comparing liquid levels derived from fluid temperature profiles.
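A simplified sketch of the correction idea (idealised coaxial geometry and a placeholder dielectric correlation, not the flight calibration) shows how the inferred level depends on the fluid-state-dependent permittivity of the liquid:

```python
import numpy as np

# For a coaxial probe, the measured capacitance is a length-weighted mix of the
# liquid and vapor dielectric constants, so the inferred level depends on the
# liquid permittivity at the actual temperature and pressure. All numbers and the
# linear eps(T) correlation below are illustrative placeholders.

C0 = 100.0          # pF, capacitance of the fully dry probe (assumed)

def eps_liquid_LN2(T_kelvin):
    # Placeholder linear correlation around 77 K; a real correction would use
    # measured T and P with a fluid-property database.
    return 1.434 - 0.004 * (T_kelvin - 77.0)

def level_fraction(C_meas, eps_liq, eps_vap=1.0):
    """Fraction of the probe length covered by liquid."""
    return (C_meas / C0 - eps_vap) / (eps_liq - eps_vap)

C_meas = 125.0                                     # pF, example reading
print("uncorrected (calibration eps = 1.434):", level_fraction(C_meas, 1.434))
print("corrected   (warm liquid, T = 85 K):  ", level_fraction(C_meas, eps_liquid_LN2(85.0)))
```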
Error recovery in shared memory multiprocessors using private caches
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.
1990-01-01
The problem of recovering from processor transient faults in shared memory multiprocesses systems is examined. A user-transparent checkpointing and recovery scheme using private caches is presented. Processes can recover from errors due to faulty processors by restarting from the checkpointed computation state. Implementation techniques using checkpoint identifiers and recovery stacks are examined as a means of reducing performance degradation in processor utilization during normal execution. This cache-based checkpointing technique prevents rollback propagation, provides rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions to take error latency into account are presented.
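The recovery mechanism can be caricatured in a few lines; here in-memory Python objects stand in for the private-cache hardware described above, so this is a conceptual sketch only:

```python
import copy

# Conceptual sketch: computation state is checkpointed with an identifier, and a
# detected transient fault triggers a rollback to the last committed checkpoint
# instead of letting the error propagate.

class CheckpointedProcess:
    def __init__(self, state):
        self.state = state
        self.checkpoint_id = 0
        self._saved = copy.deepcopy(state)

    def checkpoint(self):
        """Commit the current state (hardware would write back dirty cache lines)."""
        self.checkpoint_id += 1
        self._saved = copy.deepcopy(self.state)

    def rollback(self):
        """Restart from the last checkpointed computation state."""
        self.state = copy.deepcopy(self._saved)
        return self.checkpoint_id

p = CheckpointedProcess({"x": 0})
p.state["x"] = 42
p.checkpoint()                 # checkpoint 1 holds x = 42
p.state["x"] = -1              # work corrupted by a (simulated) transient fault
print("resumed from checkpoint", p.rollback(), "state:", p.state)
```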
Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E
2011-06-22
Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For CO modelled error amount, a range of error types were simulated and effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate direction and magnitude of the effects of error over a range of error types.
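A simplified version of that simulation design, with a synthetic series instead of the Atlanta data, looks like the following: multiplicative error is added on the log scale, the series is exponentiated, and a Poisson GLM of daily counts is refit to see how the risk ratio moves.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for the time-series design: classical-type multiplicative
# error added to a reference pollutant series on the log scale attenuates the
# risk ratio from a Poisson GLM of daily ED visit counts.
rng = np.random.default_rng(0)
days = 1000
true_log_pollutant = rng.normal(3.0, 0.3, days)                   # reference series (log scale)
counts = rng.poisson(np.exp(2.0 + 0.05 * np.exp(true_log_pollutant) / 10))

def fit_rr(log_series):
    X = sm.add_constant(np.exp(log_series) / 10)                  # per 10-unit increment
    fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    return np.exp(fit.params[1])                                  # risk ratio per 10 units

sigma_err = 0.2                                                   # amount of added error
classical = true_log_pollutant + rng.normal(0, sigma_err, days)   # classical-type error
print("RR, error-free series:   ", fit_rr(true_log_pollutant))
print("RR, classical-type error:", fit_rr(classical))             # biased toward the null
```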
Linear quadratic Gaussian and feedforward controllers for the DSS-13 antenna
NASA Technical Reports Server (NTRS)
Gawronski, W. K.; Racho, C. S.; Mellstrom, J. A.
1994-01-01
The controller development and the tracking performance evaluation for the DSS-13 antenna are presented. A trajectory preprocessor, linear quadratic Gaussian (LQG) controller, feedforward controller, and their combination were designed, built, analyzed, and tested. The antenna exhibits nonlinear behavior when the input to the antenna and/or the derivative of this input exceeds the imposed limits; for slewing and acquisition commands, these limits are typically violated. A trajectory preprocessor was designed to ensure that the antenna behaves linearly, just to prevent nonlinear limit cycling. The estimator model for the LQG controller was identified from the data obtained from the field test. Based on an LQG balanced representation, a reduced-order LQG controller was obtained. The feedforward controller and the combination of the LQG and feedforward controller were also investigated. The performance of the controllers was evaluated with the tracking errors (due to following a trajectory) and the disturbance errors (due to the disturbances acting on the antenna). The LQG controller has good disturbance rejection properties and satisfactory tracking errors. The feedforward controller has small tracking errors but poor disturbance rejection properties. The combined LQG and feedforward controller exhibits small tracking errors as well as good disturbance rejection properties. However, the cost for this performance is the complexity of the controller.
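For readers unfamiliar with the LQG machinery, the sketch below computes a discrete LQR gain for a toy double-integrator model of one antenna axis; the model and weights are illustrative, not the identified DSS-13 dynamics, and a full LQG controller would pair this state-feedback gain with a Kalman estimator.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy single-axis model: state is [pointing error, error rate]; the LQR gain
# trades tracking error against control effort via the discrete Riccati equation.
dt = 0.02
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.0],
              [dt]])             # torque-like input
Q = np.diag([100.0, 1.0])        # penalize pointing error strongly
R = np.array([[0.1]])            # penalize control effort

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u = -K x

x = np.array([[0.5], [0.0]])     # 0.5-unit initial pointing error
for _ in range(200):
    u = -K @ x
    x = A @ x + B @ u
print("residual pointing error after 4 s:", x[0, 0])
```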
Inui, Hiroshi; Taketomi, Shuji; Tahara, Keitarou; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae
2017-03-01
Bone cutting errors can cause malalignment of unicompartmental knee arthroplasties (UKA). Although the extent of tibial malalignment due to horizontal cutting errors has been well reported, there is a lack of studies evaluating malalignment as a consequence of keel cutting errors, particularly in the Oxford UKA. The purpose of this study was to examine keel cutting errors during Oxford UKA placement using a navigation system and to clarify whether two different tibial keel cutting techniques would have different error rates. The alignment of the tibial cut surface after a horizontal osteotomy and the surface of the tibial trial component was measured with a navigation system. Cutting error was defined as the angular difference between these measurements. The following two techniques were used: the standard "pushing" technique in 83 patients (group P) and a modified "dolphin" technique in 41 patients (group D). In all 123 patients studied, the mean absolute keel cutting error was 1.7° and 1.4° in the coronal and sagittal planes, respectively. In group P, there were 22 outlier patients (27 %) in the coronal plane and 13 (16 %) in the sagittal plane. Group D had three outlier patients (8 %) in the coronal plane and none (0 %) in the sagittal plane. Significant differences were observed in the outlier ratio of these techniques in both the sagittal (P = 0.014) and coronal (P = 0.008) planes. Our study demonstrated overall keel cutting errors of 1.7° in the coronal plane and 1.4° in the sagittal plane. The "dolphin" technique was found to significantly reduce keel cutting errors on the tibial side. This technique will be useful for accurate component positioning and therefore improve the longevity of Oxford UKAs. Retrospective comparative study, Level III.
FAMA: Fast Automatic MOOG Analysis
NASA Astrophysics Data System (ADS)
Magrini, Laura; Randich, Sofia; Friel, Eileen; Spina, Lorenzo; Jacobson, Heather; Cantat-Gaudin, Tristan; Donati, Paolo; Baglioni, Roberto; Maiorca, Enrico; Bragaglia, Angela; Sordo, Rosanna; Vallenari, Antonella
2014-02-01
FAMA (Fast Automatic MOOG Analysis), written in Perl, computes the atmospheric parameters and abundances of a large number of stars using measurements of equivalent widths (EWs) automatically and independently of any subjective approach. Based on the widely-used MOOG code, it simultaneously searches for three equilibria, excitation equilibrium, ionization balance, and the relationship between logn(FeI) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and errors due to the uncertainties in the stellar parameters. Convergence criteria are not fixed "a priori" but instead are based on the quality of the spectra.
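The three convergence checks can be written schematically as follows; the random line lists below stand in for real MOOG output, so only the structure of the test is meaningful:

```python
import numpy as np

# Schematic version of the equilibria checks: the slope of Fe I abundance against
# excitation potential (Teff), the slope against reduced EW (microturbulence), and
# the Fe I - Fe II offset (log g) should all be consistent with zero within the
# line-to-line scatter.
rng = np.random.default_rng(2)
chi = rng.uniform(0.5, 5.0, 60)                    # excitation potentials [eV]
red_ew = rng.uniform(-5.8, -4.6, 60)               # log(EW/lambda), the reduced EW
fe1 = 7.50 + rng.normal(0, 0.05, 60)               # Fe I line abundances
fe2 = 7.50 + rng.normal(0, 0.05, 10)               # Fe II line abundances

def slope_and_error(x, y):
    coeff, cov = np.polyfit(x, y, 1, cov=True)
    return coeff[0], np.sqrt(cov[0, 0])

s_exc, e_exc = slope_and_error(chi, fe1)           # excitation equilibrium
s_ew, e_ew = slope_and_error(red_ew, fe1)          # reduced-EW relationship
ion_balance = fe1.mean() - fe2.mean()              # ionization balance
print(f"excitation slope {s_exc:+.4f} +/- {e_exc:.4f}")
print(f"reduced-EW slope {s_ew:+.4f} +/- {e_ew:.4f}")
print(f"FeI - FeII       {ion_balance:+.4f}")
```

In the scheme described above, the quality of the spectra (here, the line-to-line scatter) is what sets how close to zero these diagnostics must be before the parameters are accepted.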
System safety management: A new discipline
NASA Technical Reports Server (NTRS)
Pope, W. C.
1971-01-01
The systems theory is discussed in relation to safety management. It is suggested that systems safety management, as a new discipline, holds great promise for reducing operating errors, conserving labor resources, avoiding operating costs due to mistakes, and for improving managerial techniques. It is pointed out that managerial failures or system breakdowns are the basic reasons for human errors and condition defects. In this respect, a recommendation is made that safety engineers stop visualizing the problem only with the individual (supervisor or employee) and see the problem from the systems point of view.
Retrievals of water quality parameters from satellite measurements over optically shallow waters have been problematic due to bottom contamination of the signals. As a result, large errors are associated with derived water column properties. These deficiencies greatly reduce the ...
NASA Astrophysics Data System (ADS)
Son, Young-Sun; Kim, Hyun-cheol
2018-05-01
Chlorophyll (Chl) concentration is one of the key indicators identifying changes in the Arctic marine ecosystem. However, current Chl algorithms are not accurate in the Arctic Ocean due to different bio-optical properties from those in the lower latitude oceans. In this study, we evaluated the current Chl algorithms and analyzed the cause of the error in the western coastal waters of Svalbard, which are known to be sensitive to climate change. The NASA standard algorithms were found to overestimate the Chl concentration in the region. This was due to high non-algal particle (NAP) absorption and colored dissolved organic matter (CDOM) variability at blue wavelengths. In addition, at lower Chl concentrations (0.1-0.3 mg m-3), chlorophyll-specific absorption coefficients were ∼2.3 times higher than those of other Arctic oceans. This was another reason for the overestimation of Chl concentration. The regionally tuned, OC4-based Svalbard Chl (SC4) algorithm, developed to retrieve more accurate Chl estimates, reduced the mean absolute percentage difference (APD) error from 215% to 49%, the mean relative percentage difference (RPD) error from 212% to 16%, and the normalized root mean square (RMS) error from 211% to 68%. This region has abundant suspended matter due to the melting of tidal glaciers. We evaluated the performance of total suspended matter (TSM) algorithms. Previously published TSM algorithms generally overestimated the TSM concentration in this region. The Svalbard TSM-single band algorithm for low TSM range (ST-SB-L) decreased the APD and RPD errors by 52% and 14%, respectively, but the RMS error still remained high (105%).
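The OC4-style functional form referred to above is a fourth-order polynomial, in log10 space, of the maximum blue-to-green reflectance ratio; the sketch below uses placeholder coefficients, not the regionally tuned SC4 values, which are not reproduced here.

```python
import numpy as np

# OC4-style band-ratio form: log10(Chl) = a0 + a1*x + a2*x^2 + a3*x^3 + a4*x^4,
# where x = log10(max(Rrs443, Rrs490, Rrs510) / Rrs555).
# The coefficients below are placeholders for illustration only.
coeffs = [0.32, -3.0, 3.0, -1.4, -0.5]            # a0..a4, assumed values

def oc4_style_chl(rrs443, rrs490, rrs510, rrs555):
    ratio = max(rrs443, rrs490, rrs510) / rrs555
    x = np.log10(ratio)
    log_chl = sum(a * x**i for i, a in enumerate(coeffs))
    return 10.0 ** log_chl                         # mg m^-3

print(oc4_style_chl(0.004, 0.005, 0.0045, 0.0030))
```

Regional tuning amounts to refitting these coefficients to local in situ data, which is how the SC4 algorithm compensates for the high NAP absorption, CDOM variability, and elevated chlorophyll-specific absorption described above.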
Huang, C.; Townshend, J.R.G.; Liang, S.; Kalluri, S.N.V.; DeFries, R.S.
2002-01-01
Measured and modeled point spread functions (PSF) of sensor systems indicate that a significant portion of the recorded signal of each pixel of a satellite image originates from outside the area represented by that pixel. This hinders the ability to derive surface information from satellite images on a per-pixel basis. In this study, the impact of the PSF of the Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m bands was assessed using four images representing different landscapes. Experimental results showed that though differences between pixels derived with and without PSF effects were small on average, the PSF generally brightened dark objects and darkened bright objects. This impact of the PSF lowered the performance of a support vector machine (SVM) classifier by 5.4% in overall accuracy and increased the overall root mean square error (RMSE) by 2.4% in estimating subpixel percent land cover. An inversion method based on the known PSF model reduced the signals originating from surrounding areas by as much as 53%. This method differs from traditional PSF inversion deconvolution methods in that the PSF was adjusted with lower weighting factors for signals originating from neighboring pixels than those specified by the PSF model. By using this deconvolution method, the lost classification accuracy due to residual impact of PSF effects was reduced to only 1.66% in overall accuracy. The increase in the RMSE of estimated subpixel land cover proportions due to the residual impact of PSF effects was reduced to 0.64%. Spatial aggregation also effectively reduced the errors in estimated land cover proportion images. About 50% of the estimation errors were removed after applying the deconvolution method and aggregating derived proportion images to twice their dimensional pixel size. © 2002 Elsevier Science Inc. All rights reserved.
McMahon, Camilla M.; Henderson, Heather A.
2014-01-01
Error-monitoring, or the ability to recognize one's mistakes and implement behavioral changes to prevent further mistakes, may be impaired in individuals with Autism Spectrum Disorder (ASD). Children and adolescents (ages 9-19) with ASD (n = 42) and typical development (n = 42) completed two face processing tasks that required discrimination of either the gender or affect of standardized face stimuli. Post-error slowing and the difference in Error-Related Negativity amplitude between correct and incorrect responses (ERNdiff) were used to index error-monitoring ability. Overall, ERNdiff increased with age. On the Gender Task, individuals with ASD had a smaller ERNdiff than individuals with typical development; however, on the Affect Task, there were no significant diagnostic group differences on ERNdiff. Individuals with ASD may have ERN amplitudes similar to those observed in individuals with typical development in more social contexts compared to less social contexts due to greater consequences for errors, more effortful processing, and/or reduced processing efficiency in these contexts. Across all participants, more post-error slowing on the Affect Task was associated with better social cognitive skills. PMID:25066088
Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C
2018-06-01
Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.
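The rotation-matrix and translation-vector step can be illustrated with a minimal rigid-transform sketch; the numbers are invented and this is not the published system's calibration:

```python
import numpy as np

# Minimal rigid-transform sketch: a transform [R | t] maps CT-derived model points
# into the camera frame so the virtual overlay can be drawn in register with the
# stereo video. Landmark coordinates, angles and offsets are illustrative.

def rotation_z(angle_deg):
    a = np.radians(angle_deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

R = rotation_z(5.0)                           # small head rotation (assumed)
t = np.array([2.0, -1.0, 500.0])              # translation to the camera frame [mm]

model_points = np.array([[10.0, 0.0, 0.0],    # landmarks on the mandible model
                         [0.0, 15.0, 0.0]])
camera_points = (R @ model_points.T).T + t    # landmarks expressed in the camera frame

# Overlay (geometric) error: distance between the mapped model points and the
# positions actually observed by the tracking cameras.
observed = camera_points + np.array([0.3, 0.0, 0.1])
print("overlay error [mm]:", np.linalg.norm(camera_points - observed, axis=1))
```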
NASA Astrophysics Data System (ADS)
Chand, Naresh; Magill, Peter D.; Swaminathan, Venkat S.; Yadvish, R. D.
1999-04-01
For low cost fiber-to-the-home (FTTH) passive optical networks (PON), we have studied the delivery of broadcast digital video as an overlay to baseband switched digital services on the same fiber using a single transmitter and a single receiver. We have multiplexed the baseband data at 155.52 Mbps with digital video QPSK channels in the 270 - 1450 MHz range with minimal degradation. We used an additional 860 MHz carrier modulated with 8 Mbps QPSK as a test-signal. An optical to electrical (O/E) receiver using an APD satisfies the power budget needs of ITU-T document G983.x for both class B and C operations (i.e., receiver sensitivity less than -33 dBm for a 10-10 bit error rate) without any FEC for both data and video. The PIN diode O/E receiver nearly satisfies the need for class B operation (-30 dBm receiver sensitivity) of G983 with FEC in QPSK FDM video. For a 155.52 Mbps baseband data transmission and for a given bit error rate, there is approximately 6 dBo1 optical power penalty due to video overlay. Of this, 1 dBo penalty is due to biasing the laser with an extinction ratio reduced from 10 dBo to approximately 6 dBo, and approximately 5 dBo penalty is due to receiver bandwidth increasing from approximately 100 MHz to approximately 1 GHz. The penalty due to receiver is after optimizing the filter for baseband data, and is caused by the reduced value of feedback resistor of the first stage transimpedance amplifier. The optical power penalty for video transmission is about 2 dBo due to reduced optical modulation index.
NASA Astrophysics Data System (ADS)
Bukhari, W.; Hong, S.-M.
2016-03-01
The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient’s breathing cycle. The algorithm, named EKF-GPRN+ , first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN+ prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN+ implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN+ . The experimental results show that the EKF-GPRN+ algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN+ algorithm can further reduce the prediction error by employing the gating function, albeit at the cost of reduced duty cycle. The error reduction allows the clinical target volume to planning target volume (CTV-PTV) margin to be reduced, leading to decreased normal-tissue toxicity and possible dose escalation. The CTV-PTV margin is also evaluated to quantify clinical benefits of EKF-GPRN+ prediction.
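A simplified sketch of the per-coordinate prediction stage is given below; a linear constant-velocity Kalman filter stands in for the EKF, the breathing trace is synthetic, and the GPRN correction and gating logic are not reproduced.

```python
import numpy as np

# Constant-velocity Kalman filter on a synthetic breathing trace, extrapolated
# ahead by the lookahead interval. Noise covariances and the trace are assumed.
dt, lookahead_steps = 0.192, 2                 # 192 ms sampling, ~384 ms lookahead
A = np.array([[1, dt], [0, 1]])                # state: [position, velocity]
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-4, 1e-3])                      # assumed process noise
Rm = np.array([[0.05]])                        # assumed measurement noise

t = np.arange(0, 30, dt)
trace = 10 * np.sin(2 * np.pi * t / 4.0)       # synthetic 4 s breathing cycle [mm]

x, P = np.zeros((2, 1)), np.eye(2)
errors = []
for k, z in enumerate(trace):
    x, P = A @ x, A @ P @ A.T + Q                               # predict
    S = H @ P @ H.T + Rm
    K = P @ H.T @ np.linalg.inv(S)                              # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)                       # update
    P = (np.eye(2) - K @ H) @ P
    if k + lookahead_steps < len(trace):                        # predict ahead
        ahead = np.linalg.matrix_power(A, lookahead_steps) @ x
        errors.append(trace[k + lookahead_steps] - ahead[0, 0])
print("RMS lookahead error [mm]:", np.sqrt(np.mean(np.square(errors))))
```

In the full algorithm, the GPRN stage corrects this residual prediction error in 3D, and the trace of its predictive covariance is thresholded to pause the beam when a large error is likely.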
Errors in radiation oncology: A study in pathways and dosimetric impact
Drzymala, Robert E.; Purdy, James A.; Michalski, Jeff
2005-01-01
As complexity for treating patients increases, so does the risk of error. Some publications have suggested that record and verify (R&V) systems may contribute in propagating errors. Direct data transfer has the potential to eliminate most, but not all, errors. And although the dosimetric consequences may be obvious in some cases, a detailed study does not exist. In this effort, we examined potential errors in terms of scenarios, pathways of occurrence, and dosimetry. Our goal was to prioritize error prevention according to likelihood of event and dosimetric impact. For conventional photon treatments, we investigated errors of incorrect source‐to‐surface distance (SSD), energy, omitted wedge (physical, dynamic, or universal) or compensating filter, incorrect wedge or compensating filter orientation, improper rotational rate for arc therapy, and geometrical misses due to incorrect gantry, collimator or table angle, reversed field settings, and setup errors. For electron beam therapy, errors investigated included incorrect energy, incorrect SSD, along with geometric misses. For special procedures we examined errors for total body irradiation (TBI, incorrect field size, dose rate, treatment distance) and LINAC radiosurgery (incorrect collimation setting, incorrect rotational parameters). Likelihood of error was determined and subsequently rated according to our history of detecting such errors. Dosimetric evaluation was conducted by using dosimetric data, treatment plans, or measurements. We found geometric misses to have the highest error probability. They most often occurred due to improper setup via coordinate shift errors or incorrect field shaping. The dosimetric impact is unique for each case and depends on the proportion of fields in error and volume mistreated. These errors were short‐lived due to rapid detection via port films. The most significant dosimetric error was related to a reversed wedge direction. This may occur due to incorrect collimator angle or wedge orientation. For parallel‐opposed 60° wedge fields, this error could be as high as 80% to a point off‐axis. Other examples of dosimetric impact included the following: SSD, ~2%/cm for photons or electrons; photon energy (6 MV vs. 18 MV), on average 16% depending on depth, electron energy, ~0.5cm of depth coverage per MeV (mega‐electron volt). Of these examples, incorrect distances were most likely but rapidly detected by in vivo dosimetry. Errors were categorized by occurrence rate, methods and timing of detection, longevity, and dosimetric impact. Solutions were devised according to these criteria. To date, no one has studied the dosimetric impact of global errors in radiation oncology. Although there is heightened awareness that with increased use of ancillary devices and automation, there must be a parallel increase in quality check systems and processes, errors do and will continue to occur. This study has helped us identify and prioritize potential errors in our clinic according to frequency and dosimetric impact. For example, to reduce the use of an incorrect wedge direction, our clinic employs off‐axis in vivo dosimetry. To avoid a treatment distance setup error, we use both vertical table settings and optical distance indicator (ODI) values to properly set up fields. As R&V systems become more automated, more accurate and efficient data transfer will occur. This will require further analysis. Finally, we have begun examining potential intensity‐modulated radiation therapy (IMRT) errors according to the same criteria. 
PACS numbers: 87.53.Xd, 87.53.St PMID:16143793
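The roughly 2% per cm figure quoted for SSD errors can be checked with the inverse-square law alone (scatter and depth-dose changes are ignored, so this is only an approximation):

```python
# Inverse-square estimate of the fractional dose change from an SSD setup error,
# evaluated at an assumed calculation depth of 10 cm for a nominal 100 cm SSD.

def inverse_square_dose_error(ssd_nominal_cm, ssd_error_cm, depth_cm=10.0):
    r_nominal = ssd_nominal_cm + depth_cm
    r_actual = ssd_nominal_cm + ssd_error_cm + depth_cm
    return (r_nominal / r_actual) ** 2 - 1.0      # fractional dose change

for err in (1.0, 2.0, 5.0):
    change = inverse_square_dose_error(100.0, err)
    print(f"SSD error {err:+.0f} cm -> dose change {change:+.1%}")
```

A 1 cm error at 100 cm SSD gives roughly a 2% dose change, consistent with the figure in the abstract; in vivo dosimetry catches such errors quickly precisely because the effect is systematic across the whole field.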
Study on compensation algorithm of head skew in hard disk drives
NASA Astrophysics Data System (ADS)
Xiao, Yong; Ge, Xiaoyu; Sun, Jingna; Wang, Xiaoyan
2011-10-01
In hard disk drives (HDDs), head skew among multiple heads is pre-calibrated during the manufacturing process. In real high-capacity storage applications, the head stack may become tilted due to environmental changes, resulting in additional head skew errors from the outer diameter (OD) to the inner diameter (ID). If these errors fall below the preset threshold that triggers power-on recalibration, the current strategy does not detect them, and drive performance in severe environments is degraded. In this paper, in-the-field compensation of small DC head skew variation across the stroke is proposed, based on a zone table. Test results are provided that demonstrate its effectiveness in reducing observer error and enhancing drive performance through accurate prediction of DC head skew.
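The zone-table idea can be sketched as follows; the track positions and skew values are invented, and the interpolation scheme is an assumption rather than the drive firmware's actual implementation:

```python
import numpy as np

# Toy zone-table sketch: the measured DC head skew at a few calibration zones
# across the stroke is stored per head, and the compensation applied at an
# arbitrary track is interpolated from that table so the observer does not see
# a slowly varying position offset from OD to ID. All values are invented.
zone_tracks = np.array([0, 25_000, 50_000, 75_000, 100_000])     # OD -> ID
skew_table = {                                                    # skew per head [servo counts]
    0: np.array([0.0, 0.4, 0.9, 1.5, 2.2]),
    1: np.array([0.0, -0.2, -0.5, -0.9, -1.4]),
}

def skew_compensation(head, track):
    """DC head-skew correction for this head at this track, from the zone table."""
    return float(np.interp(track, zone_tracks, skew_table[head]))

raw_position_error = 3.0                          # observer error before compensation
track, head = 62_000, 0
corrected = raw_position_error - skew_compensation(head, track)
print("compensation:", skew_compensation(head, track), "corrected error:", corrected)
```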
Nguyen, Hung P.; Dingwell, Jonathan B.
2012-01-01
Determining how the human nervous system contends with neuro-motor noise is vital to understanding how humans achieve accurate goal-directed movements. Experimentally, people learning skilled tasks tend to reduce variability in distal joint movements more than in proximal joint movements. This suggests that they might be imposing greater control over distal joints than proximal joints. However, the reasons for this remain unclear, largely because it is not experimentally possible to directly manipulate either the noise or the control at each joint independently. Therefore, this study used a 2 degree-of-freedom torque driven arm model to determine how different combinations of noise and/or control independently applied at each joint affected the reaching accuracy and the total work required to make the movement. Signal-dependent noise was simultaneously and independently added to the shoulder and elbow torques to induce endpoint errors during planar reaching. Feedback control was then applied, independently and jointly, at each joint to reduce endpoint error due to the added neuromuscular noise. Movement direction and the inertia distribution along the arm were varied to quantify how these biomechanical variations affected the system performance. Endpoint error and total net work were computed as dependent measures. When each joint was independently subjected to noise in the absence of control, endpoint errors were more sensitive to distal (elbow) noise than to proximal (shoulder) noise for nearly all combinations of reaching direction and inertia ratio. The effects of distal noise on endpoint errors were more pronounced when inertia was distributed more toward the forearm. In contrast, the total net work decreased as mass was shifted to the upper arm for reaching movements in all directions. When noise was present at both joints and joint control was implemented, controlling the distal joint alone reduced endpoint errors more than controlling the proximal joint alone for nearly all combinations of reaching direction and inertia ratio. Applying control only at the distal joint was more effective at reducing endpoint errors when more of the mass was more proximally distributed. Likewise, controlling the distal joint alone required less total net work than controlling the proximal joint alone for nearly all combinations of reaching distance and inertia ratio. It is more efficient to reduce endpoint error and energetic cost by selectively applying control to reduce variability in the distal joint than the proximal joint. The reasons for this arise from the biomechanical configuration of the arm itself. PMID:22757504
NASA Astrophysics Data System (ADS)
Malys, Brian J.; Piotrowski, Michelle L.; Owens, Kevin G.
2018-02-01
Frustrated by worse than expected error for both peak area and time-of-flight (TOF) in matrix assisted laser desorption ionization (MALDI) experiments using samples prepared by electrospray deposition, it was finally determined that there was a correlation between sample location on the target plate and the measured TOF/peak area. Variations in both TOF and peak area were found to be due to small differences in the initial position of ions formed in the source region of the TOF mass spectrometer. These differences arise largely from misalignment of the instrument sample stage, with a smaller contribution arising from the non-ideal shape of the target plates used. By physically measuring the target plates used and comparing TOF data collected from three different instruments, an estimate of the magnitude and direction of the sample stage misalignment was determined for each of the instruments. A correction method was developed to correct the TOFs and peak areas obtained for a given combination of target plate and instrument. Two correction factors are determined, one by initially collecting spectra from each sample position used and another by using spectra from a single position for each set of samples on a target plate. For TOF and mass values, use of the correction factor reduced the error by a factor of 4, with the relative standard deviation (RSD) of the corrected masses being reduced to 12-24 ppm. For the peak areas, the RSD was reduced from 28% to 16% for samples deposited twice onto two target plates over two days.
García-Molina Sáez, C; Urbieta Sanz, E; Madrigal de Torres, M; Vicente Vera, T; Pérez Cárceles, M D
2016-04-01
It is well known that medication reconciliation at discharge is a key strategy to ensure proper drug prescription and the effectiveness and safety of any treatment. Different types of interventions to reduce reconciliation errors at discharge have been tested, many of which are based on the use of electronic tools as they are useful to optimize the medication reconciliation process. However, not all countries are progressing at the same speed in this task and not all tools are equally effective. So it is important to collate updated country-specific data in order to identify possible strategies for improvement in each particular region. Our aim therefore was to analyse the effectiveness of a computerized pharmaceutical intervention to reduce reconciliation errors at discharge in Spain. A quasi-experimental interrupted time-series study was carried out in the cardio-pneumology unit of a general hospital from February to April 2013. The study consisted of three phases: pre-intervention, intervention and post-intervention, each involving 23 days of observations. At the intervention period, a pharmacist was included in the medical team and entered the patient's pre-admission medication in a computerized tool integrated into the electronic clinical history of the patient. The effectiveness was evaluated by the differences between the mean percentages of reconciliation errors in each period using a Mann-Whitney U test accompanied by Bonferroni correction, eliminating autocorrelation of the data by first using an ARIMA analysis. In addition, the types of error identified and their potential seriousness were analysed. A total of 321 patients (119, 105 and 97 in each phase, respectively) were included in the study. For the 3966 medicaments recorded, 1087 reconciliation errors were identified in 77·9% of the patients. The mean percentage of reconciliation errors per patient in the first period of the study was 42·18%, falling to 19·82% during the intervention period (P = 0·000). When the intervention was withdrawn, the mean percentage of reconciliation errors increased again to 27·72% (P = 0·008). The difference between the percentages of pre- and post-intervention periods was statistically significant (P = 0·000). Most reconciliation errors were due to omission (46·7%) or incomplete prescription (43·8%), and 35·3% of which could have caused harm to the patient. A computerized pharmaceutical intervention is shown to reduce reconciliation errors in the context of a high incidence of such errors. © 2016 John Wiley & Sons Ltd.
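The statistical comparison described above can be reproduced in outline with simulated per-patient error percentages (not the study data): pairwise Mann-Whitney U tests between periods, with a Bonferroni correction for the number of comparisons.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Simulated per-patient reconciliation-error percentages for the three periods
# (sample sizes match the abstract; the distributions are assumed).
rng = np.random.default_rng(3)
pre = rng.normal(42, 15, 119).clip(0, 100)
intervention = rng.normal(20, 12, 105).clip(0, 100)
post = rng.normal(28, 14, 97).clip(0, 100)

comparisons = {
    "pre vs intervention": (pre, intervention),
    "intervention vs post": (intervention, post),
    "pre vs post": (pre, post),
}
alpha_corrected = 0.05 / len(comparisons)          # Bonferroni-adjusted threshold
for name, (a, b) in comparisons.items():
    stat, p = mannwhitneyu(a, b, alternative="two-sided")
    print(f"{name}: U={stat:.0f}, p={p:.4f}, significant={p < alpha_corrected}")
```

The study additionally removed autocorrelation with an ARIMA model before testing, a step not reproduced in this sketch.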
Dasgupta, Subhashish; Banerjee, Rupak K; Hariharan, Prasanna; Myers, Matthew R
2011-02-01
Experimental studies of thermal effects in high-intensity focused ultrasound (HIFU) procedures are often performed with the aid of fine wire thermocouples positioned within tissue phantoms. Thermocouple measurements are subject to several types of error which must be accounted for before reliable inferences can be made on the basis of the measurements. Thermocouple artifact due to viscous heating is one source of error. A second is the uncertainty regarding the position of the beam relative to the target location or the thermocouple junction, due to the error in positioning the beam at the junction. This paper presents a method for determining the location of the beam relative to a fixed pair of thermocouples. The localization technique reduces the uncertainty introduced by positioning errors associated with very narrow HIFU beams. The technique is presented in the context of an investigation into the effect of blood flow through large vessels on the efficacy of HIFU procedures targeted near the vessel. Application of the beam localization method allowed conclusions regarding the effects of blood flow to be drawn from previously inconclusive (because of localization uncertainties) data. Comparison of the position-adjusted transient temperature profiles for flow rates of 0 and 400ml/min showed that blood flow can reduce temperature elevations by more than 10%, when the HIFU focus is within a 2mm distance from the vessel wall. At acoustic power levels of 17.3 and 24.8W there is a 20- to 70-fold decrease in thermal dose due to the convective cooling effect of blood flow, implying a shrinkage in lesion size. The beam-localization technique also revealed the level of thermocouple artifact as a function of sonication time, providing investigators with an indication of the quality of thermocouple data for a given exposure time. The maximum artifact was found to be double the measured temperature rise, during initial few seconds of sonication. Copyright © 2010 Elsevier B.V. All rights reserved.
Analysis of target wavefront error for secondary mirror of a spaceborne telescope
NASA Astrophysics Data System (ADS)
Chang, Shenq-Tsong; Lin, Wei-Cheng; Kuo, Ching-Hsiang; Chan, Chia-Yen; Lin, Yu-Chuan; Huang, Ting-Ming
2014-09-01
During the fabrication of an aspherical mirror, inspection of the residual wavefront error is critical. In a spaceborne telescope development program, the primary mirror is made of ZERODUR with a clear aperture of 450 mm and a mass of 10 kg after lightweighting. Deformation of the mirror due to gravity is expected; hence uniform support, monitored by load cells, has been applied to reduce the gravity effect. Inspection was performed to determine the residual wavefront error with the mirror facing upwards, and corrective polishing was carried out according to the measurement. However, comparison with data from a bench test, in which the primary mirror faces horizontally, revealed deviations between the two measurements. Based on the wavefront error measured in the bench test, the optical system is predicted to fail its requirement. A target wavefront error for the secondary mirror is therefore analyzed to compensate for that of the primary mirror, and the resulting optical performance is presented.
Improved Calibration through SMAP RFI Change Detection
NASA Technical Reports Server (NTRS)
Piepmeier, Jeffrey; De Amici, Giovanni; Mohammed, Priscilla; Peng, Jinzheng
2017-01-01
Anthropogenic Radio-Frequency Interference (RFI) drove both the SMAP (Soil Moisture Active Passive) microwave radiometer hardware and Level 1 science algorithm designs to use new technology and techniques for the first time on a spaceflight project. Care was taken to provide special features allowing the detection and removal of harmful interference in order to meet the error budget. Nonetheless, the project accepted a risk that RFI and its mitigation would exceed the 1.3-K error budget. Thus, RFI will likely remain a challenge afterwards due to its changing and uncertain nature. To address the challenge, we seek to answer the following questions: How does RFI evolve over the SMAP lifetime? What calibration error does the changing RFI environment cause? Can time series information be exploited to reduce these errors and improve calibration for all science products reliant upon SMAP radiometer data? In this talk, we address the first question.
Analysis on the optical aberration effect on spectral resolution of coded aperture spectroscopy
NASA Astrophysics Data System (ADS)
Hao, Peng; Chi, Mingbo; Wu, Yihui
2017-10-01
The coded aperture spectrometer can achieve high throughput and high spectral resolution by replacing the traditional single slit with a two-dimensional slit array manufactured by MEMS technology. However, the sampling accuracy of the coded spectrum image is distorted by system aberrations, machining errors, mounting errors and so on, resulting in reduced spectral resolution. The factors that influence the spectral resolution are the decoding error, the spectral resolution of each column, and the column spectrum offset correction. For a Czerny-Turner spectrometer, the spectral resolution of each column depends mostly on astigmatism; in coded aperture spectroscopy, uncorrected astigmatism therefore results in degraded performance, and methods must be used to reduce or remove this limiting astigmatism. Field curvature and spectral curvature can also introduce errors in the column spectrum offset correction.
Feedback on prescribing errors to junior doctors: exploring views, problems and preferred methods.
Bertels, Jeroen; Almoudaris, Alex M; Cortoos, Pieter-Jan; Jacklin, Ann; Franklin, Bryony Dean
2013-06-01
Prescribing errors are common in hospital inpatients. However, the literature suggests that doctors are often unaware of their errors as they are not always informed of them. It has been suggested that providing more feedback to prescribers may reduce subsequent error rates. Only few studies have investigated the views of prescribers towards receiving such feedback, or the views of hospital pharmacists as potential feedback providers. Our aim was to explore the views of junior doctors and hospital pharmacists regarding feedback on individual doctors' prescribing errors. Objectives were to determine how feedback was currently provided and any associated problems, to explore views on other approaches to feedback, and to make recommendations for designing suitable feedback systems. A large London NHS hospital trust. To explore views on current and possible feedback mechanisms, self-administered questionnaires were given to all junior doctors and pharmacists, combining both 5-point Likert scale statements and open-ended questions. Agreement scores for statements regarding perceived prescribing error rates, opinions on feedback, barriers to feedback, and preferences for future practice. Response rates were 49% (37/75) for junior doctors and 57% (57/100) for pharmacists. In general, doctors did not feel threatened by feedback on their prescribing errors. They felt that feedback currently provided was constructive but often irregular and insufficient. Most pharmacists provided feedback in various ways; however some did not or were inconsistent. They were willing to provide more feedback, but did not feel it was always effective or feasible due to barriers such as communication problems and time constraints. Both professional groups preferred individual feedback with additional regular generic feedback on common or serious errors. Feedback on prescribing errors was valued and acceptable to both professional groups. From the results, several suggested methods of providing feedback on prescribing errors emerged. Addressing barriers such as the identification of individual prescribers would facilitate feedback in practice. Research investigating whether or not feedback reduces the subsequent error rate is now needed.
Focal spot motion of linear accelerators and its effect on portal image analysis.
Sonke, Jan-Jakob; Brand, Bob; van Herk, Marcel
2003-06-01
The focal spot of a linear accelerator is often considered to have a fully stable position. In practice, however, the beam control loop of a linear accelerator needs to stabilize after the beam is turned on. As a result, some motion of the focal spot might occur during the start-up phase of irradiation. When acquiring portal images, this motion will affect the projected position of anatomy and field edges, especially when low exposures are used. In this paper, the motion of the focal spot and the effect of this motion on portal image analysis are quantified. A slightly tilted narrow slit phantom was placed at the isocenter of several linear accelerators and images were acquired (3.5 frames per second) by means of an amorphous silicon flat panel imager positioned approximately 0.7 m below the isocenter. The motion of the focal spot was determined by converting the tilted slit images to subpixel accurate line spread functions. The error in portal image analysis due to focal spot motion was estimated by a subtraction of the relative displacement of the projected slit from the relative displacement of the field edges. It was found that the motion of the focal spot depends on the control system and design of the accelerator. The shift of the focal spot at the start of irradiation ranges between 0.05-0.7 mm in the gun-target (GT) direction. In the left-right (AB) direction the shift is generally smaller. The resulting error in portal image analysis due to focal spot motion ranges between 0.05-1.1 mm for a dose corresponding to two monitor units (MUs). For 20 MUs, the effect of the focal spot motion reduces to 0.01-0.3 mm. The error in portal image analysis due to focal spot motion can be reduced by reducing the applied dose rate.
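The displacement analysis described above amounts to tracking the sub-pixel position of a narrow profile from frame to frame. As a rough illustration (not the authors' code), a centroid estimator over hypothetical slit profiles might look like the sketch below; `frames` and `edge_pos` are assumed inputs.

```python
import numpy as np

def subpixel_position(profile):
    """Sub-pixel position of a slit/line-spread profile via its centroid."""
    x = np.arange(profile.size)
    w = profile - profile.min()        # remove the baseline before weighting
    return np.sum(x * w) / np.sum(w)

# frames: hypothetical (n_frames, n_pixels) array of slit profiles per acquisition
# edge_pos: hypothetical per-frame field-edge positions from the same images
# slit_pos = np.array([subpixel_position(f) for f in frames])
# analysis_error = (edge_pos - edge_pos.mean()) - (slit_pos - slit_pos.mean())
```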
Crowded field photometry with deconvolved images.
NASA Astrophysics Data System (ADS)
Linde, P.; Spännare, S.
A local implementation of the Lucy-Richardson algorithm has been used to deconvolve a set of crowded stellar field images. The effects of deconvolution on detection limits as well as on photometric and astrometric properties have been investigated as a function of the number of deconvolution iterations. Results show that deconvolution improves detection of faint stars, although artifacts are also found. Deconvolution provides more stars measurable without significant degradation of positional accuracy. The photometric precision is affected by deconvolution in several ways. Errors due to unresolved images are notably reduced, while flux redistribution between stars and background increases the errors.
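For readers unfamiliar with the algorithm, the Lucy-Richardson update is a short multiplicative iteration. The sketch below is a minimal, generic implementation assuming a known, shift-invariant PSF; it is not the local implementation used in the study, and the iteration count is the parameter whose effect the study investigates.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=20):
    """Minimal Richardson-Lucy deconvolution with a known PSF."""
    psf_mirror = psf[::-1, ::-1]                       # flipped PSF for the correction step
    estimate = np.full(image.shape, image.mean(), dtype=float)
    eps = 1e-12                                        # guard against division by zero
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / (blurred + eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate
```

More iterations sharpen faint stars but, as the abstract notes, also introduce artifacts and redistribute flux between stars and background, so the iteration count is a trade-off.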
Gorgich, Enam Alhagh Charkhat; Barfroshan, Sanam; Ghoreishi, Gholamreza; Yaghoobi, Maryam
2016-01-01
Introduction and Aim: Medication errors are a serious problem worldwide and among the most common medical errors that threaten patient safety and may even lead to death. The purpose of this study was to investigate the causes of medication errors and strategies for their prevention from the viewpoint of nurses and nursing students. Materials & Methods: This cross-sectional descriptive study was conducted on 327 nursing staff of Khatam-al-Anbia hospital and 62 intern nursing students of the nursing and midwifery school of Zahedan, Iran, enrolled through availability sampling in 2015. The data were collected with a valid and reliable questionnaire. To analyze the data, descriptive statistics, t-tests and ANOVA were applied using SPSS 16 software. Findings: The results showed that the most common cause of medication errors among nurses was tiredness due to increased workload (97.8%), and among nursing students it was drug calculation (77.4%). In the opinion of both nurses and nursing students, the most important prevention measure was reducing work pressure by increasing personnel in proportion to the number and condition of patients, together with creating a dedicated medication calculation unit. There was also a significant relationship between the type of ward and the mean number of medication errors in both groups. Conclusion: Based on the results, it is recommended that nurse managers resolve the human resources problem and provide workshops and in-service education about preparing medications, side effects of drugs and pharmacological knowledge. Using electronic medication cards is another measure that reduces medication errors. PMID:27045413
Lee, Sangyoon; Hu, Xinda; Hua, Hong
2016-05-01
Many error sources have been explored in regards to the depth perception problem in augmented reality environments using optical see-through head-mounted displays (OST-HMDs). Nonetheless, two error sources are commonly neglected: the ray-shift phenomenon and the change in interpupillary distance (IPD). The first source of error arises from the difference in refraction for virtual and see-through optical paths caused by an optical combiner, which is required of OST-HMDs. The second occurs from the change in the viewer's IPD due to eye convergence. In this paper, we analyze the effects of these two error sources on near-field depth perception and propose methods to compensate for these two types of errors. Furthermore, we investigate their effectiveness through an experiment comparing the conditions with and without our error compensation methods applied. In our experiment, participants estimated the egocentric depth of a virtual and a physical object located at seven different near-field distances (40∼200 cm) using a perceptual matching task. Although the experimental results showed different patterns depending on the target distance, the results demonstrated that the near-field depth perception error can be effectively reduced to a very small level (at most 1 percent error) by compensating for the two mentioned error sources.
Workflow Enhancement (WE) Improves Safety in Radiation Oncology: Putting the WE and Team Together
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, Samuel T., E-mail: chaos@ccf.org; Rose Ella Burkhardt Brain Tumor and Neuro-oncology Center, Cleveland Clinic, Cleveland, Ohio; Meier, Tim
Purpose: To review the impact of a workflow enhancement (WE) team in reducing treatment errors that reach patients within radiation oncology. Methods and Materials: It was determined that flaws in our workflow and processes resulted in errors reaching the patient. The process improvement team (PIT) was developed in 2010 to reduce errors and was later modified in 2012 into the current WE team. Workflow issues and solutions were discussed in PIT and WE team meetings. Due to tensions within PIT that resulted in employee dissatisfaction, there was a 6-month hiatus between the end of PIT and initiation of the renamed/redesigned WE team. In addition to the PIT/WE team forms, the department had separate incident forms to document treatment errors reaching the patient. These incident forms are rapidly reviewed and monitored by our departmental and institutional quality and safety groups, reflecting how seriously these forms are treated. The number of these incident forms was compared before and after instituting the WE team. Results: When PIT was disbanded, a number of errors seemed to occur in succession, requiring reinstitution and redesign of this team, rebranded the WE team. Interestingly, the number of incident forms per patient visits did not change when comparing 6 months during the PIT, 6 months during the hiatus, and the first 6 months after instituting the WE team (P=.85). However, 6 to 12 months after instituting the WE team, the number of incident forms per patient visits decreased (P=.028). After the WE team, employee satisfaction and commitment to quality increased as demonstrated by Gallup surveys, suggesting a correlation to the WE team. Conclusions: A team focused on addressing workflow and improving processes can reduce the number of errors reaching the patient. Time is necessary before a reduction in errors reaching patients will be seen.
Optimizing X-ray mirror thermal performance using matched profile cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Lin; Cocco, Daniele; Kelez, Nicholas
2015-08-07
To cover a large photon energy range, the length of an X-ray mirror is often longer than the beam footprint length for much of the applicable energy range. To limit thermal deformation of such a water-cooled X-ray mirror, a technique using side cooling with a cooled length shorter than the beam footprint length is proposed. This cooling length can be optimized by using finite-element analysis. For the Kirkpatrick–Baez (KB) mirrors at LCLS-II, the thermal deformation can be reduced by a factor of up to 30, compared with full-length cooling. Furthermore, a second, alternative technique, based on a similar principle, is presented: using a long, single-length cooling block on each side of the mirror and adding electric heaters between the cooling blocks and the mirror substrate. The electric heaters consist of a number of cells, located along the mirror length. The total effective length of the electric heater can then be adjusted by choosing which cells to energize, using electric power supplies. The residual height error can be minimized to 0.02 nm RMS by using optimal heater parameters (length and power density). Compared with a case without heaters, this residual height error is reduced by a factor of up to 45. The residual height error in the LCLS-II KB mirrors, due to free-electron laser beam heat load, can be reduced by a factor of ~11 below the requirement. The proposed techniques are also effective in reducing thermal slope errors and are, therefore, applicable to white beam mirrors in synchrotron radiation beamlines.
NASA Technical Reports Server (NTRS)
Daily, J. W.
1978-01-01
Laser induced fluorescence spectroscopy of flames is discussed, and derived uncertainty relations are used to calculate detectability limits due to statistical errors. Interferences due to Rayleigh scattering from molecules as well as Mie scattering and incandescence from particles have been examined for their effect on detectability limits. Fluorescence trapping is studied, and some methods for reducing the effect are considered. Fluorescence trapping places an upper limit on the number density of the fluorescing species that can be measured without signal loss.
Qubit-qubit interaction in quantum computers: errors and scaling laws
NASA Astrophysics Data System (ADS)
Gea-Banacloche, Julio R.
1998-07-01
This paper explores the limitations that interaction between the physical qubits making up a quantum computer may impose on the computer's performance. For computers using atoms as qubits, magnetic dipole-dipole interactions are likely to be dominant; various types of errors which they might introduce are considered here. The strength of the interaction may be reduced by increasing the distance between qubits, which in general will make the computer slower. For ion-chain-based quantum computers the slowing down due to this effect is found to be generally more severe than that due to other causes. In particular, this effect alone would be enough to make these systems unacceptably slow for large-scale computation, whether they use the center of mass motion as the 'bus' or whether they do this via an optical cavity mode.
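As a hedged order-of-magnitude illustration of why increasing the qubit spacing suppresses the unwanted coupling (and why this slows the computer), the magnetic dipole-dipole energy falls off as 1/r³. The one-Bohr-magneton moments below are illustrative, not values taken from the paper, and angular factors are ignored.

```python
import numpy as np

MU0 = 4e-7 * np.pi      # vacuum permeability (T*m/A)
MU_B = 9.274e-24        # Bohr magneton (J/T)
HBAR = 1.055e-34        # reduced Planck constant (J*s)

def dipole_coupling_rate(r):
    """Order-of-magnitude magnetic dipole-dipole coupling (rad/s) between
    two one-Bohr-magneton moments separated by r metres."""
    return MU0 * MU_B ** 2 / (4 * np.pi * HBAR * r ** 3)

# Doubling the spacing cuts the unwanted coupling by a factor of 8, but in an
# ion chain the gates that rely on shared motion slow down as the ions separate.
# print(dipole_coupling_rate(1e-6))   # coupling at 1 micrometre spacing
```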
Impact of Temporal Masking of Flip-Flop Upsets on Soft Error Rates of Sequential Circuits
NASA Astrophysics Data System (ADS)
Chen, R. M.; Mahatme, N. N.; Diggins, Z. J.; Wang, L.; Zhang, E. X.; Chen, Y. P.; Liu, Y. N.; Narasimham, B.; Witulski, A. F.; Bhuva, B. L.; Fleetwood, D. M.
2017-08-01
Reductions in single-event (SE) upset (SEU) rates for sequential circuits due to temporal masking effects are evaluated. The impacts of supply voltage, combinational-logic delay, flip-flop (FF) SEU performance, and particle linear energy transfer (LET) values are analyzed for SE cross sections of sequential circuits. Alpha particles and heavy ions with different LET values are used to characterize the circuits fabricated at the 40-nm bulk CMOS technology node. Experimental results show that increasing the delay of the logic circuit present between FFs and decreasing the supply voltage are two effective ways of reducing SE error rates for sequential circuits for particles with low LET values due to temporal masking. SEU-hardened FFs benefit less from temporal masking than conventional FFs. Circuit hardening implications for SEU-hardened and unhardened FFs are discussed.
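A common first-order way to picture temporal masking (our simplification, not the paper's measured model) is that a flip-flop upset only matters if it is launched early enough in the clock cycle to traverse the downstream logic before the next capture edge, so upsets arriving during the last `logic_delay` of the cycle are masked:

```python
def temporally_masked_ser(raw_ser, logic_delay, clock_period):
    """First-order derating: upsets launched within `logic_delay` of the next
    capture edge do not propagate, so the observable soft error rate shrinks
    as the combinational delay between flip-flops grows."""
    assert 0 <= logic_delay < clock_period
    return raw_ser * (1.0 - logic_delay / clock_period)

# Example: half a cycle of logic delay masks half of the flip-flop upsets.
# print(temporally_masked_ser(raw_ser=1.0, logic_delay=0.5, clock_period=1.0))
```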
A novel validation and calibration method for motion capture systems based on micro-triangulation.
Nagymáté, Gergely; Tuchband, Tamás; Kiss, Rita M
2018-06-06
Motion capture systems are widely used to measure human kinematics. Nevertheless, users must consider system errors when evaluating their results. Most validation techniques for these systems are based on relative distance and displacement measurements. In contrast, our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of engineering surveying reference measurement of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18-camera OptiTrack Flex13 motion capture system. The absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). The original RMSE of 1.82 mm, dominated by scaling error, was reduced to 0.77 mm, while the correlation of the errors with their distance from the origin dropped from 0.855 to 0.209. A more easily implemented but less accurate absolute accuracy compensation method, using a tape measure over large distances, was also tested; it yielded a scaling compensation similar to that of the surveying method or of direct wand-size compensation with a high-precision 3D scanner. The presented validation methods can be less precise in some respects as compared to previous techniques, but they address an error type that has not been, and cannot be, studied with the previous validation methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
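The accuracy metric and the scaling compensation described above are straightforward to express in code. The sketch below is our illustration, not the authors' pipeline: `measured` and `reference` are hypothetical (N, 3) arrays of marker coordinates in a common frame, and a single scale factor is estimated about the coordinate origin.

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two (N, 3) coordinate sets."""
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

def scale_compensation(measured, reference):
    """Least-squares scale factor about the origin mapping measured onto reference."""
    s = np.sum(measured * reference) / np.sum(measured ** 2)
    return s, measured * s

# s, corrected = scale_compensation(measured, reference)
# print(rmse(measured, reference), rmse(corrected, reference))  # e.g. 1.82 -> 0.77 mm
```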
NASA Astrophysics Data System (ADS)
Gilles, Luc; Wang, Lianqi; Ellerbroek, Brent
2008-07-01
This paper describes the modeling effort undertaken to derive the wavefront error (WFE) budget for the Narrow Field Infrared Adaptive Optics System (NFIRAOS), which is the facility, laser guide star (LGS), dual-conjugate adaptive optics (AO) system for the Thirty Meter Telescope (TMT). The budget describes the expected performance of NFIRAOS at zenith, and has been decomposed into (i) first-order turbulence compensation terms (120 nm on-axis), (ii) opto-mechanical implementation errors (84 nm), (iii) AO component errors and higher-order effects (74 nm) and (iv) tip/tilt (TT) wavefront errors at 50% sky coverage at the galactic pole (61 nm) with natural guide star (NGS) tip/tilt/focus/astigmatism (TTFA) sensing in J band. A contingency of about 66 nm now exists to meet the observatory requirement document (ORD) total on-axis wavefront error of 187 nm, mainly on account of reduced TT errors due to updated windshake modeling and a low read-noise NGS wavefront sensor (WFS) detector. A detailed breakdown of each of these top-level terms is presented, together with a discussion on its evaluation using a mix of high-order zonal and low-order modal Monte Carlo simulations.
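The top-level terms quoted above are consistent with the usual practice of combining independent wavefront error contributions in quadrature; the check below is our own arithmetic under that assumption, not a table from the paper.

```python
import math

# Top-level NFIRAOS WFE terms quoted above (nm RMS, on-axis, at zenith)
terms = {"turbulence compensation": 120, "opto-mechanical": 84,
         "AO components / higher order": 74, "tip/tilt at 50% sky coverage": 61}

rss_total = math.sqrt(sum(v ** 2 for v in terms.values()))    # ~175 nm
requirement = 187                                             # ORD on-axis WFE (nm)
contingency = math.sqrt(requirement ** 2 - rss_total ** 2)    # ~66 nm, as stated
print(f"RSS total {rss_total:.0f} nm, contingency {contingency:.0f} nm")
```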
Optics measurement algorithms and error analysis for the proton energy frontier
NASA Astrophysics Data System (ADS)
Langner, A.; Tomás, R.
2015-03-01
Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is shown to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed and, due to the improved algorithms, the derived optical parameters now have significantly higher precision, with average error bars decreased by a factor of three to four. This allowed the calculation of β* values and proved fundamental to understanding the emittance evolution during the energy ramp.
Chu, David; Xiao, Jane; Shah, Payal; Todd, Brett
2018-06-20
Cognitive errors are a major contributor to medical error. Traditionally, medical errors at teaching hospitals are analyzed in morbidity and mortality (M&M) conferences. We aimed to describe the frequency of cognitive errors in relation to the occurrence of diagnostic and other error types, in cases presented at an emergency medicine (EM) resident M&M conference. We conducted a retrospective study of all cases presented at a suburban US EM residency monthly M&M conference from September 2011 to August 2016. Each case was reviewed using the electronic medical record (EMR) and notes from the M&M case by two EM physicians. Each case was categorized by type of primary medical error that occurred as described by Okafor et al. When a diagnostic error occurred, the case was reviewed for contributing cognitive and non-cognitive factors. Finally, when a cognitive error occurred, the case was classified into faulty knowledge, faulty data gathering or faulty synthesis, as described by Graber et al. Disagreements in error type were mediated by a third EM physician. A total of 87 M&M cases were reviewed; the two reviewers agreed on 73 cases, and 14 cases required mediation by a third reviewer. Forty-eight cases involved diagnostic errors, 47 of which were cognitive errors. Of these 47 cases, 38 involved faulty synthesis, 22 involved faulty data gathering and only 11 involved faulty knowledge. Twenty cases contained more than one type of cognitive error. Twenty-nine cases involved both a resident and an attending physician, while 17 cases involved only an attending physician. Twenty-one percent of the resident cases involved all three cognitive errors, while none of the attending cases involved all three. Forty-one percent of the resident cases and only 6% of the attending cases involved faulty knowledge. One hundred percent of the resident cases and 94% of the attending cases involved faulty synthesis. Our review of 87 EM M&M cases revealed that cognitive errors are commonly involved in cases presented, and that these errors are less likely due to deficient knowledge and more likely due to faulty synthesis. M&M conferences may therefore provide an excellent forum to discuss cognitive errors and how to reduce their occurrence.
Birch, Gabriel Carisle; Griffin, John Clark
2015-07-23
Numerous methods are available to measure the spatial frequency response (SFR) of an optical system. A recent change to the ISO 12233 photography resolution standard includes a sinusoidal Siemens star test target. We take the sinusoidal Siemens star proposed by the ISO 12233 standard, measure system SFR, and perform an analysis of errors induced by incorrectly identifying the center of a test target. We show a closed-form solution for the radial profile intensity measurement given an incorrectly determined center and describe how this error reduces the measured SFR of the system. As a result, using the closed-form solution, we propose a two-step process by which test target centers are corrected and the measured SFR is restored to the nominal, correctly centered values.
The Strategies to Homogenize PET/CT Metrics: The Case of Onco-Haematological Clinical Trials
Chauvie, Stephane; Bergesio, Fabrizio
2016-01-01
Positron emission tomography (PET) has long been a widely used tool in oncology for staging lymphomas. Recently, several large clinical trials demonstrated its utility in therapy management during treatment, paving the way to personalized medicine. In doing so, the traditional way of reporting PET based on the extent of disease has been complemented by a discrete scale that takes into account tumour metabolism. However, due to several technical, physical and biological limitations in the use of PET uptake as a biomarker, stringent rules have been used in clinical trials to reduce the errors in its evaluation. Within this manuscript we will briefly describe the evolution in PET reporting, examine the main errors in uptake measurement, and analyse which strategies the clinical trials applied to reduce them. PMID:28536393
Sharing Vital Signs between mobile phone applications.
Karlen, Walter; Dumont, Guy A; Scheffer, Cornie
2014-01-01
We propose a communication library, ShareVitalSigns, for the standardized exchange of vital sign information between health applications running on mobile platforms. The library allows an application to request one or multiple vital signs from independent measurement applications on the Android OS. Compatible measurement applications are automatically detected and can be launched from within the requesting application, simplifying the work flow for the user and reducing typing errors. Data is shared between applications using intents, a passive data structure available on Android OS. The library is accompanied by a test application which serves as a demonstrator. The secure exchange of vital sign information using a standardized library like ShareVitalSigns will facilitate the integration of measurement applications into diagnostic and other high level health monitoring applications and reduce errors due to manual entry of information.
A fatal outcome after unintentional overdosing of rivastigmine patches.
Lövborg, Henrik; Jönsson, Anna K; Hägg, Staffan
2012-02-01
Rivastigmine is an acetylcholinesterase inhibitor used in the treatment of dementia. Patches with rivastigmine for transdermal delivery have been used to increase compliance and to reduce side effects. We describe an 87-year-old male with dementia treated with multiple rivastigmine patches (Exelon 9.5 mg/24 h) who developed nausea, vomiting and renal failure with disturbed electrolytes, resulting in death. The symptoms occurred after six rivastigmine patches had been erroneously applied concomitantly by health care personnel on two consecutive days. The terminal cause of death was considered to be uremia from an acute tubular necrosis that was assessed as a result of dehydration through vomiting. The rivastigmine intoxication was assessed as having caused or contributed to the dehydrated condition. The medication error occurred at least partly due to ambiguous labeling. The clinical signs were not initially recognized as adverse effects of rivastigmine. The presented case is a description of a rivastigmine overdose due to a medication error involving patches. This case indicates the importance of clear and unambiguous instructions to avoid administration errors with patches and of vigilance toward adverse drug reactions for early detection and correction of drug administration errors. In particular, instructions clearly indicating that only one patch should be applied at a time are important.
NASA Astrophysics Data System (ADS)
Alvarez, Jose; Massey, Steven; Kalitsov, Alan; Velev, Julian
Nanopore sequencing via transverse current has emerged as a competitive candidate for mapping DNA methylation without the need for bisulfite treatment, fluorescent tags, or PCR amplification. By eliminating the error-producing amplification step, long read lengths become feasible, which greatly simplifies the assembly process and reduces the time and cost inherent in current technologies. However, due to the large error rates of nanopore sequencing, single-base resolution has not been reached. A very important source of noise is the intrinsic structural noise in the electric signature of the nucleotide arising from the influence of neighboring nucleotides. In this work we perform calculations of the tunneling current through DNA molecules in nanopores using the non-equilibrium electron transport method within an effective multi-orbital tight-binding model derived from first-principles calculations. We develop a base-calling algorithm accounting for the correlations of the current through neighboring bases, which in principle can reduce the error rate below any desired precision. Using this method we show that we can clearly distinguish DNA methylation and other base modifications based on the reading of the tunneling current.
Application of adaptive Kalman filter in vehicle laser Doppler velocimetry
NASA Astrophysics Data System (ADS)
Fan, Zhe; Sun, Qiao; Du, Lei; Bai, Jie; Liu, Jingyun
2018-03-01
Variations in road conditions and in the motion characteristics of the vehicle can cause large root-mean-square (rms) errors and outliers, so applying a Kalman filter to laser Doppler velocimetry (LDV) is important for improving velocity measurement accuracy. In this paper, the state-space model is built using the current statistical model. A two-step strategy is adopted to make the filter adaptive and robust. First, the acceleration variance is adaptively adjusted using the difference between the predicted and measured observations. Second, outliers are identified and the measurement noise variance is adjusted according to the orthogonality property of the innovation, to reduce the impact of outliers. Laboratory rotating-table experiments show that the adaptive Kalman filter greatly reduces the rms error from 0.59 cm/s to 0.22 cm/s and eliminates all the outliers. Road experiments compared with a microwave radar show that the rms error of the LDV is 0.0218 m/s, which demonstrates that adaptive Kalman filtering is suitable for vehicle speed signal processing.
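To make the second adaptation step concrete, here is a much-simplified scalar sketch (our own, not the paper's filter, which uses the current statistical model with an adaptive acceleration variance): when the innovation is improbably large relative to its predicted covariance, the measurement noise is inflated so that the outlier barely moves the estimate.

```python
import numpy as np

def adaptive_kf_velocity(z, dt=0.01, q=0.1, r=0.05 ** 2, gate=3.0):
    """Scalar random-walk velocity filter with innovation-based outlier handling."""
    x, p = float(z[0]), 1.0                  # initial estimate and its variance
    filtered = []
    for zk in z:
        p_pred = p + q * dt                  # predict
        s = p_pred + r                       # nominal innovation covariance
        innov = zk - x
        # inflate the measurement noise when the innovation fails a 3-sigma gate
        r_eff = r if innov ** 2 <= gate ** 2 * s else r * innov ** 2 / (gate ** 2 * s)
        k = p_pred / (p_pred + r_eff)        # gain with the (possibly inflated) noise
        x += k * innov
        p = (1 - k) * p_pred
        filtered.append(x)
    return np.array(filtered)
```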
Weaver, Amy L; Stutzman, Sonja E; Supnet, Charlene; Olson, DaiWai M
2016-03-01
The emergency department (ED) is demanding and high risk. The impact of sleep quantity has been hypothesized to impact patient care. This study investigated the hypothesis that fatigue and impaired mentation, due to sleep disturbance and shortened overall sleeping hours, would lead to increased nursing errors. This is a prospective observational study of 30 ED nurses using self-administered survey and sleep architecture measured by wrist actigraphy as predictors of self-reported error rates. An actigraphy device was worn prior to working a 12-hour shift and nurses completed the Pittsburgh Sleep Quality Index (PSQI). Error rates were reported on a visual analog scale at the end of a 12-hour shift. The PSQI responses indicated that 73.3% of subjects had poor sleep quality. Lower sleep quality measured by actigraphy (hours asleep/hours in bed) was associated with higher self-perceived minor errors. Sleep quantity (total hours slept) was not associated with minor, moderate, nor severe errors. Our study found that ED nurses' sleep quality, immediately prior to a working 12-hour shift, is more predictive of error than sleep quantity. These results present evidence that a "good night's sleep" prior to working a nursing shift in the ED is beneficial for reducing minor errors. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Werner, C. L.; Wegmuller, U.; Strozzi, T.; Wiesmann, A.
2006-12-01
Principal contributors to the noise in differential SAR interferograms are temporal phase stability of the surface, geometry relating to baseline and surface slope, and propagation path delay variations due to tropospheric water vapor and the ionosphere. Time series analysis of multiple interferograms generated from a stack of SAR SLC images seeks to determine the deformation history of the surface while reducing errors. Only those scatterers within a resolution element that are stable and coherent for each interferometric pair contribute to the desired deformation signal. Interferograms with baselines exceeding 1/3 the critical baseline have substantial geometrical decorrelation for distributed targets. Short-baseline pairs with multiple reference scenes can be combined using least-squares estimation to obtain a global deformation solution. Alternatively, point-like persistent scatterers can be identified in scenes that do not exhibit geometrical decorrelation associated with large baselines. In this approach interferograms are formed from a stack of SAR complex images using a single reference scene. Stable distributed-scatterer pixels are, however, excluded due to the presence of large baselines. We apply both point-based and short-baseline methodologies and compare results for a stack of fine-beam Radarsat data acquired in 2002-2004 over a rapidly subsiding oil field near Lost Hills, CA. We also investigate the density of point-like scatterers with respect to image resolution. The primary difficulty encountered when applying time series methods is phase unwrapping errors due to spatial and temporal gaps. Phase unwrapping requires sufficient spatial and temporal sampling. Increasing the SAR range bandwidth increases the range resolution as well as increasing the critical interferometric baseline that defines the required satellite orbital tube diameter. Sufficient spatial sampling also permits unwrapping because of the reduced phase/pixel gradient. Short time intervals further reduce the differential phase due to deformation when the deformation is continuous. Lower frequency systems (L- vs. C-Band) substantially improve the ability to unwrap the phase correctly by directly reducing both interferometric phase amplitude and temporal decorrelation.
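The short-baseline least-squares combination mentioned above can be sketched as a small linear inversion. The snippet below is illustrative only: `pairs` lists the (reference, repeat) scene indices of each interferogram and `unwrapped_phase` holds the corresponding unwrapped, deformation-only phases; real processing must also handle unwrapping errors and atmospheric delays.

```python
import numpy as np

def short_baseline_solution(pairs, unwrapped_phase, n_scenes):
    """Least-squares combination of short-baseline interferograms into a
    per-scene phase (deformation) history relative to the first scene."""
    A = np.zeros((len(pairs), n_scenes - 1))
    for row, (i, j) in enumerate(pairs):      # interferogram = scene j minus scene i
        if j > 0:
            A[row, j - 1] = 1.0
        if i > 0:
            A[row, i - 1] = -1.0
    history, *_ = np.linalg.lstsq(A, np.asarray(unwrapped_phase), rcond=None)
    return np.concatenate(([0.0], history))   # scene 0 serves as the reference
```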
The accuracy of estimates of the overturning circulation from basin-wide mooring arrays
NASA Astrophysics Data System (ADS)
Sinha, B.; Smeed, D. A.; McCarthy, G.; Moat, B. I.; Josey, S. A.; Hirschi, J. J.-M.; Frajka-Williams, E.; Blaker, A. T.; Rayner, D.; Madec, G.
2018-01-01
Previous modeling and observational studies have established that it is possible to accurately monitor the Atlantic Meridional Overturning Circulation (AMOC) at 26.5°N using a coast-to-coast array of instrumented moorings supplemented by direct transport measurements in key boundary regions (the RAPID/MOCHA/WBTS Array). The main sources of observational and structural errors have been identified in a variety of individual studies. Here a unified framework for identifying and quantifying structural errors associated with the RAPID array-based AMOC estimates is established using a high-resolution (eddy resolving at low-mid latitudes, eddy permitting elsewhere) ocean general circulation model, which simulates the ocean state between 1978 and 2010. We define a virtual RAPID array in the model in close analogy to the real RAPID array and compare the AMOC estimate from the virtual array with the true model AMOC. The model analysis suggests that the RAPID method underestimates the mean AMOC by ∼1.5 Sv (1 Sv = 106 m3 s-1) at ∼900 m depth, however it captures the variability to high accuracy. We examine three major contributions to the streamfunction bias: (i) due to the assumption of a single fixed reference level for calculation of geostrophic transports, (ii) due to regions not sampled by the array and (iii) due to ageostrophic transport. A key element in (i) and (iii) is use of the model sea surface height to establish the true (or absolute) geostrophic transport. In the upper 2000 m, we find that the reference level bias is strongest and most variable in time, whereas the bias due to unsampled regions is largest below 3000 m. The ageostrophic transport is significant in the upper 1000 m but shows very little variability. The results establish, for the first time, the uncertainty of the AMOC estimate due to the combined structural errors in the measurement design and suggest ways in which the error could be reduced. Our work has applications to basin-wide circulation measurement arrays at other latitudes and in other basins as well as quantifying systematic errors in ocean model estimates of the AMOC at 26.5°N.
The effect of tropospheric fluctuations on the accuracy of water vapor radiometry
NASA Technical Reports Server (NTRS)
Wilcox, J. Z.
1992-01-01
Line-of-sight path delay calibration accuracies of 1 mm are needed to improve both angular and Doppler tracking capabilities. Fluctuations in the refractivity of tropospheric water vapor limit the present accuracies to about 1 nrad for the angular position and to a delay rate of 3x10(exp -13) sec/sec over a 100-sec time interval for Doppler tracking. This article describes progress in evaluating the limitations of the technique of water vapor radiometry at the 1-mm level. The two effects evaluated here are: (1) errors arising from tip-curve calibration of WVR's in the presence of tropospheric fluctuations and (2) errors due to the use of nonzero beamwidths for water vapor radiometer (WVR) horns. The error caused by tropospheric water vapor fluctuations during instrument calibration from a single tip curve is 0.26 percent in the estimated gain for a tip-curve duration of several minutes or less. This gain error causes a 3-mm bias and a 1-mm scale factor error in the estimated path delay at a 10-deg elevation per 1 g/cm(sup 2) of zenith water vapor column density present in the troposphere during the astrometric observation. The error caused by WVR beam averaging of tropospheric fluctuations is 3 mm at a 10-deg elevation per 1 g/cm(sup 2) of zenith water vapor (and is proportionally higher for higher water vapor content) for current WVR beamwidths (full width at half maximum of approximately 6 deg). This is a stochastic error (which cannot be calibrated) and which can be reduced to about half of its instantaneous value by time averaging the radio signal over several minutes. The results presented here suggest two improvements to WVR design: first, the gain of the instruments should be stabilized to 4 parts in 10(exp 4) over a calibration period lasting 5 hours, and second, the WVR antenna beamwidth should be reduced to about 0.2 deg. This will reduce the error induced by water vapor fluctuations in the estimated path delays to less than 1 mm for the elevation range from zenith to 6 deg for most observation weather conditions.
NASA Astrophysics Data System (ADS)
Lin, Tsungpo
Performance engineers face the major challenge in modeling and simulation for the after-market power system due to system degradation and measurement errors. Currently, the majority in power generation industries utilizes the deterministic data matching method to calibrate the model and cascade system degradation, which causes significant calibration uncertainty and also the risk of providing performance guarantees. In this research work, a maximum-likelihood based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC one can reduce the calibration uncertainty and mitigate the error propagation to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and populated to performance simulation by using principle of error propagation. System degradation is then quantified by performance comparison between the calibrated model and its expected new & clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated at the second stage, in which the serial bias compensation or robust M-estimator is engaged. To achieve a better efficiency in the combined scheme of the least squares based data reconciliation and the GED technique based on hypotheses testing, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer. To reduce the computation time and stabilize the problem solving for a complex power system such as a combined cycle power plant, meta-modeling using the response surface equation (RSE) and system/process decomposition are incorporated with the simultaneous scheme of SDRMC. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks of providing performance guarantees arisen from uncertainties in performance simulation.
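As a minimal sketch of the simultaneous reconciliation idea (ours, with hypothetical argument names, not the dissertation's implementation), measured values are adjusted within their uncertainties while a process-model constraint is enforced, using the Levenberg-Marquardt optimizer mentioned above:

```python
import numpy as np
from scipy.optimize import least_squares

def reconcile(measured, sigma, model_residual, x0, weight=1e3):
    """Weighted-least-squares data reconciliation: minimize the measurement
    adjustments (scaled by their uncertainties) plus heavily weighted
    process-model residuals, solved with Levenberg-Marquardt."""
    def residuals(x):
        r_meas = (x - measured) / sigma                  # adjustments in sigma units
        r_model = weight * np.atleast_1d(model_residual(x))
        return np.concatenate([r_meas, r_model])
    return least_squares(residuals, x0, method="lm").x
```

Gross errors would show up as adjustments many sigma in size, which is where the screening and robust M-estimator stages described above come in.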
Correcting for deformation in skin-based marker systems.
Alexander, E J; Andriacchi, T P
2001-03-01
A new technique is described that reduces error due to skin movement artifact in the opto-electronic measurement of in vivo skeletal motion. This work builds on a previously described point cluster technique marker set and estimation algorithm by extending the transformation equations to the general deformation case using a set of activity-dependent deformation models. Skin deformation during activities of daily living are modeled as consisting of a functional form defined over the observation interval (the deformation model) plus additive noise (modeling error). The method is described as an interval deformation technique. The method was tested using simulation trials with systematic and random components of deformation error introduced into marker position vectors. The technique was found to substantially outperform methods that require rigid-body assumptions. The method was tested in vivo on a patient fitted with an external fixation device (Ilizarov). Simultaneous measurements from markers placed on the Ilizarov device (fixed to bone) were compared to measurements derived from skin-based markers. The interval deformation technique reduced the errors in limb segment pose estimate by 33 and 25% compared to the classic rigid-body technique for position and orientation, respectively. This newly developed method has demonstrated that by accounting for the changing shape of the limb segment, a substantial improvement in the estimates of in vivo skeletal movement can be achieved.
NASA Astrophysics Data System (ADS)
Halomoan Siregar, Budi; Dewi, Izwita; Andriani, Ade
2018-03-01
The purpose of this study is to analyse the types of errors students make in solving pedagogic problems and their causes. This is a qualitative descriptive study, conducted on 34 mathematics education students in the 2017 to 2018 academic year. The data in this study were obtained through interviews and tests and were analyzed in three stages: 1) data reduction, 2) data description, and 3) drawing conclusions. The data were reduced by organizing and classifying them in order to obtain meaningful information. After reduction, the data were presented in simple narrative, graphic, and tabular form to illustrate the students' errors clearly, and conclusions were drawn from this information. The results of this study indicate that the students made various errors: 1) they answered something other than what the problem asked, because they misunderstood the problem; 2) they failed to plan a learning process based on constructivism, due to a lack of understanding of how to design such learning; and 3) they chose inappropriate learning tools, because they did not understand what kind of learning tool was relevant to use.
Urban, Michal; Leššo, Roman; Pelclová, Daniela
2016-07-01
The purpose of the article was to study unintentional pharmaceutical-related poisonings committed by laypersons that were reported to the Toxicological Information Centre in the Czech Republic. Identifying frequency, sources, reasons and consequences of the medication errors in laypersons could help to reduce the overall rate of medication errors. Records of medication error enquiries from 2013 to 2014 were extracted from the electronic database, and the following variables were reviewed: drug class, dosage form, dose, age of the subject, cause of the error, time interval from ingestion to the call, symptoms, prognosis at the time of the call and first aid recommended. Of the calls, 1354 met the inclusion criteria. Among them, central nervous system-affecting drugs (23.6%), respiratory drugs (18.5%) and alimentary drugs (16.2%) were the most common drug classes involved in the medication errors. The highest proportion of the patients was in the youngest age subgroup 0-5 year-old (46%). The reasons for the medication errors involved the leaflet misinterpretation and mistaken dose (53.6%), mixing up medications (19.2%), attempting to reduce pain with repeated doses (6.4%), erroneous routes of administration (2.2%), psychiatric/elderly patients (2.7%), others (9.0%) or unknown (6.9%). A high proportion of children among the patients may be due to the fact that children's dosages for many drugs vary by their weight, and more medications come in a variety of concentrations. Most overdoses could be prevented by safer labelling, proper cap closure systems for liquid products and medication reconciliation by both physicians and pharmacists. © 2016 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).
Khammarnia, Mohammad; Sharifian, Roxana; Zand, Farid; Barati, Omid; Keshtkaran, Ali; Sabetian, Golnar; Shahrokh, Nasim; Setoodezadeh, Fatemeh
2017-01-01
Background: One way to reduce medical errors associated with physician orders is computerized physician order entry (CPOE) software. This study was conducted to compare prescription orders between 2 groups before and after CPOE implementation in a hospital. Methods: We conducted a before-after prospective study in 2 intensive care unit (ICU) wards (as intervention and control wards) in the largest tertiary public hospital in South of Iran during 2014 and 2016. All prescription orders were validated by a clinical pharmacist and an ICU physician. The rates of ordering the errors in medical orders were compared before (manual ordering) and after implementation of the CPOE. A standard checklist was used for data collection. For the data analysis, SPSS Version 21, descriptive statistics, and analytical tests such as McNemar, chi-square, and logistic regression were used. Results: The CPOE significantly decreased 2 types of errors, illegible orders and lack of writing the drug form, in the intervention ward compared to the control ward (p< 0.05); however, the 2 errors increased due to the defect in the CPOE (p< 0.001). The use of CPOE decreased the prescription errors from 19% to 3% (p= 0.001), However, no differences were observed in the control ward (p<0.05). In addition, more errors occurred in the morning shift (p< 0.001). Conclusion: In general, the use of CPOE significantly reduced the prescription errors. Nonetheless, more caution should be exercised in the use of this system, and its deficiencies should be resolved. Furthermore, it is recommended that CPOE be used to improve the quality of delivered services in hospitals. PMID:29445698
Economic measurement of medical errors using a hospital claims database.
David, Guy; Gunnarsson, Candace L; Waters, Heidi C; Horblyuk, Ruslan; Kaplan, Harold S
2013-01-01
The primary objective of this study was to estimate the occurrence and costs of medical errors from the hospital perspective. Methods from a recent actuarial study of medical errors were used to identify medical injuries. A visit qualified as an injury visit if at least 1 of 97 injury groupings occurred at that visit, and the percentage of injuries caused by medical error was estimated. Visits with more than four injuries were removed from the population to avoid overestimation of cost. Population estimates were extrapolated from the Premier hospital database to all US acute care hospitals. There were an estimated 161,655 medical errors in 2008 and 170,201 medical errors in 2009. Extrapolated to the entire US population, there were more than 4 million unique injury visits containing more than 1 million unique medical errors each year. This analysis estimated that the total annual cost of measurable medical errors in the United States was $985 million in 2008 and just over $1 billion in 2009. The median cost per error to hospitals was $892 for 2008 and rose to $939 in 2009. Nearly one third of all medical injuries were due to error in each year. Medical errors directly impact patient outcomes and hospitals' profitability, especially since 2008 when Medicare stopped reimbursing hospitals for care related to certain preventable medical errors. Hospitals must rigorously analyze causes of medical errors and implement comprehensive preventative programs to reduce their occurrence as the financial burden of medical errors shifts to hospitals. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Trajectory Design to Mitigate Risk on the Transiting Exoplanet Survey Satellite (TESS) Mission
NASA Technical Reports Server (NTRS)
Dichmann, Donald
2016-01-01
The Transiting Exoplanet Survey Satellite (TESS) will employ a highly eccentric Earth orbit, in 2:1 lunar resonance, reached with a lunar flyby preceded by 3.5 phasing loops. The TESS mission has limited propellant and several orbit constraints. Based on analysis and simulation, we have designed the phasing loops to reduce delta-V and to mitigate risk due to maneuver execution errors. We have automated the trajectory design process and use distributed processing to generate and to optimize nominal trajectories, check constraint satisfaction, and finally model the effects of maneuver errors to identify trajectories that best meet the mission requirements.
Determining the refractive index and thickness of thin films from prism coupler measurements
NASA Technical Reports Server (NTRS)
Kirsch, S. T.
1981-01-01
A simple method of determining thin film parameters from mode indices measured using a prism coupler is described. The problem is reduced to performing two least-squares straight-line fits through the measured mode indices vs. effective mode number. The slope and y-intercept of the line are simply related to the thickness and refractive index of the film, respectively. The approach takes into account both the correlations between and the uncertainties in the individual measurements from all sources of error, to give precise error tolerances on the best-fit values. Due to the precision of the tolerances, anisotropic films can be identified and characterized.
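The core of the method is an ordinary weighted straight-line fit with error propagation onto the slope and intercept. The sketch below is generic (the mode indices, effective mode numbers and their uncertainties are hypothetical inputs); the mapping from slope and intercept to film thickness and refractive index uses the paper's dispersion relations, which are not reproduced here.

```python
import numpy as np

def weighted_line_fit(x, y, sigma):
    """Weighted least-squares fit y = a + b*x returning a, b and their 1-sigma errors."""
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    delta = S * Sxx - Sx ** 2
    a = (Sxx * Sy - Sx * Sxy) / delta      # intercept -> related to the film index
    b = (S * Sxy - Sx * Sy) / delta        # slope     -> related to the film thickness
    return a, b, np.sqrt(Sxx / delta), np.sqrt(S / delta)
```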
Robust Transceiver Design for Multiuser MIMO Downlink with Channel Uncertainties
NASA Astrophysics Data System (ADS)
Miao, Wei; Li, Yunzhou; Chen, Xiang; Zhou, Shidong; Wang, Jing
This letter addresses the problem of robust transceiver design for the multiuser multiple-input-multiple-output (MIMO) downlink where the channel state information at the base station (BS) is imperfect. A stochastic approach which minimizes the expectation of the total mean square error (MSE) of the downlink conditioned on the channel estimates under a total transmit power constraint is adopted. The iterative algorithm reported in [2] is improved to handle the proposed robust optimization problem. Simulation results show that our proposed robust scheme effectively reduces the performance loss due to channel uncertainties and outperforms existing methods, especially when the channel errors of the users are different.
Davis, Stephen Jerome; Hurtado, Josephine; Nguyen, Rosemary; Huynh, Tran; Lindon, Ivan; Hudnall, Cedric; Bork, Sara
2017-01-01
Background: USP <797> regulatory requirements have mandated that pharmacies improve aseptic techniques and cleanliness of the medication preparation areas. In addition, the Institute for Safe Medication Practices (ISMP) recommends that technology and automation be used as much as possible for preparing and verifying compounded sterile products. Objective: To determine the benefits associated with the implementation of the workflow management system, such as reducing medication preparation and delivery errors, reducing quantity and frequency of medication errors, avoiding costs, and enhancing the organization's decision to move toward positive patient identification (PPID). Methods: At Texas Children's Hospital, data were collected and analyzed from January 2014 through August 2014 in the pharmacy areas in which the workflow management system would be implemented. Data were excluded for September 2014 during the workflow management system oral liquid implementation phase. Data were collected and analyzed from October 2014 through June 2015 to determine whether the implementation of the workflow management system reduced the quantity and frequency of reported medication errors. Data collected and analyzed during the study period included the quantity of doses prepared, number of incorrect medication scans, number of doses discontinued from the workflow management system queue, and the number of doses rejected. Data were collected and analyzed to identify patterns of incorrect medication scans, to determine reasons for rejected medication doses, and to determine the reduction in wasted medications. Results: During the 17-month study period, the pharmacy department dispensed 1,506,220 oral liquid and injectable medication doses. From October 2014 through June 2015, the pharmacy department dispensed 826,220 medication doses that were prepared and checked via the workflow management system. Of those 826,220 medication doses, there were 16 reported incorrect volume errors. The error rate after the implementation of the workflow management system averaged 8.4%, which was a 1.6% reduction. After the implementation of the workflow management system, the average number of reported oral liquid medication and injectable medication errors decreased to 0.4 and 0.2 times per week, respectively. Conclusion: The organization was able to achieve its purpose and goal of improving the provision of quality pharmacy care through optimal medication use and safety by reducing medication preparation errors. Error rates decreased and the workflow processes were streamlined, which has led to seamless operations within the pharmacy department. There has been significant cost avoidance and waste reduction and enhanced interdepartmental satisfaction due to the reduction of reported medication errors.
[Innovative training for enhancing patient safety. Safety culture and integrated concepts].
Rall, M; Schaedle, B; Zieger, J; Naef, W; Weinlich, M
2002-11-01
Patient safety is determined by the performance safety of the medical team. Errors in medicine are amongst the leading causes of death of hospitalized patients. These numbers call for action. Backgrounds, methods and new forms of training are introduced in this article. Concepts from safety research are transferred to the field of emergency medical treatment. Strategies from realistic patient simulator training sessions and innovative training concepts are discussed. The high number of errors in medicine is due not to a lack of medical knowledge, but to human factors and organisational circumstances. A first step towards improved patient safety is to accept this. We always need to be prepared that errors will occur. A next step would be to separate "error" from guilt (culture of blame), allowing for a real analysis of accidents and the establishment of meaningful incident reporting systems. Concepts with a good success record from aviation, like "crew resource management" (CRM) training, have been adopted by medicine and are ready to use. These concepts require theoretical education as well as practical training. Innovative team training sessions using realistic patient simulator systems with videotaping (for self-reflection) and interactive debriefing following the sessions are very promising. As the need to reduce error rates in medicine is very high and the reasons, methods and training concepts are known, we are urged to implement these new training concepts widely and consistently. To err is human - not to counteract it is not.
Influence of video compression on the measurement error of the television system
NASA Astrophysics Data System (ADS)
Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.
2015-05-01
Video data require a very large memory capacity. Finding the optimal ratio of quality to data volume in video encoding is one of the most pressing problems, due to the urgent need to transfer large amounts of video over various networks. Digital TV signal compression reduces the amount of data used to represent the video stream, effectively reducing the bit rate required for transmission and storage. It is therefore important to take the uncertainties introduced by video signal compression into account when television measuring systems are used. Many digital compression methods exist. The aim of the proposed work is to study the influence of video compression on the measurement error in television systems. The measurement error of an object parameter is the main characteristic of a television measuring system; accuracy characterizes the difference between the measured value and the actual parameter value. Errors introduced by the optical system are one source of error in television system measurements, and the processing method applied to the received video signal is another. With compression at a constant data rate, these errors lead to large distortions; with compression at constant quality, they increase the amount of data required to transmit or record an image frame. The purpose of intra-frame coding is to reduce the spatial redundancy within a frame (or field) of the television image, redundancy caused by the strong correlation between image elements. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of coefficients that are uncorrelated with each other; entropy coding can then be applied to these uncorrelated coefficients to reduce the digital stream. A transformation can be chosen such that, for typical images, most of the matrix coefficients are almost zero, and excluding these zero coefficients further reduces the digital stream. The discrete cosine transform is the most widely used of the possible orthogonal transformations. In this paper the errors of television measuring systems and of data compression protocols are analyzed, the main characteristics of measuring systems are given, the sources of their errors are identified, and the most effective video compression methods are determined. The influence of video compression error on television measuring systems is investigated; the results obtained will increase the accuracy of such measuring systems. In a television image-quality measuring system the distortions comprise those identical to distortions in analog systems plus specific distortions resulting from encoding/decoding the digital video signal and from errors in the transmission channel. Distortions associated with encoding/decoding include quantization noise, reduced resolution, the mosaic effect, the "mosquito" effect, edging on sharp brightness transitions, colour blur, false patterns, the "dirty window" effect and other defects. The video compression algorithms used in television measuring systems are based on image coding with intra- and inter-frame prediction of individual fragments. The encoding/decoding process is non-linear in space and in time, because the playback quality of a given frame at the receiver depends randomly on its pre- and post-history, i.e. on the preceding and succeeding frames, which can lead to inadequate reproduction of the sub-picture and of the corresponding measuring signal.
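To make the intra-frame coding step concrete, the sketch below (a generic illustration, not the codec analysed in the paper) applies a 2-D discrete cosine transform to an image block, discards the smallest coefficients, and reconstructs the block; the difference between the original and the reconstruction is exactly the kind of coding distortion that propagates into a television measuring system.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, keep_fraction=0.25):
    """DCT an image block, zero the smallest coefficients, and invert."""
    coeff = dctn(block, norm="ortho")
    threshold = np.quantile(np.abs(coeff), 1.0 - keep_fraction)
    coeff[np.abs(coeff) < threshold] = 0.0      # exploit spatial redundancy
    return idctn(coeff, norm="ortho")

# block = image[y:y + 8, x:x + 8].astype(float)   # hypothetical 8x8 block
# distortion = block - compress_block(block)       # coding error added by compression
```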
Low-dimensional Representation of Error Covariance
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan
2000-01-01
Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
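The following toy Kalman-filter experiment (a sketch with assumed dynamics and observation operator, not the paper's bound-matrix analysis) reproduces the qualitative point: when weakly damped modes are unobserved, most of the steady-state analysis-error variance concentrates in a few leading eigenvectors.

```python
import numpy as np

n = 40
M = np.diag(np.r_[np.full(3, 0.99), np.full(n - 3, 0.3)])   # 3 weakly damped modes (assumed dynamics)
H = np.zeros((20, n)); H[:, 20:] = np.eye(20)               # the weakly damped modes are not observed
Q = 0.01 * np.eye(n)                                        # model-error covariance
R = 0.10 * np.eye(20)                                       # observation-error covariance

P = np.eye(n)                                               # initial analysis-error covariance
for _ in range(1000):                                       # forecast/analysis cycle to steady state
    Pf = M @ P @ M.T + Q                                    # forecast step
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)          # Kalman gain
    P = (np.eye(n) - K @ H) @ Pf                            # analysis step

evals = np.sort(np.linalg.eigvalsh(P))[::-1]
print("fraction of analysis-error variance in the 3 leading modes:",
      round(float(evals[:3].sum() / evals.sum()), 3))
```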
ERIC Educational Resources Information Center
Green, Samuel B.; Thompson, Marilyn S.; Poirier, Jennifer
1999-01-01
The use of Lagrange multiplier (LM) tests in specification searches and the efforts that involve the addition of extraneous parameters to models are discussed. Presented are a rationale and strategy for conducting specification searches in two stages that involve adding parameters to LM tests to maximize fit and then deleting parameters not needed…
The role of visual spatial attention in adult developmental dyslexia.
Collis, Nathan L; Kohnen, Saskia; Kinoshita, Sachiko
2013-01-01
The present study investigated the nature of visual spatial attention deficits in adults with developmental dyslexia, using a partial report task with five-letter, digit, and symbol strings. Participants responded by a manual key press to one of nine alternatives, which included other characters in the string, allowing an assessment of position errors as well as intrusion errors. The results showed that the dyslexic adults performed significantly worse than age-matched controls with letter and digit strings but not with symbol strings. Both groups produced W-shaped serial position functions with letter and digit strings. The dyslexics' deficits with letter string stimuli were limited to position errors, specifically at the string-interior positions 2 and 4. These errors correlated with letter transposition reading errors (e.g., reading slat as "salt"), but not with the Rapid Automatized Naming (RAN) task. Overall, these results suggest that the dyslexic adults have a visual spatial attention deficit; however, the deficit does not reflect a reduced span in visual-spatial attention, but a deficit in processing a string of letters in parallel, probably due to difficulty in the coding of letter position.
NASA Astrophysics Data System (ADS)
Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim
2017-09-01
Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10⁻⁸ m/s² ≈ 10⁻⁹ g). This paper reports on the experimental investigation of Raman spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to quadratic Zeeman effect. We discuss Raman duration and frequency step-size-dependent magnetic field measurement uncertainty, present vector light shift and tensor light shift induced magnetic field measurement offset, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and reducing the quadratic Zeeman-effect-induced systematic error in Raman transition-based precision measurements, such as atomic interferometer gravimeters.
NASA Astrophysics Data System (ADS)
Watmough, Gary R.; Atkinson, Peter M.; Hutton, Craig W.
2011-04-01
The automated cloud cover assessment (ACCA) algorithm has provided automated estimates of cloud cover for the Landsat ETM+ mission since 2001. However, due to the lack of a band around 1.375 μm, cloud edges and transparent clouds such as cirrus cannot be detected. Use of Landsat ETM+ imagery for terrestrial land analysis is further hampered by the relatively long revisit period due to a nadir only viewing sensor. In this study, the ACCA threshold parameters were altered to minimise omission errors in the cloud masks. Object-based analysis was used to reduce the commission errors from the extended cloud filters. The method resulted in the removal of optically thin cirrus cloud and cloud edges which are often missed by other methods in sub-tropical areas. Although not fully automated, the principles of the method developed here provide an opportunity for using otherwise sub-optimal or completely unusable Landsat ETM+ imagery for operational applications. Where specific images are required for particular research goals the method can be used to remove cloud and transparent cloud helping to reduce bias in subsequent land cover classifications.
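A minimal sketch of the object-based cleanup idea (the actual ACCA thresholds and decision rules are not reproduced; the data and size cutoff are synthetic): a permissive threshold keeps omission errors low, and small isolated objects are then discarded to limit commission errors.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
band = ndimage.gaussian_filter(rng.random((200, 200)), sigma=3)   # smooth stand-in for a reflectance band
mask = band > np.percentile(band, 75)                             # permissive threshold (low omission error)

labels, n_objects = ndimage.label(mask)                           # group contiguous pixels into cloud objects
sizes = ndimage.sum(mask, labels, index=np.arange(1, n_objects + 1))
keep_ids = np.flatnonzero(sizes >= 50) + 1                        # discard small objects (commission errors)
cloud_mask = np.isin(labels, keep_ids)

print(f"cloud objects before/after size filtering: {n_objects} / {keep_ids.size}")
```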
NASA Astrophysics Data System (ADS)
Bukhari, W.; Hong, S.-M.
2015-01-01
Motion-adaptive radiotherapy aims to deliver a conformal dose to the target tumour with minimal normal tissue exposure by compensating for tumour motion in real time. The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting and gating respiratory motion that utilizes a model-based and a model-free Bayesian framework by combining them in a cascade structure. The algorithm, named EKF-GPR+, implements a gating function without pre-specifying a particular region of the patient’s breathing cycle. The algorithm first employs an extended Kalman filter (LCM-EKF) to predict the respiratory motion and then uses a model-free Gaussian process regression (GPR) to correct the error of the LCM-EKF prediction. The GPR is a non-parametric Bayesian algorithm that yields predictive variance under Gaussian assumptions. The EKF-GPR+ algorithm utilizes the predictive variance from the GPR component to capture the uncertainty in the LCM-EKF prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification allows us to pause the treatment beam over such instances. EKF-GPR+ implements the gating function by using simple calculations based on the predictive variance with no additional detection mechanism. A sparse approximation of the GPR algorithm is employed to realize EKF-GPR+ in real time. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPR+. The experimental results show that the EKF-GPR+ algorithm effectively reduces the prediction error in a root-mean-square (RMS) sense by employing the gating function, albeit at the cost of a reduced duty cycle. As an example, EKF-GPR+ reduces the patient-wise RMS error to 37%, 39% and 42% in percent ratios relative to no prediction for a duty cycle of 80% at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The experiments also confirm that EKF-GPR+ controls the duty cycle with reasonable accuracy.
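The sketch below illustrates the gating principle only, not the EKF-GPR+ implementation: a Gaussian process regressor corrects a naive baseline prediction of a synthetic breathing trace, and the beam is "gated off" wherever the GPR predictive standard deviation is largest. The sampling rate, lookahead, history length and kernel are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
t = np.arange(0, 60, 0.05)                                        # 20 Hz sampling (assumed)
trace = np.sin(2 * np.pi * t / 4.0) + 0.05 * rng.normal(size=t.size)   # surrogate breathing trace

horizon, hist = 8, 3                                              # 0.4 s lookahead, 3-sample history
X = np.column_stack([trace[i:len(trace) - horizon - hist + 1 + i] for i in range(hist)])
baseline = X[:, -1]                                               # naive "latest position" predictor
residual = trace[horizon + hist - 1:] - baseline                  # its prediction error

split = 800
gpr = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-3), normalize_y=True)
gpr.fit(X[:split], residual[:split])                              # model-free correction of the baseline
mean, std = gpr.predict(X[split:], return_std=True)

err = residual[split:] - mean                                     # error after the GPR correction
gate_on = std < np.percentile(std, 80)                            # beam on only for the most certain 80%
print(f"RMS error: {np.sqrt(np.mean(err ** 2)):.3f} (no gating) vs "
      f"{np.sqrt(np.mean(err[gate_on] ** 2)):.3f} (80% duty cycle)")
```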
Radar error statistics for the space shuttle
NASA Technical Reports Server (NTRS)
Lear, W. M.
1979-01-01
Radar error statistics of C-band and S-band that are recommended for use with the groundtracking programs to process space shuttle tracking data are presented. The statistics are divided into two parts: bias error statistics, using the subscript B, and high frequency error statistics, using the subscript q. Bias errors may be slowly varying to constant. High frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in correction for atmospheric refraction effects. High frequency noise was mainly due to hardware and due to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line of sight scintillations were identified.
Efficient Z gates for quantum computing
NASA Astrophysics Data System (ADS)
McKay, David C.; Wood, Christopher J.; Sheldon, Sarah; Chow, Jerry M.; Gambetta, Jay M.
2017-08-01
For superconducting qubits, microwave pulses drive rotations around the Bloch sphere. The phase of these drives can be used to generate zero-duration arbitrary virtual Z gates, which, combined with two Xπ/2 gates, can generate any SU(2) gate. Here we show how to best utilize these virtual Z gates to both improve algorithms and correct pulse errors. We perform randomized benchmarking using a Clifford set of Hadamard and Z gates and show that the error per Clifford is reduced versus a set consisting of standard finite-duration X and Y gates. Z gates can correct unitary rotation errors for weakly anharmonic qubits as an alternative to pulse-shaping techniques such as derivative removal by adiabatic gate (DRAG). We investigate leakage and show that a combination of DRAG pulse shaping to minimize leakage and Z gates to correct rotation errors realizes a 13.3 ns Xπ/2 gate characterized by low error [1.95(3)×10⁻⁴] and low leakage [3.1(6)×10⁻⁶]. Ultimately leakage is limited by the finite temperature of the qubit, but this limit is two orders of magnitude smaller than pulse errors due to decoherence.
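As a sketch of the "two Xπ/2 pulses plus three virtual Z rotations" idea, the snippet below composes an arbitrary single-qubit gate from Z phase shifts and Xπ/2 pulses and verifies the result numerically up to a global phase. The exact angle offsets depend on the phase conventions chosen here (Z(φ) = diag(1, e^{iφ}), Xπ/2 = Rx(π/2)) and may differ from those in the paper.

```python
import numpy as np

def z_gate(phi):
    return np.diag([1.0, np.exp(1j * phi)])                  # virtual Z: a frame/phase change only

X90 = np.array([[1, -1j], [-1j, 1]]) / np.sqrt(2)            # physical Rx(pi/2) pulse

def u3(theta, phi, lam):
    return np.array([[np.cos(theta / 2), -np.exp(1j * lam) * np.sin(theta / 2)],
                     [np.exp(1j * phi) * np.sin(theta / 2),
                      np.exp(1j * (phi + lam)) * np.cos(theta / 2)]])

def zxzxz(theta, phi, lam):
    return z_gate(phi) @ X90 @ z_gate(np.pi - theta) @ X90 @ z_gate(lam + np.pi)

theta, phi, lam = 0.7, 1.3, -0.4
A, B = u3(theta, phi, lam), zxzxz(theta, phi, lam)
print("max deviation up to global phase:", np.max(np.abs(A - (A[0, 0] / B[0, 0]) * B)))
```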
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oyeyemi, Victor B.; Keith, John A.; Pavone, Michele
2012-01-11
Density functional theory (DFT) is often used to determine the electronic and geometric structures of molecules. While studying alkynyl radicals, we discovered that DFT exchange-correlation (XC) functionals containing less than ~22% Hartree–Fock (HF) exchange led to qualitatively different structures than those predicted from ab initio HF and post-HF calculations or DFT XCs containing 25% or more HF exchange. We attribute this discrepancy to rehybridization at the radical center due to electron delocalization across the triple bonds of the alkynyl groups, which itself is an artifact of self-interaction and delocalization errors. Inclusion of sufficient exact exchange reduces these errors and suppresses this erroneous delocalization; we find that a threshold amount is needed for accurate structure determinations. Finally, below this threshold, significant errors in predicted alkyne thermochemistry emerge as a consequence.
Pricing Employee Stock Options (ESOs) with Random Lattice
NASA Astrophysics Data System (ADS)
Chendra, E.; Chin, L.; Sukmana, A.
2018-04-01
Employee Stock Options (ESOs) are stock options granted by companies to their employees. Unlike standard options that can be traded by typical institutional or individual investors, employees cannot sell or transfer their ESOs to other investors. The sale restrictions may induce the ESO holder to exercise earlier. In a much-cited paper, Hull and White propose a binomial lattice for valuing ESOs which assumes that employees will voluntarily exercise their ESOs if the stock price reaches a horizontal psychological barrier. Due to nonlinearity errors, the numerical pricing results oscillate significantly, which may lead to large pricing errors. In this paper, we use the random lattice method to price the Hull-White ESO model. This method reduces the nonlinearity error by aligning a layer of nodes of the random lattice with the psychological barrier.
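For orientation, here is a simplified sketch of barrier-triggered voluntary exercise on a standard CRR binomial lattice; it omits the employee exit rate and does not implement the paper's random lattice, and all parameters are illustrative.

```python
import numpy as np

def hull_white_eso(S0, K, r, sigma, T, steps, M, vest_years):
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt)); d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)                        # risk-neutral up probability
    disc = np.exp(-r * dt)

    j = np.arange(steps + 1)
    V = np.maximum(S0 * u ** j * d ** (steps - j) - K, 0.0)   # terminal payoff
    for n in range(steps - 1, -1, -1):
        j = np.arange(n + 1)
        S = S0 * u ** j * d ** (n - j)
        V = disc * (p * V[1:] + (1 - p) * V[:-1])             # continuation value
        exercise = (n * dt >= vest_years) & (S >= M * K)      # voluntary exercise at the barrier M*K
        V = np.where(exercise, S - K, V)
    return V[0]

print(hull_white_eso(S0=50, K=50, r=0.05, sigma=0.3, T=10, steps=500, M=2.0, vest_years=3))
```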
NASA Astrophysics Data System (ADS)
Wang, X.; Holmes, C. S.
2015-08-01
When grinding helical components, errors occur at the beginning and end of the contact path between the component and the grinding wheel. This is due to the forces on the component changing as the grinding wheel comes into and out of full contact with the component. In addition, shaft bending may add depth changes which vary along the length. This may result in an interrupted contact line and increased noise from the rotors. Using on-board scanning, software has been developed to calculate a compensated grinding path, which includes adjustments of head angle, work rotation and infeed. This grinding path not only compensates for lead errors but also reduces profile errors. The program has been tested in rotor production and the results are shown.
A simulation for gravity fine structure recovery from low-low GRAVSAT SST data
NASA Technical Reports Server (NTRS)
Estes, R. H.; Lancaster, E. R.
1976-01-01
Covariance error analysis techniques were applied to investigate estimation strategies for the low-low SST mission for accurate local recovery of gravitational fine structure, considering the aliasing effects of unsolved for parameters. A 5 degree by 5 degree surface density block representation of the high order geopotential was utilized with the drag-free low-low GRAVSAT configuration in a circular polar orbit at 250 km altitude. Recovery of local sets of density blocks from long data arcs was found not to be feasible due to strong aliasing effects. The error analysis for the recovery of local sets of density blocks using independent short data arcs demonstrated that the estimation strategy of simultaneously estimating a local set of blocks covered by data and two "buffer layers" of blocks not covered by data greatly reduced aliasing errors.
Error correction in short time steps during the application of quantum gates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castro, L.A. de, E-mail: leonardo.castro@usp.br; Napolitano, R.D.J.
2016-04-15
We propose a modification of the standard quantum error-correction method to enable the correction of errors that occur due to the interaction with a noisy environment during quantum gates without modifying the codification used for memory qubits. Using a perturbation treatment of the noise that allows us to separate it from the ideal evolution of the quantum gate, we demonstrate that in certain cases it is necessary to divide the logical operation in short time steps intercalated by correction procedures. A prescription of how these gates can be constructed is provided, as well as a proof that, even for the cases when the division of the quantum gate in short time steps is not necessary, this method may be advantageous for reducing the total duration of the computation.
Method and system for reducing errors in vehicle weighing systems
Hively, Lee M.; Abercrombie, Robert K.
2010-08-24
A method and system (10, 23) for determining vehicle weight to a precision of <0.1% uses a plurality of weight sensing elements (23) and a computer (10) for reading in weighing data for a vehicle (25), and produces a dataset representing the total weight of the vehicle via programming (40-53) that is executable by the computer (10) for (a) providing a plurality of mode parameters that characterize each oscillatory mode in the data due to movement of the vehicle during weighing; (b) determining the oscillatory mode at which there is a minimum error in the weighing data; (c) processing the weighing data to remove that dynamical oscillation from the weighing data; and (d) repeating steps (a)-(c) until the error in the set of weighing data is <0.1% in the vehicle weight.
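An illustrative sketch of the general idea (not the patented algorithm): identify the dominant oscillatory mode in a simulated weigh-in-motion record, remove it by least squares, and average the cleaned signal. The sampling rate, weight and mode parameters are assumed.

```python
import numpy as np

rng = np.random.default_rng(4)
fs, true_weight = 200.0, 20_000.0                          # sampling rate (Hz) and true weight (kg), assumed
t = np.arange(0, 2.0, 1 / fs)
signal = true_weight + 400 * np.sin(2 * np.pi * 2.3 * t + 0.7) + 20 * rng.normal(size=t.size)

# Identify the dominant oscillatory mode from the spectrum of the de-meaned signal.
spec = np.fft.rfft(signal - signal.mean())
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f0 = freqs[np.argmax(np.abs(spec[1:])) + 1]

# Least-squares fit of offset + sine/cosine at f0, then subtract the oscillation.
A = np.column_stack([np.ones_like(t), np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)])
coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
cleaned = signal - A[:, 1:] @ coef[1:]

print(f"mean error before/after mode removal: {signal.mean() - true_weight:+.1f} / "
      f"{cleaned.mean() - true_weight:+.1f} kg")
```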
Kiymaz, Dilek; Koç, Zeliha
2018-03-01
To determine individual and professional factors affecting the tendency of emergency unit nurses to make medical errors and their attitudes towards these errors in Turkey. Compared with other units, the emergency unit is an environment where there is an increased tendency for making medical errors due to its intensive and rapid pace, noise and complex and dynamic structure. A descriptive cross-sectional study. The study was carried out from 25 July 2014-16 September 2015 with the participation of 284 nurses who volunteered to take part in the study. Data were gathered using the data collection survey for nurses, the Medical Error Tendency Scale and the Medical Error Attitude Scale. It was determined that 40.1% of the nurses previously witnessed medical errors, 19.4% made a medical error in the last year, 17.6% of medical errors were caused by medication errors where the wrong medication was administered in the wrong dose, and none of the nurses filled out a case report form about the medical errors they made. Regarding the factors that caused medical errors in the emergency unit, 91.2% of the nurses stated excessive workload as a cause; 85.1% stated an insufficient number of nurses; and 75.4% stated fatigue, exhaustion and burnout. The study showed that nurses who loved their job were satisfied with their unit and who always worked during day shifts had a lower medical error tendency. It is suggested to consider the following actions: increase awareness about medical errors, organise training to reduce errors in medication administration, develop procedures and protocols specific to the emergency unit health care and create an environment which is not punitive wherein nurses can safely report medical errors. © 2017 John Wiley & Sons Ltd.
Palmer, Antony L; Bradley, David A; Nisbet, Andrew
2015-03-08
This work considers a previously overlooked uncertainty present in film dosimetry which results from moderate curvature of films during the scanning process. Small film samples are particularly susceptible to film curling which may be undetected or deemed insignificant. In this study, we consider test cases with controlled induced curvature of film and with film raised horizontally above the scanner plate. We also evaluate the difference in scans of a film irradiated with a typical brachytherapy dose distribution with the film naturally curved and with the film held flat on the scanner. Typical naturally occurring curvature of film at scanning, giving rise to a maximum height 1 to 2 mm above the scan plane, may introduce dose errors of 1% to 4%, and considerably reduce gamma evaluation passing rates when comparing film-measured doses with treatment planning system-calculated dose distributions, a common application of film dosimetry in radiotherapy. The use of a triple-channel dosimetry algorithm appeared to mitigate the error due to film curvature compared to conventional single-channel film dosimetry. The change in pixel value and calibrated reported dose with film curling or height above the scanner plate may be due to variations in illumination characteristics, optical disturbances, or a Callier-type effect. There is a clear requirement for physically flat films at scanning to avoid the introduction of a substantial error source in film dosimetry. Particularly for small film samples, a compression glass plate above the film is recommended to ensure flat-film scanning. This effect has been overlooked to date in the literature.
Method to improve the blade tip-timing accuracy of fiber bundle sensor under varying tip clearance
NASA Astrophysics Data System (ADS)
Duan, Fajie; Zhang, Jilong; Jiang, Jiajia; Guo, Haotian; Ye, Dechao
2016-01-01
Blade vibration measurement based on the blade tip-timing method has become an industry-standard procedure. Fiber bundle sensors are widely used for tip-timing measurement. However, the variation of clearance between the sensor and the blade will bring a tip-timing error to fiber bundle sensors due to the change in signal amplitude. This article presents methods based on software and hardware to reduce the error caused by the tip clearance change. The software method utilizes both the rising and falling edges of the tip-timing signal to determine the blade arrival time, and a calibration process suitable for asymmetric tip-timing signals is presented. The hardware method uses an automatic gain control circuit to stabilize the signal amplitude. Experiments are conducted and the results prove that both methods can effectively reduce the impact of tip clearance variation on the blade tip-timing and improve the accuracy of measurements.
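A hedged sketch of the software method: taking the mid-point of the rising- and falling-edge threshold crossings as the blade arrival time largely cancels the timing shift caused by amplitude (tip-clearance) changes, unlike a single rising-edge trigger. The pulse shape and numbers below are assumed.

```python
import numpy as np

def edge_times(t, pulse, threshold):
    above = pulse >= threshold
    rise = np.flatnonzero(~above[:-1] & above[1:])[0] + 1    # first sample above threshold
    fall = np.flatnonzero(above[:-1] & ~above[1:])[0]        # last sample above threshold
    return t[rise], t[fall]

t = np.linspace(-50e-6, 50e-6, 5001)                         # time (s)
sigma, threshold = 10e-6, 0.2                                # pulse width and fixed trigger level (assumed)
for amplitude in (1.0, 0.5):                                 # a clearance change halves the amplitude
    pulse = amplitude * np.exp(-0.5 * (t / sigma) ** 2)      # symmetric tip-timing pulse (toy model)
    t_rise, t_fall = edge_times(t, pulse, threshold)
    print(f"amplitude {amplitude}: rising-edge time {t_rise * 1e6:+.1f} us, "
          f"mid-point time {(t_rise + t_fall) / 2 * 1e6:+.1f} us")
```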
Error trends in SASS winds as functions of atmospheric stability and sea surface temperature
NASA Technical Reports Server (NTRS)
Liu, W. T.
1983-01-01
Wind speed measurements obtained with the scatterometer instrument aboard the Seasat satellite are compared with equivalent neutral wind measurements obtained from ship reports in the western N. Atlantic and eastern N. Pacific, where the concentration of ship reports is high and the ranges of atmospheric stability and sea surface temperature are large. It is found that at low wind speeds the difference between satellite measurements and surface reports depends on sea surface temperature. At wind speeds higher than 8 m/s the dependence was greatly reduced. The removal of systematic errors due to fluctuations in atmospheric stability reduced the r.m.s. difference from 1.7 m/s to 0.8 m/s. It is suggested that further clarification of the effects of fluctuations in atmospheric stability on Seasat wind speed measurements should increase their reliability in the future.
Multi-spectral pyrometer for gas turbine blade temperature measurement
NASA Astrophysics Data System (ADS)
Gao, Shan; Wang, Lixin; Feng, Chi
2014-09-01
Achieving the highest possible turbine inlet temperature requires accurate measurement of the turbine blade temperature. If the blade temperature frequently exceeds the design limits, the service life is seriously reduced. The accuracy of the temperature measurement is limited by the unknown and variable emissivity of the target surface and by the thermal radiation of the high-temperature environment. In this paper, a multi-spectral pyrometer is designed, intended mainly for the range 500-1000°, and a model is presented that corrects for the error due to reflected radiation, based only on the turbine geometry and the physical properties of the material. Under different working conditions, the method reduces the measurement error caused by radiation reflected from the vanes, bringing the measurement closer to the actual blade temperature, with the corresponding model computed through a genetic algorithm. Experiments show that this method yields higher measurement accuracy.
NASA Astrophysics Data System (ADS)
Kumagai, Toshiki; Hibino, Kenichi; Nagaike, Yasunari
2017-03-01
Internally scattered light in a Fizeau interferometer is generated from dust, defects, imperfect coating of the optical components, and multiple reflections inside the collimator lens. It produces additional noise fringes in the observed interference image and degrades the repeatability of the phase measurement. A method to reduce the phase measurement error is proposed, in which the test surface is mechanically translated between each phase measurement in addition to an ordinary phase shift of the reference surface. It is shown that a linear combination of several measured phases at different test surface positions can reduce the phase errors caused by the scattered light. The combination can also compensate for the nonuniformity of the phase shift that occurs in spherical tests. A symmetric sampling of the phase measurements can cancel the additional primary spherical aberrations that occur when the test surface is out of the null position of the confocal configuration.
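A one-dimensional toy model (not the paper's algorithm or geometry) of why combining phase maps taken at several test-surface positions helps: the scattered-light ripple changes phase with the translation while the surface figure does not, so averaging the measurements after removing the known piston suppresses the ripple.

```python
import numpy as np

x = np.linspace(0, 1, 1000)                            # pupil coordinate
true_phase = 0.3 * np.sin(2 * np.pi * x)               # surface figure to be measured (radians)

wavelength = 632.8e-9
shifts = np.array([-2, -1, 0, 1, 2]) * 50e-9           # mechanical translations of the test surface (m)

estimates = []
for dz in shifts:
    piston = 4 * np.pi * dz / wavelength                         # phase added by the known translation
    ripple = 0.05 * np.sin(40 * np.pi * x + 3 * piston)          # scattered-light error (toy model)
    measured = true_phase + piston + ripple
    estimates.append(measured - piston)                          # remove the known piston
combined = np.mean(estimates, axis=0)                            # linear combination of the measurements

print("rms error, single measurement:", float(np.std(estimates[2] - true_phase)))
print("rms error, combined         :", float(np.std(combined - true_phase)))
```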
Analysis of Solar Spectral Irradiance Measurements from the SBUV/2-Series and the SSBUV Instruments
NASA Technical Reports Server (NTRS)
Cebula, Richard P.; DeLand, Matthew T.; Hilsenrath, Ernest
1997-01-01
During this period of performance, 1 March 1997 - 31 August 1997, the NOAA-11 SBUV/2 solar spectral irradiance data set was validated using both internal and external assessments. Initial quality checking revealed minor problems with the data (e.g., residual goniometric errors that were manifest as differences between the two scans acquired each day). The sources of these errors were determined and the errors were corrected. Time series were constructed for selected wavelengths and the solar irradiance changes measured by the instrument were compared to a Mg II proxy-based model of short- and long-term solar irradiance variations. This analysis suggested that errors due to residual, uncorrected long-term instrument drift have been reduced to less than 1-2% over the entire 5.5 year NOAA-11 data record. Detailed statistical analysis was performed. This analysis, which will be documented in a manuscript now in preparation, conclusively demonstrates the evolution of solar rotation periodicity and strength during solar cycle 22.
NASA Technical Reports Server (NTRS)
Wang, Qinglin; Gogineni, S. P.
1991-01-01
A numerical procedure is presented for estimating the true scattering coefficient, σ⁰, from measurements made using wide-beam antennas. The use of wide-beam antennas results in an inaccurate estimate of σ⁰ if the narrow-beam approximation is used in the retrieval process for σ⁰. To reduce this error, a correction procedure is proposed that estimates the error resulting from the narrow-beam approximation and uses it to obtain a more accurate estimate of σ⁰. An exponential model is assumed to account for the variation of σ⁰ with incidence angle, and the model parameters are estimated from measured data. Based on the model and knowledge of the antenna pattern, the procedure calculates the error due to the narrow-beam approximation. The procedure is shown to provide a significant improvement in the estimation of σ⁰ obtained with wide-beam antennas, and it is also shown to be insensitive to the assumed σ⁰ model.
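A hedged sketch of the correction idea with invented numbers: assume σ⁰ varies exponentially with incidence angle, weight the model by an assumed Gaussian antenna pattern, and use the ratio to the boresight value as a multiplicative correction to the narrow-beam estimate.

```python
import numpy as np

def corrected_sigma0(sigma0_measured, theta0_deg, beamwidth_deg, A, B):
    """Correct a narrow-beam-approximation estimate at boresight incidence angle theta0."""
    theta = np.linspace(theta0_deg - 3 * beamwidth_deg, theta0_deg + 3 * beamwidth_deg, 2001)
    gain = np.exp(-2.77 * ((theta - theta0_deg) / beamwidth_deg) ** 2)   # assumed Gaussian pattern
    model = A * np.exp(-B * theta)                                       # assumed sigma0(theta) model
    smear = np.sum(gain * model) / np.sum(gain) / (A * np.exp(-B * theta0_deg))
    return sigma0_measured / smear                                       # undo the beam-averaging bias

print(corrected_sigma0(sigma0_measured=0.08, theta0_deg=40.0, beamwidth_deg=12.0, A=1.0, B=0.05))
```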
NASA Astrophysics Data System (ADS)
Xiao, Zhili; Tan, Chao; Dong, Feng
2017-08-01
Magnetic induction tomography (MIT) is a promising technique for continuous monitoring of intracranial hemorrhage due to its contactless nature, low cost and capacity to penetrate the high-resistivity skull. The inter-tissue inductive coupling increases with frequency, which may lead to errors in multi-frequency imaging at high frequency. The effect of inter-tissue inductive coupling was investigated to improve the multi-frequency imaging of hemorrhage. An analytical model of inter-tissue inductive coupling based on the equivalent circuit was established. A set of new multi-frequency decomposition equations separating the phase shift of hemorrhage from other brain tissues was derived by employing the coupling information to improve the multi-frequency imaging of intracranial hemorrhage. The decomposition error and imaging error are both decreased after considering the inter-tissue inductive coupling information. The study reveals that the introduction of inter-tissue inductive coupling can reduce the errors of multi-frequency imaging, promoting the development of intracranial hemorrhage monitoring by multi-frequency MIT.
Photonic Doppler velocimetry probe designed with stereo imaging
NASA Astrophysics Data System (ADS)
Malone, Robert M.; Cata, Brian M.; Daykin, Edward P.; Esquibel, David L.; Frogget, Brent C.; Holtkamp, David B.; Kaufman, Morris I.; McGillivray, Kevin D.; Palagi, Martin J.; Pazuchanics, Peter; Romero, Vincent T.; Sorenson, Danny S.
2014-09-01
During the fabrication of an aspherical mirror, inspection of the residual wavefront error is critical. In a spaceborne telescope development program, the primary mirror is made of ZERODUR with a clear aperture of 450 mm and a mass of 10 kg after lightweighting. Deformation of the mirror due to gravity is expected; hence uniform support, monitored by load cells, has been applied to reduce the gravity effect. Inspection was performed to determine the residual wavefront error with the mirror face upwards, and correction polishing was carried out according to the measurement. However, comparison with bench-test data measured with the primary mirror face horizontal revealed deviations between the two measurements. Based on the wavefront error measured in the bench test, the optical system is predicted to be unable to meet the requirement. A target wavefront error for the secondary mirror is therefore analyzed to correct that of the primary mirror, and the resulting optical performance is presented.
A Systems Modeling Approach for Risk Management of Command File Errors
NASA Technical Reports Server (NTRS)
Meshkat, Leila
2012-01-01
The main cause of commanding errors is often (but not always) related to procedures: either lack of maturity in the processes, incompleteness of requirements, or lack of compliance with these procedures. Other causes of commanding errors include lack of understanding of system states, inadequate communication, and making hasty changes in standard procedures in response to an unexpected event. In general, it's important to look at the big picture prior to making corrective actions. In the case of errors traced back to procedures, considering the reliability of the process as a metric during its design may help to reduce risk. This metric is obtained by using data from the nuclear industry regarding human reliability. A structured method for the collection of anomaly data will help the operator think systematically about the anomaly and facilitate risk management. Formal models can be used for risk-based design and risk management. A generic set of models can be customized for a broad range of missions.
A method for optical ground stations to reduce alignment error in satellite-ground quantum experiments
NASA Astrophysics Data System (ADS)
He, Dong; Wang, Qiang; Zhou, Jian-Wei; Song, Zhi-Jun; Zhong, Dai-Jun; Jiang, Yu; Liu, Wan-Sheng; Huang, Yong-Mei
2018-03-01
A satellite dedicated to quantum science experiments was developed and successfully launched from Jiuquan, China, on August 16, 2016. Two new optical ground stations (OGSs) were built to cooperate with the satellite in satellite-ground quantum experiments. Each OGS corrects its pointing direction for satellite trajectory error using its coarse tracking system and the uplink beacon line of sight; the alignment accuracy between the fine tracking CCD and the uplink beacon optical axis must therefore ensure that the beacon covers the quantum satellite at all times as it passes over the OGSs. Unfortunately, when we tested the specifications of the OGSs, because the coarse tracking optical systems were commercial telescopes, the position of the target on the coarse CCD changed by up to 600 μrad with the change of elevation angle. In this paper, a method to reduce the alignment error between the beacon beam and the fine tracking CCD is proposed. Firstly, the OGS fits a curve of target position on the coarse CCD against elevation angle. Secondly, the OGS fits a curve of hexapod secondary-mirror position against elevation angle. Thirdly, when tracking the satellite, the fine tracking error is unloaded onto the real-time zero-point position of the coarse CCD computed from the first calibration curve, while the positions of the hexapod secondary mirror are adjusted according to the second calibration curve. Finally, experimental results are presented, showing that the alignment error is less than 50 μrad.
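A minimal sketch of the first calibration step with simulated data: fit the apparent target position on the coarse CCD as a polynomial in elevation angle and use the fit as an elevation-dependent zero point during tracking. The drift shape and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
elev = np.linspace(10, 80, 30)                                # calibration elevations (deg)
offset = 600e-6 * (elev - 10) / 70                            # apparent drift up to ~600 urad (toy model)
measured = offset + 5e-6 * rng.normal(size=elev.size)         # target position on the coarse CCD (rad)

coeffs = np.polyfit(elev, measured, deg=3)                    # first calibration curve

def zero_point(elevation_deg):
    """Elevation-dependent zero point applied to the coarse CCD in real time."""
    return np.polyval(coeffs, elevation_deg)

residual = measured - zero_point(elev)
print(f"residual alignment error after calibration: {np.std(residual) * 1e6:.1f} urad rms")
```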
Jason-2 systematic error analysis in the GPS derived orbits
NASA Astrophysics Data System (ADS)
Melachroinos, S.; Lemoine, F. G.; Zelensky, N. P.; Rowlands, D. D.; Luthcke, S. B.; Chinn, D. S.
2011-12-01
Several results related to global or regional sea level changes still too often rely on the assumption that orbit errors coming from station coordinates adoption can be neglected in the total error budget (Cerri et al. 2010). In particular, instantaneous crust-fixed coordinates are obtained by adding to the linear ITRF model the geophysical high-frequency variations. In principle, geocenter motion should also be included in this computation, in order to reference these coordinates to the center of mass of the whole Earth. This correction is currently not applied when computing GDR orbits. Cerri et al. (2010) performed an analysis of systematic errors common to all coordinates along the North/South direction, as this type of bias, also known as Z-shift, has a clear impact on MSL estimates due to the unequal distribution of continental surface in the northern and southern hemispheres. The goal of this paper is to specifically study the main source of errors which comes from the current imprecision in the Z-axis realization of the frame. We focus here on the time variability of this Z-shift, which we can decompose into a drift and a periodic component due to the presumably omitted geocenter motion. A series of Jason-2 GPS-only orbits have been computed at NASA GSFC, using both IGS05 and IGS08. These orbits have been shown to agree radially at less than 1 cm RMS vs our SLR/DORIS std0905 and std1007 reduced-dynamic orbits and in comparison with orbits produced by other analysis centers (Melachroinos et al. 2011). Our GPS-only Jason-2 orbit accuracy is assessed using a number of tests including analysis of independent SLR and altimeter crossover residuals, orbit overlap differences, and direct comparison to orbits generated at GSFC using SLR and DORIS tracking, and to orbits generated externally at other centers. Tests based on SLR-crossover residuals provide the best performance indicator for independent validation of the NASA/GSFC GPS-only reduced-dynamic orbits. Reduced-dynamic versus dynamic orbit differences are used to characterize the remaining force model error and TRF instability. At first, we quantify the effect of a North/South displacement of the tracking reference points for each of the three techniques. We then compare these results to the study of Morel and Willis (2005) and Cerri et al. (2010). We extend the analysis to the most recent Jason-2 cycles. We evaluate the GPS versus SLR and DORIS orbits produced using GEODYN.
An Enhanced MEMS Error Modeling Approach Based on Nu-Support Vector Regression
Bhatt, Deepak; Aggarwal, Priyanka; Bhattacharya, Prabir; Devabhaktuni, Vijay
2012-01-01
Micro Electro Mechanical System (MEMS)-based inertial sensors have made possible the development of a civilian land vehicle navigation system by offering a low-cost solution. However, the accurate modeling of the MEMS sensor errors is one of the most challenging tasks in the design of low-cost navigation systems. These sensors exhibit significant errors like biases, drift, noises; which are negligible for higher grade units. Different conventional techniques utilizing the Gauss Markov model and neural network method have been previously utilized to model the errors. However, Gauss Markov model works unsatisfactorily in the case of MEMS units due to the presence of high inherent sensor errors. On the other hand, modeling the random drift utilizing Neural Network (NN) is time consuming, thereby affecting its real-time implementation. We overcome these existing drawbacks by developing an enhanced Support Vector Machine (SVM) based error model. Unlike NN, SVMs do not suffer from local minimisation or over-fitting problems and delivers a reliable global solution. Experimental results proved that the proposed SVM approach reduced the noise standard deviation by 10–35% for gyroscopes and 61–76% for accelerometers. Further, positional error drifts under static conditions improved by 41% and 80% in comparison to NN and GM approaches. PMID:23012552
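A minimal Nu-SVR sketch with synthetic data (the drift model and hyperparameters are assumptions, not the paper's): regress the slowly varying gyro error on time, then subtract the prediction to compensate the drift.

```python
import numpy as np
from sklearn.svm import NuSVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
t = np.linspace(0, 600, 3000)                                  # stationary gyro record (s)
drift = 0.005 * t + 1.0 * np.sin(2 * np.pi * t / 180)          # slowly varying drift (assumed shape)
gyro_error = drift + 0.2 * rng.normal(size=t.size)             # observed error signal

X = t[:, None]
model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=10.0, gamma=10.0))
model.fit(X[::2], gyro_error[::2])                             # fit on every other sample

residual = gyro_error[1::2] - model.predict(X[1::2])           # drift-compensated error
print(f"error std before/after compensation: {gyro_error[1::2].std():.2f} / {residual.std():.2f}")
```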
NASA Astrophysics Data System (ADS)
Mena, Marcelo Andres
During 2004 and 2006 the University of Iowa provided air quality forecast support for flight planning of the ICARTT and MILAGRO field campaigns. A method for improving model performance in comparison to observations is shown. The method allows identifying sources of model error from boundary conditions and emissions inventories. Simultaneous analysis of horizontal interpolation of model error and error covariance showed that the error in ozone modeling is highly correlated to the error of its precursors, and that there is geographical correlation also. During ICARTT the ozone modeling error was reduced by updating the National Emissions Inventory from 1999 to 2001, and further by updating large point source emissions from continuous monitoring data. Further improvements were achieved by reducing area emissions of NOx by 60% for states in the Southeast United States. Ozone error was highly correlated to NOy error during this campaign. Also, ozone production in the United States was most sensitive to NOx emissions. During MILAGRO, model performance in terms of correlation coefficients was higher, but the error in ozone modeling was high due to overestimation of NOx and VOC emissions in Mexico City during forecasting. Large model improvements were shown by decreasing NOx emissions in Mexico City by 50% and VOC by 60%. Recurring ozone error is spatially correlated to CO and NOy error. Sensitivity studies show that Mexico City aerosol can reduce regional photolysis rates by 40% and ozone formation by 5-10%. Mexico City emissions can enhance NOy and O3 concentrations over the Gulf of Mexico by up to 10-20%. Mexico City emissions can convert regional ozone production regimes from VOC limited to NOx limited. A method of interpolation of observations along flight tracks is shown, which can be used to infer the direction of outflow plumes. Ratios such as O3/NOy and NOx/NOy can be used to provide information on chemical characteristics of the plume, such as age and ozone production regime. Interpolated MTBE observations can be used as a tracer of urban mobile source emissions. Finally, procedures for estimating and gridding emissions inventories in Brazil and Mexico are presented.
Samaranayake, N R; Cheung, S T D; Chui, W C M; Cheung, B M Y
2012-12-01
Healthcare technology is meant to reduce medication errors. The objective of this study was to assess unintended errors related to technologies in the medication use process. Medication incidents reported from 2006 to 2010 in a main tertiary care hospital were analysed by a pharmacist and technology-related errors were identified. Technology-related errors were further classified as socio-technical errors and device errors. This analysis was conducted using data from medication incident reports which may represent only a small proportion of medication errors that actually takes place in a hospital. Hence, interpretation of results must be tentative. 1538 medication incidents were reported. 17.1% of all incidents were technology-related, of which only 1.9% were device errors, whereas most were socio-technical errors (98.1%). Of these, 61.2% were linked to computerised prescription order entry, 23.2% to bar-coded patient identification labels, 7.2% to infusion pumps, 6.8% to computer-aided dispensing label generation and 1.5% to other technologies. The immediate causes for technology-related errors included, poor interface between user and computer (68.1%), improper procedures or rule violations (22.1%), poor interface between user and infusion pump (4.9%), technical defects (1.9%) and others (3.0%). In 11.4% of the technology-related incidents, the error was detected after the drug had been administered. A considerable proportion of all incidents were technology-related. Most errors were due to socio-technical issues. Unintended and unanticipated errors may happen when using technologies. Therefore, when using technologies, system improvement, awareness, training and monitoring are needed to minimise medication errors. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Phonological and Motor Errors in Individuals with Acquired Sound Production Impairment
ERIC Educational Resources Information Center
Buchwald, Adam; Miozzo, Michele
2012-01-01
Purpose: This study aimed to compare sound production errors arising due to phonological processing impairment with errors arising due to motor speech impairment. Method: Two speakers with similar clinical profiles who produced similar consonant cluster simplification errors were examined using a repetition task. We compared both overall accuracy…
Attenuation-emission alignment in cardiac PET∕CT based on consistency conditions
Alessio, Adam M.; Kinahan, Paul E.; Champley, Kyle M.; Caldwell, James H.
2010-01-01
Purpose: In cardiac PET and PET∕CT imaging, misaligned transmission and emission images are a common problem due to respiratory and cardiac motion. This misalignment leads to erroneous attenuation correction and can cause errors in perfusion mapping and quantification. This study develops and tests a method for automated alignment of attenuation and emission data. Methods: The CT-based attenuation map is iteratively transformed until the attenuation corrected emission data minimize an objective function based on the Radon consistency conditions. The alignment process is derived from previous work by Welch et al. [“Attenuation correction in PET using consistency information,” IEEE Trans. Nucl. Sci. 45, 3134–3141 (1998)] for stand-alone PET imaging. The process was evaluated with the simulated data and measured patient data from multiple cardiac ammonia PET∕CT exams. The alignment procedure was applied to simulations of five different noise levels with three different initial attenuation maps. For the measured patient data, the alignment procedure was applied to eight attenuation-emission combinations with initially acceptable alignment and eight combinations with unacceptable alignment. The initially acceptable alignment studies were forced out of alignment a known amount and quantitatively evaluated for alignment and perfusion accuracy. The initially unacceptable studies were compared to the proposed aligned images in a blinded side-by-side review. Results: The proposed automatic alignment procedure reduced errors in the simulated data and iteratively approaches global minimum solutions with the patient data. In simulations, the alignment procedure reduced the root mean square error to less than 5 mm and reduces the axial translation error to less than 1 mm. In patient studies, the procedure reduced the translation error by >50% and resolved perfusion artifacts after a known misalignment for the eight initially acceptable patient combinations. The side-by-side review of the proposed aligned attenuation-emission maps and initially misaligned attenuation-emission maps revealed that reviewers preferred the proposed aligned maps in all cases, except one inconclusive case. Conclusions: The proposed alignment procedure offers an automatic method to reduce attenuation correction artifacts in cardiac PET∕CT and provides a viable supplement to subjective manual realignment tools. PMID:20384256
Geographically correlated errors observed from a laser-based short-arc technique
NASA Astrophysics Data System (ADS)
Bonnefond, P.; Exertier, P.; Barlier, F.
1999-07-01
The laser-based short-arc technique has been developed in order to avoid local errors which affect the dynamical orbit computation, such as those due to mismodeling in the geopotential. It is based on a geometric method and consists in fitting short arcs (about 4000 km), issued from a global orbit, with satellite laser ranging tracking measurements from a ground station network. Ninety-two TOPEX/Poseidon (T/P) cycles of laser-based short-arc orbits have then been compared to JGM-2 and JGM-3 T/P orbits computed by the Precise Orbit Determination (POD) teams (Service d'Orbitographie Doris/Centre National d'Etudes Spatiales and Goddard Space Flight Center/NASA) over two areas: (1) the Mediterranean area and (2) a part of the Pacific (including California and Hawaii) called hereafter the U.S. area. Geographically correlated orbit errors in these areas are clearly evidenced: for example, -2.6 cm and +0.7 cm for the Mediterranean and U.S. areas, respectively, relative to JGM-3 orbits. However, geographically correlated errors (GCE) which are commonly linked to errors in the gravity model, can also be due to systematic errors in the reference frame and/or to biases in the tracking measurements. The short-arc technique being very sensitive to such error sources, our analysis however demonstrates that the induced geographical systematic effects are at the level of 1-2 cm on the radial orbit component. Results are also compared with those obtained with the GPS-based reduced dynamic technique. The time-dependent part of GCE has also been studied. Over 6 years of T/P data, coherent signals in the radial component of T/P Precise Orbit Ephemeris (POE) are clearly evidenced with a time period of about 6 months. In addition, impact of time varying-error sources coming from the reference frame and the tracking data accuracy has been analyzed, showing a possible linear trend of about 0.5-1 mm/yr in the radial component of T/P POE.
Increased instrument intelligence--can it reduce laboratory error?
Jekelis, Albert W
2005-01-01
Recent literature has focused on the reduction of laboratory errors and the potential impact on patient management. This study assessed the intelligent, automated preanalytical process-control abilities in newer generation analyzers as compared with older analyzers and the impact on error reduction. Three generations of immuno-chemistry analyzers were challenged with pooled human serum samples for a 3-week period. One of the three analyzers had an intelligent process of fluidics checks, including bubble detection. Bubbles can cause erroneous results due to incomplete sample aspiration. This variable was chosen because it is the most easily controlled sample defect that can be introduced. Traditionally, lab technicians have had to visually inspect each sample for the presence of bubbles. This is time consuming and introduces the possibility of human error. Instruments with bubble detection may be able to eliminate the human factor and reduce errors associated with the presence of bubbles. Specific samples were vortexed daily to introduce a visible quantity of bubbles, then immediately placed in the daily run. Errors were defined as a reported result greater than three standard deviations below the mean and associated with incomplete sample aspiration of the analyte of the individual analyzer. Three standard deviations represented the target limits of proficiency testing. The results of the assays were examined for accuracy and precision. Efficiency, measured as process throughput, was also measured to associate a cost factor and potential impact of the error detection on the overall process. The analyzers' performance stratified according to their level of internal process control. The older analyzers without bubble detection reported 23 erroneous results. The newest analyzer with bubble detection reported one specimen incorrectly. The precision and accuracy of the nonvortexed specimens were excellent and acceptable for all three analyzers. No errors were found in the nonvortexed specimens. There were no significant differences in overall process time for any of the analyzers when tests were arranged in an optimal configuration. The analyzer with advanced fluidic intelligence demonstrated the greatest ability to appropriately deal with an incomplete aspiration by not processing and reporting a result for the sample. This study suggests that preanalytical process-control capabilities could reduce errors. By association, it implies that similar intelligent process controls could favorably impact the error rate and, in the case of this instrument, do it without negatively impacting process throughput. Other improvements may be realized as a result of having an intelligent error-detection process, including further reduction in misreported results, fewer repeats, less operator intervention, and less reagent waste.
Advancing the research agenda for diagnostic error reduction.
Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep
2013-10-01
Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.
Projection-Based Reduced Order Modeling for Spacecraft Thermal Analysis
NASA Technical Reports Server (NTRS)
Qian, Jing; Wang, Yi; Song, Hongjun; Pant, Kapil; Peabody, Hume; Ku, Jentung; Butler, Charles D.
2015-01-01
This paper presents a mathematically rigorous, subspace projection-based reduced order modeling (ROM) methodology and an integrated framework to automatically generate reduced order models for spacecraft thermal analysis. Two key steps in the reduced order modeling procedure are described: (1) the acquisition of a full-scale spacecraft model in the ordinary differential equation (ODE) and differential algebraic equation (DAE) form to resolve its dynamic thermal behavior; and (2) the ROM to markedly reduce the dimension of the full-scale model. Specifically, proper orthogonal decomposition (POD) in conjunction with discrete empirical interpolation method (DEIM) and trajectory piece-wise linear (TPWL) methods are developed to address the strong nonlinear thermal effects due to coupled conductive and radiative heat transfer in the spacecraft environment. Case studies using NASA-relevant satellite models are undertaken to verify the capability and to assess the computational performance of the ROM technique in terms of speed-up and error relative to the full-scale model. ROM exhibits excellent agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) along with salient computational acceleration (up to two orders of magnitude speed-up) over the full-scale analysis. These findings establish the feasibility of ROM to perform rational and computationally affordable thermal analysis, develop reliable thermal control strategies for spacecraft, and greatly reduce the development cycle times and costs.
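A sketch of the POD step alone (the DEIM and TPWL treatment of the nonlinear radiative terms is not shown; the snapshot matrix is synthetic): the SVD of the snapshots supplies a reduced basis, and states are projected onto the leading modes.

```python
import numpy as np

rng = np.random.default_rng(7)
n_nodes, n_snapshots = 2000, 120
# Synthetic snapshot matrix with a few dominant spatial patterns (stand-in for full-order solves).
patterns = rng.normal(size=(n_nodes, 5))
weights = rng.normal(size=(5, n_snapshots)) * np.array([[10.0, 5.0, 2.0, 1.0, 0.5]]).T
snapshots = patterns @ weights + 0.01 * rng.normal(size=(n_nodes, n_snapshots))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
k = int(np.searchsorted(np.cumsum(s ** 2) / np.sum(s ** 2), 0.999)) + 1   # modes for 99.9% energy
basis = U[:, :k]                                                          # POD basis

x_full = snapshots[:, -1]
x_reduced = basis.T @ x_full                                              # k reduced coordinates
rel_err = np.linalg.norm(basis @ x_reduced - x_full) / np.linalg.norm(x_full)
print(f"{k} modes retained, relative reconstruction error = {rel_err:.2e}")
```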
Shape adjustment optimization and experiment of cable-membrane reflectors
NASA Astrophysics Data System (ADS)
Du, Jingli; Gu, Yongzhen; Bao, Hong; Wang, Congsi; Chen, Xiaofeng
2018-05-01
Cable-membrane structures are widely employed for large space reflectors due to their light weight, compactness and ease of packaging. In these structures, membranes are attached to a cable net, serving as reflectors themselves or as supporting structures for another reflective surface. The cable lengths and membrane shape have to be carefully designed and fabricated to guarantee the desired reflector surface shape. However, due to inevitable errors in cable length and membrane shape during the manufacture and assembly of cable-membrane reflectors, some cables have to be designed to be capable of length adjustment. By carefully adjusting the length of these cables, the degradation in reflector shape precision due to these inevitable errors can be effectively reduced. In this paper a shape adjustment algorithm for cable-membrane reflectors is proposed. Meanwhile, model updating is employed during shape adjustment to decrease the discrepancy of the numerical model with respect to the actual reflector. This discrepancy has to be considered because, when attaching membranes to the cable net, the accuracy of the membrane shape is hard to guarantee. Numerical examples and experimental results demonstrate the proposed method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vedam, S.; Docef, A.; Fix, M.
2005-06-15
The synchronization of dynamic multileaf collimator (DMLC) response with respiratory motion is critical to ensure the accuracy of DMLC-based four dimensional (4D) radiation delivery. In practice, however, a finite time delay (response time) between the acquisition of tumor position and multileaf collimator response necessitates predictive models of respiratory tumor motion to synchronize radiation delivery. Predicting a complex process such as respiratory motion introduces geometric errors, which have been reported in several publications. However, the dosimetric effect of such errors on 4D radiation delivery has not yet been investigated. Thus, our aim in this work was to quantify the dosimetric effects of geometric error due to prediction under several different conditions. Conformal and intensity modulated radiation therapy (IMRT) plans for a lung patient were generated for anterior-posterior/posterior-anterior (AP/PA) beam arrangements at 6 and 18 MV energies to provide planned dose distributions. Respiratory motion data was obtained from 60 diaphragm-motion fluoroscopy recordings from five patients. A linear adaptive filter was employed to predict the tumor position. The geometric error of prediction was defined as the absolute difference between predicted and actual positions at each diaphragm position. Distributions of geometric error of prediction were obtained for all of the respiratory motion data. Planned dose distributions were then convolved with distributions for the geometric error of prediction to obtain convolved dose distributions. The dosimetric effect of such geometric errors was determined as a function of several variables: response time (0-0.6 s), beam energy (6/18 MV), treatment delivery (3D/4D), treatment type (conformal/IMRT), beam direction (AP/PA), and breathing training type (free breathing/audio instruction/visual feedback). Dose difference and distance-to-agreement analysis was employed to quantify results. Based on our data, the dosimetric impact of prediction (a) increased with response time, (b) was larger for 3D radiation therapy as compared with 4D radiation therapy, (c) was relatively insensitive to change in beam energy and beam direction, (d) was greater for IMRT distributions as compared with conformal distributions, (e) was smaller than the dosimetric impact of latency, and (f) was greatest for respiration motion with audio instructions, followed by visual feedback and free breathing. Geometric errors of prediction that occur during 4D radiation delivery introduce dosimetric errors that are dependent on several factors, such as response time, treatment-delivery type, and beam energy. Even for relatively small response times of 0.6 s into the future, dosimetric errors due to prediction could approach delivery errors when respiratory motion is not accounted for at all. To reduce the dosimetric impact, better predictive models and/or shorter response times are required.
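A one-dimensional sketch of the convolution step described above, with an idealised dose profile and an assumed Gaussian distribution of prediction errors, illustrating how the geometric error distribution translates into a dose difference.

```python
import numpy as np

x = np.arange(-60.0, 60.0, 1.0)                             # position (mm)
planned = np.where(np.abs(x) <= 30, 1.0, 0.0)               # idealised planned dose profile

sigma = 3.0                                                 # spread of prediction errors (mm, assumed)
kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()                                      # geometric-error probability distribution

delivered = np.convolve(planned, kernel, mode="same")       # dose blurred by the prediction error
print(f"maximum dose difference due to prediction error: {np.max(np.abs(delivered - planned)):.2f}")
```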
Optical Enhancement of Exoskeleton-Based Estimation of Glenohumeral Angles
Cortés, Camilo; Unzueta, Luis; de los Reyes-Guzmán, Ana; Ruiz, Oscar E.; Flórez, Julián
2016-01-01
In Robot-Assisted Rehabilitation (RAR) the accurate estimation of the patient limb joint angles is critical for assessing therapy efficacy. In RAR, the use of classic motion capture systems (MOCAPs) (e.g., optical and electromagnetic) to estimate the Glenohumeral (GH) joint angles is hindered by the exoskeleton body, which causes occlusions and magnetic disturbances. Moreover, the exoskeleton posture does not accurately reflect limb posture, as their kinematic models differ. To address the said limitations in posture estimation, we propose installing the cameras of an optical marker-based MOCAP in the rehabilitation exoskeleton. Then, the GH joint angles are estimated by combining the estimated marker poses and exoskeleton Forward Kinematics. Such a hybrid system prevents problems related to marker occlusions, reduced camera detection volume, and imprecise joint angle estimation due to the kinematic mismatch of the patient and exoskeleton models. This paper presents the formulation, simulation, and accuracy quantification of the proposed method with simulated human movements. In addition, a sensitivity analysis of the method accuracy to marker position estimation errors, due to system calibration errors and marker drifts, has been carried out. The results show that, even with significant errors in the marker position estimation, method accuracy is adequate for RAR. PMID:27403044
Won, Jongsung; Cheng, Jack C P; Lee, Ghang
2016-03-01
Waste generated in construction and demolition processes comprised around 50% of the solid waste in South Korea in 2013. Many cases show that design validation based on building information modeling (BIM) is an effective means to reduce the amount of construction waste, since construction waste is mainly generated due to improper design and unexpected changes in the design and construction phases. However, the amount of construction waste that could be avoided by adopting BIM-based design validation has been unknown. This paper aims to estimate the amount of construction waste prevented by a BIM-based design validation process based on the amount of construction waste that might be generated due to design errors. Two project cases in South Korea were studied in this paper, with 381 and 136 design errors detected, respectively, during the BIM-based design validation. Each design error was categorized according to its cause and the likelihood of detection before construction. The case studies show that BIM-based design validation could prevent 4.3-15.2% of the construction waste that might have been generated without using BIM. Copyright © 2015 Elsevier Ltd. All rights reserved.
Shanmuga Doss, Sreeja; Bhatt, Nirav Pravinbhai; Jayaraman, Guhan
2017-08-15
There is an unreasonably high variation in the literature reports on the molecular weight of hyaluronic acid (HA) estimated using conventional size exclusion chromatography (SEC). This variation is most likely due to errors in estimation. Working with commercially available HA molecular weight standards, this work examines the extent of error in molecular weight estimation due to two factors: the use of non-HA based calibration and the concentration of sample injected into the SEC column. We develop a multivariate regression correlation to correct for the concentration effect. Our analysis showed that SEC calibration based on non-HA standards like polyethylene oxide and pullulan led to approximately 2 and 10 times overestimation, respectively, when compared to HA-based calibration. Further, we found that injected sample concentration has an effect on molecular weight estimation. Even at 1 g/l injected sample concentration, HA molecular weight standards of 0.7 and 1.64 MDa showed appreciable underestimation of 11-24%. The multivariate correlation developed was found to reduce the error in estimations at 1 g/l to <4%. The correlation was also successfully applied to accurately estimate the molecular weight of HA produced by a recombinant Lactococcus lactis fermentation. Copyright © 2017 Elsevier B.V. All rights reserved.
Gao, Jun; Wang, Shu-Peng; Gu, Xing-Fa; Yu, Tao; Fang, Li
2012-06-01
With the development of quantitative research using ocean color remote sensing data sets, studies on reducing the uncertainty in the response of ocean color remote sensors to the polarization characteristics of the target have been attracting more and more attention recently. Taking MODIS as an example, the polarization distribution over the whole field of view was analyzed. For the atmospheric path radiance and the apparent radiance, considering the coupling between the ocean surface and the atmosphere, the polarization distribution has a strong relation to the imaging geometry. Compared to the contribution of the polarization from the rough sea surface, the contribution from the atmosphere is dominant. Based on the polarization characteristics in the field of view, the influence of the polarization coupling error on the quality of the satellite data was studied under the assumption of different polarization sensitivities. It was found that errors due to polarization sensitivity in the field of view are lower than the water-leaving radiance only when the polarization sensitivity is less than 2%, and in this case the data can meet the needs of retrieving water-leaving radiance products. A method to compensate for the polarization coupling error due to the atmosphere is proposed, which proved effective in improving the utilization of satellite data and the accuracy of the radiance measured by the remote sensor.
Asteroid approach covariance analysis for the Clementine mission
NASA Technical Reports Server (NTRS)
Ionasescu, Rodica; Sonnabend, David
1993-01-01
The Clementine mission is designed to test Strategic Defense Initiative Organization (SDIO) technology, the Brilliant Pebbles and Brilliant Eyes sensors, by mapping the lunar surface and flying by the asteroid Geographos. The capability of two of the instruments available on board the spacecraft, the lidar (laser radar) and the UV/Visible camera, is used in the covariance analysis to obtain the spacecraft delivery uncertainties at the asteroid. These uncertainties are due primarily to asteroid ephemeris uncertainties. On-board optical navigation reduces the uncertainty in the knowledge of the spacecraft position in the direction perpendicular to the incoming asymptote to a one-sigma value of under 1 km, at the closest approach distance of 100 km. The uncertainty in the knowledge of the encounter time is about 0.1 seconds for a flyby velocity of 10.85 km/s. The magnitude of these uncertainties is due largely to Center Finding Errors (CFE). These systematic errors represent the accuracy expected in locating the center of the asteroid in the optical navigation images, in the absence of a topographic model for the asteroid. The direction of the incoming asymptote cannot be estimated accurately until minutes before the asteroid flyby, and correcting for it would require autonomous navigation. Orbit determination errors dominate over maneuver execution errors, and the final delivery accuracy attained is basically the orbit determination uncertainty before the final maneuver.
On the use of programmable hardware and reduced numerical precision in earth-system modeling.
Düben, Peter D; Russell, Francis P; Niu, Xinyu; Luk, Wayne; Palmer, T N
2015-09-01
Programmable hardware, in particular Field Programmable Gate Arrays (FPGAs), promises a significant increase in computational performance for simulations in geophysical fluid dynamics compared with CPUs of similar power consumption. FPGAs allow adjusting the representation of floating-point numbers to specific application needs. We analyze the performance-precision trade-off on FPGA hardware for the two-scale Lorenz '95 model. We scale the size of this toy model to that of a high-performance computing application in order to make meaningful performance tests. We identify the minimal level of precision at which changes in model results are not significant compared with a maximal precision version of the model and find that this level is very similar for cases where the model is integrated for very short or long intervals. It is therefore a useful approach to investigate model errors due to rounding errors for very short simulations (e.g., 50 time steps) to obtain a range for the level of precision that can be used in expensive long-term simulations. We also show that an approach to reduce precision with increasing forecast time, when model errors are already accumulated, is very promising. We show that a speed-up of 1.9 times is possible in comparison to FPGA simulations in single precision if precision is reduced with no strong change in model error. The single-precision FPGA setup shows a speed-up of 2.8 times in comparison to our model implementation on two 6-core CPUs for large model setups.
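As a rough illustration of this kind of precision experiment (not the authors' FPGA implementation, and using the one-scale rather than the two-scale model), the Lorenz '96 equations can be integrated for a short interval at two precisions and the runs compared; all parameter values below are assumptions.

```python
import numpy as np

def lorenz96_rhs(x, forcing=8.0):
    """Tendency of the one-scale Lorenz '96 model with cyclic boundary conditions."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def integrate(x0, dtype, steps=50, dt=0.01):
    """Fourth-order Runge-Kutta integration at the requested floating-point precision."""
    x = x0.astype(dtype)
    for _ in range(steps):
        k1 = lorenz96_rhs(x)
        k2 = lorenz96_rhs(x + 0.5 * dt * k1)
        k3 = lorenz96_rhs(x + 0.5 * dt * k2)
        k4 = lorenz96_rhs(x + dt * k3)
        x = (x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)).astype(dtype)
    return x

rng = np.random.default_rng(0)
x0 = 8.0 + rng.standard_normal(40)          # 40 grid points, perturbed resting state

reference = integrate(x0, np.float64)       # "maximal precision" run
reduced = integrate(x0, np.float32)         # reduced-precision run
print("max |difference| after 50 steps:",
      np.abs(reference - reduced.astype(np.float64)).max())
```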
Refractive error and visual impairment in private school children in Ghana.
Kumah, Ben D; Ebri, Anne; Abdul-Kabir, Mohammed; Ahmed, Abdul-Sadik; Koomson, Nana Ya; Aikins, Samual; Aikins, Amos; Amedo, Angela; Lartey, Seth; Naidoo, Kovin
2013-12-01
To assess the prevalence of refractive error and visual impairment in private school children in Ghana. A random selection of geographically defined classes in clusters was used to identify a sample of school children aged 12 to 15 years in the Ashanti Region. Children in 60 clusters were enumerated and examined in classrooms. The examination included visual acuity, retinoscopy, autorefraction under cycloplegia, and examination of anterior segment, media, and fundus. For quality assurance, a random sample of children with reduced and normal vision were selected and re-examined independently. A total of 2454 children attending 53 private schools were enumerated, and of these, 2435 (99.2%) were examined. Prevalence of uncorrected, presenting, and best visual acuity of 20/40 or worse in the better eye was 3.7, 3.5, and 0.4%, respectively. Refractive error was the cause of reduced vision in 71.7% of 152 eyes, amblyopia in 9.9%, retinal disorders in 5.9%, and corneal opacity in 4.6%. Exterior and anterior segment abnormalities occurred in 43 (1.8%) children. Myopia (at least -0.50 D) in one or both eyes was present in 3.2% of children when measured with retinoscopy and in 3.4% measured with autorefraction. Myopia was not significantly associated with gender (P = 0.82). Hyperopia (+2.00 D or more) in at least one eye was present in 0.3% of children with retinoscopy and autorefraction. The prevalence of reduced vision in Ghanaian private school children due to uncorrected refractive error was low. However, the prevalence of amblyopia, retinal disorders, and corneal opacities indicate the need for early interventions.
Propagation of stage measurement uncertainties to streamflow time series
NASA Astrophysics Data System (ADS)
Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary
2016-04-01
Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric errors and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and non-stationary waves and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is overall satisfactory. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results differ considerably depending on the site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating systematic and non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are finally discussed.
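A minimal Monte Carlo sketch of the idea of propagating systematic and non-systematic stage errors through a rating curve is given below; the power-law curve, parameter values, and error magnitudes are illustrative assumptions and do not reproduce the authors' Bayesian framework.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed power-law rating curve Q = a * (h - b)^c  (h in m, Q in m^3/s)
a, b, c = 30.0, 0.2, 1.8

# Hypothetical stage time series (m)
h = 0.8 + 0.3 * np.sin(np.linspace(0.0, 4.0 * np.pi, 200))

n_mc = 2000
# Systematic error (e.g., gauge calibration offset): one draw per Monte Carlo realization
sys_err = rng.normal(0.0, 0.01, size=(n_mc, 1))            # 1 cm standard deviation
# Non-systematic error (resolution, precision, waves): independent at every time step
rand_err = rng.normal(0.0, 0.005, size=(n_mc, h.size))     # 5 mm standard deviation

h_pert = h + sys_err + rand_err
q_ens = a * np.clip(h_pert - b, 0.0, None) ** c            # ensemble of streamflow series

q_median = np.median(q_ens, axis=0)
q_lo, q_hi = np.percentile(q_ens, [2.5, 97.5], axis=0)     # 95% uncertainty band
print(f"mean relative width of the 95% interval: {np.mean((q_hi - q_lo) / q_median):.2%}")
```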
[Epidemiology of refractive errors].
Wolfram, C
2017-07-01
Refractive errors are very common and can lead to severe pathological changes in the eye. This article analyzes the epidemiology of refractive errors in the general population in Germany and worldwide and describes common definitions for refractive errors and clinical characteristics of pathological changes. Refractive errors differ between age groups due to refractive changes during the lifetime and also due to generation-specific factors. Current research on the etiology of refractive errors has strengthened the evidence for the influence of environmental factors, which has led to new strategies for the prevention of refractive pathologies.
Critical older driver errors in a national sample of serious U.S. crashes.
Cicchino, Jessica B; McCartt, Anne T
2015-07-01
Older drivers are at increased risk of crash involvement per mile traveled. The purpose of this study was to examine older driver errors in serious crashes to determine which errors are most prevalent. The National Highway Traffic Safety Administration's National Motor Vehicle Crash Causation Survey collected in-depth, on-scene data for a nationally representative sample of 5470 U.S. police-reported passenger vehicle crashes during 2005-2007 for which emergency medical services were dispatched. There were 620 crashes involving 647 drivers aged 70 and older, representing 250,504 crash-involved older drivers. The proportions of various critical errors made by drivers aged 70 and older were compared with those made by drivers aged 35-54. Driver error was the critical reason for 97% of crashes involving older drivers. Among older drivers who made critical errors, the most common were inadequate surveillance (33%) and misjudgment of the length of a gap between vehicles or of another vehicle's speed, illegal maneuvers, medical events, and daydreaming (6% each). Inadequate surveillance (33% vs. 22%) and gap or speed misjudgment errors (6% vs. 3%) were more prevalent among older drivers than middle-aged drivers. Seventy-one percent of older drivers' inadequate surveillance errors were due to looking and not seeing another vehicle or failing to see a traffic control rather than failing to look, compared with 40% of inadequate surveillance errors among middle-aged drivers. About two-thirds (66%) of older drivers' inadequate surveillance errors and 77% of their gap or speed misjudgment errors were made when turning left at intersections. When older drivers traveled off the edge of the road or traveled over the lane line, this was most commonly due to non-performance errors such as medical events (51% and 44%, respectively), whereas middle-aged drivers were involved in these crash types for other reasons. Gap or speed misjudgment errors and inadequate surveillance errors were significantly more prevalent among female older drivers than among female middle-aged drivers, but the prevalence of these errors did not differ significantly between older and middle-aged male drivers. These errors comprised 51% of errors among older female drivers but only 31% among older male drivers. Efforts to reduce older driver crash involvements should focus on diminishing the likelihood of the most common driver errors. Countermeasures that simplify or remove the need to make left turns across traffic, such as roundabouts, protected left turn signals, and diverging diamond intersection designs, could decrease the frequency of inadequate surveillance and gap or speed misjudgment errors. In the future, vehicle-to-vehicle and vehicle-to-infrastructure communications may also help protect older drivers from these errors. Copyright © 2015 Elsevier Ltd. All rights reserved.
The effects of error augmentation on learning to walk on a narrow balance beam.
Domingo, Antoinette; Ferris, Daniel P
2010-10-01
Error augmentation during training has been proposed as a means to facilitate motor learning due to the human nervous system's reliance on performance errors to shape motor commands. We studied the effects of error augmentation on short-term learning of walking on a balance beam to determine whether it had beneficial effects on motor performance. Four groups of able-bodied subjects walked on a treadmill-mounted balance beam (2.5-cm wide) before and after 30 min of training. During training, two groups walked on the beam with a destabilization device that augmented error (Medium and High Destabilization groups). A third group walked on a narrower beam (1.27-cm) to augment error (Narrow). The fourth group practiced walking on the 2.5-cm balance beam (Wide). Subjects in the Wide group had significantly greater improvements after training than the error augmentation groups. The High Destabilization group had significantly less performance gains than the Narrow group in spite of similar failures per minute during training. In a follow-up experiment, a fifth group of subjects (Assisted) practiced with a device that greatly reduced catastrophic errors (i.e., stepping off the beam) but maintained similar pelvic movement variability. Performance gains were significantly greater in the Wide group than the Assisted group, indicating that catastrophic errors were important for short-term learning. We conclude that increasing errors during practice via destabilization and a narrower balance beam did not improve short-term learning of beam walking. In addition, the presence of qualitatively catastrophic errors seems to improve short-term learning of walking balance.
Trajectory Design Enhancements to Mitigate Risk for the Transiting Exoplanet Survey Satellite (TESS)
NASA Technical Reports Server (NTRS)
Dichmann, Donald; Parker, Joel; Nickel, Craig; Lutz, Stephen
2016-01-01
The Transiting Exoplanet Survey Satellite (TESS) will employ a highly eccentric Earth orbit, in 2:1 lunar resonance, which will be reached with a lunar flyby preceded by 3.5 phasing loops. The TESS mission has limited propellant and several constraints on the science orbit and on the phasing loops. Based on analysis and simulation, we have designed the phasing loops to reduce delta-V (DV) and to mitigate risk due to maneuver execution errors. We have automated the trajectory design process and use distributed processing to generate optimal nominal trajectories, to check constraint satisfaction, and finally to model the effects of maneuver errors to identify trajectories that best meet the mission requirements.
Guidance and navigation for rendezvous with an uncooperative target
NASA Astrophysics Data System (ADS)
Telaar, J.; Schlaile, C.; Sommer, J.
2018-06-01
This paper presents a guidance strategy for a rendezvous with an uncooperative target. In the applied design reference mission, a spiral approach is commanded ensuring a collision-free relative orbit due to e/i-vector separation. The dimensions of the relative orbit are successively reduced by Δv commands which at the same time improve the observability of the relative state. The navigation is based on line-of-sight measurements. The relative state is estimated by an extended Kalman filter (EKF). The performance of this guidance and navigation strategy is demonstrated by extensive Monte Carlo simulations taking into account all major uncertainties like measurement errors, Δv execution errors, and differential drag.
Improved Correction System for Vibration Sensitive Inertial Angle of Attack Measurement Devices
NASA Technical Reports Server (NTRS)
Crawford, Bradley L.; Finley, Tom D.
2000-01-01
Inertial angle of attack (AoA) devices currently in use at NASA Langley Research Center (LaRC) are subject to inaccuracies due to centrifugal accelerations caused by model dynamics, also known as sting whip. Recent literature suggests that these errors can be as high as 0.25 deg. With the current AoA accuracy target at LaRC being 0.01 deg., there is a dire need for improvement. With other errors in the inertial system (temperature, rectification, resolution, etc.) having been reduced to acceptable levels, a system is currently being developed at LaRC to measure and correct for the sting-whip-induced errors. By using miniaturized piezoelectric accelerometers and magnetohydrodynamic rate sensors, not only can the total centrifugal acceleration be measured, but yaw and pitch dynamics in the tunnel can also be characterized. These corrections can be used to determine a tunnel's past performance and can also indicate where efforts need to be concentrated to reduce these dynamics. Included in this paper are data on individual sensors, laboratory testing techniques, package evaluation, and wind tunnel test results on a High Speed Research (HSR) model in the Langley 16-Foot Transonic Wind Tunnel.
Improving TCP Network Performance by Detecting and Reacting to Packet Reordering
NASA Technical Reports Server (NTRS)
Kruse, Hans; Ostermann, Shawn; Allman, Mark
2003-01-01
There are many factors governing the performance of TCP-based applications traversing satellite channels. The end-to-end performance of TCP is known to be degraded by the reordering, delay, noise and asymmetry inherent in geosynchronous systems. This result has been largely based on experiments that evaluate the performance of TCP in single-flow tests. While single-flow tests are useful for deriving information on the theoretical behavior of TCP and allow for easy diagnosis of problems, they do not represent a broad range of realistic situations and therefore cannot be used to authoritatively comment on performance issues. The experiments discussed in this report test TCP's performance in a more dynamic environment with competing traffic flows from hundreds of TCP connections running simultaneously across the satellite channel. Another aspect we investigate is TCP's reaction to bit errors on satellite channels. TCP interprets loss as a sign of network congestion. This causes TCP to reduce its transmission rate, leading to reduced performance when loss is due to corruption. We allowed the bit error rate on our satellite channel to vary widely and tested the performance of TCP as a function of these bit error rates. Our results show that the average performance of TCP on satellite channels is good even under conditions of loss as high as bit error rates of 10^-5.
Gravity and Nonconservative Force Model Tuning for the GEOSAT Follow-On Spacecraft
NASA Technical Reports Server (NTRS)
Lemoine, Frank G.; Zelensky, Nikita P.; Rowlands, David D.; Luthcke, Scott B.; Chinn, Douglas S.; Marr, Gregory C.; Smith, David E. (Technical Monitor)
2000-01-01
The US Navy's GEOSAT Follow-On spacecraft was launched on February 10, 1998, and the primary objective of the mission was to map the oceans using a radar altimeter. Three radar altimeter calibration campaigns were conducted in 1999 and 2000. The spacecraft is tracked by satellite laser ranging (SLR) and Doppler beacons, and a limited amount of data has been obtained from the Global Positioning System (GPS) receiver on board the satellite. Even with EGM96, the predicted radial orbit error due to gravity field mismodelling (to 70x70) remains high at 2.61 cm (compared to 0.88 cm for TOPEX). We report on the preliminary gravity model tuning for GFO using SLR and altimeter crossover data. Preliminary solutions using SLR and GFO/GFO crossover data from CalVal campaigns I and II in June-August 1999 and January-February 2000 have reduced the predicted radial orbit error to 1.9 cm, and further reduction will be possible when additional data are added to the solutions. The gravity model tuning has improved principally the low-order m-daily terms and has significantly reduced the geographically correlated error present in this satellite orbit. In addition to gravity field mismodelling, the largest contributor to the orbit error is non-conservative force mismodelling. We report on further non-conservative force model tuning results using available data from over one cycle in beta prime.
Non-integer expansion embedding techniques for reversible image watermarking
NASA Astrophysics Data System (ADS)
Xiang, Shijun; Wang, Yi
2015-12-01
This work aims at reducing the embedding distortion of prediction-error expansion (PE)-based reversible watermarking. In the classical PE embedding method proposed by Thodi and Rodriguez, the predicted value is rounded to an integer for integer prediction-error expansion (IPE) embedding. The rounding operation places a constraint on a predictor's performance. In this paper, we propose a non-integer PE (NIPE) embedding approach, which can process non-integer prediction errors for embedding data into an audio or image file by expanding only the integer element of a prediction error while keeping its fractional element unchanged. The advantage of the NIPE embedding technique is that it can bring a predictor fully into play by estimating a sample/pixel in a noncausal way in a single pass, since there is no rounding operation. A new noncausal image prediction method to estimate a pixel from four immediate pixels in a single pass is included in the proposed scheme. The proposed noncausal image predictor can provide better performance than Sachnev et al.'s noncausal double-set prediction method (where prediction in two passes introduces a distortion problem, because half of the pixels are predicted from watermarked pixels). In comparison with several existing state-of-the-art works, experimental results have shown that the NIPE technique with the new noncausal prediction strategy can reduce the embedding distortion for the same embedding payload.
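For a single sample, the NIPE embedding rule sketched in the abstract can be written as follows; the sketch assumes the predictor returns the same (possibly non-integer) prediction at embedding and extraction, and does not reproduce the authors' noncausal four-neighbor image predictor.

```python
import math

def nipe_embed(x, p, bit):
    """Expand only the integer element of the prediction error e = x - p (bit is 0 or 1)."""
    e_int = math.floor(x - p)       # integer element of the prediction error
    # x' = p + (2*e_int + bit) + frac(e)  =  x + e_int + bit, so integer samples stay integer
    return x + e_int + bit

def nipe_extract(xw, p):
    """Recover the embedded bit and the original sample from the watermarked sample."""
    e = xw - p
    expanded = math.floor(e)        # equals 2*e_int + bit
    bit = expanded % 2
    e_int = (expanded - bit) // 2
    frac = e - math.floor(e)        # fractional element, unchanged by embedding
    return bit, p + e_int + frac

# Example: sample value 120, non-integer prediction 118.4, embed bit 1
xw = nipe_embed(120, 118.4, 1)
bit, recovered = nipe_extract(xw, 118.4)
print(xw, bit, round(recovered, 6))   # -> 122 1 120.0
```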
Decentralized control of sound radiation using iterative loop recovery.
Schiller, Noah H; Cabell, Randolph H; Fuller, Chris R
2010-10-01
A decentralized model-based control strategy is designed to reduce low-frequency sound radiation from periodically stiffened panels. While decentralized control systems tend to be scalable, performance can be limited due to modeling error introduced by the unmodeled interaction between neighboring control units. Since bounds on modeling error are not known in advance, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is evaluated numerically using a model of a stiffened aluminum panel that is representative of the sidewall of an aircraft. Simulations demonstrate that the iterative approach can achieve significant reductions in radiated sound power from the stiffened panel without destabilizing neighboring control units.
Enhanced Pedestrian Navigation Based on Course Angle Error Estimation Using Cascaded Kalman Filters
Park, Chan Gook
2018-01-01
An enhanced pedestrian dead reckoning (PDR) based navigation algorithm, which uses two cascaded Kalman filters (TCKF) for the estimation of course angle and navigation errors, is proposed. The proposed algorithm uses a foot-mounted inertial measurement unit (IMU), waist-mounted magnetic sensors, and a zero velocity update (ZUPT) based inertial navigation technique with TCKF. The first stage filter estimates the course angle error of a human, which is closely related to the heading error of the IMU. In order to obtain the course measurements, the filter uses magnetic sensors and a position-trace based course angle. For preventing magnetic disturbance from contaminating the estimation, the magnetic sensors are attached to the waistband. Because the course angle error is mainly due to the heading error of the IMU, and the characteristic error of the heading angle is highly dependent on that of the course angle, the estimated course angle error is used as a measurement for estimating the heading error in the second stage filter. At the second stage, an inertial navigation system-extended Kalman filter-ZUPT (INS-EKF-ZUPT) method is adopted. As the heading error is estimated directly by using course-angle error measurements, the estimation accuracy for the heading and yaw gyro bias can be enhanced, compared with the ZUPT-only case, which eventually enhances the position accuracy more efficiently. The performance enhancements are verified via experiments, and the way-point position error for the proposed method is compared with those for the ZUPT-only case and with other cases that use ZUPT and various types of magnetic heading measurements. The results show that the position errors are reduced by a maximum of 90% compared with the conventional ZUPT based PDR algorithms. PMID:29690539
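A toy sketch of the cascade idea is shown below: a first one-state filter estimates the course-angle error from noisy course measurements, and its output is fed as a measurement to a second filter for the heading error. This is a drastic simplification of the INS-EKF-ZUPT formulation in the paper; all states, noise levels, and the constant-error assumption are made up for illustration.

```python
import numpy as np

class ScalarKF:
    """One-state Kalman filter: a slowly varying bias observed through noisy measurements."""
    def __init__(self, q, r, x0=0.0, p0=1.0):
        self.q, self.r, self.x, self.p = q, r, x0, p0

    def step(self, z):
        self.p += self.q                    # predict: bias modeled as a random walk
        k = self.p / (self.p + self.r)      # Kalman gain
        self.x += k * (z - self.x)          # update with measurement z
        self.p *= 1.0 - k
        return self.x

# Stage 1: course-angle error from (magnetic course - position-trace course) measurements
stage1 = ScalarKF(q=1e-5, r=np.deg2rad(3.0) ** 2)
# Stage 2: treats the stage-1 estimate as a measurement of the IMU heading error
stage2 = ScalarKF(q=1e-6, r=np.deg2rad(1.0) ** 2)

rng = np.random.default_rng(0)
true_heading_error = np.deg2rad(5.0)        # hypothetical constant heading error
for _ in range(200):
    course_measurement = true_heading_error + rng.normal(0.0, np.deg2rad(3.0))
    course_error_estimate = stage1.step(course_measurement)
    heading_error_estimate = stage2.step(course_error_estimate)

print(f"estimated heading error: {np.rad2deg(heading_error_estimate):.2f} deg")
```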
New GRACE-Derived Storage Change Estimates Using Empirical Mode Extraction
NASA Astrophysics Data System (ADS)
Aierken, A.; Lee, H.; Yu, H.; Ate, P.; Hossain, F.; Basnayake, S. B.; Jayasinghe, S.; Saah, D. S.; Shum, C. K.
2017-12-01
Estimated mass changes from GRACE spherical harmonic solutions exhibit north/south stripes and east/west banded errors due to random noise and modeling errors. Low-pass filters like decorrelation and Gaussian smoothing are typically applied to reduce noise and errors. However, these filters introduce leakage errors that need to be addressed. GRACE mascon estimates (JPL and CSR mascon solutions) do not need decorrelation or Gaussian smoothing and offer larger signal magnitudes compared to the GRACE spherical harmonics (SH) filtered results. However, a recent study [Chen et al., JGR, 2017] demonstrated that both JPL and CSR mascon solutions also have leakage errors. We developed a new postprocessing method based on empirical mode decomposition to estimate mass change from GRACE SH solutions without decorrelation and Gaussian smoothing, the two main sources of leakage errors. We found that, without any postprocessing, the noise and errors in spherical harmonic solutions introduced very clear high-frequency components in the spatial domain. By removing these high-frequency components while preserving the overall pattern of the signal, we obtained better mass estimates with minimal leakage errors. The new global mass change estimates captured all the signals observed by GRACE without the stripe errors. Results were compared with traditional methods over the Tonle Sap Basin in Cambodia, Northwestern India, the Central Valley in California, and the Caspian Sea. Our results provide larger signal magnitudes that are in good agreement with the leakage-corrected (forward-modeled) SH results.
NASA Astrophysics Data System (ADS)
Ries, Paul A.
2012-05-01
The Green Bank Telescope is a 100 m, fully steerable, single-dish radio telescope located in Green Bank, West Virginia, and capable of making observations from meter wavelengths to 3 mm. However, observations at wavelengths shorter than 2 cm pose significant observational challenges due to pointing and surface errors. The first part of this thesis details efforts to combat wind-induced pointing errors, which reduce by half the amount of time available for high-frequency work on the telescope. The primary tool used for understanding these errors was an optical quadrant detector that monitored the motion of the telescope's feed arm. In this work, a calibration was developed that tied quadrant detector readings directly to telescope pointing error. These readings can be used for single-beam observations in order to determine if the telescope was blown off-source at some point due to wind. With observations using the 3 mm MUSTANG bolometer array, pointing errors due to wind can mostly be removed (> ⅔) during data reduction. Iapetus is a moon known for its stark albedo dichotomy, with the leading hemisphere only a tenth as bright as the trailing. In order to investigate this dichotomy, Iapetus was observed repeatedly with the GBT at wavelengths between 3 and 11 mm, with the original intention being to use the data to determine a thermal light curve. Instead, the data showed a striking wavelength-dependent deviation from a black-body curve, with an emissivity as low as 0.3 at 9 mm. Numerous techniques were used to demonstrate that this low emissivity is a physical phenomenon rather than an observational one, including some using the quadrant detector to confirm that the low emissivities are not due to being blown off source. This emissivity is among the lowest ever detected in the solar system, but it can be achieved using physically realistic ice models that are also used to model microwave emission from snowpacks and glaciers on Earth. These models indicate that the trailing hemisphere contains a scattering layer of depth 100 cm and grain size of 1-2 mm. The leading hemisphere is shown to exhibit a thermal depth effect.
Trinh, Tony W; Glazer, Daniel I; Sadow, Cheryl A; Sahni, V Anik; Geller, Nina L; Silverman, Stuart G
2018-03-01
To determine the test characteristics of CT urography for detecting bladder cancer in patients with hematuria and those undergoing surveillance, and to analyze reasons for false-positive and false-negative results. A HIPAA-compliant, IRB-approved retrospective review of reports from 1623 CT urograms between 10/2010 and 12/31/2013 was performed. 710 examinations for hematuria or bladder cancer history were compared to cystoscopy performed within 6 months. The reference standard was surgical pathology or a minimum of 1 year of clinical follow-up. False-positive and false-negative examinations were reviewed to determine reasons for errors. Ninety-five bladder cancers were detected. CT urography accuracy was 91.5% (650/710), sensitivity 86.3% (82/95), specificity 92.4% (568/615), positive predictive value 63.6% (82/129), and negative predictive value 97.8% (568/581). Of 43 false positives, the majority of interpretation errors were due to benign prostatic hyperplasia (n = 12), trabeculated bladder (n = 9), and treatment changes (n = 8). Other causes included blood clots, mistaken normal anatomy, and infectious/inflammatory changes, or had no cystoscopic correlate. Of 13 false negatives, 11 were due to technique, one to a large urinary residual, and one to artifact. There were no errors in perception. CT urography is an accurate test for diagnosing bladder cancer; however, in protocols relying predominantly on excretory phase images, overall sensitivity remains insufficient to obviate cystoscopy. Awareness of bladder cancer mimics may reduce false-positive results. Improvements in CTU technique may reduce false-negative results.
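The reported test characteristics follow directly from the underlying counts implied by the quoted fractions (82 true positives, 13 false negatives, 47 false positives, 568 true negatives), for example:

```python
tp, fn, fp, tn = 82, 13, 47, 568            # counts implied by the reported fractions

sensitivity = tp / (tp + fn)                # 82/95   -> 86.3%
specificity = tn / (tn + fp)                # 568/615 -> 92.4%
ppv = tp / (tp + fp)                        # 82/129  -> 63.6%
npv = tn / (tn + fn)                        # 568/581 -> 97.8%
accuracy = (tp + tn) / (tp + tn + fp + fn)  # 650/710 -> 91.5%

for name, value in [("sensitivity", sensitivity), ("specificity", specificity),
                    ("PPV", ppv), ("NPV", npv), ("accuracy", accuracy)]:
    print(f"{name}: {100 * value:.1f}%")
```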
Plessen, Kerstin J.; Allen, Elena A.; Eichele, Heike; van Wageningen, Heidi; Høvik, Marie Farstad; Sørensen, Lin; Worren, Marius Kalsås; Hugdahl, Kenneth; Eichele, Tom
2016-01-01
Background We examined the blood-oxygen level–dependent (BOLD) activation in brain regions that signal errors and their association with intraindividual behavioural variability and adaptation to errors in children with attention-deficit/hyperactivity disorder (ADHD). Methods We acquired functional MRI data during a Flanker task in medication-naive children with ADHD and healthy controls aged 8–12 years and analyzed the data using independent component analysis. For components corresponding to performance monitoring networks, we compared activations across groups and conditions and correlated them with reaction times (RT). Additionally, we analyzed post-error adaptations in behaviour and motor component activations. Results We included 25 children with ADHD and 29 controls in our analysis. Children with ADHD displayed reduced activation to errors in cingulo-opercular regions and higher RT variability, but no differences of interference control. Larger BOLD amplitude to error trials significantly predicted reduced RT variability across all participants. Neither group showed evidence of post-error response slowing; however, post-error adaptation in motor networks was significantly reduced in children with ADHD. This adaptation was inversely related to activation of the right-lateralized ventral attention network (VAN) on error trials and to task-driven connectivity between the cingulo-opercular system and the VAN. Limitations Our study was limited by the modest sample size and imperfect matching across groups. Conclusion Our findings show a deficit in cingulo-opercular activation in children with ADHD that could relate to reduced signalling for errors. Moreover, the reduced orienting of the VAN signal may mediate deficient post-error motor adaptions. Pinpointing general performance monitoring problems to specific brain regions and operations in error processing may help to guide the targets of future treatments for ADHD. PMID:26441332
A colinear backscattering Mueller matrix microscope for reflection Mueller matrix imaging
NASA Astrophysics Data System (ADS)
Chen, Zhenhua; Yao, Yue; Zhu, Yuanhuan; Ma, Hui
2018-02-01
In a recent attempt, we developed a colinear backscattering Mueller matrix microscope by adding a polarization state generator (PSG) and a polarization state analyzer (PSA) into the illumination and detection optical paths of a commercial metallurgical microscope. It is found that specific efforts have to be made to reduce the artifacts due to the intrinsic residual polarizations of the optical system, particularly the dichroism due to the 45-degree beam splitter. In this paper, we present a new calibration method based on numerical reconstruction of the instrument matrix to remove the artifacts introduced by the beam splitter. Preliminary tests using a mirror as a standard sample show that the maximum Mueller matrix element error of the colinear backscattering Mueller matrix microscope can be reduced to a few percent.
Effects of Correlated Errors on the Analysis of Space Geodetic Data
NASA Technical Reports Server (NTRS)
Romero-Wolf, Andres; Jacobs, C. S.
2011-01-01
As thermal errors are reduced, instrumental and troposphere correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.
Bradley, David A.; Nisbet, Andrew
2015-01-01
This work considers a previously overlooked uncertainty present in film dosimetry which results from moderate curvature of films during the scanning process. Small film samples are particularly susceptible to film curling which may be undetected or deemed insignificant. In this study, we consider test cases with controlled induced curvature of film and with film raised horizontally above the scanner plate. We also evaluate the difference in scans of a film irradiated with a typical brachytherapy dose distribution with the film naturally curved and with the film held flat on the scanner. Typical naturally occurring curvature of film at scanning, giving rise to a maximum height 1 to 2 mm above the scan plane, may introduce dose errors of 1% to 4%, and considerably reduce gamma evaluation passing rates when comparing film‐measured doses with treatment planning system‐calculated dose distributions, a common application of film dosimetry in radiotherapy. The use of a triple‐channel dosimetry algorithm appeared to mitigate the error due to film curvature compared to conventional single‐channel film dosimetry. The change in pixel value and calibrated reported dose with film curling or height above the scanner plate may be due to variations in illumination characteristics, optical disturbances, or a Callier‐type effect. There is a clear requirement for physically flat films at scanning to avoid the introduction of a substantial error source in film dosimetry. Particularly for small film samples, a compression glass plate above the film is recommended to ensure flat‐film scanning. This effect has been overlooked to date in the literature. PACS numbers: 87.55.Qr, 87.56.bg, 87.55.km PMID:26103181
Bouda, Martin; Caplan, Joshua S.; Saiers, James E.
2016-01-01
Fractal dimension (FD), estimated by box-counting, is a metric used to characterize plant anatomical complexity or space-filling characteristic for a variety of purposes. The vast majority of published studies fail to evaluate the assumption of statistical self-similarity, which underpins the validity of the procedure. The box-counting procedure is also subject to error arising from arbitrary grid placement, known as quantization error (QE), which is strictly positive and varies as a function of scale, making it problematic for the procedure's slope estimation step. Previous studies either ignore QE or employ inefficient brute-force grid translations to reduce it. The goals of this study were to characterize the effect of QE due to translation and rotation on FD estimates, to provide an efficient method of reducing QE, and to evaluate the assumption of statistical self-similarity of coarse root datasets typical of those used in recent trait studies. Coarse root systems of 36 shrubs were digitized in 3D and subjected to box-counts. A pattern search algorithm was used to minimize QE by optimizing grid placement and its efficiency was compared to the brute force method. The degree of statistical self-similarity was evaluated using linear regression residuals and local slope estimates. QE, due to both grid position and orientation, was a significant source of error in FD estimates, but pattern search provided an efficient means of minimizing it. Pattern search had higher initial computational cost but converged on lower error values more efficiently than the commonly employed brute force method. Our representations of coarse root system digitizations did not exhibit details over a sufficient range of scales to be considered statistically self-similar and informatively approximated as fractals, suggesting a lack of sufficient ramification of the coarse root systems for reiteration to be thought of as a dominant force in their development. FD estimates did not characterize the scaling of our digitizations well: the scaling exponent was a function of scale. Our findings serve as a caution against applying FD under the assumption of statistical self-similarity without rigorously evaluating it first. PMID:26925073
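A condensed sketch of box counting with grid-offset optimization is shown below; a simple greedy coordinate search stands in for the pattern search algorithm used in the study, and the point cloud and scale range are placeholders rather than digitized root systems.

```python
import numpy as np

def count_boxes(points, size, offset):
    """Number of occupied boxes of edge length `size` for a given grid offset."""
    idx = np.floor((points - offset) / size).astype(int)
    return np.unique(idx, axis=0).shape[0]

def min_count(points, size, n_levels=8):
    """Greedy coordinate search over grid offsets to reduce quantization error."""
    offset = np.zeros(points.shape[1])
    best = count_boxes(points, size, offset)
    for step in size / 2.0 ** np.arange(1, n_levels + 1):
        for dim in range(points.shape[1]):
            for sign in (1.0, -1.0):
                trial = offset.copy()
                trial[dim] += sign * step
                count = count_boxes(points, size, trial)
                if count < best:
                    best, offset = count, trial
    return best

rng = np.random.default_rng(0)
points = rng.random((2000, 3))                        # placeholder 3D digitization
sizes = np.array([0.5, 0.25, 0.125, 0.0625, 0.03125])
counts = np.array([min_count(points, s) for s in sizes])

# FD estimate: slope of log(box count) versus log(1/box size)
slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
print(f"box-counting dimension estimate: {slope:.2f}")

# Local slopes between adjacent scales reveal departures from statistical self-similarity
local = np.diff(np.log(counts)) / np.diff(np.log(1.0 / sizes))
print("local slope estimates:", np.round(local, 2))
```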
NASA Astrophysics Data System (ADS)
Yoshida, Kenichiro; Nishidate, Izumi; Ojima, Nobutoshi; Iwata, Kayoko
2014-01-01
To quantitatively evaluate skin chromophores over a wide region of curved skin surface, we propose an approach that suppresses the effect of the shading-derived error in the reflectance on the estimation of chromophore concentrations, without sacrificing the accuracy of that estimation. In our method, we use multiple regression analysis, assuming the absorbance spectrum as the response variable and the extinction coefficients of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin as the predictor variables. The concentrations of melanin and total hemoglobin are determined from the multiple regression coefficients using compensation formulae (CF) based on the diffuse reflectance spectra derived from a Monte Carlo simulation. To suppress the shading-derived error, we investigated three different combinations of multiple regression coefficients for the CF. In vivo measurements with the forearm skin demonstrated that the proposed approach can reduce the estimation errors that are due to shading-derived errors in the reflectance. With the best combination of multiple regression coefficients, we estimated that the ratio of the error to the chromophore concentrations is about 10%. The proposed method does not require any measurements or assumptions about the shape of the subjects; this is an advantage over other studies related to the reduction of shading-derived errors.
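The regression step can be sketched as an ordinary least-squares fit of the absorbance spectrum on chromophore extinction spectra plus baseline terms; the wavelength grid, extinction spectra, and reflectance below are placeholders, and the Monte Carlo-derived compensation formulae (CF) are not reproduced.

```python
import numpy as np

# Placeholder wavelength grid (nm) and extinction coefficient spectra (arbitrary units)
wl = np.linspace(500.0, 600.0, 51)
eps_melanin = 1.7e12 * wl ** -3.48            # commonly used melanin approximation (assumed)
eps_hbo2 = np.interp(wl, [500, 540, 560, 580, 600], [2.0, 3.5, 2.5, 3.3, 0.8])
eps_hb = np.interp(wl, [500, 540, 560, 580, 600], [2.2, 2.8, 3.4, 2.6, 1.2])

# Placeholder measured reflectance; absorbance is the response variable
reflectance = 0.4 + 0.1 * np.sin(wl / 20.0)
absorbance = -np.log10(reflectance)

# Predictor matrix: melanin, HbO2, Hb extinction spectra plus constant and wavelength
# terms that absorb baseline/shading contributions
X = np.column_stack([eps_melanin, eps_hbo2, eps_hb, np.ones_like(wl), wl])
coef, *_ = np.linalg.lstsq(X, absorbance, rcond=None)

c_melanin, c_hbo2, c_hb = coef[:3]            # regression coefficients fed to the CF
print("melanin coefficient:", c_melanin)
print("total hemoglobin coefficient:", c_hbo2 + c_hb)
```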
Highly improved staggered quarks on the lattice with applications to charm physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Follana, E.; Davies, C.; Wong, K.
2007-03-01
We use perturbative Symanzik improvement to create a new staggered-quark action (HISQ) that has greatly reduced one-loop taste-exchange errors, no tree-level order a^2 errors, and no tree-level order (am)^4 errors to leading order in the quark's velocity v/c. We demonstrate with simulations that the resulting action has taste-exchange interactions that are 3-4 times smaller than the widely used ASQTAD action. We show how to bound errors due to taste exchange by comparing ASQTAD and HISQ simulations, and demonstrate with simulations that such errors are likely no more than 1% when HISQ is used for light quarks at lattice spacings of 1/10 fm or less. The suppression of (am)^4 errors also makes HISQ the most accurate discretization currently available for simulating c quarks. We demonstrate this in a new analysis of the ψ-η_c mass splitting using the HISQ action on lattices where am_c = 0.43 and 0.66, with full-QCD gluon configurations (from MILC). We obtain a result of 111(5) MeV, which compares well with experiment. We discuss applications of this formalism to D physics and present our first high-precision results for D_s mesons.
Qin, Feng; Zhan, Xingqun; Du, Gang
2013-01-01
Ultra-tight integration was first proposed by Abbott in 2003 with the purpose of integrating a global navigation satellite system (GNSS) and an inertial navigation system (INS). This technology can improve the tracking performances of a receiver by reconfiguring the tracking loops in GNSS-challenged environments. In this paper, the models of all error sources known to date in the phase lock loops (PLLs) of a standard receiver and an ultra-tightly integrated GNSS/INS receiver are built, respectively. Based on these models, the tracking performances of the two receivers are compared to verify the improvement due to the ultra-tight integration. Meanwhile, the PLL error distributions of the two receivers are also depicted to analyze the error changes of the tracking loops. These results show that the tracking error is significantly reduced in the ultra-tightly integrated GNSS/INS receiver since the receiver's dynamics are estimated and compensated by an INS. Moreover, the mathematical relationship between the tracking performances of the ultra-tightly integrated GNSS/INS receiver and the quality of the selected inertial measurement unit (IMU) is derived from the error models and proved by the error comparisons of four ultra-tightly integrated GNSS/INS receivers aided by different grade IMUs.
Guo, Hongbin; Renaut, Rosemary A; Chen, Kewei; Reiman, Eric M
2010-01-01
Graphical analysis methods are widely used in positron emission tomography quantification because of their simplicity and model independence. But they may, particularly for reversible kinetics, lead to bias in the estimated parameters. The source of the bias is commonly attributed to noise in the data. Assuming a two-tissue compartmental model, we investigate the bias that originates from modeling error. This bias is an intrinsic property of the simplified linear models used for limited scan durations, and it is exaggerated by random noise and numerical quadrature error. Conditions are derived under which Logan's graphical method either over- or under-estimates the distribution volume in the noise-free case. The bias caused by modeling error is quantified analytically. The presented analysis shows that the bias of graphical methods is inversely proportional to the dissociation rate. Furthermore, visual examination of the linearity of the Logan plot is not sufficient for guaranteeing that equilibrium has been reached. A new model which retains the elegant properties of graphical analysis methods is presented, along with a numerical algorithm for its solution. We perform simulations with the fibrillar amyloid β radioligand [11C] benzothiazole-aniline using published data from the University of Pittsburgh and Rotterdam groups. The results show that the proposed method significantly reduces the bias due to modeling error. Moreover, the results for data acquired over a 70-minute scan duration are at least as good as those obtained using existing methods for data acquired over a 90-minute scan duration. PMID:20493196
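For reference, the standard Logan transformation that the paper analyzes can be computed as below; the synthetic one-tissue curves and the t* threshold are illustrative assumptions, and the authors' proposed bias-reduced model is not reproduced here.

```python
import numpy as np

# Synthetic time grid (min), plasma input Cp(t) and tissue curve Ct(t) generated from a
# one-tissue compartment model with K1 = 0.1 /min and k2 = 0.05 /min (illustrative values)
t = np.linspace(0.0, 90.0, 181)
dt = t[1] - t[0]
cp = 10.0 * t * np.exp(-t / 8.0)
k1, k2 = 0.1, 0.05
ct = k1 * dt * np.convolve(cp, np.exp(-k2 * t))[: t.size]   # discrete convolution solution

def cumtrapz(y, x):
    """Cumulative trapezoidal integral with a leading zero."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))

# Logan transformation: y = int(Ct)/Ct, x = int(Cp)/Ct; the slope after t* estimates VT
mask = (t > 30.0) & (ct > 0.0)
y = cumtrapz(ct, t)[mask] / ct[mask]
x = cumtrapz(cp, t)[mask] / ct[mask]
slope, intercept = np.polyfit(x, y, 1)
print(f"Logan slope (distribution volume estimate): {slope:.3f}  (true VT = K1/k2 = {k1 / k2:.1f})")
```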
NASA Astrophysics Data System (ADS)
Fujiwara, Takahiro; Uchiito, Haruki; Tokairin, Tomoya; Kawai, Hiroyuki
2017-04-01
Regarding Structural Health Monitoring (SHM) of seismic acceleration, Wireless Sensor Networks (WSN) are a promising tool for low-cost monitoring. Compressed sensing and transmission schemes have been drawing attention as a way to achieve effective data collection in WSN. In particular, SHM systems installing massive numbers of WSN nodes require efficient data transmission due to restricted communications capability. The dominant frequency band of seismic acceleration lies within 100 Hz or less. In addition, the response motions on the upper floors of a structure are excited at a natural frequency, resulting in induced shaking within a narrow band around that frequency. Focusing on these vibration characteristics of structures, we introduce data compression techniques for seismic acceleration monitoring in order to reduce the amount of transmitted data. We carry out a compressed sensing and transmission scheme based on band-pass filtering of the seismic acceleration data. The algorithm executes the discrete Fourier transform to move to the frequency domain and applies band-pass filtering for the compressed transmission. Assuming that the compressed data are transmitted through computer networks, restoration of the data is performed by the inverse Fourier transform in the receiving node. This paper discusses the evaluation of the compressed sensing for seismic acceleration by way of an average error. The results show that the average error was 0.06 or less for the horizontal acceleration under conditions where the acceleration was compressed to 1/32 of its original size. In particular, the average error on the 4th floor was as small as 0.02. These results indicate that the compressed sensing and transmission technique is effective in reducing the amount of data while maintaining a small average error.
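A compact sketch of the compression/restoration loop is given below: the record is transformed, only the lowest 1/32 of the Fourier coefficients are kept for transmission, and the receiver zero-pads and inverts the transform. The synthetic acceleration record, sampling rate, and error metric are assumptions made for illustration.

```python
import numpy as np

fs = 100.0                                  # sampling rate (Hz), assumed
t = np.arange(0, 40.96, 1.0 / fs)           # 4096 samples
rng = np.random.default_rng(0)
# Synthetic floor response: a dominant ~1.2 Hz structural mode plus broadband noise
accel = np.sin(2 * np.pi * 1.2 * t) * np.exp(-0.05 * t) + 0.05 * rng.standard_normal(t.size)

spectrum = np.fft.rfft(accel)
keep = spectrum.size // 32                  # keep only the lowest 1/32 of the coefficients
compressed = spectrum[:keep]                # this is what would be transmitted

# Receiving node: zero-pad back to the full spectrum length and invert the transform
padded = np.concatenate([compressed, np.zeros(spectrum.size - keep, dtype=complex)])
restored = np.fft.irfft(padded, n=accel.size)

avg_error = np.mean(np.abs(restored - accel)) / np.max(np.abs(accel))
print(f"normalized average restoration error: {avg_error:.3f}")
```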
Use of historical control data for assessing treatment effects in clinical trials.
Viele, Kert; Berry, Scott; Neuenschwander, Beat; Amzal, Billy; Chen, Fang; Enas, Nathan; Hobbs, Brian; Ibrahim, Joseph G; Kinnersley, Nelson; Lindborg, Stacy; Micallef, Sandrine; Roychoudhury, Satrajit; Thompson, Laura
2014-01-01
Clinical trials rarely, if ever, occur in a vacuum. Generally, large amounts of clinical data are available prior to the start of a study, particularly on the current study's control arm. There is obvious appeal in using (i.e., 'borrowing') this information. With historical data providing information on the control arm, more trial resources can be devoted to the novel treatment while retaining accurate estimates of the current control arm parameters. This can result in more accurate point estimates, increased power, and reduced type I error in clinical trials, provided the historical information is sufficiently similar to the current control data. If this assumption of similarity is not satisfied, however, one can acquire increased mean square error of point estimates due to bias and either reduced power or increased type I error depending on the direction of the bias. In this manuscript, we review several methods for historical borrowing, illustrating how key parameters in each method affect borrowing behavior, and then, we compare these methods on the basis of mean square error, power and type I error. We emphasize two main themes. First, we discuss the idea of 'dynamic' (versus 'static') borrowing. Second, we emphasize the decision process involved in determining whether or not to include historical borrowing in terms of the perceived likelihood that the current control arm is sufficiently similar to the historical data. Our goal is to provide a clear review of the key issues involved in historical borrowing and provide a comparison of several methods useful for practitioners. Copyright © 2013 John Wiley & Sons, Ltd.
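The type I error inflation described above for 'static' borrowing can be demonstrated with a toy simulation in which historical controls whose mean has drifted are simply pooled with the current controls; all sample sizes, the outcome model, and the drift value are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_current, n_historical, n_treatment = 50, 200, 50
sigma, drift = 1.0, 0.3              # outcome SD and historical-vs-current control mean shift
n_sim, alpha = 5000, 0.05

reject_pooled = reject_current_only = 0
for _ in range(n_sim):
    # Null hypothesis true: treatment mean equals the current control mean (both 0)
    current = rng.normal(0.0, sigma, n_current)
    historical = rng.normal(drift, sigma, n_historical)   # drifted historical controls
    treatment = rng.normal(0.0, sigma, n_treatment)

    # "Static" borrowing: pool historical and current controls as if fully exchangeable
    _, p_pooled = stats.ttest_ind(treatment, np.concatenate([current, historical]))
    _, p_current = stats.ttest_ind(treatment, current)
    reject_pooled += p_pooled < alpha
    reject_current_only += p_current < alpha

print(f"type I error, current controls only:        {reject_current_only / n_sim:.3f}")
print(f"type I error, pooled with drifted controls: {reject_pooled / n_sim:.3f}")
```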
Extending Moore's Law via Computationally Error Tolerant Computing.
Deng, Bobin; Srikanth, Sriseshan; Hein, Eric R.; ...
2018-03-01
Dennard scaling has ended. Lowering the voltage supply (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further scaling (down) no longer acceptable as a means to lower the power required by a processor core. However, it is possible to correct the occasional errors caused by lower Vdd in an efficient manner and effectively lower power. By deploying the right amount and kind of redundancy, we can strike a balance between the overhead incurred in achieving reliability and the energy savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling computational error correction at a fraction of the overhead of conventional approaches. We use the RRNS scheme to design a Computationally-Redundant, Energy-Efficient core, including the microarchitecture, Instruction Set Architecture (ISA) and RRNS-centered algorithms. Finally, from the simulation results, this RRNS system can reduce the energy-delay product by about 3× for multiplication-intensive workloads and by about 2× in general, when compared to a non-error-correcting binary core.
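The closure and error-detection properties that make RRNS attractive can be illustrated with a small made-up moduli set; this sketch uses a range check for detection and does not reflect the paper's hardware design or moduli choices.

```python
from math import prod

moduli = [7, 11, 13]       # information moduli: legitimate range is [0, 7*11*13) = [0, 1001)
redundant = [17]           # one redundant modulus allows single-residue error detection
all_mods = moduli + redundant

def to_rrns(x):
    return [x % m for m in all_mods]

def crt(residues, mods):
    """Chinese remainder theorem reconstruction."""
    total = prod(mods)
    x = 0
    for r, m in zip(residues, mods):
        partial = total // m
        x += r * partial * pow(partial, -1, m)   # modular inverse of `partial` modulo m
    return x % total

def check(residues):
    """Return the decoded value, or None if it falls outside the legitimate range."""
    value = crt(residues, all_mods)
    return value if value < prod(moduli) else None

a, b = to_rrns(21), to_rrns(45)
product = [(x * y) % m for x, y, m in zip(a, b, all_mods)]   # closure under multiplication
print(check(product))          # -> 945 (= 21 * 45), computed digit-wise without carries

corrupted = list(product)
corrupted[1] = (corrupted[1] + 3) % all_mods[1]              # inject a single-residue error
print(check(corrupted))        # -> None (error detected by the range check)
```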
Error-rate prediction for programmable circuits: methodology, tools and studied cases
NASA Astrophysics Data System (ADS)
Velazco, Raoul
2013-05-01
This work presents an approach to predict the error rates due to Single Event Upsets (SEU) occurring in programmable circuits as a consequence of the impact of energetic particles present in the environment in which the circuits operate. For a chosen application, the error rate is predicted by combining the results obtained from radiation ground testing with the results of fault injection campaigns performed off-beam, during which huge numbers of SEUs are injected during the execution of the studied application. The goal of this strategy is to obtain accurate results about different applications' error rates, without using particle accelerator facilities, thus significantly reducing the cost of the sensitivity evaluation. As a case study, this methodology was applied to a complex processor, the PowerPC 7448 executing a program derived from a real space application, and to a crypto-processor application implemented in an SRAM-based FPGA and accepted to be embedded in the payload of a scientific satellite of NASA. The accuracy of predicted error rates was confirmed by comparing, for the same circuit and application, predictions with measurements from radiation ground testing performed at the Cyclone cyclotron of the Heavy Ion Facility (HIF) of Louvain-la-Neuve (Belgium).
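A hedged back-of-envelope version of the combination the abstract describes, with all symbols and numbers assumed for illustration: the device upset rate measured under beam is scaled by the fraction of injected faults that actually corrupt the application's result.

# Combining beam-test and fault-injection data into an application error rate.
sigma_static = 1.0e-9     # device SEU cross-section from ground testing, cm^2/bit (assumed)
n_bits = 2.0e6            # number of sensitive bits used by the application (assumed)
flux = 5.0e-4             # particle flux of the target environment, particles/(cm^2 * s) (assumed)

faults_injected = 100000  # SEUs injected off-beam during application runs
faults_with_error = 1800  # injected SEUs that produced a wrong application result
tau_inj = faults_with_error / faults_injected

upset_rate = sigma_static * n_bits * flux   # upsets per second in the device
error_rate = upset_rate * tau_inj           # application errors per second
print(f"predicted application error rate: {error_rate:.3e} errors/s "
      f"(≈ {error_rate * 86400:.3e} errors/day)")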
Retractions by Pakistan Journal of Medical Sciences due to Scientific Misconduct.
Jawaid, Shaukat Ali; Jawaid, Masood
2016-08-01
Under pressure to publish, academicians and research scientists are increasingly indulging in scientific misconduct leading to retraction of such papers when identified. Other reasons of retraction include scientific error and problems related to ethics. Four published manuscripts (three from Turkey and one from Pakistan) had to be retracted from Pakistan Journal of Medical Sciences from January 2014 to July 2015 due to scientific misconduct. There is a need to search for effective measures which could help reduce the number of retractions and prevent scientific literature from being further polluted, which seems to be increasing every year.
Fabrication of ф 160 mm convex hyperbolic mirror for remote sensing instrument
NASA Astrophysics Data System (ADS)
Kuo, Ching-Hsiang; Yu, Zong-Ru; Ho, Cheng-Fang; Hsu, Wei-Yao; Chen, Fong-Zhi
2012-10-01
In this study, efficient polishing processes with inspection procedures for a large convex hyperbolic mirror of a Cassegrain optical system are presented. The polishing process combines the techniques of conventional lapping and CNC polishing. We apply the conventional spherical lapping process to quickly remove the sub-surface damage (SSD) layer caused by the grinding process and to get the accurate radius of the best-fit sphere (BFS) of the aspheric surface with fine surface texture simultaneously. Thus the removed material for the aspherization process can be minimized and the polishing time for SSD removal can also be reduced substantially. The inspection procedure was carried out by using a phase shift interferometer with CGH and stitching technique. To acquire the real surface form error of each sub-aperture, the wavefront errors of the reference flat and CGH flat due to the gravity effect of the vertical setup are calibrated in advance. Subsequently, we stitch 10 calibrated sub-aperture surface form errors to establish the whole irregularity of the mirror in 160 mm diameter for correction polishing. The final surface form error of the 160 mm convex hyperbolic mirror is 0.15 μm PV and 17.9 nm RMS.
Narayan, Sreenath; Kalhan, Satish C.; Wilson, David L.
2012-01-01
Purpose: To reduce swaps in fat-water separation methods, a particular issue on 7T small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Materials and Methods: Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Results: Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Conclusion: Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. PMID:23023815
Narayan, Sreenath; Kalhan, Satish C; Wilson, David L
2013-05-01
To reduce swaps in fat-water separation methods, a particular issue on 7 Tesla (T) small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. Copyright © 2012 Wiley Periodicals, Inc.
Lobb, M L; Stern, J A
1986-08-01
Sequential patterns of eye and eyelid motion were identified in seven subjects performing a modified serial probe recognition task under drowsy conditions. Using simultaneous EOG and video recordings, eyelid motion was divided into components above, within, and below the pupil, and the durations in sequence were recorded. A serial probe recognition task was modified to allow for distinguishing decision errors from attention errors. Decision errors were found to be more frequent following a downward shift in the gaze angle, in which the eyelid closing sequence was reduced from a five-element to a three-element sequence. The velocity of the eyelid moving over the pupil during decision errors was slow in the closing and fast in the reopening phase, while on decision-correct trials it was fast in closing and slower in reopening. Due to the high variability of eyelid motion under drowsy conditions these findings were only marginally significant. When a five-element blink occurred, the velocity of the lid-over-pupil motion component of these endogenous eye blinks was significantly faster on decision-correct than on decision-error trials. Furthermore, the highly variable, long-duration closings associated with the decision response produced slow eye movements in the horizontal plane (SEM) which were more frequent and significantly longer in duration on decision-error versus decision-correct responses.
Human Factors Engineering Guidelines for Overhead Cranes
NASA Technical Reports Server (NTRS)
Chandler, Faith; Delgado, H. (Technical Monitor)
2001-01-01
This guideline provides standards for overhead crane cabs that can be applied to the design and modification of crane cabs to reduce the potential for human error due to design. This guideline serves as an aid during the development of a specification for purchases of cranes or for an engineering support request for crane design modification. It aids human factors engineers in evaluating existing cranes during accident investigations or safety reviews.
A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose
Rahman, Mohammad Mizanur; Suksompong, Prapun; Toochinda, Pisanu; Taparugssanagorn, Attaphongse
2017-01-01
Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms that have open-ended classification boundaries, such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), is found to suffer from false classification errors of irrelevant odor data. To reduce false classification and misclassification errors, and to improve correct rejection performance, algorithms with a hyperspheric boundary, such as a radial basis function neural network (RBFNN) and generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer, should be used. The simulation results presented in this paper show that GRNN has more correct classification efficiency and false alarm reduction capability compared to RBFNN. As the design of a GRNN and RBFNN is complex and expensive due to the large numbers of neurons required, a simple hyperspheric classification method based on minimum, maximum, and mean (MMM) values of each class of the training dataset was presented. The MMM algorithm was simple and found to be fast and efficient in correctly classifying data of training classes, and correctly rejecting data of extraneous odors, and thereby reduced false alarms. PMID:28895910
A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose.
Rahman, Mohammad Mizanur; Charoenlarpnopparut, Chalie; Suksompong, Prapun; Toochinda, Pisanu; Taparugssanagorn, Attaphongse
2017-09-12
Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms that have open-ended classification boundaries, such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), is found to suffer from false classification errors of irrelevant odor data. To reduce false classification and misclassification errors, and to improve correct rejection performance, algorithms with a hyperspheric boundary, such as a radial basis function neural network (RBFNN) and generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer, should be used. The simulation results presented in this paper show that GRNN has more correct classification efficiency and false alarm reduction capability compared to RBFNN. As the design of a GRNN and RBFNN is complex and expensive due to the large numbers of neurons required, a simple hyperspheric classification method based on minimum, maximum, and mean (MMM) values of each class of the training dataset was presented. The MMM algorithm was simple and found to be fast and efficient in correctly classifying data of training classes, and correctly rejecting data of extraneous odors, and thereby reduced false alarms.
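A minimal sketch of the MMM idea as described above, with simulated data and an assumed nearest-mean tie-break: a sample is assigned to a class only if it lies inside that class's per-feature [min, max] envelope, and is otherwise rejected as an extraneous odor.

# Toy min/max/mean (MMM) classifier with rejection of unknown odors.
import numpy as np

def fit_mmm(train):
    # train: dict class_name -> (n_samples, n_features) array
    return {c: (x.min(axis=0), x.max(axis=0), x.mean(axis=0))
            for c, x in train.items()}

def classify_mmm(model, sample):
    best, best_dist = None, np.inf
    for c, (lo, hi, mean) in model.items():
        if np.all(sample >= lo) and np.all(sample <= hi):
            d = np.linalg.norm(sample - mean)   # nearest class mean wins ties
            if d < best_dist:
                best, best_dist = c, d
    return best   # None => rejected as an extraneous odor

rng = np.random.default_rng(0)
train = {"banana": rng.normal(1.0, 0.1, (50, 4)),
         "mango":  rng.normal(2.0, 0.1, (50, 4))}
model = fit_mmm(train)
print(classify_mmm(model, np.full(4, 1.02)))   # likely 'banana'
print(classify_mmm(model, np.full(4, 5.00)))   # None -> correctly rejected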
On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)
NASA Astrophysics Data System (ADS)
Huffman, G. J.
2013-12-01
Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an on-going problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of the precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5°x2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to simple Gaussian expressions at the monthly 2.5°x2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes on the scatter diagram (one of the values is exactly or nearly zero, while the other value is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, only depend on the aggregated precipitation rate. Is this sufficient, or is it necessary to aggregate the precipitation error estimates across the time/space data cube used for averaging? At least for small time/space data cubes it would seem that the detailed variables that affect each precipitation error estimate in the aggregation, such as sensor type, land/ocean surface type, convective/stratiform type, and so on, drive variations that must be accounted for explicitly.
2016-01-01
Objectives: To assess why articles are retracted from BioMed Central journals, whether retraction notices adhered to the Committee on Publication Ethics (COPE) guidelines, and whether retractions are becoming more frequent as a proportion of published articles. Design/setting: Retrospective cross-sectional analysis of 134 retractions from January 2000 to December 2015. Results: 134 retraction notices were published during this timeframe. Although they account for 0.07% of all articles published (190 514 excluding supplements, corrections, retractions and commissioned content), the rate of retraction is rising. COPE guidelines on retraction were adhered to in that an explicit reason for each retraction was given. However, some notices did not document who retracted the article (eight articles, 6%) and others were unclear whether the underlying cause was honest error or misconduct (15 articles, 11%). The largest proportion of notices was issued by the authors (47 articles, 35%). The majority of retractions were due to some form of misconduct (102 articles, 76%), that is, compromised peer review (44 articles, 33%), plagiarism (22 articles, 16%) and data falsification/fabrication (10 articles, 7%). Honest error accounted for 17 retractions (13%), of which 10 articles (7%) were published in error. The median number of days from publication to retraction was 337.5 days. Conclusions: The most common reason to retract was compromised peer review. However, the majority of these cases date to March 2015 and appear to be the result of a systematic attempt to manipulate peer review across several publishers. Retractions due to plagiarism account for the second largest category and may be reduced by screening manuscripts before publication, although this is not guaranteed. Retractions due to problems with the data may be reduced by appropriate data sharing and deposition before publication. Adopting a checklist (linked to COPE guidelines) and templates for various classes of retraction notices would increase transparency of retraction notices in future. PMID:27881524
A steep peripheral ring in irregular cornea topography, real or an instrument error?
Galindo-Ferreiro, Alicia; Galvez-Ruiz, Alberto; Schellini, Silvana A; Galindo-Alonso, Julio
2016-01-01
To demonstrate that the steep peripheral ring (red zone) on corneal topography after myopic laser in situ keratomileusis (LASIK) could possibly be due to instrument error and not always to a real increase in corneal curvature. A spherical model of the corneal surface and modified topography software were used to analyze the cause of an error due to instrument design. This study involved modification of the software of a commercially available topographer. A small modification of the topography image results in a red zone on the corneal topography color map. Corneal modeling indicates that the red zone could be an artifact due to an instrument-induced error. The steep curvature change after LASIK signified by the red zone could therefore also be an error due to the plotting algorithms of the corneal topographer, rather than a real steepening of the curvature.
Using warnings to reduce categorical false memories in younger and older adults.
Carmichael, Anna M; Gutchess, Angela H
2016-07-01
Warnings about memory errors can reduce their incidence, although past work has largely focused on associative memory errors. The current study sought to explore whether warnings could be tailored to specifically reduce false recall of categorical information in both younger and older populations. Before encoding word pairs designed to induce categorical false memories, half of the younger and older participants were warned to avoid committing these types of memory errors. Older adults who received a warning committed fewer categorical memory errors, as well as other types of semantic memory errors, than those who did not receive a warning. In contrast, young adults' memory errors did not differ for the warning versus no-warning groups. Our findings provide evidence for the effectiveness of warnings at reducing categorical memory errors in older adults, perhaps by supporting source monitoring, reduction in reliance on gist traces, or through effective metacognitive strategies.
Flexible, multi-measurement guided wave damage detection under varying temperatures
NASA Astrophysics Data System (ADS)
Douglass, Alexander C. S.; Harley, Joel B.
2018-04-01
Temperature compensation in structural health monitoring helps identify damage in a structure by removing data variations due to environmental conditions, such as temperature. Stretch-based methods are one of the most commonly used temperature compensation methods. To account for variations in temperature, stretch-based methods stretch signals in time to optimally match a measurement to a baseline. All of the data are then compared with the single baseline to determine the presence of damage. Yet, for these methods to be effective, the measurement and the baseline must satisfy the inherent assumptions of the temperature compensation method. In many scenarios, these assumptions are wrong, the methods generate error, and damage detection fails. To improve damage detection, a multi-measurement damage detection method is introduced. By using each measurement in the dataset as a baseline, error caused by imperfect temperature compensation is reduced. The multi-measurement method increases the detection effectiveness of our damage metric, or damage indicator, over time and reduces the presence of additional peaks caused by temperature that could be mistaken for damage. By using many baselines, the variance of the damage indicator is reduced and the effects from damage are amplified. Notably, the multi-measurement method improves damage detection over single-measurement methods. This is demonstrated through an increase in the maximum of our damage signature from 0.55 to 0.95 (where large values, up to a maximum of one, represent a statistically significant change in the data due to damage).
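A small illustration of the stretch-based compensation step the abstract builds on, under assumed signal shapes: the measurement is resampled over a grid of candidate stretch factors and the factor giving the highest correlation with a chosen baseline is kept.

# Grid search over temporal stretch factors against a baseline waveform.
import numpy as np

def best_stretch(baseline, measurement, factors):
    t = np.arange(len(baseline))
    best_f, best_corr, best_sig = 1.0, -np.inf, measurement
    for f in factors:
        stretched = np.interp(t, t * f, measurement)   # resample in time
        corr = np.corrcoef(baseline, stretched)[0, 1]
        if corr > best_corr:
            best_f, best_corr, best_sig = f, corr, stretched
    return best_f, best_corr, best_sig

t = np.linspace(0, 1, 500)
baseline = np.sin(2 * np.pi * 50 * t) * np.exp(-3 * t)
measurement = np.interp(t, t * 1.01, baseline)         # simulated 1% thermal stretch
f, corr, _ = best_stretch(baseline, measurement, np.linspace(0.98, 1.02, 81))
# prints roughly 0.99, i.e. the factor that undoes the simulated 1% stretch
print(f"estimated stretch factor: {f:.4f}, correlation: {corr:.4f}")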
Model Error Estimation for the CPTEC Eta Model
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; daSilva, Arlindo
1999-01-01
Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead-times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.
The best of both worlds: automated CMP polishing of channel-cut monochromators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kasman, Elina; Erdmann, Mark; Stoupin, Stanislav
2015-09-03
The use of a channel-cut monochromator is the most straightforward method to ensure that the two reflection surfaces maintain alignment between crystallographic planes without the need for complicated alignment mechanisms. Three basic characteristics that affect monochromator performance are: subsurface damage, which contaminates spectral purity; surface roughness, which reduces efficiency due to scattering; and surface figure error, which imparts intensity structure and coherence distortion in the beam. Standard chemical-mechanical polishing processes and equipment are used when the diffracting surface is easily accessible, such as for single-bounce monochromators. Due to the inaccessibility of the surfaces inside a channel-cut monochromator for polishing, these optics are generally wet-etched for their final processing. This results in minimal subsurface damage, but very poor roughness and figure error. A new CMP channel polishing instrument design is presented which allows the internal diffracting surface quality of channel-cut crystals to approach that of conventional single-bounce monochromators.
Development of a new instrument for direct skin friction measurements
NASA Technical Reports Server (NTRS)
Vakili, A. D.; Wu, J. M.
1986-01-01
A device developed for the direct measurement of wall shear stress generated by flows is described. Simple and symmetric in design with optional small moving mass and no internal friction, the features employed in the design eliminate most of the difficulties associated with the traditional floating element balances. The device is basically small and can be made in various sizes. Vibration problems associated with the floating element skin friction balances were found to be minimized due to the design symmetry and optional damping provided. The design eliminates or reduces the errors associated with conventional floating element devices: such as errors due to gaps, pressure gradient, acceleration, heat transfer, and temperature change. The instrument is equipped with various sensing systems and the output signal is a linear function of the wall shear stress. Dynamic measurements could be made in a limited range and measurements in liquids could be performed readily. Measurement made in the three different tunnels show excellent agreement with data obtained by the floating element devices and other techniques.
A general model for attitude determination error analysis
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Seidewitz, ED; Nicholson, Mark
1988-01-01
An overview is given of a comprehensive approach to filter and dynamics modeling for attitude determination error analysis. The models presented include both batch least-squares and sequential attitude estimation processes for both spin-stabilized and three-axis stabilized spacecraft. The discussion includes a brief description of a dynamics model of strapdown gyros, but it does not cover other sensor models. Model parameters can be chosen to be solve-for parameters, which are assumed to be estimated as part of the determination process, or consider parameters, which are assumed to have errors but not to be estimated. The only restriction on this choice is that the time evolution of the consider parameters must not depend on any of the solve-for parameters. The result of an error analysis is an indication of the contributions of the various error sources to the uncertainties in the determination of the spacecraft solve-for parameters. The model presented gives the uncertainty due to errors in the a priori estimates of the solve-for parameters, the uncertainty due to measurement noise, the uncertainty due to dynamic noise (also known as process noise), the uncertainty due to the consider parameters, and the overall uncertainty due to all these sources of error.
NASA Technical Reports Server (NTRS)
Wiese, D. N.; Nerem, R. S.; Lemoine, F. G.
2011-01-01
Future satellite missions dedicated to measuring time-variable gravity will need to address the concern of temporal aliasing errors; i.e., errors due to high-frequency mass variations. These errors have been shown to be a limiting error source for future missions with improved sensors. One method of reducing them is to fly multiple satellite pairs, thus increasing the sampling frequency of the mission. While one could imagine a system architecture consisting of dozens of satellite pairs, this paper explores the more economically feasible option of optimizing the orbits of two pairs of satellites. While the search space for this problem is infinite by nature, steps have been made to reduce it via proper assumptions regarding some parameters and a large number of numerical simulations exploring appropriate ranges for other parameters. A search space originally consisting of 15 variables is reduced to two variables with the utmost impact on mission performance: the repeat period of both pairs of satellites (shown to be near-optimal when they are equal to each other), as well as the inclination of one of the satellite pairs (the other pair is assumed to be in a polar orbit). To arrive at this conclusion, we assume circular orbits, repeat groundtracks for both pairs of satellites, a 100-km inter-satellite separation distance, and a minimum allowable operational satellite altitude of 290 km based on a projected 10-year mission lifetime. Given the scientific objectives of determining time-variable hydrology, ice mass variations, and ocean bottom pressure signals with higher spatial resolution, we find that an optimal architecture consists of a polar pair of satellites coupled with a pair inclined at 72deg, both in 13-day repeating orbits. This architecture provides a 67% reduction in error over one pair of satellites, in addition to reducing the longitudinal striping to such a level that minimal post-processing is required, permitting a substantial increase in the spatial resolution of the gravity field products. It should be emphasized that given different sets of scientific objectives for the mission, or a different minimum allowable satellite altitude, different architectures might be selected.
Chana, Narinder; Porat, Talya; Whittlesea, Cate; Delaney, Brendan
2017-03-01
Electronic prescribing has benefited from computerised clinical decision support systems (CDSSs); however, no published studies have evaluated the potential for a CDSS to support GPs in prescribing specialist drugs. To identify potential weaknesses and errors in the existing process of prescribing specialist drugs that could be addressed in the development of a CDSS. Semi-structured interviews with key informants followed by an observational study involving GPs in the UK. Twelve key informants were interviewed to investigate the use of CDSSs in the UK. Nine GPs were observed while performing case scenarios depicting requests from hospitals or patients to prescribe a specialist drug. Activity diagrams, hierarchical task analysis, and systematic human error reduction and prediction approach analyses were performed. The current process of prescribing specialist drugs by GPs is prone to error. Errors of omission due to lack of information were the most common errors, which could potentially result in a GP prescribing a specialist drug that should only be prescribed in hospitals, or prescribing a specialist drug without reference to a shared care protocol. Half of all possible errors in the prescribing process had a high probability of occurrence. A CDSS supporting GPs during the process of prescribing specialist drugs is needed. This could, first, support the decision making of whether or not to undertake prescribing, and, second, provide drug-specific parameters linked to shared care protocols, which could reduce the errors identified and increase patient safety. © British Journal of General Practice 2017.
Govindarajan, R; Llueguera, E; Melero, A; Molero, J; Soler, N; Rueda, C; Paradinas, C
2010-01-01
Statistical Process Control (SPC) was applied to monitor patient set-up in radiotherapy and, when the measured set-up error values indicated a loss of process stability, its root cause was identified and eliminated to prevent set-up errors. Set-up errors were measured for the medial-lateral (ml), cranial-caudal (cc) and anterior-posterior (ap) dimensions and then the upper control limits were calculated. Once the control limits were known and the range variability was acceptable, treatment set-up errors were monitored using sub-groups of 3 patients, three times each shift. These values were plotted on a control chart in real time. Control limit values showed that the existing variation was acceptable. Set-up errors, measured and plotted on an X chart, helped monitor the set-up process stability and, if and when the stability was lost, treatment was interrupted, the particular cause responsible for the non-random pattern was identified and corrective action was taken before proceeding with the treatment. The SPC protocol focuses on controlling the variability due to assignable causes instead of focusing on patient-to-patient variability, which normally does not exist. Compared to weekly sampling of set-up error in each and every patient, which may only ensure that just those sampled sessions were set-up correctly, the SPC method enables set-up error prevention in all treatment sessions for all patients and, at the same time, reduces the control costs. Copyright © 2009 SECA. Published by Elsevier Espana. All rights reserved.
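An illustrative X-bar/R control-chart calculation for subgroups of 3, in the spirit of the protocol above; the simulated set-up errors are assumptions, and A2, D3, D4 are the standard chart constants for a subgroup size of 3, not values from the clinic.

# X-bar / R chart limits for set-up error monitored in subgroups of 3 patients.
import numpy as np

A2, D3, D4 = 1.023, 0.0, 2.574                    # control-chart constants for n = 3
rng = np.random.default_rng(1)
subgroups = rng.normal(0.0, 2.0, size=(30, 3))    # simulated set-up error in mm

xbar = subgroups.mean(axis=1)
ranges = subgroups.max(axis=1) - subgroups.min(axis=1)
xbar_bar, r_bar = xbar.mean(), ranges.mean()

ucl_x, lcl_x = xbar_bar + A2 * r_bar, xbar_bar - A2 * r_bar
ucl_r, lcl_r = D4 * r_bar, D3 * r_bar

out_of_control = np.where((xbar > ucl_x) | (xbar < lcl_x))[0]
print(f"X-bar limits: [{lcl_x:.2f}, {ucl_x:.2f}] mm, R upper limit: {ucl_r:.2f} mm")
print("subgroups signalling an assignable cause:", out_of_control)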
Brzozek, Christopher; Benke, Kurt K; Zeleke, Berihun M; Abramson, Michael J; Benke, Geza
2018-03-26
Uncertainty in experimental studies of exposure to radiation from mobile phones has in the past only been framed within the context of statistical variability. It is now becoming more apparent to researchers that epistemic or reducible uncertainties can also affect the total error in results. These uncertainties are derived from a wide range of sources including human error, such as data transcription, model structure, measurement and linguistic errors in communication. The issue of epistemic uncertainty is reviewed and interpreted in the context of the MoRPhEUS, ExPOSURE and HERMES cohort studies which investigate the effect of radiofrequency electromagnetic radiation from mobile phones on memory performance. Research into this field has found inconsistent results due to limitations from a range of epistemic sources. Potential analytic approaches are suggested based on quantification of epistemic error using Monte Carlo simulation. It is recommended that future studies investigating the relationship between radiofrequency electromagnetic radiation and memory performance pay more attention to treatment of epistemic uncertainties as well as further research into improving exposure assessment. Use of directed acyclic graphs is also encouraged to display the assumed covariate relationship.
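A minimal sketch of the Monte Carlo quantification suggested above: an epistemic source (here, an assumed recall-bias interval and a dosimetric model-error interval) is sampled alongside the aleatory spread and propagated to an exposure index. All distributions and numbers are illustrative assumptions.

# Monte Carlo propagation of epistemic and aleatory uncertainty to exposure.
import numpy as np

rng = np.random.default_rng(42)
n = 100000

reported_minutes = rng.lognormal(mean=3.0, sigma=0.6, size=n)   # aleatory spread
recall_bias = rng.uniform(0.7, 1.3, size=n)        # epistemic: recall error factor
dose_per_minute = rng.uniform(0.8, 1.2, size=n)    # epistemic: dosimetric model error

exposure = reported_minutes * recall_bias * dose_per_minute
lo, hi = np.percentile(exposure, [2.5, 97.5])
print(f"median exposure index: {np.median(exposure):.1f}")
print(f"95% uncertainty interval: [{lo:.1f}, {hi:.1f}]")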
A study on characteristics of retrospective optimal interpolation with WRF testbed
NASA Astrophysics Data System (ADS)
Kim, S.; Noh, N.; Lim, G.
2012-12-01
This study presents the application of retrospective optimal interpolation (ROI) with the Weather Research and Forecasting (WRF) model. Song et al. (2009) suggested the ROI method, an optimal interpolation (OI) that gradually assimilates observations over the analysis window for a variance-minimum estimate of the atmospheric state at the initial time of the analysis window. Song and Lim (2011) improved the method by incorporating eigen-decomposition and covariance inflation. The ROI method assimilates data at post-analysis times using a perturbation method (Errico and Raeder, 1999) without an adjoint model. In this study, the ROI method is applied to the WRF model to validate the algorithm and to investigate its capability. The computational costs for ROI can be reduced due to the eigen-decomposition of the background error covariance. Using the background error covariance in eigen-space, a 1-profile assimilation experiment is performed. The difference between forecast errors with and without assimilation obviously increases as time passes, which indicates that assimilation improves the forecast. The characteristics and strengths/weaknesses of the ROI method are investigated by conducting experiments with other data assimilation methods.
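For reference, a minimal optimal-interpolation analysis step of the kind ROI generalizes, with toy sizes and covariances (not the WRF configuration of the study): the background state is nudged toward the observations with a weight set by the background and observation error covariances.

# One optimal-interpolation (OI) analysis update on a toy state.
import numpy as np

n, p = 5, 2                                   # state and observation sizes
B = 0.5 * np.eye(n) + 0.1                     # background error covariance (toy)
R = 0.2 * np.eye(p)                           # observation error covariance (toy)
H = np.zeros((p, n)); H[0, 1] = H[1, 3] = 1.0 # observe two state components

x_b = np.zeros(n)                             # background state
y = np.array([1.0, -0.5])                     # observations

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain
x_a = x_b + K @ (y - H @ x_b)                 # analysis
A = (np.eye(n) - K @ H) @ B                   # analysis error covariance
print("analysis state:", np.round(x_a, 3))
print("error covariance trace reduced from", round(np.trace(B), 2), "to", round(np.trace(A), 2))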
Optimized 3D stitching algorithm for whole body SPECT based on transition error minimization (TEM)
NASA Astrophysics Data System (ADS)
Cao, Xinhua; Xu, Xiaoyin; Voss, Stephan
2017-02-01
Standard Single Photon Emission Computed Tomography (SPECT) has a limited field of view (FOV) and cannot provide a 3D image of the entire body in a single acquisition. To produce a 3D whole body SPECT image, two to five overlapped SPECT FOVs from head to foot are acquired and assembled using image stitching. Most commercial software from medical imaging manufacturers applies a direct mid-slice stitching method to avoid blurring or ghosting from 3D image blending. Due to intensity changes across the middle slice of overlapped images, direct mid-slice stitching often produces visible seams in the coronal and sagittal views and maximal intensity projection (MIP). In this study, we proposed an optimized algorithm to reduce the visibility of stitching edges. The new algorithm computed, based on transition error minimization (TEM), a 3D stitching interface between two overlapped 3D SPECT images. To test the suggested algorithm, four studies of 2-FOV whole body SPECT were used and included two different reconstruction methods (filtered back projection (FBP) and ordered subset expectation maximization (OSEM)) as well as two different radiopharmaceuticals (Tc-99m MDP for bone metastases and I-131 MIBG for neuroblastoma tumors). Relative transition errors of stitched whole body SPECT using mid-slice stitching and the TEM-based algorithm were measured for objective evaluation. Preliminary experiments showed that the new algorithm reduced the visibility of the stitching interface in the coronal, sagittal, and MIP views. Average relative transition errors were reduced from 56.7% with mid-slice stitching to 11.7% with TEM-based stitching. The proposed algorithm also avoids blurring artifacts by preserving the noise properties of the original SPECT images.
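A hedged sketch of one plausible reading of transition-error minimization (not necessarily the authors' exact algorithm): instead of cutting at the fixed middle slice, each transaxial position selects the slice where the two overlapped reconstructions differ least, producing a 3D stitching interface.

# Toy per-column stitching interface chosen where the overlapped volumes agree best.
import numpy as np

def tem_stitch(top, bottom):
    # top, bottom: overlapping volumes with identical shape (nz, ny, nx)
    diff = np.abs(top - bottom)
    cut = diff.argmin(axis=0)                 # best transition slice per (row, column)
    z = np.arange(top.shape[0])[:, None, None]
    stitched = np.where(z < cut[None], top, bottom)
    return stitched, cut

rng = np.random.default_rng(3)
top = rng.poisson(100, (16, 32, 32)).astype(float)
bottom = top + rng.normal(0, 5, (16, 32, 32))      # same anatomy, different noise
vol, interface = tem_stitch(top, bottom)
print("stitching interface slice range:", interface.min(), "-", interface.max())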
Optimizing correlation techniques for improved earthquake location
Schaff, D.P.; Bokelmann, G.H.R.; Ellsworth, W.L.; Zanzerkia, E.; Waldhauser, F.; Beroza, G.C.
2004-01-01
Earthquake location using relative arrival time measurements can lead to dramatically reduced location errors and a view of fault-zone processes with unprecedented detail. There are two principal reasons why this approach reduces location errors. The first is that the use of differenced arrival times to solve for the vector separation of earthquakes removes from the earthquake location problem much of the error due to unmodeled velocity structure. The second reason, on which we focus in this article, is that waveform cross correlation can substantially reduce measurement error. While cross correlation has long been used to determine relative arrival times with subsample precision, we extend correlation measurements to less similar waveforms, and we introduce a general quantitative means to assess when correlation data provide an improvement over catalog phase picks. We apply the technique to local earthquake data from the Calaveras Fault in northern California. Tests for an example streak of 243 earthquakes demonstrate that relative arrival times with normalized cross correlation coefficients as low as ~70%, interevent separation distances as large as 2 km, and magnitudes up to 3.5 as recorded on the Northern California Seismic Network are more precise than relative arrival times determined from catalog phase data. Also discussed are improvements made to the correlation technique itself. We find that for large time offsets, our implementation of time-domain cross correlation is often more robust and that it recovers more observations than the cross spectral approach. Longer time windows give better results than shorter ones. Finally, we explain how thresholds and empirical weighting functions may be derived to optimize the location procedure for any given region of interest, taking advantage of the respective strengths of diverse correlation and catalog phase data on different length scales.
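A small sketch of the relative arrival-time measurement discussed above, on synthetic waveforms: time-domain cross correlation with a parabolic fit around the peak gives a delay with subsample precision, together with the normalized correlation coefficient used for quality control.

# Relative delay between two waveforms via cross correlation with subsample interpolation.
import numpy as np

def relative_delay(w1, w2, dt):
    cc = np.correlate(w2, w1, mode="full")
    k = cc.argmax()
    lag = k - (len(w1) - 1)
    if 0 < k < len(cc) - 1:                      # parabolic peak interpolation
        y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
        lag += 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    ncc = cc[k] / np.sqrt(np.dot(w1, w1) * np.dot(w2, w2))
    return lag * dt, ncc

dt = 0.01                                        # 100 Hz sampling
t = np.arange(0, 4, dt)

def wavelet(tau):
    return np.exp(-((t - tau) / 0.2) ** 2) * np.sin(2 * np.pi * 5 * (t - tau))

w1, w2 = wavelet(1.0), wavelet(1.237)            # true delay 0.237 s
delay, ncc = relative_delay(w1, w2, dt)
print(f"measured delay: {delay:.4f} s, normalized cc: {ncc:.3f}")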
Nadar, M Y; Akar, D K; Rao, D D; Kulkarni, M S; Pradeepkumar, K S
2015-12-01
Assessment of intake due to long-lived actinides by the inhalation pathway is carried out by lung monitoring of radiation workers inside a totally shielded steel room using sensitive detection systems such as a Phoswich and an array of HPGe detectors. In this paper, uncertainties in the lung activity estimation due to positional errors, chest wall thickness (CWT) and detector background variation are evaluated. First, calibration factors (CFs) of the Phoswich and an array of three HPGe detectors are estimated by incorporating the ICRP male thorax voxel phantom and the detectors in the Monte Carlo code 'FLUKA'. CFs are estimated for a uniform source distribution in the lungs of the phantom for various photon energies. The variation in the CFs for positional errors of ±0.5, 1 and 1.5 cm in the horizontal and vertical directions along the chest is studied. The positional errors are also evaluated by resizing the voxel phantom. Combined uncertainties are estimated at different energies using the uncertainties due to CWT, detector positioning, detector background variation of an uncontaminated adult person and counting statistics in the form of scattering factors (SFs). SFs are found to decrease with increase in energy. With the HPGe array, the highest SF of 1.84 is found at 18 keV. It reduces to 1.36 at 238 keV. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions
NASA Astrophysics Data System (ADS)
McCullough, Christopher; Bettadpur, Srinivas
2015-04-01
In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
Adamo, Margaret Peggy; Boten, Jessica A; Coyle, Linda M; Cronin, Kathleen A; Lam, Clara J K; Negoita, Serban; Penberthy, Lynne; Stevens, Jennifer L; Ward, Kevin C
2017-02-15
Researchers have used prostate-specific antigen (PSA) values collected by central cancer registries to evaluate tumors for potential aggressive clinical disease. An independent study collecting PSA values suggested a high error rate (18%) related to implied decimal points. To evaluate the error rate in the Surveillance, Epidemiology, and End Results (SEER) program, a comprehensive review of PSA values recorded across all SEER registries was performed. Consolidated PSA values for eligible prostate cancer cases in SEER registries were reviewed and compared with text documentation from abstracted records. Four types of classification errors were identified: implied decimal point errors, abstraction or coding implementation errors, nonsignificant errors, and changes related to "unknown" values. A total of 50,277 prostate cancer cases diagnosed in 2012 were reviewed. Approximately 94.15% of cases did not have meaningful changes (85.85% correct, 5.58% with a nonsignificant change of <1 ng/mL, and 2.80% with no clinical change). Approximately 5.70% of cases had meaningful changes (1.93% due to implied decimal point errors, 1.54% due to abstract or coding errors, and 2.23% due to errors related to unknown categories). Only 419 of the original 50,277 cases (0.83%) resulted in a change in disease stage due to a corrected PSA value. The implied decimal error rate was only 1.93% of all cases in the current validation study, with a meaningful error rate of 5.81%. The reasons for the lower error rate in SEER are likely due to ongoing and rigorous quality control and visual editing processes by the central registries. The SEER program currently is reviewing and correcting PSA values back to 2004 and will re-release these data in the public use research file. Cancer 2017;123:697-703. © 2016 American Cancer Society. © 2016 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.
Mistake proofing: changing designs to reduce error
Grout, J R
2006-01-01
Mistake proofing uses changes in the physical design of processes to reduce human error. It can be used to change designs in ways that prevent errors from occurring, to detect errors after they occur but before harm occurs, to allow processes to fail safely, or to alter the work environment to reduce the chance of errors. Effective mistake proofing design changes should initially be effective in reducing harm, be inexpensive, and easily implemented. Over time these design changes should make life easier and speed up the process. Ideally, the design changes should increase patients' and visitors' understanding of the process. These designs should themselves be mistake proofed and follow the good design practices of other disciplines. PMID:17142609
Performance of concatenated Reed-Solomon/Viterbi channel coding
NASA Technical Reports Server (NTRS)
Divsalar, D.; Yuen, J. H.
1982-01-01
The concatenated Reed-Solomon (RS)/Viterbi coding system is reviewed. The performance of the system is analyzed and results are derived with a new, simple approach. A functional model for the input RS symbol error probability is presented. Based on this new functional model, we compute the performance of a concatenated system in terms of RS word error probability, output RS symbol error probability, bit error probability due to decoding failure, and bit error probability due to decoding error. Finally, we analyze the effects of a noisy carrier reference and of slow fading on the system performance.
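A back-of-envelope version of the outer-code calculation referred to above, assuming independent symbol errors at the Reed-Solomon input (an idealization; the paper's functional model exists precisely because Viterbi output errors are bursty). The (255, 223) code is the familiar deep-space choice and is used here only as an example.

# RS word error probability for an (n, k) code under independent symbol errors.
from math import comb

def rs_word_error_prob(n, k, p_symbol):
    t = (n - k) // 2                      # symbol-error-correcting capability
    return sum(comb(n, i) * p_symbol**i * (1 - p_symbol)**(n - i)
               for i in range(t + 1, n + 1))

for p in (1e-2, 2e-2, 3e-2):
    print(f"p_symbol = {p:.0e}  ->  P_word = {rs_word_error_prob(255, 223, p):.3e}")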
Vélez-Díaz-Pallarés, Manuel; Delgado-Silveira, Eva; Carretero-Accame, María Emilia; Bermejo-Vicedo, Teresa
2013-01-01
To identify actions to reduce medication errors in the process of drug prescription, validation and dispensing, and to evaluate the impact of their implementation. A Health Care Failure Mode and Effect Analysis (HFMEA) was supported by a before-and-after medication error study to measure the actual impact on error rate after the implementation of corrective actions in the process of drug prescription, validation and dispensing in wards equipped with computerised physician order entry (CPOE) and unit-dose distribution system (788 beds out of 1080) in a Spanish university hospital. The error study was carried out by two observers who reviewed medication orders on a daily basis to register prescription errors by physicians and validation errors by pharmacists. Drugs dispensed in the unit-dose trolleys were reviewed for dispensing errors. Error rates were expressed as the number of errors for each process divided by the total opportunities for error in that process times 100. A reduction in prescription errors was achieved by providing training for prescribers on CPOE, updating prescription procedures, improving clinical decision support and automating the software connection to the hospital census (relative risk reduction (RRR), 22.0%; 95% CI 12.1% to 31.8%). Validation errors were reduced after optimising time spent in educating pharmacy residents on patient safety, developing standardised validation procedures and improving aspects of the software's database (RRR, 19.4%; 95% CI 2.3% to 36.5%). Two actions reduced dispensing errors: reorganising the process of filling trolleys and drawing up a protocol for drug pharmacy checking before delivery (RRR, 38.5%; 95% CI 14.1% to 62.9%). HFMEA facilitated the identification of actions aimed at reducing medication errors in a healthcare setting, as the implementation of several of these led to a reduction in errors in the process of drug prescription, validation and dispensing.
Local position control: A new concept for control of manipulators
NASA Technical Reports Server (NTRS)
Kelly, Frederick A.
1988-01-01
Resolved motion rate control is currently one of the most frequently used methods of manipulator control. It is currently used in the Space Shuttle remote manipulator system (RMS) and in prosthetic devices. Position control is predominantly used for locating the end-effector of an industrial manipulator along a path with prescribed timing. In industrial applications, resolved motion rate control is inappropriate since position error accumulates; this is due to velocity being the control variable. In some applications this property is an advantage rather than a disadvantage: it may be more important for motion to end as soon as the input command is removed than to reduce the position error to zero. Local position control is a new concept for manipulator control which retains the important properties of resolved motion rate control but reduces the drift. Local position control can be considered to be a generalization of resolved position and resolved rate control. It places both control schemes on a common mathematical basis.
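A toy planar two-link illustration of the behaviour contrasted above, with assumed geometry, gains and target: a commanded Cartesian velocity is mapped to joint rates through the Jacobian pseudoinverse, and driving that command from the measured position error is what keeps the error from accumulating; integrating open-loop rates alone would let it drift.

# Resolved-rate stepping of a planar 2-link arm with position-error feedback.
import numpy as np

L1, L2 = 1.0, 1.0

def fk(q):
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0]+q[1]), np.cos(q[0]+q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

q = np.array([0.3, 0.8])
target = fk(q) + np.array([0.2, -0.1])
dt, kp = 0.01, 5.0
for _ in range(500):
    err = target - fk(q)                        # measured position error
    v_cmd = kp * err                            # commanded end-effector velocity
    q = q + dt * np.linalg.pinv(jacobian(q)) @ v_cmd
print("remaining position error:", np.linalg.norm(target - fk(q)))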
NASA Astrophysics Data System (ADS)
Bechet, P.; Mitran, R.; Munteanu, M.
2013-08-01
Non-contact methods for the assessment of vital signs are of great interest to specialists due to the benefits obtained in both medical and special applications, such as those for surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized to accurately assess the heart rate during an 8-28 s time interval. The validation of the processing algorithm performance was achieved by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. In order to calculate the error, the reference value of the heart rate was measured using a classic measurement system through direct contact.
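A sketch of the MUSIC step described above on a simulated cardiac signal; the signal model, window length and signal-subspace dimension are assumptions for illustration: the autocorrelation matrix is eigendecomposed, the noise subspace is kept, and the pseudospectrum peak in a plausible heart-rate band gives the estimate.

# MUSIC pseudospectrum peak used as a heart-rate estimate on a noisy sinusoid.
import numpy as np

fs, dur, f_heart = 50.0, 20.0, 1.2        # Hz, s, Hz (72 bpm), all assumed
t = np.arange(0, dur, 1/fs)
rng = np.random.default_rng(7)
x = np.sin(2*np.pi*f_heart*t) + 0.8*rng.standard_normal(t.size)

m, p = 40, 2                              # correlation-matrix order, signal-subspace size
X = np.lib.stride_tricks.sliding_window_view(x, m)
R = (X.T @ X) / X.shape[0]                # sample autocorrelation matrix
w, V = np.linalg.eigh(R)                  # eigenvalues ascending
En = V[:, :m-p]                           # noise subspace

freqs = np.linspace(0.5, 3.0, 501)        # plausible heart-rate band in Hz
steer = np.exp(-2j*np.pi*np.outer(freqs, np.arange(m))/fs)
pseudo = 1.0 / np.linalg.norm(steer @ En, axis=1)**2
print(f"estimated heart rate: {freqs[pseudo.argmax()]*60:.1f} bpm")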
Selectively Encrypted Pull-Up Based Watermarking of Biometric data
NASA Astrophysics Data System (ADS)
Shinde, S. A.; Patel, Kushal S.
2012-10-01
Biometric authentication systems are becoming increasingly popular due to their potential usage in information security. However, digital biometric data (e.g. a thumb impression) are themselves vulnerable to security attacks. Various methods are available to secure biometric data. In biometric watermarking the data are embedded in an image container and are only retrieved if the secret key is available. The container image is encrypted to provide additional security against attack. As wireless devices are battery powered, they have limited computational capabilities; therefore, to reduce energy consumption we use the method of selective encryption of the container image. The bit pull-up-based biometric watermarking scheme is based on amplitude modulation and bit priority, which reduces the retrieval error rate to a great extent. By using a selective encryption mechanism we expect greater time efficiency during both encryption and decryption. A significant reduction in error rate is expected to be achieved by the bit pull-up method.
A channel dynamics model for real-time flood forecasting
Hoos, Anne B.; Koussis, Antonis D.; Beale, Guy O.
1989-01-01
A new channel dynamics scheme (alternative system predictor in real time (ASPIRE)), designed specifically for real-time river flow forecasting, is introduced to reduce uncertainty in the forecast. ASPIRE is a storage routing model that limits the influence of catchment model forecast errors to the downstream station closest to the catchment. Comparisons with the Muskingum routing scheme in field tests suggest that the ASPIRE scheme can provide more accurate forecasts, probably because discharge observations are used to a maximum advantage and routing reaches (and model errors in each reach) are uncoupled. Using ASPIRE in conjunction with the Kalman filter did not improve forecast accuracy relative to a deterministic updating procedure. Theoretical analysis suggests that this is due to a large process noise to measurement noise ratio.
Distance Measurement Error in Time-of-Flight Sensors Due to Shot Noise
Illade-Quinteiro, Julio; Brea, Víctor M.; López, Paula; Cabello, Diego; Doménech-Asensi, Gines
2015-01-01
Unlike other noise sources, which can be reduced or eliminated by different signal processing techniques, shot noise is an ever-present noise component in any imaging system. In this paper, we present an in-depth study of the impact of shot noise on time-of-flight sensors in terms of the error introduced in the distance estimation. The paper addresses the effect of parameters, such as the size of the photosensor, the background and signal power or the integration time, and the resulting design trade-offs. The study is demonstrated with different numerical examples, which show that, in general, the phase-shift determination technique with two background measurements is the most suitable for pixel arrays of large resolution. PMID:25723141
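A Monte Carlo illustration of the effect studied above, using the common four-tap continuous-wave demodulation model (an assumption, not necessarily the paper's sensor model): Poisson shot noise on the taps propagates through the phase estimate into a distance error.

# Shot-noise-limited distance error of a 4-tap CW time-of-flight pixel.
import numpy as np

c, f_mod = 3e8, 20e6                      # speed of light, modulation frequency
d_true = 2.5                              # metres
phi = 4*np.pi*f_mod*d_true/c              # true phase shift
signal, background = 2000.0, 4000.0       # mean electrons per tap (assumed)

rng = np.random.default_rng(0)
n = 100000
taps = np.stack([rng.poisson(background + signal*np.cos(phi - k*np.pi/2), n)
                 for k in range(4)]).astype(float)
phase = np.arctan2(taps[1] - taps[3], taps[0] - taps[2]) % (2*np.pi)
d_est = c*phase/(4*np.pi*f_mod)
print(f"mean estimate: {d_est.mean():.3f} m, shot-noise std: {d_est.std()*100:.2f} cm")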
IMPROVED SPECTROPHOTOMETRIC CALIBRATION OF THE SDSS-III BOSS QUASAR SAMPLE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margala, Daniel; Kirkby, David; Dawson, Kyle
2016-11-10
We present a model for spectrophotometric calibration errors in observations of quasars from the third generation of the Sloan Digital Sky Survey Baryon Oscillation Spectroscopic Survey (BOSS) and describe the correction procedure we have developed and applied to this sample. Calibration errors are primarily due to atmospheric differential refraction and guiding offsets during each exposure. The corrections potentially reduce the systematics for any studies of BOSS quasars, including the measurement of baryon acoustic oscillations using the Ly α forest. Our model suggests that, on average, the observed quasar flux in BOSS is overestimated by ∼19% at 3600 Å and underestimated by ∼24% at 10,000 Å. Our corrections for the entire BOSS quasar sample are publicly available.
Six reasons why thermospheric measurements and models disagree
NASA Technical Reports Server (NTRS)
Moe, Kenneth
1987-01-01
The differences between thermospheric measurements and models are discussed. Sometimes the model is in error and at other times the measurements are, but it is also possible for both to be correct, yet have the comparison result in an apparent disagreement. These reasons for disagreement are collected and, whenever possible, methods of reducing or eliminating them are suggested. The six causes of disagreement discussed are: actual errors caused by the limited knowledge of gas-surface interactions and by in-track winds; limitations of the thermospheric general circulation models due to incomplete knowledge of the energy sources and sinks as well as incompleteness of the parameterization which must be employed; and limitations imposed on the empirical models by the conceptual framework and the transient waves.
Quantum stopwatch: how to store time in a quantum memory.
Yang, Yuxiang; Chiribella, Giulio; Hayashi, Masahito
2018-05-01
Quantum mechanics imposes a fundamental trade-off between the accuracy of time measurements and the size of the systems used as clocks. When the measurements of different time intervals are combined, the errors due to the finite clock size accumulate, resulting in an overall inaccuracy that grows with the complexity of the set-up. Here, we introduce a method that, in principle, eludes the accumulation of errors by coherently transferring information from a quantum clock to a quantum memory of the smallest possible size. Our method could be used to measure the total duration of a sequence of events with enhanced accuracy, and to reduce the amount of quantum communication needed to stabilize clocks in a quantum network.
Quantifying Errors in Jet Noise Research Due to Microphone Support Reflection
NASA Technical Reports Server (NTRS)
Nallasamy, Nambi; Bridges, James
2002-01-01
The reflection coefficient of a microphone support structure used in jet noise testing is documented through tests performed in the anechoic AeroAcoustic Propulsion Laboratory. The tests involve the acquisition of acoustic data from a microphone mounted in the support structure while noise is generated from a known broadband source. The ratio of reflected signal amplitude to the original signal amplitude is determined by performing an auto-correlation function on the data. The documentation of the reflection coefficients is one component of the validation of jet noise data acquired using the given microphone support structure. Finally, two forms of acoustic material were applied to the microphone support structure to determine their effectiveness in reducing reflections, which give rise to bias errors in the microphone measurements.
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)
2000-01-01
Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates due to changes in rain statistics arising 1) from evolution of the official algorithms used to process the data, and 2) from differences from other remote sensing systems such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.
NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data from the past. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor that increases with the code distance.
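A minimal sketch of the statistical idea only, not the paper's protocol: noisy per-window error-rate estimates (such as might be derived from syndrome statistics) are smoothed and extrapolated with a Gaussian process regressor. The kernel choice and synthetic data below are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hedged sketch: smooth a noisy time series of per-window error-rate estimates
# with a Gaussian process and extrapolate it forward. Data are synthetic.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 40)[:, None]                 # time of each estimation window
true_rate = 1e-3 * (1.0 + 0.3 * np.sin(0.5 * t.ravel()))
observed = true_rate + rng.normal(0.0, 1e-4, t.shape[0])

kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=1e-8)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, observed)

t_future = np.array([[11.0], [12.0]])
mean, std = gp.predict(t_future, return_std=True)
print(mean, std)   # predicted error rates and their uncertainties
```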
Automated drug dispensing system reduces medication errors in an intensive care setting.
Chapuis, Claire; Roustit, Matthieu; Bal, Gaëlle; Schwebel, Carole; Pansu, Pascal; David-Tchouda, Sandra; Foroni, Luc; Calop, Jean; Timsit, Jean-François; Allenet, Benoît; Bosson, Jean-Luc; Bedouch, Pierrick
2010-12-01
We aimed to assess the impact of an automated dispensing system on the incidence of medication errors related to picking, preparation, and administration of drugs in a medical intensive care unit. We also evaluated the clinical significance of such errors and user satisfaction. Preintervention and postintervention study involving a control and an intervention medical intensive care unit. Two medical intensive care units in the same department of a 2,000-bed university hospital. Adult medical intensive care patients. After a 2-month observation period, we implemented an automated dispensing system in one of the units (study unit) chosen randomly, with the other unit being the control. The overall error rate was expressed as a percentage of total opportunities for error. The severity of errors was classified according to National Coordinating Council for Medication Error Reporting and Prevention categories by an expert committee. User satisfaction was assessed through self-administered questionnaires completed by nurses. A total of 1,476 medications for 115 patients were observed. After automated dispensing system implementation, we observed a reduced percentage of total opportunities for error in the study compared to the control unit (13.5% and 18.6%, respectively; p<.05); however, no significant difference was observed before automated dispensing system implementation (20.4% and 19.3%, respectively; not significant). Before-and-after comparisons in the study unit also showed a significantly reduced percentage of total opportunities for error (20.4% and 13.5%; p<.01). An analysis of detailed opportunities for error showed a significant impact of the automated dispensing system in reducing preparation errors (p<.05). Most errors caused no harm (National Coordinating Council for Medication Error Reporting and Prevention category C). The automated dispensing system did not reduce errors causing harm. Finally, the mean for working conditions improved from 1.0±0.8 to 2.5±0.8 on the four-point Likert scale. The implementation of an automated dispensing system reduced overall medication errors related to picking, preparation, and administration of drugs in the intensive care unit. Furthermore, most nurses favored the new drug dispensation organization.
Paudel, Prakash; Ramson, Prasidh; Naduvilath, Thomas; Wilson, David; Phuong, Ha Thanh; Ho, Suit M; Giap, Nguyen V
2014-01-01
Background: To assess the prevalence of vision impairment and refractive error in school children 12–15 years of age in Ba Ria – Vung Tau province, Vietnam. Design: Prospective, cross-sectional study. Participants: 2238 secondary school children. Methods: Subjects were selected based on stratified multistage cluster sampling of 13 secondary schools from urban, rural and semi-urban areas. The examination included visual acuity measurements, ocular motility evaluation, cycloplegic autorefraction, and examination of the external eye, anterior segment, media and fundus. Main Outcome Measures: Visual acuity and principal cause of vision impairment. Results: The prevalence of uncorrected and presenting visual acuity ≤6/12 in the better eye was 19.4% (95% confidence interval, 12.5–26.3) and 12.2% (95% confidence interval, 8.8–15.6), respectively. Refractive error was the cause of vision impairment in 92.7%, amblyopia in 2.2%, cataract in 0.7%, retinal disorders in 0.4%, other causes in 1.5% and unexplained causes in the remaining 2.6%. The prevalence of vision impairment due to myopia in either eye (–0.50 diopter or greater) was 20.4% (95% confidence interval, 12.8–28.0), hyperopia (≥2.00 D) was 0.4% (95% confidence interval, 0.0–0.7) and emmetropia with astigmatism (≥0.75 D) was 0.7% (95% confidence interval, 0.2–1.2). Vision impairment due to myopia was associated with higher school grade and increased time spent reading and working on a computer. Conclusions: Uncorrected refractive error, particularly myopia, among secondary school children in Vietnam is a major public health problem. School-based eye health initiatives such as refractive error screening are warranted to reduce vision impairment. PMID:24299145
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jerban, Saeed, E-mail: saeed.jerban@usherbrooke.ca
2016-08-15
The pore interconnection size of β-tricalcium phosphate scaffolds plays an essential role in the bone repair process. Although the μCT technique is widely used in the biomaterial community, it is rarely used to measure the interconnection size because of the lack of algorithms. In addition, the discrete nature of μCT introduces large systematic errors due to the convex geometry of interconnections. We proposed, verified and validated a novel pore-level algorithm to accurately characterize the individual pores and interconnections. Specifically, pores and interconnections were isolated, labeled, and individually analyzed with high accuracy. The technique was verified thoroughly by visually inspecting and verifying over 3474 properties of randomly selected pores. This extensive verification process passed a one-percent accuracy criterion. Scanning errors inherent in the discretization, which lead to both dummy and significantly overestimated interconnections, were examined using computer-based simulations and additional high-resolution scanning. Accurate correction charts were then developed and used to reduce the scanning errors. Only after the corrections did both the μCT- and SEM-based results converge, and the novel algorithm was validated. Material scientists with access to all geometrical properties of individual pores and interconnections, using the novel algorithm, will have a more detailed and accurate description of the substitute architecture and a potentially deeper understanding of the link between the geometric and biological interaction. - Highlights: •An algorithm is developed to analyze all pores and interconnections individually. •After pore isolation, the discretization errors in interconnections were corrected. •Dummy interconnections and overestimated sizes were due to thin material walls. •The isolating algorithm was verified through visual inspection (99% accurate). •After correcting for the systematic errors, the algorithm was validated successfully.
Determining relative error bounds for the CVBEM
Hromadka, T.V.
1985-01-01
The Complex Variable Boundary Element Method (CVBEM) provides a measure of relative error which can be utilized to subsequently reduce the error or provide information for further modeling analysis. By maximizing the relative error norm on each boundary element, a bound on the total relative error for each boundary element can be evaluated. This bound can be utilized to test CVBEM convergence, to analyze the effects of additional boundary nodal points in reducing the modeling error, and to evaluate the sensitivity of the resulting modeling error within a boundary element to the error produced in another boundary element as a function of geometric distance. © 1985.
Suess, D.; Fuger, M.; Abert, C.; Bruckner, F.; Vogler, C.
2016-01-01
We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of Khard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media. PMID:27245287
Reduction of Orifice-Induced Pressure Errors
NASA Technical Reports Server (NTRS)
Plentovich, Elizabeth B.; Gloss, Blair B.; Eves, John W.; Stack, John P.
1987-01-01
Use of porous-plug orifice reduces or eliminates errors, induced by orifice itself, in measuring static pressure on airfoil surface in wind-tunnel experiments. Piece of sintered metal press-fitted into static-pressure orifice so it matches surface contour of model. Porous material reduces orifice-induced pressure error associated with conventional orifice of same or smaller diameter. Also reduces or eliminates additional errors in pressure measurement caused by orifice imperfections. Provides more accurate measurements in regions with very thin boundary layers.
Reducing errors benefits the field-based learning of a fundamental movement skill in children.
Capio, C M; Poolton, J M; Sit, C H P; Holmstrom, M; Masters, R S W
2013-03-01
Proficient fundamental movement skills (FMS) are believed to form the basis of more complex movement patterns in sports. This study examined the development of the FMS of overhand throwing in children through either an error-reduced (ER) or error-strewn (ES) training program. Students (n = 216), aged 8-12 years (M = 9.16, SD = 0.96), practiced overhand throwing in either a program that reduced errors during practice (ER) or one that was ES. ER program reduced errors by incrementally raising the task difficulty, while the ES program had an incremental lowering of task difficulty. Process-oriented assessment of throwing movement form (Test of Gross Motor Development-2) and product-oriented assessment of throwing accuracy (absolute error) were performed. Changes in performance were examined among children in the upper and lower quartiles of the pretest throwing accuracy scores. ER training participants showed greater gains in movement form and accuracy, and performed throwing more effectively with a concurrent secondary cognitive task. Movement form improved among girls, while throwing accuracy improved among children with low ability. Reduced performance errors in FMS training resulted in greater learning than a program that did not restrict errors. Reduced cognitive processing costs (effective dual-task performance) associated with such approach suggest its potential benefits for children with developmental conditions. © 2011 John Wiley & Sons A/S.
Horn, F.L.; Binns, J.E.
1961-05-01
Apparatus for continuously and automatically measuring and computing the specific heat of a flowing solution is described. The invention provides for the continuous measurement of all the parameters required for the mathematical solution of this characteristic. The parameters are converted to logarithmic functions which are added and subtracted in accordance with the solution and a null-seeking servo reduces errors due to changing voltage drops to a minimum. Logarithmic potentiometers are utilized in a unique manner to accomplish these results.
2010-01-01
hematocrit, low oxygen tension, acetaminophen, uric acid, ascorbic acid, maltose, galactose, xylose, lactose, operator inexperience, age of strips, heat...Biomedical, Waltham, MA) that corrects for the effects of anemia, low oxygen tension, acetaminophen, uric acid, ascorbic acid, maltose, galactose, xylose, and...resulted in inappropriately high glucometer values (data not shown). The effects of interfering substances (acetaminophen, uric acid, ascorbic acid
Zilberman, Uri; Bibi, Haim
2016-01-01
Multiple sulfatase deficiency (MSD) is a rare autosomal recessive inborn error of metabolism due to reduced catalytic activity of the different sulfatases. Affected individuals show neurologic deterioration with mental retardation, skeletal anomalies, organomegaly, and skin changes as in X-linked ichthyosis. The only organ that has not been examined in MSD patients is the dentition. To evaluate the effect of the metabolic error on dental development in a patient with the intermediate severe late-infantile form of MSD (S155P), histological and chemical studies were performed on three deciduous and five permanent teeth from an MSD patient and pair-matched normal patients. Tooth germ size and enamel thickness were reduced in both deciduous and permanent MSD teeth, and the scalloping feature of the DEJ was missing in MSD teeth, causing enamel to break off from the dentin. The mineral components in the enamel and dentin were different. The metabolic error insults the teeth at the stage of organogenesis in both the deciduous and permanent dentition. The end result is teeth with very sharp cusp tips, thin hypomineralized enamel, and exposed dentin due to the breaking off of enamel. These findings are different from all other types of MPS syndromes. Clinically, the phenotype of the intermediate severe late-infantile form of MSD appeared during the third year of life. In children of parents who are carriers, we can diagnose the disease as early as birth using an X-ray radiograph of the anterior upper region, or as early as 6-8 months when the first deciduous tooth erupts, and consider very early treatment to ameliorate the symptoms.
Reducing errors in the GRACE gravity solutions using regularization
NASA Astrophysics Data System (ADS)
Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.
2012-09-01
The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method, using Lanczos bidiagonalization which is a computationally inexpensive approximation to L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem on a problem of the size of about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of its degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time-series of the candidate regularized solutions (Mar 2003-Feb 2010) show markedly reduced error stripes compared with the unconstrained GRACE release 4 solutions (RL04) from the Center for Space Research (CSR). Post-fit residual analysis shows that the regularized solutions fit the data to within the noise level of GRACE. A time series of filtered hydrological model is used to confirm that signal attenuation for basins in the Total Runoff Integrating Pathways (TRIP) database over 320 km radii is less than 1 cm equivalent water height RMS, which is within the noise level of GRACE.
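The following is a minimal sketch of Tikhonov regularization with a simple L-curve-style parameter sweep on a synthetic problem; the corner heuristic shown is an assumption, and the paper's Lanczos bidiagonalization and degree/order-dependent regularization matrix are not reproduced.

```python
import numpy as np

# Minimal sketch of Tikhonov regularization with a crude L-curve-style sweep on
# a synthetic problem; the corner heuristic is an assumption, and the paper's
# Lanczos bidiagonalization and degree/order-dependent constraint matrix are
# not reproduced here.
rng = np.random.default_rng(1)
A = rng.normal(size=(60, 40))
x_true = rng.normal(size=40)
b = A @ x_true + rng.normal(scale=0.5, size=60)

def tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 via an augmented least squares."""
    A_aug = np.vstack([A, lam * np.eye(A.shape[1])])
    b_aug = np.concatenate([b, np.zeros(A.shape[1])])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

lams = np.logspace(-3, 2, 30)
points = []
for lam in lams:
    x = tikhonov(A, b, lam)
    points.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))

# Crude corner pick (assumption): the lambda closest to the origin of the
# log-log residual-norm vs solution-norm curve.
logs = np.log(np.array(points))
shifted = logs - logs.min(axis=0)
best_lam = lams[np.argmin(np.hypot(shifted[:, 0], shifted[:, 1]))]
print(f"selected regularization parameter: {best_lam:.3g}")
```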
Computer Generated Hologram System for Wavefront Measurement System Calibration
NASA Technical Reports Server (NTRS)
Olczak, Gene
2011-01-01
Computer Generated Holograms (CGHs) have been used for some time to calibrate interferometers that require nulling optics. A typical scenario is the testing of aspheric surfaces with an interferometer placed near the paraxial center of curvature. Existing CGH technology suffers from a reduced capacity to calibrate middle and high spatial frequencies. The root cause of this shortcoming is as follows: the CGH is not placed at an image conjugate of the asphere due to limitations imposed by the geometry of the test and the allowable size of the CGH. This innovation provides a calibration system where the imaging properties in calibration can be made comparable to the test configuration. Thus, if the test is designed to have good imaging properties, then middle and high spatial frequency errors in the test system can be well calibrated. The improved imaging properties are provided by a rudimentary auxiliary optic as part of the calibration system. The auxiliary optic is simple to characterize and align to the CGH. Use of the auxiliary optic also reduces the size of the CGH required for calibration and the density of the lines required for the CGH. The resulting CGH is less expensive than the existing technology and has reduced write error and alignment error sensitivities. This CGH system is suitable for any kind of calibration using an interferometer when high spatial resolution is required. It is especially well suited for tests that include segmented optical components or large apertures.
tPA Prescription and Administration Errors within a Regional Stroke System
Chung, Lee S; Tkach, Aleksander; Lingenfelter, Erin M; Dehoney, Sarah; Rollo, Jeannie; de Havenon, Adam; DeWitt, Lucy Dana; Grantz, Matthew Ryan; Wang, Haimei; Wold, Jana J; Hannon, Peter M; Weathered, Natalie R; Majersik, Jennifer J
2015-01-01
Background: IV tPA utilization in acute ischemic stroke (AIS) requires weight-based dosing and a standardized infusion rate. In our regional network, we have tried to minimize tPA dosing errors. We describe the frequency and types of tPA administration errors made in our comprehensive stroke center (CSC) and at community hospitals (CHs) prior to transfer. Methods: Using our stroke quality database, we extracted clinical and pharmacy information on all patients who received IV tPA from 2010–11 at the CSC or CH prior to transfer. All records were analyzed for the presence of inclusion/exclusion criteria deviations or tPA errors in prescription, reconstitution, dispensing, or administration, and analyzed for association with outcomes. Results: We identified 131 AIS cases treated with IV tPA: 51% female; mean age 68; 32% treated at CSC, 68% at CH (including 26% by telestroke) from 22 CHs. tPA prescription and administration errors were present in 64% of all patients (41% CSC, 75% CH, p<0.001), the most common being incorrect dosage for body weight (19% CSC, 55% CH, p<0.001). Of the 27 overdoses, there were 3 deaths due to systemic hemorrhage or ICH. Nonetheless, outcomes (parenchymal hematoma, mortality, mRS) did not differ between CSC and CH patients nor between those with and without errors. Conclusion: Despite focus on minimization of tPA administration errors in AIS patients, such errors were very common in our regional stroke system. Although an association between tPA errors and stroke outcomes was not demonstrated, quality assurance mechanisms are still necessary to reduce potentially dangerous, avoidable errors. PMID:26698642
Load Sharing Behavior of Star Gearing Reducer for Geared Turbofan Engine
NASA Astrophysics Data System (ADS)
Mo, Shuai; Zhang, Yidu; Wu, Qiong; Wang, Feiming; Matsumura, Shigeki; Houjoh, Haruo
2017-07-01
Load sharing behavior is very important for power-split gearing systems; the star gearing reducer, as a new type of special transmission system, can be used in many industry fields. However, there is little literature regarding the key multiple-split load sharing issue in the main gearbox used in the new geared turbofan engine. Further mechanism analyses are made of the load sharing behavior among star gears of the star gearing reducer for the geared turbofan engine. Comprehensive meshing error analyses are conducted on the eccentricity error, gear thickness error, base pitch error, assembly error, and bearing error of the star gearing reducer, respectively. The floating meshing error resulting from meshing clearance variation caused by the simultaneous floating of the sun gear and annular gear is taken into account. A refined mathematical model for load sharing coefficient calculation is established in consideration of different meshing stiffnesses and supporting stiffnesses for the components. The regular curves of the load sharing coefficient under the influence of interactions, single actions and single variations of various component errors are obtained. The accurate sensitivity of the load sharing coefficient toward different errors is mastered. The load sharing coefficient of the star gearing reducer is 1.033 and the maximum meshing force on a gear tooth is about 3010 N. This paper provides scientific theory evidence for optimal parameter design and proper tolerance distribution in the advanced development and manufacturing process, so as to achieve optimal effects in economy and technology.
NASA Astrophysics Data System (ADS)
Xue, ShiChuan; Wu, JunJie; Xu, Ping; Yang, XueJun
2018-02-01
Quantum computing is a significant computing capability which is superior to classical computing because of its superposition feature. Distinguishing several quantum states from quantum algorithm outputs is often a vital computational task. In most cases, the quantum states tend to be non-orthogonal due to superposition; quantum mechanics has proved that such states cannot be discriminated perfectly by any measurement, forcing repeated measurements. Hence, it is important to determine the optimum measuring method, which requires fewer repetitions and a lower error rate. However, extending current measurement approaches, mainly aimed at quantum cryptography, to multi-qubit situations for quantum computing confronts challenges, such as conducting global operations, which have considerable costs in the experimental realm. Therefore, in this study, we have proposed an optimum subsystem method to avoid these difficulties. We have provided an analysis of the comparison between the reduced subsystem method and the global minimum error method for two-qubit problems; the conclusions have been verified experimentally. The results showed that the subsystem method could effectively discriminate non-orthogonal two-qubit states, such as separable states, entangled pure states, and mixed states; the cost of the experimental process was significantly reduced, in most circumstances, with an acceptable error rate. We believe the optimal subsystem method is the most valuable and promising approach for multi-qubit quantum computing applications.
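For context, the sketch below evaluates the textbook Helstrom bound, i.e. the global minimum-error probability for discriminating two states, for a pair of non-orthogonal two-qubit states; the states and priors are illustrative, and the paper's reduced-subsystem measurement is not implemented.

```python
import numpy as np

# Textbook Helstrom (global minimum-error) bound for discriminating two states,
# used only to illustrate the baseline the abstract compares against; the
# reduced-subsystem measurement itself is not implemented. States are examples.
def helstrom_error(rho0, rho1, p0=0.5):
    """Minimum achievable error probability for discriminating rho0 vs rho1."""
    gamma = p0 * rho0 - (1.0 - p0) * rho1
    trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()
    return 0.5 * (1.0 - trace_norm)

def ket(bits):
    v = np.zeros(4)
    v[int(bits, 2)] = 1.0
    return v

# A separable state and an entangled (Bell-like) state, which are non-orthogonal.
psi0 = ket("00")
psi1 = (ket("00") + ket("11")) / np.sqrt(2.0)
rho0 = np.outer(psi0, psi0)
rho1 = np.outer(psi1, psi1)
print(f"Helstrom error probability: {helstrom_error(rho0, rho1):.4f}")  # ~0.146
```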
Monte Carlo dose calculation in dental amalgam phantom
Aziz, Mohd. Zahri Abdul; Yusoff, A. L.; Osman, N. D.; Abdullah, R.; Rabaie, N. A.; Salikin, M. S.
2015-01-01
It has become a great challenge in modern radiation treatment to ensure the accuracy of treatment delivery in electron beam therapy. Tissue inhomogeneity has become one of the factors for accurate dose calculation, and this requires a complex calculation algorithm like Monte Carlo (MC). On the other hand, the computed tomography (CT) images used in the treatment planning system need to be trustworthy as they are the input to radiotherapy treatment. However, with the presence of metal amalgam in the treatment volume, the CT image input shows prominent streak artefacts, and these contribute sources of error to the dose calculation. Hence, a streak artefact reduction technique was applied to correct the images, and as a result, better images were observed in terms of structure delineation and density assignment. Furthermore, the amalgam density data were corrected to provide the amalgam voxels with accurate density values. As for the errors of dose uncertainty due to metal amalgam, they were reduced from 46% to as low as 2% at d80 (depth of the 80% dose beyond Zmax) using the presented strategies. Considering the number of vital and radiosensitive organs in the head and neck regions, this correction strategy is suggested for reducing calculation uncertainties in MC calculation. PMID:26500401
Image transmission system using adaptive joint source and channel decoding
NASA Astrophysics Data System (ADS)
Liu, Weiliang; Daut, David G.
2005-03-01
In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLR) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. That is, for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 5 dB in terms of PSNR for various reconstructed images.
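A minimal sketch of the feedback step described above, under stated assumptions: bit positions that the source decoder's error-resilience checks flag as known-correct or known-wrong have their LLRs re-weighted before the next LDPC iteration. The weighting factor, the way it is applied, and the flagged indices are illustrative; the paper makes the factor a function of channel SNR.

```python
import numpy as np

# Sketch of the LLR re-weighting step only; the weighting factor, how it is
# applied, and the flagged positions are illustrative assumptions (the paper
# designs the factor as a function of channel SNR).
llr = np.array([1.2, -0.4, 3.0, -2.1, 0.2, -0.9])   # channel LLRs after an iteration
known_correct = [0, 2]      # bits the JPEG2000 error-resilience checks confirm
known_wrong = [4]           # bits the source decoder flags as erroneous
weight = 2.0                # assumed weighting factor

llr[known_correct] *= weight      # reinforce confidence in confirmed bits
llr[known_wrong] /= weight        # attenuate unreliable bits toward zero
print(llr)                        # fed back into the next sum-product iteration
```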
NASA Astrophysics Data System (ADS)
Endelt, B.
2017-09-01
Forming operations are subject to external disturbances and changing operating conditions, e.g. a new material batch or increasing tool temperature due to plastic work; material properties and lubrication are sensitive to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually over time. Thus, an in-process feedback control scheme may not be necessary to stabilize the process; an alternative approach is to apply an iterative learning algorithm, which can learn from previously produced parts, i.e. a self-learning system which gradually reduces error based on historical process information. What is proposed in the paper is a simple algorithm which can be applied to a wide range of sheet-metal forming processes. The input to the algorithm is the final flange edge geometry, and the basic idea is to reduce the least-squares error between the current flange geometry and a reference geometry using a non-linear least-squares algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet'08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
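As a toy illustration of the idea (not the paper's nonlinear least-squares flange-geometry scheme), the sketch below applies a generic P-type iterative learning control update to a stand-in process, reducing the error from part to part.

```python
import numpy as np

# Generic P-type iterative learning control (ILC) sketch illustrating the idea
# of correcting a repeated process from the previous part's error; the paper's
# scheme minimizes a nonlinear least-squares flange-geometry error instead.
def toy_process(u):
    """Stand-in for one forming run: output depends on the applied input."""
    return 0.8 * u + 0.05 * u**2

reference = np.full(20, 1.0)      # target "geometry" along the flange edge
u = np.zeros(20)                  # process input, updated between parts
learning_gain = 0.5               # assumed gain

for part in range(15):
    y = toy_process(u)
    error = reference - y
    u = u + learning_gain * error  # learn from the previous part

print(f"remaining RMS error after 15 parts: {np.sqrt(np.mean(error**2)):.4f}")
```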
Hybrid Adaptive Flight Control with Model Inversion Adaptation
NASA Technical Reports Server (NTRS)
Nguyen, Nhan
2011-01-01
This study investigates a hybrid adaptive flight control method as a design possibility for a flight control system that can enable an effective adaptation strategy to deal with off-nominal flight conditions. The hybrid adaptive control blends both direct and indirect adaptive control in a model inversion flight control architecture. The blending of both direct and indirect adaptive control provides a much more flexible and effective adaptive flight control architecture than that with either direct or indirect adaptive control alone. The indirect adaptive control is used to update the model inversion controller by an on-line parameter estimation of uncertain plant dynamics based on two methods. The first parameter estimation method is an indirect adaptive law based on the Lyapunov theory, and the second method is a recursive least-squares indirect adaptive law. The model inversion controller is therefore made to adapt to changes in the plant dynamics due to uncertainty. As a result, the modeling error is reduced, which directly leads to a decrease in the tracking error. In conjunction with the indirect adaptive control that updates the model inversion controller, a direct adaptive control is implemented as an augmented command to further reduce any residual tracking error that is not entirely eliminated by the indirect adaptive control.
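A hedged sketch of the second indirect adaptive law mentioned above, a recursive least-squares (RLS) parameter estimator; the toy plant, regressors, and forgetting factor are assumptions, not the flight-control implementation.

```python
import numpy as np

# Hedged sketch of a recursive least-squares (RLS) parameter update of the kind
# used as the second indirect adaptive law; plant, regressors, and forgetting
# factor below are illustrative, not the flight-control implementation.
rng = np.random.default_rng(2)
theta_true = np.array([0.7, -1.3])        # unknown plant parameters
theta = np.zeros(2)                       # online estimate
P = np.eye(2) * 100.0                     # covariance of the estimate
lam = 0.99                                # forgetting factor (assumption)

for _ in range(200):
    phi = rng.normal(size=2)              # regressor (e.g. measured states)
    y = phi @ theta_true + rng.normal(scale=0.05)
    K = P @ phi / (lam + phi @ P @ phi)   # gain
    theta = theta + K * (y - phi @ theta) # correct estimate with the residual
    P = (P - np.outer(K, phi) @ P) / lam  # covariance update

print(theta)   # should approach theta_true
```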
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1973-01-01
The triangulation method developed specifically for the Barium Ion Cloud Project is discussed. Expressions for the four displacement errors, the three slope errors, and the curvature error in the triangulation solution due to a probable error in the lines-of-sight from the observation stations to points on the cloud are derived. The triangulation method is then used to determine the effect of the following on these different errors in the solution: the number and location of the stations, the observation duration, east-west cloud drift, the number of input data points, and the addition of extra cameras to one of the stations. The pointing displacement errors and the pointing slope errors are compared. The displacement errors in the solution due to a probable error in the position of a moving station, plus the weighting factors for the data from the moving station, are also determined.
NASA Astrophysics Data System (ADS)
Shankar, Praveen
The performance of nonlinear control algorithms such as feedback linearization and dynamic inversion is heavily dependent on the fidelity of the dynamic model being inverted. Incomplete or incorrect knowledge of the dynamics results in reduced performance and may lead to instability. Augmenting the baseline controller with approximators which utilize a parametrization structure that is adapted online reduces the effect of this error between the design model and actual dynamics. However, currently existing parameterizations employ a fixed set of basis functions that do not guarantee arbitrary tracking error performance. To address this problem, we develop a self-organizing parametrization structure that is proven to be stable and can guarantee arbitrary tracking error performance. The training algorithm to grow the network and adapt the parameters is derived from Lyapunov theory. In addition to growing the network of basis functions, a pruning strategy is incorporated to keep the size of the network as small as possible. This algorithm is implemented on a high performance flight vehicle such as F-15 military aircraft. The baseline dynamic inversion controller is augmented with a Self-Organizing Radial Basis Function Network (SORBFN) to minimize the effect of the inversion error which may occur due to imperfect modeling, approximate inversion or sudden changes in aircraft dynamics. The dynamic inversion controller is simulated for different situations including control surface failures, modeling errors and external disturbances with and without the adaptive network. A performance measure of maximum tracking error is specified for both the controllers a priori. Excellent tracking error minimization to a pre-specified level using the adaptive approximation based controller was achieved while the baseline dynamic inversion controller failed to meet this performance specification. The performance of the SORBFN based controller is also compared to a fixed RBF network based adaptive controller. While the fixed RBF network based controller which is tuned to compensate for control surface failures fails to achieve the same performance under modeling uncertainty and disturbances, the SORBFN is able to achieve good tracking convergence under all error conditions.
A Well-Calibrated Ocean Algorithm for Special Sensor Microwave/Imager
NASA Technical Reports Server (NTRS)
Wentz, Frank J.
1997-01-01
I describe an algorithm for retrieving geophysical parameters over the ocean from special sensor microwave/imager (SSM/I) observations. This algorithm is based on a model for the brightness temperature T(sub B) of the ocean and intervening atmosphere. The retrieved parameters are the near-surface wind speed W, the columnar water vapor V, the columnar cloud liquid water L, and the line-of-sight wind W(sub LS). I restrict my analysis to ocean scenes free of rain, and when the algorithm detects rain, the retrievals are discarded. The model and algorithm are precisely calibrated using a very large in situ database containing 37,650 SSM/I overpasses of buoys and 35,108 overpasses of radiosonde sites. A detailed error analysis indicates that the T(sub B) model rms accuracy is between 0.5 and 1 K and that the rms retrieval accuracies for wind, vapor, and cloud are 0.9 m/s, 1.2 mm, and 0.025 mm, respectively. The error in specifying the cloud temperature will introduce an additional 10% error in the cloud water retrieval. The spatial resolution for these accuracies is 50 km. The systematic errors in the retrievals are smaller than the rms errors, being about 0.3 m/s, 0.6 mm, and 0.005 mm for W, V, and L, respectively. The one exception is the systematic error in wind speed of -1.0 m/s that occurs for observations within +/-20 deg of upwind. The inclusion of the line-of-sight wind W(sub LS) in the retrieval significantly reduces the error in wind speed due to wind direction variations. The wind error for upwind observations is reduced from -3.0 to -1.0 m/s. Finally, I find a small signal in the 19-GHz, horizontal-polarization (h-pol) T(sub B) residual, DeltaT(sub BH), that is related to the effective air pressure of the water vapor profile. This information may be of some use in specifying the vertical distribution of water vapor.
NASA Astrophysics Data System (ADS)
Caimmi, R.
2011-08-01
Concerning bivariate least squares linear regression, the classical approach pursued for functional models in earlier attempts ( York, 1966, 1969) is reviewed using a new formalism in terms of deviation (matrix) traces which, for unweighted data, reduce to usual quantities leaving aside an unessential (but dimensional) multiplicative factor. Within the framework of classical error models, the dependent variable relates to the independent variable according to the usual additive model. The classes of linear models considered are regression lines in the general case of correlated errors in X and in Y for weighted data, and in the opposite limiting situations of (i) uncorrelated errors in X and in Y, and (ii) completely correlated errors in X and in Y. The special case of (C) generalized orthogonal regression is considered in detail together with well known subcases, namely: (Y) errors in X negligible (ideally null) with respect to errors in Y; (X) errors in Y negligible (ideally null) with respect to errors in X; (O) genuine orthogonal regression; (R) reduced major-axis regression. In the limit of unweighted data, the results determined for functional models are compared with their counterparts related to extreme structural models i.e. the instrumental scatter is negligible (ideally null) with respect to the intrinsic scatter ( Isobe et al., 1990; Feigelson and Babu, 1992). While regression line slope and intercept estimators for functional and structural models necessarily coincide, the contrary holds for related variance estimators even if the residuals obey a Gaussian distribution, with the exception of Y models. An example of astronomical application is considered, concerning the [O/H]-[Fe/H] empirical relations deduced from five samples related to different stars and/or different methods of oxygen abundance determination. For selected samples and assigned methods, different regression models yield consistent results within the errors (∓ σ) for both heteroscedastic and homoscedastic data. Conversely, samples related to different methods produce discrepant results, due to the presence of (still undetected) systematic errors, which implies no definitive statement can be made at present. A comparison is also made between different expressions of regression line slope and intercept variance estimators, where fractional discrepancies are found to be not exceeding a few percent, which grows up to about 20% in the presence of large dispersion data. An extension of the formalism to structural models is left to a forthcoming paper.
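To make two of the regression models concrete, the sketch below compares the ordinary least-squares slope (errors only in Y, model Y) with the reduced major-axis slope (model R) on synthetic data; the paper's weighted, correlated-error formalism is not reproduced.

```python
import numpy as np

# Illustrative comparison (not the paper's full weighted formalism) of two of
# the regression models mentioned: ordinary least squares with errors only in Y
# (model Y) and reduced major-axis regression (model R). Data are synthetic.
rng = np.random.default_rng(3)
x = rng.normal(size=200)
y = 0.6 * x + 0.1 + rng.normal(scale=0.3, size=200)   # scatter in Y only

cov = np.cov(x, y, ddof=1)
slope_ols = cov[0, 1] / cov[0, 0]                                 # model Y
slope_rma = np.sign(cov[0, 1]) * np.sqrt(cov[1, 1] / cov[0, 0])   # model R

print(f"OLS slope {slope_ols:.3f}, RMA slope {slope_rma:.3f}")
# RMA exceeds |OLS| whenever the correlation is imperfect, since
# slope_rma = slope_ols / |r|.
```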
NASA Astrophysics Data System (ADS)
Henze, D. K.; Guerrette, J.; Bousserez, N.
2016-12-01
Wildfires contribute significantly to regional haze events globally, and they are potentially becoming more commonplace with increasing droughts due to climate change. Aerosol emissions from wildfires are highly uncertain, with global annual totals varying by a factor of 2 to 3 and regional rates varying by up to a factor of 10. At the high resolution required to predict PM2.5 exposure events, this variance is attributable to differences in methodology, differing land cover datasets, spatial variation in fire locations, and limited understanding of fast transient fire behavior. Here we apply an adjoint-based online chemical inverse modeling tool, WRFDA-Chem, to constrain black carbon aerosol (BC) emissions from fires during the 2008 ARCTAS-CARB field campaign. We identify several weaknesses in the prior diurnal distribution of emissions, including a missing early morning emission peak associated with local, persistent, large-scale forest fires. On 22 June, 2008, aircraft observations are able to reduce the spread between FINNv1.0 and QFEDv2.4r8 from ×3.5 to ×2.1. On 23 and 24 June, the spread is reduced from ×3.4 to ×1.4. Using posterior error estimates, we found that emission variance improvements are limited to a small footprint surrounding the measurements. Relative BB emission variances are reduced by up to 35% near aircraft flight paths and up to 60% near IMPROVE surface sites. Due to the spatial variation of observations on multiple days, and the heterogeneous biomass burning errors on daily scales, cross-validation was not successful. Future high-resolution measurements need to be carefully planned to characterize biomass burning emission errors and control for day-to-day variation. In general, the 4D-Var inversion framework would benefit from reduced wall-time. For the problem presented, incremental 4D-Var requires 20 hours on 96 cores to reach practical optimization convergence and generate the posterior covariance matrix for a 24-hour assimilation window. We will present initial computational comparisons with a recently developed method to parallelize those calculations, which will reduce wall-time by a factor of 5 or more for all WRFDA 4D-Var applications.
Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.
Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae
2016-01-01
Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported. However, none have investigated cutting error in UKA. The purpose of this study was to reveal the amount of cutting error in UKA when an open cutting guide was used, and to clarify whether cutting the tibia horizontally twice using the same cutting guide reduced the cutting errors in UKA. We measured the alignment of the tibial cutting guides, the first-cut cutting surfaces and the second-cut cutting surfaces using the navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and the cutting surface. The mean absolute first-cut cutting error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut cutting error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice reduced the cutting errors in the coronal plane significantly (P<0.05). Our study demonstrated that in UKA, cutting the tibia horizontally twice using the same cutting guide reduced cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.
An educational and audit tool to reduce prescribing error in intensive care.
Thomas, A N; Boxall, E M; Laha, S K; Day, A J; Grundy, D
2008-10-01
To reduce prescribing errors in an intensive care unit by providing prescriber education in tutorials, ward-based teaching and feedback in 3-monthly cycles with each new group of trainee medical staff. Prescribing audits were conducted three times in each 3-month cycle, once pretraining, once post-training and a final audit after 6 weeks. The audit information was fed back to prescribers with their correct prescribing rates, rates for individual error types and total error rates together with anonymised information about other prescribers' error rates. The percentage of prescriptions with errors decreased over each 3-month cycle (pretraining 25%, 19%, (one missing data point), post-training 23%, 6%, 11%, final audit 7%, 3%, 5% (p<0.0005)). The total number of prescriptions and error rates varied widely between trainees (data collection one; cycle two: range of prescriptions written: 1-61, median 18; error rate: 0-100%; median: 15%). Prescriber education and feedback reduce manual prescribing errors in intensive care.
Optimized method for manufacturing large aspheric surfaces
NASA Astrophysics Data System (ADS)
Zhou, Xusheng; Li, Shengyi; Dai, Yifan; Xie, Xuhui
2007-12-01
Aspheric optics are being used more and more widely in modern optical systems, due to their ability to correct aberrations, enhance image quality, enlarge the field of view and extend the range of effect, while reducing the weight and volume of the system. With the development of optical technology, there are ever more pressing requirements for large-aperture and high-precision aspheric surfaces. The original computer controlled optical surfacing (CCOS) technique cannot meet the challenge of precision and machining efficiency, a problem that researchers have taken seriously. Aiming at the shortcomings of the original polishing process, an optimized method for manufacturing large aspheric surfaces is put forward. Subsurface damage (SSD), full-aperture errors and the full band of frequency errors are all controlled by this method. Smaller SSD depth can be obtained by using a low-hardness tool and small abrasive grains in the grinding process. For full-aperture error control, edge effects can be controlled by using smaller tools and an amended model of the material removal function. For control of the full band of frequency errors, low-frequency errors can be corrected with the optimized material removal function, while medium-high-frequency errors are addressed using a uniform removal principle. With this optimized method, the accuracy of a K9 glass paraboloid mirror can reach rms 0.055 waves (where a wave is 0.6328 μm) in a short time. The results show that the optimized method can guide large aspheric surface manufacturing effectively.
Error Sensitivity to Environmental Noise in Quantum Circuits for Chemical State Preparation.
Sawaya, Nicolas P D; Smelyanskiy, Mikhail; McClean, Jarrod R; Aspuru-Guzik, Alán
2016-07-12
Calculating molecular energies is likely to be one of the first useful applications to achieve quantum supremacy, performing faster on a quantum than a classical computer. However, if future quantum devices are to produce accurate calculations, errors due to environmental noise and algorithmic approximations need to be characterized and reduced. In this study, we use the high performance qHiPSTER software to investigate the effects of environmental noise on the preparation of quantum chemistry states. We simulated 18 16-qubit quantum circuits under environmental noise, each corresponding to a unitary coupled cluster state preparation of a different molecule or molecular configuration. Additionally, we analyze the nature of simple gate errors in noise-free circuits of up to 40 qubits. We find that, in most cases, the Jordan-Wigner (JW) encoding produces smaller errors under a noisy environment as compared to the Bravyi-Kitaev (BK) encoding. For the JW encoding, pure dephasing noise is shown to produce substantially smaller errors than pure relaxation noise of the same magnitude. We report error trends in both molecular energy and electron particle number within a unitary coupled cluster state preparation scheme, against changes in nuclear charge, bond length, number of electrons, noise types, and noise magnitude. These trends may prove to be useful in making algorithmic and hardware-related choices for quantum simulation of molecular energies.
Diagnostic Error in Stroke-Reasons and Proposed Solutions.
Bakradze, Ekaterina; Liberman, Ava L
2018-02-13
We discuss the frequency of stroke misdiagnosis and identify subgroups of stroke at high risk for specific diagnostic errors. In addition, we review common reasons for misdiagnosis and propose solutions to decrease error. According to a recent report by the National Academy of Medicine, most people in the USA are likely to experience a diagnostic error during their lifetimes. Nearly half of such errors result in serious disability and death. Stroke misdiagnosis is a major health care concern, with initial misdiagnosis estimated to occur in 9% of all stroke patients in the emergency setting. Under- or missed diagnosis (false negative) of stroke can result in adverse patient outcomes due to the preclusion of acute treatments and failure to initiate secondary prevention strategies. On the other hand, the overdiagnosis of stroke can result in inappropriate treatment, delayed identification of the actual underlying disease, and increased health care costs. Young patients, women, minorities, and patients presenting with non-specific, transient, or posterior circulation stroke symptoms are at increased risk of misdiagnosis. Strategies to decrease diagnostic error in stroke have largely focused on early stroke detection via bedside examination strategies and clinical decision rules. Targeted interventions to improve the diagnostic accuracy of stroke diagnosis among high-risk groups, as well as symptom-specific clinical decision supports, are needed. There are a number of open questions in the study of stroke misdiagnosis. To improve patient outcomes, existing strategies to improve stroke diagnostic accuracy should be more broadly adopted, and novel interventions devised and tested to reduce diagnostic errors.
MacDonald, Shannon E; Schopflocher, Donald P; Golonka, Richard P
2014-01-04
Accurate classification of children's immunization status is essential for clinical care, administration and evaluation of immunization programs, and vaccine program research. Computerized immunization registries have been proposed as a valuable alternative to provider paper records or parent report, but there is a need to better understand the challenges associated with their use. This study assessed the accuracy of immunization status classification in an immunization registry as compared to parent report and determined the number and type of errors occurring in both sources. This study was a sub-analysis of a larger study which compared the characteristics of children whose immunizations were up to date (UTD) at two years as compared to those not UTD. Children's immunization status was initially determined from a population-based immunization registry, and then compared to parent report of immunization status, as reported in a postal survey. Discrepancies between the two sources were adjudicated by review of immunization providers' hard-copy clinic records. Descriptive analyses included calculating proportions and confidence intervals for errors in classification and reporting of the type and frequency of errors. Among the 461 survey respondents, there were 60 discrepancies in immunization status. The majority of errors were due to parent report (n = 44), but the registry was not without fault (n = 16). Parents tended to erroneously report their child as UTD, whereas the registry was more likely to wrongly classify children as not UTD. Reasons for registry errors included failure to account for varicella disease history, variable number of doses required due to age at series initiation, and doses administered out of the region. These results confirm that parent report is often flawed, but also identify that registries are prone to misclassification of immunization status. Immunization program administrators and researchers need to institute measures to identify and reduce misclassification, in order for registries to play an effective role in the control of vaccine-preventable disease.
NASA Astrophysics Data System (ADS)
Dohe, S.; Sherlock, V.; Hase, F.; Gisi, M.; Robinson, J.; Sepúlveda, E.; Schneider, M.; Blumenstock, T.
2013-08-01
The Total Carbon Column Observing Network (TCCON) has been established to provide ground-based remote sensing measurements of the column-averaged dry air mole fractions (DMF) of key greenhouse gases. To ensure network-wide consistency, biases between Fourier transform spectrometers at different sites have to be well controlled. Errors in interferogram sampling can introduce significant biases in retrievals. In this study we investigate a two-step scheme to correct these errors. In the first step the laser sampling error (LSE) is estimated by determining the sampling shift which minimises the magnitude of the signal intensity in selected, fully absorbed regions of the solar spectrum. The LSE is estimated for every day with measurements which meet certain selection criteria to derive the site-specific time series of the LSEs. In the second step, this sequence of LSEs is used to resample all the interferograms acquired at the site, and hence correct the sampling errors. Measurements acquired at the Izaña and Lauder TCCON sites are used to demonstrate the method. At both sites the sampling error histories show changes in LSE due to instrument interventions (e.g. realignment). Estimated LSEs are in good agreement with sampling errors inferred from the ratio of primary and ghost spectral signatures in optically bandpass-limited tungsten lamp spectra acquired at Lauder. The original time series of Xair and XCO2 (XY: column-averaged DMF of the target gas Y) at both sites show discrepancies of 0.2-0.5% due to changes in the LSE associated with instrument interventions or changes in the measurement sample rate. After resampling, discrepancies are reduced to 0.1% or less at Lauder and 0.2% at Izaña. In the latter case, coincident changes in interferometer alignment may also have contributed to the residual difference. In the future the proposed method will be used to correct historical spectra at all TCCON sites.
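As a rough illustration of the two-step scheme described above, the sketch below (not TCCON's actual processing code) grid-searches a fractional sampling shift that minimises residual signal in spectral windows assumed to be fully absorbed, then resamples the interferogram with that shift. The array names, the Fourier-shift resampling, and the window list are illustrative assumptions.

```python
import numpy as np

def resample(igm, shift):
    """Shift an interferogram by a fractional sample via Fourier interpolation."""
    freqs = np.fft.fftfreq(igm.size)
    return np.real(np.fft.ifft(np.fft.fft(igm) * np.exp(-2j * np.pi * freqs * shift)))

def absorbed_band_power(igm, windows):
    """RMS spectral magnitude in fully absorbed windows (near zero if sampling is correct)."""
    spec = np.abs(np.fft.rfft(igm))
    return np.sqrt(np.mean(np.concatenate([spec[lo:hi] ** 2 for lo, hi in windows])))

def estimate_lse(igm, windows, shifts=np.linspace(-0.5, 0.5, 201)):
    """Step 1: find the sampling shift that minimises residual signal in absorbed regions."""
    costs = [absorbed_band_power(resample(igm, s), windows) for s in shifts]
    return shifts[int(np.argmin(costs))]

def correct_interferogram(igm, windows):
    """Step 2: resample the interferogram with the estimated laser sampling error."""
    return resample(igm, estimate_lse(igm, windows))
```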
ERIC Educational Resources Information Center
Boedigheimer, Dan
2010-01-01
Approximately 70% of aviation accidents are attributable to human error. The greatest opportunity for further improving aviation safety is found in reducing human errors in the cockpit. The purpose of this quasi-experimental, mixed-method research was to evaluate whether there was a difference in pilot attitudes toward reducing human error in the…
Reducing diagnostic errors in medicine: what's the goal?
Graber, Mark; Gordon, Ruthanna; Franklin, Nancy
2002-10-01
This review considers the feasibility of reducing or eliminating the three major categories of diagnostic errors in medicine: "No-fault errors" occur when the disease is silent, presents atypically, or mimics something more common. These errors will inevitably decline as medical science advances, new syndromes are identified, and diseases can be detected more accurately or at earlier stages. These errors can never be eradicated, unfortunately, because new diseases emerge, tests are never perfect, patients are sometimes noncompliant, and physicians will inevitably, at times, choose the most likely diagnosis over the correct one, illustrating the concept of necessary fallibility and the probabilistic nature of choosing a diagnosis. "System errors" play a role when diagnosis is delayed or missed because of latent imperfections in the health care system. These errors can be reduced by system improvements, but can never be eliminated because these improvements lag behind and degrade over time, and each new fix creates the opportunity for novel errors. Tradeoffs also guarantee system errors will persist, when resources are just shifted. "Cognitive errors" reflect misdiagnosis from faulty data collection or interpretation, flawed reasoning, or incomplete knowledge. The limitations of human processing and the inherent biases in using heuristics guarantee that these errors will persist. Opportunities exist, however, for improving the cognitive aspect of diagnosis by adopting system-level changes (e.g., second opinions, decision-support systems, enhanced access to specialists) and by training designed to improve cognition or cognitive awareness. Diagnostic error can be substantially reduced, but never eradicated.
NASA Technical Reports Server (NTRS)
Huynh, Loc C.; Duval, R. W.
1986-01-01
The use of Redundant Asynchronous Multiprocessor Systems to achieve ultrareliable Fault Tolerant Control Systems shows great promise. Development has been hampered by the inability to determine whether differences in the outputs of redundant CPUs are due to failures or to accrued error built up by slight differences in CPU clock intervals. This study derives an analytical dynamic model of the difference between redundant CPUs due to differences in their clock intervals and uses this model with on-line parameter identification to identify those clock-interval differences. Because the methodology accurately tracks errors due to asynchronicity, it can generate an error signal with the effect of asynchronicity removed, and this signal may be used to detect and isolate actual system failures.
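A minimal numerical sketch of the underlying idea, assuming the inter-channel output difference is dominated by a linear drift from mismatched clock intervals plus noise: fit and remove the drift, then threshold the residual to flag genuine failures. The variable names, the linear model, and the 3-sigma threshold are illustrative assumptions, not the study's analytical model.

```python
import numpy as np

def deskewed_residual(t, diff, threshold=3.0):
    """Fit diff(t) ~ skew*t + offset, remove it, and flag samples exceeding `threshold` sigma."""
    A = np.vstack([t, np.ones_like(t)]).T
    (skew, offset), *_ = np.linalg.lstsq(A, diff, rcond=None)
    residual = diff - (skew * t + offset)
    sigma = residual.std() or 1.0
    faults = np.abs(residual) > threshold * sigma
    return residual, skew, faults

# Example: 1 ppm relative clock-interval difference plus one injected fault.
t = np.arange(0.0, 1000.0, 1.0)
diff = 1e-6 * t + 1e-4 * np.random.randn(t.size)
diff[700] += 0.01                      # simulated channel failure
residual, skew, faults = deskewed_residual(t, diff)
print(f"estimated skew: {skew:.2e}, flagged samples: {np.flatnonzero(faults)}")
```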
The potential for geostationary remote sensing of NO2 to improve weather prediction
NASA Astrophysics Data System (ADS)
Liu, X.; Mizzi, A. P.; Anderson, J. L.; Fung, I. Y.; Cohen, R. C.
2017-12-01
Observations of surface winds remain sparse, making it challenging to simulate and predict the weather under the light-wind conditions that matter most for poor air quality. Direct measurements of short-lived chemicals from space might be a solution to this challenge. Here we investigate the application of data assimilation of NO2 columns, as will be observed from geostationary orbit, to improve predictions and retrospective analysis of surface wind fields. Specifically, synthetic NO2 observations are sampled from a "nature run" (NR) regarded as the true atmosphere. These NO2 observations are then assimilated using EAKF methods into a "control run" (CR) which differs from the NR in the wind field. Wind errors are generated by (1) introducing errors in the initial conditions, (2) creating a model error by using two different formulations of the planetary boundary layer, and (3) combining both of these effects. Assimilation of NO2 column observations succeeds in reducing wind errors, indicating that the prospects for future geostationary atmospheric composition measurements to improve weather forecasting are substantial. We find that, due to the temporal heterogeneity of wind errors, the success of this application favors chemical observations of high frequency, such as those from a geostationary platform. We also show the potential to improve the soil moisture field by assimilating NO2 columns.
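For readers unfamiliar with the assimilation step, the following is a textbook scalar EAKF (ensemble adjustment Kalman filter) observation update, not the WRF-Chem/DART configuration used in the study: one synthetic NO2 column observation adjusts the observed-variable ensemble, and the increments are regressed onto a co-located wind variable through the ensemble covariance. Ensemble size and all numbers are illustrative.

```python
import numpy as np

def eakf_update(obs_prior, obs_value, obs_err_var, state_prior):
    """Return updated (obs_ensemble, state_ensemble) after assimilating one observation."""
    prior_mean, prior_var = obs_prior.mean(), obs_prior.var(ddof=1)
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_err_var)
    post_mean = post_var * (prior_mean / prior_var + obs_value / obs_err_var)
    # Shift and contract the observed-variable ensemble (the "adjustment" step).
    obs_post = post_mean + np.sqrt(post_var / prior_var) * (obs_prior - prior_mean)
    obs_inc = obs_post - obs_prior
    # Regress observation increments onto the state variable (e.g., surface wind).
    cov = np.cov(state_prior, obs_prior, ddof=1)[0, 1]
    state_post = state_prior + (cov / prior_var) * obs_inc
    return obs_post, state_post

rng = np.random.default_rng(0)
no2_prior = 2.0 + 0.3 * rng.standard_normal(40)                 # ensemble of NO2 columns
wind_prior = 5.0 + 2.0 * (no2_prior - 2.0) + 0.2 * rng.standard_normal(40)
no2_post, wind_post = eakf_update(no2_prior, obs_value=2.4, obs_err_var=0.05 ** 2, state_prior=wind_prior)
print(f"wind mean {wind_prior.mean():.2f} -> {wind_post.mean():.2f} m/s")
```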
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko
Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
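A self-contained sketch of the coding idea only, under simplifying assumptions (BPSK over AWGN rather than the paper's AES/Rayleigh setup): the small "encrypted" portion of a frame is protected with a Hamming(7,4) code while the remaining bits are sent uncoded, and the raw bit-error rates of the two parts are compared. Frame sizes and the noise level are arbitrary.

```python
import numpy as np

# Hamming(7,4) in systematic form; G = [I|P], H = [P^T|I] are the standard textbook matrices.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])
SYNDROME_TO_POS = {tuple(H[:, j]): j for j in range(7)}

def hamming_encode(bits4):
    return bits4 @ G % 2

def hamming_decode(word7):
    s = tuple(H @ word7 % 2)
    if any(s):
        word7 = word7.copy()
        word7[SYNDROME_TO_POS[s]] ^= 1          # single-bit correction
    return word7[:4]

def bpsk_awgn(bits, noise_std, rng):
    y = (1 - 2 * bits) + noise_std * rng.standard_normal(bits.size)
    return (y < 0).astype(int)

rng = np.random.default_rng(1)
enc_bits = rng.integers(0, 2, size=(2000, 4))    # small "encrypted" portion, coded
plain_bits = rng.integers(0, 2, size=8000)       # rest of the frame, uncoded
coded = np.array([hamming_encode(b) for b in enc_bits])
rx_coded = bpsk_awgn(coded.ravel(), 0.6, rng).reshape(-1, 7)
rx_plain = bpsk_awgn(plain_bits, 0.6, rng)
dec = np.array([hamming_decode(w) for w in rx_coded])
print("BER (coded, encrypted part):  ", np.mean(dec != enc_bits))
print("BER (uncoded, plaintext part):", np.mean(rx_plain != plain_bits))
```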
Factors affecting the sticking of insects on modified aircraft wings
NASA Technical Reports Server (NTRS)
Yi, O.; Chitsaz-Z, M. R.; Eiss, N. S.; Wightman, J. P.
1988-01-01
Previous work showed that the total number of insects sticking to an aluminum surface was reduced by coating the aluminum surface with elastomers. Due to a large number of possible experimental errors, no correlation between the modulus of elasticity of the elastomer and the total number of insects sticking to a given elastomer was obtained. One of the errors assumed to be introduced during the road test is a variable insect flux, so that the number of insects striking one sample surface might differ from that striking another. To eliminate this source of error, the road test used to collect insects was simulated in the laboratory by developing an insect impacting technique using a pipe and high-pressure compressed air. The insects are accelerated by a compressed air gun to high velocities and are then impacted against a stationary target on which the sample is mounted. The velocity of an object exiting the pipe was determined, and the technique was further improved to obtain a uniform air velocity distribution.
Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality.
Bishara, Anthony J; Hittner, James B
2015-10-01
It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared with its major alternatives, including the Spearman rank-order correlation, the bootstrap estimate, the Box-Cox transformation family, and a general normalizing transformation (i.e., rankit), as well as to various bias adjustments. Nonnormality caused the correlation coefficient to be inflated by up to +.14, particularly when the nonnormality involved heavy-tailed distributions. Traditional bias adjustments worsened this problem, further inflating the estimate. The Spearman and rankit correlations eliminated this inflation and provided conservative estimates. Rankit also minimized random error for most sample sizes, except for the smallest samples ( n = 10), where bootstrapping was more effective. Overall, results justify the use of carefully chosen alternatives to the Pearson correlation when normality is violated.
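A small Monte Carlo sketch in the spirit of the comparison above, using illustrative choices (lognormal marginals, n = 50, an underlying latent correlation of 0.3) rather than the paper's design: the Pearson correlation is compared with Spearman and with a rank-based inverse-normal ("rankit") transform when the data are heavy-tailed.

```python
import numpy as np
from scipy import stats

def rankit_corr(x, y):
    """Pearson correlation after transforming each variable to normal scores (rankit)."""
    def to_normal_scores(v):
        ranks = stats.rankdata(v)
        return stats.norm.ppf((ranks - 0.5) / len(v))
    return stats.pearsonr(to_normal_scores(x), to_normal_scores(y))[0]

rng = np.random.default_rng(42)
true_rho, n, reps = 0.3, 50, 2000
pearson, spearman, rankit = [], [], []
for _ in range(reps):
    z = rng.multivariate_normal([0, 0], [[1, true_rho], [true_rho, 1]], size=n)
    x, y = np.exp(z[:, 0]), np.exp(z[:, 1])      # heavy-tailed (lognormal) marginals
    pearson.append(stats.pearsonr(x, y)[0])
    spearman.append(stats.spearmanr(x, y)[0])
    rankit.append(rankit_corr(x, y))
for name, vals in [("Pearson", pearson), ("Spearman", spearman), ("rankit", rankit)]:
    print(f"{name:8s} mean={np.mean(vals):.3f}  SD={np.std(vals):.3f}")
```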
Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality
Hittner, James B.
2014-01-01
It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared with its major alternatives, including the Spearman rank-order correlation, the bootstrap estimate, the Box–Cox transformation family, and a general normalizing transformation (i.e., rankit), as well as to various bias adjustments. Nonnormality caused the correlation coefficient to be inflated by up to +.14, particularly when the nonnormality involved heavy-tailed distributions. Traditional bias adjustments worsened this problem, further inflating the estimate. The Spearman and rankit correlations eliminated this inflation and provided conservative estimates. Rankit also minimized random error for most sample sizes, except for the smallest samples (n = 10), where bootstrapping was more effective. Overall, results justify the use of carefully chosen alternatives to the Pearson correlation when normality is violated. PMID:29795841
Operator- and software-related post-experimental variability and source of error in 2-DE analysis.
Millioni, Renato; Puricelli, Lucia; Sbrignadello, Stefano; Iori, Elisabetta; Murphy, Ellen; Tessari, Paolo
2012-05-01
In the field of proteomics, several approaches have been developed for separating proteins and analyzing their differential relative abundance. One of the oldest, yet still widely used, is 2-DE. Despite the continuous advance of new methods, which are less demanding from a technical standpoint, 2-DE is still compelling and has a lot of potential for improvement. The overall variability which affects 2-DE includes biological, experimental, and post-experimental (software-related) variance. It is important to highlight how much of the total variability of this technique is due to post-experimental variability, which, so far, has been largely neglected. In this short review, we have focused on this topic and explained that post-experimental variability and source of error can be further divided into those which are software-dependent and those which are operator-dependent. We discuss these issues in detail, offering suggestions for reducing errors that may affect the quality of results, summarizing the advantages and drawbacks of each approach.
APOLLO clock performance and normal point corrections
NASA Astrophysics Data System (ADS)
Liang, Y.; Murphy, T. W., Jr.; Colmenares, N. R.; Battat, J. B. R.
2017-12-01
The Apache point observatory lunar laser-ranging operation (APOLLO) has produced a large volume of high-quality lunar laser ranging (LLR) data since it began operating in 2006. For most of this period, APOLLO has relied on a GPS-disciplined, high-stability quartz oscillator as its frequency and time standard. The recent addition of a cesium clock as part of a timing calibration system initiated a comparison campaign between the two clocks. This has allowed correction of APOLLO range measurements—called normal points—during the overlap period, but also revealed a mechanism to correct for systematic range offsets due to clock errors in historical APOLLO data. Drift of the GPS clock on ∼1000 s timescales contributed typically 2.5 mm of range error to APOLLO measurements, and we find that this may be reduced to ∼1.6 mm on average. We present here a characterization of APOLLO clock errors, the method by which we correct historical data, and the resulting statistics.
Liu, Hsiu-Chu; Li, Hsing; Chang, Hsin-Fei; Lu, Mei-Rou; Chen, Feng-Chuan
2015-01-01
Learning from the experience of another medical center in Taiwan, Kaohsiung Municipal Kai-Suan Psychiatric Hospital has changed its nursing informatics system step by step over the past year and a half. Ethical considerations shaped the original idea of implementing barcodes on test tube labels to handle the identification of psychiatric patients. The main aims of this project are to maintain the confidentiality of patient information and to transport samples effectively. The primary nurses used different work sheets for this project to ensure acceptance of the new barcode system. In the past two years, errors in the blood testing process were as high as 11,000 in 14,000 events per year, resulting in wasted resources. The actions taken by the nurses and the implementation of the new barcode system can improve clinical nursing care quality, patient safety, and efficiency, while decreasing costs due to human error.
Yan, M; Lovelock, D; Hunt, M; Mechalakos, J; Hu, Y; Pham, H; Jackson, A
2013-12-01
To use Cone Beam CT scans obtained just prior to treatments of head and neck cancer patients to measure the setup error and cumulative dose uncertainty of the cochlea. Data from 10 head and neck patients with 10 planning CTs and 52 Cone Beam CTs taken at time of treatment were used in this study. Patients were treated with conventional fractionation using an IMRT dose painting technique, most with 33 fractions. Weekly radiographic imaging was used to correct the patient setup. The authors used rigid registration of the planning CT and Cone Beam CT scans to find the translational and rotational setup errors, and the spatial setup errors of the cochlea. The planning CT was rotated and translated such that the cochlea positions match those seen in the cone beam scans, cochlea doses were recalculated and fractional doses accumulated. Uncertainties in the positions and cumulative doses of the cochlea were calculated with and without setup adjustments from radiographic imaging. The mean setup error of the cochlea was 0.04 ± 0.33 or 0.06 ± 0.43 cm for RL, 0.09 ± 0.27 or 0.07 ± 0.48 cm for AP, and 0.00 ± 0.21 or -0.24 ± 0.45 cm for SI with and without radiographic imaging, respectively. Setup with radiographic imaging reduced the standard deviation of the setup error by roughly 1-2 mm. The uncertainty of the cochlea dose depends on the treatment plan and the relative positions of the cochlea and target volumes. Combining results for the left and right cochlea, the authors found the accumulated uncertainty of the cochlea dose per fraction was 4.82 (0.39-16.8) cGy, or 10.1 (0.8-32.4) cGy, with and without radiographic imaging, respectively; the percentage uncertainties relative to the planned doses were 4.32% (0.28%-9.06%) and 10.2% (0.7%-63.6%), respectively. Patient setup error introduces uncertainty in the position of the cochlea during radiation treatment. With the assistance of radiographic imaging during setup, the standard deviation of setup error reduced by 31%, 42%, and 54% in RL, AP, and SI direction, respectively, and consequently, the uncertainty of the mean dose to cochlea reduced more than 50%. The authors estimate that the effects of these uncertainties on the probability of hearing loss for an individual patient could be as large as 10%.
Yan, M.; Lovelock, D.; Hunt, M.; Mechalakos, J.; Hu, Y.; Pham, H.; Jackson, A.
2013-01-01
Purpose: To use Cone Beam CT scans obtained just prior to treatments of head and neck cancer patients to measure the setup error and cumulative dose uncertainty of the cochlea. Methods: Data from 10 head and neck patients with 10 planning CTs and 52 Cone Beam CTs taken at time of treatment were used in this study. Patients were treated with conventional fractionation using an IMRT dose painting technique, most with 33 fractions. Weekly radiographic imaging was used to correct the patient setup. The authors used rigid registration of the planning CT and Cone Beam CT scans to find the translational and rotational setup errors, and the spatial setup errors of the cochlea. The planning CT was rotated and translated such that the cochlea positions match those seen in the cone beam scans, cochlea doses were recalculated and fractional doses accumulated. Uncertainties in the positions and cumulative doses of the cochlea were calculated with and without setup adjustments from radiographic imaging. Results: The mean setup error of the cochlea was 0.04 ± 0.33 or 0.06 ± 0.43 cm for RL, 0.09 ± 0.27 or 0.07 ± 0.48 cm for AP, and 0.00 ± 0.21 or −0.24 ± 0.45 cm for SI with and without radiographic imaging, respectively. Setup with radiographic imaging reduced the standard deviation of the setup error by roughly 1–2 mm. The uncertainty of the cochlea dose depends on the treatment plan and the relative positions of the cochlea and target volumes. Combining results for the left and right cochlea, the authors found the accumulated uncertainty of the cochlea dose per fraction was 4.82 (0.39–16.8) cGy, or 10.1 (0.8–32.4) cGy, with and without radiographic imaging, respectively; the percentage uncertainties relative to the planned doses were 4.32% (0.28%–9.06%) and 10.2% (0.7%–63.6%), respectively. Conclusions: Patient setup error introduces uncertainty in the position of the cochlea during radiation treatment. With the assistance of radiographic imaging during setup, the standard deviation of setup error reduced by 31%, 42%, and 54% in RL, AP, and SI direction, respectively, and consequently, the uncertainty of the mean dose to cochlea reduced more than 50%. The authors estimate that the effects of these uncertainties on the probability of hearing loss for an individual patient could be as large as 10%. PMID:24320510
New Gear Transmission Error Measurement System Designed
NASA Technical Reports Server (NTRS)
Oswald, Fred B.
2001-01-01
The prime source of vibration and noise in a gear system is the transmission error between the meshing gears. Transmission error is caused by manufacturing inaccuracy, mounting errors, and elastic deflections under load. Gear designers often attempt to compensate for transmission error by modifying gear teeth. This is done traditionally by a rough "rule of thumb" or more recently under the guidance of an analytical code. In order for a designer to have confidence in a code, the code must be validated through experiment. NASA Glenn Research Center contracted with the Design Unit of the University of Newcastle in England for a system to measure the transmission error of spur and helical test gears in the NASA Gear Noise Rig. The new system measures transmission error optically by means of light beams directed by lenses and prisms through gratings mounted on the gear shafts. The amount of light that passes through both gratings is directly proportional to the transmission error of the gears. A photodetector circuit converts the light to an analog electrical signal. To increase accuracy and reduce "noise" due to transverse vibration, there are parallel light paths at the top and bottom of the gears. The two signals are subtracted via differential amplifiers in the electronics package. The output of the system is 40 mV/mm, giving a resolution in the time domain of better than 0.1 mm, and discrimination in the frequency domain of better than 0.01 mm. The new system will be used to validate gear analytical codes and to investigate mechanisms that produce vibration and noise in parallel axis gears.
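A toy numerical illustration (not the instrument's electronics) of why the two parallel light paths and the differential amplifiers suppress transverse vibration: a rigid transverse motion appears with the same sign in the top and bottom grating signals, while a transmission-error rotation appears with opposite signs, so subtracting the two channels cancels the former and preserves the latter. Amplitudes and frequencies below are arbitrary assumptions.

```python
import numpy as np

t = np.linspace(0.0, 0.1, 2000)                      # seconds
te = 2.0 * np.sin(2 * np.pi * 500 * t)               # transmission error signal
vib = 5.0 * np.sin(2 * np.pi * 60 * t)               # transverse vibration (common mode)

top = te + vib                                       # path above the gear axis
bottom = -te + vib                                   # path below the gear axis
differential = 0.5 * (top - bottom)                  # recovered transmission error

print("max |recovered - true TE|:", np.max(np.abs(differential - te)))  # ~0
```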
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-16
[Flattened table fragment from the original notice: revisions of estimates of annual burden due to agency error, and revisions of estimates of annual costs to respondents. Recoverable figures: IC1 "Ready to Move?" new cost $288,000 vs old cost $720,000 (a reduction of $432,000); "Rights & Responsibilities" 3,264,000 vs 8,160... (truncated).]
Medication errors in anesthesia: unacceptable or unavoidable?
Dhawan, Ira; Tewari, Anurag; Sehgal, Sankalp; Sinha, Ashish Chandra
Medication errors are common causes of patient morbidity and mortality, and they add a financial burden to the institution as well. Though the impact varies from no harm to serious adverse effects including death, the issue needs attention on a priority basis since medication errors are preventable. In today's world, where patients are well informed and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication errors alone is unlikely to succeed unless changes to existing protocols and systems are incorporated. Drug errors, once they occur, often cannot be reversed; the best way to 'treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse, or dilution error), incorrect administration route, underdosing, and omission are common causes of medication error that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration, or record keeping, and numerous human and system factors can be blamed for their occurrence. The need of the hour is to stop the blame game, accept mistakes, and develop a safe and 'just' culture in order to prevent medication errors. Newly devised systems such as VEINROM, a fluid delivery system, are a novel approach to preventing drug errors involving the most commonly used medications in anesthesia. Such developments, along with vigilant doctors, a safe workplace culture, and organizational support, can together help prevent these errors. Copyright © 2016. Published by Elsevier Editora Ltda.
Experimental investigation of false positive errors in auditory species occurrence surveys
Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.
2012-01-01
False positive errors are a significant component of many ecological data sets, which in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in abilities from novice to expert, that recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work that demonstrates false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to only report detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor for among-observer variation in observation error rates.
Vision Assisted Navigation for Miniature Unmanned Aerial Vehicles (MAVs)
2009-11-01
commanded to orbit a target of known location. The error in target geolocation is shown for 200 frames with filtering (dashed line) and without (solid...so the performance of the filter was determined by using the estimated poses to solve a geolocation problem. An MAV flying at an altitude of 70 meters... geolocation as well as significantly reducing the short-term variance in the estimates based on the GPS/IMU alone. Due to the nature of the autopilot
Fresnel diffraction by spherical obstacles
NASA Technical Reports Server (NTRS)
Hovenac, Edward A.
1989-01-01
Lommel functions were used to solve the Fresnel-Kirchhoff diffraction integral for the case of a spherical obstacle. Comparisons were made between Fresnel diffraction theory and Mie scattering theory. Fresnel theory is then compared to experimental data. Experiment and theory typically deviated from one another by less than 10 percent. A unique experimental setup using mercury spheres suspended in a viscous fluid significantly reduced optical noise. The major source of error was due to the Gaussian-shaped laser beam.
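For reference, the Lommel functions of two variables used in such calculations can be evaluated directly from their Bessel series; the sketch below implements only that series. The mapping from geometry to the arguments u and v, and the obstacle intensity formula itself, follow the standard Born and Wolf treatment and are not reproduced here; the truncation at 60 terms is an assumption.

```python
import numpy as np
from scipy.special import jv

def lommel_u(n, u, v, terms=60):
    """Lommel function of two variables, U_n(u, v) = sum_s (-1)^s (u/v)^(n+2s) J_(n+2s)(v)."""
    s = np.arange(terms)
    return np.sum((-1.0) ** s * (u / v) ** (n + 2 * s) * jv(n + 2 * s, v))

# Example: evaluate the two series needed for a circularly symmetric diffraction calculation.
u, v = 3.0, 8.0
print(lommel_u(1, u, v), lommel_u(2, u, v))
```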
MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard
2016-01-01
Purpose Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600
NASA Technical Reports Server (NTRS)
Stricker, Josef
1989-01-01
Effects of spherical aberrations of the mirror used in the moire system on the angular resolution of the system are investigated. It is shown that spherical aberrations may significantly reduce the performance of the conventional moire deflectometer. However, due to the heterodyne procedure, this is not the case with the heterodyne moire system. A moire system with a constant-speed moving grating is demonstrated; its readout is linear and the system does not need calibration. In addition, the repeatability of the measurements is improved in this system compared to the sinusoidally moving grating setup. The problem of photographic plate alignment is solved by using a mechanical system in which the plate is held firmly throughout the experiment and accurately replaced after removal for photographic processing. The effect of a circular detector's aperture size on readout was tested; it is shown that the spatial phase variations observed when scanning along a straight moire fringe may be considerably reduced. At present we may say that both the on-line and the deferred heterodyne moire techniques may be used reliably. The errors of phase readings are 1 deg and 5 deg for the on-line and deferred methods; the total error due to subtraction of two readings at each position is therefore 1.4 deg and 7 deg, respectively. Further research to improve the deferred system is suggested.
NASA Astrophysics Data System (ADS)
Doihara, R.; Shimada, T.; Cheong, K. H.; Terao, Y.
2017-06-01
A flow calibration facility based on the gravimetric method using a double-wing diverter for hydrocarbon flows from 0.1 m3 h-1 to 15 m3 h-1 was constructed as a national measurement standard in Japan. The original working liquids were kerosene and light oil. The calibration facility was modified to calibrate flowmeters with two additional working liquids, industrial gasoline (flash point > 40 °C) and spindle oil, to achieve calibration over a wide viscosity range at the same calibration test rig. The kinematic viscosity range is 1.2 mm2 s-1 to 24 mm2 s-1. The contributions to the measurement uncertainty due to different types of working liquids were evaluated experimentally in this study. The evaporation error was reduced by using a seal system at the weighing tank inlet. The uncertainty due to droplets from the diverter wings was reduced by a modified diverter operation. The diverter timing errors for all types of working liquids were estimated. The expanded uncertainties for the calibration facility were estimated to be 0.020% for mass flow and 0.030% for volumetric flow for all considered types of liquids. Internal comparisons with other calibration facilities were also conducted, and the agreement was confirmed to be within the claimed expanded uncertainties.
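The expanded uncertainties quoted above come from an uncertainty budget; a minimal sketch of the standard combination is shown below, with made-up placeholder components (the facility's actual budget entries are not listed in the abstract): independent standard uncertainties are combined by root-sum-square and expanded with a coverage factor k = 2.

```python
import math

components = {                      # standard uncertainties, in percent of reading (placeholders)
    "weighing / buoyancy": 0.006,
    "diverter timing": 0.005,
    "evaporation": 0.003,
    "density / temperature": 0.004,
    "meter repeatability": 0.004,
}
combined = math.sqrt(sum(u ** 2 for u in components.values()))
expanded = 2.0 * combined           # k = 2, approximately 95 % coverage
print(f"combined standard uncertainty: {combined:.3f} %, expanded (k=2): {expanded:.3f} %")
```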
An outlet breaching algorithm for the treatment of closed depressions in a raster DEM
NASA Astrophysics Data System (ADS)
Martz, Lawrence W.; Garbrecht, Jurgen
1999-08-01
Automated drainage analysis of raster DEMs typically begins with the simulated filling of all closed depressions and the imposition of a drainage pattern on the resulting flat areas. The elimination of closed depressions by filling implicitly assumes that all depressions are caused by elevation underestimation. This assumption is difficult to support, as depressions can be produced by overestimation as well as by underestimation of DEM values. This paper presents a new algorithm that is applied in conjunction with conventional depression filling to provide a more realistic treatment of those depressions that are likely due to overestimation errors. The algorithm lowers the elevation of selected cells on the edge of closed depressions to simulate breaching of the depression outlets. Application of this breaching algorithm prior to depression filling can substantially reduce the number and size of depressions that need to be filled, especially in low relief terrain. Removing or reducing the size of a depression by breaching implicitly assumes that the depression is due to a spurious flow blockage caused by elevation overestimation. Removing a depression by filling, on the other hand, implicitly assumes that the depression is a direct artifact of elevation underestimation. Although the breaching algorithm cannot distinguish between overestimation and underestimation errors in a DEM, a constraining parameter for breaching length can be used to restrict breaching to closed depressions caused by narrow blockages along well-defined drainage courses. These are considered the depressions most likely to have arisen from overestimation errors. Applying the constrained breaching algorithm prior to a conventional depression-filling algorithm allows both positive and negative elevation adjustments to be used to remove depressions. The breaching algorithm was incorporated into the DEM pre-processing operations of the TOPAZ software system. The effect of the algorithm is illustrated by the application of TOPAZ to a DEM of a low-relief landscape. The use of the breaching algorithm during DEM pre-processing substantially reduced the number of cells that needed to be subsequently raised in elevation to remove depressions. The number and kind of depression cells that were eliminated by the breaching algorithm suggested that the algorithm effectively targeted those topographic situations for which it was intended. A detailed inspection of a portion of the DEM that was processed using the breaching algorithm in conjunction with depression-filling also suggested the effects of the algorithm were as intended. The breaching algorithm provides an empirically satisfactory and robust approach to treating closed depressions in a raster DEM. It recognises that depressions in certain topographic settings are as likely to be due to elevation overestimation as to elevation underestimation errors. The algorithm allows a more realistic treatment of depressions in these situations than conventional methods that rely solely on depression-filling.
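A highly simplified sketch of constrained outlet breaching, assuming a straight-line search on a numpy grid (this is not the TOPAZ implementation): for a pit cell, look along each of the eight compass directions for a lower cell within a maximum breaching length and, if one is found, lower the intervening cells to a linear ramp so that flow can exit without filling.

```python
import numpy as np

def breach_pit(dem, pit, max_breach_len=2):
    """Try to breach a single pit (row, col); return True if an outlet was carved."""
    rows, cols = dem.shape
    r0, c0 = pit
    for dr, dc in [(-1,0),(1,0),(0,-1),(0,1),(-1,-1),(-1,1),(1,-1),(1,1)]:
        for dist in range(2, max_breach_len + 2):
            r, c = r0 + dr * dist, c0 + dc * dist
            if not (0 <= r < rows and 0 <= c < cols):
                break
            if dem[r, c] < dem[r0, c0]:
                # Carve a linear ramp from the pit down to the lower cell.
                for k in range(1, dist):
                    frac = k / dist
                    rk, ck = r0 + dr * k, c0 + dc * k
                    ramp = (1 - frac) * dem[r0, c0] + frac * dem[r, c]
                    dem[rk, ck] = min(dem[rk, ck], ramp)
                return True
    return False

dem = np.array([[5., 5., 5., 5., 5.],
                [5., 2., 6., 1., 5.],
                [5., 5., 5., 5., 5.]])
print("breached:", breach_pit(dem, (1, 1)))
print(dem)      # the blocking cell between the pit (2) and the lower cell (1) is lowered
```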
Pegler, Joe; Lehane, Elaine; Livingstone, Vicki; McCarthy, Nora; Sahm, Laura J.; Tabirca, Sabin; O’Driscoll, Aoife; Corrigan, Mark
2016-01-01
Background Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communications (NFC) is an emerging technology that may be used to develop novel medication management systems. Methods An NFC-based system was designed to facilitate prescribing, administration and review of medications commonly used on surgical wards. Final year medical, nursing, and pharmacy students were recruited to test the electronic system in a cross-over observational setting on a simulated ward. Medication errors were compared against errors recorded using a paper-based system. Results A significant difference in the commission of medication errors was seen when NFC and paper-based medication systems were compared. Paper use resulted in a mean of 4.09 errors per prescribing round while NFC prescribing resulted in a mean of 0.22 errors per simulated prescribing round (P=0.000). Likewise, medication administration errors were reduced from a mean of 2.30 per drug round with a Paper system to a mean of 0.80 errors per round using NFC (P<0.015). A mean satisfaction score of 2.30 was reported by users, (rated on seven-point scale with 1 denoting total satisfaction with system use and 7 denoting total dissatisfaction). Conclusions An NFC based medication system may be used to effectively reduce medication errors in a simulated ward environment. PMID:28293602
O'Connell, Emer; Pegler, Joe; Lehane, Elaine; Livingstone, Vicki; McCarthy, Nora; Sahm, Laura J; Tabirca, Sabin; O'Driscoll, Aoife; Corrigan, Mark
2016-01-01
Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communications (NFC) is an emerging technology that may be used to develop novel medication management systems. An NFC-based system was designed to facilitate prescribing, administration and review of medications commonly used on surgical wards. Final year medical, nursing, and pharmacy students were recruited to test the electronic system in a cross-over observational setting on a simulated ward. Medication errors were compared against errors recorded using a paper-based system. A significant difference in the commission of medication errors was seen when NFC and paper-based medication systems were compared. Paper use resulted in a mean of 4.09 errors per prescribing round while NFC prescribing resulted in a mean of 0.22 errors per simulated prescribing round (P=0.000). Likewise, medication administration errors were reduced from a mean of 2.30 per drug round with a Paper system to a mean of 0.80 errors per round using NFC (P<0.015). A mean satisfaction score of 2.30 was reported by users, (rated on seven-point scale with 1 denoting total satisfaction with system use and 7 denoting total dissatisfaction). An NFC based medication system may be used to effectively reduce medication errors in a simulated ward environment.
Remediating Common Math Errors.
ERIC Educational Resources Information Center
Wagner, Rudolph F.
1981-01-01
Explanations and remediation suggestions for five types of mathematics errors due either to perceptual or cognitive difficulties are given. Error types include directionality problems, mirror writing, visually misperceived signs, diagnosed directionality problems, and mixed process errors. (CL)
Intimate Partner Violence, 1993-2010
... appendix table 2 for standard errors. *Due to methodological changes, use caution when comparing 2006 NCVS criminal ...
Evaluating a medical error taxonomy.
Brixey, Juliana; Johnson, Todd R; Zhang, Jiajie
2002-01-01
Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a standard language for reporting medication errors. This project maps the NCC MERP taxonomy of medication error to MedWatch medical errors involving infusion pumps. Of particular interest are human factors associated with medical device errors. The NCC MERP taxonomy of medication errors is limited in mapping information from MEDWATCH because of the focus on the medical device and the format of reporting.
Tests and comparisons of gravity models.
NASA Technical Reports Server (NTRS)
Marsh, J. G.; Douglas, B. C.
1971-01-01
Optical observations of the GEOS satellites were used to obtain orbital solutions with different sets of geopotential coefficients. The solutions were compared before and after modification to high order terms (necessary because of resonance) and were then analyzed by comparing subsequent observations with predicted trajectories. The most important source of error in orbit determination and prediction for the GEOS satellites is the effect of resonance found in most published sets of geopotential coefficients. Modifications to the sets yield greatly improved orbits in most cases. The results of these comparisons suggest that with the best optical tracking systems and gravity models, satellite position error due to gravity model uncertainty can reach 50-100 m during a heavily observed 5-6 day orbital arc. If resonant coefficients are estimated, the uncertainty is reduced considerably.
NASA Astrophysics Data System (ADS)
Kim, Younsu; Kim, Sungmin; Boctor, Emad M.
2017-03-01
Ultrasound image-guided needle tracking systems have been widely used due to their cost-effectiveness and use of nonionizing radiation. Various surgical navigation systems have been developed using state-of-the-art sensor technologies. However, the thickness of the ultrasound transmission beam causes unfair initial evaluation conditions due to inconsistent placement of the target with respect to the ultrasound probe. This inconsistency also introduces high uncertainty and results in large standard deviations for each measurement when accuracy with and without guidance is compared. To resolve this problem, we designed a complete evaluation platform utilizing our mid-plane detection and time-of-flight measurement systems. The evaluation system uses a PZT element target and an ultrasound-transmitting needle. In this paper, we evaluated an optical tracker-based surgical ultrasound-guided navigation system in which the optical tracker tracks marker frames attached to the ultrasound probe and the needle. We performed ten needle-guidance trials with a mid-plane adjustment algorithm and with a B-mode segmentation method. With the mid-plane adjustment, the result showed a mean error of 1.62 +/- 0.72 mm. The mean error increased to 3.58 +/- 2.07 mm without the mid-plane adjustment. Our evaluation system can reduce the effect of the beam-thickness problem and measure ultrasound image-guided technologies consistently with a minimal standard deviation. Using this evaluation system, ultrasound image-guided technologies can be compared under equal initial conditions; errors can therefore be evaluated more accurately, and the system provides better analysis of error sources such as ultrasound beam thickness.
Intrusion errors in visuospatial working memory performance.
Cornoldi, Cesare; Mammarella, Nicola
2006-02-01
This study tested the hypothesis that failure in active visuospatial working memory tasks involves a difficulty in avoiding intrusions due to information that is already activated. Two experiments are described, in which participants were required to process several series of locations on a 4 x 4 matrix and then to produce only the final location of each series. Results revealed a higher number of errors due to already activated locations (intrusions) compared with errors due to new locations (inventions). Moreover, when participants were required to pay extra attention to some irrelevant (non-final) locations by tapping on the table, intrusion errors increased. Results are discussed in terms of current models of working memory functioning.
26 CFR 301.6621-3 - Higher interest rate payable on large corporate underpayments.
Code of Federal Regulations, 2010 CFR
2010-04-01
... resulting from a math error on Y's return. Y did not request an abatement of the assessment pursuant to...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...
26 CFR 301.6621-3 - Higher interest rate payable on large corporate underpayments.
Code of Federal Regulations, 2014 CFR
2014-04-01
... resulting from a math error on Y's return. Y did not request an abatement of the assessment pursuant to...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...
26 CFR 301.6621-3 - Higher interest rate payable on large corporate underpayments.
Code of Federal Regulations, 2012 CFR
2012-04-01
... resulting from a math error on Y's return. Y did not request an abatement of the assessment pursuant to...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...
26 CFR 301.6621-3 - Higher interest rate payable on large corporate underpayments.
Code of Federal Regulations, 2013 CFR
2013-04-01
... resulting from a math error on Y's return. Y did not request an abatement of the assessment pursuant to...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...
26 CFR 301.6621-3 - Higher interest rate payable on large corporate underpayments.
Code of Federal Regulations, 2011 CFR
2011-04-01
... resulting from a math error on Y's return. Y did not request an abatement of the assessment pursuant to...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...,000 amount shown as due on the math error assessment notice (plus interest) on or before January 31...
Study on advanced information processing system
NASA Technical Reports Server (NTRS)
Shin, Kang G.; Liu, Jyh-Charn
1992-01-01
Issues related to the reliability of a redundant system with large main memory are addressed. In particular, the Fault-Tolerant Processor (FTP) for the Advanced Launch System (ALS) is used as a basis for our presentation. When the system is free of latent faults, the probability of system crash due to nearly-coincident channel faults is shown to be insignificant even when the outputs of computing channels are infrequently voted on. Moreover, using channel error maskers (CEMs) is shown to improve reliability more effectively than increasing the number of channels for applications with long mission times. Even without using a voter, most memory errors can be immediately corrected by CEMs implemented with conventional coding techniques. In addition to their ability to enhance system reliability, CEMs--with a low hardware overhead--can be used to reduce not only the need for memory realignment, but also the time required to realign channel memories in the rare case that such a need arises. Using CEMs, we have developed two schemes, called Scheme 1 and Scheme 2, to solve the memory realignment problem. In both schemes, most errors are corrected by CEMs, and the remaining errors are masked by a voter.
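As a minimal illustration of error masking in a redundant system (the voting side of the scheme rather than the coding-based CEMs themselves), a bitwise two-out-of-three majority over the three channel copies of a memory word masks any single-channel error without realignment.

```python
def majority3(a: int, b: int, c: int) -> int:
    """Bitwise 2-out-of-3 majority of three redundant memory words."""
    return (a & b) | (a & c) | (b & c)

word = 0b1011_0010_1111_0000
corrupted = word ^ 0b0000_0100_0000_0001     # channel B suffers two bit flips
assert majority3(word, corrupted, word) == word
print(hex(majority3(word, corrupted, word)))
```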
Study on fault-tolerant processors for advanced launch system
NASA Technical Reports Server (NTRS)
Shin, Kang G.; Liu, Jyh-Charn
1990-01-01
Issues related to the reliability of a redundant system with large main memory are addressed. The Fault-Tolerant Processor (FTP) for the Advanced Launch System (ALS) is used as a basis for the presentation. When the system is free of latent faults, the probability of system crash due to multiple channel faults is shown to be insignificant even when voting on the outputs of computing channels is infrequent. Using channel error maskers (CEMs) is shown to improve reliability more effectively than increasing redundancy or the number of channels for applications with long mission times. Even without using a voter, most memory errors can be immediately corrected by those CEMs implemented with conventional coding techniques. In addition to their ability to enhance system reliability, CEMs (with a very low hardware overhead) can be used to dramatically reduce not only the need for memory realignment, but also the time required to realign channel memories in the rare case that such a need arises. Using CEMs, two different schemes were developed to solve the memory realignment problem. In both schemes, most errors are corrected by CEMs, and the remaining errors are masked by a voter.
Zeleke, Berihun M.; Abramson, Michael J.; Benke, Geza
2018-01-01
Uncertainty in experimental studies of exposure to radiation from mobile phones has in the past only been framed within the context of statistical variability. It is now becoming more apparent to researchers that epistemic or reducible uncertainties can also affect the total error in results. These uncertainties are derived from a wide range of sources including human error, such as data transcription, model structure, measurement and linguistic errors in communication. The issue of epistemic uncertainty is reviewed and interpreted in the context of the MoRPhEUS, ExPOSURE and HERMES cohort studies which investigate the effect of radiofrequency electromagnetic radiation from mobile phones on memory performance. Research into this field has found inconsistent results due to limitations from a range of epistemic sources. Potential analytic approaches are suggested based on quantification of epistemic error using Monte Carlo simulation. It is recommended that future studies investigating the relationship between radiofrequency electromagnetic radiation and memory performance pay more attention to treatment of epistemic uncertainties as well as further research into improving exposure assessment. Use of directed acyclic graphs is also encouraged to display the assumed covariate relationship. PMID:29587425
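A hedged sketch of the kind of Monte Carlo quantification of epistemic error suggested above: a "true" exposure-outcome slope is repeatedly re-estimated after perturbing the exposure with a systematic (epistemic) reporting bias and random (aleatory) noise, and the spread of the estimates summarises the combined effect. All distributions and effect sizes are illustrative assumptions, not values from the cohort studies.

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps, true_slope = 500, 2000, -0.10
true_exposure = rng.gamma(shape=2.0, scale=1.0, size=n)        # e.g., call-time surrogate
outcome = true_slope * true_exposure + rng.normal(0, 1.0, n)   # memory score

estimates = []
for _ in range(reps):
    recall_bias = rng.normal(1.0, 0.2)                               # epistemic: systematic over/under-reporting
    measured = recall_bias * true_exposure + rng.normal(0, 0.3, n)   # plus aleatory measurement noise
    estimates.append(np.polyfit(measured, outcome, 1)[0])

print(f"true slope {true_slope:+.3f}; estimated {np.mean(estimates):+.3f} "
      f"(2.5-97.5%: {np.percentile(estimates, 2.5):+.3f} to {np.percentile(estimates, 97.5):+.3f})")
```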
Sun, Xingming; Yan, Shuangshuang; Wang, Baowei; Xia, Li; Liu, Qi; Zhang, Hui
2015-01-01
Air temperature (AT) is an extremely vital factor in meteorology, agriculture, military, etc., being used for the prediction of weather disasters, such as drought, flood, frost, etc. Many efforts have been made to monitor the temperature of the atmosphere, like automatic weather stations (AWS). Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed within a large spatial density. A novel method named the meteorology wireless sensor network relying on a sensing node has been proposed for the purpose of reducing the cost of AT monitoring. However, the temperature sensor on the sensing node can be easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT, taking SR into consideration. In this work, we analyzed all of the collected data of AT and SR in May 2014 and found the numerical correspondence between AT error (ATE) and SR. This corresponding relation was used to calculate real-time ATE according to real-time SR and to correct the error of AT in other months. PMID:26213941
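A minimal sketch of the correction idea, with a simple linear fit standing in for whatever numerical correspondence the authors derived (function names, units, and the synthetic data are assumptions): fit the air-temperature error (ATE) against solar radiation (SR) on reference data, then subtract the SR-predicted error from later sensed readings.

```python
import numpy as np

def fit_ate_model(sr_ref, sensed_t_ref, true_t_ref, degree=1):
    """Fit ATE = f(SR) on reference data (e.g., May), returning polynomial coefficients."""
    ate = sensed_t_ref - true_t_ref
    return np.polyfit(sr_ref, ate, degree)

def correct_temperature(sr, sensed_t, coeffs):
    """Remove the SR-predicted error from new sensed temperatures."""
    return sensed_t - np.polyval(coeffs, sr)

# Synthetic example: sensor reads ~0.004 degC high per W m^-2 of solar radiation.
rng = np.random.default_rng(3)
sr_may = rng.uniform(0, 900, 500)                       # W m^-2
t_true = 20 + 5 * rng.standard_normal(500)              # degC
t_sensed = t_true + 0.004 * sr_may + rng.normal(0, 0.1, 500)
coeffs = fit_ate_model(sr_may, t_sensed, t_true)
rmse = np.sqrt(np.mean((correct_temperature(sr_may, t_sensed, coeffs) - t_true) ** 2))
print("corrected RMSE (degC):", rmse)
```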
Sun, Xingming; Yan, Shuangshuang; Wang, Baowei; Xia, Li; Liu, Qi; Zhang, Hui
2015-07-24
Air temperature (AT) is an extremely vital factor in meteorology, agriculture, military, etc., being used for the prediction of weather disasters, such as drought, flood, frost, etc. Many efforts have been made to monitor the temperature of the atmosphere, like automatic weather stations (AWS). Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed within a large spatial density. A novel method named the meteorology wireless sensor network relying on a sensing node has been proposed for the purpose of reducing the cost of AT monitoring. However, the temperature sensor on the sensing node can be easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT, taking SR into consideration. In this work, we analyzed all of the collected data of AT and SR in May 2014 and found the numerical correspondence between AT error (ATE) and SR. This corresponding relation was used to calculate real-time ATE according to real-time SR and to correct the error of AT in other months.
NASA Technical Reports Server (NTRS)
Goad, C. C.
1977-01-01
The effects of tropospheric and ionospheric refraction errors are analyzed for the GEOS-C altimeter project in terms of their resultant effects on C-band orbits and the altimeter measurement itself. Operational procedures using surface meteorological measurements at ground stations and monthly means for ocean surface conditions are assumed, with no corrections made for ionospheric effects. Effects on the orbit height due to tropospheric errors are approximately 15 cm for single pass short arcs (such as for calibration) and 10 cm for global orbits of one revolution. Orbit height errors due to neglect of the ionosphere have an amplitude of approximately 40 cm when the orbits are determined from C-band range data with predominantly daylight tracking. Altimeter measurement errors are approximately 10 cm due to residual tropospheric refraction correction errors. Ionospheric effects on the altimeter range measurement are also on the order of 10 cm during the GEOS-C launch and early operation period.
Geographically correlated orbit error
NASA Technical Reports Server (NTRS)
Rosborough, G. W.
1989-01-01
The dominant error source in estimating the orbital position of a satellite from ground based tracking data is the modeling of the Earth's gravity field. The resulting orbit error due to gravity field model errors are predominantly long wavelength in nature. This results in an orbit error signature that is strongly correlated over distances on the size of ocean basins. Anderle and Hoskin (1977) have shown that the orbit error along a given ground track also is correlated to some degree with the orbit error along adjacent ground tracks. This cross track correlation is verified here and is found to be significant out to nearly 1000 kilometers in the case of TOPEX/POSEIDON when using the GEM-T1 gravity model. Finally, it was determined that even the orbit error at points where ascending and descending ground traces cross is somewhat correlated. The implication of these various correlations is that the orbit error due to gravity error is geographically correlated. Such correlations have direct implications when using altimetry to recover oceanographic signals.
COMPLEX VARIABLE BOUNDARY ELEMENT METHOD: APPLICATIONS.
Hromadka, T.V.; Yen, C.C.; Guymon, G.L.
1985-01-01
The complex variable boundary element method (CVBEM) is used to approximate several potential problems where analytical solutions are known. A modeling result produced from the CVBEM is a measure of relative error in matching the known boundary condition values of the problem. A CVBEM error-reduction algorithm is used to reduce the relative error of the approximation by adding nodal points in boundary regions where error is large. From the test problems, overall error is reduced significantly by utilizing the adaptive integration algorithm.
Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia
2014-11-01
Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
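One of the reported comparisons can be checked with SciPy's Fisher's exact test. Using the syringe-volume figures above (16/18 nurses erred pre-intervention vs 11/19 post-intervention), a one-sided test reproduces a p value near the reported 0.038; whether the authors used a one- or two-sided test is not stated, so the `alternative` argument below is an assumption.

```python
from scipy.stats import fisher_exact

pre = (16, 18 - 16)     # (erred, did not err) before the intervention
post = (11, 19 - 11)    # (erred, did not err) with the intervention
odds_ratio, p_value = fisher_exact([pre, post], alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.3f}")
```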
Blumenfeld, Philip; Hata, Nobuhiko; DiMaio, Simon; Zou, Kelly; Haker, Steven; Fichtinger, Gabor; Tempany, Clare M C
2007-09-01
To quantify needle placement accuracy of magnetic resonance image (MRI)-guided core needle biopsy of the prostate. A total of 10 biopsies were performed with an 18-gauge (G) core biopsy needle via a percutaneous transperineal approach. Needle placement error was assessed by comparing the coordinates of preplanned targets with the needle tip measured from the intraprocedural coherent gradient echo images. The source of these errors was subsequently investigated by measuring displacement caused by needle deflection and needle susceptibility artifact shift in controlled phantom studies. Needle placement error due to misalignment of the needle template guide was also evaluated. The mean and standard deviation (SD) of errors in targeted biopsies were 6.5 +/- 3.5 mm. Phantom experiments showed significant placement error due to needle deflection with a needle with an asymmetrically beveled tip (3.2-8.7 mm depending on tissue type) but significantly smaller error with a symmetrical bevel (0.6-1.1 mm). The needle susceptibility artifact showed a shift of 1.6 +/- 0.4 mm from the true needle axis. Misalignment of the needle template guide contributed an error of 1.5 +/- 0.3 mm. Needle placement error was clinically significant in MRI-guided biopsy for diagnosis of prostate cancer. Needle placement error due to needle deflection was the most significant cause of error, especially for needles with an asymmetrical bevel. (c) 2007 Wiley-Liss, Inc.
Improving cancer patient emergency room utilization: A New Jersey state assessment.
Scholer, Anthony J; Mahmoud, Omar M; Ghosh, Debopyria; Schwartzman, Jacob; Farooq, Mohammed; Cabrera, Javier; Wieder, Robert; Adam, Nabil R; Chokshi, Ravi J
2017-12-01
Due to its increasing incidence and its major contribution to healthcare costs, cancer is a major public health problem in the United States. The impact across different services is not well documented, and utilization of emergency departments (ED) by cancer patients is not well characterized. The aim of our study was to identify factors that can be addressed to improve the appropriate delivery of quality cancer care, thereby reducing ED utilization, decreasing hospitalizations and reducing the related healthcare costs. The New Jersey State Inpatient and Emergency Department Databases were used to identify the primary outcome variables: patient disposition and readmission rates. The independent variables were demographics, payer and clinical characteristics. Multivariable unconditional logistic regression models using clinical and demographic data were used to predict hospital admission or emergency department return. A total of 37,080 emergency department visits were cancer related, with the most common diagnosis attributed to lung cancer (30.0%) and the most common presentation being pain. The disposition of patients who visit the ED due to cancer-related issues is significantly affected by race (African American OR=0.6, p value=0.02 and Hispanic OR=0.5, p value=0.02, respectively), age 65 to 75 years (SNF/ICF OR 2.35, p value=0.00 and Home Healthcare Service OR 5.15, p value=0.01, respectively), number of diagnoses (OR 1.26, p value=0.00), insurance payer (SNF/ICF OR 2.2, p value=0.02 and Home Healthcare Services OR 2.85, p value=0.07, respectively) and type of cancer (breast OR 0.54, p value=0.01, prostate OR 0.56, p value=0.01, uterine OR 0.37, p value=0.02, and other OR 0.62, p value=0.05, respectively). In addition, comorbidities increased the likelihood of death, being transferred to SNF/ICF, or utilization of home healthcare services (OR 1.6, p value=0.00, OR 1.18, p value=0.00, and OR 1.16, p value=0.04, respectively). Readmission is significantly affected by race (African Americans OR 0.41, standard error 0.08, p value=0.001 and Hispanics OR 0.29, standard error 0.11, p value=0.01, respectively), income (Quartile 2 OR 0.98, standard error 0.14, p value 0.01, Quartile 3 OR 1.07, standard error 0.13, p value 0.01, and Quartile 4 OR 0.88, standard error 0.12, p value 0.01, respectively), and type of cancer (prostate OR 0.25, standard error 0.09, p value=0.001). Web-based symptom questionnaires, patient navigators, end-of-life nursing and clinical cancer pathways can identify, guide and prompt early initiation of treatment before progression of symptoms in cancer patients most likely to visit the ED, thereby improving cancer patient satisfaction and outcomes and reducing healthcare costs. Published by Elsevier Ltd.
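The odds ratios above come from multivariable logistic regression: each fitted coefficient beta with standard error SE converts to an odds ratio exp(beta) with a 95% confidence interval exp(beta +/- 1.96*SE). A minimal sketch with illustrative numbers, not values taken from the study.

```python
# Convert a logistic-regression coefficient and its standard error into an odds ratio
# with a 95% confidence interval, the form in which the results above are reported.
import math

def odds_ratio_ci(beta, se, z=1.96):
    point = math.exp(beta)
    return point, math.exp(beta - z * se), math.exp(beta + z * se)

or_, lo, hi = odds_ratio_ci(beta=-0.51, se=0.22)   # illustrative coefficient only
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```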
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parks, K.; Wan, Y. H.; Wiener, G.
2011-10-01
The focus of this report is the wind forecasting system developed during this contract period with results of performance through the end of 2010. The report is intentionally high-level, with technical details disseminated at various conferences and academic papers. At the end of 2010, Xcel Energy managed the output of 3372 megawatts of installed wind energy. The wind plants span three operating companies, serving customers in eight states, and three market structures. The great majority of the wind energy is contracted through power purchase agreements (PPAs). The remainder is utility owned, Qualifying Facilities (QF), distributed resources (i.e., 'behind the meter'), or merchant entities within Xcel Energy's Balancing Authority footprints. Regardless of the contractual or ownership arrangements, the output of the wind energy is balanced by Xcel Energy's generation resources that include fossil, nuclear, and hydro based facilities that are owned or contracted via PPAs. These facilities are committed and dispatched or bid into day-ahead and real-time markets by Xcel Energy's Commercial Operations department. Wind energy complicates the short and long-term planning goals of least-cost, reliable operations. Due to the uncertainty of wind energy production, inherent suboptimal commitment and dispatch associated with imperfect wind forecasts drives up costs. For example, a gas combined cycle unit may be turned on, or committed, in anticipation of low winds. The reality is winds stayed high, forcing this unit and others to run, or be dispatched, to sub-optimal loading positions. In addition, commitment decisions are frequently irreversible due to minimum up and down time constraints. That is, a dispatcher lives with inefficient decisions made in prior periods. In general, uncertainty contributes to conservative operations - committing more units and keeping them on longer than may have been necessary for purposes of maintaining reliability. The downside is costs are higher. In organized electricity markets, units that are committed for reliability reasons are paid their offer price even when prevailing market prices are lower. Often, these uplift charges are allocated to market participants that caused the inefficient dispatch in the first place. Thus, wind energy facilities are burdened with their share of costs proportional to their forecast errors. For Xcel Energy, wind energy uncertainty costs manifest depending on specific market structures. In the Public Service of Colorado (PSCo), inefficient commitment and dispatch caused by wind uncertainty increases fuel costs. Wind resources participating in the Midwest Independent System Operator (MISO) footprint make substantial payments in the real-time markets to true-up their day-ahead positions and are additionally burdened with deviation charges called a Revenue Sufficiency Guarantee (RSG) to cover out of market costs associated with operations. Southwest Public Service (SPS) wind plants cause both commitment inefficiencies and are charged Southwest Power Pool (SPP) imbalance payments due to wind uncertainty and variability. Wind energy forecasting helps mitigate these costs. Wind integration studies for the PSCo and Northern States Power (NSP) operating companies have projected increasing costs as more wind is installed on the system due to forecast error. It follows that reducing forecast error would reduce these costs. 
This is echoed by large-scale studies in neighboring regions and states that have recommended adoption of state-of-the-art wind forecasting tools in day-ahead and real-time planning and operations. Further, Xcel Energy concluded that reducing the normalized mean absolute error by one percent would have reduced costs in 2008 by over $1 million annually in PSCo alone. The value of reducing forecast error prompted Xcel Energy to make substantial investments in wind energy forecasting research and development.
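The cost figure above is tied to the normalized mean absolute error (NMAE) of the wind-power forecast. A minimal sketch of one common definition, normalizing the mean absolute error by installed capacity; the forecast and actual arrays are illustrative placeholders, and the 3372 MW figure from the report is used only as the divisor.

```python
# Normalized mean absolute error (NMAE) of a wind-power forecast: the mean absolute
# forecast error divided by the installed capacity, expressed here as a percentage.
import numpy as np

def nmae(forecast_mw, actual_mw, capacity_mw):
    return np.mean(np.abs(np.asarray(forecast_mw) - np.asarray(actual_mw))) / capacity_mw

forecast = [1200.0, 950.0, 1430.0, 800.0]   # hourly forecasts, MW (illustrative)
actual   = [1100.0, 1010.0, 1500.0, 700.0]  # actual generation, MW (illustrative)
print(f"NMAE = {100 * nmae(forecast, actual, capacity_mw=3372):.2f}%")
```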
Errors Affect Hypothetical Intertemporal Food Choice in Women
Sellitto, Manuela; di Pellegrino, Giuseppe
2014-01-01
Growing evidence suggests that the ability to control behavior is enhanced in contexts in which errors are more frequent. Here we investigated whether pairing desirable food with errors could decrease impulsive choice during hypothetical temporal decisions about food. To this end, healthy women performed a Stop-signal task in which one food cue predicted a high error rate, and another food cue predicted a low error rate. Afterwards, we measured participants’ intertemporal preferences during decisions between smaller-immediate and larger-delayed amounts of food. We expected reduced sensitivity to smaller-immediate amounts of the food associated with the high error rate. Moreover, taking into account that deprivational states affect sensitivity for food, we controlled for participants’ hunger. Results showed that pairing food with high-error likelihood decreased temporal discounting. This effect was modulated by hunger, indicating that the lower the hunger level, the more participants showed reduced impulsive preference for the food previously associated with a high number of errors as compared with the other food. These findings reveal that errors, which are motivationally salient events that recruit cognitive control and drive avoidance learning against error-prone behavior, are effective in reducing impulsive choice for edible outcomes. PMID:25244534
Cao, Rensheng; Ruan, Wenqian; Wu, Xianliang; Wei, Xionghui
2018-01-01
Highly promising artificial intelligence tools, including neural network (ANN), genetic algorithm (GA) and particle swarm optimization (PSO), were applied in the present study to develop an approach for the evaluation of Se(IV) removal from aqueous solutions by reduced graphene oxide-supported nanoscale zero-valent iron (nZVI/rGO) composites. Both GA and PSO were used to optimize the parameters of the ANN. The effect of operational parameters (i.e., initial pH, temperature, contact time and initial Se(IV) concentration) on the removal efficiency was examined using response surface methodology (RSM), which was also utilized to obtain a dataset for the ANN training. The ANN-GA model results (with a prediction error of 2.88%) showed a better agreement with the experimental data than the ANN-PSO model results (with a prediction error of 4.63%) and the RSM model results (with a prediction error of 5.56%); thus, the ANN-GA model was an ideal choice for modeling and optimizing the Se(IV) removal by the nZVI/rGO composites due to its low prediction error. The analysis of the experimental data illustrates that the removal process of Se(IV) obeyed the Langmuir isotherm and the pseudo-second-order kinetic model. Furthermore, the Se 3d and 3p peaks found in the XPS spectra for the nZVI/rGO composites after the removal treatment indicate that the removal of Se(IV) occurred mainly through adsorption and reduction mechanisms. PMID:29543753
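The two models named in the abstract have standard forms that can be fit by nonlinear least squares. A minimal sketch with scipy; the data points below are illustrative placeholders, not values from the study.

```python
# Standard forms of the two models reported above, fit by nonlinear least squares.
# Langmuir isotherm:            q_e = q_max * K_L * C_e / (1 + K_L * C_e)
# Pseudo-second-order kinetics: q_t = (k2 * q_e**2 * t) / (1 + k2 * q_e * t)
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, q_max, k_l):
    return q_max * k_l * ce / (1.0 + k_l * ce)

def pseudo_second_order(t, q_e, k2):
    return (k2 * q_e**2 * t) / (1.0 + k2 * q_e * t)

ce = np.array([0.5, 1.0, 2.0, 5.0, 10.0])        # equilibrium concentration, mg/L (illustrative)
qe = np.array([12.0, 20.0, 30.0, 42.0, 48.0])    # equilibrium uptake, mg/g (illustrative)
(q_max, k_l), _ = curve_fit(langmuir, ce, qe, p0=[50.0, 0.5])
print(f"Langmuir fit: q_max = {q_max:.1f} mg/g, K_L = {k_l:.2f} L/mg")
```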
Improving the twilight model for polar cap absorption nowcasts
NASA Astrophysics Data System (ADS)
Rogers, N. C.; Kero, A.; Honary, F.; Verronen, P. T.; Warrington, E. M.; Danskin, D. W.
2016-11-01
During solar proton events (SPE), energetic protons ionize the polar mesosphere causing HF radio wave attenuation, more strongly on the dayside where the effective recombination coefficient, αeff, is low. Polar cap absorption models predict the 30 MHz cosmic noise absorption, A, measured by riometers, based on real-time measurements of the integrated proton flux-energy spectrum, J. However, empirical models in common use cannot account for regional and day-to-day variations in the daytime and nighttime profiles of αeff(z) or the related sensitivity parameter, m = A/√J. Large prediction errors occur during twilight when m changes rapidly, and due to errors locating the rigidity cutoff latitude. Modeling the twilight change in m as a linear or Gauss error-function transition over a range of solar-zenith angles (χl < χ < χu) provides a better fit to measurements than selecting day or night αeff profiles based on the Earth-shadow height. Optimal model parameters were determined for several polar cap riometers for large SPEs in 1998-2005. The optimal χl parameter was found to be most variable, with smaller values (as low as 60°) postsunrise compared with presunset and with positive correlation between riometers over a wide area. Day and night values of m exhibited higher correlation for closely spaced riometers. A nowcast simulation is presented in which rigidity boundary latitude and twilight model parameters are optimized by assimilating age-weighted measurements from 25 riometers. The technique reduces model bias, and root-mean-square errors are reduced by up to 30% compared with a model employing no riometer data assimilation.
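The abstract models the day-to-night change of the sensitivity m = A/√J as a linear or Gauss error-function transition across a solar-zenith-angle window (χl, χu). A minimal sketch of one possible error-function parameterization; the midpoint and width mapping below are assumptions for illustration, not the paper's exact form.

```python
# Blend the riometer sensitivity m from its day value to its night value as the
# solar-zenith angle chi crosses the twilight window [chi_l, chi_u], using an
# error-function transition centered on the window midpoint.
import numpy as np
from scipy.special import erf

def twilight_m(chi_deg, m_day, m_night, chi_l, chi_u):
    chi0 = 0.5 * (chi_l + chi_u)            # assumed transition midpoint
    width = (chi_u - chi_l) / 4.0           # assumed transition width scale
    frac_night = 0.5 * (1.0 + erf((chi_deg - chi0) / width))
    return m_day + (m_night - m_day) * frac_night

chi = np.linspace(60.0, 120.0, 7)
print(twilight_m(chi, m_day=0.9, m_night=0.2, chi_l=80.0, chi_u=100.0))
```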
NASA Astrophysics Data System (ADS)
Nezhad, Mohsen Motahari; Shojaeefard, Mohammad Hassan; Shahraki, Saeid
2016-02-01
In this study, the experiments aimed at thermally analyzing the exhaust valve in an air-cooled internal combustion engine and estimating the thermal contact conductance in fixed and periodic contacts. Due to the nature of internal combustion engines, the duration of contact between the valve and its seat is very short, and much time is needed to reach the quasi-steady state in the periodic contact between the exhaust valve and its seat. Using the methods of linear extrapolation and the inverse solution, the surface contact temperatures and the fixed and periodic thermal contact conductance were calculated. The results of the linear extrapolation and inverse methods show similar trends, and based on the error analysis, they are accurate enough to estimate the thermal contact conductance. Moreover, based on the error analysis, the linear extrapolation method using the inverse ratio is preferred. The effects of pressure, contact frequency, heat flux, and cooling air speed on thermal contact conductance have been investigated. The results show that increasing the contact pressure increases the thermal contact conductance substantially, whereas increasing the engine speed decreases it. Boosting the air speed increases the thermal contact conductance, and raising the heat flux reduces it. The average calculated error equals 12.9%.
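A minimal sketch of the linear-extrapolation step, assuming the common definition of thermal contact conductance h = q/ΔT, with the interface temperature jump ΔT obtained by extrapolating subsurface thermocouple readings on each side to the contact plane. The depths, temperatures, and heat flux below are placeholders, not measurements from the study.

```python
# Fit a line through temperatures measured at known depths from the interface on each
# side of the contact, extrapolate to depth = 0, and compute h = q / (T_hot - T_cold).
import numpy as np

def surface_temperature(depths_mm, temps_c):
    slope, intercept = np.polyfit(depths_mm, temps_c, 1)
    return intercept                              # extrapolated temperature at the interface

t_hot  = surface_temperature([2.0, 4.0, 6.0], [402.0, 410.0, 418.0])  # valve side, deg C
t_cold = surface_temperature([2.0, 4.0, 6.0], [330.0, 322.0, 314.0])  # seat side, deg C
q = 2.5e5                                          # heat flux through the joint, W/m^2
h = q / (t_hot - t_cold)                           # thermal contact conductance, W/(m^2 K)
print(f"h = {h:.0f} W/(m^2 K)")
```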
Wind tunnel seeding particles for laser velocimeter
NASA Technical Reports Server (NTRS)
Ghorieshi, Anthony
1992-01-01
The design of an optimal airfoil has been a major challenge for the aerospace industry. The main objective is to reduce the drag force while increasing the lift force in various environmental air conditions. Experimental verification of theoretical and computational results is a crucial part of the analysis because of errors buried in the solutions due to the assumptions made in theoretical work. Experimental studies are an integral part of a good design procedure; however, empirical data are not always error free due to environmental obstacles, poor execution, etc. The reduction of errors in empirical data is a major challenge in wind tunnel testing. One recent advance of particular interest is the use of a non-intrusive measurement technique known as laser velocimetry (LV), which allows quantitative flow data to be obtained without introducing flow-disturbing probes. The laser velocimeter measures the light scattered by particles present in the flow, and hence the velocity of those particles rather than the velocity of the flow itself. Therefore, for an accurate flow velocity measurement with laser velocimeters, two criteria are investigated: (1) how well the particles track the local flow field, and (2) the light scattering efficiency required to obtain signals with the LV. In order to demonstrate the concept of predicting the flow velocity from velocity measurements of the particle seeding, the theoretical velocity of the gas flow is computed and compared with the experimentally obtained velocity of the particle seeding.
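How faithfully a seeding particle follows the flow (criterion 1 above) is commonly characterized by the Stokes relaxation time. This is a standard estimate, not a formula taken from this report; the particle and fluid properties below are illustrative.

```python
# Stokes-drag estimate of how quickly a seeding particle responds to a change in the
# surrounding flow velocity: tau_p = rho_p * d_p**2 / (18 * mu). A relaxation time that
# is small compared with the flow time scales indicates faithful tracking.
def relaxation_time(rho_p, d_p, mu):
    return rho_p * d_p**2 / (18.0 * mu)

tau = relaxation_time(rho_p=1000.0,   # particle density, kg/m^3 (e.g., oil droplet)
                      d_p=1e-6,       # particle diameter, m (1 micron)
                      mu=1.8e-5)      # dynamic viscosity of air, Pa*s
print(f"tau_p = {tau:.2e} s")
```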
Effects of Optical Blur Reduction on Equivalent Intrinsic Blur
Valeshabad, Ali Kord; Wanek, Justin; McAnany, J. Jason; Shahidi, Mahnaz
2015-01-01
Purpose To determine the effect of optical blur reduction on equivalent intrinsic blur, an estimate of the blur within the visual system, by comparing optical and equivalent intrinsic blur before and after adaptive optics (AO) correction of wavefront error. Methods Twelve visually normal individuals (age, 31 ± 12 years) participated in this study. Equivalent intrinsic blur (σint) was derived using a previously described model. Optical blur (σopt) due to high-order aberrations was quantified by Shack-Hartmann aberrometry and minimized using AO correction of wavefront error. Results σopt and σint were significantly reduced and visual acuity (VA) was significantly improved after AO correction (P ≤ 0.004). Reductions in σopt and σint were linearly dependent on the values before AO correction (r ≥ 0.94, P ≤ 0.002). The reduction in σint was greater than the reduction in σopt, although this difference was only marginally significant (P = 0.05). σint after AO correlated significantly with σint before AO (r = 0.92, P < 0.001) and the two parameters were related linearly with a slope of 0.46. Conclusions Reduction in equivalent intrinsic blur was greater than the reduction in optical blur due to AO correction of wavefront error. This finding implies that VA in subjects with high equivalent intrinsic blur can be improved beyond that expected from the reduction in optical blur alone. PMID:25785538
A priori discretization error metrics for distributed hydrologic modeling applications
NASA Astrophysics Data System (ADS)
Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar
2016-12-01
Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.
NASA Astrophysics Data System (ADS)
Zelensky, Nikita P.; Lemoine, Frank G.; Chinn, Douglas S.; Beckley, Brian D.; Bordyugov, Oleg; Yang, Xu; Wimert, Jesse; Pavlis, Despina
2016-12-01
We have investigated the quality of precise orbits for the SARAL altimeter satellite using Satellite Laser Ranging (SLR) and Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS) data from March 14, 2013 to August 10, 2014. We have identified a 4.31 ± 0.14 cm error in the Z (cross-track) direction that defines the center-of-mass of the SARAL satellite in the spacecraft coordinate system, and we have tuned the SLR and DORIS tracking point offsets. After these changes, we reduce the average RMS of the SLR residuals for seven-day arcs from 1.85 to 1.38 cm. We tuned the non-conservative force model for SARAL, reducing the amplitude of the daily adjusted empirical accelerations by eight percent. We find that the best dynamic orbits show altimeter crossover residuals of 5.524 cm over cycles 7-15. Our analysis offers a unique illustration that high-elevation SLR residuals will not necessarily provide an accurate estimate of radial error at the 1-cm level, and that other supporting orbit tests are necessary for a better estimate. Through the application of improved models for handling time-variable gravity, the use of reduced-dynamic orbits, and through an arc-by-arc estimation of the C22 and S22 coefficients, we find from analysis of independent SLR residuals and other tests that we achieve 1.1-1.2 cm radial orbit accuracies for SARAL. The limiting errors stem from the inadequacy of the DPOD2008 and SLRF2008 station complements, and inadequacies in radiation force modeling, especially with respect to spacecraft self-shadowing and modeling of thermal variations due to eclipses.
NASA Astrophysics Data System (ADS)
Cheng, Lara W. S.
Airport moving maps (AMMs) have been shown to decrease navigation errors, increase taxiing speed, and reduce workload when they depict airport layout, current aircraft position, and the cleared taxi route. However, current technologies are limited in their ability to depict the cleared taxi route due to the unavailability of datacomm or other means of electronically transmitting clearances from ATC to the flight deck. This study examined methods by which pilots can input ATC-issued taxi clearances to support taxi route depictions on the AMM. Sixteen general aviation (GA) pilots used a touchscreen monitor to input taxi clearances using two input layouts, softkeys and QWERTY, each with and without feedforward (graying out invalid inputs). QWERTY yielded more taxi route input errors than the softkeys layout. The presence of feedforward did not produce fewer taxi route input errors than in the non-feedforward condition. The QWERTY layout did reduce taxi clearance input times relative to the softkeys layout, but when feedforward was present this effect was observed only for the longer, 6-segment taxi clearances. It was observed that with the softkeys layout, feedforward reduced input times compared to non-feedforward but only for the 4-segment clearances. Feedforward did not support faster taxi clearance input times for the QWERTY layout. Based on the results and analyses of the present study, it is concluded that for taxi clearance inputs, (1) QWERTY remain the standard for alphanumeric inputs, and (2) feedforward be investigated further, with a focus on participant preference and performance of black-gray contrast of keys.
Reducing the Familiarity of Conjunction Lures with Pictures
ERIC Educational Resources Information Center
Lloyd, Marianne E.
2013-01-01
Four experiments were conducted to test whether conjunction errors were reduced after pictorial encoding and whether the semantic overlap between study and conjunction items would impact error rates. Across 4 experiments, compound words studied with a single-picture had lower conjunction error rates during a recognition test than those words…
Recommendations to Improve the Accuracy of Estimates of Physical Activity Derived from Self Report
Ainsworth, Barbara E; Caspersen, Carl J; Matthews, Charles E; Mâsse, Louise C; Baranowski, Tom; Zhu, Weimo
2013-01-01
Context Assessment of physical activity using self-report has the potential for measurement error that can lead to incorrect inferences about physical activity behaviors and bias study results. Objective To provide recommendations to improve the accuracy of physical activity derived from self report. Process We provide an overview of presentations and a compilation of perspectives shared by the authors of this paper and workgroup members. Findings We identified a conceptual framework for reducing errors using physical activity self-report questionnaires. The framework identifies six steps to reduce error: (1) identifying the need to measure physical activity, (2) selecting an instrument, (3) collecting data, (4) analyzing data, (5) developing a summary score, and (6) interpreting data. Underlying the first four steps are behavioral parameters of type, intensity, frequency, and duration of physical activities performed, activity domains, and the location where activities are performed. We identified ways to reduce measurement error at each step and made recommendations for practitioners, researchers, and organizational units to reduce error in questionnaire assessment of physical activity. Conclusions Self-report measures of physical activity have a prominent role in research and practice settings. Measurement error can be reduced by applying the framework discussed in this paper. PMID:22287451
2014-03-22
consideration for enlisted airmen, has largely become a non factor due to over-inflated scores, with other factors such as specialty knowledge test scores, time...appraisal. Secondly, an Artificial Neural Network (ANN) classifier will be applied to the large sample data to confirm that the values solicited to...jobs, employees make themselves vulnerable to the organization when they expend effort. If extra effort is expended to reduce errors or defects, or
Moylan, Elizabeth C; Kowalczuk, Maria K
2016-11-23
To assess why articles are retracted from BioMed Central journals, whether retraction notices adhered to the Committee on Publication Ethics (COPE) guidelines, and whether retractions are becoming more frequent as a proportion of published articles. Retrospective cross-sectional analysis of 134 retractions from January 2000 to December 2015. 134 retraction notices were published during this timeframe. Although they account for 0.07% of all articles published (190 514 excluding supplements, corrections, retractions and commissioned content), the rate of retraction is rising. COPE guidelines on retraction were adhered to in that an explicit reason for each retraction was given. However, some notices did not document who retracted the article (eight articles, 6%) and others were unclear whether the underlying cause was honest error or misconduct (15 articles, 11%). The largest proportion of notices was issued by the authors (47 articles, 35%). The majority of retractions were due to some form of misconduct (102 articles, 76%), that is, compromised peer review (44 articles, 33%), plagiarism (22 articles, 16%) and data falsification/fabrication (10 articles, 7%). Honest error accounted for 17 retractions (13%), of which 10 articles (7%) were published in error. The median number of days from publication to retraction was 337.5 days. The most common reason to retract was compromised peer review. However, the majority of these cases date to March 2015 and appear to be the result of a systematic attempt to manipulate peer review across several publishers. Retractions due to plagiarism account for the second largest category and may be reduced by screening manuscripts before publication although this is not guaranteed. Retractions due to problems with the data may be reduced by appropriate data sharing and deposition before publication. Adopting a checklist (linked to COPE guidelines) and templates for various classes of retraction notices would increase transparency of retraction notices in future. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Audit of litigation against the accident and emergency radiology department.
Cantoni, S; De Stefano, F; Mari, A; Savaia, F; Rosso, R; Derchi, L
2009-09-01
The aims of this study were to reduce and monitor litigation due to failure to diagnose a fracture, to evaluate whether the cases were due to radiological error or other problems in the diagnostic and therapeutic management of patients and to identify organisational, technical or functional changes or guidelines to improve the management of patients with suspected fracture and their expectations. We analysed the litigation database for the period 2004-2006 and extracted all episodes indicating failure to diagnose a fracture at the accident and emergency radiology department of our centre. The radiographs underwent blinded review by two experts, and each case was jointly analysed by a radiologist and a forensic physician to see what led to the compensation claim. We identified 22 events (2004 seven cases; 2005 eight cases; 2006 seven cases). Six cases were unrelated to radiological error. Six were due to imperceptible fractures at the time of the examination. These were accounted for by the presence of a major lesion distracting the examiner's attention from a less important associated lesion in one case, a false negative result in a patient examined on an incompletely radiolucent spinal board, and underexposure of the coccyx region in an obese patient. Six cases were related to an interpretation error by the radiologist. In the remaining cases, the lesion being referred to in the compensation claim could either not be established or the case was closed by the insurance company without compensation. Corrective measures were adopted. These included planning the purchase of a higher performance device, drawing up a protocol for imaging patients on spinal boards, reminding radiologists of the need to carefully scrutinise the entire radiogram even after having identified a lesion, and producing an information sheet explaining to patients the possibility of false negative results in cases of imperceptible lesions and inviting them to return to the department if symptoms persist. We believe the clinical and administrative analysis we performed is useful. It reviewed some administrative practices and identified critical features. We identified tools that we trust will reduce litigation.
Liu, Shi Qiang; Zhu, Rong
2016-01-01
Error compensation of micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using neural-network-based identification for the MIMU, which capably solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. Using a neural network to model a complex multivariate and nonlinear coupling system, the errors can be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which capably measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 × 100 × 100 mm³) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMU, due to using a gas medium instead of a mechanical proof mass as the key moving and sensing element. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is, therefore, applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation, and the measurement errors of three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively. PMID:26840314
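A minimal sketch of the neural-network compensation idea, assuming the network learns the map from raw, cross-coupled six-axis outputs to reference values recorded during calibration. scikit-learn's MLPRegressor and the synthetic data below are stand-ins for the authors' actual network and calibration set.

```python
# Train a small neural network to map raw, cross-coupled IMU outputs back to the
# reference values (e.g., from a rate table), then evaluate the residual error.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
reference = rng.uniform(-1.0, 1.0, size=(2000, 6))           # true [wx, wy, wz, ax, ay, az]
coupling = np.eye(6) + 0.05 * rng.standard_normal((6, 6))     # cross-coupling + misalignment
raw = reference @ coupling.T + 0.01 * rng.standard_normal((2000, 6))

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(raw[:1500], reference[:1500])                        # fit on calibration data
residual = model.predict(raw[1500:]) - reference[1500:]        # check on held-out data
print("RMS error after compensation:", np.sqrt((residual**2).mean()))
```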
Porter, Teresita M.; Golding, G. Brian
2012-01-01
Nuclear large subunit ribosomal DNA is widely used in fungal phylogenetics and, to an increasing extent, also in amplicon-based environmental sequencing. The relatively short reads produced by next-generation sequencing, however, make primer choice and sequence error important variables for obtaining accurate taxonomic classifications. In this simulation study we tested the performance of three classification methods: 1) a similarity-based method (BLAST + Metagenomic Analyzer, MEGAN); 2) a composition-based method (Ribosomal Database Project naïve Bayesian classifier, NBC); and, 3) a phylogeny-based method (Statistical Assignment Package, SAP). We also tested the effects of sequence length, primer choice, and sequence error on classification accuracy and perceived community composition. Using a leave-one-out cross validation approach, results for classifications to the genus rank were as follows: BLAST + MEGAN had the lowest error rate and was particularly robust to sequence error; SAP accuracy was highest when long LSU query sequences were classified; and, NBC runs significantly faster than the other tested methods. All methods performed poorly with the shortest 50–100 bp sequences. Increasing simulated sequence error reduced classification accuracy. Community shifts were detected due to sequence error and primer selection even though there was no change in the underlying community composition. Short read datasets from individual primers, as well as pooled datasets, appear to only approximate the true community composition. We hope this work informs investigators of some of the factors that affect the quality and interpretation of their environmental gene surveys. PMID:22558215
NASA Astrophysics Data System (ADS)
Navidi, N.; Landry, R., Jr.
2015-08-01
Nowadays, Global Positioning System (GPS) receivers are aided by complementary radio navigation systems and Inertial Navigation Systems (INS) to obtain greater accuracy and robustness in land vehicular navigation. The Extended Kalman Filter (EKF) is an accepted conventional method for estimating the position, velocity, and attitude of the navigation system when INS measurements are fused with GPS data. However, the use of low-cost Inertial Measurement Units (IMUs) based on Micro-Electro-Mechanical Systems (MEMS) for land navigation reduces the precision and stability of the navigation system due to their inherent errors. The main goal of this paper is to provide a new model for fusing low-cost IMU and GPS measurements. The proposed model is based on an EKF aided by Fuzzy Inference Systems (FIS) as a promising method to solve the mentioned problems. This model considers the parameters of the measurement noise to adjust the measurement and process noise covariances. The simulation results show the efficiency of the proposed method in reducing the navigation system errors compared with the EKF alone.
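A simplified illustration of the role the fuzzy inference system plays in this kind of filter: adapting the measurement-noise covariance from the innovation sequence. The plain threshold rule below is a stand-in assumption for illustration, not the authors' FIS.

```python
# Innovation-based adaptation of the GPS measurement-noise covariance R: inflate R when
# the innovations are larger than their predicted covariance suggests, deflate it when
# they are smaller. A heuristic stand-in for the fuzzy rules described in the paper.
import numpy as np

def adapt_R(R, innovation, S_predicted, inflate=2.0, deflate=0.9):
    # Normalized innovation magnitude compared with its predicted covariance S.
    ratio = (innovation.T @ np.linalg.inv(S_predicted) @ innovation).item() / innovation.size
    if ratio > 1.5:          # innovations larger than expected -> trust GPS less
        return R * inflate
    if ratio < 0.5:          # innovations smaller than expected -> trust GPS more
        return R * deflate
    return R

R = np.diag([4.0, 4.0, 9.0])                  # GPS position noise covariance, m^2
innovation = np.array([[3.0], [1.0], [2.0]])  # measurement minus prediction, m
S = R + np.diag([1.0, 1.0, 1.0])              # predicted innovation covariance (HPH^T + R)
print(adapt_R(R, innovation, S))
```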
Reducing post analytical error: perspectives on new formats for the blood sciences pathology report.
O'Connor, John D
2015-02-01
Little has changed in the way we report pathology results from blood sciences over the last 50 years other than moving to electronic display from paper. In part, this reflects an aspiration to preserve the format of a paper report in electronic form. It is also due to the limitations of electronic media in displaying the data. The advancement of web-based technologies and the functionality of hand-held devices, together with wireless and other technologies, afford the opportunity to rethink data presentation with the aim of emphasising the message in the data, thereby modifying clinical behaviours and potentially reducing post-analytical error. This article takes the form of a commentary which explores new developments in the field of infographics and, together with examples, suggests some new approaches to turning what is currently just data into information. The combination of graphics and a new approach to provocative interpretative commenting offers a powerful tool in improving pathology utilisation. An additional challenge is the requirement to consider how pathology reports may be issued directly to patients.
Hadronic Contribution to Muon g-2 with Systematic Error Correlations
NASA Astrophysics Data System (ADS)
Brown, D. H.; Worstell, W. A.
1996-05-01
We have performed a new evaluation of the hadronic contribution to a_μ = (g-2)/2 of the muon with explicit correlations of systematic errors among the experimental data on σ(e^+e^- → hadrons). Our result for the lowest order hadronic vacuum polarization contribution is a_μ^hvp = 701.7(7.6)(13.4) × 10^-10, where the total systematic error contributions from below and above √s = 1.4 GeV are (12.5) × 10^-10 and (4.8) × 10^-10 respectively. Therefore, new measurements on σ(e^+e^- → hadrons) below 1.4 GeV in Novosibirsk, Russia can significantly reduce the total error on a_μ^hvp. This contrasts with a previous evaluation which indicated that the dominant error is due to the energy region above 1.4 GeV. The latter analysis correlated systematic errors at each energy point separately but not across energy ranges as we have done. Combination with higher order hadronic contributions is required for a new measurement of a_μ at Brookhaven National Laboratory to be sensitive to electroweak and possibly supergravity and muon substructure effects. Our analysis may also be applied to calculations of hadronic contributions to the running of α(s) at √s = M_Z, the hyperfine structure of muonium, and the running of sin^2 θ_W in Møller scattering. The analysis of the new Novosibirsk data will also be given.
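For context, the lowest-order hadronic vacuum polarization contribution quoted above is obtained from the measured cross section through the standard dispersion integral; the expression below is the textbook form, not one quoted from this abstract:

\[
a_\mu^{\mathrm{hvp}} \;=\; \frac{1}{4\pi^{3}} \int_{s_{\mathrm{thr}}}^{\infty} \mathrm{d}s \; K(s)\, \sigma\!\left(e^{+}e^{-} \to \mathrm{hadrons}\right)(s),
\]

where K(s) is a known QED kernel that weights the low-energy region most heavily, which is why the region below √s = 1.4 GeV dominates both the central value and, in this analysis, the systematic error.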
Merging gauge and satellite rainfall with specification of associated uncertainty across Australia
NASA Astrophysics Data System (ADS)
Woldemeskel, Fitsum M.; Sivakumar, Bellie; Sharma, Ashish
2013-08-01
Accurate estimation of spatial rainfall is crucial for modelling hydrological systems and planning and management of water resources. While spatial rainfall can be estimated either using rain gauge-based measurements or using satellite-based measurements, such estimates are subject to uncertainties due to various sources of errors in either case, including interpolation and retrieval errors. The purpose of the present study is twofold: (1) to investigate the benefit of merging rain gauge measurements and satellite rainfall data for Australian conditions and (2) to produce a database of retrospective rainfall along with a new uncertainty metric for each grid location at any timestep. The analysis involves four steps: First, a comparison of rain gauge measurements and the Tropical Rainfall Measuring Mission (TRMM) 3B42 data at such rain gauge locations is carried out. Second, gridded monthly rain gauge rainfall is determined using thin plate smoothing splines (TPSS) and modified inverse distance weight (MIDW) method. Third, the gridded rain gauge rainfall is merged with the monthly accumulated TRMM 3B42 using a linearised weighting procedure, the weights at each grid being calculated based on the error variances of each dataset. Finally, cross validation (CV) errors at rain gauge locations and standard errors at gridded locations for each timestep are estimated. The CV error statistics indicate that merging of the two datasets improves the estimation of spatial rainfall, and more so where the rain gauge network is sparse. The provision of spatio-temporal standard errors with the retrospective dataset is particularly useful for subsequent modelling applications where input error knowledge can help reduce the uncertainty associated with modelling outcomes.
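In its simplest form, weighting each dataset by the inverse of its error variance yields both a merged estimate and its reduced standard error. The sketch below illustrates that idea at a single grid cell; it is not necessarily the exact linearised weighting used in the study, and the numbers are placeholders.

```python
# Inverse-variance merging of a gauge-based and a satellite-based rainfall estimate at
# one grid cell: weights are proportional to 1/variance, and the merged variance is
# smaller than either input variance.
import numpy as np

def merge(gauge, var_gauge, satellite, var_satellite):
    w_g = 1.0 / var_gauge
    w_s = 1.0 / var_satellite
    merged = (w_g * gauge + w_s * satellite) / (w_g + w_s)
    merged_var = 1.0 / (w_g + w_s)          # standard error = sqrt(merged_var)
    return merged, np.sqrt(merged_var)

value, stderr = merge(gauge=82.0, var_gauge=16.0, satellite=95.0, var_satellite=49.0)
print(f"merged rainfall = {value:.1f} mm, standard error = {stderr:.1f} mm")
```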
Galerkin v. discrete-optimal projection in nonlinear model reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir
Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.
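For a linear(ized) full-order model, the Galerkin side of the comparison amounts to projecting the system operators onto a reduced basis. A minimal sketch follows; the discrete-optimal (GNAT-style) residual minimization is only described in the abstract and is not implemented here, and all matrices below are synthetic placeholders.

```python
# Galerkin projection of a linear full-order model dx/dt = A x + B u onto an orthonormal
# reduced basis V: the reduced operators are A_r = V^T A V and B_r = V^T B.
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 10
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))   # full-order operator (illustrative)
B = rng.standard_normal((n, 2))                        # full-order input map
V, _ = np.linalg.qr(rng.standard_normal((n, k)))       # orthonormal reduced basis, n x k

A_r = V.T @ A @ V                                      # reduced dynamics, k x k
B_r = V.T @ B                                          # reduced input map, k x 2
print(A_r.shape, B_r.shape)
```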
Miyashita, Theresa L; Diakogeorgiou, Eleni; Marrie, Kaitlyn
Investigation into the effect of cumulative subconcussive head impacts has yielded various results in the literature, with many supporting a link to neurological deficits. Little research has been conducted on men's lacrosse and associated balance deficits from head impacts. (1) Athletes will commit more errors on the postseason Balance Error Scoring System (BESS) test. (2) There will be a positive correlation between change in BESS scores and head impact exposure data. Prospective longitudinal study. Level 3. Thirty-four Division I men's lacrosse players (age, 19.59 ± 1.42 years) wore helmets instrumented with a sensor to collect head impact exposure data over the course of a competitive season. Players completed a BESS test at the start and end of the competitive season. The number of errors from pre- to postseason increased during the double-leg stance on foam (P < 0.001), tandem stance on foam (P = 0.009), total number of errors on a firm surface (P = 0.042), and total number of errors on a foam surface (P = 0.007). There were significant correlations only between the total errors on a foam surface and linear acceleration (P = 0.038, r = 0.36), head injury criteria (P = 0.024, r = 0.39), and Gadd Severity Index scores (P = 0.031, r = 0.37). Changes in the total number of errors on a foam surface may be considered a sensitive measure to detect balance deficits associated with cumulative subconcussive head impacts sustained over the course of 1 lacrosse season, as measured by average linear acceleration, head injury criteria, and Gadd Severity Index scores. If there is microtrauma to the vestibular system due to repetitive subconcussive impacts, only an assessment that highly stresses the vestibular system may be able to detect these changes. Cumulative subconcussive impacts may result in neurocognitive dysfunction, including balance deficits, which are associated with an increased risk for injury. The development of a strategy to reduce total number of head impacts may curb the associated sequelae. Incorporation of a modified BESS test, firm surface only, may not be recommended as it may not detect changes due to repetitive impacts over the course of a competitive season.
Methods and apparatus for reducing peak wind turbine loads
Moroz, Emilian Mieczyslaw
2007-02-13
A method for reducing peak loads of wind turbines in a changing wind environment includes measuring or estimating an instantaneous wind speed and direction at the wind turbine and determining a yaw error of the wind turbine relative to the measured instantaneous wind direction. The method further includes comparing the yaw error to a yaw error trigger that has different values at different wind speeds and shutting down the wind turbine when the yaw error exceeds the yaw error trigger corresponding to the measured or estimated instantaneous wind speed.
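A minimal sketch of the speed-dependent trigger logic described above: the allowable yaw error shrinks as wind speed rises, and a shutdown is requested when the measured yaw error exceeds the trigger for the current wind speed. The threshold table and numbers are illustrative assumptions, not values from the patent.

```python
# Look up the yaw-error trigger for the measured wind speed (linear interpolation
# between tabulated points) and decide whether to shut the turbine down.
import numpy as np

trigger_speeds = np.array([5.0, 10.0, 15.0, 20.0, 25.0])   # wind speed, m/s (illustrative)
trigger_errors = np.array([45.0, 30.0, 20.0, 12.0, 8.0])   # allowable yaw error, deg

def should_shut_down(wind_speed, yaw_error_deg):
    allowed = np.interp(wind_speed, trigger_speeds, trigger_errors)
    return abs(yaw_error_deg) > allowed

print(should_shut_down(wind_speed=18.0, yaw_error_deg=25.0))   # True: exceeds the trigger
```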
Simulation: learning from mistakes while building communication and teamwork.
Kuehster, Christina R; Hall, Carla D
2010-01-01
Medical errors are one of the leading causes of death annually in the United States. Many of these errors are related to poor communication and/or lack of teamwork. Using simulation as a teaching modality plays a dual role in helping to reduce these errors. Thorough integration of clinical practice with teamwork and communication in a safe environment increases the likelihood of reducing the error rates in medicine. When practitioners are allowed to make potential errors in a safe environment such as simulation, the resulting lessons are better retained and the errors are rarely repeated.
Intrinsic errors in transporting a single-spin qubit through a double quantum dot
NASA Astrophysics Data System (ADS)
Li, Xiao; Barnes, Edwin; Kestner, J. P.; Das Sarma, S.
2017-07-01
Coherent spatial transport or shuttling of a single electron spin through semiconductor nanostructures is an important ingredient in many spintronic and quantum computing applications. In this work we analyze the possible errors in solid-state quantum computation due to leakage in transporting a single-spin qubit through a semiconductor double quantum dot. In particular, we consider three possible sources of leakage errors associated with such transport: finite ramping times, spin-dependent tunneling rates between quantum dots induced by finite spin-orbit couplings, and the presence of multiple valley states. In each case we present quantitative estimates of the leakage errors, and discuss how they can be minimized. The emphasis of this work is on how to deal with the errors intrinsic to the ideal semiconductor structure, such as leakage due to spin-orbit couplings, rather than on errors due to defects or noise sources. In particular, we show that in order to minimize leakage errors induced by spin-dependent tunnelings, it is necessary to apply pulses to perform certain carefully designed spin rotations. We further develop a formalism that allows one to systematically derive constraints on the pulse shapes and present a few examples to highlight the advantage of such an approach.
Commentary: Reducing diagnostic errors: another role for checklists?
Winters, Bradford D; Aswani, Monica S; Pronovost, Peter J
2011-03-01
Diagnostic errors are a widespread problem, although the true magnitude is unknown because they cannot currently be measured validly. These errors have received relatively little attention despite alarming estimates of associated harm and death. One promising intervention to reduce preventable harm is the checklist. This intervention has proven successful in aviation, in which situations are linear and deterministic (one alarm goes off and a checklist guides the flight crew to evaluate the cause). In health care, problems are multifactorial and complex. A checklist has been used to reduce central-line-associated bloodstream infections in intensive care units. Nevertheless, this checklist was incorporated in a culture-based safety program that engaged and changed behaviors and used robust measurement of infections to evaluate progress. In this issue, Ely and colleagues describe how three checklists could reduce the cognitive biases and mental shortcuts that underlie diagnostic errors, but point out that these tools still need to be tested. To be effective, they must reduce diagnostic errors (efficacy) and be routinely used in practice (effectiveness). Such tools must intuitively support how the human brain works, and under time pressures, clinicians rarely think in conditional probabilities when making decisions. To move forward, it is necessary to accurately measure diagnostic errors (which could come from mapping out the diagnostic process as the medication process has done and measuring errors at each step) and pilot test interventions such as these checklists to determine whether they work.
Computerized Design and Generation of Low-Noise Gears with Localized Bearing Contact
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Chen, Ningxin; Chen, Jui-Sheng; Lu, Jian; Handschuh, Robert F.
1995-01-01
The results of research projects directed at the reduction of noise caused by misalignment of the following gear drives are presented: double-circular arc helical gears, modified involute helical gears, face-milled spiral bevel gears, and face-milled formate cut hypoid gears. Misalignment in these types of gear drives causes periodic, almost linear discontinuous functions of transmission errors. The period of such functions is the cycle of meshing when one pair of teeth is changed for the next. Due to the discontinuity of such functions of transmission errors, high vibration and noise are inevitable. A predesigned parabolic function of transmission errors that is able to absorb linear discontinuous functions of transmission errors and change the resulting function of transmission errors into a continuous one is proposed. The proposed idea was successfully tested using spiral bevel gears, and the noise was reduced substantially in comparison with the existing design. The idea of a predesigned parabolic function is applied for the reduction of noise of helical and hypoid gears. The effectiveness of the proposed approach has been investigated using the developed TCA (tooth contact analysis) programs. The bearing contact for the mentioned gears is localized. Conditions that avoid edge contact for the gear drives have been determined. Manufacturing of helical gears with new topology by hobs and grinding worms has been investigated.
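For reference, the predesigned parabolic function of transmission errors has, in its simplest form (the coefficients of the cited designs are not given in this summary, so the expression below is an assumed generic form), the shape

\[
\Delta\phi_2(\phi_1) \;=\; -a\,\phi_1^{2}, \qquad a > 0,
\]

where φ1 is the pinion rotation within one meshing cycle measured from the mean contact position. Because adding a linear function to a parabola only shifts the parabola's apex, the nearly linear discontinuous transmission errors caused by misalignment are absorbed and the resulting function of transmission errors remains continuous and parabolic.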
Distance error correction for time-of-flight cameras
NASA Astrophysics Data System (ADS)
Fuersattel, Peter; Schaller, Christian; Maier, Andreas; Riess, Christian
2017-06-01
The measurement accuracy of time-of-flight cameras is limited due to properties of the scene and systematic errors. These errors can accumulate to multiple centimeters which may limit the applicability of these range sensors. In the past, different approaches have been proposed for improving the accuracy of these cameras. In this work, we propose a new method that improves two important aspects of the range calibration. First, we propose a new checkerboard which is augmented by a gray-level gradient. With this addition it becomes possible to capture the calibration features for intrinsic and distance calibration at the same time. The gradient strip allows to acquire a large amount of distance measurements for different surface reflectivities, which results in more meaningful training data. Second, we present multiple new features which are used as input to a random forest regressor. By using random regression forests, we circumvent the problem of finding an accurate model for the measurement error. During application, a correction value for each individual pixel is estimated with the trained forest based on a specifically tailored feature vector. With our approach the measurement error can be reduced by more than 40% for the Mesa SR4000 and by more than 30% for the Microsoft Kinect V2. In our evaluation we also investigate the impact of the individual forest parameters and illustrate the importance of the individual features.
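A minimal sketch of the per-pixel correction step, assuming a random forest regressor trained on synthetic calibration data. The three features used here (raw distance, amplitude, radial pixel position) stand in for the paper's specifically tailored feature vector, and the synthetic error model is purely illustrative.

```python
# Train a random forest to predict the per-pixel distance error from a feature vector,
# then subtract the predicted error from the raw measurement to obtain a corrected range.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
raw_distance = rng.uniform(0.5, 5.0, n)                     # raw range, m
amplitude = rng.uniform(0.1, 1.0, n)                         # normalized IR amplitude
radius = rng.uniform(0.0, 1.0, n)                            # distance from image center
true_error = 0.02 * raw_distance + 0.03 * (1 - amplitude)    # synthetic systematic error, m
features = np.column_stack([raw_distance, amplitude, radius])

forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(features, true_error)                             # train on calibration data
corrected = raw_distance - forest.predict(features)          # apply per-pixel correction
true_distance = raw_distance - true_error
print("residual RMS error:", np.sqrt(np.mean((corrected - true_distance) ** 2)))
```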
The potential for geostationary remote sensing of NO2 to improve weather prediction
NASA Astrophysics Data System (ADS)
Liu, X.; Mizzi, A. P.; Anderson, J. L.; Fung, I. Y.; Cohen, R. C.
2016-12-01
Observations of surface winds remain sparse, making it challenging to simulate and predict the weather in circumstances of light winds that are most important for poor air quality. Direct measurements of short-lived chemicals from space might be a solution to this challenge. Here we investigate the application of data assimilation of NO2 columns, as will be observed from geostationary orbit, to improve predictions and retrospective analysis of surface wind fields. Specifically, synthetic NO2 observations are sampled from a "nature run (NR)" regarded as the true atmosphere. Then NO2 observations are assimilated using EAKF methods into a "control run (CR)" which differs from the NR in the wind field. Wind errors are generated by introducing (1) errors in the initial conditions, (2) a model error created by using two different formulations for the planetary boundary layer, and (3) a combination of both effects. The assimilation reduces wind errors by up to 50%, indicating that the prospects for future geostationary atmospheric composition measurements to improve weather forecasting are substantial. We also examine the assimilation sensitivity to the data assimilation window length. We find that due to the temporal heterogeneity of wind errors, the success of this application favors chemical observations of high frequency, such as those from a geostationary platform. We also show the potential to improve the soil moisture field by assimilating NO2 columns.
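As background for the assimilation step, the sketch below shows a generic stochastic ensemble Kalman filter update in Python. The study uses an ensemble adjustment Kalman filter (EAKF), which avoids perturbing the observations, so this is a simplified stand-in; the state size, observation operator, and error covariances are arbitrary placeholders.

```python
# Minimal stochastic ensemble Kalman filter update (a generic sketch; the study
# uses an EAKF, which differs in how the ensemble is adjusted).
import numpy as np

rng = np.random.default_rng(1)
n_state, n_obs, n_ens = 10, 3, 20

ensemble = rng.standard_normal((n_state, n_ens))      # prior ensemble (columns = members)
H = np.zeros((n_obs, n_state)); H[0, 0] = H[1, 4] = H[2, 9] = 1.0   # observation operator
R = 0.1 * np.eye(n_obs)                               # observation-error covariance
y = rng.standard_normal(n_obs)                        # observations (e.g., NO2 columns)

X_mean = ensemble.mean(axis=1, keepdims=True)
A = ensemble - X_mean                                 # ensemble anomalies
P = A @ A.T / (n_ens - 1)                             # sample background covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # Kalman gain

# Perturbed-observation update for each member.
for i in range(n_ens):
    y_pert = y + rng.multivariate_normal(np.zeros(n_obs), R)
    ensemble[:, i] += K @ (y_pert - H @ ensemble[:, i])
```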
Using Automated Writing Evaluation to Reduce Grammar Errors in Writing
ERIC Educational Resources Information Center
Liao, Hui-Chuan
2016-01-01
Despite the recent development of automated writing evaluation (AWE) technology and the growing interest in applying this technology to language classrooms, few studies have looked at the effects of using AWE on reducing grammatical errors in L2 writing. This study identified the primary English grammatical error types made by 66 Taiwanese…
Using Six Sigma to reduce medication errors in a home-delivery pharmacy service.
Castle, Lon; Franzblau-Isaac, Ellen; Paulsen, Jim
2005-06-01
Medco Health Solutions, Inc. conducted a project to reduce medication errors in its home-delivery service, which is composed of eight prescription-processing pharmacies, three dispensing pharmacies, and six call-center pharmacies. Medco uses the Six Sigma methodology to reduce process variation, establish procedures to monitor the effectiveness of medication safety programs, and determine when these efforts do not achieve performance goals. A team reviewed the processes in home-delivery pharmacy and suggested strategies to improve the data-collection and medication-dispensing practices. A variety of improvement activities were implemented, including a procedure for developing, reviewing, and enhancing sound-alike/look-alike (SALA) alerts and system enhancements to improve processing consistency across the pharmacies. "External nonconformances" were reduced for several categories of medication errors, including wrong-drug selection (33%), wrong directions (49%), and SALA errors (69%). Control charts demonstrated evidence of sustained process improvement and actual reduction in specific medication error elements. Establishing a continuous quality improvement process to ensure that medication errors are minimized is critical to any health care organization providing medication services.
Aldridge, Kristina; Boyadjiev, Simeon A.; Capone, George T.; DeLeon, Valerie B.; Richtsmeier, Joan T.
2015-01-01
The genetic basis for complex phenotypes is currently of great interest for both clinical investigators and basic scientists. In order to acquire a thorough understanding of the translation from genotype to phenotype, highly precise measures of phenotypic variation are required. New technologies, such as 3D photogrammetry, are being implemented in phenotypic studies due to their ability to collect data rapidly and non-invasively. Before these systems can be broadly implemented, the error associated with data collected from images acquired using these technologies must be assessed. This study investigates the precision, error, and repeatability associated with anthropometric landmark coordinate data collected from 3D digital photogrammetric images acquired with the 3dMDface System. Precision, error due to the imaging system, error due to digitization of the images, and repeatability are assessed in a sample of children and adults (N=15). Results show that data collected from images with the 3dMDface System are highly repeatable and precise. The average error associated with the placement of landmarks is sub-millimeter; both the error due to digitization and to the imaging system are very low. The few measures showing a higher degree of error include those crossing the labial fissure, which are influenced by even subtle movement of the mandible. These results suggest that 3D anthropometric data collected using the 3dMDface System are highly reliable and therefore useful for evaluation of clinical dysmorphology and surgery, analyses of genotype-phenotype correlations, and inheritance of complex phenotypes. PMID:16158436
Reducing Noise in the MSU Daily Lower-Tropospheric Global Temperature Dataset
NASA Technical Reports Server (NTRS)
Christy, John R.; Spencer, Roy W.; McNider, Richard T.
1996-01-01
The daily global-mean values of the lower-tropospheric temperature determined from microwave emissions measured by satellites are examined in terms of their signal, noise, and signal-to-noise ratio. Daily and 30-day average noise estimates are reduced by almost 50% and 35%, respectively, by analyzing and adjusting (if necessary) for errors due to 1) missing data, 2) residual harmonics of the annual cycle unique to particular satellites, 3) lack of filtering, and 4) spurious trends. After adjustments, the decadal trend of the lower-tropospheric global temperature from January 1979 through February 1994 becomes -0.058 C, or about 0.03 C per decade cooler than previously calculated.
Reducing Noise in the MSU Daily Lower-Tropospheric Global Temperature Dataset
NASA Technical Reports Server (NTRS)
Christy, John R.; Spencer, Roy W.; McNider, Richard T.
1995-01-01
The daily global-mean values of the lower-tropospheric temperature determined from microwave emissions measured by satellites are examined in terms of their signal, noise, and signal-to-noise ratio. Daily and 30-day average noise estimates are reduced by almost 50% and 35%, respectively, by analyzing and adjusting (if necessary) for errors due to (1) missing data, (2) residual harmonics of the annual cycle unique to particular satellites, (3) lack of filtering, and (4) spurious trends. After adjustments, the decadal trend of the lower-tropospheric global temperature from January 1979 through February 1994 becomes -0.058 C, or about 0.03 C per decade cooler than previously calculated.
Modified SPC for short run test and measurement process in multi-stations
NASA Astrophysics Data System (ADS)
Koh, C. K.; Chin, J. F.; Kamaruddin, S.
2018-03-01
Due to short production runs and the measurement error inherent in electronic test and measurement (T&M) processes, continuous quality monitoring through real-time statistical process control (SPC) is challenging. Industry practice allows the installation of a guard band using measurement uncertainty to reduce the width of the acceptance limit, as an indirect way to compensate for measurement errors. This paper presents a new SPC model combining a modified guard band and control charts (Z-bar chart and W chart) for short runs in a multi-station T&M process. The proposed model standardizes the observed value with the measurement target (T) and rationed measurement uncertainty (U). An S-factor (Sf) is introduced to the control limits to improve the sensitivity in detecting small shifts. The model was embedded in an automated quality control system and verified with a case study in real industry.
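The following Python sketch illustrates one plausible reading of the standardization idea: each observation is scaled by its target T and measurement uncertainty U, and an S-factor tightens the control limits. The specific formulas, limit widths, and numbers are assumptions for illustration and do not reproduce the paper's model.

```python
# Illustrative sketch only: standardizing measurements by target T and
# uncertainty U, with an S-factor applied to the control limits.
import numpy as np

def standardize(x, T, U):
    """Standardize an observation against its target and rationed uncertainty."""
    return (x - T) / U

def control_limits(sigma=3.0, s_factor=0.8):
    """Symmetric limits for the standardized chart, tightened by an S-factor."""
    return -sigma * s_factor, sigma * s_factor

readings = np.array([10.02, 9.97, 10.10, 10.25])   # hypothetical measurements
T, U = 10.0, 0.05                                   # hypothetical target and uncertainty
z = standardize(readings, T, U)
lcl, ucl = control_limits()
out_of_control = (z < lcl) | (z > ucl)
print(list(zip(z.round(2), out_of_control)))
```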
Robust Flight Path Determination for Mars Precision Landing Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Bayard, David S.; Kohen, Hamid
1997-01-01
This paper documents the application of genetic algorithms (GAs) to the problem of robust flight path determination for Mars precision landing. The robust flight path problem is defined here as the determination of the flight path which delivers a low-lift open-loop controlled vehicle to its desired final landing location while minimizing the effect of perturbations due to uncertainty in the atmospheric model and entry conditions. The genetic algorithm was capable of finding solutions which reduced the landing error from 111 km RMS radial (open-loop optimal) to 43 km RMS radial (optimized with respect to perturbations) using 200 hours of computation on an Ultra-SPARC workstation. Further reduction in the landing error is possible by going to closed-loop control which can utilize the GA optimized paths as nominal trajectories for linearization.
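A toy Python sketch of the underlying idea follows: a genetic algorithm searches for flight-path parameters that minimize the RMS landing error averaged over sampled perturbation cases. The objective function and three-parameter encoding are invented surrogates, not the JPL trajectory model.

```python
# Toy sketch of robust optimization with a genetic algorithm; objective and
# parameterization are hypothetical stand-ins for the entry-trajectory problem.
import numpy as np

rng = np.random.default_rng(2)

def rms_landing_error(params, perturbations):
    # Surrogate: landing error grows with mismatch between the chosen
    # parameters and each sampled perturbation case.
    errs = np.linalg.norm(params[None, :] - perturbations, axis=1)
    return np.sqrt(np.mean(errs ** 2))

perturbations = rng.normal(0.0, 1.0, size=(50, 3))    # sampled uncertainty cases

pop = rng.normal(0.0, 2.0, size=(40, 3))              # initial population
for gen in range(100):
    fitness = np.array([rms_landing_error(p, perturbations) for p in pop])
    parents = pop[np.argsort(fitness)[:20]]           # selection: keep best half
    children = parents[rng.integers(0, 20, 20)] + 0.1 * rng.standard_normal((20, 3))
    pop = np.vstack([parents, children])              # mutation plus elitism

best = pop[np.argmin([rms_landing_error(p, perturbations) for p in pop])]
print("best parameters:", best.round(3))
```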
Singh, Prashant; Harbola, Manoj K.; Johnson, Duane D.
2017-09-08
Here, this work constitutes a comprehensive and improved account of the electronic-structure and mechanical properties of silicon-nitride (Si3N4) polymorphs via the van Leeuwen and Baerends (LB) exchange-corrected local density approximation (LDA), which enforces the exact exchange-potential asymptotic behavior. The calculated lattice constant, bulk modulus, and electronic band structure of Si3N4 polymorphs are in good agreement with experimental results. We also show that, for a single electron in a hydrogen atom, spherical well, or harmonic oscillator, the LB-corrected LDA reduces the (self-interaction) error relative to the exact total energy to ~10%, a factor of three to four lower than standard LDA, due to a dramatically improved representation of the exchange potential.
Improving the safety of vaccine delivery.
Evans, Huw P; Cooper, Alison; Williams, Huw; Carson-Stevens, Andrew
2016-05-03
Vaccines save millions of lives per annum as an integral part of community primary care provision worldwide. Adverse events due to the vaccine delivery process outnumber those arising from the pharmacological properties of the vaccines themselves. Whilst one in three patients receiving a vaccine will encounter some form of error, little is known about their underlying causes and how to mitigate them in practice. Patient safety incident reporting systems and adverse drug event surveillance offer a rich opportunity for understanding the underlying causes of those errors. Reducing harm relies on the identification and implementation of changes to improve vaccine safety at multiple levels: from patient interventions through to organizational actions at local, national and international levels. Here we highlight the potential for maximizing learning from patient safety incident reports to improve the quality and safety of vaccine delivery.
Tarrasch, Ricardo; Berman, Zohar; Friedmann, Naama
2016-01-01
This study explored the effects of a Mindfulness-Based Stress Reduction (MBSR) intervention on reading, attention, and psychological well-being among people with developmental dyslexia and/or attention deficits. Various types of dyslexia exist, characterized by different error types. We examined a question that has not been tested so far: which types of errors (and dyslexias) are affected by MBSR training. To do so, we tested, using an extensive battery of reading tests, whether each participant had dyslexia, and which error types s/he makes, and then compared the rate of each error type before and after the MBSR workshop. We used a similar approach to attention disorders: we evaluated the participants’ sustained, selective, executive, and orienting of attention to assess whether they had attention disorders, and if so, which functions were impaired. We then evaluated the effect of MBSR on each of the attention functions. Psychological measures including mindfulness, stress, reflection and rumination, life satisfaction, depression, anxiety, and sleep disturbances were also evaluated. Nineteen Hebrew readers completed a 2-month mindfulness workshop. The results showed that whereas reading errors of letter-migrations within and between words and vowel-letter errors did not decrease following the workshop, most participants made fewer reading errors in general following the workshop, with a significant reduction of 19% from their original number of errors. This decrease mainly resulted from a decrease in errors that occur due to reading via the sublexical rather than the lexical route. It seems, therefore, that mindfulness helped reading by keeping the readers on the lexical route. This improvement in reading probably resulted from improved sustained attention: the reduction in sublexical reading was significant for the dyslexic participants who also had attention deficits, and there were significant correlations between reduced reading errors and decreases in impulsivity. Following the meditation workshop, the rate of commission errors decreased, indicating decreased impulsivity, and the variation in RTs in the CPT task decreased, indicating improved sustained attention. Significant improvements were obtained in participants’ mindfulness, perceived stress, rumination, depression, state anxiety, and sleep disturbances. Correlations were also obtained between reading improvement and increased mindfulness following the workshop. Thus, whereas mindfulness training did not affect specific types of errors and did not improve dyslexia, it did affect the reading of adults with developmental dyslexia and ADHD, by helping them to stay on the straight path of the lexical route while reading. Thus, the reading improvement induced by mindfulness sheds light on the intricate relation between attention and reading. Mindfulness reduced impulsivity and improved sustained attention, and this, in turn, improved reading of adults with developmental dyslexia and ADHD, by helping them to read via the straight path of the lexical route. PMID:27242565
Estimate of higher order ionospheric errors in GNSS positioning
NASA Astrophysics Data System (ADS)
Hoque, M. Mainul; Jakowski, N.
2008-10-01
Precise navigation and positioning using GPS/GLONASS/Galileo require the ionospheric propagation errors to be accurately determined and corrected for. The current dual-frequency method of ionospheric correction ignores higher order ionospheric errors such as the second and third order ionospheric terms in the refractive index formula and errors due to bending of the signal. The total electron content (TEC) is assumed to be the same at the two GPS frequencies. All these assumptions lead to erroneous estimations and corrections of the ionospheric errors. In this paper a rigorous treatment of these problems is presented. Different approximation formulas have been proposed to correct errors due to excess path length in addition to the free space path length, the TEC difference at the two GNSS frequencies, and the third-order ionospheric term. The GPS dual-frequency residual range errors can be corrected within millimeter level accuracy using the proposed correction formulas.
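For context, the Python sketch below shows the standard first-order relations that the paper's higher-order corrections extend: the first-order group delay of roughly 40.3*TEC/f^2 metres and the dual-frequency ionosphere-free pseudorange combination, which removes only that first-order term. The TEC value used here is an arbitrary example.

```python
# Standard first-order ionospheric relations (not the paper's higher-order terms).
F_L1, F_L2 = 1575.42e6, 1227.60e6   # GPS L1/L2 carrier frequencies (Hz)

def first_order_delay(tec_electrons_m2, f_hz):
    """First-order ionospheric group delay in metres."""
    return 40.3 * tec_electrons_m2 / f_hz ** 2

def ionosphere_free(p1, p2, f1=F_L1, f2=F_L2):
    """Dual-frequency ionosphere-free pseudorange combination."""
    return (f1 ** 2 * p1 - f2 ** 2 * p2) / (f1 ** 2 - f2 ** 2)

tec = 50e16                           # 50 TEC units, expressed in electrons/m^2
print(first_order_delay(tec, F_L1))   # roughly 8.1 m of delay at L1
```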
NASA Astrophysics Data System (ADS)
Meier, Walter Neil
This thesis demonstrates the applicability of data assimilation methods to improve observed and modeled ice motion fields and to demonstrate the effects of assimilated motion on Arctic processes important to the global climate and of practical concern to human activities. Ice motions derived from 85 GHz and 37 GHz SSM/I imagery and estimated from two-dimensional dynamic-thermodynamic sea ice models are compared to buoy observations. Mean error, error standard deviation, and correlation with buoys are computed for the model domain. SSM/I motions generally have a lower bias, but higher error standard deviations and lower correlation with buoys than model motions. There are notable variations in the statistics depending on the region of the Arctic, season, and ice characteristics. Assimilation methods are investigated and blending and optimal interpolation strategies are implemented. Blending assimilation improves error statistics slightly, but the effect of the assimilation is reduced due to noise in the SSM/I motions and is thus not an effective method to improve ice motion estimates. However, optimal interpolation assimilation reduces motion errors by 25--30% over modeled motions and 40--45% over SSM/I motions. Optimal interpolation assimilation is beneficial in all regions, seasons and ice conditions, and is particularly effective in regimes where modeled and SSM/I errors are high. Assimilation alters annual average motion fields. Modeled ice products of ice thickness, ice divergence, Fram Strait ice volume export, transport across the Arctic and interannual basin averages are also influenced by assimilated motions. Assimilation improves estimates of pollutant transport and corrects synoptic-scale errors in the motion fields caused by incorrect forcings or errors in model physics. The portability of the optimal interpolation assimilation method is demonstrated by implementing the strategy in an ice thickness distribution (ITD) model. This research presents an innovative method of combining a new data set of SSM/I-derived ice motions with three different sea ice models via two data assimilation methods. The work described here is the first example of assimilating remotely-sensed data within high-resolution and detailed dynamic-thermodynamic sea ice models. The results demonstrate that assimilation is a valuable resource for determining accurate ice motion in the Arctic.
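The optimal interpolation step used in the thesis follows the standard analysis equation; the small 1-D Python sketch below illustrates that update. The background, observations, and covariances here are arbitrary placeholders rather than the sea-ice motion statistics used in the work.

```python
# Generic optimal interpolation (OI) update on a toy 1-D state.
import numpy as np

background = np.array([0.10, 0.12, 0.08])        # modeled motions (background)
obs = np.array([0.15, 0.05])                     # observed motions at two locations
H = np.array([[1.0, 0.0, 0.0],                   # maps state to observed points
              [0.0, 0.0, 1.0]])
idx = np.arange(3)
B = 0.02 * np.exp(-np.abs(np.subtract.outer(idx, idx)))   # background-error covariance
R = 0.03 * np.eye(2)                             # observation-error covariance

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)     # OI weights (Kalman gain form)
analysis = background + K @ (obs - H @ background)
print(analysis.round(3))
```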
Reducing patient identification errors related to glucose point-of-care testing
Alreja, Gaurav; Setia, Namrata; Nichols, James; Pantanowitz, Liron
2011-01-01
Background: Patient identification (ID) errors in point-of-care testing (POCT) can cause test results to be transferred to the wrong patient's chart or prevent results from being transmitted and reported. Despite the implementation of patient barcoding and ongoing operator training at our institution, patient ID errors still occur with glucose POCT. The aim of this study was to develop a solution to reduce identification errors with POCT. Materials and Methods: Glucose POCT was performed by approximately 2,400 clinical operators throughout our health system. Patients are identified by scanning in wristband barcodes or by manual data entry using portable glucose meters. Meters are docked to upload data to a database server which then transmits data to any medical record matching the financial number of the test result. With a new model, meters connect to an interface manager where the patient ID (a nine-digit account number) is checked against patient registration data from admission, discharge, and transfer (ADT) feeds and only matched results are transferred to the patient's electronic medical record. With the new process, the patient ID is checked prior to testing, and testing is prevented until ID errors are resolved. Results: When averaged over a period of a month, ID errors were reduced to 3 errors/month (0.015%) in comparison with 61.5 errors/month (0.319%) before implementing the new meters. Conclusion: Patient ID errors may occur with glucose POCT despite patient barcoding. The verification of patient identification should ideally take place at the bedside before testing occurs so that the errors can be addressed in real time. The introduction of an ADT feed directly to glucose meters reduced patient ID errors in POCT. PMID:21633490
Zhou, Shiyue; Tello, Nadia; Harvey, Alex; Boyes, Barry; Orlando, Ron; Mechref, Yehia
2016-06-01
Glycans have numerous functions in various biological processes and participate in the progress of diseases. Reliable quantitative glycomic profiling techniques could contribute to the understanding of the biological functions of glycans, and lead to the discovery of potential glycan biomarkers for diseases. Although LC-MS is a powerful analytical tool for quantitative glycomics, variation in ionization efficiency and MS intensity bias affect quantitation reliability. Internal standards can be utilized for glycomic quantitation by MS-based methods to reduce variability. In this study, we used a stable isotope-labeled IgG2b monoclonal antibody, iGlycoMab, as an internal standard to reduce the potential for errors and to reduce variability due to sample digestion, derivatization, and fluctuation of nanoESI efficiency in the LC-MS analysis of permethylated N-glycans released from model glycoproteins, human blood serum, and a breast cancer cell line. We observed an unanticipated degradation of isotope-labeled glycans, tracked a source of such degradation, and optimized a sample preparation protocol to minimize degradation of the internal standard glycans. All results indicated the effectiveness of using iGlycoMab to minimize errors originating from sample handling and instruments. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Temporal Decompostion of a Distribution System Quasi-Static Time-Series Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry A; Hunsberger, Randolph J
This paper documents the first phase of an investigation into reducing runtimes of complex OpenDSS models through parallelization. As the method seems promising, future work will quantify - and further mitigate - errors arising from this process. In this initial report, we demonstrate how, through the use of temporal decomposition, the run times of a complex distribution-system-level quasi-static time series simulation can be reduced roughly proportional to the level of parallelization. Using this method, the monolithic model runtime of 51 hours was reduced to a minimum of about 90 minutes. As expected, this comes at the expense of control- and voltage-errors at the time-slice boundaries. All evaluations were performed using a real distribution circuit model with the addition of 50 PV systems - representing a mock complex PV impact study. We are able to reduce induced transition errors through the addition of controls initialization, though small errors persist. The time savings with parallelization are so significant that we feel additional investigation to reduce control errors is warranted.
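As a rough illustration of temporal decomposition, the Python sketch below splits a yearlong time series into chunks and runs them in parallel. The simulate_chunk function is a hypothetical stand-in for driving the OpenDSS model over one window, and the controls-initialization step that mitigates boundary errors is only indicated by a comment.

```python
# Sketch of temporal decomposition with multiprocessing; simulate_chunk is a
# hypothetical placeholder for a quasi-static time-series run over one window.
from multiprocessing import Pool

def simulate_chunk(bounds):
    start, end = bounds
    # ... initialize controls from an approximate state at `start`, then run the
    # quasi-static time-series simulation for [start, end) and return results ...
    return {"start": start, "end": end}

def split(n_steps, n_chunks):
    size = n_steps // n_chunks
    return [(i * size, min((i + 1) * size, n_steps)) for i in range(n_chunks)]

if __name__ == "__main__":
    chunks = split(n_steps=35040, n_chunks=32)      # e.g., one year at 15-min resolution
    with Pool(processes=8) as pool:
        results = pool.map(simulate_chunk, chunks)  # chunks run in parallel
    print(len(results), "chunks simulated")
```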
Precise X-ray and video overlay for augmented reality fluoroscopy.
Chen, Xin; Wang, Lejing; Fallavollita, Pascal; Navab, Nassir
2013-01-01
The camera-augmented mobile C-arm (CamC) augments any mobile C-arm by a video camera and mirror construction and provides a co-registration of X-ray with video images. The accurate overlay between these images is crucial to high-quality surgical outcomes. In this work, we propose a practical solution that improves the overlay accuracy for any C-arm orientation by: (i) improving the existing CamC calibration, (ii) removing distortion effects, and (iii) accounting for the mechanical sagging of the C-arm gantry due to gravity. A planar phantom is constructed and placed at different distances to the image intensifier in order to obtain the optimal homography that co-registers X-ray and video with a minimum error. To alleviate distortion, both X-ray calibration based on equidistant grid model and Zhang's camera calibration method are implemented for distortion correction. Lastly, the virtual detector plane (VDP) method is adapted and integrated to reduce errors due to the mechanical sagging of the C-arm gantry. The overlay errors are 0.38±0.06 mm when not correcting for distortion, 0.27±0.06 mm when applying Zhang's camera calibration, and 0.27±0.05 mm when applying X-ray calibration. Lastly, when taking into account all angular and orbital rotations of the C-arm, as well as correcting for distortion, the overlay errors are 0.53±0.24 mm using VDP and 1.67±1.25 mm excluding VDP. The augmented reality fluoroscope achieves an accurate video and X-ray overlay when applying the optimal homography calculated from distortion correction using X-ray calibration together with the VDP.
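The central co-registration step is a planar homography between the video and X-ray views. The OpenCV sketch below estimates such a homography from point correspondences and reports the mean overlay (reprojection) error; the correspondences are synthetic, and the CamC-specific distortion correction and virtual detector plane are not reproduced.

```python
# Minimal sketch: estimate an X-ray/video homography and report overlay error.
import numpy as np
import cv2

rng = np.random.default_rng(3)
video_pts = rng.uniform(0, 500, size=(20, 2)).astype(np.float32)

H_true = np.array([[1.02, 0.01, 5.0],
                   [-0.02, 0.98, -3.0],
                   [1e-5, 2e-5, 1.0]])
xray_pts = cv2.perspectiveTransform(video_pts.reshape(-1, 1, 2), H_true).reshape(-1, 2)
xray_pts += rng.normal(0, 0.3, xray_pts.shape).astype(np.float32)   # measurement noise

H_est, _ = cv2.findHomography(video_pts, xray_pts, cv2.RANSAC)
proj = cv2.perspectiveTransform(video_pts.reshape(-1, 1, 2), H_est).reshape(-1, 2)
overlay_error = np.linalg.norm(proj - xray_pts, axis=1).mean()
print(f"mean overlay error: {overlay_error:.2f} px")
```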
Should Studies of Diabetes Treatment Stratification Correct for Baseline HbA1c?
Jones, Angus G.; Lonergan, Mike; Henley, William E.; Pearson, Ewan R.; Hattersley, Andrew T.; Shields, Beverley M.
2016-01-01
Aims: Baseline HbA1c is a major predictor of response to glucose lowering therapy and therefore a potential confounder in studies aiming to identify other predictors. However, baseline adjustment may introduce error if the association between baseline HbA1c and response is substantially due to measurement error and regression to the mean. We aimed to determine whether studies of predictors of response should adjust for baseline HbA1c. Methods: We assessed the relationship between baseline HbA1c and glycaemic response in 257 participants treated with GLP-1R agonists and assessed whether it reflected measurement error and regression to the mean using duplicate ‘pre-baseline’ HbA1c measurements not included in the response variable. In this cohort and an additional 2659 participants treated with sulfonylureas we assessed the relationship between covariates associated with baseline HbA1c and treatment response with and without baseline adjustment, and with a bias correction using pre-baseline HbA1c to adjust for the effects of error in baseline HbA1c. Results: Baseline HbA1c was a major predictor of response (R2 = 0.19, β = -0.44, p < 0.001). The association between pre-baseline and response was similar, suggesting the greater response at higher baseline HbA1c is not mainly due to measurement error and subsequent regression to the mean. In unadjusted analysis in both cohorts, factors associated with baseline HbA1c were associated with response; however, these associations were weak or absent after adjustment for baseline HbA1c. Bias correction did not substantially alter associations. Conclusions: Adjustment for the baseline HbA1c measurement is a simple and effective way to reduce bias in studies of predictors of response to glucose lowering therapy. PMID:27050911
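The analysis pattern behind this argument is ordinary regression with and without the baseline covariate. The Python sketch below uses entirely synthetic data to show how a predictor correlated with baseline HbA1c can appear associated with response until baseline is adjusted for; the numbers are invented and only the pattern mirrors the paper.

```python
# Illustrative sketch with synthetic data: effect estimates with and without
# adjustment for baseline HbA1c.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500
baseline = rng.normal(75, 15, n)                    # baseline HbA1c (mmol/mol)
covariate = 0.02 * baseline + rng.normal(0, 1, n)   # predictor correlated with baseline
response = -0.4 * baseline + rng.normal(0, 5, n)    # true covariate effect is zero

unadjusted = sm.OLS(response, sm.add_constant(covariate)).fit()
adjusted = sm.OLS(response, sm.add_constant(np.column_stack([covariate, baseline]))).fit()

print("unadjusted effect:", unadjusted.params[1].round(2))  # biased via baseline
print("adjusted effect:  ", adjusted.params[1].round(2))    # close to zero
```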
Van de Vreede, Melita; McGrath, Anne; de Clifford, Jan
2018-05-14
Objective. The aim of the present study was to identify and quantify medication errors reportedly related to electronic medication management systems (eMMS) and those considered likely to occur more frequently with eMMS. This included developing a new classification system relevant to eMMS errors. Methods. Eight Victorian hospitals with eMMS participated in a retrospective audit of reported medication incidents from their incident reporting databases between May and July 2014. Site-appointed project officers submitted deidentified incidents they deemed new or likely to occur more frequently due to eMMS, together with the Incident Severity Rating (ISR). The authors reviewed and classified incidents. Results. There were 5826 medication-related incidents reported. In total, 93 (47 prescribing errors, 46 administration errors) were identified as new or potentially related to eMMS. Only one ISR2 (moderate) and no ISR1 (severe or death) errors were reported, so harm to patients in this 3-month period was minimal. The most commonly reported error types were 'human factors' and 'unfamiliarity or training' (70%) and 'cross-encounter or hybrid system errors' (22%). Conclusions. Although the results suggest that the errors reported were of low severity, organisations must remain vigilant to the risk of new errors and avoid the assumption that eMMS is the panacea to all medication error issues. What is known about the topic? eMMS have been shown to reduce some types of medication errors, but it has been reported that some new medication errors have been identified and some are likely to occur more frequently with eMMS. There are few published Australian studies that have reported on medication error types that are likely to occur more frequently with eMMS in more than one organisation and that include administration and prescribing errors. What does this paper add? This paper includes a new simple classification system for eMMS that is useful and outlines the most commonly reported incident types and can inform organisations and vendors on possible eMMS improvements. The paper suggests a new classification system for eMMS medication errors. What are the implications for practitioners? The results of the present study will highlight to organisations the need for ongoing review of system design, refinement of workflow issues, staff education and training and reporting and monitoring of errors.
Airplane wing vibrations due to atmospheric turbulence
NASA Technical Reports Server (NTRS)
Pastel, R. L.; Caruthers, J. E.; Frost, W.
1981-01-01
The magnitude of the error introduced due to wing vibration when measuring atmospheric turbulence with a wind probe mounted at the wing tip was studied. It was also determined whether accelerometers mounted on the wing tip are needed to correct this error. A spectrum analysis approach is used to determine the error. Estimates of the B-57 wing characteristics are used to simulate the airplane wing, and von Karman's cross spectrum function is used to simulate atmospheric turbulence. It was found that wing vibration introduces large errors into the measured spectra of turbulence in the frequency range close to the natural frequencies of the wing.
Improving the delivery of care and reducing healthcare costs with the digitization of information.
Noffsinger, R; Chin, S
2000-01-01
In the coming years, the digitization of information and the Internet will be extremely powerful in reducing healthcare costs while assisting providers in the delivery of care. One example of healthcare inefficiency that can be managed through information digitization is the process of prescription writing. Due to the handwritten and verbal communication surrounding prescription writing, as well as the multiple tiers of authorizations, the prescription drug process causes extensive financial waste as well as medical errors, lost time, and even fatal accidents. Electronic prescription management systems are being designed to address these inefficiencies. By utilizing new electronic prescription systems, physicians not only prescribe more accurately, but also improve formulary compliance thereby reducing pharmacy utilization. These systems expand patient care by presenting proactive alternatives at the point of prescription while reducing costs and providing additional benefits for consumers and healthcare providers.
Asymmetric Memory Circuit Would Resist Soft Errors
NASA Technical Reports Server (NTRS)
Buehler, Martin G.; Perlman, Marvin
1990-01-01
Some nonlinear error-correcting codes are more efficient in the presence of asymmetry. A combination of circuit-design and coding concepts is expected to make integrated-circuit random-access memories more resistant to "soft" errors (temporary bit errors, also called "single-event upsets", due to ionizing radiation). An integrated circuit of the new type is made deliberately more susceptible to one kind of bit error than to the other, and the associated error-correcting code is adapted to exploit this asymmetry in error probabilities.
NASA Astrophysics Data System (ADS)
Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko
2017-08-01
We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.
Design and simulation of a 800 Mbit/s data link for magnetic resonance imaging wearables.
Vogt, Christian; Buthe, Lars; Petti, Luisa; Cantarella, Giuseppe; Munzenrieder, Niko; Daus, Alwin; Troster, Gerhard
2015-08-01
This paper presents the optimization of electronic circuitry for operation in the harsh electromagnetic (EM) environment during a magnetic resonance imaging (MRI) scan. As a demonstrator, a device small enough to be worn during the scan is optimized. Based on finite element method (FEM) simulations, the induced current densities due to magnetic field changes of 200 T s⁻¹ were reduced from 1 × 10¹⁰ A m⁻² by one order of magnitude, predicting error-free operation of the 1.8 V logic employed. The simulations were validated using a bit error rate test, which showed no bit errors during an MRI scan sequence. Therefore, neither the logic nor the utilized 800 Mbit s⁻¹ low voltage differential swing (LVDS) data link of the optimized wearable device were significantly influenced by the EM interference. Next, the influence of ferro-magnetic components on the static magnetic field and consequently the image quality was simulated, showing an MRI image loss with approximately 2 cm radius around a commercial integrated circuit of 1 × 1 cm².
Monitoring Method of Cutting Force by Using Additional Spindle Sensors
NASA Astrophysics Data System (ADS)
Sarhan, Ahmed Aly Diaa; Matsubara, Atsushi; Sugihara, Motoyuki; Saraie, Hidenori; Ibaraki, Soichi; Kakino, Yoshiaki
This paper describes a monitoring method of cutting forces for the end milling process by using displacement sensors. Four eddy-current displacement sensors are installed on the spindle housing of a machining center so that they can detect the radial motion of the rotating spindle. Thermocouples are also attached to the spindle structure in order to examine the thermal effect in the displacement sensing. The change in the spindle stiffness due to the spindle temperature and the speed is investigated as well. Finally, the estimation performance of cutting forces using the spindle displacement sensors is experimentally investigated by machining tests on carbon steel in end milling operations under different cutting conditions. It is found that the monitoring errors are attributable to the thermal displacement of the spindle, the time lag of the sensing system, and the modeling error of the spindle stiffness. It is also shown that the root mean square errors between estimated and measured amplitudes of cutting forces are reduced to less than 20 N with proper selection of the linear stiffness.
Coarse-graining errors and numerical optimization using a relative entropy framework
NASA Astrophysics Data System (ADS)
Chaimovich, Aviel; Shell, M. Scott
2011-03-01
The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, Srel, that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework.
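The variational quantity at the heart of this approach is the relative entropy (Kullback-Leibler divergence) between a reference distribution and a coarse-grained one. The Python sketch below minimizes that quantity over a single coarse-model parameter for simple one-dimensional Gaussians on a grid; this is a toy analogue of the molecular setting, with all distributions and parameters invented for illustration.

```python
# Toy illustration of relative-entropy (S_rel) minimization for coarse-graining.
import numpy as np
from scipy.optimize import minimize_scalar

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

p_ref = 0.6 * gaussian(x, -1.0, 0.8) + 0.4 * gaussian(x, 2.0, 1.5)   # reference model

def s_rel(sigma_cg):
    mean_ref = np.sum(x * p_ref) * dx
    q = gaussian(x, mean_ref, sigma_cg)                 # single-Gaussian coarse model
    return np.sum(p_ref * np.log(p_ref / q)) * dx       # relative entropy

best = minimize_scalar(s_rel, bounds=(0.3, 5.0), method="bounded")
print(f"optimal coarse width: {best.x:.3f}, S_rel = {best.fun:.4f}")
```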
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2013-01-01
Modern aircraft employ a significant fraction of their weight in composite materials to reduce weight and improve performance. Aircraft aeroservoelastic models are typically characterized by significant levels of model parameter uncertainty due to the composite manufacturing process. Small modeling errors in the finite element model will eventually induce errors in the structural flexibility and mass, thus propagating into unpredictable errors in the unsteady aerodynamics and the control law design. One of the primary objectives of the Multi Utility Technology Test-bed (MUTT) aircraft is the flight demonstration of active flutter suppression; therefore, this study identifies the primary and secondary modes for structural model tuning based on a flutter analysis of the MUTT aircraft. The ground vibration test-validated structural dynamic finite element model of the MUTT aircraft is created in this study. The structural dynamic finite element model of the MUTT aircraft is improved using the in-house Multi-disciplinary Design, Analysis, and Optimization tool. In this study, two different weight configurations of the MUTT aircraft have been improved simultaneously in a single model tuning procedure.
A prototype automatic phase compensation module
NASA Technical Reports Server (NTRS)
Terry, John D.
1992-01-01
The growing demands for high gain and accurate satellite communication systems will necessitate the utilization of large reflector systems. One area of concern in reflector-based satellite communication is large-scale surface deformation due to thermal effects. These distortions, when present, can degrade the performance of the reflector system appreciably. This performance degradation is manifested by a decrease in peak gain, an increase in sidelobe level, and pointing errors. It is essential to compensate for these distortion effects and to maintain the required system performance in the operating space environment. For this reason, the development of a technique to offset the degradation effects is highly desirable. Currently, most research is directed at developing better materials for the reflector. These materials have a lower coefficient of linear expansion, thereby reducing the surface errors. Alternatively, one can minimize the distortion effects of these large-scale errors by adaptive phased-array compensation. Adaptive phased-array techniques have been studied extensively at NASA and elsewhere. Presented in this paper is a prototype automatic phase compensation module designed and built at NASA Lewis Research Center, which is the first stage of development for an adaptive array compensation module.
Mitigation of Angle Tracking Errors Due to Color Dependent Centroid Shifts in SIM-Lite
NASA Technical Reports Server (NTRS)
Nemati, Bijan; An, Xin; Goullioud, Renaud; Shao, Michael; Shen, Tsae-Pyng; Wehmeier, Udo J.; Weilert, Mark A.; Wang, Xu; Werne, Thomas A.; Wu, Janet P.;
2010-01-01
The SIM-Lite astrometric interferometer will search for Earth-size planets in the habitable zones of nearby stars. In this search the interferometer will monitor the astrometric position of candidate stars relative to nearby reference stars over the course of a 5 year mission. The elemental measurement is the angle between a target star and a reference star. This is a two-step process, in which the interferometer will each time need to use its controllable optics to align the starlight in the two arms with each other and with the metrology beams. The sensor for this alignment is an angle tracking CCD camera. Various constraints in the design of the camera subject it to systematic alignment errors when observing a star of one spectrum compared with a star of a different spectrum. This effect is called a Color Dependent Centroid Shift (CDCS) and has been studied extensively with SIM-Lite's SCDU testbed. Here we describe results from the simulation and testing of this error in the SCDU testbed, as well as effective ways that it can be reduced to acceptable levels.
Error reduction in EMG signal decomposition
Kline, Joshua C.
2014-01-01
Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization. PMID:25210159
Use of machine learning methods to reduce predictive error of groundwater models.
Xu, Tianfang; Valocchi, Albert J; Choi, Jaesik; Amir, Eyal
2014-01-01
Quantitative analyses of groundwater flow and transport typically rely on a physically-based model, which is inherently subject to error. Errors in model structure, parameter and data lead to both random and systematic error even in the output of a calibrated model. We develop complementary data-driven models (DDMs) to reduce the predictive error of physically-based groundwater models. Two machine learning techniques, the instance-based weighting and support vector regression, are used to build the DDMs. This approach is illustrated using two real-world case studies of the Republican River Compact Administration model and the Spokane Valley-Rathdrum Prairie model. The two groundwater models have different hydrogeologic settings, parameterization, and calibration methods. In the first case study, cluster analysis is introduced for data preprocessing to make the DDMs more robust and computationally efficient. The DDMs reduce the root-mean-square error (RMSE) of the temporal, spatial, and spatiotemporal prediction of piezometric head of the groundwater model by 82%, 60%, and 48%, respectively. In the second case study, the DDMs reduce the RMSE of the temporal prediction of piezometric head of the groundwater model by 77%. It is further demonstrated that the effectiveness of the DDMs depends on the existence and extent of the structure in the error of the physically-based model. © 2013, National GroundWater Association.
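The complementary data-driven model idea can be sketched briefly: a regressor is trained on the residual between observed and simulated heads, and its prediction is added to the physics-based output. The Python example below uses support vector regression on synthetic placeholder data; the features and error structure are invented, and for brevity the correction is evaluated on the training data rather than a held-out set as a real study would require.

```python
# Sketch of a data-driven model (DDM) that learns the structured error of a
# physics-based model and corrects its predictions. Data are synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n = 300
features = rng.uniform(0, 1, size=(n, 3))       # e.g., pumping, recharge, season
simulated = 100 + 5 * features[:, 0]            # placeholder physics-model head
observed = simulated + 2.0 * np.sin(6 * features[:, 1]) + rng.normal(0, 0.3, n)

ddm = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
ddm.fit(features, observed - simulated)         # learn the structured model error

corrected = simulated + ddm.predict(features)
rmse_before = np.sqrt(np.mean((observed - simulated) ** 2))
rmse_after = np.sqrt(np.mean((observed - corrected) ** 2))
print(f"RMSE before: {rmse_before:.2f}, after: {rmse_after:.2f}")
```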
Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue
2018-01-01
One of the remarkable challenges about Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently due to the energy limitation of sensor nodes. Network coding will increase the network throughput of WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this special property of error propagation in network coding, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. What is more, even if the error occurs on all the links of the network, our scheme can also correct errors successfully. By introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi’s model so that it can correct the propagated errors in network coding which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Drawing on social network theory, the informative relay nodes are selected and marked with a high trust value. The two methods of L1 optimization and utilizing the social characteristic coordinate with each other, and can correct propagated errors whose fraction is even exactly 100% in WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments. PMID:29401668
Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue
2018-02-03
One of the remarkable challenges about Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently due to the energy limitation of sensor nodes. Network coding will increase the network throughput of WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this special property of error propagation in network coding, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. What is more, even if the error occurs on all the links of the network, our scheme can also correct errors successfully. By introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Drawing on social network theory, the informative relay nodes are selected and marked with a high trust value. The two methods of L1 optimization and utilizing the social characteristic coordinate with each other, and can correct propagated errors whose fraction is even exactly 100% in WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments.
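To give a flavor of the L1 step, the Python sketch below solves a generic basis-pursuit problem (minimize the L1 norm subject to linear constraints) by linear programming, the kind of optimization used to recover a sparse error vector. The secret channel, trapping matrix, and trust mechanism of the proposed scheme are not modeled, and the problem sizes are arbitrary.

```python
# Generic L1 recovery (basis pursuit) via linear programming.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
m, n, k = 40, 100, 4                      # measurements, unknowns, sparse errors
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# min ||x||_1 s.t. Ax = b, rewritten with x = u - v, u, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```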
NASA Astrophysics Data System (ADS)
Jones, Emlyn M.; Baird, Mark E.; Mongin, Mathieu; Parslow, John; Skerratt, Jenny; Lovell, Jenny; Margvelashvili, Nugzar; Matear, Richard J.; Wild-Allen, Karen; Robson, Barbara; Rizwi, Farhan; Oke, Peter; King, Edward; Schroeder, Thomas; Steven, Andy; Taylor, John
2016-12-01
Skillful marine biogeochemical (BGC) models are required to understand a range of coastal and global phenomena such as changes in nitrogen and carbon cycles. The refinement of BGC models through the assimilation of variables calculated from observed in-water inherent optical properties (IOPs), such as phytoplankton absorption, is problematic. Empirically derived relationships between IOPs and variables such as chlorophyll-a concentration (Chl a), total suspended solids (TSS) and coloured dissolved organic matter (CDOM) have been shown to have errors that can exceed 100 % of the observed quantity. These errors are greatest in shallow coastal regions, such as the Great Barrier Reef (GBR), due to the additional signal from bottom reflectance. Rather than assimilate quantities calculated using IOP algorithms, this study demonstrates the advantages of assimilating quantities calculated directly from the less error-prone satellite remote-sensing reflectance (RSR). To assimilate the observed RSR, we use an in-water optical model to produce an equivalent simulated RSR and calculate the mismatch between the observed and simulated quantities to constrain the BGC model with a deterministic ensemble Kalman filter (DEnKF). The traditional assumption that simulated surface Chl a is equivalent to the remotely sensed OC3M estimate of Chl a resulted in a forecast error of approximately 75 %. We show this error can be halved by instead using simulated RSR to constrain the model via the assimilation system. When the analysis and forecast fields from the RSR-based assimilation system are compared with the non-assimilating model, a comparison against independent in situ observations of Chl a, TSS and dissolved inorganic nutrients (NO3, NH4 and DIP) showed that errors are reduced by up to 90 %. In all cases, the assimilation system improves the simulation compared to the non-assimilating model. Our approach allows for the incorporation of vast quantities of remote-sensing observations that have in the past been discarded due to shallow water and/or artefacts introduced by terrestrially derived TSS and CDOM or the lack of a calibrated regional IOP algorithm.
Flux Sampling Errors for Aircraft and Towers
NASA Technical Reports Server (NTRS)
Mahrt, Larry
1998-01-01
Various errors and influences leading to differences between tower- and aircraft-measured fluxes are surveyed. This survey is motivated by reports in the literature that aircraft fluxes are sometimes smaller than tower-measured fluxes. Both tower and aircraft flux errors are larger with surface heterogeneity due to several independent effects. Surface heterogeneity may cause tower flux errors to increase with decreasing wind speed. Techniques to assess flux sampling error are reviewed. Such error estimates suffer various degrees of inapplicability in real geophysical time series due to nonstationarity of tower time series (or inhomogeneity of aircraft data). A new measure for nonstationarity is developed that eliminates assumptions on the form of the nonstationarity inherent in previous methods. When this nonstationarity measure becomes large, the surface energy imbalance increases sharply. Finally, strategies for obtaining adequate flux sampling using repeated aircraft passes and grid patterns are outlined.
Errors from approximation of ODE systems with reduced order models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vassilevska, Tanya
2016-12-30
This is a code to calculate the error from approximating systems of ordinary differential equations (ODEs) with Proper Orthogonal Decomposition (POD) Reduced Order Model (ROM) methods, and to compare and analyze the errors for two POD ROM variants. The first variant is the standard POD ROM; the second variant is a modification of the method using the values of the time derivatives (a.k.a. time-derivative snapshots). The code compares the errors from the two variants under different conditions.
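The basic POD step common to both variants can be sketched in a few lines of Python: build a basis from solution snapshots via the SVD and measure the error of projecting states onto that basis. The snapshot data below are synthetic, and the ODE integration and time-derivative-snapshot variant are not shown.

```python
# Minimal POD sketch: snapshot SVD, truncated basis, and projection error.
import numpy as np

x = np.linspace(0, 1, 200)                         # spatial grid (state dimension)
times = np.linspace(0, 1, 40)
# Columns = synthetic solution snapshots u(x, t_k) of some ODE system.
snapshots = np.array([np.sin(2 * np.pi * x) * np.exp(-tk)
                      + 0.3 * np.sin(6 * np.pi * x) * np.cos(4 * tk)
                      + 0.05 * np.sin(10 * np.pi * x) * tk ** 2
                      for tk in times]).T

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 2
basis = U[:, :r]                                   # rank-r POD basis

projected = basis @ (basis.T @ snapshots)          # project snapshots onto the basis
rel_error = np.linalg.norm(snapshots - projected) / np.linalg.norm(snapshots)
print(f"relative projection error with r={r}: {rel_error:.2e}")
```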
Use of dual coolant displacing media for in-process optical measurement of form profiles
NASA Astrophysics Data System (ADS)
Gao, Y.; Xie, F.
2018-07-01
In-process measurement supports feedback control to reduce workpiece surface form error. Without it, the workpiece surface must be measured offline, causing significant errors in workpiece positioning and reduced productivity. To offer better performance, a new in-process optical measurement method based on the use of dual coolant displacing media is proposed and studied, which uses an air and liquid phase together to resist coolant and to achieve in-process measurement. In the proposed new design, coolant is used to replace the previously used clean water to avoid coolant dilution. Compared with the previous methods, the distance between the applicator and the workpiece surface can be relaxed to 1 mm. This distance is 4 times larger than before, thus permitting measurement of curved surfaces. Air consumption is up to 1.5 times lower than with the best previously available method. For a sample workpiece with curved surfaces, the relative error of profile measurement under coolant conditions can be as small as 0.1% compared with the one under no coolant conditions. Problems in comparing measured 3D surfaces are discussed. A comparative study between a Bruker Npflex optical profiler and the developed new in-process optical profiler was conducted. For a surface area of 5.5 mm × 5.5 mm, the average measurement error under coolant conditions is only 0.693 µm. In addition, the error due to the new method is only 0.10 µm when compared between coolant and no coolant conditions. The effect of a thin liquid film on the workpiece surface is discussed. The experimental results show that the new method can successfully solve the coolant dilution problem and is able to accurately measure the workpiece surface whilst fully submerged in the opaque coolant. The proposed new method is advantageous and should be very useful for in-process optical form profile measurement in precision machining.
NASA Astrophysics Data System (ADS)
James, Mike R.; Robson, Stuart; d'Oleire-Oltmanns, Sebastian; Niethammer, Uwe
2016-04-01
Structure-from-motion (SfM) algorithms are greatly facilitating the production of detailed topographic models based on images collected by unmanned aerial vehicles (UAVs). However, SfM-based software does not generally provide the rigorous photogrammetric analysis required to fully understand survey quality. Consequently, error related to problems in control point data or the distribution of control points can remain undiscovered. Even if these errors are not large in magnitude, they can be systematic, and thus have strong implications for the use of products such as digital elevation models (DEMs) and orthophotos. Here, we develop a Monte Carlo approach to (1) improve the accuracy of products when SfM-based processing is used and (2) reduce the associated field effort by identifying suitable lower density deployments of ground control points. The method highlights over-parameterisation during camera self-calibration and provides enhanced insight into control point performance when rigorous error metrics are not available. Processing was implemented using commonly-used SfM-based software (Agisoft PhotoScan), which we augment with semi-automated and automated GCP image measurement. We apply the Monte Carlo method to two contrasting case studies - an erosion gully survey (Taurodont, Morocco) carried out with a fixed-wing UAV, and an active landslide survey (Super-Sauze, France), acquired using a manually controlled quadcopter. The results highlight the differences in the control requirements for the two sites, and we explore the implications for future surveys. We illustrate DEM sensitivity to critical processing parameters and show how the use of appropriate parameter values increases DEM repeatability and reduces the spatial variability of error due to processing artefacts.
Newell, Felicity L.; Sheehan, James; Wood, Petra Bohall; Rodewald, Amanda D.; Buehler, David A.; Keyser, Patrick D.; Larkin, Jeffrey L.; Beachy, Tiffany A.; Bakermans, Marja H.; Boves, Than J.; Evans, Andrea; George, Gregory A.; McDermott, Molly E.; Perkins, Kelly A.; White, Matthew; Wigley, T. Bently
2013-01-01
Point counts are commonly used to assess changes in bird abundance, including analytical approaches such as distance sampling that estimate density. Point-count methods have come under increasing scrutiny because effects of detection probability and field error are difficult to quantify. For seven forest songbirds, we compared fixed-radii counts (50 m and 100 m) and density estimates obtained from distance sampling to known numbers of birds determined by territory mapping. We applied point-count analytic approaches to a typical forest management question and compared results to those obtained by territory mapping. We used a before–after control impact (BACI) analysis with a data set collected across seven study areas in the central Appalachians from 2006 to 2010. Using a 50-m fixed radius, variance in error was at least 1.5 times that of the other methods, whereas a 100-m fixed radius underestimated actual density by >3 territories per 10 ha for the most abundant species. Distance sampling improved accuracy and precision compared to fixed-radius counts, although estimates were affected by birds counted outside 10-ha units. In the BACI analysis, territory mapping detected an overall treatment effect for five of the seven species, and effects were generally consistent each year. In contrast, all point-count methods failed to detect two treatment effects due to variance and error in annual estimates. Overall, our results highlight the need for adequate sample sizes to reduce variance, and skilled observers to reduce the level of error in point-count data. Ultimately, the advantages and disadvantages of different survey methods should be considered in the context of overall study design and objectives, allowing for trade-offs among effort, accuracy, and power to detect treatment effects.
Classification and reduction of pilot error
NASA Technical Reports Server (NTRS)
Rogers, W. H.; Logan, A. L.; Boley, G. D.
1989-01-01
Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.
Brown, Judith A.; Bishop, Joseph E.
2016-07-20
An a posteriori error-estimation framework is introduced to quantify and reduce modeling errors resulting from approximating complex mesoscale material behavior with a simpler macroscale model. Such errors may be prevalent when modeling welds and additively manufactured structures, where spatial variations and material textures may be present in the microstructure. We consider a case where a <100> fiber texture develops in the longitudinal scanning direction of a weld. Transversely isotropic elastic properties are obtained through homogenization of a microstructural model with this texture and are considered the reference weld properties within the error-estimation framework. Conversely, isotropic elastic properties are considered approximate weld properties since they contain no representation of texture. Errors introduced by using isotropic material properties to represent a weld are assessed through a quantified error bound in the elastic regime. Lastly, an adaptive error reduction scheme is used to determine the optimal spatial variation of the isotropic weld properties to reduce the error bound.
Piecewise compensation for the nonlinear error of fiber-optic gyroscope scale factor
NASA Astrophysics Data System (ADS)
Zhang, Yonggang; Wu, Xunfeng; Yuan, Shun; Wu, Lei
2013-08-01
Fiber-Optic Gyroscope (FOG) scale factor nonlinear error will result in errors in Strapdown Inertial Navigation System (SINS). In order to reduce nonlinear error of FOG scale factor in SINS, a compensation method is proposed in this paper based on curve piecewise fitting of FOG output. Firstly, reasons which can result in FOG scale factor error are introduced and the definition of nonlinear degree is provided. Then we introduce the method to divide the output range of FOG into several small pieces, and curve fitting is performed in each output range of FOG to obtain scale factor parameter. Different scale factor parameters of FOG are used in different pieces to improve FOG output precision. These parameters are identified by using three-axis turntable, and nonlinear error of FOG scale factor can be reduced. Finally, three-axis swing experiment of SINS verifies that the proposed method can reduce attitude output errors of SINS by compensating the nonlinear error of FOG scale factor and improve the precision of navigation. The results of experiments also demonstrate that the compensation scheme is easy to implement. It can effectively compensate the nonlinear error of FOG scale factor with slightly increased computation complexity. This method can be used in inertial technology based on FOG to improve precision.
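As a rough sketch of the piecewise idea (not the authors' code), the snippet below splits the FOG output range into pieces, fits a low-order polynomial rate correction on each piece from turntable calibration data, and applies the matching piece during compensation; the function names, number of pieces and polynomial degree are illustrative assumptions.

```python
import numpy as np

def fit_piecewise_scale_factor(rate_true, fog_output, n_pieces=4, deg=2):
    """Fit a separate polynomial correction on each piece of the FOG output range.
    rate_true and fog_output are the turntable reference rates and raw FOG outputs;
    each piece must contain enough calibration points for the chosen degree."""
    edges = np.linspace(fog_output.min(), fog_output.max(), n_pieces + 1)
    fits = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (fog_output >= lo) & (fog_output <= hi)
        fits.append((lo, hi, np.polyfit(fog_output[mask], rate_true[mask], deg)))
    return fits

def compensate(fog_output, fits):
    """Map raw FOG output to compensated angular rate using the piecewise fits."""
    out = np.empty(len(fog_output), dtype=float)
    for i, y in enumerate(fog_output):
        for lo, hi, coef in fits:
            if lo <= y <= hi:
                out[i] = np.polyval(coef, y)   # use the parameters of the matching piece
                break
    return out
```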
Wu, Mixia; Zhang, Dianchen; Liu, Aiyi
2016-01-01
New biomarkers continue to be developed for the purpose of diagnosis, and their diagnostic performances are typically compared with an existing reference biomarker used for the same purpose. Considerable amounts of research have focused on receiver operating characteristic curves analysis when the reference biomarker is dichotomous. In the situation where the reference biomarker is measured on a continuous scale and dichotomization is not practically appealing, an index was proposed in the literature to measure the accuracy of a continuous biomarker, which is essentially a linear function of the popular Kendall's tau. We consider the issue of estimating such an accuracy index when the continuous reference biomarker is measured with errors. We first investigate the impact of measurement errors on the accuracy index, and then propose methods to correct for the bias due to measurement errors. Simulation results show the effectiveness of the proposed estimator in reducing biases. The methods are exemplified with hemoglobin A1c measurements obtained from both the central lab and a local lab to evaluate the accuracy of the mean data obtained from the metered blood glucose monitoring against the centrally measured hemoglobin A1c from a behavioral intervention study for families of youth with type 1 diabetes.
Towards an evaluation framework for Laboratory Information Systems.
Yusof, Maryati M; Arifin, Azila
Laboratory testing and reporting are error-prone and redundant due to repeated, unnecessary requests and delayed or missed reactions to laboratory reports. Errors that occur may negatively affect the patient treatment process and clinical decision making. Evaluation of laboratory testing and the Laboratory Information System (LIS) may explain the root causes, improving the testing process and enhancing the LIS in supporting that process. This paper discusses a new evaluation framework for LIS that encompasses the laboratory testing cycle and the socio-technical part of LIS. A literature review was conducted on the discourses, dimensions and evaluation methods of laboratory testing and LIS. A critical appraisal of the Total Testing Process (TTP) and the human, organization, technology-fit factors (HOT-fit) evaluation frameworks was undertaken in order to identify error incidents, their contributing factors and preventive actions pertinent to the laboratory testing process and LIS. A new evaluation framework for LIS using a comprehensive and socio-technical approach is outlined. A positive relationship between laboratory and clinical staff resulted in a smooth laboratory testing process, reduced errors and increased process efficiency, whilst effective use of LIS streamlined the testing processes. The TTP-LIS framework could serve as an assessment as well as a problem-solving tool for the laboratory testing process and system. Copyright © 2016 King Saud Bin Abdulaziz University for Health Sciences. Published by Elsevier Ltd. All rights reserved.
Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H.; Lewis, Marc S.; Brautigam, Chad A.; Schuck, Peter; Zhao, Huaying
2013-01-01
Sedimentation velocity (SV) is a method based on first-principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton® temperature logger to directly measure the temperature of a spinning rotor, and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration, which were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., doi 10.1016/j.ab.2013.02.011) and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from eleven instruments displayed a significantly reduced standard deviation of ∼ 0.7 %. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. PMID:23711724
Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H; Lewis, Marc S; Brautigam, Chad A; Schuck, Peter; Zhao, Huaying
2013-09-01
Sedimentation velocity (SV) is a method based on first principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton temperature logger to directly measure the temperature of a spinning rotor and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration that were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., Anal. Biochem., 437 (2013) 104-108), and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from 11 instruments displayed a significantly reduced standard deviation of approximately 0.7%. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. Published by Elsevier Inc.
Albrecht, Bjoern; Brandeis, Daniel; Uebel, Henrik; Heinrich, Hartmut; Mueller, Ueli C.; Hasselhorn, Marcus; Steinhausen, Hans-Christoph; Rothenberger, Aribert; Banaschewski, Tobias
2008-01-01
Background Attention deficit/hyperactivity disorder is a very common and highly heritable child psychiatric disorder associated with dysfunctions in fronto-striatal networks that control attention and response organisation. The aim of this study was to investigate whether features of action monitoring related to dopaminergic functions represent endophenotypes, which are brain functions on the pathway from genes and environmental risk factors to behaviour. Methods Action monitoring and error processing as indicated by behavioural and electrophysiological parameters during a flanker task were examined in boys with ADHD combined type according to DSM-IV (N=68), their nonaffected siblings (N=18) and healthy controls with no known family history of ADHD (N=22). Results Boys with ADHD displayed slower and more variable reaction times. Error negativity (Ne) was smaller in boys with ADHD compared to healthy controls, while nonaffected siblings displayed intermediate amplitudes following a linear model predicted by genetic concordance. The three groups did not differ on error positivity (Pe). N2 amplitude enhancement due to conflict (incongruent flankers) was reduced in the ADHD group. Nonaffected siblings also displayed intermediate N2 enhancement. Conclusions Converging evidence from behavioural and ERP findings suggests that action monitoring and initial error processing, both related to dopaminergically modulated functions of anterior cingulate cortex, might be an endophenotype related to ADHD. PMID:18339358
Interventions to reduce medication errors in neonatal care: a systematic review
Nguyen, Minh-Nha Rhylie; Mosel, Cassandra
2017-01-01
Background: Medication errors represent a significant but often preventable cause of morbidity and mortality in neonates. The objective of this systematic review was to determine the effectiveness of interventions to reduce neonatal medication errors. Methods: A systematic review was undertaken of all comparative and noncomparative studies published in any language, identified from searches of PubMed and EMBASE and reference-list checking. Eligible studies were those investigating the impact of any medication safety interventions aimed at reducing medication errors in neonates in the hospital setting. Results: A total of 102 studies were identified that met the inclusion criteria, including 86 comparative and 16 noncomparative studies. Medication safety interventions were classified into six themes: technology (n = 38; e.g. electronic prescribing), organizational (n = 16; e.g. guidelines, policies, and procedures), personnel (n = 13; e.g. staff education), pharmacy (n = 9; e.g. clinical pharmacy service), hazard and risk analysis (n = 8; e.g. error detection tools), and multifactorial (n = 18; e.g. any combination of previous interventions). Significant variability was evident across all included studies, with differences in intervention strategies, trial methods, types of medication errors evaluated, and how medication errors were identified and evaluated. Most studies demonstrated an appreciable risk of bias. The vast majority of studies (>90%) demonstrated a reduction in medication errors. A similar median reduction of 50–70% in medication errors was evident across studies included within each of the identified themes, but findings varied considerably from a 16% increase in medication errors to a 100% reduction in medication errors. Conclusion: While neonatal medication errors can be reduced through multiple interventions aimed at improving the medication use process, no single intervention appeared clearly superior. Further research is required to evaluate the relative cost-effectiveness of the various medication safety interventions to facilitate decisions regarding uptake and implementation into clinical practice. PMID:29387337
Errors in Bibliographic Citations: A Continuing Problem.
ERIC Educational Resources Information Center
Sweetland, James H.
1989-01-01
Summarizes studies examining citation errors and illustrates errors resulting from a lack of standardization, misunderstanding of foreign languages, failure to examine the document cited, and general lack of training in citation norms. It is argued that the failure to detect and correct citation errors is due to diffusion of responsibility in the…
NASA Astrophysics Data System (ADS)
Qi, D.; Majda, A.
2017-12-01
A low-dimensional reduced-order statistical closure model is developed for quantifying the uncertainty in statistical sensitivity and intermittency in principal model directions with largest variability in high-dimensional turbulent system and turbulent transport models. Imperfect model sensitivity is improved through a recent mathematical strategy for calibrating model errors in a training phase, where information theory and linear statistical response theory are combined in a systematic fashion to achieve the optimal model performance. The idea behind the reduced-order method comes from a self-consistent mathematical framework for general systems with quadratic nonlinearity, where crucial high-order statistics are approximated by a systematic model calibration procedure. Model efficiency is improved through additional damping and noise corrections to replace the expensive energy-conserving nonlinear interactions. Model errors due to the imperfect nonlinear approximation are corrected by tuning the model parameters using linear response theory with an information metric in a training phase before prediction. A statistical energy principle is adopted to introduce a global scaling factor in characterizing the higher-order moments in a consistent way to improve model sensitivity. Stringent models of barotropic and baroclinic turbulence are used to demonstrate the feasibility of the reduced-order methods. Principal statistical responses in mean and variance can be captured by the reduced-order models with accuracy and efficiency. In addition, the reduced-order models are also used to capture the crucial passive tracer field that is advected by the baroclinic turbulent flow. It is demonstrated that crucial principal statistical quantities like the tracer spectrum and fat-tails in the tracer probability density functions in the most important large scales can be captured efficiently with accuracy using the reduced-order tracer model in various dynamical regimes of the flow field with distinct statistical structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saeki, Hiroshi; Magome, Tamotsu
2014-10-06
To compensate pressure-measurement errors caused by a synchrotron radiation environment, a precise method using a hot-cathode-ionization-gauge head with a correcting electrode was developed and tested in a simulation experiment with excess electrons in the SPring-8 storage ring. This precise method to improve the measurement accuracy can correctly reduce the pressure-measurement errors caused by electrons originating from the external environment and from the primary gauge filament influenced by spatial conditions of the installed vacuum-gauge head. As the result of the simulation experiment to confirm the performance in reducing the errors caused by the external environment, the pressure-measurement error using this method was approximately less than several percent in the pressure range from 10⁻⁵ Pa to 10⁻⁸ Pa. After the experiment, to confirm the performance in reducing the error caused by spatial conditions, an additional experiment was carried out using a sleeve and showed that the improved function was available.
NASA Astrophysics Data System (ADS)
Swanson, Steven Roy
The objective of the dissertation is to improve state estimation performance, as compared to a Kalman filter, when non-constant, or changing, biases exist in the measurement data. The state estimation performance increase will come from the use of a fuzzy model to determine the position and velocity gains of a state estimator. A method is proposed for incorporating heuristic knowledge into a state estimator through the use of a fuzzy model. This method consists of using a fuzzy model to determine the gains of the state estimator, converting the heuristic knowledge into the fuzzy model, and then optimizing the fuzzy model with a genetic algorithm. This method is applied to the problem of state estimation of a cascaded global positioning system (GPS)/inertial reference unit (IRU) navigation system. The GPS position data contains two major sources for position bias. The first bias is due to satellite errors and the second is due to the time delay or lag from when the GPS position is calculated until it is used in the state estimator. When a change in the bias of the measurement data occurs, a state estimator will converge on the new measurement data solution. This will introduce errors into a Kalman filter's estimated state velocities, which in turn will cause a position overshoot as it converges. By using a fuzzy model to determine the gains of a state estimator, the velocity errors and their associated deficiencies can be reduced.
Egger, C; Maurer, M
2015-04-15
Urban drainage design relying on observed precipitation series neglects the uncertainties associated with current and indeed future climate variability. Urban drainage design is further affected by the large stochastic variability of precipitation extremes and sampling errors arising from the short observation periods of extreme precipitation. Stochastic downscaling addresses anthropogenic climate impact by allowing relevant precipitation characteristics to be derived from local observations and an ensemble of climate models. This multi-climate model approach seeks to reflect the uncertainties in the data due to structural errors of the climate models. An ensemble of outcomes from stochastic downscaling allows for addressing the sampling uncertainty. These uncertainties are clearly reflected in the precipitation-runoff predictions of three urban drainage systems. They were mostly due to the sampling uncertainty. The contribution of climate model uncertainty was found to be of minor importance. Under the applied greenhouse gas emission scenario (A1B) and within the period 2036-2065, the potential for urban flooding in our Swiss case study is slightly reduced on average compared to the reference period 1981-2010. Scenario planning was applied to consider urban development associated with future socio-economic factors affecting urban drainage. The impact of scenario uncertainty was to a large extent found to be case-specific, thus emphasizing the need for scenario planning in every individual case. The results represent a valuable basis for discussions of new drainage design standards aiming specifically to include considerations of uncertainty. Copyright © 2015 Elsevier Ltd. All rights reserved.
A Real-Time Position-Locating Algorithm for CCD-Based Sunspot Tracking
NASA Technical Reports Server (NTRS)
Taylor, Jaime R.
1996-01-01
NASA Marshall Space Flight Center's (MSFC) EXperimental Vector Magnetograph (EXVM) polarimeter measures the sun's vector magnetic field. These measurements are taken to improve understanding of the sun's magnetic field in the hope of better predicting solar flares. Part of the procedure for the EXVM requires image motion stabilization over a period of a few minutes. A high speed tracker can be used to reduce image motion produced by wind loading on the EXVM, fluctuations in the atmosphere and other vibrations. The tracker consists of two elements, an image motion detector and a control system. The image motion detector determines the image movement from one frame to the next and sends an error signal to the control system. The ground-based application of reducing image motion due to atmospheric fluctuations requires an error determination rate of at least 100 Hz. It would be desirable to have an error determination rate of 1 kHz to assure that higher rate image motion is reduced and to increase the control system stability. Two algorithms are presented that are typically used for tracking. These algorithms are examined for their applicability for tracking sunspots, specifically their accuracy if only one column and one row of CCD pixels are used. To examine the accuracy of this method two techniques are used. One involves moving a sunspot image a known distance with computer software, then applying the particular algorithm to see how accurately it determines this movement. The second technique involves using a rate table to control the object motion, then applying the algorithms to see how accurately each determines the actual motion. Results from these two techniques are presented.
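The two algorithms themselves are not given in the record; a common building block for this kind of tracking is a cross-correlation shift estimate, sketched below for the single-row/single-column case mentioned above. Names and the integer-pixel resolution are illustrative; a real tracker would typically add sub-pixel interpolation of the correlation peak.

```python
import numpy as np

def estimate_shift_1d(ref_line, new_line):
    """Estimate the integer pixel shift between two 1-D CCD lines by locating
    the peak of their cross-correlation (positive = new_line shifted right)."""
    ref = ref_line - ref_line.mean()
    new = new_line - new_line.mean()
    corr = np.correlate(new, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

# Using only one row and one column of the CCD image:
# x_shift = estimate_shift_1d(ref_image[row, :], new_image[row, :])
# y_shift = estimate_shift_1d(ref_image[:, col], new_image[:, col])
```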
NASA Astrophysics Data System (ADS)
Cooper, Matthew; Martin, Randall V.; Padmanabhan, Akhila; Henze, Daven K.
2017-04-01
Satellite observations offer information applicable to top-down constraints on emission inventories through inverse modeling. Here we compare two methods of inverse modeling for emissions of nitrogen oxides (NOx) from nitrogen dioxide (NO2) columns using the GEOS-Chem chemical transport model and its adjoint. We treat the adjoint-based 4D-Var modeling approach for estimating top-down emissions as a benchmark against which to evaluate variations on the mass balance method. We use synthetic NO2 columns generated from known NOx emissions to serve as "truth." We find that error in mass balance inversions can be reduced by up to a factor of 2 with an iterative process that uses finite difference calculations of the local sensitivity of NO2 columns to a change in emissions. In a simplified experiment to recover local emission perturbations, horizontal smearing effects due to NOx transport are better resolved by the adjoint approach than by mass balance. For more complex emission changes, or at finer resolution, the iterative finite difference mass balance and adjoint methods produce similar global top-down inventories when inverting hourly synthetic observations, both reducing the a priori error by factors of 3-4. Inversions of simulated satellite observations from low Earth and geostationary orbits also indicate that both the mass balance and adjoint inversions produce similar results, reducing a priori error by a factor of 3. As the iterative finite difference mass balance method provides similar accuracy as the adjoint method, it offers the prospect of accurately estimating top-down NOx emissions using models that do not have an adjoint.
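A minimal sketch of the iterative finite-difference mass balance idea described above is shown below; `forward_model` stands in for a chemical transport model such as GEOS-Chem, and the iteration count, perturbation size and elementwise treatment of grid cells are assumptions made for illustration.

```python
import numpy as np

def iterative_mass_balance(e_prior, obs_columns, forward_model, n_iter=5, delta=0.1):
    """Iterative mass-balance inversion of emissions from column observations.

    At each iteration the local sensitivity beta = d(ln column)/d(ln emissions)
    is estimated by a finite difference and used to scale emissions toward the
    observed/modelled column ratio.
    """
    e = np.array(e_prior, dtype=float)
    for _ in range(n_iter):
        col = forward_model(e)                            # base model columns
        col_pert = forward_model(e * (1.0 + delta))       # perturbed-emissions run
        beta = np.log(col_pert / col) / np.log(1.0 + delta)
        e = e * (obs_columns / col) ** (1.0 / beta)       # mass-balance update
    return e
```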
Learning a locomotor task: with or without errors?
Marchal-Crespo, Laura; Schneider, Jasmin; Jaeger, Lukas; Riener, Robert
2014-03-04
Robotic haptic guidance is the most commonly used robotic training strategy to reduce performance errors while training. However, research on motor learning has emphasized that errors are a fundamental neural signal that drives motor adaptation. Thus, researchers have proposed robotic therapy algorithms that amplify movement errors rather than decrease them. However, to date, no study has analyzed with precision which training strategy is the most appropriate to learn an especially simple task. In this study, the impact of robotic training strategies that amplify or reduce errors on muscle activation and motor learning of a simple locomotor task was investigated in twenty-two healthy subjects. The experiment was conducted with the MAgnetic Resonance COmpatible Stepper (MARCOS), a special robotic device developed for investigations in the MR scanner. The robot moved the dominant leg passively and the subject was requested to actively synchronize the non-dominant leg to achieve an alternating stepping-like movement. Learning with four different training strategies that reduce or amplify errors was evaluated: (i) Haptic guidance: errors were eliminated by passively moving the limbs, (ii) No guidance: no robot disturbances were presented, (iii) Error amplification: existing errors were amplified with repulsive forces, (iv) Noise disturbance: errors were evoked intentionally with a randomly-varying force disturbance on top of the no guidance strategy. Additionally, the activation of four lower limb muscles was measured by means of surface electromyography (EMG). Strategies that reduce or do not amplify errors limit muscle activation during training and result in poor learning gains. Adding random disturbing forces during training seems to increase attention, and therefore improve motor learning. Error amplification seems to be the most suitable strategy for initially less skilled subjects, perhaps because subjects could better detect their errors and correct them. Error strategies have a great potential to evoke higher muscle activation and provoke better motor learning of simple tasks. Neuroimaging evaluation of brain regions involved in learning can provide valuable information on observed behavioral outcomes related to learning processes. The impacts of these strategies on neurological patients need further investigations.
NASA Astrophysics Data System (ADS)
Zhang, Y. K.; Liang, X.
2014-12-01
Effects of aquifer heterogeneity and uncertainties in source/sink, and initial and boundary conditions in a groundwater flow model on the spatiotemporal variations of groundwater level, h(x,t), were investigated. Analytical solutions for the variance and covariance of h(x, t) in an unconfined aquifer described by a linearized Boussinesq equation with a white noise source/sink and a random transmissivity field were derived. It was found that in a typical aquifer the error in h(x,t) at early times is mainly caused by the random initial condition, and that this error decreases with time, approaching a constant value at later times. The duration during which the effect of the random initial condition is significant may last a few hundred days in most aquifers. The constant error in groundwater level at later times is due to the combined effects of the uncertain source/sink and flux boundary: the closer to the flux boundary, the larger the error. The error caused by the uncertain head boundary is limited to a narrow zone near the boundary but remains more or less constant over time. The effect of the heterogeneity is to increase the variation of groundwater level, and the maximum effect occurs close to the constant head boundary because of the linear mean hydraulic gradient. The correlation of groundwater level decreases with temporal interval and spatial distance. In addition, the heterogeneity enhances the correlation of groundwater level, especially at larger time intervals and small spatial distances.
Pilot error in air carrier mishaps: longitudinal trends among 558 reports, 1983-2002.
Baker, Susan P; Qiang, Yandong; Rebok, George W; Li, Guohua
2008-01-01
Many interventions have been implemented in recent decades to reduce pilot error in flight operations. This study aims to identify longitudinal trends in the prevalence and patterns of pilot error and other factors in U.S. air carrier mishaps. National Transportation Safety Board investigation reports were examined for 558 air carrier mishaps during 1983-2002. Pilot errors and circumstances of mishaps were described and categorized. Rates were calculated per 10 million flights. The overall mishap rate remained fairly stable, but the proportion of mishaps involving pilot error decreased from 42% in 1983-87 to 25% in 1998-2002, a 40% reduction. The rate of mishaps related to poor decisions declined from 6.2 to 1.8 per 10 million flights, a 71% reduction; much of this decrease was due to a 76% reduction in poor decisions related to weather. Mishandling wind or runway conditions declined by 78%. The rate of mishaps involving poor crew interaction declined by 68%. Mishaps during takeoff declined by 70%, from 5.3 to 1.6 per 10 million flights. The latter reduction was offset by an increase in mishaps while the aircraft was standing, from 2.5 to 6.0 per 10 million flights, and during pushback, which increased from 0 to 3.1 per 10 million flights. Reductions in pilot errors involving decision making and crew coordination are important trends that may reflect improvements in training and technological advances that facilitate good decisions. Mishaps while aircraft are standing and during pushback have increased and deserve special attention.
Pilot Error in Air Carrier Mishaps: Longitudinal Trends Among 558 Reports, 1983–2002
Baker, Susan P.; Qiang, Yandong; Rebok, George W.; Li, Guohua
2009-01-01
Background Many interventions have been implemented in recent decades to reduce pilot error in flight operations. This study aims to identify longitudinal trends in the prevalence and patterns of pilot error and other factors in U.S. air carrier mishaps. Method National Transportation Safety Board investigation reports were examined for 558 air carrier mishaps during 1983–2002. Pilot errors and circumstances of mishaps were described and categorized. Rates were calculated per 10 million flights. Results The overall mishap rate remained fairly stable, but the proportion of mishaps involving pilot error decreased from 42% in 1983–87 to 25% in 1998–2002, a 40% reduction. The rate of mishaps related to poor decisions declined from 6.2 to 1.8 per 10 million flights, a 71% reduction; much of this decrease was due to a 76% reduction in poor decisions related to weather. Mishandling wind or runway conditions declined by 78%. The rate of mishaps involving poor crew interaction declined by 68%. Mishaps during takeoff declined by 70%, from 5.3 to 1.6 per 10 million flights. The latter reduction was offset by an increase in mishaps while the aircraft was standing, from 2.5 to 6.0 per 10 million flights, and during pushback, which increased from 0 to 3.1 per 10 million flights. Conclusions Reductions in pilot errors involving decision making and crew coordination are important trends that may reflect improvements in training and technological advances that facilitate good decisions. Mishaps while aircraft are standing and during push-back have increased and deserve special attention. PMID:18225771
A sensitivity analysis of nine diversity and seven similarity indices
Boyle, Terrence P.; Smillie, Gary M.; Anderson, Jana C.; Beeson, David R.
1990-01-01
Indices summarizing community structure are used to evaluate fundamental community ecology, species interaction, biogeographical factors, and environmental stress. Some of these indices are insensitive to gross community changes induced by contaminants or pollution. Sixteen indices commonly used to assess the status of aquatic communities in water quality studies were evaluated using computer simulation techniques to determine specific index responses. Three communities of different initial structure (19 species, 38 species, and 83 species) were generated using the lognormal equation. Each community was then perturbed in three ways: common species disproportionally reduced, all species proportionally reduced, and rare species disproportionally reduced. The behavior of the indices was analyzed graphically, and the differential response due to initial community structure and type of community change was documented. Some recommendations regarding potential sources of error when using community-level indices were developed.
Analysis of Wind Tunnel Lateral Oscillatory Data of the F-16XL Aircraft
NASA Technical Reports Server (NTRS)
Klein, Vladislav; Murphy, Patrick C.; Szyba, Nathan M.
2004-01-01
Static and dynamic wind tunnel tests were performed on an 18% scale model of the F-16XL aircraft. These tests were performed over a wide range of angles of attack and sideslip with oscillation amplitudes from 5 deg. to 30 deg. and reduced frequencies from 0.073 to 0.269. Harmonic analysis was used to estimate Fourier coefficients and in-phase and out-of-phase components. For frequency dependent data from rolling oscillations, a two-step regression method was used to obtain unsteady models (indicial functions), and derivatives due to sideslip angle, roll rate and yaw rate from in-phase and out-of-phase components. Frequency dependence was found for angles of attack between 20 deg. and 50 deg. Reduced values of coefficient of determination and increased values of fit error were found for angles of attack between 35 deg. and 45 deg. An attempt to estimate model parameters from yaw oscillations failed, probably due to the low number of test cases at different frequencies.
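As a hedged illustration of the harmonic-analysis step mentioned above, the sketch below extracts the mean, cosine and sine Fourier components of a forced-oscillation time history by least squares; which coefficient is interpreted as in-phase versus out-of-phase depends on the phase convention of the imposed motion, and all names are illustrative.

```python
import numpy as np

def harmonic_components(t, signal, omega):
    """Least-squares estimate of the mean, cosine and sine components of a
    measured coefficient time history at the forcing frequency omega."""
    X = np.column_stack([np.ones_like(t), np.cos(omega * t), np.sin(omega * t)])
    coef, *_ = np.linalg.lstsq(X, signal, rcond=None)
    c_mean, c_cos, c_sin = coef
    return c_mean, c_cos, c_sin

# For a motion phi(t) = phi0 * sin(omega * t), the sine component of the measured
# coefficient is in phase with the motion and the cosine component is out of phase.
```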
Liu, Jinpeng; Horimai, Hideyoshi; Lin, Xiao; Liu, Jinyan; Huang, Yong; Tan, Xiaodi
2017-06-01
The collinear holographic data storage system (CHDSS) is a very promising storage system due to its large storage capacities and high transfer rates in the era of big data. The digital micro-mirror device (DMD) as a spatial light modulator is the key device of the CHDSS due to its high speed, high precision, and broadband working range. To improve the system stability and performance, an optimal micro-mirror tilt angle was theoretically calculated and experimentally confirmed by analyzing the relationship between the tilt angle of the micro-mirror on the DMD and the power profiles of diffraction patterns of the DMD at the Fourier plane. In addition, we proposed a novel chess board sync mark design in the data page to reduce the system bit error rate in circumstances of reduced aperture required to decrease noise and median exposure amount. It will provide practical guidance for future DMD based CHDSS development.
Influence of Tooth Spacing Error on Gears With and Without Profile Modifications
NASA Technical Reports Server (NTRS)
Padmasolala, Giri; Lin, Hsiang H.; Oswald, Fred B.
2000-01-01
A computer simulation was conducted to investigate the effectiveness of profile modification for reducing dynamic loads in gears with different tooth spacing errors. The simulation examined varying amplitudes of spacing error and differences in the span of teeth over which the error occurs. The modification considered included both linear and parabolic tip relief. The analysis considered spacing error that varies around most of the gear circumference (similar to a typical sinusoidal error pattern) as well as a shorter span of spacing errors that occurs on only a few teeth. The dynamic analysis was performed using a revised version of a NASA gear dynamics code, modified to add tooth spacing errors to the analysis. Results obtained from the investigation show that linear tip relief is more effective in reducing dynamic loads on gears with small spacing errors but parabolic tip relief becomes more effective as the amplitude of spacing error increases. In addition, the parabolic modification is more effective for the more severe error case where the error is spread over a longer span of teeth. The findings of this study can be used to design robust tooth profile modification for improving dynamic performance of gear sets with different tooth spacing errors.
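The exact parameterization used in the gear dynamics code is not given in the abstract; purely for illustration, the sketch below defines the two modification shapes compared above, linear and parabolic tip relief, as functions of roll angle. All argument names and the normalization are assumptions.

```python
import numpy as np

def tip_relief(roll_angle, start_angle, tip_angle, max_relief, kind="linear"):
    """Tip relief (profile modification) versus roll angle: zero up to
    start_angle, then growing to max_relief at the tooth tip."""
    s = np.clip((roll_angle - start_angle) / (tip_angle - start_angle), 0.0, 1.0)
    if kind == "linear":
        return max_relief * s          # linear tip relief
    return max_relief * s ** 2         # parabolic tip relief
```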
Hromadka, T.V.; Guymon, G.L.
1985-01-01
An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. In conclusion, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
Regier, Michael D; Moodie, Erica E M
2016-05-01
We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
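The proposed extension itself is not reproduced in the record; for orientation, the sketch below shows the kind of small, self-contained EM algorithm (a textbook two-component 1-D Gaussian mixture fit) from which such a sequence of simpler EM steps would be assembled. It is the canonical EM iteration, not the authors' method.

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """Plain EM for a two-component 1-D Gaussian mixture."""
    x = np.asarray(x, dtype=float)
    mu = np.percentile(x, [25, 75]).astype(float)
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each observation
        dens = np.array([pi[k] / (np.sqrt(2 * np.pi) * sigma[k]) *
                         np.exp(-(x - mu[k]) ** 2 / (2 * sigma[k] ** 2))
                         for k in range(2)])
        resp = dens / dens.sum(axis=0)
        # M-step: update mixture weights, means and standard deviations
        nk = resp.sum(axis=1)
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
        pi = nk / len(x)
    return pi, mu, sigma
```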
Reconstructing the calibrated strain signal in the Advanced LIGO detectors
NASA Astrophysics Data System (ADS)
Viets, A. D.; Wade, M.; Urban, A. L.; Kandhasamy, S.; Betzwieser, J.; Brown, Duncan A.; Burguet-Castell, J.; Cahillane, C.; Goetz, E.; Izumi, K.; Karki, S.; Kissel, J. S.; Mendell, G.; Savage, R. L.; Siemens, X.; Tuyenbayev, D.; Weinstein, A. J.
2018-05-01
Advanced LIGO’s raw detector output needs to be calibrated to compute dimensionless strain h(t) . Calibrated strain data is produced in the time domain using both a low-latency, online procedure and a high-latency, offline procedure. The low-latency h(t) data stream is produced in two stages, the first of which is performed on the same computers that operate the detector’s feedback control system. This stage, referred to as the front-end calibration, uses infinite impulse response (IIR) filtering and performs all operations at a 16 384 Hz digital sampling rate. Due to several limitations, this procedure currently introduces certain systematic errors in the calibrated strain data, motivating the second stage of the low-latency procedure, known as the low-latency gstlal calibration pipeline. The gstlal calibration pipeline uses finite impulse response (FIR) filtering to apply corrections to the output of the front-end calibration. It applies time-dependent correction factors to the sensing and actuation components of the calibrated strain to reduce systematic errors. The gstlal calibration pipeline is also used in high latency to recalibrate the data, which is necessary due mainly to online dropouts in the calibrated data and identified improvements to the calibration models or filters.
NASA Astrophysics Data System (ADS)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.
2018-03-01
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
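As a generic illustration of the global sensitivity analysis step mentioned above (not the study's actual implementation), the sketch below estimates first-order Sobol indices with a pick-and-freeze Monte Carlo scheme for a model with independent uniform inputs; the estimator, sample size and `model` placeholder are standard, illustrative choices.

```python
import numpy as np

def first_order_sobol(model, n_dim, n_samples=4096, seed=0):
    """Pick-and-freeze Monte Carlo estimate of first-order Sobol indices.

    model: callable mapping an (n, n_dim) array of uniform(0,1) inputs to (n,) outputs.
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_dim))
    B = rng.random((n_samples, n_dim))
    yA, yB = model(A), model(B)
    var_y = np.concatenate([yA, yB]).var()
    s1 = np.empty(n_dim)
    for i in range(n_dim):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]                           # vary only the i-th input
        yAB_i = model(AB_i)
        s1[i] = np.mean(yB * (yAB_i - yA)) / var_y     # Saltelli-type estimator
    return s1

# Toy example where the first input dominates:
# s = first_order_sobol(lambda x: 5 * x[:, 0] + x[:, 1] + 0.1 * x[:, 2], n_dim=3)
```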
NASA Technical Reports Server (NTRS)
Ream, Allen
2011-01-01
A pair of conjugated multiple bandpass filters (CMBF) can be used to create spatially separated pupils in a traditional lens and imaging sensor system allowing for the passive capture of stereo video. This method is especially useful for surgical endoscopy where smaller cameras are needed to provide ample room for manipulating tools while also granting improved visualizations of scene depth. The significant issue in this process is that, due to the complementary nature of the filters, the colors seen through each filter do not match each other, and also differ from colors as seen under a white illumination source. A color correction model was implemented that included optimized filter selection, such that the degree of necessary post-processing correction was minimized, and a chromatic adaptation transformation that attempted to correct the tristimulus values of the imaged colors based on the principle of color constancy. Due to fabrication constraints, only dual bandpass filters were feasible. The theoretical average color error after correction between these filters was still above the fusion limit, meaning that rivalry conditions are possible during viewing. This error can be minimized further by designing the filters for a subset of colors corresponding to specific working environments.
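The paper's specific chromatic adaptation transformation is not detailed in the abstract; the sketch below shows a standard von Kries-style transform (using the Bradford cone-response matrix) of the kind commonly used for this purpose, mapping tristimulus values captured under one illuminant/filter condition toward those expected under a reference white. It is a generic example, not the authors' model.

```python
import numpy as np

# Bradford matrix mapping XYZ tristimulus values to a cone-like response space
M_BFD = np.array([[ 0.8951,  0.2664, -0.1614],
                  [-0.7502,  1.7135,  0.0367],
                  [ 0.0389, -0.0685,  1.0296]])

def von_kries_adapt(xyz, white_src, white_dst):
    """Von Kries / Bradford chromatic adaptation: scale cone responses by the
    ratio of destination to source white-point responses."""
    lms_src = M_BFD @ white_src
    lms_dst = M_BFD @ white_dst
    gain = np.diag(lms_dst / lms_src)                 # per-channel adaptation gains
    M = np.linalg.inv(M_BFD) @ gain @ M_BFD           # full XYZ-to-XYZ adaptation matrix
    return M @ xyz
```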
Potential and Limitations of an Improved Method to Produce Dynamometric Wheels
García de Jalón, Javier
2018-01-01
A new methodology for the estimation of tyre-contact forces is presented. The new procedure is an evolution of a previous method based on harmonic elimination techniques developed with the aim of producing low cost dynamometric wheels. While the original method required stress measurement in many rim radial lines and the fulfillment of some rigid conditions of symmetry, the new methodology described in this article significantly reduces the number of required measurement points and greatly relaxes symmetry constraints. This can be done without compromising the estimation error level. The reduction of the number of measuring radial lines increases the ripple of demodulated signals due to non-eliminated higher order harmonics. Therefore, it is necessary to adapt the calibration procedure to this new scenario. A new calibration procedure that takes into account angular position of the wheel is completely described. This new methodology is tested on a standard commercial five-spoke car wheel. Obtained results are qualitatively compared to those derived from the application of former methodology leading to the conclusion that the new method is both simpler and more robust due to the reduction in the number of measuring points, while contact forces’ estimation error remains at an acceptable level. PMID:29439427
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; ...
2018-02-09
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. In conclusion, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
Handheld tools assess medical necessity at the point of care.
Pollard, Dan
2002-01-01
An emerging strategy to manage financial risk in clinical practice is to involve the physician at the point of care. Using handheld technology, encounter-specific information along with medical necessity policy can be presented to physicians allowing them to integrate it into their medical decision-making process. Three different strategies are discussed: reference books or paper encounter forms, electronic reference tools, and integrated process tools. The electronic reference tool strategy was evaluated and showed a return on investment exceeding 1200% due to reduced overhead costs associated with rework of claim errors.
i-LOVE: ISS-JEM lidar for observation of vegetation environment
NASA Astrophysics Data System (ADS)
Asai, Kazuhiro; Sawada, Haruo; Sugimoto, Nobuo; Mizutani, Kohei; Ishii, Shoken; Nishizawa, Tomoaki; Shimoda, Haruhisa; Honda, Yoshiaki; Kajiwara, Koji; Takao, Gen; Hirata, Yasumasa; Saigusa, Nobuko; Hayashi, Masatomo; Oguma, Hiroyuki; Saito, Hideki; Awaya, Yoshio; Endo, Takahiro; Imai, Tadashi; Murooka, Jumpei; Kobatashi, Takashi; Suzuki, Keiko; Sato, Ryota
2012-11-01
It is very important to monitor the spatial distribution of vegetation biomass and its changes over time, as this provides invaluable information for improving present assessments and future projections of the terrestrial carbon cycle. A space lidar is well known as a powerful remote sensing technology for measuring the canopy height accurately. This paper describes the ISS (International Space Station) JEM (Japanese Experimental Module) EF (Exposed Facility) borne vegetation lidar, which uses a two-dimensional array detector in order to reduce the root mean square error (RMSE) of tree height due to sloped surfaces.
Measurements of the toroidal torque balance of error field penetration locked modes
Shiraki, Daisuke; Paz-Soldan, Carlos; Hanson, Jeremy M.; ...
2015-01-05
Here, detailed measurements from the DIII-D tokamak of the toroidal dynamics of error field penetration locked modes under the influence of slowly evolving external fields enable study of the toroidal torques on the mode, including interaction with the intrinsic error field. The error field in these low-density Ohmic discharges is well known based on the mode penetration threshold, allowing resonant and non-resonant torque effects to be distinguished. These m/n = 2/1 locked modes are found to be well described by a toroidal torque balance between the resonant interaction with n = 1 error fields and a viscous torque in the electron diamagnetic drift direction which is observed to scale as the square of the perturbed field due to the island. Fitting to this empirical torque balance allows a time-resolved measurement of the intrinsic error field of the device, providing evidence for a time-dependent error field in DIII-D due to ramping of the Ohmic coil current.
The variability of atmospheric equivalent temperature for radar altimeter range correction
NASA Technical Reports Server (NTRS)
Liu, W. Timothy; Mock, Donald
1990-01-01
Two sets of data were used to test the validity of the presently used approximation for radar altimeter range correction due to atmospheric water vapor. The approximation includes an assumption of constant atmospheric equivalent temperature. The first data set includes monthly, three-dimensional, gridded temperature and humidity fields over global oceans for a 10-year period, and the second is comprised of daily or semidaily rawinsonde data at 17 island stations for a 7-year period. It is found that the standard method underestimates the variability of the equivalent temperature, and the approximation could introduce errors of 2 cm for monthly means. The equivalent temperature is found to have a strong meridional gradient, and the highest temporal variabilities are found over western boundary currents. The study affirms that the atmospheric water vapor is a good predictor for both the equivalent temperature and the range correction. A relation is proposed to reduce the error.
Optimization of Pockels electric field in transverse modulated optical voltage sensor
NASA Astrophysics Data System (ADS)
Huang, Yifan; Xu, Qifeng; Chen, Kun-Long; Zhou, Jie
2018-05-01
This paper investigates the possibility of optimizing the Pockels electric field in a transverse modulated optical voltage sensor with a spherical electrode structure. The simulations show that, due to the edge effect and the electric field concentrations and distortions, the electric field distributions in the crystal are non-uniform. In this case, a tiny variation in the light path leads to an integral error of more than 0.5%. Moreover, a 2D model cannot effectively represent the edge effect, so a 3D model is employed to optimize the electric field distributions. Furthermore, a new method to attach a quartz crystal to the electro-optic crystal along the electric field direction is proposed to reduce the non-uniformity of the electric field. The integral error is therefore reduced from 0.5% to 0.015% or less. The proposed method is simple, practical and effective, and it has been validated by numerical simulations and experimental tests.
Zhang, Zezhong; Qi, Qingqing
2014-05-01
Medication errors are very dangerous, since they can cause serious or even fatal harm to patients. In order to reduce medication errors, automated patient medication systems using Radio Frequency Identification (RFID) technology have been used in many hospitals. The data transmitted in those medication systems is very important and sensitive. In the past decade, many security protocols that ensure its secure transmission have been proposed and have attracted wide attention. Because it provides mutual authentication between the medication server and the tag, the RFID authentication protocol is considered one of the most important security protocols in those systems. In this paper, we propose an RFID authentication protocol to enhance patient medication safety using elliptic curve cryptography (ECC). The analysis shows the proposed protocol could overcome security weaknesses in previous protocols and has better performance. Therefore, the proposed protocol is very suitable for automated patient medication systems.
A Computational Intelligence (CI) Approach to the Precision Mars Lander Problem
NASA Technical Reports Server (NTRS)
Birge, Brian; Walberg, Gerald
2002-01-01
A Mars precision landing requires a landed footprint of no more than 100 meters. Obstacles to reducing the landed footprint include trajectory dispersions due to initial atmospheric entry conditions such as entry angle, parachute deployment height, environment parameters such as wind and atmospheric density, parachute deployment dynamics, unavoidable injection error or propagated error from launch, etc. Computational Intelligence (CI) techniques such as Artificial Neural Nets and Particle Swarm Optimization have been shown to have great success with other control problems. The research period extended previous work investigating the applicability of computational intelligence approaches. The focus of this investigation was on Particle Swarm Optimization and basic Neural Net architectures. The research investigating these issues was performed for the grant cycle from 5/15/01 to 5/15/02. Matlab 5.1 and 6.0 along with NASA's POST were the primary computational tools.
Development of a decentralized multi-axis synchronous control approach for real-time networks.
Xu, Xiong; Gu, Guo-Ying; Xiong, Zhenhua; Sheng, Xinjun; Zhu, Xiangyang
2017-05-01
The message scheduling and the network-induced delays of real-time networks, together with the different inertias and disturbances in different axes, make the synchronous control of real-time network-based systems quite challenging. To address this challenge, a decentralized multi-axis synchronous control approach is developed in this paper. Due to the limitations of message scheduling and network bandwidth, the position synchronization error is first defined in the proposed control approach over a subset of preceding-axis pairs. Then, a motion message estimator is designed to reduce the effect of network delays. It is proven that position and synchronization errors asymptotically converge to zero in the proposed controller with the delay compensation. Finally, simulation and experimental results show that the developed control approach can achieve good position synchronization performance for multi-axis motion over the real-time network. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
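A minimal sketch of the preceding-axis-pair idea, under the assumption that each axis is coupled to the axis that precedes it: the control signal acts on both the per-axis position error and the difference of errors between paired axes. The gains, the ring-style pairing, and the control law below are illustrative placeholders, not the paper's controller or delay compensation.

```python
import numpy as np

def sync_errors(q, q_ref):
    e = q_ref - q                    # per-axis position error
    eps = e - np.roll(e, 1)          # e_i - e_{i-1}: preceding-axis pairs (ring for axis 0)
    return e, eps

def control(q, q_ref, kp=5.0, ks=2.0):
    # drive both the position error and the cross-axis synchronization error to zero
    e, eps = sync_errors(q, q_ref)
    return kp * e + ks * eps

q_ref = np.array([1.0, 1.0, 1.0, 1.0])
q = np.array([0.80, 0.95, 0.70, 1.05])
print(control(q, q_ref))
```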
Building a genome database using an object-oriented approach.
Barbasiewicz, Anna; Liu, Lin; Lang, B Franz; Burger, Gertraud
2002-01-01
GOBASE is a relational database that integrates data associated with mitochondria and chloroplasts. The most important data in GOBASE, i.e., molecular sequences and taxonomic information, are obtained from the public sequence data repository at the National Center for Biotechnology Information (NCBI), and are validated by our experts. Maintaining a curated genomic database comes with a towering labor cost, due to the sheer volume of available genomic sequences and the plethora of annotation errors and omissions in records retrieved from public repositories. Here we describe our approach to increase automation of the database population process, thereby reducing manual intervention. As a first step, we used Unified Modeling Language (UML) to construct a list of potential errors. Each case was evaluated independently, and an expert solution was devised and represented as a diagram. Subsequently, the UML diagrams were used as templates for writing object-oriented automation programs in the Java programming language.
Efficient color correction method for smartphone camera-based health monitoring application.
Duc Dang; Chae Ho Cho; Daeik Kim; Oh Seok Kwon; Jo Woon Chong
2017-07-01
Smartphone health monitoring applications have recently been highlighted due to the rapid improvement of smartphone hardware and software performance. However, the color characteristics of images captured by different smartphone models are dissimilar to each other, and this difference may give non-identical health monitoring results when such applications monitor physiological information using the embedded smartphone cameras. In this paper, we investigate the differences in color properties of the captured images from different smartphone models and apply a color correction method to adjust dissimilar color values obtained from different smartphone cameras. Experimental results show that the color-corrected images obtained using the correction method provide much smaller color intensity errors compared with the images without correction. These results can be applied to enhance the consistency of smartphone camera-based health monitoring applications by reducing color intensity errors among the images obtained from different smartphones.
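One common way to implement such a correction is a linear (matrix-plus-offset) mapping fitted by least squares between one camera's RGB values and reference values; the sketch below shows this generic approach with placeholder colour patches, and is not necessarily the specific method used in the paper.

```python
import numpy as np

def fit_color_correction(rgb_measured, rgb_reference):
    # augment with a constant column so the fit includes an offset term
    X = np.hstack([rgb_measured, np.ones((rgb_measured.shape[0], 1))])
    M, *_ = np.linalg.lstsq(X, rgb_reference, rcond=None)   # (4, 3) correction matrix
    return M

def apply_correction(rgb, M):
    X = np.hstack([rgb, np.ones((rgb.shape[0], 1))])
    return np.clip(X @ M, 0, 255)

# placeholder patch measurements from one phone vs. reference values
measured = np.array([[200., 120., 90.], [60., 180., 200.], [30., 40., 50.]])
reference = np.array([[210., 115., 80.], [55., 190., 210.], [25., 45., 55.]])
M = fit_color_correction(measured, reference)
print(apply_correction(measured, M))
```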
Nonlinear gearshifts control of dual-clutch transmissions during inertia phase.
Hu, Yunfeng; Tian, Lu; Gao, Bingzhao; Chen, Hong
2014-07-01
In this paper, a model-based nonlinear gearshift controller is designed by the backstepping method to improve the shift quality of vehicles with a dual-clutch transmission (DCT). For ease of implementation, the controller is rearranged into a concise structure which contains a feedforward control and a feedback control. Then, robustness of the closed-loop error system is discussed in the framework of input-to-state stability (ISS) theory, where model uncertainties are treated as additive disturbance inputs. Furthermore, due to the application of the backstepping method, the closed-loop error system takes the form of a linear system. Using linear system theory, a guideline for selecting the controller parameters is deduced, which could reduce the workload of parameter tuning. Finally, simulation results and Hardware in the Loop (HiL) simulation are presented to validate the effectiveness of the designed controller. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Wall shear stress measurements using a new transducer
NASA Technical Reports Server (NTRS)
Vakili, A. D.; Wu, J. M.; Lawing, P. L.
1986-01-01
A new instrument has been developed for direct measurement of wall shear stress. This instrument is simple and symmetric in design with small moving mass and no internal friction. Features employed in the design of this instrument eliminate most of the difficulties associated with traditional floating element balances. Vibration problems associated with floating element skin friction balances have been found to be minimized by the design features and the optional damping provided. The unique design of this instrument eliminates or reduces the errors associated with conventional floating-element devices, such as errors due to gaps, pressure gradients, acceleration, heat transfer and temperature change. The instrument is equipped with various sensing systems and the output signal is a linear function of the wall shear stress. Measurements made in three different tunnels show good agreement with theory and with data obtained by floating element devices.
NASA Astrophysics Data System (ADS)
Seo, Seung Beom
Although water is one of the most essential natural resources, human activities have been exerting pressure on water resources. In order to reduce these stresses on water resources, two key issues threatening water resources sustainability - interaction between surface water and groundwater resources and groundwater withdrawal impacts on streamflow depletion - were investigated in this study. First, a systematic decomposition procedure was proposed for quantifying the errors arising from various sources in the model chain in projecting the changes in hydrologic attributes using near-term climate change projections. Apart from the unexplained changes by GCMs, the process of customizing GCM projections to watershed scale through a model chain - spatial downscaling, temporal disaggregation and hydrologic model - also introduces errors, thereby limiting the ability to explain the observed changes in hydrologic variability. Towards this, we first propose metrics for quantifying the errors arising from different steps in the model chain in explaining the observed changes in hydrologic variables (streamflow, groundwater). The proposed metrics are then evaluated using detailed retrospective analyses in projecting the changes in streamflow and groundwater attributes in four target basins that span diverse hydroclimatic regimes over the US Sunbelt. Our analyses focused on quantifying the dominant sources of errors in projecting the changes in eight hydrologic variables - mean and variability of seasonal streamflow, mean and variability of 3-day peak seasonal streamflow, mean and variability of 7-day low seasonal streamflow and mean and standard deviation of groundwater depth - over four target basins using the Penn State Integrated Hydrologic Model (PIHM) between the periods 1956-1980 and 1981-2005. Retrospective analyses show that small/humid (large/arid) basins show increased (reduced) uncertainty in projecting the changes in hydrologic attributes. Further, changes in error due to GCMs primarily account for the unexplained changes in mean and variability of seasonal streamflow. On the other hand, the changes in error due to temporal disaggregation and hydrologic model account for the inability to explain the observed changes in mean and variability of seasonal extremes. Thus, the proposed metrics provide insights into how the error in explaining the observed changes is propagated through the model under different hydroclimatic regimes. To understand the interaction between surface water and groundwater resources, transient pumping impacts on streamflow and groundwater level were analyzed by imposing short-term pumping scenarios under historic drought conditions. Since surface water and groundwater systems are fully coupled and integrated systems, increased groundwater withdrawal during drought may reduce baseflow into the stream and prolong both systems' recovery from drought. Towards this, we proposed an uncertainty framework to understand the resiliency of groundwater and surface water systems using a fully-coupled hydrologic model under transient pumping. Using this framework, we quantified the restoration time of surface water and groundwater systems and also estimated the changes in the state variables after pumping. Groundwater pumping impacts over the watershed were also analyzed under different pumping volumes and different potential climate scenarios. Our analyses show that groundwater restoration time is more sensitive to changes in pumping volumes as opposed to changes in climate.
After the cessation of pumping, streamflow recovers quickly in comparison to groundwater. Pumping impacts on other state variables are also discussed. Given that surface water and groundwater are inter-connected, optimal management of both resources should be considered to improve watershed resiliency under drought. Subsequently, conjunctive use of surface water and groundwater has been considered as an effective approach to mitigate water shortage problems that are primarily caused by drought. It is found that appropriate use of groundwater withdrawal was able to reduce water scarcity in surface water resources under drought conditions. In addition, a recovery-time constraint was embedded in the management model so that the trade-off between minimizing water scarcity and maximizing groundwater sustainability was successfully addressed.
Interruption Practice Reduces Errors
2014-01-01
dangers of errors at the PCS. Electronic health record systems are used to reduce certain errors related to poor handwriting and dosage… 10.16, MSE = .31, p < .05, η² = .18. A significant interaction between the number of interruptions and interrupted trials suggests that trials… the variance when calculating whether a memory has a higher signal than interference. If something in addition to activation contributes to goal
Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction
Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir
2016-10-20
Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be ‘matched’ to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.
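The distinction between the two projections can be seen in a single backward-Euler step of a linear system: Galerkin enforces orthogonality of the time-discrete residual to the reduced basis, while LSPG minimizes the residual norm. The sketch below uses a random toy operator and basis, not the turbulent-flow problem or GNAT implementation from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, dt = 200, 5, 0.1
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))   # stable-ish toy operator
V, _ = np.linalg.qr(rng.standard_normal((n, k)))       # orthonormal reduced basis
x_old = rng.standard_normal(n)

M = np.eye(n) - dt * A        # time-discrete operator: residual r(x) = M x - x_old

# Galerkin: enforce V^T r(V q) = 0  ->  solve (V^T M V) q = V^T x_old
q_gal = np.linalg.solve(V.T @ M @ V, V.T @ x_old)

# LSPG: minimise ||r(V q)||_2      ->  least-squares problem with W = M V
q_lspg, *_ = np.linalg.lstsq(M @ V, x_old, rcond=None)

# LSPG is residual-optimal at the time-discrete level, so its residual norm is never larger
print(np.linalg.norm(M @ V @ q_gal - x_old), np.linalg.norm(M @ V @ q_lspg - x_old))
```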
Modeling of influencing parameters in active noise control on an enclosure wall
NASA Astrophysics Data System (ADS)
Tarabini, Marco; Roure, Alain
2008-04-01
This paper investigates, by means of a numerical model, the possibility of using an active noise barrier to virtually reduce the acoustic transparency of a partition wall inside an enclosure. The room is modeled with the image method as a rectangular enclosure with a stationary point source; the active barrier is set up by an array of loudspeakers and error microphones and is meant to minimize the squared sound pressure on a wall with the use of a decentralized control. Simulations investigate the effects of the enclosure characteristics and of the barrier geometric parameters on the sound pressure attenuation on the controlled partition, on the whole enclosure potential energy and on the diagonal control stability. Performances are analyzed in a frequency range of 25-300 Hz at discrete 25 Hz steps. Influencing parameters and their effects on the system performances are identified with a statistical inference procedure. Simulation results have shown that it is possible to reduce, on average, the sound pressure on the controlled partition. In the investigated configuration, the surface attenuation and the diagonal control stability are mainly driven by the distance between the loudspeakers and the error microphones and by the loudspeaker directivity; minor effects are due to the distance between the error microphones and the wall, the wall reflectivity and the active barrier grid meshing. Room dimensions and source position have negligible effects. Experimental results confirm the validity of the model and the efficiency of the barrier in the reduction of the wall acoustic transparency.
NASA Astrophysics Data System (ADS)
Dahlqvist, Per
1999-10-01
We estimate the error in the semiclassical trace formula for the Sinai billiard under the assumption that the largest source of error is due to penumbra diffraction: namely, diffraction effects for trajectories passing within a distance R·O((kR)^(-2/3)) of the disc and trajectories being scattered in very forward directions. Here k is the momentum and R the radius of the scatterer. The semiclassical error is estimated by perturbing the Berry-Keating formula. The analysis necessitates an asymptotic analysis of very long periodic orbits. This is obtained within an approximation originally due to Baladi, Eckmann and Ruelle. We find that the average error, for sufficiently large values of kR, will exceed the mean level spacing.
Characterizing the SWOT discharge error budget on the Sacramento River, CA
NASA Astrophysics Data System (ADS)
Yoon, Y.; Durand, M. T.; Minear, J. T.; Smith, L.; Merry, C. J.
2013-12-01
The Surface Water and Ocean Topography (SWOT) satellite is an upcoming mission (planned for 2020) that will provide surface-water elevation and surface-water extent globally. One goal of SWOT is the estimation of river discharge directly from SWOT measurements. SWOT discharge uncertainty is due to two sources. First, SWOT cannot directly measure the channel bathymetry and roughness coefficient data necessary for discharge calculations; these parameters must be estimated from the measurements or from a priori information. Second, SWOT measurement errors directly impact the discharge estimate accuracy. This study focuses on characterizing parameter and measurement uncertainties for SWOT river discharge estimation. A Bayesian Markov Chain Monte Carlo scheme is used to calculate parameter estimates, given the measurements of river height, slope and width, and mass and momentum constraints. The algorithm is evaluated using both simulated SWOT and AirSWOT (the airborne version of SWOT) observations over seven reaches (about 40 km) of the Sacramento River. The SWOT and AirSWOT observations are simulated by corrupting the 'true' HEC-RAS hydraulic modeling results with the instrument error. This experiment answers how unknown bathymetry and roughness coefficients affect the accuracy of the river discharge algorithm. From the experiment, the discharge error budget is almost completely dominated by unknown bathymetry and roughness; 81% of the variance error is explained by uncertainties in bathymetry and roughness. Second, we show how the errors in water surface, slope, and width observations influence the accuracy of discharge estimates. Indeed, there is a significant sensitivity to water surface, slope, and width errors due to the sensitivity of bathymetry and roughness to measurement errors. Increasing water-surface error above 10 cm leads to a corresponding sharper increase of errors in bathymetry and roughness. Increasing slope error above 1.5 cm/km leads to a significant degradation due to direct error in the discharge estimates. As the width error increases past 20%, the discharge error budget is dominated by the width error. The above two experiments are performed based on AirSWOT scenarios. In addition, we explore the sensitivity of the algorithm to the SWOT scenarios.
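The sensitivity of a Manning-type discharge estimate to height, slope, and width errors can be illustrated with a simple Monte Carlo error budget. In the sketch below, the reach geometry, roughness, and error levels are illustrative placeholders, and a wide-channel Manning relation stands in for the paper's full Bayesian mass-and-momentum formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
A0, n_mann = 300.0, 0.03            # assumed base cross-sectional area (m^2) and roughness
W, S, dh = 120.0, 2e-4, 1.5         # width (m), slope (m/m), height above base level (m)

def manning_q(A, w, s, n):
    R = A / w                        # wide-channel hydraulic radius approximation
    return (1.0 / n) * A * R ** (2.0 / 3.0) * np.sqrt(s)

Q_true = manning_q(A0 + dh * W, W, S, n_mann)

def relative_discharge_error(sig_h, sig_s, sig_w, n_mc=20_000):
    h = dh + rng.normal(0.0, sig_h, n_mc)             # height error (m)
    s = np.clip(S + rng.normal(0.0, sig_s, n_mc), 1e-6, None)   # slope error (m/m)
    w = W * (1.0 + rng.normal(0.0, sig_w, n_mc))      # relative width error
    q = manning_q(A0 + h * w, w, s, n_mann)
    return np.std(q) / Q_true

for sig_h in (0.05, 0.10, 0.20):                      # 5, 10, 20 cm height error
    print(sig_h, relative_discharge_error(sig_h, sig_s=1.5e-6, sig_w=0.05))
```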
Reduced cost and improved figure of sapphire optical components
NASA Astrophysics Data System (ADS)
Walters, Mark; Bartlett, Kevin; Brophy, Matthew R.; DeGroote Nelson, Jessica; Medicus, Kate
2015-10-01
Sapphire presents many challenges to optical manufacturers due to its high hardness and anisotropic properties. Long lead times and high prices are the typical result of such challenges. The cost of even a simple 'grind and shine' process can be prohibitive. The high precision surfaces required by optical sensor applications further exacerbate the challenge of processing sapphire, thereby increasing cost further. Optimax has demonstrated a production process for such windows that delivers over 50% time reduction as compared to traditional manufacturing processes for sapphire, while producing windows with less than 1/5 wave rms figure error. Optimax's sapphire production process achieves significant improvement in cost by implementation of a controlled grinding process to present the best possible surface to the polishing equipment. Following the grinding process is a polishing process taking advantage of chemical interactions between slurry and substrate to deliver excellent removal rates and surface finish. Through experiments, the mechanics of the polishing process were also optimized to produce excellent optical figure. In addition to reducing the cost of producing large sapphire sensor windows, the grinding and polishing technology Optimax has developed aids in producing spherical sapphire components with better figure quality. Through specially developed polishing slurries, the peak-to-valley figure error of spherical sapphire parts is reduced by over 80%.
Evaluation of Phantom-Based Education System for Acupuncture Manipulation
Lee, In-Seon; Lee, Ye-Seul; Park, Hi-Joon; Lee, Hyejung; Chae, Younbyoung
2015-01-01
Background: Although acupuncture manipulation has been regarded as one of the important factors in clinical outcome, it has been difficult to train novice students to become skillful experts due to a lack of adequate educational program and tools. Objectives: In the present study, we investigated whether newly developed phantom acupoint tools would be useful to practice-naïve acupuncture students for practicing the three different types of acupuncture manipulation to enhance their skills. Methods: We recruited 12 novice students and had them practice acupuncture manipulations on the phantom acupoint (5% agarose gel). We used the Acusensor 2 and compared their acupuncture manipulation techniques, for which the target criteria were depth and time factors, at acupoint LI11 in the human body before and after 10 training sessions. The outcomes were depth of needle insertion, depth error from target criterion, time of rotating, lifting, and thrusting, time error from target criteria and the time ratio. Results: After 10 training sessions, the students showed significantly improved outcomes in depth of needle, depth error (rotation, reducing lifting/thrusting), thumb-forward time error, thumb-backward time error (rotation), and lifting time (reinforcing lifting/thrusting). Conclusions: The phantom acupoint tool could be useful in a phantom-based education program for acupuncture-manipulation training for students. For advanced education programs for acupuncture manipulation, we will need to collect additional information, such as patient responses, acupoint-specific anatomical characteristics, delicate tissue-like modeling, haptic and visual feedback, and data from an acupuncture practice simulator. PMID:25689598
De-biasing the dynamic mode decomposition for applied Koopman spectral analysis of noisy datasets
NASA Astrophysics Data System (ADS)
Hemati, Maziar S.; Rowley, Clarence W.; Deem, Eric A.; Cattafesta, Louis N.
2017-08-01
The dynamic mode decomposition (DMD)—a popular method for performing data-driven Koopman spectral analysis—has gained increased popularity for extracting dynamically meaningful spatiotemporal descriptions of fluid flows from snapshot measurements. Often times, DMD descriptions can be used for predictive purposes as well, which enables informed decision-making based on DMD model forecasts. Despite its widespread use and utility, DMD can fail to yield accurate dynamical descriptions when the measured snapshot data are imprecise due to, e.g., sensor noise. Here, we express DMD as a two-stage algorithm in order to isolate a source of systematic error. We show that DMD's first stage, a subspace projection step, systematically introduces bias errors by processing snapshots asymmetrically. To remove this systematic error, we propose utilizing an augmented snapshot matrix in a subspace projection step, as in problems of total least-squares, in order to account for the error present in all snapshots. The resulting unbiased and noise-aware total DMD (TDMD) formulation reduces to standard DMD in the absence of snapshot errors, while the two-stage perspective generalizes the de-biasing framework to other related methods as well. TDMD's performance is demonstrated in numerical and experimental fluids examples. In particular, in the analysis of time-resolved particle image velocimetry data for a separated flow, TDMD outperforms standard DMD by providing dynamical interpretations that are consistent with alternative analysis techniques. Further, TDMD extracts modes that reveal detailed spatial structures missed by standard DMD.
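A compact way to see the de-biasing idea, as reconstructed from the description above: project both snapshot matrices onto the row space of the augmented (stacked) snapshot matrix before fitting the DMD operator, so that measurement noise in all snapshots is treated symmetrically. The toy data, noise level, and rank choices below are illustrative, not the PIV dataset from the paper.

```python
import numpy as np

def dmd_eigs(X, Y, r):
    # standard (exact) DMD: reduced operator Atilde = U* Y V S^{-1}
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(Atilde)

def tdmd_eigs(X, Y, r):
    # total DMD sketch: project X and Y onto the leading row space of [X; Y]
    Z = np.vstack([X, Y])
    _, _, Vh = np.linalg.svd(Z, full_matrices=False)
    P = Vh[:r].conj().T @ Vh[:r]
    return dmd_eigs(X @ P, Y @ P, r)

# toy data: two decaying oscillatory modes observed through random sensors plus noise
rng = np.random.default_rng(0)
t = np.arange(100) * 0.1
clean = np.array([np.cos(2 * t) * np.exp(-0.1 * t), np.sin(3 * t) * np.exp(-0.05 * t)])
data = rng.standard_normal((30, 2)) @ clean + 0.05 * rng.standard_normal((30, 100))
X, Y = data[:, :-1], data[:, 1:]
print("DMD  eigenvalues:", dmd_eigs(X, Y, r=4))
print("TDMD eigenvalues:", tdmd_eigs(X, Y, r=4))
```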
Reduction in chemotherapy order errors with computerized physician order entry.
Meisenberg, Barry R; Wright, Robert R; Brady-Copertino, Catherine J
2014-01-01
To measure the number and type of errors in chemotherapy order composition associated with three sequential methods of ordering: handwritten orders, preprinted orders, and computerized physician order entry (CPOE) embedded in the electronic health record. From 2008 to 2012, a sample of completed chemotherapy orders was reviewed by a pharmacist for the number and type of errors as part of routine performance improvement monitoring. Error frequencies for each of the three distinct methods of composing chemotherapy orders were compared using statistical methods. The rate of problematic order sets (those requiring significant rework for clarification) was reduced from 30.6% with handwritten orders to 12.6% with preprinted orders (preprinted v handwritten, P < .001) to 2.2% with CPOE (preprinted v CPOE, P < .001). The incidence of errors capable of causing harm was reduced from 4.2% with handwritten orders to 1.5% with preprinted orders (preprinted v handwritten, P < .001) to 0.1% with CPOE (CPOE v preprinted, P < .001). The number of problem- and error-containing chemotherapy orders was reduced sequentially by preprinted order sets and then by CPOE. CPOE is associated with low error rates, but it did not eliminate all errors, and the technology can introduce novel types of errors not seen with traditional handwritten or preprinted orders. Vigilance even with CPOE is still required to avoid patient harm.
NASA Technical Reports Server (NTRS)
Menard, Richard; Chang, Lang-Ping
1998-01-01
A Kalman filter system designed for the assimilation of limb-sounding observations of stratospheric chemical tracers, which has four tunable covariance parameters, was developed in Part I (Menard et al. 1998). The assimilation results of CH4 observations from the Cryogenic Limb Array Etalon Sounder instrument (CLAES) and the Halogen Occultation Experiment instrument (HALOE) on board the Upper Atmosphere Research Satellite are described in this paper. A robust χ² criterion, which provides a statistical validation of the forecast and observational error covariances, was used to estimate the tunable variance parameters of the system. In particular, an estimate of the model error variance was obtained. The effect of model error on the forecast error variance became critical after only three days of assimilation of CLAES observations, although it took 14 days of forecast to double the initial error variance. We further found that the model error due to numerical discretization, as arising in the standard Kalman filter algorithm, is comparable in size to the physical model error due to wind and transport modeling errors together. Separate assimilations of CLAES and HALOE observations were compared to validate the state estimate away from the observed locations. A wave-breaking event that took place several thousands of kilometers away from the HALOE observation locations was well captured by the Kalman filter due to highly anisotropic forecast error correlations. The forecast error correlation in the assimilation of the CLAES observations was found to have a structure similar to that in pure forecast mode except for smaller length scales. Finally, we have conducted an analysis of the variance and correlation dynamics to determine their relative importance in chemical tracer assimilation problems. Results show that the optimality of a tracer assimilation system depends, for the most part, on having flow-dependent error correlation rather than on evolving the error variance.
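The kind of χ² consistency check referred to above can be sketched with the normalized innovation statistic, which should average to the number of observations when the assumed covariances are consistent. The toy setup below (constant innovation covariance, no state dynamics) is purely illustrative, not the CLAES/HALOE assimilation system.

```python
import numpy as np

rng = np.random.default_rng(0)

def chi2_statistic(innovations, S):
    # innovations: (T, m) array; S: (m, m) assumed innovation covariance
    Sinv = np.linalg.inv(S)
    return np.array([d @ Sinv @ d for d in innovations])

# simulate innovations with true covariance S_true, then test two assumed covariances
m, T = 3, 2000
S_true = np.diag([1.0, 2.0, 0.5])
d = rng.multivariate_normal(np.zeros(m), S_true, size=T)

for label, S_assumed in [("consistent covariances", S_true),
                         ("model error underestimated", 0.5 * S_true)]:
    chi2 = chi2_statistic(d, S_assumed)
    print(label, "-> mean chi2 =", round(chi2.mean(), 2), "(expected about", m, ")")
```

A mean statistic well above the number of observations signals that the forecast or observation error variances are set too small, which is how the tunable parameters are adjusted.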
Cullings, H M; Grant, E J; Egbert, S D; Watanabe, T; Oda, T; Nakamura, F; Yamashita, T; Fuchi, H; Funamoto, S; Marumo, K; Sakata, R; Kodama, Y; Ozasa, K; Kodama, K
2017-01-01
Individual dose estimates calculated by Dosimetry System 2002 (DS02) for the Life Span Study (LSS) of atomic bomb survivors are based on input data that specify location and shielding at the time of the bombing (ATB). A multi-year effort to improve information on survivors' locations ATB has recently been completed, along with comprehensive improvements in their terrain shielding input data and several improvements to computational algorithms used in combination with DS02 at RERF. Improvements began with a thorough review and prioritization of original questionnaire data on location and shielding that were taken from survivors or their proxies in the period 1949-1963. Related source documents varied in level of detail, from relatively simple lists to carefully-constructed technical drawings of structural and other shielding and surrounding neighborhoods. Systematic errors were reduced in this work by restoring the original precision of map coordinates that had been truncated due to limitations in early data processing equipment and by correcting distortions in the old (WWII-era) maps originally used to specify survivors' positions, among other improvements. Distortion errors were corrected by aligning the old maps and neighborhood drawings to orthophotographic mosaics of the cities that were newly constructed from pre-bombing aerial photographs. Random errors that were reduced included simple transcription errors and mistakes in identifying survivors' locations on the old maps. Terrain shielding input data that had been originally estimated for limited groups of survivors using older methods and data sources were completely re-estimated for all survivors using new digital terrain elevation data. Improvements to algorithms included a fix to an error in the DS02 code for coupling house and terrain shielding, a correction for elevation at the survivor's location in calculating angles to the horizon used for terrain shielding input, an improved method for truncating high dose estimates to 4 Gy to reduce the effect of dose error, and improved methods for calculating averaged shielding transmission factors that are used to calculate doses for survivors without detailed shielding input data. Input data changes are summarized and described here in some detail, along with the resulting changes in dose estimates and a simple description of changes in risk estimates for solid cancer mortality. This and future RERF publications will refer to the new dose estimates described herein as "DS02R1 doses."
Medeiros, Stephen; Hagen, Scott; Weishampel, John; ...
2015-03-25
Digital elevation models (DEMs) derived from airborne lidar are traditionally unreliable in coastal salt marshes due to the inability of the laser to penetrate the dense grasses and reach the underlying soil. To that end, we present a novel processing methodology that uses ASTER Band 2 (visible red), an interferometric SAR (IfSAR) digital surface model, and lidar-derived canopy height to classify biomass density using both a three-class scheme (high, medium and low) and a two-class scheme (high and low). Elevation adjustments associated with these classes using both median and quartile approaches were applied to adjust lidar-derived elevation values closer to true bare-earth elevation. The performance of the method was tested on 229 elevation points in the lower Apalachicola River Marsh. The two-class quartile-based adjusted DEM produced the best results, reducing the RMS error in elevation from 0.65 m to 0.40 m, a 38% improvement. The raw mean errors for the lidar DEM and the adjusted DEM were 0.61 ± 0.24 m and 0.32 ± 0.24 m, respectively, thereby reducing the high bias by approximately 49%.
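The class-based elevation adjustment can be sketched as subtracting a per-class offset, taken from the class-wise error distribution, from the lidar elevations and then recomputing the RMS error. The synthetic biases, noise level, and the choice of the 25th percentile below are placeholders, not the Apalachicola values.

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

n = 229
truth = rng.uniform(0.0, 1.5, n)                  # surveyed bare-earth elevations (m)
density = rng.choice(["high", "low"], n)          # two-class biomass density
bias = np.where(density == "high", 0.9, 0.3)      # vegetation-induced lidar bias (m)
lidar = truth + bias + rng.normal(0.0, 0.25, n)

adjusted = lidar.copy()
for cls in ("high", "low"):
    mask = density == cls
    # per-class offset from the class-wise error distribution (25th percentile here)
    adjusted[mask] -= np.percentile(lidar[mask] - truth[mask], 25)

print("raw lidar RMSE     :", round(rmse(lidar, truth), 3))
print("class-adjusted RMSE:", round(rmse(adjusted, truth), 3))
```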
Banach, Marzena; Wasilewska, Agnieszka; Dlugosz, Rafal; Pauk, Jolanta
2018-05-18
Due to the problem of aging societies, there is a need for smart buildings to monitor and support people with various disabilities, including rheumatoid arthritis. The aim of this paper is to elaborate on novel techniques for wireless motion capture systems for the monitoring and rehabilitation of disabled people for application in smart buildings. The proposed techniques are based on cross-verification of distance measurements between markers and transponders in an environment with highly variable parameters. For their verification, algorithms that enable comprehensive investigation of a system with different numbers of transponders and varying ambient parameters (temperature and noise) were developed. To estimate the real positions of the markers, various linear and nonlinear filters were used. Several thousand tests were carried out for various system parameters and different marker locations. The results show that localization error may be reduced by as much as 90%. It was observed that repetition of measurement reduces localization error by as much as one order of magnitude. The proposed system, based on wireless techniques, offers high commercial potential. However, it requires extensive cooperation between teams, including hardware and software design, system modelling, and architectural design.
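The observation that repeating measurements reduces localization error follows from averaging the range readings before solving for position: the noise on the averaged ranges shrinks roughly with the square root of the number of repetitions. The sketch below uses a toy 2-D trilateration with placeholder anchor positions and noise, not the paper's transponder layout or filters.

```python
import numpy as np

rng = np.random.default_rng(0)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])   # transponders (m)
marker = np.array([3.0, 4.0])
true_d = np.linalg.norm(anchors - marker, axis=1)

def locate(d):
    # least-squares trilateration, linearized against the first anchor
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

for repeats in (1, 10):
    errs = []
    for _ in range(2000):
        d = np.mean(true_d + rng.normal(0.0, 0.3, (repeats, 4)), axis=0)  # averaged ranges
        errs.append(np.linalg.norm(locate(d) - marker))
    print(repeats, "repetition(s): mean localization error =", round(np.mean(errs), 3), "m")
```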
Filling schemes at submicron scale: Development of submicron sized plasmonic colour filters
Rajasekharan, Ranjith; Balaur, Eugeniu; Minovich, Alexander; Collins, Sean; James, Timothy D.; Djalalian-Assl, Amir; Ganesan, Kumaravelu; Tomljenovic-Hanic, Snjezana; Kandasamy, Sasikaran; Skafidas, Efstratios; Neshev, Dragomir N.; Mulvaney, Paul; Roberts, Ann; Prawer, Steven
2014-01-01
The pixel size imposes a fundamental limit on the amount of information that can be displayed or recorded on a sensor. Thus, there is strong motivation to reduce the pixel size down to the nanometre scale. Nanometre colour pixels cannot be fabricated by simply downscaling current pixels due to colour cross-talk and diffraction caused by dyes or pigments used as colour filters. Colour filters based on plasmonic effects can overcome these difficulties. Although different plasmonic colour filters have been demonstrated at the micron scale, there have been no attempts so far to reduce the filter size to the submicron scale. Here, we present for the first time a submicron plasmonic colour filter design together with a new challenge - pixel boundary errors at the submicron scale. We present simple but powerful filling schemes to produce submicron colour filters, which are free from pixel boundary errors and colour cross-talk, are polarization independent and angle insensitive, and based on LCD compatible aluminium technology. These results lay the basis for the development of submicron pixels in displays, RGB-spatial light modulators, liquid crystal over silicon, Google glasses and pico-projectors. PMID:25242695
Niu, Jie; Yang, Qianqian; Wang, Xiaoyun; Song, Rong
2017-01-01
Robot-aided rehabilitation has become an important technology to restore and reinforce motor functions of patients with extremity impairment, but it can be extremely challenging to achieve satisfactory tracking performance due to uncertainties and disturbances during rehabilitation training. In this paper, a wire-driven rehabilitation robot that can work over a three-dimensional space is designed for upper-limb rehabilitation, and sliding mode control with a nonlinear disturbance observer is designed for the robot to deal with the problem of unpredictable disturbances during robot-assisted training. Then, simulation and experiments of trajectory tracking are carried out to evaluate the performance of the system; the position errors and the output forces of the designed control scheme are compared with those of the traditional sliding mode control (SMC) scheme. The results show that the designed control scheme can effectively reduce the tracking errors and chattering of the output forces as compared with the traditional SMC scheme, which indicates that the nonlinear disturbance observer can reduce the effect of unpredictable disturbances. The designed control scheme for the wire-driven rehabilitation robot has potential to assist patients with stroke in performing repetitive rehabilitation training.
NASA Astrophysics Data System (ADS)
Tian, Lizhi; Xiong, Zhenhua; Wu, Jianhua; Ding, Han
2017-05-01
Feedforward-feedback control is widely used in motion control of piezoactuator systems. Due to the phase lag caused by incomplete dynamics compensation, the performance of the composite controller is greatly limited at high frequency. This paper proposes a new rate-dependent model to improve the high-frequency tracking performance by reducing dynamics compensation error. The rate-dependent model is designed as a function of the input and the input variation rate to describe the input-output relationship of the residual system dynamics, which mainly manifests as phase lag over a wide frequency band. Then the direct inversion of the proposed rate-dependent model is used to compensate for the residual system dynamics. Using the proposed rate-dependent model as the feedforward term, the open-loop performance can be improved significantly at medium-to-high frequencies. Then, combined with the feedback controller, the composite controller can provide enhanced closed-loop performance from low frequency to high frequency. At the frequency of 1 Hz, the proposed controller presents the same performance as previous methods. However, at the frequency of 900 Hz, the tracking error is reduced to 30.7% of that of the decoupled approach.
Ozone Climatological Profiles for Version 8 TOMS and SBUV Retrievals
NASA Technical Reports Server (NTRS)
McPeters, R. D.; Logan, J. A.; Labow, G. J.
2003-01-01
A new altitude-dependent ozone climatology has been produced for use with the latest Total Ozone Mapping Spectrometer (TOMS) and Solar Backscatter Ultraviolet (SBUV) retrieval algorithms. The climatology consists of monthly average profiles for ten-degree latitude zones covering from 0 to 60 km. The climatology was formed by combining data from SAGE II (1988 to 2000) and MLS (1991-1999) with data from balloon sondes (1988-2002). Ozone below about 20 km is based on balloon sondes, while ozone above 30 km is based on satellite measurements. The profiles join smoothly between 20 and 30 km. The ozone climatology in the southern hemisphere and tropics has been greatly enhanced in recent years by the addition of balloon sonde stations under the SHADOZ (Southern Hemisphere Additional Ozonesondes) program. A major source of error in the TOMS and SBUV retrieval of total column ozone comes from their reduced sensitivity to ozone in the lower troposphere. An accurate climatology for the retrieval a priori is important for reducing this error on average. The new climatology follows the seasonal behavior of tropospheric ozone and reflects its hemispheric asymmetry. Comparisons of TOMS version 8 ozone with ground stations show an improvement due in part to the new climatology.
Diagnostic Errors in Ambulatory Care: Dimensions and Preventive Strategies
ERIC Educational Resources Information Center
Singh, Hardeep; Weingart, Saul N.
2009-01-01
Despite an increasing focus on patient safety in ambulatory care, progress in understanding and reducing diagnostic errors in this setting lags behind many other safety concerns such as medication errors. To explore the extent and nature of diagnostic errors in ambulatory care, we identified five dimensions of ambulatory care from which errors may…
[Diagnostic Errors in Medicine].
Buser, Claudia; Bankova, Andriyana
2015-12-09
The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellefson, S; Department of Human Oncology, University of Wisconsin, Madison, WI; Culberson, W
Purpose: Discrepancies in absolute dose values have been detected between the ViewRay treatment planning system and ArcCHECK readings when performing delivery quality assurance on the ViewRay system with the ArcCHECK-MR diode array (SunNuclear Corporation). In this work, we investigate whether these discrepancies are due to errors in the ViewRay planning and/or delivery system or due to errors in the ArcCHECK’s readings. Methods: Gamma analysis was performed on 19 ViewRay patient plans using the ArcCHECK. Frequency analysis on the dose differences was performed. To investigate whether discrepancies were due to measurement or delivery error, 10 diodes in low-gradient dose regions were chosen to compare with ion chamber measurements in a PMMA phantom with the same size and shape as the ArcCHECK, provided by SunNuclear. The diodes chosen all had significant discrepancies in absolute dose values compared to the ViewRay TPS. Absolute doses to PMMA were compared between the ViewRay TPS calculations, ArcCHECK measurements, and measurements in the PMMA phantom. Results: Three of the 19 patient plans had 3%/3mm gamma passing rates less than 95%, and ten of the 19 plans had 2%/2mm passing rates less than 95%. Frequency analysis implied a non-random error process. Out of the 10 diode locations measured, ion chamber measurements were all within 2.2% error relative to the TPS and had a mean error of 1.2%. ArcCHECK measurements ranged from 4.5% to over 15% error relative to the TPS and had a mean error of 8.0%. Conclusion: The ArcCHECK performs well for quality assurance on the ViewRay under most circumstances. However, under certain conditions the absolute dose readings are significantly higher compared to the planned doses. As the ion chamber measurements consistently agree with the TPS, it can be concluded that the discrepancies are due to ArcCHECK measurement error and not TPS or delivery system error. This work was funded by the Bhudatt Paliwal Professorship and the University of Wisconsin Medical Radiation Research Center.
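The 3%/3mm and 2%/2mm figures quoted above are gamma-index pass rates. The sketch below shows a generic global gamma computation on synthetic 1-D dose profiles to illustrate the metric; actual delivery QA is performed by the vendor software on the full diode array, so none of the profiles or tolerances here come from the study.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, x, dose_tol=0.03, dist_tol=3.0):
    # global gamma: dose criterion as a fraction of the reference maximum, distance in mm
    d_norm = dose_tol * dose_ref.max()
    gammas = []
    for xr, dr in zip(x, dose_ref):
        term = ((dose_eval - dr) / d_norm) ** 2 + ((x - xr) / dist_tol) ** 2
        gammas.append(np.sqrt(term.min()))        # best agreement over all evaluated points
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

x = np.linspace(-50, 50, 201)                     # positions (mm)
planned = np.exp(-(x / 30.0) ** 2)                # synthetic planned profile
measured = 1.02 * np.exp(-((x - 1.0) / 30.0) ** 2)  # 2% hotter and shifted by 1 mm
print("3%/3mm gamma pass rate:", gamma_pass_rate(planned, measured, x), "%")
```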
A novel diagnosis method for a Hall plates-based rotary encoder with a magnetic concentrator.
Meng, Bumin; Wang, Yaonan; Sun, Wei; Yuan, Xiaofang
2014-07-31
In the last few years, rotary encoders based on two-dimensional complementary metal-oxide-semiconductor (CMOS) Hall plates with a magnetic concentrator have been developed to measure absolute angle without contact. There are various error factors influencing the measuring accuracy, which are difficult to locate after assembly of the encoder. In this paper, a model-based rapid diagnosis method is presented. Based on an analysis of the error mechanism, an error model is built to compare the minimum residual angle error and to quantify the error factors. Additionally, a modified particle swarm optimization (PSO) algorithm is used to reduce the amount of computation. The simulation and experimental results show that this diagnosis method is feasible for quantifying the causes of the error and significantly reduces the number of iterations.
Demonstration of spectral calibration for stellar interferometry
NASA Technical Reports Server (NTRS)
Demers, Richard T.; An, Xin; Tang, Hong; Rud, Mayer; Wayne, Leonard; Kissil, Andrew; Kwack, Eug-Yun
2006-01-01
A breadboard is under development to demonstrate the calibration of spectral errors in microarcsecond stellar interferometers. Analysis shows that thermally and mechanically stable hardware in addition to careful optical design can reduce the wavelength dependent error to tens of nanometers. Calibration of the hardware can further reduce the error to the level of picometers. The results of thermal, mechanical and optical analysis supporting the breadboard design will be shown.
Reducing medication errors in critical care: a multimodal approach
Kruer, Rachel M; Jarrell, Andrew S; Latif, Asad
2014-01-01
The Institute of Medicine has reported that medication errors are the single most common type of error in health care, representing 19% of all adverse events, while accounting for over 7,000 deaths annually. The frequency of medication errors in adult intensive care units can be as high as 947 per 1,000 patient-days, with a median of 105.9 per 1,000 patient-days. The formulation of drugs is a potential contributor to medication errors. Challenges related to drug formulation are specific to the various routes of medication administration, though errors associated with medication appearance and labeling occur among all drug formulations and routes of administration. Addressing these multifaceted challenges requires a multimodal approach. Changes in technology, training, systems, and safety culture are all strategies to potentially reduce medication errors related to drug formulation in the intensive care unit. PMID:25210478
Liu, Xiaofeng Steven
2011-05-01
The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to Hotelling's T². Using this Hotelling's T² statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.
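The inflation effect can be reproduced with a small simulation: when the covariate means differ between groups, the collinearity between the treatment indicator and the covariate inflates the standard error of the adjusted mean difference, even though the covariate still explains part of the outcome variance. All settings in the sketch below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_standard_errors(n_per_group, cov_mean_shift, rho=0.3, n_sim=2000):
    se_unadj, se_adj = [], []
    for _ in range(n_sim):
        x1 = rng.normal(0.0, 1.0, n_per_group)
        x2 = rng.normal(cov_mean_shift, 1.0, n_per_group)   # group 2 covariate mean differs
        y1 = rho * x1 + rng.normal(0.0, 1.0, n_per_group)
        y2 = 0.5 + rho * x2 + rng.normal(0.0, 1.0, n_per_group)
        # unadjusted SE of the mean difference
        se_unadj.append(np.sqrt(np.var(y1, ddof=1) / n_per_group
                                + np.var(y2, ddof=1) / n_per_group))
        # ANCOVA: regress y on intercept, treatment indicator, covariate
        y = np.concatenate([y1, y2])
        g = np.concatenate([np.zeros(n_per_group), np.ones(n_per_group)])
        x = np.concatenate([x1, x2])
        X = np.column_stack([np.ones_like(y), g, x])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        sigma2 = np.sum((y - X @ beta) ** 2) / (len(y) - 3)
        se_adj.append(np.sqrt((sigma2 * np.linalg.inv(X.T @ X))[1, 1]))
    return np.mean(se_unadj), np.mean(se_adj)

for shift in (0.0, 1.5):
    print("covariate mean shift", shift, "-> (unadjusted SE, adjusted SE):",
          mean_standard_errors(20, shift))
```

With balanced covariate means the adjusted standard error is smaller; with a large covariate mean difference it can exceed the unadjusted one, which is the phenomenon the abstract describes.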
General linear codes for fault-tolerant matrix operations on processor arrays
NASA Technical Reports Server (NTRS)
Nair, V. S. S.; Abraham, J. A.
1988-01-01
Various checksum codes have been suggested for fault-tolerant matrix computations on processor arrays. Use of these codes is limited due to potential roundoff and overflow errors. Numerical errors may also be misconstrued as errors due to physical faults in the system. In this paper, a set of linear codes is identified which can be used for fault-tolerant matrix operations such as matrix addition, multiplication, transposition, and LU-decomposition, with minimum numerical error. Encoding schemes are given for some of the example codes which fall under the general set of codes. With the help of experiments, a rule of thumb for the selection of a particular code for a given application is derived.
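The simplest member of this family is the row/column-checksum scheme for matrix multiplication, sketched below with an all-ones checksum vector and a single injected fault. The codes discussed in the paper generalize this basic idea to control the numerical error of the encoding; the fault injection here is only illustrative.

```python
import numpy as np

def with_row_checksum(A):
    return np.vstack([A, A.sum(axis=0)])                # append column sums as an extra row

def with_col_checksum(B):
    return np.hstack([B, B.sum(axis=1, keepdims=True)])  # append row sums as an extra column

def check_product(C_full):
    # verify that the checksum row/column of the product still match the data block
    C = C_full[:-1, :-1]
    row_ok = np.allclose(C_full[-1, :-1], C.sum(axis=0))
    col_ok = np.allclose(C_full[:-1, -1], C.sum(axis=1))
    return row_ok, col_ok

rng = np.random.default_rng(0)
A = rng.integers(0, 10, (4, 4)).astype(float)
B = rng.integers(0, 10, (4, 4)).astype(float)

C_full = with_row_checksum(A) @ with_col_checksum(B)     # product carries both checksums
print(check_product(C_full))                             # (True, True): no fault

C_full[2, 1] += 5.0                                      # inject a single-element fault
print(check_product(C_full))                             # checksums disagree -> fault detected
```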
NASA Technical Reports Server (NTRS)
Greatorex, Scott (Editor); Beckman, Mark
1996-01-01
Several future missions, and some current ones, use an on-board computer (OBC) force model that is very limited. The OBC geopotential force model typically includes only the J(2), J(3), J(4), C(2,2) and S(2,2) terms to model non-spherical Earth gravitational effects. The Tropical Rainfall Measuring Mission (TRMM), Wide-field Infrared Explorer (WIRE), Transition Region and Coronal Explorer (TRACE), Submillimeter Wave Astronomy Satellite (SWAS), and X-ray Timing Explorer (XTE) all plan to use this geopotential force model on board. The Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) is already flying this geopotential force model. Past analysis has shown that one of the leading sources of error in the OBC-propagated ephemeris is the omission of the higher-order geopotential terms. However, these same analyses have shown a wide range of accuracies for the OBC ephemerides. Analysis performed using EUVE state vectors showed that the EUVE four-day OBC-propagated ephemerides varied in accuracy from 200 m to 45 km depending on the initial vector used to start the propagation. The vectors used in the study were from a single EUVE orbit at one-minute intervals in the ephemeris. Since each vector propagated practically the same path as the others, the differences seen had to be due to differences in the initial state vector only. An algorithm was developed that optimizes the epoch of the uploaded state vector. Proper epoch selection can reduce the previous errors of anywhere from 200 m to 45 km to generally less than 1 km over four days of propagation. This would enable flight projects to minimize state vector uploads to the spacecraft. Additionally, this method is superior to other methods in that no additional orbit estimates need to be performed; the definitive ephemeris generated on the ground can be used as long as the proper epoch is chosen. The algorithm can easily be coded in software that picks the epoch, within a specified time range, that minimizes the OBC propagation error. This technique should greatly improve the accuracy of on-board OBC propagation for future spacecraft such as TRMM, WIRE, SWAS, and XTE without increasing complexity in the ground processing.
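A minimal sketch of the epoch-selection idea described above (the `propagate_obc` propagator and the dictionary-style definitive ephemeris are hypothetical interfaces, not from the report): each candidate epoch's state vector is propagated forward with the truncated on-board force model, scored against the definitive ground ephemeris, and the epoch with the smallest worst-case error is the one to upload.

```python
import numpy as np

# Sketch only: pick the upload epoch whose OBC-propagated trajectory stays
# closest to the definitive ground ephemeris over the propagation span.
# `definitive` maps epoch (s) -> state vector; `propagate_obc(state, t0, t1)`
# returns the OBC-model state at t1. Both are assumed interfaces.
def select_upload_epoch(definitive, propagate_obc, candidate_epochs, span_s=4 * 86400):
    times = sorted(definitive)                       # evaluation times from the definitive ephemeris
    best_epoch, best_err = None, np.inf
    for t0 in candidate_epochs:
        state0 = definitive[t0]
        errs = [np.linalg.norm(propagate_obc(state0, t0, t)[:3] - definitive[t][:3])
                for t in times if t0 <= t <= t0 + span_s]
        worst = max(errs) if errs else np.inf
        if worst < best_err:
            best_epoch, best_err = t0, worst         # smallest worst-case position error wins
    return best_epoch, best_err
```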
Bias in the Wagner-Nelson estimate of the fraction of drug absorbed.
Wang, Yibin; Nedelman, Jerry
2002-04-01
To examine and quantify bias in the Wagner-Nelson estimate of the fraction of drug absorbed resulting from the estimation error of the elimination rate constant (k), measurement error of the drug concentration, and the truncation error in the area under the curve (AUC). Bias in the Wagner-Nelson estimate was derived as a function of post-dosing time (t), k, the ratio of the absorption rate constant to k (r), and the coefficient of variation for estimates of k (CVk) or the CV% for the observed concentration, by assuming a one-compartment model and using an independent estimate of k. The derived functions were used to evaluate the bias with r = 0.5, 3, or 6; k = 0.1 or 0.2; CVk = 0.2 or 0.4; and CV% = 0.2 or 0.4; for t = 0 to 30 or 60. Estimation error of k resulted in an upward bias in the Wagner-Nelson estimate that could lead to the estimate of the fraction absorbed being greater than unity. The bias resulting from the estimation error of k inflates the fraction-of-absorption vs. time profiles mainly in the early post-dosing period. The magnitude of the bias in the Wagner-Nelson estimate resulting from estimation error of k was mainly determined by CVk. The bias in the Wagner-Nelson estimate resulting from estimation error in k can be dramatically reduced by use of the mean of several independent estimates of k, as in studies for development of an in vivo-in vitro correlation. The truncation error in the area under the curve can introduce a negative bias in the Wagner-Nelson estimate. This can partially offset the bias resulting from estimation error of k in the early post-dosing period. Measurement error of concentration does not introduce bias in the Wagner-Nelson estimate. Estimation error of k results in an upward bias in the Wagner-Nelson estimate, mainly in the early drug absorption phase. The truncation error in AUC can result in a downward bias, which may partially offset the upward bias due to estimation error of k in the early absorption phase. Measurement error of concentration does not introduce bias. The joint effect of estimation error of k and truncation error in AUC can result in a non-monotonic fraction-of-drug-absorbed vs. time profile. However, only estimation error of k can lead to a Wagner-Nelson estimate of the fraction of drug absorbed greater than unity.
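As background for the bias mechanism, the Wagner-Nelson fraction absorbed for a one-compartment model is F(t) = [C(t) + k·AUC(0-t)] / [k·AUC(0-∞)]. The following sketch (parameter values and the 30% error in k are illustrative assumptions, not the paper's cases) computes this estimate from a simulated concentration profile and shows how one direction of error in k, here an underestimate, inflates the profile during the absorption phase and can push it above unity.

```python
import numpy as np

# Illustrative sketch of the Wagner-Nelson estimate F(t) = (C + k*AUC_t)/(k*AUC_inf)
# for a one-compartment model with first-order absorption. Parameter values
# (ka, k, the 30% error in k) are assumptions for illustration only.
ka, k = 0.6, 0.1
t = np.linspace(0, 30, 301)
conc = ka / (ka - k) * (np.exp(-k * t) - np.exp(-ka * t))   # unit-dose concentration profile

def wagner_nelson(t, conc, k_est):
    # trapezoidal AUC to each time point, plus a C(T)/k tail for AUC to infinity
    auc_t = np.concatenate([[0.0], np.cumsum(np.diff(t) * (conc[1:] + conc[:-1]) / 2)])
    auc_inf = auc_t[-1] + conc[-1] / k_est
    return (conc + k_est * auc_t) / (k_est * auc_inf)

f_correct = wagner_nelson(t, conc, k)          # tracks the true fraction 1 - exp(-ka*t)
f_biased = wagner_nelson(t, conc, 0.7 * k)     # 30% underestimate of k
print(f_correct.max(), f_biased.max())         # the biased profile exceeds unity during absorption
```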
ERIC Educational Resources Information Center
Pouplier, Marianne; Marin, Stefania; Waltl, Susanne
2014-01-01
Purpose: Phonetic accommodation in speech errors has traditionally been used to identify the processing level at which an error has occurred. Recent studies have challenged the view that noncanonical productions may solely be due to phonetic, not phonological, processing irregularities, as previously assumed. The authors of the present study…
RMP Enhanced Transport and Rotation Screening in DIII-D Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Izzo, V; Joseph, I; Moyer, R
The application of resonant magnetic perturbations (RMP) to DIII-D plasmas at low collisionality has achieved ELM suppression, primarily due to a pedestal density reduction. The mechanism of the enhanced particle transport is investigated in 3D MHD simulations with the NIMROD code. The simulations apply realistic vacuum fields from the DIII-D I-coils and C-coils, together with measured intrinsic error fields, to an EFIT-reconstructed DIII-D equilibrium, and allow the plasma to respond to the applied fields while the fields are held fixed at the boundary, which lies in the vacuum region. A non-rotating plasma amplifies the resonant components of the applied fields by factors of 2-5. The poloidal velocity forms E x B convection cells crossing the separatrix, which push particles into the vacuum region and reduce the pedestal density. Low toroidal rotation at the separatrix reduces the resonant field amplitudes, but does not strongly affect the particle pumpout. At higher separatrix rotation, the poloidal E x B velocity is reduced by half, while the enhanced particle transport is entirely eliminated. A high-collisionality DIII-D equilibrium with an experimentally measured rotation profile serves as the starting point for a simulation with odd-parity I-coil fields that can ultimately be compared with experimental results. All of the NIMROD results are compared with analytic error field theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albert, J; Labarbe, R; Sterpin, E
2016-06-15
Purpose: To understand the extent to which prompt gamma camera measurements can be used to predict the residual proton range due to setup errors and errors in the calibration curve. Methods: We generated ten variations on a default calibration curve (CC) and ten corresponding range maps (RM). Starting with the default RM, we chose a square array of N beamlets, which were then rotated by a random angle θ and shifted by a random vector s. We added a 5% distal Gaussian noise to each beamlet in order to introduce the discrepancies that exist between the ranges predicted from the prompt gamma measurements and those simulated with Monte Carlo algorithms. For each RM, s and θ, along with an offset u in the CC, were optimized using a simple Euclidean distance between the default ranges and the ranges produced by the given RM. Results: The application of our method led to a maximal overrange of 2.0 mm and underrange of 0.6 mm on average. When s, θ, and u were ignored, these values were larger: 2.1 mm and 4.3 mm. In order to quantify the need for setup error corrections, we also performed computations in which u was corrected for but s and θ were not, yielding 3.2 mm and 3.2 mm. The average computation time for 170 beamlets was 65 seconds. Conclusion: These results emphasize the necessity of correcting for setup errors and errors in the calibration curve. The simplicity and speed of the method make it a good candidate for implementation as a tool for in-room adaptive therapy. This work also demonstrates that prompt gamma range measurements can indeed be useful in the effort to reduce range errors. Given these results, and barring further refinements, this approach is a promising step towards adaptive proton radiotherapy.
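A minimal sketch of the fitting step described above (the `range_map` callable, the beamlet grid, and the optimizer settings are hypothetical stand-ins, not the authors' implementation): the setup shift s, rotation θ, and calibration-curve offset u are recovered by minimizing the Euclidean distance between the reference beamlet ranges and the ranges predicted by the perturbed range map.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch only: recover the setup shift (sx, sy), rotation theta, and
# calibration-curve offset u by least-squares matching of beamlet ranges.
# `range_map(points)` returns the predicted range at each (x, y) beamlet
# position and is an assumed interface.
def fit_setup_and_offset(range_map, beamlet_xy, reference_ranges, x0=(0.0, 0.0, 0.0, 0.0)):
    def predicted(params):
        sx, sy, theta, u = params
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s], [s, c]])
        moved = beamlet_xy @ rot.T + np.array([sx, sy])   # rotate, then shift the beamlet grid
        return range_map(moved) + u                       # u models the calibration-curve error
    cost = lambda p: np.sum((predicted(p) - reference_ranges) ** 2)  # squared Euclidean distance
    return minimize(cost, x0, method="Nelder-Mead").x
```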
NASA Astrophysics Data System (ADS)
Park, M.; Stenstrom, M. K.
2004-12-01
Recognizing urban information from satellite imagery is problematic due to the diverse features and dynamic changes of urban land use. The use of Landsat imagery for urban land use classification involves inherent uncertainty due to its spatial resolution and the low separability among land uses. To address this uncertainty, we investigated the performance of Bayesian networks for classifying urban land use, since Bayesian networks provide a quantitative way of handling uncertainty and have been used successfully in many areas. In this study, we developed optimized networks for urban land use classification from Landsat ETM+ images of the Marina del Rey area based on USGS land cover/use classification level III. The networks started from a tree structure based on mutual information between variables, and links were then added to improve accuracy; a sketch of this tree-building step follows this abstract. This methodology offers several advantages: (1) The network structure shows the dependency relationships between variables. The class node value can be predicted even when particular band information is missing due to sensor system error; the missing information can be inferred from other dependent bands. (2) The network structure provides information on which variables are important for the classification, which is not available from conventional classification methods such as neural networks and maximum likelihood classification. In our case, for example, bands 1, 5 and 6 are the most important inputs in determining the land use of each pixel. (3) The networks can be reduced to those input variables important for classification, keeping the problem small without considering all possible variables. We also examined the effect of incorporating ancillary data: geospatial information such as the X and Y coordinate values of each pixel and DEM data, and vegetation indices such as NDVI and the Tasseled Cap transformation. The results showed that the locational information improved overall accuracy (81%) and the kappa coefficient (76%) and lowered the omission and commission errors compared with using only spectral data (accuracy 71%, kappa coefficient 62%). Incorporating DEM data did not significantly improve overall accuracy (74%) or the kappa coefficient (66%) but lowered the omission and commission errors. Incorporating NDVI did not substantially improve the overall accuracy (72%) or kappa coefficient (65%). Including the Tasseled Cap transformation reduced the accuracy (accuracy 70%, kappa 61%). Therefore, the additional information from the DEM and vegetation indices was not as useful as the locational ancillary data.
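The tree-building step referenced above can be illustrated with a short sketch (the input `X`, a pixels-by-variables array of discretized band and class values, is hypothetical; this is a generic Chow-Liu-style construction, not the authors' code): pairwise mutual information is computed between all variables, and a maximum-weight spanning tree over those scores gives the initial network skeleton, to which further links would be added where they improve accuracy.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

# Sketch only: build the tree-structured skeleton from pairwise mutual
# information between discretized variables (class label plus spectral bands).
def mutual_info_matrix(X):
    n_vars = X.shape[1]
    mi = np.zeros((n_vars, n_vars))
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            mi[i, j] = mi[j, i] = mutual_info_score(X[:, i], X[:, j])
    return mi

def max_spanning_tree(weights):
    # Prim's algorithm for a maximum-weight spanning tree over a dense matrix
    n = weights.shape[0]
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = max(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: weights[e])
        edges.append((i, j))
        in_tree.add(j)
    return edges                                  # undirected edges of the network skeleton

# Usage (hypothetical): X = np.column_stack([class_labels, band1, band2, ...])
# skeleton = max_spanning_tree(mutual_info_matrix(X))
```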