NASA Technical Reports Server (NTRS)
Ulvestad, J. S.
1989-01-01
Errors from a number of sources in astrometric very long baseline interferometry (VLBI) have been reduced in recent years through a variety of methods of calibration and modeling. Such reductions have led to a situation in which the extended structure of the natural radio sources used in VLBI is a significant error source in the effort to improve the accuracy of the radio reference frame. In the past, work has been done on individual radio sources to establish the magnitude of the errors caused by their particular structures. The results of calculations on 26 radio sources are reported in which an effort is made to determine the typical delay and delay-rate errors for a number of sources having different types of structure. It is found that for single observations of the types of radio sources present in astrometric catalogs, group-delay and phase-delay scatter in the 50 to 100 psec range due to source structure can be expected at 8.4 GHz on the intercontinental baselines available in the Deep Space Network (DSN). Delay-rate scatter of approx. 5 x 10^-15 sec/sec (or approx. 0.002 mm/sec) is also expected. If such errors mapped directly into source position errors, they would correspond to position uncertainties of approx. 2 to 5 nrad, similar to the best position determinations in the current JPL VLBI catalog. With the advent of wider bandwidth VLBI systems on the large DSN antennas, the system noise will be low enough so that the structure-induced errors will be a significant part of the error budget. Several possibilities for reducing the structure errors are discussed briefly, although it is likely that considerable effort will have to be devoted to the structure problem in order to reduce the typical error by a factor of two or more.
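As a plausibility check on the figures above, the single-baseline relation Δθ ≈ cΔτ/B converts delay scatter into an angular error. The sketch below is illustrative only: the 75 psec scatter and 8000 km baseline are assumed stand-ins for typical DSN values, not numbers taken from the paper.

```python
# Convert group-delay scatter on one baseline to an angular position error
# via dtheta ~ c * dtau / B (small-angle approximation).
C = 299_792_458.0  # speed of light, m/s

def delay_to_angle_nrad(delay_scatter_s: float, baseline_m: float) -> float:
    """Angular error (nrad) implied by a delay error on a single baseline."""
    return C * delay_scatter_s / baseline_m * 1e9

# Assumed values: 75 psec structure scatter, 8000 km intercontinental baseline.
print(delay_to_angle_nrad(75e-12, 8.0e6))  # ~2.8 nrad, in the 2-5 nrad range
```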
Du, Zhongzhou; Su, Rijian; Liu, Wenzhong; Huang, Zhixing
2015-01-01
The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through analysis of the variations of the system error caused by the significant error sources as the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and the AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB and other system errors were not considered. The temperature error was also below 0.1 K in experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method. PMID:25875188
The impact of work-related stress on medication errors in Eastern Region Saudi Arabia.
Salam, Abdul; Segal, David M; Abu-Helalah, Munir Ahmad; Gutierrez, Mary Lou; Joosub, Imran; Ahmed, Wasim; Bibi, Rubina; Clarke, Elizabeth; Qarni, Ali Ahmed Al
2018-05-07
To examine the relationship between overall level and source-specific work-related stressors and the medication error rate. A cross-sectional study examined the relationship between overall level of stress, 25 source-specific work-related stressors, and the medication error rate based on documented incident reports in a Saudi Arabia (SA) hospital, using secondary databases. King Abdulaziz Hospital in Al-Ahsa, Eastern Region, SA. Two hundred and sixty-nine healthcare professionals (HCPs). The odds ratio (OR) and corresponding 95% confidence interval (CI) for HCP-documented incident-report medication errors and self-reported sources of stress from the Job Stress Survey. Multiple logistic regression analysis identified source-specific work-related stressors significantly associated with HCPs who made at least one medication error per month (P < 0.05), including disruption to home life, pressure to meet deadlines, difficulties with colleagues, excessive workload, income over 10 000 riyals, and compulsory night/weekend call duties either some or all of the time. Although not statistically significant, HCPs who reported overall stress were two times more likely to make at least one medication error per month than non-stressed HCPs (OR: 1.95, P = 0.081). This is the first study to use documented incident reports for medication errors rather than self-reports to evaluate the level of stress-related medication errors in SA HCPs. Job demands, such as social stressors (home life disruption, difficulties with colleagues), time pressures, structural determinants (compulsory night/weekend call duties), and higher income, were significantly associated with medication errors, whereas overall stress revealed a 2-fold higher trend.
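For readers unfamiliar with the statistics quoted above, an odds ratio and its confidence interval follow directly from a logistic-regression coefficient and its standard error. The sketch below uses a hypothetical coefficient and SE chosen to reproduce the reported OR of 1.95; they are not data from the study.

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Odds ratio with 95% CI from a logistic-regression coefficient."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical coefficient/SE for "overall stress" (chosen so OR ~ 1.95):
or_, lo, hi = odds_ratio_ci(beta=0.668, se=0.383)
print(f"OR={or_:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")  # CI spans 1.0,
# consistent with the reported lack of statistical significance (P = 0.081).
```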
Quantifying Data Quality for Clinical Trials Using Electronic Data Capture
Nahm, Meredith L.; Pieper, Carl F.; Cunningham, Maureen M.
2008-01-01
Background: Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. Methods and Principal Findings: The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. Conclusions: Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks. PMID:18725958
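An error rate of this kind is simply errors divided by fields audited, scaled to 10,000; an exact Poisson interval conveys its uncertainty. A minimal sketch with a made-up audit size (the paper reports only the rate):

```python
from scipy.stats import chi2

def error_rate_per_10k(errors: int, fields: int, conf: float = 0.95):
    """Error rate per 10,000 fields with an exact Poisson confidence interval."""
    a = 1.0 - conf
    lo = chi2.ppf(a / 2, 2 * errors) / 2 if errors else 0.0
    hi = chi2.ppf(1 - a / 2, 2 * errors + 2) / 2
    scale = 10_000 / fields
    return errors * scale, lo * scale, hi * scale

# Hypothetical audit: 143 errors found in 100,000 inspected fields.
print(error_rate_per_10k(143, 100_000))  # ~14.3 (12.0, 16.9) per 10,000
```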
NASA Astrophysics Data System (ADS)
Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin
2017-10-01
The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. In addition, we adopt this model to carry out simulations of two widely used sources: the weak coherent source (WCS) and the heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. Moreover, when taking source errors and statistical fluctuations into account, the performance of decoy-state QKD using an HSPS suffers less than that using a WCS.
Flight Test Results: CTAS Cruise/Descent Trajectory Prediction Accuracy for En route ATC Advisories
NASA Technical Reports Server (NTRS)
Green, S.; Grace, M.; Williams, D.
1999-01-01
The Center/TRACON Automation System (CTAS), under development at NASA Ames Research Center, is designed to assist controllers with the management and control of air traffic transitioning to/from congested airspace. This paper focuses on the transition from the en route environment to high-density terminal airspace under a time-based arrival-metering constraint. Two flight tests were conducted at the Denver Air Route Traffic Control Center (ARTCC) to study trajectory-prediction accuracy, the key to accurate Decision Support Tool advisories such as conflict detection/resolution and fuel-efficient metering conformance. In collaboration with NASA Langley Research Center, these tests were part of an overall effort to research systems and procedures for the integration of CTAS and flight management systems (FMS). The Langley Transport Systems Research Vehicle Boeing 737 airplane flew a combined total of 58 cruise-arrival trajectory runs while following CTAS clearance advisories. Actual trajectories of the airplane were compared to CTAS and FMS predictions to measure trajectory-prediction accuracy and identify the primary sources of error for both. The research airplane was used to evaluate several levels of cockpit automation ranging from conventional avionics to a performance-based vertical navigation (VNAV) FMS. Trajectory prediction accuracy was analyzed with respect to both ARTCC radar tracking and GPS-based aircraft measurements. This paper presents detailed results describing the trajectory accuracy and error sources. Although differences were found in both accuracy and error sources, CTAS accuracy was comparable to the FMS in terms of both meter-fix arrival-time performance (in support of metering) and 4D-trajectory prediction (key to conflict prediction). Overall arrival time errors (mean plus standard deviation) were measured to be approximately 24 seconds during the first flight test (23 runs) and 15 seconds during the second flight test (25 runs). The major source of error during these tests was found to be the predicted winds aloft used by CTAS. Position and velocity estimates of the airplane provided to CTAS by the ATC Host radar tracker were found to be a relatively insignificant error source for the trajectory conditions evaluated. Airplane performance modeling errors within CTAS were found to not significantly affect arrival time errors when the constrained descent procedures were used. The most significant effect related to the flight guidance was observed to be the cross-track and turn-overshoot errors associated with conventional VOR guidance. Lateral navigation (LNAV) guidance significantly reduced both the cross-track and turn-overshoot error. Pilot procedures and VNAV guidance were found to significantly reduce the vertical profile errors associated with atmospheric and aircraft performance model errors.
Quantifying errors without random sampling.
Phillips, Carl V; LaPole, Luwanna M
2003-06-12
All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
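The Monte Carlo approach the authors describe is straightforward to reproduce: draw each uncertain input from a distribution encoding its systematic uncertainty, push the draws through the calculation, and read off percentiles. The model and numbers below are illustrative assumptions, not the paper's foodborne-illness inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Illustrative incidence model: reported cases scaled by an uncertain
# underreporting multiplier and an uncertain attribution fraction.
cases = rng.normal(1.0e6, 5e4, N)             # reported cases
underreport = rng.triangular(10, 25, 40, N)   # true-to-reported multiplier
attribution = rng.uniform(0.8, 1.0, N)        # fraction correctly attributed

incidence = cases * underreport * attribution
lo, med, hi = np.percentile(incidence, [2.5, 50, 97.5])
print(f"median {med:.3g}, 95% interval ({lo:.3g}, {hi:.3g})")
```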
Source localization (LORETA) of the error-related-negativity (ERN/Ne) and positivity (Pe).
Herrmann, Martin J; Römmler, Josefine; Ehlis, Ann-Christine; Heidrich, Anke; Fallgatter, Andreas J
2004-07-01
We investigated error processing of 39 subjects engaging the Eriksen flanker task. In all 39 subjects a pronounced negative deflection (ERN/Ne) and a later positive component (Pe) were observed after incorrect as compared to correct responses. The neural sources of both components were analyzed using LORETA source localization. For the negative component (ERN/Ne) we found significantly higher brain electrical activity in medial prefrontal areas for incorrect responses, whereas the positive component (Pe) was localized nearby but more rostral within the anterior cingulate cortex (ACC). Thus, different neural generators were found for the ERN/Ne and the Pe, which further supports the notion that both error-related components represent different aspects of error processing.
NASA Technical Reports Server (NTRS)
Thomas, J. B.
1981-01-01
The effects of source structure on radio interferometry measurements were investigated. The brightness distribution measurements for ten extragalactic sources were analyzed. Significant results are reported.
Altimeter error sources at the 10-cm performance level
NASA Technical Reports Server (NTRS)
Martin, C. F.
1977-01-01
Error sources affecting the calibration and operational use of a 10 cm altimeter are examined to determine the magnitudes of current errors and the investigations necessary to reduce them to acceptable bounds. Errors considered include those affecting operational data pre-processing and those affecting altitude bias determination, with error budgets developed for both. The most significant error sources affecting pre-processing are bias calibration, propagation corrections for the ionosphere, and measurement noise. No ionospheric models are currently validated at the required 10-25% accuracy level. The optimum smoothing to reduce the effects of measurement noise is investigated and found to be on the order of one second, based on the TASC model of geoid undulations. The 10 cm calibrations are found to be feasible only through the use of altimeter passes at very high elevation relative to a tracking station that tracks very close to the time of the altimeter pass, such as a high-elevation pass across the island of Bermuda. By far the largest error source, based on the current state of the art, is the location of the island tracking station relative to mean sea level in the surrounding ocean areas.
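When the individual contributions are independent, an error budget of this kind combines in root-sum-square. The component magnitudes below are invented placeholders, not figures from the report:

```python
import math

# Hypothetical altimeter error budget (cm); independent errors add in RSS.
budget_cm = {
    "bias calibration": 5.0,
    "ionospheric propagation correction": 4.0,
    "measurement noise (1 s smoothing)": 3.0,
    "station location vs. mean sea level": 6.0,
}
total = math.sqrt(sum(v ** 2 for v in budget_cm.values()))
print(f"RSS total: {total:.1f} cm")  # ~9.3 cm for these assumed values
```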
NASA Astrophysics Data System (ADS)
Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar
2017-11-01
Estimation of an unknown atmospheric release from a finite set of concentration measurements is an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. The study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and carried out in three steps. First, an estimation of the point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and those predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Source estimation is then carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well-known inversion techniques, renormalization and least-squares. The proposed methodology and inversion techniques are evaluated for a real scenario using concentration measurements from the Idaho diffusion experiment in low-wind stable conditions. With both inversion techniques, a significant improvement is observed in the source estimation after minimizing the representativity errors.
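The least-squares step at the heart of such retrievals can be sketched in a few lines: if A[i, j] is the modeled concentration at receptor i per unit emission from candidate source j (the adjoint functions evaluated at the receptors), the intensities follow from minimizing ||Aq - y||. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(20, 3))   # 20 receptors, 3 candidate sources
q_true = np.array([5.0, 0.0, 0.0])        # only source 0 is emitting
y = A @ q_true + rng.normal(0, 0.05, 20)  # measured concentrations with noise

q_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(q_hat)  # the estimate peaks at the true source
```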
Flight Evaluation of Center-TRACON Automation System Trajectory Prediction Process
NASA Technical Reports Server (NTRS)
Williams, David H.; Green, Steven M.
1998-01-01
Two flight experiments (Phase 1 in October 1992 and Phase 2 in September 1994) were conducted to evaluate the accuracy of the Center-TRACON Automation System (CTAS) trajectory prediction process. The Transport Systems Research Vehicle (TSRV) Boeing 737 based at Langley Research Center flew 57 arrival trajectories that included cruise and descent segments; at the same time, descent clearance advisories from CTAS were followed. Actual trajectories of the airplane were compared with the trajectories predicted by the CTAS trajectory synthesis algorithms and airplane Flight Management System (FMS). Trajectory prediction accuracy was evaluated over several levels of cockpit automation that ranged from a conventional cockpit to performance-based FMS vertical navigation (VNAV). Error sources and their magnitudes were identified and measured from the flight data. The major source of error during these tests was found to be the predicted winds aloft used by CTAS. The most significant effect related to flight guidance was the cross-track and turn-overshoot errors associated with conventional VOR guidance. FMS lateral navigation (LNAV) guidance significantly reduced both the cross-track and turn-overshoot error. Pilot procedures and VNAV guidance were found to significantly reduce the vertical profile errors associated with atmospheric and airplane performance model errors.
Reaching nearby sources: comparison between real and virtual sound and visual targets
Parseihian, Gaëtan; Jouffrais, Christophe; Katz, Brian F. G.
2014-01-01
Sound localization studies over the past century have predominantly been concerned with directional accuracy for far-field sources. Few studies have examined the condition of near-field sources and distance perception. The current study concerns localization and pointing accuracy by examining source positions in the peripersonal space, specifically those associated with a typical tabletop surface. Accuracy is studied with respect to the reporting hand (dominant or secondary) for auditory sources. Results show no effect of the reporting hand, with azimuthal errors increasing equally for the most extreme source positions. Distance errors show a consistent compression toward the center of the reporting area. A second evaluation compares auditory and visual stimuli to examine any bias in the reporting protocol or biomechanical difficulties. No common bias error was observed between auditory and visual stimuli, indicating that reporting errors were not due to biomechanical limitations in the pointing task. A final evaluation compares real auditory sources and anechoic-condition virtual sources created using binaural rendering. Results showed increased azimuthal errors, with virtual source positions being consistently overestimated to more lateral positions, while no significant distance perception was observed, indicating a deficiency of the binaural rendering condition relative to the real stimuli. Various potential reasons for this discrepancy are discussed, with several proposals for improving distance perception in peripersonal virtual environments. PMID:25228855
Realtime mitigation of GPS SA errors using Loran-C
NASA Technical Reports Server (NTRS)
Braasch, Soo Y.
1994-01-01
The hybrid use of Loran-C with the Global Positioning System (GPS) was shown to be capable of providing a sole means of en route air radionavigation. By allowing pilots to fly directly to their destinations, use of this system results in significant time savings and therefore fuel savings as well. However, a major error source limiting the accuracy of GPS is the intentional degradation of the GPS signal known as Selective Availability (SA). SA-induced position errors are highly correlated and far exceed all other error sources (horizontal position error: 100 meters, 95 percent). Realtime mitigation of SA errors from the position solution is highly desirable, and how that can be achieved is discussed. The stability of Loran-C signals is exploited to reduce SA errors. The theory behind this technique is discussed and results using bench and flight data are given.
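The underlying idea can be sketched simply: SA error wanders slowly while Loran-C is stable but biased, so low-pass filtering the GPS-minus-Loran position difference tracks the SA component, which can then be subtracted. The exponential smoother below is a conceptual stand-in, not the paper's actual filter design.

```python
import numpy as np

def estimate_sa_correction(gps_pos, loran_pos, alpha=0.01):
    """Low-pass the GPS-minus-Loran difference to track slowly varying SA error.

    Stand-in filter: SA wanders slowly and Loran-C is stable (if biased),
    so the smoothed difference approximates the SA error plus a constant.
    """
    diff = np.asarray(gps_pos) - np.asarray(loran_pos)
    est = np.empty_like(diff, dtype=float)
    acc = diff[0]
    for i, d in enumerate(diff):
        acc = (1 - alpha) * acc + alpha * d  # exponential smoothing
        est[i] = acc
    return est

# corrected = gps_pos - estimate_sa_correction(gps_pos, loran_pos)
```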
Logic-based assessment of the compatibility of UMLS ontology sources
2011-01-01
Background: The UMLS Metathesaurus (UMLS-Meta) is currently the most comprehensive effort for integrating independently-developed medical thesauri and ontologies. UMLS-Meta is being used in many applications, including PubMed and ClinicalTrials.gov. The integration of new sources combines automatic techniques, expert assessment, and auditing protocols. The automatic techniques currently in use, however, are mostly based on lexical algorithms and often disregard the semantics of the sources being integrated. Results: In this paper, we argue that UMLS-Meta’s current design and auditing methodologies could be significantly enhanced by taking into account the logic-based semantics of the ontology sources. We provide empirical evidence suggesting that UMLS-Meta in its 2009AA version contains a significant number of errors; these errors become immediately apparent if the rich semantics of the ontology sources is taken into account, manifesting themselves as unintended logical consequences that follow from the ontology sources together with the information in UMLS-Meta. We then propose general principles and specific logic-based techniques to effectively detect and repair such errors. Conclusions: Our results suggest that the methodologies employed in the design of UMLS-Meta are not only very costly in terms of human effort, but also error-prone. The techniques presented here can be useful for both reducing human effort in the design and maintenance of UMLS-Meta and improving the quality of its contents. PMID:21388571
Main sources of errors in diagnosis of chronic radiation sickness (in Russian)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soldatova, V.A.
1973-11-01
With the aim of finding out the main sources of errors in the diagnosis of chronic radiation sickness, the author analyzed a total of 500 cases of this sickness in roentgenologists and radiologists sent to the clinic to be examined according to occupational indications. It was shown that the main source of errors when interpreting the observed deviations as occupational was underestimation of the etiological significance of functional and organic diseases of the nervous system, endocrine-vascular dystonia, and also such diseases as hypochromic anemia and chronic infection. The majority of diagnostic errors are explained by insufficient knowledge of the main regularity of forming the picture of chronic radiation sickness and by the absence of the necessary differential diagnosis with general somatic diseases. (auth)
Exception handling for sensor fusion
NASA Astrophysics Data System (ADS)
Chavez, G. T.; Murphy, Robin R.
1993-08-01
This paper presents a control scheme for handling sensing failures (sensor malfunctions, significant degradations in performance due to changes in the environment, and errant expectations) in sensor fusion for autonomous mobile robots. The advantages of the exception handling mechanism are that it emphasizes a fast response to sensing failures, is able to use only a partial causal model of sensing failure, and leads to a graceful degradation of sensing if the sensing failure cannot be compensated for. The exception handling mechanism consists of two modules: error classification and error recovery. The error classification module in the exception handler attempts to classify the type and source(s) of the error using a modified generate-and-test procedure. If the source of the error is isolated, the error recovery module examines its cache of recovery schemes, which either repair or replace the current sensing configuration. If the failure is due to an error in expectation or cannot be identified, the planner is alerted. Experiments using actual sensor data collected by the CSM Mobile Robotics/Machine Perception Laboratory's Denning mobile robot demonstrate the operation of the exception handling mechanism.
NASA Technical Reports Server (NTRS)
Rice, R. F.
1976-01-01
The root-mean-square error performance measure is used to compare the relative performance of several widely known source coding algorithms with the RM2 image data compression system. The results demonstrate that RM2 has a uniformly significant performance advantage.
Error in the Honeybee Waggle Dance Improves Foraging Flexibility
Okada, Ryuichi; Ikeno, Hidetoshi; Kimura, Toshifumi; Ohashi, Mizue; Aonuma, Hitoshi; Ito, Etsuro
2014-01-01
The honeybee waggle dance communicates the location of profitable food sources, usually with a certain degree of error in the directional information ranging from 10–15° at the lower margin. We simulated one-day colonial foraging to address the biological significance of information error in the waggle dance. When the error was 30° or larger, the waggle dance was not beneficial. If the error was 15°, the waggle dance was beneficial when the food sources were scarce. When the error was 10° or smaller, the waggle dance was beneficial under all the conditions tested. Our simulation also showed that precise information (0–5° error) yielded great success in finding feeders, but also caused failures at finding new feeders, i.e., a high-risk high-return strategy. The observation that actual bees perform the waggle dance with an error of 10–15° might reflect, at least in part, the maintenance of a successful yet risky foraging trade-off. PMID:24569525
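The core of such a simulation is easy to reproduce: draw each recruit's flight direction with Gaussian error around the advertised direction and count arrivals within a feeder's angular tolerance. The 5 degree tolerance below is an assumed parameter, not one taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def hit_rate(sigma_deg: float, tolerance_deg: float = 5.0, n: int = 100_000):
    """Fraction of recruits arriving within a feeder's angular tolerance
    when the dance direction carries Gaussian error of sigma_deg."""
    errors = rng.normal(0.0, sigma_deg, n)
    return float(np.mean(np.abs(errors) <= tolerance_deg))

for sigma in (5, 10, 15, 30):
    print(sigma, hit_rate(sigma))  # success at a known feeder drops as error grows
```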
S-193 scatterometer transfer function analysis for data processing
NASA Technical Reports Server (NTRS)
Johnson, L.
1974-01-01
A mathematical model for converting raw data measurements of the S-193 scatterometer into processed values of the radar scattering coefficient is presented. The argument is based on an approximation derived from the Radar Equation and the actual operating principles of the S-193 Scatterometer hardware. Possible error sources are inaccuracies in transmitted wavelength, range, antenna illumination integrals, and the instrument itself. The dominant source of error in the calculation of the scattering coefficient is the accuracy of the range. All other factors, with the possible exception of the illumination integrals, are not considered to cause significant error in the calculation of the scattering coefficient.
High energy X-ray observations of COS-B gamma-ray sources from OSO-8
NASA Technical Reports Server (NTRS)
Dolan, J. F.; Crannell, C. J.; Dennis, B. R.; Frost, K. J.; Orwig, L. E.; Caraveo, P. A.
1985-01-01
During the three years between satellite launch in June 1975 and turn-off in October 1978, the high energy X-ray spectrometer on board OSO-8 observed nearly all of the COS-B gamma-ray source positions given in the 2CG catalog (Swanenburg et al., 1981). An X-ray source was detected at energies above 20 keV at the 6-sigma level of significance in the gamma-ray error box containing 2CG342-02, and at the 3-sigma level of significance in the error boxes containing 2CG065+00, 2CG195+04, and 2CG311-01. No definite association between the X-ray and gamma-ray sources can be made from these data alone. Upper limits are given for the 2CG sources from which no X-ray flux was detected above 20 keV.
Sproul, Ashley; Goodine, Carole; Moore, David; McLeod, Amy; Gordon, Jacqueline; Digby, Jennifer; Stoica, George
2018-01-01
Medication reconciliation at transitions of care increases patient safety. Collection of an accurate best possible medication history (BPMH) on admission is a key step. National quality indicators are used as surrogate markers for BPMH quality, but no literature on their accuracy exists. Obtaining a high-quality BPMH is often labour- and resource-intensive. Pharmacy students are now being assigned to obtain BPMHs, as a cost-effective means to increase BPMH completion, despite limited information to support the quality of BPMHs obtained by students relative to other health care professionals. To determine whether the national quality indicator of using more than one source to complete a BPMH is a true marker of quality and to assess whether BPMHs obtained by pharmacy students were of quality equal to those obtained by nurses. This prospective trial compared BPMHs for the same group of patients collected by nurses and by trained pharmacy students in the emergency departments of 2 sites within a large health network over a 2-month period (July and August 2016). Discrepancies between the 2 versions were identified by a pharmacist, who determined which party (nurse, pharmacy student, or both) had made an error. A panel of experts reviewed the errors and ranked their severity. BPMHs were prepared for a total of 40 patients. Those prepared by nurses were more likely to contain an error than those prepared by pharmacy students (171 versus 43 errors, p = 0.006). There was a nonsignificant trend toward less severe errors in BPMHs completed by pharmacy students. There was no significant difference in the mean number of errors in relation to the specified quality indicator (mean of 2.7 errors for BPMHs prepared from 1 source versus 4.8 errors for BPMHs prepared from ≥ 2 sources, p = 0.08). The surrogate marker (number of BPMH sources) may not reflect BPMH quality. However, it appears that BPMHs prepared by pharmacy students had fewer errors and were of similar quality (in terms of clinically significant errors) relative to those prepared by nurses.
LANDSAT 4 band 6 data evaluation
NASA Technical Reports Server (NTRS)
1985-01-01
Comparison of underflight data with satellite estimates of temperature revealed significant gain calibration errors. The source of the LANDSAT 5 band 6 error and its reproducibility are not yet adequately defined. The error can be accounted for using underflight or ground truth data. When underflight data were used to correct the satellite data, the residual error for the scene studied was 1.3 K when the predicted temperatures were compared to measured surface temperatures.
Distortion of Digital Image Correlation (DIC) Displacements and Strains from Heat Waves
Jones, E. M. C.; Reu, P. L.
2017-11-28
“Heat waves” is a colloquial term used to describe convective currents in air formed when different objects in an area are at different temperatures. In the context of Digital Image Correlation (DIC) and other optical-based image processing techniques, imaging an object of interest through heat waves can significantly distort the apparent location and shape of the object. There are many potential heat sources in DIC experiments, including but not limited to lights, cameras, hot ovens, and sunlight, yet error caused by heat waves is often overlooked. This paper first briefly presents three practical situations in which heat waves contributed significant error to DIC measurements, to motivate the investigation of heat waves in more detail. Then the theoretical background of how light is refracted through heat waves is presented, and the effects of heat waves on displacements and strains computed from DIC are characterized in detail. Finally, different filtering methods are investigated to reduce the displacement and strain errors caused by imaging through heat waves. The overarching conclusions from this work are that errors caused by heat waves are significantly higher than typical noise floors for DIC measurements, and that the errors are difficult to filter because the temporal and spatial frequencies of the errors are in the same range as those of typical signals of interest. Consequently, eliminating or mitigating the effects of heat sources in a DIC experiment is the best solution to minimizing errors caused by heat waves.
GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS
Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...
Personal digital assistant-based drug information sources: potential to improve medication safety.
Galt, Kimberly A; Rule, Ann M; Houghton, Bruce; Young, Daniel O; Remington, Gina
2005-04-01
This study compared the potential of personal digital assistant (PDA)-based drug information sources to minimize potential medication errors dependent on accurate and complete drug information at the point of care. A quality and safety framework for drug information resources was developed to evaluate 11 PDA-based drug information sources. Three drug information sources met the criteria of the framework: Epocrates Rx Pro, Lexi-Drugs, and mobileMICROMEDEX. Medication error types related to drug information at the point of care were then determined. Forty-seven questions were developed to test the potential of the sources to prevent these error types. Pharmacists and physician experts from Creighton University created these questions based on the most common types of questions asked by primary care providers. Three physicians evaluated the drug information sources, rating each source for each question: 1 = no information available, 2 = some information available, or 3 = adequate amount of information available. The mean ratings for the drug information sources were 2.0 (Epocrates Rx Pro), 2.5 (Lexi-Drugs), and 2.03 (mobileMICROMEDEX). Lexi-Drugs was rated significantly better than mobileMICROMEDEX (t test, P = 0.05) and Epocrates Rx Pro (t test, P = 0.01). Lexi-Drugs was found to be the most specific and complete PDA resource available to optimize medication safety by reducing potential errors associated with drug information. No resource was sufficient to address the patient safety information needs for all cases.
Enhanced orbit determination filter sensitivity analysis: Error budget development
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Burkhart, P. D.
1994-01-01
An error budget analysis is presented which quantifies the effects of different error sources in the orbit determination process when the enhanced orbit determination filter, recently developed, is used to reduce radio metric data. The enhanced filter strategy differs from more traditional filtering methods in that nearly all of the principal ground system calibration errors affecting the data are represented as filter parameters. Error budget computations were performed for a Mars Observer interplanetary cruise scenario for cases in which only X-band (8.4-GHz) Doppler data were used to determine the spacecraft's orbit, X-band ranging data were used exclusively, and a combined set in which the ranging data were used in addition to the Doppler data. In all three cases, the filter model was assumed to be a correct representation of the physical world. Random nongravitational accelerations were found to be the largest source of error contributing to the individual error budgets. Other significant contributors, depending on the data strategy used, were solar-radiation pressure coefficient uncertainty, random earth-orientation calibration errors, and Deep Space Network (DSN) station location uncertainty.
Decomposition of Sources of Errors in Seasonal Streamflow Forecasting over the U.S. Sunbelt
NASA Technical Reports Server (NTRS)
Mazrooei, Amirhossein; Sinah, Tusshar; Sankarasubramanian, A.; Kumar, Sujay V.; Peters-Lidard, Christa D.
2015-01-01
Seasonal streamflow forecasts, contingent on climate information, can be utilized to ensure water supply for multiple uses, including municipal demands, hydroelectric power generation, and planning of agricultural operations. However, uncertainties in the streamflow forecasts pose significant challenges for their utilization in real-time operations. In this study, we systematically decompose the various sources of error in developing seasonal streamflow forecasts from two Land Surface Models (LSMs), Noah3.2 and CLM2, which are forced with downscaled and disaggregated climate forecasts. In particular, the study quantifies the relative contributions of the sources of error from LSMs, climate forecasts, and downscaling/disaggregation techniques in developing seasonal streamflow forecasts. For this purpose, three-month-ahead seasonal precipitation forecasts from the ECHAM4.5 general circulation model (GCM) were statistically downscaled from 2.8 deg to 1/8 deg spatial resolution using principal component regression (PCR) and then temporally disaggregated from monthly to daily time steps using a kernel-nearest neighbor (K-NN) approach. For the other climatic forcings, excluding precipitation, we used the North American Land Data Assimilation System version 2 (NLDAS-2) hourly climatology over the years 1979 to 2010. The selected LSMs were then forced with the precipitation forecasts and NLDAS-2 hourly climatology to develop retrospective seasonal streamflow forecasts over a period of 20 years (1991-2010). Finally, the performance of the LSMs in forecasting streamflow under different schemes was analyzed to quantify the relative contribution of various sources of error in developing seasonal streamflow forecasts. Our results indicate that the most dominant source of error during the winter and fall seasons is the error due to ECHAM4.5 precipitation forecasts, while the temporal disaggregation scheme contributes the maximum error during the summer season.
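The K-NN disaggregation step can be sketched compactly: find the k historical months whose totals are closest to the forecast monthly value, sample one, and reuse its daily fractions. This is a minimal sketch of the general K-NN idea, not the authors' exact implementation; array shapes are illustrative.

```python
import numpy as np

def knn_disaggregate(month_total, hist_monthly, hist_daily, k=5, rng=None):
    """Disaggregate a monthly value to daily values by resampling the daily
    pattern of one of the k most similar historical months."""
    rng = rng or np.random.default_rng()
    nearest = np.argsort(np.abs(hist_monthly - month_total))[:k]
    pick = rng.choice(nearest)
    fractions = hist_daily[pick] / hist_daily[pick].sum()
    return month_total * fractions

# hist_monthly: (n_years,) monthly totals for one calendar month;
# hist_daily: (n_years, n_days) daily values for the same month.
```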
Single-Frequency GPS Relative Navigation in a High Ionosphere Orbital Environment
NASA Technical Reports Server (NTRS)
Conrad, Patrick R.; Naasz, Bo J.
2007-01-01
The Global Positioning System (GPS) provides a convenient source for space vehicle relative navigation measurements, especially for low Earth orbit formation flying and autonomous rendezvous mission concepts. For single-frequency GPS receivers, ionospheric path delay can be a significant error source if not properly mitigated. In particular, ionospheric effects are known to cause significant radial position error bias and to add dramatically to relative state estimation error if the onboard navigation software does not force the use of measurements from common or shared GPS space vehicles. Results from GPS navigation simulations are presented for a pair of space vehicles flying in formation and using GPS pseudorange measurements to perform absolute and relative orbit determination. With careful measurement selection techniques, relative state estimation accuracy of less than 20 cm with standard GPS pseudorange processing and less than 10 cm with single-differenced pseudorange processing is shown.
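Forcing common satellites matters because single-differencing cancels errors common to both receivers. A minimal sketch under a simplified pseudorange model (rho = range + clock + iono + noise), with the bookkeeping that restricts the difference to shared satellites:

```python
import numpy as np

def single_difference(rho_a, rho_b, svs_a, svs_b):
    """Pseudorange single differences over the satellites both receivers track.

    For close formations, the ionospheric delay for a shared satellite is
    nearly common to receivers A and B and largely cancels in the difference.
    """
    ia = {sv: i for i, sv in enumerate(svs_a)}
    ib = {sv: i for i, sv in enumerate(svs_b)}
    common = [sv for sv in svs_a if sv in ib]
    return common, np.array([rho_a[ia[sv]] - rho_b[ib[sv]] for sv in common])
```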
A spectrally tunable solid-state source for radiometric, photometric, and colorimetric applications
NASA Astrophysics Data System (ADS)
Fryc, Irena; Brown, Steven W.; Eppeldauer, George P.; Ohno, Yoshihiro
2004-10-01
A spectrally tunable light source using a large number of LEDs and an integrating sphere has been designed and is being developed at NIST. The source is designed to be capable of producing spectral distributions mimicking various light sources in the visible region by feedback control of the individual LEDs. The output spectral irradiance or radiance of the source will be calibrated by a reference instrument, and the source will be used as a spectroradiometric as well as photometric and colorimetric standard. The use of the tunable source mimicking the spectra of display colors, for example, rather than a traditional incandescent standard lamp for calibration of colorimeters, can significantly reduce the spectral mismatch errors of a colorimeter measuring displays. A series of simulations has been conducted to predict the performance of the designed tunable source when used for calibration of colorimeters. The results indicate that the errors can be reduced by an order of magnitude compared with those obtained when the colorimeters are calibrated against Illuminant A. Stray light errors of a spectroradiometer can also be effectively reduced by using the tunable source to produce a blackbody spectrum at a higher temperature (e.g., 9000 K). The source can also approximate various CIE daylight illuminants and common lamp spectral distributions for other photometric and colorimetric applications.
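The spectral mismatch error the simulations quantify is conventionally expressed through the spectral mismatch correction factor, computed from the test-source and calibration-source distributions and the detector's relative spectral responsivity. A minimal numerical sketch using the standard definition (all spectra supplied as arrays on a common wavelength grid):

```python
from scipy.integrate import trapezoid

def mismatch_factor(wl, s_test, s_cal, resp_rel, v_lambda):
    """Spectral mismatch correction factor for a photometer with relative
    responsivity resp_rel, calibrated on source s_cal and used to measure
    s_test; v_lambda is the CIE V(lambda) function on the same grid."""
    num = trapezoid(s_test * v_lambda, wl) * trapezoid(s_cal * resp_rel, wl)
    den = trapezoid(s_test * resp_rel, wl) * trapezoid(s_cal * v_lambda, wl)
    return num / den  # multiply the raw photometer reading by this factor
```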
NASA Technical Reports Server (NTRS)
Mcruer, D. T.; Clement, W. F.; Allen, R. W.
1980-01-01
Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct the underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing the propagation of human error.
Blumenfeld, Philip; Hata, Nobuhiko; DiMaio, Simon; Zou, Kelly; Haker, Steven; Fichtinger, Gabor; Tempany, Clare M C
2007-09-01
To quantify needle placement accuracy of magnetic resonance image (MRI)-guided core needle biopsy of the prostate. A total of 10 biopsies were performed with an 18-gauge (G) core biopsy needle via a percutaneous transperineal approach. Needle placement error was assessed by comparing the coordinates of preplanned targets with the needle tip measured from the intraprocedural coherent gradient echo images. The source of these errors was subsequently investigated by measuring displacement caused by needle deflection and needle susceptibility artifact shift in controlled phantom studies. Needle placement error due to misalignment of the needle template guide was also evaluated. The mean and standard deviation (SD) of errors in targeted biopsies was 6.5 +/- 3.5 mm. Phantom experiments showed significant placement error due to needle deflection for a needle with an asymmetrically beveled tip (3.2-8.7 mm depending on tissue type) but significantly smaller error with a symmetrical bevel (0.6-1.1 mm). Needle susceptibility artifacts showed a shift of 1.6 +/- 0.4 mm from the true needle axis. Misalignment of the needle template guide contributed an error of 1.5 +/- 0.3 mm. Needle placement error was clinically significant in MRI-guided biopsy for diagnosis of prostate cancer. Needle placement error due to needle deflection was the most significant cause of error, especially for needles with an asymmetrical bevel.
A procedure for the significance testing of unmodeled errors in GNSS observations
NASA Astrophysics Data System (ADS)
Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling
2018-01-01
It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors would inevitably remain even after correction with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most existing studies focus on handling the systematic errors that can be properly modeled and simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled, especially when they are significant. Therefore, the very first question is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors by the combined use of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, stationary signal, and white noise, are identified. The procedure is tested by using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further confirmed by applying time-domain Allan variance analysis and the frequency-domain fast Fourier transform. In summary, spatiotemporally correlated unmodeled errors commonly exist in GNSS observations and are mainly governed by residual atmospheric biases and multipath. Their patterns may also be impacted by the receiver.
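The Allan variance check mentioned above distinguishes white noise (which averages down) from correlated unmodeled errors (which do not). A minimal non-overlapped implementation:

```python
import numpy as np

def allan_variance(x, m):
    """Non-overlapped Allan variance of series x at averaging factor m."""
    n = len(x) // m
    means = x[: n * m].reshape(n, m).mean(axis=1)  # cluster averages
    return 0.5 * np.mean(np.diff(means) ** 2)

# For white noise the Allan variance falls as 1/m; correlated unmodeled
# errors make the curve flatten or rise at large m.
x = np.random.default_rng(0).normal(size=10_000)
for m in (1, 10, 100):
    print(m, allan_variance(x, m))
```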
Lyons-Weiler, James; Pelikan, Richard; Zeh, Herbert J; Whitcomb, David C; Malehorn, David E; Bigbee, William L; Hauskrecht, Milos
2005-01-01
Peptide profiles generated using SELDI/MALDI time of flight mass spectrometry provide a promising source of patient-specific information with high potential impact on the early detection and classification of cancer and other diseases. The new profiling technology comes, however, with numerous challenges and concerns. Particularly important are concerns of reproducibility of classification results and their significance. In this work we describe a computational validation framework, called PACE (Permutation-Achieved Classification Error), that lets us assess, for a given classification model, the significance of the Achieved Classification Error (ACE) on the profile data. The framework compares the performance statistic of the classifier on true data samples and checks if these are consistent with the behavior of the classifier on the same data with randomly reassigned class labels. A statistically significant ACE increases our belief that a discriminative signal was found in the data. The advantage of PACE analysis is that it can be easily combined with any classification model and is relatively easy to interpret. PACE analysis does not protect researchers against confounding in the experimental design, or other sources of systematic or random error. We use PACE analysis to assess significance of classification results we have achieved on a number of published data sets. The results show that many of these datasets indeed possess a signal that leads to a statistically significant ACE.
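The PACE idea translates directly into a label-permutation test: compare the cross-validated performance on true labels with a null distribution obtained by reshuffling the labels. The sketch below is a generic version of that scheme, not the authors' exact pipeline; the classifier choice is arbitrary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def pace_p_value(X, y, n_perm=200, seed=0):
    """Permutation significance of cross-validated accuracy (PACE-style)."""
    rng = np.random.default_rng(seed)
    clf = LogisticRegression(max_iter=1000)
    true_acc = cross_val_score(clf, X, y, cv=5).mean()
    null = np.array([cross_val_score(clf, X, rng.permutation(y), cv=5).mean()
                     for _ in range(n_perm)])
    p = (1 + np.sum(null >= true_acc)) / (n_perm + 1)
    return true_acc, p  # small p: the signal is unlikely to be chance
```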
Lee, Wonseok; Bae, Hyoung Won; Lee, Si Hyung; Kim, Chan Yun; Seong, Gong Je
2017-03-01
To assess the accuracy of intraocular lens (IOL) power prediction for cataract surgery with open angle glaucoma (OAG) and to identify preoperative angle parameters correlated with postoperative unpredicted refractive errors. This study comprised 45 eyes from 45 OAG subjects and 63 eyes from 63 non-glaucomatous cataract subjects (controls). We investigated differences in preoperative predicted refractive errors and postoperative refractive errors for each group. Preoperative predicted refractive errors were obtained by biometry (IOL-master) and compared to postoperative refractive errors measured by auto-refractometer 2 months postoperatively. Anterior angle parameters were determined using swept source optical coherence tomography. We investigated correlations between preoperative angle parameters [angle open distance (AOD); trabecular iris surface area (TISA); angle recess area (ARA); trabecular iris angle (TIA)] and postoperative unpredicted refractive errors. In patients with OAG, significant differences were noted between preoperative predicted and postoperative real refractive errors, with more myopia than predicted. No significant differences were recorded in controls. Angle parameters (AOD, ARA, TISA, and TIA) at the superior and inferior quadrant were significantly correlated with differences between predicted and postoperative refractive errors in OAG patients (-0.321 to -0.408, p<0.05). Superior quadrant AOD 500 was significantly correlated with postoperative refractive differences in multivariate linear regression analysis (β=-2.925, R²=0.404). Clinically unpredicted refractive errors after cataract surgery were more common in OAG than in controls. Certain preoperative angle parameters, especially AOD 500 at the superior quadrant, were significantly correlated with these unpredicted errors.
New Methods for Assessing and Reducing Uncertainty in Microgravity Studies
NASA Astrophysics Data System (ADS)
Giniaux, J. M.; Hooper, A. J.; Bagnardi, M.
2017-12-01
Microgravity surveying, also known as dynamic or 4D gravimetry, is a time-dependent geophysical method used to detect mass fluctuations within the shallow crust by analysing temporal changes in relative gravity measurements. We present here a detailed uncertainty analysis of temporal gravity measurements, considering for the first time all possible error sources, including tilt, errors in drift estimation, and timing errors. We find that some error sources that are commonly ignored can have a significant impact on the total error budget, and it is therefore likely that some gravity signals have been misinterpreted in previous studies. Our analysis leads to new methods for reducing some of the uncertainties associated with residual gravity estimation. In particular, we propose different approaches for drift estimation and the free-air correction depending on the survey setup. We also provide formulae to recalculate uncertainties for past studies and lay out a framework for best practice in future studies. We demonstrate our new approach on volcanic case studies, which include Kilauea in Hawaii and Askja in Iceland.
Dynamically correcting two-qubit gates against any systematic logical error
NASA Astrophysics Data System (ADS)
Calderon Vargas, Fernando Antonio
The reliability of quantum information processing depends on the ability to deal with noise and error in an efficient way. A significant source of error in many settings is coherent, systematic gate error. This work introduces a set of composite pulse sequences that generate maximally entangling gates and correct all systematic errors within the logical subspace to arbitrary order. These sequences are applicable for any two-qubit interaction Hamiltonian, and make no assumptions about the underlying noise mechanism except that it is constant on the timescale of the operation. The prime use for our results will be in cases where one has limited knowledge of the underlying physical noise and control mechanisms, highly constrained control, or both. In particular, we apply these composite pulse sequences to the quantum system formed by two capacitively coupled singlet-triplet qubits, which is characterized by having constrained control and noise sources that are low frequency and of a non-Markovian nature.
Crowd-sourced pictures geo-localization method based on street view images and 3D reconstruction
NASA Astrophysics Data System (ADS)
Cheng, Liang; Yuan, Yi; Xia, Nan; Chen, Song; Chen, Yanming; Yang, Kang; Ma, Lei; Li, Manchun
2018-07-01
People are increasingly becoming accustomed to taking photos of everyday life in modern cities and uploading them to major photo-sharing social media sites. These sites contain numerous pictures, but some have incomplete or blurred location information. The geo-localization of crowd-sourced pictures enriches the information contained therein, and is applicable to activities such as urban construction, urban landscape analysis, and crime tracking. However, geo-localization faces huge technical challenges. This paper proposes a method for large-scale geo-localization of crowd-sourced pictures. Our approach uses structured, organized Street View images as a reference dataset and employs a three-step strategy of coarse geo-localization by image retrieval, selection of reliable matches by image registration, and fine geo-localization by 3D reconstruction to attach geographic tags to pictures from unidentified sources. In the study area, 3D reconstruction based on close-range photogrammetry is used to restore the 3D geographical information of the crowd-sourced pictures; compared with the previous method, the proposed method improves the median error from 256.7 m to 69.0 m and raises the percentage of query pictures geo-localized within a 50 m error from 17.2% to 43.2%. Another finding is that, with respect to the causes of reconstruction error, shorter distances from the cameras to the main objects in query pictures tend to produce lower errors, and the error component parallel to the road contributes more significantly to the total error. The proposed method is not limited to small areas, and could be expanded to cities and larger areas owing to its flexible parameters.
An error analysis perspective for patient alignment systems.
Figl, Michael; Kaar, Marcus; Hoffman, Rainer; Kratochwil, Alfred; Hummel, Johann
2013-09-01
This paper analyses the effects of error sources which can be found in patient alignment systems. As an example, an ultrasound (US) repositioning system and its transformation chain are assessed. The findings of this concept can also be applied to any navigation system. In a first step, all error sources were identified and where applicable, corresponding target registration errors were computed. By applying error propagation calculations on these commonly used registration/calibration and tracking errors, we were able to analyse the components of the overall error. Furthermore, we defined a special situation where the whole registration chain reduces to the error caused by the tracking system. Additionally, we used a phantom to evaluate the errors arising from the image-to-image registration procedure, depending on the image metric used. We have also discussed how this analysis can be applied to other positioning systems such as Cone Beam CT-based systems or Brainlab's ExacTrac. The estimates found by our error propagation analysis are in good agreement with the numbers found in the phantom study but significantly smaller than results from patient evaluations. We probably underestimated human influences such as the US scan head positioning by the operator and tissue deformation. Rotational errors of the tracking system can multiply these errors, depending on the relative position of tracker and probe. We were able to analyse the components of the overall error of a typical patient positioning system. We consider this to be a contribution to the optimization of the positioning accuracy for computer guidance systems.
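The overall error of a transformation chain like the one assessed here is commonly approximated by propagating the independent error components in quadrature. The following is a minimal sketch of that bookkeeping, not the paper's computation; the component names and magnitudes are illustrative.

```python
import numpy as np

# Independent 1-sigma error components along a registration/tracking chain (mm).
# Names and magnitudes are illustrative only.
components = {
    "image-to-image registration": 0.8,
    "probe calibration":           0.5,
    "tracking (translational)":    0.3,
}

# With independent, zero-mean errors the variances add, so the combined
# error is the root-sum-square of the components.
total = np.sqrt(sum(sigma**2 for sigma in components.values()))
print(f"combined 1-sigma alignment error: {total:.2f} mm")

# A rotational tracking error dtheta (rad) acting over a lever arm r (mm)
# adds a translational error of roughly r * dtheta, which is how rotational
# errors can multiply the overall error depending on tracker/probe geometry.
r, dtheta = 150.0, np.deg2rad(0.2)
print(f"lever-arm contribution: {r * dtheta:.2f} mm")
```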
A posteriori error estimates in voice source recovery
NASA Astrophysics Data System (ADS)
Leonov, A. S.; Sorokin, V. N.
2017-12-01
The inverse problem of voice source pulse recovery from a segment of a speech signal is considered. A special mathematical model relating these quantities is used for the solution. A variational method for solving the inverse problem of voice source recovery is proposed for a new parametric class of sources, namely piecewise-linear sources (PWL-sources). A technique for a posteriori numerical error estimation of the obtained solutions is also presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments on speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The estimated most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. It is noted that a posteriori error estimates can be used as a quality criterion for the obtained voice source pulses in speaker recognition applications.
Dissociable Genetic Contributions to Error Processing: A Multimodal Neuroimaging Study
Agam, Yigal; Vangel, Mark; Roffman, Joshua L.; Gallagher, Patience J.; Chaponis, Jonathan; Haddad, Stephen; Goff, Donald C.; Greenberg, Jennifer L.; Wilhelm, Sabine; Smoller, Jordan W.; Manoach, Dara S.
2014-01-01
Background Neuroimaging studies reliably identify two markers of error commission: the error-related negativity (ERN), an event-related potential, and functional MRI activation of the dorsal anterior cingulate cortex (dACC). While theorized to reflect the same neural process, recent evidence suggests that the ERN arises from the posterior cingulate cortex, not the dACC. Here, we tested the hypothesis that these two error markers also have different genetic mediation. Methods We measured both error markers in a sample of 92 participants comprising healthy individuals and individuals diagnosed with schizophrenia, obsessive-compulsive disorder or autism spectrum disorder. Participants performed the same task during functional MRI and during simultaneously acquired magnetoencephalography and electroencephalography. We examined the mediation of the error markers by two single nucleotide polymorphisms: dopamine D4 receptor (DRD4) C-521T (rs1800955), which has been associated with the ERN, and methylenetetrahydrofolate reductase (MTHFR) C677T (rs1801133), which has been associated with error-related dACC activation. We then compared the effects of each polymorphism on the two error markers modeled as a bivariate response. Results We replicated our previous report of a posterior cingulate source of the ERN in healthy participants and in the schizophrenia and obsessive-compulsive disorder groups. The effect of genotype on error markers did not differ significantly by diagnostic group. DRD4 C-521T allele load had a significant linear effect on ERN amplitude, but not on dACC activation, and this difference was significant. MTHFR C677T allele load had a significant linear effect on dACC activation but not ERN amplitude, but the difference in effects on the two error markers was not significant. Conclusions DRD4 C-521T, but not MTHFR C677T, had a significant differential effect on two canonical error markers. Together with the anatomical dissociation between the ERN and error-related dACC activation, these findings suggest that these error markers have different neural and genetic mediation. PMID:25010186
NASA Astrophysics Data System (ADS)
Zhang, Y. K.; Liang, X.
2014-12-01
The effects of aquifer heterogeneity and of uncertainties in the source/sink term and in the initial and boundary conditions of a groundwater flow model on the spatiotemporal variations of the groundwater level, h(x,t), were investigated. Analytical solutions for the variance and covariance of h(x,t) in an unconfined aquifer described by a linearized Boussinesq equation with a white-noise source/sink and a random transmissivity field were derived. It was found that in a typical aquifer the error in h(x,t) at early times is mainly caused by the random initial condition; this error decreases with time, approaching a constant level at later times. The period during which the effect of the random initial condition is significant may last a few hundred days in most aquifers. The constant error at later times is due to the combined effects of the uncertain source/sink and the flux boundary: the closer to the flux boundary, the larger the error. The error caused by the uncertain head boundary is limited to a narrow zone near that boundary but remains more or less constant over time. The effect of heterogeneity is to increase the variation of the groundwater level, and the maximum effect occurs close to the constant head boundary because of the linear mean hydraulic gradient. The correlation of the groundwater level decreases with the temporal interval and spatial distance. In addition, heterogeneity enhances the correlation of the groundwater level, especially at large time intervals and small spatial distances.
The Robustness of Acoustic Analogies
NASA Technical Reports Server (NTRS)
Freund, J. B.; Lele, S. K.; Wei, M.
2004-01-01
Acoustic analogies for the prediction of flow noise are exact rearrangements of the flow equations N(q) = 0, where q is the vector of flow variables, into a nominal sound source S(q) and a sound propagation operator L such that L(q) = S(q). In practice, the sound source is typically modeled and the propagation operator inverted to make predictions. Since the rearrangement is exact, any sufficiently accurate model of the source will yield the correct sound, so other factors must determine the merits of any particular formulation. Using data from a two-dimensional mixing layer direct numerical simulation (DNS), we evaluate the robustness of two analogy formulations to different errors intentionally introduced into the source. The motivation is that since S cannot be perfectly modeled, analogies that are less sensitive to errors in S are preferable. Our assessment is made within the framework of Goldstein's generalized acoustic analogy, in which different choices of a base flow used in constructing L give different sources S and thus different analogies. A uniform base flow yields a Lighthill-like analogy, which we evaluate against a formulation in which the base flow is the actual mean flow of the DNS. The more complex mean flow formulation is found to be significantly more robust to errors in the energetic turbulent fluctuations, but its advantage is less pronounced when errors are made in the smaller scales.
Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt
2016-08-01
A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, e.g., that volume flow is underestimated by 15% when the scan plane is off-axis from the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied to correct the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipse to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis gave a significant (p = 0.008) reduction in error from 31.2% to 24.3%. The error is relative to the Ultrasound Dilution Technique, which is considered the gold standard for volume flow estimation for dialysis patients. The study shows the importance of correcting for volume flow errors, which are often made in clinical practice. Copyright © 2016 Elsevier B.V. All rights reserved.
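The two corrections described above (an elliptical rather than circular vessel cross-section, and compensation for an off-axis beam) can be sketched as follows. This illustrates the area geometry only, not the authors' processing chain; the velocity value is made up, the axis lengths approximate the averages quoted above, and the off-axis factor is left as a model-derived input.

```python
import numpy as np

def volume_flow(mean_velocity_cm_s, major_mm, minor_mm, off_axis_correction=1.0):
    """Volume flow (mL/min) from a spatial mean velocity and an elliptical lumen.

    major_mm, minor_mm  : fitted ellipse axes (mm); a circular-area
                          assumption would reuse one axis for both
    off_axis_correction : multiplicative factor compensating for the beam
                          missing the vessel center (model- or phantom-
                          derived; 1.0 means no correction)
    """
    area_cm2 = np.pi * (major_mm / 20.0) * (minor_mm / 20.0)  # semi-axes in cm
    return mean_velocity_cm_s * area_cm2 * off_axis_correction * 60.0

# Elliptical vs circular area with roughly the average axes quoted above
# (the velocity is illustrative).
q_ellipse = volume_flow(60.0, major_mm=10.2, minor_mm=9.4)
q_circle  = volume_flow(60.0, major_mm=9.4, minor_mm=9.4)
print(f"ellipse: {q_ellipse:.0f} mL/min, circle: {q_circle:.0f} mL/min "
      f"({100 * (q_ellipse / q_circle - 1):.1f}% difference)")
```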
Impact of source collinearity in simulated PM2.5 data on the PMF receptor model solution
NASA Astrophysics Data System (ADS)
Habre, Rima; Coull, Brent; Koutrakis, Petros
2011-12-01
Positive Matrix Factorization (PMF) is a factor analytic model used to identify particle sources and to estimate their contributions to PM2.5 concentrations observed at receptor sites. Collinearity in source contributions due to meteorological conditions introduces uncertainty in the PMF solution. We simulated datasets of speciated PM2.5 concentrations associated with three ambient particle sources: "Motor Vehicle" (MV), "Sodium Chloride" (NaCl), and "Sulfur" (S), and we varied the correlation structure between their mass contributions to simulate collinearity. We analyzed the datasets in PMF using the ME-2 multilinear engine. The Pearson correlation coefficients between the simulated and PMF-predicted source contributions and profiles are denoted "G correlation" and "F correlation", respectively. In sensitivity analyses, we examined how the means or variances of the source contributions affected the stability of the PMF solution with collinearity. The % errors in predicting the average source contributions were 23%, 80%, and 23% for MV, NaCl, and S, respectively. On average, the NaCl contribution was overestimated, while the MV and S contributions were underestimated. The ability of PMF to predict the contributions and profiles of the three sources deteriorated significantly as collinearity in their contributions increased. When the mean of the NaCl contributions or the variance of the NaCl and MV source contributions was increased, the deterioration in G correlation with increasing collinearity became less significant, and the ability of PMF to predict the NaCl and MV loading profiles improved. When the three factor profiles were simulated to share more elements, the decrease in G and F correlations became non-significant. Our findings agree with previous simulation studies reporting that correlated sources are predicted with higher error and bias. Consequently, the power to detect significant concentration-response estimates in health effect analyses weakens.
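The simulation design (draw source contributions G with a chosen correlation structure, mix them with profiles F, then check how well a factorization recovers both) can be sketched with an off-the-shelf non-negative factorization standing in for the ME-2/PMF engine actually used in the study. Everything below is illustrative: three generic synthetic sources, lognormal contributions, and sklearn's NMF.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_days, n_species, n_sources = 500, 15, 3

# Collinear source contributions G: lognormal marginals with pairwise
# correlation rho between sources; profiles F are random non-negative.
rho = 0.8
cov = rho * np.ones((n_sources, n_sources)) + (1 - rho) * np.eye(n_sources)
G_true = np.exp(rng.multivariate_normal(np.zeros(n_sources), cov, size=n_days))
F_true = rng.gamma(2.0, 1.0, size=(n_sources, n_species))

X = np.clip(G_true @ F_true + rng.normal(0, 0.05, (n_days, n_species)), 0, None)

model = NMF(n_components=n_sources, init="nndsvda", max_iter=2000, random_state=0)
G_hat = model.fit_transform(X)

# "G correlation": best match of each recovered factor to a true source.
corr = np.corrcoef(G_true.T, G_hat.T)[:n_sources, n_sources:]
print("best G correlation per true source:", corr.max(axis=1).round(3))
```

Re-running this with rho swept from 0 towards 1 reproduces the qualitative effect reported above: the recovered contributions degrade as collinearity increases.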
NASA Astrophysics Data System (ADS)
Zheng, Sifa; Liu, Haitao; Dan, Jiabi; Lian, Xiaomin
2015-05-01
The linear time-invariant assumption for determining acoustic source characteristics (the source strength and the source impedance in the frequency domain) has been proved reasonable in the design of exhaust systems. Different methods have been proposed for its identification, and the multi-load method is widely used for its convenience, varying the load number and impedance. Theoretical error analysis has rarely been addressed, although previous results have shown that an overdetermined set of open pipes can reduce the identification error. This paper contributes a theoretical error analysis for the load selection. The relationships between the error in the identification of the source characteristics and the load selection were analysed. A general linear time-invariant model was built based on the four-load method. To analyse the error of the source impedance, an error estimation function was proposed. The dispersion of the source pressure was obtained by an inverse calculation and used as an indicator of the accuracy of the results. It was found that, for a certain load length, the load resistance at frequency points where the load length is an odd multiple of the quarter wavelength produces peaks and the maximum error in source impedance identification. Therefore, load impedances in the frequency ranges around these odd quarter-wavelength multiples should not be used for source impedance identification. If the selected loads have more similar resistance values (i.e., of the same order of magnitude), the identification error of the source impedance can be effectively reduced.
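In the multi-load framework, each load i with known impedance Z_i yields a measured source-outlet pressure p_i, and the linear one-port source model p_i = p_s Z_i / (Z_s + Z_i) links the measurements to the unknown source pressure p_s and source impedance Z_s. Rearranged, each load contributes one linear equation in (Z_s, p_s), so four or more loads give an overdetermined system per frequency. A minimal sketch of that inversion (the load impedances and pressures are placeholders):

```python
import numpy as np

def identify_source(p_meas, z_loads):
    """Least-squares source identification at a single frequency.

    One-port source model: p_i = p_s * Z_i / (Z_s + Z_i), rearranged per
    load as p_i * Z_s - Z_i * p_s = -p_i * Z_i, linear in (Z_s, p_s).
    """
    p = np.asarray(p_meas, dtype=complex)
    z = np.asarray(z_loads, dtype=complex)
    A = np.column_stack([p, -z])
    b = -p * z
    (z_s, p_s), *_ = np.linalg.lstsq(A, b, rcond=None)
    return z_s, p_s

# Four placeholder loads (complex impedances) and consistent pressures.
z_loads = [1.0 + 0.2j, 0.8 - 0.5j, 1.5 + 1.0j, 0.6 + 0.1j]
z_true, p_true = 0.9 + 0.4j, 2.0 + 0.0j
p_meas = [p_true * z / (z_true + z) for z in z_loads]
print(identify_source(p_meas, z_loads))  # recovers (z_true, p_true)
```

With measurement noise added to p_meas, the conditioning of A, and hence the identification error, depends on how the load impedances are spread, which is the load-selection question the paper analyses.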
Swing arm profilometer: analytical solutions of misalignment errors for testing axisymmetric optics
NASA Astrophysics Data System (ADS)
Xiong, Ling; Luo, Xiao; Liu, Zhenyu; Wang, Xiaokun; Hu, Haixiang; Zhang, Feng; Zheng, Ligong; Zhang, Xuejun
2016-07-01
The swing arm profilometer (SAP) has been playing a very important role in testing large aspheric optics. As one of the most significant error sources affecting the test accuracy, misalignment error leads to low-order errors such as aspherical aberrations and coma, in addition to power. In order to analyze the effect of misalignment errors, the relation between the alignment parameters and the test results for axisymmetric optics is presented. Analytical solutions of SAP system errors arising from tested-mirror misalignment, arm length L deviation, tilt-angle θ deviation, air-table spin error, and air-table misalignment are derived, respectively, and misalignment tolerances are given to guide surface measurement. In addition, experiments on a 2-m diameter parabolic mirror are presented to verify the model; according to the error budget, we achieve SAP testing of low-order errors, except power, with an accuracy of 0.1 μm root mean square.
NASA Astrophysics Data System (ADS)
Watanabe, Y.; Abe, S.
2014-06-01
Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. The nuclear reaction models implemented in the PHITS code are validated by comparison with experimental data. From the analysis of the calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors as the critical charge decreases. It is also found that the high-energy component of secondary cosmic-ray neutrons, from 10 MeV up to several hundreds of MeV, is the most significant source of soft errors regardless of design rule.
Predictors of Errors of Novice Java Programmers
ERIC Educational Resources Information Center
Bringula, Rex P.; Manabat, Geecee Maybelline A.; Tolentino, Miguel Angelo A.; Torres, Edmon L.
2012-01-01
This descriptive study determined which of the sources of errors would predict the errors committed by novice Java programmers. Descriptive statistics revealed that the respondents perceived that they committed the identified eighteen errors infrequently. Thought error was perceived to be the main source of error during the laboratory programming…
Andreeva, I G; Vartanian, I A
2012-01-01
The ability to evaluate the direction of amplitude changes of sound stimuli was studied in adults and in 11-12- and 15-16-year-old teenagers. Sequences of fragments of a 1 kHz tone whose amplitude changes with time were used as models of approaching and receding sound sources. When estimating the direction of amplitude changes, the 11-12-year-old teenagers made a significantly higher number of errors than the other two groups, including in repeated experiments. The structure of errors, i.e. the ratio of errors made on stimuli increasing versus decreasing in amplitude, differed between teenagers and adults. The possible effect of nonspecific activation of the cerebral cortex in teenagers on decision-making about complex sound stimuli, including the estimation of approach and withdrawal of a sound source, is discussed.
NASA Technical Reports Server (NTRS)
Taylor, B. K.; Casasent, D. P.
1989-01-01
The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.
The Brera Multiscale Wavelet ROSAT HRI Source Catalog. I. The Algorithm
NASA Astrophysics Data System (ADS)
Lazzati, Davide; Campana, Sergio; Rosati, Piero; Panzera, Maria Rosa; Tagliaferri, Gianpiero
1999-10-01
We present a new detection algorithm based on the wavelet transform for the analysis of high-energy astronomical images. The wavelet transform, because of its multiscale structure, is suited to the optimal detection of pointlike as well as extended sources, regardless of any loss of resolution with the off-axis angle. Sources are detected as significant enhancements in the wavelet space, after the subtraction of the nonflat components of the background. Detection thresholds are computed through Monte Carlo simulations in order to establish the expected number of spurious sources per field. The source characterization is performed through a multisource fitting in the wavelet space. The procedure is designed to correctly deal with very crowded fields, allowing for the simultaneous characterization of nearby sources. To obtain a fast and reliable estimate of the source parameters and related errors, we apply a novel decimation technique that, taking into account the correlation properties of the wavelet transform, extracts a subset of almost independent coefficients. We test the performance of this algorithm on synthetic fields, analyzing with particular care the characterization of sources in poor background situations, where the assumption of Gaussian statistics does not hold. In these cases, for which standard wavelet algorithms generally provide underestimated errors, we infer errors through a procedure that relies on robust basic statistics. Our algorithm is well suited to the analysis of images taken with the new generation of X-ray instruments equipped with CCD technology, which will produce images with very low background and/or high source density.
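The Monte Carlo thresholding idea (calibrate the wavelet-space detection threshold so that a chosen number of spurious sources per field is expected) can be sketched as follows, with a difference-of-Gaussians kernel standing in for the paper's wavelet and a flat Poisson background; all parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
n_trials, size, bkg, scale = 200, 256, 0.5, 2.0

def wavelet_coeffs(img, s):
    # Difference-of-Gaussians approximation to a Mexican-hat wavelet scale.
    return gaussian_filter(img, s) - gaussian_filter(img, 2 * s)

# Distribution of the maximum wavelet coefficient in source-free fields.
maxima = [wavelet_coeffs(rng.poisson(bkg, (size, size)).astype(float), scale).max()
          for _ in range(n_trials)]

# Threshold giving roughly one spurious detection per 100 fields.
print("detection threshold:", round(float(np.quantile(maxima, 0.99)), 4))
```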
NASA Astrophysics Data System (ADS)
Pan, X.; Yang, Y.; Liu, Y.; Fan, X.; Shan, L.; Zhang, X.
2018-04-01
Error source analyses are critical for satellite-retrieved surface net radiation (Rn) products. In this study, we evaluate the Rn error sources in the Clouds and the Earth's Radiant Energy System (CERES) project at 43 sites in China from July to December 2007. The results show that cloud fraction (CF), land surface temperature (LST), atmospheric temperature (AT) and algorithm error dominate the Rn error, with error contributions of -20, 15, 10 and 10 W/m2 (net shortwave (NSW)/longwave (NLW) radiation), respectively. For NSW, the dominant error source is algorithm error (more than 10 W/m2), particularly in spring and summer, when cloud is abundant. For NLW, owing to the high algorithm sensitivity and the large LST/CF errors, LST and CF are the largest error sources, especially in northern China. The AT strongly influences the NLW error in southern China because of the large AT error there. Total precipitable water has a weak influence on the Rn error even though the algorithm is sensitive to it. In order to improve Rn quality, the CF and LST (AT) errors in northern (southern) China should be decreased.
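An error-contribution analysis of this kind typically multiplies the retrieval algorithm's sensitivity to each input by that input's error, then aggregates. A minimal sketch of the bookkeeping (the sensitivities and input errors below are placeholders, not CERES values):

```python
import numpy as np

# contribution_i = (d Rn / d x_i) * sigma(x_i); all numbers are placeholders.
inputs = {
    # name: (sensitivity of Rn to the input, 1-sigma input error)
    "cloud fraction":    (-100.0, 0.20),  # W/m^2 per CF unit, CF error
    "land surface temp": (5.0,    3.0),   # W/m^2 per K, K
    "atmospheric temp":  (5.0,    2.0),   # W/m^2 per K, K
}
contrib = {name: s * e for name, (s, e) in inputs.items()}
for name, c in contrib.items():
    print(f"{name:18s} {c:+6.1f} W/m^2")
# If the inputs err independently, the combined effect is the root-sum-square.
print(f"{'combined (RSS)':18s} {np.sqrt(sum(c**2 for c in contrib.values())):6.1f} W/m^2")
```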
Bacon, Dave; Flammia, Steven T
2009-09-18
The difficulty in producing precisely timed and controlled quantum gates is a significant source of error in many physical implementations of quantum computers. Here we introduce a simple universal primitive, adiabatic gate teleportation, which is robust to timing errors and many control errors and maintains a constant energy gap throughout the computation above a degenerate ground state space. This construction allows for geometric robustness based upon the control of two independent qubit interactions. Further, our piecewise adiabatic evolution easily relates to the quantum circuit model, enabling the use of standard methods from fault-tolerance theory for establishing thresholds.
Performance Analysis of an Inter-Relay Co-operation in FSO Communication System
NASA Astrophysics Data System (ADS)
Khanna, Himanshu; Aggarwal, Mona; Ahuja, Swaran
2018-04-01
In this work, we analyze the outage and error performance of a one-way inter-relay assisted free space optical link. The analysis assumes the absence of a direct link between the source and destination nodes, and the feasibility of such a system configuration is studied. We consider the influence of path loss, atmospheric turbulence and pointing error impairments, and investigate the effect of these parameters on the system performance. The turbulence-induced fading is modeled by independent but not necessarily identically distributed gamma-gamma fading statistics. Closed-form expressions for the outage probability and the probability of error are derived and illustrated by numerical plots. It is concluded that the absence of a line-of-sight path between the source and destination nodes does not lead to significant performance degradation. Moreover, for the system model under consideration, interconnected relaying provides better error performance than non-interconnected relaying and dual-hop serial relaying techniques.
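The closed-form expressions are derived in the paper; as a cross-check, single-hop outage under gamma-gamma turbulence is easy to estimate by Monte Carlo, using the fact that a gamma-gamma variate is the product of two independent unit-mean gamma variates. The turbulence parameters and threshold below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

def gamma_gamma(alpha, beta, n):
    # Gamma-gamma irradiance: product of two independent unit-mean gammas.
    return rng.gamma(alpha, 1.0 / alpha, n) * rng.gamma(beta, 1.0 / beta, n)

alpha, beta = 4.2, 1.4   # large/small-scale turbulence parameters (placeholders)
i_threshold = 0.1        # normalized irradiance outage threshold (placeholder)
samples = gamma_gamma(alpha, beta, 1_000_000)
print("Monte Carlo outage probability:", np.mean(samples < i_threshold))
```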
NASA Astrophysics Data System (ADS)
Kirstetter, P.; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Petersen, W. A.
2011-12-01
Proper characterization of the error structure of TRMM Precipitation Radar (PR) quantitative precipitation estimation (QPE) is needed for its use in TRMM combined products, water budget studies and hydrological modeling applications. Because of the variety of sources of error in spaceborne radar QPE (attenuation of the radar signal, influence of the land surface, impact of the off-nadir viewing angle, etc.) and the impact of correction algorithms, the problem is addressed by comparing PR QPEs with reference values derived from ground-based measurements (GV) using NOAA/NSSL's National Mosaic QPE (NMQ) system. An investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) on the basis of a 3-month-long data sample. A significant effort has been made to derive a bias-corrected, robust reference rainfall source from NMQ. The GV processing details will be presented along with preliminary results on PR's error characteristics using contingency table statistics, probability distribution comparisons, scatter plots, semi-variograms, and systematic biases and random errors.
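Contingency-table verification of this kind reduces each matched PR/GV pair to hit, miss and false-alarm counts above a rain/no-rain threshold. A minimal sketch of the standard scores (the threshold and data are illustrative):

```python
import numpy as np

def contingency_scores(pr, gv, thresh=0.1):
    """POD, FAR and CSI for rain detection above `thresh` (mm/h)."""
    pr_rain, gv_rain = pr >= thresh, gv >= thresh
    hits = np.sum(pr_rain & gv_rain)
    misses = np.sum(~pr_rain & gv_rain)
    false_alarms = np.sum(pr_rain & ~gv_rain)
    pod = hits / (hits + misses)                  # probability of detection
    far = false_alarms / (hits + false_alarms)    # false alarm ratio
    csi = hits / (hits + misses + false_alarms)   # critical success index
    return pod, far, csi

rng = np.random.default_rng(3)
gv = rng.gamma(0.5, 2.0, 10_000)             # illustrative reference rain rates
pr = gv * rng.lognormal(0.0, 0.5, gv.size)   # illustrative satellite estimates
print("POD=%.2f FAR=%.2f CSI=%.2f" % contingency_scores(pr, gv))
```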
Sources of medical error in refractive surgery.
Moshirfar, Majid; Simpson, Rachel G; Dave, Sonal B; Christiansen, Steven M; Edmonds, Jason N; Culbertson, William W; Pascucci, Stephen E; Sher, Neal A; Cano, David B; Trattler, William B
2013-05-01
To evaluate the causes of laser programming errors in refractive surgery and the outcomes in these cases. In this multicenter, retrospective chart review, 22 eyes of 18 patients who had incorrect data entered into the refractive laser computer system at the time of treatment were evaluated. Cases were analyzed to uncover the etiology of these errors, patient follow-up treatments, and final outcomes. The results were used to identify potential methods to avoid similar errors in the future. Every patient experienced compromised uncorrected visual acuity requiring additional intervention, and 7 of 22 eyes (32%) lost corrected distance visual acuity (CDVA) of at least one line. Sixteen eyes were suitable candidates for additional surgical correction to address these residual visual symptoms and six were not. Thirteen of 22 eyes (59%) received surgical follow-up treatment; nine eyes were treated with contact lenses. After follow-up treatment, six eyes (27%) still had a loss of one line or more of CDVA. Three significant sources of error were identified: cylinder conversion errors, data entry errors, and patient identification errors. Twenty-seven percent of eyes with laser programming errors ultimately lost one or more lines of CDVA. Patients who underwent surgical revision had better outcomes than those who did not. Many of the mistakes identified were likely avoidable had preventive measures been taken, such as strict adherence to patient verification protocol or rigorous rechecking of treatment parameters. Copyright 2013, SLACK Incorporated.
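Cylinder conversion, the first error source named above, means transposing a refraction between plus- and minus-cylinder notation, and a sign slip here changes the delivered treatment. The transposition rule itself is standard (new sphere = sphere + cylinder, cylinder flips sign, axis rotates 90 degrees); a small sketch with an illustrative refraction:

```python
def transpose_cylinder(sphere, cylinder, axis):
    """Transpose a refraction between plus- and minus-cylinder notation.

    Rule: new sphere = sphere + cylinder; the cylinder changes sign;
    the axis rotates by 90 degrees (kept in the 1-180 range).
    """
    new_sphere = sphere + cylinder
    new_cylinder = -cylinder
    new_axis = axis + 90 if axis <= 90 else axis - 90
    return new_sphere, new_cylinder, new_axis

# -2.00 +1.50 x 090 in plus-cylinder form equals -0.50 -1.50 x 180:
print(transpose_cylinder(-2.00, +1.50, 90))  # (-0.5, -1.5, 180)
```

A laser programmed with plus-cylinder numbers when it expects minus-cylinder form (or vice versa) treats a substantially different ametropia, which is consistent with the refractive surprises reported above.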
Lane, Kevin J; Kangsen Scammell, Madeleine; Levy, Jonathan I; Fuller, Christina H; Parambi, Ron; Zamore, Wig; Mwamburi, Mkaya; Brugge, Doug
2013-09-08
The growing interest in research on the health effects of near-highway air pollutants requires an assessment of potential sources of error in exposure assignment techniques that rely on residential proximity to roadways. We compared the amount of positional error in the geocoding process for three different data sources (parcels, TIGER and StreetMap USA) to a "gold standard" residential geocoding process that used ortho-photos, large multi-building parcel layouts or large multi-unit building floor plans. The potential effect of positional error for each geocoding method was assessed as part of a proximity to highway epidemiological study in the Boston area, using all participants with complete address information (N = 703). Hourly time-activity data for the most recent workday/weekday and non-workday/weekend were collected to examine time spent in five different micro-environments (inside of home, outside of home, school/work, travel on highway, and other). Analysis included examination of whether time-activity patterns were differentially distributed either by proximity to highway or across demographic groups. Median positional error was significantly higher in street network geocoding (StreetMap USA = 23 m; TIGER = 22 m) than parcel geocoding (8 m). When restricted to multi-building parcels and large multi-unit building parcels, all three geocoding methods had substantial positional error (parcels = 24 m; StreetMap USA = 28 m; TIGER = 37 m). Street network geocoding also differentially introduced greater amounts of positional error in the proximity to highway study in the 0-50 m proximity category. Time spent inside home on workdays/weekdays differed significantly by demographic variables (age, employment status, educational attainment, income and race). Time-activity patterns were also significantly different when stratified by proximity to highway, with those participants residing in the 0-50 m proximity category reporting significantly more time in the school/work micro-environment on workdays/weekdays than all other distance groups. These findings indicate the potential for both differential and non-differential exposure misclassification due to geocoding error and time-activity patterns in studies of highway proximity. We also propose a multi-stage manual correction process to minimize positional error. Additional research is needed in other populations and geographic settings.
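The core positional-error metric in a comparison like this is simply the distance between each method's coordinates and the gold-standard coordinates, summarized by its median. A minimal sketch using the haversine distance (the coordinates are illustrative):

```python
import numpy as np

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between points given in degrees."""
    r = 6_371_000.0  # mean Earth radius (m)
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = p2 - p1, np.radians(lon2 - lon1)
    a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

# Illustrative pairs: geocoded vs gold-standard positions (lat, lon).
geo  = np.array([[42.3601, -71.0589], [42.3611, -71.0600]])
gold = np.array([[42.3603, -71.0591], [42.3609, -71.0597]])
errors = haversine_m(geo[:, 0], geo[:, 1], gold[:, 0], gold[:, 1])
print("median positional error: %.1f m" % np.median(errors))
```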
The Chandra Source Catalog 2.0: Early Cross-matches
NASA Astrophysics Data System (ADS)
Rots, Arnold H.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula
2018-01-01
Cross-matching the Chandra Source Catalog (CSC) with other catalogs presents considerable challenges, since the Point Spread Function (PSF) of the Chandra X-ray Observatory varies significantly over the field of view. For the second release of the CSC (CSC2) we have been developing a cross-match tool that is based on the Bayesian algorithms of Budavari, Heinis, and Szalay (ApJ 679, 301 and 705, 739), making use of the error ellipses for the derived positions of the sources. However, calculating match probabilities only on the basis of error ellipses breaks down when the PSFs are significantly different. Not only can bona fide matches easily be missed, but the scene is also muddied by ambiguous multiple matches. These are issues that are not commonly addressed in cross-match tools. We have applied a satisfactory modification to the algorithm that, although not perfect, ameliorates the problems for the vast majority of such cases. We will present some early cross-matches of the CSC2 catalog with obvious candidate catalogs and report on the determination of the absolute astrometric error of the CSC2 based on such cross-matches. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
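For circular Gaussian position errors, the Budavari-Szalay Bayes factor cited above has a simple closed form: for separation psi and per-catalog errors sigma1 and sigma2 (all in radians), B = 2/(sigma1^2 + sigma2^2) * exp(-psi^2 / (2*(sigma1^2 + sigma2^2))). A minimal sketch of this special case (the CSC2 tool generalizes it to error ellipses, and the numbers below are illustrative):

```python
import numpy as np

ARCSEC = np.pi / (180 * 3600)  # arcsec -> radians

def bayes_factor(psi_as, sigma1_as, sigma2_as):
    """Two-source Bayes factor for circular Gaussian position errors.

    psi_as: angular separation; sigma*_as: 1-sigma position errors (arcsec).
    """
    psi, s1, s2 = psi_as * ARCSEC, sigma1_as * ARCSEC, sigma2_as * ARCSEC
    var_sum = s1**2 + s2**2
    return (2.0 / var_sum) * np.exp(-psi**2 / (2.0 * var_sum))

# A 0.5" separation with 0.3" and 0.4" position errors: B >> 1, i.e. the
# data strongly favor the two detections being the same source.
print(f"B = {bayes_factor(0.5, 0.3, 0.4):.3g}")
```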
Geographically correlated orbit error
NASA Technical Reports Server (NTRS)
Rosborough, G. W.
1989-01-01
The dominant error source in estimating the orbital position of a satellite from ground-based tracking data is the modeling of the Earth's gravity field. The resulting orbit error due to gravity field model errors is predominantly long-wavelength in nature. This results in an orbit error signature that is strongly correlated over distances on the size of ocean basins. Anderle and Hoskin (1977) have shown that the orbit error along a given ground track is also correlated to some degree with the orbit error along adjacent ground tracks. This cross-track correlation is verified here and is found to be significant out to nearly 1000 kilometers in the case of TOPEX/POSEIDON when using the GEM-T1 gravity model. Finally, it was determined that even the orbit errors at points where ascending and descending ground traces cross are somewhat correlated. The implication of these various correlations is that the orbit error due to gravity error is geographically correlated. Such correlations have direct implications when using altimetry to recover oceanographic signals.
Aydin, Ümit; Vorwerk, Johannes; Küpper, Philipp; Heers, Marcel; Kugel, Harald; Galka, Andreas; Hamid, Laith; Wellmer, Jörg; Kellinghaus, Christoph; Rampp, Stefan; Wolters, Carsten Hermann
2014-01-01
To increase the reliability of the non-invasive determination of the irritative zone in presurgical epilepsy diagnosis, we introduce here a new experimental and methodological source analysis pipeline that combines the complementary information in EEG and MEG, and apply it to data from a patient suffering from refractory focal epilepsy. Skull conductivity parameters in a six-compartment finite element head model with brain anisotropy, constructed from individual MRI data, are estimated in a calibration procedure using somatosensory evoked potential (SEP) and field (SEF) data. These data are measured in a single run before acquisition of further runs of spontaneous epileptic activity. Our results show that even for single interictal spikes, volume conduction effects dominate over noise and need to be taken into account for accurate source analysis. While cerebrospinal fluid and brain anisotropy influence both modalities, only EEG is sensitive to skull conductivity, and conductivity calibration significantly reduces the difference in, especially, depth localization between the two modalities, emphasizing its importance for combined EEG and MEG source analysis. On the other hand, localization differences due to the distinct sensitivity profiles of EEG and MEG persist. In the case of a moderate error in skull conductivity, combined source analysis results can still profit from the different sensitivity profiles of EEG and MEG to accurately determine the location, orientation and strength of the underlying sources. Conversely, significant errors in skull modeling are reflected in EEG reconstruction errors and could reduce the goodness of fit to combined datasets. For combined EEG and MEG source analysis, we therefore recommend calibrating skull conductivity using additionally acquired SEP/SEF data. PMID:24671208
NASA Technical Reports Server (NTRS)
Pierson, W. J.; Salfi, R. E.
1978-01-01
Significant wave heights estimated from the shape of the return pulse waveform of the altimeter on GEOS-3 for forty-four orbit segments obtained during 1975 and 1976 are compared with the significant wave heights specified by the spectral ocean wave model (SOWM), the presently operational numerical wave forecasting model at the Fleet Numerical Weather Central. Except for a number of orbit segments with poor agreement and larger errors, the SOWM specifications tended to be biased 0.5 to 1.0 meters too low and to have RMS errors of 1.0 to 1.4 meters. The less frequent larger errors can be attributed to poor wind data for some parts of the Northern Hemisphere oceans. The bias can be attributed to the somewhat too light winds used to generate the waves in the model. Other sources of error are identified in the equatorial and trade wind areas.
North Alabama Lightning Mapping Array (LMA): VHF Source Retrieval Algorithm and Error Analyses
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solakiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J.; Bailey, J.; Krider, E. P.; Bateman, M. G.; Boccippio, D.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA Marshall Space Flight Center (MSFC) and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix Theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results. However, for many source locations, the Curvature Matrix Theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
Fiber optic distributed temperature sensing for fire source localization
NASA Astrophysics Data System (ADS)
Sun, Miao; Tang, Yuquan; Yang, Shuang; Sigrist, Markus W.; Li, Jun; Dong, Fengzhong
2017-08-01
A method for localizing a fire source based on a distributed temperature sensor system is proposed. Two sections of optical fiber were placed orthogonally to each other as the sensing elements. A tray of alcohol was lit to act as a fire outbreak in a cabinet with an uneven ceiling, simulating a real fire scene. Experiments were carried out to demonstrate the feasibility of the method. Rather large fluctuations and systematic errors in predicting the exact room coordinates of the fire source, caused by the uneven ceiling, were observed. Two mathematical methods (smoothing the recorded temperature curves and finding the temperature peak positions) to improve the prediction accuracy are presented, and the experimental results indicate that the fluctuation ranges and systematic errors are significantly reduced. The proposed scheme is simple and appears reliable enough to locate a fire source in large spaces.
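The two processing steps named above (smoothing the recorded trace along each fiber, then locating its temperature peak) map directly onto standard signal-processing primitives. A minimal sketch with two orthogonal fibers; the geometry, noise levels and plume shapes are illustrative, not the experimental data.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(4)
pos = np.arange(0.0, 20.0, 0.5)  # sensing positions along each fiber (m)

def fiber_trace(hot_spot, width=1.5, ambient=20.0, rise=40.0):
    # Illustrative trace: ambient + Gaussian hot-air plume + sensor noise.
    plume = rise * np.exp(-((pos - hot_spot) / width) ** 2)
    return ambient + plume + rng.normal(0, 1.5, pos.size)

# Orthogonal fibers: the x-fiber peak gives x, the y-fiber peak gives y.
trace_x, trace_y = fiber_trace(hot_spot=7.3), fiber_trace(hot_spot=12.1)
x_est, y_est = (pos[np.argmax(savgol_filter(t, window_length=11, polyorder=3))]
                for t in (trace_x, trace_y))
print(f"estimated fire location: ({x_est:.1f} m, {y_est:.1f} m)")
```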
NASA Technical Reports Server (NTRS)
Gordon, Steven C.
1993-01-01
Spacecraft in orbit near libration point L1 in the Sun-Earth system are excellent platforms for research concerning solar effects on the terrestrial environment. One spacecraft mission launched in 1978 used an L1 orbit for nearly 4 years, and future L1 orbital missions are also being planned. Orbit determination and station-keeping are, however, required for these orbits. In particular, orbit determination error analysis may be used to compute the state uncertainty after a predetermined tracking period; the predicted state uncertainty levels then impact the control costs computed in station-keeping simulations. Error sources, such as solar radiation pressure and planetary mass uncertainties, are also incorporated. For future missions, there may be some flexibility in the type and size of the spacecraft's nominal trajectory, but different orbits may produce varying error analysis and station-keeping results. The nominal path, for instance, can be (nearly) periodic or distinctly quasi-periodic. A periodic 'halo' orbit may be constructed to be significantly larger than a quasi-periodic 'Lissajous' path; both may meet mission requirements, but the required control costs for these orbits may differ. Also for this spacecraft tracking and control simulation problem, experimental design methods can be used to determine the most significant uncertainties. That is, these methods can identify the error sources in the tracking and control problem that most impact the control cost (output); they also produce an equation that gives the approximate functional relationship between the error inputs and the output.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk
Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied on two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that the AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and the AEDA's capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Moes, Timothy R.
1994-01-01
Presented is a feasibility and error analysis for a hypersonic flush airdata system on a hypersonic flight experiment (HYFLITE). HYFLITE heating loads make intrusive airdata measurement impractical. Although this analysis is specifically for the HYFLITE vehicle and trajectory, the problems analyzed are generally applicable to hypersonic vehicles. A layout of the flush-port matrix is shown. Surface pressures are related to airdata parameters using a simple aerodynamic model. The model is linearized using small perturbations and inverted using nonlinear least squares. The effects of various error sources on the overall uncertainty are evaluated using an error simulation. The error sources modeled include boundary-layer/viscous interactions, pneumatic lag, thermal transpiration in the sensor pressure tubing, misalignment in the matrix layout, thermal warping of the vehicle nose, sampling resolution, and transducer error. Using simulated pressure data as input to the estimation algorithm, the effects of the various error sources are analyzed by comparing estimator outputs with the original trajectory. To obtain ensemble averages, the simulation is run repeatedly and output statistics are compiled. Output errors resulting from the various error sources are presented as a function of Mach number. Final uncertainties with all modeled error sources included are presented as a function of Mach number.
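A flush airdata system of this kind estimates the airdata state by inverting an aerodynamic port-pressure model against the measured surface pressures. Below is a minimal sketch using the widely used potential-flow/modified-Newtonian blend for the port pressures; the port layout, calibration parameter and noise level are illustrative, and this is not asserted to be the HYFLITE model or algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

# Port layout: cone angle lam (from the nose axis) and clock angle phi (rad).
lam = np.radians([0.0, 30.0, 30.0, 30.0, 30.0])
phi = np.radians([0.0, 0.0, 90.0, 180.0, 270.0])

def port_pressures(x, eps=0.1):
    """p_i = qc*(cos^2 th_i + eps*sin^2 th_i) + p_inf, where th_i is the
    local flow incidence angle at port i for angle of attack/sideslip."""
    alpha, beta, qc, p_inf = x
    cth = (np.cos(alpha) * np.cos(beta) * np.cos(lam)
           + np.sin(beta) * np.sin(phi) * np.sin(lam)
           + np.sin(alpha) * np.cos(beta) * np.cos(phi) * np.sin(lam))
    return qc * (cth**2 + eps * (1.0 - cth**2)) + p_inf

# Simulate noisy measurements at alpha = 4 deg, beta = 1 deg, then re-estimate.
x_true = np.array([np.radians(4.0), np.radians(1.0), 8_000.0, 1_200.0])
p_meas = port_pressures(x_true) + np.random.default_rng(6).normal(0, 5.0, lam.size)
fit = least_squares(lambda x: port_pressures(x) - p_meas,
                    x0=[0.0, 0.0, 5_000.0, 1_000.0])
print(np.degrees(fit.x[:2]), fit.x[2:])  # alpha, beta (deg); qc, p_inf (Pa)
```

Running such a fit repeatedly with freshly drawn error realizations, as the abstract describes, yields the ensemble statistics of the airdata errors.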
Accounting for measurement error in log regression models with applications to accelerated testing.
Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M
2018-01-01
In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.
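A generic IRLS loop of the kind referred to above alternates between solving a weighted least-squares problem and updating the weights from the residuals. The sketch below is a plain IRLS skeleton with a Huber-type weight standing in for the paper's measurement-error-derived weights, applied to a log-lifetime model linear in 1/T (an Eyring-type form; the exact reduced-Eyring parameterization used by the authors is not asserted). All data are synthetic.

```python
import numpy as np

def irls(X, y, weight_fn, n_iter=25, tol=1e-8):
    """Iteratively re-weighted least squares.

    weight_fn(residuals) returns per-observation weights; the weighted
    normal equations are re-solved until the coefficients stabilize.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        w = weight_fn(y - X @ beta)
        Xw = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ Xw, Xw.T @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

rng = np.random.default_rng(5)
T = rng.uniform(320.0, 400.0, 80)                # stress temperatures (K)
X = np.column_stack([np.ones_like(T), 1.0 / T])  # log-lifetime linear in 1/T
y = X @ np.array([-10.0, 8000.0]) + rng.normal(0, 0.3, T.size)

# Huber-type weights: downweight observations with large residuals.
huber = lambda r, c=0.4: np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))
print(irls(X, y, huber))  # close to the true (-10, 8000)
```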
Tropical forecasting - Predictability perspective
NASA Technical Reports Server (NTRS)
Shukla, J.
1989-01-01
Results are presented of classical predictability studies and forecast experiments with observed initial conditions to show the nature of initial error growth and final error equilibration for the tropics and midlatitudes, separately. It is found that the theoretical upper limit of tropical circulation predictability is far less than for midlatitudes. The error growth for a complete general circulation model is compared to a dry version of the same model in which there is no prognostic equation for moisture, and diabatic heat sources are prescribed. It is found that the growth rate of synoptic-scale errors for the dry model is significantly smaller than for the moist model, suggesting that the interactions between dynamics and moist processes are among the important causes of atmospheric flow predictability degradation. Results are then presented of numerical experiments showing that correct specification of the slowly varying boundary condition of SST produces significant improvement in the prediction of time-averaged circulation and rainfall over the tropics.
Time trend of injection drug errors before and after implementation of bar-code verification system.
Sakushima, Ken; Umeki, Reona; Endoh, Akira; Ito, Yoichi M; Nasuhara, Yasuyuki
2015-01-01
Bar-code technology, used for verification of patients and their medication, could prevent medication errors in clinical practice. A retrospective analysis of electronically stored medical error reports was conducted in a university hospital. The number of reported medication errors involving injected drugs, including wrong drug administration and administration to the wrong patient, was compared before and after implementation of the bar-code verification system for inpatient care. A total of 2867 error reports associated with injected drugs were extracted. Wrong patient errors decreased significantly after implementation of the bar-code verification system (17.4/year vs. 4.5/year, p < 0.05), although wrong drug errors did not decrease sufficiently (24.2/year vs. 20.3/year). The source of medication errors due to wrong drugs was drug preparation in hospital wards. Bar-code medication administration is effective for prevention of wrong patient errors. However, ordinary bar-code verification systems are limited in their ability to prevent incorrect drug preparation in hospital wards.
Chiu, Chui-De; Tseng, Mei-Chih Meg; Chien, Yi-Ling; Liao, Shih-Cheng; Liu, Chih-Min; Yeh, Yei-Yu; Hwu, Hai-Gwo
2016-01-01
Objective: An intertwined relationship has been found between dissociative and psychotic symptoms, as the two symptom clusters frequently co-occur, suggesting some shared risk factors. Using a source monitoring paradigm, previous studies have shown that patients with schizophrenia made more errors in source monitoring, suggesting that a weakened sense of individuality may be associated with psychotic symptoms. However, no studies have verified a relationship between sense of individuality and dissociation, and it is unclear whether an altered sense of individuality is a shared sociocognitive deficit underlying both dissociation and psychosis. Method: Data from 80 acute psychiatric patients with unspecified mental disorders were analyzed to test the hypothesis that an altered sense of individuality underlies dissociation and psychosis. Behavioral tasks, including tests of intelligence and source monitoring, as well as interview schedules and self-report measures of dissociative and psychotic symptoms, general psychopathology, and trauma history, were administered. Results: Significant correlations of medium effect sizes indicated an association between errors attributing the source of self-generated items and positive psychotic symptoms and the absorption and amnesia measures of dissociation. The associations with dissociative measures remained significant after the effects of intelligence, general psychopathology, and trauma history were excluded. Moreover, the relationships between source misattribution and dissociative measures remained marginally significant and significant after controlling for positive and negative psychotic symptoms, respectively. Limitations: Self-reported measures were collected from a small sample, and most of the participants were receiving medications when tested, which may have influenced their cognitive performance. Conclusions: A tendency to misidentify the source of self-generated items characterized both dissociation and psychosis. An altered sense of individuality embedded in self-referential representations appears to be a common sociocognitive deficit of dissociation and psychosis. PMID:27148147
NASA Technical Reports Server (NTRS)
Taff, L. G.
1998-01-01
Since the announcement of the discovery of sources of bursts of gamma-ray radiation in 1973, hundreds more reports of such bursts have now been published. Numerous artificial satellites have been equipped with gamma-ray detectors including the very successful Compton Gamma Ray Observatory BATSE instrument. Unfortunately, we have made no progress in identifying the source(s) of this high energy radiation. We suspected that this was a consequence of the method used to define gamma-ray burst source "error boxes." An alternative procedure to compute gamma-ray burst source positions, with a purely physical underpinning, was proposed in 1988 by Taff. Since then we have also made significant progress in understanding the analytical nature of the triangulation problem and in computing actual gamma-ray burst positions and their corresponding error boxes. For the former, we can now mathematically illustrate the crucial role of the area occupied by the detectors, while for the latter, the Atteia et al. (1987) catalog has been completely re-reduced. There are very few discrepancies in locations between our results and those of the customary "time difference of arrival" procedure. Thus, we have numerically demonstrated that the end result, for the positions, of these two very different-looking procedures is the same. Finally, for the first time, we provide a sample of realistic "error boxes" whose non-simple shapes vividly portray the difficulty of burst source localization.
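Classical triangulation from a pair of widely separated detectors constrains a burst to an annulus on the sky: the arrival-time difference dt over a baseline of length D gives cos(theta) = c*dt/D, with the annulus half-width set by the timing uncertainty. A minimal sketch of that geometry (the baseline, delay and timing error are illustrative):

```python
import numpy as np

C = 299_792.458  # speed of light (km/s)

def annulus(baseline_km, dt_s, sigma_dt_s):
    """Angle (deg) between baseline and source, with 1-sigma half-width."""
    cos_theta = C * dt_s / baseline_km
    theta = np.arccos(cos_theta)
    # Error propagation: |d theta / d dt| = c / (D * sin(theta)).
    half_width = C * sigma_dt_s / (baseline_km * np.sin(theta))
    return np.degrees(theta), np.degrees(half_width)

# Two detectors 1.5e8 km apart, a 120 s delay and 0.1 s timing uncertainty.
theta, width = annulus(1.5e8, 120.0, 0.1)
print(f"annulus at {theta:.2f} deg, half-width {width:.4f} deg")
```

Intersecting annuli from several baselines is what produces the final error box, and its shape depends strongly on the detector geometry, which is the point made above about the area occupied by the detectors.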
Accounting for measurement error: a critical but often overlooked process.
Harris, Edward F; Smith, Richard N
2009-12-01
Due to instrument imprecision and human inconsistencies, measurements are not free of error. Technical error of measurement (TEM) is the variability encountered between dimensions when the same specimens are measured at multiple sessions. A goal of a data collection regimen is to minimise TEM. The few studies that actually quantify TEM, regardless of discipline, report that it is substantial and can affect results and inferences. This paper reviews some statistical approaches for identifying and controlling TEM. Statistically, TEM is part of the residual ('unexplained') variance in a statistical test, so accounting for TEM, which requires repeated measurements, enhances the chances of finding a statistically significant difference if one exists. The aim of this paper was to review and discuss common statistical designs relating to types of error and statistical approaches to error accountability. This paper addresses issues of landmark location, validity, technical and systematic error, analysis of variance, scaled measures and correlation coefficients in order to guide the reader towards correct identification of true experimental differences. Researchers commonly infer characteristics about populations from comparatively restricted study samples. Most inferences are statistical and, aside from concerns about adequate accounting for known sources of variation with the research design, an important source of variability is measurement error. Variability in locating landmarks that define variables is obvious in odontometrics, cephalometrics and anthropometry, but the same concerns about measurement accuracy and precision extend to all disciplines. With increasing accessibility to computer-assisted methods of data collection, the ease of incorporating repeated measures into statistical designs has improved. Accounting for this technical source of variation increases the chance of finding biologically true differences when they exist.
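The repeated-measures statistic referred to above has a standard closed form (Dahlberg's): TEM = sqrt(sum(d_i^2) / (2N)), where d_i is the difference between the two measurements of specimen i. A minimal sketch with illustrative duplicate measurements (whether this exact formulation is the one the review adopts is not asserted):

```python
import numpy as np

def technical_error_of_measurement(x1, x2):
    """Dahlberg's TEM for duplicate measurements of the same specimens:
    TEM = sqrt(sum(d_i^2) / (2N)), d_i = first minus second measurement."""
    d = np.asarray(x1, float) - np.asarray(x2, float)
    return np.sqrt(np.sum(d**2) / (2 * d.size))

# Illustrative duplicate odontometric measurements (mm), two sessions.
s1 = np.array([10.1, 9.8, 11.2, 10.5, 9.9])
s2 = np.array([10.3, 9.7, 11.0, 10.6, 10.1])
tem = technical_error_of_measurement(s1, s2)
rel_tem = 100 * tem / np.mean(np.concatenate([s1, s2]))  # %TEM, scale-free
print(f"TEM = {tem:.3f} mm, relative TEM = {rel_tem:.1f}%")
```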
Joint Source Location and Focal Mechanism Inversion: efficiency, accuracy and applications
NASA Astrophysics Data System (ADS)
Liang, C.; Yu, Y.
2017-12-01
The analysis of induced seismicity has become common practice for evaluating the results of hydraulic fracturing treatment. Liang et al. (2016) proposed a joint Source Scanning Algorithm (jSSA for short) to obtain microseismic event locations and focal mechanisms simultaneously. The jSSA is superior to the traditional SSA in many aspects, but its computational cost is too high for real-time monitoring. In this study, we have developed several scanning schemas to reduce the computation time. A multi-stage scanning schema is shown to improve the efficiency significantly while retaining accuracy. A series of tests were carried out using both real field data and synthetic data to evaluate the accuracy of the method and its dependence on noise level, source depth, focal mechanism and other factors. The surface-based arrays provide better constraints on horizontal location errors (<20 m) and angular errors of P axes (within 10 degrees for S/N > 0.5). For sources with varying rakes, dips, strikes and depths, the errors are mostly controlled by the partition of positive and negative polarities among the quadrants: more evenly partitioned polarities yield better results in both locations and focal mechanisms. Nevertheless, even with poorly resolved focal mechanisms, the optimized jSSA method can still improve location accuracy significantly. Based on the much more densely distributed events and focal mechanisms, a gridded stress inversion is conducted to obtain an evenly distributed stress field. The full potential of the jSSA has yet to be explored in different directions, especially in earthquake seismology, as seismic arrays become increasingly dense.
Determination of Barometric Altimeter Errors for the Orion Exploration Flight Test-1 Entry
NASA Technical Reports Server (NTRS)
Brown, Denise L.; Munoz, Jean-Philippe; Gay, Robert
2011-01-01
The EFT-1 mission is the unmanned flight test for the upcoming Multi-Purpose Crew Vehicle (MPCV). During entry, the EFT-1 vehicle will trigger several Landing and Recovery System (LRS) events, such as parachute deployment, based on onboard altitude information. The primary altitude source is the filtered navigation solution updated with GPS measurement data. The vehicle also has three barometric altimeters that will be used to measure atmospheric pressure during entry. In the event that GPS data is not available during entry, the altitude derived from the barometric altimeter pressure will be used to trigger chute deployment for the drogues and main parachutes. Therefore it is important to understand the impact of error sources on the pressure measured by the barometric altimeters and on the altitude derived from that pressure. There are four primary error sources impacting the sensed pressure: sensor errors, Analog to Digital conversion errors, aerodynamic errors, and atmosphere modeling errors. This last error source is induced by the conversion from pressure to altitude in the vehicle flight software, which requires an atmosphere model such as the US Standard 1976 Atmosphere model. There are several secondary error sources as well, such as waves, tides, and latencies in data transmission. Typically, for error budget calculations it is assumed that all error sources are independent, normally distributed variables. Thus, the initial approach to developing the EFT-1 barometric altimeter altitude error budget was to create an itemized error budget under these assumptions. This budget was to be verified by simulation using high fidelity models of the vehicle hardware and software. The simulation barometric altimeter model includes hardware error sources and a data-driven model of the aerodynamic errors expected to impact the pressure in the midbay compartment in which the sensors are located. The aerodynamic model includes the pressure difference between the midbay compartment and the free stream pressure as a function of altitude, oscillations in sensed pressure due to wake effects, and an acoustics model capturing fluctuations in pressure due to motion of the passive vents separating the barometric altimeters from the outside of the vehicle.
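As a concrete illustration of the pressure-to-altitude step, here is a sketch using the troposphere layer of the US Standard Atmosphere 1976 (valid below 11 km); this is not the EFT-1 flight software, and the test pressure is made up.

```python
import math

# US Standard Atmosphere 1976, troposphere layer (0-11 km geopotential altitude)
P0 = 101325.0     # sea-level pressure, Pa
T0 = 288.15       # sea-level temperature, K
L  = 0.0065       # temperature lapse rate, K/m
g  = 9.80665      # standard gravity, m/s^2
R  = 8.31446      # universal gas constant, J/(mol K)
M  = 0.0289644    # molar mass of dry air, kg/mol

def pressure_to_altitude(p_pa):
    """Invert the barometric formula p = P0*(1 - L*h/T0)^(g*M/(R*L))
    to obtain geopotential altitude (m). Valid for 0 < h < 11 km."""
    return (T0 / L) * (1.0 - (p_pa / P0) ** (R * L / (g * M)))

print(pressure_to_altitude(54000.0))  # -> about 5000 m
```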
High accuracy switched-current circuits using an improved dynamic mirror
NASA Technical Reports Server (NTRS)
Zweigle, G.; Fiez, T.
1991-01-01
The switched-current technique, a recently developed circuit approach to analog signal processing, has emerged as an alternative/complement to the well-established switched-capacitor circuit technique. High-speed switched-current circuits offer potential cost and power savings over slower switched-capacitor circuits. Accuracy improvements are a primary concern at this stage in the development of the switched-current technique. Use of the dynamic current mirror has produced circuits that are insensitive to transistor matching errors. The dynamic current mirror has been limited by other sources of error, including clock feedthrough and voltage transient errors. In this paper we present an improved switched-current building block using the dynamic current mirror. Utilizing current feedback, the errors due to current imbalance in the dynamic current mirror are reduced. Simulations indicate that this feedback can reduce total harmonic distortion by as much as 9 dB. Additionally, we have developed a clock-feedthrough reduction scheme for which simulations reveal a potential 10 dB total harmonic distortion improvement. The clock-feedthrough reduction scheme also significantly reduces offset errors and allows for cancellation with a constant current source. Experimental results confirm the simulated improvements.
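To make the quoted dB figures concrete, here is a sketch of how total harmonic distortion is typically read off an FFT; the waveforms are synthetic and only illustrate a roughly 9 dB improvement, not the actual circuit behavior.

```python
import numpy as np

def thd_db(signal, fs, f0, n_harmonics=5):
    """Total harmonic distortion: harmonic power relative to the fundamental,
    read off an FFT of an integer number of cycles. Returns THD in dB."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    bin_of = lambda f: np.argmin(np.abs(freqs - f))
    fund = spectrum[bin_of(f0)]
    harm = np.sqrt(sum(spectrum[bin_of(k * f0)] ** 2
                       for k in range(2, n_harmonics + 2)))
    return 20.0 * np.log10(harm / fund)

fs, f0, n = 100_000.0, 1000.0, 10_000            # 100 cycles -> bins line up
t = np.arange(n) / fs
before = np.sin(2*np.pi*f0*t) + 0.0100*np.sin(2*np.pi*2*f0*t)  # -40 dB THD
after  = np.sin(2*np.pi*f0*t) + 0.0035*np.sin(2*np.pi*2*f0*t)  # ~9 dB better
print(thd_db(before, fs, f0), thd_db(after, fs, f0))
```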
Digital Mirror Device Application in Reduction of Wave-front Phase Errors
Zhang, Yaping; Liu, Yan; Wang, Shuxue
2009-01-01
In order to correct the image distortion created by the mixing/shear layer, creative and effectual correction methods are necessary. First, a method combining adaptive optics (AO) correction with a digital micro-mirror device (DMD) is presented. Second, the performance of an AO system using the Phase Diverse Speckle (PDS) principle is characterized in detail. By combining the DMD method with PDS, a significant reduction in wavefront phase error is achieved in simulations and experiments. This kind of complex correction principle can be used to recover degraded images caused by unforeseen error sources. PMID:22574016
Permeable Surface Corrections for Ffowcs Williams and Hawkings Integrals
NASA Technical Reports Server (NTRS)
Lockard, David P.; Casper, Jay H.
2005-01-01
The acoustic prediction methodology discussed herein applies an acoustic analogy to calculate the sound generated by sources in an aerodynamic simulation. Sound is propagated from the computed flow field by integrating the Ffowcs Williams and Hawkings equation on a suitable control surface. Previous research suggests that, for some applications, the integration surface must be placed away from the solid surface to incorporate source contributions from within the flow volume. As such, the fluid mechanisms in the input flow field that contribute to the far-field noise are accounted for by their mathematical projection as a distribution of source terms on a permeable surface. The passage of nonacoustic disturbances through such an integration surface can result in significant error in an acoustic calculation. A correction for the error is derived in the frequency domain using a frozen gust assumption. The correction is found to work reasonably well in several test cases where the error is a small fraction of the actual radiated noise. However, satisfactory agreement has not been obtained between noise predictions using the solution from a three-dimensional, detached-eddy simulation of flow over a cylinder.
NASA Technical Reports Server (NTRS)
Loughman, R.; Flittner, D.; Herman, B.; Bhartia, P.; Hilsenrath, E.; McPeters, R.; Rault, D.
2002-01-01
The SOLSE (Shuttle Ozone Limb Sounding Experiment) and LORE (Limb Ozone Retrieval Experiment) instruments are scheduled for reflight on Space Shuttle flight STS-107 in July 2002. In addition, the SAGE III (Stratospheric Aerosol and Gas Experiment) instrument will begin to make limb scattering measurements during Spring 2002. The optimal estimation technique is used to analyze visible and ultraviolet limb scattered radiances and produce a retrieved ozone profile. The algorithm used to analyze data from the initial flight of the SOLSE/LORE instruments (on Space Shuttle flight STS-87 in November 1997) forms the basis of the current algorithms, with expansion to take advantage of the increased multispectral information provided by SOLSE/LORE-2 and SAGE III. We also present detailed sensitivity analysis for these ozone retrieval algorithms. The primary source of ozone retrieval error is tangent height misregistration (i.e., instrument pointing error), which is relevant throughout the altitude range of interest, and can produce retrieval errors on the order of 10-20 percent due to a tangent height registration error of 0.5 km at the tangent point. Other significant sources of error are sensitivity to stratospheric aerosol and sensitivity to error in the a priori ozone estimate (given assumed instrument signal-to-noise = 200). These can produce errors up to 10 percent for the ozone retrieval at altitudes less than 20 km, but produce little error above that level.
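For reference, a minimal sketch of the linear-Gaussian form of the optimal estimation retrieval (the MAP solution with prior and measurement covariances); the weighting-function matrix and noise levels below are invented, not SOLSE/LORE values.

```python
import numpy as np

def optimal_estimation(y, K, x_a, S_a, S_e):
    """Linear optimal-estimation (Rodgers-style MAP) retrieval:
    x_hat = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - K x_a).
    Returns the retrieved state and its posterior covariance."""
    Se_inv = np.linalg.inv(S_e)
    Sa_inv = np.linalg.inv(S_a)
    S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)   # posterior covariance
    x_hat = x_a + S_hat @ K.T @ Se_inv @ (y - K @ x_a)
    return x_hat, S_hat

# Toy usage: 8 radiances constraining a 3-element ozone-like state
rng = np.random.default_rng(0)
K = rng.normal(size=(8, 3))                # hypothetical weighting functions
x_true = np.array([1.0, 0.5, -0.2])
y = K @ x_true + 0.05 * rng.normal(size=8)
x_hat, _ = optimal_estimation(y, K, np.zeros(3), np.eye(3), 0.0025 * np.eye(8))
print(x_hat)
```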
Economic impact of medication error: a systematic review.
Walsh, Elaine K; Hansen, Christina Raae; Sahm, Laura J; Kearney, Patricia M; Doherty, Edel; Bradley, Colin P
2017-05-01
Medication error is a significant source of morbidity and mortality among patients. Clinical and cost-effectiveness evidence are required for the implementation of quality of care interventions. Reduction of error-related cost is a key potential benefit of interventions addressing medication error. The aim of this review was to describe and quantify the economic burden associated with medication error. PubMed, Cochrane, Embase, CINAHL, EconLit, ABI/INFORM, Business Source Complete were searched. Studies published 2004-2016 assessing the economic impact of medication error were included. Cost values were expressed in Euro 2015. A narrative synthesis was performed. A total of 4572 articles were identified from database searching, and 16 were included in the review. One study met all applicable quality criteria. Fifteen studies expressed economic impact in monetary terms. Mean cost per error per study ranged from €2.58 to €111 727.08. Healthcare costs were used to measure economic impact in 15 of the included studies with one study measuring litigation costs. Four studies included costs incurred in primary care with the remaining 12 measuring hospital costs. Five studies looked at general medication error in a general population with 11 studies reporting the economic impact of an individual type of medication error or error within a specific patient population. Considerable variability existed between studies in terms of financial cost, patients, settings and errors included. Many were of poor quality. Assessment of economic impact was conducted predominantly in the hospital setting with little assessment of primary care impact. Limited parameters were used to establish economic impact. Copyright © 2017 John Wiley & Sons, Ltd.
SU-E-T-210: Surviving a Visit by the Radiological Physics Center.
Grant, W; Mcgary, J; Rosen, I; Nitsch, P; Davidson, S
2012-06-01
To demonstrate an objective approach to determining whether a negative report from the Radiological Physics Center (RPC) indicating a greater than 10% error is valid or clinically significant. The discrepancy involved the clinical activity (mgRaEq) of Cs-137 sources, some manufactured by 3M and some by Amersham. Measurements were made in the proprietary RPC Well Counter, calibrated by the MD Anderson ADCL, and in our Well Counter (CNMC, Model 44D), calibrated by the same laboratory as well as the University of Wisconsin ADCL. In addition, we possess an Amersham Cs-137 Check Source that had been calibrated by the UW-ADCL in 2002. All clinical sources were checked in both Well Counters on the first visit. One clinical source and the Check Source were measured in a second visit that occurred 51 days later. On the initial RPC visit, 9 of 25 sources had at least an 8% discrepancy between the RPC and the Institution, with a maximum of 11%. Contributing errors included our use of an incorrect straw position, an unexplained 2.3% error in the RPC data identified 73 days post-visit, and a 2% variation between the two ADCLs' Chamber Factors for our Well Counter. When we use the 2004 value of Air Kerma Strength for the Check Source to determine a Calibration Factor for the Well Counter, all sources were within 0.5% of their decayed value established in 2002. This work emphasizes the value of having simple Constancy Check systems in a Quality Assurance program, as 'Accuracy' has error bars. The disagreement in calibration data between the ADCL Laboratories, which was at the 2% maximum quoted in their Calibration Reports, is a reminder that there is uncertainty in measurements. Constancy Checks allow one to sort out discrepancies and to answer challenges to the validity of your program. © 2012 American Association of Physicists in Medicine.
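The constancy-check comparison hinges on decay-correcting an old calibration to the measurement date; here is a minimal sketch for Cs-137 (half-life approximately 30.07 y), with a hypothetical reference activity.

```python
import math

def decay_corrected(a0, years_elapsed, half_life_years=30.07):
    """Decay a reference activity (or air-kerma strength) to the measurement
    date: A(t) = A0 * exp(-ln2 * t / T_half). Cs-137 T_half ~ 30.07 y."""
    return a0 * math.exp(-math.log(2.0) * years_elapsed / half_life_years)

# Example: a 2002 calibration value checked ten years later (hypothetical A0)
print(decay_corrected(100.0, 10.0))  # ~79.4% of the original value
```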
Field errors in hybrid insertion devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlueter, R.D.
1995-02-01
Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed.
Lee, Sangyoon; Hu, Xinda; Hua, Hong
2016-05-01
Many error sources have been explored with regard to the depth perception problem in augmented reality environments using optical see-through head-mounted displays (OST-HMDs). Nonetheless, two error sources are commonly neglected: the ray-shift phenomenon and the change in interpupillary distance (IPD). The first source of error arises from the difference in refraction between the virtual and see-through optical paths caused by an optical combiner, which is required in OST-HMDs. The second arises from the change in the viewer's IPD due to eye convergence. In this paper, we analyze the effects of these two error sources on near-field depth perception and propose methods to compensate for them. Furthermore, we investigate their effectiveness through an experiment comparing conditions with and without our error compensation methods applied. In our experiment, participants estimated the egocentric depth of a virtual and a physical object located at seven different near-field distances (40-200 cm) using a perceptual matching task. Although the experimental results showed different patterns depending on the target distance, they demonstrated that the near-field depth perception error can be effectively reduced to a very small level (at most 1 percent error) by compensating for the two mentioned error sources.
Meteorological Error Budget Using Open Source Data
2016-09-01
ARL-TR-7831, SEP 2016, US Army Research Laboratory: Meteorological Error Budget Using Open-Source Data, by J Cogan, J Smith, and P Haines.
Error Sources in Asteroid Astrometry
NASA Technical Reports Server (NTRS)
Owen, William M., Jr.
2000-01-01
Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.
Characterization of in Band Stray Light in SBUV-2 Instruments
NASA Technical Reports Server (NTRS)
Huang, L. K.; DeLand, M. T.; Taylor, S. L.; Flynn, L. E.
2014-01-01
Significant in-band stray light (IBSL) error at solar zenith angle (SZA) values larger than 77deg near sunset in 4 SBUV/2 (Solar Backscattered Ultraviolet) instruments, on board the NOAA-14, 17, 18 and 19 satellites, has been characterized. The IBSL error is caused by large surface reflection and scattering of the air-gapped depolarizer in front of the instrument's monochromator aperture. The source of the IBSL error is direct solar illumination of instrument components near the aperture rather than from earth shine. The IBSL contamination at 273 nm can reach 40% of earth radiance near sunset, which results in as much as a 50% error in the retrieved ozone from the upper stratosphere. We have analyzed SBUV/2 albedo measurements on both the dayside and nightside to develop an empirical model for the IBSL error. This error has been corrected in the V8.6 SBUV/2 ozone retrieval.
The use of source memory to identify one's own episodic confusion errors.
Smith, S M; Tindell, D R; Pierce, B H; Gilliland, T R; Gerkens, D R
2001-03-01
In 4 category cued recall experiments, participants falsely recalled nonlist common members, a semantic confusion error. Errors were more likely if critical nonlist words were presented on an incidental task, causing source memory failures called episodic confusion errors. Participants could better identify the source of falsely recalled words if they had deeply processed the words on the incidental task. For deep but not shallow processing, participants could reliably include or exclude incidentally shown category members in recall. The illusion that critical items actually appeared on categorized lists was diminished but not eradicated when participants identified episodic confusion errors post hoc among their own recalled responses; participants often believed that critical items had been on both the incidental task and the study list. Improved source monitoring can potentially mitigate episodic (but not semantic) confusion errors.
Grieco-Calub, Tina M.; Litovsky, Ruth Y.
2010-01-01
Objectives: To measure sound source localization in children who have sequential bilateral cochlear implants (BICIs); to determine if localization accuracy correlates with performance on a right-left discrimination task (i.e., spatial acuity); to determine if there is a measurable bilateral benefit on a sound source identification task (i.e., localization accuracy) by comparing performance under bilateral and unilateral listening conditions; and to determine if sound source localization continues to improve with longer durations of bilateral experience. Design: Two groups of children participated in this study: a group of 21 children who received BICIs in sequential procedures (5-14 years old) and a group of 7 typically developing children with normal acoustic hearing (NH; 5 years old). Testing was conducted in a large sound-treated booth with loudspeakers positioned on a horizontal arc with a radius of 1.2 m. Children participated in two experiments that assessed spatial hearing skills. Spatial hearing acuity was assessed with a discrimination task in which listeners determined whether a sound source was presented on the right or left side of center; the smallest angle at which performance on this task is reliably above chance is the minimum audible angle (MAA). Sound localization accuracy was assessed with a sound source identification task in which children identified the perceived position of the sound source from a multi-loudspeaker array (7 or 15 loudspeakers); errors are quantified using the root-mean-square (RMS) error. Results: Sound localization accuracy was highly variable among the children with BICIs, with RMS errors ranging from 19°–56°. Performance of the NH group, with RMS errors ranging from 9°–29°, was significantly better. Within the BICI group, 11 of 21 children had smaller RMS errors in the bilateral than in the unilateral listening condition, indicating bilateral benefit. There was a significant correlation between spatial acuity and sound localization accuracy (R2=0.68, p<0.01), suggesting that children who achieve small RMS errors tend to have the smallest MAAs. Although there was large intersubject variability, testing of 11 children in the BICI group at two sequential visits revealed a subset of children who show improvement in spatial hearing skills over time. Conclusions: A subset of children who use sequential BICIs can acquire sound localization abilities, even after long intervals between activation of hearing in the first- and second-implanted ears. This suggests that children with activation of the second implant later in life may be capable of developing spatial hearing abilities. The large variability in performance among the children with BICIs suggests that maturation of sound localization abilities in children with BICIs may depend on individual factors such as age at implantation and chronological age. PMID:20592615
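The RMS error metric used in the identification task is simple to state in code; the loudspeaker angles and responses below are invented for illustration.

```python
import numpy as np

def rms_error_deg(perceived, actual):
    """Root-mean-square localization error (degrees) over all trials."""
    d = np.asarray(perceived, float) - np.asarray(actual, float)
    return np.sqrt(np.mean(d ** 2))

# Hypothetical trials from a loudspeaker array spanning +/-70 degrees
actual    = [-70, -50, -30, -10, 10, 30, 50, 70]
perceived = [-50, -60, -10, -20, 30, 20, 70, 50]
print(rms_error_deg(perceived, actual))  # ~17 degrees, within the BICI range
```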
Improved methods for the measurement and analysis of stellar magnetic fields
NASA Technical Reports Server (NTRS)
Saar, Steven H.
1988-01-01
The paper presents several improved methods for the measurement of magnetic fields on cool stars which take into account simple radiative transfer effects and the exact Zeeman patterns. Using these methods, high-resolution, low-noise data can be fitted with theoretical line profiles to determine the mean magnetic field strength in stellar active regions and a model-dependent fraction of the stellar surface (filling factor) covered by these regions. Random errors in the derived field strength and filling factor are parameterized in terms of signal-to-noise ratio, wavelength, spectral resolution, stellar rotation rate, and the magnetic parameters themselves. Weak line blends, if left uncorrected, can have significant systematic effects on the derived magnetic parameters, and thus several methods are developed to compensate partially for them. The magnetic parameters determined by previous methods likely have systematic errors because of such line blends and because of line saturation effects. Other sources of systematic error are explored in detail. These sources of error currently make it difficult to determine the magnetic parameters of individual stars to better than about + or - 20 percent.
NASA Astrophysics Data System (ADS)
Gourdji, S. M.; Yadav, V.; Karion, A.; Mueller, K. L.; Conley, S.; Ryerson, T.; Nehrkorn, T.; Kort, E. A.
2018-04-01
Urban greenhouse gas (GHG) flux estimation with atmospheric measurements and modeling, i.e. the 'top-down' approach, can potentially support GHG emission reduction policies by assessing trends in surface fluxes and detecting anomalies from bottom-up inventories. Aircraft-collected GHG observations also have the potential to help quantify point-source emissions that may not be adequately sampled by fixed surface tower-based atmospheric observing systems. Here, we estimate CH4 emissions from a known point source, the Aliso Canyon natural gas leak in Los Angeles, CA from October 2015 to February 2016, using atmospheric inverse models with airborne CH4 observations from twelve flights ≈4 km downwind of the leak and surface sensitivities from a mesoscale atmospheric transport model. This leak event has been well quantified previously using various methods by the California Air Resources Board, thereby providing high confidence in the mass-balance leak rate estimates of Conley et al (2016), used here for comparison to inversion results. Inversions with an optimal setup are shown to provide estimates of the leak magnitude, on average, within a third of the mass balance values, with remaining errors in estimated leak rates predominantly explained by modeled wind speed errors of up to 10 m s^-1, quantified by comparing airborne meteorological observations with modeled values along the flight track. An inversion setup using scaled observational wind speed errors in the model-data mismatch covariance matrix is shown to significantly reduce the influence of transport model errors on spatial patterns and estimated leak rates from the inversions. In sum, this study takes advantage of a natural tracer release experiment (i.e. the Aliso Canyon natural gas leak) to identify effective approaches for reducing the influence of transport model error on atmospheric inversions of point-source emissions, while suggesting future potential for integrating surface tower and aircraft atmospheric GHG observations in top-down urban emission monitoring systems.
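A minimal sketch of the kind of linear Bayesian inversion described, with the model-data mismatch covariance R inflated using per-flight wind-error proxies; all matrices and numbers are synthetic, not the study's configuration.

```python
import numpy as np

def bayes_inversion(y, H, s_prior, Q, R):
    """Linear Bayesian inversion: posterior mean of fluxes s given
    observations y, footprints H, prior covariance Q, mismatch covariance R."""
    G = Q @ H.T @ np.linalg.inv(H @ Q @ H.T + R)   # Kalman-type gain
    return s_prior + G @ (y - H @ s_prior)

rng = np.random.default_rng(1)
H = np.abs(rng.normal(0.5, 0.2, size=(12, 1)))     # 12 flights, 1 leak rate
s_true = np.array([50.0])                          # "true" leak rate (arb. units)
y = H @ s_true + rng.normal(0, 2.0, size=12)
sigma_wind = rng.uniform(1.0, 5.0, size=12)        # per-flight wind-error proxy
R = np.diag((2.0 * sigma_wind) ** 2)               # scale mismatch by wind error
print(bayes_inversion(y, H, np.array([30.0]), np.array([[400.0]]), R))
```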
Tropospheric Correction for InSAR Using Interpolated ECMWF Data and GPS Zenith Total Delay
NASA Technical Reports Server (NTRS)
Webb, Frank H.; Fishbein, Evan F.; Moore, Angelyn W.; Owen, Susan E.; Fielding, Eric J.; Granger, Stephanie L.; Bjorndahl, Fredrik; Lofgren, Johan
2011-01-01
To mitigate atmospheric errors caused by the troposphere, which is a limiting error source for spaceborne interferometric synthetic aperture radar (InSAR) imaging, a tropospheric correction method has been developed using data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and the Global Positioning System (GPS). The ECMWF data were interpolated using a Stretched Boundary Layer Model (SBLM), and ground-based GPS estimates of the tropospheric delay from the Southern California Integrated GPS Network were interpolated using modified Gaussian and inverse distance weighted interpolations. The resulting Zenith Total Delay (ZTD) correction maps have been evaluated, both separately and using a combination of the two data sets, for three short-interval InSAR pairs from Envisat during 2006 on an area stretching northeast from the Los Angeles basin towards Death Valley. Results show that the root mean square (rms) error in the InSAR images was greatly reduced, corresponding to a reduction in atmospheric noise of up to 32 percent. However, for some of the images, the rms increased and large errors remained after applying the tropospheric correction. The residuals showed a constant gradient over the area, suggesting that a remaining orbit error from Envisat was present. The orbit reprocessing in ROI_pac and the plane fitting both require that the only remaining error in the InSAR image be the orbit error. If this is not fulfilled, the correction can still be made, but it will treat all remaining errors as if they were orbit errors. By correcting for tropospheric noise, the biggest error source is removed, and the orbit error becomes apparent and can be corrected for.
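One of the two interpolators named, inverse distance weighting, is easy to sketch; station coordinates and ZTD values below are invented.

```python
import numpy as np

def idw(xy_obs, ztd_obs, xy_grid, power=2.0):
    """Inverse-distance-weighted interpolation of zenith total delay (ZTD)
    from GPS sites onto grid points: w_i = 1/d_i^p, normalized."""
    out = np.empty(len(xy_grid))
    for k, p in enumerate(xy_grid):
        d = np.linalg.norm(xy_obs - p, axis=1)
        if np.any(d < 1e-9):                 # grid point on top of a station
            out[k] = ztd_obs[np.argmin(d)]
            continue
        w = 1.0 / d ** power
        out[k] = np.sum(w * ztd_obs) / np.sum(w)
    return out

sites = np.array([[0.0, 0.0], [50.0, 10.0], [20.0, 40.0]])   # km, hypothetical
ztd   = np.array([2.35, 2.41, 2.38])                          # metres
grid  = np.array([[10.0, 10.0], [40.0, 20.0]])
print(idw(sites, ztd, grid))
```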
Finger blood content, light transmission, and pulse oximetry errors.
Craft, T M; Lawson, R A; Young, J D
1992-01-01
The changes in light emitting diode current necessary to maintain a constant level of light incident upon a photodetector were measured in 20 volunteers at the two wavelengths employed by pulse oximeters. Three states of finger blood content were assessed; exsanguinated, hyperaemic, and normal. The changes in light emitting diode current with changes in finger blood content were small and are not thought to represent a significant source of error in saturation as measured by pulse oximetry.
Error Sources Affecting the Results of a One-Way Nested Ocean Regional Circulation Model
NASA Astrophysics Data System (ADS)
Pham, S. V.
2016-02-01
The Ocean Regional Circulation Model (ORCM) is an essential tool for resolving highly regional scales by dynamically downscaling the results of a coarsely resolved global model. However, when downscaling from the coarse resolution of a global model or observations to a small scale, errors are generated by the difference in resolution and by the lateral updating frequency. This research evaluated the effect of four main error sources on the results of ocean regional circulation models (ORCMs) during downscaling and nesting of output data from ocean global circulation models (OGCMs). The four representative error sources are the formulation of the lateral boundary conditions (LBCs), the difference in spatial resolution between driving and driven data, the updating frequency of the LBCs, and the domain size. The errors contributed by each source to the ORCM results are investigated separately by applying the Big-Brother Experiment (BBE). With a 3 km grid resolution for the ORCM in the BBE framework, the results clearly show that the ORCM simulations depend significantly on the domain size and especially on the spatial and temporal resolution of the LBCs. The ratio of spatial resolution between the driving data and the driven model can be up to 3, and the LBCs can be updated as infrequently as every 6 hours. The optimal domain size of the ORCM can be about 2 to 10 times smaller than the OGCM domain. Key words: ORCMs, error source, lateral boundary conditions, domain size. Acknowledgement: This research was supported by grants from the Korean Ministry of Oceans and Fisheries entitled "Developing total management system for the Keum river estuary and coast" and "Development of Technology for CO2 Marine Geological Storage". We also thank the Integrated Research Institute of Construction and Environmental Engineering of Seoul National University for administrative support.
Error Analyses of the North Alabama Lightning Mapping Array (LMA)
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solokiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J. M.; Bailey, J. C.; Krider, E. P.; Bateman, M. G.; Boccippio, D. J.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA-MSFC and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results, except that the chi-squared theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
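In the spirit of the first (Monte Carlo) method, here is a toy sketch: perturb true arrival times with 50 ns rms timing noise and re-solve for the VHF source by nonlinear least squares. The station layout is hypothetical and the solver is generic, not the NASA-MSFC or New Mexico Tech algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

C = 2.998e8                                   # speed of light, m/s
rng = np.random.default_rng(7)
stations = np.array([[0, 0, 0.1], [20, 5, 0.2], [5, 25, 0.15],
                     [-15, 10, 0.1], [10, -18, 0.3]]) * 1e3   # m, hypothetical

def arrival_times(src, t0):
    """Time of arrival at each station for a source at src emitting at t0."""
    return t0 + np.linalg.norm(stations - src, axis=1) / C

def retrieve(t_obs, guess):
    """Solve for (x, y, z, t0) by nonlinear least squares on arrival times."""
    res = lambda p: arrival_times(p[:3], p[3]) - t_obs
    return least_squares(res, guess).x

true_src, t0 = np.array([3e3, 8e3, 7e3]), 0.0
errors = []
for _ in range(200):                          # Monte Carlo trials
    t_obs = arrival_times(true_src, t0) + rng.normal(0, 50e-9, len(stations))
    sol = retrieve(t_obs, np.array([0.0, 0.0, 5e3, 0.0]))
    errors.append(np.linalg.norm(sol[:3] - true_src))
print(np.sqrt(np.mean(np.square(errors))))    # rms location error, m
```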
Understanding EFL Students' Errors in Writing
ERIC Educational Resources Information Center
Phuket, Pimpisa Rattanadilok Na; Othman, Normah Binti
2015-01-01
Writing is the most difficult skill in English, so most EFL students tend to make errors in writing. To assist learners in successfully acquiring writing skills, the analysis of errors and the understanding of their sources are necessary. This study attempts to explore the major sources of errors occurring in the writing of EFL students. It…
Olson, Andrew; Halloran, Elizabeth; Romani, Cristina
2015-12-01
We present three jargonaphasic patients who made phonological errors in naming, repetition and reading. We analyse target/response overlap using statistical models to answer three questions: 1) Is there a single phonological source for errors or two sources, one for target-related errors and a separate source for abstruse errors? 2) Can correct responses be predicted by the same distribution used to predict errors or do they show a completion boost (CB)? 3) Is non-lexical and lexical information summed during reading and repetition? The answers were clear. 1) Abstruse errors did not require a separate distribution created by failure to access word forms. Abstruse and target-related errors were the endpoints of a single overlap distribution. 2) Correct responses required a special factor, e.g., a CB or lexical/phonological feedback, to preserve their integrity. 3) Reading and repetition required separate lexical and non-lexical contributions that were combined at output. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin
2017-03-01
We show how to calculate the secure final key rate in the four-intensity decoy-state measurement-device-independent quantum key distribution protocol with both source errors and statistical fluctuations with a certain failure probability. Our results rely only on the ranges of a few parameters in the source state. All imperfections in this protocol have been taken into consideration without assuming any specific error patterns of the source.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Callan, J.R.; Kelly, R.T.; Quinn, M.L.
1995-05-01
Remote Afterloading Brachytherapy (RAB) is a medical process used in the treatment of cancer. RAB uses a computer-controlled device to remotely insert and remove radioactive sources close to a target (or tumor) in the body. Some RAB problems affecting the radiation dose to the patient have been reported and attributed to human error. To determine the root cause of human error in the RAB system, a human factors team visited 23 RAB treatment sites in the US. The team observed RAB treatment planning and delivery, interviewed RAB personnel, and performed walk-throughs, during which staff demonstrated the procedures and practices used in performing RAB tasks. Factors leading to human error in the RAB system were identified. The impact of those factors on the performance of RAB was then evaluated and prioritized in terms of safety significance. Finally, the project identified and evaluated alternative approaches for resolving the safety-significant problems related to human error.
NASA Technical Reports Server (NTRS)
Mahesh, Ashwin; Spinhirne, James D.; Duda, David P.; Eloranta, Edwin W.; Starr, David O'C (Technical Monitor)
2001-01-01
The altimetry bias in GLAS (Geoscience Laser Altimeter System) or other laser altimeters resulting from atmospheric multiple scattering is studied in relationship to current knowledge of cloud properties over the Antarctic Plateau. Estimates of seasonal and interannual changes in the bias are presented. Results show the bias in altitude from multiple scattering in clouds would be a significant error source without correction. The selective use of low optical depth clouds or cloudfree observations, as well as improved analysis of the return pulse such as by the Gaussian method used here, are necessary to minimize the surface altitude errors. The magnitude of the bias is affected by variations in cloud height, cloud effective particle size and optical depth. Interannual variations in these properties as well as in cloud cover fraction could lead to significant year-to-year variations in the altitude bias. Although cloud-free observations reduce biases in surface elevation measurements from space, over Antarctica these may often include near-surface blowing snow, also a source of scattering-induced delay. With careful selection and analysis of data, laser altimetry specifications can be met.
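A toy illustration of why a Gaussian fit to the return pulse is more robust than a simple centroid when a multiple-scattering tail is present; the pulse shape and tail model are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, t0, sigma):
    return a * np.exp(-0.5 * ((t - t0) / sigma) ** 2)

# Synthetic return pulse: Gaussian surface echo plus a delayed multiple-
# scattering tail that drags the centroid late (surface biased low)
t = np.linspace(0, 100, 500)                       # ns
tail = 0.15 * np.exp(-np.clip(t - 40.0, 0, None) / 15.0) * (t > 40.0)
pulse = gaussian(t, 1.0, 40.0, 3.0) + tail

centroid = np.sum(t * pulse) / np.sum(pulse)       # simple centroid estimate
p0 = [pulse.max(), t[np.argmax(pulse)], 2.0]
(a, t0, sig), _ = curve_fit(gaussian, t, pulse, p0=p0)
print(f"centroid {centroid:.1f} ns vs Gaussian fit {t0:.1f} ns (true 40.0 ns)")
```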
Acoustic holography as a metrological tool for characterizing medical ultrasound sources and fields
Sapozhnikov, Oleg A.; Tsysar, Sergey A.; Khokhlova, Vera A.; Kreider, Wayne
2015-01-01
Acoustic holography is a powerful technique for characterizing ultrasound sources and the fields they radiate, with the ability to quantify source vibrations and reduce the number of required measurements. These capabilities are increasingly appealing for meeting measurement standards in medical ultrasound; however, associated uncertainties have not been investigated systematically. Here errors associated with holographic representations of a linear, continuous-wave ultrasound field are studied. To facilitate the analysis, error metrics are defined explicitly, and a detailed description of a holography formulation based on the Rayleigh integral is provided. Errors are evaluated both for simulations of a typical therapeutic ultrasound source and for physical experiments with three different ultrasound sources. Simulated experiments explore sampling errors introduced by the use of a finite number of measurements, geometric uncertainties in the actual positions of acquired measurements, and uncertainties in the properties of the propagation medium. Results demonstrate the theoretical feasibility of keeping errors less than about 1%. Typical errors in physical experiments were somewhat larger, on the order of a few percent; comparison with simulations provides specific guidelines for improving the experimental implementation to reduce these errors. Overall, results suggest that holography can be implemented successfully as a metrological tool with small, quantifiable errors. PMID:26428789
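A minimal sketch of the forward step underlying such holography, a discretized Rayleigh integral from source-plane normal velocity to field pressure; sign and time conventions vary between texts, and the piston example is invented.

```python
import numpy as np

def rayleigh_pressure(v_n, xy_src, dS, field_pts, k, rho=1000.0, c=1500.0):
    """Discretized Rayleigh integral (e^{-i w t} convention):
    p(r) = (i*w*rho/2pi) * sum_j v_n_j * exp(i*k*R_j)/R_j * dS_j.
    Source points lie in the z=0 plane; water density/speed assumed."""
    w = k * c
    src3 = np.column_stack([xy_src, np.zeros(len(xy_src))])
    p = np.empty(len(field_pts), dtype=complex)
    for m, r in enumerate(field_pts):
        R = np.linalg.norm(src3 - r, axis=1)
        p[m] = (1j * w * rho / (2 * np.pi)) * np.sum(v_n * np.exp(1j * k * R) / R * dS)
    return p

# Uniform 10 mm piston sampled on a 0.25 mm grid, 1 MHz in water
g = np.arange(-5e-3, 5e-3, 0.25e-3)
X, Y = np.meshgrid(g, g)
mask = X**2 + Y**2 <= (5e-3) ** 2
xy = np.column_stack([X[mask], Y[mask]])
p = rayleigh_pressure(np.full(len(xy), 0.01), xy, (0.25e-3) ** 2,
                      np.array([[0.0, 0.0, 50e-3]]), k=2 * np.pi * 1e6 / 1500.0)
print(abs(p))   # on-axis pressure magnitude, Pa
```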
SU-F-T-24: Impact of Source Position and Dose Distribution Due to Curvature of HDR Transfer Tubes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, A; Yue, N
2016-06-15
Purpose: Brachytherapy is a highly targeted form of radiotherapy. While this may lead to ideal dose distributions on the treatment planning system, a small error in source location can lead to a change in the dose distribution. The purpose of this study is to quantify the source position error due to curvature of the transfer tubes and the impact this may have on the dose distribution. Methods: Since the source travels along the midline of the tube, an estimate of the positioning error for various angles of curvature was determined using geometric properties of the tube. Based on the range of values, a specific shift was chosen to alter the treatment plans for a number of cervical cancer patients who had undergone HDR brachytherapy boost using tandem and ovoids. The impact on dose to the target and organs at risk was determined and checked against guidelines outlined by the radiation oncologist. Results: The estimate of the positioning error was 2 mm short of the expected position (a curved tube can only cause the source to fall short of where it would reach with a flat tube). The quantitative impact on the dose distribution is still being analyzed. Conclusion: The accepted tolerance for the source position of an HDR brachytherapy unit is plus or minus 1 mm. If there is an additional 2 mm discrepancy due to tube curvature, the source can be 1 mm to 3 mm short of the expected location. While we always attempt to keep the tubes straight, in some cases, such as with tandem and ovoids, the tandem connector does not extend as far out from the patient, so the ovoid tubes always contain some degree of curvature. The dose impact of this may be significant.
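One plausible toy geometry for the shortfall: if the drive cable must travel the arc of a circular bend while the nominal position assumes the straight chord, the tip falls short by arc minus chord. The bend radius and angles are hypothetical, and this is not necessarily the derivation used in the abstract.

```python
import math

def shortfall_mm(bend_radius_m, bend_angle_deg):
    """Source positioning shortfall for a tube bent as a circular arc:
    the cable travels the arc length R*theta, but the straight-line
    distance covered is only the chord 2*R*sin(theta/2)."""
    theta = math.radians(bend_angle_deg)
    arc = bend_radius_m * theta
    chord = 2.0 * bend_radius_m * math.sin(theta / 2.0)
    return 1000.0 * (arc - chord)

for angle in (15, 30, 45, 60):        # hypothetical bends, R = 0.3 m
    print(angle, round(shortfall_mm(0.3, angle), 2))
# A ~30 degree bend at R = 0.3 m gives ~1.8 mm, near the 2 mm estimate above
```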
Structured methods for identifying and correcting potential human errors in aviation operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, W.R.
1997-10-01
Human errors have been identified as the source of approximately 60% of the incidents and accidents that occur in commercial aviation. It can be assumed that a very large number of human errors occur in aviation operations, even though in most cases the redundancies and diversities built into the design of aircraft systems prevent the errors from leading to serious consequences. In addition, when it is acknowledged that many system failures have their roots in human errors that occur in the design phase, it becomes apparent that the identification and elimination of potential human errors could significantly decrease the risks of aviation operations. This will become even more critical during the design of advanced automation-based aircraft systems as well as next-generation systems for air traffic management. Structured methods to identify and correct potential human errors in aviation operations have been developed and are currently undergoing testing at the Idaho National Engineering and Environmental Laboratory (INEEL).
Self-Interaction Error in Density Functional Theory: An Appraisal.
Bao, Junwei Lucas; Gagliardi, Laura; Truhlar, Donald G
2018-05-03
Self-interaction error (SIE) is considered to be one of the major sources of error in most approximate exchange-correlation functionals for Kohn-Sham density-functional theory (KS-DFT), and it is large with all local exchange-correlation functionals and with some hybrid functionals. In this work, we consider systems conventionally considered to be dominated by SIE. For these systems, we demonstrate that by using multiconfiguration pair-density functional theory (MC-PDFT), the error of a translated local density-functional approximation is significantly reduced (by a factor of 3) when using an MCSCF density and on-top density, as compared to using KS-DFT with the parent functional; the error in MC-PDFT with local on-top functionals is even lower than the error in some popular KS-DFT hybrid functionals. Density-functional theory, either in MC-PDFT form with local on-top functionals or in KS-DFT form with some functionals having 50% or more nonlocal exchange, has smaller errors for SIE-prone systems than does CASSCF, which has no SIE.
Common but unappreciated sources of error in one, two, and multiple-color pyrometry
NASA Technical Reports Server (NTRS)
Spjut, R. Erik
1988-01-01
The most common sources of error in optical pyrometry are examined. They can be classified as either noise and uncertainty errors, stray radiation errors, or speed-of-response errors. Through judicious choice of detectors and optical wavelengths the effect of noise errors can be minimized, but one should strive to determine as many of the system properties as possible. Careful consideration of the optical-collection system can minimize stray radiation errors. Careful consideration must also be given to the slowest elements in a pyrometer when measuring rapid phenomena.
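A worked example of two-color (ratio) pyrometry under the Wien approximation, including how an error in the assumed emissivity ratio biases the temperature, one of the uncertainty classes discussed; wavelengths and temperatures are illustrative.

```python
import numpy as np

C2 = 1.4388e-2          # second radiation constant, m*K

def ratio_temperature(L1, L2, lam1, lam2, eps_ratio=1.0):
    """Two-color (ratio) pyrometry under the Wien approximation:
    L(lam, T) ~ eps * lam^-5 * exp(-C2/(lam*T)).  Solving R = L1/L2 for T:
    T = C2*(1/lam2 - 1/lam1) / (ln R - ln(eps1/eps2) - 5*ln(lam2/lam1))."""
    R = L1 / L2
    num = C2 * (1.0 / lam2 - 1.0 / lam1)
    den = np.log(R) - np.log(eps_ratio) - 5.0 * np.log(lam2 / lam1)
    return num / den

lam1, lam2, T_true = 0.65e-6, 0.90e-6, 2000.0
wien = lambda lam, T, eps: eps * lam ** -5 * np.exp(-C2 / (lam * T))
L1, L2 = wien(lam1, T_true, 0.9), wien(lam2, T_true, 0.9)   # gray body
print(ratio_temperature(L1, L2, lam1, lam2))                # recovers 2000 K
# A 5% error in the assumed emissivity ratio biases the result:
print(ratio_temperature(L1, L2, lam1, lam2, eps_ratio=1.05))
```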
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1976-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
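A concrete toy of syndrome-source-coding using the (7,4) Hamming code: each 7-bit source block is treated as an error pattern and only its 3-bit syndrome is kept (rate 3/7); decompression reconstructs the minimum-weight coset leader, which is exact whenever the block contains at most one 1. This illustrates the mechanism only; it does not approach the entropy bound the way the longer codes considered in the paper can.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column j is binary of j+1
H = np.array([[(j >> b) & 1 for j in range(1, 8)] for b in range(3)])

def compress(block7):
    """Syndrome-source-coding: 7 source bits -> 3 syndrome bits."""
    return H @ block7 % 2

def decompress(syndrome3):
    """Reconstruct the coset leader: the all-zero block if the syndrome is
    zero, else the single-1 block whose position matches the syndrome."""
    out = np.zeros(7, dtype=int)
    pos = int(np.dot(syndrome3, 1 << np.arange(3)))   # syndrome as integer
    if pos:
        out[pos - 1] = 1
    return out

rng = np.random.default_rng(3)
source = (rng.random(7 * 1000) < 0.02).astype(int)    # biased memoryless source
blocks = source.reshape(-1, 7)
recon = np.concatenate([decompress(compress(b)) for b in blocks])
print("distortion:", np.mean(recon != source))        # small for small p
```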
NASA Astrophysics Data System (ADS)
Winiarek, Victor; Bocquet, Marc; Saunier, Olivier; Mathieu, Anne
2012-03-01
A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativity of the measurements, those that are instrumental, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. We propose to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We apply the method to the estimation of the Fukushima Daiichi source term using activity concentrations in the air. The results are compared to an L-curve estimation technique and to Desroziers's scheme. The total reconstructed activities significantly depend on the chosen method. Because of the poor observability of the Fukushima Daiichi emissions, these methods provide lower bounds for cesium-137 and iodine-131 reconstructed activities. These lower bound estimates, 1.2 × 10^16 Bq for cesium-137, with an estimated standard deviation range of 15%-20%, and 1.9-3.8 × 10^17 Bq for iodine-131, with an estimated standard deviation range of 5%-10%, are of the same order of magnitude as those provided by the Japanese Nuclear and Industrial Safety Agency and about 5 to 10 times less than the Chernobyl atmospheric releases.
A Comprehensive Radial Velocity Error Budget for Next Generation Doppler Spectrometers
NASA Technical Reports Server (NTRS)
Halverson, Samuel; Ryan, Terrien; Mahadevan, Suvrath; Roy, Arpita; Bender, Chad; Stefansson, Guomundur Kari; Monson, Andrew; Levi, Eric; Hearty, Fred; Blake, Cullen;
2016-01-01
We describe a detailed radial velocity error budget for the NASA-NSF Extreme Precision Doppler Spectrometer instrument concept NEID (NN-explore Exoplanet Investigations with Doppler spectroscopy). Such an instrument performance budget is a necessity for both identifying the variety of noise sources currently limiting Doppler measurements, and estimating the achievable performance of next generation exoplanet hunting Doppler spectrometers. For these instruments, no single source of instrumental error is expected to set the overall measurement floor. Rather, the overall instrumental measurement precision is set by the contribution of many individual error sources. We use a combination of numerical simulations, educated estimates based on published materials, extrapolations of physical models, results from laboratory measurements of spectroscopic subsystems, and informed upper limits for a variety of error sources to identify likely sources of systematic error and construct our global instrument performance error budget. While natively focused on the performance of the NEID instrument, this modular performance budget is immediately adaptable to a number of current and future instruments. Such an approach is an important step in charting a path towards improving Doppler measurement precisions to the levels necessary for discovering Earth-like planets.
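When error sources are independent, budget terms combine in quadrature; here is a sketch with invented entries (these are not NEID's actual budget lines).

```python
import math

# Illustrative (made-up) single-measurement error terms, cm/s
budget = {
    "photon noise":           25.0,
    "wavelength calibration": 10.0,
    "detector effects":        8.0,
    "fiber illumination":      7.0,
    "telluric contamination": 12.0,
    "software/algorithms":     5.0,
}

total = math.sqrt(sum(v ** 2 for v in budget.values()))  # root-sum-square
for name, v in sorted(budget.items(), key=lambda kv: -kv[1]):
    print(f"{name:24s} {v:6.1f} cm/s")
print(f"{'root-sum-square total':24s} {total:6.1f} cm/s")
```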
NASA Technical Reports Server (NTRS)
Berg, Wesley; Avery, Susan K.
1995-01-01
Estimates of monthly rainfall have been computed over the tropical Pacific using passive microwave satellite observations from the special sensor microwave/imager (SSM/I) for the period from July 1987 through December 1990. These monthly estimates are calibrated using data from a network of Pacific atoll rain gauges in order to account for systematic biases and are then compared with several visible and infrared satellite-based rainfall estimation techniques for the purpose of evaluating the performance of the microwave-based estimates. Although several key differences among the various techniques are observed, the general features of the monthly rainfall time series agree very well. Finally, the significant error sources contributing to uncertainties in the monthly estimates are examined and an estimate of the total error is produced. The sampling error characteristics are investigated using data from two SSM/I sensors and a detailed analysis of the characteristics of the diurnal cycle of rainfall over the oceans and its contribution to sampling errors in the monthly SSM/I estimates is made using geosynchronous satellite data. Based on the analysis of the sampling and other error sources the total error was estimated to be of the order of 30 to 50% of the monthly rainfall for estimates averaged over 2.5 deg x 2.5 deg latitude/longitude boxes, with a contribution due to diurnal variability of the order of 10%.
Measuring Data Quality Through a Source Data Verification Audit in a Clinical Research Setting.
Houston, Lauren; Probst, Yasmine; Humphries, Allison
2015-01-01
Health data have long been scrutinised in relation to data quality and integrity problems. Currently, no internationally accepted or "gold standard" method exists for measuring data quality and error rates within datasets. We conducted a source data verification (SDV) audit on a prospective clinical trial dataset. An audit plan was applied to conduct 100% manual verification checks on a 10% random sample of participant files. A quality assurance rule was developed whereby, if >5% of data variables were incorrect, a second 10% random sample would be extracted from the trial dataset. Error was coded: correct, incorrect (valid or invalid), not recorded, or not entered. Audit-1 had a total error of 33% and audit-2 of 36%. The physiological section was the only audit section to have <5% error. Data not recorded on case report forms had the greatest impact on error calculations. A significant association (p=0.00) was found between audit-1 and audit-2 and whether or not data were deemed correct or incorrect. Our study developed a straightforward method to perform an SDV audit. An audit rule was identified and error coding was implemented. The findings demonstrate that monitoring data quality by an SDV audit can identify data quality and integrity issues within clinical research settings, allowing quality improvements to be made. The authors suggest this approach be implemented for future research.
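A sketch of the audit rule as described: verify a 10% random sample and, if the error rate exceeds 5%, draw a second 10% sample; the verification function and file identifiers are hypothetical.

```python
import random

def sdv_audit(file_ids, verify, sample_frac=0.10, threshold=0.05, seed=42):
    """verify(file_id) -> fraction of that file's variables coded incorrect,
    not recorded, or not entered. Returns the per-round error rates."""
    rng = random.Random(seed)
    remaining = list(file_ids)
    rounds = []
    for _ in range(2):                                   # at most two samples
        n = max(1, round(sample_frac * len(file_ids)))
        sample = rng.sample(remaining, n)
        remaining = [f for f in remaining if f not in sample]
        rate = sum(verify(f) for f in sample) / n
        rounds.append(rate)
        if rate <= threshold:                            # quality rule passed
            break
    return rounds

# Hypothetical: every file has ~33% of variables in error (as in audit-1)
print(sdv_audit(range(200), lambda f: 0.33))
```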
Using CO2:CO Correlations to Improve Inverse Analyses of Carbon Fluxes
NASA Technical Reports Server (NTRS)
Palmer, Paul I.; Suntharalingam, Parvadha; Jones, Dylan B. A.; Jacob, Daniel J.; Streets, David G.; Fu, Qingyan; Vay, Stephanie A.; Sachse, Glen W.
2006-01-01
Observed correlations between atmospheric concentrations of CO2 and CO represent potentially powerful information for improving CO2 surface flux estimates through coupled CO2-CO inverse analyses. We explore the value of these correlations in improving estimates of regional CO2 fluxes in east Asia by using aircraft observations of CO2 and CO from the TRACE-P campaign over the NW Pacific in March 2001. Our inverse model uses regional CO2 and CO surface fluxes as the state vector, separating biospheric and combustion contributions to CO2. CO2-CO error correlation coefficients are included in the inversion as off-diagonal entries in the a priori and observation error covariance matrices. We derive error correlations in a priori combustion source estimates of CO2 and CO by propagating error estimates of fuel consumption rates and emission factors. However, we find that these correlations are weak because CO source uncertainties are mostly determined by emission factors. Observed correlations between atmospheric CO2 and CO concentrations imply corresponding error correlations in the chemical transport model used as the forward model for the inversion. These error correlations in excess of 0.7, as derived from the TRACE-P data, enable a coupled CO2-CO inversion to achieve significant improvement over a CO2-only inversion for quantifying regional fluxes of CO2.
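How a correlation coefficient enters a covariance matrix as off-diagonal terms is easy to show; the standard deviations below are invented.

```python
import numpy as np

def correlated_cov(sigma_co2, sigma_co, rho):
    """2x2 error covariance for one region's (CO2, CO) source errors:
    off-diagonal term = rho * sigma_co2 * sigma_co."""
    off = rho * sigma_co2 * sigma_co
    return np.array([[sigma_co2 ** 2, off],
                     [off, sigma_co ** 2]])

# e.g. a strong transport-error correlation (rho > 0.7, as derived from TRACE-P)
S = correlated_cov(sigma_co2=0.3, sigma_co=0.5, rho=0.8)
print(S, np.all(np.linalg.eigvals(S) > 0))   # valid (positive definite)
```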
NASA Astrophysics Data System (ADS)
Pany, A.; Böhm, J.; MacMillan, D.; Schuh, H.; Nilsson, T.; Wresnik, J.
2011-01-01
Within the International VLBI Service for Geodesy and Astrometry (IVS), Monte Carlo simulations have been carried out to design the next generation VLBI system ("VLBI2010"). Simulated VLBI observables were generated taking into account the three most important stochastic error sources in VLBI, i.e. wet troposphere delay, station clock, and measurement error. Based on realistic physical properties of the troposphere and clocks, we ran simulations to investigate the influence of the troposphere on VLBI analyses, and to gain information about the role of clock performance and measurement errors of the receiving system in the process of reaching VLBI2010's goal of mm position accuracy on a global scale. Our simulations confirm that the wet troposphere delay is the most important of these three error sources. We did not observe significant improvement of geodetic parameters if the clocks were simulated with an Allan standard deviation better than 1 × 10^-14 at 50 min, and found the impact of measurement errors to be relatively small compared with the impact of the troposphere. Along with simulations to test different network sizes, scheduling strategies, and antenna slew rates, these studies were used as a basis for the definition and specification of VLBI2010 antennas and recording systems and might also be an example for other space geodetic techniques.
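The clock specification can be checked against simulated phase data with the overlapping Allan deviation; the white-frequency-noise clock below is illustrative, not the VLBI2010 clock model.

```python
import numpy as np

def overlapping_adev(phase, tau0, m):
    """Overlapping Allan deviation at tau = m*tau0 from phase data x (seconds):
    AVAR(tau) = <(x[i+2m] - 2x[i+m] + x[i])^2> / (2 tau^2)."""
    x = np.asarray(phase)
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    return np.sqrt(np.mean(d ** 2) / (2.0 * (m * tau0) ** 2))

# White-FM clock: ADEV falls as tau^(-1/2) (illustrative noise level)
rng = np.random.default_rng(5)
tau0, n = 60.0, 100_000                     # 1-min samples
y = rng.normal(0.0, 3e-14, n)               # fractional frequency deviations
x = np.cumsum(y) * tau0                     # integrate to phase (seconds)
print(overlapping_adev(x, tau0, m=50))      # ADEV at tau = 50 min, ~4e-15
```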
NASA Technical Reports Server (NTRS)
Casper, Paul W.; Bent, Rodney B.
1991-01-01
The algorithm used in previous technology time-of-arrival lightning mapping systems was based on the assumption that the earth is a perfect spheroid. These systems yield highly-accurate lightning locations, which is their major strength. However, extensive analysis of tower strike data has revealed occasionally significant (one to two kilometer) systematic offset errors which are not explained by the usual error sources. It was determined that these systematic errors reduce dramatically (in some cases) when the oblate shape of the earth is taken into account. The oblate spheroid correction algorithm and a case example is presented.
2010 drug packaging review: identifying problems to prevent errors.
2011-06-01
Prescrire's analyses showed that the quality of drug packaging in 2010 still left much to be desired. Potentially dangerous packaging remains a significant problem: unclear labelling is a source of medication errors; dosing devices for some psychotropic drugs create a risk of overdose; child-proof caps are often lacking; and too many patient information leaflets are misleading or difficult to understand. Everything needed for safe drug packaging is available; it is now up to regulatory agencies and drug companies to act responsibly. In the meantime, health professionals can help their patients by learning to identify the pitfalls of drug packaging and providing safe information to help prevent medication errors.
Derivation of error sources for experimentally derived heliostat shapes
NASA Astrophysics Data System (ADS)
Cumpston, Jeff; Coventry, Joe
2017-06-01
Data gathered using photogrammetry that represents the surface and structure of a heliostat mirror panel is investigated in detail. A curve-fitting approach that allows the retrieval of four distinct mirror error components, while prioritizing the best fit possible to paraboloidal terms in the curve fitting equation, is presented. The angular errors associated with each of the four surfaces are calculated, and the relative magnitude for each of them is given. It is found that in this case, the mirror had a significant structural twist, and an estimate of the improvement to the mirror surface quality in the case of no twist was made.
NASA Astrophysics Data System (ADS)
Wang, Ji; Zhang, Ru; Yan, Yuting; Dong, Xiaoqiang; Li, Jun Ming
2017-05-01
Hazardous gas leaks in the atmosphere can cause significant economic losses in addition to environmental hazards, such as fires and explosions. A three-stage hazardous gas leak source localization method was developed that uses movable and stationary gas concentration sensors. The method calculates a preliminary source inversion with a modified genetic algorithm (MGA) and has the potential to crossover with eliminated individuals from the population, following the selection of the best candidate. The method then determines a search zone using Markov Chain Monte Carlo (MCMC) sampling, utilizing a partial evaluation strategy. The leak source is then accurately localized using a modified guaranteed convergence particle swarm optimization algorithm with several bad-performing individuals, following selection of the most successful individual with dynamic updates. The first two stages are based on data collected by motionless sensors, and the last stage is based on data from movable robots with sensors. The measurement error adaptability and the effect of the leak source location were analyzed. The test results showed that this three-stage localization process can localize a leak source within 1.0 m of the source for different leak source locations, with measurement error standard deviation smaller than 2.0.
The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control
ERIC Educational Resources Information Center
Page, A.; Moreno, R.; Candelas, P.; Belmar, F.
2008-01-01
In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.…
Modeling and characterization of multipath in global navigation satellite system ranging signals
NASA Astrophysics Data System (ADS)
Weiss, Jan Peter
The Global Positioning System (GPS) provides position, velocity, and time information to users anywhere near the Earth, in real time and regardless of weather conditions. Since the system became operational, improvements in many areas have reduced the systematic errors affecting GPS measurements, such that multipath, defined as any signal taking a path other than the direct one, has become a significant, if not dominant, error source for many applications. This dissertation utilizes several approaches to characterize and model multipath errors in GPS measurements. Multipath errors in GPS ranging signals are characterized for several receiver systems and environments. Experimental P(Y) code multipath data are analyzed for ground stations with multipath levels ranging from minimal to severe, a C-12 turboprop, an F-18 jet, and an aircraft carrier. Comparisons between receivers utilizing single patch antennas and multi-element arrays are also made. In general, the results show significant reductions in multipath with antenna array processing, although large errors can occur even with this kind of equipment. Analysis of airborne platform multipath shows that the errors tend to be small in magnitude because the size of the aircraft limits the geometric delay of multipath signals, and high in frequency because aircraft dynamics cause rapid variations in geometric delay. A comprehensive multipath model is developed and validated. The model integrates 3D structure models, satellite ephemerides, electromagnetic ray-tracing algorithms, and detailed antenna and receiver models to predict multipath errors. Validation is performed by comparing experimental and simulated multipath via overall error statistics, per-satellite time histories, and frequency content analysis. The validation environments include two urban buildings, an F-18, an aircraft carrier, and a rural area where terrain multipath dominates. The validated models are used to identify multipath sources, characterize signal properties, evaluate additional antenna and receiver tracking configurations, and estimate the reflection coefficients of multipath-producing surfaces. Dynamic models for an F-18 landing on an aircraft carrier correlate aircraft dynamics with multipath frequency content; the model also characterizes the separate contributions of multipath due to the aircraft, ship, and ocean to the overall error statistics. Finally, reflection coefficients for multipath produced by terrain are estimated via a least-squares algorithm.
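The classic single-reflection geometry connects platform size to multipath delay, consistent with the airborne results above; a sketch with hypothetical antenna heights and elevation angles.

```python
import math

def ground_reflection_delay_m(antenna_height_m, elevation_deg):
    """Extra path length of a ground-bounce multipath signal relative to the
    direct path: delta = 2*h*sin(E) for a horizontal reflector below the
    antenna. Divide by c for the delay in seconds."""
    return 2.0 * antenna_height_m * math.sin(math.radians(elevation_deg))

c = 299_792_458.0
for h, e in [(1.8, 30.0), (1.8, 10.0), (20.0, 30.0)]:   # ground rig vs tall mast
    d = ground_reflection_delay_m(h, e)
    print(f"h={h:5.1f} m  E={e:4.1f} deg  extra path={d:6.2f} m  ({d/c*1e9:5.1f} ns)")
```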
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qian, S.; Jark, W.; Takacs, P.Z.
1995-02-01
Metrology requirements for optical components for third generation synchrotron sources are taxing the state-of-the-art in manufacturing technology. We have investigated a number of error sources in a commercial figure measurement instrument, the Long Trace Profiler II (LTP II), and have demonstrated that, with some simple modifications, we can significantly reduce the effect of error sources and improve the accuracy and reliability of the measurement. By keeping the optical head stationary and moving a penta prism along the translation stage, the stability of the optical system is greatly improved, and the remaining error signals can be corrected by a simple reference beam subtraction. We illustrate the performance of the modified system by investigating the distortion produced by gravity on a typical synchrotron mirror and demonstrate the repeatability of the instrument despite relaxed tolerances on the translation stage.
Latin hypercube approach to estimate uncertainty in ground water vulnerability
Gurdak, J.J.; McCray, J.E.; Thyne, G.; Qi, S.L.
2007-01-01
A methodology is proposed to quantify prediction uncertainty associated with ground water vulnerability models that were developed through an approach that coupled multivariate logistic regression with a geographic information system (GIS). This method uses Latin hypercube sampling (LHS) to illustrate the propagation of input error and estimate uncertainty associated with the logistic regression predictions of ground water vulnerability. Central to the proposed method is the assumption that prediction uncertainty in ground water vulnerability models is a function of input error propagation from uncertainty in the estimated logistic regression model coefficients (model error) and the values of explanatory variables represented in the GIS (data error). Input probability distributions that represent both model and data error sources of uncertainty were simultaneously sampled using a Latin hypercube approach with logistic regression calculations of probability of elevated nonpoint source contaminants in ground water. The resulting probability distribution represents the prediction intervals and associated uncertainty of the ground water vulnerability predictions. The method is illustrated through a ground water vulnerability assessment of the High Plains regional aquifer. Results of the LHS simulations reveal significant prediction uncertainties that vary spatially across the regional aquifer. Additionally, the proposed method enables a spatial deconstruction of the prediction uncertainty that can lead to improved prediction of ground water vulnerability. © 2007 National Ground Water Association.
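The core of the approach can be sketched in a few lines, assuming hypothetical logistic-regression coefficients and input uncertainties (the study's actual GIS variables and fitted model are not reproduced); scipy's Latin hypercube sampler supplies the stratified draws.

    # LHS propagation of model error (coefficient uncertainty) and data error
    # (explanatory-variable uncertainty) through a logistic vulnerability model.
    import numpy as np
    from scipy.stats import norm, qmc

    beta_mean = np.array([-2.0, 0.8, 1.1])   # hypothetical intercept + coefficients
    beta_sd   = np.array([0.3, 0.1, 0.2])    # hypothetical coefficient std. errors
    x_mean    = np.array([1.5, 0.7])         # hypothetical explanatory variables
    x_sd      = np.array([0.2, 0.1])         # hypothetical data uncertainty

    u = qmc.LatinHypercube(d=5, seed=1).random(n=1000)   # LHS draws in [0, 1)^5

    beta = norm.ppf(u[:, :3], loc=beta_mean, scale=beta_sd)  # model error draws
    x    = norm.ppf(u[:, 3:], loc=x_mean, scale=x_sd)        # data error draws

    eta = beta[:, 0] + np.sum(beta[:, 1:] * x, axis=1)
    p = 1.0 / (1.0 + np.exp(-eta))     # probability of elevated contamination

    print("median:", np.median(p), "90% interval:", np.percentile(p, [5, 95]))

The spread of p, rather than a single fitted value, is what gets mapped across the aquifer as prediction uncertainty.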
Prediction of discretization error using the error transport equation
NASA Astrophysics Data System (ADS)
Celik, Ismail B.; Parsons, Don Roscoe
2017-06-01
This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
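In generic form (a sketch, not the paper's exact formulation), the ETE for the discretization error e of a transported scalar mirrors the governing convection-diffusion equation, with a source term built from the residual of the reconstructed solution:

\[
\frac{\partial e}{\partial t} + \nabla \cdot (\mathbf{u}\, e) - \nabla \cdot (\Gamma \nabla e) = S_e, \qquad S_e \approx -\mathcal{L}(\tilde{\phi}),
\]

where \(\mathcal{L}\) is the continuous differential operator applied to the smooth (spline-blended) reconstruction \(\tilde{\phi}\) of the numerical solution, so that \(S_e\) vanishes as the reconstruction approaches an exact solution.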
When Does Air Resistance Become Significant in Projectile Motion?
ERIC Educational Resources Information Center
Mohazzabi, Pirooz
2018-01-01
In an article in this journal, it was shown that air resistance could never be a significant source of error in typical free-fall experiments in introductory physics laboratories. Since projectile motion is the two-dimensional version of the free-fall experiment and usually follows the former experiment in such laboratories, it seemed natural to…
Speedup computation of HD-sEMG signals using a motor unit-specific electrical source model.
Carriou, Vincent; Boudaoud, Sofiane; Laforet, Jeremy
2018-01-23
Nowadays, bio-reliable modeling of muscle contraction is becoming more accurate and complex. This increasing complexity induces a significant increase in computation time, which prevents the use of this model in certain applications and studies. Accordingly, the aim of this work is to significantly reduce the computation time of high-density surface electromyogram (HD-sEMG) generation. This will be done through a new model of motor unit (MU)-specific electrical source based on the fibers composing the MU. In order to assess the efficiency of this approach, we computed the normalized root mean square error (NRMSE) between several simulations on single generated MU action potentials (MUAPs) using the usual fiber electrical sources and the MU-specific electrical source. This NRMSE was computed for five different simulation sets wherein hundreds of MUAPs are generated and summed into HD-sEMG signals. The obtained results display less than 2% error on the generated signals compared to the same signals generated with fiber electrical sources. Moreover, the computation time of the HD-sEMG signal generation model is reduced by about 90% compared to the fiber electrical source model. Using this model with MU electrical sources, we can simulate HD-sEMG signals of a physiological muscle (hundreds of MUs) in less than an hour on a classical workstation. Graphical Abstract: Overview of the simulation of HD-sEMG signals using the fiber scale and the MU scale. Upscaling the electrical source to the MU scale reduces the computation time by 90%, inducing only small deviations in the simulated HD-sEMG signals.
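The comparison metric is easy to state in code; the sketch below uses synthetic signals and assumes range normalization for the NRMSE, since the abstract does not specify the convention.

    # NRMSE between a reference MUAP (fiber-scale sources) and its MU-scale
    # approximation; both signals here are synthetic placeholders.
    import numpy as np

    t = np.linspace(0.0, 0.05, 500)                           # 50 ms window
    ref = np.exp(-((t - 0.02) / 0.004)**2) * np.sin(2e3 * t)  # stand-in MUAP
    approx = ref + 0.01 * np.random.default_rng(2).normal(size=t.size)

    rmse = np.sqrt(np.mean((approx - ref)**2))
    nrmse = rmse / (ref.max() - ref.min())     # normalized by signal range
    print(f"NRMSE = {100 * nrmse:.2f}%")       # < 2% in the paper's tests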
NASA Astrophysics Data System (ADS)
Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne
2013-04-01
A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativeness of the measurements, the instrumental errors, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, and especially in a situation of sparse observability, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. In Winiarek et al. (2012), we proposed an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We applied the method to the estimation of the Fukushima Daiichi cesium-137 and iodine-131 source terms using activity concentrations in the air. The results were compared to an L-curve estimation technique and to Desroziers's scheme. In addition to the estimates of released activities, we provided the related uncertainties (12 PBq with a std. of 15-20% for cesium-137 and 190-380 PBq with a std. of 5-10% for iodine-131). We also showed that, because of the low number of available observations (a few hundred), and even though orders of magnitude were consistent, the reconstructed activities depended significantly on the method used to estimate the prior errors. In order to use more data, we propose to extend the methods to several data types, such as activity concentrations in the air and fallout measurements. The idea is to simultaneously estimate the prior errors related to each dataset, in order to fully exploit the information content of each one. Using the activity concentration measurements, but also daily fallout data from prefectures and cumulated deposition data over a region extending approximately 150 km around the nuclear power plant, we can use a few thousand data points in our inverse modeling algorithm to reconstruct the cesium-137 source term. To improve the parameterization of removal processes, rainfall fields have also been corrected using outputs from the mesoscale meteorological model WRF and ground-station rainfall data. As expected, the different methods yield closer results as the number of data increases. Reference: Winiarek, V., M. Bocquet, O. Saunier, A. Mathieu (2012), Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant, J. Geophys. Res., 117, D05122, doi:10.1029/2011JD016932.
Space-Borne Laser Altimeter Geolocation Error Analysis
NASA Astrophysics Data System (ADS)
Wang, Y.; Fang, J.; Ai, Y.
2018-05-01
This paper reviews the development of space-borne laser altimetry technology over the past 40 years. Taking the ICESat satellite as an example, a rigorous space-borne laser altimeter geolocation model is studied, and an error propagation equation is derived. The influence of the main error sources, such as the platform positioning error, attitude measurement error, pointing angle measurement error, and range measurement error, on the geolocation accuracy of the laser spot is analysed by simulated experiments. The reasons why these sources influence the geolocation accuracy differently in different directions are discussed, and, to meet the accuracy requirements for laser control points, a design index for each error source is put forward.
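The abstract does not reproduce the propagation equation itself, but the standard first-order form it presumably takes is: writing the footprint geolocation \(\mathbf{X}\) as a function of the measured parameters, each error source contributes through its partial derivative,

\[
\mathbf{X} = f(\mathbf{r}_{\mathrm{sat}}, \boldsymbol{\alpha}, \theta, \rho), \qquad \sigma_{X}^{2} \approx \sum_{i} \left( \frac{\partial f}{\partial p_{i}} \right)^{2} \sigma_{p_{i}}^{2},
\]

where the \(p_i\) range over platform position \(\mathbf{r}_{\mathrm{sat}}\), attitude \(\boldsymbol{\alpha}\), pointing angle \(\theta\), and range \(\rho\), under the assumption of uncorrelated errors; the differing partial derivatives are what make the same error source matter more in one ground direction than another.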
Location error uncertainties - an advanced use of probabilistic inverse theory
NASA Astrophysics Data System (ADS)
Debski, Wojciech
2016-04-01
The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analyzed in many branches of physics, including seismology and oceanology. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. While estimating the location of earthquake foci is relatively simple, quantitatively estimating the location accuracy is a truly challenging task, even if the probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling, and a priori uncertainties. In this presentation we address this task for the common case when the statistics of observational and/or modelling errors are unknown. This situation requires the introduction of a priori constraints on the likelihood (misfit) function, which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we illustrate an approach based on an analysis of Shannon's entropy calculated for the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries some information on the uncertainties of the solution found.
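The entropy meta-characteristic is straightforward to compute once a gridded a posteriori location PDF is in hand; the sketch below uses a toy Gaussian posterior in place of the mine data and forward model.

    # Shannon entropy of an a posteriori source-location distribution on a grid;
    # a broader (more uncertain) posterior yields a larger entropy.
    import numpy as np

    x = np.linspace(-1.0, 1.0, 201)
    X, Y = np.meshgrid(x, x)

    def posterior_entropy(sigma):
        pdf = np.exp(-(X**2 + Y**2) / (2.0 * sigma**2))   # toy Gaussian posterior
        p = pdf / pdf.sum()                               # normalize to a PMF
        p = p[p > 0]
        return -np.sum(p * np.log(p))                     # entropy in nats

    print(posterior_entropy(0.05), posterior_entropy(0.2))  # sharp < broad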
Comparisons of single event vulnerability of GaAs SRAMS
NASA Astrophysics Data System (ADS)
Weatherford, T. R.; Hauser, J. R.; Diehl, S. E.
1986-12-01
A GaAs MESFET/JFET model incorporated into SPICE has been used to accurately describe C-EJFET, E/D MESFET and D MESFET/resistor GaAs memory technologies. These cells have been evaluated for critical charges due to gate-to-drain and drain-to-source charge collection. Low gate-to-drain critical charges limit conventional GaAs SRAM soft error rates to approximately 1E-6 errors/bit-day. SEU hardening approaches including decoupling resistors, diodes, and FETs have been investigated. Results predict GaAs RAM cell critical charges can be increased to over 0.1 pC. Soft error rates in such hardened memories may approach 1E-7 errors/bit-day without significantly reducing memory speed. Tradeoffs between hardening level, performance and fabrication complexity are discussed.
Determination of Barometric Altimeter Errors for the Orion Exploration Flight Test-1 Entry
NASA Technical Reports Server (NTRS)
Brown, Denise L.; Bunoz, Jean-Philippe; Gay, Robert
2012-01-01
The Exploration Flight Test 1 (EFT-1) mission is the unmanned flight test for the upcoming Multi-Purpose Crew Vehicle (MPCV). During entry, the EFT-1 vehicle will trigger several Landing and Recovery System (LRS) events, such as parachute deployment, based on on-board altitude information. The primary altitude source is the filtered navigation solution updated with GPS measurement data. The vehicle also has three barometric altimeters that will be used to measure atmospheric pressure during entry. In the event that GPS data are not available during entry, the altitude derived from the barometric altimeter pressure will be used to trigger chute deployment for the drogue and main parachutes. It is therefore important to understand the impact of error sources on the pressure measured by the barometric altimeters and on the altitude derived from that pressure. The error sources for the barometric altimeters are not independent, and many error sources bias the measurement in a specific direction, so conventional error budget methods could not be applied. Instead, a high-fidelity Monte Carlo simulation was performed and error bounds were determined from the results of this analysis. Aerodynamic errors were the largest single contributor to the error budget for the barometric altimeters, and these large errors drove a change to the altitude trigger setpoint for FBC jettison.
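The Monte Carlo approach can be sketched as below, assuming the standard-atmosphere pressure-altitude inversion and purely illustrative error magnitudes; the actual EFT-1 aerodynamic and sensor error models are not given in the abstract.

    # Monte Carlo bounds on pressure-derived altitude for a parachute trigger.
    import numpy as np

    rng = np.random.default_rng(3)

    def pressure_to_alt(p, p0=101325.0):
        # Standard-atmosphere (troposphere) inversion of the barometric formula.
        return 44330.0 * (1.0 - (p / p0)**0.1903)

    p_true = 57000.0                        # Pa, roughly 4.6 km altitude
    n = 100_000
    aero  = rng.uniform(-1500.0, 500.0, n)  # asymmetric aero error (Pa), assumed
    bias  = rng.normal(0.0, 200.0, n)       # sensor bias (Pa), assumed
    noise = rng.normal(0.0, 50.0, n)        # sensor noise (Pa), assumed

    alt = pressure_to_alt(p_true + aero + bias + noise)
    lo, hi = np.percentile(alt, [0.135, 99.865])   # ~3-sigma-equivalent bounds
    print(f"altitude bounds: {lo:.0f} m to {hi:.0f} m")

Because the aero term is asymmetric, the altitude bounds are biased to one side, which is why percentile-based bounds from simulation, rather than a symmetric error budget, are the natural product of the analysis.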
WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kry, S; Dromgoole, L; Alvarez, P
Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000-present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions) with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7% although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious, particularly in areas highlighted herein that show a tendency for errors.
Real Time Quality Control Methods for Cued EMI Data Collection
2016-01-12
The magnetic geology at the site presented the most significant challenge to the technology, creating false source locations; of the remaining 78 recollects that were due to legitimate sources (i.e., a metal object, magnetic geology, etc.), many of the reacquires due to common errors such as inaccurate target picking could be replaced by in-field… (White River Technologies, January 2016)
A line-source method for aligning on-board and other pinhole SPECT systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Susu; Bowsher, James; Yin, Fang-Fang
2013-12-15
Purpose: In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system—to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)—is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of the other pinhole SPECT systems. Methods: An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line-sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. Results: In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Conclusions: Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC.
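The angle/offset detection step can be sketched with scikit-image's Radon transform on a synthetic projection of a single line; the full nonlinear least squares fit over all alignment parameters is omitted.

    # Detect the angle and offset of one line-source projection via the Radon
    # transform; (alpha, rho) pairs for several lines would then feed the
    # nonlinear least squares alignment fit.
    import numpy as np
    from skimage.transform import radon

    img = np.zeros((201, 201))
    for c in range(201):                       # synthetic thin straight streak
        r = int(round(0.3 * (c - 100) + 120))  # slope/offset are assumptions
        if 0 <= r < 201:
            img[r, c] = 1.0

    theta = np.linspace(0.0, 180.0, 361)
    sino = radon(img, theta=theta, circle=False)   # sinogram: offsets x angles
    i, j = np.unravel_index(np.argmax(sino), sino.shape)
    rho = i - sino.shape[0] // 2    # offset from image center (pixels)
    alpha = theta[j]                # line angle (degrees)
    print(f"alpha = {alpha:.1f} deg, rho = {rho} px")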
TOWARD ERROR ANALYSIS OF LARGE-SCALE FOREST CARBON BUDGETS
Quantification of forest carbon sources and sinks is an important part of national inventories of net greenhouse gas emissions. Several such forest carbon budgets have been constructed, but little effort has been made to analyse the sources of error and how these errors propagate...
10 CFR 74.45 - Measurements and measurement control.
Code of Federal Regulations, 2011 CFR
2011-01-01
... to calculate bias corrections and measurement limits of error. (3) Ensure that potential sources of... determine significant contributors to the measurement uncertainties associated with inventory differences and shipper-receiver differences, so that if SEID exceeds the limits established in paragraph (c)(4...
Consequences of Secondary Calibrations on Divergence Time Estimates.
Schenk, John J
2016-01-01
Secondary calibrations (calibrations based on the results of previous molecular dating studies) are commonly applied in divergence time analyses in groups that lack fossil data; however, the consequences of applying secondary calibrations in a relaxed-clock approach are not fully understood. I tested whether applying the posterior estimate from a primary study as a prior distribution in a secondary study results in consistent age and uncertainty estimates. I compared age estimates from simulations with 100 randomly replicated secondary trees. On average, the 95% credible intervals of node ages for secondary estimates were significantly younger and narrower than primary estimates. The primary and secondary age estimates were significantly different in 97% of the replicates after Bonferroni corrections. Greater error in magnitude was associated with deeper nodes than with shallower ones, but the opposite was found when standardized by median node age, and a significant positive relationship was found between the number of tips/age of secondary trees and the total amount of error. When two secondary calibrated nodes were analyzed, estimates remained significantly different, and although the minimum and median estimates were associated with less error, maximum age estimates and credible interval widths had greater error. The shape of the prior also influenced error: applying a normal, rather than uniform, prior distribution resulted in greater error. Secondary calibrations, in summary, lead to a false impression of precision, and the distribution of age estimates shifts away from those that would be inferred by the primary analysis. These results suggest that secondary calibrations should not be applied as the only source of calibration in divergence time analyses that test time-dependent hypotheses until the additional error associated with secondary calibrations is more properly modeled to take into account the increased uncertainty in age estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savage, B; Peter, D; Covellone, B
2009-07-02
Efforts to update current wave speed models of the Middle East require a thoroughly tested database of sources and recordings. Recordings of seismic waves traversing the region from Tibet to the Red Sea will be the principal metric in guiding improvements to the current wave speed model. Precise characterizations of the earthquakes, specifically depths and faulting mechanisms, are essential to avoid mapping source errors into the refined wave speed model. Errors associated with the source are manifested in amplitude and phase changes. Source depths and paths near nodal planes are particularly error prone, as small changes may severely affect the resulting wavefield. Once sources are quantified, regions requiring refinement will be highlighted using adjoint tomography methods based on spectral element simulations [Komatitsch and Tromp (1999)]. An initial database of 250 regional Middle Eastern events from 1990-2007 was inverted for depth and focal mechanism using teleseismic arrivals [Kikuchi and Kanamori (1982)] and regional surface and body waves [Zhao and Helmberger (1994)]. From this initial database, we reinterpreted a large, well recorded subset of 201 events through a direct comparison between data and synthetics based upon a centroid moment tensor inversion [Liu et al. (2004)]. Evaluation was done using both a 1D reference model [Dziewonski and Anderson (1981)] at periods greater than 80 seconds and a 3D model [Kustowski et al. (2008)] at periods of 25 seconds and longer. The final source reinterpretations will be within the 3D model, as this is the initial starting point for the adjoint tomography. Transitioning from a 1D to a 3D wave speed model shows dramatic improvements when comparisons are done at shorter periods (25 s). Synthetics from the 1D model were created through mode summations while those from the 3D simulations were created using the spectral element method. To further assess errors in source depth and focal mechanism, comparisons between the three methods were made. These comparisons help to identify problematic stations and sources which may bias the final solution. Estimates of standard errors were generated for each event's source depth and focal mechanism to identify poorly constrained events. A final, well characterized set of sources and stations will then be used to iteratively improve the wave speed model of the Middle East. After a few iterations during the adjoint inversion process, the sources will be reexamined and relocated to further reduce mapping of source errors into structural features. Finally, efforts continue in developing the infrastructure required to 'quickly' generate event kernels at the n-th iteration and invert for a new, (n+1)-th, wave speed model of the Middle East. While development of the infrastructure proceeds, initial tests using a limited number of events show that the 3D model, although vastly improved compared to the 1D model, still requires substantial modifications. Employing our new, full source set and iterating the adjoint inversions at successively shorter periods will lead to significant changes and refined wave speed structures of the Middle East.
NASA Technical Reports Server (NTRS)
Ricks, Douglas W.
1993-01-01
There are a number of sources of scattering in binary optics: etch depth errors, line edge errors, quantization errors, roughness, and the binary approximation to the ideal surface. These sources of scattering can be systematic (deterministic) or random. In this paper, scattering formulas for both systematic and random errors are derived using Fourier optics. These formulas can be used to explain the results of scattering measurements and computer simulations.
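As one standard Fourier-optics result for the random-error category (a textbook relation, not a formula quoted from this paper), surface roughness of RMS height \(\sigma\) scatters a total integrated fraction of the incident power of approximately

\[
\mathrm{TIS} \approx \left( \frac{4\pi \sigma \cos\theta_i}{\lambda} \right)^{2}, \qquad \sigma \ll \lambda,
\]

at wavelength \(\lambda\) and incidence angle \(\theta_i\); systematic errors such as periodic etch-depth deviations instead concentrate the scattered power into discrete diffraction orders.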
Psychrometric Measurement of Leaf Water Potential: Lack of Error Attributable to Leaf Permeability.
Barrs, H D
1965-07-02
A report that low permeability could cause gross errors in psychrometric determinations of water potential in leaves has not been confirmed. No measurable error from this source could be detected for either of two types of thermocouple psychrometer tested on four species, each at four levels of water potential. No source of error other than tissue respiration could be demonstrated.
Armbrecht, Anne-Simone; Wöhrmann, Anne; Gibbons, Henning; Stahl, Jutta
2010-09-01
The present electrophysiological study investigated the temporal development of response conflict and the effects of diverging conflict sources on error(-related) negativity (Ne). Eighteen participants performed a combined stop-signal flanker task comprising two different conflict sources: a left-right response conflict and a go-stop response conflict. It is assumed that the Ne reflects the activity of a conflict monitoring system and thus increases according to (i) the number of conflict sources and (ii) the temporal development of the conflict activity. No increase of the Ne amplitude was found after double errors (comprising two conflict sources) as compared to hand- and stop-errors (comprising one conflict source), whereas a higher Ne amplitude was observed after a delayed stop-signal onset. The results suggest that the Ne is not sensitive to an increase in the number of conflict sources, but to the temporal dynamics of a go-stop response conflict. Copyright (c) 2010 Elsevier B.V. All rights reserved.
Quesada, Jose Antonio; Nolasco, Andreu; Moncho, Joaquín
2013-01-01
Geocoding is the assignment of geographic coordinates to spatial points, which are often postal addresses. Errors made in this process can introduce bias into the estimates of spatiotemporal models in epidemiological studies. No studies were found that measure the error made in applying this process in Spanish cities. The objective is to evaluate the magnitude and direction of the errors from two free sources (Google and Yahoo) with regard to a GPS in two Spanish cities. Thirty addresses were geocoded with those two sources and the GPS in Santa Pola (Alicante) and Alicante city. The distances between the sources and the GPS were calculated in meters (median, 95% CI), overall and according to the status reported by each source. The directionality of the error was evaluated by calculating the location quadrant and applying a chi-square test. The GPS error was evaluated by geocoding 11 addresses twice, at a 4-day interval. The overall median for Google-GPS was 23.2 meters (16.0-32.1) in Santa Pola and 21.4 meters (14.9-31.1) in Alicante. The overall median for Yahoo was 136.0 meters (19.2-318.5) in Santa Pola and 23.8 meters (13.6-29.2) in Alicante. Between 73% and 90% of addresses were geocoded with status "exact or interpolated" (minor error), for which Google and Yahoo had median errors between 19 and 23 meters in the two cities. The GPS had a median error of 13.8 meters (6.7-17.8). No directionality of the error was detected. The Google error is acceptable and stable in both cities, so it is a reliable source for geocoding addresses in Spain in epidemiological studies.
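The distance comparisons reduce to great-circle arithmetic; a sketch under the usual spherical-Earth assumption (the study's exact GIS tooling is not specified):

    # Haversine distance in meters between a geocoded point and the GPS
    # reference for the same address.
    import math

    def haversine_m(lat1, lon1, lat2, lon2, r=6371000.0):
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2)**2
        return 2.0 * r * math.asin(math.sqrt(a))

    # Hypothetical coordinate pair near Alicante for one address:
    print(round(haversine_m(38.3452, -0.4810, 38.3454, -0.4812), 1), "m")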
AGILE confirmation of gamma-ray activity from the IceCube-170922A error region
NASA Astrophysics Data System (ADS)
Lucarelli, F.; Piano, G.; Pittori, C.; Verrecchia, F.; Tavani, M.; Bulgarelli, A.; Munar-Adrover, P.; Minervini, G.; Ursi, A.; Vercellone, S.; Donnarumma, I.; Fioretti, V.; Zoli, A.; Striani, E.; Cardillo, M.; Gianotti, F.; Trifoglio, M.; Giuliani, A.; Mereghetti, S.; Caraveo, P.; Perotti, F.; Chen, A.; Argan, A.; Costa, E.; Del Monte, E.; Evangelista, Y.; Feroci, M.; Lazzarotto, F.; Lapshov, I.; Pacciani, L.; Soffitta, P.; Sabatini, S.; Vittorini, V.; Pucella, G.; Rapisarda, M.; Di Cocco, G.; Fuschino, F.; Galli, M.; Labanti, C.; Marisaldi, M.; Pellizzoni, A.; Pilia, M.; Trois, A.; Barbiellini, G.; Vallazza, E.; Longo, F.; Morselli, A.; Picozza, P.; Prest, M.; Lipari, P.; Zanello, D.; Cattaneo, P. W.; Rappoldi, A.; Colafrancesco, S.; Parmiggiani, N.; Ferrari, A.; Paoletti, F.; Antonelli, A.; Giommi, P.; Salotti, L.; Valentini, G.; D'Amico, F.
2017-09-01
Following the IceCube observation of a high-energy neutrino candidate event, IceCube-170922A, at T0 = 17/09/22 20:54:30.43 UT (https://gcn.gsfc.nasa.gov/gcn3/21916.gcn3), and the detection of increased gamma-ray activity from a previously known Fermi-LAT gamma-ray source (3FGL J0509.4+0541) in the IceCube-170922A error region (ATel #10791), we have analysed the AGILE-GRID data acquired in the days before and after the neutrino event T0, searching for significant gamma-ray excess above 100 MeV from a position compatible with the IceCube and Fermi-LAT error regions.
Sources of errors in the simulation of south Asian summer monsoon in the CMIP5 GCMs
Ashfaq, Moetasim; Rastogi, Deeksha; Mei, Rui; ...
2016-09-19
Accurate simulation of the South Asian summer monsoon (SAM) is still an unresolved challenge. There has not been a benchmark effort to decipher the origin of undesired yet virtually invariable unsuccessfulness of general circulation models (GCMs) over this region. This study analyzes a large ensemble of CMIP5 GCMs to show that most of the simulation errors in the precipitation distribution and their driving mechanisms are systematic and of similar nature across the GCMs, with biases in meridional differential heating playing a critical role in determining the timing of monsoon onset over land, the magnitude of seasonal precipitation distribution and the trajectories of monsoon depressions. Errors in the pre-monsoon heat low over the lower latitudes and atmospheric latent heating over the slopes of Himalayas and Karakoram Range induce significant errors in the atmospheric circulations and meridional differential heating. Lack of timely precipitation further exacerbates such errors by limiting local moisture recycling and latent heating aloft from convection. Most of the summer monsoon errors and their sources are reproducible in the land–atmosphere configuration of a GCM when it is configured at horizontal grid spacing comparable to the CMIP5 GCMs. While an increase in resolution overcomes many modeling challenges, coarse resolution is not necessarily the primary driver in the exhibition of errors over South Asia. Ultimately, these results highlight the importance of previously less well known pre-monsoon mechanisms that critically influence the strength of SAM in the GCMs and highlight the importance of land–atmosphere interactions in the development and maintenance of SAM.
Measuring Diagnoses: ICD Code Accuracy
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-01-01
Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999
A 1400-MHz survey of 1478 Abell clusters of galaxies
NASA Technical Reports Server (NTRS)
Owen, F. N.; White, R. A.; Hilldrup, K. C.; Hanisch, R. J.
1982-01-01
Observations of 1478 Abell clusters of galaxies with the NRAO 91-m telescope at 1400 MHz are reported. The measured beam shape was deconvolved from the measured source Gaussian fits in order to estimate the source size and position angle. All detected sources within 0.5 corrected Abell cluster radii are listed, including the cluster number, richness class, distance class, magnitude of the tenth brightest galaxy, redshift estimate, corrected cluster radius in arcmin, right ascension and error, declination and error, total flux density and error, and angular structure for each source.
Peak fitting and integration uncertainties for the Aerodyne Aerosol Mass Spectrometer
NASA Astrophysics Data System (ADS)
Corbin, J. C.; Othman, A.; Haskins, J. D.; Allan, J. D.; Sierau, B.; Worsnop, D. R.; Lohmann, U.; Mensah, A. A.
2015-04-01
The errors inherent in the fitting and integration of the pseudo-Gaussian ion peaks in Aerodyne High-Resolution Aerosol Mass Spectrometers (HR-AMS's) have not been previously addressed as a source of imprecision for these instruments. This manuscript evaluates the significance of these uncertainties and proposes a method for their estimation in routine data analysis. Peak-fitting uncertainties, the most complex source of integration uncertainties, are found to be dominated by errors in m/z calibration. These calibration errors comprise significant amounts of both imprecision and bias, and vary in magnitude from ion to ion. The magnitude of these m/z calibration errors is estimated for an exemplary data set, and used to construct a Monte Carlo model which reproduced well the observed trends in fits to the real data. The empirically-constrained model is used to show that the imprecision in the fitted height of isolated peaks scales linearly with the peak height (i.e., as n^1), thus contributing a constant-relative-imprecision term to the overall uncertainty. This constant relative imprecision term dominates the Poisson counting imprecision term (which scales as n^0.5) at high signals. The previous HR-AMS uncertainty model therefore underestimates the overall fitting imprecision. The constant relative imprecision in fitted peak height for isolated peaks in the exemplary data set was estimated as ~4% and the overall peak-integration imprecision was approximately 5%. We illustrate the importance of this constant relative imprecision term by performing Positive Matrix Factorization (PMF) on a synthetic HR-AMS data set with and without its inclusion. Finally, the ability of an empirically-constrained Monte Carlo approach to estimate the fitting imprecision for an arbitrary number of known overlapping peaks is demonstrated. Software is available upon request to estimate these error terms in new data sets.
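The Monte Carlo construction can be sketched for a single isolated peak, assuming a plain Gaussian peak shape and a hypothetical m/z-calibration jitter; the HR-AMS pseudo-Gaussian shape and real calibration model are simplified away.

    # Effect of m/z-calibration error on the fitted height of an isolated peak.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(4)
    gauss = lambda x, a, mu, s: a * np.exp(-0.5 * ((x - mu) / s)**2)

    mz = np.linspace(43.95, 44.05, 101)
    a0, mu0, s0 = 1000.0, 44.0, 0.01           # true peak parameters (assumed)

    heights = []
    for _ in range(500):
        shift = rng.normal(0.0, 0.002)         # assumed m/z calibration jitter
        y = rng.poisson(gauss(mz, a0, mu0 + shift, s0))  # ion counting noise
        # Fit with the peak position fixed at the nominal m/z, mimicking
        # routine analysis where positions are constrained by calibration.
        fit = lambda x, a, s: gauss(x, a, mu0, s)
        (a_hat, s_hat), _ = curve_fit(fit, mz, y, p0=(900.0, 0.012))
        heights.append(a_hat)

    heights = np.asarray(heights)
    print(f"relative imprecision of fitted height: {heights.std() / heights.mean():.3f}")

Because the calibration jitter misplaces the fitted peak relative to the data, the resulting height error scales with the peak height itself, reproducing the constant-relative-imprecision behavior described above.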
Accuracy of Press Reports in Astronomy
NASA Astrophysics Data System (ADS)
Schaefer, B. E.; Hurley, K.; Nemiroff, R. J.; Branch, D.; Perlmutter, S.; Schaefer, M. W.; Consolmagno, G. J.; McSween, H.; Strom, R.
1999-12-01
Most Americans learn about modern science from press reports, while such articles have a bad reputation among scientists. We have performed a study of 403 news articles on three topics (gamma-ray astronomy, supernovae, and Mars) to quantitatively answer the questions 'How accurate are press reports of astronomy?' and 'What fraction of the basic science claims in the press are correct?' We have taken all articles on the topics from five news sources (UPI, NYT, S&T, SN, and 5 newspapers) for one decade (1987-1996). All articles were evaluated for a variety of errors, ranging from the fundamental to the trivial. For 'trivial' errors, S&T and SN were virtually perfect while the various newspapers averaged roughly one trivial error every two articles. For meaningful errors, we found that none of our 403 articles significantly misled the reader or misrepresented the science. So a major result of our study is that reporters should be rehabilitated into the good graces of astronomers, since they are actually doing a good job. For our second question, we rated each story with the probability that its basic new science claim is correct. We found that the average probability over all stories is 70%, regardless of source, topic, importance, or quoted pundit. How do we reconcile our findings that the press does not make significant errors yet the basic science presented is 30% wrong? The reason is that the nature of news reporting is to present front-line science, and the nature of front-line science is that reliable conclusions have not yet been reached. So a second major result of our study is to make the distinction between textbook science (with reliability near 100%) and front-line science which you read in the press (with reliability near 70%).
ERIC Educational Resources Information Center
Zhao, Xueyu; Solano-Flores, Guillermo; Qian, Ming
2018-01-01
This article addresses test translation review in international test comparisons. We investigated the applicability of the theory of test translation error--a theory of the multidimensionality and inevitability of test translation error--across source language-target language combinations in the translation of PISA (Programme of International…
Optical linear algebra processors: noise and error-source modeling.
Casasent, D; Ghosh, A
1985-06-01
The modeling of system and component noise and error sources in optical linear algebra processors (OLAP's) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.
ROSAT X-Ray Observation of the Second Error Box for SGR 1900+14
NASA Technical Reports Server (NTRS)
Li, P.; Hurley, K.; Vrba, F.; Kouveliotou, C.; Meegan, C. A.; Fishman, G. J.; Kulkarni, S.; Frail, D.
1997-01-01
The positions of the two error boxes for the soft gamma repeater (SGR) 1900+14 were determined by the "network synthesis" method, which employs observations by the Ulysses gamma-ray burst and CGRO BATSE instruments. The location of the first error box has been observed at optical, infrared, and X-ray wavelengths, resulting in the discovery of a ROSAT X-ray point source and a curious double infrared source. We have recently used the ROSAT HRI to observe the second error box to complete the counterpart search. A total of six X-ray sources were identified within the field of view. None of them falls within the network synthesis error box, and a 3 sigma upper limit to any X-ray counterpart was estimated to be 6.35 x 10(exp -14) ergs/sq cm/s. The closest source is approximately 3 arcmin away, and has an estimated unabsorbed flux of 1.5 x 10(exp -12) ergs/sq cm/s. Unlike the first error box, there is no supernova remnant near the second error box. The closest one, G43.9+1.6, lies approximately 2.6 deg away. For these reasons, we believe that the first error box is more likely to be the correct one.
Dynamically corrected gates for singlet-triplet spin qubits with control-dependent errors
NASA Astrophysics Data System (ADS)
Jacobson, N. Tobias; Witzel, Wayne M.; Nielsen, Erik; Carroll, Malcolm S.
2013-03-01
Magnetic field inhomogeneity due to random polarization of quasi-static local magnetic impurities is a major source of environmentally induced error for singlet-triplet double quantum dot (DQD) spin qubits. Moreover, for singlet-triplet qubits this error may depend on the applied controls. This effect is significant when a static magnetic field gradient is applied to enable full qubit control. Through a configuration interaction analysis, we observe that the dependence of the field inhomogeneity-induced error on the DQD bias voltage can vary systematically as a function of the controls for certain experimentally relevant operating regimes. To account for this effect, we have developed a straightforward prescription for adapting dynamically corrected gate sequences that assume control-independent errors into sequences that compensate for systematic control-dependent errors. We show that accounting for such errors may lead to a substantial increase in gate fidelities. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Proton upsets in LSI memories in space
NASA Technical Reports Server (NTRS)
Mcnulty, P. J.; Wyatt, R. C.; Filz, R. C.; Rothwell, P. L.; Farrell, G. E.
1980-01-01
Two types of large scale integrated dynamic random access memory devices were tested and found to be subject to soft errors when exposed to protons incident at energies between 18 and 130 MeV. These errors are shown to differ significantly from those induced in the same devices by alphas from an Am-241 source. There is considerable variation among devices in their sensitivity to proton-induced soft errors, even among devices of the same type. For protons incident at 130 MeV, the soft error cross sections measured in these experiments varied from 10(exp -8) to 10(exp -6) sq cm/proton. For individual devices, however, the soft error cross section consistently increased with beam energy from 18 to 130 MeV. Analysis indicates that the soft errors induced by energetic protons result from spallation interactions between the incident protons and the nuclei of the atoms comprising the device. Because energetic protons are the most numerous of both the galactic and solar cosmic rays and form the inner radiation belt, proton-induced soft errors have potentially serious implications for many electronic systems flown in space.
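The quoted cross sections follow from simple counting arithmetic (illustrative numbers, not the paper's raw data): the soft-error cross section is the number of observed upsets divided by the proton fluence,

\[
\sigma_{\mathrm{SEU}} = \frac{N_{\mathrm{errors}}}{\Phi}, \qquad \text{e.g.}\ \frac{100\ \text{errors}}{10^{8}\ \mathrm{protons/cm^{2}}} = 10^{-6}\ \mathrm{cm^{2}/proton}.
\]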
Sources of variability and systematic error in mouse timing behavior.
Gallistel, C R; King, Adam; McDonald, Robert
2004-01-01
In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.
Earth Orientation Effects on Mobile VLBI Baselines
NASA Technical Reports Server (NTRS)
Allen, S. L.
1984-01-01
Improvements in data quality for the mobile VLBI systems have placed higher accuracy requirements on Earth orientation calibrations. Errors in these calibrations may give rise to systematic effects in the nonlength components of the baselines. Various sources of Earth orientation data were investigated for calibration of Mobile VLBI baselines. Significant differences in quality between the several available sources of UT1-UTC were found. It was shown that the JPL Kalman filtered space technology data were at least as good as any other and adequate to the needs of current Mobile VLBI systems and observing plans. For polar motion, the values from all services suffice. The effect of Earth orientation errors on the accuracy of differenced baselines was also investigated. It is shown that the effect is negligible for the current mobile systems and observing plan.
The computation of equating errors in international surveys in education.
Monseur, Christian; Berezner, Alla
2007-01-01
Since the IEA's Third International Mathematics and Science Study, one of the major objectives of international surveys in education has been to report trends in achievement. The names of the two current IEA surveys reflect this growing interest: Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS). Similarly, a central concern of the OECD's PISA is trends in outcomes over time. To facilitate trend analyses, these studies link their tests using common item equating in conjunction with item response modelling methods. IEA and PISA policies differ in terms of reporting the error associated with trends. In IEA surveys, the standard errors of the trend estimates do not include the uncertainty associated with the linking step, while PISA does include a linking error component in the standard errors of trend estimates. In other words, PISA implicitly acknowledges that trend estimates partly depend on the selected common items, while the IEA's surveys do not recognise this source of error. Failing to recognise the linking error leads to an underestimation of the standard errors and thus increases the Type I error rate, thereby resulting in reporting of significant changes in achievement when in fact these are not significant. The growing interest of policy makers in trend indicators and the impact of the evaluation of educational reforms appear to be incompatible with such underestimation. However, the procedure implemented by PISA raises a few issues about the underlying assumptions for the computation of the equating error. After a brief introduction, this paper will describe the procedure PISA implemented to compute the linking error. The underlying assumptions of this procedure will then be discussed. Finally, an alternative method based on replication techniques will be presented, evaluated in a simulation study and then applied to the PISA 2000 data.
Grantham, D Wesley; Ashmead, Daniel H; Haynes, David S; Hornsby, Benjamin W Y; Labadie, Robert F; Ricketts, Todd A
2012-01-01
One purpose of this investigation was to evaluate the effect of a unilateral bone-anchored hearing aid (Baha) on horizontal plane localization performance in single-sided deaf adults who had either a conductive or sensorineural hearing loss in their impaired ear. The use of a 33-loudspeaker array allowed for a finer response measure than has previously been used to investigate localization in this population. In addition, a detailed analysis of error patterns allowed an evaluation of the contribution of random error and bias error to the total rms error computed in the various conditions studied. A second purpose was to investigate the effect of stimulus duration and head-turning on localization performance. Two groups of single-sided deaf adults were tested in a localization task in which they had to identify the direction of a spoken phrase on each trial. One group had a sensorineural hearing loss (SNHL group; N = 7), and the other group had a conductive hearing loss (CHL group; N = 5). In addition, a control group of four normal-hearing adults was tested. The spoken phrase was either 1250 msec in duration (a male saying "Where am I coming from now?") or 341 msec in duration (the same male saying "Where?"). For the longer-duration phrase, subjects were tested in conditions in which they either were or were not allowed to move their heads before the termination of the phrase. The source came from one of nine positions in the front horizontal plane (from -79° to +79°). The response range included 33 choices (from -90° to +90°, separated by 5.6°). Subjects were tested in all stimulus conditions, both with and without the Baha device. Overall rms error was computed for each condition. Contributions of random error and bias error to the overall error were also computed. There was considerable intersubject variability in all conditions. However, for the CHL group, the average overall error was significantly smaller when the Baha was on than when it was off. Further analysis of error patterns indicated that this improvement was primarily based on reduced response bias when the device was on; that is, the average response azimuth was nearer to the source azimuth when the device was on than when it was off. The SNHL group, on the other hand, had significantly greater overall error when the Baha was on than when it was off. Collapsed across listening conditions and groups, localization performance was significantly better with the 1250 msec stimulus than with the 341 msec stimulus. However, for the longer-duration stimulus, there was no significant beneficial effect of head-turning. Error scores in all conditions for both groups were considerably larger than those in the normal-hearing control group. On average, single-sided deaf adults with CHL showed improved localization ability when using the Baha, whereas single-sided deaf adults with SNHL showed a decrement in performance when using the device. These results may have implications for clinical counseling for patients with unilateral hearing impairment.
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Bifftu, Berhanu Boru; Dachew, Berihun Assefa; Tiruneh, Bewket Tadesse; Beshah, Debrework Tesgera
2016-01-01
Medication administration is the final phase of the medication process, in which an error directly affects patient health. Given the central role of nurses in medication administration, whether they are the source of an error, a contributor, or an observer, they have the professional, legal and ethical responsibility to recognize and report it. The aim of this study was to assess the prevalence of medication administration error reporting and associated factors among nurses working at the University of Gondar Referral Hospital, Northwest Ethiopia. An institution-based quantitative cross-sectional study was conducted among 282 nurses. Data were collected using the semi-structured, self-administered Medication Administration Errors Reporting (MAERs) questionnaire. Binary logistic regression with 95 % confidence intervals was used to identify factors associated with medication administration error reporting. The estimated medication administration error reporting rate was found to be 29.1 %. The perceived rates of medication administration error reporting ranged from 16.8 to 28.6 % for non-intravenous medications and from 20.6 to 33.4 % for intravenous-related medications. Educational status (AOR = 1.38, 95 % CI: 4.009, 11.128), disagreement over the time-error definition (AOR = 0.44, 95 % CI: 0.468, 0.990), administrative reasons (AOR = 0.35, 95 % CI: 0.168, 0.710) and fear (AOR = 0.39, 95 % CI: 0.257, 0.838) were factors statistically significantly associated with the refusal to report medication administration errors at p-value < 0.05. In this study, less than one third of the study participants reported medication administration errors. Therefore, the results of this study suggest strategies that enhance the culture of error reporting, such as providing a clear definition of reportable errors and strengthening the educational status of nurses within the health care organization.
A Nonlinear Adaptive Filter for Gyro Thermal Bias Error Cancellation
NASA Technical Reports Server (NTRS)
Galante, Joseph M.; Sanner, Robert M.
2012-01-01
Deterministic errors in angular rate gyros, such as thermal biases, can have a significant impact on spacecraft attitude knowledge. In particular, thermal biases are often the dominant error source in MEMS gyros after calibration. Filters, such as multiplicative extended Kalman filters (MEKFs), are commonly used to mitigate the impact of gyro errors and gyro noise on spacecraft closed loop pointing accuracy, but often have difficulty in rapidly changing thermal environments and can be computationally expensive. In this report an existing nonlinear adaptive filter is used as the basis for a new nonlinear adaptive filter designed to estimate and cancel thermal bias effects. A description of the filter is presented along with an implementation suitable for discrete-time applications. A simulation analysis demonstrates the performance of the filter in the presence of noisy measurements and provides a comparison with existing techniques.
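The report's abstract does not give the filter equations, so as a rough illustration of the kind of gradient-type adaptive law such a filter might use, the sketch below estimates a bias modeled as linear in temperature, b(T) = theta0 + theta1*T, assuming a reference rate (for example, one derived from attitude measurements) is available. The linear bias model, the normalized LMS update, and all names here are our assumptions, not the authors' design.

```python
import numpy as np

def adaptive_thermal_bias(omega_meas, omega_ref, temperature, gamma=0.5):
    """Normalized-gradient adaptive estimate of a gyro bias modeled as
    b(T) = theta0 + theta1 * T (a hypothetical linear-in-temperature model)."""
    theta = np.zeros(2)                        # parameter estimates [theta0, theta1]
    bias_history = []
    for w, w_ref, T in zip(omega_meas, omega_ref, temperature):
        phi = np.array([1.0, T])               # regressor vector
        b_hat = theta @ phi                    # current bias estimate
        e = (w - b_hat) - w_ref                # residual rate error after correction
        theta = theta + gamma * e * phi / (1.0 + phi @ phi)  # normalized LMS update
        bias_history.append(b_hat)
    return np.array(bias_history), theta

# Synthetic check: constant true rate with a temperature-dependent bias.
rng = np.random.default_rng(0)
T = 20.0 + 5.0 * np.sin(0.01 * np.arange(2000))   # slowly varying temperature
b_true = 0.02 + 0.001 * T                         # "true" thermal bias (assumed)
w_ref = np.full(T.size, 0.1)                      # attitude-derived rate reference
w_meas = w_ref + b_true + 1e-4 * rng.standard_normal(T.size)
bias_hat, theta = adaptive_thermal_bias(w_meas, w_ref, T)
```

The normalized update keeps the adaptation stable regardless of the regressor magnitude; convergence of theta requires the temperature to actually vary (persistent excitation).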
Measuring diagnoses: ICD code accuracy.
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-10-01
To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.
A Monte-Carlo Bayesian framework for urban rainfall error modelling
NASA Astrophysics Data System (ADS)
Ochoa Rodriguez, Susana; Wang, Li-Pen; Willems, Patrick; Onof, Christian
2016-04-01
Rainfall estimates of the highest possible accuracy and resolution are required for urban hydrological applications, given the small size and fast response which characterise urban catchments. While significant progress has been made in recent years towards meeting rainfall input requirements for urban hydrology (including the increasing use of high spatial resolution radar rainfall estimates in combination with point rain gauge records), rainfall estimates will never be perfect and the true rainfall field is, by definition, unknown [1]. Quantifying the residual errors in rainfall estimates is crucial in order to understand their reliability, as well as the impact that their uncertainty may have on subsequent runoff estimates. The quantification of errors in rainfall estimates has been an active topic of research for decades. However, existing rainfall error models have several shortcomings, including the fact that they are limited to describing errors associated with a single data source (i.e. errors associated with rain gauge measurements or radar QPEs alone) and with a single representative error source (e.g. radar-rain gauge differences, spatial-temporal resolution). Moreover, rainfall error models have been mostly developed for and tested at large scales. Studies at urban scales are mostly limited to analyses of the propagation of errors in rain gauge records only through urban drainage models, and to tests of model sensitivity to uncertainty arising from unmeasured rainfall variability. Only a few radar rainfall error models, originally developed for large scales, have been tested at urban scales [2] and have been shown to fail to capture well the small-scale storm dynamics, including storm peaks, which are of the utmost importance for urban runoff simulations. In this work a Monte-Carlo Bayesian framework for rainfall error modelling at urban scales is introduced, which explicitly accounts for relevant errors (arising from insufficient accuracy and/or resolution) in multiple data sources (in this case radar and rain gauge estimates typically available at present), while at the same time enabling dynamic combination of these data sources (thus not only quantifying uncertainty, but also reducing it). This model generates an ensemble of merged rainfall estimates, which can then be used as input to urban drainage models in order to examine how uncertainties in rainfall estimates propagate to urban runoff estimates. The proposed model is tested using as case study a detailed rainfall and flow dataset, and a carefully verified urban drainage model of a small (~9 km2) pilot catchment in North-East London. The model has been shown to characterise well the residual errors in rainfall data at urban scales (which remain after the merging), leading to improved runoff estimates. In fact, the majority of measured flow peaks are bounded within the uncertainty area produced by the runoff ensembles generated with the ensemble rainfall inputs. REFERENCES: [1] Ciach, G. J. & Krajewski, W. F. (1999). On the estimation of radar rainfall error variance. Advances in Water Resources, 22 (6), 585-595. [2] Rico-Ramirez, M. A., Liguori, S. & Schellart, A. N. A. (2015). Quantifying radar-rainfall uncertainties in urban drainage flow modelling. Journal of Hydrology, 528, 17-28.
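As a minimal sketch of the kind of merge-then-sample step described above (not the authors' actual Monte-Carlo Bayesian model), the following assumes Gaussian, independent errors for co-located radar and gauge estimates with known variances, and draws ensemble members from the resulting posterior:

```python
import numpy as np

def merged_rainfall_ensemble(radar, gauge, var_radar, var_gauge,
                             n_members=50, rng=np.random.default_rng(42)):
    """Toy Gaussian merging of co-located radar and gauge rainfall estimates.
    The posterior mean weights each source by the other's error variance;
    ensemble members sample the residual (posterior) uncertainty."""
    w = var_gauge / (var_radar + var_gauge)        # weight given to the radar field
    mean = w * np.asarray(radar) + (1 - w) * np.asarray(gauge)
    var_post = var_radar * var_gauge / (var_radar + var_gauge)
    noise = rng.standard_normal((n_members,) + np.shape(mean))
    return np.maximum(mean + np.sqrt(var_post) * noise, 0.0)  # rain is non-negative

# Example: radar says 4 mm/h, gauge says 6 mm/h, radar is twice as uncertain.
members = merged_rainfall_ensemble(4.0, 6.0, var_radar=2.0, var_gauge=1.0)
print(members.mean(), members.std())
```

Each ensemble member can then be fed through an urban drainage model, so that the spread of simulated flows reflects the residual rainfall uncertainty.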
A Comparison of Three PML Treatments for CAA (and CFD)
NASA Technical Reports Server (NTRS)
Goodrich, John W.
2008-01-01
In this paper we compare three Perfectly Matched Layer (PML) treatments by means of a series of numerical experiments, using common numerical algorithms, computational grids, and code implementations. These comparisons are with the Linearized Euler Equations, for uniform base flow. We see that there are two very good PML candidates, both of which can control the introduced error. Furthermore, we also show that corners can be handled with essentially no increase in the introduced error, and that with a good PML, the outer boundary is the most significant source of error.
ON NONSTATIONARY STOCHASTIC MODELS FOR EARTHQUAKES.
Safak, Erdal; Boore, David M.
1986-01-01
A seismological stochastic model for earthquake ground-motion description is presented. Seismological models are based on the physical properties of the source and the medium and have significant advantages over the widely used empirical models. The model discussed here provides a convenient form for estimating structural response by using random vibration theory. A commonly used random process for ground acceleration, filtered white noise multiplied by an envelope function, introduces some errors in response calculations for structures whose periods are longer than the faulting duration. An alternative random process, the filtered shot-noise process, eliminates these errors.
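A filtered shot-noise process can be sketched as impulses at Poisson-distributed times with random amplitudes, passed through a band-pass filter; the sampling rate, pulse rate, and filter band below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np
from scipy.signal import butter, lfilter

def filtered_shot_noise(duration=20.0, fs=100.0, rate=5.0,
                        f_lo=0.5, f_hi=10.0, rng=np.random.default_rng(7)):
    """Filtered shot noise: impulses at Poisson-distributed times with
    random amplitudes, passed through a 4th-order band-pass filter."""
    n = int(duration * fs)
    x = np.zeros(n)
    n_pulses = rng.poisson(rate * duration)            # number of shots
    idx = rng.integers(0, n, n_pulses)                 # shot arrival samples
    np.add.at(x, idx, rng.standard_normal(n_pulses))   # random shot amplitudes
    b, a = butter(4, [f_lo / (fs / 2), f_hi / (fs / 2)], btype="band")
    return lfilter(b, a, x)

acc = filtered_shot_noise()   # a synthetic ground-acceleration record
```

Unlike enveloped white noise, the shot-noise construction has no artificial modulation imposed after filtering, which is the property the abstract credits with removing the long-period response errors.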
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1971-01-01
The conclusions of the design research on the Song adaptive delta modulator for source encoding voice signals are presented. The variation of output SNR with input signal power is reported for 8-, 9-, and 10-bit internal arithmetic. Voice intelligibility tapes were used to test the 10-bit system. An analysis of a delta modulator designed to minimize the in-band rms error is also presented. This is accomplished by frequency shaping the error signal in the modulator prior to hard limiting. The result is a significant increase in the output SNR measured after low pass filtering.
Syndrome source coding and its universal generalization
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1975-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome-source-coding is formulated which provides robustly-effective, distortionless, coding of source ensembles.
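To make the idea concrete, here is a toy sketch (our illustration, not the paper's construction) using the (7,4) Hamming code: a sparse 7-bit source block is treated as an error pattern, and its 3-bit syndrome is the compressed data; decompression recovers the minimum-weight block (coset leader) with that syndrome, which is exact whenever source blocks have weight at most 1.

```python
import numpy as np
from itertools import product

# Parity-check matrix of the (7,4) Hamming code: columns are 1..7 in binary.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def compress(block):
    """Treat a 7-bit source block as an 'error pattern'; its 3-bit
    syndrome is the compressed representation."""
    return H @ block % 2

def decompress(syndrome):
    """Recover the minimum-weight block consistent with the syndrome
    (the coset leader), here by brute-force search."""
    best = None
    for bits in product([0, 1], repeat=7):
        e = np.array(bits)
        if np.array_equal(H @ e % 2, syndrome):
            if best is None or e.sum() < best.sum():
                best = e
    return best

# A sparse source block (weight <= 1) is recovered exactly from 3 bits
# instead of 7:
src = np.array([0, 0, 0, 0, 1, 0, 0])
assert np.array_equal(decompress(compress(src)), src)
```

The compression is lossless here because every weight-0 or weight-1 pattern has a distinct syndrome; for less sparse sources a longer code (or the paper's universal generalization) is needed to keep the distortion small.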
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
A data-driven modeling approach to stochastic computation for low-energy biomedical devices.
Lee, Kyong Ho; Jang, Kuk Jin; Shoeb, Ali; Verma, Naveen
2011-01-01
Low-power devices that can detect clinically relevant correlations in physiologically-complex patient signals can enable systems capable of closed-loop response (e.g., controlled actuation of therapeutic stimulators, continuous recording of disease states, etc.). In ultra-low-power platforms, however, hardware error sources are becoming increasingly limiting. In this paper, we present how data-driven methods, which allow us to accurately model physiological signals, also allow us to effectively model and overcome prominent hardware error sources with nearly no additional overhead. Two applications, EEG-based seizure detection and ECG-based arrhythmia-beat classification, are synthesized to a logic-gate implementation, and two prominent error sources are introduced: (1) SRAM bit-cell errors and (2) logic-gate switching errors ('stuck-at' faults). Using patient data from the CHB-MIT and MIT-BIH databases, performance similar to error-free hardware is achieved even for very high fault rates (up to 0.5 for SRAMs and 7 × 10(-2) for logic) that cause computational bit error rates as high as 50%.
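A toy version of the error-injection setup can be sketched as independent bit flips in stored fixed-point words; the fault model, word width, and rates here are assumptions for illustration, not the paper's exact methodology:

```python
import numpy as np

def inject_sram_faults(words, p_fault, n_bits=8, rng=np.random.default_rng(3)):
    """Flip each of the n_bits of every stored word independently with
    probability p_fault (a toy stand-in for SRAM bit-cell errors)."""
    w = np.asarray(words, dtype=np.int64).ravel() & ((1 << n_bits) - 1)
    flips = rng.random((w.size, n_bits)) < p_fault
    masks = (flips * (1 << np.arange(n_bits))).sum(axis=1)  # per-word XOR mask
    return (w ^ masks).reshape(np.shape(words))

data = np.arange(256)                       # e.g., stored feature values
faulty = inject_sram_faults(data, p_fault=0.05)
print("word error rate:", np.mean(faulty != data))
```

Running a data-driven classifier on such corrupted inputs is one simple way to reproduce the style of robustness test the abstract describes.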
Impact of numerical choices on water conservation in the E3SM Atmosphere Model Version 1 (EAM V1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.
The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations for sea level rise projection. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model is negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in the new model results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for this model.
Impact of numerical choices on water conservation in the E3SM Atmosphere Model version 1 (EAMv1)
NASA Astrophysics Data System (ADS)
Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.; Wan, Hui; Leung, Ruby; Ma, Po-Lun; Golaz, Jean-Christophe; Wolfe, Jon; Lin, Wuyin; Singh, Balwinder; Burrows, Susannah; Yoon, Jin-Ho; Wang, Hailong; Qian, Yun; Tang, Qi; Caldwell, Peter; Xie, Shaocheng
2018-06-01
The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model becomes negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors in early V1 versions decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in V1 results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for V1.
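One of the fixers mentioned, for clipping negative water concentrations, can be illustrated with a simple clip-and-rescale step that preserves the area-weighted global integral. This sketch is our generic illustration of a mass-conserving fixer, not the EAM implementation:

```python
import numpy as np

def clip_and_conserve(q, area_weights):
    """Remove negative water concentrations while conserving the global
    (area-weighted) integral: clip negatives to zero, then rescale the
    remaining positive field so total mass is unchanged."""
    total_before = np.sum(q * area_weights)
    q_clipped = np.maximum(q, 0.0)
    total_after = np.sum(q_clipped * area_weights)
    if total_after > 0:
        q_clipped *= total_before / total_after
    return q_clipped

q = np.array([-0.1, 0.5, 0.8])          # field with a spurious negative value
w = np.ones_like(q)                     # uniform area weights for the example
q_fixed = clip_and_conserve(q, w)
assert abs((q_fixed * w).sum() - (q * w).sum()) < 1e-12
```

Proportional rescaling spreads the compensation over all positive cells; production models may instead borrow mass locally, which changes where (but not whether) conservation holds.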
ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.
2010-08-10
A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
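For Poisson counts with a known expected background, the recipe reduces to two steps: fix a detection threshold from the acceptable Type I error rate, then find the smallest intensity whose detection probability reaches the desired power. The sketch below is a minimal illustration under these assumptions, not the paper's full recipe:

```python
from scipy.stats import poisson

def detection_threshold(b, alpha=0.05):
    """Smallest count n* with P(N >= n* | background b) <= alpha (Type I)."""
    return int(poisson.isf(alpha, b)) + 1

def upper_limit(b, alpha=0.05, beta=0.5, ds=0.01):
    """Smallest source intensity s detected with probability >= 1 - beta
    (Type II error beta) at the alpha-level threshold."""
    n_star = detection_threshold(b, alpha)
    s = 0.0
    while poisson.sf(n_star - 1, b + s) < 1 - beta:
        s += ds
    return s

# With an expected background of 3 counts, the intensity a source needs
# to be detected at least half the time (beta = 0.5):
print(upper_limit(b=3.0))
```

Note that the result depends only on the background and the chosen error rates, which is exactly the sense in which the upper limit calibrates the detection procedure rather than estimating any particular source's flux.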
Robust video transmission with distributed source coded auxiliary channel.
Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan
2009-12-01
We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints.
NASA Astrophysics Data System (ADS)
Debski, Wojciech
2015-06-01
The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analysed in many branches of physics, including seismology and oceanology, to name a few. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. Although estimating the location of earthquake foci is relatively simple, a quantitative estimation of the location accuracy is a really challenging task, even if the probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling and a priori uncertainties. In this paper, we address this task when the statistics of observational and/or modelling errors are unknown. This common situation requires the introduction of a priori constraints on the likelihood (misfit) function, which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we propose an approach based on an analysis of Shannon's entropy calculated for the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries some information on the uncertainties of the solution found.
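As an illustration of the entropy meta-characteristic, the sketch below computes Shannon's entropy of a posterior discretized on a grid of candidate locations; a sharper (better-constrained) posterior yields lower entropy. The 1-D grid and Gaussian shapes are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def shannon_entropy(posterior, cell_volume=1.0):
    """Entropy (in nats) of a posterior discretized on a spatial grid.
    Lower entropy indicates a better-constrained location estimate."""
    p = posterior / posterior.sum()          # normalize to a pmf
    p = p[p > 0]
    return -(p * np.log(p)).sum() + np.log(cell_volume)

# Two hypothetical posteriors over a 1-D grid of candidate locations:
x = np.linspace(-5.0, 5.0, 1001)
sharp = np.exp(-0.5 * (x / 0.5) ** 2)        # well-constrained solution
broad = np.exp(-0.5 * (x / 2.0) ** 2)        # poorly constrained solution
print(shannon_entropy(sharp), shannon_entropy(broad))   # sharp < broad
```

The cell_volume term converts the discrete sum into an approximation of the differential entropy, so that entropies computed on different grid resolutions remain comparable.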
The penta-prism LTP: A long-trace-profiler with stationary optical head and moving penta prism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qian, S.; Jark, W.; Takacs, P.Z.
1995-03-01
Metrology requirements for optical components for third-generation synchrotron sources are taxing the state of the art in manufacturing technology. We have investigated a number of error sources in a commercial figure measurement instrument, the Long-Trace-Profiler II, and have demonstrated that, with some simple modifications, we can significantly reduce the effect of error sources and improve the accuracy and reliability of the measurement. By keeping the optical head stationary and moving a penta prism along the translation stage, as in the original pencil-beam interferometer design of von Bieren, the stability of the optical system is greatly improved, and the remaining error signals can be corrected by a simple reference beam subtraction. We illustrate the performance of the modified system by investigating the distortion produced by gravity on a typical synchrotron mirror and demonstrate the repeatability of the instrument despite relaxed tolerances on the translation stage.
NASA Astrophysics Data System (ADS)
Wang, Lian; Zhou, Yuan-yuan; Zhou, Xue-jun; Chen, Xiao
2018-03-01
Based on orbital angular momentum and pulse position modulation, we present a novel passive measurement-device-independent quantum key distribution (MDI-QKD) scheme with a two-mode source. Combining this with the tight bounds on the yield and error rate of single-photon pairs given in our paper, we conduct a performance analysis of the scheme with a heralded single-photon source. The numerical simulations show that the performance of our scheme is significantly superior to traditional MDI-QKD in error rate, key generation rate and secure transmission distance, since the application of orbital angular momentum and pulse position modulation can exclude the basis-dependent flaw and increase the information content of each single photon. Moreover, the performance improves as the frame length rises. Therefore, our scheme, without intensity modulation, avoids source side channels and enhances the key generation rate. It has great utility value in MDI-QKD setups.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morton, April M; Nagle, Nicholas N; Piburn, Jesse O
As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for detailed information regarding residential energy consumption patterns has become increasingly important. Though current modeling efforts mark significant progress in the effort to better understand the spatial distribution of energy consumption, the majority of techniques are highly dependent on region-specific data sources and often require building- or dwelling-level details that are not publicly available for many regions in the United States. Furthermore, many existing methods do not account for errors in input data sources and may not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more general hybrid approach to high-resolution residential electricity consumption modeling by merging a dasymetric model with a complementary machine learning algorithm. The method's flexible data requirements and statistical framework ensure that the model is both applicable to a wide range of regions and considers errors in input data sources.
Chandra Source Catalog: User Interface
NASA Astrophysics Data System (ADS)
Bonaventura, Nina; Evans, Ian N.; Rots, Arnold H.; Tibbetts, Michael S.; van Stone, David W.; Zografou, Panagoula; Primini, Francis A.; Glotfelty, Kenny J.; Anderson, Craig S.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; He, Helen; Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Winkelman, Sherry L.
2009-09-01
The Chandra Source Catalog (CSC) is intended to be the definitive catalog of all X-ray sources detected by Chandra. For each source, the CSC provides positions and multi-band fluxes, as well as derived spatial, spectral, and temporal source properties. Full-field and source region data products are also available, including images, photon event lists, light curves, and spectra. The Chandra X-ray Center CSC website (http://cxc.harvard.edu/csc/) is the place to visit for high-level descriptions of each source property and data product included in the catalog, along with other useful information, such as step-by-step catalog tutorials, answers to FAQs, and a thorough summary of the catalog statistical characterization. Eight categories of detailed catalog documents may be accessed from the navigation bar on most of the 50+ CSC pages; these categories are: About the Catalog, Creating the Catalog, Using the Catalog, Catalog Columns, Column Descriptions, Documents, Conferences, and Useful Links. There are also prominent links to CSCview, the CSC data access GUI, and related help documentation, as well as a tutorial for using the new CSC/Google Earth interface. Catalog source properties are presented in seven scientific categories, within two table views: the Master Source and Source Observations tables. Each X-ray source has one "master source" entry and one or more "source observation" entries, the details of which are documented on the CSC "Catalog Columns" pages. The master source properties represent the best estimates of the properties of a source; these are extensively described on the following pages of the website: Position and Position Errors, Source Flags, Source Extent and Errors, Source Fluxes, Source Significance, Spectral Properties, and Source Variability. The eight tutorials ("threads") available on the website serve as a collective guide for accessing, understanding, and manipulating the source properties and data products provided by the catalog.
NASA Astrophysics Data System (ADS)
Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne
2014-01-01
Inverse modelling techniques can be used to estimate the amount of radionuclides and the temporal profile of the source term released in the atmosphere during the accident of the Fukushima Daiichi nuclear power plant in March 2011. In Winiarek et al. (2012b), the lower bounds of the caesium-137 and iodine-131 source terms were estimated with such techniques, using activity concentration measurements. The importance of an objective assessment of prior errors (the observation errors and the background errors) was emphasised for a reliable inversion. In such critical context where the meteorological conditions can make the source term partly unobservable and where only a few observations are available, such prior estimation techniques are mandatory, the retrieved source term being very sensitive to this estimation. We propose to extend the use of these techniques to the estimation of prior errors when assimilating observations from several data sets. The aim is to compute an estimate of the caesium-137 source term jointly using all available data about this radionuclide, such as activity concentrations in the air, but also daily fallout measurements and total cumulated fallout measurements. It is crucial to properly and simultaneously estimate the background errors and the prior errors relative to each data set. A proper estimation of prior errors is also a necessary condition to reliably estimate the a posteriori uncertainty of the estimated source term. Using such techniques, we retrieve a total released quantity of caesium-137 in the interval 11.6-19.3 PBq with an estimated standard deviation range of 15-20% depending on the method and the data sets. The “blind” time intervals of the source term have also been strongly mitigated compared to the first estimations with only activity concentration data.
Alsmadi, A M; Alatas, A; Zhao, J Y; Hu, M Y; Yan, L; Alp, E E
2014-05-01
Synchrotron radiation from third-generation high-brilliance storage rings is an ideal source for X-ray microbeams. The aim of this paper is to describe a microfocusing scheme that combines both a toroidal mirror and Kirkpatrick-Baez (KB) mirrors for upgrading the existing optical system for inelastic X-ray scattering experiments at sector 3 of the Advanced Photon Source. SHADOW ray-tracing simulations without considering slope errors of both the toroidal mirror and KB mirrors show that this combination can provide a beam size of 4.5 µm (H) × 0.6 µm (V) (FWHM) at the end of the existing D-station (66 m from the source) with a full-beam transmission of up to 59%, and a beam size of 3.7 µm (H) × 0.46 µm (V) (FWHM) at the front-end of the proposed E-station (68 m from the source) with a transmission of up to 52%. A beam size of about 5 µm (H) × 1 µm (V), close to the ideal case, can be obtained by using high-quality mirrors (with slope errors of less than 0.5 µrad r.m.s.). Considering the slope errors of the existing toroidal and KB mirrors (5 and 2.9 µrad r.m.s., respectively), the beam size grows to about 13.5 µm (H) × 6.3 µm (V) at the end of the D-station and to 12.0 µm (H) × 6.0 µm (V) at the front-end of the proposed E-station. The simulations presented here are compared with experimental measurements, which are significantly larger than the theoretical values even when slope errors are included in the simulations. This is because the experimental set-up could not yet be optimized.
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
2002-12-01
applications, vibration sources are numerous, such as: launch loading; man-induced accelerations, as on the Shuttle or space station; solar ... However, the lack of significant tracking errors during times when other actuators were stationary, and the fact that the local maximum tracking...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaeger, Ryan T.; Wollaber, Allan B.; Urbatsch, Todd J.
2016-02-23
Here, the non-linear thermal radiative-transfer equations can be solved in various ways. One popular way is the Fleck and Cummings Implicit Monte Carlo (IMC) method. The IMC method was originally formulated with piecewise-constant material properties. For domains with a coarse spatial grid and large temperature gradients, an error known as numerical teleportation may cause artificially non-causal energy propagation and consequently an inaccurate material temperature. Source tilting is a technique to reduce teleportation error by constructing sub-spatial-cell (or sub-cell) emission profiles from which IMC particles are sampled. Several source tilting schemes exist, but some allow teleportation error to persist. We examine the effect of source tilting in problems with a temperature-dependent opacity. Within each cell, the opacity is evaluated continuously from a temperature profile implied by the source tilt. For IMC, this is a new approach to modeling the opacity. We find that applying both source tilting along with a source tilt-dependent opacity can introduce another dominant error that overly inhibits thermal wavefronts. We show that we can mitigate both teleportation and under-propagation errors if we discretize the temperature equation with a linear discontinuous (LD) trial space. Our method is for opacities ~ 1/T^3, but we formulate and test a slight extension for opacities ~ 1/T^3.5, where T is temperature. We find our method avoids errors that can be incurred by IMC with continuous source tilt constructions and piecewise-constant material temperature updates.
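Sampling an IMC particle's emission position from a linear sub-cell tilt can be done by inverse-CDF sampling. The sketch below (unit cell, profile proportional to given endpoint values) is a generic illustration of source tilting, not the authors' LD scheme:

```python
import numpy as np

def sample_tilted_position(p_left, p_right, size,
                           rng=np.random.default_rng(11)):
    """Sample emission positions on the unit cell [0, 1] from a linear
    'source tilt' profile proportional to p_left at x=0 and p_right at
    x=1, via inverse-CDF sampling (p_left, p_right >= 0, not both zero)."""
    # Normalized linear pdf p(x) = c0 + c1*x with unit integral.
    c0 = 2.0 * p_left / (p_left + p_right)
    c1 = 2.0 * (p_right - p_left) / (p_left + p_right)
    r = rng.random(size)
    if abs(c1) < 1e-12:                  # flat profile: uniform sampling
        return r
    # Solve F(x) = c0*x + c1*x^2/2 = r for x (positive root).
    return (-c0 + np.sqrt(c0 * c0 + 2.0 * c1 * r)) / c1

# Emission skewed toward the hotter (right) side of the cell:
xs = sample_tilted_position(1.0, 3.0, 100000)
print(xs.mean())   # > 0.5, as expected for a right-leaning profile
```

In a tilt-dependent-opacity variant, the same linear profile would also be used to evaluate the temperature (and hence opacity) continuously at each sampled position, rather than using the cell-average value.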
Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.
Schimpf, Paul H
2017-09-15
This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.
Long Term Mean Local Time of the Ascending Node Prediction
NASA Technical Reports Server (NTRS)
McKinley, David P.
2007-01-01
Significant error has been observed in the long term prediction of the Mean Local Time of the Ascending Node on the Aqua spacecraft. This error of approximately 90 seconds over a two year prediction is a complication in planning and timing of maneuvers for all members of the Earth Observing System Afternoon Constellation, which use Aqua's MLTAN as the reference for their inclination maneuvers. It was determined that the source of the prediction error was the lack of a solid Earth tide model in the operational force models. The Love Model of the solid Earth tide potential was used to derive analytic corrections to the inclination and right ascension of the ascending node of Aqua's Sun-synchronous orbit. Additionally, it was determined that the resonance between the Sun and orbit plane of the Sun-synchronous orbit is the primary driver of this error. The analytic corrections have been added to the operational force models for the Aqua spacecraft reducing the two-year 90-second error to less than 7 seconds.
Hanson, Sonya M.; Ekins, Sean; Chodera, John D.
2015-01-01
All experimental assay data contains error, but the magnitude, type, and primary origin of this error is often not obvious. Here, we describe a simple set of assay modeling techniques based on the bootstrap principle that allow sources of error and bias to be simulated and propagated into assay results. We demonstrate how deceptively simple operations—such as the creation of a dilution series with a robotic liquid handler—can significantly amplify imprecision and even contribute substantially to bias. To illustrate these techniques, we review an example of how the choice of dispensing technology can impact assay measurements, and show how large contributions to discrepancies between assays can be easily understood and potentially corrected for. These simple modeling techniques—illustrated with an accompanying IPython notebook—can allow modelers to understand the expected error and bias in experimental datasets, and even help experimentalists design assays to more effectively reach accuracy and imprecision goals. PMID:26678597
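In the spirit of the techniques described, a serial dilution with pipetting imprecision and a systematic under-delivery bias can be simulated with a few lines of Monte Carlo; the CV and bias values below are invented for illustration, not taken from the paper:

```python
import numpy as np

def simulate_dilution_series(c0=100.0, transfer=10.0, total=100.0,
                             n_steps=8, cv=0.02, bias=0.98, n_boot=10000,
                             rng=np.random.default_rng(0)):
    """Monte-Carlo model of a nominal 10x serial dilution: each transfer
    volume has relative imprecision `cv` and a systematic under-delivery
    factor `bias` (both hypothetical values)."""
    conc = np.full(n_boot, c0)
    series = [conc.copy()]
    for _ in range(n_steps - 1):
        v = transfer * bias * (1 + cv * rng.standard_normal(n_boot))
        conc = conc * v / (v + (total - transfer))   # dilute into fresh buffer
        series.append(conc.copy())
    return np.array(series)                          # shape: (n_steps, n_boot)

series = simulate_dilution_series()
# Random imprecision (CV) accumulates down the series, while the
# under-delivery bias compounds multiplicatively at every step.
print(series.std(axis=1) / series.mean(axis=1))
print(series.mean(axis=1))
```

Even this toy model shows the two effects the authors emphasize: step-to-step random errors grow roughly with the square root of the number of transfers, while a small per-step bias produces a large systematic concentration error by the end of the series.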
Study on analysis from sources of error for Airborne LIDAR
NASA Astrophysics Data System (ADS)
Ren, H. C.; Yan, Q.; Liu, Z. J.; Zuo, Z. Q.; Xu, Q. Q.; Li, F. F.; Song, C.
2016-11-01
With the advancement of aerial photogrammetry, airborne LIDAR measurement techniques provide a new technical means of obtaining geo-spatial information of high spatial and temporal resolution, with unique advantages and broad application prospects. Airborne LIDAR is increasingly becoming a new kind of Earth observation technology: mounted on an aviation platform, it emits and receives laser pulses to obtain high-precision, high-density three-dimensional coordinate point cloud data and intensity information. In this paper, we briefly describe airborne laser radar systems and analyze in detail several error sources in airborne LIDAR data, putting forward corresponding methods to avoid or eliminate them. Taking into account practical engineering applications, some recommendations are developed for these designs, which has crucial theoretical and practical significance in the field of airborne LIDAR data processing.
Techniques for avoiding discrimination errors in the dynamic sampling of condensable vapors
NASA Technical Reports Server (NTRS)
Lincoln, K. A.
1983-01-01
In the mass spectrometric sampling of dynamic systems, measurements of the relative concentrations of condensable and noncondensable vapors can be significantly distorted if some subtle, but important, instrumental factors are overlooked. Even with in situ measurements, the condensables are readily lost to the container walls, and the noncondensables can persist within the vacuum chamber and yield a disproportionately high output signal. Where single pulses of vapor are sampled, this source of error is avoided by gating either the mass spectrometer "on" or the data acquisition instrumentation "on" only during the very brief time-window when the initial vapor cloud emanating directly from the vapor source passes through the ionizer. Instrumentation for these techniques is detailed and its effectiveness is demonstrated by comparing gated and nongated spectra obtained from the pulsed-laser vaporization of several materials.
Why do we miss rare targets? Exploring the boundaries of the low prevalence effect
Rich, Anina N.; Kunar, Melina A.; Van Wert, Michael J.; Hidalgo-Sotelo, Barbara; Horowitz, Todd S.; Wolfe, Jeremy M.
2011-01-01
Observers tend to miss a disproportionate number of targets in visual search tasks with rare targets. This ‘prevalence effect’ may have practical significance since many screening tasks (e.g., airport security, medical screening) are low prevalence searches. It may also shed light on the rules used to terminate search when a target is not found. Here, we use perceptually simple stimuli to explore the sources of this effect. Experiment 1 shows a prevalence effect in inefficient spatial configuration search. Experiment 2 demonstrates this effect occurs even in a highly efficient feature search. However, the two prevalence effects differ. In spatial configuration search, misses seem to result from ending the search prematurely, while in feature search, they seem due to response errors. In Experiment 3, a minimum delay before response eliminated the prevalence effect for feature but not spatial configuration search. In Experiment 4, a target was present on each trial in either two (2AFC) or four (4AFC) orientations. With only two response alternatives, low prevalence produced elevated errors. Providing four response alternatives eliminated this effect. Low target prevalence puts searchers under pressure that tends to increase miss errors. We conclude that the specific source of those errors depends on the nature of the search. PMID:19146299
Error analysis in stereo vision for location measurement of 3D point
NASA Astrophysics Data System (ADS)
Li, Yunting; Zhang, Jun; Tian, Jinwen
2015-12-01
Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds. Besides, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We sum up and simplify all the input errors to five parameters by a rotation transformation. Then we use the fast midpoint-method algorithm to deduce the mathematical relationships between the target point and the parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we turn back to the error propagation of the primitive input errors in the stereo system, covering the whole analysis process from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our methods.
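The midpoint method at the core of such an analysis solves a 2 x 2 linear system for the closest points on the two camera rays and returns their midpoint. The Monte-Carlo propagation helper below is our generic stand-in for the paper's analytic propagation; the baseline, noise model, and all names are illustrative assumptions:

```python
import numpy as np

def midpoint_triangulate(c1, d1, c2, d2):
    """Midpoint method: the 3-D point closest to the two rays
    x = c_i + t_i * d_i (the d_i need not be unit length)."""
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    b = np.asarray(c2, float) - np.asarray(c1, float)
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    t1, t2 = np.linalg.solve(A, np.array([b @ d1, b @ d2]))
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))   # midpoint of common perpendicular

def location_covariance(c1, d1, c2, d2, perturb, n=2000, seed=1):
    """Propagate input errors by Monte Carlo: `perturb` returns noisy
    copies of the ray parameters on each draw."""
    rng = np.random.default_rng(seed)
    pts = np.array([midpoint_triangulate(*perturb(c1, d1, c2, d2, rng))
                    for _ in range(n)])
    return pts.mean(axis=0), np.cov(pts.T)

# Example: 10 cm baseline, rays crossing near (0, 0, 1) m, small angular noise.
c1, c2 = np.array([0.05, 0.0, 0.0]), np.array([-0.05, 0.0, 0.0])
p = np.array([0.0, 0.0, 1.0])
def perturb(c1, d1, c2, d2, rng, sigma=1e-3):
    return (c1, d1 + sigma * rng.standard_normal(3),
            c2, d2 + sigma * rng.standard_normal(3))
mean, cov = location_covariance(c1, p - c1, c2, p - c2, perturb)
print(mean, np.sqrt(np.diag(cov)))   # depth (z) uncertainty dominates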
Adaptation to sensory-motor reflex perturbations is blind to the source of errors.
Hudson, Todd E; Landy, Michael S
2012-01-06
In the study of visual-motor control, perhaps the most familiar findings involve adaptation to externally imposed movement errors. Theories of visual-motor adaptation based on optimal information processing suppose that the nervous system identifies the sources of errors to effect the most efficient adaptive response. We report two experiments using a novel perturbation based on stimulating a visually induced reflex in the reaching arm. Unlike adaptation to an external force, our method induces a perturbing reflex within the motor system itself, i.e., perturbing forces are self-generated. This novel method allows a test of the theory that error source information is used to generate an optimal adaptive response. If the self-generated source of the visually induced reflex perturbation is identified, the optimal response will be via reflex gain control. If the source is not identified, a compensatory force should be generated to counteract the reflex. Gain control is the optimal response to reflex perturbation because both energy cost and movement errors are minimized. Energy is conserved because neither reflex-induced nor compensatory forces are generated. Precision is maximized because endpoint variance is proportional to force production. We find evidence against source-identified adaptation in both experiments, suggesting that sensory-motor information processing is not always optimal.
STARS 2.0: 2nd-generation open-source archiving and query software
NASA Astrophysics Data System (ADS)
Winegar, Tom
2008-07-01
The Subaru Telescope is in the process of developing an open-source alternative to the 1st-generation software and databases (STARS 1) used for archiving and query. For STARS 2, we have chosen PHP and Python for scripting and MySQL as the database software. We have collected feedback from staff and observers, and used this feedback to significantly improve the design and functionality of our future archiving and query software. Archiving - We identified two weaknesses in the 1st-generation STARS archiving software: a complex and inflexible table structure, and uncoordinated system administration for our business model of taking pictures from the summit and archiving them in both Hawaii and Japan. We adopted a simplified and normalized table structure with passive keyword collection, and we are designing an archive-to-archive file transfer system that automatically reports real-time status and error conditions and permits error recovery. Query - We identified several weaknesses in the 1st-generation STARS query software: inflexible query tools, poor sharing of calibration data, and no automatic file transfer mechanisms to observers. We are developing improved query tools and sharing of calibration data, and multi-protocol unassisted file transfer mechanisms for observers. In the process, we have redefined a 'query': from an invisible search result that can only be transferred once, in-house, right now, with little status and error reporting and no error recovery - to a stored search result that can be monitored, transferred to different locations with multiple protocols, reporting status and error conditions and permitting recovery from errors.
Understanding error generation in fused deposition modeling
NASA Astrophysics Data System (ADS)
Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David
2015-03-01
Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology.
Hellier, Elizabeth; Tucker, Mike; Kenny, Natalie; Rowntree, Anna; Edworthy, Judy
2010-09-01
This study aimed to examine the utility of using color and shape to differentiate drug strength information on over-the-counter medicine packages. Medication errors are an important threat to patient safety, and confusions between drug strengths are a significant source of medication error. A visual search paradigm required laypeople to search for medicine packages of a particular strength from among distracter packages of different strengths, and measures of reaction time and error were recorded. Using color to differentiate drug strength information conferred an advantage on search times and accuracy. Shape differentiation did not improve search times and had only a weak effect on search accuracy. Using color to differentiate drug strength information improves drug strength identification performance. Color differentiation of drug strength information may be a useful way of reducing medication errors and improving patient safety.
NASA Astrophysics Data System (ADS)
Gebregiorgis, A. S.; Peters-Lidard, C. D.; Tian, Y.; Hossain, F.
2011-12-01
Hydrologic modeling has benefited from operational production of high-resolution satellite rainfall products. Global coverage, near-real-time availability, and fine spatial and temporal sampling resolutions have advanced the application of physically based semi-distributed and distributed hydrologic models for a wide range of environmental decision-making processes. Despite these successes, uncertainties arising from the indirect nature of satellite rainfall estimates and from the hydrologic models themselves remain a challenge to making meaningful and more evocative predictions. This study comprises breaking down the total satellite rainfall error into three independent components (hit bias, missed precipitation, and false alarm), characterizing them as functions of land use and land cover (LULC), and tracing back the sources of simulated soil moisture and runoff error in a physically based distributed hydrologic model. Here we ask: in what way do the three independent components of the total bias (hit bias, missed precipitation, and false alarms) affect the estimation of soil moisture and runoff in physically based hydrologic models? To address this question, we implemented a systematic approach that characterizes and decomposes the total satellite rainfall error as a function of land use and land cover in the Mississippi basin. This helps us understand the major sources of soil moisture and runoff errors in hydrologic model simulations and trace the information back to algorithm development and sensor type, which ultimately helps to improve the algorithms and will improve applications and data assimilation for GPM in the future. For forest, woodland, and human land-use systems, the soil moisture error was mainly dictated by the total bias for the 3B42-RT, CMORPH, and PERSIANN products. On the other hand, the runoff error was dominated by the hit bias rather than the total bias. This difference is due to missed precipitation, a major contributor to the total bias during both the summer and winter seasons. Missed precipitation, most likely light rain or rain over snow cover, has a significant effect on soil moisture but is less capable of producing runoff, which makes runoff dependent on the hit bias only.
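A sketch of the three-way error decomposition on co-gridded satellite and reference rainfall fields (Python/NumPy; the array names are illustrative):

    import numpy as np

    def decompose_error(sat, ref):
        # Hit: both detect rain; missed: reference only; false: satellite only.
        hit = (sat > 0) & (ref > 0)
        missed = (sat == 0) & (ref > 0)
        false = (sat > 0) & (ref == 0)
        hit_bias = np.sum(sat[hit] - ref[hit])
        missed_precip = -np.sum(ref[missed])   # rain the satellite never saw
        false_precip = np.sum(sat[false])      # rain that never fell
        # The three components add up to the total bias by construction.
        return hit_bias, missed_precip, false_precip

Masking the grids by LULC class before summing gives the per-land-cover characterization described above.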
ERIC Educational Resources Information Center
Tajeddin, Zia; Alemi, Minoo; Pashmforoosh, Roya
2017-01-01
Unlike linguistic fossilization, pragmatic fossilization has received scant attention in fossilization research. To bridge this gap, the present study adopted a typical-error method of fossilization research to identify the most frequent errors in pragmatic routines committed by Persian-speaking learners of L2 English and explore the sources of…
Exchange-Correlation Effects for Noncovalent Interactions in Density Functional Theory.
Otero-de-la-Roza, A; DiLabio, Gino A; Johnson, Erin R
2016-07-12
In this article, we develop an understanding of how errors from exchange-correlation functionals affect the modeling of noncovalent interactions in dispersion-corrected density-functional theory. Computed CCSD(T) reference binding energies for a collection of small-molecule clusters are decomposed via a molecular many-body expansion and are used to benchmark density-functional approximations, including the effect of semilocal approximation, exact-exchange admixture, and range separation. Three sources of error are identified. Repulsion error arises from the choice of semilocal functional approximation. This error affects intermolecular repulsions and is present in all n-body exchange-repulsion energies with a sign that alternates with the order n of the interaction. Delocalization error is independent of the choice of semilocal functional but does depend on the exact exchange fraction. Delocalization error misrepresents the induction energies, leading to overbinding in all induction n-body terms, and underestimates the electrostatic contribution to the 2-body energies. Deformation error affects only monomer relaxation (deformation) energies and behaves similarly to bond-dissociation energy errors. Delocalization and deformation errors affect systems with significant intermolecular orbital interactions (e.g., hydrogen- and halogen-bonded systems), whereas repulsion error is ubiquitous. Many-body errors from the underlying exchange-correlation functional greatly exceed in general the magnitude of the many-body dispersion energy term. A functional built to accurately model noncovalent interactions must contain a dispersion correction, semilocal exchange, and correlation components that minimize the repulsion error independently and must also incorporate exact exchange in such a way that delocalization error is absent.
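The molecular many-body expansion used for the decomposition can be sketched recursively: the n-body interaction energy of a fragment subset is its total energy minus its monomer energies minus all lower-order interaction terms. A sketch (Python; E is assumed to map each fragment subset to its computed energy, e.g. from CCSD(T)):

    from itertools import combinations

    def interaction(E, subset):
        # n-body interaction energy of a fragment subset.
        if len(subset) == 1:
            return 0.0
        total = E[frozenset(subset)] - sum(E[frozenset([i])] for i in subset)
        for m in range(2, len(subset)):
            for sub in combinations(subset, m):
                total -= interaction(E, sub)
        return total

    def many_body_terms(E, n):
        # Sums of all 2-body and 3-body interaction energies of an n-mer.
        e2 = sum(interaction(E, p) for p in combinations(range(n), 2))
        e3 = sum(interaction(E, t) for t in combinations(range(n), 3))
        return e2, e3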
Accuracy analysis and design of A3 parallel spindle head
NASA Astrophysics Data System (ADS)
Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan
2016-03-01
As functional components of machine tools, parallel mechanisms are widely used in high-efficiency machining of aviation components, and accuracy is one of their critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but further efforts are required to control the errors and improve the accuracy at the design and manufacturing stage. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis, and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrices, the compensatable and uncompensatable error sources that affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. A sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the end-effector of the mechanism. The results show that orientation error sources have a bigger effect on the end-effector accuracy. Based upon the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. By utilizing a genetic algorithm, the allocation of tolerances to each component is finally determined. From the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.
NASA Astrophysics Data System (ADS)
Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin
2018-04-01
We present an analysis of measurement-device-independent quantum key distribution with correlated source-light-intensity errors. Numerical results show that, when the intensity fluctuations of different sources are correlated, our results can greatly improve the key rate compared with prior results, especially under large intensity fluctuations and channel attenuation.
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2004-12-01
Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradation due to packet loss and bit errors in the payload is quantitatively evaluated and the effects of each are assessed. We propose the use of a concatenated forward error correction (FEC) coding scheme employing Reed-Solomon (RS) codes and rate-compatible punctured convolutional (RCPC) codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC) approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection against the corresponding channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.
The Sensitivity of Adverse Event Cost Estimates to Diagnostic Coding Error
Wardle, Gavin; Wodchis, Walter P; Laporte, Audrey; Anderson, Geoffrey M; Baker, Ross G
2012-01-01
Objective To examine the impact of diagnostic coding error on estimates of hospital costs attributable to adverse events. Data Sources Original and reabstracted medical records of 9,670 complex medical and surgical admissions at 11 hospital corporations in Ontario from 2002 to 2004. Patient specific costs, not including physician payments, were retrieved from the Ontario Case Costing Initiative database. Study Design Adverse events were identified among the original and reabstracted records using ICD10-CA (Canadian adaptation of ICD10) codes flagged as postadmission complications. Propensity score matching and multivariate regression analysis were used to estimate the cost of the adverse events and to determine the sensitivity of cost estimates to diagnostic coding error. Principal Findings Estimates of the cost of the adverse events ranged from $16,008 (metabolic derangement) to $30,176 (upper gastrointestinal bleeding). Coding errors caused the total cost attributable to the adverse events to be underestimated by 16 percent. The impact of coding error on adverse event cost estimates was highly variable at the organizational level. Conclusions Estimates of adverse event costs are highly sensitive to coding error. Adverse event costs may be significantly underestimated if the likelihood of error is ignored. PMID:22091908
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Georgetown Cogeneration Project as a Minor Source
Stationary Source Committee Recommendation on NOx RACT for Utility Boilers
Source Construction Prior to Issuance of PSD Permit
Summit Petroleum Corporation Single Source Determination
Source Determinations for Oil and Gas Industries
PSD Source Classification for Safety Kleen's Lubricating Oil Recovery Facility
Need for Emission Cap on Complex Netting Sources
BP American Production Company's Florida River Compression Facility Single Source Determination
Representative Source Operations Over Last Two Years
Federal Enforceability of State's Existing Minor New Source Review (NSR) Programs
American Soda.. Multi-facility Source Determination
Northeast Hub Partners and United Salts Single Source Determination
4/5/95 Letter on Definition of Major Source
Pollution Control Projects and New Source Review (NSR) Applicability
PSD and NSPS Applicability to a Reactivated Source
Agreement that the PSD Regulations Require a Source to Commence Construction
Applicability of PSD to Debottlenecked Sources
Potential to Emit (PTE) Guidance for Specific Source Categories
Great Salt Lake Minerals Source Determination
Consideration of Fugitive Emissions in Major Source Determinations
ESCO Corp. Source Determination
Cut-Off Date for Determining LAER in Major New Source Permitting
Single Source Determination for Coors/TriGen
Alcoa Massena Modernization Project and Request for a Single Source Determination
PSD Requirements for Reactivated Sources
Single Source Determination for Gallitan Steel Co. and Heckett MultiServ
Guidance on Limiting Potential to Emit in New Source Permitting
NSR and PSD Rules Regarding Fugitive Emissions Applicable to Major Sources
Plantwide Definition Of Major Stationary Sources Of Air Pollution
Single Source Determination for General Dynamics
Resolving Nonattainment NSR Violations by Making Major Sources Minor
New Source Offset Against Vehicle Inspection and Maintenance Program
Clarification of Sources Subject to PSD Review
Should Pro-Tec be Permitted as a Major Source
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
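Output-error estimation simulates the model forward from the inputs alone and fits the parameters to the measured output, rather than regressing on noisy measured outputs as equation-error least squares does. A minimal sketch for a generic first-order top-oil model (Python/SciPy; the model form and parameter names are illustrative, not MIT's exact formulation):

    import numpy as np
    from scipy.optimize import least_squares

    def simulate(params, load, dt, T0):
        # Forward Euler on a generic first-order model:
        # tau * dT/dt = K * load^2 - T   (T = top-oil rise over ambient)
        tau, K = params
        T = np.empty_like(load)
        T[0] = T0
        for k in range(1, load.size):
            T[k] = T[k-1] + dt / tau * (K * load[k-1]**2 - T[k-1])
        return T

    def fit_output_error(load, T_meas, dt, p0=(3600.0, 20.0)):
        # Residuals compare the measurement with the simulated output only.
        resid = lambda p: simulate(p, load, dt, T_meas[0]) - T_meas
        return least_squares(resid, p0).x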
Modeling Security Aspects of Network
NASA Astrophysics Data System (ADS)
Schoch, Elmar
With more and more widespread usage of computer systems and networks, dependability becomes a paramount requirement. Dependability typically denotes tolerance or protection against all kinds of failures, errors and faults. Sources of failures can basically be accidental, e.g., in case of hardware errors or software bugs, or intentional due to some kind of malicious behavior. These intentional, malicious actions are subject of security. A more complete overview on the relations between dependability and security can be found in [31]. In parallel to the increased use of technology, misuse also has grown significantly, requiring measures to deal with it.
Accuracy Study of a Robotic System for MRI-guided Prostate Needle Placement
Seifabadi, Reza; Cho, Nathan BJ.; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fichtinger, Gabor; Iordachita, Iulian
2013-01-01
Background Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified, and minimized to the possible extent. Methods and Materials The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called before-insertion error) and the error associated with needle-tissue interaction (called due-to-insertion error). The before-insertion error was measured directly in a soft phantom, and the different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. Results The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the super-soft phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was approximated to be 2.13 mm, thus making the larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. Conclusions The experimental methodology presented in this paper may help researchers to identify, quantify, and minimize the different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analyzed here, the overall error of the studied system remained within the acceptable range. PMID:22678990
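The due-to-insertion figure quoted above follows directly from the orthogonality assumption, under which the variances of independent error components add. A quick check of the arithmetic:

    import math
    overall = 2.5                       # mm, overall system error
    before = 1.3                        # mm, before-insertion (robot) error
    due_to_insertion = math.sqrt(overall**2 - before**2)
    print(due_to_insertion)             # 2.135..., the ~2.13 mm reported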
Self-calibration method without joint iteration for distributed small satellite SAR systems
NASA Astrophysics Data System (ADS)
Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan
2013-12-01
The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions in the presence of large position errors, since it requires joint iteration between gain-phase error estimation and position error estimation. In addition, it is found that phase errors can be estimated well, regardless of position errors, when the zero Doppler bin is chosen. In this article, we propose a method obtained by modifying the conventional one, based on these two observations. In the modified method, gain errors are first estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, by using the zero Doppler bin data, phase error estimation can be performed well independently of position errors. Finally, position errors are estimated based on a Taylor-series expansion. Meanwhile, the joint iteration between gain-phase error estimation and position error estimation is not required. Therefore, the problem of suboptimal convergence that occurs in the conventional method can be avoided at low computational cost. The modified method has the merits of faster convergence and lower estimation error compared to the conventional one. Theoretical analysis and computer simulation results verify the effectiveness of the modified method.
Response Monitoring, the Error-Related Negativity, and Differences in Social Behavior in Autism
ERIC Educational Resources Information Center
Henderson, Heather; Schwartz, Caley; Mundy, Peter; Burnette, Courtney; Sutton, Steve; Zahka, Nicole; Pradella, Anna
2006-01-01
Children with autism not only display social impairments but also significant individual differences in social development. Understanding the source of these differences, as well as the nature of social impairments, is important for improved diagnosis and treatments for these children. Current theory and research suggests that individual…
Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study
Hosseinyalamdary, Siavash
2018-01-01
Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of unknowns. The efficient integration of multiple sensors requires deep knowledge of their error sources. Some sensors, such as the Inertial Measurement Unit (IMU), have complicated error sources. Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations have remained a challenge. In this paper, we developed a deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. To achieve this, we added a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. The results showed that our deep Kalman filter outperformed the conventional Kalman filter and reached a higher level of accuracy. PMID:29695119
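The added modelling step can be sketched as a third stage in the predict/update cycle that adapts an additive IMU error term from the filter innovations (Python/NumPy; the simple gradient-style learner below is an illustrative stand-in for the paper's learned error model):

    import numpy as np

    class DeepKF:
        # Kalman filter with an extra step that learns an additive
        # IMU error correction from the innovations (illustrative).
        def __init__(self, F, H, Q, R, x0, P0):
            self.F, self.H, self.Q, self.R = F, H, Q, R
            self.x, self.P = x0, P0
            self.bias = np.zeros_like(x0)   # learned IMU error term

        def step(self, z, lr=0.01):
            # 1. predict, with the learned error correction applied
            x_pred = self.F @ self.x - self.bias
            P_pred = self.F @ self.P @ self.F.T + self.Q
            # 2. update from the GNSS observation z
            S = self.H @ P_pred @ self.H.T + self.R
            K = P_pred @ self.H.T @ np.linalg.inv(S)
            innov = z - self.H @ x_pred
            self.x = x_pred + K @ innov
            self.P = (np.eye(self.x.size) - K @ self.H) @ P_pred
            # 3. modelling: adapt the error estimate from the innovation
            self.bias -= lr * (self.H.T @ innov)
            return self.x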
Total absorption cross sections of several gases of aeronomic interest at 584 A.
NASA Technical Reports Server (NTRS)
Starr, W. L.; Loewenstein, M.
1972-01-01
Total photoabsorption cross sections have been measured at 584.3 A for N2, O2, Ar, CO2, CO, NO, N2O, NH3, CH4, H2, and H2S. A monochromator was used to isolate the He I 584 line produced in a helium resonance lamp, and thin aluminum filters were used as absorption cell windows, thereby eliminating possible errors associated with the use of undispersed radiation or windowless cells. Sources of error are examined, and limits of uncertainty are given. Previous relevant cross-sectional measurements and possible error sources are reviewed. Wall adsorption as a source of error in cross-sectional measurements has not previously been considered and is discussed briefly.
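Cross sections of this kind follow from the Beer-Lambert law, sigma = ln(I0/I) / (n L), with n the gas number density and L the absorption path length. A worked sketch with purely illustrative numbers (the paper's actual cell parameters are not reproduced here):

    import math
    k_B = 1.380649e-23            # Boltzmann constant [J/K]
    I0, I = 1.00, 0.62            # detector signal: evacuated vs gas-filled cell
    P, T, L = 2.0, 300.0, 0.05    # pressure [Pa], temperature [K], path [m]
    n = P / (k_B * T)             # gas number density [m^-3]
    sigma = math.log(I0 / I) / (n * L)
    print(sigma * 1e4)            # ~2e-16 cm^2 for these illustrative values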
Added-value joint source modelling of seismic and geodetic data
NASA Astrophysics Data System (ADS)
Sudhaus, Henriette; Heimann, Sebastian; Walter, Thomas R.; Krueger, Frank
2013-04-01
In tectonically active regions, earthquake source studies strongly support the analysis of current faulting processes, as they reveal the location and geometry of active faults, the average slip released, and more. For source modelling of shallow, moderate to large earthquakes, a combination of geodetic (GPS, InSAR) and seismic data is often used. A truly joint use of these data, however, usually takes place only at a higher modelling level, where some of the first-order characteristics (time, centroid location, fault orientation, moment) have already been fixed. These required basis model parameters have to be given, assumed, or inferred in a previous, separate, and highly non-linear modelling step using one of these data sets alone. We present a new earthquake rupture model implementation that realizes a fully combined integration of surface displacement measurements and seismic data in a non-linear optimization of simple but extended planar ruptures. The implementation allows for fast forward calculations of full seismograms and surface deformation and therefore enables us to use Monte Carlo global search algorithms. Furthermore, we benefit from the complementary character of seismic and geodetic data, e.g., the high definition of the source location from geodetic data and the sensitivity of the seismic data to moment release at larger depth. These increased constraints from the combined dataset make the optimizations efficient, even for larger model parameter spaces and with very limited a priori assumptions on the source. A vital part of our approach is rigorous data weighting based on empirically estimated data errors. We construct full data error variance-covariance matrices for the geodetic data to account for correlated data noise, and we weight the seismic data based on their signal-to-noise ratio. The estimation of the data errors and the fast forward modelling open the door for Bayesian inference of the source model parameters. The source model product then features parameter uncertainty estimates and reveals parameter trade-offs that arise from imperfect data coverage and data errors. We applied our new source modelling approach to the 2010 Haiti earthquake, for which a number of apparently different seismic, geodetic, and joint source models have already been reported - mostly without any model parameter uncertainty estimates. We show here that the variability among these source models seems to arise from inherent model parameter trade-offs and mostly has little statistical significance; e.g., even using a large dataset comprising seismic and geodetic data, the confidence interval of the fault dip remains as wide as about 20 degrees.
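The rigorous data weighting amounts to evaluating each dataset's misfit against its own error model and summing the chi-square terms; under Gaussian errors, exp(-chi2/2) is then the likelihood a Monte Carlo search or Bayesian sampler works with. A minimal sketch:

    import numpy as np

    def joint_misfit(r_geo, C_geo, r_seis, sigma_seis):
        # Geodetic residuals weighted by the full variance-covariance
        # matrix (correlated noise); seismic residuals weighted by
        # per-trace noise levels from the signal-to-noise ratio.
        chi2_geo = r_geo @ np.linalg.solve(C_geo, r_geo)
        chi2_seis = np.sum((r_seis / sigma_seis) ** 2)
        return chi2_geo + chi2_seis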
Sources of Error in Substance Use Prevalence Surveys
Johnson, Timothy P.
2014-01-01
Population-based estimates of substance use patterns have been regularly reported now for several decades. Concerns with the quality of the survey methodologies employed to produce those estimates date back almost as far. Those concerns have led to a considerable body of research specifically focused on understanding the nature and consequences of survey-based errors in substance use epidemiology. This paper reviews and summarizes that empirical research by organizing it within a total survey error model framework that considers multiple types of representation and measurement errors. Gaps in our knowledge of error sources in substance use surveys and areas needing future research are also identified. PMID:27437511
Characterizing the SWOT discharge error budget on the Sacramento River, CA
NASA Astrophysics Data System (ADS)
Yoon, Y.; Durand, M. T.; Minear, J. T.; Smith, L.; Merry, C. J.
2013-12-01
The Surface Water and Ocean Topography (SWOT) mission is an upcoming satellite mission (launch planned for 2020) that will provide surface-water elevation and surface-water extent globally. One goal of SWOT is the estimation of river discharge directly from SWOT measurements. SWOT discharge uncertainty is due to two sources. First, SWOT cannot directly measure the channel bathymetry and roughness coefficient data necessary for discharge calculations; these parameters must be estimated from the measurements or from a priori information. Second, SWOT measurement errors directly impact the discharge estimate accuracy. This study focuses on characterizing parameter and measurement uncertainties for SWOT river discharge estimation. A Bayesian Markov chain Monte Carlo scheme is used to calculate parameter estimates, given the measurements of river height, slope, and width, and mass and momentum constraints. The algorithm is evaluated using simulated observations from both SWOT and AirSWOT (the airborne version of SWOT) over seven reaches (about 40 km) of the Sacramento River. The SWOT and AirSWOT observations are simulated by corrupting the 'true' HEC-RAS hydraulic modeling results with instrument error. This experiment answers how unknown bathymetry and roughness coefficients affect the accuracy of the river discharge algorithm. In this experiment, the discharge error budget is almost completely dominated by unknown bathymetry and roughness; 81% of the error variance is explained by uncertainties in bathymetry and roughness. Second, we show how errors in the water-surface, slope, and width observations influence the accuracy of the discharge estimates. Indeed, there is a significant sensitivity to water-surface, slope, and width errors, due to the sensitivity of bathymetry and roughness to measurement errors. Increasing the water-surface error above 10 cm leads to a correspondingly sharp increase of the errors in bathymetry and roughness. Increasing the slope error above 1.5 cm/km leads to significant degradation due to direct error in the discharge estimates. As the width error increases past 20%, the discharge error budget is dominated by the width error. The above two experiments are performed using AirSWOT scenarios. In addition, we explore the sensitivity of the algorithm to the SWOT scenarios.
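At the core of such algorithms is a flow law, commonly Manning's equation, relating the observables (height, width, slope) and the unobserved bathymetry and roughness to discharge. A sketch showing how parameter uncertainty maps into discharge uncertainty (Python/NumPy; variable names and numbers are illustrative, and a wide-channel approximation is assumed):

    import numpy as np

    def manning_q(A0, n, dA, W, S):
        # Wide-channel Manning: Q = (1/n) A^(5/3) W^(-2/3) S^(1/2),
        # with A = A0 (unobserved base area) + dA (observed area change).
        A = A0 + dA
        return (1.0 / n) * A**(5.0 / 3.0) * W**(-2.0 / 3.0) * np.sqrt(S)

    # Monte Carlo: propagate bathymetry/roughness uncertainty into Q
    rng = np.random.default_rng(0)
    A0 = rng.normal(300.0, 60.0, 10000)    # base cross-section area [m^2]
    n = rng.normal(0.03, 0.006, 10000)     # Manning roughness coefficient
    Q = manning_q(A0, n, dA=50.0, W=120.0, S=1e-4)
    print(Q.mean(), Q.std())               # spread ~ the discharge error budget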
NASA Astrophysics Data System (ADS)
Wang, C.; Platnick, S. E.; Meyer, K.; Zhang, Z.
2014-12-01
We developed an optimal estimation (OE)-based method using infrared (IR) observations to retrieve ice cloud optical thickness (COT), cloud effective radius (CER), and cloud top height (CTH) simultaneously. The OE-based retrieval is coupled with a fast IR radiative transfer model (RTM) that simulates observations of different sensors, and corresponding Jacobians in cloudy atmospheres. Ice cloud optical properties are calculated using the MODIS Collection 6 (C6) ice crystal habit (severely roughened hexagonal column aggregates). The OE-based method can be applied to various IR space-borne and airborne sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and the enhanced MODIS Airborne Simulator (eMAS), by optimally selecting IR bands with high information content. Four major error sources (i.e., the measurement error, fast RTM error, model input error, and pre-assumed ice crystal habit error) are taken into account in our OE retrieval method. We show that measurement error and fast RTM error have little impact on cloud retrievals, whereas errors from the model input and pre-assumed ice crystal habit significantly increase retrieval uncertainties when the cloud is optically thin. Comparisons between the OE-retrieved ice cloud properties and other operational cloud products (e.g., the MODIS C6 and CALIOP cloud products) are shown.
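Such retrievals iterate the standard optimal estimation (Gauss-Newton) update in the Rodgers form. One step, with S_a and S_e the prior and combined measurement-plus-model error covariances and K the Jacobian supplied by the fast RTM:

    import numpy as np

    def oe_step(x, xa, y, Fx, K, Sa_inv, Se_inv):
        # x, xa: current and prior state (e.g., log COT, CER, CTH)
        # y, Fx: observed and simulated radiances/brightness temperatures
        # K: Jacobian dF/dx from the fast radiative transfer model
        lhs = K.T @ Se_inv @ K + Sa_inv
        rhs = K.T @ Se_inv @ (y - Fx) - Sa_inv @ (x - xa)
        return x + np.linalg.solve(lhs, rhs)

Inflating S_e to include the fast-RTM, model-input, and ice-habit error terms is what folds the four error sources named above into the retrieval uncertainty.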
First order error corrections in common introductory physics experiments
NASA Astrophysics Data System (ADS)
Beckey, Jacob; Baker, Andrew; Aravind, Vasudeva; Clarion Team
As a part of introductory physics courses, students perform different standard lab experiments. Almost all of these experiments are prone to errors owing to factors like friction, misalignment of equipment, air drag, etc. Usually these types of errors are ignored by students, and not much thought is given to their sources. However, paying attention to the factors that give rise to errors helps students build better physics models and understand the physical phenomena behind experiments in more detail. In this work, we explore common causes of error in introductory physics experiments and suggest changes that will mitigate the errors, or suggest models that take the sources of these errors into consideration. This work helps students build better, more refined physical models and understand physics concepts in greater detail. We thank the Clarion University undergraduate student grant for financial support of this project.
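As an example of the kind of first-order error treatment this encourages, a pendulum determination of g = 4*pi^2*L/T^2 carries fractional uncertainties in quadrature, with the period term doubled because T enters squared:

    import math
    L, dL = 1.000, 0.002      # pendulum length and its uncertainty [m]
    T, dT = 2.007, 0.005      # measured period and its uncertainty [s]
    g = 4 * math.pi**2 * L / T**2
    dg = g * math.sqrt((dL / L)**2 + (2 * dT / T)**2)
    print(f"g = {g:.2f} +/- {dg:.2f} m/s^2")   # g = 9.80 +/- 0.05 m/s^2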
Yohay Carmel; Curtis Flather; Denis Dean
2006-01-01
This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...
Fluorescence errors in integrating sphere measurements of remote phosphor type LED light sources
NASA Astrophysics Data System (ADS)
Keppens, A.; Zong, Y.; Podobedov, V. B.; Nadal, M. E.; Hanselaer, P.; Ohno, Y.
2011-05-01
The relative spectral radiant flux error caused by phosphor fluorescence during integrating sphere measurements is investigated both theoretically and experimentally. Integrating sphere and goniophotometer measurements are compared and used for model validation, while a case study provides additional clarification. Criteria for reducing fluorescence errors to a degree of negligibility as well as a fluorescence error correction method based on simple matrix algebra are presented. Only remote phosphor type LED light sources are studied because of their large phosphor surfaces and high application potential in general lighting.
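One way to realize the matrix-algebra correction: assume a fluorescence redistribution matrix F whose entry F[i, j] is the fraction of flux absorbed from wavelength bin j that the phosphor re-emits into bin i, so the measured spectrum is S_meas = (I + F) S_true and the correction is a linear solve. This model form is an assumption for illustration, not the paper's derivation:

    import numpy as np

    def correct_fluorescence(S_meas, F):
        # Undo the fluorescence perturbation: solve (I + F) S_true = S_meas.
        return np.linalg.solve(np.eye(S_meas.size) + F, S_meas)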
Investigating error structure of shuttle radar topography mission elevation data product
NASA Astrophysics Data System (ADS)
Becek, Kazimierz
2008-08-01
An attempt was made to experimentally assess the instrumental component of error of the C-band Shuttle Radar Topography Mission (SRTM) data. This was achieved by comparing elevation data for 302 runways from airports all over the world with the SRTM elevation data product. It was found that the rms of the instrumental error is about +/-1.55 m. Modeling of the remaining SRTM error sources, including terrain relief and pixel size, shows that downsampling from 30 m to 90 m (1 to 3 arcsec pixels) worsened the SRTM vertical accuracy threefold. It is suspected that the proximity of large metallic objects is a source of large SRTM errors. The achieved error estimates allow a pixel-based accuracy assessment of the SRTM elevation data product to be constructed. Vegetation-induced errors were not considered in this work.
An interpretation of radiosonde errors in the atmospheric boundary layer
Bernadette H. Connell; David R. Miller
1995-01-01
The authors review sources of error in radiosonde measurements in the atmospheric boundary layer and analyze errors of two radiosonde models manufactured by Atmospheric Instrumentation Research, Inc. The authors focus on temperature and humidity lag errors and wind errors. Errors in measurement of azimuth and elevation angles and pressure over short time intervals and...
NASA Astrophysics Data System (ADS)
Sheng, Jian-Xiong; Jacob, Daniel J.; Turner, Alexander J.; Maasakkers, Joannes D.; Sulprizio, Melissa P.; Bloom, A. Anthony; Andrews, Arlyn E.; Wunch, Debra
2018-05-01
We use observations of boundary layer methane from the SEAC4RS aircraft campaign over the Southeast US in August-September 2013 to estimate methane emissions in that region through an inverse analysis with up to 0.25° × 0.3125° (25×25 km2) resolution and with full error characterization. The Southeast US is a major source region for methane including large contributions from oil and gas production and wetlands. Our inversion uses state-of-the-art emission inventories as prior estimates, including a gridded version of the anthropogenic EPA Greenhouse Gas Inventory and the mean of the WetCHARTs ensemble for wetlands. Inversion results are independently verified by comparison with surface (NOAA/ESRL) and column (TCCON) methane observations. Our posterior estimates for the Southeast US are 12.8 ± 0.9 Tg a-1 for anthropogenic sources (no significant change from the gridded EPA inventory) and 9.4 ± 0.8 Tg a-1 for wetlands (27 % decrease from the mean in the WetCHARTs ensemble). The largest source of error in the WetCHARTs wetlands ensemble is the land cover map specification of wetland areal extent. Our results support the accuracy of the EPA anthropogenic inventory on a regional scale but there are significant local discrepancies for oil and gas production fields, suggesting that emission factors are more variable than assumed in the EPA inventory.
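For readers unfamiliar with the machinery, the analytic Bayesian inversion for a linear forward model, with full error characterization via the posterior covariance and averaging kernel, can be sketched as follows; this is the generic textbook form, not the authors' code, and the variable names are illustrative.

```python
import numpy as np

def linear_inversion(y, K, xa, Sa, So):
    """Analytic Bayesian solution for a linear forward model y = K x + e:
    returns the posterior mean, posterior error covariance, and averaging
    kernel. xa, Sa are the prior state and covariance (e.g. gridded
    emissions); So is the observation-error covariance."""
    Sa_inv, So_inv = np.linalg.inv(Sa), np.linalg.inv(So)
    S_post = np.linalg.inv(K.T @ So_inv @ K + Sa_inv)   # posterior covariance
    G = S_post @ K.T @ So_inv                           # gain matrix
    x_post = xa + G @ (y - K @ xa)                      # posterior mean
    A = G @ K                                           # averaging kernel
    return x_post, S_post, A
```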
The Chandra Source Catalog: User Interface
NASA Astrophysics Data System (ADS)
Bonaventura, Nina; Evans, I. N.; Harbo, P. N.; Rots, A. H.; Tibbetts, M. S.; Van Stone, D. W.; Zografou, P.; Anderson, C. S.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Glotfelty, K. J.; Grier, J. D.; Hain, R.; Hall, D. M.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Primini, F. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Winkelman, S. L.
2009-01-01
The Chandra Source Catalog (CSC) is the definitive catalog of all X-ray sources detected by Chandra. The CSC is presented to the user in two tables: the Master Chandra Source Table and the Table of Individual Source Observations. Each distinct X-ray source identified in the CSC is represented by a single master source entry and one or more individual source entries. If a source is unaffected by confusion and pile-up in multiple observations, the individual source observations are merged to produce a master source. In each table, a row represents a source, and each column a quantity that is officially part of the catalog. The CSC contains positions and multi-band fluxes for the sources, as well as derived spatial, spectral, and temporal source properties. The CSC also includes associated source region and full-field data products for each source, including images, photon event lists, light curves, and spectra. The master source properties represent the best estimates of the properties of a source, and are presented in the following categories: Position and Position Errors, Source Flags, Source Extent and Errors, Source Fluxes, Source Significance, Spectral Properties, and Source Variability. The CSC Data Access GUI provides direct access to the source properties and data products contained in the catalog. The user may query the catalog database via a web-style search or an SQL command-line query. Each query returns a table of source properties, along with the option to browse and download associated data products. The GUI is designed to run in a web browser with Java version 1.5 or higher, and may be accessed via a link on the CSC website homepage (http://cxc.harvard.edu/csc/). As an alternative to the GUI, the contents of the CSC may be accessed directly through a URL, using the command-line tool, cURL. Support: NASA contract NAS8-03060 (CXC).
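A hypothetical sketch of the URL-based access route mentioned above; the endpoint path and query string below are placeholders, not the documented CSC interface, which should be taken from the CSC website.

```python
import urllib.request

# Hypothetical example of fetching catalog rows through a URL; the
# EXAMPLE_ENDPOINT path and the query parameter name are assumptions.
url = ("http://cxc.harvard.edu/csc/EXAMPLE_ENDPOINT"
       "?query=SELECT+name,ra,dec+FROM+master_source")
with urllib.request.urlopen(url) as resp:   # would return a source table
    table = resp.read().decode()
```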
Propagation of angular errors in two-axis rotation systems
NASA Astrophysics Data System (ADS)
Torrington, Geoffrey K.
2003-10-01
Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates to other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
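A root-sum-of-squares error budget of the kind described can be sketched as below; the tolerances and sensitivity weights are illustrative stand-ins, not Torrington's tabulated factors.

```python
import numpy as np

# Each mechanical tolerance contributes weight_i * tolerance_i to the pointing
# error; the weights are the configuration-dependent sensitivity factors.
tolerances = np.array([10e-6, 25e-6, 5e-6])  # rad: wobble, orthogonality, zeroing
weights    = np.array([1.0,   0.7,   1.0])   # assumed sensitivities

budget_k1 = np.sqrt(np.sum((weights * tolerances)**2))   # 67% confidence (k=1)
budget_k2 = 2 * budget_k1                                # 95% confidence (k=2)
print(f"pointing budget: {budget_k1:.2e} rad (k=1), {budget_k2:.2e} rad (k=2)")
```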
Accounting for optical errors in microtensiometry.
Hinton, Zachary R; Alvarez, Nicolas J
2018-09-15
Drop shape analysis (DSA) techniques measure interfacial tension subject to error in image analysis and the optical system. While considerable efforts have been made to minimize image analysis errors, very little work has treated optical errors. There are two main sources of error when considering the optical system: the angle of misalignment and the choice of focal plane. Due to the convoluted nature of these sources, small angles of misalignment can lead to large errors in measured curvature. We demonstrate using microtensiometry the contributions of these sources to measured errors in radius, and, more importantly, deconvolute the effects of misalignment and focal plane. Our findings are expected to have broad implications for all optical techniques measuring interfacial curvature. A geometric model is developed to analytically determine the contributions of misalignment angle and choice of focal plane to measurement error for spherical cap interfaces. This work utilizes a microtensiometer to validate the geometric model and to quantify the effect of both sources of error. For the case of a microtensiometer, an empirical calibration is demonstrated that corrects for optical errors and drastically simplifies implementation. The combination of geometric modeling and experimental results reveals a convoluted relationship between the true and measured interfacial radius as a function of the misalignment angle and choice of focal plane. The validated geometric model produces a full operating window that is strongly dependent on the capillary radius and spherical cap height. In all cases, the contribution of optical errors is minimized when the height of the spherical cap is equivalent to the capillary radius, i.e. a hemispherical interface. The understanding of these errors allows for correct measurement of interfacial curvature and interfacial tension regardless of experimental setup. For the case of microtensiometry, this greatly decreases the time for experimental setup and increases experimental accuracy. In a broad sense, this work outlines the importance of optical errors in all DSA techniques. More specifically, these results have important implications for all microscale and microfluidic measurements of interface curvature. Copyright © 2018 Elsevier Inc. All rights reserved.
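One way to see why radius errors matter: for a spherical-cap interface the Laplace relation ties tension linearly to radius, so an optical error in the measured radius maps one-to-one into a relative tension error. The values below are illustrative and the sketch is not the paper's geometric model.

```python
# Laplace relation for a spherical cap: gamma = dP * R / 2, so the relative
# error in gamma equals the relative error in the optically measured radius.
dP = 200.0                       # capillary pressure, Pa (assumed)
R_true, R_meas = 50e-6, 52e-6    # true vs optically measured radius, m

gamma_true = dP * R_true / 2
gamma_meas = dP * R_meas / 2
rel_err = gamma_meas / gamma_true - 1   # equals the 4% radius error
print(f"relative tension error: {100 * rel_err:.1f}%")
```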
Voluminator 2.0 - Speeding up the Approximation of the Volume of Defective 3d Building Models
NASA Astrophysics Data System (ADS)
Sindram, M.; Machl, T.; Steuer, H.; Pültz, M.; Kolbe, T. H.
2016-06-01
Semantic 3D city models are increasingly used as a data source in planning and analyzing processes of cities. They represent a virtual copy of reality and are a common information base and source of information for examining urban questions. A significant advantage of virtual city models is that important indicators such as the volume of buildings, topological relationships between objects and other geometric as well as thematic information can be derived. Knowledge about the exact building volume is an essential basis for estimating the building energy demand. In order to determine the volume of buildings with conventional algorithms and tools, the buildings may not contain any topological and geometrical errors. The reality, however, shows that city models very often contain errors such as missing surfaces, duplicated faces and misclosures. To overcome these errors, Steuer et al. (2015) presented a robust method for approximating the volume of building models. For this purpose, a bounding box of the building is divided into a regular grid of voxels and it is determined which voxels are inside the building. The regular arrangement of the voxels leads to a high number of topological tests and prevents the application of this method at very high resolutions. In this paper we present an extension of the algorithm using an octree approach, limiting the subdivision of space to regions around surfaces of the building models and to regions where, in the case of defective models, the topological tests are inconclusive. We show that the computation time can be significantly reduced, while preserving the robustness against geometrical and topological errors.
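A minimal sketch of the octree idea, assuming a `classify` callback that stands in for the paper's topological inside/outside tests against the building surfaces; only boundary or inconclusive cells are subdivided, which is the speed-up over a regular voxel grid.

```python
def octree_volume(lo, hi, classify, depth, max_depth):
    """Approximate the enclosed volume on the box [lo, hi] in 3D.
    `classify(lo, hi)` returns 'in', 'out', or 'boundary' for a cell;
    only 'boundary' cells are subdivided. Inconclusive leaf cells at
    max_depth are counted as half inside (an assumed convention)."""
    state = classify(lo, hi)
    cell = (hi[0] - lo[0]) * (hi[1] - lo[1]) * (hi[2] - lo[2])
    if state == 'in':
        return cell
    if state == 'out':
        return 0.0
    if depth == max_depth:
        return 0.5 * cell
    mid = [(a + b) / 2 for a, b in zip(lo, hi)]
    vol = 0.0
    for ix in range(2):          # visit the eight child octants
        for iy in range(2):
            for iz in range(2):
                sub_lo = [(lo[0], mid[0])[ix], (lo[1], mid[1])[iy], (lo[2], mid[2])[iz]]
                sub_hi = [(mid[0], hi[0])[ix], (mid[1], hi[1])[iy], (mid[2], hi[2])[iz]]
                vol += octree_volume(sub_lo, sub_hi, classify, depth + 1, max_depth)
    return vol
```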
Sources of error in the retracted scientific literature.
Casadevall, Arturo; Steen, R Grant; Fang, Ferric C
2014-09-01
Retraction of flawed articles is an important mechanism for correction of the scientific literature. We recently reported that the majority of retractions are associated with scientific misconduct. In the current study, we focused on the subset of retractions for which no misconduct was identified, in order to identify the major causes of error. Analysis of the retraction notices for 423 articles indexed in PubMed revealed that the most common causes of error-related retraction are laboratory errors, analytical errors, and irreproducible results. The most common laboratory errors are contamination and problems relating to molecular biology procedures (e.g., sequencing, cloning). Retractions due to contamination were more common in the past, whereas analytical errors are now increasing in frequency. A number of publications that have not been retracted despite being shown to contain significant errors suggest that barriers to retraction may impede correction of the literature. In particular, few cases of retraction due to cell line contamination were found despite recognition that this problem has affected numerous publications. An understanding of the errors leading to retraction can guide practices to improve laboratory research and the integrity of the scientific literature. Perhaps most important, our analysis has identified major problems in the mechanisms used to rectify the scientific literature and suggests a need for action by the scientific community to adopt protocols that ensure the integrity of the publication process. © FASEB.
The relationship between somatic and cognitive-affective depression symptoms and error-related ERPs
Bridwell, David A.; Steele, Vaughn R.; Maurer, J. Michael; Kiehl, Kent A.; Calhoun, Vince D.
2014-01-01
Background The symptoms that contribute to the clinical diagnosis of depression likely emerge from, or are related to, underlying cognitive deficits. To understand this relationship further, we examined the relationship between self-reported somatic and cognitive-affective Beck’s Depression Inventory-II (BDI-II) symptoms and aspects of cognitive control reflected in error event-related potential (ERP) responses. Methods Task and assessment data were analyzed within 51 individuals. The group contained a broad distribution of depressive symptoms, as assessed by BDI-II scores. ERPs were collected following error responses within a go/no-go task. Individual error ERP amplitudes were estimated by conducting group independent component analysis (ICA) on the electroencephalographic (EEG) time series and analyzing the individual reconstructed source epochs. Source error amplitudes were correlated with the subset of BDI-II scores representing somatic and cognitive-affective symptoms. Results We demonstrate a negative relationship between somatic depression symptoms (i.e. fatigue or loss of energy) (after regressing out cognitive-affective scores, age and IQ) and the central-parietal ERP response that peaks at 359 ms. The peak amplitudes within this ERP response were not significantly related to cognitive-affective symptom severity (after regressing out the somatic symptom scores, age, and IQ). Limitations These findings were obtained within a population of female adults from a maximum-security correctional facility. Thus, additional research is required to verify that they generalize to the broad population. Conclusions These results suggest that individuals with greater somatic depression symptoms demonstrate a reduced awareness of behavioral errors, and help clarify the relationship between clinical measures of self-reported depression symptoms and cognitive control. PMID:25451400
The relationship between somatic and cognitive-affective depression symptoms and error-related ERPs.
Bridwell, David A; Steele, Vaughn R; Maurer, J Michael; Kiehl, Kent A; Calhoun, Vince D
2015-02-01
The symptoms that contribute to the clinical diagnosis of depression likely emerge from, or are related to, underlying cognitive deficits. To understand this relationship further, we examined the relationship between self-reported somatic and cognitive-affective Beck's Depression Inventory-II (BDI-II) symptoms and aspects of cognitive control reflected in error event-related potential (ERP) responses. Task and assessment data were analyzed within 51 individuals. The group contained a broad distribution of depressive symptoms, as assessed by BDI-II scores. ERPs were collected following error responses within a go/no-go task. Individual error ERP amplitudes were estimated by conducting group independent component analysis (ICA) on the electroencephalographic (EEG) time series and analyzing the individual reconstructed source epochs. Source error amplitudes were correlated with the subset of BDI-II scores representing somatic and cognitive-affective symptoms. We demonstrate a negative relationship between somatic depression symptoms (i.e. fatigue or loss of energy) (after regressing out cognitive-affective scores, age and IQ) and the central-parietal ERP response that peaks at 359 ms. The peak amplitudes within this ERP response were not significantly related to cognitive-affective symptom severity (after regressing out the somatic symptom scores, age, and IQ). These findings were obtained within a population of female adults from a maximum-security correctional facility. Thus, additional research is required to verify that they generalize to the broad population. These results suggest that individuals with greater somatic depression symptoms demonstrate a reduced awareness of behavioral errors, and help clarify the relationship between clinical measures of self-reported depression symptoms and cognitive control. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Vilnrotter, Victor A.
2012-01-01
Initial optical communications experiments with a Vertex polished aluminum panel have been described. The polished panel was mounted on the main reflector of the DSN's research antenna at DSS-13. The PSF was recorded via a remotely controlled digital camera mounted on the subreflector structure. The initial PSF generated by Jupiter showed significant tilt error and some mechanical deformation. After upgrades, the PSF improved significantly, leading to much better concentration of light. The communications performance of the initial and upgraded panel structures was compared. After the upgrades, the simulated PPM symbol error probability decreased by six orders of magnitude. Work is continuing to demonstrate closed-loop tracking of sources from zenith to horizon, and to better characterize communications performance in realistic daytime background environments.
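For context, the symbol error rate of pulse-position modulation (PPM) over a Poisson direct-detection channel is easy to estimate by Monte Carlo; the photon counts below are illustrative, not the experiment's link budget.

```python
import numpy as np

rng = np.random.default_rng(0)

def ppm_ser(M=16, Ks=10.0, Kb=1.0, trials=100_000):
    """Monte Carlo symbol-error rate for M-ary PPM with Poisson photon
    counting: Ks mean signal photons in the pulsed slot, Kb mean background
    photons per slot. Ties resolve in favor of slot 0 here, so the estimate
    is slightly optimistic."""
    counts = rng.poisson(Kb, size=(trials, M))
    counts[:, 0] += rng.poisson(Ks, size=trials)   # pulse sent in slot 0
    errors = counts.argmax(axis=1) != 0
    return errors.mean()

print(f"estimated SER: {ppm_ser():.4f}")
```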
Error analysis of satellite attitude determination using a vision-based approach
NASA Astrophysics Data System (ADS)
Carozza, Ludovico; Bevilacqua, Alessandro
2013-09-01
Improvements in communication and processing technologies have opened the doors to exploiting on-board cameras to compute an object's spatial attitude using only the visual information from sequences of remotely sensed images. The strategies and the algorithmic approach used to extract such information affect the estimation accuracy of the three-axis orientation of the object. This work presents a method for analyzing the most relevant error sources, including numerical ones, possible drift effects and their influence on the overall accuracy, with reference to vision-based approaches. The method focuses in particular on the analysis of the image registration algorithm, carried out through dedicated simulations. The overall accuracy has been assessed on a challenging case study, for which accuracy represents the fundamental requirement. In particular, attitude determination has been analyzed for small satellites, by comparing theoretical findings to metric results from simulations on realistic ground-truth data. Significant laboratory experiments, using a numerical control unit, have further confirmed the outcome. We believe that our analysis approach, as well as our findings in terms of error characterization, can be useful at proof-of-concept design and planning levels, since they emphasize the main sources of error for vision-based approaches employed for satellite attitude estimation. Nevertheless, the approach we present is also of general interest for all related application domains that require an accurate estimation of three-dimensional orientation parameters (e.g., robotics, airborne stabilization).
Comparison of Highly Resolved Model-Based Exposure ...
Human exposure to air pollution in many studies is represented by ambient concentrations from space-time kriging of observed values. Space-time kriging techniques based on a limited number of ambient monitors may fail to capture the concentration from local sources. Further, because people spend more time indoors, using ambient concentration to represent exposure may cause error. To quantify the associated exposure error, we computed a series of six different hourly-based exposure metrics at 16,095 Census blocks of three Counties in North Carolina for CO, NOx, PM2.5, and elemental carbon (EC) during 2012. These metrics include ambient background concentration from space-time ordinary kriging (STOK), ambient on-road concentration from the Research LINE source dispersion model (R-LINE), a hybrid concentration combining STOK and R-LINE, and their associated indoor concentrations from an indoor infiltration mass balance model. Using a hybrid-based indoor concentration as the standard, the comparison showed that outdoor STOK metrics yielded large error at both population (67% to 93%) and individual level (average bias between −10% to 95%). For pollutants with significant contribution from on-road emission (EC and NOx), the on-road based indoor metric performs the best at the population level (error less than 52%). At the individual level, however, the STOK-based indoor concentration performs the best (average bias below 30%). For PM2.5, due to the relatively low co
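The indoor metrics referred to above rest on an infiltration mass balance; a steady-state sketch with assumed parameter values (not the study's fitted ones) looks like this:

```python
# Steady-state indoor infiltration mass balance:
# C_in = C_out * P * a / (a + k), with penetration efficiency P,
# air-exchange rate a (1/h), and indoor deposition rate k (1/h).
P, a, k = 0.8, 0.6, 0.2            # illustrative parameter values
c_out = 12.0                       # ambient PM2.5, ug/m^3 (toy value)
c_in = c_out * P * a / (a + k)     # indoor concentration of ambient origin
print(f"infiltration factor = {P * a / (a + k):.2f}, C_in = {c_in:.1f} ug/m^3")
```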
Bayesian models for comparative analysis integrating phylogenetic uncertainty.
de Villemereuil, Pierre; Wells, Jessie A; Edwards, Robert D; Blomberg, Simon P
2012-06-28
Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language.
Bayesian models for comparative analysis integrating phylogenetic uncertainty
2012-01-01
Background Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language. PMID:22741602
Real-time Neuroimaging and Cognitive Monitoring Using Wearable Dry EEG
Mullen, Tim R.; Kothe, Christian A.E.; Chi, Mike; Ojeda, Alejandro; Kerth, Trevor; Makeig, Scott; Jung, Tzyy-Ping; Cauwenberghs, Gert
2015-01-01
Goal We present and evaluate a wearable high-density dry electrode EEG system and an open-source software framework for online neuroimaging and state classification. Methods The system integrates a 64-channel dry EEG form-factor with wireless data streaming for online analysis. A real-time software framework is applied, including adaptive artifact rejection, cortical source localization, multivariate effective connectivity inference, data visualization, and cognitive state classification from connectivity features using a constrained logistic regression approach (ProxConn). We evaluate the system identification methods on simulated 64-channel EEG data. Then we evaluate system performance, using ProxConn and a benchmark ERP method, in classifying response errors in 9 subjects using the dry EEG system. Results Simulations yielded high accuracy (AUC=0.97±0.021) for real-time cortical connectivity estimation. Response error classification using cortical effective connectivity (sdDTF) was significantly above chance with similar performance (AUC) for cLORETA (0.74±0.09) and LCMV (0.72±0.08) source localization. Cortical ERP-based classification was equivalent to ProxConn for cLORETA (0.74±0.16) but significantly better for LCMV (0.82±0.12). Conclusion We demonstrated the feasibility for real-time cortical connectivity analysis and cognitive state classification from high-density wearable dry EEG. Significance This paper is the first validated application of these methods to 64-channel dry EEG. The work addresses a need for robust real-time measurement and interpretation of complex brain activity in the dynamic environment of the wearable setting. Such advances can have broad impact in research, medicine, and brain-computer interfaces. The pipelines are made freely available in the open-source SIFT and BCILAB toolboxes. PMID:26415149
Fu, Haijin; Wang, Yue; Tan, Jiubin; Fan, Zhigang
2018-01-01
Even after the Heydemann correction, residual nonlinear errors, ranging from hundreds of picometers to several nanometers, are still found in heterodyne laser interferometers. This is a crucial factor impeding the realization of picometer level metrology, but its source and mechanism have barely been investigated. To study this problem, a novel nonlinear model based on optical mixing and coupling with ghost reflection is proposed and then verified by experiments. After intense investigation of this new model’s influence, results indicate that new additional high-order and negative-order nonlinear harmonics, arising from ghost reflection and its coupling with optical mixing, have only a negligible contribution to the overall nonlinear error. In real applications, any effect on the Lissajous trajectory might be invisible due to the small ghost reflectance. However, even a tiny ghost reflection can significantly worsen the effectiveness of the Heydemann correction, or even make this correction completely ineffective, i.e., compensation makes the error larger rather than smaller. Moreover, the residual nonlinear error after correction is dominated only by ghost reflectance. PMID:29498685
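For reference, the classic Heydemann correction that this paper stress-tests can be sketched as follows, assuming the usual elliptical signal model; in practice the five parameters come from a least-squares ellipse fit to the Lissajous trajectory.

```python
import numpy as np

def heydemann_correct(u1, u2, p, q, r, alpha):
    """Heydemann correction for sampled quadrature signals (1-D arrays),
    assuming the model u1 = A*cos(phi) + p and u2 = (A/r)*sin(phi - alpha) + q
    (offsets p, q; gain ratio r; quadrature phase error alpha)."""
    x = u1 - p
    y = (x * np.sin(alpha) + r * (u2 - q)) / np.cos(alpha)
    return np.unwrap(np.arctan2(y, x))   # corrected interferometric phase
```

The paper's point is that a ghost reflection violates this signal model, so the fitted parameters no longer describe the trajectory and the correction can make the residual error larger rather than smaller.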
NASA Technical Reports Server (NTRS)
Esbensen, S. K.; Chelton, D. B.; Vickers, D.; Sun, J.
1993-01-01
The method proposed by Liu (1984) is used to estimate monthly averaged evaporation over the global oceans from 1 yr of Special Sensor Microwave/Imager (SSM/I) data. Intercomparisons involving SSM/I and in situ data are made over a wide range of oceanic conditions during August 1987 and February 1988 to determine the source of errors in the evaporation estimates. The most significant spatially coherent evaporation errors are found to come from estimates of near-surface specific humidity, q. Systematic discrepancies of over 2 g/kg are found in the tropics, as well as in the middle and high latitudes. The q errors are partitioned into contributions from the parameterization of q in terms of the columnar water vapor, i.e., the Liu q/W relationship, and from the retrieval algorithm for W. The effects of W retrieval errors are found to be smaller over most of the global oceans and due primarily to the implicitly assumed vertical structures of temperature and specific humidity on which the physically based SSM/I retrievals of W are based.
Image processing methods to compensate for IFOV errors in microgrid imaging polarimeters
NASA Astrophysics Data System (ADS)
Ratliff, Bradley M.; Boger, James K.; Fetrow, Matthew P.; Tyo, J. Scott; Black, Wiley T.
2006-05-01
Long-wave infrared imaging Stokes vector polarimeters are used in many remote sensing applications. Imaging polarimeters require that several measurements be made under optically different conditions in order to estimate the polarization signature at a given scene point. This multiple-measurement requirement introduces error in the signature estimates, and the errors differ depending upon the type of measurement scheme used. Here, we investigate a LWIR linear microgrid polarimeter. This type of instrument consists of a mosaic of micropolarizers at different orientations that are masked directly onto a focal plane array sensor. In this scheme, each polarization measurement is acquired spatially and hence each is made at a different point in the scene. This is a significant source of error, as it violates the requirement that each polarization measurement have the same instantaneous field-of-view (IFOV). In this paper, we first study the amount of error introduced by the IFOV handicap in microgrid instruments. We then proceed to investigate means for mitigating the effects of these errors to improve the quality of polarimetric imagery. In particular, we examine different interpolation schemes and gauge their performance. These studies are completed through the use of both real instrumental and modeled data.
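For orientation, the per-super-pixel Stokes reconstruction that makes the IFOV problem concrete is simply the following; treating the four neighboring micropolarizer pixels as co-located is exactly the error discussed above, and interpolating each orientation's samples to a common grid first is one of the mitigations the paper examines.

```python
import numpy as np

def stokes_from_microgrid(I0, I45, I90, I135):
    """Linear Stokes parameters from the four micropolarizer orientations
    of a 2x2 microgrid super-pixel (arrays of intensities)."""
    S0 = 0.5 * (I0 + I45 + I90 + I135)   # total intensity
    S1 = I0 - I90                        # horizontal/vertical preference
    S2 = I45 - I135                      # +/-45 degree preference
    return S0, S1, S2
```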
Hybrid Correlation Algorithms. A Bridge Between Feature Matching and Image Correlation,
1979-11-01
spatially into groups of pixels. The intensity level preprocessing is designed to compensate for any biases or gain changes in the system; whereas... number of error sources that affect the performance of the system. It would be desirable to lump these errors into generic categories in discussing... system performance rather than treating each error source separately. Such a generic categorization should possess the following properties: 1. The
Performance assessment of the BEBIG MultiSource® high dose rate brachytherapy treatment unit
NASA Astrophysics Data System (ADS)
Palmer, Antony; Mzenda, Bongile
2009-12-01
A comprehensive system characterisation was performed of the Eckert & Ziegler BEBIG GmbH MultiSource® High Dose Rate (HDR) brachytherapy treatment unit with an 192Ir source. The unit is relatively new to the UK market, with the first installation in the country having been made in the summer of 2009. A detailed commissioning programme was devised and is reported including checks of the fundamental parameters of source positioning, dwell timing, transit doses and absolute dosimetry of the source. Well chamber measurements, autoradiography and video camera analysis techniques were all employed. The absolute dosimetry was verified by the National Physical Laboratory, UK, and compared to a measurement based on a calibration from PTB, Germany, and the supplied source certificate, as well as an independent assessment by a visiting UK centre. The use of the 'Krieger' dosimetry phantom has also been evaluated. Users of the BEBIG HDR system should take care to avoid any significant bend in the transfer tube, as this will lead to positioning errors of the source, of up to 1.0 mm for slight bends, 2.0 mm for moderate bends and 5.0 mm for extreme curvature (depending on applicators and transfer tube used) for the situations reported in this study. The reason for these errors and the potential clinical impact are discussed. Users should also note the methodology employed by the system for correction of transit doses, and that no correction is made for the initial and final transit doses. The results of this investigation found that the uncorrected transit doses lead to small errors in the delivered dose at the first dwell position, of up to 2.5 cGy at 2 cm (5.6 cGy at 1 cm) from a 10 Ci source, but the transit dose correction for other dwells was accurate within 0.2 cGy. The unit has been mechanically reliable, and source positioning accuracy and dwell timing have been reproducible, with overall performance similar to other existing HDR equipment. The unit is capable of high quality brachytherapy treatment delivery, taking the above factors into account.
Identification of driver errors : overview and recommendations
DOT National Transportation Integrated Search
2002-08-01
Driver error is cited as a contributing factor in most automobile crashes, and although estimates vary by source, driver error is cited as the principal cause of 45 to 75 percent of crashes. However, the specific errors that lead to crashes, and...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xing, Y; Macq, B; Bondar, L
Purpose: To quantify the accuracy in predicting the Bragg peak position using simulated in-room measurements of prompt gamma (PG) emissions for realistic treatment error scenarios that combine several sources of errors. Methods: Prompt gamma measurements by a knife-edge slit camera were simulated using an experimentally validated analytical simulation tool. Simulations were performed, for 143 treatment error scenarios, on an anthropomorphic phantom and a pencil beam scanning plan for the nasal cavity. Three types of errors were considered: translation along each axis, rotation around each axis, and CT-calibration errors, with magnitudes ranging, respectively, between −3 and 3 mm, −5 and 5 degrees, and −5 and +5%. We investigated the correlation between the Bragg peak (BP) shift and the horizontal shift of PG profiles. The shifts were calculated between the planned (reference) position and the position given by the error scenario. The prediction error for one spot was calculated as the absolute difference between the PG profile shift and the BP shift. Results: The PG shift was significantly and strongly correlated with the BP shift for 92% of the cases (p<0.0001, Pearson correlation coefficient R>0.8). Moderate but significant correlations were obtained for all cases that considered only CT-calibration errors and for 1 case that combined translation and CT-errors (p<0.0001, R ranged between 0.61 and 0.8). The average prediction errors for the simulated scenarios ranged between 0.08±0.07 and 1.67±1.3 mm (grand mean 0.66±0.76 mm). The prediction error was moderately correlated with the value of the BP shift (p=0, R=0.64). For the simulated scenarios the average BP shift ranged between −8±6.5 mm and 3±1.1 mm. Scenarios that considered combinations of the largest treatment errors were associated with large BP shifts. Conclusion: Simulations of in-room measurements demonstrate that prompt gamma profiles provide reliable estimation of the Bragg peak position for complex error scenarios. Yafei Xing and Luiza Bondar are funded by BEWARE grants from the Walloon Region. The work presents simulation results for a prompt gamma camera prototype developed by IBA.
NASA Astrophysics Data System (ADS)
Liu, C. L.; Kirchengast, G.; Zhang, K. F.; Norman, R.; Li, Y.; Zhang, S. C.; Carter, B.; Fritzer, J.; Schwaerz, M.; Choy, S. L.; Wu, S. Q.; Tan, Z. X.
2013-09-01
Global Navigation Satellite System (GNSS) radio occultation (RO) is an innovative meteorological remote sensing technique for measuring atmospheric parameters such as refractivity, temperature, water vapour and pressure for the improvement of numerical weather prediction (NWP) and global climate monitoring (GCM). GNSS RO has many unique characteristics including global coverage, long-term stability of observations, as well as high accuracy and high vertical resolution of the derived atmospheric profiles. One of the main error sources in GNSS RO observations that significantly affect the accuracy of the derived atmospheric parameters in the stratosphere is the ionospheric error. In order to mitigate the effect of this error, the linear ionospheric correction approach for dual-frequency GNSS RO observations is commonly used. However, the residual ionospheric errors (RIEs) can be still significant, especially when large ionospheric disturbances occur and prevail such as during the periods of active space weather. In this study, the RIEs were investigated under different local time, propagation direction and solar activity conditions and their effects on RO bending angles are characterised using end-to-end simulations. A three-step simulation study was designed to investigate the characteristics of the RIEs through comparing the bending angles with and without the effects of the RIEs. This research forms an important step forward in improving the accuracy of the atmospheric profiles derived from the GNSS RO technique.
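The linear dual-frequency correction mentioned above has a compact closed form; a sketch using the GPS L1/L2 carrier frequencies (the residual after this correction is the RIE studied in the paper):

```python
# Standard linear (first-order) ionospheric correction of RO bending angles:
# combining the bending angles measured at the two GNSS frequencies so the
# 1/f^2 ionospheric term cancels at matched impact parameter a.
f1, f2 = 1.57542e9, 1.22760e9   # GPS L1 and L2 frequencies, Hz

def corrected_bending_angle(alpha1, alpha2):
    """alpha1, alpha2: bending angles at f1, f2 for the same impact parameter."""
    return (f1**2 * alpha1 - f2**2 * alpha2) / (f1**2 - f2**2)
```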
Biased interpretation and memory in children with varying levels of spider fear.
Klein, Anke M; Titulaer, Geraldine; Simons, Carlijn; Allart, Esther; de Gier, Erwin; Bögels, Susan M; Becker, Eni S; Rinck, Mike
2014-01-01
This study investigated multiple cognitive biases in children simultaneously, to examine whether spider-fearful children display an interpretation bias, a recall bias, and source monitoring errors, and whether these biases are specific to spider-related materials. Furthermore, the independent ability of these biases to predict spider fear was investigated. A total of 121 children filled out the Spider Anxiety and Disgust Screening for Children (SADS-C), and they performed an interpretation task, a memory task, and a Behavioural Assessment Test (BAT). As expected, a specific interpretation bias was found: Spider-fearful children showed more negative interpretations of ambiguous spider-related scenarios, but not of other scenarios. We also found specific source monitoring errors: Spider-fearful children made more fear-related source monitoring errors for the spider-related scenarios, but not for the other scenarios. Only limited support was found for a recall bias. Finally, interpretation bias, recall bias, and source monitoring errors predicted unique variance components of spider fear.
Error propagation of partial least squares for parameters optimization in NIR modeling.
Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng
2018-03-05
A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. The error propagation of the modeling parameters, for water quantity in corn and geniposide quantity in Gardenia, was characterized by both type I and type II errors. For example, when the variable importance in the projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%. The results demonstrated how, and to what extent, the different modeling parameters affect the error propagation of PLS for parameter optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials completed a rigorous process of developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters of other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.
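A minimal sketch of scanning one such modeling parameter (the number of latent variables) with cross-validation, using random placeholder data in place of the corn spectra; scikit-learn's PLSRegression serves here as a generic stand-in for the authors' modeling pipeline.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

X = np.random.rand(80, 700)   # placeholder spectra (80 samples, 700 channels)
y = np.random.rand(80)        # placeholder reference property, e.g. moisture

# Watch how prediction error responds as one modeling parameter varies.
for n_lv in (2, 4, 8, 12):
    rmse = -cross_val_score(PLSRegression(n_components=n_lv), X, y,
                            scoring="neg_root_mean_squared_error", cv=5).mean()
    print(f"{n_lv:2d} latent variables: CV RMSE = {rmse:.3f}")
```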
Error propagation of partial least squares for parameters optimization in NIR modeling
NASA Astrophysics Data System (ADS)
Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng
2018-03-01
A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. The error propagation of the modeling parameters, for water quantity in corn and geniposide quantity in Gardenia, was characterized by both type I and type II errors. For example, when the variable importance in the projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%. The results demonstrated how, and to what extent, the different modeling parameters affect the error propagation of PLS for parameter optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials completed a rigorous process of developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters of other multivariate calibration models.
Xu, Hang; Merryweather, Andrew; Bloswick, Donald; Mao, Qi; Wang, Tong
2015-01-01
Marker placement can be a significant source of error in biomechanical studies of human movement. The toe marker placement error is amplified by footwear, since the toe marker placement on the shoe relies only on an approximation of underlying anatomical landmarks. Three total knee replacement subjects were recruited, and three self-selected-speed gait trials per subject were collected. The height variation between the toe and heel markers of four types of footwear was evaluated from the resulting joint kinematics and muscle forces using OpenSim. The reference condition was defined as equal vertical height of the toe and heel markers. The results showed that the residual variances for joint kinematics had an approximately linear relationship with toe marker placement error for lower limb joints. Ankle dorsiflexion/plantarflexion is most sensitive to toe marker placement error. The influence of toe marker placement error is generally larger for hip flexion/extension and rotation than for hip abduction/adduction and knee flexion/extension. The muscle forces responded to the residual variance of joint kinematics to various degrees, based on the muscle's function for specific joint kinematics. This study demonstrates the importance of evaluating marker error for joint kinematics and muscle forces when interpreting clinical gait analyses and treatment interventions.
Evaluation of Trajectory Errors in an Automated Terminal-Area Environment
NASA Technical Reports Server (NTRS)
Oseguera-Lohr, Rosa M.; Williams, David H.
2003-01-01
A piloted simulation experiment was conducted to document the trajectory errors associated with use of an airplane's Flight Management System (FMS) in conjunction with a ground-based ATC automation system, Center-TRACON Automation System (CTAS) in the terminal area. Three different arrival procedures were compared: current-day (vectors from ATC), modified (current-day with minor updates), and data link with FMS lateral navigation. Six active airline pilots flew simulated arrivals in a fixed-base simulator. The FMS-datalink procedure resulted in the smallest time and path distance errors, indicating that use of this procedure could reduce the CTAS arrival-time prediction error by about half over the current-day procedure. Significant sources of error contributing to the arrival-time error were crosstrack errors and early speed reduction in the last 2-4 miles before the final approach fix. Pilot comments were all very positive, indicating the FMS-datalink procedure was easy to understand and use, and the increased head-down time and workload did not detract from the benefit. Issues that need to be resolved before this method of operation would be ready for commercial use include development of procedures acceptable to controllers, better speed conformance monitoring, and FMS database procedures to support the approach transitions.
Uncertainty Analysis Principles and Methods
2007-09-01
error source. The Data Processor converts binary coded numbers to values, performs D/A curve fitting and applies any correction factors that may be... describes the stages or modules involved in the measurement process. We now need to identify all relevant error sources and develop the mathematical...
A national physician survey of diagnostic error in paediatrics.
Perrem, Lucy M; Fanshawe, Thomas R; Sharif, Farhana; Plüddemann, Annette; O'Neill, Michael B
2016-10-01
This cross-sectional survey explored paediatric physician perspectives regarding diagnostic errors. All paediatric consultants and specialist registrars in Ireland were invited to participate in this anonymous online survey. The response rate for the study was 54 % (n = 127). Respondents had a median of 9 years' clinical experience (interquartile range (IQR) 4-20 years). A diagnostic error was reported at least monthly by 19 (15.0 %) respondents. Consultants reported significantly less diagnostic errors compared to trainees (p value = 0.01). Cognitive error was the top-ranked contributing factor to diagnostic error, with incomplete history and examination considered to be the principal cognitive error. Seeking a second opinion and close follow-up of patients to ensure that the diagnosis is correct were the highest-ranked, clinician-based solutions to diagnostic error. Inadequate staffing levels and excessive workload were the most highly ranked system-related and situational factors. Increased access to and availability of consultants and experts was the most highly ranked system-based solution to diagnostic error. We found a low level of self-perceived diagnostic error in an experienced group of paediatricians, at variance with the literature and warranting further clarification. The results identify perceptions on the major cognitive, system-related and situational factors contributing to diagnostic error and also key preventative strategies. What is Known: • Diagnostic errors are an important source of preventable patient harm and have an estimated incidence of 10-15 %. • They are multifactorial in origin and include cognitive, system-related and situational factors. What is New: • We identified a low rate of self-perceived diagnostic error in contrast to the existing literature. • Incomplete history and examination, inadequate staffing levels and excessive workload are cited as the principal contributing factors to diagnostic error in this study.
Schmidt, Frank L; Le, Huy; Ilies, Remus
2003-06-01
On the basis of an empirical study of measures of constructs from the cognitive domain, the personality domain, and the domain of affective traits, the authors of this study examine the implications of transient measurement error for the measurement of frequently studied individual differences variables. The authors clarify relevant reliability concepts as they relate to transient error and present a procedure for estimating the coefficient of equivalence and stability (L. J. Cronbach, 1947), the only classical reliability coefficient that assesses all 3 major sources of measurement error (random response, transient, and specific factor errors). The authors conclude that transient error exists in all 3 trait domains and is especially large in the domain of affective traits. Their findings indicate that the nearly universal use of the coefficient of equivalence (Cronbach's alpha; L. J. Cronbach, 1951), which fails to assess transient error, leads to overestimates of reliability and undercorrections for biases due to measurement error.
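For concreteness, the single-occasion coefficient of equivalence (Cronbach's alpha) that the authors argue overestimates reliability can be computed as below; because it uses data from one occasion only, it is blind to transient error by construction.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a persons-by-items score matrix
    (rows = persons, columns = items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)
```

Estimating the coefficient of equivalence and stability instead requires parallel forms administered on two occasions, which is what lets it capture random response, transient, and specific factor errors together.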
Application of Exactly Linearized Error Transport Equations to AIAA CFD Prediction Workshops
NASA Technical Reports Server (NTRS)
Derlaga, Joseph M.; Park, Michael A.; Rallabhandi, Sriram
2017-01-01
The computational fluid dynamics (CFD) prediction workshops sponsored by the AIAA have created invaluable opportunities in which to discuss the predictive capabilities of CFD in areas in which it has struggled, e.g., cruise drag, high-lift, and sonic boom prediction. While there are many factors that contribute to disagreement between simulated and experimental results, such as modeling or discretization error, quantifying the errors contained in a simulation is important for those who make decisions based on the computational results. The linearized error transport equations (ETE) combined with a truncation error estimate provide a method to quantify one source of errors. The ETE are implemented with a complex-step method to provide an exact linearization with minimal source code modifications to CFD and multidisciplinary analysis methods. The equivalency of adjoint and linearized ETE functional error correction is demonstrated. Uniformly refined grids from a series of AIAA prediction workshops demonstrate the utility of ETE for multidisciplinary analysis with a connection between estimated discretization error and (resolved or under-resolved) flow features.
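The complex-step method the abstract refers to is a standard trick for exact-to-machine-precision linearization; a one-function sketch:

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """f'(x) ~= Im(f(x + i*h)) / h. Unlike finite differences there is no
    subtractive cancellation, so h can be made tiny and the linearization is
    exact to machine precision -- the property that makes it attractive for
    linearizing code with minimal source modifications."""
    return np.imag(f(x + 1j * h)) / h

# e.g. complex_step_derivative(np.sin, 0.5) agrees with np.cos(0.5)
```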
Visual symptoms associated with refractive errors among Thangka artists of Kathmandu valley.
Dhungel, Deepa; Shrestha, Gauri Shankar
2017-12-21
Prolonged near work, especially among people with uncorrected refractive error, is considered a potential source of visual symptoms. The present study aims to determine the visual symptoms among Thangka artists and their association with refractive errors. In a descriptive cross-sectional study, 242 (46.1%) of 525 Thangka artists examined, aged between 16 and 39 years and comprising 112 participants with significant refractive errors and 130 emmetropic participants, were enrolled from six Thangka painting schools. Visual symptoms were assessed using a structured questionnaire consisting of nine items, each scored on a 0 to 6 scale. The eye examination included detailed anterior and posterior segment examination, objective and subjective refraction, and assessment of heterophoria, vergence and accommodation. Symptoms were presented as percentages and medians. Variation in the distribution of participants and symptoms was analysed using the Kruskal-Wallis test, and correlations with the Pearson correlation coefficient. A significance level of 0.05 was applied with a 95% confidence interval. The majority of participants (65.1%) in the refractive error group (REG) were above the age of 30 years, with a male predominance (61.6%), whereas the majority of participants in the normal cohort group (NCG) were below 30 years of age (72.3%) and female (51.5%). Overall, the prevalence of visual symptoms is high among Thangka artists. However, blurred vision (p = 0.003) and dry eye (p = 0.004) are more common in the REG than the NCG. Females have slightly more symptoms than males. Most of the symptoms, such as sore/aching eyes (p = 0.003), dryness (p = 0.005) and blurred vision (p = 0.02), are significantly associated with astigmatism. A significant proportion of Thangka artists present with refractive error and visual symptoms, especially females. The most commonly reported symptoms are blurred vision, dry eye and watering of the eye. The visual symptoms are most strongly correlated with astigmatism.
Discounted Cash Flow (DCF) Analysis for Craven County Project New Source Review
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Single Source Evaluation for the Hartford Working Group and Premcor Distribution Center
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Source Definition Issue for KN Power - Front Range Energy Associates, LLC/PSCo Generating Facility
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Procedures for EPA to Address Deficient New Source Permits Under the Clean Air Act
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Utilizing the N beam position monitor method for turn-by-turn optics measurements
NASA Astrophysics Data System (ADS)
Langner, A.; Benedetti, G.; Carlà, M.; Iriso, U.; Martí, Z.; de Portugal, J. Coello; Tomás, R.
2016-09-01
The N beam position monitor method (N-BPM), which was recently developed for the LHC, has significantly improved the precision of optics measurements based on BPM turn-by-turn data. The main improvement comes from accounting for correlations among statistical and systematic error sources, as well as from increasing the number of BPM combinations used to derive the β-function at one location. We present how this technique can be applied at light sources like ALBA, and compare the results with other methods.
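As a rough illustration of the error-weighted combination at the heart of the N-BPM approach (a generic sketch in Python, not the actual LHC/ALBA implementation; all numerical values are assumed):

    import numpy as np

    # Combine several beta-function estimates obtained from different BPM
    # combinations, weighting with a full covariance matrix so that correlated
    # statistical and systematic errors are propagated. Values are illustrative.
    beta_estimates = np.array([25.3, 24.8, 25.6])   # estimates at one location [m]
    cov = np.array([[0.40, 0.10, 0.05],
                    [0.10, 0.30, 0.08],
                    [0.05, 0.08, 0.50]])            # assumed error covariance
    w = np.linalg.solve(cov, np.ones(3))
    w /= w.sum()                                    # best linear unbiased weights
    beta = w @ beta_estimates                       # combined estimate
    sigma = np.sqrt(w @ cov @ w)                    # its standard error
    print(f"beta = {beta:.2f} +/- {sigma:.2f} m")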
Time Request for the Finalization of a BACT Determination for a New Emissions Source
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Issuance of PSD Permit to Sources Impacting Dirty and Clean Air Areas
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Response to Request for Guidance in Defining Adjacent with Respect to Source Aggregation
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Clarification of the Use of Appendix I of the Clean Air Act Stationary Source Civil Penalty Policy
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Formation of a Federal Advisory Committee Act Subcommittee for New Source Review (NSR) Issues
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Guidance on the Appropriate Injunctive Relief for Violations of Major New Source Review Requirements
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Administrator's Decision on PSD Issue -- Review of New Source's Ability to Meet BACT
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
PSD Applicability Determination for Multiple Owner/Operator Point Sources Within a Single Facility
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Clarification of New Source Review Policy on Averaging Times For Production Limitations
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
When is a Source Required to Undergo Review For Both Offsets and PSD
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Issuance of Permit to Operate Several Air Pollution Sources Operated by AM General Corporation
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Should DuPont and DUSA International be Considered a Single Source for Title V and PSD
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
NASA Astrophysics Data System (ADS)
Nunez, F.; Romero, A.; Clua, J.; Mas, J.; Tomas, A.; Catalan, A.; Castellsaguer, J.
2005-08-01
MARES (Muscle Atrophy Research and Exercise System) is a computerized ergometer for neuromuscular research to be flown and installed onboard the International Space Station in 2007. The validity of the data acquired depends on controlling and reducing all significant error sources. One of them is the misalignment of the joint rotation axis with respect to the motor axis. The error induced in the measurements is proportional to the misalignment between the two axes; therefore, the restraint system's performance is critical [1]. The MARES HRS (Human Restraint System) assures alignment within an acceptable range while performing the exercise (results: elbow movement: 13.94 mm +/- 5.45, knee movement: 22.36 mm +/- 6.06) and reproducibility of human positioning (results: elbow movement: 2.82 mm +/- 1.56, knee movement: 7.45 mm +/- 4.8). These results allow the measurement errors induced by misalignment to be limited.
Determination of Earth orientation using the Global Positioning System
NASA Technical Reports Server (NTRS)
Freedman, A. P.
1989-01-01
Modern spacecraft tracking and navigation require highly accurate Earth-orientation parameters. For near-real-time applications, errors in these quantities and their extrapolated values are a significant error source. A globally distributed network of high-precision receivers observing the full Global Positioning System (GPS) configuration of 18 or more satellites may be an efficient and economical method for the rapid determination of short-term variations in Earth orientation. A covariance analysis using the JPL Orbit Analysis and Simulation Software (OASIS) was performed to evaluate the errors associated with GPS measurements of Earth orientation. These GPS measurements appear to be highly competitive with those from other techniques and can potentially yield frequent and reliable centimeter-level Earth-orientation information while simultaneously allowing the oversubscribed Deep Space Network (DSN) antennas to be used more for direct project support.
Ghorbani, Mahdi; Salahshour, Fateme; Haghparast, Abbas; Knaup, Courtney
2014-01-01
Purpose: The aim of this study is to compare the dose in various soft tissues in brachytherapy with photon-emitting sources. Material and methods: 103Pd, 125I, 169Yb, and 192Ir brachytherapy sources were simulated with the MCNPX Monte Carlo code, and their dose rate constants and radial dose functions were compared with the published data. A spherical phantom with a 50 cm radius was simulated, and the dose at various radial distances was calculated in adipose tissue, breast tissue, 4-component soft tissue, brain (grey/white matter), muscle (skeletal), lung tissue, blood (whole), 9-component soft tissue, and water. The absolute dose and the relative dose difference with respect to 9-component soft tissue were obtained for the various materials, sources, and distances. Results: There was good agreement between the dosimetric parameters of the sources and the published data. Adipose tissue, breast tissue, 4-component soft tissue, and water showed the greatest difference in dose relative to the dose to 9-component soft tissue. The other soft tissues showed lower dose differences. The dose difference was also higher for the 103Pd source than for the 125I, 169Yb, and 192Ir sources. Furthermore, greater distances from the source showed higher relative dose differences, an effect explained by the change in the photon spectrum (softening or hardening) as photons traverse the phantom material. Conclusions: Ignoring soft tissue characteristics (density, composition, etc.) in treatment planning systems introduces a significant error in dose delivery to the patient in brachytherapy with photon sources. The error depends on the type of soft tissue, the brachytherapy source, and the distance from the source. PMID:24790623
Groundwater Pollution Source Identification using Linked ANN-Optimization Model
NASA Astrophysics Data System (ADS)
Ayaz, Md; Srivastava, Rajesh; Jain, Ashu
2014-05-01
Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation, and complete knowledge of the source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult under real field conditions, when the lag time between the first reading at an observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-Optimization model are the source location and the release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time: different combinations of source locations and release periods are used as inputs, and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random errors (error level 0-10%) to the analytically computed concentration values. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and is capable of predicting source parameters when the lag time is not known. Linking the ANN model with the proposed optimization model reduces the dimensionality of the decision variables of the optimization model by one and hence reduces its complexity. The results show that the proposed linked ANN-Optimization model is able to predict the source parameters accurately for error-free data. The proposed model was run several times to obtain the mean, standard deviation and interval estimate of the predicted parameters for observations with random measurement errors. The mean values predicted by the model were quite close to the exact values, while the standard deviation of the predicted values increased with the level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.
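A minimal sketch of the optimization half of such a linked model, with a toy one-dimensional advection-dispersion solution standing in for the groundwater simulator and the ANN lag-time step omitted (all names and parameter values here are illustrative assumptions):

    import numpy as np
    from scipy.optimize import differential_evolution

    # Toy 1-D advection-dispersion solution for an instantaneous point release.
    def concentration(x, t, x0, t0, mass, v=1.0, D=0.5):
        tau = np.maximum(t - t0, 1e-9)              # time since release
        return (mass / np.sqrt(4 * np.pi * D * tau)
                * np.exp(-((x - x0) - v * tau) ** 2 / (4 * D * tau)))

    # Synthetic "observed" breakthrough curve at one well (true source hidden).
    x_well = 50.0
    times = np.linspace(1.0, 80.0, 60)
    observed = concentration(x_well, times, x0=10.0, t0=5.0, mass=100.0)

    # Objective: squared misfit between observed and simulated concentrations.
    def misfit(params):
        x0, t0, mass = params
        return np.sum((observed - concentration(x_well, times, x0, t0, mass)) ** 2)

    result = differential_evolution(
        misfit, bounds=[(0.0, 40.0), (0.0, 20.0), (10.0, 500.0)], seed=1)
    print(result.x)  # should approach the true (10.0, 5.0, 100.0)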
Catastrophic photometric redshift errors: Weak-lensing survey requirements
Bernstein, Gary; Huterer, Dragan
2010-01-11
We study the sensitivity of weak lensing surveys to the effects of catastrophic redshift errors - cases where the true redshift is misestimated by a significant amount. To compute the biases in cosmological parameters, we adopt an efficient linearized analysis where the redshift errors are directly related to shifts in the weak lensing convergence power spectra. We estimate the number N_spec of unbiased spectroscopic redshifts needed to determine the catastrophic error rate well enough that biases in cosmological parameters are below statistical errors of weak lensing tomography. While the straightforward estimate of N_spec is ~10^6, we find that using only the photometric redshifts with z ≤ 2.5 leads to a drastic reduction in N_spec to ~30,000 while negligibly increasing statistical errors in dark energy parameters. Therefore, the size of the spectroscopic survey needed to control catastrophic errors is similar to that previously deemed necessary to constrain the core of the z_s - z_p distribution. We also study the efficacy of the recent proposal to measure redshift errors by cross-correlation between the photo-z and spectroscopic samples. We find that this method requires ~10% a priori knowledge of the bias and stochasticity of the outlier population, and is also easily confounded by lensing magnification bias. The cross-correlation method is therefore unlikely to supplant the need for a complete spectroscopic redshift survey of the source population.
The influence of phonological context on the sound errors of a speaker with Wernicke's aphasia.
Goldmann, R E; Schwartz, M F; Wilshire, C E
2001-09-01
A corpus of phonological errors produced in narrative speech by a Wernicke's aphasic speaker (R.W.B.) was tested for context effects using two new methods for establishing chance baselines. A reliable anticipatory effect was found using the second method, which estimated chance from the distance between phoneme repeats in the speech sample containing the errors. Relative to this baseline, error-source distances were shorter than expected for anticipations, but not perseverations. R.W.B.'s anticipation/perseveration ratio measured intermediate between a nonaphasic error corpus and that of a more severe aphasic speaker (both reported in Schwartz et al., 1994), supporting the view that the anticipatory bias correlates with severity. Finally, R.W.B.'s anticipations favored word-initial segments, although errors and sources did not consistently share word or syllable position. Copyright 2001 Academic Press.
The Sources of Error in Spanish Writing.
ERIC Educational Resources Information Center
Justicia, Fernando; Defior, Sylvia; Pelegrina, Santiago; Martos, Francisco J.
1999-01-01
Determines the pattern of errors in Spanish spelling. Analyzes and proposes a classification system for the errors made by children in the initial stages of the acquisition of spelling skills. Finds that the diverse forms of only 20 Spanish words produce 36% of the spelling errors in Spanish, and that substitution is the most frequent type of error. (RS)
Geometric error analysis for shuttle imaging spectrometer experiment
NASA Technical Reports Server (NTRS)
Wang, S. J.; Ih, C. H.
1984-01-01
The demand for more powerful tools for remote sensing and management of earth resources has steadily increased over the last decade. With the recent advancement of area array detectors, high-resolution multichannel imaging spectrometers can be realistically constructed. The error analysis study for the Shuttle Imaging Spectrometer Experiment system is documented here to provide information for design, tradeoff, and performance prediction. Error sources including the Shuttle attitude determination and control system, instrument pointing and misalignment, disturbances, ephemeris, Earth rotation, etc., were investigated. Geometric error mapping functions were developed, characterized, and illustrated extensively with tables and charts. Selected ground patterns and the corresponding image distortions were generated for direct visual inspection of how the various error sources affect the appearance of the ground object images.
Mogull, Scott A
2017-01-01
Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or "facts," are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree that the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly, 64.8% (56.1% to 73.5% at a 95% confidence interval), major errors or cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, which are an oversimplification, overgeneralization, or trivial inaccuracies, are 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval).
The Pearson-Readhead Survey of Compact Extragalactic Radio Sources from Space. I. The Images
NASA Astrophysics Data System (ADS)
Lister, M. L.; Tingay, S. J.; Murphy, D. W.; Piner, B. G.; Jones, D. L.; Preston, R. A.
2001-06-01
We present images from a space-VLBI survey using the facilities of the VLBI Space Observatory Programme (VSOP), drawing our sample from the well-studied Pearson-Readhead survey of extragalactic radio sources. Our survey has taken advantage of long space-VLBI baselines and large arrays of ground antennas, such as the Very Long Baseline Array and European VLBI Network, to obtain high-resolution images of 27 active galactic nuclei and to measure the core brightness temperatures of these sources more accurately than is possible from the ground. A detailed analysis of the source properties is given in accompanying papers. We have also performed an extensive series of simulations to investigate the errors in VSOP images caused by the relatively large holes in the (u,v)-plane when sources are observed near the orbit normal direction. We find that while the nominal dynamic range (defined as the ratio of map peak to off-source error) often exceeds 1000:1, the true dynamic range (map peak to on-source error) is only about 30:1 for relatively complex core-jet sources. For sources dominated by a strong point source, this value rises to approximately 100:1. We find the true dynamic range to be a relatively weak function of the difference in position angle (P.A.) between the jet P.A. and u-v coverage major axis P.A. For regions with low signal-to-noise ratios, typically located down the jet away from the core, large errors can occur, causing spurious features in VSOP images that should be interpreted with caution.
Trial-to-trial adaptation in control of arm reaching and standing posture.
Pienciak-Siewert, Alison; Horan, Dylan P; Ahmed, Alaa A
2016-12-01
Classical theories of motor learning hypothesize that adaptation is driven by sensorimotor error; this is supported by studies of arm and eye movements that have shown that trial-to-trial adaptation increases with error. Studies of postural control have shown that anticipatory postural adjustments increase with the magnitude of a perturbation. However, differences in adaptation have been observed between the two modalities, possibly due to either the inherent instability or sensory uncertainty in standing posture. Therefore, we hypothesized that trial-to-trial adaptation in posture should be driven by error, similar to what is observed in arm reaching, but the nature of the relationship between error and adaptation may differ. Here we investigated trial-to-trial adaptation of arm reaching and postural control concurrently; subjects made reaching movements in a novel dynamic environment of varying strengths, while standing and holding the handle of a force-generating robotic arm. We found that error and adaptation increased with perturbation strength in both arm and posture. Furthermore, in both modalities, adaptation showed a significant correlation with error magnitude. Our results indicate that adaptation scales proportionally with error in the arm and near proportionally in posture. In posture only, adaptation was not sensitive to small error sizes, which were similar in size to errors experienced in unperturbed baseline movements due to inherent variability. This finding may be explained as an effect of uncertainty about the source of small errors. Our findings suggest that in rehabilitation, postural error size should be considered relative to the magnitude of inherent movement variability. Copyright © 2016 the American Physiological Society.
NASA Astrophysics Data System (ADS)
Srivastava, R.; Ayaz, M.; Jain, A.
2013-12-01
Knowledge of the release history of a groundwater pollutant source is critical in predicting the future trend of the pollutant movement and in choosing an effective remediation strategy. Moreover, for source sites which have undergone an ownership change, the estimated release history can be utilized for appropriate allocation of the costs of remediation among the different parties who may be responsible for the contamination. Estimation of the release history with the help of concentration data is an inverse problem that becomes ill-posed because of the irreversible nature of the dispersion process. Breakthrough curves represent the temporal variation of pollutant concentration at a particular location, and contain significant information about the source and the release history. Several methodologies have been developed to solve the inverse problem of estimating the source and/or porous medium properties using the breakthrough curves as a known input. A common problem in the use of breakthrough curves for this purpose is that, in most field situations, we have little or no information about the time of measurement of the breakthrough curve with respect to the time when the pollutant source becomes active. We develop an Artificial Neural Network (ANN) model to estimate the release history of a groundwater pollutant source through the use of breakthrough curves. It is assumed that the source location is known but that the time-dependent contaminant source strength is unknown. This temporal variation of the strength of the pollutant source is the output of the ANN model, which is trained using the Levenberg-Marquardt algorithm with synthetically generated breakthrough curves as inputs. A single hidden layer was used in the neural network and, to utilize just sufficient information and reduce the required sampling duration, only the upper half of the curve is used as the input pattern. The second objective of this work was to identify the aquifer parameters: an ANN model was developed to estimate the longitudinal and transverse dispersion coefficients following a philosophy similar to the one used earlier. Performance of the trained ANN model is evaluated for a three-dimensional case, first with perfect data and then with erroneous data with error levels up to 10 percent. Since the solution is highly sensitive to errors in the input data, instead of using the raw data we smooth the upper half of the erroneous breakthrough curve by approximating it with a fourth-order polynomial, which is then used as the input pattern for the ANN model. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and, in addition to minimizing the effect of uncertainties in the tail ends of the breakthrough curve, is capable of estimating both the release history and the aquifer parameters reasonably well. Results for erroneous data with different error levels demonstrate the practical applicability and robustness of the ANN models. It is observed that as the error level increases, the correlation coefficient of the training, testing and validation regressions tends to decrease, although the value stays within acceptable limits even for reasonably large error levels.
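The smoothing step described above might look as follows (a sketch on synthetic data; the fourth-order polynomial follows the text, while the curve itself and the noise level are assumed):

    import numpy as np

    # Hypothetical noisy breakthrough curve (concentration vs. time at one well).
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 50)
    c = np.exp(-(t - 4.0) ** 2) + rng.normal(0.0, 0.02, t.size)

    # Keep only the rising half up to the peak, then smooth it with a
    # fourth-order polynomial before using it as the ANN input pattern.
    peak = np.argmax(c)
    coeffs = np.polyfit(t[:peak + 1], c[:peak + 1], deg=4)
    smoothed = np.polyval(coeffs, t[:peak + 1])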
Statistics of the epoch of reionization 21-cm signal - I. Power spectrum error-covariance
NASA Astrophysics Data System (ADS)
Mondal, Rajesh; Bharadwaj, Somnath; Majumdar, Suman
2016-02-01
The non-Gaussian nature of the epoch of reionization (EoR) 21-cm signal has a significant impact on the error variance of its power spectrum P(k). We have used a large ensemble of seminumerical simulations and an analytical model to estimate the effect of this non-Gaussianity on the entire error-covariance matrix C_ij. Our analytical model shows that C_ij has contributions from two sources. One is the usual variance for a Gaussian random field, which scales inversely with the number of modes that go into the estimation of P(k). The other is the trispectrum of the signal. Using the simulated 21-cm Signal Ensemble, an ensemble of the Randomized Signal, and Ensembles of Gaussian Random Ensembles, we have quantified the effect of the trispectrum on the error variance C_ii. We find that its relative contribution is comparable to or larger than that of the Gaussian term for the k range 0.3 ≤ k ≤ 1.0 Mpc^-1, and can be even ~200 times larger at k ~ 5 Mpc^-1. We also establish that the off-diagonal terms of C_ij have statistically significant non-zero values which arise purely from the trispectrum. This further signifies that the errors in different k modes are not independent. We find a strong correlation between the errors at large k values (≥0.5 Mpc^-1), and a weak correlation between the smallest and largest k values. There is also a small anticorrelation between the errors in the smallest and intermediate k values. These results are relevant for the k range that will be probed by the current and upcoming EoR 21-cm experiments.
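Schematically (our notation, a sketch of the structure described above rather than the paper's exact expression), the diagonal error variance decomposes as

    C_{ii} \;\approx\; \frac{P^2(k_i)}{N_{k_i}} \;+\; \frac{\bar{T}(k_i)}{V}

where N_{k_i} counts the independent Fourier modes in the i-th bin, V is the simulation (survey) volume, and \bar{T} is the bin-averaged trispectrum; per the abstract, the off-diagonal C_ij arise purely from the trispectrum term.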
Systematic Errors in an Air Track Experiment.
ERIC Educational Resources Information Center
Ramirez, Santos A.; Ham, Joe S.
1990-01-01
Errors found in a common physics experiment to measure acceleration resulting from gravity using a linear air track are investigated. Glider position at release and initial velocity are shown to be sources of systematic error. (CW)
When Does Air Resistance Become Significant in Projectile Motion?
NASA Astrophysics Data System (ADS)
Mohazzabi, Pirooz
2018-03-01
In an article in this journal, it was shown that air resistance could never be a significant source of error in typical free-fall experiments in introductory physics laboratories. Since projectile motion is the two-dimensional version of the free-fall experiment and usually follows the former experiment in such laboratories, it seemed natural to extend the same analysis to this type of motion. We shall find that again air resistance does not play a significant role in the parameters of interest in a traditional projectile motion experiment.
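A quick numerical check of the title question, integrating quadratic air drag for a thrown-ball-like projectile (a sketch; the drag coefficient and mass are assumed values, not taken from the article):

    import numpy as np

    # Integrate projectile motion with quadratic drag and compare the range
    # to the vacuum result. Parameters roughly describe a baseball (assumed).
    g, c, m = 9.81, 0.01, 0.145        # gravity [m/s^2], drag [kg/m], mass [kg]
    v0, theta = 20.0, np.radians(45.0)
    vx, vy = v0 * np.cos(theta), v0 * np.sin(theta)
    x, y, dt = 0.0, 0.0, 1e-4
    while y >= 0.0:
        v = np.hypot(vx, vy)
        ax, ay = -(c / m) * v * vx, -g - (c / m) * v * vy
        vx, vy = vx + ax * dt, vy + ay * dt
        x, y = x + vx * dt, y + vy * dt
    print(f"range with drag ~ {x:.1f} m vs vacuum {v0**2 * np.sin(2*theta) / g:.1f} m")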
Altitude errors arising from antenna/satellite attitude errors - Recognition and reduction
NASA Technical Reports Server (NTRS)
Godbey, T. W.; Lambert, R.; Milano, G.
1972-01-01
A review is presented of the three basic types of pulsed radar altimeter designs, as well as the source and form of altitude bias errors arising from antenna/satellite attitude errors in each design type. A quantitative comparison of the three systems was also made.
Sensitivity of Magnetospheric Multi-Scale (MMS) Mission Navigation Accuracy to Major Error Sources
NASA Technical Reports Server (NTRS)
Olson, Corwin; Long, Anne; Carpenter, J. Russell
2011-01-01
The Magnetospheric Multiscale (MMS) mission consists of four satellites flying in formation in highly elliptical orbits about the Earth, with a primary objective of studying magnetic reconnection. The baseline navigation concept is independent estimation of each spacecraft state using GPS pseudorange measurements referenced to an Ultra Stable Oscillator (USO) with accelerometer measurements included during maneuvers. MMS state estimation is performed onboard each spacecraft using the Goddard Enhanced Onboard Navigation System (GEONS), which is embedded in the Navigator GPS receiver. This paper describes the sensitivity of MMS navigation performance to two major error sources: USO clock errors and thrust acceleration knowledge errors.
NASA Astrophysics Data System (ADS)
Hovatta, Talvikki; Lister, Matthew L.; Aller, Margo F.; Aller, Hugh D.; Homan, Daniel C.; Kovalev, Yuri Y.; Pushkarev, Alexander B.; Savolainen, Tuomas
2012-10-01
We report observations of Faraday rotation measures for a sample of 191 extragalactic radio jets observed within the MOJAVE program. Multifrequency Very Long Baseline Array observations were carried out over 12 epochs in 2006 at four frequencies between 8 and 15 GHz. We detect parsec-scale Faraday rotation measures in 149 sources and find the quasars to have larger rotation measures on average than BL Lac objects. The median core rotation measures are significantly higher than in the jet components. This is especially true for quasars where we detect a significant negative correlation between the magnitude of the rotation measure and the de-projected distance from the core. We perform detailed simulations of the observational errors of total intensity, polarization, and Faraday rotation, and concentrate on the errors of transverse Faraday rotation measure gradients in unresolved jets. Our simulations show that the finite image restoring beam size has a significant effect on the observed rotation measure gradients, and spurious gradients can occur due to noise in the data if the jet is less than two beams wide in polarization. We detect significant transverse rotation measure gradients in four sources (0923+392, 1226+023, 2230+114, and 2251+158). In 1226+023 the rotation measure is for the first time seen to change sign from positive to negative over the transverse cuts, which supports the presence of a helical magnetic field in the jet. In this source we also detect variations in the jet rotation measure over a timescale of three months, which are difficult to explain with external Faraday screens and suggest internal Faraday rotation. By comparing fractional polarization changes in jet components between the four frequency bands to depolarization models, we find that an external purely random Faraday screen viewed through only a few lines of sight can explain most of our polarization observations, but in some sources, such as 1226+023 and 2251+158, internal Faraday rotation is needed.
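For reference, a rotation measure at a single location follows from a linear fit of polarization angle against wavelength squared, chi = chi0 + RM*lambda^2; a minimal sketch in which the frequencies echo the 8-15 GHz setup but the angle values are invented:

    import numpy as np

    # Fit chi = chi0 + RM * lambda^2 across four frequencies.
    c_light = 2.998e8
    freqs = np.array([8.1e9, 8.4e9, 12.1e9, 15.4e9])   # Hz (illustrative)
    lam2 = (c_light / freqs) ** 2
    chi = np.radians([35.0, 33.1, 25.9, 23.4])          # assumed unwrapped angles
    A = np.vstack([np.ones_like(lam2), lam2]).T
    (chi0, rm), *_ = np.linalg.lstsq(A, chi, rcond=None)
    print(f"RM ~ {rm:.0f} rad/m^2")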
Goede, Simon L; Leow, Melvin Khee-Shing
2013-01-01
This treatise investigates error sources in measurements applicable to the hypothalamus-pituitary-thyroid (HPT) system of analysis for homeostatic set point computation. The hypothalamus-pituitary transfer characteristic (HP curve) describes the relationship between plasma free thyroxine [FT4] and thyrotropin [TSH]. We define the origin, types, causes, and effects of errors that are commonly encountered in thyroid function test (TFT) measurements and examine how we can interpret these to construct a reliable HP function for set point establishment. The error sources in the clinical measurement procedures are identified and analyzed in relation to the constructed HP model. The main sources of measurement and interpretation uncertainties are (1) diurnal variations in [TSH], (2) TFT measurement variations influenced by the timing of thyroid medications, (3) error sensitivity in ranges of [TSH] and [FT4] (laboratory assay dependent), (4) rounding/truncation of decimals in [FT4], which in turn amplifies curve-fitting errors in the [TSH] domain in the lower [FT4] range, and (5) memory effects (rate-independent hysteresis effect). When the main uncertainties in TFTs are identified and analyzed, we can find the most acceptable model space with which to construct the best HP function and the related set point area.
NASA Astrophysics Data System (ADS)
Lee, Eunji; Park, Sang-Young; Shin, Bumjoon; Cho, Sungki; Choi, Eun-Jung; Jo, Junghyun; Park, Jang-Hyun
2017-03-01
The optical wide-field patrol network (OWL-Net) is a Korean optical surveillance system that tracks and monitors domestic satellites. In this study, a batch least squares algorithm was developed for optical measurements and verified by Monte Carlo simulation and covariance analysis. Potential error sources of OWL-Net, such as noise, bias, and clock errors, were analyzed. There is a linear relation between the estimation accuracy and the noise level, and the accuracy significantly depends on the declination bias. In addition, the time-tagging error significantly degrades the observation accuracy, while the time-synchronization offset corresponds to the orbital motion. The Cartesian state vector and measurement bias were determined using the OWL-Net tracking data of the KOMPSAT-1 and Cryosat-2 satellites. The comparison with known orbital information based on two-line elements (TLE) and the consolidated prediction format (CPF) shows that the orbit determination accuracy is similar to that of TLE. Furthermore, the precision and accuracy of OWL-Net observation data were determined to be tens of arcsec and sub-degree level, respectively.
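The core of a batch least-squares estimator of this kind is a weighted normal-equation update; a generic sketch (not the OWL-Net code; the partials H, weights W, and residuals are assumed to come from some measurement model):

    import numpy as np

    def batch_update(x0, H, residuals, W):
        """One batch least-squares correction of an a priori state x0.

        H: measurement partials (m x n); residuals: observed minus computed (m,);
        W: measurement weight matrix (m x m), e.g. inverse noise covariance.
        """
        N = H.T @ W @ H                           # normal matrix
        dx = np.linalg.solve(N, H.T @ W @ residuals)
        return x0 + dx, np.linalg.inv(N)          # updated state, formal covariance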
Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G
NASA Astrophysics Data System (ADS)
DeSalvo, Riccardo
2015-06-01
Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.
Sources of variation in Landsat autocorrelation
NASA Technical Reports Server (NTRS)
Craig, R. G.; Labovitz, M. L.
1980-01-01
Analysis of sixty-four scan lines representing diverse conditions across satellites, channels, scanners, locations and cloud cover confirms that Landsat data are autocorrelated and consistently follow an ARIMA(1,0,1) pattern. The AR parameter varies significantly with location and the MA coefficient with cloud cover. Maximum likelihood classification functions are considerably in error unless this autocorrelation is compensated for in sampling.
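The reported structure is easy to reproduce on synthetic data with a standard time-series package; a sketch with assumed AR and MA coefficients:

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Simulate an ARMA(1,1) scan line (i.e., ARIMA(1,0,1)) and refit it.
    rng = np.random.default_rng(0)
    e = rng.normal(size=500)
    x = np.zeros(500)
    for i in range(1, 500):
        x[i] = 0.8 * x[i - 1] + e[i] + 0.3 * e[i - 1]   # AR = 0.8, MA = 0.3 (assumed)
    fit = ARIMA(x, order=(1, 0, 1)).fit()
    print(fit.params)                                    # recovered coefficients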
Evaluation of The Operational Benefits Versus Costs of An Automated Cargo Mover
2016-12-01
Logistics footprint and life-cycle cost analyses are presented as part of this report. Analysis of modeling and simulation results identified statistically significant differences. [Figure captions recovered from the garbled remainder of the abstract: "Error of Estimation. Source: Eskew and Lawler (1994)"; "Load Results (100 Runs per Scenario)".]
Toward the ICRF3: Astrometric Comparison of the USNO 2016A VLBI Solution with ICRF2 and Gaia DR1
NASA Astrophysics Data System (ADS)
Frouard, Julien; Johnson, Megan C.; Fey, Alan; Makarov, Valeri V.; Dorland, Bryan N.
2018-06-01
The VLBI USNO 2016A (U16A) solution is part of a work-in-progress effort by USNO toward the preparation of the ICRF3. Most of the astrometric improvement with respect to the ICRF2 is due to the re-observation of the VCS sources. Our objective in this paper is to assess U16A’s astrometry. A comparison with ICRF2 shows statistically significant offsets of size 0.1 mas between the two solutions. While Gaia DR1 positions are not precise enough to resolve these offsets, they are found to be significantly closer to U16A than ICRF2. In particular, the trend for typically larger errors for southern sources in VLBI solutions is decreased in U16A. Overall, the VLBI-Gaia offsets are reduced by 21%. The U16A list includes 718 sources not previously included in ICRF2. Twenty of those new sources have statistically significant radio-optical offsets. In two-thirds of the cases, these offsets can be explained from PanSTARRS images.
Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H; Lewis, Marc S; Brautigam, Chad A; Schuck, Peter; Zhao, Huaying
2013-09-01
Sedimentation velocity (SV) is a method based on first principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton temperature logger to directly measure the temperature of a spinning rotor and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration that were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., Anal. Biochem., 437 (2013) 104-108), and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from 11 instruments displayed a significantly reduced standard deviation of approximately 0.7%. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. Published by Elsevier Inc.
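Schematically, such external calibrations act as corrections to the measured sedimentation coefficient; a purely illustrative sketch in which both the simple multiplicative form and the factor values are assumptions, not instrument constants:

    # Apply illustrative external calibration factors to a measured s-value.
    s_measured = 4.100        # Svedberg, from the SV fit
    f_radial = 1.012          # precision-mask radial calibration (assumed)
    f_time = 0.998            # independent time calibration (assumed)
    f_temperature = 1.005     # rotor-temperature (viscosity) correction (assumed)
    s_corrected = s_measured * f_radial * f_time * f_temperature
    print(f"corrected s = {s_corrected:.3f} S")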
C-band radar pulse Doppler error: Its discovery, modeling, and elimination
NASA Technical Reports Server (NTRS)
Krabill, W. B.; Dempsey, D. J.
1978-01-01
The discovery of a C-band radar pulse Doppler error is discussed, and the use of the GEOS 3 satellite's coherent transponder to isolate the error source is described. An analysis of the pulse Doppler tracking loop is presented, and a mathematical model for the error was developed. Error correction techniques were developed and are described, including implementation details.
Liu, Mao Tong; Lim, Han Chuen
2014-09-22
When implementing O-band quantum key distribution on optical fiber transmission lines carrying C-band data traffic, noise photons that arise from spontaneous Raman scattering or insufficient filtering of the classical data channels could cause the quantum bit-error rate to exceed the security threshold. In this case, a photon heralding scheme may be used to reject the uncorrelated noise photons in order to restore the quantum bit-error rate to a low level. However, the secure key rate would suffer unless one uses a heralded photon source with sufficiently high heralding rate and heralding efficiency. In this work we demonstrate a heralded photon source that has a heralding efficiency that is as high as 74.5%. One disadvantage of a typical heralded photon source is that the long deadtime of the heralding detector results in a significant drop in the heralding rate. To counter this problem, we propose a passively spatial-multiplexed configuration at the heralding arm. Using two heralding detectors in this configuration, we obtain an increase in the heralding rate by 37% and a corresponding increase in the heralded photon detection rate by 16%. We transmit the O-band photons over 10 km of noisy optical fiber to observe the relation between quantum bit-error rate and noise-degraded second-order correlation function of the transmitted photons. The effects of afterpulsing when we shorten the deadtime of the heralding detectors are also observed and discussed.
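The benefit of passive spatial multiplexing against detector deadtime can be illustrated with a toy simulation (the pair rate and deadtime below are assumed, not the experiment's values):

    import numpy as np

    rng = np.random.default_rng(0)

    # Poissonian idler arrivals; each detector is blind for `deadtime` after a click.
    rate, deadtime, T = 1e5, 1e-5, 1.0      # pairs/s, s, total duration in s
    arrivals = np.cumsum(rng.exponential(1.0 / rate, int(rate * T)))

    def clicks(events, deadtime):
        n, last = 0, -np.inf
        for t in events:
            if t - last >= deadtime:
                n, last = n + 1, t
        return n

    one = clicks(arrivals, deadtime)                 # single heralding detector
    split = rng.random(arrivals.size) < 0.5          # 50/50 beamsplitter
    two = clicks(arrivals[split], deadtime) + clicks(arrivals[~split], deadtime)
    print(f"heralds: one detector {one}, two multiplexed {two}")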
Prescription errors in the National Health Services, time to change practice.
Hamid, Tahir; Harper, Luke; Rose, Samman; Petkar, Sanjive; Fienman, Richard; Athar, Syed M; Cushley, Michael
2016-02-01
Medication error is a major source of iatrogenic illness, and error in prescription is the most common form of avoidable medication error. We present our study, performed at two UK National Health Service hospitals. The prescription practice of junior doctors working on general medical and surgical wards in a National Health Service District General Hospital and a University Teaching Hospital in the UK was reviewed. Practice was assessed against standard hospital prescription charts, developed in accordance with local pharmacy guidance. A total of 407 prescription charts were reviewed in both the initial audit and the re-audit one year later. In the District General Hospital, documentation of allergy, weight and capital-letter prescription was achieved in 31, 5 and 40% of charts, respectively. Forty-nine per cent of discontinued prescriptions were properly deleted and signed for. In the re-audit, significant improvement was noted in documentation of the patient's name (100%), gender (54%), allergy status (51%) and use of generic drug names (71%). Similarly, in the University Teaching Hospital, 82, 63 and 65% compliance was achieved in documentation of age, generic drug name prescription and capital-letter prescription, respectively. Prescription practice was reassessed one year later, after recommendations and changes to prescription practice, leading to significant improvement in documentation of unit number, generic drug name prescription, insulin prescription and documentation of the patient's ward. Prescription error remains an important, modifiable form of medical error, which may be rectified by introducing multidisciplinary assessment of practice, nationwide standardised prescription charts and revision of current prescribing clinical training. © The Author(s) 2016.
Adaptive Sparse Representation for Source Localization with Gain/Phase Errors
Sun, Ke; Liu, Yimin; Meng, Huadong; Wang, Xiqin
2011-01-01
Sparse representation (SR) algorithms can be implemented for high-resolution direction of arrival (DOA) estimation. Additionally, SR can effectively separate coherent signal sources because the spectrum estimation is based on an optimization technique, such as L1-norm minimization, rather than on subspace orthogonality. However, in an actual source localization scenario, an unknown gain/phase error between the array sensors is inevitable. Due to this nonideal factor, the predefined overcomplete basis mismatches the actual array manifold, so the SR estimation performance is degraded. In this paper, an adaptive SR algorithm is proposed to improve robustness with respect to the gain/phase error: the overcomplete basis is dynamically adjusted using multiple snapshots, and the sparse solution is adaptively acquired to match the actual scenario. The simulation results demonstrate the estimation robustness of the proposed method to the gain/phase error. PMID:22163875
NASA Astrophysics Data System (ADS)
Chen, Shanyong; Li, Shengyi; Wang, Guilin
2014-11-01
The wavefront error of large telescopes needs to be measured to check the system quality and to estimate the misalignment of the telescope optics, including the primary, the secondary and so on. The measurement is usually realized with a focal-plane interferometer and an autocollimator flat (ACF) of the same aperture as the telescope. However, this is challenging for meter-class telescopes due to the high cost and the technological difficulty of producing the large ACF. Subaperture testing with a smaller ACF is hence proposed, in combination with advanced stitching algorithms. Major error sources include the surface error of the ACF, misalignment of the ACF and measurement noise. Different error sources have different impacts on the wavefront error. Basically, the surface error of the ACF behaves like a systematic error, and the astigmatism will accumulate and grow if the azimuth of the subapertures remains fixed. It is difficult to accurately calibrate the ACF because it suffers considerable deformation induced by gravity or mechanical clamping force. Therefore, a self-calibrated stitching algorithm is employed to separate the ACF surface error from the subaperture wavefront error. We suggest the ACF be rotated around the optical axis of the telescope for the subaperture test. The algorithm is also able to correct the subaperture tip-tilt based on the overlapping consistency. Since all subaperture measurements are obtained in the same imaging plane, the lateral shift of the subapertures is always known and the real overlapping points can be recognized in this plane. Therefore, lateral positioning error of the subapertures has no impact on the stitched wavefront. In contrast, the angular positioning error changes the azimuth of the ACF and hence changes the systematic error. We propose an angularly uneven layout of subapertures to minimize the stitching error, which runs counter to common practice. Finally, measurement noise cannot be corrected, but it can be suppressed by means of averaging and environmental control. We simulate the performance of the stitching algorithm in dealing with the surface error and misalignment of the ACF and with noise suppression, which provides guidelines for the opto-mechanical design of the stitching test system.
NASA Astrophysics Data System (ADS)
Hansen, Scott K.; Vesselinov, Velimir V.
2016-10-01
We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
Prevention of Significant Deterioration Workshop Manual 1980
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Ambient Monitoring Guidelines for Prevention of Significant Deterioration
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Additional Guidance on Prevention of Significant Deterioration (PSD) Regulations
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Prevention of Significant Deterioration (PSD) Emission Thresholds for Fountain Foundry
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Significant Modification to LAER/PSD Application: Ocean Peaking Power, L.P.
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Guidance on Extension of Prevention of Significant Deterioration (PSD) Permits
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Interim Policy Determination Related to NSR/PSD Significance Level for ODS
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Class I Area Significant Impact Levels
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
NASA Technical Reports Server (NTRS)
Howlett, J. T.
1979-01-01
The partial coherence analysis method for noise source/path determination is summarized, and its application to a two-input, single-output system with coherence between the inputs is illustrated. The implementation of the calculations on a digital computer interfaced with a two-channel, real-time analyzer is also discussed. The results indicate possible sources of error in the computations and suggest procedures for avoiding these errors.
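As a hedged, modern illustration of the two-input, single-output analysis described above (not the 1979 implementation), the sketch below computes the ordinary and partial coherence between one input and the output, conditioning out the second, correlated input. The signal model and scipy's csd conjugation conventions are assumptions.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
fs, n = 1024, 2**16
s = rng.standard_normal(n)                     # common source -> coherent inputs
x1 = s + 0.5 * rng.standard_normal(n)
x2 = 0.8 * s + 0.5 * rng.standard_normal(n)
y = 0.7 * x1 + 0.3 * x2 + 0.5 * rng.standard_normal(n)

def S(a, b):
    """Cross-spectral density estimate via Welch's method."""
    _, Pab = signal.csd(a, b, fs=fs, nperseg=1024)
    return Pab

S11, S22, Syy = S(x1, x1), S(x2, x2), S(y, y)
S1y, S2y, S12 = S(x1, y), S(x2, y), S(x1, x2)

# Ordinary coherence of input 1 with the output
coh_1y = (np.abs(S1y) ** 2 / (S11 * Syy)).real

# Partial coherence of input 1 with the output, conditioned on input 2:
# remove the part of x1 and y that is linearly predictable from x2.
S11_2 = S11 - np.abs(S12) ** 2 / S22
Syy_2 = Syy - np.abs(S2y) ** 2 / S22
S1y_2 = S1y - S12 * S2y / S22
pcoh_1y = np.abs(S1y_2) ** 2 / (S11_2 * Syy_2).real

print("mean ordinary coherence:", coh_1y.mean())
print("mean partial coherence: ", pcoh_1y.mean())
```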
Simulating a transmon implementation of the surface code, Part I
NASA Astrophysics Data System (ADS)
Tarasinski, Brian; O'Brien, Thomas; Rol, Adriaan; Bultink, Niels; Dicarlo, Leo
Current experimental efforts aim to realize Surface-17, a distance-3 surface-code logical qubit, using transmon qubits in a circuit QED architecture. Following experimental proposals for this device, and currently achieved fidelities on physical qubits, we define a detailed error model that takes experimentally relevant error sources into account, such as amplitude and phase damping, imperfect gate pulses, and coherent errors due to low-frequency flux noise. Using the GPU-accelerated software package 'quantumsim', we simulate the density matrix evolution of the logical qubit under this error model. Combining the simulation results with a minimum-weight matching decoder, we obtain predictions for the error rate of the resulting logical qubit when used as a quantum memory, and estimate the contribution of different error sources to the logical error budget. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.
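The density-matrix simulation in 'quantumsim' is beyond a short example, but the decoding half of the pipeline can be hinted at. The sketch below, assuming the open-source pymatching package is available, runs a minimum-weight matching decoder on a distance-3 repetition code under i.i.d. bit-flip noise — a drastically simplified stand-in for the surface-code error model described above.

```python
import numpy as np
import pymatching  # assumed available: pip install pymatching

rng = np.random.default_rng(3)

# Parity-check matrix of a distance-3 repetition code (2 stabilizers, 3 qubits)
H = np.array([[1, 1, 0],
              [0, 1, 1]])
matching = pymatching.Matching(H)

p, n_trials, failures = 0.05, 10_000, 0
for _ in range(n_trials):
    noise = (rng.random(3) < p).astype(int)     # i.i.d. bit-flip errors
    syndrome = H @ noise % 2
    correction = matching.decode(syndrome)      # minimum-weight matching
    residual = (noise + correction) % 2
    # After a valid correction the residual is either identity (000) or the
    # logical operator (111); any surviving flip is a logical failure.
    failures += int(residual.any())

print(f"logical error rate ~ {failures / n_trials:.4f} at physical p = {p}")
```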
Remember the source: dissociating frontal and parietal contributions to episodic memory.
Donaldson, David I; Wheeler, Mark E; Petersen, Steve E
2010-02-01
Event-related fMRI studies reveal that episodic memory retrieval modulates lateral and medial parietal cortices, dorsal middle frontal gyrus (MFG), and anterior PFC. These regions respond more for recognized old than correctly rejected new words, suggesting a neural correlate of retrieval success. Despite significant efforts examining retrieval success regions, their role in retrieval remains largely unknown. Here we asked to what degree these regions perform memory-specific operations, and whether they are all equally sensitive to successful retrieval or whether other factors such as error detection are also implicated. We investigated this question by testing whether activity in retrieval success regions was associated with task-specific contingencies (i.e., perceived targetness) or mnemonic relevance (e.g., retrieval of source context). To do this, we used a source memory task that required discrimination between remembered targets and remembered nontargets. For a given region, the modulation of neural activity by a situational factor such as target status would suggest a more domain-general role; similarly, modulations of activity linked to error detection would suggest a role in monitoring and control rather than the accumulation of evidence from memory per se. We found that parietal retrieval success regions exhibited greater activity for items receiving correct than incorrect source responses, whereas frontal retrieval success regions were most active on error trials, suggesting that posterior regions signal successful retrieval whereas frontal regions monitor retrieval outcome. In addition, perceived targetness failed to modulate fMRI activity in any retrieval success region, suggesting that these regions are retrieval specific. We discuss the different functions that these regions may support and propose an accumulator model that captures the different patterns of responses seen in frontal and parietal retrieval success regions.
Sorensen, James P R; Baker, Andy; Cumberland, Susan A; Lapworth, Dan J; MacDonald, Alan M; Pedley, Steve; Taylor, Richard G; Ward, Jade S T
2018-05-01
We assess the use of fluorescent dissolved organic matter at excitation-emission wavelengths of 280 nm and 360 nm, termed tryptophan-like fluorescence (TLF), as an indicator of faecally contaminated drinking water. A significant logistic regression model was developed using TLF as a predictor of thermotolerant coliforms (TTCs), using data from groundwater- and surface-water-derived drinking water sources in India, Malawi, South Africa and Zambia. A TLF threshold of 1.3 ppb dissolved tryptophan was selected to classify TTC contamination. Validation of the TLF threshold indicated a false-negative error rate of 15% and a false-positive error rate of 18%. The threshold was unsuccessful at classifying contaminated sources containing <10 TTC cfu per 100 mL, which we consider the current limit of detection. If only sources above this limit were classified, the false-negative error rate was very low at 4%. TLF intensity was very strongly correlated with TTC concentration (ρs = 0.80). A higher threshold of 6.9 ppb dissolved tryptophan is proposed to indicate heavily contaminated sources (≥100 TTC cfu per 100 mL). Current commercially available fluorimeters are easy to use, suitable for use online and in remote environments, require neither reagents nor consumables, and crucially provide an instantaneous reading. TLF measurements are not appreciably impaired by common interferents, such as pH, turbidity and temperature, within typical natural ranges. The technology is a viable option for the real-time screening of faecally contaminated drinking water globally. Copyright © 2017 Natural Environment Research Council (NERC), as represented by the British Geological Survey (BGS). Published by Elsevier B.V. All rights reserved.
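To illustrate how false-negative and false-positive rates follow from a fixed TLF threshold, here is a hedged sketch on synthetic data; the distributions and correlation are invented stand-ins for the field data, and only the 1.3 ppb threshold and the ~10 cfu detection limit come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic illustration: TLF (ppb) and TTC counts (cfu per 100 mL) are
# hypothetical, log-correlated draws, not the study's field data.
n = 500
log_ttc = rng.normal(1.0, 1.2, n)                  # log10 TTC
ttc = np.where(log_ttc > 0, 10 ** log_ttc, 0.0)
tlf = 0.6 + 0.9 * log_ttc + rng.normal(0, 0.6, n)  # TLF tracks log TTC

threshold_tlf, detection_limit = 1.3, 10           # ppb; cfu per 100 mL
contaminated = ttc > 0
flagged = tlf > threshold_tlf

false_neg = np.mean(~flagged[contaminated])
false_pos = np.mean(flagged[~contaminated])
print(f"false-negative rate: {false_neg:.0%}, false-positive rate: {false_pos:.0%}")

# Restricting to sources above the detection limit mirrors the paper's
# observation that misses concentrate below ~10 TTC cfu per 100 mL.
above = ttc >= detection_limit
print(f"false-negative rate above limit: {np.mean(~flagged[above]):.0%}")
```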
Aging of biogenic secondary organic aerosol via gas-phase OH radical reactions
Donahue, Neil M.; Henry, Kaytlin M.; Mentel, Thomas F.; Kiendler-Scharr, Astrid; Spindler, Christian; Bohn, Birger; Brauers, Theo; Dorn, Hans P.; Fuchs, Hendrik; Tillmann, Ralf; Wahner, Andreas; Saathoff, Harald; Naumann, Karl-Heinz; Möhler, Ottmar; Leisner, Thomas; Müller, Lars; Reinnig, Marc-Christopher; Hoffmann, Thorsten; Salo, Kent; Hallquist, Mattias; Frosch, Mia; Bilde, Merete; Tritscher, Torsten; Barmet, Peter; Praplan, Arnaud P.; DeCarlo, Peter F.; Dommen, Josef; Prévôt, Andre S.H.; Baltensperger, Urs
2012-01-01
The Multiple Chamber Aerosol Chemical Aging Study (MUCHACHAS) tested the hypothesis that hydroxyl radical (OH) aging significantly increases the concentration of first-generation biogenic secondary organic aerosol (SOA). OH is the dominant atmospheric oxidant, and MUCHACHAS employed environmental chambers of very different designs, using multiple OH sources to explore a range of chemical conditions and potential sources of systematic error. We isolated the effect of OH aging, confirming our hypothesis while observing corresponding changes in SOA properties. The mass increases are consistent with an existing gap between global SOA sources and those predicted in models, and can be described by a mechanism suitable for implementation in those models. PMID:22869714
Digital autonomous terminal access communications
NASA Technical Reports Server (NTRS)
Novacki, S.
1987-01-01
A significant problem for the Bus Monitor Unit is to identify the source of a given transmission. This problem arises from the fact that the label which identifies the source of the transmission as it is put onto the bus is intercepted by the Digital Autonomous Terminal Access Communications (DATAC) terminal and removed from the transmission. Thus, a given subsystem will see only the data associated with a label and never the identifying label itself. The Bus Monitor must identify the source of the transmission so that it can provide error identification and location in the event of a problem with the data transmission. Steps taken to alleviate this problem by modifications to the DATAC terminal are discussed.
Source Memory Errors Associated with Reports of Posttraumatic Flashbacks: A Proof of Concept Study
ERIC Educational Resources Information Center
Brewin, Chris R.; Huntley, Zoe; Whalley, Matthew G.
2012-01-01
Flashbacks are involuntary, emotion-laden images experienced by individuals with posttraumatic stress disorder (PTSD). The qualities of flashbacks could under certain circumstances lead to source memory errors. Participants with PTSD wrote a trauma narrative and reported the experience of flashbacks. They were later presented with stimuli from…
An Application of Multivariate Generalizability in Selection of Mathematically Gifted Students
ERIC Educational Resources Information Center
Kim, Sungyeun; Berebitsky, Dan
2016-01-01
This study investigates error sources and the effects of each error source to determine optimal weights of the composite score of teacher recommendation letters and self-introduction letters using multivariate generalizability theory. Data were collected from the science education institute for the gifted attached to the university located within…
Estimating Uncertainty in Annual Forest Inventory Estimates
Ronald E. McRoberts; Veronica C. Lessard
1999-01-01
The precision of annual forest inventory estimates may be negatively affected by uncertainty from a variety of sources including: (1) sampling error; (2) procedures for updating plots not measured in the current year; and (3) measurement errors. The impact of these sources of uncertainty on final inventory estimates is investigated using Monte Carlo simulation...
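A minimal sketch of the Monte Carlo approach named above: each simulated inventory draws the three error sources — sampling error, update-model error for plots not remeasured, and measurement error — and the spread of the resulting estimates quantifies their combined impact. All magnitudes are illustrative assumptions, not the authors' values.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical per-plot volume estimates (m^3/ha); the three uncertainty
# sources mirror the abstract: (1) sampling error, (2) update-model error
# for plots not measured in the current year, and (3) measurement error.
n_plots, n_sims = 400, 10_000
true_mean = 120.0

totals = np.empty(n_sims)
for i in range(n_sims):
    plots = rng.normal(true_mean, 35, n_plots)         # (1) sampling variability
    updated = rng.random(n_plots) < 0.8                # 80% carried by update model
    plots[updated] += rng.normal(0, 8, updated.sum())  # (2) update-model error
    plots += rng.normal(0, 3, n_plots)                 # (3) measurement error
    totals[i] = plots.mean()

print(f"mean estimate {totals.mean():.1f} m^3/ha, standard error {totals.std():.2f}")
```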
Kuikka, Liisa; Pitkälä, Kaisu
2014-01-01
Objective. To study coping differences between young and experienced GPs in primary care who experience medical errors and uncertainty. Design. Questionnaire-based survey (self-assessment) conducted in 2011. Setting. Finnish primary practice offices in Southern Finland. Subjects. Finnish GPs engaged in primary health care from two different respondent groups: young (working experience ≤ 5 years, n = 85) and experienced (working experience > 5 years, n = 80). Main outcome measures. Outcome measures included experiences and attitudes expressed by the included participants towards medical errors and tolerance of uncertainty, their coping strategies, and factors that may influence (positively or negatively) sources of errors. Results. In total, 165/244 GPs responded (response rate: 68%). Young GPs expressed significantly more often fear of committing a medical error (70.2% vs. 48.1%, p = 0.004) and admitted more often than experienced GPs that they had committed a medical error during the past year (83.5% vs. 68.8%, p = 0.026). Young GPs were less prone to apologize to a patient for an error (44.7% vs. 65.0%, p = 0.009) and found, more often than their more experienced colleagues, on-site consultations and electronic databases useful for avoiding mistakes. Conclusion. Experienced GPs seem to tolerate uncertainty better and also seem to fear medical errors less than their young colleagues. Young and more experienced GPs use different coping strategies for dealing with medical errors. Implications. When GPs become more experienced, they seem to get better at coping with medical errors. Means to support these skills should be studied in future research. PMID:24914458
Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A.; Zhang, Wenbo
2016-01-01
Objective: Combined source imaging techniques and directional connectivity analysis can provide useful information about underlying brain networks in a non-invasive fashion. Source imaging techniques have previously been used successfully either to determine the source of activity or to extract source time-courses for Granger causality analysis. In this work, we utilize source imaging algorithms both to find the network nodes (regions of interest) and to extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Methods: Source imaging methods are used to identify network nodes and extract time-courses, and then Granger causality analysis is applied to delineate the directional functional connectivity of underlying brain networks. Computer simulation studies where the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach has been evaluated in partial epilepsy patients to study epilepsy networks from inter-ictal and ictal signals recorded by EEG and/or MEG. Results: Localization errors of network nodes are less than 5 mm, with normalized connectivity errors of ~20%, in estimating underlying brain networks in the simulation studies. Additionally, two focal epilepsy patients were studied, and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Conclusion: Our study indicates that combining source imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network node location and internodal connectivity). Significance: The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions. PMID:27740473
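As a hedged sketch of the Granger step only (not the source imaging step), the example below builds two synthetic "source time-courses" in which node A drives node B with a one-sample lag, then applies statsmodels' Granger causality test; the coupling model and lag choice are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(6)

# Two hypothetical source time-courses: node A drives node B with a lag,
# standing in for time series extracted by a source imaging algorithm.
n = 2000
a = rng.standard_normal(n)
b = np.zeros(n)
for t in range(2, n):
    b[t] = 0.6 * a[t - 1] + 0.2 * b[t - 1] + 0.5 * rng.standard_normal()

# statsmodels tests whether the SECOND column Granger-causes the FIRST
data = np.column_stack([b, a])
res = grangercausalitytests(data, maxlag=3, verbose=False)
f_stat, p_val = res[1][0]["ssr_ftest"][:2]
print(f"A -> B at lag 1: F = {f_stat:.1f}, p = {p_val:.3g}")
```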
Developing Performance Estimates for High Precision Astrometry with TMT
NASA Astrophysics Data System (ADS)
Schoeck, Matthias; Do, Tuan; Ellerbroek, Brent; Herriot, Glen; Meyer, Leo; Suzuki, Ryuji; Wang, Lianqi; Yelda, Sylvana
2013-12-01
Adaptive optics on Extremely Large Telescopes will open up many new science cases or expand existing science into regimes unattainable with the current generation of telescopes. One example of this is high-precision astrometry, which has requirements in the range from 10 to 50 micro-arc-seconds for some instruments and science cases. Achieving these requirements imposes stringent constraints on the design of the entire observatory, but also on the calibration procedures, observing sequences and the data analysis techniques. This paper summarizes our efforts to develop a top down astrometry error budget for TMT. It is predominantly developed for the first-light AO system, NFIRAOS, and the IRIS instrument, but many terms are applicable to other configurations as well. Astrometry error sources are divided into 5 categories: Reference source and catalog errors, atmospheric refraction correction errors, other residual atmospheric effects, opto-mechanical errors and focal plane measurement errors. Results are developed in parametric form whenever possible. However, almost every error term in the error budget depends on the details of the astrometry observations, such as whether absolute or differential astrometry is the goal, whether one observes a sparse or crowded field, what the time scales of interest are, etc. Thus, it is not possible to develop a single error budget that applies to all science cases and separate budgets are developed and detailed for key astrometric observations. Our error budget is consistent with the requirements for differential astrometry of tens of micro-arc-seconds for certain science cases. While no show stoppers have been found, the work has resulted in several modifications to the NFIRAOS optical surface specifications and reference source design that will help improve the achievable astrometry precision even further.
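Error budgets of this kind conventionally combine independent 1-sigma terms in quadrature; the sketch below shows that root-sum-square bookkeeping. The five categories are the abstract's, but the numerical values are invented for illustration only.

```python
import numpy as np

# Hypothetical top-down astrometry error budget (micro-arcseconds).
# Terms are treated as independent 1-sigma contributions combined in
# quadrature, the usual convention; values are illustrative only.
budget = {
    "reference source / catalog": 20.0,
    "atmospheric refraction correction": 15.0,
    "residual atmospheric effects": 25.0,
    "opto-mechanical (distortion, plate scale)": 18.0,
    "focal-plane measurement": 12.0,
}

total = np.sqrt(sum(v**2 for v in budget.values()))
for term, v in budget.items():
    print(f"{term:45s} {v:6.1f} uas  ({(v/total)**2:5.1%} of variance)")
print(f"{'root-sum-square total':45s} {total:6.1f} uas")
```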
Surface characterization protocol for precision aspheric optics
NASA Astrophysics Data System (ADS)
Sarepaka, RamaGopal V.; Sakthibalan, Siva; Doodala, Somaiah; Panwar, Rakesh S.; Kotaria, Rajendra
2017-10-01
In advanced optical instrumentation, aspherics provide an effective performance alternative. Aspheric fabrication and surface metrology, followed by aspheric design, are complementary iterative processes for precision aspheric development. As in fabrication, a holistic approach to aspheric surface characterization is adopted to evaluate the actual surface error and to aim at the delivery of aspheric optics with the desired surface quality. Precision optical surfaces are characterized by profilometry or by interferometry. Aspheric profiles are characterized by contact profilometers, through linear surface scans, to analyze their form, figure and finish errors. One must ensure that the surface characterization procedure does not add to the resident profile errors (generated during aspheric surface fabrication). This presentation examines the errors introduced after surface generation and during profilometry of aspheric profiles. The effort is to identify the sources of error and to optimize the metrology process. The sources of error during profilometry may include: profilometer settings, work-piece placement on the profilometer stage, selection of zenith/nadir points of aspheric profiles, metrology protocols, clear-aperture diameter analysis, computational limitations of the profiler, software issues, etc. At OPTICA, a PGI 1200 FTS contact profilometer (Taylor Hobson) is used for this study. Precision optics of various profiles are studied, with due attention to possible sources of error during characterization, using a multi-directional scan approach for uniformity and repeatability of error estimation. This study provides an insight into aspheric surface characterization and helps establish an optimal aspheric surface production methodology.
Information-Gathering Patterns Associated with Higher Rates of Diagnostic Error
ERIC Educational Resources Information Center
Delzell, John E., Jr.; Chumley, Heidi; Webb, Russell; Chakrabarti, Swapan; Relan, Anju
2009-01-01
Diagnostic errors are an important source of medical errors. Problematic information-gathering is a common cause of diagnostic errors among physicians and medical students. The objectives of this study were to (1) determine if medical students' information-gathering patterns formed clusters of similar strategies, and if so (2) to calculate the…
More on Systematic Error in a Boyle's Law Experiment
ERIC Educational Resources Information Center
McCall, Richard P.
2012-01-01
A recent article in "The Physics Teacher" describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.
Adult age differences in unconscious transference: source confusion or identity blending?
Perfect, Timothy J; Harris, Lucy J
2003-06-01
Eyewitnesses are known often to falsely identify a familiar but innocent bystander when asked to pick out a perpetrator from a lineup. Such unconscious transference errors have been attributed to either identity confusions at encoding or source retrieval errors. Three experiments contrasted younger and older adults in their susceptibility to such misidentifications. Participants saw photographs of perpetrators, then a series of mug shots of innocent bystanders. A week later, they saw lineups containing bystanders (and others containing perpetrators in Experiment 3) and were asked whether any of the perpetrators were present. When younger faces were used as stimuli (Experiments 1 and 3), older adults showed higher rates of transference errors. When older faces were used as stimuli (Experiments 2 and 3), no such age effects in rates of unconscious transference were apparent. In addition, older adults in Experiment 3 showed an own-age bias effect for correct identification of targets. Unconscious transference errors were found to be due to both source retrieval errors and identity confusions, but age-related increases were found only in the latter.
NASA Astrophysics Data System (ADS)
Zheng, Yuejiu; Ouyang, Minggao; Han, Xuebing; Lu, Languang; Li, Jianqiu
2018-02-01
State of charge (SOC) estimation is generally acknowledged as one of the most important functions of the battery management system for lithium-ion batteries in new energy vehicles. Although every effort is made in various online SOC estimation methods to increase the estimation accuracy as much as possible within limited on-chip resources, little literature discusses the error sources of those SOC estimation methods. This paper first reviews the commonly studied SOC estimation methods using a conventional classification. A novel perspective focusing on the error analysis of the SOC estimation methods is then proposed. SOC estimation methods are analyzed from the viewpoints of the measured values, models, algorithms and state parameters. Subsequently, error flow charts are proposed to trace the error sources from the signal measurement to the models and algorithms for the online SOC estimation methods widely used in new energy vehicles. Finally, taking working conditions into consideration, the choice of more reliable and applicable SOC estimation methods is discussed, and future developments of promising online SOC estimation methods are suggested.
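As one concrete instance of the measured-value error class discussed above, here is a minimal sketch (with illustrative cell and sensor parameters) of how a constant current-sensor bias integrates into a linearly growing SOC error under coulomb counting.

```python
import numpy as np

rng = np.random.default_rng(7)

# Minimal sketch of one error source in coulomb-counting SOC estimation:
# a constant current-sensor bias integrates into a linearly growing SOC
# error. Capacity, bias, and noise values are illustrative assumptions.
capacity_As = 2.3 * 3600            # 2.3 Ah cell in ampere-seconds
dt, n = 1.0, 3600                   # 1 Hz sampling over one hour
true_current = 2.3 * np.ones(n)     # constant 1C discharge

bias, noise_sd = 0.02, 0.05         # 20 mA sensor bias, 50 mA noise
measured = true_current + bias + rng.normal(0, noise_sd, n)

soc_true = 1.0 - np.cumsum(true_current) * dt / capacity_As
soc_est = 1.0 - np.cumsum(measured) * dt / capacity_As

err = soc_est - soc_true
print(f"SOC error after 1 h: {err[-1]*100:.2f} %  (grows ~linearly with the bias)")
```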
Computing in the presence of soft bit errors. [caused by single event upset on spacecraft
NASA Technical Reports Server (NTRS)
Rasmussen, R. D.
1984-01-01
It is shown that single-event upsets (SEUs) due to cosmic rays are a significant source of single-bit errors in spacecraft computers. The physical mechanism of SEU, electron-hole generation by means of linear energy transfer (LET), is discussed with reference to the results of a study of the environmental effects on the computer systems of the Galileo spacecraft. Techniques for making software more tolerant of cosmic ray effects are considered, including: reducing the number of registers used by the software; continuity testing of variables; redundant execution of major procedures for error detection; and encoding state variables to detect single-bit changes. Attention is also given to design modifications which may reduce the cosmic ray exposure of on-board hardware. These modifications include: shielding components operating in LEO; removing low-power Schottky parts; and the use of CMOS diodes. The SEU parameters of different electronic components are listed in a table.
Bayesian historical earthquake relocation: an example from the 1909 Taipei earthquake
Minson, Sarah E.; Lee, William H.K.
2014-01-01
Locating earthquakes from the beginning of the modern instrumental period is complicated by the fact that there are few good-quality seismograms and what traveltimes do exist may be corrupted by both large phase-pick errors and clock errors. Here, we outline a Bayesian approach to simultaneous inference of not only the hypocentre location but also the clock errors at each station and the origin time of the earthquake. This methodology improves the solution for the source location and also provides an uncertainty analysis on all of the parameters included in the inversion. As an example, we applied this Bayesian approach to the well-studied 1909 Mw 7 Taipei earthquake. While our epicentre location and origin time for the 1909 Taipei earthquake are consistent with earlier studies, our focal depth is significantly shallower suggesting a higher seismic hazard to the populous Taipei metropolitan area than previously supposed.
Error-related negativity varies with the activation of gender stereotypes.
Ma, Qingguo; Shu, Liangchao; Wang, Xiaoyi; Dai, Shenyi; Che, Hongmin
2008-09-19
The error-related negativity (ERN) was suggested to reflect the response-performance monitoring process. The purpose of this study is to investigate how the activation of gender stereotypes influences the ERN. Twenty-eight male participants were asked to complete a tool or kitchenware identification task. The prime stimulus is a picture of a male or female face and the target stimulus is either a kitchen utensil or a hand tool. The ERN amplitude on male-kitchenware trials is significantly larger than that on female-kitchenware trials, which reveals the low-level, automatic activation of gender stereotypes. The ERN that was elicited in this task has two sources--operation errors and the conflict between the gender stereotype activation and the non-prejudice beliefs. And the gender stereotype activation may be the key factor leading to this difference of ERN. In other words, the stereotype activation in this experimental paradigm may be indexed by the ERN.
Age-related variation in genetic control of height growth in Douglas-fir.
Namkoong, G; Usanis, R A; Silen, R R
1972-01-01
The development of genetic variances in height growth of Douglas-fir over a 53-year period is analyzed and found to fall into three periods. In the juvenile period, variances in environmental error increase logarithmically, genetic variance within populations exists at moderate levels, and variance among populations is low but increasing. In the early reproductive period, the response to environmental sources of error variance is restricted, genetic variance within populations disappears, and populational differences strongly emerge but do not increase as expected. In the later period, environmental error again increases rapidly, but genetic variance within populations does not reappear and population differences are maintained at about the same level as established in the early reproductive period. The change between the juvenile and early reproductive periods is perhaps associated with the onset of ecological dominance and significant allocations of energy to reproduction.
NASA Technical Reports Server (NTRS)
Whiteman, David N.
2003-01-01
In a companion paper, the temperature dependence of Raman scattering and its influence on the Raman and Rayleigh-Mie lidar equations was examined. New forms of the lidar equation were developed to account for this temperature sensitivity. Here those results are used to derive the temperature dependent forms of the equations for the water vapor mixing ratio, aerosol scattering ratio, aerosol backscatter coefficient, and extinction to backscatter ratio (Sa). The error equations are developed, the influence of differential transmission is studied and different laser sources are considered in the analysis. The results indicate that the temperature functions become significant when using narrowband detection. Errors of 5% and more can be introduced in the water vapor mixing ratio calculation at high altitudes and errors larger than 10% are possible for calculations of aerosol scattering ratio and thus aerosol backscatter coefficient and extinction to backscatter ratio.
Space shuttle entry and landing navigation analysis
NASA Technical Reports Server (NTRS)
Jones, H. L.; Crawford, B. S.
1974-01-01
A navigation system for the entry phase of a Space Shuttle mission is evaluated: an aided-inertial system that uses a Kalman filter to mix IMU data with data derived from external navigation aids. A drag pseudo-measurement used during radio blackout is treated as an additional external aid. A comprehensive truth model with 101 states is formulated and used to generate detailed error budgets at several significant time points: end of blackout, start of final approach, over the runway threshold, and touchdown. Sensitivity curves illustrating the effect of variations in the size of individual error sources on navigation accuracy are presented. The sensitivity of the navigation system performance to filter modifications is analyzed. The projected overall performance is shown in the form of time histories of position and velocity error components. The detailed results are summarized and interpreted, and suggestions are made concerning possible software improvements.
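A minimal one-axis sketch of the aided-inertial idea described above, assuming illustrative noise figures: noisy, biased accelerometer data are integrated forward, and a Kalman filter corrects position and velocity whenever an external "navaid" position fix arrives.

```python
import numpy as np

rng = np.random.default_rng(8)

# One-axis toy of an aided-inertial navigator: integrate noisy, biased IMU
# accelerations, then let a Kalman filter correct position/velocity with
# sparse external "navaid" position fixes. All numbers are illustrative.
dt, n = 0.1, 600
acc_true = 0.5 * np.sin(0.05 * np.arange(n))
acc_meas = acc_true + 0.02 + rng.normal(0, 0.05, n)   # bias + noise

F = np.array([[1, dt], [0, 1]])          # state: [position, velocity]
B = np.array([0.5 * dt**2, dt])
H = np.array([[1.0, 0.0]])               # navaid measures position only
Q = 0.05**2 * np.outer(B, B)             # process noise from accel noise
R = np.array([[4.0]])                    # navaid variance (2 m, 1-sigma)

x, P = np.zeros(2), np.eye(2) * 100.0
pos_true, vel_true = 0.0, 0.0
for k in range(n):
    # truth propagation
    pos_true += vel_true * dt + 0.5 * acc_true[k] * dt**2
    vel_true += acc_true[k] * dt
    # inertial propagation (prediction)
    x = F @ x + B * acc_meas[k]
    P = F @ P @ F.T + Q
    # external navaid fix once per second (update)
    if k % 10 == 0:
        z = pos_true + rng.normal(0, 2.0)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + (K @ (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P

print(f"final position error: {x[0] - pos_true:+.2f} m")
```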
Overlay improvement by exposure map based mask registration optimization
NASA Astrophysics Data System (ADS)
Shi, Irene; Guo, Eric; Chen, Ming; Lu, Max; Li, Gordon; Li, Rivan; Tian, Eric
2015-03-01
Along with the increased miniaturization of semiconductor electronic devices, the design rules of advanced semiconductor devices shrink dramatically. [1] One of the main challenges of the lithography step is layer-to-layer overlay control. Furthermore, DPT (Double Patterning Technology) has been adopted for advanced technology nodes like 28nm and 14nm, and the corresponding overlay budget becomes even tighter. [2][3] After in-die mask registration (pattern placement) measurement was introduced, model analysis with a KLA SOV (sources of variation) tool showed that the registration difference between masks is a significant error source of wafer layer-to-layer overlay at the 28nm process. [4][5] Mask registration optimization would therefore substantially improve wafer overlay performance. It has been reported that a laser-based registration control (RegC) process can be applied after pattern generation or after pellicle mounting, allowing fine tuning of the mask registration. [6] In this paper we propose a novel method of mask registration correction that can be applied before mask writing, based on the mask exposure map and considering the factors of mask chip layout, writing sequence, and pattern density distribution. Our experimental data show that if the pattern density on the mask is kept at a low level, the in-die mask registration residual error (3 sigma) stays under 5nm regardless of the blank type and the writer POSCOR (position correction) file applied; this indicates that the random error induced by material or equipment occupies a relatively fixed portion of the mask registration error budget. In production, comparing the mask registration difference across critical layers reveals that the registration residual error of line/space layers with higher pattern density is always much larger than that of contact-hole layers with lower pattern density. Additionally, the mask registration difference between layers with similar pattern density can also achieve sub-5nm performance. We assume that the mask registration error, excluding random error, is mostly induced by charge accumulation during mask writing, which may be calculated from the surrounding exposed pattern density. A multi-loading test of mask registration shows that, with an x-direction writing sequence, mask registration behavior in the x direction is mainly related to the sequence direction, while registration in the y direction is strongly affected by the pattern density distribution map. This indicates that part of the mask registration error is due to charging effects from the nearby environment. If the exposure sequence is chip by chip, as in a normal multi-chip layout, mask registration in both the x and y directions is affected analogously, which has also been confirmed by real data. Therefore, we set up a simple model to predict the mask registration error based on the mask exposure map, and correct it via the given POSCOR (position correction) file for advanced mask writing when needed.
Missed lung cancer: when, where, and why?
del Ciello, Annemilia; Franchi, Paola; Contegiacomo, Andrea; Cicchetti, Giuseppe; Bonomo, Lorenzo; Larici, Anna Rita
2017-01-01
Missed lung cancer is a source of concern among radiologists and an important medicolegal challenge. In 90% of cases, errors in the diagnosis of lung cancer occur on chest radiographs. It may be challenging for radiologists to distinguish a lung lesion from bones, pulmonary vessels, mediastinal structures, and other complex anatomical structures on chest radiographs. Nevertheless, lung cancer can also be overlooked on computed tomography (CT) scans, regardless of the context, whether or not a clinical or radiologic suspicion exists. Awareness of the possible causes of overlooking a pulmonary lesion can give radiologists a chance to reduce the occurrence of this eventuality. Various factors contribute to a misdiagnosis of lung cancer on chest radiographs and on CT, often very similar in nature to each other. Observer error is the most significant one and comprises scanning error, recognition error, decision-making error, and satisfaction of search. Tumor characteristics such as lesion size, conspicuity, and location are also crucial in this context. Even technical aspects can contribute to the probability of missing lung cancer, including image quality and patient positioning and movement. Although it is hard to eliminate missed lung cancer completely, strategies to reduce observer error and methods to improve technique and automated detection may be valuable in reducing its likelihood. PMID:28206951
Bohil, Corey J; Higgins, Nicholas A; Keebler, Joseph R
2014-01-01
We compared methods for predicting and understanding the source of confusion errors during military vehicle identification training. Participants completed training to identify main battle tanks. They also completed card-sorting and similarity-rating tasks to express their mental representation of resemblance across the set of training items. We expected participants to selectively attend to a subset of vehicle features during these tasks, and we hypothesised that we could predict identification confusion errors based on the outcomes of the card-sort and similarity-rating tasks. Based on card-sorting results, we were able to predict about 45% of observed identification confusions. Based on multidimensional scaling of the similarity-rating data, we could predict more than 80% of identification confusions. These methods also enabled us to infer the dimensions receiving significant attention from each participant. This understanding of mental representation may be crucial in creating personalised training that directs attention to features that are critical for accurate identification. Participants completed military vehicle identification training and testing, along with card-sorting and similarity-rating tasks. The data enabled us to predict up to 84% of identification confusion errors and to understand the mental representation underlying these errors. These methods have potential to improve training and reduce identification errors leading to fratricide.
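As a hedged sketch of the similarity-rating analysis (with an invented rating matrix, not the study's data), the example below embeds items with metric MDS from a precomputed dissimilarity matrix and predicts each item's most likely confusion as its nearest neighbour in the embedding.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(9)

# Hypothetical similarity ratings (0-9) among six vehicle types; the
# matrix is symmetrized, then converted to dissimilarities for MDS.
labels = ["T-72", "T-80", "T-90", "M1A1", "M1A2", "Leopard2"]
sim = rng.integers(0, 10, (6, 6)).astype(float)
sim = (sim + sim.T) / 2
np.fill_diagonal(sim, 9)
dissim = 9 - sim

# Embed items in 2-D; nearby items in the embedding are the predicted
# confusions, mirroring the paper's similarity-rating analysis.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
xy = mds.fit_transform(dissim)

dist = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
np.fill_diagonal(dist, np.inf)
for i, lab in enumerate(labels):
    print(f"{lab:9s} most likely confused with {labels[int(dist[i].argmin())]}")
```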
Qin, Feng; Zhan, Xingqun; Du, Gang
2013-01-01
Ultra-tight integration was first proposed by Abbott in 2003 with the purpose of integrating a global navigation satellite system (GNSS) and an inertial navigation system (INS). This technology can improve the tracking performances of a receiver by reconfiguring the tracking loops in GNSS-challenged environments. In this paper, the models of all error sources known to date in the phase lock loops (PLLs) of a standard receiver and an ultra-tightly integrated GNSS/INS receiver are built, respectively. Based on these models, the tracking performances of the two receivers are compared to verify the improvement due to the ultra-tight integration. Meanwhile, the PLL error distributions of the two receivers are also depicted to analyze the error changes of the tracking loops. These results show that the tracking error is significantly reduced in the ultra-tightly integrated GNSS/INS receiver since the receiver's dynamics are estimated and compensated by an INS. Moreover, the mathematical relationship between the tracking performances of the ultra-tightly integrated GNSS/INS receiver and the quality of the selected inertial measurement unit (IMU) is derived from the error models and proved by the error comparisons of four ultra-tightly integrated GNSS/INS receivers aided by different grade IMUs.
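A standard closed-form ingredient of such PLL error models is the thermal-noise jitter formula (as given in common GNSS receiver texts); the sketch below uses it only to illustrate why the narrower loop bandwidth enabled by INS aiding shrinks tracking error. Parameter values are illustrative assumptions.

```python
import numpy as np

# Standard GNSS PLL 1-sigma thermal-noise jitter (e.g., Kaplan & Hegarty):
#   sigma = (360 / 2*pi) * sqrt( Bn/(C/N0) * (1 + 1/(2*T*(C/N0))) )  [degrees]
# Once an INS estimates and compensates receiver dynamics, the loop
# bandwidth Bn can be narrowed, directly reducing this jitter.
def pll_thermal_jitter_deg(cn0_dbhz, bn_hz, t_coh=0.02):
    cn0 = 10 ** (cn0_dbhz / 10)              # carrier-to-noise density (Hz)
    return (360 / (2 * np.pi)) * np.sqrt(bn_hz / cn0 * (1 + 1 / (2 * t_coh * cn0)))

for bn in (15.0, 3.0):                       # standard vs INS-aided bandwidth
    jitter = {c: round(pll_thermal_jitter_deg(c, bn), 2) for c in (25, 35, 45)}
    print(f"Bn = {bn:4.1f} Hz: {jitter} deg at C/N0 of 25/35/45 dB-Hz")
```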
[The error, source of learning].
Joyeux, Stéphanie; Bohic, Valérie
2016-05-01
The error itself is not recognised as a fault. It is the intentionality which differentiates between an error and a fault. An error is unintentional while a fault is a failure to respect known rules. The risk of error is omnipresent in health institutions. Public authorities have therefore set out a series of measures to reduce this risk. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report
2016-01-01
This document lists types of potential errors in EIA estimates published in the WNGSR. Survey errors are an unavoidable aspect of data collection. Error is inherent in all collected data, regardless of the source of the data and the care and competence of data collectors. The type and extent of error depends on the type and characteristics of the survey.
Methodological uncertainties in multi-regression analyses of middle-atmospheric data series.
Kerzenmacher, Tobias E; Keckhut, Philippe; Hauchecorne, Alain; Chanin, Marie-Lise
2006-07-01
Multi-regression analyses have often been used recently to detect trends, in particular in ozone or temperature data sets in the stratosphere. The confidence in detecting trends depends on a number of factors which generate uncertainties. Part of these uncertainties comes from random variability, and this is what is usually considered; it can be statistically estimated from the residual deviations between the data and the fitting model. However, interference between the different sources of variability affecting the data set, such as the Quasi-Biennial Oscillation (QBO), volcanic aerosols, solar flux variability and the trend, can also be a critical source of error. This type of error has hitherto not been well quantified. In this work an artificial data series has been generated to carry out such estimates. The sources of error considered here are: the length of the data series, the dependence on the choice of parameters used in the fitting model, and the time evolution of the trend in the data series. The curves provided here will permit future studies to test the magnitude of the methodological bias expected for a given case, as shown in several real examples. It is found that, if the data series is shorter than a decade, the uncertainties are very large, whatever factors are chosen to identify the sources of variability. The errors can, however, be limited when dealing with natural variability if a sufficient number of periods (for periodic forcings) are covered by the analysed dataset. When analysing the trend, though, the response to volcanic eruptions induces a bias, whatever the length of the data series. The signal-to-noise ratio is a key factor: doubling the noise increases the period for which data are required in order to obtain an error smaller than 10% from 1 to 3-4 decades. Moreover, if non-linear trends are superimposed on the data and the length of the series is longer than five years, a non-linear function has to be used to estimate trends. When applied to real data series in which a breakpoint occurs, the study reveals that data extending over 5 years are needed to detect a significant change in the slope of the ozone trends at mid-latitudes.
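A hedged sketch of the artificial-data-series approach described above: a synthetic monthly series combines a linear trend with QBO-like and solar-like periodic forcings plus noise, and repeated multiple regressions show how the relative trend error shrinks as the series lengthens. All amplitudes and periods are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(10)

# Synthetic monthly series: linear trend + QBO-like and solar-like periodic
# forcings + noise, fitted by multiple regression; the mean relative error
# of the recovered trend coefficient is tracked versus series length.
def trend_error(n_years, noise_sd=1.0, n_sims=300):
    t = np.arange(n_years * 12) / 12.0
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t / 2.3),    # ~28-month QBO proxy
                         np.sin(2 * np.pi * t / 11.0)])  # solar-cycle proxy
    beta_true = np.array([0.0, -0.05, 0.8, 0.6])         # trend: -0.05 per year
    errs = []
    for _ in range(n_sims):
        y = X @ beta_true + rng.normal(0, noise_sd, t.size)
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        errs.append(abs(beta[1] - beta_true[1]) / abs(beta_true[1]))
    return np.mean(errs)

for years in (5, 10, 20, 40):
    print(f"{years:2d} years: mean relative trend error {trend_error(years):.0%}")
```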
The impact of 14-nm photomask uncertainties on computational lithography solutions
NASA Astrophysics Data System (ADS)
Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian
2013-04-01
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine with a simulation sensitivity study, the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to be accurately represented in the model are cataloged. CD Bias values are based on state of the art mask manufacturing data and other variables changes are speculated, highlighting the need for improved metrology and awareness.
Baldwin, DeWitt C; Daugherty, Steven R; Ryan, Patrick M; Yaghmour, Nicholas A; Philibert, Ingrid
2018-04-01
Medical errors and patient safety are major concerns for the medical and medical education communities. Improving clinical supervision for residents is important in avoiding errors, yet little is known about how residents perceive the adequacy of their supervision and how this relates to medical errors and other education outcomes, such as learning and satisfaction. We analyzed data from a 2009 survey of residents in 4 large specialties regarding the adequacy and quality of supervision they receive as well as associations with self-reported data on medical errors and residents' perceptions of their learning environment. Residents' reports of working without adequate supervision were lower than data from a 1999 survey for all 4 specialties, and residents were least likely to rate "lack of supervision" as a problem. While few residents reported that they received inadequate supervision, problems with supervision were negatively correlated with sufficient time for clinical activities, overall ratings of the residency experience, and attending physicians as a source of learning. Problems with supervision were positively correlated with resident reports that they had made a significant medical error, had been belittled or humiliated, or had observed others falsifying medical records. Although working without supervision was not a pervasive problem in 2009, when it happened, it appeared to have negative consequences. The association between inadequate supervision and medical errors is of particular concern.
Survey and Method for Determination of Trajectory Predictor Requirements
NASA Technical Reports Server (NTRS)
Rentas, Tamika L.; Green, Steven M.; Cate, Karen Tung
2009-01-01
A survey of air-traffic-management researchers, representing a broad range of automation applications, was conducted to document trajectory-predictor requirements for future decision-support systems. Results indicated that the researchers were unable to articulate a basic set of trajectory-prediction requirements for their automation concepts. Survey responses showed the need to establish a process to help developers determine the trajectory-predictor-performance requirements for their concepts. Two methods for determining trajectory-predictor requirements are introduced. A fast-time simulation method is discussed that captures the sensitivity of a concept to the performance of its trajectory-prediction capability. A characterization method is proposed to provide quicker, yet less precise, results, based on analysis and simulation to characterize the trajectory-prediction errors associated with key modeling options for a specific concept. Concept developers can then identify the relative sizes of errors associated with key modeling options and qualitatively determine which options lead to significant errors. The characterization method is demonstrated for a case study involving future airport surface traffic management automation. Of the top four sources of error, results indicated that the error associated with accelerations to and from turn speeds was unacceptable, the error associated with the turn path model was acceptable, and the error associated with taxi-speed estimation was of concern and needed a higher-fidelity concept simulation to obtain a more precise result.
Restoration of the ASCA Source Position Accuracy
NASA Astrophysics Data System (ADS)
Gotthelf, E. V.; Ueda, Y.; Fujimoto, R.; Kii, T.; Yamaoka, K.
2000-11-01
We present a calibration of the absolute pointing accuracy of the Advanced Satellite for Cosmology and Astrophysics (ASCA) which allows us to compensate for a large error (up to 1') in the derived source coordinates. We parameterize a temperature-dependent deviation of the attitude solution which is responsible for this error. By analyzing the ASCA coordinates of 100 bright active galactic nuclei, we show that it is possible to reduce the uncertainty in the sky position for any given observation by a factor of 4. The revised 90% error circle radius is then 12", consistent with preflight specifications, effectively restoring the full ASCA pointing accuracy. Herein, we derive an algorithm which compensates for this attitude error and present an internet-based table to be used to correct, post facto, the coordinates of all ASCA observations. While the above error circle is strictly applicable to data taken with the on-board Solid-state Imaging Spectrometers (SISs), similar coordinate corrections are derived for data obtained with the Gas Imaging Spectrometers (GISs), which, however, have additional instrumental uncertainties. The 90% error circle radius for the central 20' diameter of the GIS is 24". The large reduction in the error circle area for the two instruments offers the opportunity to greatly enhance the search for X-ray counterparts at other wavelengths. This has important implications for current and future ASCA source catalogs and surveys.
NASA Astrophysics Data System (ADS)
Hutchinson, G. L.; Livingston, G. P.; Healy, R. W.; Striegl, R. G.
2000-04-01
We employed a three-dimensional finite difference gas diffusion model to simulate the performance of chambers used to measure surface-atmosphere trace gas exchange. We found that systematic errors often result from conventional chamber design and deployment protocols, as well as key assumptions behind the estimation of trace gas exchange rates from observed concentration data. Specifically, our simulations showed that (1) when a chamber significantly alters atmospheric mixing processes operating near the soil surface, it also nearly instantaneously enhances or suppresses the postdeployment gas exchange rate, (2) any change resulting in greater soil gas diffusivity, or greater partitioning of the diffusing gas to solid or liquid soil fractions, increases the potential for chamber-induced measurement error, and (3) all such errors are independent of the magnitude, kinetics, and/or distribution of trace gas sources, but greater for trace gas sinks with the same initial absolute flux. Finally, and most importantly, we found that our results apply to steady state as well as non-steady-state chambers, because the slow rate of gas diffusion in soil inhibits recovery of the former from their initial non-steady-state condition. Over a range of representative conditions, the error in steady state chamber estimates of the trace gas flux varied from -30 to +32%, while estimates computed by linear regression from non-steady-state chamber concentrations were 2 to 31% too small. Although such errors are relatively small in comparison to the temporal and spatial variability characteristic of trace gas exchange, they bias the summary statistics for each experiment as well as larger scale trace gas flux estimates based on them.
NASA Astrophysics Data System (ADS)
Samboju, Vishal; Adams, Matthew; Salgaonkar, Vasant; Diederich, Chris J.; Cunha, J. Adam M.
2017-02-01
The speed of sound (SOS) for ultrasound devices used for imaging soft tissue is often calibrated to water, 1540 m/s [1], despite in-vivo soft tissue SOS varying from 1450 to 1613 m/s [2]. Images acquired with 1540 m/s and used in conjunction with stereotactic external coordinate systems can thus result in displacement errors of several millimeters. Ultrasound imaging systems are routinely used to guide interventional thermal ablation and cryoablation devices, or radiation sources for brachytherapy [3]. Brachytherapy uses small radioactive pellets, inserted interstitially with needles under ultrasound guidance, to eradicate cancerous tissue [4]. Since the radiation dose diminishes with distance from the pellet as 1/r², imaging uncertainty of a few millimeters can result in significant erroneous dose delivery [5,6]. Likewise, modeling of power deposition and thermal dose accumulations from ablative sources is also prone to errors due to placement offsets from SOS errors [7]. This work presents a method of mitigating needle placement error due to SOS variances without the need of ionizing radiation [2,8]. We demonstrate the effects of changes in dosimetry in a prostate brachytherapy environment due to patient-specific SOS variances and the ability to mitigate dose delivery uncertainty. Electromagnetic (EM) sensors embedded in the brachytherapy ultrasound system provide information regarding 3D position and orientation of the ultrasound array. Algorithms using data from these two modalities are used to correct B-mode images to account for SOS errors. While ultrasound localization resulted in >3 mm displacements, EM resolution was verified to <1 mm precision using custom-built phantoms with various SOS, showing 1% accuracy in SOS measurement.
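A minimal sketch of the displacement-error mechanism described above, assuming simple pulse-echo depth reconstruction (depth = c·t/2); the speeds and target depth are illustrative, not the study's data:

```python
import numpy as np

def axial_depth_error(true_depth_m, c_true, c_assumed=1540.0):
    """Displacement error (m) for a reflector at true_depth_m when the
    scanner reconstructs depth with an assumed speed of sound."""
    t = 2.0 * true_depth_m / c_true      # round-trip time of flight
    imaged_depth = c_assumed * t / 2.0   # depth the scanner reports
    return imaged_depth - true_depth_m

# Example: a target 50 mm deep in tissue with a true SOS of 1450 m/s
err = axial_depth_error(0.050, c_true=1450.0)
print(f"displacement error: {err * 1e3:.2f} mm")  # ~3.1 mm, the >3 mm scale above
```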
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passarge, M; Fix, M K; Manser, P
Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real-time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error detection methods: a masking technique that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm); and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. The type of method that detected each error indicated the error source. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real-time and indicate the error source. J. V. Siebers receives funding support from Varian Medical Systems.
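For readers unfamiliar with the standard gamma evaluation (3%, 3 mm) used as one stage of the metric above, the following is a minimal 1D illustration of the gamma index; the profiles and error are synthetic, and this is not the authors' implementation:

```python
import numpy as np

def gamma_index_1d(dose_ref, dose_eval, x, dose_tol=0.03, dist_tol_mm=3.0):
    """Global gamma (3%, 3 mm) between two 1D dose profiles on grid x (mm)."""
    d_norm = dose_tol * dose_ref.max()
    gamma = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        # Capital Gamma against every evaluated point; keep the minimum
        cap = np.sqrt(((x - xi) / dist_tol_mm) ** 2 +
                      ((dose_eval - di) / d_norm) ** 2)
        gamma[i] = cap.min()
    return gamma

x = np.linspace(0, 100, 201)                 # mm
ref = np.exp(-((x - 50) / 20) ** 2)          # reference profile
ev = np.exp(-((x - 51) / 20) ** 2) * 1.02    # 1 mm shift, 2% output error
g = gamma_index_1d(ref, ev, x)
print(f"pass rate: {100 * np.mean(g <= 1):.1f}%")
```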
Quantifying uncertainty in carbon and nutrient pools of coarse woody debris
NASA Astrophysics Data System (ADS)
See, C. R.; Campbell, J. L.; Fraver, S.; Domke, G. M.; Harmon, M. E.; Knoepp, J. D.; Woodall, C. W.
2016-12-01
Woody detritus constitutes a major pool of both carbon and nutrients in forested ecosystems. Estimating coarse wood stocks relies on many assumptions, even when full surveys are conducted. Researchers rarely report error in coarse wood pool estimates, despite the importance to ecosystem budgets and modelling efforts. To date, no study has attempted a comprehensive assessment of error rates and uncertainty inherent in the estimation of this pool. Here, we use Monte Carlo analysis to propagate the error associated with the major sources of uncertainty present in the calculation of coarse wood carbon and nutrient (i.e., N, P, K, Ca, Mg, Na) pools. We also evaluate individual sources of error to identify the importance of each source of uncertainty in our estimates. We quantify sampling error by comparing the three most common field methods used to survey coarse wood (two transect methods and a whole-plot survey). We quantify the measurement error associated with length and diameter measurement, and technician error in species identification and decay class using plots surveyed by multiple technicians. We use previously published values of model error for the four most common methods of volume estimation: Smalian's, conical frustum, conic paraboloid, and average-of-ends. We also use previously published values for error in the collapse ratio (cross-sectional height/width) of decayed logs that serves as a surrogate for the volume remaining. We consider sampling error in chemical concentration and density for all decay classes, using distributions from both published and unpublished studies. Analytical uncertainty is calculated using standard reference plant material from the National Institute of Standards. Our results suggest that technician error in decay classification can have a large effect on uncertainty, since many of the error distributions included in the calculation (e.g. density, chemical concentration, volume-model selection, collapse ratio) are decay-class specific.
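A toy Monte Carlo propagation in the spirit of the approach described above, for one hypothetical log: volume from Smalian's formula with measurement error on diameters and length, combined with assumed density and carbon-concentration distributions (all values illustrative, not the study's fitted error distributions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # Monte Carlo draws

d1 = rng.normal(30.0, 0.5, n)   # end diameter 1 (cm), with measurement error
d2 = rng.normal(24.0, 0.5, n)   # end diameter 2 (cm)
L = rng.normal(400.0, 2.0, n)   # length (cm)

# Smalian's formula: V = L * (A1 + A2) / 2, with A = pi d^2 / 4
vol_cm3 = L * (np.pi * d1**2 / 4 + np.pi * d2**2 / 4) / 2

density = rng.normal(0.35, 0.05, n)  # g/cm^3, decay-class specific (assumed)
c_frac = rng.normal(0.48, 0.02, n)   # carbon concentration (g/g, assumed)

carbon_kg = vol_cm3 * density * c_frac / 1e3
lo, hi = np.percentile(carbon_kg, [2.5, 97.5])
print(f"C pool: {carbon_kg.mean():.1f} kg (95% CI {lo:.1f}-{hi:.1f})")
```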
Clinical Errors and Medical Negligence
Oyebode, Femi
2013-01-01
This paper discusses the definition, nature and origins of clinical errors including their prevention. The relationship between clinical errors and medical negligence is examined as are the characteristics of litigants and events that are the source of litigation. The pattern of malpractice claims in different specialties and settings is examined. Among hospitalized patients worldwide, 3–16% suffer injury as a result of medical intervention, the most common being the adverse effects of drugs. The frequency of adverse drug effects appears superficially to be higher in intensive care units and emergency departments but once rates have been corrected for volume of patients, comorbidity of conditions and number of drugs prescribed, the difference is not significant. It is concluded that probably no more than 1 in 7 adverse events in medicine result in a malpractice claim and the factors that predict that a patient will resort to litigation include a prior poor relationship with the clinician and the feeling that the patient is not being kept informed. Methods for preventing clinical errors are still in their infancy. The most promising include new technologies such as electronic prescribing systems, diagnostic and clinical decision-making aids and error-resistant systems. PMID:23343656
Clinical errors and medical negligence.
Oyebode, Femi
2013-01-01
This paper discusses the definition, nature and origins of clinical errors including their prevention. The relationship between clinical errors and medical negligence is examined as are the characteristics of litigants and events that are the source of litigation. The pattern of malpractice claims in different specialties and settings is examined. Among hospitalized patients worldwide, 3-16% suffer injury as a result of medical intervention, the most common being the adverse effects of drugs. The frequency of adverse drug effects appears superficially to be higher in intensive care units and emergency departments but once rates have been corrected for volume of patients, comorbidity of conditions and number of drugs prescribed, the difference is not significant. It is concluded that probably no more than 1 in 7 adverse events in medicine result in a malpractice claim and the factors that predict that a patient will resort to litigation include a prior poor relationship with the clinician and the feeling that the patient is not being kept informed. Methods for preventing clinical errors are still in their infancy. The most promising include new technologies such as electronic prescribing systems, diagnostic and clinical decision-making aids and error-resistant systems. Copyright © 2013 S. Karger AG, Basel.
Partial pressure analysis in space testing
NASA Technical Reports Server (NTRS)
Tilford, Charles R.
1994-01-01
For vacuum-system or test-article analysis it is often desirable to know the species and partial pressures of the vacuum gases. Residual gas or Partial Pressure Analyzers (PPA's) are commonly used for this purpose. These are mass spectrometer-type instruments, most commonly employing quadrupole filters. These instruments can be extremely useful, but they should be used with caution. Depending on the instrument design, calibration procedures, and conditions of use, measurements made with these instruments can be accurate to within a few percent, or in error by two or more orders of magnitude. Significant sources of error can include relative gas sensitivities that differ from handbook values by an order of magnitude, changes in sensitivity with pressure by as much as two orders of magnitude, changes in sensitivity with time after exposure to chemically active gases, and the dependence of the sensitivity for one gas on the pressures of other gases. However, for most instruments, these errors can be greatly reduced with proper operating procedures and conditions of use. In this paper, data are presented illustrating performance characteristics for different instruments and gases, operating parameters are recommended to minimize some errors, and calibrations procedures are described that can detect and/or correct other errors.
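A common first-order correction implied by the discussion of relative gas sensitivities is to divide the N2-equivalent reading by a gas-specific relative sensitivity. The sketch below uses placeholder sensitivity values; as the paper stresses, handbook values can be off by an order of magnitude, so instrument-specific calibration should supply them:

```python
# Relative sensitivities (N2 = 1.0) are placeholders for illustration only;
# real values must come from calibration of the specific instrument.
relative_sensitivity = {"N2": 1.0, "He": 0.14, "Ar": 1.2, "H2O": 1.0}

def corrected_partial_pressure(indicated_torr, gas,
                               sens=relative_sensitivity):
    """Divide the N2-equivalent reading by the gas's relative sensitivity."""
    return indicated_torr / sens[gas]

# A helium reading of 1e-8 Torr (N2 equivalent) corresponds to a much
# higher true partial pressure because the analyzer is less sensitive to He.
print(f"{corrected_partial_pressure(1.0e-8, 'He'):.2e} Torr")
```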
Uncertainty Analysis of Seebeck Coefficient and Electrical Resistivity Characterization
NASA Technical Reports Server (NTRS)
Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred
2014-01-01
In order to provide a complete description of a materials thermoelectric power factor, in addition to the measured nominal value, an uncertainty interval is required. The uncertainty may contain sources of measurement error including systematic bias error and precision error of a statistical nature. The work focuses specifically on the popular ZEM-3 (Ulvac Technologies) measurement system, but the methods apply to any measurement system. The analysis accounts for sources of systematic error including sample preparation tolerance, measurement probe placement, thermocouple cold-finger effect, and measurement parameters; in addition to including uncertainty of a statistical nature. Complete uncertainty analysis of a measurement system allows for more reliable comparison of measurement data between laboratories.
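As a hedged illustration of combining bias and precision terms into an uncertainty interval for the power factor PF = S²/ρ, the following first-order propagation uses hypothetical values, not ZEM-3 specifications:

```python
import numpy as np

# Illustrative Seebeck coefficient and resistivity with separate systematic
# (bias) and statistical (precision) uncertainty components.
S, u_S_sys, u_S_stat = 180e-6, 4e-6, 2e-6        # V/K
rho, u_rho_sys, u_rho_stat = 1.2e-5, 3e-7, 1e-7  # ohm*m

u_S = np.hypot(u_S_sys, u_S_stat)      # combine bias and precision in quadrature
u_rho = np.hypot(u_rho_sys, u_rho_stat)

pf = S**2 / rho
# First-order propagation: (u_PF/PF)^2 = (2 u_S/S)^2 + (u_rho/rho)^2
u_pf = pf * np.hypot(2 * u_S / S, u_rho / rho)
print(f"PF = {pf*1e3:.3f} +/- {u_pf*1e3:.3f} mW/(m*K^2)")
```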
Remmersmann, Christian; Stürwald, Stephan; Kemper, Björn; Langehanenberg, Patrik; von Bally, Gert
2009-03-10
In temporal phase-shifting-based digital holographic microscopy, high-resolution phase contrast imaging requires optimized conditions for hologram recording and phase retrieval. To optimize the phase resolution, for the example of a variable three-step algorithm, a theoretical analysis on statistical errors, digitalization errors, uncorrelated errors, and errors due to a misaligned temporal phase shift is carried out. In a second step the theoretically predicted results are compared to the measured phase noise obtained from comparative experimental investigations with several coherent and partially coherent light sources. Finally, the applicability for noise reduction is demonstrated by quantitative phase contrast imaging of pancreas tumor cells.
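A minimal sketch of a variable three-step phase-shifting reconstruction of the kind analyzed above, assuming intensities I_k = A + B·cos(φ + δ_k) with shifts δ = (−α, 0, +α); a mismatch between the assumed and actual α would produce exactly the misaligned-phase-shift error the paper studies:

```python
import numpy as np

alpha = np.deg2rad(90.0)     # temporal phase-shift step (assumed known)
phi_true = np.deg2rad(37.0)  # phase to be recovered
A, B = 1.0, 0.6              # background and modulation (synthetic)
I1 = A + B * np.cos(phi_true - alpha)
I2 = A + B * np.cos(phi_true)
I3 = A + B * np.cos(phi_true + alpha)

# Variable three-step formula:
# tan(phi) = (I1 - I3)(1 - cos a) / ((2 I2 - I1 - I3) sin a)
phi = np.arctan2((I1 - I3) * (1 - np.cos(alpha)),
                 (2 * I2 - I1 - I3) * np.sin(alpha))
print(f"recovered phase: {np.rad2deg(phi):.2f} deg")  # ~37.00
```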
Observability of ionospheric space-time structure with ISR: A simulation study
NASA Astrophysics Data System (ADS)
Swoboda, John; Semeter, Joshua; Zettergren, Matthew; Erickson, Philip J.
2017-02-01
The sources of error from electronically steerable array (ESA) incoherent scatter radar (ISR) systems are investigated both theoretically and with use of an open-source ISR simulator, developed by the authors, called Simulator for ISR (SimISR). The main sources of error incorporated in the simulator include statistical uncertainty, which arises due to the nature of the measurement mechanism, and the inherent space-time ambiguity from the sensor. SimISR can take a field of plasma parameters, parameterized by time and space, and create simulated ISR data at the scattered electric field (i.e., complex receiver voltage) level, subsequently processing these data to show possible reconstructions of the original parameter field. To demonstrate general utility, we show a number of simulation examples, with two cases using data from a self-consistent multifluid transport model. Results highlight the significant influence of the forward model of the ISR process and the resulting statistical uncertainty on plasma parameter measurements and the core experiment design trade-offs that must be made when planning observations. These conclusions further underscore the utility of this class of measurement simulator as a design tool for more optimal experiment design efforts using flexible ESA class ISR systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.
An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.
Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.
2017-06-13
An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.
Spatially Resolved Isotopic Source Signatures of Wetland Methane Emissions
NASA Astrophysics Data System (ADS)
Ganesan, A. L.; Stell, A. C.; Gedney, N.; Comyn-Platt, E.; Hayman, G.; Rigby, M.; Poulter, B.; Hornibrook, E. R. C.
2018-04-01
We present the first spatially resolved wetland δ13C(CH4) source signature map based on data characterizing wetland ecosystems and demonstrate good agreement with wetland signatures derived from atmospheric observations. The source signature map resolves a latitudinal difference of 10‰ between northern high-latitude (mean -67.8‰) and tropical (mean -56.7‰) wetlands and shows significant regional variations on top of the latitudinal gradient. We assess the errors in inverse modeling studies aiming to separate CH4 sources and sinks by comparing atmospheric δ13C(CH4) derived using our spatially resolved map against the common assumption of globally uniform wetland δ13C(CH4) signature. We find a larger interhemispheric gradient, a larger high-latitude seasonal cycle, and smaller trend over the period 2000-2012. The implication is that erroneous CH4 fluxes would be derived to compensate for the biases imposed by not utilizing spatially resolved signatures for the largest source of CH4 emissions. These biases are significant when compared to the size of observed signals.
NASA Astrophysics Data System (ADS)
Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen; Bai, Qing
2016-09-01
The Risley-prism-based light beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for pointing error analysis of Risley prisms, based on ray direction deviation in light refraction, is proposed in this paper. The model captures incident beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model, then analyze the independent and cumulative effects of the different errors. An accuracy study of the model shows that the prediction deviation of the pointing error is less than 4.1×10⁻⁵° for each error source when the error amplitude is 0.1°. Detailed analyses indicate that the error sources affect the pointing accuracy to varying degrees, the major source being incident beam deviation. Prism tilt has a relatively large effect on the pointing accuracy when the prism tilts in the principal section. The cumulative effect analyses of multiple errors show that the pointing error can be reduced by tuning the bearing tilt in the same direction. The cumulative effect of rotational error is relatively large when the difference between the two prism rotation angles equals 0 or π, and relatively small when the difference equals π/2. These results suggest that our analysis can help uncover the error distribution and aid in measurement calibration of Risley-prism systems.
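The ray-direction-deviation viewpoint rests on the vector form of Snell's law; below is a sketch of single-face refraction with a hypothetical wedge angle and glass index, not the paper's full transmission-matrix model:

```python
import numpy as np

def refract(d, n_vec, n1, n2):
    """Vector form of Snell's law: refract unit ray d at a surface whose
    unit normal n_vec points against the incident ray."""
    d = d / np.linalg.norm(d)
    n_vec = n_vec / np.linalg.norm(n_vec)
    r = n1 / n2
    c = -np.dot(n_vec, d)
    disc = 1.0 - r**2 * (1.0 - c**2)
    if disc < 0:
        raise ValueError("total internal reflection")
    return r * d + (r * c - np.sqrt(disc)) * n_vec

# Ray direction change at the entry face of a hypothetical 10 deg wedge
# (n = 1.517), for a nominal ray and one with 0.1 deg incident deviation.
wedge = np.deg2rad(10.0)
normal = np.array([np.sin(wedge), 0.0, -np.cos(wedge)])  # toward the source
for tilt_deg in (0.0, 0.1):
    t = np.deg2rad(tilt_deg)
    d_in = np.array([np.sin(t), 0.0, np.cos(t)])
    d_out = refract(d_in, normal, 1.0, 1.517)
    print(f"tilt {tilt_deg:4.1f} deg -> in-glass angle "
          f"{np.rad2deg(np.arctan2(d_out[0], d_out[2])):+.4f} deg")
```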
Yazmir, Boris; Reiner, Miriam
2018-05-15
Any motor action is, by nature, potentially accompanied by human errors. In order to facilitate development of error-tailored Brain-Computer Interface (BCI) correction systems, we focused on internal, human-initiated errors, and investigated EEG correlates of user outcome successes and errors during a continuous 3D virtual tennis game against a computer player. We used a multisensory, 3D, highly immersive environment. Missing and repelling the tennis ball were considered as 'error' (miss) and 'success' (repel). Unlike most previous studies, where the environment "encouraged" the participant to make a mistake, here errors happened naturally, resulting from motor-perceptual-cognitive processes of incorrect estimation of the ball kinematics, and can be regarded as user internal, self-initiated errors. Results show distinct and well-defined Event-Related Potentials (ERPs), embedded in the ongoing EEG, that differ across conditions by waveforms, scalp signal distribution maps, source estimation results (sLORETA) and time-frequency patterns, establishing a series of typical features that allow valid discrimination between user internal outcome success and error. The significant delay in latency between positive peaks of error- and success-related ERPs suggests a cross-talk between top-down and bottom-up processing, represented by an outcome recognition process, in the context of the game world. Success-related ERPs had a central scalp distribution, while error-related ERPs were centro-parietal. The unique characteristics and sharp differences between EEG correlates of error/success provide the crucial components for an improved BCI system. The features of the EEG waveform can be used to detect user action outcome, to be fed into the BCI correction system. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions
NASA Astrophysics Data System (ADS)
McCullough, Christopher; Bettadpur, Srinivas
2015-04-01
In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
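A small demonstration of how finite-precision accumulation error of the kind discussed above arises, and how compensated (Kahan) summation suppresses it; single precision is used only to make the effect visible at modest n, while the same mechanism operates far more weakly in double precision:

```python
import numpy as np

def kahan_sum(values):
    """Compensated summation: carries the low-order bits lost at each add."""
    s = np.float32(0.0)
    c = np.float32(0.0)  # running compensation
    for v in values:
        y = v - c
        t = s + y
        c = (t - s) - y
        s = t
    return s

steps = np.full(1_000_000, np.float32(0.1))  # e.g. repeated integrator steps
naive = np.float32(0.0)
for v in steps:
    naive += v

exact = 1_000_000 * float(np.float32(0.1))   # reference sum in double precision
print(f"naive float32 error: {abs(float(naive) - exact):.3f}")
print(f"Kahan float32 error: {abs(float(kahan_sum(steps)) - exact):.6f}")
```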
The impact of modelling errors on interferometer calibration for 21 cm power spectra
NASA Astrophysics Data System (ADS)
Ewall-Wice, Aaron; Dillon, Joshua S.; Liu, Adrian; Hewitt, Jacqueline
2017-09-01
We study the impact of sky-based calibration errors from source mismodelling on 21 cm power spectrum measurements with an interferometer and propose a method for suppressing their effects. While emission from faint sources that are not accounted for in calibration catalogues is believed to be spectrally smooth, deviations of true visibilities from model visibilities are not, due to the inherent chromaticity of the interferometer's sky response (the 'wedge'). Thus, unmodelled foregrounds, below the confusion limit of many instruments, introduce frequency structure into gain solutions on the same line-of-sight scales on which we hope to observe the cosmological signal. We derive analytic expressions describing these errors using linearized approximations of the calibration equations and estimate the impact of this bias on measurements of the 21 cm power spectrum during the epoch of reionization. Given our current precision in primary beam and foreground modelling, this noise will significantly impact the sensitivity of existing experiments that rely on sky-based calibration. Our formalism describes the scaling of calibration with array and sky-model parameters and can be used to guide future instrument design and calibration strategy. We find that sky-based calibration that downweights long baselines can eliminate contamination in most of the region outside of the wedge with only a modest increase in instrumental noise.
Peleato, Nicolás M; Andrews, Robert C
2015-01-01
This work investigated the application of several fluorescence excitation-emission matrix analysis methods as natural organic matter (NOM) indicators for use in predicting the formation of trihalomethanes (THMs) and haloacetic acids (HAAs). Waters from four different sources (two rivers and two lakes) were subjected to jar testing followed by 24-h disinfection by-product formation tests using chlorine. NOM was quantified using three common measures: dissolved organic carbon, ultraviolet absorbance at 254 nm, and specific ultraviolet absorbance, as well as by principal component analysis, peak picking, and parallel factor analysis of fluorescence spectra. Based on multi-linear modeling of THMs and HAAs, principal component (PC) scores resulted in the lowest mean squared prediction error of cross-folded test sets (THMs: 43.7 (μg/L)², HAAs: 233.3 (μg/L)²). Inclusion of principal components representative of protein-like material significantly decreased prediction error for both THMs and HAAs. Parallel factor analysis did not identify a protein-like component and resulted in prediction errors similar to traditional NOM surrogates as well as fluorescence peak picking. These results support the value of fluorescence excitation-emission matrix-principal component analysis as a suitable NOM indicator in predicting the formation of THMs and HAAs for the water sources studied. Copyright © 2014. Published by Elsevier B.V.
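A sketch of the PC-scores-plus-multilinear-regression workflow with synthetic stand-in data (the study's EEM measurements are not reproduced here):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Rows are water samples; columns are unfolded EEM intensities (synthetic).
rng = np.random.default_rng(1)
X = rng.random((40, 500))                              # 40 samples, 500 pixels
y = X[:, :5].sum(axis=1) * 20 + rng.normal(0, 2, 40)   # synthetic THM (ug/L)

model = make_pipeline(StandardScaler(), PCA(n_components=5),
                      LinearRegression())
mse = -cross_val_score(model, X, y, cv=5,
                       scoring="neg_mean_squared_error").mean()
print(f"cross-folded MSE: {mse:.1f} (ug/L)^2")
```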
Interpolation for de-Dopplerisation
NASA Astrophysics Data System (ADS)
Graham, W. R.
2018-05-01
'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
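A brief comparison in the spirit of the paper's conclusion, resampling a tone at off-grid instants with linear and cubic B-spline interpolation; scipy's make_interp_spline performs the required pre-filtering internally, and the signal parameters are illustrative:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

fs, f0 = 2000.0, 100.0             # sample rate and tone frequency (Hz)
t = np.arange(0, 1, 1 / fs)
sig = np.sin(2 * np.pi * f0 * t)

t_query = t[:-1] + 0.37 / fs       # off-grid (de-Dopplerised) instants
linear = np.interp(t_query, t, sig)
spline = make_interp_spline(t, sig, k=3)(t_query)  # pre-filter done inside

truth = np.sin(2 * np.pi * f0 * t_query)
for name, est in (("linear", linear), ("B-spline", spline)):
    rms = np.sqrt(np.mean((est - truth) ** 2))
    print(f"{name:9s} RMS error: {rms:.2e}")       # spline is far smaller
```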
GPS Data Filtration Method for Drive Cycle Analysis Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duran, A.; Earleywine, M.
2013-02-01
When employing GPS data acquisition systems to capture vehicle drive-cycle information, a number of errors often appear in the raw data samples, such as sudden signal loss, extraneous or outlying data points, speed drifting, and signal white noise, all of which limit the quality of field data for use in downstream applications. Unaddressed, these errors significantly impact the reliability of source data and limit the effectiveness of traditional drive-cycle analysis approaches and vehicle simulation software. Without reliable speed and time information, the validity of derived metrics for drive cycles, such as acceleration, power, and distance, becomes questionable. This study explores some of the common sources of error present in raw onboard GPS data and presents a detailed filtering process designed to correct for these issues. Test data from both light and medium/heavy duty applications are examined to illustrate the effectiveness of the proposed filtration process across the range of vehicle vocations. Graphical comparisons of raw and filtered cycles are presented, and statistical analyses are performed to determine the effects of the proposed filtration process on raw data. Finally, the overall benefits of data filtration on raw GPS data are evaluated, and potential areas for continued research are presented.
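A simplified stand-in for the kind of filtration process described, combining acceleration-based outlier rejection, gap filling and median smoothing; the threshold and window length are assumptions, not the report's values:

```python
import numpy as np

def filter_speed_trace(speed_mps, max_accel=4.0, dt=1.0, window=5):
    """Drop points implying impossible accelerations, interpolate the gaps,
    then median-smooth. A toy version of a multi-step GPS filter."""
    s = np.asarray(speed_mps, dtype=float)
    accel = np.diff(s, prepend=s[0]) / dt
    s[np.abs(accel) > max_accel] = np.nan       # flag spikes
    idx = np.arange(len(s))
    good = ~np.isnan(s)
    s = np.interp(idx, idx[good], s[good])      # fill gaps by interpolation
    half = window // 2
    padded = np.pad(s, half, mode="edge")
    return np.array([np.median(padded[i:i + window]) for i in range(len(s))])

raw = [0, 1, 2, 30, 4, 5, 5, 6, 5, 5]           # the 30 m/s point is a spike
print(filter_speed_trace(raw))
```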
Insight into biases and sequencing errors for amplicon sequencing with the Illumina MiSeq platform.
Schirmer, Melanie; Ijaz, Umer Z; D'Amore, Rosalinda; Hall, Neil; Sloan, William T; Quince, Christopher
2015-03-31
With read lengths of currently up to 2 × 300 bp, high throughput and low sequencing costs Illumina's MiSeq is becoming one of the most utilized sequencing platforms worldwide. The platform is manageable and affordable even for smaller labs. This enables quick turnaround on a broad range of applications such as targeted gene sequencing, metagenomics, small genome sequencing and clinical molecular diagnostics. However, Illumina error profiles are still poorly understood and programs are therefore not designed for the idiosyncrasies of Illumina data. A better knowledge of the error patterns is essential for sequence analysis and vital if we are to draw valid conclusions. Studying true genetic variation in a population sample is fundamental for understanding diseases, evolution and origin. We conducted a large study on the error patterns for the MiSeq based on 16S rRNA amplicon sequencing data. We tested state-of-the-art library preparation methods for amplicon sequencing and showed that the library preparation method and the choice of primers are the most significant sources of bias and cause distinct error patterns. Furthermore we tested the efficiency of various error correction strategies and identified quality trimming (Sickle) combined with error correction (BayesHammer) followed by read overlapping (PANDAseq) as the most successful approach, reducing substitution error rates on average by 93%. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Liu, Yan; Salvendy, Gavriel
2009-05-01
This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on five most widely used statistical analysis tools have been discussed and illustrated: correlation; ANOVA; linear regression; factor analysis; linear discriminant analysis. It has been shown that measurement errors can greatly attenuate correlations between variables, reduce statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, underrate the explanation contributions of the most important factors in factor analysis and depreciate the significance of discriminant function and discrimination abilities of individual variables in discrimination analysis. The discussions will be restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement errors, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experiment results, which the authors believe is very critical to research progress in theory development and cumulative knowledge in the ergonomics field.
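The attenuation of correlation by random measurement error can be demonstrated numerically; the classical prediction is r_observed = r_true·sqrt(rel_x·rel_y), where rel denotes scale reliability:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
true_x = rng.normal(size=n)
true_y = 0.6 * true_x + np.sqrt(1 - 0.6**2) * rng.normal(size=n)  # r_true = 0.6

obs_x = true_x + rng.normal(scale=0.8, size=n)  # noisy measurement scales
obs_y = true_y + rng.normal(scale=0.8, size=n)
rel = 1.0 / (1.0 + 0.8**2)                      # reliability of each scale

r_obs = np.corrcoef(obs_x, obs_y)[0, 1]
print(f"observed r = {r_obs:.3f}, predicted = {0.6 * rel:.3f}")  # both ~0.366
```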
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
SUS Source Level Error Analysis
1978-01-20
Report documentation page (form residue removed). Keywords: Fast Fourier Transform (FFT); SUS signal model. Abstract: The report provides an analysis of major terms which contribute to signal analysis error in a proposed experiment to calibrate source levels of SUS (Signal Underwater Sound).
Inference of emission rates from multiple sources using Bayesian probability theory.
Yee, Eugene; Flesch, Thomas K
2010-03-01
The determination of atmospheric emission rates from multiple sources using inversion (regularized least-squares or best-fit technique) is known to be very susceptible to measurement and model errors in the problem, rendering the solution unusable. In this paper, a new perspective is offered for this problem: namely, it is argued that the problem should be addressed as one of inference rather than inversion. Towards this objective, Bayesian probability theory is used to estimate the emission rates from multiple sources. The posterior probability distribution for the emission rates is derived, accounting fully for the measurement errors in the concentration data and the model errors in the dispersion model used to interpret the data. The Bayesian inferential methodology for emission rate recovery is validated against real dispersion data, obtained from a field experiment involving various source-sensor geometries (scenarios) consisting of four synthetic area sources and eight concentration sensors. The recovery of discrete emission rates from three different scenarios obtained using Bayesian inference and singular value decomposition inversion are compared and contrasted.
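For the linear source-receptor case, the Bayesian posterior over emission rates is available in closed form under Gaussian noise and prior; the sketch below uses illustrative couplings rather than a real dispersion model:

```python
import numpy as np

# Concentrations y = C q + e with e ~ N(0, sigma^2 I); prior q ~ N(0, tau^2 I).
rng = np.random.default_rng(7)
n_sensors, n_sources = 8, 4
C = rng.random((n_sensors, n_sources))   # dispersion-model couplings (assumed)
q_true = np.array([2.0, 0.5, 1.0, 3.0])  # g/s
sigma, tau = 0.05, 10.0
y = C @ q_true + rng.normal(0, sigma, n_sensors)

A = C.T @ C / sigma**2 + np.eye(n_sources) / tau**2  # posterior precision
cov = np.linalg.inv(A)
q_mean = cov @ C.T @ y / sigma**2                    # posterior mean
for qm, sd in zip(q_mean, np.sqrt(np.diag(cov))):
    print(f"{qm:.2f} +/- {sd:.2f} g/s")
```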
Patrick L. Zimmerman; Greg C. Liknes
2010-01-01
Dot grids are often used to estimate the proportion of land cover belonging to some class in an aerial photograph. Interpreter misclassification is an often-ignored source of error in dot-grid sampling that has the potential to significantly bias proportion estimates. For the case when the true class of items is unknown, we present a maximum-likelihood estimator of...
Ueguchi, Takashi; Ogihara, Ryota; Yamada, Sachiko
2018-03-21
To investigate the accuracy of dual-energy virtual monochromatic computed tomography (CT) numbers obtained by two typical hardware and software implementations: the single-source projection-based method and the dual-source image-based method. A phantom with different tissue equivalent inserts was scanned with both single-source and dual-source scanners. A fast kVp-switching feature was used on the single-source scanner, whereas a tin filter was used on the dual-source scanner. Virtual monochromatic CT images of the phantom at energy levels of 60, 100, and 140 keV were obtained by both projection-based (on the single-source scanner) and image-based (on the dual-source scanner) methods. The accuracy of virtual monochromatic CT numbers for all inserts was assessed by comparing measured values to their corresponding true values. Linear regression analysis was performed to evaluate the dependency of measured CT numbers on tissue attenuation, method, and their interaction. Root mean square values of systematic error over all inserts at 60, 100, and 140 keV were approximately 53, 21, and 29 Hounsfield unit (HU) with the single-source projection-based method, and 46, 7, and 6 HU with the dual-source image-based method, respectively. Linear regression analysis revealed that the interaction between the attenuation and the method had a statistically significant effect on the measured CT numbers at 100 and 140 keV. There were attenuation-, method-, and energy level-dependent systematic errors in the measured virtual monochromatic CT numbers. CT number reproducibility was comparable between the two scanners, and CT numbers had better accuracy with the dual-source image-based method at 100 and 140 keV. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Increased Error-Related Negativity (ERN) in Childhood Anxiety Disorders: ERP and Source Localization
ERIC Educational Resources Information Center
Ladouceur, Cecile D.; Dahl, Ronald E.; Birmaher, Boris; Axelson, David A.; Ryan, Neal D.
2006-01-01
Background: In this study we used event-related potentials (ERPs) and source localization analyses to track the time course of neural activity underlying response monitoring in children diagnosed with an anxiety disorder compared to age-matched low-risk normal controls. Methods: High-density ERPs were examined following errors on a flanker task…
Development of Action Monitoring through Adolescence into Adulthood: ERP and Source Localization
ERIC Educational Resources Information Center
Ladouceur, Cecile D.; Dahl, Ronald E.; Carter, Cameron S.
2007-01-01
In this study we examined the development of three action monitoring event-related potentials (ERPs)--the error-related negativity (ERN/Ne), error positivity (P[subscript E]) and the N2--and estimated their neural sources. These ERPs were recorded during a flanker task in the following groups: early adolescents (mean age = 12 years), late…
Random Error in Judgment: The Contribution of Encoding and Retrieval Processes
ERIC Educational Resources Information Center
Pleskac, Timothy J.; Dougherty, Michael R.; Rivadeneira, A. Walkyria; Wallsten, Thomas S.
2009-01-01
Theories of confidence judgments have embraced the role random error plays in influencing responses. An important next step is to identify the source(s) of these random effects. To do so, we used the stochastic judgment model (SJM) to distinguish the contribution of encoding and retrieval processes. In particular, we investigated whether dividing…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yashchuk, Valeriy V.; Morrison, Gregory Y.; Marcus, Matthew A.
The Advanced Light Source (ALS) beamline (BL) 10.3.2 is an apparatus for X-ray microprobe spectroscopy and diffraction experiments, operating in the energy range 2.4–17 keV. The performance of the beamline, namely the spatial and energy resolutions of the measurements, depends significantly on the collimation quality of light incident on the monochromator. In the BL 10.3.2 end-station, the synchrotron source is imaged 1:1 onto a set of roll slits which form a virtual source. The light from this source is collimated in the vertical direction by a bendable parabolic cylinder mirror. Details are presented of the mirror design, which allows for precision assembly, alignment and shaping of the mirror, as well as for extending the mirror operating lifetime by a factor of ~10. Assembly, mirror optimal shaping and preliminary alignment were performed ex situ in the ALS X-ray Optics Laboratory (XROL). Using an original method for optimal ex situ characterization and setting of bendable X-ray optics developed at the XROL, a root-mean-square (RMS) residual surface slope error of 0.31 µrad with respect to the desired parabola, and an RMS residual height error of less than 3 nm were achieved. Once in place at the beamline, deviations from the designed optical geometry (e.g. due to the tolerances for setting the distance to the virtual source, the grazing incidence angle, the transverse position) and/or mirror shape (e.g. due to a heat load deformation) may appear. Due to the errors, on installation the energy spread from the monochromator is typically a few electron-volts. Here, a new technique developed and successfully implemented for at-wavelength (in situ) fine optimal tuning of the mirror, enabling us to reduce the collimation-induced energy spread to ~0.05 eV, is described.
Yashchuk, Valeriy V.; Morrison, Gregory Y.; Marcus, Matthew A.; Domning, Edward E.; Merthe, Daniel J.; Salmassi, Farhad; Smith, Brian V.
2015-01-01
The Advanced Light Source (ALS) beamline (BL) 10.3.2 is an apparatus for X-ray microprobe spectroscopy and diffraction experiments, operating in the energy range 2.4–17 keV. The performance of the beamline, namely the spatial and energy resolutions of the measurements, depends significantly on the collimation quality of light incident on the monochromator. In the BL 10.3.2 end-station, the synchrotron source is imaged 1:1 onto a set of roll slits which form a virtual source. The light from this source is collimated in the vertical direction by a bendable parabolic cylinder mirror. Details are presented of the mirror design, which allows for precision assembly, alignment and shaping of the mirror, as well as for extending the mirror operating lifetime by a factor of ∼10. Assembly, mirror optimal shaping and preliminary alignment were performed ex situ in the ALS X-ray Optics Laboratory (XROL). Using an original method for optimal ex situ characterization and setting of bendable X-ray optics developed at the XROL, a root-mean-square (RMS) residual surface slope error of 0.31 µrad with respect to the desired parabola, and an RMS residual height error of less than 3 nm were achieved. Once in place at the beamline, deviations from the designed optical geometry (e.g. due to the tolerances for setting the distance to the virtual source, the grazing incidence angle, the transverse position) and/or mirror shape (e.g. due to a heat load deformation) may appear. Due to the errors, on installation the energy spread from the monochromator is typically a few electron-volts. Here, a new technique developed and successfully implemented for at-wavelength (in situ) fine optimal tuning of the mirror, enabling us to reduce the collimation-induced energy spread to ∼0.05 eV, is described. PMID:25931083
Yashchuk, Valeriy V.; Morrison, Gregory Y.; Marcus, Matthew A.; ...
2015-04-08
The Advanced Light Source (ALS) beamline (BL) 10.3.2 is an apparatus for X-ray microprobe spectroscopy and diffraction experiments, operating in the energy range 2.4–17 keV. The performance of the beamline, namely the spatial and energy resolutions of the measurements, depends significantly on the collimation quality of light incident on the monochromator. In the BL 10.3.2 end-station, the synchrotron source is imaged 1:1 onto a set of roll slits which form a virtual source. The light from this source is collimated in the vertical direction by a bendable parabolic cylinder mirror. Details are presented of the mirror design, which allows for precision assembly, alignment and shaping of the mirror, as well as for extending the mirror operating lifetime by a factor of ~10. Assembly, mirror optimal shaping and preliminary alignment were performed ex situ in the ALS X-ray Optics Laboratory (XROL). Using an original method for optimal ex situ characterization and setting of bendable X-ray optics developed at the XROL, a root-mean-square (RMS) residual surface slope error of 0.31 µrad with respect to the desired parabola, and an RMS residual height error of less than 3 nm were achieved. Once in place at the beamline, deviations from the designed optical geometry (e.g. due to the tolerances for setting the distance to the virtual source, the grazing incidence angle, the transverse position) and/or mirror shape (e.g. due to a heat load deformation) may appear. Due to the errors, on installation the energy spread from the monochromator is typically a few electron-volts. Here, a new technique developed and successfully implemented for at-wavelength (in situ) fine optimal tuning of the mirror, enabling us to reduce the collimation-induced energy spread to ~0.05 eV, is described.
Measuring the Lense-Thirring precession using a second Lageos satellite
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Ciufolini, I.
1989-01-01
A complete numerical simulation and error analysis was performed for the proposed experiment with the objective of establishing an accurate assessment of the feasibility and the potential accuracy of the measurement of the Lense-Thirring precession. Consideration was given to identifying the error sources which limit the accuracy of the experiment and proposing procedures for eliminating or reducing the effect of these errors. Analytic investigations were conducted to study the effects of major error sources with the objective of providing error bounds on the experiment. The analysis of realistic simulated data is used to demonstrate that satellite laser ranging of two Lageos satellites, orbiting with supplemental inclinations, collected for a period of 3 years or more, can be used to verify the Lense-Thirring precession. A comprehensive covariance analysis for the solution was also developed.
The Use of Neural Networks in Identifying Error Sources in Satellite-Derived Tropical SST Estimates
Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin
2011-01-01
A neural network model of data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). By using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors of GOES SST products in the tropical Pacific. The accuracy of SST estimates is also improved by the model. The root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K and the mean absolute percentage error (MAPE) is 1.03%. For the hourly mean SST estimate, the RMSE is also reduced from 0.66 K to 0.44 K and the MAPE is 1.3%. PMID:22164030
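A back-propagation-style sketch of the correction idea, training a small network to predict the SST retrieval error from the three factors identified above; the data are synthetic stand-ins for collocated satellite/buoy matchups:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
n = 2000
X = np.column_stack([rng.normal(28, 2, n),    # air temperature (deg C)
                     rng.uniform(60, 95, n),  # relative humidity (%)
                     rng.exponential(2, n)])  # wind speed variation (m/s)
# Synthetic SST retrieval error (K) driven by the three factors plus noise
err = (0.1 * (X[:, 0] - 28) + 0.01 * (X[:, 1] - 75)
       + 0.08 * X[:, 2] + rng.normal(0, 0.2, n))

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                                 random_state=0))
net.fit(X[:1500], err[:1500])
rmse = mean_squared_error(err[1500:], net.predict(X[1500:])) ** 0.5
print(f"residual RMSE after correction: {rmse:.2f} K")  # near the noise floor
```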
Article Errors in the English Writing of Saudi EFL Preparatory Year Students
ERIC Educational Resources Information Center
Alhaisoni, Eid; Gaudel, Daya Ram; Al-Zuoud, Khalid M.
2017-01-01
This study aims at providing a comprehensive account of the types of errors produced by Saudi EFL students enrolled in the preparatory year programe in their use of articles, based on the Surface Structure Taxonomies (SST) of errors. The study describes the types, frequency and sources of the definite and indefinite article errors in writing…
An Analysis of Spanish and German Learners' Errors. Working Papers on Bilingualism, No. 7.
ERIC Educational Resources Information Center
LoCoco, Veronica Gonzalez-Mena
This study analyzes Spanish and German errors committed by adult native speakers of English enrolled in elementary and intermediate levels. Four written samples were collected for each target language, over a period of five months. Errors were categorized according to their possible source. Types of errors were ordered according to their…
NASA Astrophysics Data System (ADS)
Acebron, Ana; Jullo, Eric; Limousin, Marceau; Tilquin, André; Giocoli, Carlo; Jauzac, Mathilde; Mahler, Guillaume; Richard, Johan
2017-09-01
Strong gravitational lensing by galaxy clusters is a fundamental tool to study dark matter and constrain the geometry of the Universe. Recently, the Hubble Space Telescope Frontier Fields programme has allowed a significant improvement of mass and magnification measurements, but lensing models still have a residual root mean square between 0.2 arcsec and a few arcseconds, not yet completely understood. Systematic errors have to be better understood and treated in order to use strong lensing clusters as reliable cosmological probes. We have analysed two simulated Hubble-Frontier-Fields-like clusters from the Hubble Frontier Fields Comparison Challenge, Ares and Hera. We use several estimators (relative bias on magnification, density profiles, ellipticity and orientation) to quantify the goodness of our reconstructions by comparing our multiple models, optimized with the parametric software lenstool, with the input models. We have quantified the impact of systematic errors arising, first, from the choice of different density profiles and configurations and, secondly, from the availability of constraints (spectroscopic or photometric redshifts, redshift ranges of the background sources) in the parametric modelling of strong lensing galaxy clusters and therefore on the retrieval of cosmological parameters. We find that substructures in the outskirts have a significant impact on the position of the multiple images, yielding tighter cosmological contours. The need for wide-field imaging around massive clusters is thus reinforced. We show that competitive cosmological constraints can be obtained also with complex multimodal clusters and that photometric redshifts improve the constraints on cosmological parameters when considering a narrow range of (spectroscopic) redshifts for the sources.
Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool
NASA Astrophysics Data System (ADS)
Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo
2017-05-01
Aiming at the problems of low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling and compensation of a two-turntable five-axis machine tool are researched. Measurement experiments on heat sources and thermal errors are carried out, and the GRA (grey relational analysis) method is introduced into the selection of temperature variables used for thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train the link weights of the ANN; a new ABC-NN (artificial bee colony based neural network) modeling method is proposed and used in the prediction of spindle thermal errors. In order to test the prediction performance of the ABC-NN model, an experiment system is developed, and the prediction results of LSR (least squares regression), ANN and ABC-NN are compared with the measurement results of spindle thermal errors. Experiment results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, and the residual error is smaller than 3 μm; the new modeling method is feasible. The proposed research provides instruction to compensate thermal errors and improve the machining accuracy of NC machine tools.
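A minimal sketch of the grey relational analysis step used for temperature-variable selection, ranking candidate temperature sequences against the thermal error sequence; the data and the distinguishing coefficient ρ = 0.5 are illustrative:

```python
import numpy as np

def grey_relational_grade(reference, candidate, rho=0.5):
    """Grey relational grade between the thermal error sequence and one
    candidate temperature sequence, after min-max normalization."""
    norm = lambda s: (s - s.min()) / (s.max() - s.min())
    delta = np.abs(norm(reference) - norm(candidate))
    xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return xi.mean()

t = np.linspace(0, 4, 50)                    # hours of machining (synthetic)
thermal_error = 20 * (1 - np.exp(-t / 1.5))  # um, saturating drift
temp_A = 22 + 8 * (1 - np.exp(-t / 1.4))     # well-correlated sensor
temp_B = 22 + 2 * np.sin(t)                  # poorly correlated sensor
for name, s in (("A", temp_A), ("B", temp_B)):
    print(name, round(grey_relational_grade(thermal_error, s), 3))  # A ranks higher
```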
NASA Astrophysics Data System (ADS)
Zhang, Shou-ping; Xin, Xiao-kang
2017-07-01
Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency rescue, and an intelligent optimization method can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool and applying an analytic solution formula of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately determine the pollutant amounts or positions whether there is a single pollution source or multiple sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with the analytic results for single-source amount and position identification, with relative errors of no more than 5%. For cases of multi-point sources and multiple variables, there are some errors in the computed results because many possible combinations of the pollution sources exist. But with the help of previous experience to narrow the search scope, the relative errors of the identification results are less than 5%, which proves the established source identification model can be used to direct emergency responses.
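A sketch of the objective-function construction described above, using the analytic solution of the 1D unsteady advection-dispersion equation for an instantaneous release; the BGA search itself is omitted, and all hydraulic parameters are assumptions:

```python
import numpy as np

def conc_1d(x, t, M, x0, A, u, D, k):
    """Analytic 1D unsteady solution for an instantaneous mass M released at
    x0: cross-section A, advection u, dispersion D, first-order decay k."""
    return (M / (A * np.sqrt(4 * np.pi * D * t))
            * np.exp(-(x - x0 - u * t) ** 2 / (4 * D * t)) * np.exp(-k * t))

def objective(params, obs_x, obs_t, obs_c, A=100.0, u=0.3, D=5.0, k=1e-5):
    """Sum-of-squares misfit used as the GA fitness (to be minimized);
    params = (released mass M, source position x0)."""
    M, x0 = params
    pred = conc_1d(obs_x, obs_t, M, x0, A, u, D, k)
    return np.sum((pred - obs_c) ** 2)

# Synthetic observations from a "true" spill of 500 kg at x0 = 1200 m
obs_x = np.array([3000.0, 3000.0, 3000.0])
obs_t = np.array([3600.0, 7200.0, 10800.0])
obs_c = conc_1d(obs_x, obs_t, 500.0, 1200.0, 100.0, 0.3, 5.0, 1e-5)
print(objective((500.0, 1200.0), obs_x, obs_t, obs_c))  # ~0 at the optimum
```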
MO-FG-202-06: Improving the Performance of Gamma Analysis QA with Radiomics- Based Image Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wootton, L; Nyflot, M; Ford, E
2016-06-15
Purpose: The use of gamma analysis for IMRT quality assurance has well-known limitations. Traditionally, a simple thresholding technique is used to evaluate passing criteria. However, like any image, the gamma distribution is rich in information, which thresholding mostly discards. We therefore propose a novel method of analyzing gamma images that uses quantitative image features borrowed from radiomics, with the goal of improving error detection. Methods: 368 gamma images were generated from 184 clinical IMRT beams. For each beam the dose to a phantom was measured with EPID dosimetry and compared to the TPS dose calculated with and without normally distributed (2 mm sigma) errors in MLC positions. The magnitudes of 17 intensity-histogram and size-zone radiomic features were derived from each image. The features that differed most significantly between image sets were determined with ROC analysis. A linear machine-learning model was trained on these features to classify images as with or without errors on 180 gamma images. The model was then applied to an independent validation set of 188 additional gamma distributions, half with and half without errors. Results: The most significant features for detecting errors were histogram kurtosis (p=0.007) and three size-zone metrics (p<1e-6 for each). The size-zone metrics detected clusters of high gamma-value pixels under mispositioned MLCs. The model applied to the validation set had an AUC of 0.8, compared to 0.56 for traditional gamma analysis with the decision threshold restricted to 98% or less. Conclusion: A radiomics-based image analysis method was developed that is more effective in detecting errors than traditional gamma analysis. Though the pilot study here considers only MLC position errors, radiomics-based methods for other error types are being developed, which may provide better error detection and useful information on the source of detected errors. This work was partially supported by a grant from the Agency for Healthcare Research and Quality, grant number R18 HS022244-01.
NASA Astrophysics Data System (ADS)
Gabor, Allen H.; Brendler, Andrew C.; Brunner, Timothy A.; Chen, Xuemei; Culp, James A.; Levinson, Harry J.
2018-03-01
The relationship between edge placement error, semiconductor design-rule determination and predicted yield in the era of EUV lithography is examined. This paper starts with the basics of edge placement error and then builds up to design-rule calculations. We show that edge placement error (EPE) definitions can be used as the building blocks for design-rule equations, but that in the last several years the term "EPE" has been used in the literature to refer to many patterning errors that are not EPE. We then explore the concept of "Good Fields" [1] and use it to predict the n-sigma value needed for design-rule determination. Specifically, fundamental yield calculations based on the failure opportunities per chip are used to determine at what n-sigma "value" design rules need to be tested to ensure high yield. The "value" can be a space between two features, an intersect area between two features, a minimum area of a feature, etc. It is shown that across-chip variation of design-rule-relevant values needs to be tested at sigma values between seven and eight, which is much higher than the four-sigma values traditionally used for design-rule determination. After recommending that new statistics be used for design-rule calculations, the paper examines the impact of EUV lithography on the sources of variation important for design-rule calculations. We show that stochastics can be treated as an effective dose variation that is fully sampled across every chip. Combining the increased within-chip variation from EUV with the requirement that across-chip variation of design-rule-relevant values must not cause yield loss at significantly higher sigma values than have traditionally been considered, we conclude that across-wafer, wafer-to-wafer and lot-to-lot variation will have to overscale for any technology introducing EUV lithography where stochastic noise is a significant fraction of the effective dose variation. We emphasize stochastic effects on edge placement error distributions and appropriate design-rule setting. While CD distributions with long tails arising from stochastic effects do bring increased risk of failure (especially on chips that may have over a billion failure opportunities per layer), other sources of variation have sharp cutoffs, i.e., no tails. We review these sources and show how distributions with different skew and kurtosis values combine.
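As a worked example of the yield argument, assuming independent failure opportunities, a target yield Y over N opportunities implies a per-opportunity failure probability p = 1 - Y^(1/N), whose one-sided normal quantile gives the sigma level at which a design-rule value must be tested. The numbers below are illustrative, not the paper's.

```python
from statistics import NormalDist

n_opportunities = 1e9   # failure opportunities per chip layer (assumed)
target_yield = 0.99     # desired fraction of chips with no failure (assumed)

# Y = (1 - p)**N  =>  per-opportunity failure probability p = 1 - Y**(1/N)
p = 1.0 - target_yield ** (1.0 / n_opportunities)

# One-sided normal quantile: the sigma level a tested value must survive.
n_sigma = NormalDist().inv_cdf(1.0 - p)
print(f"p = {p:.3e} per opportunity -> test at {n_sigma:.2f} sigma")
```

With a billion opportunities the required level already lands near seven sigma, and tightening the yield target or adding layers pushes it higher.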
Repeatability and reproducibility of ribotyping and its computer interpretation.
Lefresne, Gwénola; Latrille, Eric; Irlinger, Françoise; Grimont, Patrick A D
2004-04-01
Many molecular typing methods are difficult to interpret because their repeatability (within-laboratory variance) and reproducibility (between-laboratory variance) have not been thoroughly studied. In the present work, ribotyping of coryneform bacteria was the basis of a study involving within-gel and between-gel repeatability and between-laboratory reproducibility (two laboratories involved). The effect of different technical protocols, different algorithms, and different software for fragment size determination was studied. Analysis of variance (ANOVA) showed, within a laboratory, that there was no significant added variance between gels. However, between-laboratory variance was significantly higher than within-laboratory variance. This may be due to the use of different protocols. An experimental function was calculated to transform the data and make them compatible (i.e., erase the between-laboratory variance). The use of different interpolation algorithms (spline, Schaffer and Sederoff) was a significant source of variation in one laboratory only. The use of either Taxotron (Institut Pasteur) or GelCompar (Applied Maths) was not a significant source of added variation when the same algorithm (spline) was used. However, the use of Bio-Gene (Vilber Lourmat) dramatically increased the error (within laboratory, within gel) in one laboratory, while decreasing the error in the other laboratory; this might be due to automatic normalization attempts. These results were taken into account for building a database and performing automatic pattern identification using Taxotron. Conversion of the data considerably improved the identification of patterns irrespective of the laboratory in which the data were obtained.
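A minimal sketch of the underlying variance-component logic, using a one-way random-effects ANOVA on invented replicate fragment-size measurements (the study's actual design also crossed laboratories, protocols, algorithms and software):

```python
import numpy as np

# Invented replicate fragment-size measurements (kb): rows = gels,
# columns = within-gel replicates of the same ribotype band.
sizes = np.array([[4.02, 4.05, 3.98],
                  [4.10, 4.08, 4.12],
                  [4.01, 3.99, 4.03]])
k, n = sizes.shape                       # number of gels, replicates per gel

grand = sizes.mean()
ms_between = n * ((sizes.mean(axis=1) - grand) ** 2).sum() / (k - 1)
ms_within = ((sizes - sizes.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))

var_within = ms_within                                # repeatability
var_between = max(0.0, (ms_between - ms_within) / n)  # added between-gel variance
print(f"within-gel variance  = {var_within:.5f}")
print(f"between-gel variance = {var_between:.5f}")
```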
The effect of an acute bout of exercise on executive function among individuals with schizophrenia.
Subramaniapillai, Mehala; Tremblay, Luc; Grassmann, Viviane; Remington, Gary; Faulkner, Guy
2016-12-30
Cognitive impairment represents a significant source of disability among individuals with schizophrenia. Therefore, the aim of this study was to investigate, at a proof-of-concept level, whether a single bout of exercise can improve executive function among these individuals. In this within-participant, counterbalanced experiment, participants with schizophrenia (n=36) completed two sessions (cycling at moderate intensity and passively sitting) for 20 min each, with a one-week washout period between the two sessions. Participants completed the Wisconsin Card Sorting Test (WCST) before and after each session to measure changes in executive function. Including both sessions completed by each participant in the analyses revealed a significant carryover effect. Consequently, only the WCST scores from the first session completed by each participant were analyzed. There was a significant time-by-session interaction effect for non-perseverative errors. Post-hoc Tukey's HSD contrasts revealed a significant reduction in non-perseverative errors in the exercise group that was of moderate-to-large effect. Furthermore, there was also a moderate between-group difference at post-testing. Therefore, an acute bout of exercise can improve performance on an executive function task in individuals with schizophrenia. Specifically, the reduction in non-perseverative errors on the WCST may reflect improved attention, inhibition and overall working memory. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Topography-Dependent Motion Compensation: Application to UAVSAR Data
NASA Technical Reports Server (NTRS)
Jones, Cathleen E.; Hensley, Scott; Michel, Thierry
2009-01-01
The UAVSAR L-band synthetic aperture radar system has been designed for repeat track interferometry in support of Earth science applications that require high-precision measurements of small surface deformations over timescales from hours to years. Conventional motion compensation algorithms, which are based upon assumptions of a narrow beam and flat terrain, yield unacceptably large errors in areas with even moderate topographic relief, i.e., in most areas of interest. This often limits the ability to achieve sub-centimeter surface change detection over significant portions of an acquired scene. To reduce this source of error in the interferometric phase, we have implemented an advanced motion compensation algorithm that corrects for the scene topography and radar beam width. Here we discuss the algorithm used, its implementation in the UAVSAR data processor, and the improvement in interferometric phase and correlation achieved in areas with significant topographic relief.
Optimal secondary source position in exterior spherical acoustical holophony
NASA Astrophysics Data System (ADS)
Pasqual, A. M.; Martin, V.
2012-02-01
Exterior spherical acoustical holophony is a branch of spatial audio reproduction that deals with the rendering of a given free-field radiation pattern (the primary field) by using a compact spherical loudspeaker array (the secondary source). More precisely, the primary field is known on a spherical surface surrounding the primary and secondary sources and, since the acoustic fields are described in spherical coordinates, they are naturally subjected to spherical harmonic analysis. Moreover, the inverse problem of deriving optimal driving signals from a known primary field is ill-posed because the secondary source cannot radiate high-order spherical harmonics efficiently, especially in the low-frequency range. As a consequence, a standard least-squares solution will overload the transducers if the primary field contains such harmonics. Here, this is avoided by discarding the strongly decaying spherical waves, which are identified through inspection of the radiation efficiency curves of the secondary source. However, such an unavoidable regularization procedure increases the least-squares error, which also depends on the position of the secondary source. This paper deals with the above-mentioned questions in the context of far-field directivity reproduction at low and medium frequencies. In particular, an optimal secondary source position is sought, which leads to the lowest reproduction error in the least-squares sense without overloading the transducers. In order to address this issue, a regularization quality factor is introduced to evaluate the amount of regularization required. It is shown that the optimal position significantly improves the holophonic reconstruction and maximizes the regularization quality factor (minimizes the amount of regularization), which is the main general contribution of this paper. Therefore, this factor can also be used as a cost function to obtain the optimal secondary source position.
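The regularization-by-truncation idea can be sketched in a few lines: build the radiation matrix, drop the modes the source radiates too weakly, and solve the reduced least-squares problem. The matrix below is random, and mode efficiency is proxied by row norms; the paper instead reads efficiencies off the array's radiation-efficiency curves.

```python
import numpy as np

rng = np.random.default_rng(2)
# Illustrative radiation matrix G (rows: field coefficients on the measurement
# sphere, columns: loudspeaker drivers) and target primary field p.
G = rng.normal(size=(16, 12))
p = rng.normal(size=16)

# Mode "efficiency" proxied by row norms; weakly radiated modes are discarded
# rather than inverted, which is what protects the transducers from overload.
efficiency = np.linalg.norm(G, axis=1)
keep = efficiency > 0.5 * efficiency.max()

w, *_ = np.linalg.lstsq(G[keep], p[keep], rcond=None)   # driving signals
rel_err = np.linalg.norm(G @ w - p) / np.linalg.norm(p)
print(f"kept {keep.sum()}/{keep.size} modes, relative error = {rel_err:.2f}")
```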
Improving NGDC Track-line Data Quality Control
NASA Astrophysics Data System (ADS)
Chandler, M. T.; Wessel, P.
2004-12-01
Ship-board gravity, magnetic and bathymetry data archived at the National Geophysical Data Center (NGDC) represent decades of seagoing research, comprising over 4,500 cruises. Cruise data remain relevant despite the prominence of satellite altimetry-derived global grids because many geologic processes remain resolvable only by oceanographic research. Given the tremendous investment put forth by scientists and taxpayers to compile this vast archive and the significant errors found within it, additional quality assessment and corrections are warranted. These can best be accomplished by adding to existing quality control measures at NGDC. We are currently developing open source software to provide additional quality control. Along with NGDC's current sanity checking, new data at NGDC will also be subjected to an along-track "sniffer" which will detect and flag suspicious data for later graphical inspection using a visual editor. If new data pass these tests, they will undergo further scrutiny using a crossover error (COE) calculator which will compare new data values to existing values at points of intersection within the archive. Data passing these tests will be deemed "quality data" and suitable for permanent addition to the archive, while data that fail will be returned to the source institution for correction. Crossover errors will be stored and an online COE database will be available. The COE database will allow users to apply corrections to the NGDC track-line database to produce corrected data files. At no time will the archived data itself be modified. An attempt will also be made to reduce navigational errors for pre-GPS navigated cruises. Upon completion these programs will be used to explore and model systematic errors within the archive, generate correction tables for all cruises, and quantify the error budget in marine geophysical observations. Software will be released and these procedures will be implemented in cooperation with NGDC staff.
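At its core, a crossover error is the difference between two interpolated track values at the point where the tracks intersect. A minimal sketch for straight segments follows (real track lines are polylines, and production code must also handle datum and drift corrections):

```python
import numpy as np

def crossover_error(p1, p2, v1, v2, q1, q2, w1, w2):
    """COE between straight track segments p1->p2 and q1->q2 carrying scalar
    observations (v1, v2) and (w1, w2) at their endpoints; None if the
    segments do not cross."""
    p1, p2, q1, q2 = (np.asarray(a, dtype=float) for a in (p1, p2, q1, q2))
    A = np.column_stack([p2 - p1, q1 - q2])
    try:
        s, t = np.linalg.solve(A, q1 - p1)
    except np.linalg.LinAlgError:
        return None                       # parallel tracks
    if not (0.0 <= s <= 1.0 and 0.0 <= t <= 1.0):
        return None                       # tracks do not intersect
    return (v1 + s * (v2 - v1)) - (w1 + t * (w2 - w1))

# Two gravity profiles (mGal) crossing at the origin:
print(crossover_error((-1, 0), (1, 0), 10.0, 12.0,
                      (0, -1), (0, 1), 11.0, 12.0))   # -> -0.5
```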
Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly; ...
2017-01-07
Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would be normally expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.
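Assuming the component errors are independent, they combine in quadrature, and each component's share of the total variance can be tabulated directly. The numbers below are invented, merely echoing the magnitudes reported above:

```python
from math import sqrt

# Illustrative 1-sigma error components, as fractions of plot biomass.
components = {"measurement": 0.05, "allometric": 0.12,
              "co-location": 0.15, "temporal": 0.10}

# Independent errors combine in quadrature: total variance is the sum.
total_var = sum(v ** 2 for v in components.values())
for name, v in components.items():
    print(f"{name:12s} {v ** 2 / total_var:6.1%} of total variance")
print(f"overall uncertainty ~ {sqrt(total_var):.1%}")
```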
Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B
2016-05-01
The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by the classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by the Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders and demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG); therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error separation of the HPSAG and the autocollimator, detailed investigations of error sources were carried out. Apart from the determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method offers the unique opportunity to characterize other error sources, such as errors due to temperature drift in long-term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements.
Compensation for loads during arm movements using equilibrium-point control.
Gribble, P L; Ostry, D J
2000-12-01
A significant problem in motor control is how information about movement error is used to modify control signals to achieve desired performance. A potential source of movement error and one that is readily controllable experimentally relates to limb dynamics and associated movement-dependent loads. In this paper, we have used a position control model to examine changes to control signals for arm movements in the context of movement-dependent loads. In the model, based on the equilibrium-point hypothesis, equilibrium shifts are adjusted directly in proportion to the positional error between desired and actual movements. The model is used to simulate multi-joint movements in the presence of both "internal" loads due to joint interaction torques, and externally applied loads resulting from velocity-dependent force fields. In both cases it is shown that the model can achieve close correspondence to empirical data using a simple linear adaptation procedure. An important feature of the model is that it achieves compensation for loads during movement without the need for either coordinate transformations between positional error and associated corrective forces, or inverse dynamics calculations.
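The adaptation rule is simple enough to sketch: after each simulated trial, the commanded equilibrium shifts are corrected in proportion to the positional error, with no inverse dynamics anywhere. The toy plant below (a first-order lag with a velocity-dependent load) is an invented stand-in for the model's limb dynamics:

```python
import numpy as np

def execute(shifts, load=0.3):
    """Toy plant: positions lag the commanded equilibrium shifts and are
    perturbed by a velocity-dependent load (stand-in for limb dynamics)."""
    pos = np.zeros_like(shifts)
    for i in range(1, len(shifts)):
        vel = pos[i - 1] - pos[i - 2] if i > 1 else 0.0
        pos[i] = pos[i - 1] + 0.5 * (shifts[i] - pos[i - 1]) - load * vel
    return pos

desired = np.sin(np.linspace(0.0, np.pi, 50))   # desired trajectory
shifts = desired.copy()                          # initial equilibrium shifts
for trial in range(30):
    error = desired - execute(shifts)
    shifts += 0.5 * error                        # proportional adjustment
rms = np.sqrt(np.mean((desired - execute(shifts)) ** 2))
print(f"RMS error after adaptation: {rms:.4f}")
```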
Optical radiation measurements: instrumentation and sources of error.
Landry, R J; Andersen, F A
1982-07-01
Accurate measurement of optical radiation is required when sources of this radiation are used in biological research. The most difficult measurements of broadband noncoherent optical radiations usually must be performed by a highly trained specialist using sophisticated, complex, and expensive instruments. Presentation of the results of such measurement requires correct use of quantities and units with which many biological researchers are unfamiliar. The measurement process, physical quantities and units, measurement systems with instruments, and sources of error and uncertainties associated with optical radiation measurements are reviewed.
The role of blood vessels in high-resolution volume conductor head modeling of EEG.
Fiederer, L D J; Vorwerk, J; Lucka, F; Dannhauer, M; Yang, S; Dümpelmann, M; Schulze-Bonhage, A; Aertsen, A; Speck, O; Wolters, C H; Ball, T
2016-03-01
Reconstruction of the electrical sources of human EEG activity at high spatio-temporal accuracy is an important aim in neuroscience and neurological diagnostics. Over the last decades, numerous studies have demonstrated that realistic modeling of head anatomy improves the accuracy of source reconstruction of EEG signals. For example, including a cerebro-spinal fluid compartment and the anisotropy of white matter electrical conductivity were both shown to significantly reduce modeling errors. Here, we for the first time quantify the role of detailed reconstructions of the cerebral blood vessels in volume conductor head modeling for EEG. To study the role of the highly arborized cerebral blood vessels, we created a submillimeter head model based on ultra-high-field-strength (7T) structural MRI datasets. Blood vessels (arteries and emissary/intraosseous veins) were segmented using Frangi multi-scale vesselness filtering. The final head model consisted of a geometry-adapted cubic mesh with over 17×10^6 nodes. We solved the forward model using a finite-element-method (FEM) transfer matrix approach, which allowed reducing computation times substantially and quantified the importance of the blood vessel compartment by computing forward and inverse errors resulting from ignoring the blood vessels. Our results show that ignoring emissary veins piercing the skull leads to focal localization errors of approx. 5 to 15 mm. Large errors (>2 cm) were observed due to the carotid arteries and the dense arterial vasculature in areas such as in the insula or in the medial temporal lobe. Thus, in such predisposed areas, errors caused by neglecting blood vessels can reach similar magnitudes as those previously reported for neglecting white matter anisotropy, the CSF or the dura, structures which are generally considered important components of realistic EEG head models. Our findings thus imply that including a realistic blood vessel compartment in EEG head models will be helpful to improve the accuracy of EEG source analyses particularly when high accuracies in brain areas with dense vasculature are required. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Water displacement leg volumetry in clinical studies - A discussion of error sources
2010-01-01
Background: Water displacement leg volumetry is a highly reproducible method, allowing the confirmation of efficacy of vasoactive substances. Nevertheless, errors in its execution and the selection of unsuitable patients are likely to negatively affect the outcome of clinical studies in chronic venous insufficiency (CVI). Discussion: Placebo-controlled double-blind drug studies in CVI were searched (Cochrane Review 2005, MedLine search until December 2007) and assessed with regard to efficacy (volume reduction of the leg), patient characteristics, and potential methodological error sources. Almost every second study reported only small drug effects (≤ 30 mL volume reduction). The conduct of volumetry was identified as the most relevant error source. Because the practical use of available equipment varies, volume differences of more than 300 mL, many times the size of a potential treatment effect, have been reported between consecutive measurements. Other potential error sources were insufficient patient guidance and difficulties with the transition from the Widmer CVI classification to the CEAP (Clinical Etiological Anatomical Pathophysiological) grading. Summary: Patients should be properly diagnosed with CVI and selected for stable oedema and further clinical symptoms relevant for the specific study. Centres require thorough training on the use of the volumeter and on patient guidance. Volumetry should be performed under constant conditions. The reproducibility of short-term repeat measurements has to be ensured. PMID:20070899
An improved methodology for heliostat testing and evaluation at the Plataforma Solar de Almería
NASA Astrophysics Data System (ADS)
Monterreal, Rafael; Enrique, Raúl; Fernández-Reche, Jesús
2017-06-01
The optical quality of a heliostat essentially quantifies the difference between the scattering effects of the actual solar radiation reflected by its optical surface and the so-called canonical dispersion, that is, the radiation reflected by an optical surface free of constructional errors (the paradigm). However, apart from the uncertainties of the measuring process itself, the value of the optical quality must be independent of the measuring instrument; thus, any new measuring technique that provides additional information about the error sources on the heliostat reflecting surface is welcome. These error sources are responsible for the final optical-quality value, each with a different degree of influence. For a heliostat constructor, it is extremely useful to know the values of the classical sources of error and their weights in the overall optical quality of a heliostat, such as facet geometry or focal length, as well as the characteristics of the heliostat as a whole, i.e., its geometry, focal length and facet misalignment, and also the possible dependence of these effects on mechanical and/or meteorological factors. The goal of the present paper is to unfold these optical-quality error sources by exploring the reflecting surface of the heliostat directly with a laser-scanner device and linking the result with the traditional methods of heliostat evaluation at the Plataforma Solar de Almería.
Comparison of different source calculations in two-nucleon channel at large quark mass
NASA Astrophysics Data System (ADS)
Yamazaki, Takeshi; Ishikawa, Ken-ichi; Kuramashi, Yoshinobu
2018-03-01
We investigate a systematic error coming from higher excited state contributions in the energy shift of a light nucleus in the two-nucleon channel by comparing two different source calculations with exponential and wall sources. Since it is hard to obtain a clear signal for the wall-source correlation function in the plateau region, we employ a large quark mass, corresponding to a pion mass of 0.8 GeV, in quenched QCD. We discuss the systematic error in the spin-triplet channel of the two-nucleon system and the volume dependence of the energy shift.
Quantifying uncertainty in stable isotope mixing models
Davis, Paul; Syme, James; Heikoop, Jeffrey; ...
2015-05-19
Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened such that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing-fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.
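A pure Monte Carlo mixing model of the kind compared here can be sketched in a few lines: draw candidate mixing fractions uniformly from the simplex and keep those that reproduce the observed isotopic signature. The source signatures, the observation and the acceptance tolerance below are all invented:

```python
import numpy as np

rng = np.random.default_rng(3)
# Invented source signatures (d15N, d18O) and an observed nitrate mixture.
sources = np.array([[2.0, 1.0], [10.0, -2.0], [6.0, 15.0]])
observed = np.array([6.0, 4.0])

# Pure Monte Carlo: sample mixing fractions uniformly from the simplex
# (Dirichlet(1,1,1)) and accept draws reproducing the observation.
fractions = rng.dirichlet(np.ones(3), size=200_000)
predicted = fractions @ sources
kept = fractions[np.linalg.norm(predicted - observed, axis=1) < 0.5]

print(f"accepted {len(kept)} of 200000 draws")
print("posterior mean mixing fractions:", kept.mean(axis=0).round(2))
```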
Reyes, Jeanette M; Xu, Yadong; Vizuete, William; Serre, Marc L
2017-01-01
The regulatory Community Multiscale Air Quality (CMAQ) model is a means to understanding the sources, concentrations and regulatory attainment of air pollutants within a model's domain. Substantial resources are allocated to the evaluation of model performance. The Regionalized Air quality Model Performance (RAMP) method introduced here explores novel ways of visualizing and evaluating CMAQ model performance and errors for daily Particulate Matter ≤ 2.5 micrometers (PM2.5) concentrations across the continental United States. The RAMP method performs a non-homogeneous, non-linear, non-homoscedastic model performance evaluation at each CMAQ grid cell. This work demonstrates that CMAQ model performance, for a well-documented 2001 regulatory episode, is non-homogeneous across space/time. The RAMP correction of systematic errors outperforms other model evaluation methods, as demonstrated by a 22.1% reduction in Mean Square Error compared to a constant domain-wide correction. The RAMP method is able to accurately reproduce simulated performance with a correlation of r = 76.1%. Most of the error coming from CMAQ is random error, with only a minority of the error being systematic. Areas of high systematic error are collocated with areas of high random error, implying both error types originate from similar sources. Therefore, addressing the underlying causes of systematic error will have the added benefit of also addressing the underlying causes of random error.
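The systematic/random split rests on the standard decomposition of mean squared error into squared bias plus error variance. A minimal sketch with synthetic model-observation pairs follows (the RAMP method itself estimates this locally and non-linearly at every grid cell):

```python
import numpy as np

rng = np.random.default_rng(4)
obs = rng.gamma(4.0, 3.0, 1000)                 # "observed" PM2.5 (ug/m3)
model = 1.2 * obs + rng.normal(0.0, 4.0, 1000)  # model with bias and noise

err = model - obs
mse = np.mean(err ** 2)
systematic = np.mean(err) ** 2                  # squared bias
random_part = np.var(err)                       # variance around the bias
print(f"MSE = {mse:.1f}: systematic {systematic / mse:.0%}, "
      f"random {random_part / mse:.0%}")
```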
Gollob, Stephan; Kocur, Georg Karl; Schumacher, Thomas; Mhamdi, Lassaad; Vogel, Thomas
2017-02-01
In acoustic emission analysis, common source location algorithms assume, independently of the nature of the propagation medium, a straight (shortest) wave path between the source and the sensors. For heterogeneous media such as concrete, the wave travels along complex paths due to interaction with the dissimilar material contents and with the geometrical and material irregularities that may be present in these media. For instance, cracks and large air voids present in concrete significantly influence the way the wave travels by causing wave path deviations. Neglecting these deviations by assuming straight paths can introduce significant errors into the source location results. In this paper, a novel source localization method called FastWay is proposed. Contrary to most available shortest-path-based methods, it accounts for the different effects of material discontinuities (cracks and voids). FastWay, based on a heterogeneous velocity model, uses the fastest rather than the shortest travel paths between the source and each sensor. The method was evaluated both numerically and experimentally, and the results from both evaluation tests show that, in general, FastWay was able to locate the sources of acoustic emissions more accurately and reliably than traditional source localization methods. Copyright © 2016 Elsevier B.V. All rights reserved.
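Fastest travel paths through a heterogeneous velocity model can be computed with a shortest-path search over travel time rather than distance; the grid-based Dijkstra sketch below is one way to do this (FastWay's actual algorithm and velocity model may differ). The void is modelled crudely as a slow region, so the computed first arrival routes around it:

```python
import heapq
import numpy as np

def travel_times(slowness, src, h=1.0):
    """Fastest arrival time from `src` to every cell of a 2-D grid whose
    per-cell slowness (s/m) encodes discontinuities as slow regions."""
    ny, nx = slowness.shape
    times = np.full((ny, nx), np.inf)
    times[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if t > times[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                # edge cost: mean slowness of both cells times spacing h
                nt = t + 0.5 * (slowness[i, j] + slowness[ni, nj]) * h
                if nt < times[ni, nj]:
                    times[ni, nj] = nt
                    heapq.heappush(heap, (nt, (ni, nj)))
    return times

slow = np.full((50, 50), 1.0 / 4000.0)   # sound concrete, ~4000 m/s
slow[20:30, 10:40] = 1.0 / 300.0         # air-filled void: much slower
tt = travel_times(slow, (40, 25))
print(f"first arrival at sensor (5, 25): {tt[5, 25] * 1e3:.2f} ms")
```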
NASA Astrophysics Data System (ADS)
Pan, X. G.; Wang, J. Q.; Zhou, H. Y.
2013-05-01
A variance component estimation (VCE) method based on a semi-parametric estimator with a data-depth weighting matrix is proposed, because coupled system model errors and gross errors exist in the multi-source heterogeneous measurement data of combined space- and ground-based TT&C (Telemetry, Tracking and Command) systems. The uncertain model error is estimated with the semi-parametric estimator model, and outliers are suppressed with the data-depth weighting matrix. With the model error and outliers thus constrained, the VCE can be improved and used to estimate the weight matrix for observation data containing uncertain model errors or outliers. A simulation experiment was carried out for the combined space and ground TT&C setting. The results show that the new VCE based on model-error compensation can determine rational weights for the multi-source heterogeneous data and suppress outlier data.
On precisely modelling surface deformation due to interacting magma chambers and dykes
NASA Astrophysics Data System (ADS)
Pascal, Karen; Neuberg, Jurgen; Rivalta, Eleonora
2014-01-01
Combined data sets of InSAR and GPS allow us to observe surface deformation in volcanic settings. However, at the vast majority of volcanoes, a detailed 3-D structure that could guide the modelling of deformation sources is not available, due to the lack of tomography studies, for example. Therefore, volcano ground deformation due to magma movement in the subsurface is commonly modelled using simple point (Mogi) or dislocation (Okada) sources, embedded in a homogeneous, isotropic and elastic half-space. When data sets are too complex to be explained by a single deformation source, the magmatic system is often represented by a combination of these sources and their displacements fields are simply summed. By doing so, the assumption of homogeneity in the half-space is violated and the resulting interaction between sources is neglected. We have quantified the errors of such a simplification and investigated the limits in which the combination of analytical sources is justified. We have calculated the vertical and horizontal displacements for analytical models with adjacent deformation sources and have tested them against the solutions of corresponding 3-D finite element models, which account for the interaction between sources. We have tested various double-source configurations with either two spherical sources representing magma chambers, or a magma chamber and an adjacent dyke, modelled by a rectangular tensile dislocation or pressurized crack. For a tensile Okada source (representing an opening dyke) aligned or superposed to a Mogi source (magma chamber), we find the discrepancies with the numerical models to be insignificant (<5 per cent) independently of the source separation. However, if a Mogi source is placed side by side to an Okada source (in the strike-perpendicular direction), we find the discrepancies to become significant for a source separation less than four times the radius of the magma chamber. For horizontally or vertically aligned pressurized sources, the discrepancies are up to 20 per cent, which translates into surprisingly large errors when inverting deformation data for source parameters such as depth and volume change. Beyond 8 radii however, we demonstrate that the summation of analytical sources represents adjacent magma chambers correctly.
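The analytical baseline being tested is the plain superposition of closed-form sources. For the Mogi point source, the standard free-surface displacements are u_r = (1-ν)ΔV·r/(π·R³) and u_z = (1-ν)ΔV·d/(π·R³) with R² = r² + d²; the sketch below simply sums two such sources, which is exactly the approximation whose error the paper quantifies. Parameter values are illustrative.

```python
import numpy as np

def mogi_surface(x, y, depth, dV, nu=0.25):
    """Free-surface displacements (u_r, u_z) of a Mogi point source at
    `depth` with volume change `dV` in an elastic half-space."""
    r = np.hypot(x, y)
    R3 = (r ** 2 + depth ** 2) ** 1.5
    c = (1.0 - nu) * dV / np.pi
    return c * r / R3, c * depth / R3    # radial, vertical

x = np.linspace(-10e3, 10e3, 5)          # surface profile (m)
# Two chambers 6 km apart at 3 km depth, each inflating by 1e6 m^3; the
# analytical approximation under test simply sums the two fields.
_, uz1 = mogi_surface(x + 3e3, 0.0, 3e3, 1e6)
_, uz2 = mogi_surface(x - 3e3, 0.0, 3e3, 1e6)
print("summed uplift (mm):", np.round((uz1 + uz2) * 1e3, 2))
```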
Active control of fan-generated plane wave noise
NASA Technical Reports Server (NTRS)
Gerhold, Carl H.; Nuckolls, William E.; Santamaria, Odillyn L.; Martinson, Scott D.
1993-01-01
Subsonic propulsion systems for future aircraft may incorporate ultra-high bypass ratio ducted fan engines whose dominant noise source is the fan with blade passage frequency less than 1000 Hz. This low frequency combines with the requirement of a short nacelle to diminish the effectiveness of passive duct liners. Active noise control is seen as a viable method to augment the conventional passive treatments. An experiment to control ducted fan noise using a time domain active adaptive system is reported. The control sound source consists of loudspeakers arrayed around the fan duct. The error sensor location is in the fan duct. The purpose of this experiment is to demonstrate that the in-duct error sensor reduces the mode spillover in the far field, thereby increasing the efficiency of the control system. In this first series of tests, the fan is configured so that predominantly zero order circumferential waves are generated. The control system is found to reduce the blade passage frequency tone significantly in the acoustic far field when the mode orders of the noise source and of the control source are the same. The noise reduction is not as great when the mode orders are not the same even though the noise source modes are evanescent, but the control system converges stably and global noise reduction is demonstrated in the far field. Further experimentation is planned in which the performance of the system will be evaluated when higher order radial and spinning modes are generated.
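A single-channel time-domain adaptive controller of the LMS family can be sketched as below, assuming a unity secondary path from the control loudspeakers to the in-duct error sensor; the experiment's multi-speaker array, duct acoustics and controller details are of course richer than this.

```python
import numpy as np

fs, f_bpf = 8000, 800                    # sample rate, blade-passage tone (Hz)
n = np.arange(4 * fs)
noise = np.sin(2 * np.pi * f_bpf * n / fs)           # tone at the error sensor
ref = np.column_stack([np.sin(2 * np.pi * f_bpf * n / fs + 0.3),
                       np.cos(2 * np.pi * f_bpf * n / fs + 0.3)])  # reference

w = np.zeros(2)                          # adaptive weights
mu = 0.01                                # LMS step size
err = np.empty_like(noise)
for k in range(len(n)):
    anti = ref[k] @ w                    # control-source output (unity path)
    err[k] = noise[k] + anti             # residual at the in-duct sensor
    w -= mu * err[k] * ref[k]            # LMS weight update
print(f"residual power: {np.mean(err[:200] ** 2):.3f} (start) -> "
      f"{np.mean(err[-200:] ** 2):.2e} (converged)")
```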
Geolocation error tracking of ZY-3 three line cameras
NASA Astrophysics Data System (ADS)
Pan, Hongbo
2017-01-01
The high-accuracy geolocation of high-resolution satellite images (HRSIs) is a key issue for mapping and for integrating multi-temporal, multi-sensor images. In this manuscript, we propose a new geometric frame for analysing the geometric error of a stereo HRSI, in which the geolocation error can be divided into three parts: the epipolar direction, the cross-base direction, and the height direction. With this frame, we proved that the height error of three-line cameras (TLCs) is independent of nadir images, and that the terrain effect has a limited impact on the geolocation errors. For ZY-3 error sources, the drift error in both the pitch and roll angles and its influence on the geolocation accuracy are analysed. Epipolar and common tie-point constraints are proposed to study the bundle adjustment of HRSIs. The epipolar constraints explain that the relative orientation can reduce the number of compensation parameters in the cross-base direction and has a limited impact on the height accuracy. The common tie points adjust the pitch-angle errors to be consistent with each other for TLCs. Therefore, free-net bundle adjustment of a single strip cannot significantly improve the geolocation accuracy. Furthermore, the epipolar and common tie-point constraints cause the error to propagate into the adjacent strip when multiple strips are involved in the bundle adjustment, which results in the same attitude uncertainty throughout the whole block. Two adjacent strips (Orbit 305 and Orbit 381, covering 7 and 12 standard scenes, respectively) and 308 ground control points (GCPs) were used for the experiments. The experiments validate the aforementioned theory. The planimetric and height root mean square errors were 2.09 and 1.28 m, respectively, when two GCPs were placed at the beginning and end of the block.
The relationships among work stress, strain and self-reported errors in UK community pharmacy.
Johnson, S J; O'Connor, E M; Jacobs, S; Hassell, K; Ashcroft, D M
2014-01-01
Changes in the UK community pharmacy profession including new contractual frameworks, expansion of services, and increasing levels of workload have prompted concerns about rising levels of workplace stress and overload. This has implications for pharmacist health and well-being and the occurrence of errors that pose a risk to patient safety. Despite these concerns being voiced in the profession, few studies have explored work stress in the community pharmacy context. To investigate work-related stress among UK community pharmacists and to explore its relationships with pharmacists' psychological and physical well-being, and the occurrence of self-reported dispensing errors and detection of prescribing errors. A cross-sectional postal survey of a random sample of practicing community pharmacists (n = 903) used ASSET (A Shortened Stress Evaluation Tool) and questions relating to self-reported involvement in errors. Stress data were compared to general working population norms, and regressed on well-being and self-reported errors. Analysis of the data revealed that pharmacists reported significantly higher levels of workplace stressors than the general working population, with concerns about work-life balance, the nature of the job, and work relationships being the most influential on health and well-being. Despite this, pharmacists were not found to report worse health than the general working population. Self-reported error involvement was linked to both high dispensing volume and being troubled by perceived overload (dispensing errors), and resources and communication (detection of prescribing errors). This study contributes to the literature by benchmarking community pharmacists' health and well-being, and investigating sources of stress using a quantitative approach. A further important contribution to the literature is the identification of a quantitative link between high workload and self-reported dispensing errors. Copyright © 2014 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Mulungye, Mary M.; O'Connor, Miheso; Ndethiu, S.
2016-01-01
This paper is based on a study which sought to examine the various errors and misconceptions committed by students in algebra with the view to exposing the nature and origin of the errors and misconceptions in secondary schools in Machakos district. Teachers' knowledge on students' errors was investigated together with strategies for remedial…
The inference of atmospheric ozone using satellite horizon measurements in the 1042 per cm band.
NASA Technical Reports Server (NTRS)
Russell, J. M., III; Drayson, S. R.
1972-01-01
Description of a method for inferring atmospheric ozone information using infrared horizon radiance measurements in the 1042 per cm band. An analysis based on this method proves the feasibility of the horizon experiment for determining ozone information and shows that the ozone partial pressure can be determined in the altitude range from 50 down to 25 km. A comprehensive error study is conducted which considers effects of individual errors as well as the effect of all error sources acting simultaneously. The results show that in the absence of a temperature profile bias error, it should be possible to determine the ozone partial pressure to within an rms value of 15 to 20%. It may be possible to reduce this rms error to 5% by smoothing the solution profile. These results would be seriously degraded by an atmospheric temperature bias error of only 3 K; thus, great care should be taken to minimize this source of error in an experiment. It is probable, in view of recent technological developments, that these errors will be much smaller in future flight experiments and the altitude range will widen to include from about 60 km down to the tropopause region.
The Reliability and Sources of Error of Using Rubrics-Based Assessment for Student Projects
ERIC Educational Resources Information Center
Menéndez-Varela, José-Luis; Gregori-Giralt, Eva
2018-01-01
Rubrics are widely used in higher education to assess performance in project-based learning environments. To date, the sources of error that may affect their reliability have not been studied in depth. Using generalisability theory as its starting-point, this article analyses the influence of the assessors and the criteria of the rubrics on the…
A Robust Sound Source Localization Approach for Microphone Array with Model Errors
NASA Astrophysics Data System (ADS)
Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong
In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements. It can be used in arrays of arbitrary planar geometry. Second, a subspace model-error estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-error estimation algorithm estimates the unknown parameters of the array model, i.e., gain and phase perturbations and positions of the elements, with high accuracy. The performance of this algorithm improves as the SNR or the number of snapshots increases. The W2D-MUSIC algorithm, based on the improved array model, is implemented to locate sound sources. These two algorithms compose the robust sound source localization approach. More accurate steering vectors can then be provided for further processing such as adaptive beamforming algorithms. Numerical examples confirm the effectiveness of the proposed approach.
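For reference, the subspace scanning at the heart of any MUSIC variant looks as follows; this is plain narrowband far-field MUSIC on an ideal uniform linear array, whereas W2D-MUSIC adds weighting and the broadband, near-field, perturbed-array model described above:

```python
import numpy as np

rng = np.random.default_rng(5)
M, d = 8, 0.5                            # sensors, spacing in wavelengths
angles_true = np.deg2rad([20.0, -35.0])

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

# Simulated snapshots: two uncorrelated sources plus sensor noise.
A = np.column_stack([steering(t) for t in angles_true])
S = rng.normal(size=(2, 400)) + 1j * rng.normal(size=(2, 400))
X = A @ S + 0.1 * (rng.normal(size=(M, 400)) + 1j * rng.normal(size=(M, 400)))

R = X @ X.conj().T / X.shape[1]          # sample covariance
_, eigvec = np.linalg.eigh(R)            # eigenvalues in ascending order
En = eigvec[:, :M - 2]                   # noise subspace (2 sources assumed)

scan = np.deg2rad(np.linspace(-90, 90, 721))
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
              for t in scan])
peaks = [i for i in range(1, len(p) - 1) if p[i - 1] < p[i] > p[i + 1]]
top2 = sorted(sorted(peaks, key=lambda i: p[i])[-2:])
print("estimated DOAs (deg):", np.rad2deg(scan[top2]).round(1))
```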
Discovery of a Nonblazar Gamma-Ray Transient Source Near the Galactic Plane: GRO J1838-04
NASA Technical Reports Server (NTRS)
Tavani, M.; Oliversen, Ronald (Technical Monitor)
2001-01-01
We report the discovery of a remarkable gamma-ray transient source near the Galactic plane, GRO J1838-04. This source was serendipitously discovered by EGRET in 1995 June with a peak intensity of approx. (4 +/- 1) x 10(exp -6) photons/sq cm s (for photon energies larger than 100 MeV) and a 5.9 sigma significance. At that time, GRO J1838-04 was the second brightest gamma-ray source in the sky. A subsequent EGRET pointing in 1995 late September detected the source at a flux smaller than its peak value by a factor of approx. 7. We determine that no radio-loud spectrally flat blazar is within the error box of GRO J1838-04. We discuss the origin of the gamma-ray transient source and show that interpretations in terms of active galactic nuclei or isolated pulsars are highly problematic. GRO J1838-04 provides strong evidence for the existence of a new class of variable gamma-ray sources.
Hansen, Scott K.; Vesselinov, Velimir Valentinov
2016-10-01
We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of the ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
Language function distribution in left-handers: A navigated transcranial magnetic stimulation study.
Tussis, Lorena; Sollmann, Nico; Boeckh-Behrens, Tobias; Meyer, Bernhard; Krieg, Sandro M
2016-02-01
Recent studies suggest that in left-handers, the right hemisphere (RH) is more involved in language function than in right-handed subjects. Since data from lesion-based approaches are lacking, we aimed to investigate the language distribution of left-handers by repetitive navigated transcranial magnetic stimulation (rTMS). Thus, rTMS was applied to the left hemisphere (LH) and RH in 15 healthy left-handers during an object-naming task, and the resulting naming errors were categorized. We then calculated error rates (ERs = number of errors per number of stimulations) for each hemisphere separately and defined a laterality score as the quotient (LH ER - RH ER)/(LH ER + RH ER), abbreviated (L-R)/(L+R). In this context, (L-R)/(L+R) > 0 indicates that the LH is dominant, whereas (L-R)/(L+R) < 0 indicates that the RH is dominant. No significant difference in ERs was found between hemispheres (all errors: mean LH 18.0±11.7%, mean RH 18.1±12.2%, p=0.94; all errors without hesitation: mean LH 12.4±9.8%, mean RH 12.9±10.0%, p=0.65; no responses: mean LH 9.3±9.2%, mean RH 11.5±10.3%, p=0.84). However, a significant difference between the (L-R)/(L+R) results of left-handers and right-handers (source data from another study) was revealed for all errors (mean 0.01±0.14 vs. 0.19±0.20, p=0.0019) and all errors without hesitation (mean -0.02±0.20 vs. 0.19±0.28, p=0.0051), whereas the comparison for no responses did not show a significant difference (mean: -0.004±0.27 vs. 0.09±0.44, p=0.64). Accordingly, left-handers present a comparatively equal language distribution across both hemispheres, with language dominance nearly equally distributed between hemispheres, in contrast to right-handers. Copyright © 2016 Elsevier Ltd. All rights reserved.
Error Model and Compensation of Bell-Shaped Vibratory Gyro
Su, Zhong; Liu, Ning; Li, Qing
2015-01-01
A bell-shaped vibratory angular velocity gyro (BVG), inspired by the traditional Chinese bell, is a type of axisymmetric shell resonator gyroscope. This paper focuses on the development of an error model and compensation of the BVG. A dynamic equation is first established, based on a study of the BVG working mechanism. This equation is then used to evaluate the relationship between the angular rate output signal and the bell-shaped resonator characteristics, analyze the influence of the main error sources and set up an error model for the BVG. The error sources are classified by their error propagation characteristics, and the compensation method is presented based on the error model. Finally, using the error model and compensation method, the BVG is calibrated experimentally, including rough compensation, temperature and bias compensation, scale factor compensation and noise filtering. The experimentally obtained bias instability improves from 20.5°/h to 4.7°/h, the random walk from 2.8°/h^(1/2) to 0.7°/h^(1/2), and the nonlinearity from 0.2% to 0.03%. Based on the error compensation, it is shown that there is a good linear relationship between the sensing signal and the angular velocity, suggesting that the BVG is a good candidate for the field of low and medium rotational speed measurement. PMID:26393593
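The bias and scale-factor part of such a compensation reduces to a linear calibration against a known rate table, as sketched below (rates, error magnitudes and noise level are invented; the paper's full procedure also includes temperature compensation and noise filtering):

```python
import numpy as np

rng = np.random.default_rng(6)
omega = np.linspace(-100.0, 100.0, 21)          # rate-table rates (deg/s)
# Simulated readout with scale-factor error, bias, and noise (illustrative).
readout = 0.98 * omega + 1.5 + rng.normal(0.0, 0.05, omega.size)

# Least-squares fit of readout = scale * omega + bias, then invert it.
A = np.column_stack([omega, np.ones_like(omega)])
(scale, bias), *_ = np.linalg.lstsq(A, readout, rcond=None)
compensated = (readout - bias) / scale

print(f"scale = {scale:.4f}, bias = {bias:.3f} deg/s")
print(f"max residual after compensation: "
      f"{np.abs(compensated - omega).max():.3f} deg/s")
```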
The pros and cons of code validation
NASA Technical Reports Server (NTRS)
Bobbitt, Percy J.
1988-01-01
Computational and wind tunnel error sources are examined and quantified using specific calculations of experimental data, and the requirements for a substantial comparison of theoretical and experimental results, i.e., a code validation, are discussed. Wind tunnel error sources considered include wall interference, sting effects, Reynolds number effects, flow quality and transition, and instrumentation such as strain-gage balances, electronically scanned pressure systems, hot-film gages, hot-wire anemometers, and laser velocimeters. Computational error sources include the math-model equation set, the solution algorithm, artificial viscosity/dissipation, boundary conditions, the uniqueness of solutions, grid resolution, turbulence modeling, and Reynolds number effects. It is concluded that, although improvements in theory are being made more quickly than in experiments, wind tunnel research has the advantage of a more realistic transition process than that given by a turbulence model in a free-transition test.
Fiedler, John L; Yadav, Suryakant
2017-10-01
Despite acknowledged shortcomings, household consumption and expenditure surveys (HCES) are increasingly being used to proxy food consumption because they are relatively more available and affordable than surveys using more precise dietary assessment methods. One of the most common and significant sources of HCES measurement error is their under-estimation of food away from home (FAFH). In 2011, India's National Sample Survey Organisation introduced revisions in its HCES questionnaire that included replacing "cooked meals", the single item in the food consumption module designed to capture FAFH at the household level, with five more detailed and explicit FAFH sub-categories. The survey also contained a section with seven household-member-specific questions about meal patterns during the reference period, which included three sources of meals away from home (MAFH) that overlapped three of the new FAFH categories. By providing a conceptual framework with which to organize and consider each household member's meal pattern throughout the reference period, and by breaking the recall (or estimation) process down into household-member-specific responses, we assume the MAFH approach makes the key respondent's task less memory- and arithmetically demanding, and thus more accurate, than the household-level FAFH approach. We use the MAFH estimates as a reference point and approximate one portion of FAFH measurement error as the difference between the MAFH and FAFH estimates. The MAFH estimates reveal marked heterogeneity in intra-household meal patterns, reflecting the complexity of the HCES key informant's task of reporting household-level data and underscoring its importance as a source of measurement error. We find the household-level estimates of FAFH increase from just 60.4% of the individual-based estimates in the round prior to the questionnaire modifications to 96.7% after the changes. We conclude that the MAFH-FAFH linked approach substantially reduced FAFH measurement error in India. The approach has wider applicability in global efforts to improve HCES.
Image reduction pipeline for the detection of variable sources in highly crowded fields
NASA Astrophysics Data System (ADS)
Gössl, C. A.; Riffeser, A.
2002-01-01
We present a reduction pipeline for CCD (charge-coupled device) images which was built to search for variable sources in highly crowded fields like the M 31 bulge and to handle extensive databases due to large time series. We describe all steps of the standard reduction in detail with emphasis on the realisation of per-pixel error propagation: bias correction, treatment of bad pixels, flatfielding, and filtering of cosmic rays. The problems of conservation of PSF (point spread function) and error propagation in our image alignment procedure, as well as the detection algorithm for variable sources, are discussed: we build difference images via image convolution with a technique called OIS (optimal image subtraction; Alard & Lupton 1998), proceed with an automatic detection of variable sources in noise-dominated images and finally apply a PSF-fitting, relative photometry to the sources found. For the WeCAPP project (Riffeser et al. 2001) we achieve 3σ detections for variable sources with an apparent brightness of e.g. m = 24.9 mag at their minimum and a variation of Δm = 2.4 mag (or m = 21.9 mag brightness minimum and a variation of Δm = 0.6 mag) on a background signal of 18.1 mag/arcsec² based on a 500 s exposure with 1.5 arcsec seeing at a 1.2 m telescope. The complete per-pixel error propagation allows us to give accurate errors for each measurement.
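Per-pixel error propagation through a reduction step is first-order Gaussian error propagation applied image-wise; the flatfielding sketch below illustrates the pattern, and the pipeline carries such errors through every step of the reduction.

```python
import numpy as np

def flatfield_with_errors(img, img_err, flat, flat_err):
    """Divide an image by a flatfield, propagating per-pixel 1-sigma errors:
    for f = img/flat, var(f) = (img_err/flat)**2 + (img*flat_err/flat**2)**2."""
    out = img / flat
    out_err = np.sqrt((img_err / flat) ** 2 +
                      (img * flat_err / flat ** 2) ** 2)
    return out, out_err

rng = np.random.default_rng(7)
img = rng.poisson(1000, (4, 4)).astype(float)
img_err = np.sqrt(img)                   # photon (Poisson) noise
flat = rng.normal(1.0, 0.02, (4, 4))
flat_err = np.full((4, 4), 0.005)

corrected, corrected_err = flatfield_with_errors(img, img_err, flat, flat_err)
print(corrected_err.round(2))
```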
Chandra Detection of Intracluster X-Ray sources in Virgo
NASA Astrophysics Data System (ADS)
Hou, Meicun; Li, Zhiyuan; Peng, Eric W.; Liu, Chengze
2017-09-01
We present a survey of X-ray point sources in the nearest and dynamically young galaxy cluster, Virgo, using archival Chandra observations that sample the vicinity of 80 early-type member galaxies. The X-ray source populations at the outskirts of these galaxies are of particular interest. We detect a total of 1046 point sources (excluding galactic nuclei) out to a projected galactocentric radius of ~40 kpc and down to a limiting 0.5-8 keV luminosity of ~2 × 10^38 erg s^-1. Based on the cumulative spatial and flux distributions of these sources, we statistically identify ~120 excess sources that are not associated with the main stellar content of the individual galaxies, nor with the cosmic X-ray background. This excess is significant at a 3.5σ level, when Poisson error and cosmic variance are taken into account. On the other hand, no significant excess sources are found at the outskirts of a control sample of field galaxies, suggesting that at least some fraction of the excess sources around the Virgo galaxies are truly intracluster X-ray sources. Assisted with ground-based and HST optical imaging of Virgo, we discuss the origins of these intracluster X-ray sources, in terms of supernova-kicked low-mass X-ray binaries (LMXBs), globular clusters, LMXBs associated with the diffuse intracluster light, stripped nucleated dwarf galaxies and free-floating massive black holes.
Orbit determination strategy and results for the Pioneer 10 Jupiter mission
NASA Technical Reports Server (NTRS)
Wong, S. K.; Lubeley, A. J.
1974-01-01
Pioneer 10 is the first earth-based vehicle to encounter Jupiter and occult its moon, Io. In contributing to the success of the mission, the Orbit Determination Group evaluated the effects of the dominant error sources on the spacecraft's computed orbit and devised an encounter strategy minimizing the effects of these error sources. The encounter results indicated that: (1) errors in the satellite model played a very important role in the accuracy of the computed orbit, (2) the encounter strategy was sound, (3) all mission objectives were met, and (4) the Jupiter-Saturn mission for Pioneer 11 is within the navigation capability.
The Influence of Gantry Geometry on Aliasing and Other Geometry Dependent Errors
NASA Astrophysics Data System (ADS)
Joseph, Peter M.
1980-06-01
At least three gantry geometries are widely used in medical CT scanners: (1) rotate-translate, (2) rotating detectors, (3) stationary detectors. There are significant geometrical differences between these designs, especially regarding (a) the region of space scanned by any given detector and (b) the sample density of rays which scan the patient. It is imperative to distinguish between "views" and "rays" in analyzing this situation. In particular, views are defined by the x-ray source in type 2 and by the detector in type 3 gantries. It is known that ray dependent errors are generally much more important than view dependent errors. It is shown that spatial resolution is primarily limited by the spacing between rays in any view, while the number of ray samples per beam width determines the extent of aliasing artifacts. Rotating detector gantries are especially susceptible to aliasing effects. It is shown that aliasing effects can distort the point spread function in a way that is highly dependent on the position of the point in the scanned field. Such effects can cause anomalies in the MTF functions as derived from points in machines with significant aliasing problems.
High accuracy satellite drag model (HASDM)
NASA Astrophysics Data System (ADS)
Storz, M.; Bowman, B.; Branson, J.
The dominant error source in the force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out to three days) a dynamically varying high-resolution density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm, which solves in near real time for the phases and amplitudes of the diurnal, semidiurnal, and terdiurnal variations of thermospheric density from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time, and altitude. In HASDM, a time-series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full-spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.
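The harmonic form of such a density correction can be sketched as follows; this is a schematic stand-in for the DCA parameterization, with made-up coefficients, not the operational HASDM code. In the real model the coefficients would themselves vary with latitude and altitude.

```python
import numpy as np

# Schematic density correction in diurnal/semidiurnal/terdiurnal harmonics
# of local solar time (assumed form, illustrative coefficients only).

def density_correction(lst_hours, coeffs):
    """coeffs: dict mapping harmonic order n (1=diurnal, 2=semidiurnal,
    3=terdiurnal) to (amplitude, phase_hours); returns the fractional
    correction to the background thermospheric density."""
    omega = 2.0 * np.pi / 24.0
    drho = 0.0
    for n, (amp, phase) in coeffs.items():
        drho += amp * np.cos(n * omega * (lst_hours - phase))
    return drho

coeffs = {1: (0.12, 14.0), 2: (0.05, 3.0), 3: (0.02, 1.0)}  # toy values
print(density_correction(np.array([0.0, 6.0, 12.0, 18.0]), coeffs))
```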
High accuracy satellite drag model (HASDM)
NASA Astrophysics Data System (ADS)
Storz, Mark F.; Bowman, Bruce R.; Branson, Major James I.; Casali, Stephen J.; Tobiska, W. Kent
The dominant error source in force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out to three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm, which solves in near real time for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time, and altitude. In HASDM, a time-series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full-spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.
Stecker, Mona; Stecker, Mark M
2014-07-01
This study sought to explore the prevalence of workplace stress, gender differences, and the relationship of workplace incivility to the experience of stress. Effects of stress on performance have been explored for many years. Work stress has been at the root of many physical and psychological problems and has even been linked to medical errors and suboptimal patient outcomes. In this study, 617 respondents completed a Provider Conflict Questionnaire (PCQ) as well as a ten-item stress survey. Work was the main stressor according to 78.2% of respondents. The stress index was moderately high, ranging between 10 and 48 (mean = 25.5). Females demonstrated a higher stress index. Disruptive behavior showed a significant positive correlation with increased stress. This study concludes that employees of institutions with less disruptive behavior exhibited lower stress levels. This finding is important in improving employee satisfaction and reducing medical errors. It is difficult to retain experienced nurses, and stress is a significant contributor to job dissatisfaction. Moreover, workplace conflict and its correlation to increased stress levels must be managed as a strategy to reduce medical errors and increase job satisfaction.
1949-09-08
error from this source can be substantially reduced by the use of polystyrene insulating materials in the plugboard system of problem patching (Section...present at some point in the machine (see Section 5b). PLUGBOARD Our experience with the operation of the REAC indicates that...utilization of the machine could be very significantly increased by a drastic revision of the patch bay. We propose to install a separable plugboard which
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Requirement to Publish All Significant Final Actions Under Title I of The Clean Air Act
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Follow-up of negative MRI-targeted prostate biopsies: when are we missing cancer?
Gold, Samuel A; Hale, Graham R; Bloom, Jonathan B; Smith, Clayton P; Rayn, Kareem N; Valera, Vladimir; Wood, Bradford J; Choyke, Peter L; Turkbey, Baris; Pinto, Peter A
2018-05-21
Multiparametric magnetic resonance imaging (mpMRI) has improved clinicians' ability to detect clinically significant prostate cancer (csPCa). Combining or fusing these images with the real-time imaging of transrectal ultrasound (TRUS) allows urologists to better sample lesions with a targeted biopsy (Tbx), leading to detection of csPCa at greater rates and of low-risk PCa at decreased rates. In this review, we evaluate the technical aspects of the mpMRI-guided Tbx procedure to identify possible sources of error and provide clinical context to a negative Tbx. A literature search was conducted on possible reasons for a false-negative Tbx. This includes discussion of false-positive mpMRI findings, termed "PCa mimics," that may incorrectly suggest a high likelihood of csPCa, as well as errors during Tbx that result in inexact image fusion or biopsy needle placement. Despite the strong negative predictive value associated with Tbx, concerns of missed disease often remain, especially with MR-visible lesions. This raises questions about what to do next after a negative Tbx result. Potential sources of error can arise at each step of the targeted biopsy process, ranging from "PCa mimics" or technical errors during mpMRI acquisition, to failure to properly register MRI and TRUS images on a fusion biopsy platform, to technical or anatomic limits on needle placement accuracy. A better understanding of these potential pitfalls in the mpMRI-guided Tbx procedure will aid interpretation of a negative Tbx, identify areas for improving technical proficiency, and improve both physician understanding of negative Tbx and patient-management options.
Impact and quantification of the sources of error in DNA pooling designs.
Jawaid, A; Sham, P
2009-01-01
The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error, using empirical data from typical pooling experiments and corresponding individual genotyping counts together with two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and that adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.
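For illustration, a standard correction for differential allelic amplification rescales the pooled signal ratio by a coefficient estimated from known heterozygotes, in whom the two alleles are present 1:1. The Python sketch below shows this idea with toy numbers; it is a generic version of the correction, not the authors' exact procedure.

```python
# Generic differential-amplification correction for pooled DNA (sketch):
# estimate the coefficient k from heterozygous individuals, then rescale
# the pooled signal ratio to recover the allele frequency.

def amplification_coefficient(het_signal_a: float, het_signal_b: float) -> float:
    """k = ratio of allele-A to allele-B signal in known heterozygotes,
    where the true allele ratio is 1:1."""
    return het_signal_a / het_signal_b

def corrected_pool_frequency(pool_signal_a: float, pool_signal_b: float,
                             k: float) -> float:
    """Allele-A frequency estimate with differential amplification removed:
    p = S_A / (S_A + k * S_B)."""
    return pool_signal_a / (pool_signal_a + k * pool_signal_b)

k = amplification_coefficient(het_signal_a=1.3, het_signal_b=1.0)  # toy values
print(corrected_pool_frequency(pool_signal_a=0.9, pool_signal_b=1.1, k=k))
```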
Drought Persistence in Models and Observations
NASA Astrophysics Data System (ADS)
Moon, Heewon; Gudmundsson, Lukas; Seneviratne, Sonia
2017-04-01
Many regions of the world experienced drought events in the 20th century that persisted for several years and caused substantial economic and ecological impacts. However, it remains unclear whether there are significant trends in the frequency or severity of these prolonged drought events. In particular, an important issue is linked to systematic biases in the representation of persistent drought events in climate models, which impedes analysis related to the detection and attribution of drought trends. This study assesses drought persistence errors in global climate model (GCM) simulations from the 5th phase of the Coupled Model Intercomparison Project (CMIP5) over the period 1901-2010. The model simulations are compared with five gridded observational data products. The analysis focuses on two aspects: the identification of systematic biases in the models, and the partitioning of the spread of the drought-persistence error into four possible sources of uncertainty: model uncertainty, observation uncertainty, internal climate variability, and the estimation error of drought persistence. We use monthly and yearly dry-to-dry transition probabilities as estimates of drought persistence, with drought conditions defined as negative precipitation anomalies. For both time scales we find that most model simulations consistently underestimate drought persistence, except in a few regions such as India and eastern South America. Partitioning the spread of the drought-persistence error shows that at the monthly time scale model uncertainty and observation uncertainty are dominant, while the contribution from internal variability plays only a minor role in most cases. At the yearly scale, the spread of the drought-persistence error is dominated by the estimation error, indicating that the partitioning is not statistically significant owing to the limited number of time steps considered. These findings reveal systematic errors in the representation of drought persistence in current climate models and highlight the main contributors to the uncertainty of the drought-persistence error. Future analyses will focus on the temporal propagation of drought persistence, to better understand the causes of the identified errors in the representation of drought persistence in state-of-the-art climate models.
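A minimal version of the persistence estimator used here, the dry-to-dry transition probability with drought defined as a negative precipitation anomaly, can be sketched as follows; the synthetic series and zero threshold are illustrative only.

```python
import numpy as np

# Dry-to-dry transition probability P(dry at t+1 | dry at t) from a
# precipitation anomaly series; drought = negative anomaly (sketch).

def dry_to_dry_probability(precip_anomaly: np.ndarray) -> float:
    dry = precip_anomaly < 0.0
    from_dry = dry[:-1]                  # months that start a transition dry
    if from_dry.sum() == 0:
        return float("nan")
    return float((dry[1:] & from_dry).sum() / from_dry.sum())

rng = np.random.default_rng(0)
monthly_anomaly = rng.normal(size=1320)  # 110 years of synthetic monthly data
print(f"P(dry->dry) = {dry_to_dry_probability(monthly_anomaly):.3f}")
```

For an uncorrelated series this estimator returns about 0.5; persistent droughts push it higher, which is exactly the signal the study finds models tend to underestimate.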
Dark Signal Characterization of 1.7 micron cutoff devices for SNAP
NASA Astrophysics Data System (ADS)
Smith, R. M.; SNAP Collaboration
2004-12-01
We report initial progress in characterizing non-photometric sources of error -- dark current, noise, and zero-point drift -- for 1.7 micron cutoff HgCdTe and InGaAs detectors under development by Raytheon, Rockwell, and Sensors Unlimited for SNAP. Dark current specifications can already be met with several detector types. Changes to the manufacturing process are being explored to improve the noise reduction available through multiple sampling. In some cases, a significant number of pixels suffer from popcorn noise, with a few percent of all pixels exhibiting a tenfold noise increase. A careful study of zero-point drifts is also under way, since these errors can dominate dark current and may contribute to the noise degradation seen in long exposures.
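The noise reduction available through multiple sampling follows simple averaging statistics: averaging N uncorrelated reads at each end of an exposure lowers the read-noise contribution roughly as 1/sqrt(N). The toy simulation below illustrates this scaling; the noise figure is an arbitrary placeholder, not a measured SNAP value.

```python
import numpy as np

# Toy model of multiple (Fowler-style) sampling: average N non-destructive
# reads at the start and end of an exposure and difference the averages.
rng = np.random.default_rng(2)
read_noise = 20.0                        # e- per single read (placeholder)
for n_reads in (1, 4, 16):
    reads_start = rng.normal(0.0, read_noise, size=(100_000, n_reads))
    reads_end = rng.normal(0.0, read_noise, size=(100_000, n_reads))
    cds = reads_end.mean(axis=1) - reads_start.mean(axis=1)
    print(f"{n_reads:2d} reads/end: effective noise = {cds.std():.1f} e-")
```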
Braiding errors in interacting Majorana quantum wires
NASA Astrophysics Data System (ADS)
Sekania, Michael; Plugge, Stephan; Greiter, Martin; Thomale, Ronny; Schmitteckert, Peter
2017-09-01
Majorana bound states (MBSs) have become one of the primary avenues towards a possible realization of topological quantum computation. For a Y junction of Kitaev quantum wires, we numerically investigate the braiding of MBSs while considering the full quasiparticle background. The two central sources of braiding error are found to be the fidelity loss due to the incomplete adiabaticity of the braiding operation and the finite hybridization of the MBSs. The explicit extraction of the braiding phase from the full many-particle states allows us to analyze the breakdown of the independent-particle picture of Majorana braiding. Furthermore, we find nearest-neighbor interactions to significantly affect the braiding performance, for better or worse, depending on the sign and magnitude of the coupling.
Stewart, Heather; Massoudieh, Arash; Gellis, Allen C.
2015-01-01
A Bayesian chemical mass balance (CMB) approach was used to assess the contribution of potential sources for fluvial samples from Laurel Hill Creek in southwest Pennsylvania. The Bayesian approach provides joint probability density functions of the sources' contributions, accounting for the uncertainties due to source and fluvial sample heterogeneity and measurement error. Both the elemental profiles of the sources and fluvial samples and the ¹³C and ¹⁵N isotopes were used for source apportionment. The sources considered include stream bank erosion, forest, roads, and agriculture (pasture and cropland). Agriculture was found to have the largest contribution, followed by stream bank erosion. Road erosion was also found to have a significant contribution in three of the samples collected during lower-intensity rain events. The source apportionment was performed with and without isotopes. The results were largely consistent; however, the use of isotopes was found to slightly increase the uncertainty in most cases. Correlation analysis between the contributions of sources shows strong correlations between stream bank and agriculture, whereas roads and forest are less correlated with the other sources; the method was thus better able to estimate road and forest contributions independently. The hypothesis that the contributions of sources do not change seasonally was tested by assuming that all ten fluvial samples had the same source contributions. This hypothesis was rejected, demonstrating a significant seasonal variation in the sources of sediments in the stream.
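At the core of such a Bayesian CMB is a mixing model: the fluvial tracer vector is a convex combination of source profiles plus measurement error, and the posterior over source fractions is explored, for example by MCMC under a Dirichlet prior. The sketch below shows a Gaussian log-likelihood for this model with toy numbers; it is a generic formulation, not the authors' implementation.

```python
import numpy as np

# Generic CMB mixing-model log-likelihood (sketch): fractions f are
# non-negative and sum to 1; the predicted fluvial tracer vector is the
# fraction-weighted combination of source profiles.

def log_likelihood(f, source_profiles, fluvial_sample, sigma):
    """f: (n_sources,) fractions; source_profiles: (n_sources, n_tracers);
    fluvial_sample, sigma: (n_tracers,) measured tracers and their errors."""
    predicted = f @ source_profiles
    resid = (fluvial_sample - predicted) / sigma
    return -0.5 * np.sum(resid ** 2)

# Toy example: 3 sources, 4 tracers (all values illustrative).
profiles = np.array([[1.0, 0.2, 0.5, 0.1],
                     [0.3, 1.1, 0.4, 0.8],
                     [0.6, 0.5, 1.2, 0.3]])
sample = np.array([0.6, 0.6, 0.7, 0.4])
print(log_likelihood(np.array([0.4, 0.3, 0.3]), profiles, sample,
                     sigma=np.full(4, 0.1)))
```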
Optimizing dynamic downscaling in one-way nesting using a regional ocean model
NASA Astrophysics Data System (ADS)
Pham, Van Sy; Hwang, Jin Hwan; Ku, Hyeyun
2016-10-01
Dynamical downscaling with nested regional oceanographic models has been demonstrated to be an effective approach both for operational forecasting of sea weather on regional scales and for projections of future climate change and its impact on the ocean. However, when nesting procedures are carried out in dynamic downscaling from a larger-scale model or set of observations to a smaller scale, errors are unavoidable due to the differences in grid sizes and updating intervals. The present work assesses the impact of errors produced by nesting procedures on the downscaled results of Ocean Regional Circulation Models (ORCMs). Errors are identified and evaluated based on their sources and characteristics by employing the Big-Brother Experiment (BBE). The BBE uses the same model to produce both the nesting and nested simulations, so it addresses those error sources separately (i.e., without combining the contributions of errors from different sources). Here, we focus on errors resulting from differences in spatial grids, updating intervals, and domain sizes. After the BBE was run separately for diverse cases, a Taylor diagram was used to analyze the results and recommend an optimal combination of grid size, updating period, and domain size. Finally, the suggested setups for the downscaling were evaluated by examining the spatial correlations of variables and the relative magnitudes of variances between the nested model and the original data.
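For reference, a Taylor diagram summarizes three statistics comparing a nested field against its reference: the correlation, the ratio of standard deviations, and the centered root-mean-square difference. A generic sketch of their computation (not the study's code) is given below with synthetic data.

```python
import numpy as np

# The three quantities a Taylor diagram displays, for a nested (downscaled)
# field versus reference data (generic sketch, synthetic inputs).

def taylor_stats(nested: np.ndarray, reference: np.ndarray):
    r = np.corrcoef(nested, reference)[0, 1]
    std_ratio = nested.std() / reference.std()
    crmsd = np.sqrt(np.mean(((nested - nested.mean())
                             - (reference - reference.mean())) ** 2))
    return r, std_ratio, crmsd

rng = np.random.default_rng(1)
ref = rng.normal(size=1000)
nest = ref + 0.3 * rng.normal(size=1000)   # toy "downscaled" field
print("r=%.3f  sigma-ratio=%.3f  CRMSD=%.3f" % taylor_stats(nest, ref))
```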
Error Sources in Processing LIDAR Based Bridge Inspection
NASA Astrophysics Data System (ADS)
Bian, H.; Chen, S. E.; Liu, W.
2017-09-01
Bridge inspection is a critical task in infrastructure management and is facing unprecedented challenges after a series of bridge failures. The prevailing visual inspection is insufficient for providing reliable and quantitative bridge information, even though a systematic quality management framework has been built to ensure the quality of visual inspection data and to minimize errors during the inspection process. LiDAR-based remote sensing is recommended as an effective tool for overcoming some of the disadvantages of visual inspection. In order to evaluate the potential of applying this technology to bridge inspection, some of the error sources in LiDAR-based bridge inspection are analysed. Scanning-angle variance in field data collection and differences in algorithm design in scan-data processing are found to be factors that introduce errors into inspection results. Beyond studying the error sources, further consideration should be given to improving inspection data quality, and statistical analysis might be employed in the future to evaluate an inspection process that contains a series of uncertain factors. Overall, the development of a reliable bridge inspection system requires not only improved data processing algorithms but also systematic measures to mitigate possible errors in the entire inspection workflow. If LiDAR or some other technology is to be accepted as a supplement to visual inspection, the current quality management framework will have to be modified or redesigned, a task as urgent as the refinement of the inspection techniques themselves.
Grogger, P; Sacher, C; Weber, S; Millesi, G; Seemann, R
2018-04-10
Deviations in measuring dentofacial components on a lateral X-ray represent a major hurdle in the subsequent treatment of dysgnathic patients. In a retrospective study, we investigated the most prevalent sources of error in the following commonly used cephalometric measurements: the angles Sella-Nasion-Point A (SNA), Sella-Nasion-Point B (SNB) and Point A-Nasion-Point B (ANB); the Wits appraisal; the anteroposterior dysplasia indicator (APDI); and the overbite depth indicator (ODI). Preoperative lateral radiographic images of patients with dentofacial deformities were collected and the landmarks digitally traced by three independent raters. Cephalometric analysis was automatically performed on the basis of 1116 tracings. Error analysis identified the x-coordinate of Point A as the prevalent source of error in all investigated measurements except SNB, in which it is not incorporated. In SNB, the y-coordinate of Nasion dominated the error variance. SNB showed the lowest inter-rater variation. In addition, our observations confirmed previous studies showing that landmark identification variance follows characteristic error envelopes, here in the largest set of tracings analysed to date. Variance orthogonal to defining planes was of relevance, while variance parallel to planes was not. Taking these findings into account, orthognathic surgeons as well as orthodontists should be able to perform cephalometry more accurately and accomplish better therapeutic results. Copyright © 2018 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
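To make concrete how landmark coordinates propagate into the angular measurements, the sketch below computes SNA, SNB, and ANB from digitized 2D landmarks; the coordinates are arbitrary illustrative values, not data from the study.

```python
import numpy as np

# Cephalometric angles from digitized landmarks (generic sketch): SNA is the
# angle at Nasion between Sella and Point A, SNB likewise with Point B, and
# ANB = SNA - SNB. Errors in a landmark coordinate shift these angles.

def angle_at(vertex, p1, p2):
    """Angle (degrees) at `vertex` between rays vertex->p1 and vertex->p2."""
    v1, v2 = p1 - vertex, p2 - vertex
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

S = np.array([0.0, 0.0])      # Sella (illustrative coordinates, mm)
N = np.array([65.0, 5.0])     # Nasion
A = np.array([62.0, -40.0])   # Point A
B = np.array([58.0, -70.0])   # Point B

SNA = angle_at(N, S, A)
SNB = angle_at(N, S, B)
print(f"SNA={SNA:.1f}  SNB={SNB:.1f}  ANB={SNA - SNB:.1f}")
```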
Rieger, Martina; Bart, Victoria K. E.
2016-01-01
We investigated to what extent different sources of information are used in typing on a computer keyboard. Using self-reports, 10-finger typists and idiosyncratic typists estimated how much attention they pay to different sources of information during copy typing and free typing, and how much they use them for error detection. 10-finger typists reported less attention to the keyboard and the fingers and more attention to the template and the screen than idiosyncratic typists. The groups did not differ in attention to touch/kinaesthesis in copy typing and free typing, but 10-finger typists reported more use of touch/kinaesthesis in error detection. This indicates that processing of tactile/kinaesthetic information may occur largely outside conscious control, as long as no errors occur. 10-finger typists reported more use of internal prediction of movement consequences for error detection than idiosyncratic typists, reflecting more precise internal models. Further, in copy typing compared to free typing, attention to the template is required, thus leaving less attentional capacity for other sources of information. Correlations showed that higher-skilled typists, regardless of typing style, rely more on sources of information which are usually associated with 10-finger typing. One limitation of the study is that only self-reports were used. We conclude that typing task, typing proficiency, and typing style influence how attention is distributed during typing. PMID: 28018256
The Public Understanding of Error in Educational Assessment
ERIC Educational Resources Information Center
Gardner, John
2013-01-01
Evidence from recent research suggests that in the UK the public perception of errors in national examinations is that they are simply mistakes; events that are preventable. This perception predominates over the more sophisticated technical view that errors arise from many sources and create an inevitable variability in assessment outcomes. The…