Binocular optical axis parallelism detection precision analysis based on Monte Carlo method
NASA Astrophysics Data System (ADS)
Ying, Jiaju; Liu, Bingqi
2018-02-01
According to the working principle of the binocular photoelectric instrument optical axis parallelism digital calibration instrument, and considering all components of the instrument, the various factors affecting system precision are analyzed and a precision analysis model is established. Based on the error distribution, the Monte Carlo method is used to analyze the relationship between the comprehensive error and the change of the center coordinate of the circular target image. The method can further guide the error distribution, optimize control of the factors that have the greatest influence on the comprehensive error, and improve the measurement accuracy of the optical axis parallelism digital calibration instrument.
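For illustration, a minimal sketch of the Monte Carlo error-propagation idea described in this abstract: draw each component error from an assumed distribution, sum its effect on the measured center coordinate of the circular target image, and read the comprehensive error off the spread of the results. The error sources and their sigmas below are placeholders, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of Monte Carlo trials

# Assumed 1-sigma values (pixels) for illustrative error sources:
# collimator alignment, CCD centroiding noise, mechanical mounting tilt.
sigma_collimator = 0.30
sigma_centroid = 0.15
sigma_mount = 0.20

# Draw each error source independently and sum its effect on the
# measured center coordinate of the circular target image.
dx = (rng.normal(0, sigma_collimator, N)
      + rng.normal(0, sigma_centroid, N)
      + rng.normal(0, sigma_mount, N))

# The spread of the simulated center coordinate gives the comprehensive error.
print(f"comprehensive 1-sigma error: {dx.std():.3f} px")
print(f"95% interval: +/- {1.96 * dx.std():.3f} px")
```

Comparing the output spread with and without each source switched on indicates which factor dominates and therefore where error control is most effective.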
Kandel, Himal; Khadka, Jyoti; Goggin, Michael; Pesudovs, Konrad
2017-12-01
This review has identified the best existing patient-reported outcome (PRO) instruments in refractive error. The article highlights the limitations of the existing instruments and discusses the way forward. A systematic review was conducted to identify the types of PROs used in refractive error, to determine the quality of the existing PRO instruments in terms of their psychometric properties, and to determine the limitations in the content of the existing PRO instruments. Articles describing a PRO instrument measuring 1 or more domains of quality of life in people with refractive error were identified by electronic searches on the MEDLINE, PubMed, Scopus, Web of Science, and Cochrane databases. The information on content development, psychometric properties, validity, reliability, and responsiveness of those PRO instruments was extracted from the selected articles. The analysis was done based on a comprehensive set of assessment criteria. One hundred forty-eight articles describing 47 PRO instruments in refractive error were included in the review. Most of the articles (99 [66.9%]) used refractive error-specific PRO instruments. The PRO instruments comprised 19 refractive, 12 vision but nonrefractive, and 16 generic PRO instruments. Only 17 PRO instruments were validated in refractive error populations; six of them were developed using Rasch analysis. None of the PRO instruments has items across all domains of quality of life. The Quality of Life Impact of Refractive Correction, the Quality of Vision, and the Contact Lens Impact on Quality of Life have comparatively better quality, with some limitations, compared with the other PRO instruments. This review describes the PRO instruments and informs the choice of an appropriate measure in refractive error. We identified the need for a comprehensive and scientifically robust refractive error-specific PRO instrument. Item banking and computer-adaptive testing may be the way to provide such an instrument.
NASA Technical Reports Server (NTRS)
Levy, G.; Brown, R. A.
1986-01-01
A simple economical objective analysis scheme is devised and tested on real scatterometer data. It is designed to treat dense data such as those of the Seasat A Satellite Scatterometer (SASS) for individual or multiple passes, and preserves subsynoptic scale features. Errors are evaluated with the aid of sampling ('bootstrap') statistical methods. In addition, sensitivity tests have been performed which establish qualitative confidence in calculated fields of divergence and vorticity. The SASS wind algorithm could be improved; however, the data at this point are limited by instrument errors rather than analysis errors. The analysis error is typically negligible in comparison with the instrument error, but amounts to 30 percent of the instrument error in areas of strong wind shear. The scheme is very economical, and thus suitable for large volumes of dense data such as SASS data.
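A minimal sketch of the sampling ("bootstrap") error-evaluation idea mentioned above: resample the analyzed residuals with replacement many times and take the spread of the recomputed statistic as its standard error. The data values and the chosen statistic are illustrative assumptions, not SASS output.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_std_error(values, n_resamples=2000, stat=np.mean):
    """Bootstrap estimate of the standard error of a statistic."""
    values = np.asarray(values)
    stats = np.array([
        stat(rng.choice(values, size=values.size, replace=True))
        for _ in range(n_resamples)
    ])
    return stats.std(ddof=1)

# Hypothetical gridded divergence residuals (arbitrary units); illustrative only.
residuals = rng.normal(0.0, 0.8, size=200)
print(f"bootstrap standard error of the mean: {bootstrap_std_error(residuals):.3f}")
```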
Partial pressure analysis in space testing
NASA Technical Reports Server (NTRS)
Tilford, Charles R.
1994-01-01
For vacuum-system or test-article analysis it is often desirable to know the species and partial pressures of the vacuum gases. Residual gas or Partial Pressure Analyzers (PPA's) are commonly used for this purpose. These are mass spectrometer-type instruments, most commonly employing quadrupole filters. These instruments can be extremely useful, but they should be used with caution. Depending on the instrument design, calibration procedures, and conditions of use, measurements made with these instruments can be accurate to within a few percent, or in error by two or more orders of magnitude. Significant sources of error can include relative gas sensitivities that differ from handbook values by an order of magnitude, changes in sensitivity with pressure by as much as two orders of magnitude, changes in sensitivity with time after exposure to chemically active gases, and the dependence of the sensitivity for one gas on the pressures of other gases. However, for most instruments, these errors can be greatly reduced with proper operating procedures and conditions of use. In this paper, data are presented illustrating performance characteristics for different instruments and gases, operating parameters are recommended to minimize some errors, and calibration procedures are described that can detect and/or correct other errors.
Khanna, Rajesh; Handa, Aashish; Virk, Rupam Kaur; Ghai, Deepika; Handa, Rajni Sharma; Goel, Asim
2017-01-01
Background: Cleaning and shaping the canal is not an easy goal to achieve, as canal curvature plays a significant role during the instrumentation of curved canals. Aim: The present in vivo study was conducted to evaluate procedural errors during the preparation of curved root canals using hand Nitiflex and rotary K3XF instruments. Materials and Methods: Procedural errors such as ledge formation, instrument separation, and perforation (apical, furcal, strip) were determined in sixty patients, divided into two groups. In Group I, thirty teeth in thirty patients were prepared using the hand Nitiflex system, and in Group II, thirty teeth in thirty patients were prepared using the K3XF rotary system. The evaluation was done clinically as well as radiographically. The results recorded from both groups were compiled and subjected to statistical analysis. Statistical Analysis: Chi-square test was used to compare the procedural errors (instrument separation, ledge formation, and perforation). Results: In the present study, both hand Nitiflex and rotary K3XF showed ledge formation and instrument separation, although both errors were less frequent with the rotary K3XF file system than with hand Nitiflex. No perforation was seen in either instrument group. Conclusion: Canal curvature plays a significant role during the instrumentation of curved canals. Procedural errors such as ledge formation and instrument separation were less frequent with the rotary K3XF file system than with hand Nitiflex. PMID:29042727
Error analysis of multi-needle Langmuir probe measurement technique.
Barjatya, Aroh; Merritt, William
2018-04-01
Multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.
Error analysis of multi-needle Langmuir probe measurement technique
NASA Astrophysics Data System (ADS)
Barjatya, Aroh; Merritt, William
2018-04-01
Multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.
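For context, a minimal sketch of the textbook slope-based multi-needle Langmuir probe density estimate under cylindrical orbital-motion-limited (OML) assumptions, in which the square of the collected current is linear in bias voltage. The probe geometry, bias voltages, and currents below are illustrative assumptions, and this is the standard formula rather than the specific analysis adjustment proposed in the paper.

```python
import numpy as np

e = 1.602176634e-19      # elementary charge [C]
m_e = 9.1093837015e-31   # electron mass [kg]

# Illustrative cylindrical needle geometry: radius 0.25 mm, length 25 mm.
r, L = 0.25e-3, 25e-3
A = 2 * np.pi * r * L    # collecting surface area [m^2]

# Hypothetical bias voltages [V] and collected currents [A] from four needles.
V = np.array([2.5, 4.0, 5.5, 7.0])
I = np.array([0.9e-6, 1.15e-6, 1.35e-6, 1.55e-6])

# Fit the slope of I^2 versus bias voltage (linear in the OML regime).
slope, _ = np.polyfit(V, I**2, 1)

# Textbook OML relation: dI^2/dV = 2 n^2 e^3 A^2 / (pi^2 m_e), solved for n.
n_e = (np.pi / A) * np.sqrt(m_e * slope / (2 * e**3))
print(f"electron density ~ {n_e:.2e} m^-3")
```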
Effects of Correlated Errors on the Analysis of Space Geodetic Data
NASA Technical Reports Server (NTRS)
Romero-Wolf, Andres; Jacobs, C. S.
2011-01-01
As thermal errors are reduced, instrumental and troposphere correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.
Casas, Francisco J; Ortiz, David; Villa, Enrique; Cano, Juan L; Cagigas, Jaime; Pérez, Ana R; Aja, Beatriz; Terán, J Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo
2015-08-05
This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.
Monitoring Instrument Performance in Regional Broadband Seismic Network Using Ambient Seismic Noise
NASA Astrophysics Data System (ADS)
Ye, F.; Lyu, S.; Lin, J.
2017-12-01
In the past ten years, the number of seismic stations has increased significantly, and regional seismic networks with advanced technology have been gradually developed all over the world. The resulting broadband data help to improve seismological research. It is important to monitor the performance of broadband instruments in a new network over a long period of time to ensure the accuracy of seismic records. Here, we propose a method that uses ambient noise data in the period range 5-25 s to monitor instrument performance and check data quality in situ. The method is based on an analysis of amplitude and phase index parameters calculated from pairwise cross-correlations among three stations, which provides multiple references for reliable error estimates. Index parameters calculated daily during a two-year observation period are evaluated to identify stations with instrument response errors in near real time. During data processing, initial instrument responses are used in place of available instrument responses to simulate instrument response errors, which are then used to verify our results. We also examine the feasibility of the method using data from stations selected from USArray at different locations and analyze the possible instrumental errors that result in time shifts, which are used to verify the method. Additionally, we show in an application that instrument response errors involving pole-zero variations introduce apparently significant velocity perturbations, larger than the standard deviation, when monitoring temporal variations in crustal properties. The results indicate that monitoring seismic instrument performance helps eliminate data pollution before analysis begins.
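A rough illustration of the pairwise cross-correlation idea: with three stations there are three station pairs, so a timing or response error can be attributed to a single station by checking which pairs show a shifted correlation peak. The synthetic traces, sampling rate, and the injected 3-sample delay below are assumptions for illustration only, not the network's data or processing.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def xcorr_lag(a, b, fs):
    """Lag (s) at the peak of the cross-correlation between two traces."""
    cc = correlate(a, b, mode="full")
    lags = correlation_lags(len(a), len(b), mode="full")
    return lags[np.argmax(cc)] / fs

fs = 1.0                      # samples per second (illustrative)
rng = np.random.default_rng(0)
n = 36000
common = rng.normal(size=n)   # shared ambient-noise wavefield (toy model)

sta1 = common + 0.3 * rng.normal(size=n)
sta2 = np.roll(common, 3) + 0.3 * rng.normal(size=n)   # station 2 carries a 3-sample timing error
sta3 = common + 0.3 * rng.normal(size=n)

# Pairs involving station 2 show the shift; the 1-3 pair does not,
# so the error is attributed to station 2.
for pair, (a, b) in {"1-2": (sta1, sta2), "1-3": (sta1, sta3), "2-3": (sta2, sta3)}.items():
    print(pair, xcorr_lag(a, b, fs), "s")
```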
NASA Technical Reports Server (NTRS)
Loughman, R.; Flittner, D.; Herman, B.; Bhartia, P.; Hilsenrath, E.; McPeters, R.; Rault, D.
2002-01-01
The SOLSE (Shuttle Ozone Limb Sounding Experiment) and LORE (Limb Ozone Retrieval Experiment) instruments are scheduled for reflight on Space Shuttle flight STS-107 in July 2002. In addition, the SAGE III (Stratospheric Aerosol and Gas Experiment) instrument will begin to make limb scattering measurements during Spring 2002. The optimal estimation technique is used to analyze visible and ultraviolet limb scattered radiances and produce a retrieved ozone profile. The algorithm used to analyze data from the initial flight of the SOLSE/LORE instruments (on Space Shuttle flight STS-87 in November 1997) forms the basis of the current algorithms, with expansion to take advantage of the increased multispectral information provided by SOLSE/LORE-2 and SAGE III. We also present detailed sensitivity analysis for these ozone retrieval algorithms. The primary source of ozone retrieval error is tangent height misregistration (i.e., instrument pointing error), which is relevant throughout the altitude range of interest, and can produce retrieval errors on the order of 10-20 percent due to a tangent height registration error of 0.5 km at the tangent point. Other significant sources of error are sensitivity to stratospheric aerosol and sensitivity to error in the a priori ozone estimate (given assumed instrument signal-to-noise = 200). These can produce errors up to 10 percent for the ozone retrieval at altitudes less than 20 km, but produce little error above that level.
Evaluation of Acoustic Doppler Current Profiler measurements of river discharge
Morlock, S.E.
1996-01-01
The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.
Analysis of Solar Spectral Irradiance Measurements from the SBUV/2-Series and the SSBUV Instruments
NASA Technical Reports Server (NTRS)
Cebula, Richard P.; DeLand, Matthew T.; Hilsenrath, Ernest
1997-01-01
During this period of performance, 1 March 1997 - 31 August 1997, the NOAA-11 SBUV/2 solar spectral irradiance data set was validated using both internal and external assessments. Initial quality checking revealed minor problems with the data (e.g., residual goniometric errors that were manifest as differences between the two scans acquired each day). The sources of these errors were determined and the errors were corrected. Time series were constructed for selected wavelengths and the solar irradiance changes measured by the instrument were compared to a Mg II proxy-based model of short- and long-term solar irradiance variations. This analysis suggested that errors due to residual, uncorrected long-term instrument drift have been reduced to less than 1-2% over the entire 5.5 year NOAA-11 data record. Detailed statistical analysis was performed. This analysis, which will be documented in a manuscript now in preparation, conclusively demonstrates the evolution of solar rotation periodicity and strength during solar cycle 22.
Skeletal and body composition evaluation
NASA Technical Reports Server (NTRS)
Mazess, R. B.
1983-01-01
Research on radiation detectors for absorptiometry; analysis of errors affecting single photon absorptiometry and development of instrumentation; analysis of errors affecting dual photon absorptiometry and development of instrumentation; comparison of skeletal measurements with other techniques; cooperation with NASA projects for skeletal evaluation in spaceflight (Experiment MO-78) and in laboratory studies with immobilized animals; studies of postmenopausal osteoporosis; organization of scientific meetings and workshops on absorptiometric measurement; and development of instrumentation for measurement of fluid shifts in the human body were performed. Instrumentation was developed that allows accurate and precise (2% error) measurements of mineral content in compact and trabecular bone and of the total skeleton. Instrumentation was also developed to measure fluid shifts in the extremities. Radiation exposure with these procedures is low (2-10 mrem). One hundred seventy-three technical reports and one hundred and four published papers of studies from the University of Wisconsin Bone Mineral Lab are listed.
Casas, Francisco J.; Ortiz, David; Villa, Enrique; Cano, Juan L.; Cagigas, Jaime; Pérez, Ana R.; Aja, Beatriz; Terán, J. Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo
2015-01-01
This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process. PMID:26251906
An Introduction to Error Analysis for Quantitative Chemistry
ERIC Educational Resources Information Center
Neman, R. L.
1972-01-01
Describes two formulas for calculating errors due to instrument limitations which are usually found in gravimetric volumetric analysis and indicates their possible applications to other fields of science. (CC)
NASA Technical Reports Server (NTRS)
Mohr, R. L.
1975-01-01
A set of four digital computer programs is presented which can be used to investigate the effects of instrumentation errors on the accuracy of aircraft and helicopter stability-and-control derivatives identified from flight test data. The programs assume that the differential equations of motion are linear and consist of small perturbations about a quasi-steady flight condition. It is also assumed that a Newton-Raphson optimization technique is used for identifying the estimates of the parameters. Flow charts and printouts are included.
Estimating Uncertainty in Long Term Total Ozone Records from Multiple Sources
NASA Technical Reports Server (NTRS)
Frith, Stacey M.; Stolarski, Richard S.; Kramarova, Natalya; McPeters, Richard D.
2014-01-01
Total ozone measurements derived from the TOMS and SBUV backscattered solar UV instrument series cover the period from late 1978 to the present. As the SBUV series of instruments comes to an end, we look to the 10 years of data from the AURA Ozone Monitoring Instrument (OMI) and two years of data from the Ozone Mapping Profiler Suite (OMPS) on board the Suomi National Polar-orbiting Partnership satellite to continue the record. When combining these records to construct a single long-term data set for analysis we must estimate the uncertainty in the record resulting from potential biases and drifts in the individual measurement records. In this study we present a Monte Carlo analysis used to estimate uncertainties in the Merged Ozone Dataset (MOD), constructed from the Version 8.6 SBUV2 series of instruments. We extend this analysis to incorporate OMI and OMPS total ozone data into the record and investigate the impact of multiple overlapping measurements on the estimated error. We also present an updated column ozone trend analysis and compare the size of statistical error (error from variability not explained by our linear regression model) to that from instrument uncertainty.
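A minimal sketch of the Monte Carlo idea described here: perturb each instrument segment of a merged record with random offsets and drifts drawn from assumed bias/drift uncertainties, refit the long-term trend each time, and take the spread of the recovered trends as the instrument-related uncertainty. The record length, segment boundaries, and sigma values are invented for illustration, not MOD or SBUV/2 numbers.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2014) - 1996.0          # time axis, centered (illustrative)
true_trend = -0.1                               # % per year, illustrative
truth = true_trend * years

# Suppose the merged record is built from three consecutive instruments.
segments = [(years < -6), (years >= -6) & (years < 6), (years >= 6)]
sigma_bias, sigma_drift = 0.5, 0.05             # assumed 1-sigma offset (%) and drift (%/yr)

trends = []
for _ in range(5000):
    record = truth.copy()
    for seg in segments:                        # each instrument gets its own offset and drift
        t = years[seg]
        record[seg] += rng.normal(0, sigma_bias) + rng.normal(0, sigma_drift) * (t - t.mean())
    trends.append(np.polyfit(years, record, 1)[0])

print(f"trend uncertainty from instrument bias/drift: {np.std(trends):.3f} %/yr (1 sigma)")
```

Adding an extra overlapping record (e.g., OMI or OMPS) amounts to adding another segment or constraint and rerunning the same loop, which is how the impact of overlap on the estimated error can be explored.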
Applications of inertial-sensor high-inheritance instruments to DSN precision antenna pointing
NASA Technical Reports Server (NTRS)
Goddard, R. E.
1992-01-01
Laboratory test results of the initialization and tracking performance of an existing inertial-sensor-based instrument are given. The instrument, although not primarily designed for precision antenna pointing applications, demonstrated an on-average 10-hour tracking error of several millidegrees. The system-level instrument performance is shown by analysis to be sensor limited. Simulated instrument improvements show a tracking error of less than 1 mdeg, which would provide acceptable performance, i.e., low pointing loss, for the DSN 70-m antenna subnetwork, operating at Ka-band (1-cm wavelength).
Applications of inertial-sensor high-inheritance instruments to DSN precision antenna pointing
NASA Technical Reports Server (NTRS)
Goddard, R. E.
1992-01-01
Laboratory test results of the initialization and tracking performance of an existing inertial-sensor-based instrument are given. The instrument, although not primarily designed for precision antenna pointing applications, demonstrated an on-average 10-hour tracking error of several millidegrees. The system-level instrument performance is shown by analysis to be sensor limited. Simulated instrument improvements show a tracking error of less than 1 mdeg, which would provide acceptable performance, i.e., low pointing loss, for the Deep Space Network 70-m antenna subnetwork, operating at Ka-band (1-cm wavelength).
Liquid Medication Dosing Errors in Children: Role of Provider Counseling Strategies
Yin, H. Shonna; Dreyer, Benard P.; Moreira, Hannah A.; van Schaick, Linda; Rodriguez, Luis; Boettger, Susanne; Mendelsohn, Alan L.
2014-01-01
Objective To examine the degree to which recommended provider counseling strategies, including advanced communication techniques and dosing instrument provision, are associated with reductions in parent liquid medication dosing errors. Methods Cross-sectional analysis of baseline data on provider communication and dosing instrument provision from a study of a health literacy intervention to reduce medication errors. Parents whose children (<9 years) were seen in two urban public hospital pediatric emergency departments (EDs) and were prescribed daily dose liquid medications self-reported whether they received counseling about their child’s medication, including advanced strategies (teachback, drawings/pictures, demonstration, showback) and receipt of a dosing instrument. Primary dependent variable: observed dosing error (>20% deviation from prescribed). Multivariate logistic regression analyses performed, controlling for: parent age, language, country, ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease status; site. Results Of 287 parents, 41.1% made dosing errors. Advanced counseling and instrument provision in the ED were reported by 33.1% and 19.2%, respectively; 15.0% reported both. Advanced counseling and instrument provision in the ED were associated with decreased errors (30.5 vs. 46.4%, p=0.01; 21.8 vs. 45.7%, p=0.001). In adjusted analyses, ED advanced counseling in combination with instrument provision was associated with a decreased odds of error compared to receiving neither (AOR 0.3; 95% CI 0.1–0.7); advanced counseling alone and instrument alone were not significantly associated with odds of error. Conclusion Provider use of advanced counseling strategies and dosing instrument provision may be especially effective in reducing errors when used together. PMID:24767779
Liquid medication dosing errors in children: role of provider counseling strategies.
Yin, H Shonna; Dreyer, Benard P; Moreira, Hannah A; van Schaick, Linda; Rodriguez, Luis; Boettger, Susanne; Mendelsohn, Alan L
2014-01-01
To examine the degree to which recommended provider counseling strategies, including advanced communication techniques and dosing instrument provision, are associated with reductions in parent liquid medication dosing errors. Cross-sectional analysis of baseline data on provider communication and dosing instrument provision from a study of a health literacy intervention to reduce medication errors. Parents whose children (<9 years) were seen in 2 urban public hospital pediatric emergency departments (EDs) and were prescribed daily dose liquid medications self-reported whether they received counseling about their child's medication, including advanced strategies (teachback, drawings/pictures, demonstration, showback) and receipt of a dosing instrument. The primary dependent variable was observed dosing error (>20% deviation from prescribed). Multivariate logistic regression analyses were performed, controlling for parent age, language, country, ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease status; and site. Of 287 parents, 41.1% made dosing errors. Advanced counseling and instrument provision in the ED were reported by 33.1% and 19.2%, respectively; 15.0% reported both. Advanced counseling and instrument provision in the ED were associated with decreased errors (30.5 vs. 46.4%, P = .01; 21.8 vs. 45.7%, P = .001). In adjusted analyses, ED advanced counseling in combination with instrument provision was associated with a decreased odds of error compared to receiving neither (adjusted odds ratio 0.3; 95% confidence interval 0.1-0.7); advanced counseling alone and instrument alone were not significantly associated with odds of error. Provider use of advanced counseling strategies and dosing instrument provision may be especially effective in reducing errors when used together. Copyright © 2014 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
Spatial sampling considerations of the CERES (Clouds and Earth Radiant Energy System) instrument
NASA Astrophysics Data System (ADS)
Smith, G. L.; Manalo-Smith, Natividdad; Priestley, Kory
2014-10-01
The CERES (Clouds and Earth Radiant Energy System) instrument is a scanning radiometer with three channels for measuring the Earth radiation budget. At present, CERES models are operating aboard the Terra, Aqua and Suomi/NPP spacecraft, and flights of CERES instruments are planned for the JPSS-1 spacecraft and its successors. CERES scans from one limb of the Earth to the other and back. The footprint size grows with distance from nadir simply due to geometry, so the size of the smallest features that can be resolved from the data increases and spatial sampling errors grow with nadir angle. This paper presents an analysis of the effect of nadir angle on spatial sampling errors of the CERES instrument. The analysis is performed in the Fourier domain. Spatial sampling errors are created by smoothing (blurring) of features that are the size of the footprint or smaller, and by inadequate sampling, which causes aliasing errors. These spatial sampling errors are computed in terms of the system transfer function, which is the Fourier transform of the point response function, the spacing of data points, and the spatial spectrum of the radiance field.
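A rough sketch of the Fourier-domain bookkeeping described above, assuming (for illustration only) an idealized boxcar footprint whose transfer function is a sinc; the footprint sizes and sample spacing are made-up numbers, not CERES values. Blurring shows up as attenuation of high spatial frequencies, while features above the Nyquist wavenumber set by the sample spacing alias into lower frequencies.

```python
import numpy as np

def boxcar_mtf(footprint_km, k_cyc_per_km):
    """Transfer function of an idealized boxcar footprint (a sinc in the Fourier domain)."""
    return np.sinc(footprint_km * k_cyc_per_km)   # np.sinc(x) = sin(pi x)/(pi x)

sample_spacing = 20.0                  # km between footprints (illustrative)
nyquist = 1.0 / (2 * sample_spacing)   # features above this wavenumber alias

k = np.array([0.005, 0.01, 0.02, 0.04])          # cycles/km (200, 100, 50, 25 km features)
for footprint in (20.0, 60.0):                   # illustrative near-nadir vs. off-nadir footprint
    atten = np.abs(boxcar_mtf(footprint, k))
    print(f"footprint {footprint:4.0f} km: attenuation {np.round(atten, 2)}; "
          f"features above {nyquist:.3f} cyc/km alias")
```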
Khanna, Rajesh; Handa, Aashish; Virk, Rupam Kaur; Ghai, Deepika; Handa, Rajni Sharma; Goel, Asim
2017-01-01
The process of cleaning and shaping the canal is not an easy goal to achieve, as canal curvature plays a significant role during the instrumentation of curved canals. The present in vivo study was conducted to evaluate procedural errors during the preparation of curved root canals using hand Nitiflex and rotary K3XF instruments. Procedural errors such as ledge formation, instrument separation, and perforation (apical, furcal, strip) were determined in sixty patients, divided into two groups. In Group I, thirty teeth in thirty patients were prepared using the hand Nitiflex system, and in Group II, thirty teeth in thirty patients were prepared using the K3XF rotary system. The evaluation was done clinically as well as radiographically. The results recorded from both groups were compiled and subjected to statistical analysis. Chi-square test was used to compare the procedural errors (instrument separation, ledge formation, and perforation). In the present study, both hand Nitiflex and rotary K3XF showed ledge formation and instrument separation, although both errors were less frequent with the rotary K3XF file system than with hand Nitiflex. No perforation was seen in either instrument group. Canal curvature plays a significant role during the instrumentation of curved canals. Procedural errors such as ledge formation and instrument separation were less frequent with the rotary K3XF file system than with hand Nitiflex.
International Space Station Remote Sensing Pointing Analysis
NASA Technical Reports Server (NTRS)
Jacobson, Craig A.
2007-01-01
This paper analyzes the geometric and disturbance aspects of utilizing the International Space Station for remote sensing of earth targets. The proposed instrument (in prototype development) is SHORE (Station High-Performance Ocean Research Experiment), a multiband optical spectrometer with 15 m pixel resolution. The analysis investigates the contribution of the error effects to the quality of data collected by the instrument. This analysis supported the preliminary studies to determine feasibility of utilizing the International Space Station as an observing platform for a SHORE type of instrument. Rigorous analyses will be performed if a SHORE flight program is initiated. The analysis begins with the discussion of the coordinate systems involved and then conversion from the target coordinate system to the instrument coordinate system. Next the geometry of remote observations from the Space Station is investigated including the effects of the instrument location in Space Station and the effects of the line of sight to the target. The disturbance and error environment on Space Station is discussed covering factors contributing to drift and jitter, accuracy of pointing data and target and instrument accuracies.
Current Issues in the Design and Information Content of Instrument Approach Charts
DOT National Transportation Integrated Search
1995-03-01
This report documents an analysis and interview effort conducted to identify common operational errors made using current Instrument Approach Plates (IAP), Standard Terminal Arrival Route (STAR) charts, Standard Instrument Departure (SID) charts,...
NASA Astrophysics Data System (ADS)
Borovski, A.; Postylyakov, O.; Elokhov, A.; Bruchkovski, I.
2017-11-01
An instrument for measuring atmospheric trace gases by the DOAS method using scattered solar radiation was developed at the A.M. Obukhov IAP RAS. The instrument layout is based on the lab Shamrock 303i spectrograph supplemented by a two-port radiation input system employing optical fiber. Optical ports may be used with a telescope with a fixed field of view or with a scanning MAX-DOAS unit. The MAX-DOAS unit port will be used for investigation of gas contents and profiles in the lower troposphere. In September 2016 the IAP instrument participated in the CINDI-2 campaign, held in the Netherlands. CINDI-2 (2nd Cabauw Intercomparison of Nitrogen Dioxide Measuring Instruments) involves about 40 instruments quasi-synchronously performing DOAS measurements of NO2 and other trace gases. During the campaign the instrument ports had telescopes A and B with similar fields of view of about 0.3°. Telescope A was always directed to the zenith. Telescope B was directed at a 5° elevation angle. Two gratings were installed in the spectrometer. They provide different spectral resolution (FWHM 0.4 and 0.8 nm, respectively) and spectral window width (70 and 140 nm, respectively). During the CINDI-2 campaign we performed test measurements in the UV and visible wavelength ranges to investigate instrument stability and retrieval errors of NO2 and HCHO contents. In this paper we perform a preliminary error analysis of the retrieval of NO2 and HCHO differential slant column densities, using spectra measured in four modes of the instrument, based on residual noise analysis. It was found that rotation of the grating turret does not significantly affect the quality of NO2 DSCD retrieval from spectra measured in the visible spectral region. The influence of grating turret rotation is much more significant for gas DSCD retrieval from spectra measured in the UV spectral region. The standard deviation of the retrieval error points to the presence of some systematic error.
Masullo, Carlo; Piccininni, Chiara; Quaranta, Davide; Vita, Maria Gabriella; Gaudino, Simona; Gainotti, Guido
2012-10-01
Semantic memory was investigated in a patient (MR) affected by a severe apperceptive visual agnosia, due to an ischemic cerebral lesion bilaterally affecting the infero-mesial parts of the temporo-occipital cortices. The study was made by means of a Semantic Knowledge Questionnaire (Laiacona, Barbarotto, Trivelli, & Capitani, 1993), which separately takes into account four categories of living beings (animals, fruits, vegetables and body parts) and of artefacts (furniture, tools, vehicles and musical instruments), does not require a visual analysis, and allows errors concerning super-ordinate categorization, perceptual features, and functional/encyclopedic knowledge to be distinguished. When the total number of errors obtained on all the categories of living and non-living beings was considered, a non-significant trend toward a higher number of errors on living stimuli was observed. This difference, however, became significant when body parts and musical instruments were excluded from the analysis. Furthermore, the number of errors obtained on the musical instruments was similar to that obtained on the living categories of animals, fruits and vegetables and significantly higher than that obtained on the other artefact categories. This difference was still significant when familiarity, frequency of use and prototypicality of each stimulus entered into a logistic regression analysis. On the other hand, a separate analysis of errors obtained on questions exploring super-ordinate categorization, perceptual features and functional/encyclopedic attributes showed that the differences between living and non-living stimuli and between musical instruments and other artefact categories were mainly due to errors obtained on questions exploring perceptual features. All these data are at variance with the 'domains of knowledge' hypothesis, which assumes that the breakdown of different categories of living and non-living things respects the distinction between biological entities and artefacts, and support models assuming that 'category-specific semantic disorders' are the by-product of the differential weighting that visual-perceptual and functional (or action-related) attributes have in the construction of different biological and artefact categories. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Ioup, George E.; Ioup, Juliette W.
1991-01-01
The final report for work on the determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution is presented. Papers and theses prepared during the research report period are included. Among all the research results reported, note should be made of the specific investigation of the determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution. A methodology was developed to determine design and operation parameters for error minimization when deconvolution is included in data analysis. An error surface is plotted versus the signal-to-noise ratio (SNR) and all parameters of interest. Instrumental characteristics will determine a curve in this space. The SNR and parameter values which give the projection from the curve to the surface, corresponding to the smallest value for the error, are the optimum values. These values are constrained by the curve and so will not necessarily correspond to an absolute minimum in the error surface.
Effects of Tropospheric Spatio-Temporal Correlated Noise on the Analysis of Space Geodetic Data
NASA Technical Reports Server (NTRS)
Romero-Wolf, A. F.; Jacobs, C. S.
2011-01-01
The standard VLBI analysis models measurement noise as purely thermal errors modeled according to uncorrelated Gaussian distributions. As the price of recording bits steadily decreases, thermal errors will soon no longer dominate. It is therefore expected that troposphere and instrumentation/clock errors will increasingly become more dominant. Given that both of these errors have correlated spectra, properly modeling the error distributions will become more relevant for optimal analysis. This paper will discuss the advantages of including the correlations between tropospheric delays using a Kolmogorov spectrum and the frozen flow model pioneered by Treuhaft and Lanyi. We will show examples of applying these correlated noise spectra to the weighting of VLBI data analysis.
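A small sketch of what "including the correlations" means in practice: build a covariance matrix for the tropospheric delay noise from an assumed temporal correlation model and use it in generalized (whitened) least squares instead of a diagonal weight matrix. The exponential correlation used below is a simple stand-in for illustration, not the Kolmogorov/frozen-flow covariance of Treuhaft and Lanyi, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 600.0, 30.0)            # observation epochs [s], illustrative
n = t.size

# Assumed covariance: white thermal noise plus temporally correlated troposphere noise.
sigma_thermal, sigma_tropo, tau = 0.5, 1.0, 200.0     # cm, cm, s (illustrative)
C = (sigma_thermal**2) * np.eye(n) \
    + (sigma_tropo**2) * np.exp(-np.abs(t[:, None] - t[None, :]) / tau)

# Simple linear model: estimate offset + rate from noisy delay observations.
A = np.column_stack([np.ones(n), t])
truth = A @ np.array([2.0, 0.01])
y = truth + rng.multivariate_normal(np.zeros(n), C)

# Generalized least squares: whiten with the Cholesky factor of C.
L = np.linalg.cholesky(C)
Aw = np.linalg.solve(L, A)
yw = np.linalg.solve(L, y)
est, *_ = np.linalg.lstsq(Aw, yw, rcond=None)
cov_est = np.linalg.inv(Aw.T @ Aw)         # formal covariance of the estimates
print("estimate:", est, " formal sigmas:", np.sqrt(np.diag(cov_est)))
```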
NASA Technical Reports Server (NTRS)
Aronstein, David L.; Smith, J. Scott; Zielinski, Thomas P.; Telfer, Randal; Tournois, Severine C.; Moore, Dustin B.; Fienup, James R.
2016-01-01
The science instruments (SIs) comprising the James Webb Space Telescope (JWST) Integrated Science Instrument Module (ISIM) were tested in three cryogenic-vacuum test campaigns in the NASA Goddard Space Flight Center (GSFC)'s Space Environment Simulator (SES). In this paper, we describe the results of optical wavefront-error performance characterization of the SIs. The wavefront error is determined using image-based wavefront sensing (also known as phase retrieval), and the primary data used by this process are focus sweeps, a series of images recorded by the instrument under test in its as-used configuration, in which the focal plane is systematically changed from one image to the next. High-precision determination of the wavefront error also requires several sources of secondary data, including 1) spectrum, apodization, and wavefront-error characterization of the optical ground-support equipment (OGSE) illumination module, called the OTE Simulator (OSIM), 2) plate scale measurements made using a Pseudo-Nonredundant Mask (PNRM), and 3) pupil geometry predictions as a function of SI and field point, which are complicated because of a tricontagon-shaped outer perimeter and small holes that appear in the exit pupil due to the way that different light sources are injected into the optical path by the OGSE. One set of wavefront-error tests, for the coronagraphic channel of the Near-Infrared Camera (NIRCam) Longwave instruments, was performed using data from transverse translation diversity sweeps instead of focus sweeps, in which a sub-aperture is translated and/or rotated across the exit pupil of the system. Several optical-performance requirements that were verified during this ISIM-level testing are levied on the uncertainties of various wavefront-error-related quantities rather than on the wavefront errors themselves. This paper also describes the methodology, based on Monte Carlo simulations of the wavefront-sensing analysis of focus-sweep data, used to establish the uncertainties of the wavefront error maps.
NASA Technical Reports Server (NTRS)
Aronstein, David L.; Smith, J. Scott; Zielinski, Thomas P.; Telfer, Randal; Tournois, Severine C.; Moore, Dustin B.; Fienup, James R.
2016-01-01
The science instruments (SIs) comprising the James Webb Space Telescope (JWST) Integrated Science Instrument Module (ISIM) were tested in three cryogenic-vacuum test campaigns in the NASA Goddard Space Flight Center (GSFC)'s Space Environment Simulator (SES) test chamber. In this paper, we describe the results of optical wavefront-error performance characterization of the SIs. The wavefront error is determined using image-based wavefront sensing, and the primary data used by this process are focus sweeps, a series of images recorded by the instrument under test in its as-used configuration, in which the focal plane is systematically changed from one image to the next. High-precision determination of the wavefront error also requires several sources of secondary data, including 1) spectrum, apodization, and wavefront-error characterization of the optical ground-support equipment (OGSE) illumination module, called the OTE Simulator (OSIM), 2) F-number and pupil-distortion measurements made using a pseudo-nonredundant mask (PNRM), and 3) pupil geometry predictions as a function of SI and field point, which are complicated because of a tricontagon-shaped outer perimeter and small holes that appear in the exit pupil due to the way that different light sources are injected into the optical path by the OGSE. One set of wavefront-error tests, for the coronagraphic channel of the Near-Infrared Camera (NIRCam) Longwave instruments, was performed using data from transverse translation diversity sweeps instead of focus sweeps, in which a sub-aperture is translated and/or rotated across the exit pupil of the system. Several optical-performance requirements that were verified during this ISIM-level testing are levied on the uncertainties of various wavefront-error-related quantities rather than on the wavefront errors themselves. This paper also describes the methodology, based on Monte Carlo simulations of the wavefront-sensing analysis of focus-sweep data, used to establish the uncertainties of the wavefront-error maps.
Mapping GRACE Accelerometer Error
NASA Astrophysics Data System (ADS)
Sakumura, C.; Harvey, N.; McCullough, C. M.; Bandikova, T.; Kruizinga, G. L. H.
2017-12-01
After more than fifteen years in orbit, instrument noise, and accelerometer noise in particular, remains one of the limiting error sources for the NASA/DLR Gravity Recovery and Climate Experiment mission. The recent V03 Level-1 reprocessing campaign used a Kalman filter approach to produce a high fidelity, smooth attitude solution fusing star camera and angular acceleration data. This process provided an unprecedented method for analysis and error estimation of each instrument. The accelerometer exhibited signal aliasing, differential scale factors between electrode plates, and magnetic effects. By applying the noise model developed for the angular acceleration data to the linear measurements, we explore the magnitude and geophysical pattern of gravity field error due to the electrostatic accelerometer.
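For context, a toy one-axis version of the sensor-fusion idea mentioned above: propagate attitude with a noisy angular-acceleration measurement and correct it with lower-rate star-camera attitude measurements in a Kalman filter. The two-state model, noise levels, and rates are illustrative assumptions, not the V03 Level-1 processing.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 0.1, 2000
sig_acc  = 1e-4        # angular-acceleration measurement noise [rad/s^2] (assumed)
sig_star = 5e-5        # star-camera attitude noise [rad] (assumed)
star_every = 10        # star camera updates less often than the accelerometer (assumed)

# Truth: a slowly oscillating rotation about one axis.
t = np.arange(n_steps) * dt
true_theta = 1e-3 * np.sin(2 * np.pi * t / 60.0)
true_omega = np.gradient(true_theta, dt)
true_alpha = np.gradient(true_omega, dt)

# Kalman filter with state x = [angle, rate], measured angular acceleration as control input.
F = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([0.5 * dt**2, dt])
H = np.array([[1.0, 0.0]])
Q = np.outer(B, B) * sig_acc**2          # process noise from accelerometer noise
R = np.array([[sig_star**2]])

x = np.zeros(2)
P = np.eye(2) * 1e-6
err = []
for k in range(n_steps):
    a_meas = true_alpha[k] + rng.normal(0, sig_acc)
    x = F @ x + B * a_meas               # propagate with the measured angular acceleration
    P = F @ P @ F.T + Q
    if k % star_every == 0:              # star-camera update
        z = true_theta[k] + rng.normal(0, sig_star)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    err.append(x[0] - true_theta[k])

print(f"fused attitude error rms: {np.std(err):.2e} rad")
```

The filter residuals (here, the error history) are the kind of by-product that makes per-instrument noise characterization possible in such a fusion scheme.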
Error Ratio Analysis: Alternate Mathematics Assessment for General and Special Educators.
ERIC Educational Resources Information Center
Miller, James H.; Carr, Sonya C.
1997-01-01
Eighty-seven elementary students in grades four, five, and six, were administered a 30-item multiplication instrument to assess performance in computation across grade levels. An interpretation of student performance using error ratio analysis is provided and the use of this method with groups of students for instructional decision making is…
Skylab water balance error analysis
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1977-01-01
Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
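A minimal illustration of the propagation-of-error bookkeeping described above: the variance of the net balance is the sum of the component variances plus the covariance cross terms between balance terms. The component sigmas, signs, and the single assumed correlation below are invented for illustration, not Skylab values.

```python
import numpy as np

# Balance terms and illustrative 1-sigma errors (g/day) for each term.
labels = ["body-mass change", "intake", "urine", "evaporative loss"]
sigma = np.array([250.0, 60.0, 40.0, 80.0])

corr = np.eye(4)
corr[1, 2] = corr[2, 1] = 0.2          # assumed small correlation between two terms

cov = np.outer(sigma, sigma) * corr    # full covariance matrix of the terms
signs = np.array([1.0, 1.0, -1.0, -1.0])   # how each term enters the balance

var_balance = signs @ cov @ signs      # includes the covariance (interaction) cross terms
print(f"balance error: {np.sqrt(var_balance):.0f} g/day; "
      f"body-mass term alone: {sigma[0]:.0f} g/day")
```

With numbers of this shape, the body-mass term dominates the total, which is the qualitative conclusion the abstract reports.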
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-14
... stocks) that comprise a particular ETF, HOLDR or index. Any such proposal would be the subject of a... security--or with respect to ETF(s), HOLDRS(s), and index options the related instrument(s) that would be... applying the obvious error analysis. The ``related instrument(s)'' may include related ETF(s), HOLDRS(s...
Development of TPS flight test and operational instrumentation
NASA Technical Reports Server (NTRS)
Carnahan, K. R.; Hartman, G. J.; Neuner, G. J.
1975-01-01
Thermal and flow sensor instrumentation was developed for use as an integral part of the space shuttle orbiter reusable thermal protection system. The effort was performed in three tasks: a study to determine the optimum instruments and instrument installations for the space shuttle orbiter RSI and RCC TPS; tests and/or analysis to determine the instrument installations that minimize measurement errors; and analysis using data from the test program for comparison to analytical methods. A detailed review of existing state-of-the-art instrumentation in industry was performed to establish the baseline from which the research effort departed. From this information, detailed criteria for thermal protection system instrumentation were developed.
Translating Radiometric Requirements for Satellite Sensors to Match International Standards.
Pearlman, Aaron; Datla, Raju; Kacker, Raghu; Cao, Changyong
2014-01-01
International scientific standards organizations created standards on evaluating uncertainty in the early 1990s. Although scientists from many fields use these standards, they are not consistently implemented in the remote sensing community, where traditional error analysis framework persists. For a satellite instrument under development, this can create confusion in showing whether requirements are met. We aim to create a methodology for translating requirements from the error analysis framework to the modern uncertainty approach using the product level requirements of the Advanced Baseline Imager (ABI) that will fly on the Geostationary Operational Environmental Satellite R-Series (GOES-R). In this paper we prescribe a method to combine several measurement performance requirements, written using a traditional error analysis framework, into a single specification using the propagation of uncertainties formula. By using this approach, scientists can communicate requirements in a consistent uncertainty framework leading to uniform interpretation throughout the development and operation of any satellite instrument.
Translating Radiometric Requirements for Satellite Sensors to Match International Standards
Pearlman, Aaron; Datla, Raju; Kacker, Raghu; Cao, Changyong
2014-01-01
International scientific standards organizations created standards on evaluating uncertainty in the early 1990s. Although scientists from many fields use these standards, they are not consistently implemented in the remote sensing community, where traditional error analysis framework persists. For a satellite instrument under development, this can create confusion in showing whether requirements are met. We aim to create a methodology for translating requirements from the error analysis framework to the modern uncertainty approach using the product level requirements of the Advanced Baseline Imager (ABI) that will fly on the Geostationary Operational Environmental Satellite R-Series (GOES-R). In this paper we prescribe a method to combine several measurement performance requirements, written using a traditional error analysis framework, into a single specification using the propagation of uncertainties formula. By using this approach, scientists can communicate requirements in a consistent uncertainty framework leading to uniform interpretation throughout the development and operation of any satellite instrument. PMID:26601032
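A minimal sketch of combining several separately stated performance requirements into a single specification via the propagation-of-uncertainties (root-sum-square) formula, assuming independent terms. The requirement names and values are placeholders, not actual ABI/GOES-R numbers.

```python
import math

# Hypothetical per-source radiometric requirements, each stated as a 1-sigma value (% of signal).
requirements = {
    "detector noise":     0.20,
    "calibration target": 0.15,
    "stray light":        0.10,
    "linearity residual": 0.05,
}

# Propagation of uncertainties for independent terms: combined u = sqrt(sum of squares).
combined = math.sqrt(sum(u**2 for u in requirements.values()))
print(f"single combined 1-sigma specification: {combined:.2f} %")

# Expanded uncertainty at k=2 (approx. 95% coverage), as a GUM-style statement would read.
print(f"expanded uncertainty (k=2): {2 * combined:.2f} %")
```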
NASA Astrophysics Data System (ADS)
Ying, Jia-ju; Yin, Jian-ling; Wu, Dong-sheng; Liu, Jie; Chen, Yu-dan
2017-11-01
Low-light-level night vision devices and thermal infrared imaging binocular photoelectric instruments are widely used. Maladjustment of the parallelism of a binocular instrument's ocular axes will cause the observer symptoms such as dizziness and nausea when the instrument is used for a long time. A digital calibration instrument for binocular photoelectric equipment has been developed for detecting ocular axis parallelism, so that the optical axis deviation can be quantitatively measured. As a testing instrument, its precision must be much higher than that of the instrument under test. The factors that influence detection accuracy are analyzed. Factors exist in each link of the testing process that affect the precision of the detecting instrument. They can be divided into two categories: factors that directly affect the position of the reticle image, and factors that affect the calculation of the center of the reticle image. The synthesized error is then calculated, and the errors are further distributed reasonably to ensure the accuracy of the calibration instrument.
Mariner Jupiter/Saturn infrared instrument study
NASA Technical Reports Server (NTRS)
1972-01-01
The Mariner Jupiter/Saturn infrared instrumentation conceptual design study was conducted to determine the physical and operational characteristics of the instruments needed to satisfy the experiment science requirements. The design of the instruments is based on using as many proven concepts as possible. Many design features are taken from current developments such as the Mariner, Pioneer 10, Viking Orbiter radiometers, and Nimbus D spectrometer. Calibration techniques and error analysis for the instrument system are discussed.
Poster Presentation: Optical Test of NGST Developmental Mirrors
NASA Technical Reports Server (NTRS)
Hadaway, James B.; Geary, Joseph; Reardon, Patrick; Peters, Bruce; Keidel, John; Chavers, Greg
2000-01-01
An Optical Testing System (OTS) has been developed to measure the figure and radius of curvature of NGST developmental mirrors in the vacuum, cryogenic environment of the X-Ray Calibration Facility (XRCF) at Marshall Space Flight Center (MSFC). The OTS consists of a WaveScope Shack-Hartmann sensor from Adaptive Optics Associates as the main instrument, a Point Diffraction Interferometer (PDI), a Point Spread Function (PSF) imager, an alignment system, a Leica Disto Pro distance measurement instrument, and a laser source palette (632.8 nm wavelength) that is fiber-coupled to the sensor instruments. All of the instruments except the laser source palette are located on a single breadboard known as the Wavefront Sensor Pallet (WSP). The WSP is located on top of a 5-DOF motion system located at the center of curvature of the test mirror. Two PC's are used to control the OTS. The error in the figure measurement is dominated by the WaveScope's measurement error. An analysis using the absolute wavefront gradient error of 1/50 wave P-V (at 0.6328 microns) provided by the manufacturer leads to a total surface figure measurement error of approximately 1/100 wave rms. This easily meets the requirement of 1/10 wave P-V. The error in radius of curvature is dominated by the Leica's absolute measurement error of ±1.5 mm and the focus setting error of ±1.4 mm, giving an overall error of ±2 mm. The OTS is currently being used to test the NGST Mirror System Demonstrators (NMSD's) and the Subscale Beryllium Mirror Demonstrator (SBMD).
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Menard, Richard; Ortland, David; Einaudi, Franco (Technical Monitor)
2001-01-01
A new approach to the analysis of systematic and random observation errors is presented in which the error statistics are obtained using forecast data rather than observations from a different instrument type. The analysis is carried out at an intermediate retrieval level, instead of the more typical state variable space. This method is applied to measurements made by the High Resolution Doppler Imager (HRDI) on board the Upper Atmosphere Research Satellite (UARS). HRDI, a limb sounder, is the only satellite instrument measuring winds in the stratosphere, and the only instrument of any kind making global wind measurements in the upper atmosphere. HRDI measures Doppler shifts in two different O2 absorption bands (gamma and B), and the retrieved products are the tangent-point line-of-sight wind component (level 2 retrieval) and UV winds (level 3 retrieval). This analysis is carried out on a level 1.9 retrieval, in which the contributions from different points along the line of sight have not been removed. Biases are calculated from O-F (observed minus forecast) LOS wind components and are separated into a measurement parameter space consisting of 16 different values. The bias dependence on these parameters (plus an altitude dependence) is used to create a bias correction scheme carried out on the level 1.9 retrieval. The random error component is analyzed by separating the gamma and B band observations and locating observation pairs where both bands are very nearly looking at the same location at the same time. It is shown that the two observation streams are uncorrelated and that this allows the forecast error variance to be estimated. The bias correction is found to cut the effective observation error variance in half.
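A toy numerical illustration of the idea that if two collocated observation streams have mutually uncorrelated errors, the covariance of their (observation minus forecast) residuals isolates the forecast error variance, after which each stream's own error variance follows. The band labels match the abstract, but all noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
truth = rng.normal(0.0, 10.0, n)           # true LOS wind component (arbitrary units)

sig_g, sig_b, sig_f = 4.0, 5.0, 3.0        # assumed gamma-band, B-band, and forecast error sigmas
obs_g = truth + rng.normal(0, sig_g, n)
obs_b = truth + rng.normal(0, sig_b, n)
fcst  = truth + rng.normal(0, sig_f, n)

# If the two observation error streams are uncorrelated with each other (and with the
# forecast error), cov(O_gamma - F, O_B - F) converges to the forecast error variance.
est_var_f = np.cov(obs_g - fcst, obs_b - fcst)[0, 1]
print(f"estimated forecast error variance: {est_var_f:.2f} (true {sig_f**2:.2f})")

# Each observation error variance then follows from var(O - F) = sigma_obs^2 + sigma_f^2.
print(f"estimated gamma-band obs variance: {np.var(obs_g - fcst) - est_var_f:.2f} "
      f"(true {sig_g**2:.2f})")
```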
Rödig, T; Hausdörfer, T; Konietschke, F; Dullin, C; Hahn, W; Hülsmann, M
2012-06-01
To compare the efficacy of two rotary NiTi retreatment systems and Hedström files in removing filling material from curved root canals. Curved root canals of 57 extracted teeth were prepared using FlexMaster instruments and filled with gutta-percha and AH Plus. After determination of root canal curvatures and radii in two directions, the teeth were assigned to three identical groups (n = 19). The root fillings were removed with D-RaCe instruments, ProTaper Universal Retreatment instruments or Hedström files. Pre- and postoperative micro-CT imaging was used to assess the percentage of residual filling material as well as the amount of dentine removal. Working time and procedural errors were recorded. Data were analysed using analysis of covariance and analysis of variance procedures. D-RaCe instruments were significantly more effective than ProTaper Universal Retreatment instruments and Hedström files (P < 0.05). Hedström files removed significantly less dentine than the rotary NiTi systems (P < 0.0001). D-RaCe instruments were significantly faster compared to both other groups (P < 0.05). No procedural errors such as instrument fracture, blockage, ledging or perforation were detected in the Hedström group. In the ProTaper group, four instrument fractures and one lateral perforation were observed. Five instrument fractures were recorded for D-RaCe. D-RaCe instruments were associated with significantly less residual filling material than ProTaper Universal Retreatment instruments and hand files. Hedström files removed significantly less dentine than both rotary NiTi systems. Retreatment with rotary NiTi systems resulted in a high incidence of procedural errors. © 2012 International Endodontic Journal.
The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control
ERIC Educational Resources Information Center
Page, A.; Moreno, R.; Candelas, P.; Belmar, F.
2008-01-01
In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.…
Palmer, C.A.
1990-01-01
Twenty-nine elements have been determined in triplicate splits of the eight Argonne National Laboratory Premium Coal Samples by instrumental neutron activation analysis. Data for control samples NBS 1633 (fly ash) and NBS 1632b are also reported. The factors that could lead to errors in analysis for these samples, such as spectral overlaps, low sensitivity, and interfering nuclear reactions, are discussed.
A water-vapor radiometer error model. [for ionosphere in geodetic microwave techniques
NASA Technical Reports Server (NTRS)
Beckman, B.
1985-01-01
The water-vapor radiometer (WVR) is used to calibrate unpredictable delays in the wet component of the troposphere in geodetic microwave techniques such as very-long-baseline interferometry (VLBI) and Global Positioning System (GPS) tracking. Based on experience with Jet Propulsion Laboratory (JPL) instruments, the current level of accuracy in wet-troposphere calibration limits the accuracy of local vertical measurements to 5-10 cm. The goal for the near future is 1-3 cm. Although the WVR is currently the best calibration method, many instruments are prone to systematic error. In this paper, a treatment of WVR data is proposed and evaluated. This treatment reduces the effect of WVR systematic errors by estimating parameters that specify an assumed functional form for the error. The assumed form of the treatment is evaluated by comparing the results of two similar WVR's operating near each other. Finally, the observability of the error parameters is estimated by covariance analysis.
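A minimal sketch of the flavor of such a treatment: assume a simple parametric form for the WVR systematic error (here an offset plus an elevation-dependent term, chosen purely for illustration, not the paper's assumed form) and estimate its parameters by least squares from comparisons against a reference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated WVR-minus-reference wet-delay residuals (cm) at various elevation angles.
elev_deg = rng.uniform(15, 85, 200)
airmass = 1.0 / np.sin(np.radians(elev_deg))

true_offset, true_scale = 0.8, 0.4            # invented systematic-error parameters
resid = true_offset + true_scale * (airmass - 1.0) + rng.normal(0, 0.3, elev_deg.size)

# Assumed functional form: error = c0 + c1 * (airmass - 1). Estimate c0, c1 by least squares.
A = np.column_stack([np.ones_like(airmass), airmass - 1.0])
(c0, c1), *_ = np.linalg.lstsq(A, resid, rcond=None)
corrected = resid - (c0 + c1 * (airmass - 1.0))

print(f"estimated parameters: offset={c0:.2f} cm, scale={c1:.2f} cm/airmass")
print(f"residual rms before/after: {resid.std():.2f} / {corrected.std():.2f} cm")
```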
Development and validity of a method for the evaluation of printed education material
Castro, Mauro Silveira; Pilger, Diogo; Fuchs, Flávio Danni; Ferreira, Maria Beatriz Cardoso
Objectives To develop and study the validity of an instrument for the evaluation of Printed Education Materials (PEM); to evaluate the use of acceptability indices; to identify possible influences of professional aspects. Methods An instrument for PEM evaluation was developed which included three steps: domain identification, item generation and instrument design. An easy-to-read PEM was developed for the education of patients with systemic hypertension and its treatment with hydrochlorothiazide. Construct validity was measured based on previously established errors purposively introduced into the PEM, which served as extreme groups. An acceptability index was applied taking into account the rate of professionals who should approve each item. Participants were 10 physicians (9 men) and 5 nurses (all women). Results Many professionals identified the crude intentional errors. Few participants identified errors that needed more careful evaluation, and no one detected the intentional error that required analysis of the literature. Physicians considered 95.8% of the PEM items acceptable, and nurses 29.2%. The differences between the scores were statistically significant for 27% of the items. In the overall evaluation, 66.6% of items were considered acceptable. The analysis of each item revealed a behavioral pattern for each professional group. Conclusions The use of instruments for the evaluation of printed education materials is required and may improve the quality of the PEM available to patients. Acceptability indices are not always fully accurate, nor do they necessarily reflect high-quality information. The professional experience, the practice pattern, and perhaps the gender of the reviewers may influence their evaluation. An analysis of the PEM by professionals in communication and drug information, and by patients, should be carried out to improve the quality of the proposed material. PMID:25214924
NASA Technical Reports Server (NTRS)
Mallinckrodt, A. J.
1977-01-01
Data from an extensive array of collocated instrumentation at the Wallops Island test facility were intercompared in order to (1) determine the practical achievable accuracy limitations of various tropospheric and ionospheric correction techniques; (2) examine the theoretical bases and derivation of improved refraction correction techniques; and (3) estimate internal systematic and random error levels of the various tracking stations. The GEOS 2 satellite was used as the target vehicle. Data were obtained regarding the ionospheric and tropospheric propagation errors, the theoretical and data analysis of which was documented in some 30 separate reports over the last 6 years. An overview of project results is presented.
Unit of Measurement Used and Parent Medication Dosing Errors
Dreyer, Benard P.; Ugboaja, Donna C.; Sanchez, Dayana C.; Paul, Ian M.; Moreira, Hannah A.; Rodriguez, Luis; Mendelsohn, Alan L.
2014-01-01
BACKGROUND AND OBJECTIVES: Adopting the milliliter as the preferred unit of measurement has been suggested as a strategy to improve the clarity of medication instructions; teaspoon and tablespoon units may inadvertently endorse nonstandard kitchen spoon use. We examined the association between unit used and parent medication errors and whether nonstandard instruments mediate this relationship. METHODS: Cross-sectional analysis of baseline data from a larger study of provider communication and medication errors. English- or Spanish-speaking parents (n = 287) whose children were prescribed liquid medications in 2 emergency departments were enrolled. Medication error defined as: error in knowledge of prescribed dose, error in observed dose measurement (compared to intended or prescribed dose); >20% deviation threshold for error. Multiple logistic regression performed adjusting for parent age, language, country, race/ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease; site. RESULTS: Medication errors were common: 39.4% of parents made an error in measurement of the intended dose, 41.1% made an error in the prescribed dose. Furthermore, 16.7% used a nonstandard instrument. Compared with parents who used milliliter-only, parents who used teaspoon or tablespoon units had twice the odds of making an error with the intended (42.5% vs 27.6%, P = .02; adjusted odds ratio=2.3; 95% confidence interval, 1.2–4.4) and prescribed (45.1% vs 31.4%, P = .04; adjusted odds ratio=1.9; 95% confidence interval, 1.03–3.5) dose; associations greater for parents with low health literacy and non–English speakers. Nonstandard instrument use partially mediated teaspoon and tablespoon–associated measurement errors. CONCLUSIONS: Findings support a milliliter-only standard to reduce medication errors. PMID:25022742
Unit of measurement used and parent medication dosing errors.
Yin, H Shonna; Dreyer, Benard P; Ugboaja, Donna C; Sanchez, Dayana C; Paul, Ian M; Moreira, Hannah A; Rodriguez, Luis; Mendelsohn, Alan L
2014-08-01
Adopting the milliliter as the preferred unit of measurement has been suggested as a strategy to improve the clarity of medication instructions; teaspoon and tablespoon units may inadvertently endorse nonstandard kitchen spoon use. We examined the association between unit used and parent medication errors and whether nonstandard instruments mediate this relationship. Cross-sectional analysis of baseline data from a larger study of provider communication and medication errors. English- or Spanish-speaking parents (n = 287) whose children were prescribed liquid medications in 2 emergency departments were enrolled. Medication error defined as: error in knowledge of prescribed dose, error in observed dose measurement (compared to intended or prescribed dose); >20% deviation threshold for error. Multiple logistic regression performed adjusting for parent age, language, country, race/ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease; site. Medication errors were common: 39.4% of parents made an error in measurement of the intended dose, 41.1% made an error in the prescribed dose. Furthermore, 16.7% used a nonstandard instrument. Compared with parents who used milliliter-only, parents who used teaspoon or tablespoon units had twice the odds of making an error with the intended (42.5% vs 27.6%, P = .02; adjusted odds ratio=2.3; 95% confidence interval, 1.2-4.4) and prescribed (45.1% vs 31.4%, P = .04; adjusted odds ratio=1.9; 95% confidence interval, 1.03-3.5) dose; associations greater for parents with low health literacy and non-English speakers. Nonstandard instrument use partially mediated teaspoon and tablespoon-associated measurement errors. Findings support a milliliter-only standard to reduce medication errors. Copyright © 2014 by the American Academy of Pediatrics.
Laboratory calibration of AAFE radiometer/scatterometer (RADSCAT)
NASA Technical Reports Server (NTRS)
Schroeder, L. C.; Jones, W. L., Jr.; Mitchell, J. L.
1976-01-01
A brief description of the electrical and mechanical instrument configuration is given, followed by an extensive discussion of laboratory tests and results. This information is required to provide parameters for data reduction and a basis for analysis of the measurement errors in data taken with this instrument.
1951-05-01
procedures to be of high accuracy. Ambiguity of subject responses due to overlap of entries on the record sheets was negligible. Handwriting ...experimental variables on reading errors was carried out by analysis of variance methods. For this purpose it was convenient to consider different classes...on any scale - an error of one numbered division. For this reason, the results of the analysis of variance of the /10's errors by dial types may
A Comprehensive Radial Velocity Error Budget for Next Generation Doppler Spectrometers
NASA Technical Reports Server (NTRS)
Halverson, Samuel; Terrien, Ryan; Mahadevan, Suvrath; Roy, Arpita; Bender, Chad; Stefansson, Gudmundur Kari; Monson, Andrew; Levi, Eric; Hearty, Fred; Blake, Cullen;
2016-01-01
We describe a detailed radial velocity error budget for the NASA-NSF Extreme Precision Doppler Spectrometer instrument concept NEID (NN-explore Exoplanet Investigations with Doppler spectroscopy). Such an instrument performance budget is a necessity for both identifying the variety of noise sources currently limiting Doppler measurements, and estimating the achievable performance of next generation exoplanet hunting Doppler spectrometers. For these instruments, no single source of instrumental error is expected to set the overall measurement floor. Rather, the overall instrumental measurement precision is set by the contribution of many individual error sources. We use a combination of numerical simulations, educated estimates based on published materials, extrapolations of physical models, results from laboratory measurements of spectroscopic subsystems, and informed upper limits for a variety of error sources to identify likely sources of systematic error and construct our global instrument performance error budget. While natively focused on the performance of the NEID instrument, this modular performance budget is immediately adaptable to a number of current and future instruments. Such an approach is an important step in charting a path towards improving Doppler measurement precisions to the levels necessary for discovering Earth-like planets.
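The abstract describes a budget in which no single error source dominates and the overall precision is set by the combination of many individual terms. As a rough illustration of that bookkeeping, the following sketch combines independent 1-sigma error terms in quadrature; the term names and values are invented for illustration and are not the actual NEID budget.

```python
import math

# Hypothetical error-budget entries (illustrative values in cm/s, not the NEID budget):
# each term is treated as an independent 1-sigma radial velocity error.
error_terms_cm_s = {
    "photon noise": 25.0,
    "wavelength calibration": 10.0,
    "fiber illumination / modal noise": 8.0,
    "detector effects": 6.0,
    "telluric contamination": 10.0,
}

# Independent terms combine in quadrature (root-sum-square).
total = math.sqrt(sum(v**2 for v in error_terms_cm_s.values()))
for name, v in sorted(error_terms_cm_s.items(), key=lambda kv: -kv[1]):
    print(f"{name:35s} {v:6.1f} cm/s  ({100 * v**2 / total**2:4.1f}% of variance)")
print(f"{'total (RSS)':35s} {total:6.1f} cm/s")
```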
Cirrus Cloud Retrieval Using Infrared Sounding Data: Multilevel Cloud Errors.
NASA Astrophysics Data System (ADS)
Baum, Bryan A.; Wielicki, Bruce A.
1994-01-01
In this study we perform an error analysis for cloud-top pressure retrieval using the High-Resolution Infrared Radiometric Sounder (HIRS/2) 15-µm CO2 channels for the two-layer case of transmissive cirrus overlying an overcast, opaque stratiform cloud. This analysis includes standard deviation and bias error due to instrument noise and the presence of two cloud layers, the lower of which is opaque. Instantaneous cloud pressure retrieval errors are determined for a range of cloud amounts (0.1-1.0) and cloud-top pressures (850-250 mb). Large cloud-top pressure retrieval errors are found to occur when a lower opaque layer is present underneath an upper transmissive cloud layer in the satellite field of view (FOV). Errors tend to increase with decreasing upper-cloud effective cloud amount and with decreasing cloud height (increasing pressure). Errors in retrieved upper-cloud pressure result in corresponding errors in derived effective cloud amount. For the case in which a HIRS FOV has two distinct cloud layers, the difference between the retrieved and actual cloud-top pressure is positive in all cases, meaning that the retrieved upper-cloud height is lower than the actual upper-cloud height. In addition, errors in retrieved cloud pressure are found to depend upon the lapse rate between the low-level cloud top and the surface. We examined which sounder channel combinations would minimize the total errors in derived cirrus cloud height caused by instrument noise and by the presence of a lower-level cloud. We find that while the sounding channels that peak between 700 and 1000 mb minimize random errors, the sounding channels that peak at 300-500 mb minimize bias errors. For a cloud climatology, the bias errors are most critical.
Skydan, Oleksandr A; Lilley, Francis; Lalor, Michael J; Burton, David R
2003-09-10
We present an investigation into the phase errors that occur in fringe pattern analysis that are caused by quantization effects. When acquisition devices with a limited value of camera bit depth are used, there are a limited number of quantization levels available to record the signal. This may adversely affect the recorded signal and adds a potential source of instrumental error to the measurement system. Quantization effects also determine the accuracy that may be achieved by acquisition devices in a measurement system. We used the Fourier fringe analysis measurement technique. However, the principles can be applied equally well for other phase measuring techniques to yield a phase error distribution that is caused by the camera bit depth.
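To make the quantization mechanism concrete, the following sketch simulates a one-dimensional fringe pattern, quantizes it to a given camera bit depth, recovers the phase by Fourier fringe analysis, and reports the resulting RMS phase error. It is a minimal illustration of the effect discussed above, not the authors' analysis; the fringe parameters and bit depths are assumed.

```python
import numpy as np

def quantize(signal, bits):
    """Map a signal in [0, 1] onto 2**bits discrete levels, as a camera ADC would."""
    levels = 2**bits - 1
    return np.round(signal * levels) / levels

# Synthetic 1-D fringe pattern: a carrier at frequency f0 modulated by a smooth test phase.
n = 4096
x = np.arange(n)
f0 = 64.0 / n                                   # 64 fringes across the field
true_phase = 0.5 * np.sin(2 * np.pi * x / n)    # slowly varying phase to recover
fringe = 0.5 + 0.5 * np.cos(2 * np.pi * f0 * x + true_phase)

for bits in (6, 8, 10, 12):
    g = quantize(fringe, bits)
    # Fourier fringe analysis: isolate the +f0 sideband and take its argument.
    spectrum = np.fft.fft(g)
    freqs = np.fft.fftfreq(n)
    sideband = np.where((freqs > f0 / 2) & (freqs < 3 * f0 / 2), spectrum, 0.0)
    analytic = np.fft.ifft(sideband)
    phase = np.unwrap(np.angle(analytic)) - 2 * np.pi * f0 * x
    err = (phase - phase.mean()) - (true_phase - true_phase.mean())
    print(f"{bits:2d}-bit camera: RMS phase error = {err.std():.4f} rad")
```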
Analysis of Students' Error in Learning of Quadratic Equations
ERIC Educational Resources Information Center
Zakaria, Effandi; Ibrahim; Maat, Siti Mistima
2010-01-01
The purpose of the study was to determine the students' error in learning quadratic equation. The samples were 30 form three students from a secondary school in Jambi, Indonesia. Diagnostic test was used as the instrument of this study that included three components: factorization, completing the square and quadratic formula. Diagnostic interview…
Dealing with systematic laser scanner errors due to misalignment at area-based deformation analyses
NASA Astrophysics Data System (ADS)
Holst, Christoph; Medić, Tomislav; Kuhlmann, Heiner
2018-04-01
The ability to acquire rapid, dense and high quality 3D data has made terrestrial laser scanners (TLS) a desirable instrument for tasks demanding a high geometrical accuracy, such as geodetic deformation analyses. However, TLS measurements are influenced by systematic errors due to internal misalignments of the instrument. The resulting errors in the point cloud might exceed the magnitude of random errors. Hence, it is important to assure that the deformation analysis is not biased by these influences. In this study, we propose and evaluate several strategies for reducing the effect of TLS misalignments on deformation analyses. The strategies are based on the bundled in-situ self-calibration and on the exploitation of two-face measurements. The strategies are verified by analyzing the deformation of the Onsala Space Observatory's radio telescope's main reflector. It is demonstrated that both two-face measurements and in-situ calibration of the laser scanner in a bundle adjustment improve the results of deformation analysis. The best solution is gained by a combination of both strategies.
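A minimal numerical sketch of the two-face principle exploited here: axis errors such as the collimation error change sign when the telescope is turned through both faces, so the mean of the two readings cancels them. The angle values are illustrative only, and the simplified model ignores the elevation dependence of the collimation error.

```python
import numpy as np

# Illustrative values, not from the paper: a 15 arcsec sighting-axis (collimation) error.
collimation_error = np.deg2rad(15.0 / 3600.0)
true_direction = np.deg2rad(123.4567)           # true horizontal direction

face1 = true_direction + collimation_error
face2 = (true_direction + np.pi) - collimation_error   # telescope flipped by 180 degrees

# Averaging the two faces cancels errors that reverse sign between faces.
two_face_mean = (face1 + (face2 - np.pi)) / 2.0

to_arcsec = lambda r: np.degrees(r) * 3600.0
print(f"face-1 error:        {to_arcsec(face1 - true_direction):+.1f} arcsec")
print(f"two-face mean error: {to_arcsec(two_face_mean - true_direction):+.1f} arcsec")
```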
Geometric error analysis for shuttle imaging spectrometer experiment
NASA Technical Reports Server (NTRS)
Wang, S. J.; Ih, C. H.
1984-01-01
The demand for more powerful tools for remote sensing and management of Earth resources has steadily increased over the last decade. With the recent advancement of area array detectors, high resolution multichannel imaging spectrometers can be realistically constructed. The error analysis study for the Shuttle Imaging Spectrometer Experiment system is documented for the purpose of providing information for design, tradeoff, and performance prediction. Error sources including the Shuttle attitude determination and control system, instrument pointing and misalignment, disturbances, ephemeris, Earth rotation, etc., were investigated. Geometric error mapping functions were developed, characterized, and illustrated extensively with tables and charts. Selected ground patterns and the corresponding image distortions were generated for direct visual inspection of how the various error sources affect the appearance of the ground object images.
Intertester agreement in refractive error measurements.
Huang, Jiayan; Maguire, Maureen G; Ciner, Elise; Kulp, Marjean T; Quinn, Graham E; Orel-Bixler, Deborah; Cyert, Lynn A; Moore, Bruce; Ying, Gui-Shuang
2013-10-01
To determine the intertester agreement of refractive error measurements between lay and nurse screeners using the Retinomax Autorefractor and the SureSight Vision Screener. Trained lay and nurse screeners measured refractive error in 1452 preschoolers (3 to 5 years old) using the Retinomax and the SureSight in a random order for screeners and instruments. Intertester agreement between lay and nurse screeners was assessed for sphere, cylinder, and spherical equivalent (SE) using the mean difference and the 95% limits of agreement. The mean intertester difference (lay minus nurse) was compared between groups defined based on the child's age, cycloplegic refractive error, and the reading's confidence number using analysis of variance. The limits of agreement were compared between groups using the Brown-Forsythe test. Intereye correlation was accounted for in all analyses. The mean intertester differences (95% limits of agreement) were -0.04 (-1.63, 1.54) diopter (D) sphere, 0.00 (-0.52, 0.51) D cylinder, and -0.04 (-1.65, 1.56) D SE for the Retinomax and 0.05 (-1.48, 1.58) D sphere, 0.01 (-0.58, 0.60) D cylinder, and 0.06 (-1.45, 1.57) D SE for the SureSight. For either instrument, the mean intertester differences in sphere and SE did not differ by the child's age, cycloplegic refractive error, or the reading's confidence number. However, for both instruments, the limits of agreement were wider when eyes had significant refractive error or the reading's confidence number was below the manufacturer's recommended value. Among Head Start preschool children, trained lay and nurse screeners agree well in measuring refractive error using the Retinomax or the SureSight. Both instruments had similar intertester agreement in refractive error measurements independent of the child's age. Significant refractive error and a reading with low confidence number were associated with worse intertester agreement.
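For readers unfamiliar with the agreement statistics used above, the sketch below computes the mean intertester difference and 95% limits of agreement (Bland-Altman style) for two sets of readings. The numbers are invented, and this simple version ignores the intereye correlation that the study accounted for.

```python
import numpy as np

def limits_of_agreement(tester_a, tester_b):
    """Mean difference and 95% limits of agreement (Bland-Altman) between two testers."""
    d = np.asarray(tester_a, float) - np.asarray(tester_b, float)
    mean_diff = d.mean()
    sd = d.std(ddof=1)
    return mean_diff, (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd)

# Illustrative spherical-equivalent readings (diopters) from two screeners; made-up data.
lay   = np.array([1.25, -0.50, 0.75, 2.00, -1.25, 0.25])
nurse = np.array([1.00, -0.75, 1.00, 2.25, -1.00, 0.50])

bias, (lo, hi) = limits_of_agreement(lay, nurse)
print(f"mean difference {bias:+.2f} D, 95% limits of agreement ({lo:+.2f}, {hi:+.2f}) D")
```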
Uncertainty Analysis of Instrument Calibration and Application
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimation of both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
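A minimal sketch of the propagation step described above: individual measurement uncertainties are carried through a defining functional expression via first-order partial derivatives, assuming uncorrelated errors. The lift-coefficient expression and all numerical values are invented for illustration and are not from the paper.

```python
import sympy as sp

# First-order (Taylor-series) uncertainty propagation through a defining expression:
# CL = L / (0.5 * rho * V**2 * S). All values below are illustrative.
L_, rho, V, S = sp.symbols("L rho V S", positive=True)
CL = L_ / (sp.Rational(1, 2) * rho * V**2 * S)

values = {L_: 1200.0, rho: 1.225, V: 45.0, S: 2.5}   # measured values
sigmas = {L_: 5.0, rho: 0.004, V: 0.15, S: 0.002}    # 1-sigma measurement uncertainties

# Assuming uncorrelated errors: var(CL) = sum over inputs of (dCL/dx_i * sigma_i)**2.
var = sum((sp.diff(CL, xi).subs(values) * sigmas[xi]) ** 2 for xi in values)
print(f"CL = {float(CL.subs(values)):.4f} +/- {float(sp.sqrt(var)):.4f} (1 sigma)")
```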
NASA Astrophysics Data System (ADS)
Fisher, L. E.; Lynch, K. A.; Fernandes, P. A.; Bekkeng, T. A.; Moen, J.; Zettergren, M.; Miceli, R. J.; Powell, S.; Lessard, M. R.; Horak, P.
2016-04-01
The interpretation of planar retarding potential analyzers (RPA) during ionospheric sounding rocket missions requires modeling the thick 3D plasma sheath. This paper overviews the theory of RPAs with an emphasis placed on the impact of the sheath on current-voltage (I-V) curves. It then describes the Petite Ion Probe (PIP) which has been designed to function in this difficult regime. The data analysis procedure for this instrument is discussed in detail. Data analysis begins by modeling the sheath with the Spacecraft Plasma Interaction System (SPIS), a particle-in-cell code. Test particles are traced through the sheath and detector to determine the detector's response. A training set is constructed from these simulated curves for a support vector regression analysis which relates the properties of the I-V curve to the properties of the plasma. The first in situ use of the PIPs occurred during the MICA sounding rocket mission which launched from Poker Flat, Alaska in February of 2012. These data are presented as a case study, providing valuable cross-instrument comparisons. A heritage top-hat thermal ion electrostatic analyzer, called the HT, and a multi-needle Langmuir probe have been used to validate both the PIPs and the data analysis method. Compared to the HT, the PIP ion temperature measurements agree with a root-mean-square error of 0.023 eV. These two instruments agree on the parallel-to-B plasma flow velocity with a root-mean-square error of 130 m/s. The PIP with its field of view aligned perpendicular-to-B provided a density measurement with an 11% error compared to the multi-needle Langmuir Probe. Higher error in the other PIP's density measurement is likely due to simplifications in the SPIS model geometry.
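The regression step described above, mapping simulated I-V curves to plasma parameters, can be illustrated with a toy version: idealized planar-RPA curves are generated over a range of ion temperatures and a support vector regression is trained to invert them. The curve model, noise level, and parameter ranges are invented; the real analysis uses SPIS-traced particles and a far more complete sheath model.

```python
import numpy as np
from scipy.special import erfc
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
volts = np.linspace(0.0, 3.0, 40)                # retarding-potential sweep (V)

def synthetic_iv(ion_temp_ev, beam_energy_ev=1.0):
    """Idealized planar-RPA I-V curve for a drifting Maxwellian (no sheath physics)."""
    spread = np.sqrt(2.0 * ion_temp_ev)
    return 0.5 * erfc((volts - beam_energy_ev) / spread)

# Training set: noisy curves over a range of ion temperatures (eV), standing in for
# the simulation-generated training curves described in the abstract.
temps = rng.uniform(0.05, 0.5, 400)
X = np.array([synthetic_iv(t) + rng.normal(0, 0.01, volts.size) for t in temps])
X_train, X_test, y_train, y_test = train_test_split(X, temps, random_state=0)

model = SVR(kernel="rbf", C=10.0, epsilon=0.005).fit(X_train, y_train)
rmse = np.sqrt(np.mean((model.predict(X_test) - y_test) ** 2))
print(f"hold-out RMS temperature error: {rmse:.3f} eV")
```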
Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alcock, Simon G., E-mail: simon.alcock@diamond.ac.uk; Nistea, Ioana; Sawhney, Kawal
2016-05-15
We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM’s autocollimator adds into the overall measured value of the mirror’s slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.
NASA Astrophysics Data System (ADS)
Ubelmann, C.; Gerald, D.
2016-12-01
The SWOT data validation will be a first challenge after launch, as the nature of the measurement, in particular the two-dimensionality at short spatial scales, is new in altimetry. While comparison with independent observations may be possible locally, validation of the full signal and error spectrum will be challenging. However, some recent analyses in simulations have shown the possibility of separating the geophysical signals from the spatially coherent instrumental errors in spectral space through cross-spectral analysis. These results suggest that rapidly after launch, the instrument error can be spectrally separated, providing some validation and insight into the ocean energy spectrum, as well as optimal calibrations. Beyond CalVal, such spectral computations will also be essential for producing high-level ocean estimates (two- and three-dimensional ocean state reconstructions).
NASA Technical Reports Server (NTRS)
Buglia, James J.
1989-01-01
An analysis was made of the error in the minimum altitude of a geometric ray from an orbiting spacecraft to the Sun. The sunrise and sunset errors are highly correlated and are opposite in sign. With the ephemeris generated for the SAGE 1 instrument data reduction, these errors can be as large as 200 to 350 meters (1 sigma) after 7 days of orbit propagation. The bulk of this error results from errors in the position of the orbiting spacecraft rather than errors in computing the position of the Sun. These errors, in turn, result from the discontinuities in the ephemeris tapes resulting from the orbital determination process. Data taken from the end of the definitive ephemeris tape are used to generate the predict data for the time interval covered by the next arc of the orbit determination process. The predicted data are then updated by using the tracking data. The growth of these errors is very nearly linear, with a slight nonlinearity caused by the beta angle. An approximate analytic method is given, which predicts the magnitude of the errors and their growth in time with reasonable fidelity.
Design principles in telescope development: invariance, innocence, and the costs
NASA Astrophysics Data System (ADS)
Steinbach, Manfred
1997-03-01
Instrument design is, for the most part, a battle against errors and costs. Passive methods of error damping are in many cases effective and inexpensive. This paper shows examples of error minimization in our design of telescopes, instrumentation and evaluation instruments.
Estimating Uncertainties in the Multi-Instrument SBUV Profile Ozone Merged Data Set
NASA Technical Reports Server (NTRS)
Frith, Stacey; Stolarski, Richard
2015-01-01
The MOD data set is uniquely qualified for use in long-term ozone analysis because of its long record, high spatial coverage, and consistent instrument design and algorithm. The estimated MOD uncertainty term significantly increases the uncertainty over the statistical error alone. Trends in the post-2000 period are generally positive in the upper stratosphere, but only significant at 1-1.6 hPa. Remaining uncertainties not yet included in the Monte Carlo model are: smoothing error (approximately 1% from 10 to 1 hPa); relative calibration uncertainty between N11 and N17; and seasonal cycle differences between SBUV records.
Program Instrumentation and Trace Analysis
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Goldberg, Allen; Filman, Robert; Rosu, Grigore; Koga, Dennis (Technical Monitor)
2002-01-01
Several attempts have been made recently to apply techniques such as model checking and theorem proving to the analysis of programs. This can be seen as part of a current trend to analyze real software systems instead of just their designs. This includes our own effort to develop a model checker for Java, the Java PathFinder 1, one of the very first of its kind in 1998. However, model checking cannot handle very large programs without some kind of abstraction of the program. This paper describes a complementary scalable technique to handle such large programs. Our interest is focused on the observation part of the equation: How much information can be extracted about a program from observing a single execution trace? It is our intention to develop a technology that can be applied automatically and to large full-size applications, with minimal modification to the code. We present a tool, Java PathExplorer (JPaX), for exploring execution traces of Java programs. The tool prioritizes scalability over completeness, and is directed towards detecting errors in programs, not proving correctness. One core element in JPaX is an instrumentation package that allows Java byte code files to be instrumented to log various events when executed. The instrumentation is driven by a user-provided script that specifies what information to log. Examples of instructions that such a script can contain are: 'report name and arguments of all called methods defined in class C, together with a timestamp'; 'report all updates to all variables'; and 'report all acquisitions and releases of locks'. In more complex instructions one can specify that certain expressions should be evaluated and even that certain code should be executed under various conditions. The instrumentation package can hence be seen as implementing Aspect Oriented Programming for Java in the sense that one can add functionality to a Java program without explicitly changing the code of the original program; one rather writes an aspect and compiles it into the original program using the instrumentation. Another core element of JPaX is an observation package that supports the analysis of the generated event stream. Two kinds of analysis are currently supported. In temporal analysis the execution trace is evaluated against formulae written in temporal logic. We have implemented a temporal logic evaluator on finite traces using the Maude rewriting system from SRI International, USA. Temporal logic is defined in Maude by giving its syntax as a signature and its semantics as rewrite equations. The resulting semantics is extremely efficient and can handle event streams of hundreds of millions of events in a few minutes. Furthermore, the implementation is very succinct. The second form of event stream analysis supported is error pattern analysis, where an execution trace is analyzed using various error detection algorithms that can identify error-prone programming practices that may potentially lead to errors in some different executions. Two such algorithms focusing on concurrency errors have been implemented in JPaX, one for deadlocks and the other for data races. It is important to note that a deadlock or data race potential does not need to occur in order for its potential to be detected with these algorithms. This is what makes them very scalable in practice. The data race algorithm implemented is the Eraser algorithm from Compaq, adapted to Java.
The tool is currently being applied to a code base for controlling a spacecraft by the developers of that software in order to evaluate its applicability.
The performance of the standard rate turn (SRT) by student naval helicopter pilots.
Chapman, F; Temme, L A; Still, D L
2001-04-01
During flight training, student naval helicopter pilots learn the use of flight instruments through a prescribed series of simulator training events. The training simulator is a 6-degrees-of-freedom, motion-based, high-fidelity instrument trainer. From the final basic instrument simulator flights of student pilots, we selected for evaluation and analysis their performance of the Standard Rate Turn (SRT), a routine flight maneuver. The performance of the SRT was scored using the average errors of airspeed, altitude, and heading from target values and their standard deviations. These average errors and standard deviations were used in a Multiple Analysis of Variance (MANOVA) to evaluate the effects of three independent variables: 1) direction of turn (left vs. right); 2) degree of turn (180 vs. 360 degrees); and 3) segment of turn (roll-in, first 30 s, last 30 s, and roll-out of turn). Only the main effects of the three independent variables were significant; there were no significant interactions. This result greatly reduces the number of different conditions that should be scored separately for the evaluation of SRT performance. The results also showed that the magnitude of the heading and altitude errors at the beginning of the SRT correlated with the magnitude of the heading and altitude errors throughout the turn. This result suggests that for the turn to be well executed, it is important for it to begin with little error in these two response parameters. The observations reported here should be considered when establishing SRT performance norms and comparing student scores. Furthermore, it seems easier for pilots to maintain good performance than to correct poor performance.
A Comparison of seismic instrument noise coherence analysis techniques
Ringler, A.T.; Hutt, C.R.; Evans, J.R.; Sandoval, L.D.
2011-01-01
The self-noise of a seismic instrument is a fundamental characteristic used to evaluate the quality of the instrument. It is important to be able to measure this self-noise robustly, to understand how differences among test configurations affect the tests, and to understand how different processing techniques and isolation methods (from nonseismic sources) can contribute to differences in results. We compare two popular coherence methods used for calculating incoherent noise, which is widely used as an estimate of instrument self-noise (incoherent noise and self-noise are not strictly identical but in observatory practice are approximately equivalent; Holcomb, 1989; Sleeman et al., 2006). Beyond directly comparing these two coherence methods on similar models of seismometers, we compare how small changes in test conditions can contribute to incoherent-noise estimates. These conditions include timing errors, signal-to-noise ratio changes (ratios between background noise and instrument incoherent noise), relative sensor locations, misalignment errors, processing techniques, and different configurations of sensor types.
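A two-sensor toy version of the incoherent-noise idea compared in the paper: two co-located records share a common ground signal, the cross-spectrum retains only that common part, and the remainder of each auto-spectrum approximates the sensor's self-noise. The synthetic signal, noise levels, and the simple P11 - |P12| estimator are illustrative; the paper compares more careful formulations (Holcomb; Sleeman et al.).

```python
import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(1)
fs, n = 100.0, 2**17
common = np.cumsum(rng.normal(size=n))        # shared "ground motion" (red spectrum)
rec1 = common + 5.0 * rng.normal(size=n)      # sensor 1 = common signal + its own noise
rec2 = common + 5.0 * rng.normal(size=n)      # sensor 2 = common signal + its own noise

nperseg = 4096
f, p11 = welch(rec1, fs=fs, nperseg=nperseg)
_, p22 = welch(rec2, fs=fs, nperseg=nperseg)
_, p12 = csd(rec1, rec2, fs=fs, nperseg=nperseg)

noise1 = p11 - np.abs(p12)                    # incoherent (self-noise) part of sensor 1
coherence = np.abs(p12) ** 2 / (p11 * p22)    # magnitude-squared coherence
print(f"median coherence: {np.median(coherence):.3f}")
print(f"median incoherent-noise estimate: {np.median(noise1):.3f}  "
      f"(true white-noise PSD ~ {2 * 5.0**2 / fs:.3f})")
```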
Characterizing error distributions for MISR and MODIS optical depth data
NASA Astrophysics Data System (ADS)
Paradise, S.; Braverman, A.; Kahn, R.; Wilson, B.
2008-12-01
The Multi-angle Imaging SpectroRadiometer (MISR) and Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's EOS satellites collect massive, long-term data records on aerosol amounts and particle properties. MISR and MODIS have different but complementary sampling characteristics. In order to realize maximum scientific benefit from these data, the nature of their error distributions must be quantified and understood so that discrepancies between them can be rectified and their information combined in the most beneficial way. By 'error' we mean all sources of discrepancies between the true value of the quantity of interest and the measured value, including instrument measurement errors, artifacts of retrieval algorithms, and differential spatial and temporal sampling characteristics. Previously in [Paradise et al., Fall AGU 2007: A12A-05] we presented a unified, global analysis and comparison of MISR and MODIS measurement biases and variances over the lives of the missions. We used AErosol RObotic NETwork (AERONET) data as ground truth and evaluated MISR and MODIS optical depth distributions relative to AERONET using simple linear regression. However, AERONET data are themselves instrumental measurements subject to sources of uncertainty. In this talk, we discuss results from an improved analysis of MISR and MODIS error distributions that uses errors-in-variables regression, accounting for uncertainties in both the dependent and independent variables. We demonstrate on optical depth data, but the method is generally applicable to other aerosol properties as well.
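The errors-in-variables step can be sketched with a Deming regression, which allows for measurement error in both the ground-truth and satellite optical depths; ordinary least squares treats the x variable as error-free and attenuates the slope. The synthetic optical depths, error levels, and the delta ratio below are invented for illustration and do not represent the authors' exact formulation.

```python
import numpy as np

def deming_regression(x, y, delta=1.0):
    """Errors-in-variables (Deming) fit of y = a + b*x; delta = var(y errors) / var(x errors)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    b = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return y.mean() - b * x.mean(), b

# Synthetic demonstration: both the "AERONET" and the satellite optical depths carry error.
rng = np.random.default_rng(2)
truth = rng.uniform(0.05, 0.8, 300)
aeronet = truth + rng.normal(0, 0.05, truth.size)
satellite = 0.03 + 1.10 * truth + rng.normal(0, 0.05, truth.size)

a_ols, b_ols = np.polynomial.polynomial.polyfit(aeronet, satellite, 1)
a_dem, b_dem = deming_regression(aeronet, satellite, delta=1.0)
print(f"OLS slope (x assumed error-free): {b_ols:.3f}")
print(f"Deming slope (errors in both):    {b_dem:.3f}   (true slope 1.10)")
```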
[µCT analysis of mandibular molars before and after instrumentation by Reciproc files].
Ametrano, Gianluca; Riccitiello, Francesco; Amato, Massimo; Formisano, Anna; Muto, Massimo; Grassi, Roberta; Valletta, Alessandra; Simeone, Michele
2013-01-01
Cleaning and shaping are important steps in root canal treatment. A number of different methodologies have been developed to overcome these problems, including the introduction of nickel-titanium (NiTi) rotary instruments. In endodontics, NiTi instruments have been shown to significantly reduce procedural errors compared with manual instrumentation techniques. The efficiency of files is related to many factors. Previous investigations that used µCT analysis were hampered by insufficient resolution or incorrect projections. The new generation of µCT offers better performance, such as micron resolution and accurate measurement software, for evaluating the detailed anatomy of the root canal. The aim of the paper was to evaluate the efficiency of Reciproc files in root canal treatment, assessed before and after instrumentation by using µCT analysis.
Study of an instrument for sensing errors in a telescope wavefront
NASA Technical Reports Server (NTRS)
Golden, L. J.; Shack, R. V.; Slater, P. N.
1974-01-01
Focal plane sensors for determining the error in a telescope wavefront were investigated. The construction of three candidate test instruments and their evaluation in terms of small wavefront error aberration measurements are described. A laboratory wavefront simulator was designed and fabricated to evaluate the test instruments. The laboratory wavefront error simulator was used to evaluate three tests: a Hartmann test, a polarization shearing interferometer test, and an interferometric Zernike test.
Aquatic habitat mapping with an acoustic doppler current profiler: Considerations for data quality
Gaeuman, David; Jacobson, Robert B.
2005-01-01
When mounted on a boat or other moving platform, acoustic Doppler current profilers (ADCPs) can be used to map a wide range of ecologically significant phenomena, including measures of fluid shear, turbulence, vorticity, and near-bed sediment transport. However, the instrument movement necessary for mapping applications can generate significant errors, many of which have not been adequately described. This report focuses on the mechanisms by which moving-platform errors are generated, and quantifies their magnitudes under typical habitat-mapping conditions. The potential for velocity errors caused by mis-alignment of the instrument's internal compass is widely recognized, but has not previously been quantified for moving instruments. Numerical analyses show that even relatively minor compass mis-alignments can produce significant velocity errors, depending on the ratio of absolute instrument velocity to the target velocity and on the relative directions of instrument and target motion. A maximum absolute instrument velocity of about 1 m/s is recommended for most mapping applications. Lower velocities are appropriate when making bed velocity measurements, an emerging application that makes use of ADCP bottom-tracking to measure the velocity of sediment particles at the bed. The mechanisms by which heterogeneities in the flow velocity field generate horizontal velocity errors are also quantified, and some basic limitations in the effectiveness of standard error-detection criteria for identifying these errors are described. Bed velocity measurements may be particularly vulnerable to errors caused by spatial variability in the sediment transport field.
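The compass-misalignment mechanism quantified in the report can be illustrated with a simple 2-D geometry: the instrument measures water motion relative to the moving platform, a heading bias rotates that relative velocity, and the residual error scales with platform speed rather than with the (often much smaller) target velocity. The speeds and the 5-degree misalignment below are illustrative, and the sketch ignores the full bottom-track/GPS processing chain.

```python
import numpy as np

def rotate(vec, angle_deg):
    """Rotate a 2-D vector counter-clockwise by angle_deg."""
    a = np.deg2rad(angle_deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([c * vec[0] - s * vec[1], s * vec[0] + c * vec[1]])

# Illustrative values (not from the paper): 1.5 m/s boat speed, 0.30 m/s current.
boat_vel = np.array([1.5, 0.0])        # true platform velocity over ground (m/s)
water_vel = np.array([0.0, 0.3])       # true water velocity (m/s)
misalign_deg = 5.0

# The ADCP senses water motion relative to the moving instrument; a compass bias
# rotates that relative velocity before the (correctly known) platform velocity is
# added back, leaving an error that scales with platform speed.
relative = water_vel - boat_vel
estimated_water_vel = rotate(relative, misalign_deg) + boat_vel
error = estimated_water_vel - water_vel
print(f"velocity error: {np.linalg.norm(error):.3f} m/s "
      f"({np.linalg.norm(error) / np.linalg.norm(water_vel):.0%} of the target velocity)")
```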
The NBS scale of radiance temperature
NASA Technical Reports Server (NTRS)
Waters, William R.; Walker, James H.; Hattenburg, Albert T.
1988-01-01
The measurement methods and instrumentation used in the realization and transfer of the International Practical Temperature Scale (IPTS-68) above the temperature of freezing gold are described. The determination of the ratios of spectral radiance of tungsten-strip lamps to a gold-point blackbody at a wavelength of 654.6 nm is detailed. The response linearity, spectral responsivity, scattering error, and polarization properties of the instrumentation are described. The analysis of the sources of error and estimates of uncertainty are presented. The assigned uncertainties (three standard deviations) in radiance temperature range from + or - 2 K at 2573 K to + or - 0.5 K at 1073 K.
Bonmati, Ester; Hu, Yipeng; Villarini, Barbara; Rodell, Rachael; Martin, Paul; Han, Lianghao; Donaldson, Ian; Ahmed, Hashim U; Moore, Caroline M; Emberton, Mark; Barratt, Dean C
2018-04-01
Image-guided systems that fuse magnetic resonance imaging (MRI) with three-dimensional (3D) ultrasound (US) images for performing targeted prostate needle biopsy and minimally invasive treatments for prostate cancer are of increasing clinical interest. To date, a wide range of different accuracy estimation procedures and error metrics have been reported, which makes comparing the performance of different systems difficult. A set of nine measures are presented to assess the accuracy of MRI-US image registration, needle positioning, needle guidance, and overall system error, with the aim of providing a methodology for estimating the accuracy of instrument placement using a MR/US-guided transperineal approach. Using the SmartTarget fusion system, an MRI-US image alignment error was determined to be 2.0 ± 1.0 mm (mean ± SD), and an overall system instrument targeting error to be 3.0 ± 1.2 mm. Three needle deployments for each target phantom lesion were found to result in a 100% lesion hit rate and a median predicted cancer core length of 5.2 mm. The application of a comprehensive, unbiased validation assessment for MR/US guided systems can provide useful information on system performance for quality assurance and system comparison. Furthermore, such an analysis can be helpful in identifying relationships between these errors, providing insight into the technical behavior of these systems. © 2018 American Association of Physicists in Medicine.
The Accuracy of GBM GRB Localizations
NASA Astrophysics Data System (ADS)
Briggs, Michael Stephen; Connaughton, V.; Meegan, C.; Hurley, K.
2010-03-01
We report a study of the accuracy of GBM GRB localizations, analyzing three types of localizations: those produced automatically by the GBM Flight Software on board GBM, those produced automatically with ground software in near real time, and localizations produced with human guidance. The two types of automatic locations are distributed in near real-time via GCN Notices; the human-guided locations are distributed on a timescale of many minutes or hours using GCN Circulars. This work uses a Bayesian analysis that models the distribution of the GBM total location error by comparing GBM locations to more accurate locations obtained with other instruments. Reference locations are obtained from Swift, Super-AGILE, the LAT, and the IPN. We model the GBM total location errors as having systematic errors in addition to the statistical errors and use the Bayesian analysis to constrain the systematic errors.
[A basic research to share Fourier transform near-infrared spectrum information resource].
Zhang, Lu-Da; Li, Jun-Hui; Zhao, Long-Lian; Zhao, Li-Li; Qin, Fang-Li; Yan, Yan-Lu
2004-08-01
A method to share the information resource in the database of Fourier transform near-infrared (FTNIR) spectrum information of agricultural products and utilize the spectrum information sufficiently is explored in this paper. Mapping spectrum information from one instrument to another is studied to express the spectrum information accurately between the instruments. Then mapping spectrum information is used to establish a mathematical model of quantitative analysis without including standard samples. The analysis result is that the correlation coefficient r is 0.941 and the relative error is 3.28% between the model estimate values and the Kjeldahl's value for the protein content of twenty-two wheat samples, while the correlation coefficient r is 0.963 and the relative error is 2.4% for the other model, which is established by using standard samples. It is shown that the spectrum information can be shared by using the mapping spectrum information. So it can be concluded that the spectrum information in one FTNIR spectrum information database can be transformed to another instrument's mapping spectrum information, which makes full use of the information resource in the database of FTNIR spectrum information to realize the resource sharing between different instruments.
Creel, Scott; Creel, Michael
2009-11-01
1. Sampling error in annual estimates of population size creates two widely recognized problems for the analysis of population growth. First, if sampling error is mistakenly treated as process error, one obtains inflated estimates of the variation in true population trajectories (Staples, Taper & Dennis 2004). Second, treating sampling error as process error is thought to overestimate the importance of density dependence in population growth (Viljugrein et al. 2005; Dennis et al. 2006). 2. In ecology, state-space models are used to account for sampling error when estimating the effects of density and other variables on population growth (Staples et al. 2004; Dennis et al. 2006). In econometrics, regression with instrumental variables is a well-established method that addresses the problem of correlation between regressors and the error term, but requires fewer assumptions than state-space models (Davidson & MacKinnon 1993; Cameron & Trivedi 2005). 3. We used instrumental variables to account for sampling error and fit a generalized linear model to 472 annual observations of population size for 35 Elk Management Units in Montana, from 1928 to 2004. We compared this model with state-space models fit with the likelihood function of Dennis et al. (2006). We discuss the general advantages and disadvantages of each method. Briefly, regression with instrumental variables is valid with fewer distributional assumptions, but state-space models are more efficient when their distributional assumptions are met. 4. Both methods found that population growth was negatively related to population density and winter snow accumulation. Summer rainfall and wolf (Canis lupus) presence had much weaker effects on elk (Cervus elaphus) dynamics [though limitation by wolves is strong in some elk populations with well-established wolf populations (Creel et al. 2007; Creel & Christianson 2008)]. 5. Coupled with predictions for Montana from global and regional climate models, our results predict a substantial reduction in the limiting effect of snow accumulation on Montana elk populations in the coming decades. If other limiting factors do not operate with greater force, population growth rates would increase substantially.
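As a compact illustration of the instrumental-variables idea described above (point 3), the sketch below fits a density-dependence slope by two-stage least squares, using a second noisy observation of the same underlying density as the instrument. All coefficients, noise levels, and the instrument choice are invented; they do not reproduce the elk analysis.

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """Minimal 2SLS: instrument the regressors X with Z, then regress y on the fitted X."""
    X, Z = np.column_stack([np.ones(len(y)), X]), np.column_stack([np.ones(len(y)), Z])
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first stage: project X onto Z
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]    # second stage

# Synthetic example: growth depends on true (log) density, but density is observed
# with sampling error; a second, independent noisy count serves as the instrument.
rng = np.random.default_rng(3)
n = 2000
true_density = rng.normal(0.0, 1.0, n)
obs_density = true_density + rng.normal(0.0, 0.5, n)          # sampling error
instrument = true_density + rng.normal(0.0, 0.5, n)           # correlated with truth, not with the error
growth = 0.3 - 0.4 * true_density + rng.normal(0.0, 0.2, n)   # true density dependence = -0.4

ols = np.linalg.lstsq(np.column_stack([np.ones(n), obs_density]), growth, rcond=None)[0]
iv = two_stage_least_squares(growth, obs_density, instrument)
print(f"OLS slope (attenuated by sampling error): {ols[1]:+.3f}")
print(f"IV / 2SLS slope:                          {iv[1]:+.3f}   (true value -0.400)")
```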
The Re-Analysis of Ozone Profile Data from a 41-Year Series of SBUV Instruments
NASA Technical Reports Server (NTRS)
Kramarova, Natalya; Frith, Stacey; Bhartia, Pawan K.; McPeters, Richard; Labow, Gordon; Taylor, Steven; Fisher, Bradford
2012-01-01
In this study we present the validation of ozone profiles from a number of Solar Backscatter Ultraviolet (SBUV) and SBUV/2 instruments that were recently reprocessed using an updated (Version 8.6) algorithm. The SBUV dataset provides the longest available record of global ozone profiles, spanning a 41-year period from 1970 to 2011 (except a 5-year gap in the 1970s) and includes ozone profile records obtained from the Nimbus-4 BUV and Nimbus-7 SBUV instruments, and a series of SBUV(/2) instruments launched on NOAA operational satellites (NOAA 09, 11, 14, 16, 17, 18, 19). Although modifications in instrument design were made in the evolution from the BUV instrument to the modern SBUV(/2) model, the basic principles of the measurement technique and retrieval algorithm remain the same. The long-term SBUV data record allows us to create a consistent, calibrated dataset of ozone profiles that can be used for climate studies and trend analyses. In particular, we focus on estimating the various sources of error in the SBUV profile ozone retrievals using independent observations and analysis of the algorithm itself. For the first time we include in the metadata a quantitative estimate of the smoothing error, defined as the error due to profile variability that the SBUV observing system cannot inherently measure. The magnitude of the smoothing error varies with altitude, latitude, season and solar zenith angle. Between 10 and 1 hPa the smoothing errors for the SBUV monthly zonal mean retrievals are of the order of 1%, but start to increase above and below this layer. The largest smoothing errors, as large as 15-20%, were detected in the troposphere. The SBUV averaging kernels, provided with the ozone profiles in version 8.6, help to eliminate the smoothing effect when comparing the SBUV profiles with high vertical resolution measurements, and make it convenient to use the SBUV ozone profiles for data assimilation and model validation purposes. The smoothing error can also be minimized by combining layers of data, and we will discuss recommendations for this approach as well. The SBUV ozone profiles have been intensively validated against satellite profile measurements obtained from the Microwave Limb Sounders (MLS) (on board the UARS and AURA satellites), Stratospheric Aerosol and Gas Experiment (SAGE) and Michelson Interferometer for Passive Atmospheric Sounding (MIPAS). Also, we compare coincident and collocated SBUV ozone retrievals with observations made by ground-based instruments, such as microwave spectrometers, lidars, Umkehr instruments and balloon-borne ozonesondes. Finally, we compare the SBUV ozone profiles with output from the NASA GSFC GEOS-CCM model. In the stratosphere between 25 and 1 hPa the mean biases and standard deviations are within 5% for monthly mean ozone profiles. Above and below this layer the vertical resolution of the SBUV algorithm decreases and the effects of vertical smoothing should be taken into account. Though the SBUV algorithm has a coarser vertical resolution in the lower stratosphere and troposphere, it is capable of precisely estimating the integrated ozone column between the surface and 25 hPa. The time series of the tropospheric/lower-stratospheric ozone column derived from SBUV agrees within 5% with the corresponding values observed by an ensemble of ozonesonde stations in the Northern Hemisphere.
Drifts of the ozone time series obtained from each SBUV(/2) instrument relative to ground-based and satellite measurements are evaluated, and some features of individual SBUV(/2) instruments are discussed. In addition to evaluating individual instruments against independent observations, we also focus on the instrument-to-instrument consistency in the series. Overall, Version 8.6 ozone profiles obtained from two different SBUV(/2) instruments compare within a couple of percent during overlap periods and vary consistently in time, with some exceptions. Some of the noted discrepancies might be associated with ozone diurnal variations, since the difference in the local time of the observations for a pair of SBUV(/2) instruments could be several hours. Other issues include the potential short-term drift in measurements as the instrument orbit drifts, and measurements are obtained at high solar zenith angles (>85°). Based on the results of the validation, a consistent, calibrated dataset of SBUV ozone profiles has been created based on internal calibration only.
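The averaging-kernel usage mentioned above follows the standard convention: a high-vertical-resolution profile is smoothed to SBUV resolution before comparison. The sketch below applies that convention with an invented 6-layer kernel and invented profiles, purely to illustrate the operation.

```python
import numpy as np

# Averaging-kernel smoothing: x_smoothed = x_apriori + A @ (x_highres - x_apriori).
# The kernel and profiles are invented for illustration, not SBUV data.
n_layers = 6
x_apriori = np.array([1.0, 2.0, 4.0, 6.0, 5.0, 3.0])   # a-priori ozone (arbitrary units)
x_highres = np.array([1.2, 2.5, 3.6, 6.8, 4.9, 2.7])   # "truth" from a high-resolution sensor

# A broad, row-normalized kernel standing in for the real altitude-dependent one.
A = np.zeros((n_layers, n_layers))
for i in range(n_layers):
    w = np.exp(-0.5 * ((np.arange(n_layers) - i) / 1.2) ** 2)
    A[i] = 0.9 * w / w.sum()            # peak response below 1, as for coarse layers

x_smoothed = x_apriori + A @ (x_highres - x_apriori)
print("high-res profile :", np.round(x_highres, 2))
print("smoothed profile :", np.round(x_smoothed, 2))
```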
NASA Technical Reports Server (NTRS)
Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.
2005-01-01
This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human in the Loop (HITL) studies of SATS HVO and Baseline operations.
Calibration Issues and Operating System Requirements for Electron-Probe Microanalysis
NASA Technical Reports Server (NTRS)
Carpenter, P.
2006-01-01
Instrument purchase requirements and dialogue with manufacturers have established hardware parameters for alignment, stability, and reproducibility, which have helped improve the precision and accuracy of electron microprobe analysis (EPMA). The development of correction algorithms and the accurate solution to quantitative analysis problems requires the minimization of systematic errors and relies on internally consistent data sets. Improved hardware and computer systems have resulted in better automation of vacuum systems, stage and wavelength-dispersive spectrometer (WDS) mechanisms, and x-ray detector systems which have improved instrument stability and precision. Improved software now allows extended automated runs involving diverse setups and better integrates digital imaging and quantitative analysis. However, instrumental performance is not regularly maintained, as WDS are aligned and calibrated during installation but few laboratories appear to check and maintain this calibration. In particular, detector deadtime (DT) data is typically assumed rather than measured, due primarily to the difficulty and inconvenience of the measurement process. This is a source of fundamental systematic error in many microprobe laboratories and is unknown to the analyst, as the magnitude of DT correction is not listed in output by microprobe operating systems. The analyst must remain vigilant to deviations in instrumental alignment and calibration, and microprobe system software must conveniently verify the necessary parameters. Microanalysis of mission critical materials requires an ongoing demonstration of instrumental calibration. Possible approaches to improvements in instrument calibration, quality control, and accuracy will be discussed. Development of a set of core requirements based on discussions with users, researchers, and manufacturers can yield documents that improve and unify the methods by which instruments can be calibrated. These results can be used to continue improvements of EPMA.
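As an illustration of why an assumed rather than measured dead time matters, the sketch below applies the standard non-paralyzable dead-time correction to a few count rates; the correction grows rapidly at high rates, so an error in the assumed tau propagates directly into measured intensities. The dead-time value and count rates are illustrative only.

```python
# Non-paralyzable dead-time correction: true rate N = n / (1 - n * tau),
# where n is the observed count rate and tau the detector dead time.
def deadtime_correct(observed_cps, tau_seconds):
    """Recover the true count rate from the observed rate for a counting chain."""
    if observed_cps * tau_seconds >= 1.0:
        raise ValueError("observed rate inconsistent with the assumed dead time")
    return observed_cps / (1.0 - observed_cps * tau_seconds)

tau = 1.5e-6                                   # assumed dead time, 1.5 microseconds (illustrative)
for observed in (1_000.0, 20_000.0, 100_000.0):
    corrected = deadtime_correct(observed, tau)
    print(f"observed {observed:9,.0f} cps -> corrected {corrected:9,.0f} cps "
          f"({100 * (corrected / observed - 1):.1f}% correction)")
```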
Ultraspectral sounding retrieval error budget and estimation
NASA Astrophysics Data System (ADS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larrabee L.; Yang, Ping
2011-11-01
The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. The intent of the measurement of the thermodynamic state is the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise, and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with associated RTM. In this paper, ECAS is described and demonstration is made with the measurements of the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).
Ultraspectral Sounding Retrieval Error Budget and Estimation
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping
2011-01-01
The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. The intent of the measurement of the thermodynamic state is the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise, and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with associated RTM. In this paper, ECAS is described and demonstration is made with the measurements of the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).
NASA Technical Reports Server (NTRS)
Menard, Richard; Chang, Lang-Ping
1998-01-01
A Kalman filter system designed for the assimilation of limb-sounding observations of stratospheric chemical tracers, which has four tunable covariance parameters, was developed in Part I (Menard et al. 1998). The assimilation results of CH4 observations from the Cryogenic Limb Array Etalon Sounder instrument (CLAES) and the Halogen Occultation Experiment instrument (HALOE) on board the Upper Atmosphere Research Satellite are described in this paper. A robust χ2 criterion, which provides a statistical validation of the forecast and observational error covariances, was used to estimate the tunable variance parameters of the system. In particular, an estimate of the model error variance was obtained. The effect of model error on the forecast error variance became critical after only three days of assimilation of CLAES observations, although it took 14 days of forecast to double the initial error variance. We further found that the model error due to numerical discretization, as arising in the standard Kalman filter algorithm, is comparable in size to the physical model error due to wind and transport modeling errors together. Separate assimilations of CLAES and HALOE observations were compared to validate the state estimate away from the observed locations. A wave-breaking event that took place several thousands of kilometers away from the HALOE observation locations was well captured by the Kalman filter due to highly anisotropic forecast error correlations. The forecast error correlation in the assimilation of the CLAES observations was found to have a structure similar to that in pure forecast mode except for smaller length scales. Finally, we have conducted an analysis of the variance and correlation dynamics to determine their relative importance in chemical tracer assimilation problems. Results show that the optimality of a tracer assimilation system depends, for the most part, on having flow-dependent error correlation rather than on evolving the error variance.
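The χ2 criterion referred to above checks that the innovations (observation-minus-forecast residuals) are statistically consistent with the assumed forecast and observation error covariances. The sketch below evaluates that statistic for a single analysis step with an invented covariance model and observation operator; it is a generic illustration, not the Part I formulation.

```python
import numpy as np

def innovation_chi2(y_obs, x_forecast, H, P_forecast, R):
    """Normalized innovation statistic d' S^-1 d with d = y - H x_f and S = H P H' + R.
    Its expectation equals the number of observations when P and R are well tuned."""
    d = y_obs - H @ x_forecast
    S = H @ P_forecast @ H.T + R
    return float(d @ np.linalg.solve(S, d))

rng = np.random.default_rng(4)
nx, ny = 5, 3
H = np.zeros((ny, nx)); H[0, 0] = H[1, 2] = H[2, 4] = 1.0        # observe 3 of 5 grid points
P = 0.4 * np.exp(-np.abs(np.subtract.outer(np.arange(nx), np.arange(nx))) / 2.0)  # forecast cov
R = 0.1 * np.eye(ny)                                             # observation error cov

x_forecast = np.zeros(nx)                                        # forecast state
truth = x_forecast + rng.multivariate_normal(np.zeros(nx), P)    # forecast error drawn from P
y_obs = H @ truth + rng.multivariate_normal(np.zeros(ny), R)     # simulated observations

chi2 = innovation_chi2(y_obs, x_forecast, H, P, R)
print(f"chi-square = {chi2:.2f}  (expected value ~ {ny} when covariances are consistent)")
```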
NASA Astrophysics Data System (ADS)
Jacobson, Gloria; Rella, Chris; Farinas, Alejandro
2014-05-01
Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contribution of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique developed for atomic clock stability assessment by David W. Allan [1] can be effectively and gainfully applied to continuous measurement instruments. As an example, P. Werle et al. have applied these techniques to signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on, and translate, prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the translation from time series to Allan deviation plot for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the use of the Allan deviation to optimize and predict the performance of different calibration schemes will be presented. Even though this presentation uses the specific example of the Picarro G2401 CRDS analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp. 221-230, Feb. 1966 [2] P. Werle, R. Mücke, F. Slemr, "The Limits of Signal Averaging in Atmospheric Trace-Gas Monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS)," Applied Physics B, vol. 57, pp. 131-139, April 1993
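To make the technique concrete, here is a minimal sketch (not taken from the presentation) of a non-overlapping Allan deviation computation for an evenly sampled concentration time series; the sampling rate, averaging times, and simulated white-noise data are purely illustrative.

```python
import numpy as np

def allan_deviation(y, dt, taus):
    """Non-overlapping Allan deviation of an evenly sampled series y.

    y    : 1-D array of measurements (e.g., CO2 mole fraction)
    dt   : sampling interval in seconds
    taus : iterable of averaging times (seconds), each roughly a multiple of dt
    Returns the averaging times actually used and the Allan deviation at each.
    """
    y = np.asarray(y, dtype=float)
    used_taus, adevs = [], []
    for tau in taus:
        m = int(round(tau / dt))              # samples per averaging bin
        n_bins = y.size // m
        if m < 1 or n_bins < 2:
            continue
        bins = y[: n_bins * m].reshape(n_bins, m).mean(axis=1)
        # Allan variance: half the mean squared difference of adjacent bin means
        avar = 0.5 * np.mean(np.diff(bins) ** 2)
        used_taus.append(m * dt)
        adevs.append(np.sqrt(avar))
    return np.array(used_taus), np.array(adevs)

# Example: pure white noise should fall off as tau**-0.5 on a log-log plot
rng = np.random.default_rng(0)
series = 400.0 + 0.05 * rng.standard_normal(3600)   # one hour of 1 Hz data
taus, adev = allan_deviation(series, dt=1.0, taus=[1, 2, 5, 10, 30, 60, 300])
for t, a in zip(taus, adev):
    print(f"tau = {t:6.0f} s   Allan deviation = {a:.4f}")
```

On such a plot, white noise falls off as the square root of the averaging time, a linear drift eventually makes the curve rise again, and the minimum suggests a useful averaging or calibration interval.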
ERIC Educational Resources Information Center
Juttner, Melanie; Neuhaus, Birgit J.
2012-01-01
In view of the lack of instruments for measuring biology teachers' pedagogical content knowledge (PCK), this article reports on a study about the development of PCK items for measuring teachers' knowledge of pupils' errors and ways for dealing with them. This study investigated 9th and 10th grade German pupils' (n = 461) drawings in an achievement…
A multi-site analysis of random error in tower-based measurements of carbon and energy fluxes
Andrew D. Richardson; David Y. Hollinger; George G. Burba; Kenneth J. Davis; Lawrence B. Flanagan; Gabriel G. Katul; J. William Munger; Daniel M. Ricciuto; Paul C. Stoy; Andrew E. Suyker; Shashi B. Verma; Steven C. Wofsy
2006-01-01
Measured surface-atmosphere fluxes of energy (sensible heat, H, and latent heat, LE) and CO2 (FCO2) represent the "true" flux plus or minus potential random and systematic measurement errors. Here, we use data from seven sites in the AmeriFlux network, including five forested sites (two of which include "tall tower" instrumentation), one grassland site, and one...
Characterization of in Band Stray Light in SBUV-2 Instruments
NASA Technical Reports Server (NTRS)
Huang, L. K.; DeLand, M. T.; Taylor, S. L.; Flynn, L. E.
2014-01-01
Significant in-band stray light (IBSL) error at solar zenith angle (SZA) values larger than 77deg near sunset in 4 SBUV/2 (Solar Backscattered Ultraviolet) instruments, on board the NOAA-14, 17, 18 and 19 satellites, has been characterized. The IBSL error is caused by large surface reflection and scattering of the air-gapped depolarizer in front of the instrument's monochromator aperture. The source of the IBSL error is direct solar illumination of instrument components near the aperture rather than from earth shine. The IBSL contamination at 273 nm can reach 40% of earth radiance near sunset, which results in as much as a 50% error in the retrieved ozone from the upper stratosphere. We have analyzed SBUV/2 albedo measurements on both the dayside and nightside to develop an empirical model for the IBSL error. This error has been corrected in the V8.6 SBUV/2 ozone retrieval.
Development and validity of an instrumented handbike: initial results of propulsion kinetics.
van Drongelen, Stefan; van den Berg, Jos; Arnet, Ursina; Veeger, Dirkjan H E J; van der Woude, Lucas H V
2011-11-01
To develop an instrumented handbike system to measure the forces applied to the handgrip during handbiking. A 6 degrees of freedom force sensor was built into the handgrip of an attach-unit handbike, together with two optical encoders to measure the orientation of the handgrip and crank in space. Linearity, precision, and percent error were determined for static and dynamic tests. High linearity was demonstrated for both the static and the dynamic condition (r=1.01). Precision was high under the static condition (standard deviation of 0.2 N); however, the precision decreased with higher loads during the dynamic condition. Percent error values were between 0.3 and 5.1%. This is the first instrumented handbike system that can register 3-dimensional forces. It can be concluded that the instrumented handbike system allows for an accurate force analysis based on forces registered at the handle bars.
Once-Daily Amikacin Dosing in Burn Patients Treated with Continuous Venovenous Hemofiltration
2011-10-01
previously documented inaccuracies with the Vitek 2 instrument (1), all Acinetobacter baumannii isolates reported as susceptible to amikacin were confirmed...to the analysis. Bacteriology. Amikacin MICs were determined for Gram-negative bloodstream isolates using the Vitek 2 instrument (bioMérieux...al. 2010. Aminoglycoside resistance and susceptibility testing errors in Acinetobacter baumannii-calcoaceticus complex. J. Clin. Microbiol. 48:1132
Rasch Analysis of the Student Refractive Error and Eyeglass Questionnaire
Crescioni, Mabel; Messer, Dawn H.; Warholak, Terri L.; Miller, Joseph M.; Twelker, J. Daniel; Harvey, Erin M.
2014-01-01
Purpose. To evaluate and refine a newly developed instrument, the Student Refractive Error and Eyeglasses Questionnaire (SREEQ), designed to measure the impact of uncorrected and corrected refractive error on vision-related quality of life (VRQoL) in school-aged children. Methods. A 38-statement instrument consisting of two parts was developed: Part A relates to perceptions regarding uncorrected vision and Part B relates to perceptions regarding corrected vision and includes other statements regarding VRQoL with spectacle correction. The SREEQ was administered to 200 Native American 6th through 12th grade students known to have previously worn and who currently require eyeglasses. Rasch analysis was conducted to evaluate the functioning of the SREEQ. Statements on Part A and Part B were analyzed to examine the dimensionality and constructs of the questionnaire, how well the items functioned, and the appropriateness of the response scale used. Results. Rasch analysis suggested two items be eliminated and the measurement scale for matching items be reduced from a 4-point response scale to a 3-point response scale. With these modifications, categorical data were converted to interval-level data to conduct an item and person analysis. A shortened version of the SREEQ, the SREEQ-R, was constructed with these modifications; it included the statements that were able to capture changes in VRQoL associated with spectacle wear for those with significant refractive error in our study population. Conclusions. While SREEQ Part B appears to have less than optimal reliability for assessing the impact of spectacle correction on VRQoL in our student population, it is also able to detect statistically significant differences from pretest to posttest on both the group and individual levels, showing that the instrument can assess the impact that glasses have on VRQoL. Further modifications to the questionnaire, such as those included in the SREEQ-R, could enhance its functionality. PMID:24811844
Twenty-Five Years of Landsat Thermal Band Calibration
NASA Technical Reports Server (NTRS)
Barsi, Julia A.; Markham, Brian L.; Schoff, John R.; Hook, Simon J.; Raqueno, Nina G.
2010-01-01
Landsat-7 Enhanced Thematic Mapper+ (ETM+), launched in April 1999, and Landsat-5 Thematic Mapper (TM), launched in 1984, both have a single thermal band. Both instruments' thermal band calibrations have been updated previously: ETM+ in 2001 for a pre-launch calibration error, and TM in 2007 for data acquired since the current era of vicarious calibration began (1999). Vicarious calibration teams at the Rochester Institute of Technology (RIT) and NASA/Jet Propulsion Laboratory (JPL) have been working to validate the instrument calibration since 1999. Recent developments in their techniques and sites have expanded the temperature and temporal range of the validation. The new data indicate that the calibration of both instruments had errors: the ETM+ calibration contained a gain error of 5.8% since launch; the TM calibration contained a gain error of 5% and an additional offset error between 1997 and 1999. Both instruments required adjustments in their thermal calibration coefficients in order to correct for the errors. The new coefficients were calculated and added to the Landsat operational processing system in early 2010. With the corrections, both instruments are calibrated to within +/-0.7 K.
Analysis of localizer and glide slope Flight Technical Error
DOT National Transportation Integrated Search
2008-12-09
A new wake turbulence procedure has been developed that permits two dependent arrival traffic streams during instrument meteorological conditions to runways with centerline separations less than 2500 ft. For the proposed procedure, aircraft approac...
Specification and Error Pattern Based Program Monitoring
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Johnson, Scott; Rosu, Grigore; Clancy, Daniel (Technical Monitor)
2001-01-01
We briefly present Java PathExplorer (JPaX), a tool developed at NASA Ames for monitoring the execution of Java programs. JPaX can be used not only during program testing to reveal subtle errors, but can also be applied during operation to survey safety-critical systems. The tool facilitates automated instrumentation of a program in order to properly observe its execution. The instrumentation can be either at the bytecode level or at the source level when the source code is available. JPaX is an instance of a more general project, called PathExplorer (PAX), which is a basis for experiments rather than a fixed system, capable of monitoring various programming languages and experimenting with other logics and analysis techniques.
Study on avalanche photodiode influence on heterodyne laser interferometer linearity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budzyn, Grzegorz, E-mail: grzegorz.budzyn@pwr.wroc.pl; Podzorny, Tomasz
2016-06-28
In this paper we analyze factors reducing the achievable accuracy of heterodyne laser interferometers. The analysis is performed for avalanche-photodiode input stages but is, in its main points, also valid for stages with other types of photodetectors. Instrumental error originating from optical, electronic, and digital signal processing factors is taken into consideration. We identify the factors which are critical and those which can be neglected at certain accuracy requirements. We show that it is possible to reduce errors of the laser instrument below the 1 nm level for multiaxial APD-based interferometers by precise control of the incident optical power and the temperature of the photodiode.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagler, Peter C.; Tucker, Gregory S.; Fixsen, Dale J.
The detection of the primordial B-mode polarization signal of the cosmic microwave background (CMB) would provide evidence for inflation. Yet as has become increasingly clear, the detection of such a faint signal requires an instrument with both wide frequency coverage to reject foregrounds and excellent control over instrumental systematic effects. Using a polarizing Fourier transform spectrometer (FTS) for CMB observations meets both of these requirements. In this work, we present an analysis of instrumental systematic effects in polarizing FTSs, using the Primordial Inflation Explorer (PIXIE) as a worked example. We analytically solve for the most important systematic effects inherent to the FTS—emissive optical components, misaligned optical components, sampling and phase errors, and spin-synchronous effects—and demonstrate that residual systematic error terms after corrections will all be at the sub-nK level, well below the predicted 100 nK B-mode signal.
A steep peripheral ring in irregular cornea topography, real or an instrument error?
Galindo-Ferreiro, Alicia; Galvez-Ruiz, Alberto; Schellini, Silvana A; Galindo-Alonso, Julio
2016-01-01
To demonstrate that the steep peripheral ring (red zone) on corneal topography after myopic laser in situ keratomileusis (LASIK) could possibly be due to instrument error and not always to a real increase in corneal curvature. A spherical model of the corneal surface and modified topography software were used to analyze the cause of an error due to instrument design. This study involved modification of the software of a commercially available topographer. A small modification of the topography image results in a red zone on the corneal topography color map. Corneal modeling indicates that the red zone could be an artifact due to an instrument-induced error. The steep curvature change after LASIK signified by the red zone could therefore also be an error arising from the plotting algorithms of the corneal topographer, rather than solely a real change in curvature.
Space shuttle navigation analysis. Volume 2: Baseline system navigation
NASA Technical Reports Server (NTRS)
Jones, H. L.; Luders, G.; Matchett, G. A.; Rains, R. G.
1980-01-01
Studies related to the baseline navigation system for the orbiter are presented. The baseline navigation system studies include a covariance analysis of the Inertial Measurement Unit calibration and alignment procedures, postflight IMU error recovery for the approach and landing phases, on-orbit calibration of IMU instrument biases, and a covariance analysis of entry and prelaunch navigation system performance.
Trends in MODIS Geolocation Error Analysis
NASA Technical Reports Server (NTRS)
Wolfe, R. E.; Nishihama, Masahiro
2009-01-01
Data from the two MODIS instruments have been accurately geolocated (Earth located) to enable retrieval of global geophysical parameters. The authors describe the approach used to geolocate with sub-pixel accuracy over nine years of data from MODIS on NASA's EOS Terra spacecraft and seven years of data from MODIS on the Aqua spacecraft. The approach uses a geometric model of the MODIS instruments, accurate navigation (orbit and attitude) data and an accurate Earth terrain model to compute the location of each MODIS pixel. The error analysis approach automatically matches MODIS imagery with a global set of over 1,000 ground control points from the finer-resolution Landsat satellite to measure static biases and trends in the MODIS geometric model parameters. Both within-orbit and yearly thermally induced cyclic variations in the pointing have been found, as well as a general long-term trend.
The Michelson Stellar Interferometer Error Budget for Triple Triple-Satellite Configuration
NASA Technical Reports Server (NTRS)
Marathay, Arvind S.; Shiefman, Joe
1996-01-01
This report presents the results of a study of the instrumentation tolerances for a conventional style Michelson stellar interferometer (MSI). The method used to determine the tolerances was to determine the change, due to the instrument errors, in the measured fringe visibility and phase relative to the ideal values. The ideal values are those values of fringe visibility and phase that would be measured by a perfect MSI and are attributable solely to the object being detected. Once the functional relationship for changes in visibility and phase as a function of various instrument errors is understood, it is then possible to set limits on the instrument errors in order to ensure that the measured visibility and phase differ from the ideal values by no more than some specified amount. This was done as part of this study. The limits we obtained are based on a visibility error of no more than 1% and a phase error of no more than 0.063 radians (1% of 2π radians). The choice of these 1% limits is supported in the literature. The approach employed in the study involved the use of ASAP (Advanced System Analysis Program) software provided by Breault Research Organization, Inc., in conjunction with parallel analytical calculations. The interferometer accepts object radiation into two separate arms, each consisting of an outer mirror, an inner mirror, a delay line (made up of two moveable mirrors and two static mirrors), and a 10:1 afocal reduction telescope. The radiation coming out of both arms is incident on a slit plane which is opaque with two openings (slits). One of the two slits is centered directly under one of the two arms of the interferometer and the other slit is centered directly under the other arm. The slit plane is followed immediately by an ideal combining lens which images the radiation in the fringe plane (also referred to subsequently as the detector plane).
A computer-controlled instrumentation system for third octave analysis
NASA Technical Reports Server (NTRS)
Faulcon, N. D.; Monteith, J. H.
1978-01-01
An instrumentation system is described which employs a minicomputer, a one-third octave band analyzer, and a time code/tape search unit for the automatic control and analysis of third-octave data. With this system the information necessary for data adjustment is formatted in such a way as to eliminate much of the operator interaction, thereby substantially reducing the probability of error. A description of a program for the calculation of effective perceived noise level from aircraft noise data is included as an example of how this system can be used.
NASA Technical Reports Server (NTRS)
Dobson, Chris C.; Jones, Jonathan E.; Chavers, Greg
2003-01-01
A polychromatic microwave quadrature interferometer has been characterized using several laboratory plasmas. Reflections between the transmitter and the receiver have been observed, and the effects of including reflection terms in the data reduction equation have been examined. An error analysis which includes the reflections, modulation of the scene beam amplitude by the plasma, and simultaneous measurements at two frequencies has been applied to the empirical database, and the results are summarized. For reflection amplitudes around 10%, the reflection terms were found to reduce the calculated error bars for electron density measurements by about a factor of 2. The impact of amplitude modulation is also quantified. In the complete analysis, the mean error bar for high-density measurements is 7.5%, and the mean phase shift error for low-density measurements is 1.2°.
Optimization of multimagnetometer systems on a spacecraft
NASA Technical Reports Server (NTRS)
Neubauer, F. M.
1975-01-01
The problem of optimizing the position of magnetometers along a boom of given length to yield a minimized total error is investigated. The discussion is limited to at most four magnetometers, which seems to be a practical limit due to weight, power, and financial considerations. The outlined error analysis is applied to some illustrative cases. The optimal magnetometer locations, for which the total error is minimum, are computed for given boom length, instrument errors, and very conservative magnetic field models characteristic for spacecraft with only a restricted or ineffective magnetic cleanliness program. It is shown that the error contribution by the magnetometer inaccuracy is increased as the number of magnetometers is increased, whereas the spacecraft field uncertainty is diminished by an appreciably larger amount.
Dealing with dietary measurement error in nutritional cohort studies.
Freedman, Laurence S; Schatzkin, Arthur; Midthune, Douglas; Kipnis, Victor
2011-07-20
Dietary measurement error creates serious challenges to reliably discovering new diet-disease associations in nutritional cohort studies. Such error causes substantial underestimation of relative risks and reduction of statistical power for detecting associations. On the basis of data from the Observing Protein and Energy Nutrition Study, we recommend the following approaches to deal with these problems. Regarding data analysis of cohort studies using food-frequency questionnaires, we recommend 1) using energy adjustment for relative risk estimation; 2) reporting estimates adjusted for measurement error along with the usual relative risk estimates, whenever possible (this requires data from a relevant, preferably internal, validation study in which participants report intakes using both the main instrument and a more detailed reference instrument such as a 24-hour recall or multiple-day food record); 3) performing statistical adjustment of relative risks, based on such validation data, if they exist, using univariate (only for energy-adjusted intakes such as densities or residuals) or multivariate regression calibration. We note that whereas unadjusted relative risk estimates are biased toward the null value, statistical significance tests of unadjusted relative risk estimates are approximately valid. Regarding study design, we recommend increasing the sample size to remedy loss of power; however, it is important to understand that this will often be an incomplete solution because the attenuated signal may be too small to distinguish from unmeasured confounding in the model relating disease to reported intake. Future work should be devoted to alleviating the problem of signal attenuation, possibly through the use of improved self-report instruments or by combining dietary biomarkers with self-report instruments.
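As a rough illustration of the univariate regression-calibration idea recommended above, the following sketch simulates a questionnaire report with classical measurement error, estimates the calibration slope from a small internal validation substudy that uses a reference instrument, and de-attenuates the naive coefficient. All numbers, the continuous outcome, and the variable names are invented for illustration and do not come from the study described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_val = 20000, 1000

# Illustrative simulation: true (energy-adjusted) intake, an error-prone
# questionnaire report, and a continuous health outcome.
true_intake = rng.normal(0.0, 1.0, n)
ffq = true_intake + rng.normal(0.0, 1.5, n)          # classical measurement error
beta_true = 0.30
outcome = beta_true * true_intake + rng.normal(0.0, 1.0, n)

# Naive analysis: regress outcome on the questionnaire report (attenuated slope)
beta_naive = np.cov(outcome, ffq)[0, 1] / np.var(ffq)

# Internal validation substudy: a more detailed reference instrument
ref = true_intake[:n_val] + rng.normal(0.0, 0.3, n_val)
# Univariate regression calibration: slope of reference regressed on questionnaire
lam = np.cov(ref, ffq[:n_val])[0, 1] / np.var(ffq[:n_val])

beta_calibrated = beta_naive / lam
print(f"true {beta_true:.2f}  naive {beta_naive:.2f}  "
      f"calibration factor {lam:.2f}  corrected {beta_calibrated:.2f}")
```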
ALT space shuttle barometric altimeter altitude analysis
NASA Technical Reports Server (NTRS)
Killen, R.
1978-01-01
The accuracy of the barometric altimeters onboard the space shuttle orbiter was analyzed. Altitude estimates from the air data systems, including the operational instrumentation and the developmental flight instrumentation, were obtained for each of the approach and landing test flights. By comparing the barometric altitude estimates to altitudes derived from radar tracking data filtered through a Kalman filter and fully corrected for atmospheric refraction, the errors in the barometric altitudes were shown to be 4 to 5 percent of the Kalman altitudes. By comparing the altitude determined from the true atmosphere derived from weather balloon data to the altitude determined from the U.S. Standard Atmosphere of 1962, it was determined that the assumptions of the Standard Atmosphere equations contribute roughly 75 percent of the total error in the barometric estimates. After correcting the barometric altitude estimates using an average summer model atmosphere computed for the average latitude of the space shuttle landing sites, the residual error in the altitude estimates was reduced to less than 373 feet. This corresponds to an error of less than 1.5 percent for altitudes above 4000 feet for all flights.
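For context, the sketch below shows the kind of pressure-to-altitude conversion a standard-atmosphere assumption implies for the constant-lapse-rate layer. The constants are the conventional sea-level values and the example pressures are illustrative; none of this is taken from the flight data described above.

```python
import math

# Constant-lapse-rate (tropospheric) layer of a standard atmosphere.
# Constants are the conventional sea-level values, shown for illustration only.
T0 = 288.15        # sea-level temperature, K
P0 = 101325.0      # sea-level pressure, Pa
L  = 0.0065        # temperature lapse rate, K/m
g  = 9.80665       # gravitational acceleration, m/s^2
R  = 8.31446       # universal gas constant, J/(mol K)
M  = 0.0289644     # molar mass of dry air, kg/mol

def pressure_altitude(p_pa):
    """Altitude (m) implied by static pressure under the standard-atmosphere model."""
    return (T0 / L) * (1.0 - (p_pa / P0) ** (R * L / (g * M)))

# A real atmosphere warmer or cooler than standard puts the same pressure at a
# different geometric altitude, which is the model error discussed in the abstract.
for p in (95000.0, 85000.0, 70000.0):
    print(f"p = {p/100:6.0f} hPa  ->  pressure altitude ~ {pressure_altitude(p):7.1f} m")
```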
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry L.
2013-01-01
Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).
Error analysis for fast scintillator-based inertial confinement fusion burn history measurements
NASA Astrophysics Data System (ADS)
Lerche, R. A.; Ognibene, T. J.
1999-01-01
Plastic scintillator material acts as a neutron-to-light converter in instruments that make inertial confinement fusion burn history measurements. Light output for a detected neutron in current instruments has a fast rise time (<20 ps) and a relatively long decay constant (1.2 ns). For a burst of neutrons whose duration is much shorter than the decay constant, instantaneous light output is approximately proportional to the integral of the neutron interaction rate with the scintillator material. Burn history is obtained by deconvolving the exponential decay from the recorded signal. The error in estimating signal amplitude for these integral measurements is calculated and compared with a direct measurement in which light output is linearly proportional to the interaction rate.
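A minimal numerical sketch of the deconvolution step described above, assuming an idealized instantaneous rise and single-exponential decay; the pulse shape, sample spacing, and amplitudes are illustrative, with only the 1.2 ns decay constant taken from the abstract.

```python
import numpy as np

tau = 1.2e-9          # scintillator decay constant from the abstract, s
dt = 5e-12            # sample spacing, s (illustrative)
t = np.arange(0.0, 6e-9, dt)

# Illustrative "true" burn history: a burst much shorter than the decay constant
burn = np.exp(-0.5 * ((t - 1.0e-9) / 50e-12) ** 2)

# Recorded signal: burn rate convolved with a fast-rise, exponential-decay response
response = np.exp(-t / tau)
signal = np.convolve(burn, response)[: t.size] * dt

# If S(t) = integral of n(t') exp(-(t - t')/tau) dt', then n(t) = dS/dt + S(t)/tau,
# so the burn history is recovered by differentiating and adding the scaled signal.
recovered = np.gradient(signal, dt) + signal / tau

print("peak of true burn history    :", float(burn.max()))
print("peak of deconvolved estimate :", float(recovered.max()))
```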
Integrated fiducial sample mount and software for correlated microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timothy R McJunkin; Jill R. Scott; Tammy L. Trowbridge
2014-02-01
A sample mount of novel design with integrated fiducials, and accompanying software for assisting operators in easily and efficiently locating points of interest established in previous analytical sessions, are described. The sample holder and software were evaluated with experiments to demonstrate the utility and ease of finding the same points of interest in two different microscopy instruments. In addition, a numerical analysis of the errors expected in determining the same position, unbiased by a human operator, was performed. Based on the results, issues related to achieving reproducibility and best practices for using the sample mount and software were identified. Overall, the sample mount methodology allows data to be efficiently and easily collected on different instruments for the same sample location.
Parameter estimation for terrain modeling from gradient data. [navigation system for Martian rover
NASA Technical Reports Server (NTRS)
Dangelo, K. R.
1974-01-01
A method is developed for modeling terrain surfaces for use on an unmanned Martian roving vehicle. The modeling procedure employs a two-step process which uses gradient as well as height data in order to improve the accuracy of the model's gradient. Least-squares approximation is used in order to stochastically determine the parameters which describe the modeled surface. A complete error analysis of the modeling procedure is included, which determines the effect of instrumental measurement errors on the model's accuracy. Computer simulation is used as a means of testing the entire modeling process, which includes the acquisition of data points, the two-step modeling process, and the error analysis. Finally, to illustrate the procedure, a numerical example is included.
NASA Astrophysics Data System (ADS)
Langford, B.; Acton, W.; Ammann, C.; Valach, A.; Nemitz, E.
2015-10-01
All eddy-covariance flux measurements are associated with random uncertainties which are a combination of sampling error due to natural variability in turbulence and sensor noise. The former is the principal error for systems where the signal-to-noise ratio of the analyser is high, as is usually the case when measuring fluxes of heat, CO2 or H2O. Where signal is limited, which is often the case for measurements of other trace gases and aerosols, instrument uncertainties dominate. Here, we are applying a consistent approach based on auto- and cross-covariance functions to quantify the total random flux error and the random error due to instrument noise separately. As with previous approaches, the random error quantification assumes that the time lag between wind and concentration measurement is known. However, if combined with commonly used automated methods that identify the individual time lag by looking for the maximum in the cross-covariance function of the two entities, analyser noise additionally leads to a systematic bias in the fluxes. Combining data sets from several analysers and using simulations, we show that the method of time-lag determination becomes increasingly important as the magnitude of the instrument error approaches that of the sampling error. The flux bias can be particularly significant for disjunct data, whereas using a prescribed time lag eliminates these effects (provided the time lag does not fluctuate unduly over time). We also demonstrate that when sampling at higher elevations, where low frequency turbulence dominates and covariance peaks are broader, both the probability and magnitude of bias are magnified. We show that the statistical significance of noisy flux data can be increased (limit of detection can be decreased) by appropriate averaging of individual fluxes, but only if systematic biases are avoided by using a prescribed time lag. Finally, we make recommendations for the analysis and reporting of data with low signal-to-noise and their associated errors.
NASA Astrophysics Data System (ADS)
Langford, B.; Acton, W.; Ammann, C.; Valach, A.; Nemitz, E.
2015-03-01
All eddy-covariance flux measurements are associated with random uncertainties which are a combination of sampling error due to natural variability in turbulence and sensor noise. The former is the principal error for systems where the signal-to-noise ratio of the analyser is high, as is usually the case when measuring fluxes of heat, CO2 or H2O. Where signal is limited, which is often the case for measurements of other trace gases and aerosols, instrument uncertainties dominate. We are here applying a consistent approach based on auto- and cross-covariance functions to quantifying the total random flux error and the random error due to instrument noise separately. As with previous approaches, the random error quantification assumes that the time-lag between wind and concentration measurement is known. However, if combined with commonly used automated methods that identify the individual time-lag by looking for the maximum in the cross-covariance function of the two entities, analyser noise additionally leads to a systematic bias in the fluxes. Combining datasets from several analysers and using simulations we show that the method of time-lag determination becomes increasingly important as the magnitude of the instrument error approaches that of the sampling error. The flux bias can be particularly significant for disjunct data, whereas using a prescribed time-lag eliminates these effects (provided the time-lag does not fluctuate unduly over time). We also demonstrate that when sampling at higher elevations, where low frequency turbulence dominates and covariance peaks are broader, both the probability and magnitude of bias are magnified. We show that the statistical significance of noisy flux data can be increased (limit of detection can be decreased) by appropriate averaging of individual fluxes, but only if systematic biases are avoided by using a prescribed time-lag. Finally, we make recommendations for the analysis and reporting of data with low signal-to-noise and their associated errors.
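The following sketch illustrates, with synthetic white-noise data, the general cross-covariance approach discussed in the two abstracts above: the flux is taken from the cross-covariance at a prescribed time lag, and the random flux error is estimated from the scatter of the cross-covariance far from the peak. The sampling rate, averaging period, lag windows, and signal levels are illustrative choices and are not the paper's.

```python
import numpy as np

def cross_covariance(w, c, lags):
    """Cross-covariance of vertical wind w and scalar c at integer sample lags."""
    w = w - w.mean()
    c = c - c.mean()
    out = []
    for k in lags:
        if k >= 0:
            out.append(np.mean(w[: w.size - k] * c[k:]))
        else:
            out.append(np.mean(w[-k:] * c[: c.size + k]))
    return np.array(out)

rng = np.random.default_rng(2)
fs = 10                                   # sampling rate, Hz (illustrative)
n = 30 * 60 * fs                          # one 30-min averaging period
w = rng.standard_normal(n)                # stand-in for vertical wind fluctuations
c = 0.05 * w + rng.standard_normal(n)     # noisy scalar correlated with w

lags = np.arange(-2000, 2001)             # -200 s ... +200 s at 10 Hz
ccov = cross_covariance(w, c, lags)

# Flux from a prescribed (known) time lag, here 0 samples, as recommended above
flux = ccov[lags == 0][0]

# Random error: spread of the cross-covariance far from the peak (|lag| > 150 s)
far = np.abs(lags) > 150 * fs
random_error = ccov[far].std()

print(f"flux = {flux:.4f}, random flux error (1 sigma) ~ {random_error:.4f}")
```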
Improved effectiveness of performance monitoring in amateur instrumental musicians
Jentzsch, Ines; Mkrtchian, Anahit; Kansal, Nayantara
2014-01-01
Here we report a cross-sectional study investigating the influence of instrumental music practice on the ability to monitor for and respond to processing conflicts and performance errors. Behavioural and electrophysiological indicators of response monitoring in amateur musicians with various skill levels were collected using simple conflict tasks. The results show that instrumental musicians are better able than non-musicians to detect conflicts and errors as indicated by systematic increases in the amplitude of the error-related negativity and the N200 with increasing levels of instrumental practice. Also, high levels of musical training were associated with more efficient and less reactive responses after experience of conflicts and errors as indicated by reduced post-error interference and post-conflict processing adjustments. Together, the present findings suggest that playing a musical instrument might improve the ability to monitor our behavior and adjust our responses effectively when needed. As these processes are amongst the first to be affected by cognitive aging, our evidence could promote musical activity as a realistic intervention to slow or even prevent age-related decline in frontal cortex mediated executive functioning. PMID:24056298
Error analysis for the ground-based microwave ozone measurements during STOIC
NASA Technical Reports Server (NTRS)
Connor, Brian J.; Parrish, Alan; Tsou, Jung-Jung; McCormick, M. Patrick
1995-01-01
We present a formal error analysis and characterization of the microwave measurements made during the Stratospheric Ozone Intercomparison Campaign (STOIC). The most important error sources are found to be determination of the tropospheric opacity, the pressure-broadening coefficient of the observed line, and systematic variations in instrument response as a function of frequency ('baseline'). Net precision is 4-6% between 55 and 0.2 mbar, while accuracy is 6-10%. Resolution is 8-10 km below 3 mbar and increases to 17 km at 0.2 mbar. We show the 'blind' microwave measurements from STOIC and make limited comparisons to other measurements. We use the averaging kernels of the microwave measurement to eliminate resolution and a priori effects from a comparison to SAGE II. The STOIC results and comparisons are broadly consistent with the formal analysis.
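The comparison step mentioned above, convolving a higher-resolution profile with the retrieval's averaging kernels before differencing, can be sketched as follows. The profile shapes, grid, and kernel widths are toy values chosen only to illustrate the standard relation x_conv = x_a + A (x_high - x_a) and are not taken from the STOIC analysis.

```python
import numpy as np

# Illustrative dimensions: a coarse microwave retrieval grid
n = 20
levels = np.arange(n)

x_a = np.full(n, 5.0)                                            # a priori profile (arbitrary units)
x_high = 5.0 + 2.0 * np.exp(-0.5 * ((levels - 10) / 2.0) ** 2)   # "high-resolution" profile

# Toy averaging-kernel matrix: each row is a smoothing function a few levels wide
A = np.zeros((n, n))
for i in range(n):
    w = np.exp(-0.5 * ((levels - i) / 1.5) ** 2)
    A[i] = w / w.sum() * 0.9          # row sums below 1 mimic residual a priori influence

# Convolve the high-resolution profile with the retrieval's averaging kernels:
# x_conv = x_a + A (x_high - x_a) is what the low-resolution instrument "would see"
x_conv = x_a + A @ (x_high - x_a)

print(np.round(x_conv, 2))
```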
Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error
NASA Astrophysics Data System (ADS)
Miller, Austin
In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
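A minimal sketch of the instrumental-variable idea described above, using a simulated biological dose indicator as the instrument for an error-prone physical dose estimate. The dose distribution, error sizes, and linear outcome model are invented for illustration, and the sketch deliberately ignores the Berkson-error complication that the work goes on to address.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

true_dose = rng.gamma(2.0, 0.5, n)                        # unobserved true dose (illustrative)
physical = true_dose + rng.normal(0.0, 0.5, n)            # dosimetry with classical error
biomarker = 2.0 + 5.0 * true_dose + rng.normal(0.0, 1.0, n)   # biological dose indicator
beta_true = 0.8
outcome = beta_true * true_dose + rng.normal(0.0, 1.0, n)

# Naive slope on the error-prone physical dose is attenuated toward zero
beta_naive = np.cov(outcome, physical)[0, 1] / np.var(physical)

# Instrumental-variable (Wald/2SLS) estimate using the biomarker as instrument:
# cov(outcome, Z) / cov(physical, Z) is consistent when Z is independent of the
# classical error in the physical dose estimate.
beta_iv = np.cov(outcome, biomarker)[0, 1] / np.cov(physical, biomarker)[0, 1]

print(f"true {beta_true:.2f}  naive {beta_naive:.2f}  IV {beta_iv:.2f}")
```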
NASA Astrophysics Data System (ADS)
Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Favaro, Gabriella; Pastore, Paolo
2015-12-01
A definition of the limit of quantification (LOQ) in the presence of instrumental and non-instrumental errors is proposed. It is defined theoretically by combining the two-component variance regression and LOQ schemas already present in the literature, and is applied to the calibration of zinc by the ICP-MS technique. At low concentration levels, the two-component variance LOQ definition should always be used, especially when a clean room is not available. Three LOQ definitions were considered: one in the concentration domain and two in the signal domain. The LOQ computed in the concentration domain, proposed by Currie, was extended by adding the third-order terms of the Taylor expansion, because they are of the same order of magnitude as the second-order terms and therefore cannot be neglected. In this context, the error propagation was simplified by eliminating the correlation contributions through the use of independent random variables. Among the signal-domain definitions, particular attention was devoted to the recently proposed approach based on requiring at least one significant digit in the measurement. The resulting relative LOQ values were very large, effectively preventing quantitative analysis. It was found that the Currie schemas in the signal and concentration domains gave similar LOQ values, but the former formulation is to be preferred as more easily computable.
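The following sketch illustrates, in a highly simplified form, how a two-component (constant plus proportional) variance model can be combined with a Currie-style criterion to locate an LOQ. The noise parameters, sensitivity, and the 10% threshold are illustrative assumptions and do not reproduce the paper's Taylor-expansion treatment or its zinc calibration data.

```python
import numpy as np

# Simplified two-component variance model for the signal: a constant (instrumental)
# term s0 plus a term proportional to concentration (non-instrumental), in quadrature.
s0 = 50.0        # counts, constant noise term (illustrative)
k  = 0.03        # relative, proportional noise term (illustrative)
slope = 1.0e4    # calibration sensitivity, counts per (ng/mL) (illustrative)

def conc_rsd(c):
    """Relative standard deviation of the estimated concentration at true conc c."""
    s_signal = np.hypot(s0, k * slope * c)
    return s_signal / (slope * c)

# Currie-style LOQ: the concentration where the relative uncertainty drops to 10%
grid = np.linspace(1e-4, 1.0, 200000)
loq = grid[np.argmax(conc_rsd(grid) <= 0.10)]
print(f"LOQ ~ {loq:.4f} ng/mL (RSD = {conc_rsd(loq) * 100:.1f} %)")
```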
S-193 scatterometer transfer function analysis for data processing
NASA Technical Reports Server (NTRS)
Johnson, L.
1974-01-01
A mathematical model for converting raw data measurements of the S-193 scatterometer into processed values of the radar scattering coefficient is presented. The argument is based on an approximation derived from the radar equation and the actual operating principles of the S-193 scatterometer hardware. Possible error sources are inaccuracies in transmitted wavelength, range, antenna illumination integrals, and the instrument itself. The dominant source of error in the calculation of the scattering coefficient is the accuracy of the range. All other factors, with the possible exception of the illumination integral, are not considered to cause significant error in the calculation of the scattering coefficient.
Monitoring Error Rates In Illumina Sequencing.
Manley, Leigh J; Ma, Duanduan; Levine, Stuart S
2016-12-01
Guaranteeing high-quality next-generation sequencing data in a rapidly changing environment is an ongoing challenge. The introduction of the Illumina NextSeq 500 and the deprecation of specific metrics from Illumina's Sequencing Analysis Viewer (SAV; Illumina, San Diego, CA, USA) have made it more difficult to determine directly the baseline error rate of sequencing runs. To improve our ability to measure base quality, we have created an open-source tool to construct the Percent Perfect Reads (PPR) plot, previously provided by the Illumina sequencers. The PPR program is compatible with HiSeq 2000/2500, MiSeq, and NextSeq 500 instruments and provides an alternative to Illumina's quality value (Q) scores for determining run quality. Whereas Q scores are representative of run quality, they are often overestimated and are sourced from different look-up tables for each platform. The PPR's unique capabilities as a cross-instrument comparison device, as a troubleshooting tool, and as a tool for monitoring instrument performance can provide an increase in clarity over SAV metrics that is often crucial for maintaining instrument health.
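A toy sketch of the per-cycle bookkeeping behind a percent-perfect-reads curve, assuming the cycle of each read's first mismatch is already known from alignment. The function name and input format are hypothetical and far simpler than the actual open-source tool described above.

```python
from typing import Iterable, List

def percent_perfect_reads(mismatch_cycles: Iterable[List[int]], n_cycles: int) -> List[float]:
    """Percentage of reads with no mismatches up to and including each cycle.

    mismatch_cycles : per read, the (1-based) cycles at which that read mismatches
                      the reference; an empty list means a perfect read.
    n_cycles        : read length in cycles.
    """
    # Cycle of the first error per read; perfect reads never fail within the run
    first_error = [min(c) if c else n_cycles + 1 for c in mismatch_cycles]
    n = len(first_error)
    ppr = []
    for cycle in range(1, n_cycles + 1):
        still_perfect = sum(1 for e in first_error if e > cycle)
        ppr.append(100.0 * still_perfect / n)
    return ppr

# Illustrative usage: three reads of length 5; the last one mismatches at cycle 4
print(percent_perfect_reads([[], [], [4]], n_cycles=5))
# -> roughly [100.0, 100.0, 100.0, 66.7, 66.7]
```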
Educational Diagnostic Assessment.
ERIC Educational Resources Information Center
Bejar, Isaac I.
1984-01-01
Approaches proposed for educational diagnostic assessment are reviewed and identified as deficit assessment and error analysis. The development of diagnostic instruments may require a reexamination of existing psychometric models and development of alternative ones. The psychometric and content demands of diagnostic assessment all but require test…
Analysis and discussion on the experimental data of electrolyte analyzer
NASA Astrophysics Data System (ADS)
Dong, XinYu; Jiang, JunJie; Liu, MengJun; Li, Weiwei
2018-06-01
In the subsequent verification of electrolyte analyzers, we found that the instruments achieve good repeatability and stability in repeated measurements over a short period of time, in line with the requirements of the verification regulation for linear error and cross-contamination rate. However, large indication errors are very common, and the measurement results from different manufacturers differ greatly. In order to identify and solve this problem, help enterprises improve product quality, and obtain accurate and reliable measurement data, we conducted an experimental evaluation of electrolyte analyzers and analyzed the resulting data statistically.
A practical method of estimating standard error of age in the fission track dating method
Johnson, N.M.; McGee, V.E.; Naeser, C.W.
1979-01-01
A first-order approximation formula for the propagation of error in the fission track age equation is given by P_A = C[P_s^2 + P_i^2 + P_φ^2 - 2rP_sP_i]^(1/2), where P_A, P_s, P_i, and P_φ are the percentage errors of age, of spontaneous track density, of induced track density, and of neutron dose, respectively, and C is a constant. The correlation, r, between spontaneous and induced track densities is a crucial element in the error analysis, acting generally to improve the standard error of age. In addition, the correlation parameter r is instrumental in specifying the level of neutron dose, a controlled variable, which will minimize the standard error of age. The results from the approximation equation agree closely with the results from an independent statistical model for the propagation of errors in the fission-track dating method.
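A short worked example of the approximation formula, showing how increasing the correlation r between spontaneous and induced track densities reduces the age error. The percentage errors used are invented for illustration and the constant C is simply set to 1.

```python
import math

def age_percent_error(p_s, p_i, p_phi, r, c=1.0):
    """First-order percentage error of a fission-track age.

    p_s, p_i, p_phi : percentage errors of spontaneous track density, induced
                      track density, and neutron dose
    r               : correlation between spontaneous and induced track densities
    c               : proportionality constant from the abstract (set to 1 here)
    """
    return c * math.sqrt(p_s**2 + p_i**2 + p_phi**2 - 2.0 * r * p_s * p_i)

# Illustrative numbers: 8% and 6% counting errors, 3% neutron dose error
for r in (0.0, 0.5, 0.9):
    print(f"r = {r:.1f}  ->  P_A = {age_percent_error(8.0, 6.0, 3.0, r):.2f} %")
```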
Effects of Tropospheric Spatio-Temporal Correlated Noise on the Analysis of Space Geodetic Data
NASA Technical Reports Server (NTRS)
Romero-Wolf, A.; Jacobs, C. S.; Ratcliff, J. T.
2012-01-01
The standard VLBI analysis models the distribution of measurement noise as Gaussian. Because the price of recording bits is steadily decreasing, thermal errors will soon no longer dominate. As a result, it is expected that troposphere and instrumentation/clock errors will increasingly become more dominant. Given that both of these errors have correlated spectra, properly modeling the error distributions will become increasingly relevant for optimal analysis. We discuss the advantages of modeling the correlations between tropospheric delays using a Kolmogorov spectrum and the frozen flow assumption pioneered by Treuhaft and Lanyi. We then apply these correlated noise spectra to the weighting of VLBI data analysis for two case studies: X/Ka-band global astrometry and Earth orientation. In both cases we see improved results when the analyses are weighted with correlated noise models versus the standard uncorrelated models. The X/Ka astrometric scatter improved by approximately 10% and the systematic Δδ vs. δ slope decreased by approximately 50%. The TEMPO Earth orientation results improved by 17% in baseline transverse and 27% in baseline vertical.
Advanced study of global oceanographic requirements for EOS A/B: Appendix volume
NASA Technical Reports Server (NTRS)
1972-01-01
Tables and graphs are presented for a review of oceanographic studies using satellite-borne instruments. The topics considered include sensor requirements, error analysis for wind determination from glitter pattern measurements, coverage frequency plots, ground station rise and set times, a technique for reduction and analysis of ocean spectral data, rationale for the selection of a 2 PM descending orbit, and a priority analysis.
NASA Technical Reports Server (NTRS)
Jedlovec, G. J.; Menzel, W. P.; Atkinson, R.; Wilson, G. S.; Arvesen, J.
1986-01-01
A new instrument has been developed to produce high resolution imagery in eight visible and three infrared spectral bands from an aircraft platform. An analysis of the data and calibration procedures has shown that useful data can be obtained at up to 50 m resolution with a 2.5 milliradian aperture. Single-sample standard errors for the measurements are 0.5, 0.2, and 0.9 K for the 6.5, 11.1, and 12.3 micron spectral bands, respectively. These errors are halved when a 5.0 milliradian aperture is used to obtain 100 m resolution data. Intercomparisons with VAS and AVHRR measurements show good relative calibration. MAMS development is part of a larger program to develop multispectral Earth imaging capabilities from space platforms during the 1990s.
Calibration and filtering strategies for frequency domain electromagnetic data
Minsley, Burke J.; Smith, Bruce D.; Hammack, Richard; Sams, James I.; Veloski, Garret
2010-01-01
Techniques for processing frequency-domain electromagnetic (FDEM) data that address systematic instrument errors and random noise are presented, improving the ability to invert these data for meaningful earth models that can be quantitatively interpreted. A least-squares calibration method, originally developed for airborne electromagnetic datasets, is implemented for a ground-based survey in order to address systematic instrument errors, and new insights are provided into the importance of calibration for preserving spectral relationships within the data that lead to more reliable inversions. An alternative filtering strategy based on principal component analysis, which takes advantage of the strong correlation observed in FDEM data, is introduced to help address random noise in the data without imposing somewhat arbitrary spatial smoothing.
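A minimal sketch of the principal-component filtering idea: project multi-frequency soundings onto their leading principal components and reconstruct, so that weakly correlated random noise is suppressed while the correlated signal is retained. The channel count, synthetic profile, and choice of two retained components are illustrative and are not the survey's actual parameters.

```python
import numpy as np

def pca_filter(data, n_keep):
    """Filter multi-frequency EM soundings by keeping the leading principal components.

    data   : array of shape (n_stations, n_channels), e.g. in-phase/quadrature
             responses at several frequencies along a profile
    n_keep : number of principal components retained
    """
    mean = data.mean(axis=0)
    centered = data - mean
    # SVD of the centered data; rows of vt are the principal directions
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    s_filtered = np.zeros_like(s)
    s_filtered[:n_keep] = s[:n_keep]
    return (u * s_filtered) @ vt + mean

# Illustrative data: 500 stations, 6 channels, strongly correlated signal + random noise
rng = np.random.default_rng(4)
signal = np.outer(np.sin(np.linspace(0, 6, 500)), np.linspace(1.0, 2.0, 6))
noisy = signal + 0.2 * rng.standard_normal(signal.shape)
print("noise std before:", round(float((noisy - signal).std()), 3),
      " after:", round(float((pca_filter(noisy, 2) - signal).std()), 3))
```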
A Systematic Error Correction Method for TOVS Radiances
NASA Technical Reports Server (NTRS)
Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)
2000-01-01
Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large as, or larger than, random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as for the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We will show results of systematic error correction applied to the NOAA-15 Advanced TOVS as well as its predecessors. We will also discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.
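As a generic illustration of air-mass dependent bias correction, and not the specific NASA algorithm described above, the sketch below fits observed-minus-calculated radiance differences to a few state-dependent predictors and removes the fitted bias. The predictors, coefficients, and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4000

# Illustrative predictors of an air-mass dependent radiance bias
thickness_1000_300 = rng.normal(8.8, 0.3, n)     # km, lower-troposphere thickness proxy
water_vapour_path = rng.gamma(2.0, 10.0, n)      # kg m^-2
sec_zenith = 1.0 / np.cos(np.radians(rng.uniform(0, 50, n)))

# "Observed minus forward-calculated" brightness temperature with a state-dependent bias
bias_true = 0.8 + 0.5 * (thickness_1000_300 - 8.8) + 0.01 * water_vapour_path
omc = bias_true + 0.3 * rng.standard_normal(n)

# Fit the bias as a linear function of the predictors (the "tuning" step), then remove it
X = np.column_stack([np.ones(n), thickness_1000_300, water_vapour_path, sec_zenith])
coef, *_ = np.linalg.lstsq(X, omc, rcond=None)
corrected = omc - X @ coef

print("mean O-C before:", round(omc.mean(), 3), " after:", round(corrected.mean(), 3))
print("std  O-C before:", round(omc.std(), 3), " after:", round(corrected.std(), 3))
```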
Developing Performance Estimates for High Precision Astrometry with TMT
NASA Astrophysics Data System (ADS)
Schoeck, Matthias; Do, Tuan; Ellerbroek, Brent; Herriot, Glen; Meyer, Leo; Suzuki, Ryuji; Wang, Lianqi; Yelda, Sylvana
2013-12-01
Adaptive optics on Extremely Large Telescopes will open up many new science cases or expand existing science into regimes unattainable with the current generation of telescopes. One example of this is high-precision astrometry, which has requirements in the range from 10 to 50 micro-arc-seconds for some instruments and science cases. Achieving these requirements imposes stringent constraints on the design of the entire observatory, but also on the calibration procedures, observing sequences, and the data analysis techniques. This paper summarizes our efforts to develop a top-down astrometry error budget for TMT. It is predominantly developed for the first-light AO system, NFIRAOS, and the IRIS instrument, but many terms are applicable to other configurations as well. Astrometry error sources are divided into five categories: reference source and catalog errors, atmospheric refraction correction errors, other residual atmospheric effects, opto-mechanical errors, and focal plane measurement errors. Results are developed in parametric form whenever possible. However, almost every error term in the error budget depends on the details of the astrometry observations, such as whether absolute or differential astrometry is the goal, whether one observes a sparse or crowded field, what the time scales of interest are, etc. Thus, it is not possible to develop a single error budget that applies to all science cases, and separate budgets are developed and detailed for key astrometric observations. Our error budget is consistent with the requirements for differential astrometry of tens of micro-arc-seconds for certain science cases. While no show-stoppers have been found, the work has resulted in several modifications to the NFIRAOS optical surface specifications and reference source design that will help improve the achievable astrometry precision even further.
Measuring the CMB Polarization at 94 GHz with the QUIET Pseudo-Cl Pipeline
NASA Astrophysics Data System (ADS)
Buder, Immanuel; QUIET Collaboration
2012-01-01
The Q/U Imaging ExperimenT (QUIET) aims to limit or detect cosmic microwave background (CMB) B-mode polarization from inflation. This talk is part of a 3-talk series on QUIET. The previous talk describes the QUIET science and instrument. QUIET has two parallel analysis pipelines which are part of an effort to validate the analysis and confirm the result. In this talk, I will describe the analysis methods of one of these: the pseudo-Cl pipeline. Calibration, noise modeling, filtering, and data-selection choices are made following a blind-analysis strategy. Central to this strategy is a suite of 30 null tests, each motivated by a possible instrumental problem or systematic effect. The systematic errors are also evaluated through full-season simulations in the blind stage of the analysis before the result is known. The CMB power spectra are calculated using a pseudo-Cl cross-correlation technique which suppresses contamination and makes the result insensitive to noise bias. QUIET will detect the first three peaks of the even-parity (E-mode) spectrum at high significance. I will show forecasts of the systematic errors for these results and for the upper limit on B-mode polarization. The very low systematic errors in these forecasts show that the technology is ready to be applied in a more sensitive next-generation experiment. The next and final talk in this series covers the other parallel analysis pipeline, based on maximum likelihood methods. This work was supported by NSF and the Department of Education.
Weighing Rocky Exoplanets with Improved Radial Velocimetry
NASA Astrophysics Data System (ADS)
Xuesong Wang, Sharon; Wright, Jason; California Planet Survey Consortium
2016-01-01
The synergy between Kepler and the ground-based radial velocity (RV) surveys has made numerous discoveries of small and rocky exoplanets, opening the age of Earth analogs. However, most (29/33) of the RV-detected exoplanets that are smaller than 3 Earth radii do not have their masses constrained to better than 20%, limited by the current RV precision (1-2 m/s). Our work improves the RV precision of the Keck telescope, which is responsible for most of the mass measurements for small Kepler exoplanets. We have discovered and verified, for the first time, two of the dominant terms in Keck's RV systematic error budget: modeling errors (mostly in deconvolution) and telluric contamination. These two terms contribute 1 m/s and 0.6 m/s, respectively, to the RV error budget (RMS in quadrature), and they create spurious signals at periods of one sidereal year and its harmonics with amplitudes of 0.2-1 m/s. Left untreated, these errors can mimic the signals of Earth-like or super-Earth planets in the habitable zone. Removing these errors will bring better precision to ten years' worth of Keck data and better constraints on the masses and compositions of small Kepler planets. As more precise RV instruments come online, we need advanced data analysis tools to overcome issues like these in order to detect an Earth twin (RV amplitude of 8 cm/s). We are developing a new, open-source RV data analysis tool in Python, which uses Bayesian MCMC and Gaussian processes, to fully exploit the hardware improvements brought by new instruments like MINERVA and NASA's WIYN/EPDS.
Application of Monte Carlo algorithms to the Bayesian analysis of the Cosmic Microwave Background
NASA Technical Reports Server (NTRS)
Jewell, J.; Levin, S.; Anderson, C. H.
2004-01-01
Power spectrum estimation and evaluation of associated errors in the presence of incomplete sky coverage; nonhomogeneous, correlated instrumental noise; and foreground emission are problems of central importance for the extraction of cosmological information from the cosmic microwave background (CMB).
NASA Technical Reports Server (NTRS)
Gregory, J. C.; Smith, A. E.
1994-01-01
BUGS-4 (Bristol University Gas Scintillator-4) made its maiden engineering flight from Fort Sumner (NM) on the 29th of September 1993. The instrument was consumed by fire after striking a power line during landing, following 24 hours at float. The analysis of the telemetered data from this sophisticated instrument is a demanding task. Early analysis was compromised by electronic artifacts. Unravelling these problems has been difficult and time consuming, especially as the flight hardware was burned beyond salvage, but it is an essential preliminary to analysis. During this report period we have concentrated on a small subset of data (the first 30,000 events; 90 minutes at float), and developed software algorithms to correct systematic errors. Using these corrected events we have begun to develop the analysis algorithms. Although the analysis is preliminary, and restricted to the first 30,000 events, the results are encouraging and suggest the design concepts are well matched to this application. Further work will refine the analysis and allow quantitative evaluation of the concepts employed in BUGS-4 for applicability to future instruments. We believe this work will justify fabrication of a new instrument employing the techniques deployed on BUGS-4.
Yin, Xingzhe; Cheung, Gary Shun-Pan; Zhang, Chengfei; Masuda, Yoshiko Murakami; Kimura, Yuichi; Matsumoto, Koukichi
2010-04-01
The purpose of this study was to assess the efficacy of instrumentation of C-shaped canals with the ProTaper rotary system and traditional instruments by using micro-computed tomography (micro-CT). Twenty-four mandibular molars with C-shaped canals were selected in pairs and sorted equally into 2 groups, which were assigned for instrumentation by the ProTaper rotary system (ProTaper group) or by K-files and Gates-Glidden burs (Hand Instrument group). Three-dimensional images were constructed by micro-CT. The volume of dentin removed, uninstrumented canal area, time taken for instrumentation, and iatrogenic errors of instrumentation were investigated. The Hand Instrument group showed a greater volume of dentin removal and left less uninstrumented canal area than the ProTaper group (P < .01). The time needed for instrumentation was shorter for the ProTaper group than for the Hand Instrument group (P < .05). No instrument breakage occurred in either group, but more conspicuous procedural errors were detected in the Hand Instrument group than in the ProTaper group. It was concluded that the ProTaper rotary system maintained the canal curvature quickly and with few procedural errors, whereas traditional instrumentation can clean more of the canal surface.
A probabilistic approach to remote compositional analysis of planetary surfaces
Lapotre, Mathieu G.A.; Ehlmann, Bethany L.; Minson, Sarah E.
2017-01-01
Reflected light from planetary surfaces provides information, including mineral/ice compositions and grain sizes, by study of albedo and absorption features as a function of wavelength. However, deconvolving the compositional signal in spectra is complicated by the nonuniqueness of the inverse problem. Trade-offs between mineral abundances and grain sizes in setting reflectance, instrument noise, and systematic errors in the forward model are potential sources of uncertainty, which are often unquantified. Here we adopt a Bayesian implementation of the Hapke model to determine sets of acceptable-fit mineral assemblages, as opposed to single best fit solutions. We quantify errors and uncertainties in mineral abundances and grain sizes that arise from instrument noise, compositional end members, optical constants, and systematic forward model errors for two suites of ternary mixtures (olivine-enstatite-anorthite and olivine-nontronite-basaltic glass) in a series of six experiments in the visible-shortwave infrared (VSWIR) wavelength range. We show that grain sizes are generally poorly constrained from VSWIR spectroscopy. Abundance and grain size trade-offs lead to typical abundance errors of ≤1 wt % (occasionally up to ~5 wt %), while ~3% noise in the data increases errors by up to ~2 wt %. Systematic errors further increase inaccuracies by a factor of 4. Finally, phases with low spectral contrast or inaccurate optical constants can further increase errors. Overall, typical errors in abundance are <10%, but sometimes significantly increase for specific mixtures, prone to abundance/grain-size trade-offs that lead to high unmixing uncertainties. These results highlight the need for probabilistic approaches to remote determination of planetary surface composition.
Radiometric Calibration of a Dual-Wavelength, Full-Waveform Terrestrial Lidar.
Li, Zhan; Jupp, David L B; Strahler, Alan H; Schaaf, Crystal B; Howe, Glenn; Hewawasam, Kuravi; Douglas, Ewan S; Chakrabarti, Supriya; Cook, Timothy A; Paynter, Ian; Saenz, Edward J; Schaefer, Michael
2016-03-02
Radiometric calibration of the Dual-Wavelength Echidna® Lidar (DWEL), a full-waveform terrestrial laser scanner with two simultaneously-pulsing infrared lasers at 1064 nm and 1548 nm, provides accurate dual-wavelength apparent reflectance (ρapp), a physically-defined value that is related to the radiative and structural characteristics of scanned targets and independent of range and instrument optics and electronics. The errors of ρapp are 8.1% for 1064 nm and 6.4% for 1548 nm. A sensitivity analysis shows that ρapp error is dominated by range errors at near ranges, but by lidar intensity errors at far ranges. Our semi-empirical model for radiometric calibration combines a generalized logistic function to explicitly model telescopic effects due to defocusing of return signals at near range with a negative exponential function to model the fall-off of return intensity with range. Accurate values of ρapp from the radiometric calibration improve the quantification of vegetation structure, facilitate the comparison and coupling of lidar datasets from different instruments, campaigns or wavelengths and advance the utilization of bi- and multi-spectral information added to 3D scans by novel spectral lidars.
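The semi-empirical calibration form described in the abstract can be sketched as below; the generalized-logistic and exponential coefficients are placeholders, not the published DWEL values, and the overall scale constant is hypothetical.

```python
import numpy as np

# Sketch of the stated calibration form: return intensity is modelled as apparent
# reflectance times a generalized-logistic "telescopic" factor (near-range defocusing)
# times a negative-exponential fall-off with range. All coefficients are placeholders.
def telescopic_factor(r, k=2.0, r0=3.0):
    """Generalized logistic rise from ~0 to ~1 across the near-range defocus region."""
    return 1.0 / (1.0 + np.exp(-k * (r - r0)))

def modelled_intensity(rho_app, r, c0=1.0e4, b=0.02):
    """Forward model: lidar return intensity from apparent reflectance and range (m)."""
    return c0 * rho_app * telescopic_factor(r) * np.exp(-b * r)

def apparent_reflectance(intensity, r, c0=1.0e4, b=0.02):
    """Inverse of the forward model: calibrate a measured intensity to rho_app."""
    return intensity / (c0 * telescopic_factor(r) * np.exp(-b * r))

# Example: calibrate one measured intensity at 25 m range (numbers illustrative)
print("rho_app = %.3f" % apparent_reflectance(5.0e3, 25.0))
```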
Radiometric Calibration of a Dual-Wavelength, Full-Waveform Terrestrial Lidar
Li, Zhan; Jupp, David L. B.; Strahler, Alan H.; Schaaf, Crystal B.; Howe, Glenn; Hewawasam, Kuravi; Douglas, Ewan S.; Chakrabarti, Supriya; Cook, Timothy A.; Paynter, Ian; Saenz, Edward J.; Schaefer, Michael
2016-01-01
Radiometric calibration of the Dual-Wavelength Echidna® Lidar (DWEL), a full-waveform terrestrial laser scanner with two simultaneously-pulsing infrared lasers at 1064 nm and 1548 nm, provides accurate dual-wavelength apparent reflectance (ρapp), a physically-defined value that is related to the radiative and structural characteristics of scanned targets and independent of range and instrument optics and electronics. The errors of ρapp are 8.1% for 1064 nm and 6.4% for 1548 nm. A sensitivity analysis shows that ρapp error is dominated by range errors at near ranges, but by lidar intensity errors at far ranges. Our semi-empirical model for radiometric calibration combines a generalized logistic function to explicitly model telescopic effects due to defocusing of return signals at near range with a negative exponential function to model the fall-off of return intensity with range. Accurate values of ρapp from the radiometric calibration improve the quantification of vegetation structure, facilitate the comparison and coupling of lidar datasets from different instruments, campaigns or wavelengths and advance the utilization of bi- and multi-spectral information added to 3D scans by novel spectral lidars. PMID:26950126
Preparatory studies for the WFIRST supernova cosmology measurements
NASA Astrophysics Data System (ADS)
Perlmutter, Saul
In the context of the WFIRST-AFTA Science Definition Team, we developed a first version of a supernova program, described in the WFIRST-AFTA SDT report. This program uses the imager to discover supernova candidates and an Integral Field Spectrograph (IFS) to obtain spectrophotometric light curves and higher signal-to-noise spectra of the supernovae near peak, to better characterize the supernovae and thus minimize systematic errors. While this program was judged to be robust, and the estimates of the sensitivity to the cosmological parameters were felt to be reliable, time limitations meant that the analysis was limited in depth on a number of issues. The goal of this proposal is to further develop this program and refine the estimates of the sensitivities to the cosmological parameters using more sophisticated systematic uncertainty models and covariance error matrices that fold in more realistic data concerning observed populations of SNe Ia as well as more realistic instrument models. We propose to develop analysis algorithms and approaches that are needed to build, optimize, and refine the WFIRST instrument and program requirements to accomplish the best supernova cosmology measurements possible. We plan to address the following: a) Use realistic supernova populations, subclasses, and population drift. One bothersome uncertainty with the supernova technique is the possibility of population drift with redshift. We are in a unique position to characterize and mitigate such effects using the spectrophotometric time series of real Type Ia supernovae from the Nearby Supernova Factory (SNfactory). Each supernova in this sample has global galaxy measurements as well as additional local environment information derived from the IFS spectroscopy. We plan to develop methods of coping with this issue, e.g., by selecting similar subsamples of supernovae and allowing additional model flexibility, in order to reduce systematic uncertainties. These studies will allow us to tune details, like the wavelength coverage and S/N requirements, of the WFIRST IFS to capitalize on these systematic error reduction methods. b) Supernova extraction and host galaxy subtraction. The underlying light of the host galaxy must be subtracted from the supernova images making up the lightcurves. Using the IFS to provide the lightcurve points via spectrophotometry requires the subtraction of a reference spectrum of the galaxy taken after the supernova light has faded to a negligible level. We plan to apply the expertise obtained from the SNfactory to develop galaxy background subtraction procedures that minimize the systematic errors introduced by this step in the analysis. c) Instrument calibration and ground-to-space cross-calibration. Calibrating the entire supernova sample will be a challenge, as no standard stars exist that span the range of magnitudes and wavelengths relevant to the WFIRST survey. Linking the supernova measurements to the relatively brighter standards will require several intermediate steps. WFIRST will produce the high-redshift sample, but the nearby supernovae that anchor the Hubble diagram will have to come from ground-based observations. Developing algorithms to carry out the cross-calibration of these two samples to the required one percent level will be an important goal of our proposal. An integral part of this calibration will be to remove all instrumental signatures and to develop unbiased measurement techniques starting at the pixel level.
We then plan to pull the above studies together in a synthesis to produce a correlated error matrix. We plan to develop a Fisher-matrix-based model to evaluate the correlated error matrix due to the various systematic errors discussed above. A realistic error model will allow us to carry out more reliable estimates of the eventual errors on the measurement of the cosmological parameters, as well as serve as a means of optimizing and fine-tuning the requirements for the instruments and survey strategies.
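A minimal sketch of the Fisher-matrix machinery mentioned above, assuming a toy two-parameter model and a placeholder covariance with a fully correlated systematic floor (none of this is the proposal's actual model):

```python
import numpy as np

# Toy Fisher-matrix sketch: with a model m(theta) for binned observables and a
# covariance C that folds in statistical and correlated systematic errors,
# F = J^T C^-1 J and inv(F) approximates the parameter covariance.
def model(theta, z):
    a, b = theta
    return a * z + b * z ** 2              # placeholder two-parameter observable

z = np.linspace(0.1, 1.7, 17)              # hypothetical redshift bins
theta0 = np.array([1.0, -0.3])             # fiducial parameter values
C = np.diag((0.02 * np.ones_like(z)) ** 2) + 0.01 ** 2   # stat + fully correlated sys floor
Cinv = np.linalg.inv(C)

eps = 1.0e-6                               # step for the numerical Jacobian
J = np.column_stack([
    (model(theta0 + eps * np.eye(2)[i], z) - model(theta0 - eps * np.eye(2)[i], z)) / (2 * eps)
    for i in range(2)
])
F = J.T @ Cinv @ J
param_cov = np.linalg.inv(F)
print("1-sigma parameter errors:", np.sqrt(np.diag(param_cov)))
```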
Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E
2011-06-22
Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For the modelled error amount for CO, a range of error types was simulated, and the effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate the direction and magnitude of the effects of error over a range of error types.
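A minimal sketch of the simulation idea, assuming classical-type multiplicative error added on the log scale to a synthetic exposure series and a Poisson regression fitted with statsmodels; the true effect size, error magnitude, and count model are invented for illustration:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic illustration: classical-type multiplicative error (additive on the log
# scale) is added to a "true" exposure series, and the attenuation of the fitted
# risk ratio in a Poisson regression is observed. All values are invented.
rng = np.random.default_rng(1)
n_days = 1000
x_true = np.exp(rng.normal(0.0, 0.4, n_days))          # true daily pollutant levels
beta_true = 0.10                                       # true log risk ratio per unit
counts = rng.poisson(np.exp(3.0 + beta_true * x_true)) # daily ED visit counts

sigma_err = 0.3                                        # classical error SD on log scale
x_measured = x_true * np.exp(rng.normal(0.0, sigma_err, n_days))

fit = sm.GLM(counts, sm.add_constant(x_measured), family=sm.families.Poisson()).fit()
print("fitted vs true log risk ratio: %.3f vs %.3f" % (fit.params[1], beta_true))
```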
[A new method of processing quantitative PCR data].
Ke, Bing-Shen; Li, Guang-Yun; Chen, Shi-Min; Huang, Xiang-Yan; Chen, Ying-Jian; Xu, Jun
2003-05-01
Today, standard PCR can no longer satisfy the needs of biotechnology development and clinical research. After extensive kinetic studies, PE Corporation found that there is a linear relation between the initial template number and the cycle number at which the accumulating fluorescent product becomes detectable, and therefore developed a quantitative PCR technique used in the PE7700 and PE5700 instruments. However, the error of this technique is too large to satisfy the needs of biotechnology development and clinical research, and a better quantitative PCR technique is needed. The mathematical model submitted here draws on results from related fields and is based on the PCR principle and a careful analysis of the molecular relationships among the main components of the PCR reaction system. The model describes the functional relation between product quantity (or fluorescence intensity) and the initial template number and other reaction conditions, and accurately reflects the accumulation of PCR product molecules. Accurate quantitative PCR analysis can be performed using this functional relation, and the accumulated PCR product quantity can be obtained from the initial template number. When this model is used for quantitative PCR analysis, the result error depends only on the accuracy of the fluorescence intensity measurement, i.e., on the instrument used. For example, when the fluorescence intensity is accurate to 6 digits and the template number is between 100 and 1,000,000, the accuracy of the quantitative result will be better than 99%. Under the same conditions and on the same instrument, the result error differs markedly between analysis methods; if this model is used to process the data, the results are about 80 times more accurate than with the CT method.
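For contrast, below is a short sketch of the conventional CT standard-curve approach that the abstract argues against (the proposed model itself is not specified in enough detail here to implement); all CT values and copy numbers are hypothetical.

```python
import numpy as np

# Conventional CT standard-curve quantification: CT is linear in log10(initial copies),
# so a curve fitted to standards of known concentration quantifies an unknown sample.
log_n0_std = np.array([2.0, 3.0, 4.0, 5.0, 6.0])       # log10 initial copies (standards)
ct_std = np.array([31.1, 27.8, 24.4, 21.0, 17.7])      # measured CT values (synthetic)

slope, intercept = np.polyfit(log_n0_std, ct_std, 1)   # CT = slope*log10(N0) + intercept
efficiency = 10.0 ** (-1.0 / slope) - 1.0              # implied amplification efficiency

ct_unknown = 23.2
log_n0_unknown = (ct_unknown - intercept) / slope
print("estimated initial copies: %.0f (efficiency %.2f)" % (10.0 ** log_n0_unknown, efficiency))
```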
Establishment of gold-quartz standard GQS-1
Millard, Hugh T.; Marinenko, John; McLane, John E.
1969-01-01
A homogeneous gold-quartz standard, GQS-1, was prepared from a heterogeneous gold-bearing quartz by chemical treatment. The concentration of gold in GQS-1 was determined by both instrumental neutron activation analysis and radioisotope dilution analysis to be 2.61 ± 0.10 parts per million. Analysis of 10 samples of the standard by both instrumental neutron activation analysis and radioisotope dilution analysis failed to reveal heterogeneity within the standard. The precision of the analytical methods, expressed as standard error, was approximately 0.1 part per million. The analytical data were also used to estimate the average size of gold particles. The chemical treatment apparently reduced the average diameter of the gold particles by at least an order of magnitude and increased the concentration of gold grains by a factor of at least 4,000.
A Virtual Instrument System for Determining Sugar Degree of Honey
Wu, Qijun; Gong, Xun
2015-01-01
This study established a LabVIEW-based virtual instrument system to measure optical activity by interfacing a conventional optical instrument with a computer via an RS232 port. The system realized functions for automatic acquisition, real-time display, data processing, results playback, and so forth. It therefore improved the accuracy of the measurement results by avoiding manual operation, cumbersome data processing, and operator error in optical activity measurement. The system was applied to batch inspection of the sugar degree of honey, and the results obtained were satisfactory. Moreover, it showed advantages such as a friendly man-machine dialogue, simple operation, and easily expanded functions. PMID:26504615
NASA Astrophysics Data System (ADS)
Juanola-Parramon, Roser; Zimmerman, Neil; Bolcar, Matthew R.; Rizzo, Maxime; Roberge, Aki
2018-01-01
The Coronagraph is a key instrument on the Large UV-Optical-Infrared (LUVOIR) Surveyor mission concept. The Apodized Pupil Lyot Coronagraph (APLC) is one of the baselined mask technologies to enable 1E10 contrast observations in the habitable zones of nearby stars. Both LUVOIR architectures A and B present a segmented aperture as the input pupil, introducing a set of random tip/tilt and piston errors, among others, that greatly affect the performance of the coronagraph instrument by increasing the wavefront errors and hence reducing the instrument sensitivity. In this poster we present the latest results of simulations of these effects for different working-angle regions and discuss the achieved contrast for exoplanet detection and characterization, including simulated observations under these circumstances, setting boundaries for the tolerance of such errors.
NASA Technical Reports Server (NTRS)
Gracey, William; Jewel, Joseph W., Jr.; Carpenter, Gene T.
1960-01-01
The overall errors of the service altimeter installations of a variety of civil transport, military, and general-aviation airplanes have been experimentally determined during normal landing-approach and take-off operations. The average height above the runway at which the data were obtained was about 280 feet for the landings and about 440 feet for the take-offs. An analysis of the data obtained from 196 airplanes during 415 landing approaches and from 70 airplanes during 152 take-offs showed that: 1. The overall error of the altimeter installations in the landing- approach condition had a probable value (50 percent probability) of +/- 36 feet and a maximum probable value (99.7 percent probability) of +/- 159 feet with a bias of +10 feet. 2. The overall error in the take-off condition had a probable value of +/- 47 feet and a maximum probable value of +/- 207 feet with a bias of -33 feet. 3. The overall errors of the military airplanes were generally larger than those of the civil transports in both the landing-approach and take-off conditions. In the landing-approach condition the probable error and the maximum probable error of the military airplanes were +/- 43 and +/- 189 feet, respectively, with a bias of +15 feet, whereas those for the civil transports were +/- 22 and +/- 96 feet, respectively, with a bias of +1 foot. 4. The bias values of the error distributions (+10 feet for the landings and -33 feet for the take-offs) appear to represent a measure of the hysteresis characteristics (after effect and recovery) and friction of the instrument and the pressure lag of the tubing-instrument system.
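A small sketch of how the reported statistics could be formed from a sample of installation errors, assuming approximately normal errors so that the probable error (50% probability) is 0.6745 sigma and the maximum probable error (99.7%) is 3 sigma about the bias; the sample below is synthetic.

```python
import numpy as np

# How the quoted statistics can be formed from a sample of installation errors (feet),
# assuming roughly normal errors: bias = mean, probable error (50%) = 0.6745*sigma,
# maximum probable error (99.7%) = 3*sigma. The sample below is synthetic.
rng = np.random.default_rng(2)
errors_ft = rng.normal(10.0, 53.0, 415)        # synthetic landing-approach errors

bias = errors_ft.mean()
sigma = errors_ft.std(ddof=1)
probable_error = 0.6745 * sigma                # half of the errors fall within +/- this
max_probable_error = 3.0 * sigma               # 99.7% of the errors fall within +/- this
print("bias %+.0f ft, probable error +/-%.0f ft, maximum probable error +/-%.0f ft"
      % (bias, probable_error, max_probable_error))
```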
Correcting for the effects of pupil discontinuities with the ACAD method
NASA Astrophysics Data System (ADS)
Mazoyer, Johan; Pueyo, Laurent; N'Diaye, Mamadou; Mawet, Dimitri; Soummer, Rémi; Norman, Colin
2016-07-01
The current generation of ground-based coronagraphic instruments uses deformable mirrors to correct for phase errors and to improve contrast levels at small angular separations. Improving on these techniques, several space- and ground-based instruments are currently being developed that use two deformable mirrors to correct for both phase and amplitude errors. However, as wavefront control techniques improve, more complex telescope pupil geometries (support structures, segmentation) will soon be a limiting factor for these next-generation coronagraphic instruments. The technique presented in this proceeding, the Active Correction of Aperture Discontinuities method, takes advantage of the fact that most future coronagraphic instruments will include two deformable mirrors, and proposes to find the shapes and actuator movements that correct for the effects introduced by these complex pupil geometries. For any coronagraph previously designed for continuous apertures, this technique allows one to obtain similar performance in contrast with a complex aperture (with segmentation and secondary-mirror support structures), with high throughput and with the flexibility to adapt to changing pupil geometry (e.g., in case of segment failure or maintenance of the segments). We here present the results of the parametric analysis performed on the WFIRST pupil, for which we obtained high contrast levels with several deformable mirror setups (size, separation between them), coronagraphs (vortex charge 2, vortex charge 4, APLC) and spectral bandwidths. However, because contrast levels and separation are not the only metrics that maximize the scientific return of an instrument, we also included in this study the influence of these deformable mirror shapes on the throughput of the instrument and the sensitivity to pointing jitter. Finally, we present results obtained on another potential space-based segmented telescope aperture. The main result of this proceeding is that we now obtain performance comparable to the coronagraphs previously designed for WFIRST. First results from the parametric analysis strongly suggest that the two-deformable-mirror setup (size of the mirrors and distance between them) has an important impact on the contrast and throughput performance of the final instrument.
High stability integrated Tri-axial fluxgate sensor with suspended technology
NASA Astrophysics Data System (ADS)
Wang, Chen; Teng, Yuntian; Wang, Xiaomei; Fan, Xiaoyong; Wu, Qiong
2017-04-01
The relative geomagnetic records of the Geomagnetic Network of China (GNC) have been digitized and networked, and one-second data acquisition and storage were achieved through upgrades during the 9th and 10th five-year plans. Currently, relative recording at geomagnetic observatories generally uses two sets of the same type of instrument in parallel observation, which makes it possible to distinguish instrument failures from environmental interference and ensures the continuity and integrity of the observation data. The fluxgate magnetometer has become the mainstream equipment for relative geomagnetic recording because of its low noise, high sensitivity, and fast response. Analysis of several years of observation data, however, has revealed inconsistencies between instruments of the same type at the same station. Extensive experiments have identified three main error sources: (1) instrument performance, since limitations in manufacturing and assembly make it difficult to ensure the orthogonality of the sensor axes, together with scale factor, zero offset, and temperature coefficient; (2) leveling error, introduced by horizontal adjustment during initial installation and by pillar tilting during long-term observation; and (3) the observation environment, including temperature, humidity, and the power supply system. The new fluxgate magnetometer uses a special nonmagnetic gimbal (made of beryllium bronze) for suspension, so that the fluxgate sensor is fixed on a suspended platform that automatically keeps itself level. The advantage of this design is that it eliminates the leveling error introduced by horizontal adjustment during initial installation and by pillar tilting during long-term observation. The signal processing circuit board is fixed above the suspended platform at a distance chosen so that the static and dynamic magnetic fields produced by the circuit board do not affect the sensor, while keeping the signal transmission cable short enough to avoid signal attenuation.
Uncertainty Analysis for the Miniaturized Laser Heterodyne Radiometer (mini-LHR)
NASA Technical Reports Server (NTRS)
Clarke, G. B.; Wilson, E. L.; Miller, J. H.; Melroy, H. R.
2014-01-01
Presented here is a sensitivity analysis for the miniaturized laser heterodyne radiometer (mini-LHR). This passive, ground-based instrument measures carbon dioxide (CO2) in the atmospheric column and has been under development at NASA/GSFC since 2009. The goal of this development is to produce a low-cost, easily-deployable instrument that can extend current ground measurement networks in order to (1) validate column satellite observations, (2) provide coverage in regions of limited satellite observations, (3) target regions of interest such as thawing permafrost, and (4) support the continuity of a long-term climate record. In this paper an uncertainty analysis of the instrument performance is presented and compared with results from three sets of field measurements. The signal-to-noise ratio (SNR) and corresponding uncertainty for a single scan are calculated to be 329.4+/-1.3 by deploying error propagation through the equation governing the SNR. Reported is an absorbance noise of 0.0024 for 6 averaged scans of field data, for an instrument precision of approximately 0.2 ppmv for CO2.
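Since the governing SNR equation is not reproduced in the abstract, the sketch below shows only generic first-order error propagation with numerical partial derivatives through a placeholder SNR expression; the parameter values and uncertainties are hypothetical.

```python
import numpy as np

# Generic first-order error propagation through a placeholder SNR expression
# (the instrument's actual governing equation is not reproduced in the abstract).
def snr(params):
    p_signal, nep, bandwidth = params
    return p_signal / (nep * np.sqrt(bandwidth))        # placeholder form

values = np.array([3.0e-9, 2.0e-12, 21.0])              # hypothetical parameter values
uncerts = np.array([1.0e-11, 5.0e-15, 0.05])            # hypothetical 1-sigma uncertainties

eps = 1.0e-6
partials = np.array([
    (snr(values + eps * values * np.eye(3)[i]) - snr(values - eps * values * np.eye(3)[i]))
    / (2.0 * eps * values[i])
    for i in range(3)
])
sigma_snr = np.sqrt(np.sum((partials * uncerts) ** 2))  # quadrature sum of contributions
print("SNR = %.1f +/- %.1f" % (snr(values), sigma_snr))
```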
Instrument Attitude Precision Control
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan
2004-01-01
A novel approach is presented in this paper to analyze attitude precision and control for an instrument gimbaled to a spacecraft subject to an internal disturbance caused by a moving component inside the instrument. Nonlinear differential equations of motion for some sample cases are derived and solved analytically to gain insight into the influence of the disturbance on the attitude pointing error. A simple control law is developed to eliminate the instrument pointing error caused by the internal disturbance. Several cases are presented to demonstrate and verify the concept presented in this paper.
Regenbogen, Scott E; Greenberg, Caprice C; Studdert, David M; Lipsitz, Stuart R; Zinner, Michael J; Gawande, Atul A
2007-11-01
To identify the most prevalent patterns of technical errors in surgery, and evaluate commonly recommended interventions in light of these patterns. The majority of surgical adverse events involve technical errors, but little is known about the nature and causes of these events. We examined characteristics of technical errors and common contributing factors among closed surgical malpractice claims. Surgeon reviewers analyzed 444 randomly sampled surgical malpractice claims from four liability insurers. Among 258 claims in which injuries due to error were detected, 52% (n = 133) involved technical errors. These technical errors were further analyzed with a structured review instrument designed by qualitative content analysis. Forty-nine percent of the technical errors caused permanent disability; an additional 16% resulted in death. Two-thirds (65%) of the technical errors were linked to manual error, 9% to errors in judgment, and 26% to both manual and judgment error. A minority of technical errors involved advanced procedures requiring special training ("index operations"; 16%), surgeons inexperienced with the task (14%), or poorly supervised residents (9%). The majority involved experienced surgeons (73%), and occurred in routine, rather than index, operations (84%). Patient-related complexities (including emergencies, difficult or unexpected anatomy, and previous surgery) contributed to 61% of technical errors, and technology or systems failures contributed to 21%. Most technical errors occur in routine operations with experienced surgeons, under conditions of increased patient complexity or systems failure. Commonly recommended interventions, including restricting high-complexity operations to experienced surgeons, additional training for inexperienced surgeons, and stricter supervision of trainees, are likely to address only a minority of technical errors. Surgical safety research should instead focus on improving decision-making and performance in routine operations for complex patients and circumstances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, T. S.
Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
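A toy sketch of how a chromatic offset arises: synthetic photon-weighted magnitudes of a blue and a red source shift by different amounts when the system throughput is tilted away from the natural system. The SEDs and throughputs below are simple analytic stand-ins, not DES bandpasses.

```python
import numpy as np

# Toy demonstration of a chromatic error: the photon-weighted magnitude of a source
# depends on the product of its SED and the system throughput, so a throughput tilt
# (airmass, water vapour, focal-plane position) shifts blue and red sources differently.
lam = np.linspace(500.0, 600.0, 200)                    # wavelength grid, nm (uniform)

def synth_mag(sed, throughput):
    """Synthetic magnitude with an arbitrary zeropoint (photon-counting weighting)."""
    flux = np.sum(sed * throughput * lam) / np.sum(throughput * lam)
    return -2.5 * np.log10(flux)

natural = np.exp(-0.5 * ((lam - 550.0) / 30.0) ** 2)    # stand-in "natural" throughput
observed = natural * (1.0 - 0.002 * (lam - 550.0))      # tilted throughput (toy perturbation)

blue_star = (lam / 550.0) ** -4.0                       # toy blue SED
red_star = (lam / 550.0) ** 2.0                         # toy red SED

for name, sed in [("blue", blue_star), ("red", red_star)]:
    dm = synth_mag(sed, observed) - synth_mag(sed, natural)
    print("%s star offset relative to natural system: %+.4f mag" % (name, dm))
```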
Analysis of Meteorological Satellite location and data collection system concepts
NASA Technical Reports Server (NTRS)
Wallace, R. G.; Reed, D. L.
1981-01-01
A satellite system that employs a spaceborne RF interferometer to determine the location and velocity of data collection platforms attached to meteorological balloons is proposed. This meteorological advanced location and data collection system (MALDCS) is intended to fly aboard a low polar orbiting satellite. The flight instrument configuration includes antennas supported on long deployable booms. The platform location and velocity estimation errors introduced by the dynamic and thermal behavior of the antenna booms and the effects of the presence of the booms on the performance of the spacecraft's attitude control system, and the control system design considerations critical to stable operations are examined. The physical parameters of the Astromast type of deployable boom were used in the dynamic and thermal boom analysis, and the TIROS N system was assumed for the attitude control analysis. Velocity estimation error versus boom length was determined. There was an optimum, minimum error, antenna separation distance. A description of the proposed MALDCS system and a discussion of ambiguity resolution are included.
Performance evaluation of the microINR® point-of-care INR-testing system.
Joubert, J; van Zyl, M C; Raubenheimer, J
2018-04-01
Point-of-care International Normalised Ratio (INR) testing is used frequently. We evaluated the microINR® POC system for accuracy, precision and measurement repeatability, and investigated instrument and test chip variability and error rates. Venous blood INRs of 210 patients on warfarin were obtained with Thromborel® S on the Sysmex CS-2100i® analyser and compared with capillary blood microINR® values. Precision was assessed using control materials. Measurement repeatability was calculated on 51 duplicate finger-prick INRs. Triplicate finger-prick INRs using three different instruments (30 patients) and three different test chip lots (29 patients) were used to evaluate instrument and test chip variability. Linear regression analysis of microINR® and Sysmex CS-2100i® values showed a correlation coefficient of 0.96 (P < .0001) and a positive proportional bias of 4.4%. Dosage concordance was 93.8% and clinical agreement 95.7%. All acceptance criteria based on ISO standard 17593:2007 system accuracy requirements were met. Control material coefficients of variation (CV) varied from 6.2% to 16.7%. The capillary blood measurement repeatability CV was 7.5%. No significant instrument (P = .93) or test chip (P = .81) variability was found, and the error rate was low (2.8%). The microINR® instrument is accurate and precise for monitoring warfarin therapy. © 2017 John Wiley & Sons Ltd.
Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H.; Lewis, Marc S.; Brautigam, Chad A.; Schuck, Peter; Zhao, Huaying
2013-01-01
Sedimentation velocity (SV) is a method based on first-principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton® temperature logger to directly measure the temperature of a spinning rotor, and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration, which were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., doi 10.1016/j.ab.2013.02.011) and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from eleven instruments displayed a significantly reduced standard deviation of ∼ 0.7 %. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. PMID:23711724
Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H; Lewis, Marc S; Brautigam, Chad A; Schuck, Peter; Zhao, Huaying
2013-09-01
Sedimentation velocity (SV) is a method based on first principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton temperature logger to directly measure the temperature of a spinning rotor and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration that were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., Anal. Biochem., 437 (2013) 104-108), and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from 11 instruments displayed a significantly reduced standard deviation of approximately 0.7%. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. Published by Elsevier Inc.
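A small sketch of how two of the calibration errors discussed above propagate into the sedimentation coefficient, assuming s scales inversely with solvent viscosity (for the rotor-temperature error) and linearly with the time base; the viscosity formula is a standard empirical approximation and the error sizes are illustrative.

```python
import numpy as np

# How two of the calibration errors above map into the sedimentation coefficient s:
# s scales inversely with solvent viscosity, so an unrecognized rotor-temperature error
# biases s through the temperature dependence of water viscosity, and a fractional
# error in the recorded time base maps (to first order) one-to-one into s.
def water_viscosity(T_celsius):
    """Approximate dynamic viscosity of water in Pa*s (Vogel-type empirical formula)."""
    T = T_celsius + 273.15
    return 2.414e-5 * 10.0 ** (247.8 / (T - 140.0))

T_assumed, T_actual = 20.0, 23.0          # believed vs actual rotor temperature, deg C
temp_bias = water_viscosity(T_assumed) / water_viscosity(T_actual) - 1.0

time_scale_error = 0.02                   # 2% scale error in the acquisition time stamps

print("fractional s bias from a 3 C temperature error: %+.1f%%" % (100 * temp_bias))
print("fractional s bias magnitude from a 2%% time-base error: %.1f%%" % (100 * time_scale_error))
```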
Runtime Detection of C-Style Errors in UPC Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pirkelbauer, P; Liao, C; Panas, T
2011-09-29
Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions to each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.
Zhang, T; Gordon, H R
1997-04-20
We report a sensitivity analysis for the algorithm presented by Gordon and Zhang [Appl. Opt. 34, 5552 (1995)] for inverting the radiance exiting the top and bottom of the atmosphere to yield the aerosol-scattering phase function [P(Θ)] and single-scattering albedo (ω0). The study of the algorithm's sensitivity to radiometric calibration errors, mean-zero instrument noise, sea-surface roughness, the curvature of the Earth's atmosphere, the polarization of the light field, and incorrect assumptions regarding the vertical structure of the atmosphere, indicates that the retrieved ω0 has excellent stability even for very large values (~2) of the aerosol optical thickness; however, the error in the retrieved P(Θ) strongly depends on the measurement error and on the assumptions made in the retrieval algorithm. The retrieved phase functions in the blue are usually poor compared with those in the near infrared.
Experimental Investigation of Jet Impingement Heat Transfer Using Thermochromic Liquid Crystals
NASA Technical Reports Server (NTRS)
Dempsey, Brian Paul
1997-01-01
Jet impingement cooling of a hypersonic airfoil leading edge is experimentally investigated using thermochromic liquid crystals (TLCs) to measure surface temperature. The experiment uses computer data acquisition with digital imaging of the TLCs to determine heat transfer coefficients during a transient experiment. The data reduction relies on analysis of a coupled transient conduction-convection heat transfer problem that characterizes the experiment. The recovery temperature of the jet is accounted for by running two experiments with different heating rates, thereby generating a second equation that is used to solve for the recovery temperature. The resulting solution requires a complicated numerical iteration that is handled by a computer. Because the computational data reduction method is complex, special attention is paid to error assessment. The error analysis considers random and systematic errors generated by the instrumentation along with errors generated by the approximate nature of the numerical methods. Results of the error analysis show that the experimentally determined heat transfer coefficients are accurate to within 15%. The error analysis also shows that the recovery temperature data may be in error by more than 50%. The results show that the recovery temperature data are only reliable when the recovery temperature of the jet is greater than 5 C, i.e. the jet velocity is in excess of 100 m/s. Parameters that were investigated include nozzle width, distance from the nozzle exit to the airfoil surface, and jet velocity. Heat transfer data are presented in graphical and tabular forms. An engineering analysis of hypersonic airfoil leading edge cooling is performed using the results from these experiments. Several suggestions for the improvement of the experimental technique are discussed.
NASA Technical Reports Server (NTRS)
Amer, Tahani; Tripp, John; Tcheng, Ping; Burkett, Cecil; Sealey, Bradley
2004-01-01
This paper presents the calibration results and uncertainty analysis of a high-precision reference pressure measurement system currently used in wind tunnels at the NASA Langley Research Center (LaRC). Sensors, calibration standards, and measurement instruments are subject to errors due to aging, drift with time, environment effects, transportation, the mathematical model, the calibration experimental design, and other factors. Errors occur at every link in the chain of measurements and data reduction from the sensor to the final computed results. At each link of the chain, bias and precision uncertainties must be separately estimated for facility use, and are combined to produce overall calibration and prediction confidence intervals for the instrument, typically at a 95% confidence level. The uncertainty analysis and calibration experimental designs used herein, based on techniques developed at LaRC, employ replicated experimental designs for efficiency, separate estimation of bias and precision uncertainties, and detection of significant parameter drift with time. Final results are presented, including calibration confidence intervals and prediction intervals given as functions of the applied inputs rather than as a fixed percentage of the full-scale value. System uncertainties are propagated beginning with the initial reference pressure standard, to the calibrated instrument as a working standard in the facility. Among the several parameters that can affect the overall results are operating temperature, atmospheric pressure, humidity, and facility vibration. Effects of factors such as initial zeroing and temperature are investigated. The effects of the identified parameters on system performance and accuracy are discussed.
ERIC Educational Resources Information Center
Huang, Francis L.; Cornell, Dewey G.
2016-01-01
Advances in multilevel modeling techniques now make it possible to investigate the psychometric properties of instruments using clustered data. Factor models that overlook the clustering effect can lead to underestimated standard errors, incorrect parameter estimates, and model fit indices. In addition, factor structures may differ depending on…
LV software support for supersonic flow analysis
NASA Technical Reports Server (NTRS)
Bell, W. A.; Lepicovsky, J.
1992-01-01
The software for configuring an LV counter processor system has been developed using structured design. The LV system includes up to three counter processors and a rotary encoder. The software for configuring and testing the LV system has been developed, tested, and included in an overall software package for data acquisition, analysis, and reduction. Error handling routines respond to both operator and instrument errors which often arise in the course of measuring complex, high-speed flows. The use of networking capabilities greatly facilitates the software development process by allowing software development and testing from a remote site. In addition, high-speed transfers allow graphics files or commands to provide viewing of the data from a remote site. Further advances in data analysis require corresponding advances in procedures for statistical and time series analysis of nonuniformly sampled data.
LV software support for supersonic flow analysis
NASA Technical Reports Server (NTRS)
Bell, William A.
1992-01-01
The software for configuring a Laser Velocimeter (LV) counter processor system was developed using structured design. The LV system includes up to three counter processors and a rotary encoder. The software for configuring and testing the LV system was developed, tested, and included in an overall software package for data acquisition, analysis, and reduction. Error handling routines respond to both operator and instrument errors which often arise in the course of measuring complex, high-speed flows. The use of networking capabilities greatly facilitates the software development process by allowing software development and testing from a remote site. In addition, high-speed transfers allow graphics files or commands to provide viewing of the data from a remote site. Further advances in data analysis require corresponding advances in procedures for statistical and time series analysis of nonuniformly sampled data.
NASA Technical Reports Server (NTRS)
Ackermann, M.; Ajello, M.; Albert, A.; Allafort, A.; Atwood, W. B.; Axelsson, M.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.;
2012-01-01
The Fermi Large Area Telescope (Fermi-LAT, hereafter LAT), the primary instrument on the Fermi Gamma-ray Space Telescope (Fermi) mission, is an imaging, wide field-of-view, high-energy gamma-ray telescope, covering the energy range from 20 MeV to more than 300 GeV. During the first years of the mission the LAT team has gained considerable insight into the in-flight performance of the instrument. Accordingly, we have updated the analysis used to reduce LAT data for public release as well as the Instrument Response Functions (IRFs), the description of the instrument performance provided for data analysis. In this paper we describe the effects that motivated these updates. Furthermore, we discuss how we originally derived IRFs from Monte Carlo simulations and later corrected those IRFs for discrepancies observed between flight and simulated data. We also give details of the validations performed using flight data and quantify the residual uncertainties in the IRFs. Finally, we describe techniques the LAT team has developed to propagate those uncertainties into estimates of the systematic errors on common measurements such as fluxes and spectra of astrophysical sources.
[Application of virtual instrumentation technique in toxicological studies].
Moczko, Jerzy A
2005-01-01
Research investigations frequently require direct connection of measuring equipment to a computer. The virtual instrumentation technique considerably facilitates the programming of sophisticated acquisition-and-analysis procedures. In the standard approach these two steps are performed sequentially with separate software tools: the acquired data are transferred, using the export/import procedures of one program, to another program that executes the next step of the analysis. This procedure is cumbersome, time consuming, and a potential source of errors. In 1987 National Instruments Corporation introduced the LabVIEW language, based on the concept of graphical programming. Contrary to conventional textual languages, it allows the researcher to concentrate on the problem being solved and omit all syntactical rules. Programs developed in LabVIEW are called virtual instruments (VIs) and are portable among different computer platforms such as PCs, Macintoshes, Sun SPARCstations, Concurrent PowerMAX stations, and HP PA/RISC workstations. This flexibility ensures that programs prepared for one platform will also be appropriate for another. In the present paper the basic principles of connecting research equipment to computer systems are described.
EXPRES: a next generation RV spectrograph in the search for earth-like worlds
NASA Astrophysics Data System (ADS)
Jurgenson, C.; Fischer, D.; McCracken, T.; Sawyer, D.; Szymkowiak, A.; Davis, A.; Muller, G.; Santoro, F.
2016-08-01
The EXtreme PREcision Spectrograph (EXPRES) is an optical fiber fed echelle instrument being designed and built at the Yale Exoplanet Laboratory to be installed on the 4.3-meter Discovery Channel Telescope operated by Lowell Observatory. The primary science driver for EXPRES is to detect Earth-like worlds around Sun-like stars. With this in mind, we are designing the spectrograph to have an instrumental precision of 15 cm/s so that the on-sky measurement precision (that includes modeling for RV noise from the star) can reach to better than 30 cm/s. This goal places challenging requirements on every aspect of the instrument development, including optomechanical design, environmental control, image stabilization, wavelength calibration, and data analysis. In this paper we describe our error budget, and instrument optomechanical design.
NASA Astrophysics Data System (ADS)
Platonov, I. A.; Kolesnichenko, I. N.; Lange, P. K.
2018-05-01
In this paper, the chromatography desorption method of obtaining gas mixtures of known composition, stable for a time sufficient to calibrate analytical instruments, is considered. The results of a comparative analysis of the preparation accuracy of gas mixtures containing volatile organic compounds using diffusion, polyabarbotage, and chromatography desorption methods are presented. It is shown that the application of chromatography desorption devices allows one to obtain gas mixtures that are stable for 10 to 60 hours under dynamic conditions. These gas mixtures contain volatile aliphatic and aromatic hydrocarbons with a concentration error of no more than 7%. It is shown that it is expedient to use such gas mixtures for the calibration of analytical instruments (chromatographs, spectrophotometers, etc.).
Exact Rayleigh scattering calculations for use with the Nimbus-7 Coastal Zone Color Scanner.
Gordon, H R; Brown, J W; Evans, R H
1988-03-01
For improved analysis of Coastal Zone Color Scanner (CZCS) imagery, the radiance reflected from a plane-parallel atmosphere and flat sea surface in the absence of aerosols (Rayleigh radiance) has been computed with an exact multiple scattering code, i.e., including polarization. The results indicate that the single scattering approximation normally used to compute this radiance can cause errors of up to 5% for small and moderate solar zenith angles. At large solar zenith angles, such as encountered in the analysis of high-latitude imagery, the errors can become much larger, e.g., >10% in the blue band. The single scattering error also varies along individual scan lines. Comparison with multiple scattering computations using scalar transfer theory, i.e., ignoring polarization, shows that scalar theory can yield errors of approximately the same magnitude as single scattering when compared with exact computations at small to moderate values of the solar zenith angle. The exact computations can be easily incorporated into CZCS processing algorithms, and, for application to future instruments with higher radiometric sensitivity, a scheme is developed with which the effect of variations in the surface pressure could be easily and accurately included in the exact computation of the Rayleigh radiance. Direct application of these computations to CZCS imagery indicates that accurate atmospheric corrections can be made with solar zenith angles at least as large as 65 degrees and probably up to at least 70 degrees with a more sensitive instrument. This suggests that the new Rayleigh radiance algorithm should produce more consistent pigment retrievals, particularly at high latitudes.
NASA Technical Reports Server (NTRS)
Daniels, Janet L.; Smith, G. Louis; Priestley, Kory J.; Thomas, Susan
2014-01-01
Validation of in-orbit instrument performance is a function of stability in both the instrument and the calibration source. This paper describes a method using lunar observations, scanning near full moon, by the Clouds and the Earth's Radiant Energy System (CERES) instruments. The Moon offers an external source whose signal variance is predictable and non-degrading. From 2006 to the present, these in-orbit observations have been standardized and compiled for Flight Models 1 and 2 aboard the Terra satellite, for Flight Models 3 and 4 aboard the Aqua satellite, and, beginning in 2012, for Flight Model 5 aboard Suomi-NPP. The instrument performance measures studied are detector sensitivity stability, pointing accuracy, and the static detector point response function. This validation method also shows trends per CERES data channel of 0.8% per decade or less for Flight Models 1-4. Using instrument gimbal data and the computed lunar position, the pointing error of each detector telescope and the accuracy and consistency of the alignment between the detectors can be determined. The maximum pointing error was 0.2° in azimuth and 0.17° in elevation, which corresponds to a geolocation error near nadir of 2.09 km. With the exception of one detector, all instruments were found to have consistent detector alignment from 2006 to the present. All alignment errors were within 0.1°, with most detector telescopes showing a consistent alignment offset of less than 0.02°.
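A quick check of the geolocation figure quoted above, assuming the nominal ~705 km Terra/Aqua orbit altitude and a simple mapping of pointing error to ground displacement near nadir:

```python
import numpy as np

# Near nadir, an angular pointing error maps to a ground displacement of roughly
# altitude * tan(error); 705 km is the nominal Terra/Aqua altitude (assumed here).
altitude_km = 705.0
elevation_error_deg = 0.17
ground_error_km = altitude_km * np.tan(np.radians(elevation_error_deg))
print("geolocation error near nadir: %.2f km" % ground_error_km)   # ~2.09 km
```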
Morillo, Juan P; Reigal, Rafael E; Hernández-Mendo, Antonio; Montaña, Alejandro; Morales-Sánchez, Verónica
2017-01-01
Referees are essential for sports such as handball. However, there are few tools available to analyze the activity of handball referees. The aim of this study was to design an instrument for observing the behavior of referees in handball competitions and to analyze the resulting data by polar coordinate analysis. The instrument contained 6 criteria and 18 categories and can be used to monitor and describe the actions of handball referees according to their role/position on the playing court. For the data quality control analysis, we calculated Pearson's (0.99), Spearman's (0.99), and Kendall's tau (1.00) correlation coefficients and Cohen's kappa (between 0.72 and 0.75) and Phi (between 0.83 and 0.87) coefficients. In the generalizability analysis, the absolute and relative generalizability coefficients were 0.99 in both cases. Polar coordinate analysis of referee decisions showed that correct calls were more common for central court and 7-meter throw calls. Likewise, calls were more likely to be incorrect (in terms of both errors of omission and commission) when taken from the goal-line position.
Morillo, Juan P.; Reigal, Rafael E.; Hernández-Mendo, Antonio; Montaña, Alejandro; Morales-Sánchez, Verónica
2017-01-01
Referees are essential for sports such as handball. However, there are few tools available to analyze the activity of handball referees. The aim of this study was to design an instrument for observing the behavior of referees in handball competitions and to analyze the resulting data by polar coordinate analysis. The instrument contained 6 criteria and 18 categories and can be used to monitor and describe the actions of handball referees according to their role/position on the playing court. For the data quality control analysis, we calculated Pearson's (0.99), Spearman's (0.99), and Kendall's tau (1.00) correlation coefficients and Cohen's kappa (between 0.72 and 0.75) and Phi (between 0.83 and 0.87) coefficients. In the generalizability analysis, the absolute and relative generalizability coefficients were 0.99 in both cases. Polar coordinate analysis of referee decisions showed that correct calls were more common for central court and 7-meter throw calls. Likewise, calls were more likely to be incorrect (in terms of both errors of omission and commission) when taken from the goal-line position. PMID:29104553
Investigating error structure of shuttle radar topography mission elevation data product
NASA Astrophysics Data System (ADS)
Becek, Kazimierz
2008-08-01
An attempt was made to experimentally assess the instrumental component of the error of the C-band Shuttle Radar Topography Mission (SRTM) elevation data product. This was achieved by comparing elevation data for 302 runways from airports all over the world with the SRTM data. It was found that the rms of the instrumental error is about +/-1.55 m. Modeling of the remaining SRTM error sources, including terrain relief and pixel size, shows that downsampling from 30 m to 90 m (1 to 3 arc-sec pixels) worsened the SRTM vertical accuracy threefold. It is suspected that the proximity of large metallic objects is a source of large SRTM errors. The achieved error estimates allow a pixel-based accuracy assessment of the SRTM elevation data product to be constructed. Vegetation-induced errors were not considered in this work.
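An illustrative error budget in the spirit of the abstract, combining the instrumental component in quadrature with an assumed terrain-induced term that grows with pixel size and local slope; the slope model is a simple assumption, not the paper's exact formulation.

```python
import numpy as np

# Quadrature combination of the instrumental component with an assumed terrain term
# that grows with pixel size and local slope (simple stand-in for the paper's model).
def srtm_error(pixel_size_m, slope_deg, instrumental_m=1.55):
    terrain_m = 0.5 * pixel_size_m * np.tan(np.radians(slope_deg))   # assumed relief term
    return np.hypot(instrumental_m, terrain_m)

for pixel in (30.0, 90.0):
    print("pixel %3.0f m, 10 deg slope: +/-%.1f m" % (pixel, srtm_error(pixel, 10.0)))
```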
[Determination of high concentrations of rubidium chloride by ICP-OES].
Zhong, Yuan; Sun, Bai; Li, Hai-jun; Wang, Tao; Li, Wu; Song, Peng-sheng
2015-01-01
The method of ICP-OES for the direct determination of high concentrations of rubidium in rubidium chloride solutions was studied in the present paper through the mass dilution method and optimization of the instrument parameters. It can reduce the number of dilutions and the error introduced by dilution, and improve the accuracy of the determination of rubidium. By analyzing the sensitivity of the three detection spectral lines for the rubidium ion, the linear correlation coefficients, and the relative errors of the determination results, the spectral line Rb 780.023 nm was chosen as the most suitable wavelength for measuring high concentrations of rubidium in rubidium chloride solutions. It was found that ICP-OES instrument parameters such as the atomizer flow, the pump speed, and the high-frequency power are the major factors for the determination of rubidium ion in rubidium chloride solutions. Because these parameters have an important influence on the atomization efficiency as well as the emissive power of the rubidium spectral lines, they are considered the significant factors for the determination of rubidium. The optimized instrument parameters were obtained by orthogonal experiments and a further single-factor experiment: an atomizer flow of 0.60 L·min-1, a pump speed of 60 r·min-1, and a high-frequency power of 1150 W. The same experiments were repeated a week later with the optimized instrument parameters, and the relative errors of the determination results are less than 0.5% when the concentration of rubidium chloride ranges from 0.09% to 0.18%. When the concentration of rubidium chloride is 0.06%, the relative error of the determination results is -1.7%. The determination of lithium chloride and potassium chloride in highly concentrated aqueous solutions was studied under similar instrument parameters. It was found by comparison that the determination results for lithium chloride are better than those for potassium chloride and rubidium chloride. The ICP-OES method used for the determination of high concentrations of rubidium is fast and simple to operate, and the results are accurate. It is suitable for studying equilibria in rubidium-containing salt-water systems and for the analysis of rubidium products with high rubidium content.
Comparing measurement response and inverted results of electrical resistivity tomography instruments
Parsekian, Andrew D.; Claes, Niels; Singha, Kamini; Minsley, Burke J.; Carr, Bradley; Voytek, Emily; Harmon, Ryan; Kass, Andy; Carey, Austin; Thayer, Drew; Flinchum, Brady
2017-01-01
In this investigation, we compare the results of electrical resistivity measurements made by six commercially available instruments on the same line of electrodes to determine if there are differences in the measured data or inverted results. These comparisons are important to determine whether measurements made between different instruments are consistent. We also degraded contact resistance on one quarter of the electrodes to study how each instrument responds to different electrical connection with the ground. We find that each instrument produced statistically similar apparent resistivity results, and that any conservative assessment of the final inverted resistivity models would result in a similar interpretation for each. We also note that inversions, as expected, are affected by measurement error weights. Increased measurement errors were most closely associated with degraded contact resistance in this set of experiments. In a separate test we recorded the full measured waveform for a single four-electrode array to show how poor electrode contact and instrument-specific recording settings can lead to systematic measurement errors. We find that it would be acceptable to use more than one instrument during an investigation with the expectation that the results would be comparable assuming contact resistance remained consistent.
NASA Astrophysics Data System (ADS)
Wang, Yang; Beirle, Steffen; Hendrick, Francois; Hilboll, Andreas; Jin, Junli; Kyuberis, Aleksandra A.; Lampel, Johannes; Li, Ang; Luo, Yuhan; Lodi, Lorenzo; Ma, Jianzhong; Navarro, Monica; Ortega, Ivan; Peters, Enno; Polyansky, Oleg L.; Remmers, Julia; Richter, Andreas; Puentedura, Olga; Van Roozendael, Michel; Seyler, André; Tennyson, Jonathan; Volkamer, Rainer; Xie, Pinhua; Zobov, Nikolai F.; Wagner, Thomas
2017-10-01
In order to promote the development of the passive DOAS technique, the Multi Axis DOAS Comparison campaign for Aerosols and Trace gases (MAD-CAT) was held at the Max Planck Institute for Chemistry in Mainz, Germany, from June to October 2013. Here, we systematically compare the differential slant column densities (dSCDs) of nitrous acid (HONO) derived from measurements of seven different instruments. We also compare the tropospheric difference of SCDs (delta SCD) of HONO, namely the difference between the SCDs for the non-zenith observations and the zenith observation of the same elevation sequence. Different research groups analysed the spectra from their own instruments using their individual fit software. The fit errors of HONO dSCDs from the instruments with cooled large-size detectors are mostly in the range of 0.1 to 0.3 × 10^15 molecules cm^-2 for an integration time of 1 min; the fit error for the mini-MAX-DOAS is around 0.7 × 10^15 molecules cm^-2. Although the HONO delta SCDs are normally smaller than 6 × 10^15 molecules cm^-2, consistent time series of HONO delta SCDs are retrieved from the measurements of the different instruments. Fits with a sequential Fraunhofer reference spectrum (FRS) and with a daily noon FRS lead to similar consistency. Apart from the mini-MAX-DOAS, the systematic absolute differences of HONO delta SCDs between the instruments are smaller than 0.63 × 10^15 molecules cm^-2. The correlation coefficients are higher than 0.7 and the slopes of the linear regressions deviate from unity by less than 16 % for an elevation angle of 1°. The correlations decrease with increasing elevation angle. All participants also analysed synthetic spectra using the same baseline DOAS settings to evaluate the systematic errors of the HONO results from their respective fit programs. In general these errors are smaller than 0.3 × 10^15 molecules cm^-2, about half of the systematic difference between the real measurements. The differences of HONO delta SCDs retrieved in the three selected spectral ranges 335-361, 335-373 and 335-390 nm are considerable (up to 0.57 × 10^15 molecules cm^-2) for both real measurements and synthetic spectra. We performed sensitivity studies to quantify the dominant systematic error sources and to find a recommended DOAS setting for the three spectral ranges. The results show that water vapour absorption, the temperature and wavelength dependence of the O4 absorption, the temperature dependence of the Ring spectrum, and the polynomial and intensity offset correction together dominate the systematic errors. We recommend a fit range of 335-373 nm for HONO retrievals; in this range the overall systematic uncertainty is about 0.87 × 10^15 molecules cm^-2, much smaller than in the other two ranges. The typical random uncertainty is estimated to be about 0.16 × 10^15 molecules cm^-2, which is only 25 % of the total systematic uncertainty for most of the instruments in the MAD-CAT campaign. In summary, for most of the MAX-DOAS instruments at elevation angles below 5°, about half of the daytime measurements (usually in the morning) of HONO delta SCD can be above the detection limit of 0.2 × 10^15 molecules cm^-2 with an uncertainty of ˜0.9 × 10^15 molecules cm^-2.
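For readers unfamiliar with the delta SCD quantity compared above, the sketch below forms it exactly as defined in the abstract: the dSCD of each off-zenith observation minus the zenith dSCD of the same elevation sequence. The values are made up for illustration only.

```python
import numpy as np

def delta_scd(dscd_by_elevation, zenith_key=90):
    """Tropospheric differential: the dSCD at each off-zenith elevation minus
    the zenith dSCD of the same scan sequence (keys are elevation angles, deg)."""
    zenith = dscd_by_elevation[zenith_key]
    return {elev: value - zenith
            for elev, value in dscd_by_elevation.items() if elev != zenith_key}

# one scan sequence, values in 1e15 molecules cm^-2 (illustrative only)
sequence = {1: 5.2, 2: 4.1, 5: 2.8, 30: 1.1, 90: 0.4}
print(delta_scd(sequence))
```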
Extracting Loop Bounds for WCET Analysis Using the Instrumentation Point Graph
NASA Astrophysics Data System (ADS)
Betts, A.; Bernat, G.
2009-05-01
Every calculation engine proposed in the literature of Worst-Case Execution Time (WCET) analysis requires upper bounds on loop iterations. Existing mechanisms to procure this information are either error prone, because they are gathered from the end-user, or limited in scope, because automatic analyses target very specific loop structures. In this paper, we present a technique that obtains bounds completely automatically for arbitrary loop structures. In particular, we show how to employ the Instrumentation Point Graph (IPG) to parse traces of execution (generated by an instrumented program) in order to extract bounds relative to any loop-nesting level. With this technique, therefore, non-rectangular dependencies between loops can be captured, allowing more accurate WCET estimates to be calculated. We demonstrate the improvement in accuracy by comparing WCET estimates computed through our HMB framework against those computed with state-of-the-art techniques.
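The core idea of trace-based bound extraction can be illustrated with a toy sketch: scan a trace of instrumentation-point IDs and record the largest iteration count of a loop header between re-entries. This is only a simplified stand-in for the IPG-based parser described in the paper, which additionally handles arbitrary nesting levels and non-rectangular loop dependencies; the point IDs below are hypothetical.

```python
def observed_loop_bound(trace, loop_entry, loop_header):
    """For a single loop, return the largest number of header executions seen
    between two traversals of the loop's entry point in an execution trace.
    `trace` is a sequence of instrumentation-point IDs."""
    best, current = 0, 0
    for point in trace:
        if point == loop_entry:        # (re-)entering the loop: restart the count
            best, current = max(best, current), 0
        elif point == loop_header:     # one more iteration observed
            current += 1
    return max(best, current)

# toy trace: entry 'E', header 'H', other points ignored
print(observed_loop_bound("EHHHXEHHHHHXE", "E", "H"))  # -> 5
```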
Simplified Approach Charts Improve Data Retrieval Performance
Stewart, Michael; Laraway, Sean; Jordan, Kevin; Feary, Michael S.
2016-01-01
The effectiveness of different instrument approach charts to deliver minimum visibility and altitude information during airport equipment outages was investigated. Eighteen pilots flew simulated instrument approaches in three conditions: (a) normal operations using a standard approach chart (standard-normal), (b) equipment outage conditions using a standard approach chart (standard-outage), and (c) equipment outage conditions using a prototype decluttered approach chart (prototype-outage). Errors and retrieval times in identifying minimum altitudes and visibilities were measured. The standard-outage condition produced significantly more errors and longer retrieval times versus the standard-normal condition. The prototype-outage condition had significantly fewer errors and shorter retrieval times than did the standard-outage condition. The prototype-outage condition produced significantly fewer errors but similar retrieval times when compared with the standard-normal condition. Thus, changing the presentation of minima may reduce risk and increase safety in instrument approaches, specifically with airport equipment outages. PMID:28491009
An occultation satellite system for determining pressure levels in the atmosphere
NASA Technical Reports Server (NTRS)
Ungar, S. G.; Lusignan, B. B.
1972-01-01
An operational two-satellite microwave occultation system will establish a pressure reference level to be used in fixing the temperature-pressure profile generated by the SIRS infrared sensor as a function of altitude. In the final error analysis, simulated data for the SIRS sensor were used to test the performance of the occultation system. The results of this analysis indicate that the occultation system is capable of measuring the altitude of the 300-mb level to within 24 m rms, given a maximum error of 2 K in the input temperature profile. The effects of water vapor can be corrected by suitable climatological profiles, and improvements in the accuracy of the SIRS instrument should yield additional improvements in the performance of the occultation system.
Optical radiation measurements: instrumentation and sources of error.
Landry, R J; Andersen, F A
1982-07-01
Accurate measurement of optical radiation is required when sources of this radiation are used in biological research. The most difficult measurements of broadband noncoherent optical radiations usually must be performed by a highly trained specialist using sophisticated, complex, and expensive instruments. Presentation of the results of such measurement requires correct use of quantities and units with which many biological researchers are unfamiliar. The measurement process, physical quantities and units, measurement systems with instruments, and sources of error and uncertainties associated with optical radiation measurements are reviewed.
An interpretation of radiosonde errors in the atmospheric boundary layer
Bernadette H. Connell; David R. Miller
1995-01-01
The authors review sources of error in radiosonde measurements in the atmospheric boundary layer and analyze errors of two radiosonde models manufactured by Atmospheric Instrumentation Research, Inc. The authors focus on temperature and humidity lag errors and wind errors. Errors in measurement of azimuth and elevation angles and pressure over short time intervals and...
NASA Astrophysics Data System (ADS)
Haldren, H. A.; Perey, D. F.; Yost, W. T.; Cramer, K. E.; Gupta, M. C.
2018-05-01
A digitally controlled instrument for conducting single-frequency and swept-frequency ultrasonic phase measurements has been developed based on a constant-frequency pulsed phase-locked-loop (CFPPLL) design. This instrument uses a pair of direct digital synthesizers to generate an ultrasonically transceived tone-burst and an internal reference wave for phase comparison. Real-time, constant-frequency phase tracking in an interrogated specimen is possible with a resolution of 0.00038 rad (0.022°), and swept-frequency phase measurements can be obtained. Using phase measurements, an absolute thickness measurement in borosilicate glass is presented to show the instrument's efficacy, and these results are compared to conventional ultrasonic pulse-echo time-of-flight (ToF) measurements. The newly developed instrument predicted the thickness with a mean error of -0.04 μm and a standard deviation of error of 1.35 μm. Additionally, the CFPPLL instrument shows a lower measured phase error in the absence of changing temperature and couplant thickness than high-resolution cross-correlation ToF measurements at a similar signal-to-noise ratio. By showing higher accuracy and precision than conventional pulse-echo ToF measurements and lower phase errors than cross-correlation ToF measurements, the new digitally controlled CFPPLL instrument provides high-resolution absolute ultrasonic velocity or path-length measurements in solids or liquids, as well as tracking of material property changes with high sensitivity. The ability to obtain absolute phase measurements allows for many new applications beyond those possible with previous ultrasonic pulsed phase-locked-loop instruments. In addition to improved resolution, swept-frequency phase measurements add useful capability in measuring properties of layered structures, such as bonded joints, or materials which exhibit non-linear frequency-dependent behavior, such as dispersive media.
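To make the comparison concrete, the sketch below contrasts the conventional pulse-echo ToF thickness estimate with a swept-frequency phase estimate, in which the slope of the unwrapped round-trip phase versus frequency removes the 2π ambiguity of a single-frequency reading. This illustrates the measurement principle only, not the CFPPLL instrument's internal algorithm; the glass velocity in the toy check is an assumed value.

```python
import numpy as np

def thickness_from_tof(tof_s, velocity_m_s):
    """Pulse-echo time of flight: the round trip covers twice the thickness."""
    return velocity_m_s * tof_s / 2.0

def thickness_from_phase_sweep(freqs_hz, phases_rad, velocity_m_s):
    """Absolute thickness from a swept-frequency phase measurement: the
    round-trip phase 2*pi*f*(2d/v) is linear in frequency, so its slope
    gives d without the 2*pi ambiguity of a single-frequency reading."""
    slope = np.polyfit(freqs_hz, np.unwrap(phases_rad), 1)[0]   # d(phi)/d(f)
    return slope * velocity_m_s / (4.0 * np.pi)

# toy check: 10 mm of glass at an assumed 5640 m/s
d_true, v = 0.010, 5640.0
f = np.linspace(4e6, 6e6, 50)
phi = 4.0 * np.pi * f * d_true / v
print(thickness_from_phase_sweep(f, phi, v))   # ~0.010 m
```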
Rödig, T; Reicherts, P; Konietschke, F; Dullin, C; Hahn, W; Hülsmann, M
2014-10-01
To compare the efficacy of reciprocating and rotary NiTi instruments in removing filling material from curved root canals using micro-computed tomography. Sixty curved root canals were prepared and filled with gutta-percha and sealer. After determination of root canal curvatures and radii in two directions as well as volumes of filling material, the teeth were assigned to three comparable groups (n = 20). Retreatment was performed using Reciproc, ProTaper Universal Retreatment or Hedström files. Percentages of residual filling material and dentine removal were assessed using micro-CT imaging. Working time and procedural errors were recorded. Statistical analysis was performed by variance procedures. No significant differences amongst the three retreatment techniques concerning residual filling material were detected (P > 0.05). Hedström files removed significantly more dentine than ProTaper Universal Retreatment (P < 0.05), but the difference concerning dentine removal between the two NiTi systems was not significant (P > 0.05). Reciproc and ProTaper Universal Retreatment were significantly faster than Hedström files (P = 0.0001). No procedural errors such as instrument fracture, blockage, ledging or perforation were detected for Hedström files. Three perforations were recorded for ProTaper Universal Retreatment, and in both NiTi groups, one instrument fracture occurred. Remnants of filling material were observed in all samples with no significant differences between the three techniques. Hedström files removed significantly more dentine than ProTaper Universal Retreatment, but no significant differences between the two NiTi systems were detected. Procedural errors were observed with ProTaper Universal Retreatment and Reciproc. © 2014 International Endodontic Journal. Published by John Wiley & Sons Ltd.
Error reduction by combining strapdown inertial measurement units in a baseball stitch
NASA Astrophysics Data System (ADS)
Tracy, Leah
A poor musical performance is rarely due to an inferior instrument. When a device is under performing, the temptation is to find a better device or a new technology to achieve performance objectives; however, another solution may be improving how existing technology is used through a better understanding of device characteristics, i.e., learning to play the instrument better. This thesis explores improving position and attitude estimates of inertial navigation systems (INS) through an understanding of inertial sensor errors, manipulating inertial measurement units (IMUs) to reduce that error and multisensor fusion of multiple IMUs to reduce error in a GPS denied environment.
Image processing methods to compensate for IFOV errors in microgrid imaging polarimeters
NASA Astrophysics Data System (ADS)
Ratliff, Bradley M.; Boger, James K.; Fetrow, Matthew P.; Tyo, J. Scott; Black, Wiley T.
2006-05-01
Long-wave infrared imaging Stokes vector polarimeters are used in many remote sensing applications. Imaging polarimeters require that several measurements be made under optically different conditions in order to estimate the polarization signature at a given scene point. This multiple-measurement requirement introduces error in the signature estimates, and the errors differ depending upon the type of measurement scheme used. Here, we investigate a LWIR linear microgrid polarimeter. This type of instrument consists of a mosaic of micropolarizers at different orientations that are masked directly onto a focal plane array sensor. In this scheme, each polarization measurement is acquired spatially and hence each is made at a different point in the scene. This is a significant source of error, as it violates the requirement that each polarization measurement have the same instantaneous field-of-view (IFOV). In this paper, we first study the amount of error introduced by the IFOV handicap in microgrid instruments. We then proceed to investigate means for mitigating the effects of these errors to improve the quality of polarimetric imagery. In particular, we examine different interpolation schemes and gauge their performance. These studies are completed through the use of both real instrumental and modeled data.
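One common mitigation of the IFOV mismatch, and a baseline against which the interpolation schemes studied in the paper can be judged, is to interpolate each micropolarizer orientation to the full grid before forming Stokes images. A minimal sketch, assuming a particular 2x2 layout that may differ from the actual focal plane array:

```python
import numpy as np
from scipy.ndimage import convolve

# Bilinear kernel that fills in a channel sampled on every other pixel of a
# 2x2 mosaic (a standard demosaicking trick).
BILINEAR = np.array([[0.25, 0.5, 0.25],
                     [0.5,  1.0, 0.5],
                     [0.25, 0.5, 0.25]])

def microgrid_stokes(raw):
    """Interpolate each micropolarizer orientation to the full grid before
    forming the linear Stokes images; this reduces (but does not remove) the
    IFOV mismatch. The assumed layout is 0/45 deg on the top row and 135/90
    deg on the bottom row of each superpixel."""
    offsets = {0: (0, 0), 45: (0, 1), 135: (1, 0), 90: (1, 1)}
    ch = {}
    for angle, (r, c) in offsets.items():
        sparse = np.zeros_like(raw, dtype=float)
        sparse[r::2, c::2] = raw[r::2, c::2]
        ch[angle] = convolve(sparse, BILINEAR, mode="nearest")
    s0 = 0.5 * (ch[0] + ch[45] + ch[90] + ch[135])
    s1 = ch[0] - ch[90]
    s2 = ch[45] - ch[135]
    return s0, s1, s2
```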
NASA Astrophysics Data System (ADS)
Bhushan, A.; Sharker, M. H.; Karimi, H. A.
2015-07-01
In this paper, we address outliers in spatiotemporal data streams obtained from sensors placed across geographically distributed locations. Outliers may appear in such sensor data due to various reasons such as instrumental error and environmental change. Real-time detection of these outliers is essential to prevent propagation of errors in subsequent analyses and results. Incremental Principal Component Analysis (IPCA) is one possible approach for detecting outliers in such type of spatiotemporal data streams. IPCA has been widely used in many real-time applications such as credit card fraud detection, pattern recognition, and image analysis. However, the suitability of applying IPCA for outlier detection in spatiotemporal data streams is unknown and needs to be investigated. To fill this research gap, this paper contributes by presenting two new IPCA-based outlier detection methods and performing a comparative analysis with the existing IPCA-based outlier detection methods to assess their suitability for spatiotemporal sensor data streams.
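A minimal sketch of one way an IPCA-based detector can operate on a stream, scoring each incoming batch by its reconstruction error before updating the model. This is illustrative only and is not either of the two new methods proposed in the paper.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

def stream_outliers(batches, n_components=3, threshold=None):
    """Flag outliers in a stream of sensor observation batches by their
    reconstruction error under an incrementally updated PCA model.
    Each batch must be a 2-D array with at least n_components rows."""
    ipca = IncrementalPCA(n_components=n_components)
    flags = []
    for batch in batches:
        if hasattr(ipca, "components_"):          # score before updating
            recon = ipca.inverse_transform(ipca.transform(batch))
            err = np.linalg.norm(batch - recon, axis=1)
            cut = threshold if threshold is not None else err.mean() + 3 * err.std()
            flags.append(err > cut)
        else:                                      # no model yet for the first batch
            flags.append(np.zeros(len(batch), dtype=bool))
        ipca.partial_fit(batch)                    # then update the model
    return flags
```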
Passive Markers for Tracking Surgical Instruments in Real-Time 3-D Ultrasound Imaging
Stoll, Jeffrey; Ren, Hongliang; Dupont, Pierre E.
2013-01-01
A family of passive echogenic markers is presented by which the position and orientation of a surgical instrument can be determined in a 3-D ultrasound volume, using simple image processing. Markers are attached near the distal end of the instrument so that they appear in the ultrasound volume along with the instrument tip. They are detected and measured within the ultrasound image, thus requiring no external tracking device. This approach facilitates imaging instruments and tissue simultaneously in ultrasound-guided interventions. Marker-based estimates of instrument pose can be used in augmented reality displays or for image-based servoing. Design principles for marker shapes are presented that ensure imaging system and measurement uniqueness constraints are met. An error analysis is included that can be used to guide marker design and which also establishes a lower bound on measurement uncertainty. Finally, examples of marker measurement and tracking algorithms are presented along with experimental validation of the concepts. PMID:22042148
2011-01-01
Background The generation and analysis of high-throughput sequencing data are becoming a major component of many studies in molecular biology and medical research. Illumina's Genome Analyzer (GA) and HiSeq instruments are currently the most widely used sequencing devices. Here, we comprehensively evaluate properties of genomic HiSeq and GAIIx data derived from two plant genomes and one virus, with read lengths of 95 to 150 bases. Results We provide quantifications and evidence for GC bias, error rates, error sequence context, effects of quality filtering, and the reliability of quality values. By combining different filtering criteria we reduced error rates 7-fold at the expense of discarding 12.5% of alignable bases. While overall error rates are low in HiSeq data we observed regions of accumulated wrong base calls. Only 3% of all error positions accounted for 24.7% of all substitution errors. Analyzing the forward and reverse strands separately revealed error rates of up to 18.7%. Insertions and deletions occurred at very low rates on average but increased to up to 2% in homopolymers. A positive correlation between read coverage and GC content was found depending on the GC content range. Conclusions The errors and biases we report have implications for the use and the interpretation of Illumina sequencing data. GAIIx and HiSeq data sets show slightly different error profiles. Quality filtering is essential to minimize downstream analysis artifacts. Supporting previous recommendations, the strand-specificity provides a criterion to distinguish sequencing errors from low abundance polymorphisms. PMID:22067484
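Quality filtering of the kind evaluated above can be as simple as discarding reads whose mean Phred quality falls below a cutoff; the sketch below is a deliberately simplified stand-in for the combined filtering criteria used in the study.

```python
def mean_phred(qual_string, offset=33):
    """Mean Phred quality of a read from its ASCII-encoded quality string."""
    return sum(ord(c) - offset for c in qual_string) / len(qual_string)

def filter_reads(records, min_mean_q=30):
    """Keep reads whose mean quality passes the cutoff. `records` is an
    iterable of (name, sequence, quality) tuples."""
    return [r for r in records if mean_phred(r[2]) >= min_mean_q]

# toy usage with a single hypothetical read
reads = [("read1", "ACGT", "IIII"), ("read2", "ACGT", "!!!!")]
print([name for name, _, _ in filter_reads(reads)])   # read2 is discarded
```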
Update on Integrated Optical Design Analyzer
NASA Technical Reports Server (NTRS)
Moore, James D., Jr.; Troy, Ed
2003-01-01
Updated information on the Integrated Optical Design Analyzer (IODA) computer program has become available. IODA was described in Software for Multidisciplinary Concurrent Optical Design (MFS-31452), NASA Tech Briefs, Vol. 25, No. 10 (October 2001), page 8a. To recapitulate: IODA facilitates multidisciplinary concurrent engineering of highly precise optical instruments. The architecture of IODA was developed by reviewing design processes and software in an effort to automate design procedures. IODA significantly reduces design iteration cycle time and eliminates many potential sources of error. IODA integrates the modeling efforts of a team of experts in different disciplines (e.g., optics, structural analysis, and heat transfer) working at different locations and provides seamless fusion of data among thermal, structural, and optical models used to design an instrument. IODA is compatible with data files generated by the NASTRAN structural-analysis program and the Code V (Registered Trademark) optical-analysis program, and can be used to couple analyses performed by these two programs. IODA supports multiple-load-case analysis for quickly accomplishing trade studies. IODA can also model the transient response of an instrument under the influence of dynamic loads and disturbances.
NASA Astrophysics Data System (ADS)
Lichti, Derek D.; Chow, Jacky; Lahamy, Hervé
One of the important systematic error parameters identified in terrestrial laser scanners is the collimation axis error, which models the non-orthogonality between two instrumental axes. The quality of this parameter determined by self-calibration, as measured by its estimated precision and its correlation with the tertiary rotation angle κ of the scanner exterior orientation, is strongly dependent on instrument architecture. While the quality is generally very high for panoramic-type scanners, it is comparably poor for hybrid-style instruments. Two methods for improving the quality of the collimation axis error in hybrid instrument self-calibration are proposed herein: (1) the inclusion of independent observations of the tertiary rotation angle κ; and (2) the use of a new collimation axis error model. Five real datasets were captured with two different hybrid-style scanners to test each method's efficacy. While the first method achieves the desired outcome of complete decoupling of the collimation axis error from κ, it is shown that the high correlation is simply transferred to other model variables. The second method achieves partial parameter de-correlation to acceptable levels. Importantly, it does so without any adverse, secondary correlations and is therefore the method recommended for future use. Finally, systematic error model identification has been greatly aided in previous studies by graphical analyses of self-calibration residuals. This paper presents results showing the architecture dependence of this technique, revealing its limitations for hybrid scanners.
Reliability of drivers in urban intersections.
Gstalter, Herbert; Fastenmeier, Wolfgang
2010-01-01
The concept of human reliability has been widely used in industrial settings by human factors experts to optimise the person-task fit. Reliability is estimated by the probability that a task will successfully be completed by personnel in a given stage of system operation. Human Reliability Analysis (HRA) is a technique used to calculate human error probabilities as the ratio of errors committed to the number of opportunities for that error. To transfer this notion to the measurement of car driver reliability the following components are necessary: a taxonomy of driving tasks, a definition of correct behaviour in each of these tasks, a list of errors as deviations from the correct actions and an adequate observation method to register errors and opportunities for these errors. Use of the SAFE-task analysis procedure recently made it possible to derive driver errors directly from the normative analysis of behavioural requirements. Driver reliability estimates could be used to compare groups of tasks (e.g. different types of intersections with their respective regulations) as well as groups of drivers' or individual drivers' aptitudes. This approach was tested in a field study with 62 drivers of different age groups. The subjects drove an instrumented car and had to complete an urban test route, the main features of which were 18 intersections representing six different driving tasks. The subjects were accompanied by two trained observers who recorded driver errors using standardized observation sheets. Results indicate that error indices often vary between both the age group of drivers and the type of driving task. The highest error indices occurred in the non-signalised intersection tasks and the roundabout, which exactly equals the corresponding ratings of task complexity from the SAFE analysis. A comparison of age groups clearly shows the disadvantage of older drivers, whose error indices in nearly all tasks are significantly higher than those of the other groups. The vast majority of these errors could be explained by high task load in the intersections, as they represent difficult tasks. The discussion shows how reliability estimates can be used in a constructive way to propose changes in car design, intersection layout and regulation as well as driver training.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guiral, P.; Ribouton, J.; Jalade, P.
Purpose: High dose rate brachytherapy (HDR-BT) is widely used to treat gynecologic, anal, prostate, head, neck, and breast cancers. These treatments are typically administered in large dose per fraction (>5 Gy) and with high-gradient-dose-distributions, with serious consequences in case of a treatment delivery error (e.g., on dwell position and dwell time). Thus, quality assurance (QA) or quality control (QC) should be systematically and independently implemented. This paper describes the design and testing of a phantom and an instrumented gynecological applicator for pretreatment QA and in vivo QC, respectively. Methods: The authors have designed a HDR-BT phantom equipped with four GaN-based dosimeters. The authors have also instrumented a commercial multichannel HDR-BT gynecological applicator by rigid incorporation of four GaN-based dosimeters in four channels. Specific methods based on the four GaN dosimeter responses are proposed for accurate determination of dwell time and dwell position inside phantom or applicator. The phantom and the applicator have been tested for HDR-BT QA in routine over two different periods: 29 and 15 days, respectively. Measurements in dwell position and time are compared to the treatment plan. A modified position–time gamma index is used to monitor the quality of treatment delivery. Results: The HDR-BT phantom and the instrumented applicator have been used to determine more than 900 dwell positions over the different testing periods. The errors between the planned and measured dwell positions are 0.11 ± 0.70 mm (1σ) and 0.01 ± 0.42 mm (1σ), with the phantom and the applicator, respectively. The dwell time errors for these positions do not exhibit significant bias, with a standard deviation of less than 100 ms for both systems. The modified position–time gamma index sets a threshold, determining whether the treatment run passes or fails. The error detectability of their systems has been evaluated through tests on intentionally introduced error protocols. With a detection threshold of 0.7 mm, the error detection rate on dwell position is 22% at 0.5 mm, 96% at 1 mm, and 100% at and beyond 1.5 mm. On dwell time with a dwell time threshold of 0.1 s, it is 90% at 0.2 s and 100% at and beyond 0.3 s. Conclusions: The proposed HDR-BT phantom and instrumented applicator have been tested and their main characteristics have been evaluated. These systems perform unsupervised measurements and analysis without prior treatment plan information. They allow independent verification of dwell position and time with accuracy of measurements comparable with other similar systems reported in the literature.
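The abstract does not define its modified position-time gamma index, so the following is only a plausible form, assumed by analogy with the dosimetric gamma index: the planned-vs-measured offsets are normalized by position and time tolerances and combined in quadrature, and the dwell passes when the result is at most 1. The tolerance values echo the detection thresholds quoted above but are otherwise assumptions.

```python
import numpy as np

def position_time_gamma(dx_mm, dt_s, tol_x_mm=0.7, tol_t_s=0.1):
    """Assumed form of a position-time gamma value for one dwell: offsets
    normalized by their tolerances and combined in quadrature; pass if <= 1."""
    return np.hypot(dx_mm / tol_x_mm, dt_s / tol_t_s)

# a dwell measured 0.4 mm and 0.05 s off the plan
print(position_time_gamma(0.4, 0.05) <= 1.0)   # True -> pass
```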
NASA Astrophysics Data System (ADS)
Sagi, K.; Kasai, Y.; Philippe, B.; Suzuki, K.; Kita, K.; Hayashida, S.; Imasu, R.; Akimoto, H.
2009-12-01
A Geostationary Earth Orbit (GEO) satellite is potentially able to monitor the regional distribution of pollution with good spatial and temporal resolution. The Japan Society of Atmospheric Chemistry (JSAC) and the Japan Aerospace Exploration Agency (JAXA) initiated a concept study for air quality measurements from a GEO satellite targeting the Asian region [1]. This work presents the results of sensitivity studies for a Thermal Infrared (TIR) (650-2300 cm^-1) candidate instrument. We performed a simulation study and error analysis to optimize the instrumental operating frequencies and spectral resolution. The scientific requirements, in terms of minimum precision (or error), are 10% for tropospheric O3 and CO and for the total columns of HNO3 and nighttime HNO2, and 25% for O3 and CO when separated into 2 or 3 tropospheric columns. Two atmospheric scenarios, an Asian background case and a polluted case, were assumed for this study. The forward calculations and the retrieval error analysis were performed with the AMATERASU model [2] developed within the NICT-THz remote sensing project. The retrieval error analysis employed the Optimal Estimation Method [3]. The geometry is an off-nadir observation over Tokyo from a geostationary satellite at the equator. A fine spectral resolution allows boundary-layer O3 and CO to be observed. We estimate the observation precision for spectral resolutions from 0.1 cm^-1 to 1 cm^-1 for the 0-2 km, 2-6 km, and 6-12 km layers. A spectral resolution of 0.3 cm^-1 gives good sensitivity for all target molecules (e.g., tropospheric O3 can be retrieved in 2 separate columns with an error of 30%). A resolution of 0.6 cm^-1 is sufficient to detect the tropospheric column amounts of O3 and CO (in the Asian background scenario) within the required precision and with acceptable instrumental SNR values of 100 for O3 and 30 for CO; however, at this resolution, boundary-layer ozone will be difficult to detect at background abundances. In addition, a spectral resolution of 0.6 cm^-1 is sufficient to retrieve the total columns of HNO3 and NO2 with a precision better than 10%. TIR measurements will thus be useful for tropospheric pollution monitoring. References: [1] http://www.stelab.nagoya-u.ac.jp/ste-www1/div1/taikiken/eisei/eisei2.pdf, Japanese version only. [2] P. Baron et al., AMATERASU: Model for Atmospheric TeraHertz Radiation Analysis and Simulation, Journal of the National Institute of Information and Communications Technology, 55(1), 109-121, 2008. [3] Rodgers, C. D., Inverse Methods for Atmospheric Sounding: Theory and Practice, World Scientific, Singapore (2000).
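The retrieval error analysis above follows the Optimal Estimation Method of Rodgers (2000) [3]; in that standard formalism the retrieval (posterior) error covariance, whose diagonal gives the quoted precision figures, is

```latex
\hat{S} = \left( K^{\mathsf{T}} S_{\epsilon}^{-1} K + S_{a}^{-1} \right)^{-1}
```

where K is the forward-model Jacobian (here computed with AMATERASU), S_epsilon the measurement-noise covariance set by the instrumental SNR, and S_a the a priori covariance. This is the textbook expression, not a detail stated in the abstract.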
Prabhakar, Attiguppe R; Yavagal, Chandrashekar; Dixit, Kratika; Naik, Saraswathi V
2016-01-01
Primary root canals are considered to be most challenging due to their complex anatomy. "Wave one" and "one shape" are single-file systems with reciprocating and rotary motion respectively. The aim of this study was to evaluate and compare dentin thickness, centering ability, canal transportation, and instrumentation time of wave one and one shape files in primary root canals using a cone beam computed tomographic (CBCT) analysis. This is an experimental, in vitro study comparing the two groups. A total of 24 extracted human primary teeth with minimum 7 mm root length were included in the study. Cone beam computed tomographic images were taken before and after the instrumentation for each group. Dentin thickness, centering ability, canal transportation, and instrumentation times were evaluated for each group. A significant difference was found in instrumentation time and canal transportation measures between the two groups. Wave one showed less canal transportation as compared with one shape, and the mean instrumentation time of wave one was significantly less than one shape. Reciprocating single-file systems was found to be faster with much less procedural errors and can hence be recommended for shaping the root canals of primary teeth. How to cite this article: Prabhakar AR, Yavagal C, Dixit K, Naik SV. Reciprocating vs Rotary Instrumentation in Pediatric Endodontics: Cone Beam Computed Tomographic Analysis of Deciduous Root Canals using Two Single-File Systems. Int J Clin Pediatr Dent 2016;9(1):45-49.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richmond, Marshall C.; Harding, Samuel F.; Romero Gomez, Pedro DJ
The use of acoustic Doppler current profilers (ADCPs) for the characterization of flow conditions in the vicinity of both experimental and full scale marine hydrokinetic (MHK) turbines is becoming increasingly prevalent. The computation of a three dimensional velocity measurement from divergent acoustic beams requires the assumption that the flow conditions are homogeneous between all beams at a particular axial distance from the instrument. In the near wake of MHK devices, the mean fluid motion is observed to be highly spatially dependent as a result of torque generation and energy extraction. This paper examines the performance of ADCP measurements in such scenarios through the modelling of a virtual ADCP (VADCP) instrument in the velocity field in the wake of an MHK turbine resolved using unsteady computational fluid dynamics (CFD). This is achieved by sampling the CFD velocity field at equivalent locations to the sample bins of an ADCP and performing the coordinate transformation from beam coordinates to instrument coordinates and finally to global coordinates. The error in the mean velocity calculated by the VADCP relative to the reference velocity along the instrument axis is calculated for a range of instrument locations and orientations. The stream-wise velocity deficit and tangential swirl velocity caused by the rotor rotation lead to significant misrepresentation of the true flow velocity profiles by the VADCP, with the most significant errors in the transverse (cross-flow) velocity direction.
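The homogeneity assumption at the heart of the beam-to-instrument transformation can be made explicit with the standard four-beam (Janus) conversion. Beam numbering and sign conventions vary between manufacturers, so the sketch below is one common convention rather than the instrument-specific matrices used in the study.

```python
import numpy as np

def beams_to_instrument(b, theta_deg=20.0):
    """Convert four Janus-beam radial velocities b = (b1, b2, b3, b4) to
    instrument-frame (u, v, w) components for a beam half-angle theta.
    The formulas assume homogeneous flow across the beams at each range
    bin, which is exactly the assumption that breaks down in a turbine
    wake. One common convention; signs and numbering vary by vendor."""
    th = np.radians(theta_deg)
    b1, b2, b3, b4 = b
    u = (b1 - b2) / (2.0 * np.sin(th))
    v = (b4 - b3) / (2.0 * np.sin(th))
    w = -(b1 + b2 + b3 + b4) / (4.0 * np.cos(th))
    err = (b1 + b2 - b3 - b4) / (2.0 * np.sqrt(2.0) * np.sin(th))  # error velocity
    return u, v, w, err
```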
Chamberlin, Scott A; Moore, Alan D; Parks, Kelly
2017-09-01
Student affect plays a considerable role in mathematical problem solving performance, yet is rarely formally assessed. In this manuscript, an instrument and its properties are discussed to give educational psychologists the opportunity to assess student affect. The study was conducted to norm the CAIMPS (instrument) with gifted students, thereby informing educational psychologists of the process and the instrument's properties. The sample comprised 160 middle-grade (grades 7 and 8) students in the United States identified as gifted. After completing one of four model-eliciting activities (MEAs), all participants completed the CAIMPS (Chamberlin Affective Instrument for Mathematical Problem Solving). Data were analysed using confirmatory factor analysis to ascertain the number of factors in the instrument. The normed fit index (0.6939), non-normed fit index (0.8072), and root mean square error of approximation (.076) were at or near acceptable levels. Alpha levels for the factors were also robust (.637-.923). The data suggest that the instrument is a good fit for use with middle-grade mathematics students when solving problems. Perhaps the most impressive characteristic of the instrument was that the four factors, AVI (anxiety, value, and interest), SS (self-efficacy and self-esteem), ASP (aspiration), and ANX (anxiety), did not correlate highly with one another, which defies previous hypotheses in educational psychology. © 2017 The British Psychological Society.
Analysis of Lidar Remote Sensing Concepts
NASA Technical Reports Server (NTRS)
Spiers, Gary D.
1999-01-01
Line-of-sight velocity and measurement position sensitivity analyses for an orbiting coherent Doppler lidar are developed and applied to two lidars, one with a nadir angle of 30 deg in a 300 km altitude, 58 deg inclination orbit and the second a 45 deg nadir angle instrument in an 833 km altitude, 89 deg inclination orbit. The influence of orbit-related effects on the backscatter sensitivity of a coherent Doppler lidar is also discussed. Draft performance estimates, error budgets, and payload accommodation requirements for the SPARCLE (Space Readiness Coherent Lidar Experiment) instrument were also developed and documented.
Alcohol consumption, beverage prices and measurement error.
Young, Douglas J; Bielinska-Kwapisz, Agnieszka
2003-03-01
Alcohol price data collected by the American Chamber of Commerce Researchers Association (ACCRA) have been widely used in studies of alcohol consumption and related behaviors. A number of problems with these data suggest that they contain substantial measurement error, which biases conventional statistical estimators toward a finding of little or no effect of prices on behavior. We test for measurement error, assess the magnitude of the bias and provide an alternative estimator that is likely to be superior. The study utilizes data on per capita alcohol consumption across U.S. states and the years 1982-1997. State and federal alcohol taxes are used as instrumental variables for prices. Formal tests strongly confirm the hypothesis of measurement error. Instrumental variable estimates of the price elasticity of demand range from -0.53 to -1.24. These estimates are substantially larger in absolute value than ordinary least squares estimates, which sometimes are not significantly different from zero or even positive. The ACCRA price data are substantially contaminated with measurement error, but using state and federal taxes as instrumental variables mitigates the problem.
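The instrumental-variable correction described above amounts to two-stage least squares with the tax as the instrument for the mismeasured price. A minimal sketch with a single regressor and an intercept, whereas the published models also include additional controls:

```python
import numpy as np

def iv_price_elasticity(log_q, log_p, tax):
    """Two-stage least squares for the price elasticity of demand, using an
    alcohol tax as the instrument for the error-prone price series."""
    n = len(log_q)
    Z = np.column_stack([np.ones(n), tax])      # instrument set
    X = np.column_stack([np.ones(n), log_p])    # mismeasured regressor
    # stage 1: project the prices onto the instrument set
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    # stage 2: regress consumption on the fitted prices
    beta = np.linalg.lstsq(X_hat, log_q, rcond=None)[0]
    return beta[1]                              # elasticity estimate
```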
Measurement-based analysis of error latency. [in computer operating system
NASA Technical Reports Server (NTRS)
Chillarege, Ram; Iyer, Ravishankar K.
1987-01-01
This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.
The Calibration and error analysis of Shallow water (less than 100m) Multibeam Echo-Sounding System
NASA Astrophysics Data System (ADS)
Lin, M.
2016-12-01
Multibeam echo-sounders (MBES) have been developed to gather bathymetric and acoustic data for more efficient and more accurate mapping of the oceans. This gain in efficiency does not come without drawbacks: the finer the resolution of a remote sensing instrument, the harder it is to calibrate, and this is the case for multibeam echo-sounding systems. We are no longer dealing with sounding lines between which the bathymetry must be interpolated to produce consistent representations of the seafloor; we now need to match together strips (swaths) of totally ensonified seabed. As a consequence, misalignment and time-lag problems emerge as artifacts in the bathymetry from adjacent or overlapping swaths, particularly when operating in shallow water. More importantly, one must still verify that the bathymetric data meet the accuracy requirements. This paper summarizes the system integration involved with MBES and identifies the various sources of error pertaining to shallow water surveys (100 m and less). A systematic method for the calibration of shallow water MBES is proposed and presented as a set of field procedures. The procedures aim at detecting, quantifying, and correcting systematic instrumental and installation errors; calibrating for variations of the speed of sound in the water column, which are natural in origin, is not addressed in this document. The data used in calibration are compared against International Hydrographic Organization (IHO) and other related standards. The aim is to establish a model for the survey area that calibrates the errors due to the instruments: a patch test procedure is constructed to identify the possible sources of error in the sounding data and to calculate the correction values needed to compensate for them. In general, the problem to be solved is the patch test's four corrections in the Hypack system: (1) roll, (2) GPS latency, (3) pitch, and (4) yaw. Because these four corrections affect each other, each survey line is run to calibrate them; the GPS latency correction synchronizes the GPS with the echo sounder. With this procedure, future studies of shallower portions of an area can obtain more accurate sounding values and support more detailed research.
Sources of Response Bias in Older Ethnic Minorities: A Case of Korean American Elderly
Kim, Miyong T.; Ko, Jisook; Yoon, Hyunwoo; Kim, Kim B.; Jang, Yuri
2015-01-01
The present study was undertaken to investigate potential sources of response bias in empirical research involving older ethnic minorities and to identify prudent strategies to reduce those biases, using Korean American elderly (KAE) as an example. Data were obtained from three independent studies of KAE (N=1,297; age ≥60) in three states (Florida, New York, and Maryland) from 2000 to 2008. Two common measures, Pearlin’s Mastery Scale and the CES-D scale, were selected for a series of psychometric tests based on classical measurement theory. Survey items were analyzed in depth, using psychometric properties generated from both exploratory factor analysis and confirmatory factor analysis as well as correlational analysis. Two types of potential sources of bias were identified as the most significant contributors to increases in error variances for these psychological instruments. Error variances were most prominent when (1) items were not presented in a manner that was culturally or contextually congruent with respect to the target population and/or (2) the response anchors for items were mixed (e.g., positive vs. negative). The systemic patterns and magnitudes of the biases were also cross-validated for the three studies. The results demonstrate sources and impacts of measurement biases in studies of older ethnic minorities. The identified response biases highlight the need for re-evaluation of current measurement practices, which are based on traditional recommendations that response anchors should be mixed or that the original wording of instruments should be rigidly followed. Specifically, systematic guidelines for accommodating cultural and contextual backgrounds into instrument design are warranted. PMID:26049971
Rectifying calibration error of Goldmann applanation tonometer is easy!
Choudhari, Nikhil S; Moorthy, Krishna P; Tungikar, Vinod B; Kumar, Mohan; George, Ronnie; Rao, Harsha L; Senthil, Sirisha; Vijaya, Lingam; Garudadri, Chandra Sekhar
2014-11-01
Purpose: The Goldmann applanation tonometer (GAT) is the current gold standard tonometer. However, its calibration error is common and can go unnoticed in clinics, and repair by the manufacturer has limitations. The purpose of this report is to describe a self-taught technique for rectifying the calibration error of GAT. Materials and Methods: Twenty-nine slit-lamp-mounted Haag-Streit Goldmann tonometers (Model AT 900 C/M; Haag-Streit, Switzerland) were included in this cross-sectional interventional pilot study. The technique of rectification of calibration error of the tonometer involved cleaning and lubrication of the instrument, followed by alignment of the weights when lubrication alone did not suffice. We followed the South East Asia Glaucoma Interest Group's definition of calibration error tolerance (acceptable GAT calibration error within ±2, ±3 and ±4 mm Hg at the 0, 20 and 60 mm Hg testing levels, respectively). Results: Twelve out of 29 (41.3%) GATs were out of calibration. The range of positive and negative calibration error at the clinically most important 20 mm Hg testing level was 0.5 to 20 mm Hg and -0.5 to -18 mm Hg, respectively. Cleaning and lubrication alone sufficed to rectify the calibration error of 11 (91.6%) faulty instruments. Only one (8.3%) faulty GAT required alignment of the counter-weight. Conclusions: Rectification of calibration error of GAT is possible in-house. Cleaning and lubrication of GAT can be carried out even by eye care professionals and may suffice to rectify calibration error in the majority of faulty instruments. Such an exercise may drastically reduce the downtime of the gold standard tonometer.
Pre-Test Assessment of the Upper Bound of the Drag Coefficient Repeatability of a Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Ulbrich, N.; L'Esperance, A.
2017-01-01
A new method is presented that computes a pre-test estimate of the upper bound of the drag coefficient repeatability of a wind tunnel model. This upper bound is a conservative estimate of the precision error of the drag coefficient. For clarity, precision error contributions associated with the measurement of the dynamic pressure are analyzed separately from those that are associated with the measurement of the aerodynamic loads. The upper bound is computed by using information about the model, the tunnel conditions, and the balance in combination with an estimate of the expected output variations as input. The model information consists of the reference area and an assumed angle of attack. The tunnel conditions are described by the Mach number and the total pressure or unit Reynolds number. The balance inputs are the partial derivatives of the axial and normal force with respect to all balance outputs. Finally, an empirical output variation of 1.0 microV/V is used to relate both random instrumentation and angle measurement errors to the precision error of the drag coefficient. Results of the analysis are reported by plotting the upper bound of the precision error versus the tunnel conditions. The analysis shows that the influence of the dynamic pressure measurement error on the precision error of the drag coefficient is often small when compared with the influence of errors that are associated with the load measurements. Consequently, the sensitivities of the axial and normal force gages of the balance have a significant influence on the overall magnitude of the drag coefficient's precision error. Therefore, results of the error analysis can be used for balance selection purposes as the drag prediction characteristics of balances of similar size and capacities can objectively be compared. Data from two wind tunnel models and three balances are used to illustrate the assessment of the precision error of the drag coefficient.
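A sketch of how the load-measurement part of such an upper bound can be assembled from the stated inputs: the assumed 1.0 microV/V output variation is propagated through the balance sensitivities into wind-axis drag and combined, here in quadrature, then normalized by dynamic pressure and reference area. The exact combination rule and the dynamic-pressure term of the paper are not reproduced.

```python
import numpy as np

def cd_precision_bound(dA_dR, dN_dR, alpha_rad, q_pa, s_m2, dR=1.0e-6):
    """Load-measurement contribution to a drag-coefficient precision bound:
    propagate an assumed output variation dR (V/V) through the balance
    sensitivities dA/dR_i (axial) and dN/dR_i (normal) into wind-axis drag
    and combine the per-gage contributions in quadrature."""
    dA_dR = np.asarray(dA_dR, float)
    dN_dR = np.asarray(dN_dR, float)
    d_drag = (dA_dR * np.cos(alpha_rad) + dN_dR * np.sin(alpha_rad)) * dR
    return np.sqrt(np.sum(d_drag ** 2)) / (q_pa * s_m2)
```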
A High-Precision Instrument for Mapping of Rotational Errors in Rotary Stages
Xu, W.; Lauer, K.; Chu, Y.; ...
2014-11-02
A rotational stage is a key component of every X-ray instrument capable of providing tomographic or diffraction measurements. To perform accurate three-dimensional reconstructions, runout errors due to imperfect rotation (e.g. circle of confusion) must be quantified and corrected. A dedicated instrument capable of full characterization and circle of confusion mapping in rotary stages down to the sub-10 nm level has been developed. A high-stability design, with an array of five capacitive sensors, allows simultaneous measurements of wobble, radial and axial displacements. The developed instrument has been used for characterization of two mechanical stages which are part of an X-ray microscope.
Double-Pulse Two-Micron IPDA Lidar Simulation for Airborne Carbon Dioxide Measurements
NASA Technical Reports Server (NTRS)
Refaat, Tamer F.; Singh, Upendra N.; Yu, Jirong; Petros, Mulugeta
2015-01-01
An advanced double-pulsed 2-micron integrated path differential absorption lidar has been developed at NASA Langley Research Center for measuring atmospheric carbon dioxide. The instrument utilizes a state-of-the-art 2-micron laser transmitter with tunable on-line wavelength and advanced receiver. Instrument modeling and airborne simulations are presented in this paper. Focusing on random errors, results demonstrate instrument capabilities of performing precise carbon dioxide differential optical depth measurement with less than 3% random error for single-shot operation from up to 11 km altitude. This study is useful for defining CO2 measurement weighting, instrument setting, validation and sensitivity trade-offs.
Liquid crystal point diffraction interferometer. Ph.D. Thesis - Arizona Univ., 1995
NASA Technical Reports Server (NTRS)
Mercer, Carolyn R.
1995-01-01
A new instrument, the liquid crystal point diffraction interferometer (LCPDI), has been developed for the measurement of phase objects. This instrument maintains the compact, robust design of Linnik's point diffraction interferometer (PDI) and adds to it phase-stepping capability for quantitative interferogram analysis. The result is a compact, simple-to-align, environmentally insensitive interferometer capable of accurately measuring optical wavefronts with very high data density and with automated data reduction. This dissertation describes the theory of both the PDI and liquid crystal phase control. The design considerations for the LCPDI are presented, including manufacturing considerations. The operation and performance of the LCPDI are discussed, including sections regarding alignment, calibration, and amplitude modulation effects. The LCPDI is then demonstrated using two phase objects: a defocus difference wavefront and a temperature distribution across a heated chamber filled with silicone oil. The measured results are compared to theoretical or independently measured results and show excellent agreement. A computer simulation of the LCPDI was performed to verify the source of an observed periodic phase measurement error. The error stems from intensity variations caused by dye molecules rotating within the liquid crystal layer. Methods are discussed for reducing this error, and algorithms are presented which do so; they are also useful for any phase-stepping interferometer that has unwanted intensity fluctuations, such as those caused by unregulated lasers.
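For context, the standard four-step phase-shifting formula that such phase-stepping instruments rely on is sketched below; the periodic error discussed above arises because dye-related intensity fluctuations violate the constant-bias and constant-modulation assumptions behind it. The dissertation's corrective algorithms are not reproduced here.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Standard four-step (90 degree) phase-shifting formula: the wrapped
    phase at each pixel from four interferograms recorded at phase steps of
    0, 90, 180 and 270 degrees. Assumes the bias and modulation are the same
    in all four frames, which unregulated intensity fluctuations violate."""
    return np.arctan2(i4 - i2, i1 - i3)
```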
NASA Technical Reports Server (NTRS)
Cancro, George J.; Tolson, Robert H.; Keating, Gerald M.
1998-01-01
The success of aerobraking by the Mars Global Surveyor (MGS) spacecraft was partly due to the analysis of MGS accelerometer data. Accelerometer data were used to determine the effect of the atmosphere on each orbit, to characterize the nature of the atmosphere, and to predict the atmosphere for future orbits. To interpret the accelerometer data, a data reduction procedure was developed to produce density estimations utilizing inputs from the spacecraft, the Navigation Team, and pre-mission aerothermodynamic studies. This data reduction procedure was based on the calculation of aerodynamic forces from the accelerometer data by considering acceleration due to gravity gradient, solar pressure, angular motion of the MGS, instrument bias, thruster activity, and a vibration component due to the motion of the damaged solar array. Methods were developed to calculate all of the acceleration components, including a 4-degree-of-freedom dynamics model used to gain a greater understanding of the damaged solar array. The total error inherent to the data reduction procedure was calculated as a function of altitude and density, considering contributions from ephemeris errors, errors in the force coefficients, and instrument errors due to bias and digitization. Comparing the results from this procedure to the data of other MGS teams has demonstrated that this procedure can quickly and accurately describe the density and vertical structure of the Martian upper atmosphere.
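Once the non-aerodynamic accelerations listed above have been removed, the core of such a density reconstruction is the drag equation solved for density; a minimal sketch under that simplifying assumption, with all inputs supplied by the caller:

```python
def density_from_drag(accel_m_s2, mass_kg, v_rel_m_s, cd, area_m2):
    """Atmospheric density from the drag acceleration sensed along the
    velocity direction: a = 0.5 * rho * v^2 * Cd * A / m, solved for rho.
    A simplified relation; the MGS procedure first removed gravity-gradient,
    solar-pressure, thruster, bias and solar-array vibration contributions."""
    return 2.0 * mass_kg * accel_m_s2 / (cd * area_m2 * v_rel_m_s ** 2)

# toy numbers only, not MGS values
print(density_from_drag(accel_m_s2=0.002, mass_kg=760.0,
                        v_rel_m_s=4700.0, cd=2.0, area_m2=17.0))
```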
Schultze, A E; Irizarry, A R
2017-02-01
Veterinary clinical pathologists are well positioned via education and training to assist in investigations of unexpected results or increased variation in clinical pathology data. Errors in testing and unexpected variability in clinical pathology data are sometimes referred to as "laboratory errors." These alterations may occur in the preanalytical, analytical, or postanalytical phases of studies. Most of the errors or variability in clinical pathology data occur in the preanalytical or postanalytical phases. True analytical errors occur within the laboratory and are usually the result of operator or instrument error. Analytical errors are often ≤10% of all errors in diagnostic testing, and the frequency of these types of errors has decreased in the last decade. Analytical errors and increased data variability may result from instrument malfunctions, inability to follow proper procedures, undetected failures in quality control, sample misidentification, and/or test interference. This article (1) illustrates several different types of analytical errors and situations within laboratories that may result in increased variability in data, (2) provides recommendations regarding prevention of testing errors and techniques to control variation, and (3) provides a list of references that describe and advise how to deal with increased data variability.
Investigation of several aspects of LANDSAT-4 data quality
NASA Technical Reports Server (NTRS)
Wrigley, R. C. (Principal Investigator)
1983-01-01
No insurmountable problems in change detection analysis were found when comparing portions of scenes collected simultaneously by the LANDSAT 4 MSS and either LANDSAT 2 or 3. The cause of the periodic noise in LANDSAT 4 MSS images, which had an RMS value of approximately 2 DN, should be corrected in the LANDSAT D instrument before its launch. Analysis of the P-tape of the Arkansas scene shows bands within the same focal plane very well registered, except for the thermal band, which was misregistered by approximately three 28.5 meter pixels in both directions. It is possible to derive tight confidence bounds for the registration errors. Preliminary analyses of the Sacramento and Arkansas scenes reveal a very high degree of consistency with earlier results for bands 3 vs 1, 3 vs 4, and 3 vs 5. Results are presented in table form. It is suggested that attention be given to the standard deviations of registration errors to judge whether or not they will be within specification once any known mean registration errors are corrected. Techniques used for MTF analysis of a Washington scene produced noisy results.
Agogo, George O; van der Voet, Hilko; van 't Veer, Pieter; Ferrari, Pietro; Muller, David C; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A; Boshuizen, Hendriek C
2016-10-13
Measurement error in self-reported dietary intakes is known to bias the association between dietary intake and a health outcome of interest such as risk of a disease. The association can be distorted further by mismeasured confounders, leading to invalid results and conclusions. It is, however, difficult to adjust for the bias in the association when there is no internal validation data. We proposed a method to adjust for the bias in the diet-disease association (hereafter, association), due to measurement error in dietary intake and a mismeasured confounder, when there is no internal validation data. The method combines prior information on the validity of the self-report instrument with the observed data to adjust for the bias in the association. We compared the proposed method with the method that ignores the confounder effect, and with the method that ignores measurement errors completely. We assessed the sensitivity of the estimates to various magnitudes of measurement error, error correlations and uncertainty in the literature-reported validation data. We applied the methods to fruits and vegetables (FV) intakes, cigarette smoking (confounder) and all-cause mortality data from the European Prospective Investigation into Cancer and Nutrition study. Using the proposed method resulted in about four times increase in the strength of association between FV intake and mortality. For weakly correlated errors, measurement error in the confounder minimally affected the hazard ratio estimate for FV intake. The effect was more pronounced for strong error correlations. The proposed method permits sensitivity analysis on measurement error structures and accounts for uncertainties in the reported validity coefficients. The method is useful in assessing the direction and quantifying the magnitude of bias in the association due to measurement errors in the confounders.
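As a hedged, stripped-down illustration of how literature-reported validity information can correct an attenuated association, the sketch below applies a univariate regression-calibration de-attenuation, dividing the observed log-hazard slope by an attenuation factor built from a reported validity coefficient. The full method in the abstract additionally handles a mismeasured confounder, error correlations, and uncertainty in the validity data; the numbers and the factor's exact form here are illustrative assumptions.

    def deattenuated_slope(beta_observed, validity_rho, sd_true, sd_selfreport):
        """Univariate regression-calibration correction: divide the observed
        slope by the attenuation factor lambda = rho * sd_true / sd_selfreport."""
        attenuation = validity_rho * sd_true / sd_selfreport
        return beta_observed / attenuation

    # Illustrative values only (not from the EPIC analysis).
    print(deattenuated_slope(beta_observed=-0.05, validity_rho=0.45,
                             sd_true=80.0, sd_selfreport=140.0))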
Ackermann, M.; Ajello, M.; Albert, A.; ...
2012-10-12
The Fermi Large Area Telescope (Fermi-LAT, hereafter LAT), the primary instrument on the Fermi Gamma-ray Space Telescope (Fermi) mission, is an imaging, wide field-of-view, high-energy γ-ray telescope, covering the energy range from 20 MeV to more than 300 GeV. During the first years of the mission, the LAT team has gained considerable insight into the in-flight performance of the instrument. Accordingly, we have updated the analysis used to reduce LAT data for public release as well as the instrument response functions (IRFs), the description of the instrument performance provided for data analysis. In this study, we describe the effects that motivated these updates. Furthermore, we discuss how we originally derived IRFs from Monte Carlo simulations and later corrected those IRFs for discrepancies observed between flight and simulated data. We also give details of the validations performed using flight data and quantify the residual uncertainties in the IRFs. In conclusion, we describe techniques the LAT team has developed to propagate those uncertainties into estimates of the systematic errors on common measurements such as fluxes and spectra of astrophysical sources.
Hemkens, Lars G; Hilden, Kristian M; Hartschen, Stephan; Kaiser, Thomas; Didjurgeit, Ulrike; Hansen, Roland; Bender, Ralf; Sawicki, Peter T
2008-08-01
In addition to the metrological quality of international normalized ratio (INR) monitoring devices used in patients' self-management of long-term anticoagulation, the effectiveness of self-monitoring with such devices has to be evaluated under real-life conditions with a focus on clinical implications. One approach to evaluating the clinical significance of inaccuracies is error-grid analysis, as already established in self-monitoring of blood glucose. Two anticoagulation monitors were compared in a real-life setting, and a novel error-grid instrument for oral anticoagulation was evaluated. In a randomized crossover study, 16 patients performed self-management of anticoagulation using the INRatio and the CoaguChek S systems. The main outcome measures were clinically relevant INR differences according to established criteria and to the error-grid approach. A lower rate of clinically relevant disagreements according to Anderson's criteria was found with CoaguChek S than with INRatio, without statistical significance (10.77% vs. 12.90%; P = 0.787). Using the error grid, we found broadly consistent results: more measurement pairs with discrepancies of no or low clinical relevance were found with CoaguChek S, whereas with INRatio we found more differences of moderate clinical relevance. A high rate of patient satisfaction with both point-of-care devices was found, with only marginal differences. The investigated point-of-care devices are shown to be fundamentally appropriate for monitoring the INR. The error grid is useful for comparing monitoring methods with a focus on clinical relevance under real-life conditions, beyond assessing pure metrological quality, but we emphasize that additional trials using this instrument with larger patient populations are needed to detect differences in clinically relevant disagreements.
NASA Technical Reports Server (NTRS)
1972-01-01
This report contains the results of additional studies which were conducted to confirm the conclusions of the MSC Mission Report and contains analyses which were not completed in time to meet the mission report deadline. The LM IMU data were examined during the lunar descent and ascent phases. Most of the PGNCS descent absolute velocity error was caused by platform misalignments. PGNCS radial velocity divergence from AGS during the early part of descent was partially caused by PGNCS gravity computation differences from AGS. The remainder of the differences between PGNCS and AGS velocity were easily attributable to attitude reference alignment differences and tolerable instrument errors. For ascent, the PGNCS radial velocity error at insertion was examined. The total error of 10.8 ft/sec was well within mission constraints but larger than expected. Of the total error, 2.30 ft/sec was PIPA bias error, which was suspected to exist before lunar liftoff. The remaining 8.5 ft/sec is most probably accounted for by a large pre-liftoff platform misalignment.
Gutiérrez, Alfonso; Prieto, Iván; Cancela, José M.
2009-01-01
The purpose of this study is to provide a tool, based on the knowledge of technical errors, which helps to improve the teaching and learning process of the Uki Goshi technique. With this aim, we set out to determine the most frequent errors made by 44 students when performing this technique and how these mistakes relate. In order to do so, an observational analysis was carried out using the OSJUDO-UKG instrument and the data were registered using Match Vision Studio (Castellano, Perea, Alday and Hernández, 2008). The results, analyzed through descriptive statistics, show that the absence of a correct initial unbalancing movement (45.5%), the lack of proper right-arm pull (56.8%), not blocking the faller's body (Uke) against the thrower's (Tori) hip (54.5%), and throwing the Uke through the Tori's side (72.7%) are the most usual mistakes. Through the sequential analysis of T-Patterns obtained with the THÈME program (Magnusson, 1996, 2000), we have concluded that not blocking the body with the Tori's hip provokes the Uke's throw through the Tori's side during the final phase of the technique (95.8%), and positioning the right arm on the dorsal region of the Uke's back during the Tsukuri entails the absence of a subsequent pull of the Uke's body (73.3%). Key Points: In this study, the most frequent errors in the performance of the Uki Goshi technique have been determined and the existing relations among these mistakes have been shown through T-Patterns. The OSJUDO-UKG is an observation instrument for detecting mistakes in the aforementioned technique. The results show that those mistakes related to the initial unbalancing movement and the main driving action of the technique are the most frequent. The use of T-Patterns turns out to be effective in order to obtain the most important relations among the observed errors. PMID:24474885
Present status of aircraft instruments
NASA Technical Reports Server (NTRS)
1932-01-01
This report gives a brief description of the present state of development and of the performance characteristics of instruments included in the following group: speed instruments, altitude instruments, navigation instruments, power-plant instruments, oxygen instruments, instruments for aerial photography, fog-flying instruments, general problems, summary of instrument and research problems. The items considered under performance include sensitivity, scale errors, effects of temperature and pressure, effects of acceleration and vibration, time lag, damping, leaks, elastic defects, and friction.
Correction of Measured Taxicab Exhaust Emission Data Based on the CMEM Model
NASA Astrophysics Data System (ADS)
Li, Q.; Jia, T.
2017-09-01
Carbon dioxide emissions from urban road traffic mainly come from automobile exhaust. However, the carbon dioxide emission measurements obtained from the instruments are unreliable due to time delay error. In order to improve the reliability of the data, we propose a method to correct the measured vehicle carbon dioxide emissions based on the CMEM model. Firstly, a synthetic time series of carbon dioxide emissions is simulated with the CMEM model and GPS velocity data. Then, taking the simulated data as the control group, the time delay error of the measured carbon dioxide emissions can be estimated by asynchronous correlation analysis, and the outliers can be automatically identified and corrected using the principle of the DTW algorithm. Taking the taxi trajectory data of Wuhan as an example, the results show that the correlation coefficient between the measured data and the control group data can be improved from 0.52 to 0.59 by mitigating the systematic time delay error; furthermore, by adjusting the outliers, which account for 4.73% of the total data, the correlation coefficient can rise to 0.63, which suggests strong correlation. The construction of low-carbon traffic has become a focus of local government, and in response to calls for energy saving and emission reduction, the distribution of carbon emissions from motor vehicle exhaust was studied; the corrected data can therefore be used for further air quality analysis.
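A minimal sketch of the asynchronous-correlation step described above: the measured series is shifted over a range of candidate lags against the CMEM-simulated control series, and the lag that maximizes the correlation coefficient is taken as the systematic time delay. The DTW-based outlier correction is not shown, and the function and variable names are illustrative, not from the paper.

    import numpy as np

    def estimate_delay(measured, simulated, max_lag):
        """Return the lag (in samples) that maximizes the correlation between
        the measured CO2 series and the simulated control series."""
        best_lag, best_r = 0, -np.inf
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                a, b = measured[lag:], simulated[:len(simulated) - lag]
            else:
                a, b = measured[:lag], simulated[-lag:]
            r = np.corrcoef(a, b)[0, 1]
            if r > best_r:
                best_lag, best_r = lag, r
        return best_lag, best_r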
NASA Astrophysics Data System (ADS)
Wetzel, Angela Payne
Previous systematic reviews indicate a lack of reporting of reliability and validity evidence in subsets of the medical education literature. Psychology and general education reviews of factor analysis also indicate gaps between current and best practices; yet, a comprehensive review of exploratory factor analysis in instrument development across the continuum of medical education had not been previously identified. Therefore, the purpose of this study was a critical review of instrument development articles employing exploratory factor or principal component analysis published in medical education (2006-2010) to describe and assess the reporting of methods and validity evidence based on the Standards for Educational and Psychological Testing and factor analysis best practices. Data extracted from 64 articles, measuring a variety of constructs published throughout the peer-reviewed medical education literature, indicate significant errors in the translation of exploratory factor analysis best practices to current practice. Further, techniques for establishing validity evidence tend to derive from a limited scope of methods, including reliability statistics to support internal structure and support for test content. Instruments reviewed for this study lacked supporting evidence based on relationships with other variables and response process, and evidence based on consequences of testing was not evident. Findings suggest a need for further professional development within the medical education researcher community related to (1) appropriate factor analysis methodology and reporting and (2) the importance of pursuing multiple sources of reliability and validity evidence to construct a well-supported argument for the inferences made from the instrument. Medical education researchers and educators should be cautious in adopting instruments from the literature and carefully review available evidence. Finally, editors and reviewers are encouraged to recognize this gap in best practices and subsequently to promote instrument development research that is more consistent through the peer-review process.
Ying, Gui-shuang; Maguire, Maureen; Quinn, Graham; Kulp, Marjean Taylor; Cyert, Lynn
2011-12-28
To evaluate, by receiver operating characteristic (ROC) analysis, the accuracy of three refractive error screening instruments in detecting eye conditions among 3- to 5-year-old Head Start preschoolers, and to evaluate differences in accuracy between instruments and screeners and by age of the child. Children participating in the Vision In Preschoolers (VIP) Study (n = 4040) had screening tests administered by pediatric eye care providers (phase I) or by both nurse and lay screeners (phase II). Noncycloplegic retinoscopy (NCR), the Retinomax Autorefractor (Nikon, Tokyo, Japan), and the SureSight Vision Screener (SureSight, Alpharetta, GA) were used in phase I, and the Retinomax and SureSight were used in phase II. Pediatric eye care providers performed a standardized eye examination to identify amblyopia, strabismus, significant refractive error, and reduced visual acuity. The accuracy of the screening tests was summarized by the area under the ROC curve (AUC) and compared between instruments and screeners and by age group. The three screening tests had a high AUC for all categories of screening personnel. The AUC for detecting any VIP-targeted condition was 0.83 for NCR, 0.83 (phase I) to 0.88 (phase II) for Retinomax, and 0.86 (phase I) to 0.87 (phase II) for SureSight. The AUC was 0.93 to 0.95 for detecting group 1 (most severe) conditions and did not differ between instruments or screeners or by age of the child. NCR, Retinomax, and SureSight had similar and high accuracy in detecting vision disorders in preschoolers across all types of screeners and ages of children, consistent with previously reported results at specificity levels of 90% and 94%.
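For readers unfamiliar with the summary statistic used here, the sketch below shows how an area under the ROC curve is computed from a continuous screening score and the gold-standard examination outcome; the tiny arrays are invented for illustration and have nothing to do with the VIP data.

    from sklearn.metrics import roc_auc_score

    # 1 = examination found a targeted condition, 0 = no condition (illustrative)
    truth = [0, 0, 1, 1, 0, 1, 0, 1]
    # continuous screening score from the instrument (illustrative)
    score = [0.10, 0.30, 0.70, 0.80, 0.20, 0.60, 0.40, 0.90]

    print(roc_auc_score(truth, score))  # area under the ROC curve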
NASA Astrophysics Data System (ADS)
Martinez, P.; Kasper, M.; Costille, A.; Sauvage, J. F.; Dohlen, K.; Puget, P.; Beuzit, J. L.
2013-06-01
Context. Observing sequences have shown that the major noise source limitation in high-contrast imaging is the presence of quasi-static speckles. The timescale on which quasi-static speckles evolve is determined by various factors, mechanical or thermal deformations among others. Aims: Understanding these time-variable instrumental speckles and, especially, their interaction with other aberrations, referred to as the pinning effect, is paramount for the search for faint stellar companions. The temporal evolution of quasi-static speckles is, for instance, required to quantify the gain expected when using angular differential imaging (ADI) and to determine the interval on which speckle nulling techniques must be carried out. Methods: Following an early analysis of a time series of adaptively corrected, coronagraphic images obtained under laboratory conditions with the high-order test bench (HOT) at ESO Headquarters, we confirm our results with new measurements carried out with the SPHERE instrument during its final test phase in Europe. The analysis of the residual speckle pattern in both direct and differential coronagraphic images enables the characterization of the temporal stability of quasi-static speckles. Data were obtained in a thermally actively controlled environment reproducing realistic conditions encountered at the telescope. Results: The temporal evolution of the quasi-static wavefront error exhibits a linear power law, which can be used to model quasi-static speckle evolution in the context of forthcoming high-contrast imaging instruments, with implications for instrumentation (design, observing strategies, data reduction). Such a model can be used, for instance, to derive the timescale on which non-common path aberrations must be sensed and corrected. We found in our data that the quasi-static wavefront error increases by ~0.7 Å per minute.
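A hedged sketch of how such a linear growth law can be extracted from a laboratory time series: fit a straight line to the quasi-static wavefront error versus elapsed time and read off the slope. The synthetic data below simply assume a ~0.7 Å per minute growth rate plus noise to mimic the quoted result; they are not the HOT or SPHERE measurements.

    import numpy as np

    rng = np.random.default_rng(0)
    t_min = np.arange(0.0, 60.0, 5.0)                     # elapsed time, minutes
    wfe = 0.7 * t_min + rng.normal(0.0, 1.5, t_min.size)  # quasi-static WFE, Angstroms

    slope, intercept = np.polyfit(t_min, wfe, 1)          # least-squares linear fit
    print(f"quasi-static WFE growth ~ {slope:.2f} Angstrom/min")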
Poirier, Therese I; Pailden, Junvie; Jhala, Ray; Ronald, Katie; Wilhelm, Miranda; Fan, Jingyang
2017-04-01
Objectives. To conduct a prospective evaluation for effectiveness of an error disclosure assessment tool and video recordings to enhance student learning and metacognitive skills while assessing the IPEC competencies. Design. The instruments for assessing performance (planning, communication, process, and team dynamics) in interprofessional error disclosure were developed. Student self-assessment of performance before and after viewing the recordings of their encounters were obtained. Faculty used a similar instrument to conduct real-time assessments. An instrument to assess achievement of the Interprofessional Education Collaborative (IPEC) core competencies was developed. Qualitative data was reviewed to determine student and faculty perceptions of the simulation. Assessment. The interprofessional simulation training involved a total of 233 students (50 dental, 109 nursing and 74 pharmacy). Use of video recordings made a significant difference in student self-assessment for communication and process categories of error disclosure. No differences in student self-assessments were noted among the different professions. There were differences among the family member affects for planning and communication for both pre-video and post-video data. There were significant differences between student self-assessment and faculty assessment for all paired comparisons, except communication in student post-video self-assessment. Students' perceptions of achievement of the IPEC core competencies were positive. Conclusion. The use of assessment instruments and video recordings may have enhanced students' metacognitive skills for assessing performance in interprofessional error disclosure. The simulation training was effective in enhancing perceptions on achievement of IPEC core competencies. This enhanced assessment process appeared to enhance learning about the skills needed for interprofessional error disclosure.
Nano-level instrumentation for analyzing the dynamic accuracy of a rolling element bearing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Z.; Hong, J.; Zhang, J.
2013-12-15
The rotational performance of high-precision rolling bearings is fundamental to the overall accuracy of complex mechanical systems. A nano-level instrument to analyze the rotational accuracy of high-precision bearings of machine tools under working conditions was developed. In this instrument, a high-precision (error motion < 0.15 μm) and high-stiffness (2600 N axial loading capacity) aerostatic spindle was applied to spin the test bearing. Operating conditions could be simulated effectively because of the large axial loading capacity. An air-cylinder, controlled by a proportional pressure regulator, was applied to drive an air-bearing subjected to non-contact and precisely loaded axial forces. Apart from the axial loading and the rotation constraint, the five remaining degrees of freedom were completely unconstrained and uninfluenced by the instrument's structure. Dual capacitive displacement sensors with 10 nm resolution were applied to measure the error motion of the spindle using a double-probe error separation method. This enabled the separation of the spindle's error motion from the measurement results of the test bearing, which were measured using two orthogonal laser displacement sensors with 5 nm resolution. Finally, a Lissajous figure was used to evaluate the non-repetitive run-out (NRRO) of the bearing at different axial forces and speeds. The measurement results at various axial loadings and speeds showed the standard deviations of the measurements' repeatability and accuracy were less than 1% and 2%. Future studies will analyze the relationship between geometrical errors and NRRO, such as the ball diameter differences and the geometrical errors in the grooves of the rings.
Pailden, Junvie; Jhala, Ray; Ronald, Katie; Wilhelm, Miranda; Fan, Jingyang
2017-01-01
Objectives. To conduct a prospective evaluation for effectiveness of an error disclosure assessment tool and video recordings to enhance student learning and metacognitive skills while assessing the IPEC competencies. Design. The instruments for assessing performance (planning, communication, process, and team dynamics) in interprofessional error disclosure were developed. Student self-assessment of performance before and after viewing the recordings of their encounters were obtained. Faculty used a similar instrument to conduct real-time assessments. An instrument to assess achievement of the Interprofessional Education Collaborative (IPEC) core competencies was developed. Qualitative data was reviewed to determine student and faculty perceptions of the simulation. Assessment. The interprofessional simulation training involved a total of 233 students (50 dental, 109 nursing and 74 pharmacy). Use of video recordings made a significant difference in student self-assessment for communication and process categories of error disclosure. No differences in student self-assessments were noted among the different professions. There were differences among the family member affects for planning and communication for both pre-video and post-video data. There were significant differences between student self-assessment and faculty assessment for all paired comparisons, except communication in student post-video self-assessment. Students’ perceptions of achievement of the IPEC core competencies were positive. Conclusion. The use of assessment instruments and video recordings may have enhanced students’ metacognitive skills for assessing performance in interprofessional error disclosure. The simulation training was effective in enhancing perceptions on achievement of IPEC core competencies. This enhanced assessment process appeared to enhance learning about the skills needed for interprofessional error disclosure. PMID:28496274
Nano-level instrumentation for analyzing the dynamic accuracy of a rolling element bearing.
Yang, Z; Hong, J; Zhang, J; Wang, M Y; Zhu, Y
2013-12-01
The rotational performance of high-precision rolling bearings is fundamental to the overall accuracy of complex mechanical systems. A nano-level instrument to analyze the rotational accuracy of high-precision bearings of machine tools under working conditions was developed. In this instrument, a high-precision (error motion < 0.15 μm) and high-stiffness (2600 N axial loading capacity) aerostatic spindle was applied to spin the test bearing. Operating conditions could be simulated effectively because of the large axial loading capacity. An air-cylinder, controlled by a proportional pressure regulator, was applied to drive an air-bearing subjected to non-contact and precisely loaded axial forces. Apart from the axial loading and the rotation constraint, the five remaining degrees of freedom were completely unconstrained and uninfluenced by the instrument's structure. Dual capacitive displacement sensors with 10 nm resolution were applied to measure the error motion of the spindle using a double-probe error separation method. This enabled the separation of the spindle's error motion from the measurement results of the test bearing, which were measured using two orthogonal laser displacement sensors with 5 nm resolution. Finally, a Lissajous figure was used to evaluate the non-repetitive run-out (NRRO) of the bearing at different axial forces and speeds. The measurement results at various axial loadings and speeds showed the standard deviations of the measurements' repeatability and accuracy were less than 1% and 2%. Future studies will analyze the relationship between geometrical errors and NRRO, such as the ball diameter differences and the geometrical errors in the grooves of the rings.
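As an illustration of the final evaluation step, the sketch below forms the Lissajous-style orbit from the two orthogonal displacement signals and quantifies the non-repetitive run-out as the spread of the orbit radius; this is a generic simplification that assumes the spindle error motion has already been separated out, and it is not the exact metric or code used in the paper.

    import numpy as np

    def nrro_from_orbit(x, y):
        """Quantify NRRO from two orthogonal displacement signals: build the
        orbit about its center and report the peak-to-valley spread and the
        standard deviation of the orbit radius."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        r = np.hypot(x - x.mean(), y - y.mean())
        return r.max() - r.min(), r.std()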
NASA Technical Reports Server (NTRS)
Klinger, D. L.
1974-01-01
Models of the noise and dynamic characteristics of a gyro and an autocollimator at very small signal levels are presented. Measurements were evaluated using spectral techniques for distinguishing instrument noise from base motion. The experiment was constructed to measure the precession, due to relativistic effects, of an extremely precise earth-orbiting gyroscope. The design goal for nonrelativistic gyro drift is 0.001 arcsec per year. An analogous fixed-base simulator was used in developing methods of instrument error modeling and performance evaluation applicable to the relativity experiment sensors and other precision pointing instruments. Analysis of the autocollimator spectra uncovered the presence of a platform gimbal resonance. The source of the resonance was isolated to gimbal bearing elastic restraint properties most apparent at very small levels of motion. A model of these properties, which includes both elastic and Coulomb friction characteristics, is discussed, and a describing function is developed.
Kinnamon, Daniel D; Lipsitz, Stuart R; Ludwig, David A; Lipshultz, Steven E; Miller, Tracie L
2010-04-01
The hydration of fat-free mass, or hydration fraction (HF), is often defined as a constant body composition parameter in a two-compartment model and then estimated from in vivo measurements. We showed that the widely used estimator for the HF parameter in this model, the mean of the ratios of measured total body water (TBW) to fat-free mass (FFM) in individual subjects, can be inaccurate in the presence of additive technical errors. We then proposed a new instrumental variables estimator that accurately estimates the HF parameter in the presence of such errors. In Monte Carlo simulations, the mean of the ratios of TBW to FFM was an inaccurate estimator of the HF parameter, and inferences based on it had actual type I error rates more than 13 times the nominal 0.05 level under certain conditions. The instrumental variables estimator was accurate and maintained an actual type I error rate close to the nominal level in all simulations. When estimating and performing inference on the HF parameter, the proposed instrumental variables estimator should yield accurate estimates and correct inferences in the presence of additive technical errors, but the mean of the ratios of TBW to FFM in individual subjects may not.
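As a hedged illustration of the contrast drawn above, the sketch below computes both estimators for a proportional relationship y ≈ HF · x when x is observed with additive error: the naive mean of per-subject ratios and a generic instrumental-variables ratio of covariances. The choice of instrument z and the simulated numbers are purely illustrative and are not the estimator details or data from the paper.

    import numpy as np

    def mean_of_ratios(ffm_measured, tbw_measured):
        """Naive estimator: average of per-subject TBW/FFM ratios."""
        return np.mean(np.asarray(tbw_measured) / np.asarray(ffm_measured))

    def iv_estimate(z, ffm_measured, tbw_measured):
        """Instrumental-variables estimator: cov(z, TBW) / cov(z, FFM), where z
        is correlated with true fat-free mass but not with the technical errors."""
        z = np.asarray(z, float)
        return np.cov(z, tbw_measured)[0, 1] / np.cov(z, ffm_measured)[0, 1]

    # Tiny simulation with an assumed true hydration fraction of 0.73.
    rng = np.random.default_rng(1)
    ffm_true = rng.normal(50.0, 8.0, 500)
    z = ffm_true + rng.normal(0.0, 2.0, 500)           # illustrative instrument
    ffm_obs = ffm_true + rng.normal(0.0, 3.0, 500)     # additive technical error
    tbw_obs = 0.73 * ffm_true + rng.normal(0.0, 2.0, 500)
    print(mean_of_ratios(ffm_obs, tbw_obs), iv_estimate(z, ffm_obs, tbw_obs))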
Hyperspectral imaging spectroradiometer improves radiometric accuracy
NASA Astrophysics Data System (ADS)
Prel, Florent; Moreau, Louis; Bouchard, Robert; Bullis, Ritchie D.; Roy, Claude; Vallières, Christian; Levesque, Luc
2013-06-01
Reliable and accurate infrared characterization is necessary to measure the specific spectral signatures of aircraft and associated infrared countermeasure protections (i.e., flares). Infrared characterization is essential to improve countermeasure efficiency, improve friend-foe identification, and reduce the risk of friendly fire. Typical infrared characterization measurement setups include a variety of panchromatic cameras and spectroradiometers. Each instrument brings essential information: cameras measure the spatial distribution of targets, and spectroradiometers provide the spectral distribution of the emitted energy. However, combining separate instruments introduces possible radiometric errors and uncertainties that can be reduced with hyperspectral imagers, which measure both the spectral and spatial distribution of the energy at the same time, ensuring the temporal and spatial cohesion of the collected information. This paper presents a quantitative analysis of the main contributors to radiometric uncertainty and shows how a hyperspectral imager can reduce these uncertainties.
[Ultrasonic scissors. New vs resterilized instruments].
Gärtner, D; Münz, K; Hückelheim, E; Hesse, U
2008-02-01
The aim of this study was to compare reliability in handling and function of resterilized and single-use disposable ultrasonic scissors. In a prospective randomized study, the surgeon blindly tested new and resterilized ultrasonographic scissors. The parameters were force of activation, cutting effect, coagulation effect, error messages, and disturbing generator noise. Fifty-one new and 49 resterilized instruments in 94 operations were evaluated. The differences in force of activation, cutting effect, and coagulation were not significant. Error messages and disturbing noises were rare in both groups. Six new instruments and two resterilized instruments had to be exchanged because of problems during surgery. This study demonstrates comparable reliability in function and handling of resterilized and new ultrasonic scissors. The use of resterilized instruments leads to distinctly reduced costs and could contribute to efficiency in laparoscopic surgery.
Analysis of laser fluorosensor systems for remote algae detection and quantification
NASA Technical Reports Server (NTRS)
Browell, E. V.
1977-01-01
The development and performance of single- and multiple-wavelength laser fluorosensor systems for use in the remote detection and quantification of algae are discussed. The appropriate equation for the fluorescence power received by a laser fluorosensor system is derived in detail. Experimental development of a single wavelength system and a four wavelength system, which selectively excites the algae contained in the four primary algal color groups, is reviewed, and test results are presented. A comprehensive error analysis is reported which evaluates the uncertainty in the remote determination of the chlorophyll a concentration contained in algae by single- and multiple-wavelength laser fluorosensor systems. Results of the error analysis indicate that the remote quantification of chlorophyll a by a laser fluorosensor system requires optimum excitation wavelength(s), remote measurement of marine attenuation coefficients, and supplemental instrumentation to reduce uncertainties in the algal fluorescence cross sections.
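The report derives the received-fluorescence-power equation in detail; the sketch below implements only a schematic, simplified single-ended lidar fluorosensor form (transmitted power times a fluorescing-column term, telescope solid angle, receiver efficiency, and water-column attenuation). The functional form and parameter names here are an assumption for illustration and should not be taken as the exact equation derived in the report.

    import math

    def fluorescence_power(p_laser, n_chl, sigma_f, depth_eff,
                           area_rx, altitude, eff_rx, k_exc, k_fl):
        """Schematic received fluorescence power for an airborne fluorosensor:
        P_f ~ P_L * (n_chl * sigma_f * depth_eff) * (A_rx / H^2) * eta_rx
              * exp(-(k_exc + k_fl) * depth_eff)."""
        column_term = n_chl * sigma_f * depth_eff
        solid_angle = area_rx / altitude**2
        attenuation = math.exp(-(k_exc + k_fl) * depth_eff)
        return p_laser * column_term * solid_angle * eff_rx * attenuation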
Unassigned MS/MS Spectra: Who Am I?
Pathan, Mohashin; Samuel, Monisha; Keerthikumar, Shivakumar; Mathivanan, Suresh
2017-01-01
Recent advances in high-resolution tandem mass spectrometry (MS) have resulted in the accumulation of high-quality data. Paralleling these advances in instrumentation, bioinformatics software has been developed to analyze such quality datasets. In spite of these advances, data analysis in mass spectrometry still remains critical for protein identification. In addition, the complexity of the generated MS/MS spectra, the unpredictable nature of peptide fragmentation, sequence annotation errors, and posttranslational modifications have impeded the protein identification process. In a typical MS data analysis, about 60% of the MS/MS spectra remain unassigned. While some of these could be attributed to the low quality of the MS/MS spectra, a proportion can be classified as high quality. Further analysis may reveal how much of the unassigned MS spectra can be attributed to search space, sequence annotation errors, mutations, and/or posttranslational modifications. In this chapter, the tools used to identify proteins and ways to assign unassigned tandem MS spectra are discussed.
Pettiette, M T; Metzger, Z; Phillips, C; Trope, M
1999-04-01
Straightening of curved canals is one of the most common procedural errors in endodontic instrumentation. This problem is commonly encountered when dental students perform molar endodontics. The purpose of this study was to compare the effect of the type of instrument used by these students on the extent of straightening and on the incidence of other endodontic procedural errors. Nickel-titanium 0.02 taper hand files were compared with traditional stainless-steel 0.02 taper K-files. Sixty molar teeth, comprising maxillary and mandibular first and second molars, were treated by senior dental students. Instrumentation was with either nickel-titanium hand files or stainless-steel K-files. Preoperative and postoperative radiographs of each tooth were taken using an XCP precision instrument with a customized bite block to ensure accurate reproduction of radiographic angulation. The radiographs were scanned and the images stored as TIFF files. By superimposing tracings from the preoperative over the postoperative radiographs, the degree of deviation of the apical third of the root canal filling from the original canal was measured. The presence of other errors, such as strip perforation and instrument breakage, was established by examining the radiographs. In curved canals instrumented with stainless-steel K-files, the average deviation of the apical third of the canals was 14.44 degrees (+/- 10.33 degrees). When nickel-titanium hand files were used, the deviation was significantly reduced, to an average of 4.39 degrees (+/- 4.53 degrees). The incidence of other procedural errors was also significantly reduced by the use of nickel-titanium hand files.
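A minimal sketch of how such a deviation angle could be computed once the apical-third canal direction has been digitized from the superimposed pre- and postoperative tracings as two 2-D vectors; the digitization step and the vector values are assumed here and are not taken from the study.

    import numpy as np

    def apical_deviation_deg(pre_vec, post_vec):
        """Angle (degrees) between the apical-third direction of the original
        canal (pre_vec) and of the root canal filling (post_vec)."""
        pre = np.asarray(pre_vec, float)
        post = np.asarray(post_vec, float)
        cosang = np.dot(pre, post) / (np.linalg.norm(pre) * np.linalg.norm(post))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    # Illustrative digitized direction vectors (dx, dy) in image coordinates.
    print(apical_deviation_deg((0.2, -1.0), (0.45, -1.0)))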
Age-Related Changes in Bimanual Instrument Playing with Rhythmic Cueing
Kim, Soo Ji; Cho, Sung-Rae; Yoo, Ga Eul
2017-01-01
Deficits in bimanual coordination of older adults have been demonstrated to significantly limit their functioning in daily life. As a bimanual sensorimotor task, instrument playing has great potential for motor and cognitive training in advanced age. While the process of matching a person's repetitive movements to auditory rhythmic cueing during instrument playing was documented to involve motor and attentional control, investigation into whether the level of cognitive functioning influences the ability to rhythmically coordinate movement to an external beat in older populations is relatively limited. Therefore, the current study aimed to examine how timing accuracy during bimanual instrument playing with rhythmic cueing differed depending on the degree of participants' cognitive aging. Twenty-one young adults, 20 healthy older adults, and 17 older adults with mild dementia participated in this study. Each participant tapped an electronic drum in time to the rhythmic cueing provided, using both hands simultaneously and in alternation. During bimanual instrument playing with rhythmic cueing, the mean and variability of synchronization errors were measured and compared across the groups and the tempo of cueing during each type of tapping task. Correlations of such timing parameters with cognitive measures were also analyzed. The results showed that the group factor resulted in significant differences in the synchronization-error parameters. During bimanual tapping tasks, cognitive decline resulted in differences in synchronization errors between younger adults and older adults with mild dementia. Also, in terms of variability of synchronization errors, younger adults showed significant differences in maintaining timing performance from older adults with and without mild dementia, which may be attributed to decreased processing speed for bimanual coordination due to aging. Significant correlations were observed between variability of synchronization errors and performance on cognitive tasks involving executive control and cognitive flexibility when bimanual coordination was required in response to external timing cues at adjusted tempi. Also, significant correlations with cognitive measures were more prevalent in variability of synchronization errors during alternating tapping compared with simultaneous tapping. The current study supports that bimanual tapping may be predictive of cognitive processing of older adults. Also, tempo and type of movement required for instrument playing both involve cognitive and motor loads at different levels, and such variables could be important factors for determining the complexity of the task and the involved task requirements for interventions using instrument playing. PMID:29085309
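A hedged sketch of how the two timing outcomes used above can be computed from recorded tap times and metronome cue times: each tap is matched to its nearest cue, and the mean (signed) asynchrony and its standard deviation (variability) are reported. The array contents are invented for illustration, and the matching rule is an assumption rather than the study's exact scoring procedure.

    import numpy as np

    def synchronization_errors(tap_times, cue_times):
        """Signed asynchronies (tap minus nearest cue, in seconds) plus their
        mean and standard deviation."""
        taps = np.asarray(tap_times, float)
        cues = np.asarray(cue_times, float)
        errors = np.array([t - cues[np.argmin(np.abs(cues - t))] for t in taps])
        return errors, errors.mean(), errors.std()

    cues = np.arange(0.0, 8.0, 0.5)                    # cue every 500 ms
    taps = cues + np.random.default_rng(2).normal(0.0, 0.03, cues.size)
    _, mean_err, sd_err = synchronization_errors(taps, cues)
    print(mean_err, sd_err)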
NASA Astrophysics Data System (ADS)
Thyagarajan, Nithyanandan
2018-05-01
Direct detection of the Epoch of Reionization (EoR) via redshifted 21 cm line of H i will reveal the nature of the first stars and galaxies as well as revolutionize our understanding of a poorly explored evolutionary phase of the Universe. Projects such as the MWA, LOFAR, and PAPER commenced in the last decade with the promise of high significance statistical detection of the EoR, but have so far only weakly constrained models owing to unforeseen challenges from bright foreground sources and instrument systematics. It is essential for next generation instruments like the HERA and SKA to have these challenges addressed. I present an analysis of these challenges - wide-field measurements, antenna beam chromaticity, reflections in the instrument, and antenna position errors - along with performance specifications and design solutions that will be critical to designing successful next-generation instruments in enabling the first detection and also in placing meaningful constraints on reionization models.
Implications of Version 8 TOMS and SBUV Data for Long-Term Trend Analysis
NASA Technical Reports Server (NTRS)
Frith, Stacey M.
2004-01-01
Total ozone data from the Total Ozone Mapping Spectrometer (TOMS) and profile/total ozone data from the Solar Backscatter Ultraviolet (SBUV, SBUV/2) series of instruments have recently been reprocessed using new retrieval algorithms (referred to as Version 8 for both) and updated calibrations. In this paper, we incorporate the Version 8 data into a TOMS/SBUV merged total ozone data set and an SBUV merged profile ozone data set. The Total Merged Ozone Data (Total MOD) combines data from multiple TOMS and SBUV instruments to form an internally consistent global data set with virtually complete time coverage from October 1978 through December 2003. Calibration differences between instruments are accounted for using external adjustments based on instrument intercomparisons during overlap periods. Previous results showed that errors due to aerosol loading and sea glint are significantly reduced in the V8 TOMS retrievals. Using SBUV as a transfer standard, calibration differences between V8 Nimbus 7 and Earth Probe TOMS data are approximately 1.3%, suggesting that small errors in calibration remain. We will present updated total ozone long-term trends based on the Version 8 data. The Profile Merged Ozone Data (Profile MOD) data set is constructed using data from the SBUV series of instruments. In previous versions, SAGE data were used to establish the long-term external calibration of the combined data set. For the SBUV Version 8, we assess the profile data through comparisons with SAGE and between SBUV instruments in overlap periods, and then construct a consistently calibrated long-term time series. Updated zonal mean trends as a function of altitude and season from the new profile data set will be shown, and uncertainties in determining the best long-term calibration will be discussed.
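A minimal sketch of the overlap-based adjustment idea described above: estimate the mean offset between two instruments over their common period and remove it from the successor record before merging. This is a deliberately simplified bias-only illustration, not the actual MOD merging algorithm.

    import numpy as np

    def adjust_to_reference(ref_series, new_series, overlap_mask):
        """Remove the mean inter-instrument offset estimated over the overlap
        period (boolean mask) before appending the successor instrument."""
        ref = np.asarray(ref_series, float)
        new = np.asarray(new_series, float)
        mask = np.asarray(overlap_mask, bool)
        offset = np.nanmean(new[mask] - ref[mask])
        return new - offset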
NASA Technical Reports Server (NTRS)
Duhon, D. D.
1975-01-01
The shuttle orbital maneuvering system (OMS) pressure-volume-temperature (P-V-T) propellant gaging module computes the quantity of usable OMS propellant remaining based on the real gas P-V-T relationship for the propellant tank pressurant, helium. The OMS P-V-T propellant quantity gaging error was determined for four sets of instrumentation configurations and accuracies with the propellant tank operating in the normal constant pressure mode and in the blowdown mode. The instrumentation inaccuracy allowance for propellant leak detection was also computed for these same four sets of instrumentation. These gaging errors and leak detection allowances are presented in tables designed to permit a direct comparison of the effectiveness of the four instrumentation sets. The results show the magnitudes of the improvements in propellant quantity gaging accuracies and propellant leak detection allowances which can be achieved by employing more accurate pressure and temperature instrumentation.
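A hedged sketch of the helium P-V-T gaging idea described above: from the measured ullage pressure and temperature and a known helium load, the real-gas law gives the ullage volume, and the usable propellant volume follows by difference. The compressibility handling and the numerical values here are simplified placeholders, not the flight module's algorithm.

    R_UNIV = 8.314  # J/(mol*K)

    def usable_propellant_volume(tank_volume_m3, n_helium_mol,
                                 pressure_pa, temperature_k, z_factor):
        """Ullage volume from the real-gas law V = Z*n*R*T/P; the remaining
        propellant volume is the tank volume minus the ullage volume."""
        ullage_m3 = z_factor * n_helium_mol * R_UNIV * temperature_k / pressure_pa
        return tank_volume_m3 - ullage_m3

    # Illustrative numbers only.
    print(usable_propellant_volume(1.5, 400.0, 1.7e6, 290.0, 1.02))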
Oudman, Erik; Nijboer, Tanja C W; Postma, Albert; Wijnia, Jan W; Kerklaan, Sandra; Lindsen, Karen; Van der Stigchel, Stefan
2013-01-01
Patients with Korsakoff's syndrome show devastating amnesia and executive deficits. Consequently, the ability to perform instrumental activities such as making coffee is frequently diminished. It is currently unknown whether patients with Korsakoff's syndrome are able to (re)learn instrumental activities. A good candidate for an effective teaching technique in Korsakoff's syndrome is errorless learning as it is based on intact implicit memory functioning. Therefore, the aim of the current study was two-fold: to investigate whether patients with Korsakoff's syndrome are able to (re)learn instrumental activities, and to compare the effectiveness of errorless learning with trial and error learning in the acquisition and maintenance of an instrumental activity, namely using a washing machine to do the laundry. Whereas initial learning performance in the errorless learning condition was superior, both intervention techniques resulted in similar improvement over eight learning sessions. Moreover, performance in a different spatial layout showed a comparable improvement. Notably, in follow-up sessions starting after four weeks without practice, performance was still elevated in the errorless learning condition, but not in the trial and error condition. The current study demonstrates that (re)learning and maintenance of an instrumental activity is possible in patients with Korsakoff's syndrome.
Effects of Contextual Sight-Singing and Aural Skills Training on Error-Detection Abilities.
ERIC Educational Resources Information Center
Sheldon, Deborah A.
1998-01-01
Examines the effects of contextual sight-singing and ear training on pitch and rhythm error detection abilities among undergraduate instrumental music education majors. Shows that additional training produced better error detection, particularly with rhythm errors and in one-part examples. Maintains that differences attributable to texture were…
Prabhakar, Attiguppe R; Yavagal, Chandrashekar; Naik, Saraswathi V
2016-01-01
Background: Primary root canals are considered to be most challenging due to their complex anatomy. "Wave One" and "One Shape" are single-file systems with reciprocating and rotary motion, respectively. The aim of this study was to evaluate and compare dentin thickness, centering ability, canal transportation, and instrumentation time of Wave One and One Shape files in primary root canals using cone beam computed tomographic (CBCT) analysis. Study design: This is an experimental, in vitro study comparing the two groups. Materials and methods: A total of 24 extracted human primary teeth with minimum 7 mm root length were included in the study. Cone beam computed tomographic images were taken before and after the instrumentation for each group. Dentin thickness, centering ability, canal transportation, and instrumentation time were evaluated for each group. Results: A significant difference was found in instrumentation time and canal transportation measures between the two groups. Wave One showed less canal transportation than One Shape, and the mean instrumentation time of Wave One was significantly less than that of One Shape. Conclusion: The reciprocating single-file system was found to be faster, with far fewer procedural errors, and can hence be recommended for shaping the root canals of primary teeth. How to cite this article: Prabhakar AR, Yavagal C, Dixit K, Naik SV. Reciprocating vs Rotary Instrumentation in Pediatric Endodontics: Cone Beam Computed Tomographic Analysis of Deciduous Root Canals using Two Single-File Systems. Int J Clin Pediatr Dent 2016;9(1):45-49. PMID:27274155
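For context on the two CBCT outcome measures named above, the sketch below applies the widely used section-wise formulas in which canal transportation is |(a1 - a2) - (b1 - b2)| and the centering ratio is the smaller of (a1 - a2) and (b1 - b2) divided by the larger, where a and b are the dentin thicknesses on opposite sides of the canal before (1) and after (2) instrumentation. Whether the study used exactly this convention is an assumption, and the numbers are illustrative.

    def transportation_and_centering(a1, a2, b1, b2):
        """Canal transportation and centering ratio from matched CBCT sections.
        a1, b1: pre-instrumentation dentin thickness on opposite sides (mm);
        a2, b2: the same measurements after instrumentation (mm)."""
        da, db = a1 - a2, b1 - b2
        transportation = abs(da - db)
        centering = min(da, db) / max(da, db) if max(da, db) > 0 else 0.0
        return transportation, centering

    print(transportation_and_centering(a1=1.20, a2=1.05, b1=1.10, b2=1.02))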
Opto-mechanical design of ShaneAO: the adaptive optics system for the 3-meter Shane Telescope
NASA Astrophysics Data System (ADS)
Ratliff, C.; Cabak, J.; Gavel, D.; Kupke, R.; Dillon, D.; Gates, E.; Deich, W.; Ward, J.; Cowley, D.; Pfister, T.; Saylor, M.
2014-07-01
A Cassegrain-mounted adaptive optics instrument presents unique challenges for opto-mechanical design. The flexure and temperature tolerances for stability are tighter than those of seeing-limited instruments, which requires particular attention to material properties and mounting techniques. This paper addresses the mechanical designs developed to meet the optical functional requirements. One of the key considerations was to have gravitational deformations, which vary with telescope orientation, stay within the optical error budget, or to ensure that we can compensate with a steering mirror by maintaining predictable elastic behavior. Here we look at several cases where deformation is predicted with finite element analysis and Hertzian deformation analysis and also tested. Techniques used to address thermal deformation compensation without the use of low-CTE materials are also discussed.
The design and analysis of channel transmission communication system of XCTD profiler
NASA Astrophysics Data System (ADS)
Zheng, Yu; Wang, Xiao-Rui; Jin, Xiang-Yu; Song, Guo-Min; Shang, Ying-Sheng; Li, Hong-Zhi
2016-10-01
In this paper, a channel transmission communication system for the expendable conductivity-temperature-depth (XCTD) profiler is established in accordance with the operating characteristics of the transmission line, in order to more accurately assess the characteristics of the expendable deep-sea profiler channel. The wrapping inductance is eliminated to the maximum extent through the wrapping pattern of the underwater spool and the overwater spool and the calculation of the wrapping diameter. The feasibility of the proposed channel transmission communication system is verified through theoretical analysis and practical measurement of the transmission signal error rate under amplitude shift keying (ASK) modulation. The proposed design provides a new research method for the channel assessment of complex expendable measuring instruments and important experimental evidence for the rapid development of deep-sea expendable measuring instruments.
The design and analysis of channel transmission communication system of XCTD profiler.
Zheng, Yu; Wang, Xiao-Rui; Jin, Xiang-Yu; Song, Guo-Min; Shang, Ying-Sheng; Li, Hong-Zhi
2016-10-01
In this paper, a channel transmission communication system for the expendable conductivity-temperature-depth (XCTD) profiler is established in accordance with the operating characteristics of the transmission line, in order to more accurately assess the characteristics of the expendable deep-sea profiler channel. The wrapping inductance is eliminated to the maximum extent through the wrapping pattern of the underwater spool and the overwater spool and the calculation of the wrapping diameter. The feasibility of the proposed channel transmission communication system is verified through theoretical analysis and practical measurement of the transmission signal error rate under amplitude shift keying (ASK) modulation. The proposed design provides a new research method for the channel assessment of complex expendable measuring instruments and important experimental evidence for the rapid development of deep-sea expendable measuring instruments.
Single pilot IFR accident data analysis
NASA Technical Reports Server (NTRS)
Harris, D. F.
1983-01-01
The aircraft accident data recorded by the National Transportation Safety Board (NTSB) for 1964-1979 were analyzed to determine what problems exist in the general aviation (GA) single-pilot instrument flight rules (SPIFR) environment. A previous study conducted in 1978 for the years 1964-1975 provided a basis for comparison. This effort was generally limited to SPIFR pilot-error landing-phase accidents but includes some SPIFR takeoff and enroute accident analysis as well as some dual-pilot IFR accident analysis for comparison. Analysis was performed for 554 accidents, of which 39% (216) occurred during the years 1976-1979.
Global Warming Estimation from MSU
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, Robert; Yoo, Jung-Moon
1998-01-01
Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz) from sequential, sun-synchronous, polar-orbiting NOAA satellites contain small systematic errors. Some of these errors are time-dependent and some are time-independent. Small errors in Ch 2 data of successive satellites arise from calibration differences. Also, successive NOAA satellites tend to have different Local Equatorial Crossing Times (LECT), which introduce differences in Ch 2 data due to the diurnal cycle. These two sources of systematic error are largely time-independent. However, because of atmospheric drag, there can be a drift in the LECT of a given satellite, which introduces time-dependent systematic errors. One of these errors is due to the progressive change in the diurnal cycle and the other is due to associated changes in instrument heating by the sun. In order to infer the global temperature trend from these MSU data, we have explicitly eliminated the time-independent systematic errors. Neither of the time-dependent errors can be assessed from each satellite; for this reason, their cumulative effect on the global temperature trend is evaluated implicitly. Christy et al. (1998) (CSL), based on their method of analysis of the MSU Ch 2 data, infer a global temperature cooling trend (-0.046 K per decade) from 1979 to 1997, although their near-nadir measurements yield a near-zero trend (0.003 K/decade). Utilizing an independent method of analysis, we infer that global temperature warmed by 0.12 +/- 0.06 C per decade over the period 1980 to 1997.
Zhang, Tan; Chen, Ang
2017-01-01
Based on the job demands-resources model, the study developed and validated an instrument that measures physical education teachers' job demands-resources perception. Expert review established content validity with the average item rating of 3.6/5.0. Construct validity and reliability were determined with a teacher sample ( n = 397). Exploratory factor analysis established a five-dimension construct structure matching the theoretical construct deliberated in the literature. The composite reliability scores for the five dimensions range from .68 to .83. Validity coefficients (intraclass correlational coefficients) are .69 for job resources items and .82 for job demands items. Inter-scale correlational coefficients range from -.32 to .47. Confirmatory factor analysis confirmed the construct validity with high dimensional factor loadings (ranging from .47 to .84 for job resources scale and from .50 to .85 for job demands scale) and adequate model fit indexes (root mean square error of approximation = .06). The instrument provides a tool to measure physical education teachers' perception of their working environment.
Zhang, Tan; Chen, Ang
2017-01-01
Based on the job demands–resources model, the study developed and validated an instrument that measures physical education teachers’ job demands–resources perception. Expert review established content validity with the average item rating of 3.6/5.0. Construct validity and reliability were determined with a teacher sample (n = 397). Exploratory factor analysis established a five-dimension construct structure matching the theoretical construct deliberated in the literature. The composite reliability scores for the five dimensions range from .68 to .83. Validity coefficients (intraclass correlational coefficients) are .69 for job resources items and .82 for job demands items. Inter-scale correlational coefficients range from −.32 to .47. Confirmatory factor analysis confirmed the construct validity with high dimensional factor loadings (ranging from .47 to .84 for job resources scale and from .50 to .85 for job demands scale) and adequate model fit indexes (root mean square error of approximation = .06). The instrument provides a tool to measure physical education teachers’ perception of their working environment. PMID:29200808
Simulation Studies of Satellite Laser CO2 Mission Concepts
NASA Technical Reports Server (NTRS)
Kawa, Stephan Randy; Mao, J.; Abshire, J. B.; Collatz, G. J.; Sun X.; Weaver, C. J.
2011-01-01
Results of mission simulation studies are presented for a laser-based atmospheric CO2 sounder. The simulations are based on real-time carbon cycle process modeling and data analysis. The mission concept corresponds to ASCENDS as recommended by the US National Academy of Sciences Decadal Survey. Compared to passive sensors, active (lidar) sensing of CO2 from space has several potentially significant advantages that hold promise to advance CO2 measurement capability in the next decade. Although the precision and accuracy requirements remain at unprecedented levels of stringency, analysis of possible instrument technology indicates that such sensors are more than feasible. Radiative transfer model calculations, an instrument model with representative errors, and a simple retrieval approach complete the cycle from "nature" run to "pseudodata" CO2. Several mission and instrument configuration options are examined, and the sensitivity to key design variables is shown. Examples are also shown of how the resulting pseudo-measurements might be used to address key carbon cycle science questions.
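A hedged sketch of the pseudodata step in such an observing-system simulation: the nature-run column CO2 along the orbit track is perturbed with an assumed instrument bias and Gaussian random error to produce the pseudo-measurements that a retrieval or flux study would then consume. The error magnitudes and function names are illustrative assumptions, not the ASCENDS instrument model.

    import numpy as np

    def make_pseudodata(nature_xco2_ppm, random_error_ppm, bias_ppm=0.0, seed=0):
        """Add a constant bias and Gaussian random error to nature-run XCO2."""
        rng = np.random.default_rng(seed)
        noise = rng.normal(0.0, random_error_ppm, np.shape(nature_xco2_ppm))
        return np.asarray(nature_xco2_ppm) + bias_ppm + noise

    # Illustrative along-track nature-run values (ppm).
    print(make_pseudodata([402.1, 402.3, 401.9, 402.6], random_error_ppm=0.5))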
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novak, Erik; Trolinger, James D.; Lacey, Ian
This work reports on the development of a binary pseudo-random test sample optimized to calibrate the MTF of optical microscopes. The sample consists of a number of 1-D and 2-D patterns, with different minimum sizes of spatial artifacts from 300 nm to 2 microns. We describe the mathematical background, fabrication process, data acquisition and analysis procedure to return spatial frequency based instrument calibration. We show that the developed samples satisfy the characteristics of a test standard: functionality, ease of specification and fabrication, reproducibility, and low sensitivity to manufacturing error.
Modeling and Error Analysis of a Superconducting Gravity Gradiometer.
1979-08-01
...fundamental limit to instrument sensitivity is the thermal noise of the sensor. For the gradiometer design outlined above, the best sensitivity... Mapoles at Stanford. Chapter IV determines the relation between dynamic range, the sensor Q, and the thermal noise of the cryogenic accelerometer. An... C.1 Accelerometer Optimization: (1) Development and optimization of the loaded diaphragm sensor. (2) Determination of the optimal values of the...
Innovative and Cost Effective Remediation of Orbital Debris
2014-04-25
...to face international opposition because it could be used offensively to disable spacecraft. Technical Analysis: Most of StreamSat's... LDR). They demonstrated droplet dispersion of less than 1 microradian for some generators and devised an instrument for measuring the... error can be limited to less than one microradian using existing technology and techniques. During transit, external forces will alter the path of...
Analysis of general aviation accidents during operations under instrument flight rules
NASA Technical Reports Server (NTRS)
Bennett, C. T.; Schwirzke, Martin; Harm, C.
1990-01-01
A report is presented to describe some of the errors that pilots make during flight under IFR. The data indicate that there is less risk during the approach and landing phase of IFR flights, as compared to VFR operations. Single-pilot IFR accident rates continue to be higher than two-pilot IFR incident rates, reflecting the high work load of IFR operations.
Qi, Yulin; Geib, Timon; Schorr, Pascal; Meier, Florian; Volmer, Dietrich A
2015-01-15
Isobaric interferences in human serum can potentially influence the measured concentration levels of 25-hydroxyvitamin D [25(OH)D], when low resolving power liquid chromatography/tandem mass spectrometry (LC/MS/MS) instruments and non-specific MS/MS product ions are employed for analysis. In this study, we provide a detailed characterization of these interferences and a technical solution to reduce the associated systematic errors. Detailed electrospray ionization Fourier transform ion cyclotron resonance (FTICR) high-resolution mass spectrometry (HRMS) experiments were used to characterize co-extracted isobaric components of 25(OH)D from human serum. Differential ion mobility spectrometry (DMS), as a gas-phase ion filter, was implemented on a triple quadrupole mass spectrometer for separation of the isobars. HRMS revealed the presence of multiple isobaric compounds in extracts of human serum for different sample preparation methods. Several of these isobars had the potential to increase the peak areas measured for 25(OH)D on low-resolution MS instruments. A major isobaric component was identified as pentaerythritol oleate, a technical lubricant, which was probably an artifact from the analytical instrumentation. DMS was able to remove several of these isobars prior to MS/MS, when implemented on the low-resolution triple quadrupole mass spectrometer. It was shown in this proof-of-concept study that DMS-MS has the potential to significantly decrease systematic errors, and thus improve accuracy of vitamin D measurements using LC/MS/MS. Copyright © 2014 John Wiley & Sons, Ltd.
Error Reduction Analysis and Optimization of Varying GRACE-Type Micro-Satellite Constellations
NASA Astrophysics Data System (ADS)
Widner, M. V., IV; Bettadpur, S. V.; Wang, F.; Yunck, T. P.
2017-12-01
The Gravity Recovery and Climate Experiment (GRACE) mission has been a principal contributor in the study and quantification of Earth's time-varying gravity field. Both GRACE and its successor, GRACE Follow-On, are limited by their paired-satellite design, which provides a full map of Earth's gravity field only about every thirty days and at spatial resolutions coarser than 300 km. Micro-satellite technology has made it feasible to address these issues in future missions by implementing a constellation of satellites with characteristics similar to GRACE. To optimize the constellation's architecture, several scenarios are evaluated to determine how this configuration affects the resultant gravity field maps, to characterize which instrument system errors improve and which do not, and to assess how changes in constellation architecture affect these errors.
Calibration Transfer in LIBS and Raman Spectroscopy for Planetary Applications
NASA Astrophysics Data System (ADS)
Dyar, M. D.; Thomas, B. F.; Parente, M.; Gemp, I.; Mullen, T. H.
2017-12-01
Planetary scientists rely on spectral libraries and instrument reproducibility to interpret results from missions. Major investments have been made into assembling libraries, but they often naively assume that spectra of single crystals versus powders and from varying instruments will be the same. Calibration transfer (CT) seeks to algorithmically resolve discrepancies among datasets from different instruments or conditions. It offers the ability to align suites of spectra with a small number of common samples, allowing better models to be built with combined data sets. LIBS and Raman data present different challenges for CT. Quantitative geochemical analyses by LIBS spectroscopy are limited by lack of consistency among repeated laser shots and across instruments. Many different factors affect the presence/absence of emission lines and their intensities, such as laser power/plasma temperature, angle of incidence, and detector sensitivity/resolution. To overcome these, models in which disparate datasets are projected into a joint low-dimensional subspace where all data can be aligned before quantitative analysis, such as Correlation Analysis for Domain Adaptation (CADA), have proven very effective. They require some overlap between the populations of spectra to be aligned. For example, prediction of SiO2 on 80 samples from two different LIBS labs shows errors of ±16-29 wt.% when the training and test sets have no overlap, and ±4.94 wt.% SiO2 when CADA is used. Uncorrected Earth-Mars spectral differences are likely to cause errors of the same order of magnitude. As with other types of reflectance spectroscopy, Raman data are plagued by differences among single-crystal/powder samples and laser wavelengths that affect peak intensities, and by spectral offsets from instruments with varying resolution and wavenumber alignment schemes. These problems persist even within the archetypal RRUFF database. Pre-processing transformation functions such as optimized baseline removal, normalization, squashing, and smoothing improve mineral matching accuracy. Alignment methods can record shifts between corresponding peaks from the same mineral from pairs of instruments. By considering many pairs of minerals, corrections at each energy increment can be determined, creating a transfer function to align the data.
BLIND EXTRACTION OF AN EXOPLANETARY SPECTRUM THROUGH INDEPENDENT COMPONENT ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waldmann, I. P.; Tinetti, G.; Hollis, M. D. J.
2013-03-20
Blind-source separation techniques are used to extract the transmission spectrum of the hot-Jupiter HD189733b recorded by the Hubble/NICMOS instrument. Such a 'blind' analysis of the data is based on the concept of independent component analysis. The detrending of Hubble/NICMOS data using the sole assumption that non-Gaussian systematic noise is statistically independent from the desired light-curve signals is presented. By not assuming any prior or auxiliary information but the data themselves, it is shown that spectroscopic errors only about 10%-30% larger than parametric methods can be obtained for 11 spectral bins with bin sizes of ~0.09 μm. This represents a reasonable trade-off between a higher degree of objectivity for the non-parametric methods and smaller standard errors for the parametric de-trending. Results are discussed in light of previous analyses published in the literature. The fact that three very different analysis techniques yield comparable spectra is a strong indication of the stability of these results.
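As a rough illustration of the blind-source-separation idea summarized above (not the authors' NICMOS pipeline), one can decompose a set of raw light curves into statistically independent components and then identify which components carry the astrophysical signal versus the systematics. The light curves, component count, and noise model in this sketch are synthetic assumptions.

```python
# Minimal sketch: separating systematics from a transit signal with ICA.
# Assumes `raw_curves` is an (n_curves, n_times) array of normalized light curves
# sharing non-Gaussian systematic noise; values here are synthetic placeholders.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_times = 500
t = np.linspace(-0.1, 0.1, n_times)
transit = 1.0 - 0.01 * (np.abs(t) < 0.04)            # toy transit signal
systematic = 0.005 * np.sin(40 * t) ** 3              # toy non-Gaussian systematic
raw_curves = np.array([transit + systematic + 0.001 * rng.normal(size=n_times)
                       for _ in range(10)])

# Decompose the observations into statistically independent source signals.
ica = FastICA(n_components=4, random_state=0)
sources = ica.fit_transform(raw_curves.T)              # shape (n_times, n_components)

# In practice one would identify which component correlates with the transit
# shape and which capture instrument systematics, then detrend accordingly.
print(sources.shape)
```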
Simulation Studies for a Space-Based CO2 Lidar Mission
NASA Technical Reports Server (NTRS)
Kawa, S. R.; Mao, J.; Abshire, J. B.; Collatz, G. J.; Sun, X.; Weaver, C. J.
2010-01-01
We report results of initial space mission simulation studies for a laser-based, atmospheric CO2 sounder, which are based on real-time carbon cycle process modelling and data analysis. The mission concept corresponds to the Active Sensing of CO2 Emissions over Nights, Days and Seasons (ASCENDS) mission recommended by the US National Academy of Sciences' Decadal Survey. As a prerequisite for meaningful quantitative evaluation, we employ a CO2 model that has representative spatial and temporal gradients across a wide range of scales. In addition, a relatively complete description of the atmospheric and surface state is obtained from meteorological data assimilation and satellite measurements. We use radiative transfer calculations, an instrument model with representative errors, and a simple retrieval approach to quantify errors in 'measured' CO2 distributions, which are a function of mission and instrument design specifications along with the atmospheric/surface state. Uncertainty estimates based on the current instrument design point indicate that a CO2 laser sounder can provide data consistent with ASCENDS requirements and will significantly enhance our ability to address carbon cycle science questions. A test of a dawn/dusk orbit deployment, however, shows that diurnal differences in CO2 column abundance, indicative of plant photosynthesis and respiration fluxes, will be difficult to detect.
Calibration of solar radiation measuring instruments. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bahm, R J; Nakos, J C
A review of solar radiation measuring instruments and some types of errors is given, and procedures for calibrating solar radiation measuring instruments are detailed. An appendix contains a description of various agencies that perform calibration of solar instruments and of the methods they used at the time this report was prepared. (WHK)
NASA Technical Reports Server (NTRS)
Diak, George R.; Stewart, Tod R.
1989-01-01
A method is presented for evaluating the fluxes of sensible and latent heating at the land surface, using satellite-measured surface temperature changes in a composite surface layer-mixed layer representation of the planetary boundary layer. The basic prognostic model is tested by comparison with synoptic station information at sites where surface evaporation climatology is well known. The remote sensing version of the model, using satellite-measured surface temperature changes, is then used to quantify the sharp spatial gradient in surface heating/evaporation across the central United States. An error analysis indicates that perhaps five levels of evaporation are recognizable by these methods and that the chief cause of error is the interaction of errors in the measurement of surface temperature change with errors in the assignment of surface roughness character. Finally, two new potential methods for remote sensing of the land-surface energy balance are suggested which will rely on space-borne instrumentation planned for the 1990s.
Error Modeling of Multibaseline Optical Truss: Part 1: Modeling of System Level Performance
NASA Technical Reports Server (NTRS)
Milman, Mark H.; Korechoff, R. E.; Zhang, L. D.
2004-01-01
Global astrometry is the measurement of stellar positions and motions. These are typically characterized by five parameters, including two position parameters, two proper motion parameters, and parallax. The Space Interferometry Mission (SIM) will derive these parameters for a grid of approximately 1300 stars covering the celestial sphere to an accuracy of approximately 4 μas, representing a two orders of magnitude improvement over the most precise current star catalogues. Narrow angle astrometry will be performed to a 1 μas accuracy. A wealth of scientific information will be obtained from these accurate measurements, encompassing many aspects of both galactic and extragalactic science. SIM will be subject to a number of instrument errors that can potentially degrade performance. Many of these errors are systematic in that they are relatively static and repeatable with respect to the time frame and direction of the observation. This paper and its companion define the modeling of the contributing factors to these errors and the analysis of how they impact SIM's ability to perform astrometric science.
NASA Astrophysics Data System (ADS)
Utegulov, B. B.
2018-02-01
In this work, the reliability of the developed method was studied by analyzing the error of the indirect determination of insulation parameters in an asymmetric network with an isolated neutral at voltages above 1000 V. The study of the random relative mean-square errors shows that the accuracy of the indirect measurements in the developed method can be effectively controlled not only by selecting the additional capacitive conductivities connected between the phases of the electrical network and ground, but also by selecting measuring instruments according to their accuracy class. With instruments of accuracy class 0.5 and correctly selected additional capacitive conductivities connected between the phases of the electrical network and ground, the errors in measuring the insulation parameters will not exceed 10%.
Horizon sensors attitude errors simulation for the Brazilian Remote Sensing Satellite
NASA Astrophysics Data System (ADS)
Vicente de Brum, Antonio Gil; Ricci, Mario Cesar
Remote sensing, meteorological and other types of satellites require increasingly better Earth-related positioning. From past experience it is well known that the thermal horizon in the 15 micrometer band provides the conditions for determining the local vertical at any time. This detection is done by horizon sensors, accurate instruments for Earth-referenced attitude sensing and control whose performance is limited by systematic and random errors amounting to about 0.5 deg. Using the computer programs OBLATE, SEASON, ELECTRO and MISALIGN, developed at INPE to simulate four distinct facets of conical scanning horizon sensors, attitude errors are obtained for the Brazilian Remote Sensing Satellite (the first one, SSR-1, is scheduled to fly in 1996). These errors are due to the oblate shape of the Earth, seasonal and latitudinal variations of the 15 micrometer infrared radiation, electronic processing time delay, and misalignment of the sensor axis. The sensor-related attitude errors are thus properly quantified in this work and will, together with other systematic errors (for instance, ambient temperature variation), take part in the pre-launch analysis of the Brazilian Remote Sensing Satellite with respect to horizon sensor performance.
Li, T. S.; DePoy, D. L.; Marshall, J. L.; ...
2016-06-01
Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
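The systematic chromatic error discussed above can be thought of as the difference between synthetic broadband magnitudes computed with the survey's natural (reference) passband and with the passband actually in effect for a given exposure. The sketch below uses made-up spectra and throughput curves purely to illustrate that bookkeeping; it is not the DES calibration code.

```python
# Sketch: chromatic error as a difference of photon-weighted broadband magnitudes
# computed under two passbands. SED and throughput arrays are illustrative only.
import numpy as np

wave = np.linspace(4000.0, 5500.0, 300)                   # wavelength grid [Angstrom]
sed_blue = (wave / 4000.0) ** -2.0                         # toy blue stellar spectrum
sed_red = (wave / 4000.0) ** 2.0                           # toy red stellar spectrum

natural = np.exp(-0.5 * ((wave - 4750.0) / 400.0) ** 2)    # reference (natural) passband
observed = np.exp(-0.5 * ((wave - 4800.0) / 380.0) ** 2)   # shifted/altered passband

def synth_mag(sed, band):
    """Photon-weighted broadband magnitude (arbitrary zero point)."""
    flux = np.trapz(sed * band * wave, wave) / np.trapz(band * wave, wave)
    return -2.5 * np.log10(flux)

for name, sed in [("blue star", sed_blue), ("red star", sed_red)]:
    sce = synth_mag(sed, observed) - synth_mag(sed, natural)
    print(f"{name}: chromatic offset = {sce * 1000:.1f} mmag")
```

The offset differs between the blue and red toy spectra, which is the color-dependent systematic the abstract describes.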
Human leader and robot follower team: correcting leader's position from follower's heading
NASA Astrophysics Data System (ADS)
Borenstein, Johann; Thomas, David; Sights, Brandon; Ojeda, Lauro; Bankole, Peter; Fellars, Donald
2010-04-01
In multi-agent scenarios, there can be a disparity in the quality of position estimation amongst the various agents. Here, we consider the case of two agents - a leader and a follower - following the same path, in which the follower has a significantly better estimate of position and heading. This may be applicable to many situations, such as a robotic "mule" following a soldier. Another example is that of a convoy, in which only one vehicle (not necessarily the leading one) is instrumented with precision navigation instruments while all other vehicles use lower-precision instruments. We present an algorithm, called Follower-derived Heading Correction (FDHC), which substantially improves estimates of the leader's heading and, subsequently, position. Specifically, FDHC produces a very accurate estimate of heading errors caused by slow-changing errors (e.g., those caused by drift in gyros) of the leader's navigation system and corrects those errors.
Performance Characterization of an Instrument.
ERIC Educational Resources Information Center
Salin, Eric D.
1984-01-01
Describes an experiment designed to teach students to apply the same statistical awareness to instrumentation they commonly apply to classical techniques. Uses propagation of error techniques to pinpoint instrumental limitations and breakdowns and to demonstrate capabilities and limitations of volumetric and gravimetric methods. Provides lists of…
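A propagation-of-error exercise of the kind described in this experiment can be reproduced with a few lines of arithmetic; the sketch below assumes illustrative balance and pipette uncertainties for a derived concentration c = m/V and combines the relative variances in quadrature.

```python
# Sketch: propagation of error for a concentration c = m / V,
# combining independent relative uncertainties in quadrature.
import math

m, sigma_m = 0.2500, 0.0002      # mass [g] and balance uncertainty (assumed)
V, sigma_V = 0.02500, 0.00003    # volume [L] and pipette uncertainty (assumed)

c = m / V
rel_var = (sigma_m / m) ** 2 + (sigma_V / V) ** 2
sigma_c = c * math.sqrt(rel_var)

print(f"c = {c:.3f} +/- {sigma_c:.3f} g/L")
# Comparing the two variance terms shows which step (weighing or volumetric
# delivery) limits the overall precision, which is the point of the exercise.
```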
Estimation of chromatic errors from broadband images for high contrast imaging: sensitivity analysis
NASA Astrophysics Data System (ADS)
Sirbu, Dan; Belikov, Ruslan
2016-01-01
Many concepts have been proposed to enable direct imaging of planets around nearby stars, which would enable spectroscopic observations of their atmospheres and the potential discovery of biomarkers. The main technical challenge associated with direct imaging of exoplanets is to effectively control both the diffraction and scattered light from the star so that the dim planetary companion can be seen. Usage of an internal coronagraph with an adaptive optical system for wavefront correction is one of the most mature methods and is being developed as an instrument addition to the WFIRST-AFTA space mission. In addition, such instruments as GPI and SPHERE are already being used on the ground and are yielding spectra of giant planets. For the deformable mirror (DM) to recover a dark hole region with sufficiently high contrast in the image plane, mid-spatial frequency wavefront errors must be estimated. To date, most broadband lab demonstrations use narrowband filters to obtain an estimate of the chromaticity of the wavefront error, and this can result in usage of a large percentage of the total integration time. Previously, we have proposed a method to estimate the chromaticity of wavefront errors using only broadband images; we have demonstrated that under idealized conditions wavefront errors can be estimated from images composed of discrete wavelengths. This is achieved by using DM probes with sufficient spatially-localized chromatic diversity. Here we report on the results of a study of the performance of this method with respect to realistic broadband images including noise. Additionally, we study optimal probe patterns that enable reduction of the number of probes used and compare the integration time with narrowband and IFS estimation methods.
Structural power flow measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falter, K.J.; Keltie, R.F.
Previous investigations of structural power flow through beam-like structures resulted in some unexplained anomalies in the calculated data. In order to develop structural power flow measurement as a viable technique for machine tool design, the causes of these anomalies needed to be found. Once found, techniques for eliminating the errors could be developed. Error sources were found in the experimental apparatus itself as well as in the instrumentation. Although flexural waves are the carriers of power in the experimental apparatus, at some frequencies longitudinal waves were excited which were picked up by the accelerometers and altered power measurements. Errors were found in the phase and gain response of the sensors and amplifiers used for measurement. A transfer function correction technique was employed to compensate for these instrumentation errors.
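A transfer function correction of the kind mentioned above can be sketched as dividing each measured spectrum by the complex frequency response of the sensing chain obtained from a calibration. The response model and test signal below are assumptions for illustration, not the authors' apparatus.

```python
# Sketch: compensating sensor/amplifier gain and phase errors by dividing the
# measured spectrum by the chain's complex frequency response H(f).
import numpy as np

fs = 10_000.0                                 # sampling rate [Hz] (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
true_signal = np.sin(2 * np.pi * 200 * t)     # toy flexural-wave signal

# Assumed measurement-chain response: mild gain roll-off and phase lag.
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
H = 1.0 / (1.0 + 1j * freqs / 2000.0)

measured_spec = np.fft.rfft(true_signal) * H  # what the instrumentation records
corrected_spec = measured_spec / H            # transfer-function correction
corrected = np.fft.irfft(corrected_spec, n=t.size)

print("max residual after correction:", np.max(np.abs(corrected - true_signal)))
```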
Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests
Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong
2016-01-01
A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and accelerated radiation testing system for a signal processing platform based on the field programmable gate array (FPGA) is presented. Based on experimental results of different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10-3 (error/particle/cm2), while the MTTF is approximately 110.7 h. PMID:27583533
Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.
He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong
2016-01-01
A method of evaluating the single-event effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and accelerated radiation testing system for a signal processing platform based on the field programmable gate array (FPGA) is presented. Based on experimental results of different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10-3 (error/particle/cm2), while the MTTF is approximately 110.7 h.
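The quantities reported above follow from simple bookkeeping: the system functional error rate is the number of observed functional errors divided by the delivered particle fluence, and a mean time to failure follows from that error rate under a given particle flux. The counts and flux in this sketch are placeholders, not the experimental values.

```python
# Sketch: system functional error rate (SFER) and MTTF from an accelerated test.
# All numbers are illustrative placeholders, not the paper's data.

functional_errors = 42          # functional errors observed during irradiation
fluence = 4.2e4                 # particles/cm^2 delivered to the device under test

sfer = functional_errors / fluence          # errors per particle per cm^2
print(f"SFER ~ {sfer:.1e} errors/(particle*cm^2)")

# Under an assumed operating particle flux, the expected error rate is
# SFER times the flux, and MTTF is the reciprocal of that rate.
flux = 2.5e-3                               # particles/(cm^2*h), assumed environment
error_rate = sfer * flux                    # errors per hour
print(f"MTTF ~ {1.0 / error_rate:.1f} hours")
```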
Stochastic estimates of gradient from laser measurements for an autonomous Martian roving vehicle
NASA Technical Reports Server (NTRS)
Burger, P. A.
1973-01-01
The general problem of estimating the state vector x from the state equation h = Ax, where h, A, and x are all stochastic, is presented. Specifically, the problem is for an autonomous Martian roving vehicle to utilize laser measurements in estimating the gradient of the terrain. Error exists due to two factors: surface roughness and instrumental measurements. The errors in slope depend on the standard deviations of these noise factors. Numerically, the error in gradient is expressed as a function of instrumental inaccuracies. Certain guidelines for the accuracy of permissible gradient must be set. It is found that present technology can meet these guidelines.
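In its simplest deterministic reading, the estimation problem h = Ax stated above reduces to (weighted) least squares; the sketch below fits terrain slope components to noisy laser height measurements and propagates the measurement variance into slope uncertainty. The footprint geometry and noise levels are assumptions.

```python
# Sketch: estimating terrain gradient (slope components) from noisy
# laser-ranging points by least squares, h = A x + noise.
import numpy as np

rng = np.random.default_rng(1)

# Assumed footprint geometry: (x, y) offsets of laser spots ahead of the rover [m].
xy = np.array([[1.0, -0.5], [1.0, 0.0], [1.0, 0.5],
               [2.0, -0.5], [2.0, 0.0], [2.0, 0.5]])
true_grad = np.array([0.10, -0.05])          # true slopes dz/dx, dz/dy (assumed)

sigma_h = 0.02                               # roughness/measurement noise [m] (assumed)
h = xy @ true_grad + sigma_h * rng.normal(size=len(xy))   # measured heights

A = xy
ATA_inv = np.linalg.inv(A.T @ A)             # least-squares normal-equation inverse
x_hat = ATA_inv @ A.T @ h                    # slope estimate
cov = sigma_h ** 2 * ATA_inv                 # propagated slope covariance

print("estimated gradient:", x_hat)
print("1-sigma slope errors:", np.sqrt(np.diag(cov)))
```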
Kumar, Poornima; Eickhoff, Simon B.; Dombrovski, Alexandre Y.
2015-01-01
Reinforcement learning describes motivated behavior in terms of two abstract signals. The representation of discrepancies between expected and actual rewards/punishments – prediction error – is thought to update the expected value of actions and predictive stimuli. Electrophysiological and lesion studies suggest that mesostriatal prediction error signals control behavior through synaptic modification of cortico-striato-thalamic networks. Signals in the ventromedial prefrontal and orbitofrontal cortex are implicated in representing expected value. To obtain unbiased maps of these representations in the human brain, we performed a meta-analysis of functional magnetic resonance imaging studies that employed algorithmic reinforcement learning models, across a variety of experimental paradigms. We found that the ventral striatum (medial and lateral) and midbrain/thalamus represented reward prediction errors, consistent with animal studies. Prediction error signals were also seen in the frontal operculum/insula, particularly for social rewards. In Pavlovian studies, striatal prediction error signals extended into the amygdala, while instrumental tasks engaged the caudate. Prediction error maps were sensitive to the model-fitting procedure (fixed or individually-estimated) and to the extent of spatial smoothing. A correlate of expected value was found in a posterior region of the ventromedial prefrontal cortex, caudal and medial to the orbitofrontal regions identified in animal studies. These findings highlight a reproducible motif of reinforcement learning in the cortico-striatal loops and identify methodological dimensions that may influence the reproducibility of activation patterns across studies. PMID:25665667
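The reward prediction error referred to above is, in its simplest temporal-difference form, delta = r + gamma*V(s') - V(s), which in turn updates the expected value. The toy single-cue loop below illustrates only that definition and is not the meta-analytic model-fitting procedure; the learning rate and reward probability are assumptions.

```python
# Sketch: temporal-difference prediction errors updating an expected value,
# the quantity whose neural correlates the meta-analysis maps.
import numpy as np

n_trials, alpha, gamma = 50, 0.2, 1.0      # learning rate and discount (assumed)
reward_prob = 0.8                          # probability the cue is rewarded (assumed)

rng = np.random.default_rng(0)
value = 0.0
prediction_errors = []

for _ in range(n_trials):
    reward = float(rng.random() < reward_prob)
    delta = reward + gamma * 0.0 - value   # single-step task: no successor-state value
    value += alpha * delta                 # value update driven by the prediction error
    prediction_errors.append(delta)

print(f"final expected value ~ {value:.2f}")
print("early vs late mean |PE|:", np.mean(np.abs(prediction_errors[:10])),
      np.mean(np.abs(prediction_errors[-10:])))
```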
Determination of thorium by fluorescent x-ray spectrometry
Adler, I.; Axelrod, J.M.
1955-01-01
A fluorescent x-ray spectrographic method for the determination of thoria in rock samples uses thallium as an internal standard. Measurements are made with a two-channel spectrometer equipped with quartz (d = 1.817 A.) analyzing crystals. Particle-size effects are minimized by grinding the sample components with a mixture of silicon carbide and aluminum and then briquetting. Analyses of 17 samples showed that for the 16 samples containing over 0.7% thoria the average error, based on chemical results, is 4.7% and the maximum error, 9.5%. Because of limitations of instrumentation, 0.2% thoria is considered the lower limit of detection. An analysis can be made in about an hour.
NASA Technical Reports Server (NTRS)
Leake, M. A.
1982-01-01
Various linear and areal measurements of Mercury's first quadrant which were used in geological map preparation, map analysis, and statistical surveys of crater densities are discussed. Accuracy of each method rests on the determination of the scale of the photograph, i.e., the conversion factor between distances on the planet (in km) and distances on the photograph (in cm). Measurement errors arise due to uncertainty in Mercury's radius, poor resolution, poor coverage, high Sun angle illumination in the limb regions, planetary curvature, limited precision in measuring instruments, and inaccuracies in the printed map scales. Estimates are given for these errors.
Topographical optimization of structures for use in musical instruments and other applications
NASA Astrophysics Data System (ADS)
Kirkland, William Brandon
Mallet percussion instruments such as the xylophone, marimba, and vibraphone have been produced and tuned since their inception by arduously grinding the keys to achieve harmonic ratios between their 1st, 2nd, and 3rd transverse modes. In consideration of this, it would be preferable to have defined mathematical models such that the keys of these instruments can be produced quickly and reliably. Additionally, physical modeling of these keys or beams provides a useful application of non-uniform beam vibrations as studied by Euler-Bernoulli and Timoshenko beam theories. This thesis work presents a literature review of previous studies regarding mallet percussion instrument design and optimization of non-uniform keys. The progression of previous research from strictly mathematical approaches to finite element methods is shown, ultimately arriving at the most current optimization techniques used by other authors. However, previous research varies slightly in the relative degree of accuracy to which a non-uniform beam can be modeled. Typically, accuracies are shown in the literature as 1% to 2% error. While this seems attractive, musical tolerances require 0.25% error, and beams are otherwise unsuitable. This research seeks to build on and add to the previous field research by optimizing beam topology and machining keys within tolerances such that no further tuning is required. The optimization methods relied on finite element analysis and used harmonic modal frequencies as constraints rather than arguments of an error function to be optimized. Instead, the beam mass was minimized while the modal frequency constraints were required to be satisfied within 0.25% tolerance. The final optimized and machined keys of an A4 vibraphone were shown to be accurate within the required musical tolerances, with strong resonance at the designed frequencies. The findings solidify a systematic method for designing musical structures for accuracy and repeatability upon manufacture.
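One way to see the tuning problem quantitatively is through the Euler-Bernoulli modes of a uniform free-free bar, against which the 0.25% musical tolerance can be checked. The bar dimensions, material, and target ratios in the sketch below are assumptions, and a real key is undercut (non-uniform), which is precisely why the uniform-bar ratios fail the check and topology optimization is needed.

```python
# Sketch: transverse modal frequencies of a *uniform* free-free bar
# (Euler-Bernoulli theory) and a 0.25% musical-tolerance check.
# A tuned percussion key is undercut, so its mode ratios differ from these.
import numpy as np

L, b, h = 0.40, 0.05, 0.012            # length, width, thickness [m] (assumed)
E, rho = 70e9, 2700.0                  # aluminium-like material (assumed)

A = b * h                              # cross-sectional area
I = b * h ** 3 / 12.0                  # second moment of area
betaL = np.array([4.7300, 7.8532, 10.9956])   # free-free eigenvalue constants

freqs = (betaL ** 2 / (2.0 * np.pi * L ** 2)) * np.sqrt(E * I / (rho * A))
ratios = freqs / freqs[0]
print("mode frequencies [Hz]:", np.round(freqs, 1))
print("ratios to fundamental:", np.round(ratios, 3))

target = np.array([1.0, 4.0, 10.0])    # a commonly quoted tuning target for bar ratios
within_tol = np.abs(ratios - target) / target < 0.0025
print("within 0.25% of target?", within_tol)
```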
NASA Astrophysics Data System (ADS)
Pendrill, L. R.; Fisher, William P., Jr.
2013-09-01
A better understanding of how to characterise human response is essential to improved person-centred care and other situations where human factors are crucial. Challenges to introducing classical metrological concepts such as measurement uncertainty and traceability when characterising Man as a Measurement Instrument include the failure of many statistical tools when applied to ordinal measurement scales and a lack of metrological references in, for instance, healthcare. The present work attempts to link metrological and psychometric (Rasch) characterisation of Man as a Measurement Instrument in a study of elementary tasks, such as counting dots, where one knows independently the expected value because the measurement object (collection of dots) is prepared in advance. The analysis is compared and contrasted with recent approaches to this problem by others, for instance using signal error fidelity.
NASA Technical Reports Server (NTRS)
Rinsland, Curtis P.; Luo, Ming; Logan, Jennifer A.; Beer, Reinhard; Worden, Helen; Kulawik, Susan S.; Rider, David; Osterman, Greg; Gunson, Michael; Eldering, Annmarie;
2006-01-01
We provide an overview of the nadir measurements of carbon monoxide (CO) obtained thus far by the Tropospheric Emission Spectrometer (TES). The instrument is a high resolution array Fourier transform spectrometer designed to measure infrared spectral radiances from low Earth orbit. It is one of four instruments successfully launched onboard the Aura platform into a sun synchronous orbit at an altitude of 705 km on July 15, 2004 from Vandenberg Air Force Base, California. Nadir spectra are recorded at 0.06 cm-1 spectral resolution with a nadir footprint of 5 x 8 km. We describe the TES retrieval approach for the analysis of the nadir measurements, report averaging kernels for typical tropical and polar ocean locations, characterize random and systematic errors for those locations, and describe instrument performance changes in the CO spectral region as a function of time. Sample maps of retrieved CO for the middle and upper troposphere from global surveys during December 2005 and April 2006 highlight the potential of the results for measurement and tracking of global pollution and determining air quality from space.
NASA Astrophysics Data System (ADS)
Rinsland, Curtis P.; Luo, Ming; Logan, Jennifer A.; Beer, Reinhard; Worden, Helen; Kulawik, Susan S.; Rider, David; Osterman, Greg; Gunson, Michael; Eldering, Annmarie; Goldman, Aaron; Shephard, Mark; Clough, Shepard A.; Rodgers, Clive; Lampel, Michael; Chiou, Linda
2006-11-01
We provide an overview of the nadir measurements of carbon monoxide (CO) obtained thus far by the Tropospheric Emission Spectrometer (TES). The instrument is a high resolution array Fourier transform spectrometer designed to measure infrared spectral radiances from low Earth orbit. It is one of four instruments successfully launched onboard the Aura platform into a sun synchronous orbit at an altitude of 705 km on July 15, 2004 from Vandenberg Air Force Base, California. Nadir spectra are recorded at 0.06-cm-1 spectral resolution with a nadir footprint of 5 × 8 km. We describe the TES retrieval approach for the analysis of the nadir measurements, report averaging kernels for typical tropical and polar ocean locations, characterize random and systematic errors for those locations, and describe instrument performance changes in the CO spectral region as a function of time. Sample maps of retrieved CO for the middle and upper troposphere from global surveys during December 2005 and April 2006 highlight the potential of the results for measurement and tracking of global pollution and determining air quality from space.
Sampling Analysis of Aerosol Retrievals by Single-track Spaceborne Instrument for Climate Research
NASA Astrophysics Data System (ADS)
Geogdzhayev, I. V.; Cairns, B.; Alexandrov, M. D.; Mishchenko, M. I.
2012-12-01
We examine to what extent the reduced sampling of along-track instruments such as Cloud-Aerosol LIdar with Orthogonal Polarisation (CALIOP) and Aerosol Polarimetry Sensor (APS) affects the statistical accuracy of a satellite climatology of retrieved aerosol optical thickness (AOT) by sub-sampling the retrievals from a wide-swath imaging instrument (MODerate resolution Imaging Spectroradiometer (MODIS)). Owing to its global coverage, longevity, and extensive characterization versus ground based data, the MODIS level-2 aerosol product is an instructive testbed for assessing sampling effects on climatic means derived from along-track instrument data. The advantage of using daily pixel-level aerosol retrievals from MODIS is that limitations caused by the presence of clouds are implicit in the sample, so that their seasonal and regional variations are captured coherently. However, imager data can exhibit cross-track variability of monthly global mean AOTs caused by a scattering-angle dependence. We found that single along-track values can deviate from the imager mean by 15% over land and by more than 20% over ocean. This makes it difficult to separate natural variability from viewing-geometry artifacts complicating direct comparisons of an along-track sub-sample with the full imager data. To work around this problem, we introduce "flipped-track" sampling which, by design, is statistically equivalent to along-track sampling and while closely approximating the imager in terms of angular artifacts. We show that the flipped-track variability of global monthly mean AOT is much smaller than the cross-track one for the 7-year period considered. Over the ocean flipped-track standard error is 85% less than the cross-track one (absolute values 0.0012 versus 0.0079), and over land it is about one third of the cross-track value (0.0054 versus 0.0188) on average. This allows us to attribute the difference between the two errors to the viewing-geometry artifacts and obtain an upper limit on AOT errors caused by along-track sampling. Our results show that using along-track subsets of MODIS aerosol data directly to analyze the sampling adequacy of single-track instruments can lead to false conclusions owing to the apparent enhancement of natural aerosol variability by the track-to-track artifacts. The analysis based on the statistics of the flipped-track means yields better estimates because it allows for better separation of the viewing-geometry artifacts and true natural variability. Published assessments estimate that a global AOT change of 0.01 would yield a climatically important flux change of 0.25 W/m2. Since the standard error estimates that we have obtained are comfortably below 0.01, we conclude that along-track instruments flown on a sun-synchronous orbiting platform have sufficient spatial sampling for estimating aerosol effects on climate. Since AOT is believed to be the most variable characteristic of tropospheric aerosols, our results imply that pixel-wide along-track coverage also provides adequate statistical representation of the global distribution of aerosol microphysical parameters.
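The sampling comparison described above can be emulated on any gridded daily AOT product: draw narrow "track-like" subsets, form their monthly means, and compare the spread against the full-swath mean. The sketch below uses synthetic fields and a naive single-column subset purely to show the bookkeeping; it does not reproduce the flipped-track construction itself.

```python
# Sketch: how much a narrow along-track subset can shift a monthly mean AOT
# relative to the full-swath average, using synthetic daily fields.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_rows, n_cols = 30, 60, 40             # synthetic month of gridded retrievals

# Toy AOT fields: smooth background plus day-to-day noise, with gaps for clouds.
aot = 0.15 + 0.05 * rng.normal(size=(n_days, n_rows, n_cols))
aot[rng.random(aot.shape) < 0.3] = np.nan        # ~30% of pixels lost to clouds

full_mean = np.nanmean(aot)                      # "imager" monthly mean

# Monthly means from single-column subsets, standing in for narrow-track sampling.
track_means = np.array([np.nanmean(aot[:, :, j]) for j in range(n_cols)])
print(f"full-swath mean: {full_mean:.4f}")
print(f"track-subset spread (std of monthly means): {np.std(track_means):.4f}")
print(f"largest deviation from full mean: {np.max(np.abs(track_means - full_mean)):.4f}")
```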
Endodontic Procedural Errors: Frequency, Type of Error, and the Most Frequently Treated Tooth.
Yousuf, Waqas; Khan, Moiz; Mehdi, Hasan
2015-01-01
Introduction. The aim of this study is to determine the most common endodontically treated tooth and the most common error produced during treatment and to note the association of particular errors with particular teeth. Material and Methods. Periapical radiographs were taken of all the included teeth and were stored and assessed using DIGORA Optime. Teeth in each group were evaluated for presence or absence of procedural errors (i.e., overfill, underfill, ledge formation, perforations, apical transportation, and/or instrument separation) and the most frequent tooth to undergo endodontic treatment was also noted. Results. A total of 1748 root canal treated teeth were assessed, out of which 574 (32.8%) contained a procedural error. Of these, 397 (22.7%) were overfilled, 155 (8.9%) were underfilled, 16 (0.9%) had instrument separation, and 7 (0.4%) had apical transportation. The most frequently treated tooth was the right permanent mandibular first molar (11.3%). The least commonly treated teeth were the permanent mandibular third molars (0.1%). Conclusion. Practitioners should show greater care to maintain accuracy of the working length throughout the procedure, as errors in length accounted for the vast majority of errors, and special care should be taken when working on molars.
Wall shear stress measurements using a new transducer
NASA Technical Reports Server (NTRS)
Vakili, A. D.; Wu, J. M.; Lawing, P. L.
1986-01-01
A new instrument has been developed for direct measurement of wall shear stress. This instrument is simple and symmetric in design with small moving mass and no internal friction. Features employed in the design of this instrument eliminate most of the difficulties associated with traditional floating element balances. Vibration problems associated with floating element skin friction balances have been found to be minimized by the design features and the optional damping provided. The unique design of this instrument eliminates or reduces the errors associated with conventional floating-element devices, such as errors due to gaps, pressure gradient, acceleration, heat transfer, and temperature change. The instrument is equipped with various sensing systems, and the output signal is a linear function of the wall shear stress. Measurements made in three different tunnels show good agreement with theory and with data obtained by floating-element devices.
NASA Technical Reports Server (NTRS)
Liu, Shih-Ching
1994-01-01
The goal of this research was to determine kinematic parameters of the lower limbs of a subject pedaling a bicycle. An existing measurement system was used as the basis to develop the model to determine position and acceleration of the limbs. The system consists of an ergometer instrumented to provide position of the pedal (foot), accelerometers to be attached to the lower limbs to measure accelerations, a recorder used for filtering, and a computer instrumented with an A/D board and a decoder board. The system is designed to read and record data from accelerometers and encoders. Software has been developed for data collection, analysis and presentation. Based on the measurement system, a two dimensional analytical model has been developed to determine configuration (position, orientation) and kinematics (velocities, accelerations). The model has been implemented in software and verified by simulation. An error analysis to determine the system's accuracy shows that the expected error is well within the specifications of practical applications. When the physical hardware is completed, NASA researchers hope to use the system developed to determine forces exerted by muscles and forces at articulations. This data will be useful in the development of countermeasures to minimize bone loss experienced by astronauts in microgravity conditions.
NASA Technical Reports Server (NTRS)
Stowe, Larry; Ardanuy, Philip; Hucek, Richard; Abel, Peter; Jacobowitz, Herbert
1991-01-01
A set of system simulations was performed to evaluate candidate scanner configurations to fly as a part of the Earth Radiation Budget Instrument (ERBI) on the polar platforms during the 1990s. The simulation considers instantaneous sampling (without diurnal averaging) of the longwave and shortwave fluxes at the top of the atmosphere (TOA). After measurement and subsequent inversion to the TOA, the measured fluxes were compared to the reference fluxes for 2.5 deg lat/long resolution targets. The reference fluxes at this resolution are obtained by integrating over the 25 x 25 = 625 grid elements in each target. The differences between the two resultant spatially averaged sets of target measurements (the errors) are taken and then statistically summarized. Five instruments are considered: (1) the Conically Scanning Radiometer (CSR); (2) the ERBE Cross Track Scanner; (3) the Nimbus-7 Biaxial Scanner; (4) the Clouds and Earth's Radiant Energy System Instrument (CERES-1); and (5) the Active Cavity Array (ACA). Identical studies of instantaneous error were completed for many days, two seasons, and several satellite equator crossing longitudes. The longwave flux errors were found to have the same space and time characteristics as the shortwave flux errors, but they amount to only about 25 percent of the shortwave errors.
Wang, Ching-Yun; Cullings, Harry; Song, Xiao; Kopecky, Kenneth J.
2017-01-01
SUMMARY Observational epidemiological studies often confront the problem of estimating exposure-disease relationships when the exposure is not measured exactly. In the paper, we investigate exposure measurement error in excess relative risk regression, which is a widely used model in radiation exposure effect research. In the study cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies a generalized version of the classical additive measurement error model, but it may or may not have repeated measurements. In addition, an instrumental variable is available for individuals in a subset of the whole cohort. We develop a nonparametric correction (NPC) estimator using data from the subcohort, and further propose a joint nonparametric correction (JNPC) estimator using all observed data to adjust for exposure measurement error. An optimal linear combination estimator of JNPC and NPC is further developed. The proposed estimators are nonparametric, which are consistent without imposing a covariate or error distribution, and are robust to heteroscedastic errors. Finite sample performance is examined via a simulation study. We apply the developed methods to data from the Radiation Effects Research Foundation, in which chromosome aberration is used to adjust for the effects of radiation dose measurement error on the estimation of radiation dose responses. PMID:29354018
Preliminary Design and Analysis of the GIFTS Instrument Pointing System
NASA Technical Reports Server (NTRS)
Zomkowski, Paul P.
2003-01-01
The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Instrument is the next-generation spectrometer for remote sensing weather satellites. The GIFTS instrument will be used to perform scans of the Earth's atmosphere by assembling a series of fields of view (FOVs) into a larger pattern. Realization of this process is achieved by step scanning the instrument FOV in a contiguous fashion across any desired portion of the visible Earth. A 2.3 arc second pointing stability, with respect to the scanning instrument, must be maintained for the duration of the FOV scan. A star tracker producing attitude data at a 100 Hz rate will be used by the autonomous pointing algorithm to precisely track target FOVs on the surface of the Earth. The main objective is to validate the pointing algorithm in the presence of spacecraft disturbances and determine acceptable disturbance limits from expected noise sources. Proof-of-concept validation of the pointing system algorithm is carried out with a full system simulation developed using Matlab Simulink. Models for the following components function within the full system simulation: inertial reference unit (IRU), attitude control system (ACS), reaction wheels, star tracker, and mirror controller. With the spacecraft orbital position and attitude maintained to within specified limits, the pointing algorithm receives quaternion, ephemeris, and initialization data that are used to construct the required mirror pointing commands at a 100 Hz rate. This comprehensive simulation will also aid in obtaining a thorough understanding of spacecraft disturbances and other sources of pointing system errors. Parameter sensitivity studies and disturbance analysis will be used to obtain limits of operability for the GIFTS instrument. The culmination of this simulation development and analysis will be used to validate the specified performance requirements outlined for this instrument.
Measurements of Aperture Averaging on Bit-Error-Rate
NASA Technical Reports Server (NTRS)
Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.;
2005-01-01
We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.
Measurements of aperture averaging on bit-error-rate
NASA Astrophysics Data System (ADS)
Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert
2005-08-01
We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.
NASA Technical Reports Server (NTRS)
Kuehn, C. E.; Himwich, W. E.; Clark, T. A.; Ma, C.
1991-01-01
The internal consistency of the baseline-length measurements derived from analysis of several independent VLBI experiments is an estimate of the measurement precision. The paper investigates whether the inclusion of water vapor radiometer (WVR) data as an absolute calibration of the propagation delay due to water vapor improves the precision of VLBI baseline-length measurements. The paper analyzes 28 International Radio Interferometric Surveying runs between June 1988 and January 1989; WVR measurements were made during each session. The addition of WVR data decreased the scatter of the length measurements of the baselines by 5-10 percent. The observed reduction in the scatter of the baseline lengths is less than what is expected from the behavior of the formal errors, which suggest that the baseline-length measurement precision should improve 10-20 percent if WVR data are included in the analysis. The discrepancy between the formal errors and the baseline-length results can be explained as the consequence of systematic errors in the dry-mapping function parameters, instrumental biases in the WVR and the barometer, or both.
Schwertner, Debora Soccal; Oliveira, Raul; Mazo, Giovana Zarpellon; Gioda, Fabiane Rosa; Kelber, Christian Roberto; Swarowsky, Alessandra
2016-05-04
Several posture evaluation devices have been used to detect deviations of the vertebral column. However, it has been observed that the instruments present measurement errors related to the equipment, environment, or measurement protocol. This study aimed to build and validate the Posture Evaluation Rotating Platform System (SPGAP, Brazilian abbreviation), analyze its reliability, and describe a measurement protocol for its use. The posture evaluation system comprises a Posture Evaluation Rotating Platform, video camera, calibration support, and measurement software. Two pilot studies were carried out with 102 elderly individuals (average age 69 years old, SD = ±7.3) to establish a protocol for SPGAP, controlling the measurement errors related to the environment, equipment, and the person under evaluation. Content validation was completed with input from judges with expertise in posture measurement. The variation coefficient method was used to validate the instrument's measurement of an object with known dimensions. Finally, reliability was established using repeated measurements of the known object. Expert content judges gave the system excellent ratings for content validity (mean 9.4 out of 10; SD 1.13). The measurement of an object with known dimensions indicated excellent validity (all measurement errors <1%) and test-retest reliability. A total of 26 images were needed to stabilize the system. Participants in the pilot studies indicated that they felt comfortable throughout the assessment. The use of only one image can offer measurements that underestimate or overestimate the reality. For the images of the object with known dimensions, the width and height showed, respectively, CVs of 0.88 and 2.33, SDs of 0.22 and 0.35, and minimum-maximum values of 24.83-25.2 and 14.56-15.75. In the analysis of different (similar) images of an individual, greater discrepancies were observed in the values found. The cervical index, for example, presented minimum and maximum values of 15.38 and 37.5, a coefficient of variation of 0.29, and a standard deviation of 6.78. The SPGAP was shown to be a valid and reliable instrument for the quantitative analysis of body posture with applicability and clinical use, since it managed to reduce several measurement errors, among them parallax distortion.
Enjolras, Vivien; Vincent, Patrick; Souyris, Jean-Claude; Rodriguez, Ernesto; Phalippou, Laurent; Cazenave, Anny
2006-01-01
The main limitations of standard nadir-looking radar altimeters have long been known. They include the lack of coverage (intertrack distance of typically 150 km for the T/P / Jason tandem) and the spatial resolution (typically 2 km for T/P and Jason), expected to be a limiting factor for the determination of mesoscale phenomena in the deep ocean. In this context, various solutions using off-nadir radar interferometry have been proposed by Rodriguez et al. to answer oceanographic mission objectives. This paper addresses the performance study of this new generation of instruments and their dedicated mission. A first approach is based on the Wide-Swath Ocean Altimeter (WSOA), intended to be implemented onboard Jason-2 in 2004 but now abandoned. Every error domain has been checked: the physics of the measurement, its geometry, the impact of the platform, and external errors such as the tropospheric and ionospheric delays. We have especially shown the strong need to move to a sun-synchronous orbit and the non-negligible impact of propagation media errors in the swath, reaching a few centimetres in the worst case. Some changes in the parameters of the instrument have also been discussed to improve the overall error budget. The outcomes have led to the definition and optimization of such an instrument and its dedicated mission.
Bayesian historical earthquake relocation: an example from the 1909 Taipei earthquake
Minson, Sarah E.; Lee, William H.K.
2014-01-01
Locating earthquakes from the beginning of the modern instrumental period is complicated by the fact that there are few good-quality seismograms and what traveltimes do exist may be corrupted by both large phase-pick errors and clock errors. Here, we outline a Bayesian approach to simultaneous inference of not only the hypocentre location but also the clock errors at each station and the origin time of the earthquake. This methodology improves the solution for the source location and also provides an uncertainty analysis on all of the parameters included in the inversion. As an example, we applied this Bayesian approach to the well-studied 1909 Mw 7 Taipei earthquake. While our epicentre location and origin time for the 1909 Taipei earthquake are consistent with earlier studies, our focal depth is significantly shallower suggesting a higher seismic hazard to the populous Taipei metropolitan area than previously supposed.
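A heavily simplified version of the joint inference described above can be written as a random-walk Metropolis sampler over epicentre, origin time, and per-station clock errors, using straight-ray travel times in a constant-velocity medium. The velocity, station geometry, noise levels, and priors below are all assumptions; the actual analysis is considerably more sophisticated.

```python
# Sketch: Bayesian relocation with unknown station clock errors via a
# random-walk Metropolis sampler. Flat-earth, constant-velocity toy model.
import numpy as np

rng = np.random.default_rng(0)
v = 6.0                                              # assumed P velocity [km/s]
stations = np.array([[50.0, 10.0], [-30.0, 60.0],
                     [20.0, -70.0], [-60.0, -40.0], [80.0, 55.0]])  # x, y [km]

def travel_time(src_xy, t0):
    d = np.linalg.norm(stations - src_xy, axis=1)
    return t0 + d / v

# Synthetic "observed" arrivals: true source plus clock errors and pick noise.
true_xy, true_t0 = np.array([5.0, -12.0]), 10.0
true_clock = rng.normal(0.0, 1.0, size=len(stations))          # seconds
obs = travel_time(true_xy, true_t0) + true_clock + rng.normal(0, 0.3, len(stations))

sigma_pick, sigma_clock_prior = 0.3, 1.5                        # assumed noise/prior scales

def log_post(theta):
    xy, t0, clocks = theta[:2], theta[2], theta[3:]
    resid = obs - (travel_time(xy, t0) + clocks)
    return (-0.5 * np.sum((resid / sigma_pick) ** 2)
            - 0.5 * np.sum((clocks / sigma_clock_prior) ** 2))

theta = np.concatenate([[0.0, 0.0, 0.0], np.zeros(len(stations))])
step = np.concatenate([[2.0, 2.0, 0.5], 0.3 * np.ones(len(stations))])
samples, lp = [], log_post(theta)

for _ in range(20000):
    prop = theta + step * rng.normal(size=theta.size)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:                     # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

post = np.array(samples[5000:])                                 # discard burn-in
print("posterior mean epicentre:", post[:, :2].mean(axis=0))
print("posterior mean origin time:", post[:, 2].mean())
```

The posterior spread in each parameter is the uncertainty analysis the abstract refers to; origin time and the mean clock error trade off, which is why the clock prior matters.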
Advancing Technology for Starlight Suppression via an External Occulter
NASA Technical Reports Server (NTRS)
Kasdin, N. J.; Spergel, D. N.; Vanderbei, R. J.; Lisman, D.; Shaklan, S.; Thomson, M.; Walkemeyer, P.; Bach, V.; Oakes, E.; Cady, E.;
2011-01-01
External occulters provide the starlight suppression needed for detecting and characterizing exoplanets with a much simpler telescope and instrument than is required for the equivalent performing coronagraph. In this paper we describe progress on our Technology Development for Exoplanet Missions project to design, manufacture, and measure a prototype occulter petal. We focus on the key requirement of manufacturing a precision petal while controlling its shape within precise tolerances. The required tolerances are established by modeling the effect that various mechanical and thermal errors have on scatter in the telescope image plane and by suballocating the allowable contrast degradation between these error sources. We discuss the deployable starshade design, representative error budget, thermal analysis, and prototype manufacturing. We also present our metrology system and methodology for verifying that the petal shape meets the contrast requirement. Finally, we summarize the progress to date building the prototype petal.
Investigation of air transportation technology at Massachusetts Institute of Technology, 1985
NASA Technical Reports Server (NTRS)
Simpson, Robert W.
1987-01-01
Two areas of research are discussed, an investigation into runway approach flying with Loran C and a series of research topics in the development of experimental validation of methodologies to support aircraft icing analysis. Flight tests with the Loran C led to the conclusion that it is a suitable system for non-precision approaches, and that time-difference corrections made every eight weeks in the instrument approach plates will produce acceptable errors. In the area of aircraft icing analysis, wind tunnel and flight test results are discussed.
Inertial Pointing and Positioning System
NASA Technical Reports Server (NTRS)
Yee, Robert (Inventor); Robbins, Fred (Inventor)
1998-01-01
An inertial pointing and control system and method for pointing to a designated target with known coordinates from a platform to provide accurate position, steering, and command information. The system continuously receives GPS signals and corrects Inertial Navigation System (INS) dead reckoning or drift errors. An INS is mounted directly on a pointing instrument, rather than in a remote location on the platform, for monitoring the terrestrial position and instrument attitude and for pointing the instrument at designated celestial targets or ground-based landmarks. As a result, the pointing instrument and the INS move independently in inertial space from the platform since the INS is decoupled from the platform. Another important characteristic of the present system is that selected INS measurements are combined with predefined coordinate transformation equations and control logic algorithms under computer control in order to generate inertial pointing commands to the pointing instrument. More specifically, the computer calculates the desired instrument angles (Phi, Theta, Psi), which are then compared to the Euler angles measured by the instrument-mounted INS, and forms the pointing command error angles as a result of the compared difference.
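The comparison of desired and INS-measured Euler angles described above can be sketched with rotation objects: form both attitudes, compose the error rotation, and read off the pointing command error angles. The angle convention and numerical values below are assumptions for illustration, not the patented control logic.

```python
# Sketch: forming pointing-command error angles by comparing desired
# instrument Euler angles with those measured by an instrument-mounted INS.
import numpy as np
from scipy.spatial.transform import Rotation as R

# Desired instrument attitude (Psi, Theta, Phi) and INS-measured attitude [deg];
# the ZYX convention and the numbers are illustrative assumptions.
desired = R.from_euler("ZYX", [30.00, -5.00, 12.00], degrees=True)
measured = R.from_euler("ZYX", [30.02, -5.01, 11.97], degrees=True)

# Error rotation that takes the measured attitude into the desired one.
error_rot = desired * measured.inv()
error_angles = error_rot.as_euler("ZYX", degrees=True)

print("pointing error angles [deg]:", np.round(error_angles, 4))
print("total angular error [arcsec]:",
      np.round(np.degrees(error_rot.magnitude()) * 3600.0, 1))
```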
Calendar Instruments in Retrospective Web Surveys
ERIC Educational Resources Information Center
Glasner, Tina; van der Vaart, Wander; Dijkstra, Wil
2015-01-01
Calendar instruments incorporate aided recall techniques such as temporal landmarks and visual time lines that aim to reduce response error in retrospective surveys. Those calendar instruments have been used extensively in off-line research (e.g., computer-aided telephone interviews, computer assisted personal interviewing, and paper and pen…
Deep data fusion method for missile-borne inertial/celestial system
NASA Astrophysics Data System (ADS)
Zhang, Chunxi; Chen, Xiaofei; Lu, Jiazhen; Zhang, Hao
2018-05-01
The strap-down inertial-celestial integrated navigation system has the advantages of autonomy and high precision and is very useful for ballistic missiles. The star sensor installation error and the inertial measurement error have a great influence on system performance. Based on deep data fusion, this paper establishes measurement equations that include the star sensor installation error and proposes a deep fusion filter method. Simulations including misalignment error, star sensor installation error, and IMU error are analyzed. The simulation results indicate that the deep fusion method can estimate the star sensor installation error and the IMU error. Meanwhile, the method can restrain the misalignment errors caused by instrument errors.
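A common way to handle a slowly varying star sensor installation error, in the spirit of the measurement equations described above, is to augment the filter state with that error and let the measurement update estimate it alongside the attitude misalignment. The single-axis linear sketch below, with assumed noise levels and an assumed attitude-dependent coupling, is only a cartoon of state augmentation, not the paper's deep fusion filter.

```python
# Sketch: augmenting a filter state with a constant star-sensor installation
# error so a Kalman update can estimate it together with a misalignment angle.
# Observability comes from an attitude-dependent coupling g_k that changes
# as the vehicle maneuvers (all numerical values are assumptions).
import numpy as np

rng = np.random.default_rng(0)
true_misalign, true_install = 0.002, 0.0005   # [rad], assumed truth values

x = np.zeros(2)                               # state: [misalignment, installation error]
P = np.diag([1e-4, 1e-4])                     # initial covariance (assumed)
Rm = (5e-5) ** 2                              # star-sensor noise variance (assumed)

for k in range(300):
    g = np.cos(0.02 * k)                      # attitude-dependent coupling (assumed maneuver)
    H = np.array([[1.0, g]])
    z = true_misalign + g * true_install + rng.normal(0.0, np.sqrt(Rm))
    S = (H @ P @ H.T).item() + Rm             # innovation variance
    K = (P @ H.T) / S                         # 2x1 Kalman gain
    x = x + K.ravel() * (z - (H @ x).item())  # measurement update
    P = (np.eye(2) - K @ H) @ P

print("estimated [misalignment, installation error]:", np.round(x, 5))
```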
Error management for musicians: an interdisciplinary conceptual framework
Kruse-Weber, Silke; Parncutt, Richard
2014-01-01
Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians’ generally negative attitude toward errors and the tendency to aim for flawless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly – or not at all. A more constructive, creative, and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of such an approach. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further music education and musicians at all levels. PMID:25120501
Performance of a Facility for Measuring Scintillator Non-Proportionality
NASA Astrophysics Data System (ADS)
Choong, Woon-Seng; Hull, Giulia; Moses, William W.; Vetter, Kai M.; Payne, Stephen A.; Cherepy, Nerine J.; Valentine, John D.
2008-06-01
We have constructed a second-generation Compton coincidence instrument, known as the Scintillator Light Yield Non-proportionality Characterization Instrument (SLYNCI), to characterize the electron response of scintillating materials. While the SLYNCI design includes more and higher efficiency HPGe detectors than the original apparatus (five 25%-30% detectors versus one 10% detector), the most novel feature is that no collimator is placed in front of the HPGe detectors. Because of these improvements, the SLYNCI data collection rate is over 30 times higher than the original instrument. In this paper, we present a validation study of this instrument, reporting on the hardware implementation, calibration, and performance. We discuss the analysis method and present measurements of the electron response of two different NaI:Tl samples. We also discuss the systematic errors of the measurement, especially those that are unique to SLYNCI. We find that the apparatus is very stable, but that careful attention must be paid to the energy calibration of the HPGe detectors.
Leyland, M J; Beurskens, M N A; Flanagan, J C; Frassinetti, L; Gibson, K J; Kempenaars, M; Maslov, M; Scannell, R
2016-01-01
The Joint European Torus (JET) high resolution Thomson scattering (HRTS) system measures radial electron temperature and density profiles. One of the key capabilities of this diagnostic is measuring the steep pressure gradient, termed the pedestal, at the edge of JET plasmas. The pedestal is susceptible to limiting instabilities, such as Edge Localised Modes (ELMs), characterised by a periodic collapse of the steep gradient region. A common method to extract the pedestal width, gradient, and height, used on numerous machines, is by performing a modified hyperbolic tangent (mtanh) fit to overlaid profiles selected from the same region of the ELM cycle. This process of overlaying profiles, termed ELM synchronisation, maximises the number of data points defining the pedestal region for a given phase of the ELM cycle. When fitting to HRTS profiles, it is necessary to incorporate the diagnostic radial instrument function, particularly important when considering the pedestal width. A deconvolved fit is determined by a forward convolution method requiring knowledge of only the instrument function and profiles. The systematic error due to the deconvolution technique incorporated into the JET pedestal fitting tool has been documented by Frassinetti et al. [Rev. Sci. Instrum. 83, 013506 (2012)]. This paper seeks to understand and quantify the systematic error introduced to the pedestal width due to ELM synchronisation. Synthetic profiles, generated with error bars and point-to-point variation characteristic of real HRTS profiles, are used to evaluate the deviation from the underlying pedestal width. We find on JET that the ELM synchronisation systematic error is negligible in comparison to the statistical error when assuming ten overlaid profiles (typical for a pre-ELM fit to HRTS profiles). This confirms that fitting a mtanh to ELM synchronised profiles is a robust and practical technique for extracting the pedestal structure.
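As an illustration of the fitting step described above, here is a minimal sketch (hypothetical profile data, and one common mtanh parameterization rather than the exact form used in the JET pedestal fitting tool) of fitting a modified hyperbolic tangent to overlaid pedestal points with scipy:

    import numpy as np
    from scipy.optimize import curve_fit

    def mtanh(x, core_slope):
        # modified tanh: adds a linear "core" slope on the inboard side of the step
        return ((1 + core_slope * x) * np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

    def pedestal(r, height, offset, position, width, core_slope):
        # common pedestal form: step of given height/width centred at 'position'
        x = (position - r) / (width / 2.0)
        return offset + (height / 2.0) * (1.0 + mtanh(x, core_slope))

    # hypothetical ELM-synchronised, overlaid profile points (radius in m, T_e in keV)
    r = np.linspace(3.70, 3.90, 200)
    rng = np.random.default_rng(0)
    te = pedestal(r, 0.8, 0.05, 3.82, 0.03, 0.1) + rng.normal(0.0, 0.03, r.size)

    popt, pcov = curve_fit(pedestal, r, te, p0=[0.7, 0.1, 3.81, 0.05, 0.0])
    width, width_err = popt[3], np.sqrt(pcov[3, 3])
    print(f"pedestal width = {width * 100:.2f} +/- {width_err * 100:.2f} cm")

The statistical error returned by such a fit is the quantity against which the paper's ELM-synchronisation systematic error is compared; a deconvolution step for the radial instrument function would be applied on top of this.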
Perception of competence in middle school physical education: instrument development and validation.
Scrabis-Fletcher, Kristin; Silverman, Stephen
2010-03-01
Perception of Competence (POC) has been studied extensively in physical activity (PA) research with similar instruments adapted for physical education (PE) research. Such instruments do not account for the unique PE learning environment. Therefore, an instrument was developed and the scores validated to measure POC in middle school PE. A multiphase design was used consisting of an intensive theoretical review, elicitation study, prepilot study, pilot study, content validation study, and final validation study (N=1281). Data analysis included a multistep iterative process to identify the best model fit. A three-factor model for POC was tested and resulted in root mean square error of approximation = .09, root mean square residual = .07, goodness-of-fit index = .90, and adjusted goodness-of-fit index = .86, values in the acceptable range (Hu & Bentler, 1999). A two-factor model was also tested and resulted in a good fit (two-factor fit index values = .05, .03, .98, and .97, respectively). The results of this study suggest that an instrument using a three- or two-factor model provides reliable and valid scores for POC measurement in middle school PE.
Data Quality Control Tools Applied to Seismo-Acoustic Arrays in Korea
NASA Astrophysics Data System (ADS)
Park, J.; Hayward, C.; Stump, B. W.
2017-12-01
We assess data quality (data gaps, seismometer orientation, timing error, noise level, and coherence between co-located sensors) for seismic and infrasound data in South Korea using six seismo-acoustic arrays, BRDAR, CHNAR, KSGAR, KMPAR, TJIAR, and YPDAR, cooperatively operated by Southern Methodist University and the Korea Institute of Geoscience and Mineral Resources. Timing errors associated with seismometers can be found based on estimated changes in instrument orientation calculated from RMS errors between the reference array and each array seismometer using waveforms filtered from 0.1 to 0.35 Hz. Noise levels of seismic and infrasound data are analyzed to investigate local environmental effects and seasonal noise variation. In order to examine the spectral properties of the noise, the waveforms are analyzed using Welch's method (Welch, 1967), which produces a single power spectral estimate from an average of spectra taken at regular intervals over a specific time period. This analysis quantifies the range of noise conditions found at each of the arrays over the given time period. We take advantage of the fact that infrasound sensors are co-located or located close to one another, which allows for a direct comparison of sensors, following the method of Ringler et al. (2010). The power-level differences between two sensors at the same array in the frequency band of interest are used to monitor temporal changes in data quality and instrument conditions. A data quality factor is assigned to stations based on the average values of temporal changes estimated in the frequency and time domains. These monitoring tools enable us to automatically assess technical issues related to the instruments and data quality at each seismo-acoustic array as well as to investigate local environmental effects and seasonal variations in both seismic and infrasound data.
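A minimal sketch (hypothetical sample rate and synthetic signals) of the two checks described above, a Welch power spectral estimate for noise levels and a power-level difference between co-located sensors in a band of interest:

    import numpy as np
    from scipy.signal import welch

    fs = 40.0                                       # assumed sample rate (Hz)
    n = int(fs * 3600)                              # one hour of data
    rng = np.random.default_rng(1)
    signal = rng.normal(size=n)                     # stand-in for the real recorded signal
    sensor_a = signal + 0.1 * rng.normal(size=n)    # co-located sensors share the signal
    sensor_b = signal + 0.2 * rng.normal(size=n)    # but have different self-noise

    # Welch (1967): average of spectra taken over regular segments of the record
    f, psd_a = welch(sensor_a, fs=fs, nperseg=4096)
    _, psd_b = welch(sensor_b, fs=fs, nperseg=4096)

    band = (f >= 1.0) & (f <= 5.0)                  # assumed frequency band of interest
    diff_db = 10 * np.log10(psd_a[band].mean() / psd_b[band].mean())
    print(f"mean power-level difference, 1-5 Hz: {diff_db:.2f} dB")

Tracking this band-averaged difference over time is the kind of quantity a Ringler et al. (2010) style sensor comparison monitors for changes in instrument condition.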
Instrumental variables vs. grouping approach for reducing bias due to measurement error.
Batistatou, Evridiki; McNamee, Roseanne
2008-01-01
Attenuation of the exposure-response relationship due to exposure measurement error is often encountered in epidemiology. Given that error cannot be totally eliminated, bias correction methods of analysis are needed. Many methods require more than one exposure measurement per person to be made, but the `group mean OLS method,' in which subjects are grouped into several a priori defined groups followed by ordinary least squares (OLS) regression on the group means, can be applied with one measurement. An alternative approach is to use an instrumental variable (IV) method in which both the single error-prone measure and an IV are used in IV analysis. In this paper we show that the `group mean OLS' estimator is equal to an IV estimator with the group mean used as IV, but that the variance estimators for the two methods are different. We derive a simple expression for the bias in the common estimator which is a simple function of group size, reliability and contrast of exposure between groups, and show that the bias can be very small when group size is large. We compare this method with a new proposal (group mean ranking method), also applicable with a single exposure measurement, in which the IV is the rank of the group means. When there are two independent exposure measurements per subject, we propose a new IV method (EVROS IV) and compare it with Carroll and Stefanski's (CS IV) proposal in which the second measure is used as an IV; the new IV estimator combines aspects of the `group mean' and `CS' strategies. All methods are evaluated in terms of bias, precision and root mean square error via simulations and a dataset from occupational epidemiology. The `group mean ranking method' does not offer much improvement over the `group mean method.' Compared with the `CS' method, the `EVROS' method is less affected by low reliability of exposure. We conclude that the group IV methods we propose may provide a useful way to handle mismeasured exposures in epidemiology with or without replicate measurements. Our finding may also have implications for the use of aggregate variables in epidemiology to control for unmeasured confounding.
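A minimal simulation sketch (all parameter values assumed) contrasting naive OLS on a single error-prone measurement with the group-mean estimator discussed above, computed here as OLS of the outcome on the group means of the error-prone exposure:

    import numpy as np

    rng = np.random.default_rng(2)
    n_groups, per_group = 20, 200                    # large groups keep the group-mean bias small
    group = np.repeat(np.arange(n_groups), per_group)
    x_true = rng.normal(loc=0.2 * group, scale=1.0)  # a priori groups contrast the true exposure
    w = x_true + rng.normal(scale=1.5, size=x_true.size)        # single error-prone measurement
    y = 0.5 * x_true + rng.normal(scale=1.0, size=x_true.size)  # outcome; true slope 0.5

    def ols_slope(x, y):
        return np.cov(x, y, bias=True)[0, 1] / np.var(x)

    naive = ols_slope(w, y)                          # attenuated towards zero
    gm_w = np.array([w[group == g].mean() for g in range(n_groups)])[group]
    grouped = ols_slope(gm_w, y)                     # group-mean OLS (= IV with group mean as IV)
    print(f"true slope 0.5, naive {naive:.3f}, group-mean {grouped:.3f}")

With large groups and a strong between-group exposure contrast, the group-mean estimate sits close to the true slope, consistent with the bias expression described in the abstract.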
NASA Astrophysics Data System (ADS)
Li, Li; Li, Zhengqiang; Li, Kaitao; Sun, Bin; Wu, Yanke; Xu, Hua; Xie, Yisong; Goloub, Philippe; Wendisch, Manfred
2018-04-01
In this study errors of the relative orientations of polarizers in the Cimel polarized sun-sky radiometers are measured and introduced into the Mueller matrix of the instrument. The linearly polarized light with different polarization directions from 0° to 180° (or 360°) is generated by using a rotating linear polarizer in front of an integrating sphere. Through measuring the referential linearly polarized light, the errors of relative orientations of polarizers are determined. The efficiencies of the polarizers are obtained simultaneously. By taking the error of relative orientation into consideration in the Mueller matrix, the accuracies of the calculated Stokes parameters, the degree of linear polarization, and the angle of polarization are remarkably improved. The method may also apply to other polarization instruments of similar types.
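A sketch of how such an orientation error enters the instrument matrix, using the standard Mueller matrix of an ideal linear polarizer whose transmission axis sits at the nominal angle θ plus a measured offset δ (illustrative form only; the full instrument matrix in the paper also involves the measured polarizer efficiencies):

    \[
    M(\theta+\delta)=\frac{1}{2}
    \begin{pmatrix}
    1 & \cos 2(\theta+\delta) & \sin 2(\theta+\delta) & 0\\
    \cos 2(\theta+\delta) & \cos^{2} 2(\theta+\delta) & \cos 2(\theta+\delta)\,\sin 2(\theta+\delta) & 0\\
    \sin 2(\theta+\delta) & \cos 2(\theta+\delta)\,\sin 2(\theta+\delta) & \sin^{2} 2(\theta+\delta) & 0\\
    0 & 0 & 0 & 0
    \end{pmatrix}
    \]

Using the measured θ + δ for each polarizer channel, rather than the nominal θ alone, when inverting for the Stokes parameters I, Q, and U is what improves the derived degree and angle of linear polarization.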
CME Velocity and Acceleration Error Estimates Using the Bootstrap Method
NASA Technical Reports Server (NTRS)
Michalek, Grzegorz; Gopalswamy, Nat; Yashiro, Seiji
2017-01-01
The bootstrap method is used to determine errors of basic attributes of coronal mass ejections (CMEs) visually identified in images obtained by the Solar and Heliospheric Observatory (SOHO) mission's Large Angle and Spectrometric Coronagraph (LASCO) instruments. The basic parameters of CMEs are stored, among others, in a database known as the SOHO/LASCO CME catalog and are widely employed for many research studies. The basic attributes of CMEs (e.g. velocity and acceleration) are obtained from manually generated height-time plots. The subjective nature of manual measurements introduces random errors that are difficult to quantify. In many studies the impact of such measurement errors is overlooked. In this study we present a new way to estimate measurement errors in the basic attributes of CMEs. This approach is a computer-intensive method because it requires repeating the original data analysis procedure several times using replicate datasets. This is also commonly called the bootstrap method in the literature. We show that the bootstrap approach can be used to estimate the errors of the basic attributes of CMEs having moderately large numbers of height-time measurements. The velocity errors are small for the vast majority of events and depend mostly on the number of height-time points measured for a particular event. In the case of acceleration, the errors are significant, and for more than half of all CMEs, they are larger than the acceleration itself.
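A minimal sketch (hypothetical height-time points) of the bootstrap idea: resample the measured points with replacement, refit the kinematic model each time, and take the spread of the refitted coefficients as the error estimate:

    import numpy as np

    rng = np.random.default_rng(3)
    t = np.linspace(0, 5 * 3600, 12)                    # s, 12 manual height-time points
    h_true = 2.0e6 + 400.0 * t + 0.5 * 2.0e-3 * t**2    # km; v = 400 km/s, a = 2 m/s^2
    h = h_true + rng.normal(scale=5.0e4, size=t.size)   # scatter from manual measurement

    n_boot = 2000
    fits = np.empty((n_boot, 3))
    for i in range(n_boot):
        idx = rng.integers(0, t.size, t.size)           # replicate dataset
        fits[i] = np.polyfit(t[idx], h[idx], 2)         # quadratic height-time fit

    speed = fits[:, 1]                                  # km/s (at t = 0)
    accel = 2.0e3 * fits[:, 0]                          # m/s^2
    print(f"V = {speed.mean():.0f} +/- {speed.std(ddof=1):.0f} km/s")
    print(f"a = {accel.mean():.2f} +/- {accel.std(ddof=1):.2f} m/s^2")

As in the paper, the velocity error shrinks as the number of height-time points grows, while the acceleration, which depends on the curvature of a short arc, remains comparatively poorly constrained.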
Stochastic estimates of gradient from laser measurements for an autonomous Martian Roving Vehicle
NASA Technical Reports Server (NTRS)
Shen, C. N.; Burger, P.
1973-01-01
The general problem presented in this paper is one of estimating the state vector x from the state equation h = Ax, where h, A, and x are all stochastic. Specifically, the problem is for an autonomous Martian Roving Vehicle to utilize laser measurements in estimating the gradient of the terrain. Error exists due to two factors: surface roughness and instrumental measurements. The errors in slope depend on the standard deviations of these noise factors. Numerically, the error in gradient is expressed as a function of instrumental inaccuracies. Certain guidelines for the accuracy of permissible gradient must be set. It is found that present technology can meet these guidelines.
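A minimal sketch (assumed spot layout and noise level) of the underlying least-squares estimate: fit a plane to noisy laser-derived heights and propagate the measurement standard deviation into the gradient error. Here the design matrix is treated as known, whereas the paper also treats it as stochastic:

    import numpy as np

    rng = np.random.default_rng(4)
    xy = rng.uniform(0, 10, size=(30, 2))                    # m, laser spot positions (assumed)
    A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(30)])
    x_true = np.array([0.15, -0.05, 2.0])                    # terrain slopes and offset
    sigma = 0.05                                             # m, roughness + instrument noise
    h = A @ x_true + rng.normal(scale=sigma, size=30)

    x_hat, *_ = np.linalg.lstsq(A, h, rcond=None)
    cov = sigma**2 * np.linalg.inv(A.T @ A)                  # propagated estimate covariance
    print("gradient estimate :", x_hat[:2])
    print("gradient std. dev.:", np.sqrt(np.diag(cov))[:2])

Comparing the propagated gradient standard deviation against a permissible-gradient guideline is the kind of check the abstract describes.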
Emissivity correction for interpreting thermal radiation from a terrestrial surface
NASA Technical Reports Server (NTRS)
Sutherland, R. A.; Bartholic, J. F.; Gerber, J. F.
1979-01-01
A general method of accounting for emissivity in making temperature determinations of graybody surfaces from radiometric data is presented. The method differs from previous treatments in that a simple blackbody calibration and graphical approach is used rather than numerical integrations, which require detailed knowledge of an instrument's spectral characteristics. Also, errors caused by approximating instrumental response with the Stefan-Boltzmann law rather than with an appropriately weighted Planck integral are examined. In the 8-14 micron wavelength interval, it is shown that errors are at most on the order of 3 °C for the extremes of the Earth's temperature and emissivity. For more practical limits, however, errors are less than 0.5 °C.
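A sketch of the graybody relation that underlies the correction (standard form; the paper's contribution is the blackbody-calibration and graphical procedure for applying it, and the symbol L_env for the reflected environmental radiance is introduced here only for illustration):

    \[
    L_{\mathrm{meas}} \;=\; \varepsilon\, L_{bb}(T_s) \;+\; (1-\varepsilon)\, L_{\mathrm{env}},
    \qquad
    L_{bb}(T_s) \;=\; \int_{8\,\mu\mathrm{m}}^{14\,\mu\mathrm{m}} R(\lambda)\, B_{\lambda}(T_s)\, d\lambda ,
    \]

so the surface temperature follows by inverting L_bb(T_s) = [L_meas - (1 - ε) L_env] / ε, where R(λ) is the instrument's spectral response and B_λ the Planck function. Replacing the response-weighted Planck integral by the Stefan-Boltzmann form σT_s⁴ is the approximation whose error, at most about 3 °C over the extremes of the Earth's temperature and emissivity, is quantified above.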
Central Corneal Thickness Reproducibility among Ten Different Instruments.
Pierro, Luisa; Iuliano, Lorenzo; Gagliardi, Marco; Ambrosi, Alessandro; Rama, Paolo; Bandello, Francesco
2016-11-01
To assess agreement between one ultrasonic (US) and nine optical instruments for the measurement of central corneal thickness (CCT), and to evaluate intra- and inter-operator reproducibility. In this observational cross-sectional study, two masked operators measured CCT twice in 28 healthy eyes. We used seven spectral-domain optical coherence tomography (SD-OCT) devices, one time-domain OCT, one Scheimpflug camera, and one US-based instrument. Inter- and intra-operator reproducibility was evaluated by intraclass correlation coefficient (ICC), coefficient of variation (CV), and Bland-Altman analysis. Instrument-to-instrument reproducibility was determined by ANOVA for repeated measurements. We also tested how the devices disagreed regarding systematic bias and random error using a structural equation model. Mean CCT of all instruments ranged from 536 ± 42 μm to 577 ± 40 μm. An instrument-to-instrument correlation test showed high values among the 10 investigated devices (correlation coefficient range 0.852-0.995; p values <0.0001 in all cases). The highest correlation coefficient values were registered between 3D OCT-2000 Topcon-Spectral OCT/SLO Opko (0.995) and Cirrus HD-OCT Zeiss-RS-3000 Nidek (0.995), whereas the lowest were seen between SS-1000 CASIA and Spectral OCT/SLO Opko (0.852). ICC and CV showed excellent inter- and intra-operator reproducibility for all optic-based devices, except for the US-based device. Bland-Altman analysis demonstrated low mean biases between operators. Despite highlighting good intra- and inter-operator reproducibility, we found that a scale bias between instruments might interfere with thorough CCT monitoring. We suggest that optimal monitoring is achieved with the same operator and the same device.
Fluorescence decay data analysis correcting for detector pulse pile-up at very high count rates
NASA Astrophysics Data System (ADS)
Patting, Matthias; Reisch, Paja; Sackrow, Marcus; Dowler, Rhys; Koenig, Marcelle; Wahl, Michael
2018-03-01
Using time-correlated single photon counting for the purpose of fluorescence lifetime measurements is usually limited in speed due to pile-up. With modern instrumentation, this limitation can be lifted significantly, but some artifacts due to frequent merging of closely spaced detector pulses (detector pulse pile-up) remain an issue to be addressed. We propose a data analysis method correcting for this type of artifact and the resulting systematic errors. It physically models the photon losses due to detector pulse pile-up and incorporates the loss in the decay fit model employed to obtain fluorescence lifetimes and relative amplitudes of the decay components. Comparison of results with and without this correction shows a significant reduction of systematic errors at count rates approaching the excitation rate. This allows quantitatively accurate fluorescence lifetime imaging at very high frame rates.
Seismic Station Installation Orientation Errors at ANSS and IRIS/USGS Stations
Ringler, Adam T.; Hutt, Charles R.; Persfield, K.; Gee, Lind S.
2013-01-01
Many seismological studies depend on the published orientations of sensitive axes of seismic instruments relative to north (e.g., Li et al., 2011). For example, studies of the anisotropic structure of the Earth’s mantle through SKS‐splitting measurements (Long et al., 2009), constraints on core–mantle electromagnetic coupling from torsional normal‐mode measurements (Dumberry and Mound, 2008), and models of three‐dimensional (3D) velocity variations from surface waves (Ekström et al., 1997) rely on accurate sensor orientation. Unfortunately, numerous results indicate that this critical parameter is often subject to significant error (Laske, 1995; Laske and Masters, 1996; Yoshizawa et al., 1999; Schulte‐Pelkum et al., 2001; Larson and Ekström, 2002). For the Advanced National Seismic System (ANSS; ANSS Technical Integration Committee, 2002), the Global Seismographic Network (GSN; Butler et al., 2004), and many other networks, sensor orientation is typically determined by a field engineer during installation. Successful emplacement of a seismic instrument requires identifying true north, transferring a reference line, and measuring the orientation of the instrument relative to the reference line. Such an exercise is simple in theory, but there are many complications in practice. There are four commonly used methods for determining true north at the ANSS and GSN stations operated by the USGS Albuquerque Seismological Laboratory (ASL), including gyroscopic, astronomical, Global Positioning System (GPS), and magnetic field techniques. A particular method is selected based on site conditions (above ground, below ground, availability of astronomical observations, and so on) and in the case of gyroscopic methods, export restrictions. Once a north line has been determined, it must be translated to the sensor location. For installations in mines or deep vaults, this step can include tracking angles through the one or more turns in the access tunnel leading to the vault (e.g., GSN station WCI in Wyandotte Cave, Indiana). Finally, the third source of error comes from the ability of field engineers to orient the sensor relative to the reference line. In order to quantify bounds on the errors in each step in the orientation process, we conducted a series of tests at the ASL using twelve GSN and ANSS field engineers. The results from this exercise allow us to estimate upper bounds on the precision of our ability to orient instruments, as well as identify the sources of error in the procedures. We are also able to identify systematic bias of various true‐north‐finding methods relative to one another. Although we are unable to estimate the absolute accuracy of our orientation measurements due to our inability to identify true north without some error, the agreement between independent methods for finding true north provides confidence in the different approaches, assuming no systematic bias. Finally, our study neglects orientation errors that are beyond the control of the field engineer during a station visit. These additional errors can arise from deviations in the sensitive axes of the instruments relative to the case markings, processing errors (Holcomb, 2002) when comparing horizontal orientations relative to other sensors (e.g., borehole installations), and deviations of the sensitive axes of instruments from true orthogonality (e.g., instruments with separate modules such as the Streckeisen STS‐1).
Bennett, Derrick A; Landry, Denise; Little, Julian; Minelli, Cosetta
2017-09-19
Several statistical approaches have been proposed to assess and correct for exposure measurement error. We aimed to provide a critical overview of the most common approaches used in nutritional epidemiology. MEDLINE, EMBASE, BIOSIS and CINAHL were searched for reports published in English up to May 2016 in order to ascertain studies that described methods aimed to quantify and/or correct for measurement error for a continuous exposure in nutritional epidemiology using a calibration study. We identified 126 studies, 43 of which described statistical methods and 83 that applied any of these methods to a real dataset. The statistical approaches in the eligible studies were grouped into: a) approaches to quantify the relationship between different dietary assessment instruments and "true intake", which were mostly based on correlation analysis and the method of triads; b) approaches to adjust point and interval estimates of diet-disease associations for measurement error, mostly based on regression calibration analysis and its extensions. Two approaches (multiple imputation and moment reconstruction) were identified that can deal with differential measurement error. For regression calibration, the most common approach to correct for measurement error used in nutritional epidemiology, it is crucial to ensure that its assumptions and requirements are fully met. Analyses that investigate the impact of departures from the classical measurement error model on regression calibration estimates can be helpful to researchers in interpreting their findings. With regard to the possible use of alternative methods when regression calibration is not appropriate, the choice of method should depend on the measurement error model assumed, the availability of suitable calibration study data and the potential for bias due to violation of the classical measurement error model assumptions. On the basis of this review, we provide some practical advice for the use of methods to assess and adjust for measurement error in nutritional epidemiology.
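A minimal sketch (simulated data, continuous outcome for simplicity) of regression calibration, the most common correction approach noted above: fit a calibration model of the reference measurement on the error-prone instrument in the calibration sub-study, then use the predicted intakes in the diet-disease model. Standard errors would additionally need adjustment (for example by bootstrapping), which is omitted here:

    import numpy as np

    rng = np.random.default_rng(5)
    n, n_cal = 5000, 500
    x = rng.normal(5.0, 1.0, n)                    # true intake (never observed)
    q = x + rng.normal(0.0, 1.0, n)                # error-prone questionnaire measure, all subjects
    y = 0.3 * x + rng.normal(0.0, 1.0, n)          # outcome; true diet-outcome slope 0.3

    # calibration sub-study: an (assumed) unbiased reference measure on n_cal subjects
    ref = x[:n_cal] + rng.normal(0.0, 0.3, n_cal)
    b1, b0 = np.polyfit(q[:n_cal], ref, 1)         # calibration model for E[X | Q]
    x_pred = b0 + b1 * q                           # predicted intake for every subject

    slope = lambda u, v: np.cov(u, v, bias=True)[0, 1] / np.var(u)
    print(f"true 0.3, naive {slope(q, y):.3f}, regression calibration {slope(x_pred, y):.3f}")

The naive slope is attenuated by roughly the reliability of the questionnaire measure, while the calibrated slope recovers the true value provided the classical (non-differential) error model holds, which is exactly the assumption the review urges researchers to examine.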
NASA Astrophysics Data System (ADS)
Busonero, D.; Gai, M.
The goals of 21st century high angular precision experiments rely on the limiting performance associated with the selected instrumental configuration and observational strategy. Both global and narrow angle micro-arcsec space astrometry require that the instrument contributions to the overall error budget be less than the desired micro-arcsec level precision. Appropriate modelling of the astrometric response is required for optimal definition of the data reduction and calibration algorithms, in order to ensure high sensitivity to the astrophysical source parameters and in general high accuracy. We refer to the framework of the SIM-Lite and Gaia missions, the most challenging space missions of the next decade in the narrow angle and global astrometry fields, respectively. We focus our discussion on the Gaia data reduction issues and instrument calibration implications. We describe selected topics in the framework of the Astrometric Instrument Modelling for the Gaia mission, highlighting their role in the data reduction chain, and we give a brief overview of the Astrometric Instrument Model Data Analysis Software System, a Java-based pipeline under development by our team.
A new nondestructive instrument for bulk residual stress measurement using tungsten kα1 X-ray.
Ma, Ce; Dou, Zuo-Yong; Chen, Li; Li, Yun; Tan, Xiao; Dong, Ping; Zhang, Jin; Zheng, Lin; Zhang, Peng-Cheng
2016-11-01
We describe an experimental instrument used for nondestructively measuring residual stress using short-wavelength X-rays, tungsten Kα1. By introducing a photon-energy screening technique, monochromatic X-ray diffraction of tungsten Kα1 was realized using a CdTe detector. A high-precision Huber goniometer is utilized in order to reduce the error in residual stress measurement. This paper summarizes the main performance of this instrument (measurement depth and stress error), in comparison with neutron diffraction measurements of residual stress. Here, we demonstrate an application to the determination of residual stress in an aluminum alloy welded by friction stir welding.
Construct validity and expert benchmarking of the haptic virtual reality dental simulator.
Suebnukarn, Siriwan; Chaisombat, Monthalee; Kongpunwijit, Thanapohn; Rhienmora, Phattanapon
2014-10-01
The aim of this study was to demonstrate construct validation of the haptic virtual reality (VR) dental simulator and to define expert benchmarking criteria for skills assessment. Thirty-four self-selected participants (fourteen novices, fourteen intermediates, and six experts in endodontics) at one dental school performed ten repetitions of endodontic cavity preparation tasks at three difficulty modes: easy (mandibular premolar with one canal), medium (maxillary premolar with two canals), and hard (mandibular molar with three canals). The virtual instrument's path length was registered by the simulator. The outcomes were assessed by an expert. The error scores in the easy and medium modes accurately distinguished the experts from novices and intermediates at the onset of training, when there was a significant difference between groups (ANOVA, p<0.05). The trend was consistent until trial 5. From trial 6 on, the three groups achieved similar scores. No significant difference was found between groups at the end of training. Error score analysis was not able to distinguish any group at the hard level of training. Instrument path length showed a difference in performance between groups at the onset of training (ANOVA, p<0.05). This study established construct validity for the haptic VR dental simulator by demonstrating its ability to discriminate between experts and non-experts. The experts' error scores and path lengths were used to define benchmarking criteria for optimal performance.
Koláčková, Pavla; Růžičková, Gabriela; Gregor, Tomáš; Šišperová, Eliška
2015-08-30
Calibration models for the Fourier transform-near infrared (FT-NIR) instrument were developed for quick and non-destructive determination of oil and fatty acids in whole achenes of milk thistle. Samples with a range of oil and fatty acid levels were collected and their transmittance spectra were obtained by the FT-NIR instrument. Based on these spectra and data gained by means of the reference methods (Soxhlet extraction and gas chromatography, GC), calibration models were created by means of partial least squares (PLS) regression analysis. Precision and accuracy of the calibration models were verified via the cross-validation of validation samples whose spectra were not part of the calibration model and also according to the root mean square error of prediction (RMSEP), root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV), and the validation coefficient of determination (R²). The R² values for whole seeds were 0.96, 0.96, 0.83 and 0.67, and the RMSEP values were 0.76, 1.68, 1.24, and 0.54 for oil, linoleic (C18:2), oleic (C18:1) and palmitic (C16:0) acids, respectively. The calibration models are appropriate for the non-destructive determination of oil and fatty acid levels in whole seeds of milk thistle. © 2014 Society of Chemical Industry.
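A minimal sketch (synthetic spectra standing in for the FT-NIR transmittance data, and an assumed number of PLS components) of the calibration and validation workflow described above, using scikit-learn:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(6)
    n_samples, n_wavelengths = 120, 500
    spectra = rng.normal(size=(n_samples, n_wavelengths))        # stand-in spectra
    oil = (20.0 + 2.0 * spectra[:, 50] + 1.5 * spectra[:, 300]   # reference oil content (%)
           + rng.normal(scale=0.5, size=n_samples))              # from the reference method

    X_cal, X_val, y_cal, y_val = train_test_split(spectra, oil, test_size=0.3, random_state=0)
    pls = PLSRegression(n_components=8).fit(X_cal, y_cal)

    rmsec = np.sqrt(np.mean((pls.predict(X_cal).ravel() - y_cal) ** 2))
    rmsep = np.sqrt(np.mean((pls.predict(X_val).ravel() - y_val) ** 2))
    print(f"RMSEC {rmsec:.2f}, RMSEP {rmsep:.2f}, validation R2 {pls.score(X_val, y_val):.2f}")

The RMSEP computed on spectra withheld from the calibration set, and the validation R², are the figures of merit reported in the abstract.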
NASA Technical Reports Server (NTRS)
Miller, N. J.; Chuss, D. T.; Marriage, T. A.; Wollack, E. J.; Appel, J. W.; Bennett, C. L.; Eimer, J.; Essinger-Hileman, T.; Fixsen, D. J.; Harrington, K.;
2016-01-01
Variable-delay Polarization Modulators (VPMs) are currently being implemented in experiments designed to measure the polarization of the cosmic microwave background on large angular scales because of their capability for providing rapid, front-end polarization modulation and control over systematic errors. Despite the advantages provided by the VPM, it is important to identify and mitigate any time-varying effects that leak into the synchronously modulated component of the signal. In this paper, the effect of emission from a 300 K VPM on the system performance is considered and addressed. Though instrument design can greatly reduce the influence of modulated VPM emission, some residual modulated signal is expected. VPM emission is treated in the presence of rotational misalignments and temperature variation. Simulations of time-ordered data are used to evaluate the effect of these residual errors on the power spectrum. The analysis and modeling in this paper guides experimentalists on the critical aspects of observations using VPMs as front-end modulators. By implementing the characterizations and controls as described, front-end VPM modulation can be very powerful for mitigating 1/f noise in large angular scale polarimetric surveys. None of the systematic errors studied fundamentally limit the detection and characterization of B-modes on large scales for a tensor-to-scalar ratio of r = 0.01. Indeed, r less than 0.01 is achievable with commensurately improved characterizations and controls.
CDI Sensitivity and Crosstrack Error on Nonprecision Approaches
DOT National Transportation Integrated Search
1991-01-01
This study was conducted to determine the influence of course deviation indicator (CDI) sensitivity on pilot tracking error during nonprecision approaches. Twelve pilots flew an instrumented single-engine airplane on 144 approaches at six diffe...
The Space Telescope SI C&DH system. [Scientific Instrument Control and Data Handling Subsystem
NASA Technical Reports Server (NTRS)
Gadwal, Govind R.; Barasch, Ronald S.
1990-01-01
The Hubble Space Telescope Scientific Instrument Control and Data Handling Subsystem (SI C&DH) is designed to interface with five scientific instruments of the Space Telescope to provide ground and autonomous control and collect health and status information using the Standard Telemetry and Command Components (STACC) multiplex data bus. It also formats high-throughput science data into packets. The packetized data are interleaved, Reed-Solomon encoded for error correction, and pseudo-random encoded. An inner convolutional code combined with the outer Reed-Solomon code provides excellent error correction capability. The subsystem is designed with the capacity for orbital replacement in order to meet a mission life of fifteen years. The spacecraft computer and the SI C&DH computer coordinate the activities of the spacecraft and the scientific instruments to achieve the mission objectives.
Li, Qing-Bo; Xu, Yu-Po; Zhang, Chao-Hang; Zhang, Guang-Jun; Wu, Jin-Guang
2009-10-01
A portable nondestructive measuring instrument for plant chlorophyll was developed, which can perform real-time, quick, and nondestructive measurement of chlorophyll. The instrument is mainly composed of four parts: a leaf clamp, a light-source driving circuit, a photoelectric detection and signal conditioning circuit, and a micro-control system. A new light-source driving scheme was proposed, which not only achieves a constant current but also allows the current to be controlled by a digital signal. The driving current can be changed in operation to suit different light sources and measurement situations, which resolves the problem of matching the light source's output intensity to the photoelectric detector's input range. In addition, an integrated leaf clamp was designed, which simplified the optical structure, enhanced the stability of the apparatus, decreased the loss of incident light, and improved the signal-to-noise ratio and precision. The photoelectric detection and signal conditioning circuit converts the optical signal into an electrical signal and conditions it to meet the requirements of AD conversion; the photodetector is a Hamamatsu S1133-14, which has high detection precision. The micro-control system mainly handles control, data processing, and data storage. As the most important component, the TI MSP430F149 microprocessor offers high processing speed, low power consumption, and high stability, and its built-in 12-bit AD converter simplifies the data-acquisition circuit, making it well suited to portable instruments. In the calibration experiment, the standard value was measured by a SPAD-502 chlorophyll meter, multiple linear calibration models were built, and the instrument's performance was evaluated. The correlation coefficient between the predicted and standard chlorophyll values is 0.97, and the root mean square error of prediction is about 1.3 SPAD. In the repeatability evaluation, the root mean square error is 0.1 SPAD. The calibration results show that the instrument has high measurement precision and high stability.
Instrumentation and Performance Analysis Plans for the HIFiRE Flight 2 Experiment
NASA Technical Reports Server (NTRS)
Gruber, Mark; Barhorst, Todd; Jackson, Kevin; Eklund, Dean; Hass, Neal; Storch, Andrea M.; Liu, Jiwen
2009-01-01
Supersonic combustion performance of a bi-component gaseous hydrocarbon fuel mixture is one of the primary aspects under investigation in the HIFiRE Flight 2 experiment. In-flight instrumentation and post-test analyses will be two key elements used to determine the combustion performance. Pre-flight computational fluid dynamics (CFD) analyses provide valuable information that can be used to optimize the placement of a constrained set of wall pressure instrumentation in the experiment. The simulations also allow pre-flight assessments of performance sensitivities leading to estimates of overall uncertainty in the determination of combustion efficiency. Based on the pre-flight CFD results, 128 wall pressure sensors have been located throughout the isolator/combustor flowpath to minimize the error in determining the wall pressure force at Mach 8 flight conditions. Also, sensitivity analyses show that mass capture and combustor exit stream thrust are the two primary contributors to uncertainty in combustion efficiency.
The TOPSAR interferometric radar topographic mapping instrument
NASA Technical Reports Server (NTRS)
Zebker, Howard A.; Madsen, Soren N.; Martin, Jan; Alberti, Giovanni; Vetrella, Sergio; Cucci, Alessandro
1992-01-01
The NASA DC-8 AIRSAR instrument was augmented with a pair of C-band antennas displaced across track to form an interferometer sensitive to topographic variations of the Earth's surface. The antennas were developed by the Italian consortium Co.Ri.S.T.A., under contract to the Italian Space Agency (ASI), while the AIRSAR instrument and modifications to it supporting TOPSAR were sponsored by NASA. A new data processor was developed at JPL for producing the topographic maps, and a second processor was developed at Co.Ri.S.T.A. All the results presented below were processed at JPL. During the 1991 DC-8 flight campaign, data were acquired over several sites in the United States and Europe, and topographic maps were produced from several of these flight lines. Analysis of the results indicate that statistical errors are in the 2-3 m range for flat terrain and in the 4-5 m range for mountainous areas.
Data inversion algorithm development for the halogen occultation experiment
NASA Technical Reports Server (NTRS)
Gordley, Larry L.; Mlynczak, Martin G.
1986-01-01
The successful retrieval of atmospheric parameters from radiometric measurement requires not only the ability to do ideal radiometric calculations, but also a detailed understanding of instrument characteristics. Therefore a considerable amount of time was spent on instrument characterization in the form of test data analysis and mathematical formulation. Analyses of solar-to-reference interference (electrical cross-talk), detector nonuniformity, instrument balance error, electronic filter time constants, and noise character were conducted. A second area of effort was the development of techniques for the ideal radiometric calculations required for the Halogen Occultation Experiment (HALOE) data reduction. The computer code for these calculations must be both extremely complex and fast. A scheme for meeting these requirements was defined, and the algorithms needed for implementation are currently under development. A third area of work included consulting on the implementation of the Emissivity Growth Approximation (EGA) method of absorption calculation into a HALOE broadband radiometer channel retrieval algorithm.
Development of the Computer-Adaptive Version of the Late-Life Function and Disability Instrument
Tian, Feng; Kopits, Ilona M.; Moed, Richard; Pardasaney, Poonam K.; Jette, Alan M.
2012-01-01
Background. Having psychometrically strong disability measures that minimize response burden is important in assessing older adults. Methods. Using the original 48 items from the Late-Life Function and Disability Instrument and newly developed items, a 158-item Activity Limitation and a 62-item Participation Restriction item pool were developed. The item pools were administered to a convenience sample of 520 community-dwelling adults 60 years or older. Confirmatory factor analysis and item response theory were employed to identify content structure, calibrate items, and build the computer-adaptive tests (CATs). We evaluated real-data simulations of 10-item CAT subscales. We collected data from 102 older adults to validate the 10-item CATs against the Veteran's Short Form-36 and assessed test–retest reliability in a subsample of 57 subjects. Results. Confirmatory factor analysis revealed a bifactor structure, and multi-dimensional item response theory was used to calibrate an overall Activity Limitation Scale (141 items) and an overall Participation Restriction Scale (55 items). Fit statistics were acceptable (Activity Limitation: comparative fit index = 0.95, Tucker Lewis Index = 0.95, root mean square error of approximation = 0.03; Participation Restriction: comparative fit index = 0.95, Tucker Lewis Index = 0.95, root mean square error of approximation = 0.05). Correlations of the 10-item CATs with the full item banks were substantial (Activity Limitation: r = .90; Participation Restriction: r = .95). Test–retest reliability estimates were high (Activity Limitation: r = .85; Participation Restriction: r = .80). Strength and pattern of correlations with Veteran's Short Form-36 subscales were as hypothesized. Each CAT, on average, took 3.56 minutes to administer. Conclusions. The Late-Life Function and Disability Instrument CATs demonstrated strong reliability, validity, accuracy, and precision. The Late-Life Function and Disability Instrument CAT can achieve psychometrically sound disability assessment in older persons while reducing respondent burden. Further research is needed to assess their ability to measure change in older adults. PMID:22546960
Development of a 0.5m clear aperture Cassegrain type collimator telescope
NASA Astrophysics Data System (ADS)
Ekinci, Mustafa; Selimoǧlu, Özgür
2016-07-01
A collimator is an optical instrument used to evaluate the performance of high-precision instruments, especially space-borne high-resolution telescopes. The optical quality of the collimator telescope needs to be better than that of the instrument to be measured. This requirement makes the collimator telescope itself a very precise instrument, with high-quality mirrors and a structure stable enough to keep it operational under the specified conditions. In order to achieve the precision requirements and to ensure repeatability of the mounts for polishing and metrology, opto-mechanical principles are applied to the mirror mounts. The Finite Element Method is utilized to simulate gravity effects, integration errors, and temperature variations. Finite element analysis results for the deformed optical surfaces are imported into the optical domain using Zernike polynomials to evaluate the design against the specified WFE requirements. Both mirrors are aspheric and made from Zerodur for its stability and near-zero CTE; M1 is further light-weighted. Optical quality measurements of the mirrors are achieved using custom-made CGHs on an interferometric test setup. The spider of the Cassegrain collimator telescope has a flexural adjustment mechanism driven by precise micrometers to overcome tilt errors originating from the finite stiffness of the structure and from integration errors. The collimator telescope is assembled and alignment methods are proposed.
NASA Technical Reports Server (NTRS)
Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan
2013-01-01
A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach the on-orbit absolute accuracy that allows climate change observations to survive data gaps exist at NIST in the laboratory, but it still needs to be demonstrated that these advances can move successfully from the laboratory to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference, and methods for laboratory-based absolute calibration suitable for climate-quality data collections, are given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise behavior. Methods for demonstrating this error budget are also presented.
Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata
Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.
2012-01-01
Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).
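A minimal sketch (hypothetical pole-zero values, not a published GSN response) of evaluating a Laplace-domain pole-zero-gain model, the object whose parameters and errors the three-step procedure above estimates, with scipy:

    import numpy as np
    from scipy.signal import freqs_zpk

    # hypothetical broadband-velocity-like response: two zeros at the origin and a
    # conjugate pole pair corresponding to a 120 s free period with damping ~0.707
    f0, damping, gain = 1.0 / 120.0, 0.707, 2.4e3
    w0 = 2.0 * np.pi * f0
    poles = [-damping * w0 + 1j * w0 * np.sqrt(1.0 - damping**2),
             -damping * w0 - 1j * w0 * np.sqrt(1.0 - damping**2)]
    zeros = [0.0, 0.0]

    freqs = np.logspace(-3, 1, 200)                              # Hz
    _, model = freqs_zpk(zeros, poles, gain, worN=2.0 * np.pi * freqs)

    # stand-in "observed" response from a random calibration, with a 1% perturbation
    observed = model * (1.0 + np.random.default_rng(7).normal(0.0, 0.01, model.size))
    misfit = np.sum(np.abs(observed - model) ** 2)               # quantity the fit minimizes
    print(f"sum-squared response misfit: {misfit:.3e}")

In the procedure above, such a misfit would be minimized first on a coarse grid and then by iterative nonlinear least squares, with frequency-by-frequency refits supplying the confidence intervals on the poles and zeros.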
Fotiadis, Dimitris A; Astaras, Alexandros; Bamidis, Panagiotis D; Papathanasiou, Kostas; Kalfas, Anestis
2015-09-01
This paper presents a novel method for tracking the position of a medical instrument's tip. The system is based on phase locking a high frequency signal transmitted from the medical instrument's tip to a reference signal. Displacement measurement is performed with the loop open, in order to obtain a low-frequency voltage representing the medical instrument's movement; positioning is then established by means of conventional measuring techniques. The voltage-controlled oscillator stage of the phase-locked loop (PLL), combined with an appropriate antenna, comprises the associated transmitter located inside the medical instrument tip. All the other low-frequency PLL components, the low noise amplifier and the mixer, are located outside the human body, forming the receiver part of the system. The operating details of the proposed system were coded in Verilog-AMS. Simulation results indicate robust medical instrument tracking in 1-D. Experimental evaluation of the proposed position tracking system is also presented. The experiments described in this paper are based on a transmitter moving at constant velocity or with uniform acceleration opposite a single stationary receiver, and also moving at constant velocity opposite two stationary receivers. The latter setup is implemented in order to demonstrate the prototype's accuracy for planar (2-D) motion measurements. Error analysis and time-domain analysis are presented for system performance characterization. Furthermore, preliminary experimental assessment using a saline solution container to more closely approximate the human body as a radio frequency wave transmission medium has proved the system's capability of operating underneath the skin.
Burgess, Stephen; Zuber, Verena; Valdes-Marquez, Elsa; Sun, Benjamin B; Hopewell, Jemma C
2017-12-01
Mendelian randomization uses genetic variants to make causal inferences about the effect of a risk factor on an outcome. With fine-mapped genetic data, there may be hundreds of genetic variants in a single gene region any of which could be used to assess this causal relationship. However, using too many genetic variants in the analysis can lead to spurious estimates and inflated Type 1 error rates. But if only a few genetic variants are used, then the majority of the data is ignored and estimates are highly sensitive to the particular choice of variants. We propose an approach based on summarized data only (genetic association and correlation estimates) that uses principal components analysis to form instruments. This approach has desirable theoretical properties: it takes the totality of data into account and does not suffer from numerical instabilities. It also has good properties in simulation studies: it is not particularly sensitive to varying the genetic variants included in the analysis or the genetic correlation matrix, and it does not have greatly inflated Type 1 error rates. Overall, the method gives estimates that are less precise than those from variable selection approaches (such as using a conditional analysis or pruning approach to select variants), but are more robust to seemingly arbitrary choices in the variable selection step. Methods are illustrated by an example using genetic associations with testosterone for 320 genetic variants to assess the effect of sex hormone related pathways on coronary artery disease risk, in which variable selection approaches give inconsistent inferences. © 2017 The Authors Genetic Epidemiology Published by Wiley Periodicals, Inc.
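A minimal sketch (simulated summarized data, and a simplified principal-components weighting that may differ from the authors' exact construction) of forming instruments from the variant correlation matrix and running an inverse-variance-weighted analysis on the transformed associations:

    import numpy as np

    rng = np.random.default_rng(8)
    m, k = 100, 5                                       # variants in the region, components kept
    rho = 0.8 ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))   # assumed LD matrix
    bx = rng.normal(0.0, 0.05, m)                       # variant-exposure associations
    se_y = np.full(m, 0.02)                             # outcome association standard errors
    by = 0.4 * bx + rng.normal(0.0, 0.005, m)           # variant-outcome associations; true effect 0.4

    evals, evecs = np.linalg.eigh(rho)
    W = evecs[:, ::-1][:, :k]                           # loadings of the top-k principal components

    bx_t, by_t = W.T @ bx, W.T @ by                     # transformed summarized associations
    omega_t = W.T @ (np.outer(se_y, se_y) * rho) @ W    # transformed outcome covariance
    omega_inv = np.linalg.inv(omega_t)
    est = (bx_t @ omega_inv @ by_t) / (bx_t @ omega_inv @ bx_t)
    se = np.sqrt(1.0 / (bx_t @ omega_inv @ bx_t))
    print(f"causal estimate {est:.3f} +/- {se:.3f}")

Because the whole correlation matrix feeds the components, the estimate uses the totality of the data yet avoids inverting a near-singular m-by-m matrix, which is the numerical-instability problem the abstract highlights.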
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, T. S.; DePoy, D. L.; Marshall, J. L.
Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
On a more rigorous gravity field processing for future LL-SST type gravity satellite missions
NASA Astrophysics Data System (ADS)
Daras, I.; Pail, R.; Murböck, M.
2013-12-01
In order to meet the growing demands of the user community concerning the accuracy of temporal gravity field models, future gravity missions of low-low satellite-to-satellite tracking (LL-SST) type are planned to carry more precise sensors than their predecessors. A breakthrough is planned with the improved LL-SST measurement link, where the traditional K-band microwave instrument of 1 μm accuracy will be complemented by an inter-satellite ranging instrument of several nm accuracy. This study focuses on investigations concerning the potential performance of the new sensors and their impact on gravity field solutions. The processing methods for gravity field recovery have to meet the new sensor standards and be able to take full advantage of the new accuracies that they provide. We use full-scale simulations in a realistic environment to investigate whether the standard processing techniques suffice to fully exploit the new sensor standards. We achieve that by performing full numerical closed-loop simulations based on the Integral Equation approach. In our simulation scheme, we simulate dynamic orbits in a conventional tracking analysis to compute pseudo inter-satellite ranges or range-rates that serve as observables. Each part of the processing is validated separately, with special emphasis on numerical errors and their impact on gravity field solutions. We demonstrate that processing with standard precision may be a limiting factor for taking full advantage of the new-generation sensors that future satellite missions will carry. Therefore we have created versions of our simulator with enhanced processing precision, with the primary aim of minimizing round-off system errors. Results using the enhanced precision show a large reduction of the system errors that were present in the standard-precision processing even for the error-free scenario, and reveal the improvements the new sensors will bring to the gravity field solutions. As a next step, we analyze the contribution of individual error sources to the system's error budget. More specifically, we analyze sensor noise from the laser interferometer and the accelerometers, errors in the kinematic orbits and the background fields, as well as temporal and spatial aliasing errors. We take special care with the assessment of error sources with stochastic behavior, such as the laser interferometer and the accelerometers, and with their consistent stochastic modeling within the adjustment process.
Error in Dasibi flight measurements of atmospheric ozone due to instrument wall-loss
NASA Technical Reports Server (NTRS)
Ainsworth, J. E.; Hagemeyer, J. R.; Reed, E. I.
1981-01-01
Theory suggests that in laminar flow the percent loss of a trace constituent to the walls of a measuring instrument varies as P^(-2/3), where P is the total gas pressure. Preliminary laboratory ozone wall-loss measurements confirm this P^(-2/3) dependence. Accurate assessment of wall-loss is thus of particular importance for those balloon-borne instruments utilizing laminar flow at ambient pressure, since the ambient pressure decreases by a factor of 350 during ascent to 40 km. Measurements and extrapolations made for a Dasibi ozone monitor modified for balloon flight indicate that the wall-loss error at 40 km was between 6 and 30 percent and that the wall-loss error in the derived total ozone column-content for the region from the surface to 40 km altitude was between 2 and 10 percent. At 1000 mb, turbulence caused an order of magnitude increase in the Dasibi wall-loss.
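A minimal numerical reading of the quoted scaling law, assuming an illustrative 0.1% wall loss at 1000 mb (the anchor value is not from the paper):

```python
# Illustrative only: the 0.1% anchor at ground level is an assumption, not a value from the paper.
loss_at_1000mb = 0.001                                  # fractional wall loss at 1000 mb (assumed)

for p_mb in (1000.0, 100.0, 10.0, 1000.0 / 350.0):      # down to ~2.9 mb near 40 km
    loss = loss_at_1000mb * (p_mb / 1000.0) ** (-2.0 / 3.0)
    print(f"P = {p_mb:7.2f} mb -> wall loss ~ {100.0 * loss:.2f}%")
```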
A stereotaxic method of recording from single neurons in the intact in vivo eye of the cat.
Molenaar, J; Van de Grind, W A
1980-04-01
A method is described for recording stereotaxically from single retinal neurons in the optically intact in vivo eye of the cat. The method is implemented with the help of a new type of stereotaxic instrument and a specially developed stereotaxic atlas of the cat's eye and retina. The instrument is extremely stable and facilitates intracellular recording from retinal neurons. The microelectrode can be rotated about two mutually perpendicular axes, which intersect in the freely positionable pivot point of the electrode manipulation system. When the pivot point is made to coincide with a small electrode-entrance hole in the sclera of the eye, a large retinal region can be reached through this fixed hole in the immobilized eye. The stereotaxic method makes it possible to choose a target point on the presented eye atlas and predict the settings of the instrument necessary to reach this target. This method also includes the prediction of the corresponding light stimulus position on a tangent screen and the calculation of the projection of the recording electrode on this screen. The sources of error in the method were studied experimentally and a numerical perturbation analysis was carried out to study the influence of each of the sources of error on the final result. The overall accuracy of the method is of the order of 5 degrees of visual angle, which will be sufficient for most purposes.
Mission Simulation of Space Lidar Measurements for Seasonal and Regional CO2 Variations
NASA Technical Reports Server (NTRS)
Kawa, Stephan; Collatz, G. J.; Mao, J.; Abshire, J. B.; Sun, X.; Weaver, C. J.
2010-01-01
Results of mission simulation studies are presented for a laser-based atmospheric CO2 sounder. The simulations are based on real-time carbon cycle process modeling and data analysis. The mission concept corresponds to the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) recommended by the US National Academy of Sciences Decadal Survey of Earth Science and Applications from Space. One prerequisite for meaningful quantitative sensor evaluation is realistic CO2 process modeling across a wide range of scales, i.e., does the model have representative spatial and temporal gradients? Examples of model comparison with data will be shown. Another requirement is a relatively complete description of the atmospheric and surface state, which we have obtained from meteorological data assimilation and satellite measurements from MODIS and CALIPSO. We use radiative transfer model calculations, an instrument model with representative errors, and a simple retrieval approach to complete the cycle from "nature" run to "pseudo-data" CO2. Several mission and instrument configuration options are examined, and the sensitivity to key design variables is shown. We use the simulation framework to demonstrate that within reasonable technological assumptions for the system performance, relatively high measurement precision can be obtained, but errors depend strongly on environmental conditions as well as instrument specifications. Examples are also shown of how the resulting pseudo-measurements might be used to address key carbon cycle science questions.
Ramos, Rogelio; Zlatev, Roumen; Valdez, Benjamin; Stoytcheva, Margarita; Carrillo, Mónica; García, Juan-Francisco
2013-01-01
A virtual instrumentation (VI) system called VI localized corrosion image analyzer (LCIA), based on LabVIEW 2010, was developed, allowing rapid, automatic, and subjective-error-free determination of the number of pits on large corroded specimens. The VI LCIA synchronously controls the digital microscope image acquisition and its analysis, finally producing a map file containing the coordinates of the detected zones probably containing pits on the investigated specimen. The pit area, traverse length, and density are also determined by the VI using binary large object (blob) analysis. The resulting map file can be used further by a scanning vibrating electrode technique (SVET) system for a rapid (one-pass) "true/false" SVET check of the probable zones only, passing through the pit centers and thus avoiding a scan of the entire specimen. A complete SVET scan over the already proven "true" zones could determine the corrosion rate in any of the zones.
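The blob-analysis step can be sketched outside LabVIEW, for example with scipy.ndimage on a thresholded grayscale image (a minimal sketch; the threshold, minimum area, and synthetic image are assumptions):

```python
import numpy as np
from scipy import ndimage

def probable_pit_map(gray_image, threshold, min_area_px=5):
    """Label dark blobs as probable pits; return their centroid coordinates and areas."""
    mask = gray_image < threshold            # pits assumed darker than the background
    labels, n = ndimage.label(mask)
    centroids, areas = [], []
    for i in range(1, n + 1):
        blob = labels == i
        area = int(blob.sum())
        if area >= min_area_px:
            cy, cx = ndimage.center_of_mass(blob)
            centroids.append((float(cx), float(cy)))
            areas.append(area)
    return centroids, areas

# Synthetic microscope frame with two dark "pits"
img = np.full((100, 100), 200.0)
img[20:25, 30:36] = 40.0
img[60:68, 70:74] = 30.0

coords, areas = probable_pit_map(img, threshold=100.0)
print(coords, areas)      # such a coordinate map could drive a targeted SVET pass
```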
The effect of divided attention on novices and experts in laparoscopic task performance.
Ghazanfar, Mudassar Ali; Cook, Malcolm; Tang, Benjie; Tait, Iain; Alijani, Afshin
2015-03-01
Attention is important for the skilful execution of surgery. The surgeon's attention during surgery is divided between the surgery and outside distractions. The effect of this divided attention has not been well studied previously. We aimed to compare the effect of dividing the attention of novices and experts on laparoscopic task performance. Following ethical approval, 25 novices and 9 expert surgeons performed a standardised peg transfer task in a laboratory setup under three randomly assigned conditions: silent as the control condition and two standardised auditory distracting tasks requiring response (easy and difficult) as study conditions. Human reliability assessment was used for surgical task analysis. Primary outcome measures were correct auditory responses, task time, number of surgical errors and instrument movements. Secondary outcome measures included error rate, error probability and hand-specific differences. Non-parametric statistics were used for data analysis. 21,109 movements and 9,036 total errors were analysed. Novices had increased mean task completion time in seconds (171 ± 44 SD vs. 149 ± 34, p < 0.05), number of total movements (227 ± 27 vs. 213 ± 26, p < 0.05) and number of errors (127 ± 51 vs. 96 ± 28, p < 0.05) during difficult study conditions compared to control. The correct responses to auditory stimuli were less frequent in experts (68%) compared to novices (80%). There was a positive correlation between error rate and error probability in novices (r² = 0.533, p < 0.05) but not in experts (r² = 0.346, p > 0.05). Divided attention conditions in the theatre environment require careful consideration during surgical training, as junior surgeons are less able to focus their attention under these conditions.
NASA Technical Reports Server (NTRS)
Wargan, K.; Stajner, I.; Pawson, S.
2003-01-01
In a data assimilation system the forecast error covariance matrix governs the way in which the data information is spread throughout the model grid. Implementation of a correct method of assigning covariances is expected to have an impact on the analysis results. The simplest models assume that correlations are constant in time and isotropic or nearly isotropic. In such models the analysis depends on the dynamics only through assumed error standard deviations. In applications to atmospheric tracer data assimilation this may lead to inaccuracies, especially in regions with strong wind shear or a high gradient of potential vorticity, as well as in areas where no data are available. In order to overcome this problem we have developed a flow-dependent covariance model that is based on the short-term evolution of error correlations. The presentation compares the performance of a static and a flow-dependent model applied to a global three-dimensional ozone data assimilation system developed at NASA's Data Assimilation Office. We will present some results of validation against WMO balloon-borne sondes and the Polar Ozone and Aerosol Measurement (POAM) III instrument. Experiments show that allowing forecast error correlations to evolve with the flow results in a positive impact on assimilated ozone within the regions where data were not assimilated, particularly at high latitudes in both hemispheres and in the troposphere. We will also discuss the statistical characteristics of both models; in particular we will argue that including the evolution of error correlations leads to stronger internal consistency of the data assimilation system.
Contact Versus Non-Contact Measurement of a Helicopter Main Rotor Composite Blade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luczak, Marcin; Dziedziech, Kajetan; Peeters, Bart
2010-05-28
The dynamic characterization of lightweight structures is particularly complex as the impact of the weight of sensors and instrumentation (cables, mounting of exciters...) can distort the results. Varying mass loading or constraint effects between partial measurements may determine several errors on the final conclusions. Frequency shifts can lead to erroneous interpretations of the dynamics parameters. Typically these errors remain limited to a few percent. Inconsistent data sets however can result in major processing errors, with all related consequences towards applications based on the consistency assumption, such as global modal parameter identification, model-based damage detection and FRF-based matrix inversion in substructuring, load identification and transfer path analysis [1]. This paper addresses the subject of accuracy in the context of the measurement of the dynamic properties of a particular lightweight structure. It presents a comprehensive comparative study between the use of accelerometer, laser vibrometer (scanning LDV) and PU-probe (acoustic particle velocity and pressure) measurements to measure the structural responses, with as final aim the comparison of modal model quality assessment. The object of the investigation is a composite material blade from the main rotor of a helicopter. The presented results are part of an extensive test campaign performed with application of SIMO, MIMO, random and harmonic excitation, and the use of the mentioned contact and non-contact measurement techniques. The advantages and disadvantages of the applied instrumentation are discussed. Presented are real-life measurement problems related to the different set up conditions. Finally an analysis of estimated models is made in view of assessing the applicability of the various measurement approaches for successful fault detection based on modal parameters observation as well as in uncertain non-deterministic numerical model updating.
Contact Versus Non-Contact Measurement of a Helicopter Main Rotor Composite Blade
NASA Astrophysics Data System (ADS)
Luczak, Marcin; Dziedziech, Kajetan; Vivolo, Marianna; Desmet, Wim; Peeters, Bart; Van der Auweraer, Herman
2010-05-01
The dynamic characterization of lightweight structures is particularly complex as the impact of the weight of sensors and instrumentation (cables, mounting of exciters…) can distort the results. Varying mass loading or constraint effects between partial measurements may determine several errors on the final conclusions. Frequency shifts can lead to erroneous interpretations of the dynamics parameters. Typically these errors remain limited to a few percent. Inconsistent data sets however can result in major processing errors, with all related consequences towards applications based on the consistency assumption, such as global modal parameter identification, model-based damage detection and FRF-based matrix inversion in substructuring, load identification and transfer path analysis [1]. This paper addresses the subject of accuracy in the context of the measurement of the dynamic properties of a particular lightweight structure. It presents a comprehensive comparative study between the use of accelerometer, laser vibrometer (scanning LDV) and PU-probe (acoustic particle velocity and pressure) measurements to measure the structural responses, with as final aim the comparison of modal model quality assessment. The object of the investigation is a composite material blade from the main rotor of a helicopter. The presented results are part of an extensive test campaign performed with application of SIMO, MIMO, random and harmonic excitation, and the use of the mentioned contact and non-contact measurement techniques. The advantages and disadvantages of the applied instrumentation are discussed. Presented are real-life measurement problems related to the different set up conditions. Finally an analysis of estimated models is made in view of assessing the applicability of the various measurement approaches for successful fault detection based on modal parameters observation as well as in uncertain non-deterministic numerical model updating.
Parameter Estimation for GRACE-FO Geometric Ranging Errors
NASA Astrophysics Data System (ADS)
Wegener, H.; Mueller, V.; Darbeheshti, N.; Naeimi, M.; Heinzel, G.
2017-12-01
Onboard GRACE-FO, the novel Laser Ranging Instrument (LRI) serves as a technology demonstrator, but it is a fully functional instrument to provide an additional high-precision measurement of the primary mission observable: the biased range between the two spacecraft. Its two largest expected error sources are laser frequency noise and tilt-to-length (TTL) coupling. While not much can be done about laser frequency noise, the mechanics of the TTL error are widely understood. They depend, however, on unknown parameters. In order to improve the quality of the ranging data, it is hence essential to accurately estimate these parameters and remove the resulting TTL error from the data. Means to do so will be discussed. In particular, the possibility of using calibration maneuvers, the utility of the attitude information provided by the LRI via Differential Wavefront Sensing (DWS), and the benefit from combining ranging data from the LRI with ranging data from the established microwave ranging system will be mentioned.
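A minimal sketch of the parameter-estimation idea, assuming a purely linear coupling of the range residuals to two pointing angles (the real TTL model and the GRACE-FO data handling are more involved; all values below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
pitch = 1e-4 * rng.standard_normal(n)             # rad, attitude jitter (synthetic)
yaw = 1e-4 * rng.standard_normal(n)

true_coupling = np.array([150e-6, -80e-6])        # m/rad, synthetic TTL coefficients
ranging = true_coupling[0] * pitch + true_coupling[1] * yaw + 2e-9 * rng.standard_normal(n)

# Design matrix for a linear coupling of range residuals to the two pointing angles
A = np.column_stack([pitch, yaw])
c_hat, *_ = np.linalg.lstsq(A, ranging, rcond=None)

corrected = ranging - A @ c_hat
print("estimated couplings [m/rad]:", c_hat)
print("rms before / after [m]:", ranging.std(), corrected.std())
```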
MISR: protection from ourselves
NASA Technical Reports Server (NTRS)
Nolan, T.; Varanasi, P.
2004-01-01
Outlines lessons learned by the Instrument Operations Team of the NASA/JPL Terra Multi-angle Imaging SpectroRadiometer (MISR) mission. It narrates the story of "MISR: Protection from Ourselves!" and describes, in detail, how the MISR instrument survived operator errors.
Development and content validation of performance assessments for endoscopic third ventriculostomy.
Breimer, Gerben E; Haji, Faizal A; Hoving, Eelco W; Drake, James M
2015-08-01
This study aims to develop and establish the content validity of multiple expert rating instruments to assess performance in endoscopic third ventriculostomy (ETV), collectively called the Neuro-Endoscopic Ventriculostomy Assessment Tool (NEVAT). The important aspects of ETV were identified through a review of current literature, ETV videos, and discussion with neurosurgeons, fellows, and residents. Three assessment measures were subsequently developed: a procedure-specific checklist (CL), a CL of surgical errors, and a global rating scale (GRS). Neurosurgeons from various countries, all identified as experts in ETV, were then invited to participate in a modified Delphi survey to establish the content validity of these instruments. In each Delphi round, experts rated their agreement including each procedural step, error, and GRS item in the respective instruments on a 5-point Likert scale. Seventeen experts agreed to participate in the study and completed all Delphi rounds. After item generation, a total of 27 procedural CL items, 26 error CL items, and 9 GRS items were posed to Delphi panelists for rating. An additional 17 procedural CL items, 12 error CL items, and 1 GRS item were added by panelists. After three rounds, strong consensus (>80% agreement) was achieved on 35 procedural CL items, 29 error CL items, and 10 GRS items. Moderate consensus (50-80% agreement) was achieved on an additional 7 procedural CL items and 1 error CL item. The final procedural and error checklist contained 42 and 30 items, respectively (divided into setup, exposure, navigation, ventriculostomy, and closure). The final GRS contained 10 items. We have established the content validity of three ETV assessment measures by iterative consensus of an international expert panel. Each measure provides unique assessment information and thus can be used individually or in combination, depending on the characteristics of the learner and the purpose of the assessment. These instruments must now be evaluated in both the simulated and operative settings, to determine their construct validity and reliability. Ultimately, the measures contained in the NEVAT may prove suitable for formative assessment during ETV training and potentially as summative assessment measures during certification.
A statistical approach to instrument calibration
Robert R. Ziemer; David Strauss
1978-01-01
Summary - It has been found that two instruments will yield different numerical values when used to measure identical points. A statistical approach is presented that can be used to approximate the error associated with the calibration of instruments. Included are standard statistical tests that can be used to determine if a number of successive calibrations of the...
NASA Astrophysics Data System (ADS)
Bacha, Tulu
The Goddard Lidar Observatory for Wind (GLOW), a mobile direct-detection Doppler lidar based on molecular backscattering for measurement of wind in the troposphere and lower stratosphere, was operated and its errors characterized. It was operated at the Howard University Beltsville Center for Climate Observation System (BCCOS) side by side with other operating instruments: the NASA/Langley Research Center Validation Lidar (VALIDAR), a Leosphere WLS70, and other standard wind-sensing instruments. The performance of GLOW is presented for various optical thicknesses of cloud conditions. It was also compared to VALIDAR under various conditions, including clear and cloudy sky regions. The performance degradation due to the presence of cirrus clouds is quantified by comparing the wind speed error to cloud thickness. The cloud thickness is quantified in terms of aerosol backscatter ratio (ASR) and cloud optical depth (COD). ASR and COD are determined from the Howard University Raman Lidar (HURL) operating at the same station as GLOW. The wind speed error of GLOW was correlated with COD and ASR determined from HURL data; the correlation revealed a weak linear relationship. Finally, the wind speed measurements of GLOW were corrected using the quantitative relation obtained from this correlation. Using ASR reduced the GLOW wind error from 19% to 8% in a thin cirrus cloud and from 58% to 28% in a relatively thick cloud. After correcting for the cloud-induced error, the remaining error is due to shot noise and atmospheric variability. Shot-noise error, the statistical random error of the backscattered photons detected by the photomultiplier tube (PMT), can only be minimized by averaging a large number of recorded data. The atmospheric backscatter measured by GLOW along its line-of-sight direction is also used to analyze the error due to atmospheric variability within the volume of measurement. GLOW scans in five different directions (vertical and at elevation angles of 45° in the north, south, east, and west) to generate wind profiles. The non-uniformity of the atmosphere in all scanning directions is a factor contributing to the measurement error of GLOW. The atmospheric variability in the scanning region leads to differences in the intensity of backscattered signals between scanning directions. Taking the ratio of the north (east) to south (west) signals and comparing the statistical differences leads to a weak linear relation between atmospheric variability and line-of-sight wind speed differences. This relation was used to make a correction which reduced the error by about 50%.
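The correction step can be sketched as a simple linear regression of wind-speed error against ASR, applied afterwards to the measured winds (a minimal sketch with synthetic numbers, not the GLOW processing chain):

```python
import numpy as np

rng = np.random.default_rng(2)
asr = np.linspace(1.0, 30.0, 80)                                    # aerosol backscatter ratio
wind_error = 0.3 * asr + 2.0 + 3.0 * rng.standard_normal(asr.size)  # m/s, weakly linear (synthetic)

# Fit error ~ a * ASR + b, then subtract the modeled error from cloudy-range measurements
a, b = np.polyfit(asr, wind_error, 1)

def corrected_wind(measured_speed, asr_value):
    return measured_speed - (a * asr_value + b)

print(f"fitted relation: error ~ {a:.2f} * ASR + {b:.2f} m/s")
print("corrected example:", corrected_wind(25.0, asr_value=15.0))
```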
Sample preparation techniques for the determination of trace residues and contaminants in foods.
Ridgway, Kathy; Lalljie, Sam P D; Smith, Roger M
2007-06-15
The determination of trace residues and contaminants in complex matrices, such as food, often requires extensive sample extraction and preparation prior to instrumental analysis. Sample preparation is often the bottleneck in analysis and there is a need to minimise the number of steps to reduce both time and sources of error. There is also a move towards more environmentally friendly techniques, which use less solvent and smaller sample sizes. Smaller sample size becomes important when dealing with real life problems, such as consumer complaints and alleged chemical contamination. Optimal sample preparation can reduce analysis time, sources of error, enhance sensitivity and enable unequivocal identification, confirmation and quantification. This review considers all aspects of sample preparation, covering general extraction techniques, such as Soxhlet and pressurised liquid extraction, microextraction techniques such as liquid phase microextraction (LPME) and more selective techniques, such as solid phase extraction (SPE), solid phase microextraction (SPME) and stir bar sorptive extraction (SBSE). The applicability of each technique in food analysis, particularly for the determination of trace organic contaminants in foods is discussed.
Removal of batch effects using distribution-matching residual networks.
Shaham, Uri; Stanton, Kelly P; Zhao, Jun; Li, Huamin; Raddassi, Khadir; Montgomery, Ruth; Kluger, Yuval
2017-08-15
Sources of variability in experimentally derived data include measurement error in addition to the physical phenomena of interest. This measurement error is a combination of systematic components originating from the measuring instrument and random measurement errors. Several novel biological technologies, such as mass cytometry and single-cell RNA-seq (scRNA-seq), are plagued with systematic errors that may severely affect statistical analysis if the data are not properly calibrated. We propose a novel deep learning approach for removing systematic batch effects. Our method is based on a residual neural network, trained to minimize the Maximum Mean Discrepancy between the multivariate distributions of two replicates, measured in different batches. We apply our method to mass cytometry and scRNA-seq datasets, and demonstrate that it effectively attenuates batch effects. Our code and data are publicly available at https://github.com/ushaham/BatchEffectRemoval.git. Contact: yuval.kluger@yale.edu. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
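A compact sketch of the idea, with a Gaussian-kernel MMD loss and a small residual mapping trained to pull one batch onto the other (the authors' released code is at the repository above; this PyTorch toy is only illustrative and all data are synthetic):

```python
import torch
import torch.nn as nn

def gaussian_mmd(x, y, sigma=1.0):
    """Squared Maximum Mean Discrepancy with a Gaussian kernel."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

class ResidualCalibrator(nn.Module):
    """Learns a residual shift that maps one batch onto the reference batch."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, x):
        return x + self.net(x)          # residual connection

dim = 10
batch1 = torch.randn(256, dim)          # reference replicate
batch2 = torch.randn(256, dim) + 0.5    # same population, shifted by a synthetic batch effect

model = ResidualCalibrator(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = gaussian_mmd(model(batch2), batch1)
    loss.backward()
    opt.step()
print("final MMD:", float(loss))
```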
NASA Technical Reports Server (NTRS)
Lienert, Barry R.
1991-01-01
Monte Carlo perturbations of synthetic tensors are used to evaluate the Hext/Jelinek elliptical confidence regions for anisotropy of magnetic susceptibility (AMS) eigenvectors. When the perturbations are 33 percent of the minimum anisotropy, both the shapes and probability densities of the resulting eigenvector distributions agree with the elliptical distributions predicted by the Hext/Jelinek equations. When the perturbation size is increased to 100 percent of the minimum eigenvalue difference, the major axis of the 95 percent confidence ellipse underestimates the observed eigenvector dispersion by about 10 deg. The observed distributions of the principal susceptibilities (eigenvalues) are close to being normal, with standard errors that agree well with the calculated Hext/Jelinek errors. The Hext/Jelinek ellipses are also able to describe the AMS dispersions due to instrumental noise and provide reasonable limits for the AMS dispersions observed in two Hawaiian basaltic dikes. It is concluded that the Hext/Jelinek method provides a satisfactory description of the errors in AMS data and should be a standard part of any AMS data analysis.
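The Monte Carlo procedure can be sketched as follows: perturb a synthetic susceptibility tensor, re-diagonalize, and examine the angular dispersion of the maximum-susceptibility eigenvector (the perturbation scaling and tensor values here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
K = np.diag([1.02, 1.00, 0.98])               # synthetic AMS tensor in its principal frame
noise = 0.33 * 0.02                            # 33% of the minimum anisotropy (eigenvalue spread)

axes = []
for _ in range(2000):
    dK = noise * rng.standard_normal((3, 3))
    dK = 0.5 * (dK + dK.T)                     # keep the perturbed tensor symmetric
    w, v = np.linalg.eigh(K + dK)
    axes.append(v[:, np.argmax(w)])            # eigenvector of the maximum susceptibility

axes = np.abs(np.array(axes))                  # fold antipodal directions together
angles = np.degrees(np.arccos(np.clip(axes[:, 0], 0.0, 1.0)))   # angle to the true k_max axis (x)
print(f"mean angular deviation of k_max: {angles.mean():.1f} deg")
```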
NASA Technical Reports Server (NTRS)
Martos, Borja; Kiszely, Paul; Foster, John V.
2011-01-01
As part of the NASA Aviation Safety Program (AvSP), a novel pitot-static calibration method was developed to allow rapid in-flight calibration for subscale aircraft while flying within confined test areas. This approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure error models over a range of airspeed with defined confidence bounds. This method has been demonstrated in subscale flight tests and has shown small 2-sigma error bounds with significant reduction in test time compared to other methods. The current research was motivated by the desire to further evaluate and develop this method for full-scale aircraft. A goal of this research was to develop an accurate calibration method that enables reductions in test equipment and flight time, thus reducing costs. The approach involved analysis of data acquisition requirements, development of efficient flight patterns, and analysis of pressure error models based on system identification methods. Flight tests were conducted at The University of Tennessee Space Institute (UTSI) utilizing an instrumented Piper Navajo research aircraft. In addition, the UTSI engineering flight simulator was used to investigate test maneuver requirements and handling qualities issues associated with this technique. This paper provides a summary of piloted simulation and flight test results that illustrates the performance and capabilities of the NASA calibration method. Discussion of maneuver requirements and data analysis methods is included as well as recommendations for piloting technique.
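The core regression idea can be sketched as fitting a low-order airspeed-error model to the difference between GPS-derived and indicated airspeed, then propagating the parameter covariance into confidence bounds (a minimal sketch with synthetic data, not the NASA algorithm):

```python
import numpy as np

rng = np.random.default_rng(4)
ias = np.linspace(70.0, 180.0, 60)                      # indicated airspeed, kt (synthetic)
true_error = 2.0 + 0.02 * (ias - 120.0)                 # synthetic position-error model, kt
gps_tas = ias + true_error + 0.8 * rng.standard_normal(ias.size)   # GPS-derived airspeed + noise

# Fit error(ias) = c0 + c1 * (ias - 120) and propagate the parameter covariance to 2-sigma bounds
X = np.column_stack([np.ones_like(ias), ias - 120.0])
coef, res, *_ = np.linalg.lstsq(X, gps_tas - ias, rcond=None)
sigma2 = res[0] / (ias.size - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)

band = 2.0 * np.sqrt(np.sum((X @ cov) * X, axis=1))     # 2-sigma confidence band of the model
print("fitted error model coefficients [kt]:", coef)
print("largest 2-sigma model uncertainty [kt]:", band.max())
```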
Alves, Vanessa de Oliveira; Bueno, Carlos Eduardo da Silveira; Cunha, Rodrigo Sanches; Pinheiro, Sérgio Luiz; Fontana, Carlos Eduardo; de Martin, Alexandre Sigrist
2012-01-01
Nickel-titanium rotary instruments reduce procedural errors and the time required to finish root canal preparation. The goal of this study was to evaluate the occurrences of apical transportation and canal aberrations produced with different instruments used to create a glide path in the preparation of curved root canals, namely manual K-files (Dentsply Maillefer, Ballaigues, Switzerland) and PathFile (Dentsply Maillefer) and Mtwo (Sweden and Martina, Padua, Italy) nickel-titanium rotary files. The mesial canals of 45 mandibular first and second molars (with curvature angles between 25° and 35°) were selected for this study. The specimens were divided randomly into 3 groups with 15 canals each, and canal preparation was performed by an endodontist using #10-15-20 K-type stainless steel manual files (group M), #13-16-19 PathFile rotary instruments (group PF), and #10-15-20 Mtwo rotary instruments (group MT). The double digital radiograph technique was used, pre- and postinstrumentation, to assess whether apical transportation and/or aberration in root canal morphology occurred. The initial and final images of the central axis of the canals were compared by superimposition through computerized analysis and with the aid of magnification. The specimens were analyzed by 3 evaluators, whose calibration was checked using the Kendall agreement test. No apical transportation or aberration in root canal morphology occurred in any of the teeth; therefore, no statistical analysis was conducted. Neither the manual instruments nor the PathFile or Mtwo rotary instruments used to create a glide path had any influence on the occurrence of apical transportation or produced any canal aberration. Copyright © 2012 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Impact of TRMM and SSM/I-derived Precipitation and Moisture Data on the GEOS Global Analysis
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.; Zhang, Sara Q.; daSilva, Arlindo M.; Olson, William S.
1999-01-01
Current global analyses contain significant errors in primary hydrological fields such as precipitation, evaporation, and related cloud and moisture in the tropics. The Data Assimilation Office at NASA's Goddard Space Flight Center has been exploring the use of space-based rainfall and total precipitable water (TPW) estimates to constrain these hydrological parameters in the Goddard Earth Observing System (GEOS) global data assimilation system. We present results showing that assimilating the 6-hour averaged rain rates and TPW estimates from the Tropical Rainfall Measuring Mission (TRMM) and Special Sensor Microwave/Imager (SSM/I) instruments improves not only the precipitation and moisture estimates but also reduces state-dependent systematic errors in key climate parameters directly linked to convection such as the outgoing longwave radiation, clouds, and the large-scale circulation. The improved analysis also improves short-range forecasts beyond 1 day, but the impact is relatively modest compared with improvements in the time-averaged analysis. The study shows that, in the presence of biases and other errors of the forecast model, improving the short-range forecast is not necessarily a prerequisite for improving the assimilation as a climate data set. The full impact of a given type of observation on the assimilated data set should not be measured solely in terms of forecast skills.
Improving Lidar Turbulence Estimates for Wind Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer F.; Clifton, Andrew; Churchfield, Matthew J.
2016-10-06
Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. This presentation primarily focuses on the physics-based corrections, which include corrections for instrument noise, volume averaging, and variance contamination. As different factors affect TI under different stability conditions, the combination of physical corrections applied in L-TERRA changes depending on the atmospheric stability during each 10-minute time period. This stability-dependent version of L-TERRA performed well at both sites, reducing TI error and bringing lidar TI estimates closer to estimates from instruments on towers. However, there is still scatter evident in the lidar TI estimates, indicating that there are physics that are not being captured in the current version of L-TERRA. Two options are discussed for modeling the remainder of the TI error physics in L-TERRA: machine learning and lidar simulations. Lidar simulations appear to be a better approach, as they can help improve understanding of atmospheric effects on TI error and do not require a large training data set.
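One of the physics-based corrections named above, removal of instrument noise, is commonly implemented by subtracting an estimated noise variance from the measured velocity variance before forming TI; a minimal sketch (not necessarily the exact L-TERRA formulation, and all values are synthetic):

```python
import numpy as np

def turbulence_intensity(u, noise_std=0.0):
    """TI = sigma_u / mean(u), optionally removing an instrument-noise variance."""
    var = np.var(u, ddof=1) - noise_std ** 2
    return np.sqrt(max(var, 0.0)) / np.mean(u)           # guard against over-subtraction

rng = np.random.default_rng(5)
true_u = 8.0 + 0.8 * rng.standard_normal(600)            # 10-min record, sigma = 0.8 m/s
measured_u = true_u + 0.4 * rng.standard_normal(600)     # lidar adds 0.4 m/s noise (assumed known)

print("reference TI :", turbulence_intensity(true_u))
print("raw lidar TI :", turbulence_intensity(measured_u))
print("corrected TI :", turbulence_intensity(measured_u, noise_std=0.4))
```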
Malhotra, Chetna; Chan, Angelique; Matchar, David; Seow, Dennis; Chuo, Adeline; Do, Young Kyung
2013-07-01
The Short Portable Mental Status Questionnaire (SPMSQ) is a brief cognitive screening instrument, which is easy to use by a healthcare worker with little training. However, the validity of this instrument has not been established in Singapore. Thus, the primary aim of this study was to determine the diagnostic performance of the SPMSQ for screening dementia among patients attending outpatient cognitive assessment clinics and to assess whether the appropriate cut-off score varies by patient's age and education. A secondary aim of the study was to map the SPMSQ scores with Mini-Mental State Examination (MMSE) scores. The SPMSQ and MMSE were administered by a trained interviewer to 127 patients visiting outpatient cognitive assessment clinics at the Singapore General Hospital, Changi General Hospital and Tan Tock Seng Hospital. The geriatricians at these clinics then diagnosed these patients with dementia or no dementia (reference standard). Sensitivity and specificity of the SPMSQ with different cut-off points (number of errors) were calculated and compared to the reference standard using Receiver Operating Characteristic (ROC) analysis. The correlation coefficient was also calculated between MMSE and SPMSQ scores. Based on the ROC analysis and a balance of sensitivity and specificity, the appropriate cut-off for the SPMSQ was found to be 5 or more errors (sensitivity 78%, specificity 75%). The cut-off varied by education, but not by patient's age. There was a high correlation between SPMSQ and MMSE scores (r = 0.814, P < 0.0001). Despite the advantage of being a brief screening instrument for dementia, the use of the SPMSQ is limited by its low sensitivity and specificity, especially among patients with less than 6 years of education.
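The cut-off selection can be sketched with a standard ROC analysis, here using scikit-learn on synthetic error counts in place of the SPMSQ data:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(6)
# Synthetic SPMSQ-like error counts (0-10): dementia cases tend to make more errors
errors_no_dementia = rng.binomial(10, 0.25, size=70)
errors_dementia = rng.binomial(10, 0.60, size=57)
y_true = np.r_[np.zeros(70, dtype=int), np.ones(57, dtype=int)]
y_score = np.r_[errors_no_dementia, errors_dementia]

fpr, tpr, thresholds = roc_curve(y_true, y_score)
best = np.argmax(tpr - fpr)                    # Youden index: balance sensitivity/specificity
print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")
print(f"cut-off: >= {thresholds[best]:.0f} errors "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```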
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zwink, AB; Turner, DD
2012-03-19
The fore-optics of the Atmospheric Emitted Radiance Interferometer (AERI) are protected by an automated hatch to prevent precipitation from fouling the instrument's scene mirror (Knuteson et al. 2004). Limit switches connected with the hatch controller provide a signal of the hatch state: open, closed, undetermined (typically associated with the hatch being between fully open or fully closed during the instrument's sky view period), or an error condition. The instrument then records the state of the hatch with the radiance data so that samples taken when the hatch is not open can be removed from any subsequent analysis. However, the hatch controller suffered a multi-year failure for the AERI located at the ARM North Slope of Alaska (NSA) Central Facility in Barrow, Alaska, from July 2006-February 2008. The failure resulted in misreporting the state of the hatch in the 'hatchOpen' field within the AERI data files. With this error there is no simple solution to translate what was reported back to the correct hatch status, thereby making it difficult for an analysis to determine when the AERI was actually viewing the sky. As only the data collected when the hatch is fully open are scientifically useful, an algorithm was developed to determine whether the hatch was open or closed based on spectral radiance data from the AERI. Determining if the hatch is open or closed in a scene with low clouds is non-trivial, as low opaque clouds may look spectrally very similar to the closed hatch. This algorithm used a backpropagation neural network; these types of neural networks have been used with increasing frequency in atmospheric science applications.
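A minimal sketch of such a classifier, using a small scikit-learn multilayer perceptron on synthetic spectra in place of AERI radiances (the operational algorithm and its channel selection are not reproduced here):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n_chan = 50                                        # stand-in for selected AERI channels
# Synthetic radiance-like spectra: closed hatch ~ warm and uniform; open sky colder, more variable
closed = 280.0 + 2.0 * rng.standard_normal((300, n_chan))
open_sky = 220.0 + 25.0 * rng.standard_normal((300, n_chan))
X = np.vstack([closed, open_sky])
y = np.r_[np.ones(300, dtype=int), np.zeros(300, dtype=int)]   # 1 = hatch closed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                                                    random_state=0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```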
NASA Technical Reports Server (NTRS)
Diorio, Kimberly A.
2002-01-01
A process task analysis effort was undertaken by Dynacs Inc. commencing in June 2002 under contract from NASA YA-D6. Funding was provided through NASA's Ames Research Center (ARC), Code M/HQ, and Industrial Engineering and Safety (IES). The John F. Kennedy Space Center (KSC) Engineering Development Contract (EDC) Task Order was 5SMA768. The scope of the effort was to conduct a Human Factors Process Failure Modes and Effects Analysis (HF PFMEA) of a hazardous activity and provide recommendations to eliminate or reduce the effects of errors caused by human factors. The Liquid Oxygen (LOX) Pump Acceptance Test Procedure (ATP) was selected for this analysis. The HF PFMEA table (see appendix A) provides an analysis of six major categories evaluated for this study. These categories include Personnel Certification, Test Procedure Format, Test Procedure Safety Controls, Test Article Data, Instrumentation, and Voice Communication. For each specific requirement listed in appendix A, the following topics were addressed: Requirement, Potential Human Error, Performance-Shaping Factors, Potential Effects of the Error, Barriers and Controls, Risk Priority Numbers, and Recommended Actions. This report summarizes findings and gives recommendations as determined by the data contained in appendix A. It also includes a discussion of technology barriers and challenges to performing task analyses, as well as lessons learned. The HF PFMEA table in appendix A recommends the use of accepted and required safety criteria in order to reduce the risk of human error. The items with the highest risk priority numbers should receive the greatest amount of consideration. Implementation of the recommendations will result in a safer operation for all personnel.
Laboratory and field based evaluation of chromatography ...
The Monitor for AeRosols and GAses in ambient air (MARGA) is an on-line ion-chromatography-based instrument designed for speciation of the inorganic gas and aerosol ammonium-nitrate-sulfate system. Previous work to characterize the performance of the MARGA has been primarily based on field comparison to other measurement methods to evaluate accuracy. While such studies are useful, the underlying reasons for disagreement among methods are not always clear. This study examines aspects of MARGA accuracy and precision specifically related to automated chromatography analysis. Using laboratory standards, analytical accuracy, precision, and method detection limits derived from the MARGA chromatography software are compared to an alternative software package (Chromeleon, Thermo Scientific Dionex). Field measurements are used to further evaluate instrument performance, including the MARGA’s use of an internal LiBr standard to control accuracy. Using gas/aerosol ratios and aerosol neutralization state as a case study, the impact of chromatography on measurement error is assessed. The new generation of on-line chromatography-based gas and particle measurement systems have many advantages, including simultaneous analysis of multiple pollutants. The Monitor for Aerosols and Gases in Ambient Air (MARGA) is such an instrument that is used in North America, Europe, and Asia for atmospheric process studies as well as routine monitoring. While the instrument has been evaluat
Analog track angle error displays improve simulated GPS approach performance
DOT National Transportation Integrated Search
1996-01-01
Pilots flying non-precision instrument approaches traditionally rely on a course deviation indicator (CDI) analog display of cross track error (XTE) information. The new generation of GPS based area navigation (RNAV) receivers can also compute accura...
Hybrid Gibbs Sampling and MCMC for CMB Analysis at Small Angular Scales
NASA Technical Reports Server (NTRS)
Jewell, Jeffrey B.; Eriksen, H. K.; Wandelt, B. D.; Gorski, K. M.; Huey, G.; O'Dwyer, I. J.; Dickinson, C.; Banday, A. J.; Lawrence, C. R.
2008-01-01
A) Gibbs Sampling has now been validated as an efficient, statistically exact, and practically useful method for "low-L" (as demonstrated on WMAP temperature and polarization data). B) We are extending Gibbs sampling to directly propagate uncertainties in both foreground and instrument models to total uncertainty in cosmological parameters for the entire range of angular scales relevant for Planck. C) Made possible by inclusion of foreground model parameters in Gibbs sampling and hybrid MCMC and Gibbs sampling for the low signal-to-noise (high-L) regime. D) Future items to be included in the Bayesian framework include: 1) Integration with hybrid likelihood (or posterior) code for cosmological parameters; 2) Inclusion of other uncertainties in instrumental systematics (i.e., beam uncertainties, noise estimation, calibration errors, other).
2016-01-01
Background It is often thought that random measurement error has a minor effect upon the results of an epidemiological survey. Theoretically, errors of measurement should always increase the spread of a distribution. Defining an illness by having a measurement outside an established healthy range will lead to an inflated prevalence of that condition if there are measurement errors. Methods and results A Monte Carlo simulation of anthropometric assessment of children with malnutrition was conducted. Random errors of increasing magnitude were imposed upon the populations and showed that there was an increase in the standard deviation with each of the errors that became exponentially greater with the magnitude of the error. The potential magnitude of the resulting error in the reported prevalence of malnutrition was compared with published international data and found to be of sufficient magnitude to make a number of surveys and the numerous reports and analyses that used these data unreliable. Conclusions The effect of random error in public health surveys and the data upon which diagnostic cut-off points are derived to define “health” has been underestimated. Even quite modest random errors can more than double the reported prevalence of conditions such as malnutrition. Increasing sample size does not address this problem, and may even result in less accurate estimates. More attention needs to be paid to the selection, calibration and maintenance of instruments, measurer selection, training & supervision, routine estimation of the likely magnitude of errors using standardization tests, use of statistical likelihood of error to exclude data from analysis and full reporting of these procedures in order to judge the reliability of survey reports. PMID:28030627
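The simulation idea is easy to reproduce in outline: add increasing random error to anthropometric z-scores and watch the prevalence below the -2 SD cut-off inflate (the population parameters below are illustrative, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(8)
true_z = rng.normal(-0.4, 1.0, size=100_000)          # true weight-for-height z-scores (illustrative)
print(f"true prevalence (< -2 SD): {100 * np.mean(true_z < -2.0):.1f}%")

for err_sd in (0.25, 0.5, 0.75):
    observed = true_z + rng.normal(0.0, err_sd, size=true_z.size)
    print(f"measurement error SD = {err_sd:.2f} -> reported prevalence "
          f"{100 * np.mean(observed < -2.0):.1f}%")
```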
Tor P. Schultz; Darrel D. Nicholas; Stan Lebow
2003-01-01
In a laboratory leaching study, we found that chromated copper arsenate (CCA) treated wood, which had been exposed to one of five soils examined, unexpectedly appeared to gain significant Cr (47%) when measured with an energy-dispersive x-ray fluorescence instrument (American Wood-Preservers' Association (AWPA) Method A9-01 2001). Analysis of some of the leached...
Flow tilt angles near forest edges - Part 2: Lidar anemometry
NASA Astrophysics Data System (ADS)
Dellwik, E.; Mann, J.; Bingöl, F.
2010-05-01
A novel way of estimating near-surface mean flow tilt angles from ground-based Doppler lidar measurements is presented. The results are compared with traditional mast-based in-situ sonic anemometry. The tilt angle assessed with the lidar is based on 10 or 30 min mean values of the velocity field from a conically scanning lidar. In this mode of measurement, the lidar beam is rotated in a circle by a prism with a fixed angle to the vertical at varying focus distances. By fitting a trigonometric function to the scans, the mean vertical velocity can be estimated. Lidar measurements from (1) a fetch-limited beech forest site taken at 48-175 m a.g.l. (above ground level), (2) a reference site in flat agricultural terrain and (3) a second reference site in complex terrain are presented. The method to derive flow tilt angles and mean vertical velocities from lidar has several advantages compared to sonic anemometry; there is no flow distortion caused by the instrument itself, there are no temperature effects and the instrument misalignment can be corrected for by assuming zero tilt angle at high altitudes. Contrary to mast-based instruments, the lidar measures the wind field with the exact same alignment error at a multitude of heights. Disadvantages of estimating vertical velocities from a lidar compared to mast-based measurements are potentially slightly increased levels of statistical errors due to limited sampling time, because the sampling is disjunct, and a requirement for homogeneous flow. The estimated mean vertical velocity is biased if the flow over the scanned circle is not homogeneous. It is demonstrated that the error on the mean vertical velocity due to flow inhomogeneity can be approximated by a function of the angle of the lidar beam to the vertical and the vertical gradient of the mean vertical velocity, whereas the error due to flow inhomogeneity on the horizontal mean wind speed is independent of the lidar beam angle. For the presented measurements over forest, it is evaluated that the systematic error due to the inhomogeneity of the flow is less than 0.2°. The results of the vertical conical scans were promising, and yielded positive flow angles for a sector where the forest is fetch-limited. However, more data and analysis are needed for a complete evaluation of the lidar technique.
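The trigonometric fit mentioned above is, in its textbook homogeneous-flow form, a fit of v_r(phi) = a0 + a1 cos(phi) + a2 sin(phi) to the conical scan, with the mean vertical velocity recovered from the offset term; a minimal sketch (the cone angle and wind values are assumptions, and the authors' implementation may differ):

```python
import numpy as np

alpha = np.radians(30.0)                          # cone half-angle from the vertical (assumed)
phi = np.radians(np.arange(0.0, 360.0, 5.0))      # beam azimuths around one scan

rng = np.random.default_rng(9)
u, v, w = 6.0, -2.0, 0.15                         # synthetic homogeneous flow, m/s
vr = (u * np.sin(alpha) * np.cos(phi) + v * np.sin(alpha) * np.sin(phi)
      + w * np.cos(alpha) + 0.1 * rng.standard_normal(phi.size))

# Least-squares fit of vr(phi) = a0 + a1*cos(phi) + a2*sin(phi)
A = np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
a0, a1, a2 = np.linalg.lstsq(A, vr, rcond=None)[0]

w_hat = a0 / np.cos(alpha)
speed = np.hypot(a1, a2) / np.sin(alpha)
tilt = np.degrees(np.arctan2(w_hat, speed))       # mean flow tilt angle
print(f"w = {w_hat:.2f} m/s, horizontal speed = {speed:.2f} m/s, tilt = {tilt:.2f} deg")
```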
Error and its meaning in forensic science.
Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M
2014-01-01
The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.
CORRELATED ERRORS IN EARTH POINTING MISSIONS
NASA Technical Reports Server (NTRS)
Bilanow, Steve; Patt, Frederick S.
2005-01-01
Two different Earth-pointing missions dealing with attitude control and dynamics changes illustrate concerns with correlated error sources and coupled effects that can occur. On the OrbView-2 (OV-2) spacecraft, the assumption of a nearly-inertially-fixed momentum axis was called into question when a residual dipole bias apparently changed magnitude. The possibility that alignment adjustments and/or sensor calibration errors may compensate for actual motions of the spacecraft is discussed, and uncertainties in the dynamics are considered. Particular consideration is given to basic orbit frequency and twice orbit frequency effects and their high correlation over the short science observation data span. On the Tropical Rainfall Measuring Mission (TRMM) spacecraft, the switch to a contingency Kalman filter control mode created changes in the pointing error patterns. Results from independent checks on the TRMM attitude using science instrument data are reported, and bias shifts and error correlations are discussed. Various orbit frequency effects are common with the flight geometry for Earth pointing instruments. In both dual-spin momentum stabilized spacecraft (like OV-2) and three axis stabilized spacecraft with gyros (like TRMM under Kalman filter control), changes in the initial attitude state propagate into orbit frequency variations in attitude and some sensor measurements. At the same time, orbit frequency measurement effects can arise from dynamics assumptions, environment variations, attitude sensor calibrations, or ephemeris errors. Also, constant environment torques for dual spin spacecraft have similar effects to gyro biases on three axis stabilized spacecraft, effectively shifting the one-revolution-per-orbit (1-RPO) body rotation axis. Highly correlated effects can create a risk for estimation errors particularly when a mission switches an operating mode or changes its normal flight environment. Some error effects will not be obvious from attitude sensor measurement residuals, so some independent checks using imaging sensors are essential and derived science instrument attitude measurements can prove quite valuable in assessing the attitude accuracy.
Errors in retarding potential analyzers caused by nonuniformity of the grid-plane potential.
NASA Technical Reports Server (NTRS)
Hanson, W. B.; Frame, D. R.; Midgley, J. E.
1972-01-01
One aspect of the degradation in performance of retarding potential analyzers caused by potential depressions in the retarding grid is quantitatively estimated from laboratory measurements and theoretical calculations. A simple expression is obtained that permits the use of laboratory measurements of grid properties to make first-order corrections to flight data. Systematic positive errors in ion temperature of approximately 16% for the Ogo 4 instrument and 3% for the Ogo 6 instrument are deduced. The effects of the transverse electric fields arising from the grid potential depressions are not treated.
Surface characterization protocol for precision aspheric optics
NASA Astrophysics Data System (ADS)
Sarepaka, RamaGopal V.; Sakthibalan, Siva; Doodala, Somaiah; Panwar, Rakesh S.; Kotaria, Rajendra
2017-10-01
In Advanced Optical Instrumentation, aspherics provide an effective performance alternative. Aspheric fabrication and surface metrology, followed by aspheric design, are complementary iterative processes in precision aspheric development. As in fabrication, a holistic approach to aspheric surface characterization is adopted to evaluate the actual surface error and to aim at the delivery of aspheric optics with the desired surface quality. Precision optical surfaces are characterized by profilometry or by interferometry. Aspheric profiles are characterized by contact profilometers, through linear surface scans, to analyze their form, figure and finish errors. One must ensure that the surface characterization procedure does not add to the resident profile errors (generated during aspheric surface fabrication). This presentation examines the errors introduced after surface generation and during profilometry of aspheric profiles. The effort is to identify sources of error and to optimize the metrology process. The sources of error during profilometry may be due to: profilometer settings, work-piece placement on the profilometer stage, selection of zenith/nadir points of the aspheric profile, metrology protocols, clear-aperture diameter analysis, computational limitations of the profiler, software issues, etc. At OPTICA, a PGI 1200 FTS contact profilometer (Taylor-Hobson) is used for this study. Precision optics of various profiles are studied, with due attention to possible sources of error during characterization, using a multi-directional scan approach for uniformity and repeatability of error estimation. This study provides insight into aspheric surface characterization and helps in establishing an optimal aspheric surface production methodology.
Ko, YuKyung; Yu, Soyoung
2017-09-01
This study was undertaken to explore the correlations among nurses' perceptions of patient safety culture, their intention to report errors, and leader coaching behaviors. The participants (N = 289) were nurses from 5 Korean hospitals with approximately 300 to 500 beds each. Sociodemographic variables, patient safety culture, intention to report errors, and coaching behavior were measured using self-report instruments. Data were analyzed using descriptive statistics, Pearson correlation coefficient, the t test, and the Mann-Whitney U test. Nurses' perceptions of patient safety culture and their intention to report errors showed significant differences between groups of nurses who rated their leaders as high-performing or low-performing coaches. Perceived coaching behavior showed a significant, positive correlation with patient safety culture and intention to report errors, i.e., as nurses' perceptions of coaching behaviors increased, so did their ratings of patient safety culture and error reporting. There is a need in health care settings for coaching by nurse managers to provide quality nursing care and thus improve patient safety. Programs that are systematically developed and implemented to enhance the coaching behaviors of nurse managers are crucial to the improvement of patient safety and nursing care. Moreover, a systematic analysis of the causes of malpractice, as opposed to a focus on the punitive consequences of errors, could increase error reporting and therefore promote a culture in which a higher level of patient safety can thrive.
NASA Astrophysics Data System (ADS)
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf
2015-05-01
All surveying instruments and their measurements suffer from errors. To refine the measurement results, it is necessary to use procedures that restrict the influence of the instrument errors on the measured values or to apply numerical corrections. In precise engineering surveying applications in industry, the accuracy of distances, usually realized over relatively short ranges, is a key parameter limiting the resulting accuracy of the determined values (coordinates, etc.). To determine the size of the systematic and random errors of the measured distances, tests were made with the idea of suppressing the random error by averaging repeated measurements and reducing the influence of systematic errors by identifying their absolute size on an absolute baseline realized in the geodetic laboratory at the Faculty of Civil Engineering, CTU in Prague. Sixteen concrete pillars with forced centerings were set up, and the absolute distances between the points were determined with a standard deviation of 0.02 mm using a Leica Absolute Tracker AT401. For any distance measured by the calibrated instruments (up to the length of the testing baseline, i.e. 38.6 m), the error correction of the distance meter can now be determined in two ways: first, by interpolation on the raw data, or second, using a correction function derived by a previous FFT transformation. The quality of this calibration and correction procedure was tested experimentally on three instruments (Trimble S6 HP, Topcon GPT-7501, Trimble M3) using the Leica Absolute Tracker AT401. The correction procedure reduced the standard deviation of the measured distances significantly, to less than 0.6 mm. For the Topcon GPT-7501 the nominal standard deviation is 2 mm; 2.8 mm was achieved without corrections and 0.55 mm after corrections. For the Trimble M3 the nominal standard deviation is 3 mm; 1.1 mm was achieved without corrections and 0.58 mm after corrections. For the Trimble S6 the nominal standard deviation is 1 mm; 1.2 mm was achieved without corrections and 0.51 mm after corrections. The proposed calibration and correction procedure is, in our opinion, very suitable for increasing the accuracy of electronic distance measurement and allows common surveying instruments to achieve uncommonly high precision.
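The first correction approach (interpolation on the raw calibration residuals) can be sketched as follows, with synthetic baseline numbers standing in for the real calibration data:

```python
import numpy as np

# Reference baseline distances (laser tracker, sigma ~0.02 mm) and synthetic EDM readings
reference_m = np.array([2.4, 5.1, 9.7, 14.3, 19.8, 25.2, 31.0, 38.6])
rng = np.random.default_rng(10)
cyclic = 0.0008 * np.sin(2.0 * np.pi * reference_m / 10.0)        # assumed 0.8 mm cyclic error term
edm_m = reference_m + cyclic + 0.0003 * rng.standard_normal(reference_m.size)

correction = reference_m - edm_m                  # tabulated corrections on the measured values

def corrected_distance(measured):
    """Apply the baseline correction, interpolated to the raw EDM reading."""
    return measured + np.interp(measured, edm_m, correction)

raw = 17.250
print(f"raw {raw:.4f} m -> corrected {corrected_distance(raw):.4f} m")
```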
The Importance of Relying on the Manual: Scoring Error Variance in the WISC-IV Vocabulary Subtest
ERIC Educational Resources Information Center
Erdodi, Laszlo A.; Richard, David C. S.; Hopwood, Christopher
2009-01-01
Classical test theory assumes that ability level has no effect on measurement error. Newer test theories, however, argue that the precision of a measurement instrument changes as a function of the examinee's true score. Research has shown that administration errors are common in the Wechsler scales and that subtests requiring subjective scoring…
Tissue resistivity estimation in the presence of positional and geometrical uncertainties.
Baysal, U; Eyüboğlu, B M
2000-08-01
Geometrical uncertainties (organ boundary variation and electrode position uncertainties) are the biggest sources of error in estimating the electrical resistivity of tissues from body surface measurements. In this study, in order to decrease estimation errors, the statistically constrained minimum mean squared error estimation algorithm (MiMSEE) is constrained with a priori knowledge of the geometrical uncertainties in addition to the constraints based on geometry, resistivity range, linearization and instrumentation errors. The MiMSEE calculates an optimum inverse matrix, which maps the surface measurements to the unknown resistivity distribution. The required data are obtained from four-electrode impedance measurements, similar to injected-current electrical impedance tomography (EIT). In this study, the surface measurements are simulated by using a numerical thorax model. The data are perturbed with additive instrumentation noise. Simulated surface measurements are then used to estimate the tissue resistivities by using the proposed algorithm. The results are compared with those of a conventional least squares error estimator (LSEE). Depending on the region, the MiMSEE yields an estimation error between 0.42% and 31.3%, compared with 7.12% to 2010% for the LSEE. It is shown that the MiMSEE is quite robust even in the case of geometrical uncertainties.
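The contrast between the two estimators can be sketched in a linear-Gaussian toy problem, where the MMSE solution uses the prior resistivity covariance and the noise covariance (the thorax model and the actual constraints are not reproduced; all matrices below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(11)
n_meas, n_regions = 8, 6

A = rng.standard_normal((n_meas, n_regions))       # linearized sensitivity matrix (synthetic)
x_prior = np.full(n_regions, 5.0)                  # prior resistivity (arbitrary units)
C_x = np.diag(np.full(n_regions, 4.0))             # prior covariance (resistivity range constraint)
C_n = np.diag(np.full(n_meas, 2.0))                # instrumentation noise covariance

x_true = x_prior + rng.multivariate_normal(np.zeros(n_regions), C_x)
y = A @ x_true + rng.multivariate_normal(np.zeros(n_meas), C_n)

x_lse = np.linalg.lstsq(A, y, rcond=None)[0]       # unconstrained least squares

# Linear MMSE: x = x_prior + C_x A^T (A C_x A^T + C_n)^-1 (y - A x_prior)
gain = C_x @ A.T @ np.linalg.inv(A @ C_x @ A.T + C_n)
x_mmse = x_prior + gain @ (y - A @ x_prior)

print("LSEE error :", np.linalg.norm(x_lse - x_true))
print("MMSE error :", np.linalg.norm(x_mmse - x_true))
```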
Risk Assessment Stability: A Revalidation Study of the Arizona Risk/Needs Assessment Instrument
ERIC Educational Resources Information Center
Schwalbe, Craig S.
2009-01-01
The actuarial method is the gold standard for risk assessment in child welfare, juvenile justice, and criminal justice. It produces risk classifications that are highly predictive and that may be robust to sampling error. This article reports a revalidation study of the Arizona Risk/Needs Assessment instrument, an actuarial instrument for juvenile…
NASA Technical Reports Server (NTRS)
Schrama, E.
1990-01-01
The concept of a Global Positioning System (GPS) receiver as a tracking facility and a gradiometer as a separate instrument on a low-orbiting platform offers a unique tool to map the Earth's gravitational field with unprecedented accuracies. The former technique allows determination of the spacecraft's ephemeris at any epoch to within 3 to 10 cm; the latter permits measurement of the tensor of second-order derivatives of the gravity field to within 0.01 to 0.0001 Eotvos units, depending on the type of gradiometer. First, a variety of error sources in gradiometry is described, with emphasis placed on the rotational problem, pursuing both a static and a dynamic approach. Next, an analytical technique is described and applied to an error analysis of gravity field parameters from gradiometer and GPS observation types. Results are discussed for various configurations proposed on Topex/Poseidon, Gravity Probe-B, and Aristoteles, indicating that GPS-only solutions may be computed up to degree and order 35, 55, and 85, respectively, whereas a combined GPS/gradiometer experiment on Aristoteles may result in an acceptable solution up to degree and order 240.
Pilot performance and workload using simulated GPS track angle error displays
DOT National Transportation Integrated Search
1995-01-01
The effect on simulated GPS instrument approach performance and workload resulting from the addition of Track Angle Error (TAE) information to cockpit RNAV receiver displays in explicit analog form was studied experimentally (S display formats, 6 pil...
ON ESTIMATING FORCE-FREENESS BASED ON OBSERVED MAGNETOGRAMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, X. M.; Zhang, M.; Su, J. T., E-mail: xmzhang@nao.cas.cn
It is a common practice in the solar physics community to test whether or not measured photospheric or chromospheric vector magnetograms are force-free, using the Maxwell stress as a measure. Some previous studies have suggested that magnetic fields of active regions in the solar chromosphere are close to being force-free whereas there is no consistency among previous studies on whether magnetic fields of active regions in the solar photosphere are force-free or not. Here we use three kinds of representative magnetic fields (analytical force-free solutions, modeled solar-like force-free fields, and observed non-force-free fields) to discuss how measurement issues such as limited field of view (FOV), instrument sensitivity, and measurement error could affect the estimation of force-freeness based on observed magnetograms. Unlike previous studies that focus on discussing the effect of limited FOV or instrument sensitivity, our calculation shows that just measurement error alone can significantly influence the results of estimates of force-freeness, due to the fact that measurement errors in horizontal magnetic fields are usually ten times larger than those in vertical fields. This property of measurement errors, interacting with the particular form of a formula for estimating force-freeness, would result in wrong judgments of the force-freeness: a truly force-free field may be mistakenly estimated as being non-force-free and a truly non-force-free field may be estimated as being force-free. Our analysis calls for caution when interpreting estimates of force-freeness based on measured magnetograms, and also suggests that the true photospheric magnetic field may be further away from being force-free than it currently appears to be.
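A sketch of the commonly used net-Lorentz-force (Maxwell stress) ratios behind such force-freeness tests; the synthetic field, the noise levels, and the informal "below ~0.1" reading are illustrative assumptions, not the paper's data or exact formulation.

```python
import numpy as np

def force_freeness(Bx, By, Bz):
    """Net Lorentz force components (Maxwell stress) normalized by total magnetic pressure.

    A field is commonly judged "close to force-free" when all three ratios are
    small (e.g. below ~0.1); constant prefactors cancel in the ratios.
    """
    Fx = -np.sum(Bx * Bz)
    Fy = -np.sum(By * Bz)
    Fz = -0.5 * np.sum(Bx**2 + By**2 - Bz**2)
    F0 = 0.5 * np.sum(Bx**2 + By**2 + Bz**2)
    return abs(Fx) / F0, abs(Fy) / F0, abs(Fz) / F0

# Illustrative check: add noise that is ten times larger on the horizontal
# components than on the vertical one, as discussed in the abstract.
rng = np.random.default_rng(1)
Bz = rng.normal(scale=200.0, size=(128, 128))   # placeholder vertical field (G)
Bx, By = 0.3 * Bz, -0.2 * Bz                    # placeholder horizontal field
noisy = force_freeness(Bx + rng.normal(scale=100, size=Bx.shape),
                       By + rng.normal(scale=100, size=By.shape),
                       Bz + rng.normal(scale=10, size=Bz.shape))
print(force_freeness(Bx, By, Bz), noisy)
```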
Assessment of Satellite Surface Radiation Products in Highland Regions with Tibet Instrumental Data
NASA Technical Reports Server (NTRS)
Yang, Kun; Koike, Toshio; Stackhouse, Paul; Mikovitz, Colleen
2006-01-01
This study presents results of comparisons between instrumental radiation data in the elevated Tibetan Plateau and two global satellite products: the Global Energy and Water Cycle Experiment - Surface Radiation Budget (GEWEX-SRB) and the International Satellite Cloud Climatology Project - Flux Data (ISCCP-FD). In general, shortwave radiation (SW) is estimated better by ISCCP-FD while longwave radiation (LW) is estimated better by GEWEX-SRB, but all the radiation components in both products are under-estimated. Severe and systematic errors were found in monthly-mean SRB SW (on plateau average, -48 W/sq m for downward SW and -18 W/sq m for upward SW) and FD LW (on plateau average, -37 W/sq m for downward LW and -62 W/sq m for upward LW). Errors in monthly-mean diurnal variations are even larger than the monthly-mean errors. Though the LW errors can be reduced by about 10 W/sq m after a correction for the altitude difference between the sites and the SRB and FD grids, these errors are still higher than those for other regions. The large errors in SRB SW were mainly due to a processing mistake in the elevation effect, while the errors in SRB LW were mainly due to significant errors in the input data. We suggest reprocessing satellite surface radiation budget data, at least for highland areas like Tibet.
Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario
2016-01-01
The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, in contrast to other, simpler instruments. Detailed coordinate error compensation models are generally based on treating the CMM as a rigid body and require a detailed mapping of the CMM’s behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length errors by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included. PMID:27690052
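A minimal illustration of composing per-axis length errors vectorially along a measured direction; the per-axis error values and the simple projection used here are assumptions for illustration, not the paper's full compensation model (which also folds unexplained variability into the uncertainty budget).

```python
import numpy as np

def composed_length_error(p_start, p_end, axis_error_per_m):
    """Vectorial composition of per-axis length errors for a measured length.

    axis_error_per_m: assumed proportional length errors (m/m) for the X, Y, Z axes,
    e.g. from axis scale calibration. Illustrative only.
    """
    d = np.asarray(p_end, float) - np.asarray(p_start, float)
    per_axis = d * np.asarray(axis_error_per_m, float)   # error contribution along each axis
    length = np.linalg.norm(d)
    # Project the per-axis error vector onto the measurement direction.
    return float(per_axis @ d / length)

err = composed_length_error([0.0, 0.0, 0.0], [0.3, 0.4, 0.0],
                            axis_error_per_m=[8e-6, 5e-6, 6e-6])
print(f"{err*1e6:.2f} micrometres over a 0.5 m length")
```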
Goulet, Eric D B; Baker, Lindsay B
2017-12-01
The B-722 Laqua Twin is a low-cost, portable, battery-operated sodium analyzer that can be used for the assessment of sweat sodium concentration. The Laqua Twin is reliable and provides a degree of accuracy similar to more expensive analyzers; however, its interunit measurement error remains unknown. The purpose of this study was to compare the sodium concentration values of 70 sweat samples measured using three different Laqua Twin units. Mean absolute errors, random errors, and constant errors among the different Laqua Twins ranged between 1.7 and 3.5 mmol/L, 2.5 and 3.7 mmol/L, and -0.6 and 3.9 mmol/L, respectively. Proportional errors among Laqua Twins were all < 2%. Based on a within-subject biological variability in sweat sodium concentration of ± 12%, the maximal allowable imprecision among instruments was considered to be ≤ 6%. In that respect, the within-unit (2.9%), between-unit (4.5%), and total (5.4%) measurement error coefficients of variation were all < 6%. For a given sweat sodium concentration value, the largest observed differences among instruments in the mean, lower-bound, and upper-bound errors of measurement were, respectively, 4.7 mmol/L, 2.3 mmol/L, and 7.0 mmol/L. In conclusion, our findings show that the interunit measurement error of the B-722 Laqua Twin is low and methodologically acceptable.
NASA Astrophysics Data System (ADS)
Kolesnik, Y. B.
1995-12-01
15 catalogues produced in the 1980s and 12 catalogues made from 1960 to 1978 have been used to assess the consistency of the FK5 system with observations in the declination zone from -30deg to 30deg. Classical δ-dependent and α-dependent systematic differences (Cat-FK5) have been formed for individual instrumental systems of the catalogues. The weighted mean instrumental systems for two subsets of catalogues centred at the epochs 1970 and 1987 have been constructed. External systematic and random accuracy of the catalogues under analysis and errors of the mean instrumental systems for both selections of catalogues have been estimated and presented in tables. The individual systematic differences of the catalogues and the mean instrumental systems are shown in figures. Numerical values of the total systematic deviations for both mean instrumental systems are given in tables. The results of intercomparison are discussed to assess the actual systematic deviations of the FK5 at the respective epochs and its actual random accuracy. It has been found that the mutual consistency of individual instrumental systems of catalogues of 1980s with respect to zonal systematic differences in both right ascension and declination is significantly better when comparing with the earlier catalogues. Consistency of both catalogue subsets is comparable with respect to α-dependent systematic differences. It is shown that the claimed random errors of the FK5 positions and proper motions are rather realistic, while deviations of the FK5 right ascension and declination system in the equatorial zone for both mean epochs exceed expected ones from the formal considerations. Quick degradation of the FK5 system with time is detected in right ascension. The results in declination are recognized to be less reliable, due to larger inconsistency of the individual instrumental systems. The system of the Second Quito Astrolabe Catalogue (QAC 2) has been investigated by comparison with two subsets of catalogues. It shows rather good consistency with both mean instrumental systems. Some conspicuous local deviations are outlined and discussed. We conclude that the QAC 2 might successfully be used in the compilation of the future second general catalogue of astrolabes as a link between northern and southern astrolabe catalogues.
Wu, Yifei; Thibos, Larry N; Candy, T Rowan
2018-05-07
Eccentric photorefraction and Purkinje image tracking are used to estimate refractive state and eye position simultaneously. Beyond vision screening, they provide insight into typical and atypical visual development. Systematic analysis of the effect of refractive error and spectacles on photorefraction data is needed to gauge the accuracy and precision of the technique. Simulation of two-dimensional, double-pass eccentric photorefraction was performed (Zemax). The inward pass included appropriate light sources, lenses and a single surface pupil plane eye model to create an extended retinal image that served as the source for the outward pass. Refractive state, as computed from the luminance gradient in the image of the pupil captured by the model's camera, was evaluated for a range of refractive errors (-15D to +15D), pupil sizes (3 mm to 7 mm) and two sets of higher-order monochromatic aberrations. Instrument calibration was simulated using -8D to +8D trial lenses at the spectacle plane for: (1) vertex distances from 3 mm to 23 mm, (2) uncorrected and corrected hyperopic refractive errors of +4D and +7D, and (3) uncorrected and corrected astigmatism of 4D at four different axes. Empirical calibration of a commercial photorefractor was also compared with a wavefront aberrometer for human eyes. The pupil luminance gradient varied linearly with refractive state for defocus less than approximately 4D (5 mm pupil). For larger errors, the gradient magnitude saturated and then reduced, leading to under-estimation of refractive state. Additional inaccuracy (up to 1D for 8D of defocus) resulted from spectacle magnification in the pupil image, which would reduce precision in situations where vertex distance is variable. The empirical calibration revealed a constant offset between the two clinical instruments. Computational modelling demonstrates the principles and limitations of photorefraction to help users avoid potential measurement errors. Factors that could cause clinically significant errors in photorefraction estimates include high refractive error, vertex distance and magnification effects of a spectacle lens, increased higher-order monochromatic aberrations, and changes in primary spherical aberration with accommodation. The impact of these errors increases with increasing defocus. © 2018 The Authors Ophthalmic & Physiological Optics © 2018 The College of Optometrists.
Instrument Pointing Capabilities: Past, Present, and Future
NASA Technical Reports Server (NTRS)
Blackmore, Lars; Murray, Emmanuell; Scharf, Daniel P.; Aung, Mimi; Bayard, David; Brugarolas, Paul; Hadaegh, Fred; Lee, Allan; Milman, Mark; Sirlin, Sam;
2011-01-01
This paper surveys the instrument pointing capabilities of past, present and future space telescopes and interferometers. As an important aspect of this survey, we present a taxonomy for "apples-to-apples" comparisons of pointing performances. First, pointing errors are defined relative to either an inertial frame or a celestial target. Pointing error can then be further sub-divided into DC, that is, steady state, and AC components. We refer to the magnitude of the DC error relative to the inertial frame as absolute pointing accuracy, and we refer to the magnitude of the DC error relative to a celestial target as relative pointing accuracy. The magnitude of the AC error is referred to as pointing stability. While an AC/DC partition is not new, we leverage previous work by some of the authors to quantitatively clarify and compare varying definitions of jitter and time window averages. With this taxonomy and for sixteen past, present, and future missions, pointing accuracies and stabilities, both required and achieved, are presented. In addition, we describe the attitude control technologies used to and, for future missions, planned to achieve these pointing performances.
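A small sketch of the AC/DC partition described above, splitting a pointing-error time series into a steady-state (accuracy) part and a fluctuating (stability) part; the window length and the synthetic series are assumptions for illustration.

```python
import numpy as np

def pointing_metrics(error_arcsec, window=None):
    """Split a pointing-error time series into DC (accuracy) and AC (stability) parts.

    error_arcsec: pointing-error samples relative to a chosen frame (inertial frame
    for absolute accuracy, celestial target for relative accuracy).
    window: optional number of samples for a time-window average of the AC part.
    """
    e = np.asarray(error_arcsec, float)
    dc = e.mean()                        # steady-state (DC) error -> pointing accuracy
    ac = e - dc                          # fluctuating (AC) part -> pointing stability
    stability = ac.std()
    if window:
        # Spread of window-averaged error: one way to express jitter over a time window.
        n = len(ac) // window * window
        windowed = ac[:n].reshape(-1, window).mean(axis=1)
        return dc, stability, windowed.std()
    return dc, stability

rng = np.random.default_rng(2)
series = 0.8 + 0.05 * rng.standard_normal(10_000)   # placeholder: 0.8" bias, 0.05" jitter
print(pointing_metrics(series, window=100))
```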
Bayesian operational modal analysis of Jiangyin Yangtze River Bridge
NASA Astrophysics Data System (ADS)
Brownjohn, James Mark William; Au, Siu-Kui; Zhu, Yichen; Sun, Zhen; Li, Binbin; Bassitt, James; Hudson, Emma; Sun, Hongbin
2018-09-01
Vibration testing of long span bridges is becoming a commissioning requirement, yet such exercises represent the extreme of experimental capability, with challenges for instrumentation (due to frequency range, resolution and km-order separation of sensor) and system identification (because of the extreme low frequencies). The challenge with instrumentation for modal analysis is managing synchronous data acquisition from sensors distributed widely apart inside and outside the structure. The ideal solution is precisely synchronised autonomous recorders that do not need cables, GPS or wireless communication. The challenge with system identification is to maximise the reliability of modal parameters through experimental design and subsequently to identify the parameters in terms of mean values and standard errors. The challenge is particularly severe for modes with low frequency and damping typical of long span bridges. One solution is to apply 'third generation' operational modal analysis procedures using Bayesian approaches in both the planning and analysis stages. The paper presents an exercise on the Jiangyin Yangtze River Bridge, a suspension bridge with a 1385 m main span. The exercise comprised planning of a test campaign to optimise the reliability of operational modal analysis, the deployment of a set of independent data acquisition units synchronised using precision oven controlled crystal oscillators and the subsequent identification of a set of modal parameters in terms of mean and variance errors. Although the bridge has had structural health monitoring technology installed since it was completed, this was the first full modal survey, aimed at identifying important features of the modal behaviour rather than providing fine resolution of mode shapes through the whole structure. Therefore, measurements were made in only the (south) tower, while torsional behaviour was identified by a single measurement using a pair of recorders across the carriageway. The modal survey revealed a first lateral symmetric mode with natural frequency 0.0536 Hz with standard error ±3.6% and damping ratio 4.4% with standard error ±88%. First vertical mode is antisymmetric with frequency 0.11 Hz ± 1.2% and damping ratio 4.9% ± 41%. A significant and novel element of the exercise was planning of the measurement setups and their necessary duration linked to prior estimation of the precision of the frequency and damping estimates. The second novelty is the use of the multi-sensor precision synchronised acquisition without external time reference on a structure of this scale. The challenges of ambient vibration testing and modal identification in a complex environment are addressed leveraging on advances in practical implementation and scientific understanding of the problem.
NASA Astrophysics Data System (ADS)
Giono, G.; Ishikawa, R.; Narukage, N.; Kano, R.; Katsukawa, Y.; Kubo, M.; Ishikawa, S.; Bando, T.; Hara, H.; Suematsu, Y.; Winebarger, A.; Kobayashi, K.; Auchère, F.; Trujillo Bueno, J.; Tsuneta, S.; Shimizu, T.; Sakao, T.; Cirtain, J.; Champey, P.; Asensio Ramos, A.; Štěpán, J.; Belluzzi, L.; Manso Sainz, R.; De Pontieu, B.; Ichimoto, K.; Carlsson, M.; Casini, R.; Goto, M.
2017-04-01
The Chromospheric Lyman-Alpha SpectroPolarimeter is a sounding rocket instrument designed to measure for the first time the linear polarization of the hydrogen Lyman-α line (121.6 nm). The instrument was successfully launched on 3 September 2015 and observations were conducted at the solar disc center and close to the limb during the five-minute flight. In this article, the disc-center observations are used to provide an in-flight calibration of the instrument's spurious polarization. The derived in-flight spurious polarization is consistent with the spurious polarization levels determined during the pre-flight calibration, and a statistical analysis of the polarization fluctuations of solar origin is conducted to ensure a 0.014% precision on the spurious polarization. The combination of the pre-flight and in-flight polarization calibrations provides a complete picture of the instrument response matrix, and a proper error transfer method is used to confirm the achieved polarization accuracy. As a result, the unprecedented 0.1% polarization accuracy of the instrument in the vacuum ultraviolet is ensured by the polarization calibration.
NASA Technical Reports Server (NTRS)
Tuzzolino, A. J.; Simpson, J. A.; Mckibben, R. B.; Voss, H. D.; Gursky, H.
1993-01-01
The characteristics of a space dust instrument which would be ideally suited to carry out near-Earth dust measurements on a possible Long Duration Exposure Facility reflight mission (LDEF 2) are discussed. As a model for the trajectory portion of the instrument proposed for LDEF 2, the characteristics of a SPAce DUSt instrument (SPADUS) currently under development for flight on the USA ARGOS mission to measure the flux, mass, velocity, and trajectory of near-Earth dust are summarized. Since natural (cosmic) dust and man-made dust particles (orbital debris) have different velocity and trajectory distributions, they are distinguished by means of the SPADUS velocity/trajectory information. The SPADUS measurements will cover the dust mass range from approximately 5 x 10(exp -12) g (2 microns diameter) to approximately 1 x 10(exp -5) g (200 microns diameter), with an expected mean error in particle trajectory of approximately 7 deg (isotropic flux). Arrays of capture cell devices positioned behind the trajectory instrumentation would provide for Earth-based chemical and isotopic analysis of captured dust. The SPADUS measurement principles and characteristics, its role in the ARGOS mission, and its application to an LDEF 2 mission are summarized.
NASA Technical Reports Server (NTRS)
Abdelwahab, Mahmood; Biesiadny, Thomas J.; Silver, Dean
1987-01-01
An uncertainty analysis was conducted to determine the bias and precision errors and total uncertainty of measured turbojet engine performance parameters. The engine tests were conducted as part of the Uniform Engine Test Program which was sponsored by the Advisory Group for Aerospace Research and Development (AGARD). With the same engines, support hardware, and instrumentation, performance parameters were measured twice, once during tests conducted in test cell number 3 and again during tests conducted in test cell number 4 of the NASA Lewis Propulsion Systems Laboratory. The analysis covers 15 engine parameters, including engine inlet airflow, engine net thrust, and engine specific fuel consumption measured at high rotor speed of 8875 rpm. Measurements were taken at three flight conditions defined by the following engine inlet pressure, engine inlet total temperature, and engine ram ratio: (1) 82.7 kPa, 288 K, 1.0, (2) 82.7 kPa, 288 K, 1.3, and (3) 20.7 kPa, 288 K, 1.3. In terms of bias, precision, and uncertainty magnitudes, there were no differences between most measurements made in test cells number 3 and 4. The magnitude of the errors increased for both test cells as engine pressure level decreased. Also, the level of the bias error was two to three times larger than that of the precision error.
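For context, one common way of combining a bias limit and a precision index into a total uncertainty is the root-sum-square form with a coverage factor; the sketch below uses placeholder numbers, not values from the AGARD tests.

```python
import math

def total_uncertainty(bias, precision_index, n_samples, coverage=2.0):
    """Combine a bias limit and a precision index into a total uncertainty (root-sum-square).

    bias:            estimated bias (systematic) error limit
    precision_index: sample standard deviation of repeated measurements
    n_samples:       number of repeated measurements averaged
    coverage:        coverage factor applied to the random part (~2 for 95%)
    """
    random_part = coverage * precision_index / math.sqrt(n_samples)
    return math.sqrt(bias**2 + random_part**2)

# Placeholder numbers: 0.6% bias limit, 0.25% precision index, 30 repeats
print(f"U = {total_uncertainty(0.6, 0.25, 30):.2f} % of reading")
```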
Sensitivity analysis for future space missions with segmented telescopes for high-contrast imaging
NASA Astrophysics Data System (ADS)
Leboulleux, Lucie; Pueyo, Laurent; Sauvage, Jean-François; Mazoyer, Johan; Soummer, Remi; Fusco, Thierry; Sivaramakrishnan, Anand
2018-01-01
The detection and analysis of biomarkers on earth-like planets using direct-imaging will require both high-contrast imaging and spectroscopy at very close angular separation (10^10 star to planet flux ratio at a few 0.1”). This goal can only be achieved with large telescopes in space to overcome atmospheric turbulence, often combined with a coronagraphic instrument with wavefront control. Large segmented space telescopes such as studied for the LUVOIR mission will generate segment-level instabilities and cophasing errors in addition to local mirror surface errors and other aberrations of the overall optical system. These effects contribute directly to the degradation of the final image quality and contrast. We present an analytical model that produces coronagraphic images of a segmented pupil telescope in the presence of segment phasing aberrations expressed as Zernike polynomials. This model relies on a pair-based projection of the segmented pupil and provides results that match an end-to-end simulation with an rms error on the final contrast of ~3%. This analytical model can be applied both to static and dynamic modes, and either in monochromatic or broadband light. It retires the need for end-to-end Monte-Carlo simulations that are otherwise needed to build a rigorous error budget, by enabling quasi-instantaneous analytical evaluations. The ability to invert directly the analytical model provides direct constraints and tolerances on all segments-level phasing and aberrations.
Asteroid approach covariance analysis for the Clementine mission
NASA Technical Reports Server (NTRS)
Ionasescu, Rodica; Sonnabend, David
1993-01-01
The Clementine mission is designed to test Strategic Defense Initiative Organization (SDIO) technology, the Brilliant Pebbles and Brilliant Eyes sensors, by mapping the moon surface and flying by the asteroid Geographos. The capability of two of the instruments available on board the spacecraft, the lidar (laser radar) and the UV/Visible camera is used in the covariance analysis to obtain the spacecraft delivery uncertainties at the asteroid. These uncertainties are due primarily to asteroid ephemeris uncertainties. On board optical navigation reduces the uncertainty in the knowledge of the spacecraft position in the direction perpendicular to the incoming asymptote to a one-sigma value of under 1 km, at the closest approach distance of 100 km. The uncertainty in the knowledge of the encounter time is about 0.1 seconds for a flyby velocity of 10.85 km/s. The magnitude of these uncertainties is due largely to Center Finding Errors (CFE). These systematic errors represent the accuracy expected in locating the center of the asteroid in the optical navigation images, in the absence of a topographic model for the asteroid. The direction of the incoming asymptote cannot be estimated accurately until minutes before the asteroid flyby, and correcting for it would require autonomous navigation. Orbit determination errors dominate over maneuver execution errors, and the final delivery accuracy attained is basically the orbit determination uncertainty before the final maneuver.
Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip
2015-01-01
In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5–50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments’ results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%. PMID:26258777
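The per-instrument correction functions themselves are not reproduced here; the sketch below only illustrates the general idea of fitting a periodic (cyclic) error term to baseline residuals, with made-up residuals and an assumed 10 m modulation length.

```python
import numpy as np

# Hypothetical residuals (reference minus measured, mm) at known baseline distances (m).
d = np.linspace(5.0, 50.0, 46)
rng = np.random.default_rng(3)
resid = 0.4 * np.sin(2 * np.pi * d / 10.0 + 0.7) + 0.05 * rng.standard_normal(d.size)

# Fit a + b*sin(2*pi*d/U) + c*cos(2*pi*d/U), assuming a 10 m unit length U of the
# phase-measurement cycle (instrument-dependent; it could be identified, e.g., from an FFT of resid).
U = 10.0
X = np.column_stack([np.ones_like(d), np.sin(2 * np.pi * d / U), np.cos(2 * np.pi * d / U)])
coef, *_ = np.linalg.lstsq(X, resid, rcond=None)

def correction_mm(distance_m: float) -> float:
    """Cyclic-error correction to add to a measured distance (in mm)."""
    return coef @ [1.0, np.sin(2 * np.pi * distance_m / U), np.cos(2 * np.pi * distance_m / U)]

print(np.std(resid), np.std(resid - X @ coef))  # residual spread before and after correction
```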
Turbulent CO2 Flux Measurements by Lidar: Length Scales, Results and Comparison with In-Situ Sensors
NASA Technical Reports Server (NTRS)
Gilbert, Fabien; Koch, Grady J.; Beyon, Jeffrey Y.; Hilton, Timothy W.; Davis, Kenneth J.; Andrews, Arlyn; Ismail, Syed; Singh, Upendra N.
2009-01-01
The vertical CO2 flux in the atmospheric boundary layer (ABL) is investigated with a Doppler differential absorption lidar (DIAL). The instrument was operated next to the WLEF instrumented tall tower in Park Falls, Wisconsin during three days and nights in June 2007. Profiles of turbulent CO2 mixing ratio and vertical velocity fluctuations are measured by in-situ sensors and Doppler DIAL. Time and space scales of turbulence are precisely defined in the ABL. The eddy-covariance method is applied to calculate turbulent CO2 flux both by lidar and in-situ sensors. We show preliminary mean lidar CO2 flux measurements in the ABL with a time and space resolution of 6 h and 1500 m respectively. The flux instrumental errors decrease linearly with the standard deviation of the CO2 data, as expected. Although turbulent fluctuations of CO2 are negligible with respect to the mean (0.1 %), we show that the eddy-covariance method can provide 2-h, 150-m range resolved CO2 flux estimates as long as the CO2 mixing ratio instrumental error is no greater than 10 ppm and the vertical velocity error is lower than the natural fluctuations over a time resolution of 10 s.
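A minimal sketch of the eddy-covariance calculation referred to above, applicable to either in-situ or lidar series; the series length, resolution, and values are placeholders.

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Kinematic eddy-covariance flux <w'c'> from vertical wind w (m/s) and CO2 mixing ratio c (ppm).

    Returns the covariance of the fluctuations about the averaging-period means,
    in ppm m/s (conversion to mass-flux units is left out of this sketch).
    """
    w = np.asarray(w, float)
    c = np.asarray(c, float)
    return np.mean((w - w.mean()) * (c - c.mean()))

# Placeholder 2-h series at 10 s resolution (720 samples), loosely echoing the averaging discussed above.
rng = np.random.default_rng(4)
w = 0.4 * rng.standard_normal(720)
c = 385.0 - 2.0 * w + 0.5 * rng.standard_normal(720)   # anticorrelated -> downward (negative) flux
print(eddy_covariance_flux(w, c))
```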
Floré, Katelijne M J; Fiers, Tom; Delanghe, Joris R
2008-01-01
In recent years, a number of point-of-care testing (POCT) glucometers have been introduced on the market. We investigated the analytical variability (lot-to-lot variation, calibration error, inter-instrument and inter-operator variability) of glucose POCT systems in a university hospital environment and compared these results with the analytical needs required for tight glucose monitoring. The reference hexokinase method was compared to different POCT systems based on glucose oxidase (blood gas instruments) or glucose dehydrogenase (handheld glucometers). Based upon daily internal quality control data, total errors were calculated for the various glucose methods and the analytical variability of the glucometers was estimated. The total error of the glucometers exceeded by far the desirable analytical specifications (based on a biological variability model). Lot-to-lot variation, inter-instrument variation, and inter-operator variability contributed approximately equally to the total variance. Because the distribution of hematocrit values in a hospital environment is broad, converting blood glucose into plasma values using a fixed factor further increases the variance. The percentage of outliers exceeded the ISO 15197 criteria over a broad glucose concentration range. The total analytical variation of handheld glucometers is larger than expected. Clinicians should be aware that the variability of glucose measurements obtained by blood gas instruments is lower than that of results obtained with handheld glucometers on capillary blood.
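For orientation, a common total-error formulation applied to internal QC data; the coverage factor and the example bias/CV values are assumptions, not figures from this study.

```python
def total_error_percent(bias_percent: float, cv_percent: float, z: float = 1.96) -> float:
    """Total analytical error as |bias| + z * CV, a common QC-based estimate."""
    return abs(bias_percent) + z * cv_percent

# Placeholder daily-QC figures for a handheld glucometer vs. a blood gas instrument.
print(total_error_percent(bias_percent=3.0, cv_percent=4.5))   # handheld: ~11.8 %
print(total_error_percent(bias_percent=1.0, cv_percent=2.0))   # blood gas: ~4.9 %
```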
Correlation methods in optical metrology with state-of-the-art x-ray mirrors
NASA Astrophysics Data System (ADS)
Yashchuk, Valeriy V.; Centers, Gary; Gevorkyan, Gevork S.; Lacey, Ian; Smith, Brian V.
2018-01-01
The development of fully coherent free-electron lasers and diffraction-limited storage-ring x-ray sources has brought into focus the need for higher-performing x-ray optics with unprecedented tolerances for surface slope and height errors and roughness. For example, the proposed beamlines for the future upgraded Advanced Light Source, ALS-U, require optical elements characterized by a residual slope error of <100 nrad (root-mean-square) and a height error of <1-2 nm (peak-to-valley). These are for optics with a length of up to one meter. However, the current performance of x-ray optical fabrication and metrology generally falls short of these requirements. The major limitation comes from the lack of reliable and efficient surface metrology with the required accuracy and a reasonably high measurement rate, suitable for integration into modern deterministic surface figuring processes. The major problems of current surface metrology relate to inherent instrumental temporal drifts, systematic errors, and/or an unacceptably high cost, as in the case of interferometry with computer-generated holograms as a reference. In this paper, we discuss the experimental methods and approaches based on correlation analysis for the acquisition and processing of metrology data developed at the ALS X-Ray Optical Laboratory (XROL). Using an example of surface topography measurements of a state-of-the-art x-ray mirror performed at the XROL, we demonstrate the efficiency of combining the developed experimental correlation methods with the advanced optimal scanning strategy (AOSS) technique. This allows a significant improvement in the accuracy and capacity of the measurements via suppression of the instrumental low-frequency noise, temporal drift, and systematic error in a single measurement run. Practically speaking, implementation of the AOSS technique increases the measurement accuracy, as well as the capacity of ex situ metrology, by a factor of about four. The developed method is general and applicable to a broad spectrum of high-accuracy measurements.
NASA Astrophysics Data System (ADS)
Wu, Hao; Wang, Xianhua; Ye, Hanhan; Jiang, Yun; Duan, Fenghua
2018-01-01
We developed an algorithm (named GMI_XCO2) to retrieve the global column-averaged dry air mole fraction of atmospheric carbon dioxide (XCO2) for the greenhouse-gases monitor instrument (GMI) and the directional polarized camera (DPC) on the GF-5 satellite. This algorithm is designed to work in cloudless atmospheric conditions with aerosol optical thickness (AOT) < 0.3. To quantify the uncertainty of the retrieved XCO2 when aerosols and cirrus clouds are present in retrievals from the GMI short-wave infrared (SWIR) data, we analyzed the error rates caused by six types of aerosols and by cirrus clouds. The results indicated that, in the AOT range of 0.05 to 0.3 (550 nm), the aerosol uncertainties could lead to errors of -0.27% to 0.59%, -0.32% to 1.43%, -0.10% to 0.49%, -0.12% to 1.17%, -0.35% to 0.49%, and -0.02% to -0.24% for rural, dust, clean continental, maritime, urban, and soot aerosols, respectively. The retrieval results showed a large error due to cirrus clouds. In the cirrus optical thickness range of 0.05 to 0.8 (500 nm), the largest underestimation is 26.25% when the surface albedo is 0.05, and the largest overestimation is 8.1% when the surface albedo is 0.65. The retrieval results for GMI simulation data demonstrated that the accuracy of our algorithm is within 4 ppm (~1%) when using the simultaneous measurement of aerosols and clouds from the DPC. Moreover, our algorithm is faster than full-physics (FP) methods. We verified our algorithm with Greenhouse-gases Observing Satellite (GOSAT) data over the Beijing area during 2016. The retrieval errors of most observations are within 4 ppm except for summer. Compared with the GOSAT results, the correlation coefficient is 0.55 for the whole-year data, increasing to 0.62 after excluding the summer data.
Exploring Situational Awareness in Diagnostic Errors in Primary Care
Singh, Hardeep; Giardina, Traber Davis; Petersen, Laura A.; Smith, Michael; Wilson, Lindsey; Dismukes, Key; Bhagwath, Gayathri; Thomas, Eric J.
2013-01-01
Objective Diagnostic errors in primary care are harmful but poorly studied. To facilitate understanding of diagnostic errors in real-world primary care settings using electronic health records (EHRs), this study explored the use of the Situational Awareness (SA) framework from aviation human factors research. Methods A mixed-methods study was conducted involving reviews of EHR data followed by semi-structured interviews of selected providers from two institutions in the US. The study population included 380 consecutive patients with colorectal and lung cancers diagnosed between February 2008 and January 2009. Using a pre-tested data collection instrument, trained physicians identified diagnostic errors, defined as lack of timely action on one or more established indications for diagnostic work-up for lung and colorectal cancers. Twenty-six providers involved in cases with and without errors were interviewed. Interviews probed for providers' lack of SA and how this may have influenced the diagnostic process. Results Of 254 cases meeting inclusion criteria, errors were found in 30 (32.6%) of 92 lung cancer cases and 56 (33.5%) of 167 colorectal cancer cases. Analysis of interviews related to error cases revealed evidence of lack of one of four levels of SA applicable to primary care practice: information perception, information comprehension, forecasting future events, and choosing appropriate action based on the first three levels. In cases without error, the application of the SA framework provided insight into processes involved in attention management. Conclusions A framework of SA can help analyze and understand diagnostic errors in primary care settings that use EHRs. PMID:21890757
Measurement of solar radius changes
NASA Technical Reports Server (NTRS)
Labonte, B. J.; Howard, R.
1981-01-01
Results of daily photometric measurements of the solar radius from Mt. Wilson over the past seven years are reported. Reduction of the full-disk magnetograms yields a formal error of 0.1 arcsec in the boustrophedonic scans in the 5250.2 A FeI line. Each observation comprises 150 scan lines; 1,412 observations were made from 1974 to 1981. Measurement procedures, determination of the scattered light of the optics and the atmosphere, and error calculations are described, noting that days of poor atmospheric visibility are omitted from the data. The horizontal diameter of the sun remains visually fixed while the vertical component changes due to atmospheric refraction; errors due to thermal effects, telescope aberrations, and instrument calibration are discussed, and the results, within instrument accuracy, indicate no change in the solar radius over the last seven years.
Camps, Adriano; Park, Hyuk; Sekulic, Ivan; Rius, Juan Manuel
2017-07-06
The GEROS-ISS (GNSS rEflectometry, Radio Occultation and Scatterometry onboard International Space Station) is an innovative experiment for climate research, proposed in 2011 within a call of the European Space Agency (ESA). This proposal was the only one selected for further studies by ESA out of the ~25 that were submitted. In this work, the instrument performance for the near-nadir altimetry (GNSS-R) mode is assessed, including the effects of multi-path in the ISS structure, the electromagnetic bias, and the orbital height decay. In the absence of ionospheric scintillations, the altimetry rms error is <50 cm for a swath of <~250 km and for U10 < 10 m/s. If the transmitted power is 3 dB higher (likely at the beginning of life of the GNSS spacecraft), the mission requirement (rms error <50 cm) is met for all ISS heights and for U10 up to 15 m/s. However, around 1.5 GHz, the ionosphere can induce significant fading, from 2 to >20 dB in equatorial regions, mainly after sunset, which will seriously degrade the altimetry and scatterometry performances of the instrument.
Error-in-variables models in calibration
NASA Astrophysics Data System (ADS)
Lira, I.; Grientschnig, D.
2017-12-01
In many calibration operations, the stimuli applied to the measuring system or instrument under test are derived from measurement standards whose values may be considered to be perfectly known. In that case, it is assumed that calibration uncertainty arises solely from inexact measurement of the responses, from imperfect control of the calibration process and from the possible inaccuracy of the calibration model. However, the premise that the stimuli are completely known is never strictly fulfilled and in some instances it may be grossly inadequate. Then, error-in-variables (EIV) regression models have to be employed. In metrology, these models have been approached mostly from the frequentist perspective. In contrast, not much guidance is available on their Bayesian analysis. In this paper, we first present a brief summary of the conventional statistical techniques that have been developed to deal with EIV models in calibration. We then proceed to discuss the alternative Bayesian framework under some simplifying assumptions. Through a detailed example about the calibration of an instrument for measuring flow rates, we provide advice on how the user of the calibration function should employ the latter framework for inferring the stimulus acting on the calibrated device when, in use, a certain response is measured.
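A small numerical illustration of why treating the stimuli as exact biases an ordinary least-squares calibration, contrasted with a simple errors-in-variables (Deming-type) fit; the data and the assumed error-variance ratio are illustrative, not from the flow-rate example.

```python
import numpy as np

rng = np.random.default_rng(5)
true_x = np.linspace(0.0, 10.0, 200)          # true stimuli (e.g. reference flow rates)
true_y = 1.5 * true_x + 2.0                   # true instrument response
x_obs = true_x + rng.normal(scale=0.8, size=true_x.size)   # stimuli measured with error
y_obs = true_y + rng.normal(scale=0.8, size=true_x.size)   # responses measured with error

# Ordinary least squares treats x_obs as exact -> slope attenuated toward zero.
slope_ols = np.polyfit(x_obs, y_obs, 1)[0]

# Deming regression with an assumed error-variance ratio delta = var_y / var_x = 1.
sxx, syy = np.var(x_obs, ddof=1), np.var(y_obs, ddof=1)
sxy = np.cov(x_obs, y_obs, ddof=1)[0, 1]
delta = 1.0
slope_eiv = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy**2)) / (2 * sxy)

print(f"OLS slope  {slope_ols:.3f}  (biased low)")
print(f"EIV slope  {slope_eiv:.3f}  (closer to the true 1.5)")
```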
The Use of Analog Track Angle Error Display for Improving Simulated GPS Approach Performance
DOT National Transportation Integrated Search
1995-08-01
The effect of adding track angle error (TAE) information to general aviation aircraft cockpit displays used for GPS : nonprecision instrument approaches was studied experimentally. Six pilots flew 120 approaches in a Frasca 242 light : twin aircraft ...
Interdisciplinary Coordination Reviews: A Process to Reduce Construction Costs.
ERIC Educational Resources Information Center
Fewell, Dennis A.
1998-01-01
Interdisciplinary Coordination design review is instrumental in detecting coordination errors and omissions in construction documents. Cleansing construction documents of interdisciplinary coordination errors reduces time extensions, the largest source of change orders, and limits exposure to liability claims. Improving the quality of design…
Video camera system for locating bullet holes in targets at a ballistics tunnel
NASA Technical Reports Server (NTRS)
Burner, A. W.; Rummler, D. R.; Goad, W. K.
1990-01-01
A system consisting of a single charge-coupled device (CCD) video camera, a computer-controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50-meter rifle range which is being constructed to support development of wind-resistant, ultra-match ammunition. The system was designed to take data rapidly (10 s between shots) and automatically with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long-term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target, and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.
Artificial Neural Network and application in calibration transfer of AOTF-based NIR spectrometer
NASA Astrophysics Data System (ADS)
Wang, Wenbo; Jiang, Chengzhi; Xu, Kexin; Wang, Bin
2002-09-01
Chemometrics is widely applied to develop models for the quantitative prediction of unknown samples in near-infrared (NIR) spectroscopy. However, calibrated models generally fail when new instruments are introduced or instrument parts are replaced. Therefore, calibration transfer becomes necessary to avoid costly, time-consuming recalibration of the models. Piecewise Direct Standardization (PDS) has been proven to be a reference method for standardization. In this paper, an Artificial Neural Network (ANN) is employed as an alternative for transferring spectra between instruments. Two acousto-optic tunable filter NIR spectrometers are employed in the experiment. Spectra of glucose solutions are collected on the spectrometers in transflectance mode. A two-layer backpropagation network is employed to approximate the function between instruments piecewise. The standardization subset is selected by the Kennard-Stone (K-S) algorithm in the space of the first two Principal Component Analysis (PCA) scores of the spectral matrix. In the current experiment, obvious nonlinearity is noted between the instruments, and attempts are made to correct this nonlinear effect. Prediction results before and after successful calibration transfer are compared. Successful transfer can be achieved by adapting the window size and training parameters. The final results reveal that the ANN is effective in correcting the nonlinear instrumental differences, and only a 1.5-2 times larger prediction error is expected after successful transfer.
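A toy sketch of window-wise (piecewise) spectral transfer with small networks, in the spirit described above; the window size, network architecture, and synthetic spectra are assumptions rather than the paper's AOTF setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_piecewise_transfer(slave_spectra, master_spectra, half_window=2):
    """Train one small network per wavelength channel, mapping a window of slave
    channels to the corresponding master channel (an ANN analogue of PDS).

    slave_spectra, master_spectra: (n_samples, n_channels) arrays of the
    standardization subset measured on both instruments. Values are placeholders.
    """
    n_channels = master_spectra.shape[1]
    models = []
    for i in range(n_channels):
        lo, hi = max(0, i - half_window), min(n_channels, i + half_window + 1)
        net = MLPRegressor(hidden_layer_sizes=(5,), max_iter=2000, random_state=0)
        net.fit(slave_spectra[:, lo:hi], master_spectra[:, i])
        models.append((lo, hi, net))
    return models

def transfer(slave_spectra, models):
    cols = [net.predict(slave_spectra[:, lo:hi]) for lo, hi, net in models]
    return np.column_stack(cols)

# Tiny synthetic demo: the slave differs from the master by a smooth nonlinear distortion.
rng = np.random.default_rng(6)
master = rng.normal(size=(30, 20))
slave = 0.9 * master + 0.05 * master**2 + 0.01 * rng.normal(size=master.shape)
models = train_piecewise_transfer(slave[:20], master[:20])
print(np.abs(transfer(slave[20:], models) - master[20:]).mean())
```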
Wachs, Juan P; Frenkel, Boaz; Dori, Dov
2014-11-01
Errors in the delivery of medical care are the principal cause of inpatient mortality and morbidity, accounting for around 98,000 deaths in the United States of America (USA) annually. Ineffective team communication, especially in the operation room (OR), is a major root of these errors. This miscommunication can be reduced by analyzing and constructing a conceptual model of communication and miscommunication in the OR. We introduce the principles underlying Object-Process Methodology (OPM)-based modeling of the intricate interactions between the surgeon and the surgical technician while handling surgical instruments in the OR. This model is a software- and hardware-independent description of the agents engaged in communication events, their physical activities, and their interactions. The model enables assessing whether the task-related objectives of the surgical procedure were achieved and completed successfully and what errors can occur during the communication. The facts used to construct the model were gathered from observations of various types of operations miscommunications in the operating room and its outcomes. The model takes advantage of the compact ontology of OPM, which is comprised of stateful objects - things that exist physically or informatically, and processes - things that transform objects by creating them, consuming them or changing their state. The modeled communication modalities are verbal and non-verbal, and errors are modeled as processes that deviate from the "sunny day" scenario. Using OPM refinement mechanism of in-zooming, key processes are drilled into and elaborated, along with the objects that are required as agents or instruments, or objects that these processes transform. The model was developed through an iterative process of observation, modeling, group discussions, and simplification. The model faithfully represents the processes related to tool handling that take place in an OR during an operation. The specification is at various levels of detail, each level is depicted in a separate diagram, and all the diagrams are "aware" of each other as part of the whole model. Providing ontology of verbal and non-verbal modalities of communication in the OR, the resulting conceptual model is a solid basis for analyzing and understanding the source of the large variety of errors occurring in the course of an operation, providing an opportunity to decrease the quantity and severity of mistakes related to the use and misuse of surgical instrumentations. Since the model is event driven, rather than person driven, the focus is on the factors causing the errors, rather than the specific person. This approach advocates searching for technological solutions to alleviate tool-related errors rather than finger-pointing. Concretely, the model was validated through a structured questionnaire and it was found that surgeons agreed that the conceptual model was flexible (3.8 of 5, std=0.69), accurate, and it generalizable (3.7 of 5, std=0.37 and 3.7 of 5, std=0.85, respectively). The detailed conceptual model of the tools handling subsystem of the operation performed in an OR focuses on the details of the communication and the interactions taking place between the surgeon and the surgical technician during an operation, with the objective of pinpointing the exact circumstances in which errors can happen. 
Exact and concise specification of the communication events in general and the surgical instrument requests in particular is a prerequisite for a methodical analysis of the various modes of errors and the circumstances under which they occur. This has significant potential value in both reduction in tool-handling-related errors during an operation and providing a solid formal basis for designing a cybernetic agent which can replace a surgical technician in routine tool handling activities during an operation, freeing the technician to focus on quality assurance, monitoring and control of the cybernetic agent activities. This is a critical step in designing the next generation of cybernetic OR assistants. Copyright © 2014 Elsevier B.V. All rights reserved.
Nanopore sequencing in microgravity
McIntyre, Alexa B R; Rizzardi, Lindsay; Yu, Angela M; Alexander, Noah; Rosen, Gail L; Botkin, Douglas J; Stahl, Sarah E; John, Kristen K; Castro-Wallace, Sarah L; McGrath, Ken; Burton, Aaron S; Feinberg, Andrew P; Mason, Christopher E
2016-01-01
Rapid DNA sequencing and analysis has been a long-sought goal in remote research and point-of-care medicine. In microgravity, DNA sequencing can facilitate novel astrobiological research and close monitoring of crew health, but spaceflight places stringent restrictions on the mass and volume of instruments, crew operation time, and instrument functionality. The recent emergence of portable, nanopore-based tools with streamlined sample preparation protocols finally enables DNA sequencing on missions in microgravity. As a first step toward sequencing in space and aboard the International Space Station (ISS), we tested the Oxford Nanopore Technologies MinION during a parabolic flight to understand the effects of variable gravity on the instrument and data. In a successful proof-of-principle experiment, we found that the instrument generated DNA reads over the course of the flight, including the first ever sequenced in microgravity, and additional reads measured after the flight concluded its parabolas. Here we detail modifications to the sample-loading procedures to facilitate nanopore sequencing aboard the ISS and in other microgravity environments. We also evaluate existing analysis methods and outline two new approaches, the first based on a wave-fingerprint method and the second on entropy signal mapping. Computationally light analysis methods offer the potential for in situ species identification, but are limited by the error profiles (stays, skips, and mismatches) of older nanopore data. Higher accuracies attainable with modified sample processing methods and the latest version of flow cells will further enable the use of nanopore sequencers for diagnostics and research in space. PMID:28725742
Patel, Dishant; Bashetty, Kusum; Srirekha, A.; Archana, S.; Savitha, B.; Vijay, R.
2016-01-01
Aim: The aim of this study was to evaluate the influence of a manual versus a mechanical glide path (GP) on the surface changes of two different nickel-titanium rotary instruments used during root canal therapy in moderately curved root canals. Materials and Methods: Sixty systemically healthy controls were selected for the study. Controls were divided randomly into four groups: Group 1: manual GP followed by RaCe rotary instruments; Group 2: manual GP followed by HyFlex rotary instruments; Group 3: mechanical GP followed by RaCe rotary instruments; Group 4: mechanical GP followed by HyFlex rotary instruments. After access opening, the GP was prepared and the rotary instruments were used according to the manufacturer's instructions. All instruments were evaluated for defects under a scanning electron microscope (SEM) before use and after a single use. The files were scored at the apical and middle thirds. Statistical Analysis Used: The chi-squared test was used. Results: The results showed no statistically significant difference between any of the groups. Irrespective of the GP and rotary files used, more defects were present in the apical third than in the middle third of the rotary instruments. Conclusion: Within the limitations of this study, it can be concluded that there was no effect of manual or mechanical GP on the surface defects of the subsequently used rotary file system. PMID:27994317
NASA Astrophysics Data System (ADS)
Hicks, S. P.; Hill, P.; Goessen, S.; Rietbrock, A.; Garth, T.
2016-12-01
The self-noise level of a broadband seismometer is a commonly used parameter for evaluating instrument performance. There are several independent studies of various instruments' self-noise (e.g. Ringler & Hutt, 2010; Tasič & Runovc, 2012). However, due to ongoing developments in instrument design (i.e. mechanics and electronics), it is essential to regularly assess any changes in self-noise, which could indicate improvements or deterioration in instrument design and performance over time. We present new self-noise estimates for a range of Güralp broadband seismometers (3T, 3ESPC, 40T, 6T). We use the three-channel coherence analysis of Sleeman et al. (2006) to measure the self-noise of these instruments. Based on the coherency analysis, we also perform a mathematical rotation of the measured waveforms to account for any relative sensor misalignment errors, which can cause artefacts of amplified self-noise around the microseismic peak (Tasič & Runovc, 2012). The instruments were tested for a period of several months at a seismic vault located at the Eskdalemuir array in southern Scotland. We discuss the implications of these self-noise estimates within the framework of the ambient noise level across the mainland United Kingdom. Using attenuation relationships derived for the United Kingdom, we investigate the detection capability thresholds of the UK National Seismic Network within the framework of a Traffic Light System (TLS) that has been proposed for monitoring induced seismic events due to shale gas extraction.
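A minimal sketch of the three-channel coherence (Sleeman-type) self-noise estimate mentioned above; instrument-response correction and the misalignment rotation are omitted, and the synthetic signals are placeholders.

```python
import numpy as np
from scipy.signal import csd

def three_channel_self_noise(x1, x2, x3, fs, nperseg=4096):
    """Sleeman-style three-channel self-noise PSD estimate for sensor 1.

    x1, x2, x3: simultaneous recordings of the same ground motion by three
    collocated sensors (assumed here to be already corrected to a common response).
    Returns frequencies and the estimated noise power spectral density of sensor 1.
    """
    f, p11 = csd(x1, x1, fs=fs, nperseg=nperseg)
    _, p13 = csd(x1, x3, fs=fs, nperseg=nperseg)
    _, p21 = csd(x2, x1, fs=fs, nperseg=nperseg)
    _, p23 = csd(x2, x3, fs=fs, nperseg=nperseg)
    n11 = p11 - p13 * p21 / p23
    return f, np.real(n11)

# Synthetic demo: a common "ground" signal plus independent instrument noise on each channel.
rng = np.random.default_rng(7)
fs, n = 100.0, 2**18
ground = np.cumsum(rng.standard_normal(n)) * 1e-3          # placeholder common signal
noise = [1e-2 * rng.standard_normal(n) for _ in range(3)]  # independent self-noise
f, n11 = three_channel_self_noise(ground + noise[0], ground + noise[1], ground + noise[2], fs)
print(f[1:5], n11[1:5])
```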
Active Optics: stress polishing of toric mirrors for the VLT SPHERE adaptive optics system.
Hugot, Emmanuel; Ferrari, Marc; El Hadi, Kacem; Vola, Pascal; Gimenez, Jean Luc; Lemaitre, Gérard R; Rabou, Patrick; Dohlen, Kjetil; Puget, Pascal; Beuzit, Jean Luc; Hubin, Norbert
2009-05-20
The manufacturing of toric mirrors for the Very Large Telescope-Spectro-Polarimetric High-Contrast Exoplanet Research instrument (SPHERE) is based on Active Optics and stress polishing. This figuring technique allows minimizing mid and high spatial frequency errors on an aspherical surface by using spherical polishing with full size tools. In order to reach the tight precision required, the manufacturing error budget is described to optimize each parameter. Analytical calculations based on elasticity theory and finite element analysis lead to the mechanical design of the Zerodur blank to be warped during the stress polishing phase. Results on the larger (366 mm diameter) toric mirror are evaluated by interferometry. We obtain, as expected, a toric surface within specification at low, middle, and high spatial frequencies ranges.
Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models
Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby
2017-01-01
Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
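The sketch below illustrates the general sub-model idea described above using scikit-learn's PLSRegression: train models on restricted composition ranges plus a full-range model, then blend their predictions. The split value, blend width, and linear blending weights are assumptions for demonstration only and are not the ChemCam calibration.

```python
# Illustrative sub-model blending with PLS regression. The composition split,
# blend width, and number of components are placeholders, not the flight calibration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def fit_submodels(X, y, split=20.0, n_components=8):
    low, high = y <= split, y >= split
    return {
        "full": PLSRegression(n_components=n_components).fit(X, y),
        "low": PLSRegression(n_components=n_components).fit(X[low], y[low]),
        "high": PLSRegression(n_components=n_components).fit(X[high], y[high]),
    }

def blended_predict(models, X, split=20.0, width=5.0):
    y_full = models["full"].predict(X).ravel()   # used only to pick the blend weight
    y_low = models["low"].predict(X).ravel()
    y_high = models["high"].predict(X).ravel()
    # Weight ramps linearly from all-"low" to all-"high" across the blend region.
    w = np.clip((y_full - (split - width)) / (2 * width), 0.0, 1.0)
    return (1 - w) * y_low + w * y_high
```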
OMPS Limb Profiler Instrument Performance Assessment
NASA Technical Reports Server (NTRS)
Jaross, Glen R.; Bhartia, Pawan K.; Chen, Grace; Kowitt, Mark; Haken, Michael; Chen, Zhong; Xu, Philippe; Warner, Jeremy; Kelly, Thomas
2014-01-01
Following the successful launch of the Ozone Mapping and Profiler Suite (OMPS) aboard the Suomi National Polar-orbiting Partnership (SNPP) spacecraft, the NASA OMPS Limb team began an evaluation of instrument and data product performance. The focus of this paper is the instrument performance in relation to the original design criteria. Performance that is closer to expectations increases the likelihood that limb scatter measurements by SNPP OMPS and successor instruments can form the basis for accurate long-term monitoring of ozone vertical profiles. The team finds that the Limb instrument operates mostly as designed and basic performance meets or exceeds the original design criteria. Internally scattered stray light and sensor pointing knowledge are two design challenges with the potential to seriously degrade performance. A thorough prelaunch characterization of stray light supports software corrections that are accurate to within 1% in radiances up to 60 km for the wavelengths used in deriving ozone. Residual stray light errors at 1000 nm, which is useful in retrievals of stratospheric aerosols, currently exceed 10%. Height registration errors in the range of 1 km to 2 km have been observed that cannot be fully explained by known error sources. An unexpected thermal sensitivity of the sensor also causes wavelengths and pointing to shift each orbit in the northern hemisphere. Spectral shifts of as much as 0.5 nm in the ultraviolet and 5 nm in the visible, and up to 0.3 km shifts in registered height, must be corrected in ground processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, N. J.; Marriage, T. A.; Appel, J. W.
2016-02-20
Variable-delay Polarization Modulators (VPMs) are currently being implemented in experiments designed to measure the polarization of the cosmic microwave background on large angular scales because of their capability for providing rapid, front-end polarization modulation and control over systematic errors. Despite the advantages provided by the VPM, it is important to identify and mitigate any time-varying effects that leak into the synchronously modulated component of the signal. In this paper, the effect of emission from a 300 K VPM on the system performance is considered and addressed. Though instrument design can greatly reduce the influence of modulated VPM emission, some residual modulated signal is expected. VPM emission is treated in the presence of rotational misalignments and temperature variation. Simulations of time-ordered data are used to evaluate the effect of these residual errors on the power spectrum. The analysis and modeling in this paper guides experimentalists on the critical aspects of observations using VPMs as front-end modulators. By implementing the characterizations and controls as described, front-end VPM modulation can be very powerful for mitigating 1/f noise in large angular scale polarimetric surveys. None of the systematic errors studied fundamentally limit the detection and characterization of B-modes on large scales for a tensor-to-scalar ratio of r = 0.01. Indeed, r < 0.01 is achievable with commensurately improved characterizations and controls.
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.; Einaudi, Franco (Technical Monitor)
2001-01-01
I will discuss the need for accurate rainfall observations to improve our ability to model the earth's climate and improve short-range weather forecasts. I will give an overview of the recent progress in the use of rainfall data provided by TRMM and other microwave instruments in data assimilation to improve global analyses and diagnose state-dependent systematic errors in physical parameterizations. I will outline the current and future research strategies in preparation for the Global Precipitation Mission.
Analysis of the ITER low field side reflectometer transmission line system.
Hanson, G R; Wilgen, J B; Bigelow, T S; Diem, S J; Biewer, T M
2010-10-01
A critical issue in the design of the ITER low field side reflectometer is the transmission line (TL) system. A TL connects each launcher to a diagnostic instrument. Each TL will typically consist of ∼42 m of corrugated waveguide and up to ten miter bends. Important issues for the performance of the TL system are mode conversion and reflections. Minimizing these effects is critical to minimizing standing waves and phase errors. The performance of the TL system is analyzed and recommendations are given.
Study of an instrument for sensing errors in a telescope wavefront
NASA Technical Reports Server (NTRS)
Golden, L. J.; Shack, R. V.; Slater, D. N.
1973-01-01
Partial results of theoretical and experimental investigations of different focal plane sensor configurations for determining the error in a telescope wavefront are presented. Coarse-range and fine-range sensors are used in the experiments. The design of a wavefront error simulator is presented along with the Hartmann test, the shearing polarization interferometer, the Zernike test, and the Zernike polarization test.
Chen, Xin-Lin; Zhong, Liang-Huan; Wen, Yi; Liu, Tian-Wen; Li, Xiao-Ying; Hou, Zheng-Kun; Hu, Yue; Mo, Chuan-Wei; Liu, Feng-Bin
2017-09-15
This review aims to critically appraise and compare the measurement properties of inflammatory bowel disease (IBD)-specific health-related quality of life instruments. Medline, EMBASE and ISI Web of Knowledge were searched from their inception to May 2016. IBD-specific instruments for patients with Crohn's disease, ulcerative colitis or IBD were included. The basic characteristics and domains of the instruments were collected. The methodological quality of measurement properties and the measurement properties of the instruments were assessed. Fifteen IBD-specific instruments were included: twelve for adult IBD patients and three for paediatric IBD patients. All of the instruments were developed in North American and European countries. The following common domains were identified: IBD-related symptoms and the physical, emotional and social domains. The methodological quality was satisfactory for content validity; fair in internal consistency, reliability, structural validity, hypotheses testing and criterion validity; and poor in measurement error, cross-cultural validity and responsiveness. For adult IBD patients, the IBDQ-32 and its short version (SIBDQ) had good measurement properties and were the most widely used worldwide. For paediatric IBD patients, the IMPACT-III had good measurement properties and had more translated versions. The methodological quality of most measurement properties should be improved, especially measurement error, cross-cultural validity and responsiveness. The IBDQ-32 was the most widely used instrument with good reliability and validity, followed by the SIBDQ and IMPACT-III. Further validation studies are necessary to support the use of other instruments.
Sampling errors for a nadir viewing instrument on the International Space Station
NASA Astrophysics Data System (ADS)
Berger, H. I.; Pincus, R.; Evans, F.; Santek, D.; Ackerman, S.; Ackerman, S.
2001-12-01
In an effort to improve the observational characterization of ice clouds in the earth's atmosphere, we are developing a sub-millimeter wavelength radiometer which we propose to fly on the International Space Station for two years. Our goal is to accurately measure the ice water path and mass-weighted particle size at the finest possible temporal and spatial resolution. The ISS orbit precesses, sampling through the diurnal cycle every 16 days, but technological constraints limit our instrument to a single pixel viewed near nadir. We discuss sampling errors associated with this instrument/platform configuration. We use as "truth" the ISCCP dataset of pixel-level cloud optical retrievals, which acts as a proxy for ice water path; this dataset is sampled according to the orbital characteristics of the space station, and the statistics computed from the sub-sampled population are compared with those from the full dataset. We explore the tradeoffs in average sampling error as a function of the averaging time and spatial scale, and explore the possibility of resolving the diurnal cycle.
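The toy sketch below illustrates the kind of sub-sampling comparison described above: statistics from a sparsely sampled, single-pixel-like time series are compared with statistics from the full "truth" series. The synthetic diurnal signal, noise level, and 92-minute revisit interval are placeholders, not the ISS/ISCCP configuration.

```python
# Compare statistics of a "truth" time series with an orbit-like subsample of it.
# Signal, noise, and revisit interval are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
minutes = np.arange(60 * 24 * 64)                       # 64 days at 1-minute resolution
truth = (1.0 + 0.4 * np.sin(2 * np.pi * minutes / (60 * 24))
         + 0.2 * rng.standard_normal(minutes.size))     # diurnal cycle plus noise

revisit = 92                                            # assumed minutes between overpasses
sample_idx = np.arange(0, minutes.size, revisit)        # single-pixel, near-nadir sampling

print("truth mean    :", truth.mean())
print("sampled mean  :", truth[sample_idx].mean())
print("sampling error:", truth[sample_idx].mean() - truth.mean())
```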
NASA Technical Reports Server (NTRS)
Jones, D. H.
1985-01-01
A new flexible model of pilot instrument scanning behavior is presented which assumes that the pilot uses a set of deterministic scanning patterns based on the pilot's perception of error in the state of the aircraft and on the pilot's knowledge of the interactive nature of the aircraft's systems. Statistical analyses revealed that a three-stage Markov process composed of the pilot's three predicted lookpoints (LP), occurring 1/30, 2/30, and 3/30 of a second prior to each LP, accurately modelled the scanning behavior of 14 commercial airline pilots while flying steep turn maneuvers in a Boeing 737 flight simulator. The modelled scanning data for each pilot were not statistically different from the observed scanning data in comparisons of mean dwell time, entropy, and entropy rate. These findings represent the first direct evidence that pilots are using deterministic scanning patterns during instrument flight. The results are interpreted as direct support for the error-dependent model, and suggestions are made for further research that could allow for identification of the specific scanning patterns suggested by the model.
NASA Astrophysics Data System (ADS)
Wilby, M. J.; Keller, C. U.; Haffert, S.; Korkiakoski, V.; Snik, F.; Pietrow, A. G. M.
2016-07-01
Non-Common Path Errors (NCPEs) are the dominant factor limiting the performance of current astronomical high-contrast imaging instruments. If uncorrected, the resulting quasi-static speckle noise floor limits coronagraph performance to a raw contrast of typically 10^-4, a value which does not improve with increasing integration time. The coronagraphic Modal Wavefront Sensor (cMWS) is a hybrid phase optic which uses holographic PSF copies to supply focal-plane wavefront sensing information directly from the science camera, whilst maintaining a bias-free coronagraphic PSF. This concept has already been successfully implemented on-sky at the William Herschel Telescope (WHT), La Palma, demonstrating both real-time wavefront sensing capability and successful extraction of slowly varying wavefront errors under a dominant and rapidly changing atmospheric speckle foreground. In this work we present an overview of the development of the cMWS and recent first light results obtained using the Leiden EXoplanet Instrument (LEXI), a high-contrast imager and high-dispersion spectrograph pathfinder instrument for the WHT.
Performance appraisal of VAS radiometry for GOES-4, -5 and -6
NASA Technical Reports Server (NTRS)
Chesters, D.; Robinson, W. D.
1983-01-01
The first three VISSR Atmospheric Sounders (VAS) were launched on GOES-4, -5, and -6 in 1980, 1981 and 1983. Postlaunch radiometric performance is assessed for noise, biases, registration and reliability, with special attention to calibration and problems in the data processing chain. The postlaunch performance of the VAS radiometer meets its prelaunch design specifications, particularly those related to image formation and noise reduction. The best instrument is carried on GOES-5, currently operational as GOES-EAST. Single sample noise is lower than expected, especially for the small longwave and large shortwave detectors. Detector-to-detector offsets are correctable to within the resolution limits of the instrument. Truncation, zero point and droop errors are insignificant. Absolute calibration errors, estimated from HIRS and from radiation transfer calculations, indicate moderate, but stable biases. Relative calibration errors from scanline to scanline are noticeable, but meet sounding requirements for temporally and spatially averaged sounding fields of view. The VAS instrument is a potentially useful radiometer for mesoscale sounding operations. Image quality is very good. Soundings derived from quality controlled data meet prelaunch requirements when calculated with noise and bias resistant algorithms.
Savage, Michael J.
2010-01-01
The possibility of reliable, reasonably accurate and relatively inexpensive estimates of sensible heat and latent energy fluxes was investigated using a commercial combination thin-film polymer capacitive relative humidity and adjacent temperature sensor instrument. Long-term and unattended water vapour pressure profile difference measurements using low-power combination instruments were compared with those from a cooled dewpoint mirror hygrometer, the latter often used with Bowen ratio energy balance (BREB) systems. An error analysis, based on instrument relative humidity and temperature errors, was applied for various capacitive humidity instrument models. The main disadvantage of a combination capacitive humidity instrument is that two measurements, relative humidity and temperature, are required for estimation of water vapour pressure as opposed to one for a dewpoint hygrometer. In a laboratory experiment using an automated procedure, water vapour pressure differences generated using a reference dewpoint generator were measured using a commercial model (Dew-10) dewpoint hygrometer and a combination capacitive humidity instrument. The laboratory measurement comparisons showed that, potentially, an inexpensive model combination capacitive humidity instrument (CS500 or HMP50), or for improved results a slightly more expensive model (HMP35C or HMP45C), could substitute for the more expensive dewpoint hygrometer. In a field study, in a mesic grassland, the water vapour pressure measurement noise for the combination capacitive humidity instruments was greater than that for the dewpoint hygrometer. The average water vapour pressure profile difference measured using a HMP45C was highly correlated with that from a dewpoint hygrometer with a slope less than unity. Water vapour pressure measurements using the capacitive humidity instruments were not as accurate, compared to those obtained using a dewpoint hygrometer, but the resolution magnitudes for the profile difference measurements were less than the minimum of 0.01 kPa required for BREB measurements when averaged over 20 min. Furthermore, the longer-term capacitive humidity measurements are more reliable and not dependent on a sensor bias adjustment as is the case for the dewpoint hygrometer. A field comparison of CS500 and HMP45C profile water vapour pressure differences yielded a slope of close to unity. However, the CS500 exhibited more variable water vapour pressure measurements mainly due to its increased variation in temperature measurements compared to the HMP45C. Comparisons between 20-min BREB sensible heat fluxes obtained using a HMP45C and a dewpoint hygrometer yielded a slope of almost unity. BREB sensible heat fluxes measured using a HMP45C were reasonably well correlated with those obtained using a surface-layer scintillometer and eddy covariance (slope of 0.9629 and 0.9198 respectively). This reasonable agreement showed that a combination capacitive humidity instrument, with similar relative humidity (RH) and temperature error magnitudes of at most 2% RH and 0.3 °C respectively, and similar measurement time response, would be an adequate and less expensive substitute for a dewpoint hygrometer. Furthermore, a combination capacitive humidity instrument requires no servicing compared to a dewpoint hygrometer which requires a bias adjustment and mirror cleaning each week. 
These findings make unattended BREB measurements of sensible heat flux and evaporation cheaper and more reliable, with a system that is easier to assemble and service and that requires less instrument power. PMID:22163625
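The sketch below illustrates how the stated sensor error magnitudes (2% RH and 0.3 °C) can be propagated to first order into a water vapour pressure error. The Magnus form of the saturation vapour pressure is an assumed stand-in; the study does not state which formulation was used.

```python
# Hedged sketch: propagate RH and temperature errors into water vapour pressure
# e = RH * e_sat(T). The Magnus saturation vapour pressure formula is an assumption.
import numpy as np

def e_sat_kpa(t_c):
    """Saturation vapour pressure (kPa), Magnus form."""
    return 0.6108 * np.exp(17.27 * t_c / (t_c + 237.3))

def vapour_pressure_error(t_c, rh_frac, drh=0.02, dt=0.3):
    """First-order combined error in e for errors drh (fraction) and dt (deg C)."""
    es = e_sat_kpa(t_c)
    de_drh = es                                          # partial derivative wrt RH
    des_dt = es * 17.27 * 237.3 / (t_c + 237.3) ** 2     # d(e_sat)/dT
    de_dt = rh_frac * des_dt                             # partial derivative wrt T
    return np.sqrt((de_drh * drh) ** 2 + (de_dt * dt) ** 2)

print(vapour_pressure_error(t_c=20.0, rh_frac=0.6))      # roughly 0.05 kPa at 20 C, 60% RH
```

The result of a few hundredths of a kilopascal helps explain why profile difference resolution, rather than absolute accuracy, is the limiting requirement for BREB applications.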
Polarization errors associated with birefringent waveplates
NASA Technical Reports Server (NTRS)
West, Edward A.; Smith, Matthew H.
1995-01-01
Although zero-order quartz waveplates are widely used in instrumentation that needs good temperature and field-of-view characteristics, the residual errors associated with these devices can be very important in high-resolution polarimetry measurements. How the field-of-view characteristics are affected by retardation errors and the misalignment of optic axes in a double-crystal waveplate is discussed. The retardation measurements made on zero-order quartz and single-order 'achromatic' waveplates and how the misalignment errors affect those measurements are discussed.
Uncertainty estimates in broadband seismometer sensitivities using microseisms
Ringler, Adam T.; Storm, Tyler L.; Gee, Lind S.; Hutt, Charles R.; Wilson, David C.
2015-01-01
The midband sensitivity of a seismic instrument is one of the fundamental parameters used in published station metadata. Any errors in this value can compromise amplitude estimates in otherwise high-quality data. To estimate an upper bound in the uncertainty of the midband sensitivity for modern broadband instruments, we compare daily microseism (4- to 8-s period) amplitude ratios between the vertical components of colocated broadband sensors across the IRIS/USGS (network code IU) seismic network. We find that the mean of the 145,972 daily ratios used between 2002 and 2013 is 0.9895 with a standard deviation of 0.0231. This suggests that the ratio between instruments shows a small bias and considerable scatter. We also find that these ratios follow a standard normal distribution (R^2 = 0.95442), which suggests that the midband sensitivity of an instrument has an error of no greater than ±6 % with a 99 % confidence interval. This gives an upper bound on the precision to which we know the sensitivity of a fielded instrument.
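A minimal sketch of the daily amplitude-ratio computation described above is given below, assuming two colocated vertical channels with instrument responses already removed. The filter design and the RMS definition of the ratio are illustrative assumptions.

```python
# Illustrative daily microseism (4-8 s period) amplitude ratio between two
# colocated vertical channels; filter order and RMS-ratio definition are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def microseism_ratio(z1, z2, fs):
    """RMS amplitude ratio z1/z2 in the 0.125-0.25 Hz (4-8 s) band."""
    sos = butter(4, [1.0 / 8.0, 1.0 / 4.0], btype="bandpass", fs=fs, output="sos")
    b1 = sosfiltfilt(sos, z1)
    b2 = sosfiltfilt(sos, z2)
    return np.sqrt(np.mean(b1 ** 2)) / np.sqrt(np.mean(b2 ** 2))

# ratios = np.array([microseism_ratio(d1, d2, fs) for d1, d2 in daily_pairs])
# print(ratios.mean(), ratios.std())   # compare with the reported 0.9895 +/- 0.0231
```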
Tsukeoka, Tadashi; Tsuneizumi, Yoshikazu; Yoshino, Kensuke; Suzuki, Mashiko
2018-05-01
The aim of this study was to determine factors that contribute to bone cutting errors of conventional instrumentation for tibial resection in total knee arthroplasty (TKA) as assessed by an image-free navigation system. The hypothesis is that preoperative varus alignment is a significant contributory factor to tibial bone cutting errors. This was a prospective study of a consecutive series of 72 TKAs. The amount of the tibial first-cut errors with reference to the planned cutting plane in both coronal and sagittal planes was measured by an image-free computer navigation system. Multiple regression models were developed with the amount of tibial cutting error in the coronal and sagittal planes as dependent variables and sex, age, disease, height, body mass index, preoperative alignment, patellar height (Insall-Salvati ratio) and preoperative flexion angle as independent variables. Multiple regression analysis showed that sex (male gender) (R = 0.25, p = 0.047) and preoperative varus alignment (R = 0.42, p = 0.001) were positively associated with varus tibial cutting errors in the coronal plane. In the sagittal plane, none of the independent variables was significant. When performing TKA in varus deformity, careful confirmation of the bone cutting surface should be performed to avoid varus alignment. The results of this study suggest technical considerations that can help a surgeon achieve more accurate component placement. IV.
A variable acceleration calibration system
NASA Astrophysics Data System (ADS)
Johnson, Thomas H.
2011-12-01
A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems are designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three component calibration experiments with an approximate applied load error on the order of 1% of the full scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable acceleration based system are shown to be potentially equivalent to current methods. The production quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle. Long term research objectives include a demonstration of a six degree of freedom calibration, and a large capacity balance calibration.
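The sketch below shows a first-order propagation of measurement uncertainties into a centripetally applied load F = m ω² r, which illustrates why the angular velocity term tends to dominate the prediction error. All numerical values are placeholders, not the tested system's parameters.

```python
# First-order uncertainty propagation for a centripetal calibration load
# F = m * omega^2 * r. All values below are placeholders for illustration.
import numpy as np

def load_uncertainty(m, omega, r, dm, domega, dr):
    """Return the applied load F and its combined standard uncertainty."""
    F = m * omega ** 2 * r
    dF = np.sqrt((omega ** 2 * r * dm) ** 2 +        # mass term
                 (2 * m * omega * r * domega) ** 2 +  # angular velocity term (dominant)
                 (m * omega ** 2 * dr) ** 2)          # radius term
    return F, dF

F, dF = load_uncertainty(m=2.0, omega=10.0, r=0.5, dm=0.001, domega=0.05, dr=0.0005)
print(F, dF, 100 * dF / F)   # error as a percentage of the applied load
```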
NASA Technical Reports Server (NTRS)
Connelly, Joseph; Blake, Peter; Jones, Joycelyn
2008-01-01
The authors report operational upgrades and streamlined data analysis of a commissioned electronic speckle interferometer (ESPI) in a permanent in-house facility at NASA's Goddard Space Flight Center. Our ESPI was commercially purchased for use by the James Webb Space Telescope (JWST) development team. We have quantified and reduced systematic error sources, improved the software operability with a user-friendly graphic interface, developed an instrument simulator, streamlined data analysis for long-duration testing, and implemented a turn-key approach to speckle interferometry. We also summarize results from a test of the JWST support structure (previously published), and present new results from several pieces of test hardware at various environmental conditions.
Description of the GMAO OSSE for Weather Analysis Software Package: Version 3
NASA Technical Reports Server (NTRS)
Koster, Randal D. (Editor); Errico, Ronald M.; Prive, Nikki C.; Carvalho, David; Sienkiewicz, Meta; El Akkraoui, Amal; Guo, Jing; Todling, Ricardo; McCarty, Will; Putman, William M.;
2017-01-01
The Global Modeling and Assimilation Office (GMAO) at the NASA Goddard Space Flight Center has developed software and products for conducting observing system simulation experiments (OSSEs) for weather analysis applications. Such applications include estimations of potential effects of new observing instruments or data assimilation techniques on improving weather analysis and forecasts. The GMAO software creates simulated observations from nature run (NR) data sets and adds simulated errors to those observations. The algorithms employed are much more sophisticated, adding a much greater degree of realism, compared with OSSE systems currently available elsewhere. The algorithms employed, software designs, and validation procedures are described in this document. Instructions for using the software are also provided.
Triple collocation based merging of satellite soil moisture retrievals
USDA-ARS?s Scientific Manuscript database
We propose a method for merging soil moisture retrievals from space borne active and passive microwave instruments based on weighted averaging taking into account the error characteristics of the individual data sets. The merging scheme is parameterized using error variance estimates obtained from u...
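The sketch below illustrates the classical triple collocation error variance estimates and inverse-error-variance merging weights, assuming three collocated, already-rescaled soil moisture series with mutually independent errors. This is a generic illustration of the technique, not the specific parameterization used in the manuscript.

```python
# Classical triple collocation sketch: estimate error variances of three collocated,
# rescaled series (x, y, z) with independent errors, then form merging weights.
import numpy as np

def triple_collocation(x, y, z):
    x, y, z = (a - a.mean() for a in (x, y, z))        # work with anomalies
    var_x = np.mean((x - y) * (x - z))
    var_y = np.mean((y - x) * (y - z))
    var_z = np.mean((z - x) * (z - y))
    return var_x, var_y, var_z

def merge(x, y, z):
    ev = np.array(triple_collocation(x, y, z))
    w = (1.0 / ev) / np.sum(1.0 / ev)                  # inverse-error-variance weights
    return w[0] * x + w[1] * y + w[2] * z, w
```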
CCD Camera Lens Interface for Real-Time Theodolite Alignment
NASA Technical Reports Server (NTRS)
Wake, Shane; Scott, V. Stanley, III
2012-01-01
Theodolites are a common instrument in the testing, alignment, and building of various systems ranging from a single optical component to an entire instrument. They provide a precise way to measure horizontal and vertical angles. They can be used to align multiple objects in a desired way at specific angles. They can also be used to reference a specific location or orientation of an object that has moved. Some systems may require a small margin of error in position of components. A theodolite can assist with accurately measuring and/or minimizing that error. The technology is an adapter for a CCD camera with lens to attach to a Leica Wild T3000 Theodolite eyepiece that enables viewing on a connected monitor, and thus can be utilized with multiple theodolites simultaneously. This technology removes a substantial part of human error by relying on the CCD camera and monitors. It also allows image recording of the alignment, and therefore provides a quantitative means to measure such error.
Mirjankar, Nikhil S; Fraga, Carlos G; Carman, April J; Moran, James J
2016-02-02
Chemical attribution signatures (CAS) for chemical threat agents (CTAs), such as cyanides, are being investigated to provide an evidentiary link between CTAs and specific sources to support criminal investigations and prosecutions. Herein, stocks of KCN and NaCN were analyzed for trace anions by high performance ion chromatography (HPIC), carbon stable isotope ratio (δ(13)C) by isotope ratio mass spectrometry (IRMS), and trace elements by inductively coupled plasma optical emission spectroscopy (ICP-OES). The collected analytical data were evaluated using hierarchical cluster analysis (HCA), Fisher-ratio (F-ratio), interval partial least-squares (iPLS), genetic algorithm-based partial least-squares (GAPLS), partial least-squares discriminant analysis (PLSDA), K nearest neighbors (KNN), and support vector machines discriminant analysis (SVMDA). HCA of anion impurity profiles from multiple cyanide stocks from six reported countries of origin resulted in cyanide samples clustering into three groups, independent of the associated alkali metal (K or Na). The three groups were independently corroborated by HCA of cyanide elemental profiles and corresponded to countries each having one known solid cyanide factory: Czech Republic, Germany, and United States. Carbon stable isotope measurements resulted in two clusters: Germany and United States (the single Czech stock grouped with United States stocks). Classification errors for two validation studies using anion impurity profiles collected over five years on different instruments were as low as zero for KNN and SVMDA, demonstrating the excellent reliability associated with using anion impurities for matching a cyanide sample to its factory using our current cyanide stocks. Variable selection methods reduced errors for those classification methods having errors greater than zero; iPLS-forward selection and F-ratio typically provided the lowest errors. Finally, using anion profiles to classify cyanides to a specific stock or stock group for a subset of United States stocks resulted in cross-validation errors ranging from 0 to 5.3%.
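A minimal sketch of one of the classification approaches named above (KNN with cross-validation) is shown below. The feature matrix, class labels, and preprocessing are placeholders; the actual study used measured anion impurity profiles and several additional chemometric methods.

```python
# Illustrative nearest-neighbour classification of anion impurity profiles by
# source factory, with leave-one-out cross-validation. Data are placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.random((30, 8))                      # 30 stocks x 8 anion concentrations (placeholder)
labels = np.repeat(["CZ", "DE", "US"], 10)   # factory labels (placeholder)

clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3))
scores = cross_val_score(clf, X, labels, cv=LeaveOneOut())
print("classification error: %.1f%%" % (100 * (1 - scores.mean())))
```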
NASA Astrophysics Data System (ADS)
Ma, Chen-xi; Ding, Guo-qing
2017-10-01
Simple harmonic waves and synthesized simple harmonic waves are widely used in instrument testing. However, because of errors caused by gear clearance and the time-delay error of the FPGA, it is difficult to control a servo electric cylinder in precise simple harmonic motion under high-speed, high-frequency and large-load conditions. To solve this problem, an error compensation method is proposed in this paper. In the method, a displacement sensor is fitted on the piston rod of the electric cylinder. The real-time displacement of the piston rod is measured by this sensor and fed back to the servo motor input, realizing closed-loop control, and compensation pulses are applied in the next period of the synthesized waves. The system uses an FPGA as the processing core. The software mainly comprises a waveform generator, an Ethernet module, a memory module, a pulse generator, a pulse selector, a protection module, and an error compensation module. A shock absorber durability test rig is used as the testing platform; it mainly comprises a single electric cylinder, a servo motor driving the electric cylinder, and the servo motor driver.
Closed-loop focal plane wavefront control with the SCExAO instrument
NASA Astrophysics Data System (ADS)
Martinache, Frantz; Jovanovic, Nemanja; Guyon, Olivier
2016-09-01
Aims: This article describes the implementation of a focal plane based wavefront control loop on the high-contrast imaging instrument SCExAO (Subaru Coronagraphic Extreme Adaptive Optics). The sensor relies on the Fourier analysis of conventional focal-plane images acquired after an asymmetric mask is introduced in the pupil of the instrument. Methods: This absolute sensor is used here in a closed loop to compensate for the non-common path errors that normally affect any imaging system relying on an upstream adaptive optics system. This specific implementation was used to control low-order modes corresponding to eight Zernike modes (from focus to spherical). Results: This loop was successfully run on-sky at the Subaru Telescope and is used to offset the SCExAO deformable mirror shape used as a zero-point by the high-order wavefront sensor. The paper details the range of errors this wavefront-sensing approach can operate within and explores the impact of saturation of the data and how it can be bypassed, at a cost in performance. Conclusions: Beyond this application, because of its low hardware impact, the asymmetric pupil Fourier wavefront sensor (APF-WFS) can easily be ported to a wide variety of wavefront sensing contexts, for ground- as well as space-borne telescopes, and for telescope pupils that can be continuous, segmented or even sparse. The technique is powerful because it measures the wavefront where it really matters, at the level of the science detector.
NASA Technical Reports Server (NTRS)
Zhang, Liwei Dennis; Milman, Mark; Korechoff, Robert
2004-01-01
The current design of the Space Interferometry Mission (SIM) employs a 19 laser-metrology-beam system (also called L19 external metrology truss) to monitor changes of distances between the fiducials of the flight system's multiple baselines. The function of the external metrology truss is to aid in the determination of the time-variations of the interferometer baseline. The largest contributor to truss error occurs in SIM wide-angle observations when the articulation of the siderostat mirrors (in order to gather starlight from different sky coordinates) brings to light systematic errors due to offsets at levels of instrument components (which include corner cube retro-reflectors, etc.). This error is labeled external metrology wide-angle field-dependent error. A physics-based model of the field-dependent error at the single metrology gauge level is developed and linearly propagated to errors in interferometer delay. In this manner delay error sensitivity to various error parameters or their combination can be studied using eigenvalue/eigenvector analysis. Validation of the physics-based field-dependent model on the SIM testbed also lends support to the present approach. As a first example, a dihedral error model is developed for the corner cubes (CC) attached to the siderostat mirrors. Then the delay errors due to this effect can be characterized using the eigenvectors of composite CC dihedral error. The essence of the linear error model is contained in an error-mapping matrix. A corresponding Zernike component matrix approach is developed in parallel, first for convenience of describing the RMS of errors across the field-of-regard (FOR), and second for convenience of combining with additional models. Average and worst case residual errors are computed when various orders of field-dependent terms are removed from the delay error. Results of the residual errors are important in arriving at external metrology system component requirements. Double CCs with ideally coincident vertices reside with the siderostat. The non-common vertex error (NCVE) is treated as a second example. Finally, combinations of models and various other errors are discussed.
NASA Technical Reports Server (NTRS)
Vasilyev, Y. M.; Lagunov, L. F.
1973-01-01
The schematic diagram of a noise measuring device is presented; the device uses pulse-expansion modeling according to the peak or any other measured value to obtain instrument readings with very low noise error.
Application of parameter estimation to aircraft stability and control: The output-error approach
NASA Technical Reports Server (NTRS)
Maine, Richard E.; Iliff, Kenneth W.
1986-01-01
The practical application of parameter estimation methodology to the problem of estimating aircraft stability and control derivatives from flight test data is examined. The primary purpose of the document is to present a comprehensive and unified picture of the entire parameter estimation process and its integration into a flight test program. The document concentrates on the output-error method to provide a focus for detailed examination and to allow us to give specific examples of situations that have arisen. The document first derives the aircraft equations of motion in a form suitable for application to estimation of stability and control derivatives. It then discusses the issues that arise in adapting the equations to the limitations of analysis programs, using a specific program for an example. The roles and issues relating to mass distribution data, preflight predictions, maneuver design, flight scheduling, instrumentation sensors, data acquisition systems, and data processing are then addressed. Finally, the document discusses evaluation and the use of the analysis results.
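The toy sketch below illustrates the core of the output-error idea described above: simulate the model response for candidate parameter values and minimize the squared difference with the measured output. A scalar first-order system stands in for the aircraft equations of motion, and all values are placeholders.

```python
# Minimal output-error sketch: fit parameters by minimizing simulated-vs-measured
# output residuals. A first-order system x_dot = a*x + b*u stands in for the
# aircraft equations of motion; values are illustrative.
import numpy as np
from scipy.optimize import least_squares

def simulate(theta, u, dt, x0=0.0):
    a, b = theta
    x = np.empty(u.size)
    x[0] = x0
    for k in range(1, u.size):                  # simple Euler integration
        x[k] = x[k - 1] + dt * (a * x[k - 1] + b * u[k - 1])
    return x

def output_error(theta, u, y_meas, dt):
    return simulate(theta, u, dt) - y_meas      # residual vector to minimize

dt, n = 0.05, 400
u = np.sign(np.sin(0.5 * np.arange(n) * dt))    # doublet-like control input
y_meas = simulate([-1.2, 0.8], u, dt) + 0.01 * np.random.default_rng(2).standard_normal(n)

fit = least_squares(output_error, x0=[-0.5, 0.5], args=(u, y_meas, dt))
print(fit.x)                                    # estimated (a, b)
```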
Punched belt hole position deviation analysis of float type water level gauge
NASA Astrophysics Data System (ADS)
Mao, Chunlei; Wang, Tao; Fu, Weijie; Li, Lianhui
2018-03-01
The key component of the float type water level gauge is the perforated belt. The size and tolerance requirements for its holes are: (1) alternation of 100+0.2 and 100-0.2, (2) 200±0.1, (3) 1000±0.15, (4) 10000±0.2. For a single hole position the tolerance alternates between 100+0.2 and 100-0.2; for a double hole position it is 200±0.1. It is also best to avoid hole position errors that tend in a single direction: as the punched belt moves with the rotating water wheel, a hole position error that keeps increasing or decreasing in one direction causes the water level pin to gradually approach the edge of the hole, until the punched belt is finally lifted. This paper uses data collected from the laser drilling process of the steel belt for analysis. It is found that this method cannot meet the tolerance requirements, whereas a double-stamping process with an adjustable cylindrical pin is feasible.
Fourier Transform Methods. Chapter 4
NASA Technical Reports Server (NTRS)
Kaplan, Simon G.; Quijada, Manuel A.
2015-01-01
This chapter describes the use of Fourier transform spectrometers (FTS) for accurate spectrophotometry over a wide spectral range. After a brief exposition of the basic concepts of FTS operation, we discuss instrument designs and their advantages and disadvantages relative to dispersive spectrometers. We then examine how common sources of error in spectrophotometry manifest themselves when using an FTS and ways to reduce the magnitude of these errors. Examples are given of applications to both basic and derived spectrophotometric quantities. Finally, we give recommendations for choosing the right instrument for a specific application, and how to ensure the accuracy of the measurement results.
Cometary ephemerides - needs and concerns
NASA Technical Reports Server (NTRS)
Yeomans, D. K.
1981-01-01
With the use of narrow field-of-view instrumentation on faint comets, the accuracy requirements upon computed ephemerides are increasing. It is not uncommon for instruments with a one arc minute field-of-view to be tracking a faint comet that is not visible without a substantial integration time. As with all ephemerides of solar system objects, the computed motion depends upon the accuracy and reduction of the observations; the computed motion of a comet is further dependent upon effects related to the comet's activity. Thus, the ephemeris of an active comet is corrupted by both observational errors and errors due to the comet's activity.
Virtual reality-based assessment of basic laparoscopic skills using the Leap Motion controller.
Lahanas, Vasileios; Loukas, Constantinos; Georgiou, Konstantinos; Lababidi, Hani; Al-Jaroudi, Dania
2017-12-01
The majority of the current surgical simulators employ specialized sensory equipment for instrument tracking. The Leap Motion controller is a new device able to track linear objects with sub-millimeter accuracy. The aim of this study was to investigate the potential of a virtual reality (VR) simulator for assessment of basic laparoscopic skills, based on the low-cost Leap Motion controller. A simple interface was constructed to simulate the insertion point of the instruments into the abdominal cavity. The controller provided information about the position and orientation of the instruments. Custom tools were constructed to simulate the laparoscopic setup. Three basic VR tasks were developed: camera navigation (CN), instrument navigation (IN), and bimanual operation (BO). The experiments were carried out in two simulation centers: MPLSC (Athens, Greece) and CRESENT (Riyadh, Kingdom of Saudi Arabia). Two groups of surgeons (28 experts and 21 novices) participated in the study by performing the VR tasks. Skills assessment metrics included time, pathlength, and two task-specific errors. The face validity of the training scenarios was also investigated via a questionnaire completed by the participants. Expert surgeons significantly outperformed novices in all assessment metrics for IN and BO (p < 0.05). For CN, a significant difference was found in one error metric (p < 0.05). The greatest difference between the performances of the two groups occurred for BO. Qualitative analysis of the instrument trajectory revealed that experts performed more delicate movements compared to novices. Subjects' ratings on the feedback questionnaire highlighted the training value of the system. This study provides evidence regarding the potential use of the Leap Motion controller for assessment of basic laparoscopic skills. The proposed system allowed the evaluation of dexterity of the hand movements. Future work will involve comparison studies with validated simulators and development of advanced training scenarios on current Leap Motion controller.
Improved Stratospheric Temperature Retrievals for Climate Reanalysis
NASA Technical Reports Server (NTRS)
Rokke, L.; Joiner, J.
1999-01-01
The Data Assimilation Office (DAO) is embarking on plans to generate a twenty year reanalysis data set of climatic atmospheric variables. One of the focus points will be the evaluation of the dynamics of the stratosphere. The Stratospheric Sounding Unit (SSU), flown as part of the TIROS Operational Vertical Sounder (TOVS), is one of the primary stratospheric temperature sensors flown consistently throughout the reanalysis period. Seven unique sensors made the measurements over time, with individual instrument characteristics that need to be addressed. The stratospheric temperatures being assimilated across satellite platforms will profoundly impact the reanalysis dynamical fields. To attempt to quantify aspects of instrument and retrieval bias we are carefully collecting and analyzing all available information on the sensors, their instrument anomalies, forward model errors and retrieval biases. For the retrieval of stratospheric temperatures, we adapted the minimum variance approach of Jazwinski (1970) and Rodgers (1976) and applied it to the SSU soundings. In our algorithm, the state vector contains an initial guess of temperature from a model six hour forecast provided by the Goddard EOS Data Assimilation System (GEOS/DAS). This is combined with an a priori covariance matrix, a forward model parameterization, and specifications of instrument noise characteristics. A quasi-Newtonian iteration is used to obtain convergence of the retrieved state to the measurement vector. This algorithm also enables us to analyze and address the systematic errors associated with the unique characteristics of the cell pressures on the individual SSU instruments and the resolving power of the instruments to vertical gradients in the stratosphere. The preliminary results of the improved retrievals and their assimilation as well as baseline calculations of bias and RMS error between the NESDIS operational product and co-located ground measurements will be presented.
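The sketch below shows a single minimum-variance (optimal estimation) update of the kind referenced above: a first-guess state is combined with observations through a linearized forward model, given background and observation error covariances. All dimensions and matrix values are placeholders, not the SSU configuration, and the quasi-Newtonian iteration is omitted.

```python
# One linear minimum-variance update: x_a = x_b + K (y - H x_b),
# with gain K = B H^T (H B H^T + R)^-1. Values are placeholders.
import numpy as np

def min_variance_update(x_b, B, y, H, R):
    S = H @ B @ H.T + R
    K = B @ H.T @ np.linalg.inv(S)
    x_a = x_b + K @ (y - H @ x_b)
    A = (np.eye(B.shape[0]) - K @ H) @ B      # analysis (retrieval) error covariance
    return x_a, A

n_levels, n_channels = 10, 3
x_b = 220.0 + np.zeros(n_levels)              # first-guess temperatures (placeholder)
B = 4.0 * np.eye(n_levels)                    # a priori covariance (placeholder)
H = np.random.default_rng(3).random((n_channels, n_levels)) / n_levels
R = 0.25 * np.eye(n_channels)                 # instrument noise covariance (placeholder)
y = H @ (x_b + 2.0)                           # synthetic measurement vector
x_a, A = min_variance_update(x_b, B, y, H, R)
```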
Planck 2013 results. VII. HFI time response and beams
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bowyer, J. W.; Bridges, M.; Bucher, M.; Burigana, C.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Haissinski, J.; Hansen, F. K.; Hanson, D.; Harrison, D.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hou, Z.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Laureijs, R. J.; Lawrence, C. R.; Leonardi, R.; Leroy, C.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; MacTavish, C. J.; Maffei, B.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matsumura, T.; Matthai, F.; Mazzotta, P.; McGehee, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polegre, A. M.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rusholme, B.; Sandri, M.; Santos, D.; Sauvé, A.; Savini, G.; Scott, D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Yvon, D.; Zacchei, A.; Zonca, A.
2014-11-01
This paper characterizes the effective beams, the effective beam window functions and the associated errors for the Planck High Frequency Instrument (HFI) detectors. The effective beam is the angular response including the effect of the optics, detectors, data processing and the scan strategy. The window function is the representation of this beam in the harmonic domain which is required to recover an unbiased measurement of the cosmic microwave background angular power spectrum. The HFI is a scanning instrument and its effective beams are the convolution of: a) the optical response of the telescope and feeds; b) the processing of the time-ordered data and deconvolution of the bolometric and electronic transfer function; and c) the merging of several surveys to produce maps. The time response transfer functions are measured using observations of Jupiter and Saturn and by minimizing survey difference residuals. The scanning beam is the post-deconvolution angular response of the instrument, and is characterized with observations of Mars. The main beam solid angles are determined to better than 0.5% at each HFI frequency band. Observations of Jupiter and Saturn limit near sidelobes (within 5°) to about 0.1% of the total solid angle. Time response residuals remain as long tails in the scanning beams, but contribute less than 0.1% of the total solid angle. The bias and uncertainty in the beam products are estimated using ensembles of simulated planet observations that include the impact of instrumental noise and known systematic effects. The correlation structure of these ensembles is well-described by five error eigenmodes that are sub-dominant to sample variance and instrumental noise in the harmonic domain. A suite of consistency tests provide confidence that the error model represents a sufficient description of the data. The total error in the effective beam window functions is below 1% at 100 GHz up to multipole ℓ ~ 1500, and below 0.5% at 143 and 217 GHz up to ℓ ~ 2000.
Restoration of the ASCA Source Position Accuracy
NASA Astrophysics Data System (ADS)
Gotthelf, E. V.; Ueda, Y.; Fujimoto, R.; Kii, T.; Yamaoka, K.
2000-11-01
We present a calibration of the absolute pointing accuracy of the Advanced Satellite for Cosmology and Astrophysics (ASCA) which allows us to compensate for a large error (up to 1') in the derived source coordinates. We parameterize a temperature dependent deviation of the attitude solution which is responsible for this error. By analyzing ASCA coordinates of 100 bright active galactic nuclei, we show that it is possible to reduce the uncertainty in the sky position for any given observation by a factor of 4. The revised 90% error circle radius is then 12", consistent with preflight specifications, effectively restoring the full ASCA pointing accuracy. Herein, we derive an algorithm which compensates for this attitude error and present an internet-based table to be used to correct post facto the coordinate of all ASCA observations. While the above error circle is strictly applicable to data taken with the on-board Solid-state Imaging Spectrometers (SISs), similar coordinate corrections are derived for data obtained with the Gas Imaging Spectrometers (GISs), which, however, have additional instrumental uncertainties. The 90% error circle radius for the central 20' diameter of the GIS is 24". The large reduction in the error circle area for the two instruments offers the opportunity to greatly enhance the search for X-ray counterparts at other wavelengths. This has important implications for current and future ASCA source catalogs and surveys.
The response of the National Oceanic and Atmospheric Administration multilayer inferential dry deposition velocity model (NOAA-MLM) to error in meteorological inputs and model parameterization is reported. Monte Carlo simulations were performed to assess the uncertainty in NOA...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seong W. Lee
During this reporting period, the literature survey, covering the gasifier temperature measurement literature, ultrasonic cleaning applications and their background, and the spray coating process, was completed. The gasifier simulator (cold model) testing has been successfully conducted. Four factors (blower voltage, ultrasonic application, injection time intervals, particle weight) were considered as significant factors that affect the temperature measurement. The Analysis of Variance (ANOVA) was applied to analyze the test data. The analysis shows that all four factors are significant to the temperature measurements in the gasifier simulator (cold model). The regression analysis for the case with the normalized room temperature shows that a linear model fits the temperature data with 82% accuracy (18% error). The regression analysis for the case without the normalized room temperature shows 72.5% accuracy (27.5% error). The nonlinear regression analysis indicates a better fit than that of the linear regression. The nonlinear regression model's accuracy is 88.7% (11.3% error) for the normalized room temperature case, which is better than the linear regression analysis. The hot model thermocouple sleeve design and fabrication are completed. The gasifier simulator (hot model) design and fabrication are completed. The system tests of the gasifier simulator (hot model) have been conducted and some modifications have been made. Based on the system tests and results analysis, the gasifier simulator (hot model) has met the proposed design requirements and is ready for system testing. The ultrasonic cleaning method is under evaluation and will be further studied for the gasifier simulator (hot model) application. The progress of this project has been on schedule.
NASA Astrophysics Data System (ADS)
Vicent, Jorge; Alonso, Luis; Sabater, Neus; Miesch, Christophe; Kraft, Stefan; Moreno, Jose
2015-09-01
Uncertainties in the knowledge of the Instrument Spectral Response Function (ISRF), the barycenter of the spectral channels, and the bandwidth / spectral sampling (spectral resolution) are important error sources in the processing of satellite imaging spectrometers within narrow atmospheric absorption bands. Exhaustive laboratory spectral characterization is a costly engineering process, and the in-flight instrument configuration may differ from it given the harsh space environment and the stresses of launch. The retrieval schemes at Level-2 commonly assume a Gaussian ISRF, leading to uncorrected spectral stray-light effects and incorrect characterization and correction of the spectral shift and smile. These effects produce inaccurate atmospherically corrected data and are propagated to the final Level-2 mission products. Within ESA's FLEX satellite mission activities, the impact of ISRF knowledge error and spectral calibration error on Level-1 products, and its propagation to the Level-2 retrieved chlorophyll fluorescence, has been analyzed. A spectral recalibration scheme implemented at Level-2 reduces the propagated Level-1 errors to below 10% error in the retrieved fluorescence within the oxygen absorption bands, enhancing the quality of the retrieved products. The work presented here shows how minimizing the spectral calibration errors requires an effort both in the laboratory characterization and in the implementation of specific algorithms at Level-2.
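The sketch below illustrates how a Gaussian ISRF with a small barycenter shift changes the simulated radiance of a channel inside a narrow absorption feature, which is the mechanism behind the Level-1 errors discussed above. The wavelength grid, toy absorption line, FWHM, and shift value are placeholders.

```python
# Hedged sketch: simulate one channel by convolving a high-resolution spectrum with
# a Gaussian ISRF, with and without a spectral shift of the channel barycenter.
# Grid, line shape, FWHM, and shift are illustrative only.
import numpy as np

def gaussian_isrf(wl, center, fwhm):
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    g = np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    return g / np.trapz(g, wl)                       # normalize to unit area

def channel_radiance(wl_hr, rad_hr, center, fwhm, shift=0.0):
    isrf = gaussian_isrf(wl_hr, center + shift, fwhm)
    return np.trapz(isrf * rad_hr, wl_hr)

wl_hr = np.linspace(755.0, 775.0, 4001)               # nm, near the O2-A band (placeholder)
rad_hr = 1.0 - 0.8 * np.exp(-0.5 * ((wl_hr - 760.6) / 0.2) ** 2)   # toy absorption line
print(channel_radiance(wl_hr, rad_hr, center=760.6, fwhm=0.3))
print(channel_radiance(wl_hr, rad_hr, center=760.6, fwhm=0.3, shift=0.05))  # shifted channel
```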
Transfusion monitoring: care practice analysis in a public teaching hospital
dos Reis, Valesca Nunes; Paixão, Isabella Bertolin; Perrone, Ana Carolina Amaral de São José; Monteiro, Maria Inês; dos Santos, Kelli Borges
2016-01-01
ABSTRACT Objective To analyze the process of recording transfusion monitoring at a public teaching hospital. Methods A descriptive and retrospective study with a quantitative approach, analyzing the instruments to record transfusion monitoring at a public hospital in a city in the State of Minas Gerais (MG). Data were collected on the correct completion of the instrument, time elapsed from transfusions, records of vital signs, type of blood component more frequently transfused, and hospital unit where transfusion was performed. Results A total of 1,012 records were analyzed, and 53.4% of them had errors in filling in the instruments, 6% of transfusions started after the recommended time, and 9.3% of patients had no vital signs registered. Conclusion Failures were identified in the process of recording transfusion monitoring, and they could result in more adverse events related to the administration of blood components. Planning and implementing strategies to enhance recording and to improve care delivered are challenging. PMID:27074233
Information systems as a tool to improve legal metrology activities
NASA Astrophysics Data System (ADS)
Rodrigues Filho, B. A.; Soratto, A. N. R.; Gonçalves, R. F.
2016-07-01
This study explores the importance of information systems applied to legal metrology as a tool to improve the control of measuring instruments used in trade. The information system implemented in Brazil has also helped to understand and appraise the control of measurements through the behavior of the errors and deviations of instruments used in trade, allowing resources to be allocated wisely and leading to more effective planning and control in the legal metrology field. A case study of the fuel sector is carried out in order to show the conformity of fuel dispensers with the maximum permissible errors. The statistics of measurement errors of 167,310 gasoline, ethanol and diesel fuel dispensers used in the field were analyzed, demonstrating the conformity of the Brazilian fuel market with the legal requirements.
NASA Astrophysics Data System (ADS)
Bibi, Humera; Alam, Khan; Chishtie, Farrukh; Bibi, Samina; Shahid, Imran; Blaschke, Thomas
2015-06-01
This study provides an intercomparison of aerosol optical depth (AOD) retrievals from satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS), Multiangle Imaging Spectroradiometer (MISR), Ozone Monitoring Instrument (OMI), and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) instrumentation over Karachi, Lahore, Jaipur, and Kanpur between 2007 and 2013, with validation against AOD observations from the ground-based Aerosol Robotic Network (AERONET). Both MODIS Deep Blue (MODISDB) and MODIS Standard (MODISSTD) products were compared with the AERONET products. The MODISSTD-AERONET comparisons revealed a high degree of correlation for the four investigated sites at Karachi, Lahore, Jaipur, and Kanpur, the MODISDB-AERONET comparisons revealed even better correlations, and the MISR-AERONET comparisons also indicated strong correlations, as did the OMI-AERONET comparisons, while the CALIPSO-AERONET comparisons revealed only poor correlations due to the limited number of data points available. We also computed figures for root mean square error (RMSE), mean absolute error (MAE) and root mean bias (RMB). Using AERONET data to validate MODISSTD, MODISDB, MISR, OMI, and CALIPSO data revealed that MODISSTD data was more accurate over vegetated locations than over un-vegetated locations, while MISR data was more accurate over areas close to the ocean than over other areas. The MISR instrument performed better than the other instruments over Karachi and Kanpur, while the MODISSTD AOD retrievals were better than those from the other instruments over Lahore and Jaipur. We also computed the expected error bounds (EEBs) for both MODIS retrievals and found that MODISSTD consistently outperformed MODISDB in all of the investigated areas. High AOD values were observed by the MODISSTD, MODISDB, MISR, and OMI instruments during the summer months (April-August); these ranged from 0.32 to 0.78, possibly due to human activity and biomass burning. In contrast, high AOD values were observed by the CALIPSO instrument between September and December, due to high concentrations of smoke and soot aerosols. The variable monthly AOD figures obtained with different sensors indicate overestimation by MODISSTD, MODISDB, OMI, and CALIPSO instruments over Karachi, Lahore, Jaipur and Kanpur, relative to the AERONET data, but underestimation by the MISR instrument.
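A simple sketch of the validation statistics named above is given below for matched satellite and AERONET AOD pairs. The ratio-style definition of the mean bias and the MODIS-style expected error envelope used here are assumptions, since the abstract does not define "RMB" or the EEB formula explicitly.

```python
# Validation statistics for matched satellite vs. AERONET AOD pairs. The mean-bias
# ratio and the +/-(0.05 + 0.15*AOD) envelope are assumed, illustrative definitions.
import numpy as np

def aod_stats(aod_sat, aod_aeronet):
    d = aod_sat - aod_aeronet
    rmse = np.sqrt(np.mean(d ** 2))
    mae = np.mean(np.abs(d))
    rmb = np.mean(aod_sat) / np.mean(aod_aeronet)    # ratio-style mean bias (assumed)
    return rmse, mae, rmb

def within_expected_error(aod_sat, aod_aeronet, a=0.05, b=0.15):
    """Fraction of retrievals falling within a MODIS-style envelope +/-(a + b*AOD)."""
    ee = a + b * aod_aeronet
    return np.mean(np.abs(aod_sat - aod_aeronet) <= ee)
```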
Toering, Tynke; Jordet, Geir; Ripegutu, Anders
2013-01-01
The present study aimed to develop a football-specific self-report instrument measuring self-regulated learning in the context of daily practice, which can be used to monitor the extent to which players take responsibility for their own learning. Development of the instrument involved six steps: 1. Literature review based on Zimmerman's (2006) theory of self-regulated learning, 2. Item generation, 3. Item validation, 4. Pilot studies, 5. Exploratory factor analysis (EFA), and 6. Confirmatory factor analysis (CFA). The instrument was tested for reliability and validity among 204 elite youth football players aged 13-16 years (Mage = 14.6; s = 0.60; 123 boys, 81 girls). The EFA indicated that a five-factor model fitted the observed data best (reflection, evaluation, planning, speaking up, and coaching). However, the CFA showed that a three-factor structure including 22 items produced a satisfactory model fit (reflection, evaluation, and planning; non-normed fit index [NNFI] = 0.96, comparative fit index [CFI] = 0.95, root mean square error of approximation [RMSEA] = 0.067). While the self-regulation processes of reflection, evaluation, and planning are strongly related and fit well into one model, other self-regulated learning processes seem to be more individually determined. In conclusion, the questionnaire developed in this study is considered a reliable and valid instrument to measure self-regulated learning among elite football players.
NASA Technical Reports Server (NTRS)
Barrie, A. C.; Smith, S. E.; Dorelli, J. C.; Gershman, D. J.; Yeh, P.; Schiff, C.; Avanov, L. A.
2017-01-01
Data compression has been a staple of imaging instruments for years. Recently, plasma measurements have utilized compression with relatively low compression ratios. The Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale (MMS) mission generates data roughly 100 times faster than previous plasma instruments, requiring a higher compression ratio to fit within the telemetry allocation. This study investigates the performance of a space-based compression standard employing a Discrete Wavelet Transform and a Bit Plane Encoder (DWT/BPE) in compressing FPI plasma count data. Data from the first 6 months of FPI operation are analyzed to explore the error modes evident in the data and how to adapt to them. While approximately half of the Dual Electron Spectrometer (DES) maps had some level of loss, it was found that there is little effect on the plasma moments and that errors present in individual sky maps are typically minor. The majority of Dual Ion Spectrometer burst sky maps compressed in a lossless fashion, with no error introduced during compression. Because of induced compression error, the size limit for DES burst images has been increased for Phase 1B. Additionally, it was found that the floating point compression mode yielded better results when images have significant compression error, leading to floating point mode being used for the fast survey mode of operation for Phase 1B. Despite the suggested tweaks, it was found that wavelet-based compression, and a DWT/BPE algorithm in particular, is highly suitable to data compression for plasma measurement instruments and can be recommended for future missions.
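For readers unfamiliar with wavelet-based lossy compression, the sketch below uses the PyWavelets package to decompose a synthetic 2-D count map, discard small detail coefficients, and reconstruct it. It only illustrates the general DWT-plus-quantization idea; it is not the flight DWT/BPE implementation used by FPI, and the wavelet and threshold choices are arbitrary assumptions.

```python
import numpy as np
import pywt  # PyWavelets, used only to illustrate the idea

def lossy_wavelet_roundtrip(sky_map, threshold=2.0, wavelet="haar", level=2):
    """Decompose a 2-D count map, zero small detail coefficients, reconstruct."""
    coeffs = pywt.wavedec2(sky_map, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    thresholded = [approx] + [
        tuple(pywt.threshold(band, threshold, mode="hard") for band in level_bands)
        for level_bands in details
    ]
    return pywt.waverec2(thresholded, wavelet)

counts = np.random.poisson(5.0, size=(32, 32)).astype(float)
recon = lossy_wavelet_roundtrip(counts)
print("max absolute reconstruction error:", np.abs(recon - counts).max())
```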
Bias estimation for the Landsat 8 operational land imager
Morfitt, Ron; Vanderwerff, Kelly
2011-01-01
The Operational Land Imager (OLI) is a pushbroom sensor that will be a part of the Landsat Data Continuity Mission (LDCM). This instrument is the latest in the line of Landsat imagers, and will continue to expand the archive of calibrated earth imagery. An important step in producing a calibrated image from instrument data is accurately accounting for the bias of the imaging detectors. Bias variability is one factor that contributes to error in bias estimation for OLI. Typically, the bias is simply estimated by averaging dark data on a per-detector basis. However, data acquired during OLI pre-launch testing exhibited bias variation that correlated well with the variation in concurrently collected data from a special set of detectors on the focal plane. These detectors are sensitive to certain electronic effects but not directly to incoming electromagnetic radiation. A method of using data from these special detectors to estimate the bias of the imaging detectors was developed, but found not to be beneficial at typical radiance levels as the detectors respond slightly when the focal plane is illuminated. In addition to bias variability, a systematic bias error is introduced by the truncation performed by the spacecraft of the 14-bit instrument data to 12-bit integers. This systematic error can be estimated and removed on average, but the per pixel quantization error remains. This paper describes the variability of the bias, the effectiveness of a new approach to estimate and compensate for it, as well as the errors due to truncation and how they are reduced.
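A toy numpy illustration of the two bias terms discussed above: a per-detector baseline estimated by averaging dark data, and the average offset introduced when 14-bit samples are truncated to 12 bits. The array shapes and values are invented; the actual OLI processing is considerably more involved.

```python
import numpy as np

def per_detector_bias(dark_frames):
    """Baseline bias estimate: average dark data per detector (one column each)."""
    return dark_frames.mean(axis=0)

def truncation_offset(bits_dropped=2):
    """Average signal lost when 14-bit samples are truncated to 12 bits: the two
    dropped least-significant bits carry 0-3 DN, i.e. 1.5 DN (14-bit) on average."""
    return (2 ** bits_dropped - 1) / 2.0

dark = np.random.normal(100.0, 2.0, size=(2000, 500))   # invented dark collect
bias = per_detector_bias(dark)
print(bias[:3], truncation_offset())
```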
Strand, Matthew; Sillau, Stefan; Grunwald, Gary K; Rabinovitch, Nathan
2014-02-10
Regression calibration provides a way to obtain unbiased estimators of fixed effects in regression models when one or more predictors are measured with error. Recent development of measurement error methods has focused on models that include interaction terms between measured-with-error predictors, and separately, methods for estimation in models that account for correlated data. In this work, we derive explicit and novel forms of regression calibration estimators and associated asymptotic variances for longitudinal models that include interaction terms, when data from instrumental and unbiased surrogate variables are available but not the actual predictors of interest. The longitudinal data are fit using linear mixed models that contain random intercepts and account for serial correlation and unequally spaced observations. The motivating application involves a longitudinal study of exposure to two pollutants (predictors) - outdoor fine particulate matter and cigarette smoke - and their association in interactive form with levels of a biomarker of inflammation, leukotriene E4 (LTE4, outcome) in asthmatic children. Because the exposure concentrations could not be directly observed, we used measurements from a fixed outdoor monitor and urinary cotinine concentrations as instrumental variables, and we used concentrations of fine ambient particulate matter and cigarette smoke measured with error by personal monitors as unbiased surrogate variables. We applied the derived regression calibration methods to estimate coefficients of the unobserved predictors and their interaction, allowing for direct comparison of toxicity of the different pollutants. We used simulations to verify accuracy of inferential methods based on asymptotic theory. Copyright © 2013 John Wiley & Sons, Ltd.
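A highly simplified, cross-sectional sketch of the regression-calibration idea (not the longitudinal, interaction-term estimators derived in the paper): the unbiased surrogate is regressed on the instrumental variable, and the fitted values stand in for the unobserved exposure in the outcome model. All variable names and noise levels are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x_true = rng.normal(size=n)                  # unobserved exposure
z = x_true + rng.normal(scale=0.3, size=n)   # instrumental variable (e.g., fixed monitor)
w = x_true + rng.normal(scale=0.8, size=n)   # unbiased surrogate (e.g., personal monitor)
y = 1.0 + 2.0 * x_true + rng.normal(size=n)  # outcome

# Stage 1: calibration model; E[W | Z] estimates E[X | Z] because W is unbiased for X.
x_hat = sm.OLS(w, sm.add_constant(z)).fit().fittedvalues

# Stage 2: outcome model with the calibrated predictor in place of the unobserved X.
fit = sm.OLS(y, sm.add_constant(x_hat)).fit()
print(fit.params)   # slope close to 2 despite the measurement error
```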
Statistical analysis of the calibration procedure for personnel radiation measurement instruments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, W.J.; Bengston, S.J.; Kalbeitzer, F.L.
1980-11-01
Thermoluminescent analyzer (TLA) calibration procedures were used to estimate personnel radiation exposure levels at the Idaho National Engineering Laboratory (INEL). A statistical analysis is presented herein based on data collected over a six-month period in 1979 on four TLAs located in the Department of Energy (DOE) Radiological and Environmental Sciences Laboratory at the INEL. The data were collected according to the day-to-day procedure in effect at that time. Both gamma and beta radiation models are developed. Observed TLA readings of thermoluminescent dosimeters are correlated with known radiation levels. This correlation is then used to predict unknown radiation doses from future analyzer readings of personnel thermoluminescent dosimeters. The statistical techniques applied in this analysis include weighted linear regression, estimation of systematic and random error variances, prediction interval estimation using Scheffe's theory of calibration, the estimation of the ratio of the means of two normal bivariate distributed random variables and their corresponding confidence limits according to Kendall and Stuart, tests of normality, experimental design, a comparison between instruments, and quality control.
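A small numpy example of the weighted linear regression step mentioned above, fitting analyzer readings against known delivered doses and inverting the fit to predict an unknown dose. The numbers are invented, and the full procedure (Scheffé-style prediction intervals, variance decomposition) is not reproduced.

```python
import numpy as np

# Invented calibration data: known delivered doses (mR) and analyzer readings.
dose = np.array([10.0, 50.0, 100.0, 500.0, 1000.0])
reading = np.array([12.1, 55.3, 108.0, 540.0, 1090.0])
sigma = 0.02 * reading                     # assumed 2 % reading uncertainty

# Weighted linear fit reading = a + b*dose, with weights 1/sigma as numpy expects.
b, a = np.polyfit(dose, reading, deg=1, w=1.0 / sigma)

# Invert the calibration to predict the dose behind a new dosimeter reading.
new_reading = 250.0
predicted_dose = (new_reading - a) / b
print(predicted_dose)
```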
Assessment of Infrared Sounder Radiometric Noise from Analysis of Spectral Residuals
NASA Astrophysics Data System (ADS)
Dufour, E.; Klonecki, A.; Standfuss, C.; Tournier, B.; Serio, C.; Masiello, G.; Tjemkes, S.; Stuhlmann, R.
2016-08-01
For the preparation and performance monitoring of the future generation of hyperspectral infrared sounders dedicated to precise vertical profiling of the atmospheric state, such as the Meteosat Third Generation hyperspectral InfraRed Sounder (MTG-IRS), a reliable assessment of the instrument radiometric error covariance matrix is needed. Ideally, an in-flight estimation of the radiometric noise is recommended, as certain sources of noise can be driven by the spectral signature of the observed Earth/atmosphere radiance. Also, unknown correlated noise sources, generally related to incomplete knowledge of the instrument state, can be present, so a characterisation of the noise spectral correlation is also needed. A methodology, relying on the analysis of post-retrieval spectral residuals, is designed and implemented to derive the covariance matrix in flight on the basis of Earth-scene measurements. This methodology is successfully demonstrated using IASI observations as MTG-IRS proxy data and made it possible to highlight anticipated correlation structures explained by apodization and micro-vibration effects (ghosts). This analysis is corroborated by a parallel estimation based on an IASI black-body measurement dataset and the results of an independent micro-vibration model.
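A minimal numpy sketch of how a noise covariance and its correlation structure could be estimated from an ensemble of post-retrieval spectral residuals; it assumes the residuals are already free of systematic fit errors, which is the hard part of the actual methodology.

```python
import numpy as np

def residual_covariance(residuals):
    """Sample covariance of post-retrieval spectral residuals, shape
    (n_spectra, n_channels); approximates the radiometric-noise covariance
    if the retrieval has removed the atmospheric signal."""
    r = residuals - residuals.mean(axis=0, keepdims=True)
    return r.T @ r / (r.shape[0] - 1)

def correlation_matrix(cov):
    """Normalize a covariance matrix to expose inter-channel correlation."""
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

resid = np.random.normal(size=(200, 50))        # placeholder residual ensemble
corr = correlation_matrix(residual_covariance(resid))
```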
Shohaimi, Shamarina; Wei, Wong Yoke; Shariff, Zalilah Mohd
2014-01-01
The Comprehensive Feeding Practices Questionnaire (CFPQ) is an instrument specifically developed to evaluate parental feeding practices. It has been confirmed among children in America and applied to populations in France, Norway, and New Zealand. In order to extend the application of the CFPQ, we conducted a factor-structure validation of the translated version of the CFPQ (CFPQ-M) using confirmatory factor analysis among mothers of primary school children (N = 397) in Malaysia. Several items were modified for cultural adaptation. Of 49 items, 39 items with factor loadings >0.40 were retained in the final model. The confirmatory factor analysis revealed that the final model (a twelve-factor model with 39 items and 2 error covariances) displayed the best fit for our sample (Chi-square = 1147; df = 634; P < 0.05; CFI = 0.900; RMSEA = 0.045; SRMR = 0.0058). The instrument, with some modifications, was confirmed among mothers of school children in Malaysia. The present study extends the usability of the CFPQ and enables researchers and parents to better understand the relationships between parental feeding practices and related problems such as childhood obesity.
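As a quick consistency check of the reported fit, the population RMSEA point estimate can be recomputed from the chi-square, degrees of freedom, and sample size given above:

```python
import math

def rmsea(chi2, df, n):
    """Population RMSEA point estimate from a chi-square fit statistic."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Values reported for the final CFPQ-M model (chi-square = 1147, df = 634, N = 397)
print(round(rmsea(1147, 634, 397), 3))   # 0.045, matching the reported RMSEA
```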
Advances toward submicron resolution optics for x-ray instrumentation and applications
NASA Astrophysics Data System (ADS)
Cordier, Mark; Stripe, Benjamin; Yun, Wenbing; Lau, S. H.; Lyon, Alan; Reynolds, David; Lewis, Sylvia J. Y.; Chen, Sharon; Semenov, Vladimir A.; Spink, Richard I.; Seshadri, Srivatsan
2017-08-01
Sigray's axially symmetric x-ray optics enable advanced microanalytical capabilities by focusing x-rays to micron-scale and submicron spot sizes, which can potentially unlock many avenues for laboratory microanalysis. The design of these optics allows submicron spot sizes even at low x-ray energies, enabling research into low-atomic-number elements and increasing the sensitivity of grazing-incidence measurements and surface analysis. We will discuss advances made in the fabrication of these double paraboloidal mirror lenses designed for use in laboratory x-ray applications. We will additionally present results from as-built paraboloids, including the surface figure error and focal spot size achieved to date.
Response function of modulated grid Faraday cup plasma instruments
NASA Technical Reports Server (NTRS)
Barnett, A.; Olbert, S.
1986-01-01
Modulated grid Faraday cup plasma analyzers are a very useful tool for making in situ measurements of space plasmas. One of their great attributes is that their simplicity permits their angular response function to be calculated theoretically. An expression is derived for this response function by computing the trajectories of the charged particles inside the cup. The Voyager plasma science experiment is used as a specific example. Two approximations to the rigorous response function useful for data analysis are discussed. Multisensor analysis of solar wind data indicates that the formulas represent the true cup response function for all angles of incidence with a maximum error of only a few percent.
Radiometer Calibrations: Saving Time by Automating the Gathering and Analysis Procedures
NASA Technical Reports Server (NTRS)
Sadino, Jeffrey L.
2005-01-01
Mr. Abtahi custom-designs radiometers for Mr. Hook's research group. Inherently, when the radiometers report the temperature of arbitrary surfaces, the results are affected by accuracy errors. This problem can be reduced if the errors can be accounted for in a polynomial. This is achieved by pointing the radiometer at a constant-temperature surface; we have been using a Hartford Scientific WaterBath. The measurements from the radiometer are collected at many different temperatures and compared to the measurements made by a Hartford Chubb thermometer with four-decimal-place resolution. The data are analyzed and fitted to a fifth-order polynomial. This formula is then uploaded into the radiometer software, enabling accurate data gathering. Traditionally, Mr. Abtahi has done this by hand, spending several hours of his time setting the temperature, waiting for stabilization, taking measurements, and then repeating for other temperatures. My program, written in the Python language, has enabled the data gathering and analysis process to be handed off to a less-senior member of the team. Simply by entering several initial settings, the program will simultaneously control all three instruments and organize the data for computer analysis, thus giving the desired fifth-order polynomial. This will save time, allow for a more complete calibration data set, and allow for base calibrations to be developed. The program is expandable to simultaneously take any type of measurement from up to nine distinct instruments.
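A short numpy sketch of the calibration fit described above: pairing reference bath temperatures with raw radiometer readings and fitting a fifth-order correction polynomial. The instrument-control portion (talking to the bath, thermometer, and radiometer) is omitted, and the response curve here is fabricated.

```python
import numpy as np

# Hypothetical paired data: reference bath temperature (deg C) versus raw
# radiometer reading; the response curve is fabricated for illustration.
bath_temp = np.linspace(5.0, 45.0, 15)
raw_reading = bath_temp + 0.002 * (bath_temp - 25.0) ** 2

# Fit the fifth-order correction polynomial: corrected temperature as a
# function of the raw reading, then apply it to a new measurement.
correction = np.poly1d(np.polyfit(raw_reading, bath_temp, deg=5))
print(correction(30.0))   # corrected temperature for a raw reading of 30
```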
Ramos, Rogelio; Zlatev, Roumen; Valdez, Benjamin; Stoytcheva, Margarita; Carrillo, Mónica; García, Juan-Francisco
2013-01-01
A virtual instrumentation (VI) system called the VI localized corrosion image analyzer (LCIA), based on LabVIEW 2010, was developed, allowing rapid, automatic, and subjective-error-free determination of the number of pits on large corroded specimens. The VI LCIA synchronously controls the digital microscope image acquisition and its analysis, finally producing a map file containing the coordinates of the detected zones likely to contain pits on the investigated specimen. The pit area, traverse length, and density are also determined by the VI using binary large object (blob) analysis. The resulting map file can be used further by a scanning vibrating electrode technique (SVET) system for a rapid (one-pass) "true/false" SVET check of the probable zones only, passing through the pit centers and thus avoiding a scan of the entire specimen. A complete SVET scan over the zones already proved "true" could determine the corrosion rate in any of the zones.
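The blob-analysis step can be illustrated with scipy.ndimage rather than LabVIEW: threshold the image, label connected regions, and report the centers and areas of candidate pits. This is only a conceptual stand-in for the LCIA processing, run on a synthetic image.

```python
import numpy as np
from scipy import ndimage

def pit_candidates(image, threshold):
    """Threshold a grayscale image, label connected regions (blobs), and return
    the centers and areas of the candidate pits."""
    mask = image > threshold
    labels, n_blobs = ndimage.label(mask)
    centers = ndimage.center_of_mass(mask, labels, range(1, n_blobs + 1))
    areas = ndimage.sum(mask, labels, range(1, n_blobs + 1))
    return centers, areas

img = np.zeros((100, 100))
img[20:25, 30:35] = 1.0                  # one synthetic "pit"
centers, areas = pit_candidates(img, 0.5)
print(centers, areas)                    # [(22.0, 32.0)] and [25.0]
```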
Automated processing for proton spectroscopic imaging using water reference deconvolution.
Maudsley, A A; Wu, Z; Meyerhoff, D J; Weiner, M W
1994-06-01
Automated formation of MR spectroscopic images (MRSI) is necessary before routine application of these methods is possible for in vivo studies; however, this task is complicated by the presence of spatially dependent instrumental distortions and the complex nature of the MR spectrum. A data processing method is presented for completely automated formation of in vivo proton spectroscopic images, and applied for analysis of human brain metabolites. This procedure uses the water reference deconvolution method (G. A. Morris, J. Magn. Reson. 80, 547(1988)) to correct for line shape distortions caused by instrumental and sample characteristics, followed by parametric spectral analysis. Results for automated image formation were found to compare favorably with operator dependent spectral integration methods. While the water reference deconvolution processing was found to provide good correction of spatially dependent resonance frequency shifts, it was found to be susceptible to errors for correction of line shape distortions. These occur due to differences between the water reference and the metabolite distributions.
The speckle polarimeter of the 2.5-m telescope: Design and calibration
NASA Astrophysics Data System (ADS)
Safonov, B. S.; Lysenko, P. A.; Dodin, A. V.
2017-05-01
The speckle polarimeter is a facility instrument of the 2.5-m SAI MSU telescope that combines the features of a speckle interferometer and a polarimeter. The speckle polarimeter is designed for observations in several visible bands in the following modes: speckle interferometry, polarimetry, speckle polarimetry, and polaroastrometry. In this paper we describe the instrument design and the procedures for determining the angular scale of the camera and the position angle of the camera and the polarimeter. Our measurements of the parameters for the binary star HD 9165 are used as an example to demonstrate the technique of speckle interferometry. For bright objects the accuracy of astrometry is limited by the error of the correction for the distortion caused by the atmospheric dispersion compensator. At zenith distances less than 45° the additional relative measurement error of the separation is 0.7%, while the additional error of the position angle is 0.3°. In the absence of a dispersion compensator the accuracy of astrometry is limited by the uncertainty in the scale and position angle of the camera, which are 0.15% and 0.06°, respectively. We have performed polarimetric measurements of unpolarized stars and polarization standards. The instrumental polarization at the Cassegrain focus in the V band does not exceed 0.01%. The instrumental polarization for the Nasmyth focus varies between 2 and 4% within the visible range; we have constructed its model and give a method for its elimination from the measurements. For stars with an intrinsic polarization of less than 0.2% during observations at the Cassegrain focus the error is determined mainly by the photon and readout noises and can reach 5 × 10^-5.
Operating envelopes of particle sizing instrumentation used for icing research
NASA Technical Reports Server (NTRS)
Hovenac, Edward A.
1987-01-01
The Forward Scattering Spectrometer Probe and the Optical Array Probe are analyzed in terms of their ability to make accurate determinations of water droplet size distributions. Sources of counting and sizing errors are explained. The paper describes ways of identifying these errors and how they can affect measurement.
NASA Technical Reports Server (NTRS)
West, E. A.
1993-01-01
Magnetographs, which measure polarized light, allow solar astronomers to infer the magnetic field intensity on the Sun. The Marshall Space Flight Center (MSFC) Vector Magnetograph is such an imaging instrument. The instrument requires rapid modulation between polarization states to minimize seeing effects. The accuracy of those polarization measurements is dependent on stable modulators with small field-of-view errors. Although these devices are very important in ground-based telescopes, extending the field of view of electro-optical crystals such as KD*Ps (potassium di-deuterium phosphate) could encourage the development of these devices for other imaging applications. The work that was done at MSFC as part of the Center Director's Discretionary Fund (CDDF) to reduce the field-of-view errors of instruments that use KD*P modulators in their polarimeters is described.
Techniques for avoiding discrimination errors in the dynamic sampling of condensable vapors
NASA Technical Reports Server (NTRS)
Lincoln, K. A.
1983-01-01
In the mass spectrometric sampling of dynamic systems, measurements of the relative concentrations of condensable and noncondensable vapors can be significantly distorted if some subtle, but important, instrumental factors are overlooked. Even with in situ measurements, the condensables are readily lost to the container walls, and the noncondensables can persist within the vacuum chamber and yield a disproportionately high output signal. Where single pulses of vapor are sampled, this source of error is avoided by gating either the mass spectrometer "on" or the data acquisition instrumentation "on" only during the very brief time window when the initial vapor cloud emanating directly from the vapor source passes through the ionizer. Instrumentation for these techniques is detailed and its effectiveness is demonstrated by comparing gated and nongated spectra obtained from the pulsed-laser vaporization of several materials.
Data Combination and Instrumental Variables in Linear Models
ERIC Educational Resources Information Center
Khawand, Christopher
2012-01-01
Instrumental variables (IV) methods allow for consistent estimation of causal effects, but suffer from poor finite-sample properties and data availability constraints. IV estimates also tend to have relatively large standard errors, often inhibiting the interpretability of differences between IV and non-IV point estimates. Lastly, instrumental…
14 CFR 23.1323 - Airspeed indicating system.
Code of Federal Regulations, 2010 CFR
2010-01-01
... instrument calibration error when the corresponding pitot and static pressures are applied. (b) Each airspeed... positive drainage of moisture from the pitot static plumbing. (d) If certification for instrument flight rules or flight in icing conditions is requested, each airspeed system must have a heated pitot tube or...
Fresh Fuel Measurements With the Differential Die-Away Self-Interrogation Instrument
NASA Astrophysics Data System (ADS)
Trahan, Alexis C.; Belian, Anthony P.; Swinhoe, Martyn T.; Menlove, Howard O.; Flaska, Marek; Pozzi, Sara A.
2017-07-01
The purpose of the Next Generation Safeguards Initiative (NGSI)-Spent Fuel (SF) Project is to strengthen the technical toolkit of safeguards inspectors and/or other interested parties. The NGSI-SF team is working to achieve the following technical goals more easily and efficiently than in the past using nondestructive assay measurements of spent fuel assemblies: 1) verify the initial enrichment, burnup, and cooling time of facility declaration; 2) detect the diversion or replacement of pins; 3) estimate the plutonium mass; 4) estimate decay heat; and 5) determine the reactivity of spent fuel assemblies. The differential die-away self-interrogation (DDSI) instrument is one instrument that was assessed for years regarding its feasibility for robust, timely verification of spent fuel assemblies. The instrument was recently built and was tested using fresh fuel assemblies in a variety of configurations, including varying enrichment, neutron absorber content, and symmetry. The early die-away method, a multiplication determination method developed in simulation space, was successfully tested on the fresh fuel assembly data and determined multiplication with a root-mean-square (RMS) error of 2.9%. The experimental results were compared with MCNP simulations of the instrument as well. Low multiplication assemblies had agreement with an average RMS error of 0.2% in the singles count rate (i.e., total neutrons detected per second) and 3.4% in the doubles count rates (i.e., neutrons detected in coincidence per second). High-multiplication assemblies had agreement with an average RMS error of 4.1% in the singles and 13.3% in the doubles count rates.
MEADERS: Medication Errors and Adverse Drug Event Reporting system.
Zafar, Atif
2007-10-11
The Agency for Healthcare Research and Quality (AHRQ) recently funded the PBRN Resource Center to develop a system for reporting ambulatory medication errors. Our goal was to develop a usable system that practices could use internally to track errors. We initially performed a comprehensive literature review of what is currently available. Then, using a combination of expert panel meetings and iterative development, we designed an instrument for ambulatory medication error reporting and created a reporting system implemented both in MS Access 2003 and on the web using MS ASP.NET 2.0 technologies.
Fish-Eye Observing with Phased Array Radio Telescopes
NASA Astrophysics Data System (ADS)
Wijnholds, S. J.
The radio astronomical community is currently developing and building several new radio telescopes based on phased array technology. These telescopes provide a large field of view that may in principle span a full hemisphere. This makes calibration and imaging very challenging tasks due to the complex source structures and direction-dependent radio wave propagation effects. In this thesis, calibration and imaging methods are developed based on least squares estimation of instrument and source parameters. Monte Carlo simulations and actual observations with several prototypes show that this model-based approach provides statistically and computationally efficient solutions. The error analysis provides a rigorous mathematical framework to assess the imaging performance of current and future radio telescopes in terms of the effective noise, which is the combined effect of propagated calibration errors, noise in the data, and source confusion.
NASA Technical Reports Server (NTRS)
Antonille, Scott R.; Miskey, Cherie L.; Ohl, Raymond G.; Rohrbach, Scott O.; Aronstein, David L.; Bartoszyk, Andrew E.; Bowers, Charles W.; Cofie, Emmanuel; Collins, Nicholas R.; Comber, Brian J.;
2016-01-01
NASA's James Webb Space Telescope (JWST) is a 6.6m diameter, segmented, deployable telescope for cryogenic IR space astronomy (40K). The JWST Observatory includes the Optical Telescope Element (OTE) and the Integrated Science Instrument Module (ISIM) that contains four science instruments (SI) and the fine guider. The SIs are mounted to a composite metering structure. The SI and guider units were integrated to the ISIM structure and optically tested at the NASA Goddard Space Flight Center as a suite using the Optical Telescope Element SIMulator (OSIM). OSIM is a full field, cryogenic JWST telescope simulator. SI performance, including alignment and wave front error, were evaluated using OSIM. We describe test and analysis methods for optical performance verification of the ISIM Element, with an emphasis on the processes used to plan and execute the test. The complexity of ISIM and OSIM drove us to develop a software tool for test planning that allows for configuration control of observations, associated scripts, and management of hardware and software limits and constraints, as well as tools for rapid data evaluation, and flexible re-planning in response to the unexpected. As examples of our test and analysis approach, we discuss how factors such as the ground test thermal environment are compensated in alignment. We describe how these innovative methods for test planning and execution and post-test analysis were instrumental in the verification program for the ISIM element, with enough information to allow the reader to consider these innovations and lessons learned in this successful effort in their future testing for other programs.
NASA Astrophysics Data System (ADS)
Antonille, Scott R.; Miskey, Cherie L.; Ohl, Raymond G.; Rohrbach, Scott O.; Aronstein, David L.; Bartoszyk, Andrew E.; Bowers, Charles W.; Cofie, Emmanuel; Collins, Nicholas R.; Comber, Brian J.; Eichhorn, William L.; Glasse, Alistair C.; Gracey, Renee; Hartig, George F.; Howard, Joseph M.; Kelly, Douglas M.; Kimble, Randy A.; Kirk, Jeffrey R.; Kubalak, David A.; Landsman, Wayne B.; Lindler, Don J.; Malumuth, Eliot M.; Maszkiewicz, Michael; Rieke, Marcia J.; Rowlands, Neil; Sabatke, Derek S.; Smith, Corbett T.; Smith, J. Scott; Sullivan, Joseph F.; Telfer, Randal C.; Te Plate, Maurice; Vila, M. Begoña.; Warner, Gerry D.; Wright, David; Wright, Raymond H.; Zhou, Julia; Zielinski, Thomas P.
2016-09-01
NASA's James Webb Space Telescope (JWST) is a 6.5m diameter, segmented, deployable telescope for cryogenic IR space astronomy. The JWST Observatory includes the Optical Telescope Element (OTE) and the Integrated Science Instrument Module (ISIM), that contains four science instruments (SI) and the Fine Guidance Sensor (FGS). The SIs are mounted to a composite metering structure. The SIs and FGS were integrated to the ISIM structure and optically tested at NASA's Goddard Space Flight Center using the Optical Telescope Element SIMulator (OSIM). OSIM is a full-field, cryogenic JWST telescope simulator. SI performance, including alignment and wavefront error, was evaluated using OSIM. We describe test and analysis methods for optical performance verification of the ISIM Element, with an emphasis on the processes used to plan and execute the test. The complexity of ISIM and OSIM drove us to develop a software tool for test planning that allows for configuration control of observations, implementation of associated scripts, and management of hardware and software limits and constraints, as well as tools for rapid data evaluation, and flexible re-planning in response to the unexpected. As examples of our test and analysis approach, we discuss how factors such as the ground test thermal environment are compensated in alignment. We describe how these innovative methods for test planning and execution and post-test analysis were instrumental in the verification program for the ISIM element, with enough information to allow the reader to consider these innovations and lessons learned in this successful effort in their future testing for other programs.
ACCOUNTING FOR CALIBRATION UNCERTAINTIES IN X-RAY ANALYSIS: EFFECTIVE AREAS IN SPECTRAL FITTING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Hyunsook; Kashyap, Vinay L.; Drake, Jeremy J.
2011-04-20
While considerable advances have been made in accounting for statistical uncertainties in astronomical analyses, systematic instrumental uncertainties have generally been ignored. This can be crucial to a proper interpretation of analysis results because instrumental calibration uncertainty is a form of systematic uncertainty. Ignoring it can underestimate error bars and introduce bias into the fitted values of model parameters. Accounting for such uncertainties currently requires extensive case-specific simulations if using existing analysis packages. Here, we present general statistical methods that incorporate calibration uncertainties into spectral analysis of high-energy data. We first present a method based on multiple imputation that can be applied with any fitting method, but is necessarily approximate. We then describe a more exact Bayesian approach that works in conjunction with a Markov chain Monte Carlo based fitting. We explore methods for improving computational efficiency, and in particular detail a method of summarizing calibration uncertainties with a principal component analysis of samples of plausible calibration files. This method is implemented using recently codified Chandra effective area uncertainties for low-resolution spectral analysis and is verified using both simulated and actual Chandra data. Our procedure for incorporating effective area uncertainty is easily generalized to other types of calibration uncertainties.
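A compact numpy sketch of the principal-component summary idea: given an ensemble of plausible effective-area curves, keep the mean and a few leading components as a low-dimensional representation of the calibration uncertainty. The ensemble here is random noise purely for illustration.

```python
import numpy as np

def calibration_pcs(effective_areas, n_components=3):
    """Mean and leading principal components of an ensemble of plausible
    effective-area curves (one curve per row)."""
    mean = effective_areas.mean(axis=0)
    centered = effective_areas - mean
    # Rows of vt are principal directions; s**2/(n-1) are their variances.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = (s[:n_components] ** 2) / (effective_areas.shape[0] - 1)
    return mean, vt[:n_components], variances

samples = np.random.normal(1.0, 0.01, size=(500, 1024))   # fake effective-area ensemble
mean_area, components, variances = calibration_pcs(samples)
```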
Du, Lei; Sun, Qiao; Cai, Changqing; Bai, Jie; Fan, Zhe; Zhang, Yue
2018-01-01
Traffic speed meters are important legal measuring instruments specially used for traffic speed enforcement and must be tested and verified in the field every year using a vehicular mobile standard speed-measuring instrument to ensure speed-measuring performance. The non-contact optical speed sensor and the GPS speed sensor are the two most common types of standard speed-measuring instruments. The non-contact optical speed sensor requires extremely high installation accuracy, and its speed-measuring error is nonlinear and uncorrectable. The speed-measuring accuracy of the GPS speed sensor is rapidly reduced if the number of received satellites is insufficient, which often occurs in urban high-rise regions, tunnels, and mountainous regions. In this paper, a new standard speed-measuring instrument using a dual-antenna Doppler radar sensor is proposed based on a tradeoff between the installation accuracy requirement and the usage region limitation; it has no specific requirement on its mounting distance, has no limitation on usage regions, and can automatically compensate for the effect of an inclined installation angle on its speed-measuring accuracy. Theoretical model analysis, simulated speed measurement results, and field experimental results compared with a high-accuracy GPS speed sensor showed that the dual-antenna Doppler radar sensor is effective and reliable as a new standard speed-measuring instrument.
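A sketch of one plausible dual-antenna geometry (assumed here, not taken from the paper): each antenna reports a radial speed proportional to the cosine of its angle to the velocity vector, and the known angular separation between the two antennas lets both the true speed and the unknown installation angle be solved for.

```python
import numpy as np

def true_speed(v1, v2, delta_rad):
    """Recover the vehicle speed and installation angle from two radial speeds.
    Assumed geometry: v1 = v*cos(theta), v2 = v*cos(theta + delta), where theta is
    the unknown installation angle of antenna 1 and delta the known separation."""
    theta = np.arctan((np.cos(delta_rad) - v2 / v1) / np.sin(delta_rad))
    return v1 / np.cos(theta), np.degrees(theta)

# Synthetic check: 30 m/s, 12 deg installation angle, 40 deg antenna separation.
v, delta, theta = 30.0, np.radians(40.0), np.radians(12.0)
speed, angle = true_speed(v * np.cos(theta), v * np.cos(theta + delta), delta)
print(speed, angle)   # ~30.0 m/s and ~12 deg recovered
```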
What is the effect of area size when using local area practice style as an instrument?
Brooks, John M; Tang, Yuexin; Chapman, Cole G; Cook, Elizabeth A; Chrischilles, Elizabeth A
2013-08-01
Discuss the tradeoffs inherent in choosing a local area size when using a measure of local area practice style as an instrument in instrumental variable estimation when assessing treatment effectiveness. Assess the effectiveness of angiotensin converting-enzyme inhibitors and angiotensin receptor blockers on survival after acute myocardial infarction for Medicare beneficiaries using practice style instruments based on different-sized local areas around patients. We contrasted treatment effect estimates using different local area sizes in terms of the strength of the relationship between local area practice styles and individual patient treatment choices; and indirect assessments of the assumption violations. Using smaller local areas to measure practice styles exploits more treatment variation and results in smaller standard errors. However, if treatment effects are heterogeneous, the use of smaller local areas may increase the risk that local practice style measures are dominated by differences in average treatment effectiveness across areas and bias results toward greater effectiveness. Local area practice style measures can be useful instruments in instrumental variable analysis, but the use of smaller local area sizes to generate greater treatment variation may result in treatment effect estimates that are biased toward higher effectiveness. Assessment of whether ecological bias can be mitigated by changing local area size requires the use of outside data sources. Copyright © 2013 Elsevier Inc. All rights reserved.
Pilot Designed Aircraft Displays in General Aviation: An Exploratory Study and Analysis
NASA Astrophysics Data System (ADS)
Conaway, Cody R.
From 2001 to 2011, the General Aviation (GA) fatal accident rate remained unchanged (Duquette & Dorr, 2014), with an overall stagnant accident rate between 2004 and 2013. The leading cause was loss of control in flight (NTSB, 2015b & 2015c), due to pilot inability to recognize the approach to stall/spin conditions (NTSB, 2015b & 2016b). In 2013, there were 1,224 GA accidents in the U.S., accounting for 94% of all U.S. aviation accidents and 90% of all U.S. aviation fatalities that year (NTSB, 2015c). Aviation entails multiple challenges for pilots related to task management, procedural errors, perceptual distortions, and cognitive discrepancies. While machine errors in airplanes have continued to decrease over the years, human error still has not (NTSB, 2013). A preliminary analysis of a PC-based, Garmin G1000 flight deck was conducted with 3 professional pilots. Analyses revealed increased task load, opportunities for distraction, confusing perceptual cues, and hindered cognitive performance. Complex usage problems were deeply ingrained in the functionality of the system, forcing pilots to use fallible workarounds, add unnecessary steps, and memorize knob turns or button pushes. Modern computing now has the potential to free GA cockpit designs from knobs, soft keys, or limited display options. Dynamic digital displays might include changes in instrumentation or menu structuring depending on the phase of flight. Airspeed indicators could increase in size to become more salient during landing, simultaneously highlighting pitch angle on attitude indicators and automatically decluttering unnecessary information for landing. Likewise, angle-of-attack indicators, instruments typically found in military platforms and now in the Icon A5 light-sport aircraft (Icon, 2016), demonstrate a great safety and performance advantage for pilots (Duquette & Dorr, 2014; NTSB, 2015b & 2016b). How does the design of the pilots' environment, the cockpit, further influence their efficiency and effectiveness? To explore the possibilities for small aircraft displays, a participatory design investigation was conducted with 9 qualified instrument pilots. Aviators designed mock cockpits on a PC using pictorial cutouts of analog (e.g., mechanical dials) and digital (e.g., dynamic displays) controls. Data were analyzed qualitatively and compared to similar work. Finally, a template for GA displays was developed based on pilot input.
Using Laser Altimetry to Detect Topographic Change at Long Valley Caldera, California
NASA Technical Reports Server (NTRS)
Hofton, M. A.; Minster, J.-B.; Ridgway, J. R.; Blair, J. B.; Rabine, D. L.; Bufton, J. L.; Williams, N. P.
1997-01-01
Long Valley caldera, California, is a site of extensive volcanism, persistent seismicity, and uplift of a resurgent dome, currently at a rate of approximately 3 cm/year. Airborne laser altimetry was used to determine the surface topography of the region in 1993. A repeat mission occurred in 1995. Three different laser altimeters were flown, dubbed ATLAS, SLICER and RASCAL. Data processing consists of the combination of the aircraft trajectory and attitude data with the laser range, the determination of an atmospheric delay, laser pulse timing errors, laser system biases, and data geolocation to obtain the position of the laser spot on the ground. Results showed that using the ATLAS and SLICER instruments, the elevation of an overflown lake is determined to precisions of 3.3 cm and 2.9 cm from altitudes of 500 m and 3 km above the ground, and approximately 10 cm using the RASCAL instrument from 500 m above ground. Comparison with tide gauge data showed the laser measurements are able to resolve centimeter-level changes in the lake elevation over time. Repeat pass analysis of tracks over flat surfaces indicate no systematic biases affect the measurement procedure of the ATLAS and SLICER instruments. Comparison of GPS and laser-derived elevations of easily-identifiable features in the caldera confirm the horizontal accuracy of the measurement is within the diameter of the laser footprint, and vertical accuracy is within the error inherent in the measurement. Crossover analysis shows that the standard error of the means at track intersection points within the caldera and dome (i.e., where zero and close to the maximum amount of uplift is expected) are about 1 cm, indicating elevation change at the 3 cm/year level should be detectable. We demonstrate one of the powerful advantages of scanning laser altimetry over other remote sensing techniques; the straightforward creation of precise digital elevation maps of overflown terrain. Initial comparison of the 1993-1995 data indicates uplift occurred, but filtering is required to remove vegetation effects. Although research continues to utilize the full potential of laser altimetry data, the results constitute a successful demonstration that the technique may be used to perform geodetic monitoring of surface topographic change.
Using Laser Altimetry to Detect Topographic Change in Long Valley Caldera, California
NASA Technical Reports Server (NTRS)
Hofton, M. A.; Minster, J.-B.; Ridgway, J. R.; Blair, J. B.
1997-01-01
Long Valley caldera, California, is a site of extensive volcanism, persistent seismicity, and uplift of a resurgent dome, currently at a rate of about 3 cm/year. Airborne laser altimetry was used to determine the surface topography of the region in 1993. A repeat mission occurred in 1995. Three different laser altimeters were flown, dubbed ATLAS, SLICER and RASCAL. Data processing consists of the combination of the aircraft trajectory and attitude data with the laser range, the determination of an atmospheric delay, laser pulse timing errors, laser system biases, and data geolocation to obtain the position of the laser spot on the ground. Results showed that using the ATLAS and SLICER instruments, the elevation of an overflown lake is determined to precisions of 3.3 cm and 2.9 cm from altitudes of 500 m and 3 km above the ground, and about 10 cm using the RASCAL instrument from 500 m above ground. Comparison with tide gauge data showed the laser measurements are able to resolve centimeter-level changes in the lake elevation over time. Repeat pass analysis of tracks over flat surfaces indicate no systematic biases affect the measurement procedure of the ATLAS and SLICER instruments. Comparison of GPS and laser-derived elevations of easily-identifiable features in the caldera confirm the horizontal accuracy of the measurement is within the diameter of the laser footprint, and vertical accuracy is within the error inherent in the measurement. Crossover analysis shows that the standard error of the means at track intersection points within the caldera, and dome (i.e., where zero and close to the maximum amount of uplift is expected) are about 1 cm, indicating elevation change at the 3 cm/year level should be detectable. We demonstrate one of the powerful advantages of scanning laser altimetry over other remote sensing techniques; the straightforward creation of precise digital elevation maps of overflown terrain. Initial comparison of the 1993-1995 data indicates uplift occurred, but filtering is required to remove vegetation effects. Although research continues to utilize the full potential of laser altimetry data, the results constitute a successful demonstration that the technique may be used to perform geodetic monitoring of surface topographic change.
Interpreting findings from Mendelian randomization using the MR-Egger method.
Burgess, Stephen; Thompson, Simon G
2017-05-01
Mendelian randomization-Egger (MR-Egger) is an analysis method for Mendelian randomization using summarized genetic data. MR-Egger consists of three parts: (1) a test for directional pleiotropy, (2) a test for a causal effect, and (3) an estimate of the causal effect. While conventional analysis methods for Mendelian randomization assume that all genetic variants satisfy the instrumental variable assumptions, the MR-Egger method is able to assess whether genetic variants have pleiotropic effects on the outcome that differ on average from zero (directional pleiotropy), as well as to provide a consistent estimate of the causal effect, under a weaker assumption-the InSIDE (INstrument Strength Independent of Direct Effect) assumption. In this paper, we provide a critical assessment of the MR-Egger method with regard to its implementation and interpretation. While the MR-Egger method is a worthwhile sensitivity analysis for detecting violations of the instrumental variable assumptions, there are several reasons why causal estimates from the MR-Egger method may be biased and have inflated Type 1 error rates in practice, including violations of the InSIDE assumption and the influence of outlying variants. The issues raised in this paper have potentially serious consequences for causal inferences from the MR-Egger approach. We give examples of scenarios in which the estimates from conventional Mendelian randomization methods and MR-Egger differ, and discuss how to interpret findings in such cases.
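A minimal illustration of the MR-Egger regression itself: an inverse-variance weighted regression of variant-outcome associations on variant-exposure associations that retains an intercept (the directional-pleiotropy term). The summary statistics below are invented, and the InSIDE-related caveats discussed in the paper still apply.

```python
import numpy as np
import statsmodels.api as sm

# Per-variant summary statistics (illustrative values only, variants oriented
# so that the exposure associations are positive).
beta_exposure = np.array([0.10, 0.08, 0.12, 0.05, 0.15, 0.09])
beta_outcome  = np.array([0.05, 0.05, 0.07, 0.02, 0.09, 0.04])
se_outcome    = np.array([0.02, 0.02, 0.03, 0.02, 0.03, 0.02])

# MR-Egger: weighted regression of outcome associations on exposure
# associations *with* an intercept; weights are inverse-variance.
X = sm.add_constant(beta_exposure)
fit = sm.WLS(beta_outcome, X, weights=1.0 / se_outcome ** 2).fit()

intercept, slope = fit.params
print("pleiotropy intercept:", intercept, " causal effect estimate:", slope)
```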
Threshold raw retrieved contrast in coronagraphs is limited by internal polarization
NASA Astrophysics Data System (ADS)
Breckinridge, James
The objective of this work is to provide the exoplanet program with an accurate model of the coronagraph complex point spread function, methods to correct chromatic aberration in the presence of polarization aberrations, device requirements to minimize and compensate for these aberrations at levels needed for exoplanet coronagraphy, and exoplanet retrieval algorithms in the presence of polarization aberrations. Currently, space-based coronagraphs are designed and performance analyzed using scalar wave aberration theory. Breckinridge, Lam & Chipman (2015) PASP 127: 445-468 and Breckinridge & Oppenheimer (2004) ApJ 600: 1091-1098 showed that astronomical telescopes designed for exoplanet and precision astrometric science require polarization or vector-wave analysis. Internal instrument polarization limits both threshold raw contrast and measurements of the vector wave properties of the electromagnetic radiation from stars, exoplanets, gas and dust. The threshold raw contrast obtained using only scalar wave theory is much more optimistic than that obtained using the more hardware-realistic vector wave theory. Internal polarization reduces system contrast, increases scattered light, alters radiometric measurements, distorts diffraction-limited star images and reduces signal-to-noise ratio. For example, a vector-wave analysis shows that the WFIRST-CGI instrument will have a threshold raw contrast of 10^-7, not the 10^-8 forecasted using the scalar wave analysis given in the WFIRST-CGI 2015 report. The physical nature of the complex point spread function determines the exoplanet scientific yield of coronagraphs. We propose to use the Polaris-M polarization aberration ray-tracing software developed at the College of Optical Science of the University of Arizona to ray trace both a "typical" exoplanet coronagraph system as well as the WFIRST-CGI system. Threshold raw contrast and the field across the complex PSF will be calculated as a function of optical device vector E&M requirements on: 1. Lyot coronagraph mask and stop size, configuration, location and composition, 2. Uniformity of the complex reflectance of the highly reflecting metal mirrors with their dielectric overcoats, and 3. Opto-mechanical layout. Once these requirements are developed, polarization-aberration mitigation studies can begin to identify a practical solution to compensate for polarization errors, much as the more mature technology of adaptive optics compensates for pointing and manufacturing errors. The several methods used to compensate for chromatic aberration in coronagraphs further compound the complex PSF errors that must be compensated to maximize the retrieved raw contrast in the presence of exoplanets in the vicinity of stars. Internal instrument polarization introduces partial coherence into the wavefront to distort the speckle-pattern complex field in the dark hole. An additional factor that determines retrieved raw contrast is our ability to effectively process the polarization-distorted field within the dark hole. This study is essential to the correct calculation of exoplanet coronagraph science yield, the development of requirements on subsystem devices (mirrors, stops, masks, spectrometers, wavefront error mitigation optics and opto-mechanical layout), and the development of exoplanet retrieval algorithms.
Increased instrument intelligence--can it reduce laboratory error?
Jekelis, Albert W
2005-01-01
Recent literature has focused on the reduction of laboratory errors and the potential impact on patient management. This study assessed the intelligent, automated preanalytical process-control abilities in newer generation analyzers as compared with older analyzers and the impact on error reduction. Three generations of immuno-chemistry analyzers were challenged with pooled human serum samples for a 3-week period. One of the three analyzers had an intelligent process of fluidics checks, including bubble detection. Bubbles can cause erroneous results due to incomplete sample aspiration. This variable was chosen because it is the most easily controlled sample defect that can be introduced. Traditionally, lab technicians have had to visually inspect each sample for the presence of bubbles. This is time consuming and introduces the possibility of human error. Instruments with bubble detection may be able to eliminate the human factor and reduce errors associated with the presence of bubbles. Specific samples were vortexed daily to introduce a visible quantity of bubbles, then immediately placed in the daily run. Errors were defined as a reported result greater than three standard deviations below the mean and associated with incomplete sample aspiration of the analyte on the individual analyzer. Three standard deviations represented the target limits of proficiency testing. The results of the assays were examined for accuracy and precision. Efficiency, measured as process throughput, was also measured to associate a cost factor and potential impact of the error detection on the overall process. The analyzers' performance stratified according to their level of internal process control. The older analyzers without bubble detection reported 23 erroneous results. The newest analyzer with bubble detection reported one specimen incorrectly. The precision and accuracy of the nonvortexed specimens were excellent and acceptable for all three analyzers. No errors were found in the nonvortexed specimens. There were no significant differences in overall process time for any of the analyzers when tests were arranged in an optimal configuration. The analyzer with advanced fluidic intelligence demonstrated the greatest ability to appropriately deal with an incomplete aspiration by not processing and reporting a result for the sample. This study suggests that preanalytical process-control capabilities could reduce errors. By association, it implies that similar intelligent process controls could favorably impact the error rate and, in the case of this instrument, do so without negatively impacting process throughput. Other improvements may be realized as a result of having an intelligent error-detection process, including further reduction in misreported results, fewer repeats, less operator intervention, and less reagent waste.
Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models
Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; ...
2016-12-15
We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element's emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple "submodel" method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then "blending" these "sub-models" into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
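A toy version of the sub-model idea using scikit-learn's PLS regression on synthetic spectra: train a full-range model plus overlapping low- and high-range models, use the full model's first-pass prediction to pick a sub-model, and blend linearly in the overlap. The composition ranges, blending rule, and data are all invented; the actual ChemCam calibration is far more elaborate.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic stand-ins: 'spectra' are fake LIBS spectra, 'sio2' a fake wt % target.
rng = np.random.default_rng(1)
sio2 = rng.uniform(30, 80, size=200)
spectra = np.outer(sio2, rng.normal(size=64)) + rng.normal(scale=5.0, size=(200, 64))

full = PLSRegression(n_components=5).fit(spectra, sio2)            # full-range model
low = PLSRegression(n_components=5).fit(spectra[sio2 < 55], sio2[sio2 < 55])
high = PLSRegression(n_components=5).fit(spectra[sio2 >= 45], sio2[sio2 >= 45])

def blended_prediction(x):
    """First-pass prediction with the full model selects (or linearly blends)
    the overlapping sub-models, echoing the 'sub-model' strategy."""
    first = full.predict(x.reshape(1, -1)).item()
    if first < 45:
        return low.predict(x.reshape(1, -1)).item()
    if first > 55:
        return high.predict(x.reshape(1, -1)).item()
    w = (first - 45) / 10.0          # blend weight across the 45-55 wt % overlap
    return (1 - w) * low.predict(x.reshape(1, -1)).item() + w * high.predict(x.reshape(1, -1)).item()

print(blended_prediction(spectra[0]))
```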
NASA Technical Reports Server (NTRS)
Horan, Stephen; Wang, Ru-Hai
1999-01-01
There exists a need for designers and developers to have a method to conveniently test a variety of communications parameters for an overall system design. This is no different when testing network protocols than when testing modulation formats. In this report, we discuss a means of providing a networking test device specifically designed to be used for space communications. This test device is a PC-based Virtual Instrument (VI) programmed using the LabVIEW(TM) version 5 software suite developed by National Instruments(TM). This instrument was designed to be portable and usable by others without special, additional equipment. The programming was designed to replicate a VME-based hardware module developed earlier at New Mexico State University (NMSU) and to provide expanded capabilities exceeding the baseline configuration existing in that module. This report describes the design goals for the VI module in the next section and follows that with a description of the design of the VI instrument. This is followed with a description of the validation tests run on the VI. An application of the error-generating VI to networking protocols is then given.
Nimon, Kim; Zientek, Linda Reichwein; Henson, Robin K.
2012-01-01
The purpose of this article is to help researchers avoid common pitfalls associated with reliability including incorrectly assuming that (a) measurement error always attenuates observed score correlations, (b) different sources of measurement error originate from the same source, and (c) reliability is a function of instrumentation. To accomplish our purpose, we first describe what reliability is and why researchers should care about it with focus on its impact on effect sizes. Second, we review how reliability is assessed with comment on the consequences of cumulative measurement error. Third, we consider how researchers can use reliability generalization as a prescriptive method when designing their research studies to form hypotheses about whether or not reliability estimates will be acceptable given their sample and testing conditions. Finally, we discuss options that researchers may consider when faced with analyzing unreliable data. PMID:22518107
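A small simulation can make the attenuation point concrete; the reliabilities and correlation below are arbitrary illustration values, and the disattenuation step uses the classical correction-for-attenuation formula rather than anything specific to the article:

```python
# Illustration: unreliable measures attenuate an observed correlation, and
# r_true ≈ r_obs / sqrt(r_xx * r_yy) approximately recovers it.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_x = rng.normal(size=n)
true_y = 0.6 * true_x + np.sqrt(1 - 0.6**2) * rng.normal(size=n)   # true correlation ~0.6

r_xx, r_yy = 0.8, 0.7                        # assumed reliabilities of the two measures
obs_x = true_x + np.sqrt(1 / r_xx - 1) * rng.normal(size=n)
obs_y = true_y + np.sqrt(1 / r_yy - 1) * rng.normal(size=n)

r_obs = np.corrcoef(obs_x, obs_y)[0, 1]
print(r_obs)                                  # attenuated, roughly 0.6 * sqrt(0.8 * 0.7) ≈ 0.45
print(r_obs / np.sqrt(r_xx * r_yy))           # disattenuated, roughly 0.6
```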
The failures of root canal preparation with hand ProTaper.
Bătăiosu, Marilena; Diaconu, Oana; Moraru, Iren; Dăguci, C; Tuculină, Mihaela; Dăguci, Luminiţa; Gheorghiţă, Lelia
2012-07-01
The failures of root canal preparation are due to anatomical deviations (C- or S-shaped canals) and to technique errors. Technique errors usually occur during the cleansing and shaping stage and result from deviation from the objectives of endodontic treatment. Our study examined technique errors made while preparing root canals with hand ProTaper instruments. The study was performed in vitro on 84 extracted teeth (molars, premolars, incisors, and canines). The root canals of these teeth were cleansed and shaped with hand ProTaper using the crown-down technique and canal irrigation with NaOCl (2.5%). The preparation was controlled radiographically. During root canal preparation, failures such as overinstrumentation, zipping and stripping, and deformed and/or fractured instruments were observed. Hand ProTaper represents a revolutionary advance in endodontic treatment, but deviation from the accepted rules of root canal instrumentation can lead to failure of the endodontic treatment.
Cross-Spectrum PM Noise Measurement, Thermal Energy, and Metamaterial Filters.
Gruson, Yannick; Giordano, Vincent; Rohde, Ulrich L; Poddar, Ajay K; Rubiola, Enrico
2017-03-01
Virtually all commercial instruments for the measurement of oscillator PM noise make use of the cross-spectrum method (arXiv:1004.5539 [physics.ins-det], 2010). High sensitivity is achieved by correlation and averaging on two equal channels, which measure the same input and reject the background of the instrument. We show that a systematic error is always present if the thermal energy of the input power splitter is not accounted for. Such an error can result in noise underestimation of up to a few decibels in the lowest-noise quartz oscillators, and in an invalid measurement in the case of cryogenic oscillators. As another alarming fact, the presence of metamaterial components in the oscillator results in unpredictable behavior and large errors, even in well-controlled experimental conditions. We observed a spread of 40 dB in the phase noise spectra of an oscillator, just replacing the output filter.
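The core benefit of cross-spectrum averaging, rejection of the uncorrelated channel background, can be sketched numerically; the sketch below deliberately ignores the thermal-energy and metamaterial effects that are the subject of the paper, and all noise levels are assumed:

```python
# Two channels see the same input plus independent channel noise; the averaged
# cross-spectrum converges toward the PSD of the common input only.
import numpy as np

rng = np.random.default_rng(2)
n, n_avg = 4096, 200
common_psd, channel_psd = 1.0, 100.0          # channel noise 20 dB above the common signal

acc = np.zeros(n // 2 + 1, dtype=complex)
for _ in range(n_avg):
    c = np.sqrt(common_psd) * rng.normal(size=n)        # common input
    a = c + np.sqrt(channel_psd) * rng.normal(size=n)   # channel 1
    b = c + np.sqrt(channel_psd) * rng.normal(size=n)   # channel 2
    A, B = np.fft.rfft(a), np.fft.rfft(b)
    acc += A * np.conj(B)

cross = np.abs(acc.real / (n_avg * n))        # averaged cross-spectrum estimate per bin
print(cross[1:].mean())                       # approaches the common level (~1), not ~101
```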
Optical Testing of Diamond Machined, Aspheric Mirrors for Groundbased, Near-IR Astronomy
NASA Technical Reports Server (NTRS)
Chambers, V. John; Mink, Ronald G.; Ohl, Raymond G.; Connelly, Joseph A.; Mentzell, J. Eric; Arnold, Steven M.; Greenhouse, Matthew A.; Winsor, Robert S.; MacKenty, John W.
2002-01-01
The Infrared Multi-Object Spectrometer (IRMOS) is a facility-class instrument for the Kitt Peak National Observatory 4 and 2.1 meter telescopes. IRMOS is a near-IR (0.8-2.5 micron) spectrometer and operates at approximately 80 K. The 6061-T651 aluminum bench and mirrors constitute an athermal design. The instrument produces simultaneous spectra at low- to mid-resolving power (R=lambda/delta lambda= 300-3000) of approximately 100 objects in its 2.8 x 2.0 arcmin field. We describe ambient and cryogenic optical testing of the IRMOS mirrors across a broad range in spatial frequency (figure error, mid-frequency error, and microroughness). The mirrors include three rotationally symmetric, off-axis conic sections, one off-axis biconic, and several flat fold mirrors. The symmetric mirrors include convex and concave prolate and oblate ellipsoids. They range in aperture from 94x86 mm to 286x269 mm and in f-number from 0.9 to 2.4. The biconic mirror is concave and has a 94x76 mm aperture, R(sub x)=377 mm, k(sub x)=0.0778, R(sub y)=407 mm, and k(sub y)=0.1265 and is decentered by -2 mm in X and 227 mm in Y. All of the mirrors have an aspect ratio of approximately 6:1. The surface error fabrication tolerances are less than 10 nm RMS microroughness, 'best effort' for mid-frequency error, and less than 63.3 nm RMS figure error. Ambient temperature (approximately 293 K) testing is performed for each of the three surface error regimes, and figure testing is also performed at approximately 80 K. Operation of the ADE Phaseshift MicroXAM white light interferometer (micro-roughness) and the Bauer Model 200 profilometer (mid-frequency error) is described. Both the sag and conic values of the aspheric mirrors make these tests challenging. Figure testing is performed using a Zygo GPI interferometer, custom computer generated holograms (CGH), and optomechanical alignment fiducials. Cryogenic CGH null testing is discussed in detail. We discuss complications such as the change in prescription with temperature and thermal gradients. Correction for the effect of the dewar window is also covered. We discuss the error budget for the optical test and alignment procedure. Data reduction is accomplished using commercial optical design and data analysis software packages. Results from CGH testing at cryogenic temperatures are encouraging thus far.
Boggess, Andrew; Crump, Stephen; Gregory, Clint; ...
2017-12-06
Here, unique hazards are presented in the analysis of radiologically contaminated samples. Strenuous safety and security precautions must be in place to protect the analyst, laboratory, and instrumentation used to perform analyses. A validated method has been optimized for the analysis of select nitroaromatic explosives and degradative products using gas chromatography/mass spectrometry via sonication extraction of radiologically contaminated soils, for samples requiring ISO/IEC 17025 laboratory conformance. Target analytes included 2-nitrotoluene, 4-nitrotoluene, 2,6-dinitrotoluene, and 2,4,6-trinitrotoluene, as well as the degradative product 4-amino-2,6-dinitrotoluene. Analytes were extracted from soil in methylene chloride by sonication. Administrative and engineering controls, as well as instrument automation and quality control measures, were utilized to minimize potential human exposure to radiation at all times and at all stages of analysis, from receiving through disposition. Though thermal instability increased uncertainties of these selected compounds, a mean lower quantitative limit of 2.37 µg/mL and mean accuracy of 2.3% relative error and 3.1% relative standard deviation were achieved. Quadratic regression was found to be optimal for calibration of all analytes, with compounds of lower hydrophobicity displaying greater parabolic curvature. Blind proficiency testing (PT) of spiked soil samples demonstrated a mean relative error of 9.8%. Matrix spiked analyses of PT samples demonstrated that 99% recovery of target analytes was achieved. To the knowledge of the authors, this represents the first safe, accurate, and reproducible quantitative method for nitroaromatic explosives in soil for specific use on radiologically contaminated samples within the constraints of a nuclear analytical lab.
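As a rough illustration of the quadratic calibration approach mentioned above (all concentrations and responses below are invented, not the study's data), one could fit a second-order polynomial to a set of standards and invert it to quantify an unknown:

```python
# Fit response = a*conc**2 + b*conc + c, then invert the fit for an unknown.
import numpy as np

conc = np.array([0.5, 1.0, 2.5, 5.0, 10.0, 25.0])      # µg/mL standards (assumed)
resp = np.array([0.9, 1.9, 4.9, 10.2, 21.5, 60.0])     # detector response (assumed)

a, b, c = np.polyfit(conc, resp, deg=2)                 # quadratic calibration coefficients

def quantify(response):
    """Return the smallest positive root of a*x^2 + b*x + (c - response)."""
    roots = np.roots([a, b, c - response])
    real = roots[np.isreal(roots)].real
    return float(real[real > 0].min())

print(quantify(15.0))   # concentration estimate for an unknown response
```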
Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M; Angeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard
2011-01-01
In-vivo measurement of bone lead by means of K-X-ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimations can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyses. Errors-in-variables regression (EIV) allows for correction of bias caused by measurement error in predictor variables, based on knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from uncertainty data reported by the KXRF instrument and compare, by the use of Monte Carlo simulations, results obtained using EIV regression models vs. those obtained by the standard procedures. Results of the simulations show that Ordinary Least Squares (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are more imprecise, their mean squared error is much smaller than that of OLS estimates. In conclusion, EIV is a better alternative than OLS to estimate the effect of bone lead when measured by KXRF. Copyright © 2010 Elsevier Inc. All rights reserved.
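The attenuation of an OLS slope and its correction by the reliability coefficient can be reproduced in a few lines; the exposure distribution, effect size, and reliability below are illustrative assumptions, and the simple division-by-reliability correction stands in for a full errors-in-variables fit:

```python
# Monte Carlo sketch: measurement error in the exposure biases OLS toward zero;
# dividing the slope by the reliability coefficient approximately removes the bias.
import numpy as np

rng = np.random.default_rng(3)
n_sim, n, beta = 2000, 500, 0.05
reliability = 0.6                                        # var(true) / var(observed), assumed

ols, corrected = [], []
for _ in range(n_sim):
    true_pb = rng.normal(10.0, 5.0, size=n)              # "true" bone lead (assumed)
    sigma_u = np.sqrt(true_pb.var() * (1 / reliability - 1))
    obs_pb = true_pb + rng.normal(0.0, sigma_u, size=n)  # KXRF-like noisy measurement
    y = beta * true_pb + rng.normal(0.0, 1.0, size=n)    # health outcome
    slope = np.polyfit(obs_pb, y, 1)[0]
    ols.append(slope)
    corrected.append(slope / reliability)

print(np.mean(ols))         # attenuated toward zero, ~beta * reliability = 0.03
print(np.mean(corrected))   # approximately unbiased, ~0.05 (with larger spread)
```

The corrected estimates show the trade-off noted in the abstract: they are less biased but individually noisier than the OLS estimates.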
Hook, S.J.; Chander, G.; Barsi, J.A.; Alley, R.E.; Abtahi, A.; Palluconi, Frank Don; Markham, B.L.; Richards, R.C.; Schladow, S.G.; Helder, D.L.
2004-01-01
The absolute radiometric accuracy of the thermal infrared band (B6) of the Thematic Mapper (TM) instrument on the Landsat-5 (L5) satellite was assessed over a period of approximately four years using data from the Lake Tahoe automated validation site (California-Nevada). The Lake Tahoe site was established in July 1999, and measurements of the skin and bulk temperature have been made approximately every 2 min from four permanently moored buoys since mid-1999. Assessment involved using a radiative transfer model to propagate surface skin temperature measurements made at the time of the L5 overpass to predict the at-sensor radiance. The predicted radiance was then convolved with the L5B6 system response function to obtain the predicted L5B6 radiance, which was then compared with the radiance measured by L5B6. Twenty-four cloud-free scenes acquired between 1999 and 2003 were used in the analysis with scene temperatures ranging between 4 °C and 22 °C. The results indicate L5B6 had a radiance bias of 2.5% (1.6 °C) in late 1999, which gradually decreased to 0.8% (0.5 °C) in mid-2002. Since that time, the bias has remained positive (predicted minus measured) and between 0.3% (0.2 °C) and 1.4% (0.9 °C). The cause for the cold bias (L5 radiances are lower than expected) is unresolved, but likely related to changes in instrument temperature associated with changes in instrument usage. The in situ data were then used to develop algorithms to recover the skin and bulk temperature of the water by regressing the L5B6 radiance and the National Center for Environmental Prediction (NCEP) total column water data to either the skin or bulk temperature. Use of the NCEP data provides an alternative approach to the split-window approach used with instruments that have two thermal infrared bands. The results indicate the surface skin and bulk temperature can be recovered with a standard error of 0.6 °C. This error is larger than errors obtained with other instruments due, in part, to the calibration bias. L5 provides the only long-duration high spatial resolution thermal infrared measurements of the land surface. If these data are to be used effectively in studies designed to monitor change, it is essential to continue to monitor instrument performance in-flight and develop quantitative algorithms for recovering surface temperature.
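The regression step described above, relating band-6 radiance and NCEP column water to skin temperature, might look roughly like the sketch below; all numbers are synthetic placeholders, not Lake Tahoe data:

```python
# Fit skin temperature as a linear function of band-6 radiance and total column
# water vapor, then report the standard error of the fit.
import numpy as np

rng = np.random.default_rng(4)
n = 24                                                  # number of scenes (as in the study)
radiance = rng.uniform(8.0, 11.0, size=n)               # at-sensor radiance (assumed range)
column_water = rng.uniform(0.5, 2.5, size=n)            # NCEP column water (assumed range)
skin_t = 5.0 + 2.0 * radiance - 1.5 * column_water + rng.normal(0, 0.5, size=n)

A = np.column_stack([np.ones(n), radiance, column_water])
coef, *_ = np.linalg.lstsq(A, skin_t, rcond=None)

residuals = skin_t - A @ coef
print(coef)                                             # intercept and two regression slopes
print(np.std(residuals))                                # standard error of the fit, ~0.5
```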
ERIC Educational Resources Information Center
Onwuegbuzie, Anthony J.; Daniel, Larry G.
The purposes of this paper are to identify common errors made by researchers when dealing with reliability coefficients and to outline best practices for reporting and interpreting reliability coefficients. Common errors that researchers make are: (1) stating that the instruments are reliable; (2) incorrectly interpreting correlation coefficients;…
A PC program for estimating measurement uncertainty for aeronautics test instrumentation
NASA Technical Reports Server (NTRS)
Blumenthal, Philip Z.
1995-01-01
A personal computer program was developed which provides aeronautics and operations engineers at Lewis Research Center with a uniform method for quickly obtaining values for the uncertainty in test measurements and research results. The software package used for performing the calculations is Mathcad 4.0, a Windows version of a program which provides an interactive user interface for entering values directly into equations with immediate display of results. The error contribution from each component of the system is identified individually in terms of the parameter measured. The final result is given in common units, SI units, and percent of full scale range. The program also lists the specifications for all instrumentation and calibration equipment used for the analysis. It provides a presentation-quality printed output which can be used directly for reports and documents.
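The underlying calculation such a program automates is essentially a root-sum-square combination of component errors; the instrument list and error values below are invented placeholders, not the program's actual specifications:

```python
# Combine individual instrument error contributions by root-sum-square and
# express the result both in measurement units and as percent of full scale.
import math

full_scale_kpa = 350.0                        # transducer full-scale range (assumed)
contributions_kpa = {
    "transducer nonlinearity": 0.35,
    "calibration standard":    0.20,
    "signal conditioning":     0.15,
    "A/D quantization":        0.05,
}

rss_kpa = math.sqrt(sum(e**2 for e in contributions_kpa.values()))
print(f"combined uncertainty: {rss_kpa:.3f} kPa "
      f"({100 * rss_kpa / full_scale_kpa:.2f}% of full scale)")
for name, e in contributions_kpa.items():
    print(f"  {name}: {100 * e**2 / rss_kpa**2:.1f}% share of combined variance")
```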
NASA Astrophysics Data System (ADS)
Luchowski, R.; Kapusta, P.; Szabelski, M.; Sarkar, P.; Borejdo, J.; Gryczynski, Z.; Gryczynski, I.
2009-09-01
Förster resonance energy transfer (FRET) can be utilized to achieve ultrashort fluorescence responses in time-domain fluorometry. In a poly(vinyl alcohol) matrix, the presence of 60 mM Rhodamine 800 acceptor shortens the fluorescence lifetime of a pyridine 1 donor to about 20 ps. Such a fast fluorescence response is very similar to the instrument response function (IRF) obtained using scattered excitation light. A solid fluorescent sample (e.g., a film) with a picosecond lifetime is ideal for IRF measurements and particularly useful for time-resolved microscopy. Avalanche photodiode detectors, commonly used in this field, exhibit color-dependent timing responses. We demonstrate that recording the fluorescence decay of the proposed FRET-based reference sample yields a better IRF approximation than the conventional light-scattering method and therefore avoids systematic errors in decay curve analysis.
Simultaneous localization and calibration for electromagnetic tracking systems.
Sadjadi, Hossein; Hashtrudi-Zaad, Keyvan; Fichtinger, Gabor
2016-06-01
In clinical environments, field distortion can cause significant electromagnetic tracking errors. Therefore, dynamic calibration of electromagnetic tracking systems is essential to compensate for measurement errors. It is proposed to integrate the motion model of the tracked instrument with redundant EM sensor observations and to apply a simultaneous localization and mapping algorithm in order to accurately estimate the pose of the instrument and create a map of the field distortion in real time. Experiments were conducted in the presence of ferromagnetic and electrically conductive field-distorting objects, and the results were compared with those of a conventional sensor fusion approach. The proposed method reduced the tracking error from 3.94±1.61 mm to 1.82±0.62 mm in the presence of steel, and from 0.31±0.22 mm to 0.11±0.14 mm in the presence of aluminum. With reduced tracking error and independence from external tracking devices or pre-operative calibrations, the approach is promising for reliable EM navigation in various clinical procedures. Copyright © 2015 John Wiley & Sons, Ltd.
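A full SLAM formulation is beyond a short example, but the flavor of fusing a motion model with redundant sensor readings can be shown with a one-dimensional Kalman filter; all noise levels and the constant-velocity model are assumptions for illustration, not the paper's algorithm:

```python
# Constant-velocity Kalman filter fusing two redundant (noisy) position sensors.
import numpy as np

rng = np.random.default_rng(5)
dt, steps = 0.05, 200
F = np.array([[1, dt], [0, 1]])              # constant-velocity motion model
Q = np.diag([1e-4, 1e-3])                    # process noise (assumed)
H = np.array([[1.0, 0.0], [1.0, 0.0]])       # two redundant position sensors
R = np.diag([0.5**2, 0.5**2])                # sensor noise (assumed, mm^2)

x_true = np.array([0.0, 2.0])                # position (mm), velocity (mm/s)
x, P = np.zeros(2), np.eye(2)
errors = []
for _ in range(steps):
    x_true = F @ x_true
    z = H @ x_true + rng.normal(0, 0.5, size=2)          # redundant measurements
    x, P = F @ x, F @ P @ F.T + Q                        # predict
    S = H @ P @ H.T + R                                  # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    errors.append(abs(x[0] - x_true[0]))

print(np.mean(errors))   # fused position error, smaller than a single-sensor error (~0.5 mm)
```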
Analysis of RFI Statistics for Aquarius RFI Detection and Mitigation Improvements
NASA Technical Reports Server (NTRS)
de Matthaeis, Paolo; Soldo, Yan; Le Vine, David M.
2016-01-01
Aquarius is an L-band active/passive sensor designed to globally map sea surface salinity from space. Two instruments, a radar scatterometer and a radiometer, observe the same surface footprint almost simultaneously. The radiometer is the primary instrument for sensing sea surface salinity (SSS), while the scatterometer is included to provide a correction for sea surface roughness, which is a primary source of error in the salinity retrieval. Although the primary objective is the measurement of SSS, the instrument combination operates continuously, acquiring data over land and sea ice as well. An important feature of the data processing includes detection and mitigation of Radio Frequency Interference (RFI) which is done separately for both active and passive instruments. Correcting for RFI is particularly critical over ocean because of the high accuracy required in the brightness temperature measurements for SSS retrieval. It is also necessary for applications of the Aquarius data over land, where man-made interference is widespread, even though less accuracy is required in this case. This paper will provide an overview of the current status of the Aquarius RFI processing and an update on the ongoing work on the improvement of the RFI detection and mitigation performance.
A novel variable baseline visibility detection system and its measurement method
NASA Astrophysics Data System (ADS)
Li, Meng; Jiang, Li-hui; Xiong, Xing-long; Zhang, Guizhong; Yao, JianQuan
2017-10-01
As an important meteorological observation instrument, the visibility meter helps ensure the safety of traffic operations. However, because of optical system contamination and sampling error, the accuracy and stability of such equipment are difficult to maintain in low-visibility environments. To address this problem, a novel measurement instrument based on multiple baselines was designed; it essentially acts as an atmospheric transmission meter with a movable optical receiver and applies a weighted least squares method to process the signal. Theoretical analysis and experiments in a real atmospheric environment support this technique.
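Under a simple Beer-Lambert assumption, the multi-baseline weighted least-squares idea can be sketched as fitting the extinction coefficient from log-transmittance measured at several receiver positions; the baselines, noise level, and the Koschmieder visibility conversion below are assumptions, not the authors' exact algorithm:

```python
# With T(L) = exp(-sigma*L), measurements at several baselines give
# ln T = -sigma*L; a weighted least-squares slope yields sigma, and
# visibility follows from the Koschmieder relation V = -ln(0.05)/sigma.
import numpy as np

rng = np.random.default_rng(6)
sigma_true = 0.06                                       # extinction, 1/m (assumed, low visibility)
baselines = np.array([10.0, 20.0, 30.0, 40.0, 50.0])    # movable-receiver positions, m (assumed)
T = np.exp(-sigma_true * baselines) * (1 + rng.normal(0, 0.01, size=baselines.size))

y = np.log(T)
w = np.full(baselines.size, 1.0 / 0.01**2)              # equal weights here; unequal noise changes them
sigma_hat = -np.sum(w * baselines * y) / np.sum(w * baselines**2)   # WLS slope through origin

print(sigma_hat)                                        # close to 0.06
print(-np.log(0.05) / sigma_hat)                        # visibility estimate, ~50 m
```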
Evaluation of UT/LS hygrometer accuracy by intercomparison during the NASA MACPEX mission.
Rollins, A W; Thornberry, T D; Gao, R S; Smith, J B; Sayres, D S; Sargent, M R; Schiller, C; Krämer, M; Spelten, N; Hurst, D F; Jordan, A F; Hall, E G; Vömel, H; Diskin, G S; Podolske, J R; Christensen, L E; Rosenlof, K H; Jensen, E J; Fahey, D W
2014-02-27
Acquiring accurate measurements of water vapor at the low mixing ratios (< 10 ppm) encountered in the upper troposphere and lower stratosphere (UT/LS) has proven to be a significant analytical challenge, as evidenced by persistent disagreements between high-precision hygrometers. These disagreements have caused uncertainties in the description of the physical processes controlling dehydration of air in the tropical tropopause layer and entry of water into the stratosphere and have hindered validation of satellite water vapor retrievals. A 2011 airborne intercomparison of a large group of in situ hygrometers onboard the NASA WB-57F high-altitude research aircraft and balloons has provided an excellent opportunity to evaluate progress in the scientific community toward improved measurement agreement. In this work we intercompare the measurements from the Midlatitude Airborne Cirrus Properties Experiment (MACPEX) and discuss the quality of agreement. Differences between values reported by the instruments were reduced in comparison to some prior campaigns but were nonnegligible and on the order of 20% (0.8 ppm). Our analysis suggests that unrecognized errors in the quantification of instrumental background for some or all of the hygrometers are a likely cause. Until these errors are understood, differences at this level will continue to somewhat limit our understanding of cirrus microphysical processes and dehydration in the tropical tropopause layer.
NASA Technical Reports Server (NTRS)
Stone, H. W.; Powell, R. W.
1985-01-01
A six-degree-of-freedom simulation analysis was performed for the space shuttle orbiter during entry from Mach 8 to Mach 1.5 with realistic off-nominal conditions, using the flight control systems defined by the shuttle contractor. The off-nominal conditions included aerodynamic uncertainties in extrapolating from wind-tunnel-derived characteristics to full-scale flight characteristics, uncertainties in the estimates of the reaction control system interaction with the orbiter aerodynamics, an error in deriving the angle of attack from onboard instrumentation, the failure of two of the four reaction control system thrusters on each side, and a lateral center-of-gravity offset coupled with vehicle and flow asymmetries. With combinations of these off-nominal conditions, the flight control system performed satisfactorily. At low hypersonic speeds, a few cases exhibited unacceptable performance when errors in deriving the angle of attack from the onboard instrumentation were modeled. The orbiter was unable to maintain lateral trim in some cases between Mach 5 and Mach 2 and exhibited limit cycle tendencies or residual roll oscillations between Mach 3 and Mach 1. Piloting techniques and changes in some gains and switching times in the flight control system are suggested to help alleviate these problems.
Epinephrine Auto-Injector Versus Drawn Up Epinephrine for Anaphylaxis Management: A Scoping Review.
Chime, Nnenna O; Riese, Victoria G; Scherzer, Daniel J; Perretta, Julianne S; McNamara, LeAnn; Rosen, Michael A; Hunt, Elizabeth A
2017-08-01
Anaphylaxis is a life-threatening event. Most clinical symptoms of anaphylaxis can be reversed by prompt intramuscular administration of epinephrine, using either an auto-injector or epinephrine drawn up in a syringe; delays and errors may be fatal. The aim of this scoping review is to identify and compare errors associated with use of epinephrine drawn up in a syringe versus epinephrine auto-injectors in order to assist hospitals as they choose which approach minimizes the risk of adverse events for their patients. PubMed, Embase, CINAHL, Web of Science, and the Cochrane Library were searched using terms agreed to a priori. We reviewed human and simulation studies reporting errors associated with the use of epinephrine in anaphylaxis. There were multiple screening stages with evolving feedback. Each study was independently assessed by two reviewers for eligibility. Data were extracted using an instrument modeled on that of Zaza et al and grouped into themes. Three main themes were noted: 1) ergonomics, 2) dosing errors, and 3) errors due to route of administration. Significant knowledge gaps in the operation of epinephrine auto-injectors among healthcare providers, patients, and caregivers were identified. For epinephrine in a syringe, there were more frequent reports of incorrect dosing and erroneous IV administration with associated adverse cardiac events. For the epinephrine auto-injector, unintentional administration to the digit was an error reported on multiple occasions. This scoping review highlights knowledge gaps and a diverse set of errors regardless of the approach to epinephrine preparation during management of anaphylaxis. More potentially life-threatening errors are reported for epinephrine drawn up in a syringe than for the auto-injectors. The impact of these knowledge gaps and potentially fatal errors on patient outcomes, cost, and quality of care is worthy of further investigation.
The Grapefruit: An Alternative Arthroscopic Tool Skill Platform.
Molho, David A; Sylvia, Stephen M; Schwartz, Daniel L; Merwin, Sara L; Levy, I Martin
2017-08-01
To establish the construct validity of an arthroscopic training model that teaches arthroscopic tool skills including triangulation, grasping, precision biting, implant delivery, and ambidexterity and uses a whole grapefruit as its training platform. For the grapefruit training model (GTM), an arthroscope and arthroscopic instruments were introduced through portals cut in the skin of a whole prepared grapefruit. After institutional review board approval, participants performed a set of tasks inside the grapefruit. Performance for each component was assessed by recording errors, achievement of criteria, and time to completion. A total of 19 medical students, orthopaedic surgery residents, and fellowship-trained orthopaedic surgeons were included in the analysis and were divided into 3 groups based on arthroscopic experience. One-way analysis of variance (ANOVA) and the post hoc Tukey test were used for statistical analysis. One-way ANOVA showed significant differences in both time to completion and errors between groups, F(2, 16) = 16.10, P < .001; F(2, 16) = 17.43, P < .001. Group A had a longer time to completion and more errors than group B (P = .025, P = .019), and group B had a longer time to completion and more errors than group C (P = .023, P = .018). The GTM is an easily assembled alternative arthroscopic training model that bridges the gap between box trainers, cadavers, and virtual reality simulators. Our findings suggest construct validity when evaluating its use for teaching basic arthroscopic tool skills. As such, it is a useful addition to the arthroscopic training toolbox. There is a need for validated low-cost arthroscopic training models that are easily accessible. Copyright © 2017 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Global and regional kinematics with GPS
NASA Technical Reports Server (NTRS)
King, Robert W.
1994-01-01
The inherent precision of the doubly differenced phase measurement and the low cost of instrumentation made GPS the space geodetic technique of choice for regional surveys as soon as the constellation reached acceptable geometry in the area of interest: 1985 in western North America, the early 1990's in most of the world. Instrument and site-related errors for horizontal positioning are usually less than 3 mm, so that the dominant source of error is uncertainty in the reference frame defined by the satellites' orbits and the tracking stations used to determine them. Prior to about 1992, when the tracking network for most experiments was globally sparse, the number of fiducial sites or the level at which they could be tied to an SLR or VLBI reference frame usually set the accuracy limit. Recently, with a global network of over 30 stations, the limit is set more often by deficiencies in models for non-gravitational forces acting on the satellites. For regional networks in the northern hemisphere, reference frame errors are currently about 3 parts per billion (ppb) in horizontal position, allowing centimeter-level accuracies over intercontinental distances and less than 1 mm for a 100 km baseline. The accuracy of GPS measurements for monitoring height variations is generally 2-3 times worse than for horizontal motions. As for VLBI, the primary source of error is unmodeled fluctuations in atmospheric water vapor, but both reference frame uncertainties and some instrument errors are more serious for vertical than horizontal measurements. Under good conditions, daily repeatabilities at the level of 10 mm rms were achieved. This paper will summarize the current accuracy of GPS measurements and their implication for the use of SLR to study regional kinematics.
The Gemini Planet Imager Calibration Wavefront Sensor Instrument
NASA Technical Reports Server (NTRS)
Wallace, J. Kent; Burruss, Rick S.; Bartos, Randall D.; Trinh, Thang Q.; Pueyo, Laurent A.; Fregoso, Santos F.; Angione, John R.; Shelton, J. Chris
2010-01-01
The Gemini Planet Imager is an extreme adaptive optics system that will employ an apodized-pupil coronagraph to make direct detections of faint companions of nearby stars to a contrast level of 10(exp -7) within a few lambda/D of the parent star. Such high contrasts from the ground require exquisite wavefront sensing and control both for the AO system and for the coronagraph. Un-sensed non-common-path phase and amplitude errors after the wavefront sensor dichroic but before the coronagraph would lead to speckles which would ultimately limit the contrast. The calibration wavefront system for GPI will measure the complex wavefront at the system pupil before the apodizer and provide slow phase corrections to the AO system to mitigate errors that would cause a loss in contrast. The calibration wavefront sensor instrument for GPI has been built. We describe the instrument and its performance.
Metrological Support in Technosphere Safety
NASA Astrophysics Data System (ADS)
Akhobadze, G. N.
2017-11-01
The principle of metrological support in technosphere safety is considered, based on practical metrology. Theoretical aspects of the accuracy and errors of measuring instruments intended for diagnostics and control of the technosphere under the influence of factors harmful to human beings are presented. The necessity of choosing measuring devices with high metrological characteristics, according to accuracy class and the contact of sensitive elements with the medium under control, is shown. The types of additional errors in measuring instruments that arise when they are affected by environmental influences are described. A specific example of applying analyzers to control industrial emissions and to measure oil and particulate matter in wastewater is given; it allows the advantages and disadvantages of the analyzers to be assessed. Recommendations regarding the missing metrological characteristics of the instruments in use are also provided. Continuous monitoring of the technosphere that takes these metrological principles into account is expected to support efficient forecasting of technosphere development and appropriate decision making.
Application of Least Mean Square Algorithms to Spacecraft Vibration Compensation
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Nagchaudhuri, Abhijit
1998-01-01
This paper describes the application of the Least Mean Square (LMS) algorithm in tandem with the Filtered-X Least Mean Square algorithm for controlling a science instrument's line-of-sight pointing. Pointing error is caused by a periodic disturbance and spacecraft vibration. A least mean square algorithm is used on-orbit to produce the transfer function between the instrument's servo-mechanism and error sensor. The result is a set of adaptive transversal filter weights tuned to the transfer function. The Filtered-X LMS algorithm, which is an extension of the LMS, tunes a set of transversal filter weights to the transfer function between the disturbance source and the servo-mechanism's actuation signal. The servo-mechanism's resulting actuation counters the disturbance response and thus maintains accurate science instrument pointing. A simulation model of the Upper Atmosphere Research Satellite is used to demonstrate the algorithms.
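A bare-bones LMS system-identification loop, which is the first step the paper describes (estimating the transfer function between the servo-mechanism and the error sensor), might look like the sketch below; the plant taps, step size, and filter length are arbitrary illustration values, and the Filtered-X control stage is omitted:

```python
# Adapt FIR filter weights so the filter output tracks an unknown plant's
# response, i.e. identify the transfer function from input to error sensor.
import numpy as np

rng = np.random.default_rng(7)
plant = np.array([0.6, -0.3, 0.1])           # unknown FIR plant (assumed)
n_taps, mu, n_samples = 8, 0.02, 20000

w = np.zeros(n_taps)                          # adaptive transversal filter weights
x_buf = np.zeros(n_taps)
x = rng.normal(size=n_samples)                # persistently exciting input
d = np.convolve(x, plant)[:n_samples]         # plant output seen by the error sensor

for k in range(n_samples):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = x[k]
    y = w @ x_buf                             # filter estimate of the plant output
    e = d[k] - y                              # error signal
    w = w + 2 * mu * e * x_buf                # LMS weight update

print(np.round(w[:4], 3))                     # leading taps converge toward [0.6, -0.3, 0.1, 0]
```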
Zhang, Jiyang; Ma, Jie; Dou, Lei; Wu, Songfeng; Qian, Xiaohong; Xie, Hongwei; Zhu, Yunping; He, Fuchu
2009-02-01
The hybrid linear trap quadrupole Fourier-transform (LTQ-FT) ion cyclotron resonance mass spectrometer, an instrument with high accuracy and resolution, is widely used in the identification and quantification of peptides and proteins. However, time-dependent errors in the system may lead to deterioration of the accuracy of these instruments, negatively influencing the determination of the mass error tolerance (MET) in database searches. Here, a comprehensive discussion of LTQ/FT precursor ion mass error is provided. On the basis of an investigation of the mass error distribution, we propose an improved recalibration formula and introduce a new tool, FTDR (Fourier-transform data recalibration), that employs a graphical user interface (GUI) for automatic calibration. It was found that the calibration could adjust the mass error distribution to more closely approximate a normal distribution and reduce the standard deviation (SD). Consequently, we present a new strategy, LDSF (Large MET database search and small MET filtration), for database search MET specification and validation of database search results. As the name implies, a large-MET database search is conducted and the search results are then filtered using the statistical MET estimated from high-confidence results. By applying this strategy to a standard protein data set and a complex data set, we demonstrate that LDSF can significantly improve the sensitivity of the result validation procedure.
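The LDSF idea, search wide and then estimate a statistical tolerance from high-confidence matches, can be caricatured in a few lines; the error distribution and confidence selection below are synthetic stand-ins, not FTDR's actual behavior:

```python
# Estimate the precursor mass-error offset and spread from high-confidence
# identifications, recalibrate, and filter all matches with the resulting MET.
import numpy as np

rng = np.random.default_rng(8)
ppm_error = rng.normal(1.2, 0.8, size=5000)            # all matches: systematic shift + spread
high_conf = ppm_error[rng.random(5000) < 0.3]           # stand-in for high-scoring matches

mu, sd = high_conf.mean(), high_conf.std()              # recalibration offset and spread
recal = ppm_error - mu                                   # recalibrated mass errors
keep = np.abs(recal) < 3 * sd                            # small, data-driven MET filter

print(mu, sd)                 # estimated systematic offset (~1.2 ppm) and SD (~0.8 ppm)
print(keep.mean())            # fraction of matches retained by the filtered tolerance
```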
Electric vehicle power train instrumentation: Some constraints and considerations
NASA Technical Reports Server (NTRS)
Triner, J. E.; Hansen, I. G.
1977-01-01
The application of pulse modulation control (choppers) to dc motors creates unique instrumentation problems. In particular, the high harmonic components contained in the current waveforms require frequency response accommodations not normally considered in dc instrumentation. In addition to current sensing, accurate power measurement requires not only adequate frequency response but must also address phase errors caused by the finite bandwidths and component characteristics involved. The implications of these problems are assessed.
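A quick numeric example shows why phase errors matter for power measurement on harmonic-rich chopper waveforms; the voltage, current, phase, and phase-error values below are assumptions for illustration only:

```python
# For a harmonic component with power factor cos(phi), a phase error dphi in
# the current channel changes the indicated power by cos(phi + dphi)/cos(phi).
import math

v_rms, i_rms = 100.0, 20.0         # rms voltage and current of one harmonic (assumed)
phi = math.radians(60.0)           # true phase between V and I (assumed)
dphi = math.radians(2.0)           # instrumentation phase error at that harmonic (assumed)

p_true = v_rms * i_rms * math.cos(phi)
p_meas = v_rms * i_rms * math.cos(phi + dphi)
print(p_true, p_meas, 100 * (p_meas - p_true) / p_true)   # about -6% error from only 2 degrees
```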
Neural network cloud top pressure and height for MODIS
NASA Astrophysics Data System (ADS)
Håkansson, Nina; Adok, Claudia; Thoss, Anke; Scheirer, Ronald; Hörnquist, Sara
2018-06-01
Cloud top height retrieval from imager instruments is important for nowcasting and for satellite climate data records. A neural network approach for cloud top height retrieval from the imager instrument MODIS (Moderate Resolution Imaging Spectroradiometer) is presented. The neural networks are trained using cloud top layer pressure data from the CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) dataset. Results are compared with two operational reference algorithms for cloud top height: the MODIS Collection 6 Level 2 height product and the cloud top temperature and height algorithm in the 2014 version of the NWC SAF (EUMETSAT (European Organization for the Exploitation of Meteorological Satellites) Satellite Application Facility on Support to Nowcasting and Very Short Range Forecasting) PPS (Polar Platform System). All three techniques are evaluated using both CALIOP and CPR (Cloud Profiling Radar for CloudSat (CLOUD SATellite)) height. Instruments like AVHRR (Advanced Very High Resolution Radiometer) and VIIRS (Visible Infrared Imaging Radiometer Suite) contain fewer channels useful for cloud top height retrievals than MODIS; therefore, several different neural networks are investigated to test how infrared channel selection influences retrieval performance. A network using only the channels available for the AVHRR1 instrument is also trained and evaluated. To examine the contribution of different variables, networks with fewer variables are trained. It is shown that variables containing imager information for neighboring pixels are very important. The error distributions of the involved cloud top height algorithms are found to be non-Gaussian. Different descriptive statistic measures are presented, and it is exemplified that bias and SD (standard deviation) can be misleading for non-Gaussian distributions. The median and mode are found to better describe the tendency of the error distributions, and IQR (interquartile range) and MAE (mean absolute error) are found to give the most useful information on the spread of the errors. For all descriptive statistics presented (MAE, IQR, RMSE (root mean square error), SD, mode, median, bias, and percentage of absolute errors above 0.25, 0.5, 1, and 2 km), the neural networks perform better than the reference algorithms, whether validated with CALIOP or with CPR (CloudSat). The neural networks using the brightness temperatures at 11 and 12 µm show at least 32 % (or 623 m) lower MAE compared to the two operational reference algorithms when validating with CALIOP height. Validation with CPR (CloudSat) height gives at least a 25 % (or 430 m) reduction of MAE.
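The point about descriptive statistics for non-Gaussian errors can be illustrated with synthetic height errors (the numbers below are not the study's results):

```python
# Why median, MAE and IQR describe a skewed error distribution better than bias and SD.
import numpy as np

rng = np.random.default_rng(9)
# Height errors in metres: mostly small, plus a heavy tail of large misses.
errors = np.concatenate([rng.normal(0, 300, 9000), rng.normal(4000, 1500, 1000)])

q25, q75 = np.percentile(errors, [25, 75])
stats = {
    "bias (mean)": errors.mean(),
    "SD": errors.std(),
    "median": np.median(errors),
    "MAE": np.abs(errors).mean(),
    "IQR": q75 - q25,
    "P(|err| > 1 km)": (np.abs(errors) > 1000).mean(),
}
for name, value in stats.items():
    print(f"{name:>18}: {value:10.2f}")
# The mean and SD are inflated by the tail, while the median and IQR still
# describe where the bulk of the errors lie.
```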
Thermal and heat flow instrumentation for the space shuttle Thermal Protection System
NASA Technical Reports Server (NTRS)
Hartman, G. J.; Neuner, G. J.; Pavlosky, J.
1974-01-01
The 100 mission lifetime requirement for the space shuttle orbiter vehicle dictates a unique set of requirements for the Thermal Protection System (TPS) thermal and heat flow instrumentation. This paper describes the design and development of such instrumentation with emphasis on assessment of the accuracy of the measurements when the instrumentation is an integral part of the TPS. The temperature and heat flow sensors considered for this application are described and the optimum choices discussed. Installation techniques are explored and the resulting impact on the system error defined.
Development of a Low-Cost Sub-Scale Aircraft for Flight Research: The FASER Project
NASA Technical Reports Server (NTRS)
Owens, Donald B.; Cox, David E.; Morelli, Eugene A.
2006-01-01
An inexpensive unmanned sub-scale aircraft was developed to conduct frequent flight test experiments for research and demonstration of advanced dynamic modeling and control design concepts. This paper describes the aircraft, flight systems, flight operations, and data compatibility including details of some practical problems encountered and the solutions found. The aircraft, named Free-flying Aircraft for Sub-scale Experimental Research, or FASER, was outfitted with high-quality instrumentation to measure aircraft inputs and states, as well as vehicle health parameters. Flight data are stored onboard, but can also be telemetered to a ground station in real time for analysis. Commercial-off-the-shelf hardware and software were used as often as possible. The flight computer is based on the PC104 platform, and runs xPC-Target software. Extensive wind tunnel testing was conducted with the same aircraft used for flight testing, and a six degree-of-freedom simulation with nonlinear aerodynamics was developed to support flight tests. Flight tests to date have been conducted to mature the flight operations, validate the instrumentation, and check the flight data for kinematic consistency. Data compatibility analysis showed that the flight data are accurate and consistent after corrections are made for estimated systematic instrumentation errors.