An Enclosed Laser Calibration Standard
NASA Astrophysics Data System (ADS)
Adams, Thomas E.; Fecteau, M. L.
1985-02-01
We have designed, evaluated and calibrated an enclosed, safety-interlocked laser calibration standard for use in US Army Secondary Reference Calibration Laboratories. This Laser Test Set Calibrator (LTSC) represents the Army's first-generation field laser calibration standard. Twelve LTSCs are now being fielded worldwide. The main requirement on the LTSC is to provide calibration support for the Test Set (TS3620) which, in turn, is a GO/NO GO tester of the Hand-Held Laser Rangefinder (AN/GVS-5). However, we believe its design is flexible enough to accommodate the calibration of other laser test, measurement and diagnostic equipment (TMDE) provided that single-shot capability is adequate to perform the task. In this paper we describe the salient aspects and calibration requirements of the AN/GVS-5 Rangefinder and the Test Set which drove the basic LTSC design. Also, we detail our evaluation and calibration of the LTSC, in particular, the LTSC system standards. We conclude with a review of our error analysis from which uncertainties were assigned to the LTSC calibration functions.
Task Identification and Evaluation System (TIES)
1991-08-01
Calibrate AN/AVM-11A HUD test sets -- 127. Calibrate AN/AWM-55 ASCU test sets -- 128. Calibrate 500 RM tally punched tape readers -- 129. Perform ... HUD test sets -- 132. Perform fault isolation of AN/AWM-55 ASCU test sets -- 133. Perform fault isolation of 500 RM tally punched tape ... AN/AVM-11A HUD test sets -- 137. Perform self-tests of AN/AWM-55 ASCU test sets -- G. MAINTAINING A-7D MANUAL TEST SETS -- 138. Adjust SM-661/AS-388 ...
NASA Technical Reports Server (NTRS)
Groot, J. S.
1990-01-01
In August 1989 the NASA/JPL airborne P/L/C-band DC-8 SAR participated in several remote sensing campaigns in Europe. Amongst other test sites, data were obtained over the Flevopolder test site in the Netherlands on 16 August. The Dutch X-band SLAR was flown on the same date and imaged parts of the same area as the SAR. To calibrate the two imaging radars, a set of 33 calibration devices was deployed. Sixteen trihedrals were used to calibrate a part of the SLAR data. This short paper outlines the X-band SLAR characteristics, the experimental set-up and the calibration method used to calibrate the SLAR data. Finally, some preliminary results are given.
Tian, Hai-Qing; Wang, Chun-Guang; Zhang, Hai-Jun; Yu, Zhi-Hong; Li, Jian-Kang
2012-11-01
Outlier samples strongly influence the precision of the calibration model in soluble solids content measurement of melons using NIR spectra. According to the possible sources of outlier samples, three methods (predicted concentration residual test; Chauvenet test; leverage and studentized residual test) were used to discriminate these outliers. Nine suspicious outliers were detected in the calibration set, which included 85 fruit samples. Because the 9 suspicious outlier samples might contain some non-outlier samples, they were returned to the model one by one to see whether or not they influenced the model and its prediction precision. In this way, 5 samples that were helpful to the model rejoined the calibration set, and a new model was developed with a correlation coefficient (r) of 0.889 and a root mean square error of calibration (RMSEC) of 0.601 degrees Brix. For 35 unknown samples, the root mean square error of prediction (RMSEP) was 0.854 degrees Brix. The performance of this model was better than that of the model developed with no outliers eliminated from the calibration set (r = 0.797, RMSEC = 0.849 degrees Brix, RMSEP = 1.19 degrees Brix), and it was more representative and stable than the model developed with all 9 samples eliminated from the calibration set (r = 0.892, RMSEC = 0.605 degrees Brix, RMSEP = 0.862 degrees Brix).
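A minimal numpy sketch of the leverage and studentized-residual screen named above, under ordinary linear-model assumptions; the cutoffs (leverage above 3p/n, |t| above 2.5) are common rules of thumb, not thresholds taken from this paper.

```python
import numpy as np

def leverage_and_studentized_residuals(X, y):
    """Hat-matrix leverages and internally studentized residuals
    for a linear calibration fit."""
    X1 = np.column_stack([np.ones(len(X)), X])      # add intercept column
    H = X1 @ np.linalg.pinv(X1.T @ X1) @ X1.T       # hat matrix
    h = np.diag(H)                                  # leverages
    resid = y - H @ y                               # raw residuals
    s2 = resid @ resid / (len(y) - X1.shape[1])
    t = resid / np.sqrt(s2 * (1.0 - h))             # studentized residuals
    return h, t

# Flag suspicious calibration samples (synthetic data for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(85, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=85)
h, t = leverage_and_studentized_residuals(X, y)
p = X.shape[1] + 1
print(np.where((h > 3 * p / len(y)) | (np.abs(t) > 2.5))[0])
```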
Toropov, Andrey A; Toropova, Alla P; Raska, Ivan; Benfenati, Emilio
2010-04-01
Three different splits of 55 antineoplastic agents into the subtraining set (n = 22), the calibration set (n = 21), and the test set (n = 12) have been examined. Using the correlation balance of SMILES-based optimal descriptors, quite satisfactory models for the octanol/water partition coefficient were obtained on all three splits. The correlation balance is the optimization of a one-variable model with a target function that provides both the maximal values of the correlation coefficient for the subtraining and calibration sets and the minimum of the difference between the above-mentioned correlation coefficients. Thus, the calibration set is a preliminary test set. Copyright (c) 2009 Elsevier Masson SAS. All rights reserved.
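The target function described here can be written compactly; the sketch below is one plausible form, with the gap-penalty weight as an assumption rather than the paper's published constant.

```python
def correlation_balance(r_subtrain, r_calibration, penalty=1.0):
    """Reward high correlation on both the subtraining and calibration
    sets while penalizing the difference between the two coefficients."""
    return r_subtrain + r_calibration - penalty * abs(r_subtrain - r_calibration)

# A balanced 0.80/0.78 pair scores higher than an unbalanced 0.95/0.60 pair:
print(correlation_balance(0.80, 0.78), correlation_balance(0.95, 0.60))
```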
ERIC Educational Resources Information Center
Snyder, James
2010-01-01
This dissertation research examined the changes in item RIT calibration that occurred when adding audio to a set of currently calibrated RIT items and then placing these new items as field test items in the modified assessments on the NWEA MAP test platform. The researcher used test results from over 600 students in the Poway School District in…
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.
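As a generic illustration of the confidence and prediction intervals discussed above, here is a least-squares sketch under standard linear-model assumptions; it is not the multivariate estimation code used for these sensors.

```python
import numpy as np
from scipy import stats

def fit_with_intervals(x, y, deg=2, alpha=0.05):
    """Polynomial calibration fit with pointwise confidence half-widths
    on the fitted curve and prediction half-widths for new readings."""
    V = np.vander(x, deg + 1)
    beta, *_ = np.linalg.lstsq(V, y, rcond=None)
    n, p = V.shape
    resid = y - V @ beta
    s2 = resid @ resid / (n - p)                  # residual variance
    C = s2 * np.linalg.inv(V.T @ V)               # parameter covariance
    se_fit = np.sqrt(np.einsum('ij,jk,ik->i', V, C, V))
    t = stats.t.ppf(1 - alpha / 2, n - p)
    return beta, t * se_fit, t * np.sqrt(s2 + se_fit**2)
```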
Boeing infrared sensor (BIRS) calibration facility
NASA Technical Reports Server (NTRS)
Hazen, John D.; Scorsone, L. V.
1990-01-01
The Boeing Infrared Sensor (BIRS) Calibration Facility represents a major capital investment in optical and infrared technology. The facility was designed and built for the calibration and testing of the new generation of large-aperture long-wave infrared (LWIR) sensors, seekers, and related technologies. Capability exists to perform both radiometric and goniometric calibrations of large infrared sensors under simulated environmental operating conditions. The system is presently configured for endoatmospheric calibrations with a uniform background field which can be set to simulate the expected mission background levels. During calibration, the sensor under test is also exposed to expected mission temperatures and pressures within the test chamber. Capability exists to convert the facility for exoatmospheric testing. The configuration of the system is described along with its hardware elements, and changes made to date are addressed.
A Method to Test Model Calibration Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judkoff, Ron; Polly, Ben; Neymark, Joel
This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: (1) accuracy of the post-retrofit energy savings prediction, (2) closure on the 'true' input parameter values, and (3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
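A sketch of the three figures of merit, assuming access to the surrogate "truth" (savings, input parameters, utility bills) generated by the simulation; the function and argument names are illustrative.

```python
import numpy as np

def figures_of_merit(savings_pred, savings_true, params_est, params_true,
                     bills_pred, bills_true):
    """(1) relative error of the predicted retrofit savings, (2) relative
    closure on the true inputs, (3) CV(RMSE) fit to the surrogate bills."""
    savings_err = abs(savings_pred - savings_true) / abs(savings_true)
    params_est, params_true = np.asarray(params_est), np.asarray(params_true)
    closure = np.linalg.norm(params_est - params_true) / np.linalg.norm(params_true)
    bills_pred, bills_true = np.asarray(bills_pred), np.asarray(bills_true)
    cvrmse = np.sqrt(np.mean((bills_pred - bills_true) ** 2)) / bills_true.mean()
    return savings_err, closure, cvrmse
```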
Calibration of the NASA Glenn 8- by 6-Foot Supersonic Wind Tunnel (1996 and 1997 Tests)
NASA Technical Reports Server (NTRS)
Arrington, E. Allen
2012-01-01
There were several physical and operational changes made to the NASA Glenn Research Center 8- by 6-Foot Supersonic Wind Tunnel during the period of 1992 through 1996. Following each of these changes, a facility calibration was conducted to provide the required information to support the research test programs. Due to several factors (facility research test schedule, facility downtime and continued facility upgrades), a full test section calibration was not conducted until 1996. This calibration test incorporated all test section configurations and covered the existing operating range of the facility. However, near the end of that test entry, two of the vortex generators mounted on the compressor exit tailcone failed, causing minor damage to the honeycomb flow straightener. The vortex generators were removed from the facility and calibration testing was terminated. A follow-up test entry was conducted in 1997 in order to fully calibrate the facility without the effects of the vortex generators and to provide a complete calibration of the newly expanded low-speed operating range. During the 1997 tunnel entry, all planned test points required for a complete test section calibration were obtained. This data set included detailed in-plane and axial flow field distributions for use in quantifying the test section flow quality.
Calibration and use of filter test facility orifice plates
NASA Astrophysics Data System (ADS)
Fain, D. E.; Selby, T. W.
1984-07-01
There are three official DOE filter test facilities. These test facilities are used by the DOE, and others, to test nuclear-grade HEPA filters to provide quality assurance that the filters meet the required specifications. The filters are tested for both filter efficiency and pressure drop. In the test equipment, standard orifice plates are used to set the specified flow rates for the tests. There has existed a need to calibrate the orifice plates from the three facilities against a common calibration source to assure that the facilities have comparable tests. A project has been undertaken to calibrate these orifice plates. In addition to reporting the results of the calibrations of the orifice plates, the means for using the calibration results will be discussed. A comparison of the orifice discharge coefficients for the orifice plates used at the three facilities will be given. The pros and cons of using mass flow or volume flow rates for testing will be discussed. It is recommended that volume flow rates be used as a more practical and comparable means of testing filters. The rationale for this recommendation will be discussed.
Liquid hydrogen and liquid oxygen feedline passive recirculation analysis
NASA Astrophysics Data System (ADS)
Holt, Kimberly Ann; Cleary, Nicole L.; Nichols, Andrew J.; Perry, Gretchen L. E.
The primary goal of the National Launch System (NLS) program was to design an operationally efficient, highly reliable vehicle with minimal recurring launch costs. To achieve this goal, trade studies of key main propulsion subsystems were performed to specify vehicle design requirements. These requirements include the use of passive recirculation to thermally condition the liquid hydrogen (LH2) and liquid oxygen (LO2) propellant feed systems and Space Transportation Main Engine (STME) fuel pumps. Rockwell International (RI) proposed a joint independent research and development (JIRAD) program with Marshall Space Flight Center (MSFC) to study the LH2 feed system passive recirculation concept. The testing was started in July 1992 and completed in November 1992. Vertical and sloped feedline designs were used. An engine simulator was attached at the bottom of the feedline. This simulator had strip heaters that were set to equal the corresponding heat input from different engines. A computer program is currently being used to analyze the passive recirculation concept in the LH2 vertical feedline tests. Four tests, in which the heater setting is the independent variable, were chosen. While the JIRAD with RI was underway, General Dynamics Space Systems (GDSS) proposed a JIRAD with MSFC to explore passive recirculation in the LO2 feed system. Liquid nitrogen (LN2) is being used instead of LO2 for safety and economic reasons. To date, three sets of calibration tests have been completed on the sloped LN2 test article. The environmental heat was calculated from the calibration tests in which the strip heaters were turned off. During the LH2 testing, the environmental heat was assumed to be constant; therefore, the total heat was equal to the environmental heat flux plus the heater input. However, the first two sets of LN2 calibration tests have shown that the environmental heat flux varies with heater input. A Systems Improved Numerical Differencing Analyzer and Fluid Integrator (SINDA/FLUINT) model is currently being built to determine whether this variation in environmental heat is due to a change in the wall temperature.
Five-Hole Flow Angle Probe Calibration for the NASA Glenn Icing Research Tunnel
NASA Technical Reports Server (NTRS)
Gonsalez, Jose C.; Arrington, E. Allen
1999-01-01
A spring 1997 test section calibration program is scheduled for the NASA Glenn Research Center Icing Research Tunnel following the installation of new water-injecting spray bars. A set of new five-hole flow angle pressure probes was fabricated to properly calibrate the test section for total pressure, static pressure, and flow angle. The probes have nine pressure ports: five total pressure ports on a hemispherical head and four static pressure ports located 14.7 diameters downstream of the head. The probes were calibrated in the NASA Glenn 3.5-in.-diameter free-jet calibration facility. After completing calibration data acquisition for two probes, two data prediction models were evaluated. Prediction errors from a linear discrete model proved to be no worse than those from a full third-order multiple regression model. The linear discrete model only required calibration data acquisition according to an abridged test matrix, thus saving considerable time and financial resources over the multiple regression model, which required calibration data acquisition according to a more extensive test matrix. Uncertainties in calibration coefficients and predicted values of flow angle, total pressure, static pressure, Mach number, and velocity were examined. These uncertainties consider the instrumentation that will be available in the Icing Research Tunnel for future test section calibration testing.
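A generic sketch of a full third-order regression model of the kind mentioned above, fit by least squares; the two pressure coefficients and the synthetic data are placeholders, not the probe's actual calibration variables.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_design(C, order=3):
    """Design matrix with all polynomial terms of the columns of C
    up to the given order (intercept included)."""
    terms = [np.ones(C.shape[0])]
    for k in range(1, order + 1):
        for idx in combinations_with_replacement(range(C.shape[1]), k):
            terms.append(np.prod(C[:, list(idx)], axis=1))
    return np.column_stack(terms)

# Regress flow angle on two probe pressure coefficients (synthetic data).
rng = np.random.default_rng(1)
C = rng.normal(size=(200, 2))
alpha = 3.0 * C[:, 0] - 0.4 * C[:, 0] * C[:, 1] + rng.normal(scale=0.01, size=200)
coef, *_ = np.linalg.lstsq(poly_design(C), alpha, rcond=None)
```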
Focks, Andreas; Belgers, Dick; Boerwinkel, Marie-Claire; Buijse, Laura; Roessink, Ivo; Van den Brink, Paul J
2018-05-01
Exposure patterns in ecotoxicological experiments often do not match the exposure profiles for which a risk assessment needs to be performed. This limitation can be overcome by using toxicokinetic-toxicodynamic (TKTD) models for the prediction of effects under time-variable exposure. For the use of TKTD models in the environmental risk assessment of chemicals, it is required to calibrate and validate the model for specific compound-species combinations. In this study, the survival of macroinvertebrates after exposure to neonicotinoid insecticides was modelled using TKTD models from the General Unified Threshold models of Survival (GUTS) framework. The models were calibrated on existing survival data from acute or chronic tests under a static exposure regime. Validation experiments were performed for two sets of species-compound combinations: one set focussed on the sensitivity of multiple species to a single compound, imidacloprid, and the other set on the effects of multiple compounds, i.e., the three neonicotinoids imidacloprid, thiacloprid and thiamethoxam, on the survival of a single species, the mayfly Cloeon dipterum. The calibrated models were used to predict survival over time, including uncertainty ranges, for the different time-variable exposure profiles used in the validation experiments. From the comparison between observed and predicted survival, it appeared that the accuracy of the model predictions was acceptable for four of the five tested species in the multiple-species data set. For compounds such as neonicotinoids, which are known to have the potential to show increased toxicity under prolonged exposure, the calibration and validation of TKTD models for survival should ideally be performed using calibration data from both acute and chronic tests.
Melfsen, Andreas; Hartung, Eberhard; Haeussermann, Angelika
2013-02-01
The robustness of in-line raw milk analysis with near-infrared spectroscopy (NIRS) was tested with respect to the prediction of the raw milk contents fat, protein and lactose. Near-infrared (NIR) spectra of raw milk (n = 3119) were acquired on three different farms during the milking process of 354 milkings over a period of six months. Calibration models were calculated for: a random data set from each farm (fully random internal calibration); the first two-thirds of the visits per farm (internal calibration); the whole datasets of two of the three farms (external calibration); and combinations of external and internal datasets. Validation was done either on the remaining data set per farm (internal validation) or on data from the remaining farms (external validation). Excellent calibration results were obtained when fully randomised internal calibration sets were used for milk analysis. In this case, RPD values of around ten, five and three were achieved for the prediction of fat, protein and lactose content, respectively. Farm-internal calibrations achieved much poorer prediction results, especially for the prediction of protein and lactose, with RPD values of around two and one, respectively. The prediction accuracy improved when validation was done on spectra of an external farm, mainly due to the higher sample variation in external calibration sets in terms of feeding diets and individual cow effects. The results showed that further improvements were achieved when additional farm information was added to the calibration set. One of the main requirements for a robust calibration model is the ability to predict milk constituents in unknown future milk samples. The robustness and quality of prediction increases with increasing variation of, e.g., feeding and cow-individual milk composition in the calibration model.
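The RPD values quoted above are the standard deviation of the reference values divided by the standard error of prediction; a minimal sketch:

```python
import numpy as np

def rpd(y_ref, y_pred):
    """Residual predictive deviation: SD of the reference values over the
    standard error of prediction (higher means better resolving power)."""
    y_ref, y_pred = np.asarray(y_ref), np.asarray(y_pred)
    sep = np.sqrt(np.mean((y_ref - y_pred) ** 2))
    return np.std(y_ref, ddof=1) / sep
```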
Hybrid dynamic radioactive particle tracking (RPT) calibration technique for multiphase flow systems
NASA Astrophysics Data System (ADS)
Khane, Vaibhav; Al-Dahhan, Muthanna H.
2017-04-01
The radioactive particle tracking (RPT) technique has been utilized to measure three-dimensional hydrodynamic parameters for multiphase flow systems. An analytical solution to the inverse problem of the RPT technique, i.e. finding the instantaneous tracer positions based upon the instantaneous counts received in the detectors, is not possible. Therefore, a calibration to obtain a counts-distance map is needed. There are major shortcomings in the conventional RPT calibration method, due to which it has limited applicability in practical applications. In this work, the design and development of a novel dynamic RPT calibration technique are carried out to overcome the shortcomings of the conventional RPT calibration method. The dynamic RPT calibration technique has been implemented around a test reactor, 1 foot in diameter and 1 foot in height, using a Cobalt-60 isotope tracer particle. Two sets of experiments have been carried out to test the capability of the novel dynamic RPT calibration technique. In the first set of experiments, a manual calibration apparatus was used to hold the tracer particle at known static locations. In the second set of experiments, the tracer particle was moved vertically downwards along a straight-line path in a controlled manner. The obtained reconstruction results for the tracer particle position were compared with the actual known position, and the reconstruction errors were estimated. The obtained results revealed that the dynamic RPT calibration technique is capable of identifying tracer particle positions with a reconstruction error between 1 and 5.9 mm for the conditions studied, which could be improved depending on various factors outlined here.
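A minimal sketch of the reconstruction step: pick the calibration-map position whose expected detector counts best match the measured counts. The grid search and Poisson-style weighting are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np

def reconstruct_position(counts, grid_positions, grid_counts):
    """counts: (n_detectors,) instantaneous counts; grid_positions:
    (n_grid, 3) candidate tracer locations; grid_counts: (n_grid,
    n_detectors) counts-distance map obtained from calibration."""
    w = np.maximum(grid_counts, 1.0)                 # Poisson-style weights
    err = np.sum((grid_counts - counts) ** 2 / w, axis=1)
    return grid_positions[np.argmin(err)]
```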
Goicoechea, H C; Olivieri, A C
2001-07-01
A newly developed multivariate method involving net analyte preprocessing (NAP) was tested using central composite calibration designs of progressively decreasing size for the multivariate simultaneous spectrophotometric determination of three active components (phenylephrine, diphenhydramine and naphazoline) and one excipient (methylparaben) in nasal solutions. Its performance was evaluated and compared with that of partial least-squares (PLS-1). Minimisation of the calibration predicted error sum of squares (PRESS) as a function of a moving spectral window helped to select appropriate working spectral ranges for both methods. The comparison of NAP and PLS results was carried out using two tests: (1) the elliptical joint confidence region for the slope and intercept of a predicted versus actual concentrations plot for a large validation set of samples and (2) the D-optimality criterion concerning the information content of the calibration data matrix. Extensive simulations and experimental validation showed that, unlike PLS, the NAP method is able to furnish highly satisfactory results when the calibration set is reduced from a full four-component central composite to a fractional central composite, as expected from the modelling requirements of net analyte based methods.
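A sketch of the moving-window PRESS minimization using cross-validated PLS (scikit-learn); the window width, step, and component count are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def best_spectral_window(spectra, y, width=50, step=5, n_components=3):
    """Slide a window across the wavelength axis and keep the one that
    minimizes the cross-validated prediction error sum of squares."""
    best_press, best_span = np.inf, None
    for start in range(0, spectra.shape[1] - width + 1, step):
        Xw = spectra[:, start:start + width]
        y_cv = cross_val_predict(PLSRegression(n_components), Xw, y, cv=5)
        press = float(np.sum((y - y_cv.ravel()) ** 2))
        if press < best_press:
            best_press, best_span = press, (start, start + width)
    return best_span, best_press
```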
Chander, G.; Angal, A.; Choi, T.; Meyer, D.J.; Xiong, X.; Teillet, P.M.
2007-01-01
A cross-calibration methodology has been developed using coincident image pairs from the Terra Moderate Resolution Imaging Spectroradiometer (MODIS), the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) and the Earth Observing-1 (EO-1) Advanced Land Imager (ALI) to verify the absolute radiometric calibration accuracy of these sensors with respect to each other. To quantify the effects due to different spectral responses, the Relative Spectral Responses (RSR) of these sensors were studied and compared by developing a set of "figures-of-merit." Seven cloud-free scenes collected over the Railroad Valley Playa, Nevada (RVPN), test site were used to conduct the cross-calibration study. This cross-calibration approach was based on image statistics from near-simultaneous observations made by different satellite sensors. Homogeneous regions of interest (ROI) were selected in the image pairs, and the mean target statistics were converted to absolute units of at-sensor reflectance. Using these reflectances, a set of cross-calibration equations was developed, giving a relative gain and bias between the sensor pair.
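Each matched ROI yields a reflectance pair from the two sensors, and the cross-calibration equation is the straight line through those pairs; a toy example with invented numbers:

```python
import numpy as np

# rho_a, rho_b: mean at-sensor reflectances of matched ROIs from two
# sensors (band-adjusted); values below are illustrative placeholders.
rho_a = np.array([0.21, 0.28, 0.35, 0.41, 0.47, 0.52, 0.58])
rho_b = np.array([0.20, 0.27, 0.35, 0.42, 0.48, 0.54, 0.60])
gain, bias = np.polyfit(rho_a, rho_b, 1)
print(f"sensor B = {gain:.3f} * sensor A + {bias:.4f}")
```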
Parameter regionalization of a monthly water balance model for the conterminous United States
Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight
2016-01-01
A parameter regionalization scheme to transfer parameter values from gaged to ungaged areas for a monthly water balance model (MWBM) was developed and tested for the conterminous United States (CONUS). The Fourier Amplitude Sensitivity Test, a global-sensitivity algorithm, was implemented on a MWBM to generate parameter sensitivities on a set of 109 951 hydrologic response units (HRUs) across the CONUS. The HRUs were grouped into 110 calibration regions based on similar parameter sensitivities. Subsequently, measured runoff from 1575 streamgages within the calibration regions were used to calibrate the MWBM parameters to produce parameter sets for each calibration region. Measured and simulated runoff at the 1575 streamgages showed good correspondence for the majority of the CONUS, with a median computed Nash–Sutcliffe efficiency coefficient of 0.76 over all streamgages. These methods maximize the use of available runoff information, resulting in a calibrated CONUS-wide application of the MWBM suitable for providing estimates of water availability at the HRU resolution for both gaged and ungaged areas of the CONUS.
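The goodness-of-fit statistic quoted above is the Nash-Sutcliffe efficiency; a minimal sketch:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 is a perfect fit; NSE = 0 is no better than predicting
    the mean of the observed runoff."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```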
Collection of quantitative chemical release field data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demirgian, J.; Macha, S.; Loyola Univ.
1999-01-01
Detection and quantitation of chemicals in the environment requires Fourier-transform infrared (FTIR) instruments that are properly calibrated and tested. This calibration and testing requires field testing using matrices that are representative of actual instrument use conditions. Three methods commonly used for developing calibration files and training sets in the field are a closed optical cell or chamber, a large-scale chemical release, and a small-scale chemical release. There is no best method. The advantages and limitations of each method should be considered in evaluating field results. Proper calibration characterizes the sensitivity of an instrument, its ability to detect a component in different matrices, and the quantitative accuracy and precision of the results.
High-level neutron coincidence counter maintenance manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swansen, J.; Collinsworth, P.
1983-05-01
High-level neutron coincidence counter operational (field) calibration and usage is well known. This manual makes explicit basic (shop) check-out, calibration, and testing of new units and is a guide for repair of failed in-service units. Operational criteria for the major electronic functions are detailed, as are adjustments and calibration procedures, and recurrent mechanical/electromechanical problems are addressed. Some system tests are included for quality assurance. Data on nonstandard large-scale integrated (circuit) components and a schematic set are also included.
Multispectral scanner flight model (F-1) radiometric calibration and alignment handbook
NASA Technical Reports Server (NTRS)
1981-01-01
This handbook on the calibration of the MSS-D flight model (F-1) provides both the relevant data and a summary description of how the data were obtained for the system radiometric calibration, system relative spectral response, and the filter response characteristics for all 24 channels of the four band MSS-D F-1 scanner. The calibration test procedure and resulting test data required to establish the reference light levels of the MSS-D internal calibration system are discussed. The final set of data ("nominal" calibration wedges for all 24 channels) for the internal calibration system is given. The system relative spectral response measurements for all 24 channels of MSS-D F-1 are included. These data are the spectral response of the complete scanner, which are the composite of the spectral responses of the scan mirror primary and secondary telescope mirrors, fiber optics, optical filters, and detectors. Unit level test data on the measurements of the individual channel optical transmission filters are provided. Measured performance is compared to specification values.
Nondestructive evaluation of soluble solid content in strawberry by near infrared spectroscopy
NASA Astrophysics Data System (ADS)
Guo, Zhiming; Huang, Wenqian; Chen, Liping; Wang, Xiu; Peng, Yankun
This paper demonstrates the feasibility of using near infrared (NIR) spectroscopy combined with synergy interval partial least squares (siPLS) algorithms as a rapid nondestructive method to estimate the soluble solid content (SSC) in strawberry. Spectral preprocessing methods were optimally selected by cross-validation in the model calibration. The partial least squares (PLS) algorithm was used to calibrate the regression model. The performance of the final model was evaluated by the root mean square error of calibration (RMSEC) and correlation coefficient (R2c) in the calibration set, and tested by the root mean square error of prediction (RMSEP) and correlation coefficient (R2p) in the prediction set. The optimal siPLS model was obtained after first-derivative spectral preprocessing. The best model achieved RMSEC = 0.2259 and R2c = 0.9590 in the calibration set, and RMSEP = 0.2892 and R2p = 0.9390 in the prediction set. This work demonstrated that NIR spectroscopy and siPLS with efficient spectral preprocessing are a useful tool for nondestructive evaluation of SSC in strawberry.
21 CFR 874.1080 - Audiometer calibration set.
Code of Federal Regulations, 2010 CFR
2010-04-01
... calibration traceable to the National Bureau of Standards, oscillators, frequency counters, microphone amplifiers, and a recorder. The device can measure selected audiometer test frequencies at a given intensity... audiometer. It measures the sound frequency and intensity characteristics that emanate from an audiometer...
21 CFR 874.1080 - Audiometer calibration set.
Code of Federal Regulations, 2011 CFR
2011-04-01
... calibration traceable to the National Bureau of Standards, oscillators, frequency counters, microphone amplifiers, and a recorder. The device can measure selected audiometer test frequencies at a given intensity... audiometer. It measures the sound frequency and intensity characteristics that emanate from an audiometer...
NASA Technical Reports Server (NTRS)
Lerch, F. J.; Nerem, R. S.; Chinn, D. S.; Chan, J. C.; Patel, G. B.; Klosko, S. M.
1993-01-01
A new method has been developed to provide a direct test of the error calibrations of gravity models based on actual satellite observations. The basic approach projects the error estimates of the gravity model parameters onto satellite observations, and the results of these projections are then compared with data residuals computed from the orbital fits. To allow specific testing of the gravity error calibrations, subset solutions are computed based on the data set and data weighting of the gravity model. The approach is demonstrated using GEM-T3 to show that the gravity error estimates are well calibrated and that reliable predictions of orbit accuracies can be achieved for independent orbits.
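The projection step can be summarized in one line of linear algebra: with observation partials H and calibrated parameter covariance P, diag(H P H^T) is the residual variance the error model predicts. A schematic sketch with assumed names:

```python
import numpy as np

def projected_residual_variance(H, P):
    """H: (n_obs, n_params) partials of the observations with respect to
    the gravity parameters; P: (n_params, n_params) calibrated parameter
    covariance. Returns diag(H @ P @ H.T), one variance per observation."""
    return np.einsum('ij,jk,ik->i', H, P, H)
```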
Procedures for adjusting regional regression models of urban-runoff quality using local data
Hoos, A.B.; Sisolak, J.K.
1993-01-01
Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P), regression against P (termed MAP-R-P), regression against P and additional local variables (termed MAP-R-P+nV), and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of the calibration data set. As expected, predictive accuracy of all MAPs for the verification data set decreased as the calibration data-set size decreased, but predictive accuracy was not as sensitive for the MAPs as it was for the local regression models.
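A toy sketch of the simplest procedure, MAP-R-P: regress observed local loads on the regional prediction P and use the fitted line for unmonitored sites. The log-space form and the synthetic numbers are illustrative assumptions.

```python
import numpy as np

# Synthetic local observations versus regional-model predictions P.
rng = np.random.default_rng(2)
P = rng.lognormal(mean=0.0, sigma=0.8, size=40)          # regional predictions
local = 0.8 * P ** 1.1 * rng.lognormal(sigma=0.2, size=40)

# Fit log(local) = b0 + b1 * log(P); apply to new regional predictions.
b1, b0 = np.polyfit(np.log(P), np.log(local), 1)
adjust = lambda P_new: np.exp(b0) * np.asarray(P_new) ** b1
print(adjust([0.5, 1.0, 2.0]))
```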
Regression Model Term Selection for the Analysis of Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred; Volden, Thomas R.
2010-01-01
The paper discusses the selection of regression model terms for the analysis of wind tunnel strain-gage balance calibration data. Different function class combinations are presented that may be used to analyze calibration data using either a non-iterative or an iterative method. The role of the intercept term in a regression model of calibration data is reviewed. In addition, useful algorithms and metrics originating from linear algebra and statistics are recommended that will help an analyst (i) to identify and avoid both linear and near-linear dependencies between regression model terms and (ii) to make sure that the selected regression model of the calibration data uses only statistically significant terms. Three different tests are suggested that may be used to objectively assess the predictive capability of the final regression model of the calibration data. These tests use both the original data points and regression model independent confirmation points. Finally, data from a simplified manual calibration of the Ames MK40 balance is used to illustrate the application of some of the metrics and tests to a realistic calibration data set.
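One common way to keep only statistically significant terms, in the spirit of the metrics recommended above, is a t-test on each candidate coefficient; a generic numpy/scipy sketch:

```python
import numpy as np
from scipy import stats

def term_p_values(X, y):
    """Two-sided p-values for each column (candidate term) of the
    regression matrix X; large p-values suggest dropping the term."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    n, p = X.shape
    resid = y - X @ beta
    s2 = resid @ resid / (n - p)                    # residual variance
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    return 2 * stats.t.sf(np.abs(beta / se), n - p)
```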
The production of calibration specimens for impact testing of subsize Charpy specimens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexander, D.J.; Corwin, W.R.; Owings, T.D.
1994-09-01
Calibration specimens have been manufactured for checking the performance of a pendulum impact testing machine that has been configured for testing subsize specimens, both half-size (5.0 × 5.0 × 25.4 mm) and third-size (3.33 × 3.33 × 25.4 mm). Specimens were fabricated from quenched-and-tempered 4340 steel heat treated to produce different microstructures that would result in either high or low absorbed energy levels on testing. A large group of both half- and third-size specimens were tested at -40°C. The results of the tests were analyzed for average value and standard deviation, and these values were used to establish calibration limits for the Charpy impact machine when testing subsize specimens. These average values plus or minus two standard deviations were set as the acceptable limits for the average of five tests for calibration of the impact testing machine.
Linear and nonlinear trending and prediction for AVHRR time series data
NASA Technical Reports Server (NTRS)
Smid, J.; Volf, P.; Slama, M.; Palus, M.
1995-01-01
The variability of the AVHRR calibration coefficients in time was analyzed using algorithms of linear and non-linear time series analysis. Specifically, we have used spline trend modeling, autoregressive process analysis, an incremental neural network learning algorithm and redundancy functional testing. The analysis performed on the available AVHRR data sets revealed that (1) the calibration data have nonlinear dependencies, (2) the calibration data depend strongly on the target temperature, (3) both the calibration coefficient and temperature time series can be modeled, to a first approximation, as autonomous dynamical systems, and (4) the high-frequency residuals of the analyzed data sets are best modeled as an autoregressive process of the 10th order. We have dealt with a nonlinear identification problem and the problem of noise filtering (data smoothing). System identification and filtering are significant problems for AVHRR data sets. The algorithms outlined in this study can be used for the future EOS missions. Prediction and smoothing algorithms for time series of calibration data provide a functional characterization of the data. These algorithms can be particularly useful when calibration data are incomplete or sparse.
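A minimal least-squares sketch of fitting the 10th-order autoregressive model mentioned in point (4):

```python
import numpy as np

def fit_ar(x, order=10):
    """Least-squares AR(order) fit: predict x[t] from the previous
    'order' samples; returns the AR coefficients."""
    x = np.asarray(x, dtype=float)
    X = np.column_stack([x[order - 1 - k : len(x) - 1 - k] for k in range(order)])
    coef, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return coef
```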
Langley Wind Tunnel Data Quality Assurance-Check Standard Results
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.; Grubb, John P.; Krieger, William B.; Cler, Daniel L.
2000-01-01
A framework for statistical evaluation, control and improvement of wind tunnel measurement processes is presented. The methodology is adapted from elements of the Measurement Assurance Plans developed by the National Bureau of Standards (now the National Institute of Standards and Technology) for standards and calibration laboratories. The present methodology is based on the notions of statistical quality control (SQC) together with check standard testing and a small number of customer repeat-run sets. The results of check standard and customer repeat-run sets are analyzed using the statistical control chart methods of Walter A. Shewhart, long familiar to the SQC community. Control chart results are presented for various measurement processes in five facilities at Langley Research Center. The processes include test section calibration, force and moment measurements with a balance, and instrument calibration.
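For a check-standard series, the individuals control chart reduces to a center line and 3-sigma limits estimated from the average moving range; this is the classic Shewhart construction, not code from the report:

```python
import numpy as np

def shewhart_limits(x):
    """Center line and 3-sigma control limits for an individuals chart,
    with sigma estimated from the average moving range (d2 = 1.128)."""
    x = np.asarray(x, dtype=float)
    sigma = np.abs(np.diff(x)).mean() / 1.128
    return x.mean(), x.mean() - 3 * sigma, x.mean() + 3 * sigma
```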
Computerized tomography calibrator
NASA Technical Reports Server (NTRS)
Engel, Herbert P. (Inventor)
1991-01-01
A set of interchangeable pieces comprising a computerized tomography calibrator, and a method of use thereof, permits focusing of a computerized tomographic (CT) system. The interchangeable pieces include a plurality of nestable, generally planar mother rings, adapted for the receipt of planar inserts of predetermined sizes, and of predetermined material densities. The inserts further define openings therein for receipt of plural sub-inserts. All pieces are of known sizes and densities, permitting the assembling of different configurations of materials of known sizes and combinations of densities, for calibration (i.e., focusing) of a computerized tomographic system through variation of operating variables thereof. Rather than serving as a phantom, which is intended to be representative of a particular workpiece to be tested, the set of interchangeable pieces permits simple and easy standardized calibration of a CT system. The calibrator and its related method of use further includes use of air or of particular fluids for filling various openings, as part of a selected configuration of the set of pieces.
Parameter regionalization of a monthly water balance model for the conterminous United States
NASA Astrophysics Data System (ADS)
Bock, A. R.; Hay, L. E.; McCabe, G. J.; Markstrom, S. L.; Atkinson, R. D.
2015-09-01
A parameter regionalization scheme to transfer parameter values and model uncertainty information from gaged to ungaged areas for a monthly water balance model (MWBM) was developed and tested for the conterminous United States (CONUS). The Fourier Amplitude Sensitivity Test, a global-sensitivity algorithm, was implemented on a MWBM to generate parameter sensitivities on a set of 109 951 hydrologic response units (HRUs) across the CONUS. The HRUs were grouped into 110 calibration regions based on similar parameter sensitivities. Subsequently, measured runoff from 1575 streamgages within the calibration regions were used to calibrate the MWBM parameters to produce parameter sets for each calibration region. Measured and simulated runoff at the 1575 streamgages showed good correspondence for the majority of the CONUS, with a median computed Nash-Sutcliffe Efficiency coefficient of 0.76 over all streamgages. These methods maximize the use of available runoff information, resulting in a calibrated CONUS-wide application of the MWBM suitable for providing estimates of water availability at the HRU resolution for both gaged and ungaged areas of the CONUS.
Lanvers-Kaminsky, Claudia; Rüffer, Andrea; Würthwein, Gudrun; Gerss, Joachim; Zucchetti, Massimo; Ballerini, Andrea; Attarbaschi, Andishe; Smisek, Petr; Nath, Christa; Lee, Samiuela; Elitzur, Sara; Zimmermann, Martin; Möricke, Anja; Schrappe, Martin; Rizzari, Carmelo; Boos, Joachim
2018-02-01
In the international AIEOP-BFM ALL 2009 trial, asparaginase (ASE) activity was monitored after each dose of pegylated Escherichia coli ASE (PEG-ASE). Two methods were used: the aspartic acid β-hydroxamate (AHA) test and the medac asparaginase activity test (MAAT). As the latter method overestimates PEG-ASE activity because it is calibrated using E. coli ASE, a method comparison was performed using samples from the AIEOP-BFM ALL 2009 trial. PEG-ASE activities were determined using the MAAT and the AHA test in 2 sets of samples (first set: 630 samples; second set: 91 samples). Bland-Altman analysis was performed on the ratios between MAAT and AHA test results. The mean difference between both methods, the limits of agreement, and 95% confidence intervals were calculated and compared for all samples and for samples grouped according to the calibration ranges of the MAAT and the AHA test. PEG-ASE activity determined using the MAAT was significantly higher than when determined using the AHA test (P < 0.001; Wilcoxon signed-rank test). Within the calibration range of the MAAT (30-600 U/L), PEG-ASE activities determined using the MAAT were on average 23% higher than PEG-ASE activities determined using the AHA test. This complies with the mean difference reported in the MAAT manual. With PEG-ASE activities >600 U/L, the discrepancies between the MAAT and the AHA test increased. Above the calibration range of the MAAT (>600 U/L) and the AHA test (>1000 U/L), a mean difference of 42% was determined. Because more than 70% of samples had PEG-ASE activities >600 U/L and required additional sample dilution, an overall mean difference of 37% was calculated for all samples (37% for the first and 34% for the second set). Comparison of the MAAT and AHA test for PEG-ASE activity confirmed a mean difference of 23% between the MAAT and the AHA test for PEG-ASE activities between 30 and 600 U/L. The discrepancy increased in samples with >600 U/L PEG-ASE activity, which will be especially relevant when evaluating high PEG-ASE activities in relation to toxicity, efficacy, and population pharmacokinetics.
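Bland-Altman analysis on ratios, as performed above, reduces to the mean MAAT/AHA ratio and its 95% limits of agreement; a minimal sketch:

```python
import numpy as np

def bland_altman_ratio(a, b):
    """Mean of the ratios a/b and the 95% limits of agreement."""
    r = np.asarray(a, dtype=float) / np.asarray(b, dtype=float)
    m, s = r.mean(), r.std(ddof=1)
    return m, m - 1.96 * s, m + 1.96 * s
```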
Method calibration of the model 13145 infrared target projectors
NASA Astrophysics Data System (ADS)
Huang, Jianxia; Gao, Yuan; Han, Ying
2014-11-01
The SBIR Model 13145 Infrared Target Projector (hereafter, the Evaluation Unit) is used for characterizing the performance of infrared imaging systems. Test items: SiTF, MTF, NETD, MRTD, MDTD, NPS. The infrared target projector includes two area blackbodies, a 12-position target wheel, and an all-reflective collimator. It provides high-spatial-frequency precision differential targets, which are imaged by the infrared imaging system under test and converted photoelectrically into analog or digital signals. Application software (IR Windows TM 2001) evaluates the performance of the infrared imaging system. For calibration of the unit as a whole, the distributed components are first calibrated separately: the area blackbodies are calibrated according to the calibration specification for area blackbodies, the all-reflective collimator is calibrated by applying error correction factors, the radiance of the infrared target projector is calibrated using the SR5000 spectral radiometer, and the systematic errors are analyzed. For the parameters of the infrared imaging system, an integrated evaluation method is needed. Following GJB2340-1995, General specification for military thermal imaging sets, the parameters of the infrared imaging system are tested and the results are compared with results from the Optical Calibration Testing Laboratory, with the goal of establishing the true calibration performance of the Evaluation Unit.
Assessment of the Uniqueness of Wind Tunnel Strain-Gage Balance Load Predictions
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2016-01-01
A new test was developed to assess the uniqueness of wind tunnel strain-gage balance load predictions that are obtained from regression models of calibration data. The test helps balance users to gain confidence in load predictions of non-traditional balance designs. It also makes it possible to better evaluate load predictions of traditional balances that are not used as originally intended. The test works for both the Iterative and Non-Iterative Methods that are used in the aerospace testing community for the prediction of balance loads. It is based on the hypothesis that the total number of independently applied balance load components must always match the total number of independently measured bridge outputs or bridge output combinations. This hypothesis is supported by a control volume analysis of the inputs and outputs of a strain-gage balance. It is concluded from the control volume analysis that the loads and bridge outputs of a balance calibration data set must separately be tested for linear independence because it cannot always be guaranteed that a linearly independent load component set will result in linearly independent bridge output measurements. Simple linear math models for the loads and bridge outputs in combination with the variance inflation factor are used to test for linear independence. A highly unique and reversible mapping between the applied load component set and the measured bridge output set is guaranteed to exist if the maximum variance inflation factor of both sets is less than the literature recommended threshold of five. Data from the calibration of a six-component force balance is used to illustrate the application of the new test to real-world data.
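The linear-independence check reduces to the variance inflation factors of the load (or bridge-output) columns, which are the diagonal of the inverse correlation matrix; a sketch of the threshold test:

```python
import numpy as np

def max_vif(X):
    """Largest variance inflation factor among the columns of X."""
    R = np.corrcoef(X, rowvar=False)
    return float(np.diag(np.linalg.inv(R)).max())

def is_unique_mapping(loads, outputs, threshold=5.0):
    """Both the applied load set and the measured bridge-output set must
    be acceptably independent for a unique, reversible mapping."""
    return max_vif(loads) < threshold and max_vif(outputs) < threshold
```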
NASA Technical Reports Server (NTRS)
Solis, Eduardo; Meyn, Larry
2016-01-01
Calibrating the internal, multi-component balance mounted in the Tiltrotor Test Rig (TTR) required photogrammetric measurements to determine the location and orientation of forces applied to the balance. The TTR, with the balance and calibration hardware attached, was mounted in a custom calibration stand. Calibration loads were applied using eleven hydraulic actuators, operating in tension only, that were attached to the forward frame of the calibration stand and the TTR calibration hardware via linkages with in-line load cells. Before the linkages were installed, photogrammetry was used to determine the location of the linkage attachment points on the forward frame and on the TTR calibration hardware. Photogrammetric measurements were used to determine the displacement of the linkage attachment points on the TTR due to deflection of the hardware under applied loads. These measurements represent the first photogrammetric deflection measurements to be made to support 6-component rotor balance calibration. This paper describes the design of the TTR and the calibration hardware, and presents the development, set-up and use of the photogrammetry system, along with some selected measurement results.
Brouckaert, D; Uyttersprot, J-S; Broeckx, W; De Beer, T
2018-03-01
Calibration transfer or standardisation aims at creating a uniform spectral response on different spectroscopic instruments or under varying conditions, without requiring a full recalibration for each situation. In the current study, this strategy is applied to construct at-line multivariate calibration models and consequently employ them in-line in a continuous industrial production line, using the same spectrometer. Firstly, quantitative multivariate models are constructed at-line at laboratory scale for predicting the concentration of two main ingredients in hard surface cleaners. By regressing the Raman spectra of a set of small-scale calibration samples against their reference concentration values, partial least squares (PLS) models are developed to quantify the surfactant levels in the liquid detergent compositions under investigation. After evaluating the models' performance with a set of independent validation samples, a univariate slope/bias correction is applied in view of transporting these at-line calibration models to an in-line manufacturing set-up. This standardisation technique allows a fast and easy transfer of the PLS regression models, by simply correcting the model predictions on the in-line set-up, without adjusting anything in the original multivariate calibration models. An extensive statistical analysis is performed in order to assess the predictive quality of the transferred regression models. Before and after transfer, the R2 and RMSEP of both models are compared to evaluate whether their magnitudes are similar. T-tests are then performed to investigate whether the slope and intercept of the transferred regression line are not statistically different from 1 and 0, respectively. Furthermore, it is inspected whether no significant bias can be noted. F-tests are executed as well, for assessing the linearity of the transfer regression line and for investigating the statistical coincidence of the transfer and validation regression lines. Finally, a paired t-test is performed to compare the original at-line model to the slope/bias-corrected in-line model, using interval hypotheses. It is shown that the calibration models of Surfactant 1 and Surfactant 2 yield satisfactory in-line predictions after slope/bias correction. While Surfactant 1 passes seven out of eight statistical tests, the recommended validation parameters are 100% successful for Surfactant 2. It is hence concluded that the proposed strategy of transferring at-line calibration models to an in-line industrial environment via a univariate slope/bias correction of the predicted values offers a successful standardisation approach. Copyright © 2017 Elsevier B.V. All rights reserved.
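The univariate slope/bias correction itself is a one-line fit: regress reference values on the in-line predictions once, then apply that line to all later predictions. A minimal sketch with invented numbers:

```python
import numpy as np

def slope_bias_correction(pred_inline, ref):
    """Fit ref = slope * pred + bias and return a corrector for
    future in-line predictions."""
    slope, bias = np.polyfit(pred_inline, ref, 1)
    return lambda pred_new: slope * np.asarray(pred_new) + bias

correct = slope_bias_correction(np.array([9.8, 12.1, 14.9]),
                                np.array([10.0, 12.0, 15.0]))
print(correct([11.0, 13.0]))
```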
NASA Astrophysics Data System (ADS)
Dillner, A. M.; Takahama, S.
2014-11-01
Organic carbon (OC) can constitute 50% or more of the mass of atmospheric particulate matter. Typically, the organic carbon concentration is measured using thermal methods such as Thermal-Optical Reflectance (TOR) from quartz fiber filters. Here, methods are presented whereby Fourier Transform Infrared (FT-IR) absorbance spectra from polytetrafluoroethylene (PTFE or Teflon) filters are used to accurately predict TOR OC. Transmittance FT-IR analysis is rapid, inexpensive, and non-destructive to the PTFE filters. To develop and test the method, FT-IR absorbance spectra are obtained from 794 samples from seven Interagency Monitoring of PROtected Visual Environments (IMPROVE) sites sampled during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to artifact-corrected TOR OC. The FT-IR spectra are divided into calibration and test sets by sampling site and date, which leads to precise and accurate OC predictions by FT-IR as indicated by a high coefficient of determination (R2 = 0.96), low bias (0.02 μg m-3; all μg m-3 values are based on the nominal IMPROVE sample volume of 32.8 m3), low error (0.08 μg m-3) and low normalized error (11%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. FT-IR spectra are also divided into calibration and test sets by OC mass and by OM / OC, which reflects the organic composition of the particulate matter and is obtained from organic functional group composition; this division also leads to precise and accurate OC predictions. Low OC concentrations have higher bias and normalized error due to TOR analytical errors and artifact correction errors, not due to the range of OC mass of the samples in the calibration set. However, samples with low OC mass can be used to predict samples with high OC mass, indicating that the calibration is linear. Using samples in the calibration set that have different OM / OC or ammonium / OC distributions than the test set leads to only a modest increase in bias and normalized error in the predicted samples. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR OC in IMPROVE network samples, providing complementary information to the organic functional group composition and organic aerosol mass estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
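A compact sketch of the calibration described above using scikit-learn's PLS regression; the spectra here are synthetic stand-ins for the 794 PTFE filter spectra, and the component count is an arbitrary choice.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic spectra correlated with a synthetic "TOR OC" reference.
rng = np.random.default_rng(3)
oc = np.abs(rng.normal(1.0, 0.5, size=794))              # reference OC, ug/m3
loading = rng.normal(size=400)
spectra = np.outer(oc, loading) + rng.normal(scale=0.5, size=(794, 400))

train = rng.random(794) < 0.7                            # calibration/test split
pls = PLSRegression(n_components=10).fit(spectra[train], oc[train])
pred = pls.predict(spectra[~train]).ravel()

# Report the paper's style of metrics: R2, bias, and RMS error.
resid = pred - oc[~train]
r2 = 1 - np.sum(resid**2) / np.sum((oc[~train] - oc[~train].mean())**2)
print(f"R2 = {r2:.2f}, bias = {resid.mean():.3f}, error = {np.sqrt(np.mean(resid**2)):.3f}")
```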
NASA Astrophysics Data System (ADS)
Golobokov, M.; Danilevich, S.
2018-04-01
In order to assess calibration reliability and to automate such assessment, procedures for data collection and for a simulation study of the thermal imager calibration procedure have been elaborated. The existing calibration techniques do not always provide high reliability. A new method for analyzing the existing calibration techniques and developing new, efficient ones has been suggested and tested. A type of software has been studied that generates instrument calibration reports automatically, monitors their proper configuration, processes measurement results, and assesses instrument validity. The use of such software reduces the man-hours spent on finalizing calibration data by a factor of 2 to 5 and eliminates a whole set of typical operator errors.
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Lazor, Daniel R.; Gaspar, James L.; Parks, Russel A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of nonconventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pre-test predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
Quality-control issues on high-resolution diagnostic monitors.
Parr, L F; Anderson, A L; Glennon, B K; Fetherston, P
2001-06-01
Previous literature indicates a need for more data collection in the area of quality control of high-resolution diagnostic monitors. Throughout acceptance testing, which began in June 2000, the stability of monitor calibration was analyzed. Although image quality on all monitors was found to be acceptable upon initial acceptance testing using VeriLUM software by Image Smiths, Inc (Germantown, MD), it was determined to be unacceptable during the clinical phase of acceptance testing. High-resolution monitors were evaluated for quality assurance on a weekly basis from installation through acceptance testing and beyond. During clinical utilization determination (CUD), monitor calibration was identified as a problem and the manufacturer returned and recalibrated all workstations. From that time through final acceptance testing, high-resolution monitor calibration and monitor failure rate remained a problem. The monitor vendor then returned to the site to address these areas. Monitor defocus was still noticeable, and calibration checks were increased to three times per week. White and black level drift on medium-resolution monitors had been attributed to raster size settings. Measurements of white and black level at several different size settings were taken to determine the effect of size on white and black level settings. Black level remained steady with size change. White level appeared to increase by 2.0 cd/m2 for every 0.1 inch decrease in horizontal raster size. This was determined not to be the cause of the observed brightness drift. Frequency of calibration/testing is an issue in a clinical environment. The increased frequency required at our site cannot be sustained. The medical physics division cannot provide dedicated personnel to conduct quality-assurance testing on all monitors at this interval due to other physics commitments throughout the hospital. Monitor access is also an issue due to radiologists' need to read images; some workstations are in use 7 AM to 11 PM daily. An appropriate monitor calibration frequency must be established during acceptance testing to ensure that unacceptable drift is not masked by excessive calibration frequency. Standards for acceptable black level and white level drift also need to be determined. The monitor vendor and hospital staff agree that, currently, very small printed text is an acceptable method of determining monitor blur; however, a better method of determining monitor blur is being pursued. Although monitors may show acceptable quality during initial acceptance testing, they need to show sustained quality during the clinical acceptance-testing phase. Defocus, black level, and white level are image quality concerns that need to be evaluated during the clinical phase of acceptance testing. Image quality deficiencies can have a negative impact on patient care and raise serious medical-legal concerns. The attention to quality control required of the hospital staff needs to be realistic and not have a significant impact on radiology workflow.
Behavior driven testing in ALMA telescope calibration software
NASA Astrophysics Data System (ADS)
Gil, Juan P.; Garces, Mario; Broguiere, Dominique; Shen, Tzu-Chiang
2016-07-01
The ALMA software development cycle includes well-defined testing stages that involve developers, testers and scientists. We adapted Behavior Driven Development (BDD) to the testing activities applied to the Telescope Calibration (TELCAL) software. BDD is an agile technique that encourages communication between roles by defining test cases in natural language to specify features and scenarios, which allows participants to share a common language and provides a high-level set of automated tests. This work describes how we implemented and maintain BDD testing for TELCAL, the infrastructure needed to support it and proposals to expand this technique to other subsystems.
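As an illustration of the BDD style described (not actual TELCAL code), a natural-language scenario might be backed by Python step definitions such as the following, here using the behave library; the feature text, step names and helper functions are invented.

```python
# Illustrative behave step definitions for a BDD scenario of the kind
# described. The corresponding Gherkin feature text might read:
#
#   Scenario: Compute an atmospheric calibration result
#     Given a set of calibration scans
#     When the atmospheric calibration is executed
#     Then a calibration result is published
#
from behave import given, when, then

def load_test_scans():            # hypothetical stand-in for real test data
    return ["scan1", "scan2"]

def run_atmospheric_cal(scans):   # hypothetical stand-in for the pipeline
    return {"tau": 0.05, "scans": scans}

@given("a set of calibration scans")
def step_given_scans(context):
    context.scans = load_test_scans()

@when("the atmospheric calibration is executed")
def step_run_calibration(context):
    context.result = run_atmospheric_cal(context.scans)

@then("a calibration result is published")
def step_check_result(context):
    assert context.result is not None
```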
NASA Astrophysics Data System (ADS)
Dillner, A. M.; Takahama, S.
2015-03-01
Organic carbon (OC) can constitute 50% or more of the mass of atmospheric particulate matter. Typically, organic carbon is measured from a quartz fiber filter that has been exposed to a volume of ambient air and analyzed using thermal methods such as thermal-optical reflectance (TOR). Here, methods are presented that show the feasibility of using Fourier transform infrared (FT-IR) absorbance spectra from polytetrafluoroethylene (PTFE or Teflon) filters to accurately predict TOR OC. This work marks an initial step in proposing a method that can reduce the operating costs of large air quality monitoring networks with an inexpensive, non-destructive analysis technique using routinely collected PTFE filter samples which, in addition to OC concentrations, can concurrently provide information regarding the composition of organic aerosol. This feasibility study suggests that the minimum detection limit and errors (or uncertainty) of FT-IR predictions are on par with TOR OC such that evaluation of long-term trends and epidemiological studies would not be significantly impacted. To develop and test the method, FT-IR absorbance spectra are obtained from 794 samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011. Partial least-squares regression is used to calibrate sample FT-IR absorbance spectra to TOR OC. The FT-IR spectra are divided into calibration and test sets by sampling site and date. The calibration produces precise and accurate TOR OC predictions of the test set samples by FT-IR as indicated by a high coefficient of determination (R2; 0.96), low bias (0.02 μg m-3; the nominal IMPROVE sample volume is 32.8 m3), low error (0.08 μg m-3) and low normalized error (11%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision to collocated TOR measurements. FT-IR spectra are also divided into calibration and test sets by OC mass and by OM / OC ratio, which reflects the organic composition of the particulate matter and is obtained from organic functional group composition; these divisions also lead to precise and accurate OC predictions. Low OC concentrations have higher bias and normalized error due to TOR analytical errors and artifact-correction errors, not due to the range of OC mass of the samples in the calibration set. However, samples with low OC mass can be used to predict samples with high OC mass, indicating that the calibration is linear. Using samples in the calibration set that have different OM / OC or ammonium / OC distributions than the test set leads to only a modest increase in bias and normalized error in the predicted samples. We conclude that FT-IR analysis with partial least-squares regression is a robust method for accurately predicting TOR OC in IMPROVE network samples - providing complementary information to the organic functional group composition and organic aerosol mass estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
Utilization of Expert Knowledge in a Multi-Objective Hydrologic Model Automatic Calibration Process
NASA Astrophysics Data System (ADS)
Quebbeman, J.; Park, G. H.; Carney, S.; Day, G. N.; Micheletty, P. D.
2016-12-01
Spatially distributed, continuous simulation hydrologic models have a large number of parameters for potential adjustment during the calibration process. Traditional manual calibration of such a modeling system is extremely laborious, which has historically motivated the use of automatic calibration procedures. With a large selection of model parameters, a high degree of objective-space fitness - measured with typical metrics such as Nash-Sutcliffe, Kling-Gupta, RMSE, etc. - can easily be achieved using a range of evolutionary algorithms. A concern with this approach is the high degree of compensatory calibration, with many similarly performing solutions and yet grossly varying parameter sets. To help alleviate this concern, and to mimic manual calibration processes, expert knowledge is proposed for inclusion within the multi-objective functions that evaluate the parameter decision space. As a result, Pareto solutions are identified with high degrees of fitness, but with parameter sets that maintain and utilize available expert knowledge, resulting in more realistic and consistent solutions. This process was tested using the joint SNOW-17 and Sacramento Soil Moisture Accounting (SAC-SMA) method within the Animas River basin in Colorado. Three different elevation zones, each with a range of parameters, resulted in over 35 model parameters being calibrated simultaneously. As a result, high degrees of fitness were achieved, in addition to the development of more realistic and consistent parameter sets such as those typically achieved during manual calibration procedures.
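A minimal sketch of how expert knowledge might enter a multi-objective calibration, assuming hypothetical parameter names and plausible ranges; the real study's objectives and SNOW-17/SAC-SMA parameters will differ.

```python
# Sketch: pair a goodness-of-fit objective (Nash-Sutcliffe efficiency)
# with an expert-knowledge penalty on implausible parameter values, so a
# Pareto search favors solutions that fit well AND look realistic.
import numpy as np

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical expert-plausible ranges for two parameters.
EXPERT_RANGES = {"melt_factor": (1.0, 4.0), "uztwm": (25.0, 125.0)}

def expert_penalty(params):
    """Quadratic penalty for parameters outside expert-plausible bounds."""
    penalty = 0.0
    for name, (lo, hi) in EXPERT_RANGES.items():
        x = params[name]
        if x < lo:
            penalty += ((lo - x) / (hi - lo)) ** 2
        elif x > hi:
            penalty += ((x - hi) / (hi - lo)) ** 2
    return penalty

def objectives(obs, sim, params):
    """Two objectives to minimize in a Pareto search: misfit, implausibility."""
    return (-nash_sutcliffe(obs, sim), expert_penalty(params))
```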
Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2011-01-01
A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain-gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain-gage outputs. Iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match. The new method circumvents this limitation: it simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained that fits each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example discussed in the paper illustrates the application of the new method to a realistic simulation of a temperature-dependent calibration data set of a six-component balance.
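The augmentation idea can be illustrated with a small numpy example (ours, not the paper's code): reusing an independent variable as an extra dependent variable squares up the variable counts without changing the fitted coefficients of the real gage outputs.

```python
# Numpy illustration of the augmentation idea: when independents (loads
# plus temperature) outnumber gage outputs, reuse an independent variable
# as an extra dependent variable. Its column fits trivially, and the fits
# of the real outputs are unchanged.
import numpy as np

rng = np.random.default_rng(2)
loads = rng.normal(size=(100, 5))            # five balance loads
temp = rng.normal(size=(100, 1))             # extra independent variable
X = np.hstack([loads, temp])                 # six independents
gages = loads @ rng.normal(size=(5, 5)) + 0.1 * temp  # five gage outputs

# Augment the dependent set with temperature itself -> six dependents.
D = np.hstack([gages, temp])
coef, *_ = np.linalg.lstsq(X, D, rcond=None)

# The first five columns of `coef` match a direct fit of the gages alone,
# so the augmentation does not alter the gage-output regression results.
coef_direct, *_ = np.linalg.lstsq(X, gages, rcond=None)
assert np.allclose(coef[:, :5], coef_direct)
```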
Multiple Use One-Sided Hypotheses Testing in Univariate Linear Calibration
NASA Technical Reports Server (NTRS)
Krishnamoorthy, K.; Kulkarni, Pandurang M.; Mathew, Thomas
1996-01-01
Consider a normally distributed response variable, related to an explanatory variable through the simple linear regression model. Data obtained on the response variable, corresponding to known values of the explanatory variable (i.e., calibration data), are to be used for testing hypotheses concerning unknown values of the explanatory variable. We consider the problem of testing an unlimited sequence of one-sided hypotheses concerning the explanatory variable, using the corresponding sequence of values of the response variable and the same set of calibration data. This is the situation of multiple use of the calibration data. The tests derived in this context are characterized by two types of uncertainty: one associated with the sequence of values of the response variable, and a second associated with the calibration data. We derive tests based on a condition that incorporates both of these uncertainties. The solution has practical applications in the decision limit problem. We illustrate our results using an example dealing with the estimation of blood alcohol concentration based on breath estimates of the alcohol concentration. In the example, the problem is to test whether the unknown blood alcohol concentration of an individual exceeds a threshold that is safe for driving.
Statistical analysis on experimental calibration data for flowmeters in pressure pipes
NASA Astrophysics Data System (ADS)
Lazzarin, Alessandro; Orsi, Enrico; Sanfilippo, Umberto
2017-08-01
This paper presents a statistical analysis of experimental calibration data for flowmeters (i.e., electromagnetic, ultrasonic and turbine flowmeters) in pressure pipes. The experimental calibration data set consists of the whole archive of calibration tests carried out on 246 flowmeters from January 2001 to October 2015 at the Settore Portate of the Laboratorio di Idraulica “G. Fantoli” of Politecnico di Milano, which is accredited as LAT 104 for a flow range between 3 l/s and 80 l/s, with a certified Calibration and Measurement Capability (CMC) - formerly known as Best Measurement Capability (BMC) - equal to 0.2%. The data set is split into three subsets, consisting of 94 electromagnetic, 83 ultrasonic and 69 turbine flowmeters; each subset is analysed separately from the others, and a final comparison is then carried out. In particular, the main focus of the statistical analysis is the correction C, that is, the flow rate Q measured by the calibration facility (through the accredited procedures and the certified reference specimen) minus the flow rate QM contemporaneously recorded by the flowmeter under calibration, expressed as a percentage of the same QM.
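In symbols, the correction defined above is C = 100 · (Q − QM) / QM %, so a positive C indicates that the flowmeter under calibration reads lower than the facility reference.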
Wang, Wei; Young, Bessie A.; Fülöp, Tibor; de Boer, Ian H.; Boulware, L. Ebony; Katz, Ronit; Correa, Adolfo; Griswold, Michael E.
2015-01-01
Background: The calibration to Isotope Dilution Mass Spectrometry (IDMS) traceable creatinine is essential for valid use of the new Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation to estimate the glomerular filtration rate (GFR). Methods: For 5,210 participants in the Jackson Heart Study (JHS), serum creatinine was measured with a multipoint enzymatic spectrophotometric assay at the baseline visit (2000–2004) and re-measured using the Roche enzymatic method, traceable to IDMS, in a subset of 206 subjects. The 200 eligible samples (6 were excluded: 1 for failure of the re-measurement and 5 as outliers) were divided into three disjoint sets - training, validation, and test - to select a calibration model, estimate true errors, and assess performance of the final calibration equation. The calibration equation was applied to the serum creatinine measurements of all 5,210 participants to estimate GFR and the prevalence of CKD. Results: The selected Deming regression model provided a slope of 0.968 (95% Confidence Interval (CI), 0.904 to 1.053) and an intercept of −0.0248 (95% CI, −0.0862 to 0.0366) with an R-squared of 0.9527. Calibrated serum creatinine showed high agreement with actual measurements when applied to the unused test set (concordance correlation coefficient 0.934; 95% CI, 0.894 to 0.960). The baseline prevalence of CKD in the JHS (2000–2004) was 6.30% using calibrated values, compared with 8.29% using non-calibrated serum creatinine with the CKD-EPI equation (P < 0.001). Conclusions: A Deming regression model was chosen to optimally calibrate baseline serum creatinine measurements in the JHS, and the calibrated values provide a lower CKD prevalence estimate. PMID:25806862
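A hedged sketch of a Deming regression fit of the kind described, with delta denoting the assumed ratio of measurement-error variances (delta = 1 gives orthogonal regression); the data below are illustrative, not JHS measurements.

```python
# Sketch of a Deming regression fit: unlike ordinary least squares, it
# allows measurement error in both x (assay values) and y (IDMS-traceable
# reference values).
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Return (slope, intercept) of the Deming regression of y on x."""
    xm, ym = x.mean(), y.mean()
    sxx = np.mean((x - xm) ** 2)
    syy = np.mean((y - ym) ** 2)
    sxy = np.mean((x - xm) * (y - ym))
    slope = (syy - delta * sxx +
             np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    intercept = ym - slope * xm
    return slope, intercept

# Illustrative creatinine-like values (mg/dL):
x = np.array([0.6, 0.8, 1.0, 1.3, 1.7, 2.4])
y = np.array([0.58, 0.79, 0.97, 1.27, 1.66, 2.31])
slope, intercept = deming_fit(x, y)
calibrated = intercept + slope * x
```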
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
Analyses of Field Test Data at the Atucha-1 Spent Fuel Pools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sitaraman, S.
A field test was conducted at the Atucha-1 spent nuclear fuel pools to validate a software package for gross defect detection that is used in conjunction with the inspection tool, the Spent Fuel Neutron Counter (SFNC). A set of measurements was taken with the SFNC, and the software predictions were compared with these data and analyzed. The data spanned a wide range of cooling times and a set of burnup levels, leading to count rates from several hundred down to around twenty per second. The current calibration in the software, which uses linear fitting, required multiple calibration factors to cover the entire range of count rates recorded. The solution was to use power-regression data fitting to normalize the predicted response and derive one calibration factor that can be applied to the entire set of data. The resulting comparisons between the predicted and measured responses were generally good and provided a quantitative method of detecting missing fuel in virtually all situations. Since the current version of the software uses the linear calibration method, it would need to be updated with the new power-regression method to make it more user-friendly for real-time verification and fieldable for the range of responses that will be encountered.
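The switch from piecewise linear factors to a single power-law normalization might look like the following sketch, where response = a·rate^b is fitted by linear regression in log-log space; the count rates shown are illustrative, not SFNC data.

```python
# Sketch: replace several range-specific linear calibration factors with
# one power-law fit, measured = a * predicted**b, obtained in log space.
import numpy as np

measured = np.array([22.0, 55.0, 140.0, 360.0, 900.0])   # counts per second
predicted = np.array([30.0, 70.0, 165.0, 400.0, 950.0])  # software response

b, log_a = np.polyfit(np.log(predicted), np.log(measured), deg=1)
a = np.exp(log_a)

def normalize(predicted_rate):
    """One calibration function valid across the whole count-rate range."""
    return a * predicted_rate ** b
```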
Evolution of solid rocket booster component testing
NASA Technical Reports Server (NTRS)
Lessey, Joseph A.
1989-01-01
This paper describes the evolution of one of the new generation of test sets developed for the Solid Rocket Booster of the U.S. Space Transportation System. Requirements leading to factory checkout of the test set are explained, including the evolution from manual to semiautomated and toward fully automated status. Individual improvements in the built-in test equipment, self-calibration, and software flexibility are addressed, and the insertion of fault detection to improve reliability is discussed.
Students' Performance Calibration in a Basketball Dribbling Task in Elementary Physical Education
ERIC Educational Resources Information Center
Kolovelonis, Athanasios; Goudas, Marios; Dermitzaki, Irini
2012-01-01
The aim of this study was to examine students' performance calibration in physical education. One hundred fifth and sixth grade students provided estimations regarding their performance in a dribbling test after practicing dribbling for 16 minutes under different self-regulatory conditions (i.e., receiving feedback, setting goals, self-recording).…
Two laboratory methods for the calibration of GPS speed meters
NASA Astrophysics Data System (ADS)
Bai, Yin; Sun, Qiao; Du, Lei; Yu, Mei; Bai, Jie
2015-01-01
The set-ups of two calibration systems are presented to investigate calibration methods for GPS speed meters. The GPS speed meter calibrated is a special type of high-accuracy speed meter for vehicles, which uses Doppler demodulation of GPS signals to calculate the speed of a moving target. Three experiments are performed: simulated calibration, field-test signal replay calibration, and an in-field comparison with an optical speed meter. The experiments are conducted at specific speeds in the range of 40-180 km h-1 with the same GPS speed meter as the device under calibration. The evaluation of measurement results validates both methods for calibrating GPS speed meters. The relative deviations between the measurement results of the GPS-based high-accuracy speed meter and those of the optical speed meter are analyzed, and the equivalent uncertainty of the comparison is evaluated. The comparison results justify the use of GPS speed meters as reference equipment if no fewer than seven satellites are available. This study contributes to the widespread use of GPS-based high-accuracy speed meters as legal reference equipment in traffic speed metrology.
End-to-end test of the electron-proton spectrometer
NASA Technical Reports Server (NTRS)
Cash, B. L.
1972-01-01
A series of end-to-end tests was performed to demonstrate the proper functioning of the complete Electron-Proton Spectrometer (EPS). The purpose of the tests was to provide experimental verification of the design and a complete functional performance check of the instrument, from the excitation of the sensors through to and including the data processor and equipment test set. Each channel of the EPS was exposed to a calibrated beam of energetic particles, and counts were accumulated for a predetermined period of time at each of several energies. The counts were related to the known flux of particles to give a monodirectional response function for each channel. The measured response function of the test unit was compared to the response function determined for the calibration sensors from the data taken during the calibration program.
Hydrogen Field Test Standard: Laboratory and Field Performance
Pope, Jodie G.; Wright, John D.
2015-01-01
The National Institute of Standards and Technology (NIST) developed a prototype field test standard (FTS) that incorporates three test methods that could be used by state weights and measures inspectors to periodically verify the accuracy of retail hydrogen dispensers, much as gasoline dispensers are tested today. The three field test methods are: 1) gravimetric, 2) Pressure, Volume, Temperature (PVT), and 3) master meter. The FTS was tested in NIST's Transient Flow Facility with helium gas and in the field at a hydrogen dispenser location. The three methods agree within 0.57 % for all test drafts of helium gas in the laboratory setting and within 1.53 % for hydrogen gas in the field. The time required to perform six test drafts is similar for all three methods, ranging from 6 h for the gravimetric and master meter methods to 8 h for the PVT method. The laboratory tests show that 1) it is critical to wait for thermal equilibrium to achieve density measurements in the FTS that meet the desired uncertainty requirements for the PVT and master meter methods; in general, we found that a wait time of 20 minutes introduces errors < 0.1 % and < 0.04 % in the PVT and master meter methods, respectively, and 2) buoyancy corrections are important for the lowest uncertainty gravimetric measurements. The field tests show that sensor drift can become the largest component of uncertainty, one that is not present in the laboratory setting. The scale was calibrated after it was set up at the field location, and checks of the calibration throughout testing showed a drift of 0.031 %. Calibration of the master meter and the pressure sensors prior to travel to the field location and upon return showed significant drifts in their calibrations: 0.14 % and up to 1.7 %, respectively. This highlights the need for better sensor selection and/or more robust sensor testing prior to putting sensors into field service. All three test methods can be performed successfully in the field and give equivalent answers if proper sensors without drift are used. PMID:26722192
Wang, Wei; Young, Bessie A; Fülöp, Tibor; de Boer, Ian H; Boulware, L Ebony; Katz, Ronit; Correa, Adolfo; Griswold, Michael E
2015-05-01
The calibration to isotope dilution mass spectrometry-traceable creatinine is essential for valid use of the new Chronic Kidney Disease Epidemiology Collaboration equation to estimate the glomerular filtration rate. For 5,210 participants in the Jackson Heart Study (JHS), serum creatinine was measured with a multipoint enzymatic spectrophotometric assay at the baseline visit (2000-2004) and remeasured using the Roche enzymatic method, traceable to isotope dilution mass spectrometry, in a subset of 206 subjects. The 200 eligible samples (6 were excluded, 1 for failure of the remeasurement and 5 for outliers) were divided into 3 disjoint sets - training, validation and test - to select a calibration model, estimate true errors and assess performance of the final calibration equation. The calibration equation was applied to serum creatinine measurements of 5,210 participants to estimate glomerular filtration rate and the prevalence of chronic kidney disease (CKD). The selected Deming regression model provided a slope of 0.968 (95% confidence interval [CI], 0.904-1.053) and an intercept of -0.0248 (95% CI, -0.0862 to 0.0366) with an R value of 0.9527. Calibrated serum creatinine showed high agreement with actual measurements when applied to the unused test set (concordance correlation coefficient 0.934, 95% CI, 0.894-0.960). The baseline prevalence of CKD in the JHS (2000-2004) was 6.30% using calibrated values compared with 8.29% using noncalibrated serum creatinine with the Chronic Kidney Disease Epidemiology Collaboration equation (P < 0.001). A Deming regression model was chosen to optimally calibrate baseline serum creatinine measurements in the JHS, and the calibrated values provide a lower CKD prevalence estimate.
A multi-objective approach to improve SWAT model calibration in alpine catchments
NASA Astrophysics Data System (ADS)
Tuo, Ye; Marcolini, Giorgia; Disse, Markus; Chiogna, Gabriele
2018-04-01
Multi-objective hydrological model calibration can represent a valuable solution to reduce model equifinality and parameter uncertainty. The Soil and Water Assessment Tool (SWAT) model is widely applied to investigate water quality and water management issues in alpine catchments. However, the model calibration is generally based on discharge records only, and most of the previous studies have defined a unique set of snow parameters for an entire basin. Only a few studies have considered snow observations to validate model results or have taken into account the possible variability of snow parameters for different subbasins. This work presents and compares three possible calibration approaches. The first two procedures are single-objective calibration procedures, for which all parameters of the SWAT model were calibrated according to river discharge alone. Procedures I and II differ from each other by the assumption used to define snow parameters: The first approach assigned a unique set of snow parameters to the entire basin, whereas the second approach assigned different subbasin-specific sets of snow parameters to each subbasin. The third procedure is a multi-objective calibration, in which we considered snow water equivalent (SWE) information at two different spatial scales (i.e. subbasin and elevation band), in addition to discharge measurements. We tested these approaches in the Upper Adige river basin where a dense network of snow depth measurement stations is available. Only the set of parameters obtained with this multi-objective procedure provided an acceptable prediction of both river discharge and SWE. These findings offer the large community of SWAT users a strategy to improve SWAT modeling in alpine catchments.
Domain-Invariant Partial-Least-Squares Regression.
Nikzad-Langerodi, Ramin; Zellinger, Werner; Lughofer, Edwin; Saminger-Platz, Susanne
2018-05-11
Multivariate calibration models often fail to extrapolate beyond the calibration samples because of changes associated with the instrumental response, environmental conditions, or sample matrix. Most of the current methods used to adapt a source calibration model to a target domain exclusively apply to calibration transfer between similar analytical devices, while generic methods for calibration-model adaptation are largely missing. To fill this gap, we introduce domain-invariant partial-least-squares (di-PLS) regression, which extends ordinary PLS by a domain regularizer in order to align the source and target distributions in the latent-variable space. We show that a domain-invariant weight vector can be derived in closed form, which allows the integration of (partially) labeled data from the source and target domains as well as entirely unlabeled data from the latter. We test our approach on a simulated data set where the aim is to desensitize a source calibration model to an unknown interfering agent in the target domain (i.e., unsupervised model adaptation). In addition, we demonstrate unsupervised, semisupervised, and supervised model adaptation by di-PLS on two real-world near-infrared (NIR) spectroscopic data sets.
Measurements by a Vector Network Analyzer at 325 to 508 GHz
NASA Technical Reports Server (NTRS)
Fung, King Man; Samoska, Lorene; Chattopadhyay, Goutam; Gaier, Todd; Kangaslahti, Pekka; Pukala, David; Lau, Yuenie; Oleson, Charles; Denning, Anthony
2008-01-01
Recent experiments were performed in which return loss and insertion loss of waveguide test assemblies in the frequency range from 325 to 508 GHz were measured by use of a swept-frequency two-port vector network analyzer (VNA) test set. The experiments were part of a continuing effort to develop means of characterizing passive and active electronic components and systems operating at ever increasing frequencies. The waveguide test assemblies comprised WR-2.2 end sections collinear with WR-3.3 middle sections. The test set, assembled from commercially available components, included a 50-GHz VNA scattering- parameter test set and external signal synthesizers, augmented with recently developed frequency extenders, and further augmented with attenuators and amplifiers as needed to adjust radiofrequency and intermediate-frequency power levels between the aforementioned components. The tests included line-reflect-line calibration procedures, using WR-2.2 waveguide shims as the "line" standards and waveguide flange short circuits as the "reflect" standards. Calibrated dynamic ranges somewhat greater than about 20 dB for return loss and 35 dB for insertion loss were achieved. The measurement data of the test assemblies were found to substantially agree with results of computational simulations.
NASA Technical Reports Server (NTRS)
Romanofsky, Robert R.; Shalkhauser, Kurt A.
1989-01-01
The design and evaluation of a novel fixturing technique for characterizing millimeter wave solid state devices is presented. The technique utilizes a cosine-tapered ridge guide fixture and a one-tier de-embedding procedure to produce accurate and repeatable device level data. Advanced features of this technique include nondestructive testing, full waveguide bandwidth operation, universality of application, and rapid, yet repeatable, chip-level characterization. In addition, only one set of calibration standards is required regardless of the device geometry.
40 CFR 1065.703 - Distillate diesel fuel.
Code of Federal Regulations, 2014 CFR
2014-07-01
... CONTROLS ENGINE-TESTING PROCEDURES Engine Fluids, Test Fuels, Analytical Gases and Other Calibration... diesel fuel specified for use as a test fuel. See the standard-setting part to determine which grade to... grades are specified in the following table: Table 1 of § 1065.703—Test Fuel Specifications for...
40 CFR 1065.703 - Distillate diesel fuel.
Code of Federal Regulations, 2011 CFR
2011-07-01
... CONTROLS ENGINE-TESTING PROCEDURES Engine Fluids, Test Fuels, Analytical Gases and Other Calibration... diesel fuel specified for use as a test fuel. See the standard-setting part to determine which grade to... inhibitor. (5) Pour depressant. (6) Dye. (7) Dispersant. (8) Biocide. Table 1 of § 1065.703—Test Fuel...
40 CFR 1065.703 - Distillate diesel fuel.
Code of Federal Regulations, 2013 CFR
2013-07-01
... CONTROLS ENGINE-TESTING PROCEDURES Engine Fluids, Test Fuels, Analytical Gases and Other Calibration... diesel fuel specified for use as a test fuel. See the standard-setting part to determine which grade to... inhibitor. (5) Pour depressant. (6) Dye. (7) Dispersant. (8) Biocide. Table 1 of § 1065.703—Test Fuel...
40 CFR 1065.703 - Distillate diesel fuel.
Code of Federal Regulations, 2012 CFR
2012-07-01
... CONTROLS ENGINE-TESTING PROCEDURES Engine Fluids, Test Fuels, Analytical Gases and Other Calibration... diesel fuel specified for use as a test fuel. See the standard-setting part to determine which grade to... inhibitor. (5) Pour depressant. (6) Dye. (7) Dispersant. (8) Biocide. Table 1 of § 1065.703—Test Fuel...
Ouyang, Qin; Zhao, Jiewen; Chen, Quansheng
2015-01-01
The non-sugar solids (NSS) content is one of the most important nutrition indicators of Chinese rice wine. This study proposed a rapid method for measuring the NSS content in Chinese rice wine using near infrared (NIR) spectroscopy. We also systematically studied the efficient spectral variable selection algorithms required for modeling. A new algorithm, synergy interval partial least squares with competitive adaptive reweighted sampling (Si-CARS-PLS), was proposed for modeling. The performance of the final model was evaluated using the root mean square error of calibration (RMSEC) and the correlation coefficient (Rc) in the calibration set, and similarly tested by the root mean square error of prediction (RMSEP) and the correlation coefficient (Rp) in the prediction set. The optimum model by the Si-CARS-PLS algorithm was achieved when 7 PLS factors and 18 variables were included, with the following results: Rc=0.95 and RMSEC=1.12 in the calibration set, and Rp=0.95 and RMSEP=1.22 in the prediction set. In addition, the Si-CARS-PLS algorithm showed its superiority when compared with the algorithms commonly used in multivariate calibration. This work demonstrated that the NIR spectroscopy technique combined with a suitable multivariate calibration algorithm has high potential for rapid measurement of the NSS content in Chinese rice wine. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Beck, Hylke; de Roo, Ad; van Dijk, Albert; McVicar, Tim; Miralles, Diego; Schellekens, Jaap; Bruijnzeel, Sampurno; de Jeu, Richard
2015-04-01
Motivated by the lack of large-scale model parameter regionalization studies, a large set of 3328 small catchments (< 10000 km2) around the globe was used to set up and evaluate five model parameterization schemes at global scale. The HBV-light model was chosen because of its parsimony and flexibility to test the schemes. The catchments were calibrated against observed streamflow (Q) using an objective function incorporating both behavioral and goodness-of-fit measures, after which the catchment set was split into subsets of 1215 donor and 2113 evaluation catchments based on the calibration performance. The donor catchments were subsequently used to derive parameter sets that were transferred to similar grid cells based on a similarity measure incorporating climatic and physiographic characteristics, thereby producing parameter maps with global coverage. Overall, there was a lack of suitable donor catchments for mountainous and tropical environments. The schemes with spatially-uniform parameter sets (EXP2 and EXP3) achieved the worst Q estimation performance in the evaluation catchments, emphasizing the importance of parameter regionalization. The direct transfer of calibrated parameter sets from donor catchments to similar grid cells (scheme EXP1) performed best, although there was still a large performance gap between EXP1 and HBV-light calibrated against observed Q. The schemes with parameter sets obtained by simultaneously calibrating clusters of similar donor catchments (NC10 and NC58) performed worse than EXP1. The relatively poor Q estimation performance achieved by two (uncalibrated) macro-scale hydrological models suggests there is considerable merit in regionalizing the parameters of such models. The global HBV-light parameter maps and ancillary data are freely available via http://water.jrc.ec.europa.eu.
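A minimal sketch of the donor-to-grid-cell transfer idea behind scheme EXP1: each grid cell inherits the calibrated parameter set of its most similar donor catchment under a standardized climatic/physiographic similarity measure. Feature names, dimensions and data are hypothetical.

```python
# Sketch: transfer calibrated parameter sets from donor catchments to
# grid cells via nearest-neighbor matching on standardized features.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
donor_features = rng.normal(size=(1215, 6))   # e.g. aridity, slope, ...
donor_params = rng.uniform(size=(1215, 12))   # calibrated model parameters
cell_features = rng.normal(size=(50000, 6))   # global grid cells

# Standardize features so each contributes comparably to the distance.
mu, sd = donor_features.mean(axis=0), donor_features.std(axis=0)
nn = NearestNeighbors(n_neighbors=1).fit((donor_features - mu) / sd)
_, idx = nn.kneighbors((cell_features - mu) / sd)
cell_params = donor_params[idx.ravel()]       # parameter map: one row per cell
```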
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuehne, David Patrick; Lattin, Rebecca Renee
The Rad-NESHAP program, part of the Air Quality Compliance team of LANL's Compliance Programs group (EPC-CP), and the Radiation Instrumentation & Calibration team, part of the Radiation Protection Services group (RP-SVS), frequently partner on issues relating to characterizing air flow streams. This memo documents the most recent example of this partnership, involving performance testing of sulfur hexafluoride detectors for use in stack gas mixing tests. Additionally, members of the Rad-NESHAP program performed a functional trending test on a pair of optical particle counters, comparing results from a non-calibrated instrument to a calibrated instrument. Prior to commissioning a new stack sampling system, the ANSI standard for stack sampling requires that the stack sample location meet several criteria, including uniformity of tracer gas and aerosol mixing in the air stream. For these mix tests, tracer media (sulfur hexafluoride gas or liquid oil aerosol particles) are injected into the stack air stream and the resulting air concentrations are measured across the plane of the stack at the proposed sampling location. The coefficient of variation of these media concentrations must be under 20% when evaluated over the central 2/3 area of the stack or duct. The instruments which measure these air concentrations must be tested prior to the stack tests in order to ensure their linear response to varying air concentrations of either tracer gas or tracer aerosol. The instruments used in tracer gas and aerosol mix testing cannot be calibrated by the LANL Standards and Calibration Laboratory, so they would normally be sent off-site for factory calibration by the vendor. However, operational requirements can prevent formal factory calibration of some instruments after they have been used in hazardous settings, e.g., within a radiological facility with potential airborne contamination. The performance tests described in this document are intended to demonstrate the reliable performance of the test instruments for the specific tests used in stack flow characterization.
Microprocessor-based single particle calibration of scintillation counter
NASA Technical Reports Server (NTRS)
Mazumdar, G. K. D.; Pathak, K. M.
1985-01-01
A microprocessor-based set-up was fabricated and tested for the single particle calibration of a plastic scintillator. The single particle response of the scintillator is digitized by an A/D converter, and an 8085A-based microprocessor stores the pulse heights. The digitized information is printed. Facilities for CRT display and for cassette storage and recall are also provided.
NIST-NRC Comparison of Total Immersion Liquid-in-Glass Thermometers
NASA Astrophysics Data System (ADS)
Hill, K. D.; Gee, D. J.; Cross, C. D.; Strouse, G. F.
2009-02-01
The use of liquid-in-glass (LIG) thermometers is described in many documentary standards in the fields of environmental testing, material testing, and material transfer. Many national metrology institutes, including the National Institute of Standards and Technology (NIST) and the National Research Council of Canada (NRC), list calibration services for these thermometers among the Calibration Measurement Capabilities of Appendix C of the BIPM Key Comparison Database. NIST and NRC arranged a bilateral comparison of a set of total-immersion ASTM-type LIG thermometers to validate their uncertainty claims. Two each of ASTM thermometer types 62C through 69C were calibrated at NIST and at NRC at four temperatures distributed over the range appropriate to each thermometer, in addition to the ice point. Collectively, the thermometers span a temperature range of −38 °C to 305 °C. In total, 160 measurements (80 pairs) comprise the comparison data set. Pair-wise differences (T_NIST − T_NRC) were formed for each thermometer at each temperature. For 8 of the 80 pairs (10 %), the differences exceed the k = 2 combined uncertainties. These results support the claimed capabilities of NIST and NRC for the calibration of LIG thermometers.
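The pair-wise acceptance criterion can be sketched as follows (illustrative values, not the actual comparison data): a difference is flagged when it exceeds the k = 2 combined standard uncertainty of the two calibrations.

```python
# Sketch of the pair-wise comparison criterion: flag a thermometer/point
# when |T_NIST - T_NRC| exceeds the k=2 combined standard uncertainty.
import numpy as np

t_nist = np.array([25.002, 100.010, 200.030])   # NIST results (deg C)
t_nrc = np.array([25.006, 100.004, 200.080])    # NRC results (deg C)
u_nist = np.array([0.004, 0.006, 0.010])        # standard uncertainties (k=1)
u_nrc = np.array([0.004, 0.006, 0.010])

diff = t_nist - t_nrc
u_combined_k2 = 2.0 * np.sqrt(u_nist**2 + u_nrc**2)
flagged = np.abs(diff) > u_combined_k2          # pairs exceeding k=2 bounds
print(flagged)   # here only the third pair exceeds: |-0.05| > 0.028
```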
Igne, Benoît; Drennen, James K; Anderson, Carl A
2014-01-01
Changes in raw materials and process wear and tear can have significant effects on the prediction error of near-infrared calibration models. When the variability that is present during routine manufacturing is not included in the calibration, test, and validation sets, the long-term performance and robustness of the model will be limited. Nonlinearity is a major source of interference. In near-infrared spectroscopy, nonlinearity can arise from light path-length differences caused by differences in particle size or density. The usefulness of support vector machine (SVM) regression for handling nonlinearity and improving the robustness of calibration models, in scenarios where the calibration set did not include all the variability present in the test set, was evaluated. Compared to partial least squares (PLS) regression, SVM regression was less affected by physical (particle size) and chemical (moisture) differences. The linearity of the SVM-predicted values was also improved. Nevertheless, although visualization and interpretation tools have been developed to enhance the usability of SVM-based methods, work remains to be done to provide chemometricians in the pharmaceutical industry with a regression method that can supplement PLS-based methods.
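A hedged sketch of the kind of comparison described, fitting PLS and SVM regression to the same synthetic spectra with a mild nonlinearity; the model settings and data are stand-ins, not the study's.

```python
# Sketch: compare PLS and SVM (SVR) regression on spectra containing a
# mild nonlinearity that a kernel method can absorb but a linear
# latent-variable model cannot.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.normal(size=(120, 300))
y = X[:, :3].sum(axis=1) + 0.3 * X[:, 0] ** 2    # mild nonlinearity

X_cal, y_cal = X[:80], y[:80]
X_test, y_test = X[80:], y[80:]

pls = PLSRegression(n_components=5).fit(X_cal, y_cal)
svm = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
svm.fit(X_cal, y_cal)

for name, m in [("PLS", pls), ("SVM", svm)]:
    rmsep = np.sqrt(np.mean((m.predict(X_test).ravel() - y_test) ** 2))
    print(f"{name}: RMSEP = {rmsep:.3f}")
```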
NASA Technical Reports Server (NTRS)
Everhart, Joel L.
1996-01-01
Orifice-to-orifice inconsistencies in data acquired with an electronically-scanned pressure system at the beginning of a wind tunnel experiment forced modifications to the standard instrument calibration procedures. These modifications included a large increase in the number of calibration points, which allowed a critical examination of the calibration curve-fit process and a subsequent post-test reduction of the pressure data. Evaluation of these data has resulted in an improved functional representation of the pressure-voltage signature for electronically-scanned pressure sensors, which can reduce the errors due to calibration curve fit to under 0.10 percent of reading, compared to the manufacturer-specified 0.10 percent of full scale. Application of the improved calibration function allows a more rational selection of the calibration set-point pressures. These pressures should be adjusted to achieve a voltage output which matches the physical shape of the pressure-voltage signature of the sensor. This process is conducted in lieu of the more traditional approach, where a calibration pressure is specified and the resulting sensor voltage is recorded. The fifteen calibrations acquired over the two-week duration of the wind tunnel test were further used to perform a preliminary statistical assessment of the variation in the calibration process. The results allowed estimation of the bias uncertainty for a single instrument calibration, and they form the precursor for more extensive and more controlled studies in the laboratory.
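The distinction between percent-of-full-scale and percent-of-reading curve-fit errors can be made concrete with a small sketch (synthetic sensor data, not the paper's): the same residuals look very different under the two conventions, especially at low pressures.

```python
# Sketch: fit a polynomial pressure-voltage calibration and express the
# curve-fit residuals both as percent of full scale and percent of reading.
import numpy as np

pressure = np.linspace(5.0, 100.0, 15)                  # set points (kPa)
voltage = 0.05 * pressure + 2e-5 * pressure ** 2        # synthetic signature
coeffs = np.polyfit(voltage, pressure, deg=3)           # calibration fit
fitted = np.polyval(coeffs, voltage)

residual = fitted - pressure
full_scale = pressure.max()
pct_full_scale = 100 * np.abs(residual) / full_scale
pct_reading = 100 * np.abs(residual) / pressure         # larger at low p
print(pct_full_scale.max(), pct_reading.max())
```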
Accuracy improvement in a calibration test bench for accelerometers by a vision system
DOE Office of Scientific and Technical Information (OSTI.GOV)
D’Emilia, Giulio, E-mail: giulio.demilia@univaq.it; Di Gasbarro, David, E-mail: david.digasbarro@graduate.univaq.it; Gaspari, Antonella, E-mail: antonella.gaspari@graduate.univaq.it
2016-06-28
A procedure is described in this paper for improving the calibration accuracy of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low frequency camera have been carried out in order to reduce the uncertainty of the evaluation of the real acceleration at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performance of the vision system, showing satisfactory behavior when the measurement uncertainty is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system made it possible to fit the information about the reference acceleration at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.
Spectral irradiance measurement and actinic radiometer calibration for UV water disinfection
NASA Astrophysics Data System (ADS)
Sperfeld, Peter; Barton, Bettina; Pape, Sven; Towara, Anna-Lena; Eggers, Jutta; Hopfenmüller, Gabriel
2014-12-01
In a joint project, sglux and PTB investigated and developed methods and equipment to measure the spectral and weighted irradiance of high-efficiency UV-C emitters used in water disinfection plants. A calibration facility was set up to calibrate the microbicidal irradiance responsivity of actinic radiometers with respect to the weighted spectral irradiance of specially selected low-pressure mercury and medium-pressure mercury UV lamps. To verify the calibration method and to perform on-site tests, spectral measurements were carried out directly at water disinfection plants in operation. The weighted microbicidal irradiance of the plants was calculated and compared to the measurements of various actinic radiometers.
Hypervelocity Capability of the HYPULSE Shock-Expansion Tunnel for Scramjet Testing
NASA Technical Reports Server (NTRS)
Foelsche, Robert O.; Rogers, R. Clayton; Tsai, Ching-Yi; Bakos, Robert J.; Shih, Ann T.
2001-01-01
New hypervelocity capabilities for scramjet testing have recently been demonstrated in the HYPULSE Shock-Expansion Tunnel (SET). With NASA's continuing interest in scramjet testing at hypervelocity conditions (Mach 12 and above), a SET nozzle was designed and added to the HYPULSE facility. Results of tests conducted to establish SET operational conditions and facility nozzle calibration are presented and discussed for a Mach 15 (M15) flight enthalpy. The measurements and detailed computational fluid dynamics (CFD) calculations show that the nozzle delivers a test gas with a sufficiently wide core size to be suitable for free-jet testing of scramjet engine models of similar scale as those tested in conventional low Mach number blow-down test facilities.
NASA Technical Reports Server (NTRS)
Miller, D. P.; Prahst, P. S.
1995-01-01
An axial compressor test rig has been designed for the operation of small turbomachines. A flow test was run to calibrate the compressor inlet and to determine the source and magnitudes of the loss mechanisms in the inlet for a highly loaded two-stage axial compressor test. Several flow conditions and inlet guide vane (IGV) angle settings were established, for which detailed surveys were completed. Boundary layer bleed was also provided along the casing of the inlet, behind the support struts and ahead of the IGV. Several computational fluid dynamics (CFD) calculations were made for selected flow conditions established during the test. Good agreement between the CFD and test data was obtained for these test conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jankovic, John; Zontek, Tracy L.; Ogle, Burton R.; ...
2015-01-27
We examined the calibration records of two direct reading instruments designated as condensation particle counters in order to determine the number of times they were found to be out of tolerance at the annual manufacturer's recalibration. Both instruments were found to be out of tolerance more times than within tolerance, and it was concluded that annual calibration alone was insufficient to provide operational confidence in an instrument's response. A method based on subsequent agreement with data gathered from a newly calibrated instrument was therefore developed to confirm operational readiness between annual calibrations, hereafter referred to as bump testing. The method consists of measuring source particles produced by a gas grille spark igniter in a gallon-size jar. Sampling from this chamber with a newly calibrated instrument to determine the calibrated response over the particle concentration range of interest serves as a reference. Agreement between this reference response and subsequent responses at later dates implies that the instrument is performing as it was at the time of calibration. Side-by-side sampling allows the level of agreement between two or more instruments to be determined. This is useful when simultaneously collected data are compared for differences, i.e., background versus process aerosol concentrations. A reference set of data was obtained using the spark igniter. The generation system was found to be reproducible and suitable to form the basis of calibration verification. Finally, the bump test is simple enough to be performed periodically throughout the calibration year or prior to field monitoring.
Norén, G Niklas; Bergvall, Tomas; Ryan, Patrick B; Juhlin, Kristina; Schuemie, Martijn J; Madigan, David
2013-10-01
Observational healthcare data offer the potential to identify adverse drug reactions that may be missed by spontaneous reporting. The self-controlled cohort analysis within the Temporal Pattern Discovery framework compares the observed-to-expected ratio of medical outcomes during post-exposure surveillance periods with those during a set of distinct pre-exposure control periods in the same patients. It utilizes an external control group to account for systematic differences between the different time periods, thus combining within- and between-patient confounder adjustment in a single measure. The objective was to evaluate the performance of the calibrated self-controlled cohort analysis within Temporal Pattern Discovery as a tool for risk identification in observational healthcare data. Different implementations of the calibrated self-controlled cohort analysis were applied to 399 drug-outcome pairs (165 positive and 234 negative test cases across 4 health outcomes of interest) in 5 real observational databases (four with administrative claims and one with electronic health records). Performance was evaluated on real data through sensitivity/specificity, the area under the receiver operating characteristic curve (AUC), and bias. The calibrated self-controlled cohort analysis achieved good predictive accuracy across the outcomes and databases under study. The optimal design based on this reference set uses a 360-day surveillance period and a single control period 180 days prior to new prescriptions. It achieved an average AUC of 0.75 and an AUC >0.70 in all but one scenario. A design with three separate control periods performed better for the electronic health records database and for acute renal failure across all data sets. The estimates for negative test cases were generally unbiased, but a minor negative bias of up to 0.2 on the RR scale was observed with the configurations using multiple control periods, for acute liver injury and upper gastrointestinal bleeding. The calibrated self-controlled cohort analysis within Temporal Pattern Discovery shows promise as a tool for risk identification; it performs well at discriminating positive from negative test cases. The optimal parameter configuration may vary with the data set and medical outcome of interest.
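The core contrast of the self-controlled cohort design can be sketched as a ratio of observed-to-expected ratios; the counts below are illustrative, and the production method additionally applies calibration against an external control group.

```python
# Sketch of the within-patient contrast: an observed-to-expected ratio in
# the post-exposure surveillance period divided by the same ratio in a
# pre-exposure control period.
def self_controlled_rr(obs_post, exp_post, obs_pre, exp_pre):
    """Ratio of observed-to-expected ratios, post vs. pre exposure."""
    return (obs_post / exp_post) / (obs_pre / exp_pre)

# Example: 30 events observed vs 18 expected after exposure, against
# 12 observed vs 15 expected in the control period.
rr = self_controlled_rr(30, 18.0, 12, 15.0)   # about 2.08
```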
NASA Technical Reports Server (NTRS)
Cook, M.
1990-01-01
Qualification testing was performed on Combustion Engineering's AMDATA Intraspect/98 Data Acquisition and Imaging System as applied to the redesigned solid rocket motor (RSRM) case membrane case-to-insulation bondline inspection. Testing was performed at M-67, the Thiokol Corp. RSRM Assembly Facility. The purpose of the inspection was to verify the integrity of the case membrane case-to-insulation bondline. The case membrane scanner was calibrated on the redesigned solid rocket motor case segment calibration standard, which had an intentional 1.0 by 1.0 in. case-to-insulation unbond. The case membrane scanner was then used to scan a 20 by 20 in. membrane area of the case segment. Calibration of the scanner was then rechecked on the calibration standard to ensure that the calibration settings had not changed during the case membrane scan. This procedure was successfully performed five times to qualify the unbond detection capability of the case membrane scanner.
Detailed Calibration of SphinX instrument at the Palermo XACT facility of INAF-OAPA
NASA Astrophysics Data System (ADS)
Gburek, Szymon; Collura, Alfonso; Barbera, Marco; Reale, Fabio; Sylwester, Janusz; Kowalinski, Miroslaw; Bakala, Jaroslaw; Kordylewski, Zbigniew; Plocieniak, Stefan; Podgorski, Piotr; Trzebinski, Witold; Varisco, Salvatore
The Solar Photometer in X-rays (SphinX) experiment is scheduled for launch in late summer 2008 on board the Russian CORONAS-Photon satellite. SphinX will use three silicon PIN diode detectors with selected effective areas in order to record solar spectra in the X-ray energy range 0.3-15 keV with unprecedented temporal and medium energy resolution. The high sensitivity and large dynamic range of the SphinX instrument will, for the first time, make it possible to observe solar soft X-ray variability from the weakest levels, ten times below present thresholds, to the largest X20+ flares. We present the results of the ground X-ray calibrations of the SphinX instrument performed at the X-ray Astronomy Calibration and Testing (XACT) facility of INAF-OAPA. The calibrations were essential for determining the SphinX detector energy resolution and efficiency. We describe the instrumental set-up for the ground tests, the adopted measurement techniques, and the results of the calibration data analysis.
Results of the 1981 NASA/JPL balloon flight solar cell calibration program
NASA Technical Reports Server (NTRS)
Seaman, C. H.; Weiss, R. S.
1982-01-01
The calibration of solar cells for the direct conversion of solar energy, carried to high altitudes by balloon flight, is reported. Twenty-seven modules were carried to an altitude of 35.4 kilometers. Silicon cells are stable for long periods of time and can be used as standards. It is demonstrated that the cell mounting cavity may be either black or white with equal validity in setting solar simulators. The calibrated cells can be used as reference standards in simulator testing of cells and arrays.
Procedure for the Selection and Validation of a Calibration Model I-Description and Application.
Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D
2017-05-01
Calibration model selection is required for all quantitative methods in toxicology and more broadly in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and the weighting factor that correctly model the data. Mis-selecting the calibration model will generate lower quality control (QC) accuracy, with errors up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. The success rate of this scheme is on average 40% higher than a traditional "fit and check the QCs accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the upper limit of quantification and lower limit of quantification replicate measurements. When weighting was required, the choice between 1/x and 1/x² was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, model order was selected through a partial F-test. The chosen calibration model was validated through Cramer-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
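The two statistical decisions at the heart of this scheme lend themselves to a compact implementation. The sketch below (Python with NumPy/SciPy is an assumption; the authors' published script runs in RStudio) illustrates the variance F-test for the need for weighting and the partial F-test for model order; function names, data layout, and the significance level are illustrative.

```python
# Sketch of the two tests: replicate-variance F-test (need for weighting)
# and partial F-test (linear vs. quadratic). Illustrative names and alpha.
import numpy as np
from scipy import stats

def needs_weighting(y_lloq, y_uloq, alpha=0.05):
    """Two-sided F-test comparing replicate variances at ULOQ vs. LLOQ."""
    f = np.var(y_uloq, ddof=1) / np.var(y_lloq, ddof=1)
    df1, df2 = len(y_uloq) - 1, len(y_lloq) - 1
    p = 2 * min(stats.f.cdf(f, df1, df2), stats.f.sf(f, df1, df2))
    return p < alpha

def quadratic_needed(x, y, w, alpha=0.05):
    """Partial F-test: does the quadratic term significantly reduce the RSS?"""
    sw = np.sqrt(w)
    rss = []
    for X in (np.column_stack([np.ones_like(x), x]),
              np.column_stack([np.ones_like(x), x, x**2])):
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        r = (y - X @ beta) * sw
        rss.append(r @ r)
    f = (rss[0] - rss[1]) / (rss[1] / (len(x) - 3))
    return stats.f.sf(f, 1, len(x) - 3) < alpha
```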
Is site-specific APEX calibration necessary for field scale BMP assessment?
USDA-ARS?s Scientific Manuscript database
The possibility of extending parameter sets obtained at one site to sites with similar characteristics is appealing. This study was undertaken to test model performance and compare the effectiveness of best management practices (BMPs) using three parameter sets obtained from three watersheds when a...
NASA Astrophysics Data System (ADS)
Wendelboe, Gorm
2018-06-01
A SeaBat T50 calibration that combines measurements in a test tank with data from numerical models is presented. The calibration is assessed with data obtained from a series of tests conducted over a sandy seabed outside the harbor of Santa Barbara, California (April 2016). The tests include different tone-burst durations, sound pressure levels, and receive gains in order to verify that the estimated seabed backscattering strength (S_b) is invariant to sonar settings. Finally, S_b-estimates obtained in the frequency range from 190 kHz in steps of 10 kHz up to 400 kHz, and for grazing angles from 20° up to 90° in bins of width 5°, are presented. The results are compared with results found in the literature.
40 CFR 85.2233 - Steady state test equipment calibrations, adjustments, and quality control-EPA 91.
Code of Federal Regulations, 2013 CFR
2013-07-01
... tolerance range. The pressure in the sample cell must be the same with the calibration gas flowing during... this chapter. The check is done at 30 mph (48 kph), and a power absorption load setting to generate a... in § 85.2225(c)(1) are not met. (2) Leak checks. Each time the sample line integrity is broken, a...
Rincent, R; Laloë, D; Nicolas, S; Altmann, T; Brunel, D; Revilla, P; Rodríguez, V M; Moreno-Gonzalez, J; Melchinger, A; Bauer, E; Schoen, C-C; Meyer, N; Giauffret, C; Bauland, C; Jamin, P; Laborde, J; Monod, H; Flament, P; Charcosset, A; Moreau, L
2012-10-01
Genomic selection refers to the use of genotypic information for predicting breeding values of selection candidates. A prediction formula is calibrated with the genotypes and phenotypes of reference individuals constituting the calibration set. The size and the composition of this set are essential parameters affecting the prediction reliabilities. The objective of this study was to maximize reliabilities by optimizing the calibration set. Different criteria based on the diversity or on the prediction error variance (PEV) derived from the realized additive relationship matrix-best linear unbiased predictions model (RA-BLUP) were used to select the reference individuals. For the latter, we considered the mean of the PEV of the contrasts between each selection candidate and the mean of the population (PEVmean) and the mean of the expected reliabilities of the same contrasts (CDmean). These criteria were tested with phenotypic data collected on two diversity panels of maize (Zea mays L.) genotyped with a 50k SNPs array. In the two panels, samples chosen based on CDmean gave higher reliabilities than random samples for various calibration set sizes. CDmean also appeared superior to PEVmean, which can be explained by the fact that it takes into account the reduction of variance due to the relatedness between individuals. Selected samples were close to optimality for a wide range of trait heritabilities, which suggests that the strategy presented here can efficiently sample subsets in panels of inbred lines. A script to optimize reference samples based on CDmean is available on request.
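A hedged sketch of the CDmean idea follows. It assumes the mixed-model formulation y = 1μ + Zu + e with u ~ N(0, σg²A) and uses CD(c) = c'(A − λ(Z'Z + λA⁻¹)⁻¹)c / (c'Ac) with λ = σe²/σg², which is our reading of the RA-BLUP expressions referenced above; the random-exchange search is illustrative and not the authors' script (available from them on request).

```python
# Illustrative CDmean and a random-exchange search over calibration sets.
# A: realized additive relationship matrix (n x n, assumed invertible);
# lam = sigma_e^2 / sigma_g^2. Assumed formulation of the criterion.
import numpy as np

def cd_mean(A, calib_idx, lam):
    n = A.shape[0]
    Z = np.zeros((len(calib_idx), n))
    Z[np.arange(len(calib_idx)), calib_idx] = 1.0  # phenotyped-line incidence
    V = A - lam * np.linalg.inv(Z.T @ Z + lam * np.linalg.inv(A))
    cds = []
    for i in range(n):  # contrast: candidate i vs. the population mean
        c = -np.ones(n) / n
        c[i] += 1.0
        cds.append((c @ V @ c) / (c @ A @ c))
    return float(np.mean(cds))

def optimize_calib_set(A, size, lam, iters=500, seed=0):
    """Greedy one-swap search: keep a swap whenever it improves CDmean."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    current = rng.choice(n, size, replace=False)
    best = cd_mean(A, current, lam)
    for _ in range(iters):
        trial = current.copy()
        trial[rng.integers(size)] = rng.choice(np.setdiff1d(np.arange(n), current))
        score = cd_mean(A, trial, lam)
        if score > best:
            current, best = trial, score
    return current, best
```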
Kuligowski, Julia; Carrión, David; Quintás, Guillermo; Garrigues, Salvador; de la Guardia, Miguel
2011-01-01
The selection of an appropriate calibration set is a critical step in multivariate method development. In this work, the effect of using different calibration sets, based on a previous classification of unknown samples, on the partial least squares (PLS) regression model performance has been discussed. As an example, attenuated total reflection (ATR) mid-infrared spectra of deep-fried vegetable oil samples from three botanical origins (olive, sunflower, and corn oil), with increasing polymerized triacylglyceride (PTG) content induced by a deep-frying process were employed. The use of a one-class-classifier partial least squares-discriminant analysis (PLS-DA) and a rooted binary directed acyclic graph tree provided accurate oil classification. Oil samples fried without foodstuff could be classified correctly, independent of their PTG content. However, class separation of oil samples fried with foodstuff, was less evident. The combined use of double-cross model validation with permutation testing was used to validate the obtained PLS-DA classification models, confirming the results. To discuss the usefulness of the selection of an appropriate PLS calibration set, the PTG content was determined by calculating a PLS model based on the previously selected classes. In comparison to a PLS model calculated using a pooled calibration set containing samples from all classes, the root mean square error of prediction could be improved significantly using PLS models based on the selected calibration sets using PLS-DA, ranging between 1.06 and 2.91% (w/w).
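The class-then-calibrate strategy can be sketched briefly. The snippet below (scikit-learn is an assumption; the paper does not name software) trains one PLS model per botanical class and routes an unknown spectrum through a classifier first; classify_oil is a hypothetical stand-in for the PLS-DA/decision-tree classifier.

```python
# Per-class PLS calibration after classification (sketch).
# X_cal: ATR spectra, y_cal: PTG content (% w/w), labels: botanical origin.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def fit_class_models(X_cal, y_cal, labels, n_components=5):
    return {cls: PLSRegression(n_components=n_components)
                 .fit(X_cal[labels == cls], y_cal[labels == cls])
            for cls in np.unique(labels)}

def predict_ptg(x_unknown, models, classify_oil):
    """classify_oil: hypothetical stand-in for the PLS-DA classifier tree."""
    cls = classify_oil(x_unknown)
    return models[cls].predict(x_unknown.reshape(1, -1)).ravel()[0]
```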
An update on 'dose calibrator' settings for nuclides used in nuclear medicine.
Bergeron, Denis E; Cessna, Jeffrey T
2018-06-01
Most clinical measurements of radioactivity, whether for therapeutic or imaging nuclides, rely on commercial re-entrant ionization chambers ('dose calibrators'). The National Institute of Standards and Technology (NIST) maintains a battery of representative calibrators and works to link calibration settings ('dial settings') to primary radioactivity standards. Here, we provide a summary of NIST-determined dial settings for 22 radionuclides. We collected previously published dial settings and determined some new ones using either the calibration curve method or the dialing-in approach. The dial settings with their uncertainties are collected in a comprehensive table. In general, current manufacturer-provided calibration settings give activities that agree with National Institute of Standards and Technology standards to within a few percent.
NASA Astrophysics Data System (ADS)
Houchin, J. S.
2014-09-01
A common problem for the off-line validation of calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration-coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm is automatically provided without analyst effort. Using AeroADL, the Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.
Barañao, P A; Hall, E R
2004-01-01
Activated Sludge Model No 3 (ASM3) was chosen to model an activated sludge system treating effluents from a mechanical pulp and paper mill. The high COD concentration and the high content of readily biodegradable substrates of the wastewater make this model appropriate for this system. ASM3 was calibrated based on batch respirometric tests using fresh wastewater and sludge from the treatment plant, and on analytical measurements of COD, TSS and VSS. The model, developed for municipal wastewater, was found suitable for fitting a variety of respirometric batch tests, performed at different temperatures and food to microorganism ratios (F/M). Therefore, a set of calibrated parameters, as well as the wastewater COD fractions, was estimated for this industrial wastewater. The majority of the calibrated parameters were in the range of those found in the literature.
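As a loose illustration of the respirometric calibration step, the following sketch fits a single-substrate Monod oxygen-uptake model to a batch OUR curve. This is a didactic reduction, not the full ASM3; all parameter names and values are invented assumptions.

```python
# Fit a single-substrate Monod oxygen-uptake-rate (OUR) model to a batch
# respirogram. Didactic stand-in for ASM3; parameter values are invented.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def our_model(t, mu_max, K_S, Y_H=0.6, S0=50.0, X0=30.0):
    def rhs(_, y):
        S, X = y
        growth = mu_max * S / (K_S + S) * X
        return [-growth / Y_H, growth]
    S, X = solve_ivp(rhs, (t[0], t[-1]), [S0, X0], t_eval=t).y
    return (1 - Y_H) / Y_H * mu_max * S / (K_S + S) * X  # exogenous OUR

t = np.linspace(0.0, 6.0, 60)  # h
rng = np.random.default_rng(1)
our_obs = our_model(t, 4.0, 2.0) + rng.normal(0.0, 0.5, t.size)  # synthetic

(mu_fit, Ks_fit), _ = curve_fit(our_model, t, our_obs, p0=[3.0, 1.0])
print(f"fitted mu_max = {mu_fit:.2f} 1/h, K_S = {Ks_fit:.2f} mg/L")
```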
Investigation of cloud properties and atmospheric stability with MODIS
NASA Technical Reports Server (NTRS)
Menzel, P.; Ackerman, S.; Moeller, C.; Gumley, L.; Strabala, K.; Frey, R.; Prins, E.; LaPorte, D.; Lynch, M.
1996-01-01
The last half year was spent in preparing Version 1 software for delivery, and culminated in transfer of the Level 2 cloud mask production software to the SDST in April. A simulated MODIS test data set with good radiometric integrity was produced using MAS data for a clear ocean scene. ER-2 flight support and MAS data processing were provided by CIMSS personnel during the Apr-May 96 SUCCESS field campaign in Salina, Kansas. Improvements have been made in the absolute calibration of the MAS, including better characterization of the spectral response for all 50 channels. Plans were laid out for validating and testing the MODIS calibration techniques; these plans were further refined during a UW calibration meeting with MCST.
NASA Technical Reports Server (NTRS)
Capone, Francis J.; Bangert, Linda S.; Asbury, Scott C.; Mills, Charles T. L.; Bare, E. Ann
1995-01-01
The Langley 16-Foot Transonic Tunnel is a closed-circuit single-return atmospheric wind tunnel that has a slotted octagonal test section with continuous air exchange. The wind tunnel speed can be varied continuously over a Mach number range from 0.1 to 1.3. Test-section plenum suction is used for speeds above a Mach number of 1.05. Over a period of some 40 years, the wind tunnel has undergone many modifications. During the modifications completed in 1990, a new model support system that increased blockage, new fan blades, a catcher screen for the first set of turning vanes, and process controllers for tunnel speed, model attitude, and jet flow for powered models were installed. This report presents a complete description of the Langley 16-Foot Transonic Tunnel and auxiliary equipment, the calibration procedures, and the results of the 1977 and the 1990 wind tunnel calibration with test section air removal. Comparisons with previous calibrations showed that the modifications made to the wind tunnel had little or no effect on the aerodynamic characteristics of the tunnel. Information required for planning experimental investigations and the use of test hardware and model support systems is also provided.
Conical Probe Calibration and Wind Tunnel Data Analysis of the Channeled Centerbody Inlet Experiment
NASA Technical Reports Server (NTRS)
Truong, Samson Siu
2011-01-01
For a multi-hole test probe undergoing wind tunnel tests, the resulting data need to be analyzed for any significant trends. These trends include relating the pressure distributions, the geometric orientation, and the local velocity vector to one another. However, experimental runs always involve some error; in this case, the misalignment bias angles resulting from the distortion associated with the angularity of the test probe or the local velocity vector. A calibration procedure is therefore required to compensate for this error. Through a series of calibration steps presented here, the angular biases are determined and removed from the data sets. By removing the misalignment, smoother pressure distributions contribute to more accurate experimental results, which in turn can be compared to theoretical and actual in-flight results to identify any similarities. Error analyses are also performed to verify the accuracy of the calibration error reduction. The resulting calibrated data will be implemented in an in-flight RTF script that will output critical flight parameters during future CCIE experimental test runs. All of these tasks are associated with, and contribute to, the NASA Dryden Flight Research Center's F-15B Research Testbed's Small Business Innovation Research project on the Channeled Centerbody Inlet Experiment.
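A hedged sketch of the bias-removal idea: treat the probe misalignment as a gain and offset on each flow angle, estimate both by least squares against calibration runs at known angles, and report the 2-sigma residual after correction. The linear model and names are illustrative assumptions, not the report's actual procedure.

```python
# Estimate per-axis gain and bias of indicated flow angles from wind tunnel
# runs at known angles, then correct later measurements. Illustrative model:
# theta_meas = gain * theta_true + bias + noise.
import numpy as np

def fit_angle_calibration(theta_true, theta_meas):
    A = np.column_stack([theta_true, np.ones_like(theta_true)])
    (gain, bias), *_ = np.linalg.lstsq(A, theta_meas, rcond=None)
    resid = theta_meas - (gain * theta_true + bias)
    return gain, bias, 2.0 * np.std(resid, ddof=2)  # 2-sigma residual

def correct_angle(theta_meas, gain, bias):
    return (theta_meas - bias) / gain
```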
NASA Technical Reports Server (NTRS)
Sketoe, J. G.; Clark, Anthony
2000-01-01
This paper presents a DOD E3 program overview on integrated circuit immunity. The topics include: 1) EMI Immunity Testing; 2) Threshold Definition; 3) Bias Tee Function; 4) Bias Tee Calibration Set-Up; 5) EDM Test Figure; 6) EMI Immunity Levels; 7) NAND vs. AND Gate Immunity; 8) TTL vs. LS Immunity Levels; 9) TP vs. OC Immunity Levels; 10) 7805 Volt Reg Immunity; and 11) Seventies Chip Set. This paper is presented in viewgraph form.
NASA Technical Reports Server (NTRS)
Held, D.; Werner, C.; Wall, S.
1983-01-01
The absolute amplitude calibration of the spaceborne Seasat SAR data set is presented based on previous relative calibration studies. A scale factor making it possible to express the perceived radar brightness of a scene in units of sigma-zero is established. The system components are analyzed for error contribution, and the calibration techniques are introduced for each stage. These include: A/D converter saturation tests; prevention of clipping in the processing step; and converting the digital image into the units of received power. Experimental verification was performed by screening and processing the data of the lava flow surrounding the Pisgah Crater in Southern California, for which previous C-130 airborne scatterometer data were available. The average backscatter difference between the two data sets is estimated to be 2 dB in the brighter, and 4 dB in the dimmer regions. For the SAR a calculated uncertainty of 3 dB is expected.
NASA Technical Reports Server (NTRS)
Miller, Eric J.; Holguin, Andrew C.; Cruz, Josue; Lokos, William A.
2014-01-01
The safety-of-flight parameters for the Adaptive Compliant Trailing Edge (ACTE) flap experiment require that flap-to-wing interface loads be sensed and monitored in real time to ensure that the structural load limits of the wing are not exceeded. This paper discusses the strain gage load calibration testing and load equation derivation methodology for the ACTE interface fittings. Both the left and right wing flap interfaces were monitored; each contained four uniquely designed and instrumented flap interface fittings. The interface hardware design and instrumentation layout are discussed. Twenty-one applied test load cases were developed using the predicted in-flight loads. Pre-test predictions of strain gage responses were produced using finite element method models of the interface fittings. Predicted and measured test strains are presented. A load testing rig and three hydraulic jacks were used to apply combinations of shear, bending, and axial loads to the interface fittings. Hardware deflections under load were measured using photogrammetry and transducers. Due to deflections in the interface fitting hardware and test rig, finite element model techniques were used to calculate the reaction loads throughout the applied load range, taking into account the elastically-deformed geometry. The primary load equations were selected based on multiple calibration metrics. An independent set of validation cases was used to validate each derived equation. The 2-sigma residual errors for the shear loads were less than eight percent of the full-scale calibration load; the 2-sigma residual errors for the bending moment loads were less than three percent of the full-scale calibration load. The derived load equations for shear, bending, and axial loads are presented, with the calculated errors for both the calibration cases and the independent validation load cases.
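The load-equation derivation reduces to a linear regression from strain-gage outputs to applied loads. The sketch below (NumPy is an assumption; the paper does not name its tools) fits one load equation on the calibration cases and reports the 2-sigma residual on independent validation cases, mirroring the metric quoted above.

```python
# Derive one linear load equation from strain-gage outputs and check it on
# an independent validation set (sketch; gage selection/metrics simplified).
import numpy as np

def derive_load_equation(gages_cal, loads_cal):
    """gages_cal: (n_cases, n_gages); loads_cal: (n_cases,), one component."""
    G = np.column_stack([gages_cal, np.ones(len(gages_cal))])  # affine model
    coeffs, *_ = np.linalg.lstsq(G, loads_cal, rcond=None)
    return coeffs

def two_sigma_percent(coeffs, gages_val, loads_val, full_scale):
    """2-sigma residual on validation cases, as percent of full-scale load."""
    G = np.column_stack([gages_val, np.ones(len(gages_val))])
    resid = loads_val - G @ coeffs
    return 2.0 * np.std(resid, ddof=1) / full_scale * 100.0
```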
NASA Astrophysics Data System (ADS)
Rantakyrö, Fredrik T.
2017-09-01
"The Gemini Planet Imager requires a large set of Calibrations. These can be split into two major sets, one set associated with each observation and one set related to biweekly calibrations. The observation set is to optimize the correction of miscroshifts in the IFU spectra and the latter set is for correction of detector and instrument cosmetics."
Effect of Correlated Precision Errors on Uncertainty of a Subsonic Venturi Calibration
NASA Technical Reports Server (NTRS)
Hudson, S. T.; Bordelon, W. J., Jr.; Coleman, H. W.
1996-01-01
An uncertainty analysis performed in conjunction with the calibration of a subsonic venturi for use in a turbine test facility produced some unanticipated results that may have a significant impact in a variety of test situations. Precision uncertainty estimates using the preferred propagation techniques in the applicable American National Standards Institute/American Society of Mechanical Engineers standards were an order of magnitude larger than precision uncertainty estimates calculated directly from a sample of results (discharge coefficient) obtained at the same experimental set point. The differences were attributable to the effect of correlated precision errors, which previously have been considered negligible. An analysis explaining this phenomenon is presented. The article is not meant to document the venturi calibration, but rather to give a real example of results where correlated precision terms are important. The significance of the correlated precision terms could apply to many test situations.
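A small constructed example (not the facility's data) shows the effect: when two pressures share a common drift, propagating their individual precisions without covariance terms overstates the scatter actually observed in a computed result such as a pressure ratio.

```python
# Two pressures sharing a common drift: the scatter of their ratio is far
# smaller than independent (covariance-free) propagation predicts.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
common = rng.normal(0.0, 1.0, n)               # shared (correlated) error
p1 = 100.0 + common + rng.normal(0.0, 0.1, n)  # plus small independent noise
p2 = 80.0 + common + rng.normal(0.0, 0.1, n)

r = p1 / p2                                    # stand-in for a computed result
print("direct std of result:      ", r.std(ddof=1))

s1, s2 = p1.std(ddof=1), p2.std(ddof=1)        # propagation w/o covariance
prop = np.hypot(s1 / p2.mean(), p1.mean() * s2 / p2.mean() ** 2)
print("propagated, no covariance: ", prop)
```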
Spectrometric test of a linear array sensor
NASA Technical Reports Server (NTRS)
Brown, Kenneth S.; Kim, Moon S.
1987-01-01
A spectroradiometer which measures spectral reflectivities and irradiance in discrete spectral channels was tested to determine the accuracy of its wavelength calibration. This sensor is a primary tool in the remote sensing investigations conducted on biomass at NASA's Goddard Space Flight Center. Measurements have been collected on crop and forest plants both in the laboratory and field with this radiometer to develop crop identification and plant stress remote sensing techniques. Wavelength calibration is essential for use in referencing the study measurements with those of other investigations and satellite remote sensor data sets. This calibration determines a wavelength vs channel address conversion which was found to have an RMS deviation of approximately half a channel, or 1.5 nm in the range from 360 to 1050 nm. A comparison of these results with those of another test showed an average difference of approximately 4 nm, sufficiently accurate for most investigative work.
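The wavelength-versus-channel conversion is a straight-line fit; the sketch below uses synthetic data chosen to reproduce the quoted ~1.5 nm (half-channel) RMS figure, since the actual calibration data are not reproduced here.

```python
# Linear wavelength-vs-channel calibration with RMS deviation of the fit.
# Synthetic data: ~3 nm/channel dispersion over 360-1050 nm, 1.5 nm scatter.
import numpy as np

channels = np.arange(230)                       # channel addresses (assumed)
measured = 360.0 + 3.0 * channels \
    + np.random.default_rng(0).normal(0.0, 1.5, channels.size)

slope, intercept = np.polyfit(channels, measured, 1)
rms_nm = np.sqrt(np.mean((measured - (slope * channels + intercept)) ** 2))
print(f"lambda(ch) = {slope:.3f}*ch + {intercept:.1f} nm; RMS = {rms_nm:.2f} nm")
```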
Airado-Rodríguez, Diego; Høy, Martin; Skaret, Josefine; Wold, Jens Petter
2014-05-01
The potential of multispectral imaging of autofluorescence to map sensory flavour properties and fluorophore concentrations in cod caviar paste has been investigated. Cod caviar paste was used as a case product and it was stored over time, under different headspace gas composition and light exposure conditions, to obtain a relevant span in lipid oxidation and sensory properties. Samples were divided in two sets, calibration and test sets, with 16 and 7 samples, respectively. A third set of samples was prepared with induced gradients in lipid oxidation and sensory properties by light exposure of certain parts of the sample surface. Front-face fluorescence emission images were obtained for excitation wavelength 382 nm at 11 different channels ranging from 400 to 700 nm. The analysis of the obtained sets of images was divided in two parts: First, in an effort to compress and extract relevant information, multivariate curve resolution was applied on the calibration set and three spectral components and their relative concentrations in each sample were obtained. The obtained profiles were employed to estimate the concentrations of each component in the images of the heterogeneous samples, giving chemical images of the distribution of fluorescent oxidation products, protoporphyrin IX and photoprotoporphyrin. Second, regression models for sensory attributes related to lipid oxidation were constructed based on the spectra of homogeneous samples from the calibration set. These models were successfully validated with the test set. The models were then applied for pixel-wise estimation of sensory flavours in the heterogeneous images, giving rise to sensory images. As far as we know this is the first time that sensory images of odour and flavour are obtained based on multispectral imaging. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Sahraei, S.; Asadzadeh, M.
2017-12-01
Any modern multi-objective global optimization algorithm should be able to archive a well-distributed set of solutions. While the solution diversity in the objective space has been explored extensively in the literature, little attention has been given to the solution diversity in the decision space. Selection metrics such as the hypervolume contribution and crowding distance calculated in the objective space would guide the search toward solutions that are well-distributed across the objective space. In this study, the diversity of solutions in the decision-space is used as the main selection criteria beside the dominance check in multi-objective optimization. To this end, currently archived solutions are clustered in the decision space and the ones in less crowded clusters are given more chance to be selected for generating new solution. The proposed approach is first tested on benchmark mathematical test problems. Second, it is applied to a hydrologic model calibration problem with more than three objective functions. Results show that the chance of finding more sparse set of high-quality solutions increases, and therefore the analyst would receive a well-diverse set of options with maximum amount of information. Pareto Archived-Dynamically Dimensioned Search, which is an efficient and parsimonious multi-objective optimization algorithm for model calibration, is utilized in this study.
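A compact sketch of the decision-space selection rule described above (illustrative only; the study uses Pareto Archived-Dynamically Dimensioned Search): cluster the archived solutions in parameter space and draw parents preferentially from sparse clusters.

```python
# Draw a parent from the archive, favoring sparse decision-space clusters.
# Sketch with k-means; not the PA-DDS implementation itself.
import numpy as np
from sklearn.cluster import KMeans

def pick_parent(archive_params, k=5, seed=0):
    """archive_params: (n_solutions, n_decision_vars) archived solutions."""
    rng = np.random.default_rng(seed)
    k = min(k, len(archive_params))
    labels = KMeans(n_clusters=k, n_init=10,
                    random_state=seed).fit_predict(archive_params)
    counts = np.bincount(labels, minlength=k)
    counts[counts == 0] = 1                    # guard against empty clusters
    weights = (1.0 / counts)[labels]           # small cluster -> higher weight
    weights /= weights.sum()
    return archive_params[rng.choice(len(archive_params), p=weights)]
```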
NASA Astrophysics Data System (ADS)
Bocquet, S.; Saro, A.; Mohr, J. J.; Aird, K. A.; Ashby, M. L. N.; Bautz, M.; Bayliss, M.; Bazin, G.; Benson, B. A.; Bleem, L. E.; Brodwin, M.; Carlstrom, J. E.; Chang, C. L.; Chiu, I.; Cho, H. M.; Clocchiatti, A.; Crawford, T. M.; Crites, A. T.; Desai, S.; de Haan, T.; Dietrich, J. P.; Dobbs, M. A.; Foley, R. J.; Forman, W. R.; Gangkofner, D.; George, E. M.; Gladders, M. D.; Gonzalez, A. H.; Halverson, N. W.; Hennig, C.; Hlavacek-Larrondo, J.; Holder, G. P.; Holzapfel, W. L.; Hrubes, J. D.; Jones, C.; Keisler, R.; Knox, L.; Lee, A. T.; Leitch, E. M.; Liu, J.; Lueker, M.; Luong-Van, D.; Marrone, D. P.; McDonald, M.; McMahon, J. J.; Meyer, S. S.; Mocanu, L.; Murray, S. S.; Padin, S.; Pryke, C.; Reichardt, C. L.; Rest, A.; Ruel, J.; Ruhl, J. E.; Saliwanchik, B. R.; Sayre, J. T.; Schaffer, K. K.; Shirokoff, E.; Spieler, H. G.; Stalder, B.; Stanford, S. A.; Staniszewski, Z.; Stark, A. A.; Story, K.; Stubbs, C. W.; Vanderlinde, K.; Vieira, J. D.; Vikhlinin, A.; Williamson, R.; Zahn, O.; Zenteno, A.
2015-02-01
We present a velocity-dispersion-based mass calibration of the South Pole Telescope Sunyaev-Zel'dovich effect survey (SPT-SZ) galaxy cluster sample. Using a homogeneously selected sample of 100 cluster candidates from 720 deg² of the survey along with 63 velocity dispersion (σ_v) and 16 X-ray Y_X measurements of sample clusters, we simultaneously calibrate the mass-observable relation and constrain cosmological parameters. Our method accounts for cluster selection, cosmological sensitivity, and uncertainties in the mass calibrators. The calibrations using σ_v and Y_X are consistent at the 0.6σ level, with the σ_v calibration preferring ~16% higher masses. We use the full SPTCL data set (SZ clusters + σ_v + Y_X) to measure σ_8(Ω_m/0.27)^0.3 = 0.809 ± 0.036 within a flat ΛCDM model. The SPT cluster abundance is lower than preferred by either the WMAP9 or Planck+WMAP9 polarization (WP) data, but assuming that the sum of the neutrino masses is ∑m_ν = 0.06 eV, we find the data sets to be consistent at the 1.0σ level for WMAP9 and 1.5σ for Planck+WP. Allowing for larger ∑m_ν further reconciles the results. When we combine the SPTCL and Planck+WP data sets with information from baryon acoustic oscillations and Type Ia supernovae, the preferred cluster masses are 1.9σ higher than the Y_X calibration and 0.8σ higher than the σ_v calibration. Given the scale of these shifts (~44% and ~23% in mass, respectively), we execute a goodness-of-fit test; it reveals no tension, indicating that the best-fit model provides an adequate description of the data. Using the multi-probe data set, we measure Ω_m = 0.299 ± 0.009 and σ_8 = 0.829 ± 0.011. Within a νCDM model we find ∑m_ν = 0.148 ± 0.081 eV. We present a consistency test of the cosmic growth rate using SPT clusters. Allowing both the growth index γ and the dark energy equation-of-state parameter w to vary, we find γ = 0.73 ± 0.28 and w = -1.007 ± 0.065, demonstrating that the expansion and the growth histories are consistent with a ΛCDM universe (γ = 0.55; w = -1).
A numerical identifiability test for state-space models--application to optimal experimental design.
Hidalgo, M E; Ayesa, E
2001-01-01
This paper describes a mathematical tool for identifiability analysis, easily applicable to high order non-linear systems modelled in state-space and implementable in simulators with a time-discrete approach. This procedure also permits a rigorous analysis of the expected estimation errors (average and maximum) in calibration experiments. The methodology is based on the recursive numerical evaluation of the information matrix during the simulation of a calibration experiment and on the setting-up of a group of information parameters based on geometric interpretations of this matrix. As an example of the utility of the proposed test, the paper presents its application to an optimal experimental design for ASM Model No. 1 calibration, in order to estimate the maximum specific growth rate μ_H and the concentration of heterotrophic biomass X_BH.
Precision alignment and calibration of optical systems using computer generated holograms
NASA Astrophysics Data System (ADS)
Coyle, Laura Elizabeth
As techniques for manufacturing and metrology advance, optical systems are being designed with more complexity than ever before. Given these prescriptions, alignment and calibration can be a limiting factor in their final performance. Computer generated holograms (CGHs) have several unique properties that make them powerful tools for meeting these demanding tolerances. This work will present three novel methods for alignment and calibration of optical systems using computer generated holograms. Alignment methods using CGHs require that the optical wavefront created by the CGH be related to a mechanical datum to locate it in space. An overview of existing methods is provided as background, then two new alignment methods are discussed in detail. In the first method, the CGH contact Ball Alignment Tool (CBAT) is used to align a ball or sphere mounted retroreflector (SMR) to a Fresnel zone plate pattern with micron level accuracy. The ball is bonded directly onto the CGH substrate and provides permanent, accurate registration between the optical wavefront and a mechanical reference to locate the CGH in space. A prototype CBAT was built and used to align and bond an SMR to a CGH. In the second method, CGH references are used to align axi-symmetric optics in four degrees of freedom with low uncertainty and real time feedback. The CGHs create simultaneous 3D optical references where the zero order reflection sets tilt and the first diffracted order sets centration. The flexibility of the CGH design can be used to accommodate a wide variety of optical systems and maximize sensitivity to misalignments. A 2-CGH prototype system was aligned multiple times and the alignment uncertainty was quantified and compared to an error model. Finally, an enhanced calibration method is presented. It uses multiple perturbed measurements of a master sphere to improve the calibration of CGH-based Fizeau interferometers ultimately measuring aspheric test surfaces. The improvement in the calibration is a function of the interferometer error and the aspheric departure of the desired test surface. This calibration is most effective at reducing coma and trefoil from figure error or misalignments of the interferometer components. The enhanced calibration can reduce overall measurement uncertainty or allow the budgeted error contribution from another source to be increased. A single set of sphere measurements can be used to calculate calibration maps for closely related aspheres, including segmented primary mirrors for telescopes. A parametric model is developed and compared to the simulated calibration of a case study interferometer.
Dagalakis, Nicholas G.; Yoo, Jae Myung; Oeste, Thomas
2017-01-01
The Dynamic Impact Testing and Calibration Instrument (DITCI) is a simple instrument with a significant data collection and analysis capability that is used for the testing and calibration of biosimulant human tissue artifacts. These artifacts may be used to measure the severity of injuries caused in the case of a robot impact with a human. In this paper we describe the DITCI adjustable impact and flexible foundation mechanism, which allows the selection of a variety of impact force levels and foundation stiffness. The instrument can accommodate arrays of a variety of sensors and impact tools, simulating both real manufacturing tools and the testing requirements of standards setting organizations. A computer data acquisition system may collect a variety of impact motion, force, and torque data, which are used to develop a variety of mathematical model representations of the artifacts. Finally, we describe the fabrication and testing of human abdomen soft tissue artifacts, used to display the magnitude of impact tissue deformation. Impact tests were performed at various maximum impact force and average pressure levels. PMID:28579658
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, J; Penfold, S; Royal Adelaide Hospital, Adelaide, SA
2015-06-15
Purpose: To investigate the robustness of dual energy CT (DECT) and single energy CT (SECT) proton stopping power calibration techniques and quantify the associated errors when imaging a phantom differing in chemical composition from that used during stopping power calibration. Methods: The CIRS tissue substitute phantom was scanned in a CT-simulator at 90 kV and 140 kV. This image set was used to generate a DECT proton SPR calibration based on a relationship between effective atomic number and mean excitation energy. A SECT proton SPR calibration based only on Hounsfield units (HUs) was also generated. DECT and SECT scans of a second phantom of known density and chemical composition were performed. The SPR of the second phantom was calculated with the DECT approach (SPR-DECT), the SECT approach (SPR-SECT), and finally the known density and chemical composition of the phantom (SPR-ref). The DECT and SECT image sets were imported into the research release of the Pinnacle³ proton therapy treatment planning system. The difference in dose when exposed to a common pencil beam distribution was investigated. Results: SPR-DECT was found to be in better agreement with SPR-ref than SPR-SECT. The mean difference in SPR for all materials was 0.51% for DECT and 6.89% for SECT. With the exception of Teflon, SPR-DECT was found to agree with SPR-ref to within 1%. Significant differences in calculated dose were found when using the DECT image set or the SECT image set. Conclusion: The DECT calibration technique was found to be more robust in situations where the physical properties of the test materials differed from the materials used during SPR calibration. Furthermore, it was demonstrated that the DECT and SECT SPR calibration techniques can result in significantly different calculated dose distributions.
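For context, a minimal sketch of how an SPR can be computed from a relative electron density and a mean excitation energy via the Bethe equation without correction terms. This is an assumed textbook-style formulation; the paper's actual DECT mapping from effective atomic number to mean excitation energy is not reproduced here.

```python
# SPR relative to water from relative electron density and mean excitation
# energy, via the uncorrected Bethe equation. Constants in eV; illustrative.
import numpy as np

ME_C2 = 0.511e6     # electron rest energy, eV
MP_C2 = 938.272e6   # proton rest energy, eV
I_WATER = 75.0      # mean excitation energy of water, eV (a common choice)

def beta2(kinetic_ev):
    gamma = 1.0 + kinetic_ev / MP_C2
    return 1.0 - 1.0 / gamma ** 2

def spr(rel_electron_density, i_medium_ev, kinetic_ev=200e6):
    b2 = beta2(kinetic_ev)
    bethe = lambda i: np.log(2.0 * ME_C2 * b2 / (i * (1.0 - b2))) - b2
    return rel_electron_density * bethe(i_medium_ev) / bethe(I_WATER)

print(spr(1.05, 80.0))  # e.g., a soft-tissue-like material at 200 MeV
```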
Why Bother to Calibrate? Model Consistency and the Value of Prior Information
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal
2015-04-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
Regression Analysis of Long-term Profile Ozone Data Set from BUV Instruments
NASA Technical Reports Server (NTRS)
Frith, Stacey; Taylor, Steve; DeLand, Matt; Ahn, Chang-Woo; Stolarski, Richard S.
2005-01-01
We have produced a profile merged ozone data set (MOD) based on the SBUV/SBUV2 series of nadir-viewing satellite backscatter instruments, covering the period from November 1978 - December 2003. In 2004, data from the Nimbus 7 SBUV and NOAA 9, 11, and 16 SBUV/2 instruments were reprocessed using the Version 8 (V8) algorithm and the most recent calibrations. More recently, data from the Nimbus 4 BUV instrument, which operated from 1970 - 1977, were also reprocessed using the V8 algorithm. As part of the V8 profile calibration, the Nimbus 7 and NOAA 9 (1993-1997 only) instrument calibrations have been adjusted to match the NOAA 11 calibration, which was established from comparisons with SSBUV shuttle flight data. Given the level of agreement between the data sets, we simply average the ozone values during periods of instrument overlap to produce the MOD profile data set. We use statistical time-series analysis of the MOD profile data set (1978-2003) to estimate the change in profile ozone due to changing stratospheric chlorine levels. The Nimbus 4 BUV data offer an opportunity to test the physical properties of our statistical model. We extrapolate our statistical model fit backwards in time and compare to the Nimbus 4 data. We compare the statistics of the residuals from the fit for the Nimbus 4 period to those obtained from the 1978-2003 period over which the statistical model coefficients were estimated.
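A hedged sketch of the kind of time-series regression used for such trend estimates, assuming a common form of seasonal harmonics plus a chlorine-proxy term; the authors' actual statistical model terms are not specified here.

```python
# Toy ozone time-series regression: seasonal harmonics plus a proxy term
# (e.g., an equivalent chlorine series). Model form is assumed.
import numpy as np

def design_matrix(t_months, proxy):
    w = 2.0 * np.pi * t_months / 12.0
    return np.column_stack([np.ones_like(t_months),
                            np.cos(w), np.sin(w),          # annual cycle
                            np.cos(2 * w), np.sin(2 * w),  # semi-annual cycle
                            proxy])                        # chlorine proxy

def fit_and_extrapolate(t_fit, y_fit, proxy_fit, t_new, proxy_new):
    beta, *_ = np.linalg.lstsq(design_matrix(t_fit, proxy_fit), y_fit,
                               rcond=None)
    return design_matrix(t_new, proxy_new) @ beta  # e.g., back to 1970-77
```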
Quantifying Particle Numbers and Mass Flux in Drifting Snow
NASA Astrophysics Data System (ADS)
Crivelli, Philip; Paterna, Enrico; Horender, Stefan; Lehning, Michael
2016-12-01
We compare two of the most common methods of quantifying mass flux, particle numbers and particle-size distribution for drifting snow events: the snow-particle counter (SPC), a laser-diode-based particle detector, and particle tracking velocimetry based on digital shadowgraphic imaging. The two methods were correlated for mass flux and particle number flux. For the SPC measurements, the device was calibrated by the manufacturer beforehand. The shadowgraphic imaging method measures particle size and velocity directly from consecutive images, and before each new test the image pixel length is newly calibrated. A calibration study with artificially scattered sand particles and glass beads provides suitable settings for the shadowgraphic imaging as well as a first correlation of the two methods in a controlled environment. In addition, using snow collected in trays during snowfall, several experiments were performed to observe drifting snow events in a cold wind tunnel. The results demonstrate a high correlation between the mass flux obtained for the calibration studies (r ≥ 0.93) and good correlation for the drifting snow experiments (r ≥ 0.81). The impact of measurement settings is discussed in order to reliably quantify particle numbers and mass flux in drifting snow. The study was designed and performed to optimize the settings of the digital shadowgraphic imaging system for both the acquisition and the processing of particles in a drifting snow event. Our results suggest that these optimal settings can be transferred to different imaging set-ups to investigate sediment transport processes.
Shi, Jingjin; Chen, Fei'er; Cai, Yunfei; Fan, Shichen; Cai, Jing; Chen, Renjie; Kan, Haidong; Lu, Yihan; Zhao, Zhuohui
2017-01-01
Portable direct-reading instruments based on the light-scattering method are increasingly used in airborne fine particulate matter (PM2.5) monitoring. However, there are limited calibration studies on such instruments applying the gravimetric method as the reference method in field tests. An 8-month sampling campaign was performed, and 96 pairs of PM2.5 data from both the gravimetric method and simultaneous light-scattering real-time monitoring (QT-50) were obtained from July 2015 to February 2016 in Shanghai. Temperature and relative humidity (RH) were recorded. The Mann-Whitney U nonparametric test and Spearman correlation were used to investigate the differences between the two measurements. A multiple linear regression (MLR) model was applied to set up the calibration model for the light-scattering device. The average PM2.5 concentration (median) was 48.1 μg/m3 (min-max 10.4-95.8 μg/m3) by the gravimetric method and 58.1 μg/m3 (19.2-315.9 μg/m3) by the light-scattering method, respectively. By time trend analyses, they were significantly correlated with each other (Spearman correlation coefficient 0.889, P<0.01). By MLR, the calibration model for the light-scattering instrument was Y(calibrated) = 57.45 + 0.47 × X(QT-50 measurement) - 0.53 × RH - 0.41 × Temp, with both RH and temperature adjusted. The 10-fold cross-validation R2 and the root mean squared error of the calibration model were 0.79 and 11.43 μg/m3, respectively. Light-scattering measurements of PM2.5 by the QT-50 instrument overestimated the concentration levels and were affected by temperature and RH. The calibration model for the QT-50 instrument was first set up against the gravimetric method with temperature and RH adjusted.
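The calibration model and its 10-fold cross-validation can be reproduced schematically as follows (scikit-learn is an assumption, as are the file names; the paper does not name its software).

```python
# MLR calibration of light-scattering PM2.5 against the gravimetric
# reference, adjusting for RH and temperature; 10-fold CV R^2 and RMSE.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_predict

y = np.loadtxt("gravimetric.csv", delimiter=",")   # hypothetical files:
X = np.loadtxt("qt50_rh_temp.csv", delimiter=",")  # columns QT-50, RH, Temp

pred = cross_val_predict(LinearRegression(), X, y,
                         cv=KFold(n_splits=10, shuffle=True, random_state=0))
ss_res = np.sum((y - pred) ** 2)
r2 = 1.0 - ss_res / np.sum((y - y.mean()) ** 2)
print(f"10-fold CV R2 = {r2:.2f}, RMSE = {np.sqrt(ss_res / len(y)):.2f} ug/m3")

model = LinearRegression().fit(X, y)               # final calibration equation
print("intercept:", model.intercept_, "coefficients:", model.coef_)
```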
Biaxial Anisotropic Material Development and Characterization using Rectangular to Square Waveguide
2015-03-26
[Figure 29: Measurement set-up with test-port cables and network analyzer.] The VNA and the waveguide adapters are torqued to specification with calibrated torque wrenches, and the waveguide flanges are aligned using precision alignment pins. A TRL calibration is performed prior to measuring the sample. The solver error tolerance is set to 0.0001, which enables the frequency-domain solver to refine the mesh until the tolerance is achieved. Tightening the error tolerance results in ...
Laser and Optical Fiber Metrology in Romania
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sporea, Dan; Sporea, Adelina
2008-04-15
The Romanian government established in the last five years a National Program for the improvement of the country's metrology infrastructure. The set goal was to develop and accredit testing and calibration laboratories, as well as certification bodies, according to the ISO 17025:2005 norm. Our Institute benefited from this policy and developed a laboratory for laser and optical fiber metrology in order to provide testing and calibration services for the certification of laser-based industrial, medical, and communication products. The paper presents the laboratory's accredited facilities and some of the results obtained in the evaluation of irradiation effects on optical and optoelectronic parts, tests run under the EU's Fusion Program.
Zhan, Xue-yan; Zhao, Na; Lin, Zhao-zhou; Wu, Zhi-sheng; Yuan, Rui-juan; Qiao, Yan-jiang
2014-12-01
The choice of algorithm for calibration set selection is one of the key factors in building a good NIR quantitative model. Several algorithms are available, such as Random Sampling (RS), Conventional Selection (CS), Kennard-Stone (KS), and Sample set Partitioning based on joint x-y distance (SPXY); however, systematic comparisons between these algorithms are lacking. In the present paper, NIR quantitative models to determine the asiaticoside content in Centella total glucosides were established, of which 7 indexes were classified and selected, and the effects of the CS, KS, and SPXY algorithms for calibration set selection on the accuracy and robustness of the models were investigated. The accuracy indexes of NIR quantitative models with calibration sets selected by the SPXY algorithm were significantly different from those with calibration sets selected by the CS or KS algorithm, while the robustness indexes, such as RMSECV and |RMSEP-RMSEC|, were not significantly different. Therefore, the SPXY algorithm for calibration set selection can improve the predictive accuracy of NIR quantitative models for determining asiaticoside content in Centella total glucosides without significantly affecting model robustness, which provides a reference for choosing an appropriate calibration set selection algorithm when NIR quantitative models are established for solid systems of traditional Chinese medicine.
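For reference, a minimal sketch of the Kennard-Stone selection loop, with the SPXY variant obtained by augmenting the instrumental distance with the y-distance; these are the standard published formulations, not the authors' code.

```python
# Kennard-Stone / SPXY calibration-set selection (standard algorithms).
import numpy as np
from scipy.spatial.distance import cdist

def kennard_stone(X, n_select, y=None):
    """Indices of n_select calibration samples. With y given, uses the SPXY
    distance d_x/max(d_x) + d_y/max(d_y); otherwise plain Kennard-Stone."""
    d = cdist(X, X)
    if y is not None:
        dy = cdist(y.reshape(-1, 1), y.reshape(-1, 1))
        d = d / d.max() + dy / dy.max()
    selected = list(np.unravel_index(np.argmax(d), d.shape))  # farthest pair
    remaining = [i for i in range(len(X)) if i not in selected]
    while len(selected) < n_select:
        # add the sample farthest from its nearest already-selected neighbor
        nearest = d[np.ix_(remaining, selected)].min(axis=1)
        idx = remaining[int(np.argmax(nearest))]
        selected.append(idx)
        remaining.remove(idx)
    return np.asarray(selected)
```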
Dudev, Todor; Devereux, Mike; Meuwly, Markus; Lim, Carmay; Piquemal, Jean-Philip; Gresh, Nohad
2015-02-15
The alkali metal cations in the series Li(+)-Cs(+) act as major partners in a diversity of biological processes and in bioinorganic chemistry. In this article, we present the results of their calibration in the context of the SIBFA polarizable molecular mechanics/dynamics procedure. It relies on quantum-chemistry (QC) energy-decomposition analyses of their monoligated complexes with representative O-, N-, S-, and Se- ligands, performed with the aug-cc-pVTZ(-f) basis set at the Hartree-Fock level. Close agreement with QC is obtained for each individual contribution, even though the calibration involves only a limited set of cation-specific parameters. This agreement is preserved in tests on polyligated complexes with four and six O- ligands, water and formamide, indicating the transferability of the procedure. Preliminary extensions to density functional theory calculations are reported. © 2014 Wiley Periodicals, Inc.
Performance evaluation and clinical applications of 3D plenoptic cameras
NASA Astrophysics Data System (ADS)
Decker, Ryan; Shademan, Azad; Opfermann, Justin; Leonard, Simon; Kim, Peter C. W.; Krieger, Axel
2015-06-01
The observation and 3D quantification of arbitrary scenes using optical imaging systems is challenging, but increasingly necessary in many fields. This paper provides a technical basis for the application of plenoptic cameras in medical and medical robotics applications, and rigorously evaluates camera integration and performance in the clinical setting. It discusses plenoptic camera calibration and setup, assesses plenoptic imaging in a clinically relevant context, and in the context of other quantitative imaging technologies. We report the methods used for camera calibration, precision and accuracy results in an ideal and simulated surgical setting. Afterwards, we report performance during a surgical task. Test results showed the average precision of the plenoptic camera to be 0.90 mm, increasing to 1.37 mm for tissue across the calibrated FOV. The ideal accuracy was 1.14 mm. The camera showed submillimeter error during a simulated surgical task.
Elsohaby, Ibrahim; Hou, Siyuan; McClure, J Trenton; Riley, Christopher B; Shaw, R Anthony; Keefe, Gregory P
2015-08-20
Following the recent development of a new approach to quantitative analysis of IgG concentrations in bovine serum using transmission infrared spectroscopy, the potential to measure IgG levels using technology and a device better designed for field use was investigated. A method using attenuated total reflectance infrared (ATR) spectroscopy in combination with partial least squares (PLS) regression was developed to measure bovine serum IgG concentrations. ATR spectroscopy has a distinct ease-of-use advantage that may open the door to routine point-of-care testing. Serum samples were collected from calves and adult cows, tested by a reference RID method, and ATR spectra acquired. The spectra were linked to the RID-IgG concentrations and then randomly split into two sets: calibration and prediction. The calibration set was used to build a calibration model, while the prediction set was used to assess the predictive performance and accuracy of the final model. The procedure was repeated for various spectral data preprocessing approaches. For the prediction set, the Pearson's and concordance correlation coefficients between the IgG measured by RID and predicted by ATR spectroscopy were both 0.93. The Bland-Altman plot revealed no obvious systematic bias between the two methods. ATR spectroscopy showed a sensitivity for detection of failure of transfer of passive immunity (FTPI) of 88%, specificity of 100% and accuracy of 94% (with IgG <1000 mg/dL as the FTPI cut-off value). ATR spectroscopy in combination with multivariate data analysis shows potential as an alternative approach for rapid quantification of IgG concentrations in bovine serum and the diagnosis of FTPI in calves.
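As an illustration of the calibration/prediction workflow described above, a minimal PLS sketch in Python (scikit-learn) follows; the spectra, IgG values, component count, and split fraction are hypothetical stand-ins, not the study's data or settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

X = np.random.rand(200, 1500)        # hypothetical ATR absorbance spectra
y = 3000 * np.random.rand(200)       # hypothetical reference IgG values, mg/dL

# random split into calibration and prediction sets
X_cal, X_pred, y_cal, y_pred = train_test_split(X, y, test_size=0.5,
                                                random_state=0)

pls = PLSRegression(n_components=10).fit(X_cal, y_cal)
y_hat = pls.predict(X_pred).ravel()

r = np.corrcoef(y_pred, y_hat)[0, 1]             # Pearson correlation
rmsep = np.sqrt(np.mean((y_pred - y_hat) ** 2))  # prediction error

# FTPI screening at the IgG < 1000 mg/dL cut-off
tp = np.sum((y_hat < 1000) & (y_pred < 1000))
sensitivity = tp / max(np.sum(y_pred < 1000), 1)
```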
Bay of Fundy verification of a system for multidate Landsat measurement of suspended sediment
NASA Technical Reports Server (NTRS)
Munday, J. C., Jr.; Alfoldi, T. T.; Amos, C. L.
1981-01-01
A system for automated multidate Landsat CCT MSS measurement of suspended sediment concentration (S) has been implemented and verified on nine sets (108 points) of data from the Bay of Fundy, Canada. The system employs 'chromaticity analysis' to provide automatic pixel-by-pixel adjustment for atmospheric variations, permitting reference calibration data from one or several dates to be spatially and temporally extrapolated to other regions and to other dates. For verification, each data set was used in turn as test data against the remainder as a calibration set; the average absolute error was 44 percent of S over the range 1-1000 mg/l. The system can be used to measure chlorophyll (in the absence of atmospheric variations), Secchi disk depth, and turbidity.
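The hold-one-date-out verification scheme (each date tested against a calibration built from the remaining dates) can be sketched as below; the chromaticity feature, the log-linear calibration curve, and all data are hypothetical placeholders for the actual Landsat processing.

```python
import numpy as np

# nine hypothetical data sets: (chromaticity feature, log10 sediment conc.)
datasets = [(np.random.rand(12), 3 * np.random.rand(12)) for _ in range(9)]

rel_errors = []
for k, (x_test, y_test) in enumerate(datasets):
    # calibration set = all other dates, extrapolated to the held-out date
    x_cal = np.concatenate([d[0] for i, d in enumerate(datasets) if i != k])
    y_cal = np.concatenate([d[1] for i, d in enumerate(datasets) if i != k])
    a, b = np.polyfit(x_cal, y_cal, 1)     # linear calibration curve
    s_pred = 10 ** (a * x_test + b)        # predicted S, mg/l
    s_true = 10 ** y_test
    rel_errors.append(np.mean(np.abs(s_pred - s_true) / s_true) * 100)

print(f"average absolute error: {np.mean(rel_errors):.0f}% of S")
```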
Calibration Variable Selection and Natural Zero Determination for Semispan and Canard Balances
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert M.
2013-01-01
Independent calibration variables for the characterization of semispan and canard wind tunnel balances are discussed. It is shown that the variable selection for a semispan balance is determined by the location of the resultant normal and axial forces that act on the balance. These two forces are the first and second calibration variables. The pitching moment becomes the third calibration variable after the normal and axial forces are shifted to the pitch axis of the balance. Two geometric distances, i.e., the rolling and yawing moment arms, are the fourth and fifth calibration variables. They are traditionally substituted by the corresponding moments to simplify the use of calibration data during a wind tunnel test. A canard balance is related to a semispan balance: it, too, measures loads on only one half of a lifting surface. However, the axial force and yawing moment are of no interest to users of a canard balance. Therefore, its calibration variable set is reduced to the normal force, pitching moment, and rolling moment. The combined load diagrams of the rolling and yawing moment for a semispan balance are discussed. They may be used to illustrate connections between the wind tunnel model geometry, the test section size, and the calibration load schedule. Then, methods are reviewed that may be used to obtain the natural zeros of a semispan or canard balance. In addition, characteristics of three semispan balance calibration rigs are discussed. Finally, basic requirements for a full characterization of a semispan balance are reviewed.
NASA Astrophysics Data System (ADS)
Bijl, Piet; Reynolds, Joseph P.; Vos, Wouter K.; Hogervorst, Maarten A.; Fanning, Jonathan D.
2011-05-01
The TTP (Targeting Task Performance) metric, developed at NVESD, is the current standard US Army model to predict EO/IR Target Acquisition performance. This model, however, does not have a corresponding lab or field test to empirically assess the performance of a camera system. The TOD (Triangle Orientation Discrimination) method, developed at TNO in The Netherlands, provides such a measurement. In this study, we make a direct comparison between TOD performance for a range of sensors and the extensive historical US observer performance database built to develop and calibrate the TTP metric. The US perception data were collected from military personnel performing an identification task on a standard 12-target, 12-aspect tactical vehicle image set that was processed through simulated sensors for which the most fundamental sensor parameters such as blur, sampling, and spatial and temporal noise were varied. In the present study, we measured TOD sensor performance using exactly the same sensors processing a set of TOD triangle test patterns. The study shows that good overall agreement is obtained when the ratio between target characteristic size and TOD test pattern size at threshold equals 6.3. Note that this number is purely based on empirical data without any intermediate modeling. The calibration of the TOD to the TTP is highly beneficial to the sensor modeling and testing community for a variety of reasons. These include: i) a connection between requirement specification and acceptance testing, and ii) a very efficient method to quickly validate or extend the TTP range prediction model to new systems and tasks.
Elsohaby, Ibrahim; Burns, Jennifer B; Riley, Christopher B; Shaw, R Anthony; McClure, J Trenton
2017-01-01
The objective of this study was to develop and compare the performance of laboratory grade and portable attenuated total reflectance infrared (ATR-IR) spectroscopic approaches in combination with partial least squares regression (PLSR) for the rapid quantification of alpaca serum IgG concentration, and the identification of low IgG (<1000 mg/dL), which is consistent with the diagnosis of failure of transfer of passive immunity (FTPI) in neonates. Serum samples (n = 175) collected from privately owned, healthy alpacas were tested by the reference method of radial immunodiffusion (RID) assay, and by laboratory grade and portable ATR-IR spectrometers. Various pre-processing strategies were applied to the ATR-IR spectra that were linked to corresponding RID-IgG concentrations, and then randomly split into two sets: calibration (training) and test sets. PLSR was applied to the calibration set and calibration models were developed, and the test set was used to assess the accuracy of the analytical method. For the test set, the Pearson correlation coefficients between the IgG measured by RID and predicted by the laboratory grade and portable ATR-IR spectrometers were both 0.91. The average differences between reference serum IgG concentrations and the two IR-based methods were 120.5 mg/dL and 71 mg/dL for the laboratory and portable ATR-IR-based assays, respectively. Adopting an IgG concentration <1000 mg/dL as the cut-point for FTPI cases, the sensitivity, specificity, and accuracy for identifying serum samples below this cut-point by the laboratory ATR-IR assay were 86, 100 and 98%, respectively (within the entire data set). Corresponding values for the portable ATR-IR assay were 95, 99 and 99%, respectively. These results suggest that the two ATR-IR assays performed similarly for rapid qualitative evaluation of alpaca serum IgG; for the diagnosis of IgG <1000 mg/dL, the portable ATR-IR spectrometer performed slightly better, and provides more flexibility for potential application in the field.
40 CFR 90.315 - Analyzer initial calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX, and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...
40 CFR 90.315 - Analyzer initial calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX, and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...
40 CFR 90.315 - Analyzer initial calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX, and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...
40 CFR 90.315 - Analyzer initial calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX, and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...
40 CFR 90.315 - Analyzer initial calibration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX, and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...
NASA Astrophysics Data System (ADS)
Zhang, Jiyan; Wang, Liru; Ma, Zhenya
2006-11-01
A focimeter is one of the basic ophthalmic instruments used in every optometric practice, and verification of the accuracy and calibration of the instrument is of the utmost importance. For many years the International Organization for Standardization has required that calibrations for all kinds of focimeters be accomplished using test lenses described in ISO 9342:1996. These test lenses must be of high quality and of nominal back vertex power that is known with high accuracy. With the development of science and technology, ISO 9342 was revised in 2005. A new part, ISO 9342-2, was drafted for test lenses used to calibrate focimeters for contact lens measurement, and the original ISO 9342 became the current ISO 9342-1, which can only be used to calibrate focimeters for spectacle lens measurement. As one of the standard drafters, we introduce the background of the newly published ISO 9342-2 in this study, and a comparison between the test lenses of ISO 9342-1 and ISO 9342-2 is made. Further, the influence of tolerance and uncertainty in the design and production of standard test lenses of ISO 9342-2 is analyzed. The paraxial approximation is used to relate the lens parameters to back vertex power and to calculate the uncertainty budget. Moreover, one set of test lenses conforming to ISO 9342-2 was manufactured and experiments were done with it. Results show that test lenses described in ISO 9342-2 can correct the measurement errors of focimeters used for measuring contact lenses well, especially for spherical aberration, and the correction is more effective for spherical contact lenses with high back vertex power.
NASA Technical Reports Server (NTRS)
Steele, W. G.; Molder, K. J.; Hudson, S. T.; Vadasy, K. V.; Rieder, P. T.; Giel, T.
2005-01-01
NASA and the U.S. Air Force are working on a joint project to develop a new hydrogen-fueled, full-flow, staged combustion rocket engine. The initial testing and modeling work for the Integrated Powerhead Demonstrator (IPD) project is being performed by NASA Marshall and Stennis Space Centers. A key factor in the testing of this engine is the ability to predict and measure the transient fluid flow during engine start and shutdown phases of operation. A model built by NASA Marshall in the ROCket Engine Transient Simulation (ROCETS) program is used to predict transient engine fluid flows. The model is initially calibrated to data from previous tests on the Stennis E1 test stand. The model is then used to predict the next run. Data from this run can then be used to recalibrate the model, providing a tool to guide the test program in incremental steps to reduce the risk to the prototype engine. In this paper, this type of model is defined as a calibrated model. This paper proposes a method to estimate the uncertainty of a model calibrated to a set of experimental test data. The method is similar to that used in the calibration of experimental instrumentation. For the IPD example used in this paper, the model uncertainty is determined for both LOX and LH flow rates using previous data. The successful use of this model to predict another similar test run within the uncertainty bounds is then demonstrated. The paper summarizes the uncertainty methodology when a model is continually recalibrated with new test data. The methodology is general and can be applied to other calibrated models.
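The idea of treating a recalibrated model like a calibrated instrument can be sketched as follows: the residuals against the calibration data supply a bias (systematic) and a scatter (random) term that combine into an expanded uncertainty for the next prediction. This is only a schematic of the general approach with made-up numbers, not the paper's specific propagation.

```python
import numpy as np

def calibrated_model_uncertainty(measured, modeled, k=2.0):
    """Expanded uncertainty of a calibrated model from its residuals:
    bias and scatter terms combined, with coverage factor k ~ 2 (~95%)."""
    residuals = np.asarray(modeled, float) - np.asarray(measured, float)
    bias = residuals.mean()
    scatter = residuals.std(ddof=1)
    return bias, k * np.sqrt(bias**2 + scatter**2)

# hypothetical flow-rate comparison from a previous test (arbitrary units)
measured = np.array([10.2, 11.0, 9.8, 10.5, 10.9])
modeled = np.array([10.4, 10.8, 10.1, 10.6, 11.1])
bias, U = calibrated_model_uncertainty(measured, modeled)
# a new run "agrees" with the calibrated model if its residual is within +/-U
```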
Spinning angle optical calibration apparatus
Beer, Stephen K.; Pratt, II, Harold R.
1991-01-01
An optical calibration apparatus is provided for calibrating and reproducing spinning angles in cross-polarization, nuclear magnetic resonance spectroscopy. An illuminated magnifying apparatus enables optical setting and accurate reproducing of spinning "magic angles" in cross-polarization, nuclear magnetic resonance spectroscopy experiments. A reference mark scribed on an edge of a spinning angle test sample holder is illuminated by a light source and viewed through a magnifying scope. When the "magic angle" of a sample material used as a standard is attained by varying the angular position of the sample holder, the coordinate position of the reference mark relative to a graduation or graduations on a reticle in the magnifying scope is noted. Thereafter, the spinning "magic angle" of a test material having similar nuclear properties to the standard is attained by returning the sample holder back to the originally noted coordinate position.
NASA Astrophysics Data System (ADS)
Bennett, B. N.; Martin, M. Z.; Leonard, D. N.; Garlea, E.
2018-03-01
Handheld laser-induced breakdown spectroscopy (HH LIBS) was used to study the elemental composition of four copper alloys and four aluminum alloys to produce calibration curves. The HH LIBS instrument used is a SciAps Z-500, commercially available, that contains a class-1 solid-state laser with an output wavelength of 1532 nm, laser energy of 5 mJ/pulse, and a pulse duration of 5 ns. Test samples were solid specimens comprising copper and aluminum alloys and data were collected from the samples' surface at three different locations, employing a 12-point-grid pattern for each data set. All three data sets of the spectra were averaged, and the intensity, corrected by subtraction of background, was used to produce the elemental calibration curves. Calibration curves are presented for the matrix elements, copper and aluminum, as well as several minor elements. The surface damage produced by the laser was examined by microscopy. The alloys were tested in air and in a glovebox to evaluate the instrument's ability to identify the constituents within materials under different environmental conditions. The main objective of using this HH LIBS technology is to determine its capability to fingerprint the presence of certain elements related to subpercent level within materials in real time and in situ, as a starting point for undertaking future complex material characterization work.
Forecasting Space Weather Hazards for Astronauts in Deep Space
NASA Astrophysics Data System (ADS)
Martens, P. C.
2018-02-01
Deep Space Gateway provides a unique platform to develop, calibrate, and test a space weather forecasting system for interplanetary travel in a real-life setting. We will discuss the requirements and design of such a system.
Comparison of Test and Finite Element Analysis for Two Full-Scale Helicopter Crash Tests
NASA Technical Reports Server (NTRS)
Annett, Martin S.; Horta, Lucas G.
2011-01-01
Finite element analyses have been performed for two full-scale crash tests of an MD-500 helicopter. The first crash test was conducted to evaluate the performance of a composite deployable energy absorber under combined flight loads. In the second crash test, the energy absorber was removed to establish the baseline loads. The use of an energy absorbing device reduced the impact acceleration levels by a factor of three. Accelerations and kinematic data collected from the crash tests were compared to analytical results. Details of the full-scale crash tests and development of the system-integrated finite element model are briefly described, along with direct comparisons of acceleration magnitudes and durations for the first full-scale crash test. Because load levels were significantly different between tests, models developed for the purpose of predicting the overall system response with external energy absorbers were not adequate under the more severe conditions seen in the second crash test. Relative error comparisons were inadequate to guide model calibration. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used for the second full-scale crash test. The calibrated parameter set reduced the 2-norm prediction error by 51% but did not improve impact shape orthogonality.
NASA Astrophysics Data System (ADS)
Li, You Yun; Tsai, DeChang; Hwang, Weng Sing
2008-06-01
The purpose of this study is to develop a technique for numerically simulating the microstructure of 17-4PH (precipitation hardening) stainless steel during investment casting. A cellular automaton (CA) algorithm was adopted to simulate nucleation and grain growth. First a calibration casting was made; then, by comparing the microstructures of the calibration casting with those simulated using different kinetic growth coefficients (a2, a3) in the CA, the most appropriate set of values for a2 and a3 was obtained. This set of values was then applied to the microstructure simulation of a separate casting, which was also actually made. Through this approach, this study arrived at a set of growth kinetic coefficients from the calibration casting, a2 = 2.9 × 10⁻⁵ and a3 = 1.49 × 10⁻⁷, which was then used to predict the microstructure of the other test casting. Consequently, a good correlation was found between the microstructure of the actual 17-4PH casting and the simulation result.
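As a toy illustration of the CA capture step, the sketch below grows grains from random nuclei with a growth rate of the paper's polynomial form; the grid size, undercooling, nucleation density, and the mapping of growth rate to a capture probability are all hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps = 100, 60
grain = np.zeros((n, n), dtype=int)        # 0 = liquid, >0 = grain id

# random nucleation sites
for gid, (i, j) in enumerate(rng.integers(0, n, size=(25, 2)), start=1):
    grain[i, j] = gid

# growth rate built from the paper's kinetic coefficients; the constant
# undercooling dT and the capture rule below are hypothetical
a2, a3 = 2.9e-5, 1.49e-7
dT = 50.0
v = a2 * dT**2 + a3 * dT**3                # polynomial (KGT-type) growth law

for _ in range(steps):
    new = grain.copy()
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        src = np.roll(grain, (di, dj), axis=(0, 1))
        # a liquid cell is captured by a solid neighbour with probability ~ v
        cap = (grain == 0) & (src > 0) & (rng.random((n, n)) < min(v, 1.0))
        new[cap] = src[cap]
    grain = new
```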
Parameter estimation uncertainty: Comparing apples and apples?
NASA Astrophysics Data System (ADS)
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
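The core of the null-space decomposition can be sketched in a few lines of linear algebra: right singular vectors of the (linearized) model Jacobian with significant singular values span the solution space, the rest span the null space, and Monte Carlo perturbations confined to the null space leave the calibrated fit approximately unchanged. The sketch below uses random stand-ins for the Jacobian and parameters; production NSMC additionally re-checks and, if needed, recalibrates each realization.

```python
import numpy as np

rng = np.random.default_rng(1)

n_obs, n_par = 40, 100
J = rng.normal(size=(n_obs, n_par))    # hypothetical Jacobian at the optimum
p_cal = rng.normal(size=n_par)         # single calibrated parameter set

# solution space: right singular vectors with significant singular values
U, s, Vt = np.linalg.svd(J, full_matrices=True)
k = int(np.sum(s > 1e-6 * s[0]))       # effective solution-space dimension
V_null = Vt[k:].T                      # null-space basis, (n_par, n_par - k)

# null-space Monte Carlo: perturb only along null-space directions,
# leaving the solution-space (calibrated-fit) component unchanged
ensemble = [p_cal + V_null @ rng.normal(size=V_null.shape[1])
            for _ in range(100)]
```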
NASA Astrophysics Data System (ADS)
Luo, L.
2011-12-01
Automated calibration of complex deterministic water quality models with a large number of biogeochemical parameters can reduce time-consuming iterative simulations involving empirical judgements of model fit. We undertook auto-calibration of the one-dimensional hydrodynamic-ecological lake model DYRESM-CAEDYM, using a Monte Carlo sampling (MCS) method, in order to test the applicability of this procedure for shallow, polymictic Lake Rotorua (New Zealand). The calibration procedure involved independently minimising the root-mean-square error (RMSE) and maximizing the Pearson correlation coefficient (r) and Nash-Sutcliffe efficiency coefficient (Nr) for comparisons of model state variables against measured data. An assigned number of parameter permutations was used for 10,000 simulation iterations. The 'optimal' temperature calibration produced an RMSE of 0.54 °C, an Nr-value of 0.99 and an r-value of 0.98 through the whole water column, based on comparisons with 540 observed water temperatures collected between 13 July 2007 and 13 January 2009. The modeled bottom dissolved oxygen concentration (20.5 m below surface) was compared with 467 available observations. The RMSE of the simulations compared with the measurements was 1.78 mg L⁻¹, the Nr-value was 0.75 and the r-value was 0.87. The auto-calibrated model was further tested on an independent data set by simulating bottom-water hypoxia events for the period 15 January 2009 to 8 June 2011 (875 days). This verification produced an accurate simulation of five hypoxic events corresponding to DO < 2 mg L⁻¹ during the summers of 2009-2011. The RMSE was 2.07 mg L⁻¹, the Nr-value 0.62 and the r-value 0.81, based on the available data set of 738 days. The DYRESM-CAEDYM auto-calibration software developed here is substantially less time-consuming and more efficient in parameter optimisation than traditional manual calibration, which has been the standard practice for similar complex water quality models.
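The three fit criteria used here are standard; a compact Python version, with a trivial stand-in for the lake model inside the Monte Carlo sampling loop, is sketched below.

```python
import numpy as np

def fit_metrics(obs, sim):
    """RMSE, Pearson r, and Nash-Sutcliffe efficiency."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    r = np.corrcoef(obs, sim)[0, 1]
    nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return rmse, r, nse

obs = 2.0 * np.arange(10.0)                  # hypothetical observations
best, best_rmse = None, np.inf
for _ in range(10000):                       # Monte Carlo sampling iterations
    params = 3 * np.random.rand(3)           # one random parameter permutation
    sim = params[0] * np.arange(10.0)        # stand-in for the actual model run
    rmse, r, nse = fit_metrics(obs, sim)
    if rmse < best_rmse:                     # keep the best-scoring draw
        best, best_rmse = params, rmse
```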
NASA Technical Reports Server (NTRS)
Miller, D. P.; Prahst, P. S.
1994-01-01
An axial compressor test rig has been designed for the operation of small turbomachines. The inlet region consisted of a long flowpath region with two series of support struts and a flapped inlet guide vane. A flow test was run to calibrate and determine the sources and magnitudes of the loss mechanisms in the inlet for a highly loaded two-stage axial compressor test. Several flow conditions and IGV angle settings were established at which detailed surveys were completed. Boundary layer bleed was also provided along the casing of the inlet behind the support struts and ahead of the IGV. A detailed discussion of the flowpath design along with a summary of the experimental results is provided in Part 1.
Aleixandre-Tudo, José Luis; Nieuwoudt, Helené; Aleixandre, José Luis; Du Toit, Wessel J
2015-02-04
The validation of ultraviolet-visible (UV-vis) spectroscopy combined with partial least-squares (PLS) regression to quantify red wine tannins is reported. The methylcellulose precipitable (MCP) tannin assay and the bovine serum albumin (BSA) tannin assay were used as reference methods. To take the high variability of wine tannins into account when the calibration models were built, a diverse data set was collected from samples of South African red wines comprising 18 different cultivars, from regions spanning the wine grape-growing areas of South Africa with their various sites, climates, and soils, and ranging in vintage from 2000 to 2012. A total of 240 wine samples were analyzed, and these were divided into a calibration set (n = 120) and a validation set (n = 120) to evaluate the predictive ability of the models. To test the robustness of the PLS calibration models, the predictive ability across the classifying variables cultivar, vintage year, and experimental versus commercial wines was also tested. In general, the statistics obtained when BSA was used as a reference method were slightly better than those obtained with MCP. Despite this, the MCP tannin assay should also be considered a valid reference method for developing PLS calibrations. The best calibration statistics for the prediction of new samples were a validation correlation coefficient (R²val) of 0.89, root mean square error of prediction (RMSEP) of 0.16, and residual predictive deviation (RPD) of 3.49 for MCP, and R²val = 0.93, RMSEP = 0.08, and RPD = 4.07 for BSA, when only the UV region (260-310 nm) was selected, which also led to a faster analysis time. In addition, differences in the results obtained when the predictive ability of the classifying variables vintage, cultivar, or commercial versus experimental wines was studied suggest that tannin composition is highly affected by many factors. This study also discusses the correlations in tannin values between the methylcellulose and protein precipitation methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bocquet, S.; Saro, A.; Mohr, J. J.
2015-02-01
We present a velocity-dispersion-based mass calibration of the South Pole Telescope Sunyaev-Zel'dovich effect survey (SPT-SZ) galaxy cluster sample. Using a homogeneously selected sample of 100 cluster candidates from 720 deg² of the survey along with 63 velocity dispersion (σ_v) and 16 X-ray Y_X measurements of sample clusters, we simultaneously calibrate the mass-observable relation and constrain cosmological parameters. Our method accounts for cluster selection, cosmological sensitivity, and uncertainties in the mass calibrators. The calibrations using σ_v and Y_X are consistent at the 0.6σ level, with the σ_v calibration preferring ~16% higher masses. We use the full SPT_CL data set (SZ clusters + σ_v + Y_X) to measure σ_8(Ω_m/0.27)^0.3 = 0.809 ± 0.036 within a flat ΛCDM model. The SPT cluster abundance is lower than preferred by either the WMAP9 or Planck+WMAP9 polarization (WP) data, but assuming that the sum of the neutrino masses is ∑m_ν = 0.06 eV, we find the data sets to be consistent at the 1.0σ level for WMAP9 and 1.5σ for Planck+WP. Allowing for larger ∑m_ν further reconciles the results. When we combine the SPT_CL and Planck+WP data sets with information from baryon acoustic oscillations and Type Ia supernovae, the preferred cluster masses are 1.9σ higher than the Y_X calibration and 0.8σ higher than the σ_v calibration. Given the scale of these shifts (~44% and ~23% in mass, respectively), we execute a goodness-of-fit test; it reveals no tension, indicating that the best-fit model provides an adequate description of the data. Using the multi-probe data set, we measure Ω_m = 0.299 ± 0.009 and σ_8 = 0.829 ± 0.011. Within a νCDM model we find ∑m_ν = 0.148 ± 0.081 eV. We present a consistency test of the cosmic growth rate using SPT clusters. Allowing both the growth index γ and the dark energy equation-of-state parameter w to vary, we find γ = 0.73 ± 0.28 and w = -1.007 ± 0.065, demonstrating that the expansion and the growth histories are consistent with a ΛCDM universe (γ = 0.55; w = -1).
Principal Component Noise Filtering for NAST-I Radiometric Calibration
NASA Technical Reports Server (NTRS)
Tian, Jialin; Smith, William L., Sr.
2011-01-01
The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Airborne Sounder Testbed-Interferometer (NAST-I) instrument is a high-resolution scanning interferometer that measures emitted thermal radiation between 3.3 and 18 microns. The NAST-I radiometric calibration is achieved using internal blackbody calibration references at ambient and hot temperatures. In this paper, we introduce a refined calibration technique that utilizes a principal component (PC) noise filter to compensate for instrument distortions and artifacts and thereby further improve the absolute radiometric calibration accuracy. To test the procedure and estimate the PC filter noise performance, we form dependent and independent test samples using odd and even sets of blackbody spectra. To determine the optimal number of eigenvectors, the PC filter algorithm is applied to both dependent and independent blackbody spectra with a varying number of eigenvectors. The optimal number of PCs is selected so that the total root-mean-square (RMS) error is minimized. To estimate the filter noise performance, we examine four different scenarios: PC filtering applied to both dependent and independent datasets, to the dependent calibration data only, to the independent data only, and no PC filtering. The independent blackbody radiances are predicted for each case and comparisons are made. The results show a significant reduction in noise in the final calibrated radiances with the implementation of the PC filtering algorithm.
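A minimal version of the PC noise filter (project onto the leading eigenvectors of the dependent set, reconstruct, and pick the PC count that minimizes RMS error on the independent set) might look like the sketch below; the synthetic "truth" spectra and noise level are hypothetical.

```python
import numpy as np

def pc_filter(spectra, mean, eigvecs, k):
    """Project spectra onto the leading k PCs and reconstruct."""
    scores = (spectra - mean) @ eigvecs[:, :k]
    return mean + scores @ eigvecs[:, :k].T

truth = np.sin(np.linspace(0, 3, 800)) * np.ones((200, 1))  # reference radiances
spectra = truth + 0.05 * np.random.randn(200, 800)          # noisy blackbody spectra
odd, even = spectra[0::2], spectra[1::2]                    # dependent / independent

mean = odd.mean(axis=0)
_, _, Vt = np.linalg.svd(odd - mean, full_matrices=False)   # PCs of dependent set
eigvecs = Vt.T

# choose the number of PCs minimizing total RMS error on the independent set
rms = [np.sqrt(np.mean((pc_filter(even, mean, eigvecs, k) - truth[1::2]) ** 2))
       for k in range(1, 30)]
k_opt = int(np.argmin(rms)) + 1
```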
Kim, Ki-Hyun; Anthwal, A; Pandey, Sudhir Kumar; Kabir, Ehsanul; Sohn, Jong Ryeul
2010-11-01
In this study, a series of GC calibration experiments was conducted to examine the feasibility of the thermal desorption approach for the quantification of five carbonyl compounds (acetaldehyde, propionaldehyde, butyraldehyde, isovaleraldehyde, and valeraldehyde) in conjunction with two internal standard compounds. The gaseous working standards of carbonyls were calibrated with the aid of thermal desorption as a function of standard concentration and of loading volume. The detection properties were then compared against two types of external calibration data sets derived by the fixed standard volume and the fixed standard concentration approaches. According to this comparison, the fixed standard volume-based calibration of carbonyls should be more sensitive and reliable than its fixed standard concentration counterpart. Moreover, the use of internal standards can improve the analytical reliability of aromatics and some carbonyls to a considerable extent. Our preliminary test on real samples, however, indicates that the performance of internal calibration, when tested using samples over a range of dilutions, can differ moderately from that derived from standard gases. This suggests that the reliability of calibration approaches should be examined carefully, with consideration of the interactions between compound-specific properties and the operating conditions of the instrumental setup.
NASA Astrophysics Data System (ADS)
Mitishita, E.; Debiasi, P.; Hainosz, F.; Centeno, J.
2012-07-01
Digital photogrammetric products from the integration of imagery and lidar datasets are a reality nowadays. When the imagery and lidar surveys are performed together and the camera is connected to the lidar system, direct georeferencing can be applied to compute the exterior orientation parameters of the images. Direct georeferencing of the images requires accurate interior orientation parameters (IOPs) for photogrammetric applications. Camera calibration is the procedure applied to compute the IOPs. Calibration research has established that, to obtain accurate IOPs, the calibration must be performed under the same conditions in which the photogrammetric survey is done. This paper presents the methodology and experimental results of in situ self-calibration using a simultaneous image block and lidar dataset. The calibration results are analyzed and discussed. To perform this research, a test field was established in an urban area. A set of signalized points was placed on the test field for use as check points or control points. The photogrammetric images and the lidar dataset of the test field were taken simultaneously. Four flight strips were used to obtain a cross layout; the strips were flown in opposite directions (W-E, E-W, N-S and S-N). The Kodak DSC Pro SLR/c digital camera was connected to the lidar system. The coordinates of the exposure stations were computed from the lidar trajectory. Different layouts of vertical control points were used in the calibration experiments. The experiments use vertical coordinates from a precise differential GPS survey or computed by an interpolation procedure using the lidar dataset. The positions of the exposure stations are used as control points in the calibration procedure to eliminate the linear dependency within the group of interior and exterior orientation parameters; this linear dependency arises in the calibration procedure when vertical images and a flat test field are used. The mathematical correlations of the interior and exterior orientation parameters are analyzed and discussed, as are the accuracies of the calibration experiments.
New Teff and [Fe/H] spectroscopic calibration for FGK dwarfs and GK giants
NASA Astrophysics Data System (ADS)
Teixeira, G. D. C.; Sousa, S. G.; Tsantaki, M.; Monteiro, M. J. P. F. G.; Santos, N. C.; Israelian, G.
2016-10-01
Context. The ever-growing number of large spectroscopic survey programs has increased the importance of fast and reliable methods with which to determine precise stellar parameters. Some of these methods are highly dependent on correct spectroscopic calibrations. Aims: The goal of this work is to obtain a new spectroscopic calibration for a fast estimate of Teff and [Fe/H] for a wide range of stellar spectral types. Methods: We used spectra from a joint sample of 708 stars, compiled from 451 FGK dwarfs and 257 GK-giant stars. We used homogeneously determined spectroscopic stellar parameters to derive temperature calibrations using a set of selected EW line-ratios, and [Fe/H] calibrations using a set of selected Fe I lines. Results: We have derived 322 EW line-ratios and 100 Fe I lines that can be used to compute Teff and [Fe/H], respectively. We show that these calibrations are effective for FGK dwarfs and GK-giant stars in the following ranges: 4500 K
Calibration of GPS based high accuracy speed meter for vehicles
NASA Astrophysics Data System (ADS)
Bai, Yin; Sun, Qiao; Du, Lei; Yu, Mei; Bai, Jie
2015-02-01
A GPS-based high-accuracy speed meter for vehicles is a special type of GPS speed meter which uses Doppler demodulation of GPS signals to calculate the speed of a moving target. It is increasingly used as reference equipment in the field of traffic speed measurement, but acknowledged standard calibration methods are still lacking. To address this problem, this paper presents the set-ups of simulated calibration, field-test signal replay calibration, and an in-field comparison with an optical-sensor-based non-contact speed meter. All the experiments were carried out at selected speed values in the range of 40-180 km/h with the same GPS speed meter. The speed measurement errors of the simulated calibration fall within ±0.1 km/h or ±0.1%, with uncertainties smaller than 0.02% (k=2). The errors of the replay calibration fall within ±0.1%, with uncertainties smaller than 0.10% (k=2). These calibration results demonstrate the effectiveness of the two methods. The relative deviations of the GPS speed meter from the optical-sensor-based non-contact speed meter fall within ±0.3%, which validates the use of GPS speed meters as reference instruments. The results of this research can provide a technical basis for establishing internationally standard calibration methods for GPS speed meters, and thus help secure the legal status of GPS speed meters as reference equipment in the field of traffic speed metrology.
Current profilers and current meters: compass and tilt sensors errors and calibration
NASA Astrophysics Data System (ADS)
Le Menn, M.; Lusven, A.; Bongiovanni, E.; Le Dû, P.; Rouxel, D.; Lucas, S.; Pacaud, L.
2014-08-01
Current profilers and current meters have a magnetic compass and tilt sensors for relating measurements to a terrestrial reference frame. As compasses are sensitive to their magnetic environment, they must be calibrated in the configuration in which they will be used. A calibration platform for magnetic compasses and tilt sensors was built, based on a method developed in 2007, to correct angular errors and guarantee a measurement uncertainty for instruments mounted in mooring cages. As mooring cages can weigh up to 800 kg, it was necessary to find a suitable place to set up this platform, map the magnetic fields in this area and dimension the platform to withstand these loads. The platform was calibrated using a GPS positioning technique, and it has a table that can be tilted to calibrate the tilt sensors. The measurement uncertainty of the system was evaluated. Sinusoidal corrections based on the anomalies created by soft and hard magnetic materials were tested, as well as manufacturers' calibration methods.
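The sinusoidal corrections mentioned are typically the classical compass-deviation model, with one-cycle terms for hard-iron and two-cycle terms for soft-iron effects; a least-squares fit of that model against reference headings might look like this sketch (all readings below are simulated).

```python
import numpy as np

# reference headings (e.g., from GPS) and simulated compass readings, in degrees
theta_ref = np.arange(0.0, 360.0, 15.0)
t = np.radians(theta_ref)
theta_cmp = (theta_ref - 1.2 + 2.0 * np.sin(t) + 0.8 * np.cos(2 * t)
             + 0.3 * np.random.randn(theta_ref.size))

# deviation model: A + B sin t + C cos t (hard iron) + D sin 2t + E cos 2t (soft iron)
M = np.column_stack([np.ones_like(t), np.sin(t), np.cos(t),
                     np.sin(2 * t), np.cos(2 * t)])
coef, *_ = np.linalg.lstsq(M, theta_cmp - theta_ref, rcond=None)

def corrected_heading(heading_deg):
    """Apply the fitted sinusoidal correction to a compass reading."""
    h = np.radians(heading_deg)
    dev = (coef[0] + coef[1] * np.sin(h) + coef[2] * np.cos(h)
           + coef[3] * np.sin(2 * h) + coef[4] * np.cos(2 * h))
    return heading_deg - dev
```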
40 CFR 1065.642 - SSV, CFV, and PDP molar flow rate calculations.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.642 SSV... ranges of dilution air dewpoint versus calibration air dewpoint in Table 3 of § 1065.640, you may set M...
Calibration Test Set for a Phase-Comparison Digital Tracker
NASA Technical Reports Server (NTRS)
Boas, Amy; Li, Samuel; McMaster, Robert
2007-01-01
An apparatus that generates four signals at a frequency of 7.1 GHz having precisely controlled relative phases and equal amplitudes has been designed and built. This apparatus is intended mainly for use in computer-controlled automated calibration and testing of a phase-comparison digital tracker (PCDT) that measures the relative phases of replicas of the same X-band signal received by four antenna elements in an array. (The relative direction of incidence of the signal on the array is then computed from the relative phases.) The present apparatus can also be used to generate precisely phased signals for steering a beam transmitted from a phased antenna array. The apparatus includes a 7.1-GHz signal generator, the output of which is fed to a four-way splitter. Each of the four splitter outputs is attenuated by 10 dB and fed as input to a vector modulator, wherein DC bias voltages are used to control the in-phase (I) and quadrature (Q) signal components. The bias voltages are generated by digital-to-analog-converter circuits on a control board that receives its digital control input from a computer running a LabVIEW program. The outputs of the vector modulators are further attenuated by 10 dB, then presented at high-grade radio-frequency connectors. The attenuation reduces the effects of changing mismatch and reflections. The apparatus was calibrated in a process in which the bias voltages were first stepped through all possible IQ settings. Then, in a reverse interpolation performed by use of MATLAB software, a lookup table containing 3,600 IQ settings, representing equal-amplitude settings at phase increments of 0.1°, was created for each vector modulator. During operation of the apparatus, these lookup tables are used in calibrating the PCDT.
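For illustration, the ideal mapping from a requested relative phase to equal-amplitude I/Q settings at a 0.1° step (3,600 entries) is shown below; the actual table in the apparatus was built by reverse interpolation of measured modulator responses, which this sketch does not reproduce, and the drive level is hypothetical.

```python
import numpy as np

phases = np.arange(0.0, 360.0, 0.1)        # 3,600 phase settings at 0.1° steps
amplitude = 0.5                            # normalized drive level (hypothetical)
iq_table = np.column_stack([amplitude * np.cos(np.radians(phases)),
                            amplitude * np.sin(np.radians(phases))])

def iq_for_phase(phi_deg):
    """Return the (I, Q) pair nearest to the requested relative phase."""
    idx = int(round((phi_deg % 360.0) / 0.1)) % len(iq_table)
    return iq_table[idx]
```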
An improved error assessment for the GEM-T1 gravitational model
NASA Technical Reports Server (NTRS)
Lerch, F. J.; Marsh, J. G.; Klosko, S. M.; Pavlis, E. C.; Patel, G. B.; Chinn, D. S.; Wagner, C. A.
1988-01-01
Several tests were designed to determine the correct error variances for the Goddard Earth Model (GEM)-T1 gravitational solution which was derived exclusively from satellite tracking data. The basic method employs both wholly independent and dependent subset data solutions and produces a full field coefficient estimate of the model uncertainties. The GEM-T1 errors were further analyzed using a method based upon eigenvalue-eigenvector analysis which calibrates the entire covariance matrix. Dependent satellite and independent altimetric and surface gravity data sets, as well as independent satellite deep resonance information, confirm essentially the same error assessment. These calibrations (utilizing each of the major data subsets within the solution) yield very stable calibration factors which vary by approximately 10 percent over the range of tests employed. Measurements of gravity anomalies obtained from altimetry were also used directly as observations to show that GEM-T1 is calibrated. The mathematical representation of the covariance error in the presence of unmodeled systematic error effects in the data is analyzed and an optimum weighting technique is developed for these conditions. This technique yields an internal self-calibration of the error model, a process which GEM-T1 is shown to approximate.
Rudolff, Andrea S; Moens, Yves P S; Driessen, Bernd; Ambrisko, Tamas D
2014-07-01
To assess agreement between infrared (IR) analysers and a refractometer for measurements of isoflurane, sevoflurane and desflurane concentrations and to demonstrate the effect of customized calibration of IR analysers. In vitro experiment. Six IR anaesthetic monitors (Datex-Ohmeda) and a single portable refractometer (Riken). Both devices were calibrated following the manufacturer's recommendations. Gas samples were collected at common gas outlets of anaesthesia machines. A range of agent concentrations was produced by stepwise changes in dial settings: isoflurane (0-5% in 0.5% increments), sevoflurane (0-8% in 1% increments), or desflurane (0-18% in 2% increments). Oxygen flow was 2 L minute⁻¹. The orders of testing IR analysers, agents and dial settings were randomized. Duplicate measurements were performed at each setting. The entire procedure was repeated 24 hours later. Bland-Altman analysis was performed. Measurements on day-1 were used to yield calibration equations (IR measurements as dependent and refractometry measurements as independent variables), which were used to modify the IR measurements on day-2. Bias ± limits of agreement for isoflurane, sevoflurane and desflurane were 0.2 ± 0.3, 0.1 ± 0.4 and 0.7 ± 0.9 volume%, respectively. There were significant linear relationships between differences and means for all agents. The IR analysers became less accurate at higher gas concentrations. After customized calibration, the bias became almost zero and the limits of agreement became narrower. If similar IR analysers are used in research studies, they need to be calibrated against a reference method using the agent in question at multiple calibration points overlapping the range of interest. © 2013 Association of Veterinary Anaesthetists and the American College of Veterinary Anesthesia and Analgesia.
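The Bland-Altman statistics and the customized-calibration step lend themselves to a short sketch; the agent readings below are simulated, not the study's data.

```python
import numpy as np

def bland_altman(ref, test):
    """Bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(test, float) - np.asarray(ref, float)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, (bias - loa, bias + loa)

# simulated isoflurane readings (volume %): refractometer vs IR analyser
ref = np.arange(0.5, 5.5, 0.5)
ir = ref + 0.2 + 0.05 * ref + 0.05 * np.random.randn(ref.size)
bias, (lo, hi) = bland_altman(ref, ir)

# customized calibration: regress IR (dependent) on refractometer readings
# (independent), then invert the fit to correct subsequent IR measurements
slope, intercept = np.polyfit(ref, ir, 1)
ir_corrected = (ir - intercept) / slope
```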
Wang, Wei; Lu, Hui; Yang, Dawen; Sothea, Khem; Jiao, Yang; Gao, Bin; Peng, Xueting; Pang, Zhiguo
2016-01-01
The Mekong River is the most important river in Southeast Asia. It has increasingly suffered from water-related problems due to economic development, population growth and climate change in the surrounding areas. In this study, we built a distributed Geomorphology-Based Hydrological Model (GBHM) of the Mekong River using remote sensing data and other publicly available data. Two numerical experiments were conducted using different rainfall data sets as model inputs. The data sets included rain gauge data from the Mekong River Commission (MRC) and remote sensing rainfall data from the Tropical Rainfall Measuring Mission (TRMM 3B42V7). Model calibration and validation were conducted for the two rainfall data sets. Compared to the observed discharge, both the gauge simulation and TRMM simulation performed well during the calibration period (1998-2001). However, the performance of the gauge simulation was worse than that of the TRMM simulation during the validation period (2002-2012). The TRMM simulation is more stable and reliable at different scales. Moreover, the calibration period was changed to 2, 4, and 8 years to test the impact of the calibration period length on the two simulations. The results suggest that longer calibration periods improved the GBHM performance during validation periods. In addition, the TRMM simulation is more stable and less sensitive to the calibration period length than is the gauge simulation. Further analysis reveals that the uneven distribution of rain gauges makes the input rainfall data less representative and more heterogeneous, worsening the simulation performance. Our results indicate that remotely sensed rainfall data may be more suitable for driving distributed hydrologic models, especially in basins with poor data quality or limited gauge availability.
Masalski, Marcin; Kipiński, Lech; Grysiński, Tomasz; Kręcicki, Tomasz
2016-05-30
Hearing tests carried out in a home setting by means of mobile devices require previous calibration of the reference sound level. Mobile devices with bundled headphones create the possibility of applying a predefined level for a particular model as an alternative to calibrating each device separately. The objective of this study was to determine the reference sound level for sets composed of a mobile device and bundled headphones. Reference sound levels for Android-based mobile devices were determined using an open-access mobile phone app by means of biological calibration, that is, in relation to the normal-hearing threshold. The examinations were conducted in 2 groups: an uncontrolled and a controlled one. In the uncontrolled group, fully automated self-measurements were carried out in home conditions by 18- to 35-year-old subjects without prior hearing problems, recruited online. Calibration was conducted as a preliminary step in preparation for further examination. In the controlled group, audiologist-assisted examinations were performed in a sound booth on normal-hearing subjects verified through pure-tone audiometry, recruited offline from among the workers and patients of the clinic. In both groups, the reference sound levels were determined on the subject's mobile device using Bekesy audiometry. The reference sound levels were compared between the groups. Intramodel and intermodel analyses were carried out as well. In the uncontrolled group, 8988 calibrations were conducted on 8620 different devices representing 2040 models. In the controlled group, 158 calibrations (test and retest) were conducted on 79 devices representing 50 models. Result analysis was performed for the 10 most frequently used models in both groups. The difference in reference sound levels between the uncontrolled and controlled groups was 1.50 dB (SD 4.42). The mean SD of the reference sound level determined for devices within the same model was 4.03 dB (95% CI 3.93-4.11). Statistically significant differences were found across models. Reference sound levels determined in the uncontrolled group are comparable to the values obtained in the controlled group. This validates the use of biological calibration in the uncontrolled group for determining the predefined reference sound level for new devices. Moreover, due to the relatively small deviation of the reference sound level for devices of the same model, it is feasible to conduct hearing screening on devices calibrated with the predefined reference sound level.
Spinning angle optical calibration apparatus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, S.K.; Pratt, H.R. II.
1989-09-12
An optical calibration apparatus is provided for calibrating and reproducing spinning angles in cross-polarization, nuclear magnetic resonance spectroscopy. An illuminated magnifying apparatus enables optical setting and accurate reproducing of spinning magic angles in cross-polarization, nuclear magnetic resonance spectroscopy experiments. A reference mark scribed on an edge of a spinning angle test sample holder is illuminated by a light source and viewed through a magnifying scope. When the magic angle of a sample material used as a standard is attained by varying the angular position of the sample holder, the coordinate position of the reference mark relative to a graduation or graduations on a reticle in the magnifying scope is noted. Thereafter, the spinning magic angle of a test material having similar nuclear properties to the standard is attained by returning the sample holder back to the originally noted coordinate position. 2 figs.
Versatile robotic probe calibration for position tracking in ultrasound imaging.
Bø, Lars Eirik; Hofstad, Erlend Fagertun; Lindseth, Frank; Hernes, Toril A N
2015-05-07
Within the field of ultrasound-guided procedures, there are a number of methods for ultrasound probe calibration. While these methods are usually developed for a specific probe, they are in principle easily adapted to other probes. In practice, however, the adaptation often proves tedious and this is impractical in a research setting, where new probes are tested regularly. Therefore, we developed a method which can be applied to a large variety of probes without adaptation. The method used a robot arm to move a plastic sphere submerged in water through the ultrasound image plane, providing a slow and precise movement. The sphere was then segmented from the recorded ultrasound images using a MATLAB programme and the calibration matrix was computed based on this segmentation in combination with tracking information. The method was tested on three very different probes demonstrating both great versatility and high accuracy.
Measurement of pH in whole blood by near-infrared spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, M. Kathleen; Maynard, John D.; Robinson, M. Ries
1999-03-01
Whole blood pH has been determined in vitro by using near-infrared spectroscopy over the wavelength range of 1500 to 1785 nm with multivariate calibration modeling of the spectral data obtained from two different sample sets. In the first sample set, the pH of whole blood was varied without controlling cell size and oxygen saturation (O2 Sat) variation. The result was that the red blood cell (RBC) size and O2 Sat correlated with pH. Although the partial least-squares (PLS) multivariate calibration of these data produced a good pH prediction (cross-validation standard error of prediction (CVSEP) = 0.046, R² = 0.982), the spectral data were dominated by scattering changes due to changing RBC size that correlated with the pH changes. A second experiment was carried out where the RBC size and O2 Sat were varied orthogonally to the pH variation. A PLS calibration of the spectral data obtained from these samples produced a pH prediction with an R² of 0.954 and a cross-validated standard error of prediction of 0.064 pH units. The robustness of the PLS calibration models was tested by predicting the data obtained from the other sets. The predicted pH values obtained from both data sets yielded R² values greater than 0.9 once the data were corrected for differences in hemoglobin concentration. For example, with the use of the calibration produced from the second sample set, the pH values from the first sample set were predicted with an R² of 0.92 after the predictions were corrected for bias and slope. It is shown that spectral information specific to pH-induced chemical changes in the hemoglobin molecule is contained within the PLS loading vectors developed for both the first and second data sets. It is this pH-specific information that allows the spectra dominated by pH-correlated scattering changes to provide robust pH predictive ability in the uncorrelated data, and vice versa. © 1999 Society for Applied Spectroscopy
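The two-step PLS workflow described above (cross-validated calibration, then prediction of a second sample set) can be sketched with a generic chemometrics stack. The arrays below are random stand-ins for the blood spectra, and scikit-learn's PLSRegression plays the role of the PLS calibration:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Hypothetical stand-ins: X is (samples x wavelengths) of NIR
# absorbances over 1500-1785 nm, y the reference pH values.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))
y = 7.4 + 0.5 * X[:, :10].mean(axis=1) + 0.02 * rng.normal(size=60)

pls = PLSRegression(n_components=6)

# Cross-validated predictions give a CVSEP-style figure of merit.
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
cvsep = np.sqrt(np.mean((y - y_cv) ** 2))

# Fit on one sample set, then test robustness by predicting the other
# (here a held-out split stands in for the second experiment).
pls.fit(X[:40], y[:40])
r2 = pls.score(X[40:], y[40:])
print(cvsep, r2)
```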
NASA Astrophysics Data System (ADS)
Lu, Jiazhen; Liang, Shufang; Yang, Yanqiang
2017-10-01
Micro-electro-mechanical systems (MEMS) inertial measurement devices are widely used in inertial navigation systems and have quickly emerged on the market owing to their low cost, high reliability and small size. Calibration is the most effective way to remove the deterministic error of an inertial reference unit (IRU), which in this paper consists of three orthogonally mounted MEMS gyros. However, common laboratory testing methods cannot predict the corresponding errors precisely when the turntable's working condition is restricted; in this paper, the turntable can only provide a relatively small rotation angle. Moreover, the errors must be compensated exactly because of the great effect of the craft's high angular velocity. To address this problem, a new method is proposed to evaluate the MEMS IRU's performance. In the calibration procedure, a one-axis table that can rotate a limited angle in the form of a sine function is utilized to provide the MEMS IRU's angular velocity. A new algorithm based on Fourier series is designed to calculate the misalignment and scale factor errors. The proposed method is tested in a set of experiments, and the calibration results are compared to those of a traditional calibration method performed under normal working conditions to verify their correctness. In addition, a verification test at the given rotation speed is implemented for further demonstration.
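A minimal sketch of the limited-angle idea, under assumed details rather than the paper's exact algorithm: with the table angle following a sine function, least-squares fitting each gyro output against the in-phase cosine basis at the drive frequency recovers the scale factor on the driven axis and misalignment projections on the other axes. All parameters and error values below are hypothetical:

```python
import numpy as np

# Table angle theta(t) = A*sin(2*pi*f*t), so the reference rate is
# w(t) = 2*pi*f*A*cos(2*pi*f*t).
rng = np.random.default_rng(0)
f, A, fs, T = 0.5, np.deg2rad(5.0), 200.0, 20.0   # Hz, rad, Hz, s
t = np.arange(0, T, 1 / fs)
w_ref = 2 * np.pi * f * A * np.cos(2 * np.pi * f * t)

# Simulated outputs: x gyro on the table axis, y gyro nominally orthogonal.
true_scale, true_misalign = 1.002, 0.004          # hypothetical errors
gx = true_scale * w_ref + 1e-4 * rng.standard_normal(t.size)
gy = true_misalign * w_ref + 1e-4 * rng.standard_normal(t.size)

# Fourier-series-style fit: in-phase cosine basis plus a bias term.
B = np.column_stack([np.cos(2 * np.pi * f * t), np.ones_like(t)])
coef_x = np.linalg.lstsq(B, gx, rcond=None)[0]
coef_y = np.linalg.lstsq(B, gy, rcond=None)[0]
print(coef_x[0] / (2 * np.pi * f * A),   # estimated scale factor
      coef_y[0] / (2 * np.pi * f * A))   # estimated misalignment
```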
De Champlain, Andre F; Boulais, Andre-Philippe; Dallas, Andrew
2016-01-01
The aim of this research was to compare different methods of calibrating multiple choice question (MCQ) and clinical decision making (CDM) components for the Medical Council of Canada's Qualifying Examination Part I (MCCQEI) based on item response theory. Our data consisted of test results from 8,213 first time applicants to MCCQEI in spring and fall 2010 and 2011 test administrations. The data set contained several thousand multiple choice items and several hundred CDM cases. Four dichotomous calibrations were run using BILOG-MG 3.0. All three mixed-item-format (dichotomous MCQ responses and polytomous CDM case scores) calibrations were conducted using PARSCALE 4. The 2-PL model had identical numbers of items with chi-square values at or below a Type I error rate of 0.01 (83/3,499 or 0.02). In all 3 polytomous models, whether the MCQs were either anchored or concurrently run with the CDM cases, results suggest very poor fit. All IRT abilities estimated from dichotomous calibration designs correlated very highly with each other. IRT-based pass-fail rates were extremely similar, not only across calibration designs and methods, but also with regard to the actual reported decision to candidates. The largest difference noted in pass rates was 4.78%, which occurred between the mixed format concurrent 2-PL graded response model (pass rate = 80.43%) and the dichotomous anchored 1-PL calibrations (pass rate = 85.21%). Simpler calibration designs with dichotomized items should be implemented. The dichotomous calibrations provided better fit of the item response matrix than more complex, polytomous calibrations.
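For readers unfamiliar with the dichotomous models compared above, a small illustration of 2-PL scoring (a generic sketch, not BILOG-MG or PARSCALE) shows how an ability estimate follows from item parameters and a response pattern; the item parameters below are invented:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# 2-PL model: P(correct) = 1 / (1 + exp(-a * (theta - b))).
a = np.array([1.2, 0.8, 1.5, 1.0])   # hypothetical discriminations
b = np.array([-0.5, 0.0, 0.7, 1.2])  # hypothetical difficulties
x = np.array([1, 1, 0, 0])           # one candidate's 0/1 responses

def neg_log_lik(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

# Maximum-likelihood ability estimate for this response pattern.
theta_hat = minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x
print(theta_hat)
```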
Burns, J; Hou, S; Riley, C B; Shaw, R A; Jewett, N; McClure, J T
2014-01-01
Rapid, economical, and quantitative assays for measurement of camelid serum immunoglobulin G (IgG) are limited. In camelids, failure of transfer of maternal immunoglobulins has a reported prevalence of up to 20.5%. An accurate method for quantifying serum IgG concentrations is required. To develop an infrared spectroscopy-based assay for measurement of alpaca serum IgG and compare its performance to the reference standard radial immunodiffusion (RID) assay. One hundred and seventy-five privately owned, healthy alpacas. Eighty-two serum samples were collected as convenience samples during routine herd visits whereas 93 samples were recruited from a separate study. Serum IgG concentrations were determined by RID assays and midinfrared spectra were collected for each sample. Fifty samples were set aside as the test set and the remaining 125 training samples were employed to build a calibration model using partial least squares (PLS) regression with Monte Carlo cross validation to determine the optimum number of PLS factors. The predictive performance of the calibration model was evaluated by the test set. Correlation coefficients for the IR-based assay were 0.93 and 0.87, respectively, for the entire data set and test set. Sensitivity in the diagnosis of failure of transfer of passive immunity (FTPI) ([IgG] <1,000 mg/dL) was 71.4% and specificity was 100% for the IR-based method (test set) as gauged relative to the RID reference method assay. This study indicated that infrared spectroscopy, in combination with chemometrics, is an effective method for measurement of IgG in alpaca serum. Copyright © 2014 by the American College of Veterinary Internal Medicine.
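Monte Carlo cross-validation for choosing the number of PLS factors can be sketched as repeated random splits; the spectra and IgG values below are random placeholders, not the alpaca data:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import ShuffleSplit, cross_val_score

# Hypothetical stand-ins for the mid-infrared spectra (X) and RID IgG
# reference values (y) of the 125 training samples.
rng = np.random.default_rng(1)
X = rng.normal(size=(125, 300))
y = 1500 + 400 * X[:, :20].mean(axis=1) + 50 * rng.normal(size=125)

# Monte Carlo cross-validation: repeated random 80/20 splits, used here
# to pick the optimum number of PLS factors.
mccv = ShuffleSplit(n_splits=50, test_size=0.2, random_state=0)
rmse_by_rank = {}
for n in range(1, 16):
    scores = cross_val_score(PLSRegression(n_components=n), X, y,
                             cv=mccv, scoring="neg_root_mean_squared_error")
    rmse_by_rank[n] = -scores.mean()
best_n = min(rmse_by_rank, key=rmse_by_rank.get)
print(best_n, rmse_by_rank[best_n])
```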
DOT National Transportation Integrated Search
1996-04-01
This report also describes the procedures for direct estimation of intersection capacity with simulation, including a set of rigorous statistical tests for simulation parameter calibration from field data.
Spinning angle optical calibration apparatus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, S.K.; Pratt, H.R.
1991-02-26
This patent describes an optical calibration apparatus provided for calibrating and reproducing spinning angles in cross-polarization, nuclear magnetic resonance spectroscopy. An illuminated magnifying apparatus enables optical setting and accurate reproduction of spinning magic angles in cross-polarization, nuclear magnetic resonance spectroscopy experiments. A reference mark scribed on an edge of a spinning angle test sample holder is illuminated by a light source and viewed through a magnifying scope. When the magic angle of a sample material used as a standard is attained by varying the angular position of the sample holder, the coordinate position of the reference mark relative to a graduation or graduations on a reticle in the magnifying scope is noted.
NASA Astrophysics Data System (ADS)
Gektin, Yu. M.; Egoshkin, N. A.; Eremeev, V. V.; Kuznecov, A. E.; Moskatinyev, I. V.; Smelyanskiy, M. B.
2017-12-01
A set of standardized models and algorithms for the geometric normalization and georeferencing of images from geostationary and highly elliptical Earth observation systems is considered. The algorithms can process information from modern scanning multispectral sensors with two-coordinate scanning and represent normalized images in an optimal projection. Problems of the high-precision ground calibration of the imaging equipment using reference objects, as well as issues of in-flight calibration and refinement of geometric models using absolute and relative reference points, are considered. Practical testing of the models, algorithms, and technologies is performed in the calibration of sensors for spacecraft of the Electro-L series and during the simulation of the prospective Arktika system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armentrout, J.M.; Smith-Rouch, L.S.; Bowman, S.A.
1996-08-01
Numeric simulations based on integrated data sets enhance our understanding of depositional geometry and facilitate quantification of depositional processes. Numeric values tested against well-constrained geologic data sets can then be used in iterations testing each variable, and in predicting lithofacies distributions under various depositional scenarios using the principles of sequence stratigraphic analysis. The stratigraphic modeling software provides a broad spectrum of techniques for modeling and testing elements of the petroleum system. Using well-constrained geologic examples, variations in depositional geometry and lithofacies distributions between different tectonic settings (passive vs. active margin) and climate regimes (hothouse vs. icehouse) can provide insight into potential source rock and reservoir rock distribution, maturation timing, migration pathways, and trap formation. Two data sets are used to illustrate such variations: both include a seismic reflection profile calibrated by multiple wells. The first is a Pennsylvanian mixed carbonate-siliciclastic system in the Paradox basin, and the second a Pliocene-Pleistocene siliciclastic system in the Gulf of Mexico. Numeric simulations result in geometry and facies distributions consistent with those interpreted using the integrated stratigraphic analysis of the calibrated seismic profiles. An exception occurs in the Gulf of Mexico study where the simulated sediment thickness from 3.8 to 1.6 Ma within an upper slope minibasin was less than that mapped using a regional seismic grid. Regional depositional patterns demonstrate that this extra thickness was probably sourced from out of the plane of the modeled transect, illustrating the necessity for three-dimensional constraints on two-dimensional modeling.
Calibrating the PAU Survey's 46 Filters
NASA Astrophysics Data System (ADS)
Bauer, A.; Castander, F.; Gaztañaga, E.; Serrano, S.; Sevilla, N.; Tonello, N.; PAU Team
2016-05-01
The Physics of the Accelerating Universe (PAU) Survey, being carried out by several Spanish institutions, will image an area of 100-200 square degrees in 6 broad and 40 narrow band optical filters. The team is building a camera (PAUCam) with 18 CCDs, which will be installed in the 4-meter William Herschel Telescope at La Palma in 2013. The narrow band filters will each cover 100Å, with the set spanning 4500-8500Å. The broad band set will consist of standard ugrizY filters. The narrow band filters will provide low-resolution (R ~ 50) photometric "spectra" for all objects observed in the survey, which will reach a depth of ~24 mag in the broad bands and ~22.5 mag (AB) in the narrow bands. Such precision will allow for galaxy photometric redshift errors of 0.0035(1+z), which will facilitate the measurement of cosmological parameters with precision comparable to much larger spectroscopic and photometric surveys. Accurate photometric calibration of the PAU data is vital to the survey's science goals, and is not straightforward due to the large and unusual filter set. We outline the data management pipelines being developed for the survey, both for nightly data reduction and coaddition of multiple epochs, with emphasis on the photometric calibration strategies. We also describe the tools we are developing to test the quality of the reduction and calibration.
An active learning representative subset selection method using net analyte signal.
He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi
2018-05-05
To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, in general, it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference of Euclidean norm of net analyte signal (NAS) vector between the candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vector, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying projection matrix with spectra of samples. Scalar value of NAS is obtained by norm computation. The distance between the candidate set and the selected set is computed, and samples with the largest distance are added to selected set sequentially. Last, the concentration of the analyte is measured such that the sample can be used as a calibration sample. Using a validation test, it is shown that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced. Copyright © 2018 Elsevier B.V. All rights reserved.
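The selection loop described above can be sketched directly from the stated steps, with a hedged, Lorber-style NAS computation standing in for the authors' exact formulation; all matrices below are synthetic:

```python
import numpy as np

# Sketch: remove the rank-one analyte contribution from the calibration
# spectra, project candidates onto the orthogonal complement of the
# interference space, and greedily pick the candidates farthest (in
# scalar-NAS difference) from the already-selected set.
rng = np.random.default_rng(2)
X = rng.normal(size=(30, 120))          # measured calibration spectra
y = rng.uniform(1, 10, size=30)         # analyte concentrations
C = rng.normal(size=(200, 120))         # candidate spectra (unmeasured)

s = X.T @ y / (y @ y)                   # estimated analyte spectrum
X_int = X - np.outer(y, s)              # interference-only spectra
P = np.eye(X.shape[1]) - np.linalg.pinv(X_int) @ X_int  # projection matrix

nas_cand = C @ P.T                      # NAS vectors of the candidates
norm_cand = np.linalg.norm(nas_cand, axis=1)  # scalar NAS values

selected = [int(np.argmax(norm_cand))]
for _ in range(9):                      # pick 10 samples in total
    # distance of each candidate to the selected set in scalar-NAS terms
    d = np.min(np.abs(norm_cand[:, None] - norm_cand[selected][None, :]),
               axis=1)
    d[selected] = -np.inf               # never re-pick a selected sample
    selected.append(int(np.argmax(d)))
print(selected)                         # samples to send for reference assay
```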
Lumb, A.M.; McCammon, R.B.; Kittle, J.L.
1994-01-01
Expert system software was developed to assist less experienced modelers with calibration of a watershed model and to provide the interaction between the modeler and the modeling process that mathematical optimization does not. A prototype was developed with artificial intelligence software tools, a knowledge engineer, and two domain experts. The manual procedures used by the domain experts were identified, and the prototype was then coded by the knowledge engineer. The expert system consists of a set of hierarchical rules designed to guide the calibration of the model through a systematic evaluation of model parameters. When the prototype was completed and tested, it was rewritten for portability and operational use and was named HSPEXP. The watershed model Hydrological Simulation Program--Fortran (HSPF) is used in the expert system. This report is the user's manual for HSPEXP and contains a discussion of the concepts and detailed steps and examples for using the software. The system has been tested on watersheds in the states of Washington and Maryland, and it correctly identified the model parameters to be adjusted; the adjustments led to improved calibration.
Hao, Z Q; Li, C M; Shen, M; Yang, X Y; Li, K H; Guo, L B; Li, X Y; Lu, Y F; Zeng, X Y
2015-03-23
Laser-induced breakdown spectroscopy (LIBS) with partial least squares regression (PLSR) has been applied to measuring the acidity of iron ore, which can be defined by the concentrations of the oxides CaO, MgO, Al₂O₃, and SiO₂. With conventional internal standard calibration, it is difficult to establish the calibration curves of CaO, MgO, Al₂O₃, and SiO₂ in iron ore due to serious matrix effects. PLSR is effective at addressing this problem owing to its excellent performance in compensating for matrix effects. In this work, fifty samples were used to construct the PLSR calibration models for the above-mentioned oxides. These calibration models were validated by the 10-fold cross-validation method with the minimum root-mean-square errors (RMSE). Another ten samples were used as a test set. The acidities were calculated according to the estimated concentrations of CaO, MgO, Al₂O₃, and SiO₂ using the PLSR models. The average relative error (ARE) and RMSE of the acidity reached 3.65% and 0.0048, respectively, for the test samples.
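Once the oxide concentrations are predicted, the acidity and its error metrics are straightforward to compute. Note that the acidity definition used below (ratio of acidic to basic oxides) is an assumption, since the abstract does not state the formula, and all values are hypothetical:

```python
import numpy as np

# Hypothetical oxide concentrations (wt%) predicted by the PLSR models
# for two test samples, plus the corresponding reference acidities.
pred = {"CaO": np.array([1.2, 0.9]), "MgO": np.array([0.8, 0.7]),
        "Al2O3": np.array([2.1, 2.4]), "SiO2": np.array([5.3, 6.1])}
ref_acidity = np.array([3.68, 5.36])

# Assumed definition: acidity as the ratio of acidic to basic oxides.
acidity = (pred["SiO2"] + pred["Al2O3"]) / (pred["CaO"] + pred["MgO"])

are = np.mean(np.abs(acidity - ref_acidity) / ref_acidity) * 100  # %
rmse = np.sqrt(np.mean((acidity - ref_acidity) ** 2))
print(acidity, are, rmse)
```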
NASA Astrophysics Data System (ADS)
Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing
2017-12-01
We present an optimization algorithm to obtain low-uncertainty dynamic pressure measurements from a force-transducer-based device. In this paper, the advantages and disadvantages of the methods commonly used to measure propellant powder gas pressure, the applicable scope of dynamic pressure calibration devices, and the shortcomings of the traditional comparison calibration method based on the drop-weight device are first analysed in detail. Then, a dynamic calibration method for measuring pressure using a force sensor based on a drop-weight device is introduced. This method can effectively save time when many pressure sensors are calibrated simultaneously and extends the life of expensive reference sensors. However, the force sensor is installed between the drop-weight and the hammerhead by transition pieces through a bolt-fastened connection, which causes adverse effects such as additional pretightening and inertia forces. To address these effects, the influence mechanisms of the pretightening force, the inertia force and other factors on the force measurement are theoretically analysed. A measurement correction method is then proposed based on an artificial neural network optimized by a genetic algorithm. The training and testing data sets are obtained from calibration tests, and the selection criteria for the key parameters of the correction model are discussed. The evaluation results for the test data show that the correction model can effectively improve the force measurement accuracy of the force sensor. Compared with the traditional high-accuracy comparison calibration method, the percentage difference of the impact-force-based measurement is less than 0.6% and the relative uncertainty of the corrected force value is 1.95%, which meets the requirements of engineering applications.
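The correction model can be illustrated with a deliberately small stand-in: a one-hidden-layer network whose weights are searched by a plain genetic algorithm. The paper's network structure, inputs, and GA settings are not given here, so everything below is a generic sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical inputs (e.g. raw force reading plus pretightening- and
# inertia-related quantities) and corrected-force targets.
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200)

H = 8                                          # hidden units
n_w = 3 * H + H + H + 1                        # W1, b1, W2, b2 flattened

def predict(w, X):
    W1 = w[:3 * H].reshape(3, H); b1 = w[3 * H:4 * H]
    W2 = w[4 * H:5 * H]; b2 = w[5 * H]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(w):
    return -np.mean((predict(w, X) - y) ** 2)  # negative training MSE

pop = rng.normal(scale=0.5, size=(60, n_w))    # initial population
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-20:]]    # truncation selection
    children = []
    for _ in range(40):
        a, b = parents[rng.integers(20, size=2)]
        mask = rng.random(n_w) < 0.5           # uniform crossover
        children.append(np.where(mask, a, b)
                        + 0.05 * rng.normal(size=n_w))  # mutation
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(w) for w in pop])]
print(-fitness(best))                          # final training MSE
```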
NASA Technical Reports Server (NTRS)
Caplin, R. S.; Royer, E. R.
1978-01-01
Attempts are made to provide a total design of a Microbial Load Monitor (MLM) system flight engineering model. Activities include assembly and testing of Sample Receiving and Card Loading Devices (SRCLDs), operator-related software, and testing of biological samples in the MLM. Progress was made in assembling SRCLDs with minimal leaks that operate reliably in the Sample Loading System. Seven operator commands are used to control various aspects of the MLM, such as calibrating and reading the incubating reading head, setting the clock, reading the time, and checking the status of a card. Testing of the instrument, both hardware and biological, was performed. Hardware testing concentrated on SRCLDs. Biological testing covered 66 clinical and seeded samples. Tentative thresholds were set and media performance was listed.
The LED and fiber based calibration system for the photomultiplier array of SNO+
NASA Astrophysics Data System (ADS)
Seabra, L.; Alves, R.; Andringa, S.; Bradbury, S.; Carvalho, J.; Clark, K.; Coulter, I.; Descamps, F.; Falk, L.; Gurriana, L.; Kraus, C.; Lefeuvre, G.; Maio, A.; Maneira, J.; Mottram, M.; Peeters, S.; Rose, J.; Sinclair, J.; Skensved, P.; Waterfield, J.; White, R.; Wilson, J.; SNO+ Collaboration
2015-02-01
A new external LED/fiber light injection calibration system was designed for the calibration and monitoring of the photomultiplier array of the SNO+ experiment at SNOLAB. The goal of the calibration system is to allow an accurate and regular measurement of the photomultiplier array's performance, while minimizing the risk of radioactivity ingress. The choice in SNO+ was to use a set of optical fiber cables to convey into the detector the light pulses produced by external LEDs. The quality control was carried out using a modified test bench that was used in QC of optical fibers for TileCal/ATLAS. The optical fibers were characterized for transmission, timing and angular dispersions. This article describes the setups used for the characterization and quality control of the system based on LEDs and optical fibers and their results.
Developing an Abaqus *HYPERFOAM Model for M9747 (4003047) Cellular Silicone Foam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siranosian, Antranik A.; Stevens, R. Robert
This report documents work done to develop an Abaqus *HYPERFOAM hyperelastic model for M9747 (4003047) cellular silicone foam for use in quasi-static analyses at ambient temperature. Experimental data, from acceptance tests for 'Pad A' conducted at the Kansas City Plant (KCP), was used to calibrate the model. The data includes gap (relative displacement) and load measurements from three locations on the pad. Thirteen sets of data, from pads with different serial numbers, were provided. The thirty-nine gap-load curves were extracted from the thirteen supplied Excel spreadsheets and analyzed, and from those thirty-nine one set of data, representing a qualitative mean, was chosen to calibrate the model. The data was converted from gap and load to nominal (engineering) strain and nominal stress in order to implement it in Abaqus. Strain computations required initial pad thickness estimates. An Abaqus model of a right-circular cylinder was used to evaluate and calibrate the *HYPERFOAM model.
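The gap/load to nominal strain/stress conversion is simple enough to sketch. The thickness, area, and measurements below are invented stand-ins, with compression written as negative per the usual Abaqus test-data convention:

```python
import numpy as np

# Hypothetical conversion used to prepare *HYPERFOAM test data: gap
# (relative displacement) and load histories become nominal strain and
# nominal stress via an estimated initial pad thickness and area.
gap = np.array([0.00, 0.01, 0.02, 0.03])     # in, relative displacement
load = np.array([0.0, 12.0, 30.0, 60.0])     # lbf
t0 = 0.10                                     # in, initial thickness (estimate)
area = 1.5                                    # in^2, nominal loaded area

nominal_strain = -gap / t0                    # compression taken as negative
nominal_stress = -load / area                 # psi, compression negative
for e, s in zip(nominal_strain, nominal_stress):
    print(f"{e:+.3f}, {s:+.1f}")
```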
The NASA Langley 8-foot Transonic Pressure Tunnel calibration
NASA Technical Reports Server (NTRS)
Brooks, Cuyler W., Jr.; Harris, Charles D.; Reagon, Patricia G.
1994-01-01
The NASA Langley 8-Foot Transonic Pressure Tunnel is a continuous-flow, variable-pressure wind tunnel with control capability to independently vary Mach number, stagnation pressure, stagnation temperature, and humidity. The top and bottom walls of the test section are axially slotted to permit continuous variation of the test section Mach number from 0.2 to 1.2; the slot-width contour provides a gradient-free test section 50 in. long for Mach numbers equal to or greater than 1.0 and 100 in. long for Mach numbers less than 1.0. The stagnation pressure may be varied from 0.25 to 2.0 atm. The tunnel test section has been recalibrated to determine the relationship between the free-stream Mach number and the test chamber reference Mach number. The hardware was the same as that of an earlier calibration in 1972, but the pressure measurement instrumentation available for the recalibration was about an order of magnitude more precise. The principal result of the recalibration was a slightly different schedule of reentry flap settings for Mach numbers from 0.80 to 1.05 than that determined during the 1972 calibration. Detailed tunnel contraction geometry, test section geometry, and limited test section wall boundary layer data are presented.
Nuopponen, Mari H; Birch, Gillian M; Sykes, Rob J; Lee, Steve J; Stewart, Derek
2006-01-11
Sitka spruce (Picea sitchensis) samples (491) from 50 different clones as well as 24 different tropical hardwoods and 20 Scots pine (Pinus sylvestris) samples were used to construct diffuse reflectance mid-infrared Fourier transform (DRIFT-MIR) based partial least squares (PLS) calibrations on lignin, cellulose, and wood resin contents and densities. Calibrations for density, lignin, and cellulose were established for all wood species combined into one data set as well as for the separate Sitka spruce data set. Relationships between wood resin and MIR data were constructed for the Sitka spruce data set as well as the combined Scots pine and Sitka spruce data sets. Calibrations containing only five wavenumbers instead of the spectral ranges 4000-2800 and 1800-700 cm⁻¹ were also established. In addition, chemical factors contributing to wood density were studied. Chemical composition and density assessed from DRIFT-MIR calibrations had R² and Q² values in the ranges of 0.6-0.9 and 0.6-0.8, respectively. The PLS models gave root mean square error of prediction (RMSEP) values of 1.6-1.9, 2.8-3.7, and 0.4 for lignin, cellulose, and wood resin contents, respectively. Density test sets had RMSEP values ranging from 50 to 56. A reduced number of wavenumbers can thus be used to predict the chemical composition and density of a wood, which should allow measurement of these properties with a hand-held device. MIR spectral data indicated that low-density samples had somewhat higher lignin contents than high-density samples. Correspondingly, high-density samples contained slightly more polysaccharides than low-density samples. This observation was consistent with the wet chemical data.
Overview of the Joint NASA ISRO Imaging Spectroscopy Science Campaign in India
NASA Astrophysics Data System (ADS)
Green, R. O.; Bhattacharya, B. K.; Eastwood, M. L.; Saxena, M.; Thompson, D. R.; Sadasivarao, B.
2016-12-01
In the period from December 2015 to March 2016 the Airborne Visible-Infrared Imaging Spectrometer Next Generation (AVIRIS-NG) was deployed to India for a joint NASA ISRO science campaign. This campaign was conceived to provide first-of-their-kind, high-fidelity imaging spectroscopy measurements of a diverse set of Asian environments for science and applications research. During this campaign measurements were acquired for 57 high-priority sites with objectives spanning: snow/ice of the Himalaya; coastal habitats and water quality; mangrove forests; soils; dry and humid forests; hydrocarbon alteration; mineralogy; agriculture; urban materials; atmospheric properties; and calibration/validation. Measurements from the campaign have been processed to at-instrument spectral radiance and atmospherically corrected surface reflectance. New AVIRIS-NG algorithms for retrieval of vegetation canopy water and for estimation of the fractions of photosynthetic and non-photosynthetic vegetation have been tested and evaluated on these measurements. An in-flight calibration validation experiment was performed on the 11th of December 2015 in Hyderabad to assess the spectral and radiometric calibration of AVIRIS-NG in the flight environment. We present an overview of the campaign, calibration and validation results, and initial science analysis of a subset of these unique and diverse data sets.
Performance of Reclassification Statistics in Comparing Risk Prediction Models
Paynter, Nina P.
2012-01-01
Concerns have been raised about the use of traditional measures of model fit in evaluating risk prediction models for clinical use, and reclassification tables have been suggested as an alternative means of assessing the clinical utility of a model. Several measures based on the table have been proposed, including the reclassification calibration (RC) statistic, the net reclassification improvement (NRI), and the integrated discrimination improvement (IDI), but the performance of these in practical settings has not been fully examined. We used simulations to estimate the type I error and power for these statistics in a number of scenarios, as well as the impact of the number and type of categories, when adding a new marker to an established or reference model. The type I error was found to be reasonable in most settings, and power was highest for the IDI, which was similar to the test of association. The relative power of the RC statistic, a test of calibration, and the NRI, a test of discrimination, varied depending on the model assumptions. These tools provide unique but complementary information. PMID:21294152
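The NRI and IDI have compact textbook definitions, sketched below for a reference and an updated model on simulated risks; the category cut points are arbitrary examples:

```python
import numpy as np

# Category-based NRI and the IDI for a new marker added to a reference
# model; p_ref and p_new are predicted risks, event is 0/1.
def nri(p_ref, p_new, event, cuts=(0.1, 0.2)):
    c_ref = np.digitize(p_ref, cuts)
    c_new = np.digitize(p_new, cuts)
    up, down = c_new > c_ref, c_new < c_ref
    ev, ne = event == 1, event == 0
    return ((up[ev].mean() - down[ev].mean())
            - (up[ne].mean() - down[ne].mean()))

def idi(p_ref, p_new, event):
    ev, ne = event == 1, event == 0
    return ((p_new[ev].mean() - p_ref[ev].mean())
            - (p_new[ne].mean() - p_ref[ne].mean()))

rng = np.random.default_rng(4)
event = rng.integers(0, 2, size=500)
p_ref = np.clip(0.15 + 0.1 * rng.normal(size=500) + 0.05 * event, 0, 1)
p_new = np.clip(p_ref + 0.05 * (2 * event - 1) * rng.random(500), 0, 1)
print(nri(p_ref, p_new, event), idi(p_ref, p_new, event))
```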
Boiret, Mathieu; Meunier, Loïc; Ginot, Yves-Michel
2011-02-20
A near infrared (NIR) method was developed for determination of tablet potency of active pharmaceutical ingredient (API) in a complex coated tablet matrix. The calibration set contained samples from laboratory and production scale batches. The reference values were obtained by high performance liquid chromatography (HPLC) and partial least squares (PLS) regression was used to establish a model. The model was challenged by calculating tablet potency of two external test sets. Root mean square errors of prediction were respectively equal to 2.0% and 2.7%. To use this model with a second spectrometer from the production field, a calibration transfer method called piecewise direct standardisation (PDS) was used. After the transfer, the root mean square error of prediction of the first test set was 2.4% compared to 4.0% without transferring the spectra. A statistical technique using bootstrap of PLS residuals was used to estimate confidence intervals of tablet potency calculations. This method requires an optimised PLS model, selection of the bootstrap number and determination of the risk. In the case of a chemical analysis, the tablet potency value will be included within the confidence interval calculated by the bootstrap method. An easy to use graphical interface was developed to easily determine if the predictions, surrounded by minimum and maximum values, are within the specifications defined by the regulatory organisation. Copyright © 2010 Elsevier B.V. All rights reserved.
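Bootstrapping PLS residuals to bracket a prediction can be sketched generically; the spectra, potencies, bootstrap number, and risk level below are placeholders rather than the paper's settings:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Residual bootstrap around a PLS prediction: refit on resampled
# residuals and collect the spread of predictions for a new tablet.
rng = np.random.default_rng(5)
X = rng.normal(size=(80, 150))               # calibration NIR spectra
y = 100 + X[:, :10].sum(axis=1) + rng.normal(size=80)  # HPLC potency (%)
x_new = rng.normal(size=(1, 150))            # spectrum of a new tablet

pls = PLSRegression(n_components=5).fit(X, y)
resid = y - pls.predict(X).ravel()

boot_preds = []
for _ in range(500):                         # bootstrap number
    y_star = pls.predict(X).ravel() + rng.choice(resid, size=resid.size)
    p = PLSRegression(n_components=5).fit(X, y_star)
    boot_preds.append(p.predict(x_new).item())
lo, hi = np.percentile(boot_preds, [2.5, 97.5])  # risk alpha = 0.05
print(lo, hi)   # min/max bounds to compare against the specifications
```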
NASA Astrophysics Data System (ADS)
Vincent, Mark B.; Chanover, Nancy J.; Beebe, Reta F.; Huber, Lyle
2005-10-01
The NASA Infrared Telescope Facility (IRTF) on Mauna Kea, Hawaii, set aside some time on about 500 nights from 1995 to 2002, when the NSFCAM facility infrared camera was mounted and Jupiter was visible, for a standardized set of observations of Jupiter in support of the Galileo mission. The program included observations of Jupiter, nearby reference stars, and dome flats in five filters: narrowband filters centered at 1.58, 2.28, and 3.53 μm, and broader L' and M' bands that probe the atmosphere from the stratosphere to below the main cloud layer. The reference stars were not cross-calibrated against standards. We performed follow-up observations to calibrate these stars and Jupiter in 2003 and 2004. We present a summary of the calibration of the Galileo support monitoring program data set. We present calibrated magnitudes of the six most frequently observed stars, calibrated reflectivities, and brightness temperatures of Jupiter from 1995 to 2004, and a simple method of normalizing the Jovian brightness to the 2004 results. Our study indicates that the NSFCAM's zero-point magnitudes were not stable from 1995 to early 1997, and that the best Jovian calibration possible with this data set is limited to about ±10%. The raw images and calibration data have been deposited in the Planetary Data System.
PSL Icing Facility Upgrade Overview
NASA Technical Reports Server (NTRS)
Griffin, Thomas A.; Dicki, Dennis J.; Lizanich, Paul J.
2014-01-01
The NASA Glenn Research Center Propulsion Systems Lab (PSL) was recently upgraded to perform engine inlet ice crystal testing in an altitude environment. The upgrade installed 10 spray bars in the inlet plenum for ice crystal generation using 222 spray nozzles. As an altitude test chamber, the PSL is capable of simulating icing events at altitude in a ground-test facility. The system was designed to operate at altitudes from 4,000 to 40,000 ft, at Mach numbers up to 0.8, and at inlet total temperatures from −60 to +15 °F. This paper and presentation will be part of a series of presentations on PSL icing and will cover the development of the icing capability through design, developmental testing, installation, initial calibration, and validation engine testing. Information will be presented on the design criteria and process, spray bar developmental testing at Cox and Co., system capabilities, and the initial calibration and engine validation test. The PSL icing system was designed to provide NASA and the icing community with a facility that can be used for research studies of engine icing by duplicating in-flight events in a controlled ground-test facility. With the system and the altitude chamber we can produce flight conditions and cloud environments that simulate those encountered in flight. The icing system can be controlled to set various cloud uniformities, droplet median volumetric diameters (MVD), and ice water contents (IWC) through a wide variety of conditions. The PSL chamber can set altitudes, Mach numbers, and temperatures of interest to the icing community and also has the instrumentation capability to measure engine performance during icing testing. PSL last year completed the calibration and initial engine validation of the facility utilizing a Honeywell ALF502-R5 engine and has duplicated in-flight rollback conditions experienced during flight testing. This paper will summarize the modifications and buildup of the facility to accomplish these tests.
Parameter calibration for synthesizing realistic-looking variability in offline handwriting
NASA Astrophysics Data System (ADS)
Cheng, Wen; Lopresti, Dan
2011-01-01
Motivated by the widely accepted principle that the more training data, the better a recognition system performs, we conducted experiments asking human subjects to evaluate a mixture of real English handwritten text lines and text lines altered from existing handwriting with various degrees of distortion. The idea of generating synthetic handwriting is based on a perturbation method by T. Varga and H. Bunke that distorts an entire text line. Our experiments have two purposes. First, we want to calibrate distortion parameter settings for Varga and Bunke's perturbation model. Second, we intend to compare the effects of parameter settings on different writing styles: block, cursive, and mixed. From the preliminary experimental results, we determined appropriate ranges for the amplitude parameter, and found that parameter settings should be altered for different handwriting styles. With proper parameter settings, it should be possible to generate large amounts of training and testing data for building better off-line handwriting recognition systems.
Calibration of Wide-Field Deconvolution Microscopy for Quantitative Fluorescence Imaging
Lee, Ji-Sook; Wee, Tse-Luen (Erika); Brown, Claire M.
2014-01-01
Deconvolution enhances contrast in fluorescence microscopy images, especially in low-contrast, high-background wide-field microscope images, improving characterization of features within the sample. Deconvolution can also be combined with other imaging modalities, such as confocal microscopy, and most software programs seek to improve resolution as well as contrast. Quantitative image analyses require instrument calibration and with deconvolution, necessitate that this process itself preserves the relative quantitative relationships between fluorescence intensities. To ensure that the quantitative nature of the data remains unaltered, deconvolution algorithms need to be tested thoroughly. This study investigated whether the deconvolution algorithms in AutoQuant X3 preserve relative quantitative intensity data. InSpeck Green calibration microspheres were prepared for imaging, z-stacks were collected using a wide-field microscope, and the images were deconvolved using the iterative deconvolution algorithms with default settings. Afterwards, the mean intensities and volumes of microspheres in the original and the deconvolved images were measured. Deconvolved data sets showed higher average microsphere intensities and smaller volumes than the original wide-field data sets. In original and deconvolved data sets, intensity means showed linear relationships with the relative microsphere intensities given by the manufacturer. Importantly, upon normalization, the trend lines were found to have similar slopes. In original and deconvolved images, the volumes of the microspheres were quite uniform for all relative microsphere intensities. We were able to show that AutoQuant X3 deconvolution software data are quantitative. In general, the protocol presented can be used to calibrate any fluorescence microscope or image processing and analysis procedure. PMID:24688321
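The slope comparison after normalization amounts to two linear fits; the intensities below are invented, standing in for the InSpeck microsphere measurements:

```python
import numpy as np

# Check that deconvolution preserved relative intensities: fit measured
# mean intensities against the manufacturer's relative intensities and
# compare the slopes after normalization (values are hypothetical).
rel = np.array([0.003, 0.01, 0.03, 0.1, 0.3, 1.0])    # relative intensity
raw = np.array([12.0, 41.0, 118.0, 405.0, 1190.0, 4010.0])   # wide-field
dec = np.array([30.0, 98.0, 300.0, 1020.0, 2980.0, 9950.0])  # deconvolved

slope_raw = np.polyfit(rel, raw / raw.max(), 1)[0]
slope_dec = np.polyfit(rel, dec / dec.max(), 1)[0]
print(slope_raw, slope_dec)   # similar slopes => quantitation preserved
```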
Delivery of calibration workshops covering herbicide application equipment : final report.
DOT National Transportation Integrated Search
2014-03-31
Proper herbicide sprayer set-up and calibration are critical to the success of the Oklahoma Department of Transportation (ODOT) herbicide program. Sprayer system set-up and calibration training is provided in annual continuing education herbicide wor...
Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà
2010-03-01
Correlation constrained multivariate curve resolution-alternating least-squares is shown to be a feasible method for processing first-order instrumental data and achieve analyte quantitation in the presence of unexpected interferences. Both for simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and perform accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in calibration samples, the proposed multivariate calibration approach including the correlation constraint facilitates the achievement of the so-called second-order advantage for the analyte of interest, which is known to be present for more complex higher-order richer instrumental data. The proposed method is tested using a simulated data set and two experimental data systems, one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.
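A minimal MCR-ALS loop with a correlation constraint can be sketched as follows; the constraint step (regressing the resolved analyte channel onto known calibration concentrations) follows the general idea, while the data and component count are synthetic:

```python
import numpy as np

# Alternating least-squares for D = C S^T with non-negativity, plus a
# correlation constraint that anchors the analyte channel to reference
# concentrations known for the calibration subset.
rng = np.random.default_rng(6)
n_mix, n_wl, n_comp = 25, 80, 2
C_true = rng.uniform(0, 1, size=(n_mix, n_comp))
S_true = np.abs(rng.normal(size=(n_wl, n_comp)))
D = C_true @ S_true.T + 0.01 * rng.normal(size=(n_mix, n_wl))

cal = np.arange(15)                   # calibration samples
y_cal = C_true[cal, 0]                # known analyte concentrations

C = rng.uniform(0.1, 1, size=(n_mix, n_comp))   # initial estimate
for _ in range(100):
    S = np.linalg.lstsq(C, D, rcond=None)[0].T      # update spectra
    C = np.linalg.lstsq(S, D.T, rcond=None)[0].T    # update concentrations
    C = np.clip(C, 0, None)                         # non-negativity
    # correlation constraint: map the resolved analyte channel onto the
    # reference concentrations of the calibration set
    slope, icept = np.polyfit(C[cal, 0], y_cal, 1)
    C[:, 0] = slope * C[:, 0] + icept
pred_test = C[15:, 0]                 # analyte estimates in test samples
print(np.corrcoef(pred_test, C_true[15:, 0])[0, 1])
```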
Childs, Charmaine; Wang, Li; Neoh, Boon Kwee; Goh, Hok Liok; Zu, Mya Myint; Aung, Phyo Wai; Yeo, Tseng Tsai
2014-10-01
The objective was to investigate sensor measurement uncertainty for intracerebral probes inserted during neurosurgery and remaining in situ during neurocritical care. This describes a prospective observational study of two sensor types and including performance of the complete sensor-bedside monitoring and readout system. Sensors from 16 patients with severe traumatic brain injury (TBI) were obtained at the time of removal from the brain. When tested, 40% of sensors achieved the manufacturer temperature specification of 0.1 °C. Pressure sensors calibration differed from the manufacturers at all test pressures in 8/20 sensors. The largest pressure measurement error was in the intraparenchymal triple sensor. Measurement uncertainty is not influenced by duration in situ. User experiences reveal problems with sensor 'handling', alarms and firmware. Rigorous investigation of the performance of intracerebral sensors in the laboratory and at the bedside has established measurement uncertainty in the 'real world' setting of neurocritical care.
First Results of Field Absolute Calibration of the GPS Receiver Antenna at Wuhan University.
Hu, Zhigang; Zhao, Qile; Chen, Guo; Wang, Guangxing; Dai, Zhiqiang; Li, Tao
2015-11-13
GNSS receiver antenna phase center variations (PCVs), which arise from the non-spherical phase response of GNSS signals, have to be well corrected for high-precision GNSS applications. Without a precise antenna phase center correction (PCC) model, the estimated position of a station monument can be biased by up to several centimeters. The Chinese large-scale research project "Crustal Movement Observation Network of China" (CMONOC), which requires high-precision positions across a comprehensive GPS observational network, motivated the establishment of an absolute field calibration facility for GPS receiver antennas at Wuhan University. In this paper the calibration facilities are first introduced, and the multipath elimination and PCV estimation strategies currently used are then elaborated. The estimated PCV values of a test antenna are finally validated by comparison with the International GNSS Service (IGS) type values. Examples of TRM57971.00 NONE antenna calibrations from our calibration facility demonstrate that the derived PCVs and the IGS type mean values agree at the 1 mm level.
Feldman, Betsy J.; Crane, Heidi M.; Mugavero, Michael; Willig, James H.; Patrick, Donald; Schumacher, Joseph; Saag, Michael; Kitahata, Mari M.; Crane, Paul K.
2011-01-01
Purpose: We provide detailed instructions for analyzing patient-reported outcome (PRO) data collected with an existing (legacy) instrument so that scores can be calibrated to the PRO Measurement Information System (PROMIS) metric. This calibration facilitates migration to computerized adaptive test (CAT) PROMIS data collection, while facilitating research using historical legacy data alongside new PROMIS data. Methods: A cross-sectional convenience sample (n = 2,178) from the Universities of Washington and Alabama at Birmingham HIV clinics completed the PROMIS short form and Patient Health Questionnaire (PHQ-9) depression symptom measures between August 2008 and December 2009. We calibrated the tests using item response theory. We compared measurement precision of the PHQ-9, the PROMIS short form, and simulated PROMIS CAT. Results: Dimensionality analyses confirmed the PHQ-9 could be calibrated to the PROMIS metric. We provide code used to score the PHQ-9 on the PROMIS metric. The mean standard errors of measurement were 0.49 for the PHQ-9, 0.35 for the PROMIS short form, and 0.37, 0.28, and 0.27 for 3-, 8-, and 9-item-simulated CATs. Conclusions: The strategy described here facilitated migration from a fixed-format legacy scale to PROMIS CAT administration and may be useful in other settings. PMID:21409516
Gibbons, Laura E; Feldman, Betsy J; Crane, Heidi M; Mugavero, Michael; Willig, James H; Patrick, Donald; Schumacher, Joseph; Saag, Michael; Kitahata, Mari M; Crane, Paul K
2011-11-01
We provide detailed instructions for analyzing patient-reported outcome (PRO) data collected with an existing (legacy) instrument so that scores can be calibrated to the PRO Measurement Information System (PROMIS) metric. This calibration facilitates migration to computerized adaptive test (CAT) PROMIS data collection, while facilitating research using historical legacy data alongside new PROMIS data. A cross-sectional convenience sample (n = 2,178) from the Universities of Washington and Alabama at Birmingham HIV clinics completed the PROMIS short form and Patient Health Questionnaire (PHQ-9) depression symptom measures between August 2008 and December 2009. We calibrated the tests using item response theory. We compared measurement precision of the PHQ-9, the PROMIS short form, and simulated PROMIS CAT. Dimensionality analyses confirmed the PHQ-9 could be calibrated to the PROMIS metric. We provide code used to score the PHQ-9 on the PROMIS metric. The mean standard errors of measurement were 0.49 for the PHQ-9, 0.35 for the PROMIS short form, and 0.37, 0.28, and 0.27 for 3-, 8-, and 9-item-simulated CATs. The strategy described here facilitated migration from a fixed-format legacy scale to PROMIS CAT administration and may be useful in other settings.
Balss, K M; Llanos, G; Papandreou, G; Maryanoff, C A
2008-04-01
Raman spectroscopy was used to differentiate each component found in the CYPHER Sirolimus-eluting Coronary Stent. The unique spectral features identified for each component were then used to develop three separate calibration curves to describe the solid phase distribution found on drug-polymer coated stents. The calibration curves were obtained by analyzing confocal Raman spectral depth profiles from a set of 16 unique formulations of drug-polymer coatings sprayed onto stents and planar substrates. The sirolimus model was linear from 0 to 100 wt % of drug. The individual polymer calibration curves for poly(ethylene-co-vinyl acetate) [PEVA] and poly(n-butyl methacrylate) [PBMA] were also linear from 0 to 100 wt %. The calibration curves were tested on three independent drug-polymer coated stents. The sirolimus calibration predicted the drug content within 1 wt % of the laboratory assay value. The polymer calibrations predicted the content within 7 wt % of the formulation solution content. Attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectra from five formulations confirmed a linear response to changes in sirolimus and polymer content. Copyright 2007 Wiley Periodicals, Inc.
A calibration hierarchy for risk models was defined: from utopia to empirical data.
Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W
2016-06-01
Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equals the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects would have to be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive, as it stimulates the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
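Mean and weak calibration, as defined above, correspond to the familiar calibration intercept and slope; a sketch on simulated risks (using statsmodels for the offset logistic fits) is:

```python
import numpy as np
import statsmodels.api as sm

# Assessing mean and weak calibration on a validation set: the
# calibration intercept (logit recalibration with slope fixed at 1)
# and the calibration slope (logit of the predicted risk as the sole
# covariate in a logistic model).
rng = np.random.default_rng(7)
p = np.clip(rng.beta(2, 5, size=1000), 1e-6, 1 - 1e-6)  # predicted risks
y = rng.binomial(1, p)                                   # observed outcomes

lp = np.log(p / (1 - p))                                 # linear predictor
slope = sm.GLM(y, sm.add_constant(lp),
               family=sm.families.Binomial()).fit().params[1]
intercept = sm.GLM(y, np.ones_like(lp), offset=lp,
                   family=sm.families.Binomial()).fit().params[0]
print(intercept, slope)   # ideal: intercept near 0, slope near 1
```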
Bastian, Thomas; Maire, Aurélia; Dugas, Julien; Ataya, Abbas; Villars, Clément; Gris, Florence; Perrin, Emilie; Caritu, Yanis; Doron, Maeva; Blanc, Stéphane; Jallon, Pierre; Simon, Chantal
2015-03-15
"Objective" methods to monitor physical activity and sedentary patterns in free-living conditions are necessary to further our understanding of their impacts on health. In recent years, many software solutions capable of automatically identifying activity types from portable accelerometry data have been developed, with promising results in controlled conditions, but virtually no reports on field tests. An automatic classification algorithm initially developed using laboratory-acquired data (59 subjects engaging in a set of 24 standardized activities) to discriminate between 8 activity classes (lying, slouching, sitting, standing, walking, running, and cycling) was applied to data collected in the field. Twenty volunteers equipped with a hip-worn triaxial accelerometer performed at their own pace an activity set that included, among others, activities such as walking the streets, running, cycling, and taking the bus. Performances of the laboratory-calibrated classification algorithm were compared with those of an alternative version of the same model including field-collected data in the learning set. Despite good results in laboratory conditions, the performances of the laboratory-calibrated algorithm (assessed by confusion matrices) decreased for several activities when applied to free-living data. Recalibrating the algorithm with data closer to real-life conditions and from an independent group of subjects proved useful, especially for the detection of sedentary behaviors while in transports, thereby improving the detection of overall sitting (sensitivity: laboratory model = 24.9%; recalibrated model = 95.7%). Automatic identification methods should be developed using data acquired in free-living conditions rather than data from standardized laboratory activity sets only, and their limits carefully tested before they are used in field studies. Copyright © 2015 the American Physiological Society.
Comparison of Signal Response Between EDM Notch and Cracks in Eddy-Current Testing
NASA Technical Reports Server (NTRS)
Kane, Mary; Koshti, Ajay
2008-01-01
In the field of eddy-current testing (ET), an eddy-current instrument is calibrated on a manufactured notch that is designed to simulate a defect in a part. The calibrated instrument is then used to scan parts, with the assumption that any response over half the amplitude of the notch signal indicates a defective part. The purpose of this study is to attempt a direct comparison of the signal response observed from an EDM notch to that from a crack of the same size. To make this comparison, test equipment will be set up and calibrated as per normal inspection procedures. Both notches and crack specimens of as many different sizes as available will then be scanned and the data recorded. These data will then be analyzed to provide a comparison of the responses. The results should also provide information showing whether it is acceptable to use the half-amplitude method for determining if a part is defective. The tests will be performed on two different materials commonly inspected, titanium and aluminum, allowing a comparison of the results between materials.
CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila
2015-03-10
We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region, however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
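A bare-bones PSO loop captures the mechanics described above; the likelihood below is a toy quadratic stand-in for the SAG-versus-observations fit, and the swarm settings are generic defaults rather than the paper's:

```python
import numpy as np

# Particles explore the parameter space and move toward their personal
# best and the swarm's global best likelihood point.
rng = np.random.default_rng(8)

def log_like(theta):                      # toy stand-in for a SAM fit
    return -np.sum((theta - np.array([0.5, -1.0, 2.0])) ** 2)

n_part, n_dim, iters = 30, 3, 200
x = rng.uniform(-5, 5, size=(n_part, n_dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([log_like(p) for p in x])
gbest = pbest[np.argmax(pbest_val)]

w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and acceleration weights
for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    val = np.array([log_like(p) for p in x])
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmax(pbest_val)]
print(gbest)                              # maximum-likelihood region found
```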
The Calibration and error analysis of Shallow water (less than 100m) Multibeam Echo-Sounding System
NASA Astrophysics Data System (ADS)
Lin, M.
2016-12-01
Multibeam echo-sounders (MBES) have been developed to gather bathymetric and acoustic data for more efficient and more exact mapping of the oceans. This gain in efficiency does not come without drawbacks: the finer the resolution of remote sensing instruments, the harder they are to calibrate, and this is the case for multibeam echo-sounding systems. We are no longer dealing with sounding lines between which the bathymetry must be interpolated to produce consistent representations of the seafloor; we now need to match together strips (swaths) of totally ensonified seabed. As a consequence, misalignment and time-lag problems emerge as artifacts in the bathymetry from adjacent or overlapping swaths, particularly when operating in shallow water. More importantly, one must still verify that bathymetric data meet the accuracy requirements. This paper aims to summarize the system integration involved with MBES and to identify the various sources of error pertaining to shallow water surveys (100 m and less). A systematic method for the calibration of shallow water MBES is proposed and presented as a set of field procedures. The procedures aim at detecting, quantifying and correcting systematic instrumental and installation errors; calibrating for variations of the speed of sound in the water column, which are natural in origin, is therefore not addressed. The calibration data are compared against International Hydrographic Organization (IHO) and other related standards. The goal is to establish, for the specific survey area, a model that corrects the errors attributable to the instruments: a patch-test procedure is constructed to identify the possible sources of error in the sounding data and to calculate the compensating corrections. In practice, the problems to be solved are the four patch-test corrections in the Hypack system: (1) roll, (2) GPS latency, (3) pitch, and (4) yaw. Because these four corrections affect one another, each survey line is run in turn to calibrate them; the GPS latency correction synchronizes the GPS with the echo sounder. Future studies of any shallower portion of an area can, with this procedure, obtain more accurate sounding values and support more detailed research.
Solar Dynamics Observatory Launch and Commissioning
NASA Technical Reports Server (NTRS)
O'Donnell, James R., Jr.; Kristin, D.; Bourkland, L.; Hsu, Oscar C.; Liu, Kuo-Chia; Mason, Paul A. C.; Morgenstern, Wendy M.; Russo, Angela M.; Starin, Scott R.; Vess, Melissa F.
2011-01-01
The Solar Dynamics Observatory (SDO) was launched on February 11, 2010. Over the next three months, the spacecraft was raised from its launch orbit into its final geosynchronous orbit and its systems and instruments were tested and calibrated in preparation for its desired ten-year science mission studying the Sun. A great deal of activity during this time involved the spacecraft attitude control system (ACS): testing control modes, calibrating sensors and actuators, and using the ACS to help commission the spacecraft instruments and to control the propulsion system as the spacecraft was maneuvered into its final orbit. This paper will discuss the chronology of the SDO launch and commissioning, showing the ACS analysis work performed to diagnose the propellant slosh transient and attitude oscillation anomalies that were seen during commissioning and to determine how to overcome them. The simulations and tests devised to demonstrate correct operation of all onboard ACS modes and the activities in support of instrument calibration will be discussed, and the final maneuver plan performed to bring SDO on station will be shown. In addition to detailing these commissioning and anomaly resolution activities, the unique set of tests performed to characterize SDO's on-orbit jitter performance will be discussed.
Calibration of the SBUV version 8.6 ozone data product
NASA Astrophysics Data System (ADS)
DeLand, M. T.; Taylor, S. L.; Huang, L. K.; Fisher, B. L.
2012-11-01
This paper describes the calibration process for the Solar Backscatter Ultraviolet (SBUV) Version 8.6 (V8.6) ozone data product. Eight SBUV instruments have flown on NASA and NOAA satellites since 1970, and a continuous data record is available since November 1978. The accuracy of ozone trends determined from these data depends on the calibration and long-term characterization of each instrument. V8.6 calibration adjustments are determined at the radiance level, and do not rely on comparison of retrieved ozone products with other instruments. The primary SBUV instrument characterization is based on prelaunch laboratory tests and dedicated on-orbit calibration measurements. We supplement these results with "soft" calibration techniques using carefully chosen subsets of radiance data and information from the retrieval algorithm output to validate each instrument's calibration. The estimated long-term uncertainty in albedo is approximately ±0.8-1.2% (1σ) for most of the instruments. The overlap between these instruments and the Shuttle SBUV (SSBUV) data allows us to intercalibrate the SBUV instruments to produce a coherent V8.6 data set covering more than 32 yr. The estimated long-term uncertainty in albedo is less than 3% over this period.
Landsat-7 ETM+ On-Orbit Reflective-Band Radiometric Stability and Absolute Calibration
NASA Technical Reports Server (NTRS)
Markham, Brian L.; Thome, Kurtis J.; Barsi, Julia A.; Kaita, Ed; Helder, Dennis L.; Barker, John L.
2003-01-01
The Landsat-7 spacecraft carries the Enhanced Thematic Mapper Plus (ETM+) instrument. This instrument images the Earth land surface in eight parts of the electromagnetic spectrum, termed spectral bands. These spectral images are used to monitor changes in the land surface, so a consistent relationship, i.e., calibration, between the image data and the Earth surface brightness is required. The ETM+ has several on-board calibration devices that are used to monitor this calibration. The best on-board calibration source employs a flat, white-painted reference panel and has indicated changes of between 0.5% and 2% per year in the ETM+ response, depending on the spectral band. However, most of these changes are believed to be caused by changes in the reference panel, as opposed to changes in the instrument's sensitivity. This belief is based partially on on-orbit calibrations using instrumented ground sites and observations of "invariant sites", hyper-arid sites of the Sahara and Arabia. Changes determined from these data sets are 0.1%-0.6% per year. Tests and comparisons to other sensors also indicate that the uncertainty of the calibration is at the 5% level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harben, P E; Rock, D; Rodgers, A J
1999-07-23
Calibration of hydroacoustic and T-phase stations for Comprehensive Nuclear-Test-Ban Treaty (CTBT) monitoring will be an important element in establishing new operational stations and upgrading existing stations. Calibration of hydroacoustic stations is herein defined as precision location of the hydrophones and determination of the amplitude response from a known source energy. T-phase station calibration is herein defined as a determination of station site attenuation as a function of frequency, bearing, and distance for known impulsive energy sources in the ocean. To understand how to best conduct calibration experiments for both hydroacoustic and T-phase stations, an experiment was conducted in May 1999 at Ascension Island in the South Atlantic Ocean. The experiment made use of a British oceanographic research vessel and collected data that will be used for CTBT issues and for fundamental understanding of the Ascension Island volcanic edifice.
Reliably detectable flaw size for NDE methods that use calibration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
Probability of detection (POD) analysis is used in assessing reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, where calibration is used. NDE calibration standards have known-size artificial flaws, such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors, which are used to set instrument sensitivity for detection of real flaws. Real flaws such as cracks and crack-like flaws are desired to be detected using these NDE methods. A reliably detectable crack size is required for safe life analysis of fracture critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses from artificial flaws used in the calibration process to determine reliably detectable flaw size.
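A minimal "a-hat versus a" POD fit in the spirit of MIL-HDBK-1823 is sketched below (synthetic flaw sizes, responses, and decision threshold; the mh1823 software additionally provides the confidence bounds needed for an a90/95 value):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    a = np.linspace(0.2, 2.0, 30)                    # true flaw sizes (e.g. mm)
    ahat = 0.8 * a ** 1.2 * np.exp(rng.normal(0.0, 0.15, a.size))  # responses

    # Linear fit in log-log space: log(ahat) = b0 + b1*log(a) + noise.
    b1, b0, *_ = stats.linregress(np.log(a), np.log(ahat))
    resid = np.log(ahat) - (b0 + b1 * np.log(a))
    sigma = resid.std(ddof=2)

    # POD(a) = P(ahat > decision threshold); solve POD = 0.90 for a90.
    ahat_dec = 0.35
    z90 = stats.norm.ppf(0.90)
    a90 = np.exp((np.log(ahat_dec) + z90 * sigma - b0) / b1)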
Evaluation of the Long-Term Stability and Temperature Coefficient of Dew-Point Hygrometers
NASA Astrophysics Data System (ADS)
Benyon, R.; Vicente, T.; Hernández, P.; De Rivas, L.; Conde, F.
2012-09-01
The continuous quest for improved specifications of optical dew-point hygrometers has raised customer expectations on the performance of these devices. In the absence of a long calibration history, users with limited prior experience in the measurement of humidity place reliance on manufacturer specifications to estimate long-term stability. While this might be reasonable in the case of measurement of electrical quantities, in humidity it can lead to optimistic estimations of uncertainty. This article reports a study of the long-term stability of some hygrometers and the analysis of their performance as monitored through regular calibration. The results of the investigations provide some typical, realistic uncertainties associated with the long-term stability of instruments used in calibration and testing laboratories. Together, these uncertainties can help in establishing initial contributions in uncertainty budgets, as well as in setting the minimum calibration requirements, based on the evaluation of dominant influence quantities.
A Comparison of Two Balance Calibration Model Building Methods
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Ulbrich, Norbert
2007-01-01
Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.
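A minimal forward-stepwise sketch of the second building method (synthetic two-load balance data and a small candidate term pool; operational balance models search far larger pools and use statistical stopping rules rather than a fixed term count):

    import numpy as np

    def forward_stepwise(X, y, names, max_terms=3):
        # Greedily add the candidate column that most reduces the residual
        # sum of squares of an OLS fit (assumes full-rank candidate sets).
        chosen, remaining = [], list(range(X.shape[1]))
        for _ in range(max_terms):
            rss = {j: np.linalg.lstsq(X[:, chosen + [j]], y, rcond=None)[1].item()
                   for j in remaining}
            best = min(rss, key=rss.get)
            chosen.append(best)
            remaining.remove(best)
        return [names[j] for j in chosen]

    rng = np.random.default_rng(1)
    N1, N2 = rng.uniform(-1.0, 1.0, (2, 200))        # two applied loads
    y = 5.0 + 2.0 * N1 + 0.3 * N1 * N2 + rng.normal(0.0, 0.01, 200)
    terms = {"1": np.ones(200), "N1": N1, "N2": N2,
             "N1^2": N1 ** 2, "N1*N2": N1 * N2}
    print(forward_stepwise(np.column_stack(list(terms.values())), y, list(terms)))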
Method for in-situ calibration of electrophoretic analysis systems
Liu, Changsheng; Zhao, Hequan
2005-05-08
An electrophoretic system having a plurality of separation lanes is provided with an automatic calibration feature in which each lane is separately calibrated. For each lane, the calibration coefficients map a spectrum of received channel intensities onto values reflective of the relative likelihood of each of a plurality of dyes being present. Individual peaks, reflective of the influence of a single dye, are isolated from among the various sets of detected light intensity spectra, and these can be used to both detect the number of dye components present, and also to establish exemplary vectors for the calibration coefficients which may then be clustered and further processed to arrive at a calibration matrix for the system. The system of the present invention thus permits one to use different dye sets to tag DNA nucleotides in samples which migrate in separate lanes, and also allows for in-situ calibration with new, previously unused dye sets.
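The unmixing step implied by such a calibration matrix can be sketched as a small non-negative least-squares solve (hypothetical 4-channel by 4-dye matrix and intensities; a real system derives the matrix from the isolated single-dye peaks described above):

    import numpy as np
    from scipy.optimize import nnls

    # Columns: exemplar spectra of four dyes; rows: detector channels.
    C = np.array([[0.9, 0.2, 0.1, 0.0],
                  [0.1, 0.8, 0.2, 0.1],
                  [0.0, 0.1, 0.7, 0.3],
                  [0.0, 0.0, 0.2, 0.8]])
    observed = np.array([0.45, 0.50, 0.40, 0.25])   # one scan's channel intensities
    weights, resid = nnls(C, observed)              # relative amount of each dye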
NASA Astrophysics Data System (ADS)
Kaus, Rüdiger
This chapter gives the background on the accreditation of testing and calibration laboratories according to ISO/IEC 17025 and sets out the requirements of this international standard. ISO 15189 describes similar requirements especially tailored for medical laboratories. Because of these similarities ISO 15189 is not separately mentioned throughout this lecture.
A large-scale, long-term study of scale drift: The micro view and the macro view
NASA Astrophysics Data System (ADS)
He, W.; Li, S.; Kingsbury, G. G.
2016-11-01
The development of measurement scales for use across years and grades in educational settings provides unique challenges, as instructional approaches, instructional materials, and content standards all change periodically. This study examined the measurement stability of a set of Rasch measurement scales that have been in place for almost 40 years. In order to investigate the stability of these scales, item responses were collected from a large set of students who took operational adaptive tests using items calibrated to the measurement scales. For the four scales that were examined, item samples ranged from 2183 to 7923 items. Each item was administered to at least 500 students in each grade level, resulting in approximately 3000 responses per item. Stability was examined at the micro level analysing change in item parameter estimates that have occurred since the items were first calibrated. It was also examined at the macro level, involving groups of items and overall test scores for students. Results indicated that individual items had changes in their parameter estimates, which require further analysis and possible recalibration. At the same time, the results at the total score level indicate substantial stability in the measurement scales over the span of their use.
NASA Astrophysics Data System (ADS)
Cao, X.; Tian, F.; Telford, R.; Ni, J.; Xu, Q.; Chen, F.; Liu, X.; Stebich, M.; Zhao, Y.; Herzschuh, U.
2017-12-01
Pollen-based quantitative reconstructions of past climate variables are a standard palaeoclimatic approach. Despite knowing that the spatial extent of the calibration-set affects the reconstruction result, guidance is lacking as to how to determine a suitable spatial extent of the pollen-climate calibration-set. In this study, past mean annual precipitation (Pann) during the Holocene (since 11.5 cal ka BP) is reconstructed repeatedly for pollen records from Qinghai Lake (36.7°N, 100.5°E; north-east Tibetan Plateau), Gonghai Lake (38.9°N, 112.2°E; north China) and Sihailongwan Lake (42.3°N, 126.6°E; north-east China) using calibration-sets of varying spatial extents extracted from the modern pollen dataset of China and Mongolia (2559 sampling sites and 168 pollen taxa in total). Results indicate that the spatial extent of the calibration-set has a strong impact on model performance, analogue quality and reconstruction diagnostics (absolute value, range, trend, optimum). Generally, these effects are stronger with the modern analogue technique (MAT) than with weighted averaging partial least squares (WA-PLS). With respect to fossil spectra from northern China, the spatial extent of calibration-sets should be restricted to ca. 1000 km in radius, because small-scale calibration-sets (<800 km radius) will likely fail to include enough spatial variation in the modern pollen assemblages to reflect the temporal range shifts during the Holocene, while too broad a calibration-set (>1500 km radius) will include taxa with very different pollen-climate relationships. Based on our results we conclude that the optimal calibration-set should 1) cover a reasonably large spatial extent with an even distribution of modern pollen samples; 2) possess good model performance as indicated by cross-validation, high analogue quality, and excellent fit with the target fossil pollen spectra; 3) possess high taxonomic resolution; and 4) obey the modern and past distribution ranges of taxa inferred from palaeo-genetic and macrofossil studies.
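A minimal modern-analogue-technique (MAT) sketch of the reconstruction step (three toy modern sites and taxa; the study's calibration-sets contain up to 2559 sites and 168 taxa):

    import numpy as np

    def squared_chord(p, q):
        # Standard dissimilarity between pollen spectra (taxon fractions).
        return np.sum((np.sqrt(p) - np.sqrt(q)) ** 2, axis=-1)

    modern = np.array([[0.6, 0.3, 0.1],       # rows: modern sites
                       [0.2, 0.5, 0.3],       # cols: taxon fractions
                       [0.1, 0.2, 0.7]])
    pann = np.array([350.0, 520.0, 800.0])    # mm/yr at each modern site
    fossil = np.array([0.25, 0.45, 0.30])     # one fossil sample

    k = 2                                     # number of analogues to average
    d = squared_chord(modern, fossil)
    print(pann[np.argsort(d)[:k]].mean())     # reconstructed Pann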
Automatic colorimetric calibration of human wounds
2010-01-01
Background Digital photography in medicine is now considered an acceptable tool in many clinical domains, e.g. wound care. Although ever higher resolutions are available, reproducibility is still poor and visual comparison of images remains difficult. This is even more the case for measurements performed on such images (colour, area, etc.). This problem is often neglected, and images are freely compared and exchanged without further thought. Methods The first experiment checked whether camera settings or lighting conditions could negatively affect the quality of colorimetric calibration. Digital images plus a calibration chart were exposed to a variety of conditions. Precision and accuracy of colours after calibration were quantitatively assessed with a probability distribution for perceptual colour differences (dE_ab). The second experiment was designed to assess the impact of the automatic calibration procedure (i.e. chart detection) on real-world measurements. 40 different images of real wounds were acquired and a region of interest was selected in each image. 3 rotated versions of each image were automatically calibrated and colour differences were calculated. Results First experiment: colour differences between the measurements and real spectrophotometric measurements reveal median dE_ab values of 6.40 for the proper patches of calibrated normal images and 17.75 for uncalibrated images, demonstrating an important improvement in accuracy after calibration. The reproducibility, visualized by the probability distribution of the dE_ab errors between 2 measurements of the patches of the images, has a median of 3.43 dE_ab for all calibrated images and 23.26 dE_ab for all uncalibrated images. If we restrict ourselves to the proper patches of normal calibrated images, the median is only 2.58 dE_ab. Wilcoxon rank-sum tests (p < 0.05) between uncalibrated normal images and calibrated normal images with proper squares yielded p-values equal to 0, demonstrating a highly significant improvement in reproducibility. In the second experiment, the reproducibility of the chart detection during automatic calibration is presented using a probability distribution of dE_ab errors between 2 measurements of the same ROI. Conclusion The investigators proposed an automatic colour calibration algorithm that ensures reproducible colour content of digital images. Evidence was provided that images taken with commercially available digital cameras can be calibrated independently of any camera settings and illumination features. PMID:20298541
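For reference, the dE_ab values quoted above are Euclidean distances in CIELAB space (the CIE76 formula); a minimal computation with hypothetical L*a*b* values:

    import numpy as np

    def delta_e_ab(lab1, lab2):
        # CIE76 colour difference between two L*, a*, b* triples.
        return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

    measured = (52.0, 18.5, -6.0)     # patch colour from the calibrated image
    reference = (50.0, 20.0, -5.0)    # spectrophotometric ground truth
    print(delta_e_ab(measured, reference))   # ~2.7, near the reported medians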
Occupational Survey Report, Cardiopulmonary Laboratory, AFSC 4H0X1, OSSN: 2541
2004-02-01
[Table residue from the survey's task listings; recoverable task items include: set up humidifiers; instruct patients in use of incentive spirometers; obtain sputum samples; calibrate pulmonary function testing equipment; perform routine spirometry tests; perform lung diffusion tests.]
Winston, Richard B.; Shapiro, Allen M.
2007-01-01
The BAT3 Analyzer provides real-time display and interpretation of fluid pressure responses and flow rates measured during geochemical sampling, hydraulic testing, or tracer testing conducted with the Multifunction Bedrock-Aquifer Transportable Testing Tool (BAT3) (Shapiro, 2007). Real-time display of the data collected with the Multifunction BAT3 allows the user to ensure that the downhole apparatus is operating properly, and that test procedures can be modified to correct for unanticipated hydraulic responses during testing. The BAT3 Analyzer can apply calibrations to the pressure transducer and flow meter data to display physically meaningful values. Plots of the time-varying data can be formatted for a specified time interval, and either saved to files, or printed. Libraries of calibrations for the pressure transducers and flow meters can be created, updated and reloaded to facilitate the rapid set up of the software to display data collected during testing with the Multifunction BAT3. The BAT3 Analyzer also has the functionality to estimate calibrations for pressure transducers and flow meters using data collected with the Multifunction BAT3 in conjunction with corroborating check measurements. During testing with the Multifunction BAT3, and also after testing has been completed, hydraulic properties of the test interval can be estimated by comparing fluid pressure responses with model results; a variety of hydrogeologic conceptual models of the formation are available for interpreting fluid-withdrawal, fluid-injection, and slug tests.
An interactive in-game approach to user adjustment of stereoscopic 3D settings
NASA Astrophysics Data System (ADS)
Tawadrous, Mina; Hogue, Andrew; Kapralos, Bill; Collins, Karen
2013-03-01
Given the popularity of 3D film, content developers have been creating customizable stereoscopic 3D experiences for the user to enjoy at home. Stereoscopic 3D game developers often offer a `white box' approach whereby far too many controls and settings are exposed to the average consumer, who may have little knowledge of, or interest in, correctly adjusting these settings. Improper settings can leave users uncomfortable or unimpressed with their own user-defined stereoscopic 3D experience. We have begun investigating interactive approaches to in-game adjustment of the various stereoscopic 3D parameters to reduce the reliance on the user doing so and therefore create a more pleasurable stereoscopic 3D experience. In this paper, we describe a preliminary technique for interactively calibrating the various stereoscopic 3D parameters, and we compare this interface with the typical slider-based control interface game developers utilize in commercial S3D games. Inspired by standard testing methodologies experienced at an optometrist, we created a split-screen game with the same stereoscopic 3D game running in both screens, but with different interaxial distances. We expect that the interactive nature of the calibration will impact the final game experience, providing us with an indication of whether in-game, interactive S3D parameter calibration is a mechanism that game developers should adopt.
NASA Technical Reports Server (NTRS)
Angal, Amit; Mccorkel, Joel; Thome, Kurt
2016-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is formulated to determine long-term climate trends using SI-traceable measurements. The CLARREO mission will include instruments operating in the reflected solar (RS) wavelength region from 320 nm to 2300 nm. The Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS) is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO and facilitates testing and evaluation of calibration approaches. The basis of CLARREO and SOLARIS calibration is the Goddard Laser for Absolute Measurement of Response (GLAMR), which provides a radiance-based calibration at reflective solar wavelengths using continuously tunable lasers. SI-traceability is achieved via detector-based standards that, in GLAMR's case, are a set of NIST-calibrated transfer radiometers. A portable version of the SOLARIS, Suitcase SOLARIS, is used to evaluate GLAMR's calibration accuracies. The calibration of Suitcase SOLARIS using GLAMR agrees with that obtained from source-based results of the Remote Sensing Group (RSG) at the University of Arizona to better than 5% (k=2) in the 720-860 nm spectral range. The differences are within the uncertainties of the NIST-calibrated FEL lamp-based approach of RSG and give confidence that GLAMR is operating at 5% (k=2) absolute uncertainties. Limitations of the Suitcase SOLARIS instrument are also discussed, and the next edition of the SOLARIS instrument (Suitcase SOLARIS-2) is expected to provide an improved mechanism to further assess GLAMR and CLARREO calibration approaches.
Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.
2015-01-01
While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726
Development, refinement, and testing of a short term solar flare prediction algorithm
NASA Technical Reports Server (NTRS)
Smith, Jesse B., Jr.
1993-01-01
During the period included in this report, time and effort were devoted primarily to calibration and analysis of selected data sets, in furtherance of the tasks and goals set forth in the two-year research grant proposal. The heliographic limits of 30 degrees from central meridian were continued. As previously reported, all analyses are interactive and are performed by the Principal Investigator. It should also be noted that the analysis time available to the Principal Investigator during this reporting period was limited, partially due to illness and partially resulting from other uncontrollable factors. The calibration technique (as developed by MSFC solar scientists) incorporates sets of constants which vary according to the wavelength of the observation data set. One input constant is then varied interactively to correct for observing conditions, etc., so that the calibrated data yield a maximum magnetic field strength based on a separate analysis. There is some insecurity in the methodology and the selection of variables to yield the most self-consistent results for variable maximum field strengths and for variable observing/atmospheric conditions. Several data sets were analyzed using differing constant sets and separate analyses to differing maximum field strengths, toward standardizing methodology and technique for the most self-consistent results for the large number of cases. It may be necessary to recalibrate some of the analyses, but the sc analyses are retained on the optical disks and can still be used with recalibration where necessary. Only the extracted parameters will be changed.
Chander, Gyanesh; Angal, Amit; Xiong, Xiaoxiong; Helder, Dennis L.; Mishra, Nischal; Choi, Taeyoung; Wu, Aisheng
2010-01-01
Test sites are central to any future quality assurance and quality control (QA/QC) strategy. The Committee on Earth Observation Satellites (CEOS) Working Group for Calibration and Validation (WGCV) Infrared Visible Optical Sensors (IVOS) subgroup worked with collaborators around the world to establish a core set of CEOS-endorsed, globally distributed, reference standard test sites (both instrumented and pseudo-invariant) for the post-launch calibration of space-based optical imaging sensors. The pseudo-invariant calibration sites (PICS) have high reflectance and are usually made up of sand dunes with low aerosol loading and practically no vegetation. The goal of this paper is to provide a preliminary assessment of several parameters that can be used on an operational basis to compare and measure the usefulness of reference sites all over the world. The data from the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) and the Earth Observing-1 (EO-1) Hyperion sensors over the CEOS PICS were used to perform a preliminary assessment of several parameters, such as usable area, data availability, top-of-atmosphere (TOA) reflectance, at-sensor brightness temperature, spatial uniformity, temporal stability, spectral stability, and typical spectrum observed over the sites.
NASA Astrophysics Data System (ADS)
Meygret, Aimé; Santer, Richard P.; Berthelot, Béatrice
2011-10-01
The La Crau test site has been used by CNES since 1987 for vicarious calibration of the SPOT cameras. Early calibration activities were conducted during field campaigns devoted to the characterization of the atmosphere and the site reflectances. Since 1997, an automatic photometric station (ROSAS) has been installed on the site atop a 10 m pole. This station measures, at different wavelengths, the solar extinction and the sky radiances to fully characterize the optical properties of the atmosphere. It also measures the upwelling radiance over the ground to fully characterize the surface reflectance properties. The photometer samples the spectrum from 380 nm to 1600 nm with 9 narrow bands. Every non-cloudy day, the photometer automatically and sequentially performs its measurements. Data are transmitted by GSM (Global System for Mobile communications) to CNES and processed. The photometer is calibrated in situ over the sun for irradiance and cross-band calibration, and over Rayleigh scattering for the short-wavelength radiance calibration. The data are processed by operational software which calibrates the photometer, estimates the atmosphere properties, computes the bidirectional reflectance distribution function of the site, then simulates the top-of-atmosphere radiance seen by any sensor overpassing the site and calibrates that sensor. This paper describes the instrument, its measurement protocol, and its calibration principle. Calibration results are discussed and compared to laboratory calibration. It details the surface reflectance characterization and presents SPOT4 calibration results deduced from the estimated TOA radiance. The results are compared to the official calibration.
Meijer, Piet; Kynde, Karin; van den Besselaar, Antonius M H P; Van Blerk, Marjan; Woods, Timothy A L
2018-04-12
This study was designed to obtain an overview of the analytical quality of the prothrombin time, reported as international normalized ratio (INR) and to assess the variation of INR results between European laboratories, the difference between Quick-type and Owren-type methods and the effect of using local INR calibration or not. In addition, we assessed the variation in INR results obtained for a single donation in comparison with a pool of several plasmas. A set of four different lyophilized plasma samples were distributed via national EQA organizations to participating laboratories for INR measurement. Between-laboratory variation was lower in the Owren group than in the Quick group (on average: 6.7% vs. 8.1%, respectively). Differences in the mean INR value between the Owren and Quick group were relatively small (<0.20 INR). Between-laboratory variation was lower after local INR calibration (CV: 6.7% vs. 8.6%). For laboratories performing local calibration, the between-laboratory variation was quite similar for the Owren and Quick group (on average: 6.5% and 6.7%, respectively). Clinically significant differences in INR results (difference in INR>0.5) were observed between different reagents. No systematic significant differences in the between-laboratory variation for a single-plasma sample and a pooled plasma sample were observed. The comparability for laboratories using local calibration of their thromboplastin reagent is better than for laboratories not performing local calibration. Implementing local calibration is strongly recommended for the measurement of INR.
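For context, the INR is the patient-to-normal prothrombin time ratio raised to the reagent's International Sensitivity Index (ISI), which is precisely what local calibration refines; a minimal computation with illustrative values:

    def inr(pt_seconds, mnpt_seconds, isi):
        # INR = (patient PT / mean normal PT) ** ISI of the thromboplastin reagent.
        return (pt_seconds / mnpt_seconds) ** isi

    print(inr(pt_seconds=24.0, mnpt_seconds=12.0, isi=1.1))   # ~2.14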
NASA Astrophysics Data System (ADS)
He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Kalashnikova, Olga; Omorova, Elvira; Merz, Bruno
2018-03-01
This study refines the method for calibrating a glacio-hydrological model based on Hydrograph Partitioning Curves (HPCs), and evaluates its value in comparison to multidata set optimization approaches which use glacier mass balance, satellite snow cover images, and discharge. The HPCs are extracted from the observed flow hydrograph using catchment precipitation and temperature gradients. They indicate the periods when the various runoff processes, such as glacier melt or snow melt, dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas are used to identify the starting and end dates of snow and glacier ablation periods. Model parameters characterizing different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that the HPC-based method (1) delivers model-internal consistency comparably to the tri-data set calibration method; (2) improves the stability of calibrated parameter values across various calibration periods; and (3) estimates the contributions of runoff components similarly to the tri-data set calibration method. Our findings indicate the potential of the HPC-based approach as an alternative for hydrological model calibration in glacierized basins where other calibration data sets than discharge are often not available or very costly to obtain.
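A sketch of the two cumulative curves used to delimit the ablation period (synthetic daily forcing; the 0 degC melt threshold and the onset heuristic are assumptions for illustration):

    import numpy as np

    days = np.arange(365)
    t_mean = 10.0 * np.sin(2 * np.pi * days / 365 - np.pi / 2) + 2.0   # deg C
    precip = np.full(365, 2.0)                                         # mm/day
    T_MELT = 0.0

    # Annual cumulative (T - T_melt): its minimum marks the switch to melt.
    cum_melt_energy = np.cumsum(t_mean - T_MELT)
    # Cumulative snowfall on the glacierized area: flattens once ablation dominates.
    snowfall = np.where(t_mean < T_MELT, precip, 0.0)
    cum_snowfall = np.cumsum(snowfall)

    melt_onset_day = int(np.argmin(cum_melt_energy))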
Kwicklis, Edward M.; Wolfsberg, Andrew V.; Stauffer, Philip H.; Walvoord, Michelle Ann; Sully, Michael J.
2006-01-01
Multiphase, multicomponent numerical models of long-term unsaturated-zone liquid and vapor movement were created for a thick alluvial basin at the Nevada Test Site to predict present-day liquid and vapor fluxes. The numerical models are based on recently developed conceptual models of unsaturated-zone moisture movement in thick alluvium that explain present-day water potential and tracer profiles in terms of major climate and vegetation transitions that have occurred during the past 10 000 yr or more. The numerical models were calibrated using borehole hydrologic and environmental tracer data available from a low-level radioactive waste management site located in a former nuclear weapons testing area. The environmental tracer data used in the model calibration include tracers that migrate in both the liquid and vapor phases (δD, δ18O) and tracers that migrate solely as dissolved solutes (Cl), thus enabling the estimation of some gas-phase as well as liquid-phase transport parameters. Parameter uncertainties and correlations identified during model calibration were used to generate parameter combinations for a set of Monte Carlo simulations to more fully characterize the uncertainty in liquid and vapor fluxes. The calculated background liquid and vapor fluxes decrease as the estimated time since the transition to the present-day arid climate increases. However, on the whole, the estimated fluxes display relatively little variability because correlations among parameters tend to create parameter sets for which changes in some parameters offset the effects of others in the set. Independent estimates of the timing of the climate transition established from packrat midden data were essential for constraining the model calibration results. The study demonstrates the utility of environmental tracer data in developing numerical models of liquid- and gas-phase moisture movement and the importance of considering parameter correlations when using Monte Carlo analysis to characterize the uncertainty in moisture fluxes. © Soil Science Society of America.
In-Situ Cameras for Radiometric Correction of Remotely Sensed Data
NASA Astrophysics Data System (ADS)
Kautz, Jess S.
The atmosphere distorts the spectrum of remotely sensed data, negatively affecting all forms of investigating Earth's surface. To gather reliable data, it is vital that atmospheric corrections are accurate. The current state of the field of atmospheric correction does not account well for the benefits and costs of different correction algorithms, and ground spectral data are required to evaluate these algorithms better. This dissertation explores using cameras as radiometers as a means of gathering ground spectral data. I introduce techniques to implement a camera system for atmospheric correction using off-the-shelf parts. To aid the design of future camera systems for radiometric correction, methods for estimating the system error prior to construction, and for calibration and testing of the resulting camera system, are explored. Simulations are used to investigate the relationship between the reflectance accuracy of the camera system and the quality of atmospheric correction. In the design phase, read noise and filter choice are found to be the strongest sources of system error. I explain the calibration methods for the camera system, showing the problems of pixel-to-angle calibration and of adapting the web camera for scientific work. The camera system is tested in the field to estimate its ability to recover directional reflectance from BRF data. I estimate the error in the system due to the experimental set-up, then explore how the system error changes with different cameras, environmental set-ups, and inversions. With these experiments, I learn about the importance of the dynamic range of the camera and the input ranges used for the PROSAIL inversion. Evidence that the camera can perform within the specification set for ELM correction in this dissertation is evaluated. The analysis is concluded by simulating an ELM correction of a scene using various numbers of calibration targets and levels of system error, to find the number of cameras needed for a full-scale implementation.
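For context, the ELM (empirical line method) correction evaluated here reduces, per band, to a linear fit from image digital numbers to known target reflectances; a minimal sketch with two hypothetical calibration targets:

    import numpy as np

    dn = np.array([23.0, 182.0])       # dark and bright target digital numbers
    refl = np.array([0.04, 0.48])      # their field-measured reflectances

    gain, offset = np.polyfit(dn, refl, 1)     # empirical line for this band
    scene_dn = np.array([50.0, 90.0, 140.0])
    scene_refl = gain * scene_dn + offset      # estimated surface reflectance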
Hoffmann, Uwe; Pfeifer, Frank; Hsuing, Chang; Siesler, Heinz W
2016-05-01
The aim of this contribution is to demonstrate the transfer of spectra that have been measured on two different laboratory Fourier transform near-infrared (FT-NIR) spectrometers to the format of a handheld instrument by measuring only a few samples with both spectrometer types. Despite the extreme differences in spectral range and resolution, spectral data sets that have been collected, and the quantitative and qualitative calibrations developed from them over a long period on a laboratory instrument, can thus be conveniently transferred to the handheld system, minimizing the need to prepare completely new calibration samples and the effort required to develop calibration models when changing hardware platforms. The enabling procedure is based on piecewise direct standardization (PDS) and is described for the data sets of a quantitative and a qualitative application case study. For this purpose the spectra measured on the FT-NIR laboratory spectrometers were used as "master" data and transferred to the "target" format of the handheld instrument. The quantitative test study refers to transmission spectra of three-component liquid solvent mixtures, whereas the qualitative application example encompasses diffuse reflection spectra of six different current polymers. To prove the performance of the transfer procedure for quantitative applications, partial least squares (PLS-1) calibrations were developed for the individual components of the solvent mixtures with spectra transferred from the master to the target instrument, and the cross-validation parameters were compared with the corresponding parameters obtained for spectra measured on the master and target instruments, respectively. To test the retention of the discrimination ability of the transferred polymer spectra sets, principal component analyses (PCAs) were applied exemplarily to three of the six investigated polymers, and their identification was demonstrated by Mahalanobis distance plots for all polymers. © The Author(s) 2016.
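A compact PDS sketch (toy spectra with equal channel counts on both instruments for brevity; a real transfer maps the handheld's much coarser grid and uses only a handful of standardization samples measured on both units):

    import numpy as np

    def pds_transfer_matrix(target_specs, master_specs, half_window=2):
        # Each master channel is predicted from a small window of target
        # channels by least squares; the fits assemble into one matrix F.
        n_t, n_m = target_specs.shape[1], master_specs.shape[1]
        F = np.zeros((n_t, n_m))
        for j in range(n_m):
            lo, hi = max(0, j - half_window), min(n_t, j + half_window + 1)
            b, *_ = np.linalg.lstsq(target_specs[:, lo:hi],
                                    master_specs[:, j], rcond=None)
            F[lo:hi, j] = b
        return F

    rng = np.random.default_rng(0)
    master = rng.random((10, 50))                     # standardization samples
    target = master + 0.05 * rng.random((10, 50))     # same samples, other unit
    F = pds_transfer_matrix(target, master)
    master_estimate = target @ F    # target spectra mapped to the master format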
A rotorcraft flight database for validation of vision-based ranging algorithms
NASA Technical Reports Server (NTRS)
Smith, Phillip N.
1992-01-01
A helicopter flight test experiment was conducted at the NASA Ames Research Center to obtain a database consisting of video imagery and accurate measurements of camera motion, camera calibration parameters, and true range information. The database was developed to allow verification of monocular passive range estimation algorithms for use in the autonomous navigation of rotorcraft during low altitude flight. The helicopter flight experiment is briefly described. Four data sets representative of the different helicopter maneuvers and the visual scenery encountered during the flight test are presented. These data sets will be made available to researchers in the computer vision community.
NASA Astrophysics Data System (ADS)
Fu, X.; Hu, L.; Lee, K. M.; Zou, J.; Ruan, X. D.; Yang, H. Y.
2010-10-01
This paper presents a method for dry calibration of an electromagnetic flowmeter (EMF). This method, which determines the voltage induced in the EMF as conductive liquid flows through a magnetic field, numerically solves a coupled set of multiphysical equations with measured boundary conditions for the magnetic, electric, and flow fields in the measuring pipe of the flowmeter. Specifically, this paper details the formulation of dry calibration and an efficient algorithm for computing the sensitivity of the EMF that adaptively minimizes the number of measurements and requires only the normal component of the magnetic flux density as boundary conditions on the pipe surface to reconstruct the magnetic field involved. Along with an in-depth discussion of factors that could significantly affect the final precision of a dry-calibrated EMF, the effects of flow disturbance on measuring errors have been experimentally studied by installing a baffle at the inflow port of the EMF. Results of the dry calibration on an actual EMF were compared against flow-rig calibration; excellent agreement (within 0.3%) between dry calibration and flow-rig tests verifies the multiphysical computation of the fields and the robustness of the method. Since it requires no actual flow, dry calibration is particularly useful for calibrating large-diameter EMFs, where conventional flow-rig methods are often costly and difficult to implement.
Hyer, D; Mart, C
2012-06-01
The aim of this study was to develop a phantom and analysis software that could be used to quickly and accurately determine the location of the radiation isocenter using the Electronic Portal Imaging Device (EPID). The phantom could then be used as a static reference point for performing other tests, including radiation vs. light field coincidence, MLC and jaw strip tests, and Varian Optical Guidance Platform (OGP) calibration. The proposed solution uses a collimator setting of 10×10 cm to acquire EPID images of the new phantom, constructed from LEGO® blocks. Images from a number of gantry and collimator angles are analyzed by the software to determine the position of the jaws and the center of the phantom in each image. The distance between a chosen jaw and the phantom center is then compared to the same distance measured after a 180 degree collimator rotation to determine whether the phantom is centered in the dimension being investigated. The accuracy of the algorithm's measurements was verified by independent measurement to be approximately equal to the detector's pitch. Light versus radiation field as well as MLC and jaw strip tests are performed using measurements based on the phantom center once located at the radiation isocenter. Reproducibility tests show that the algorithm's results are objectively repeatable. Additionally, the phantom and software are completely independent of linac vendor, and this study presents results from two major linac manufacturers. An OGP calibration array was also integrated into the phantom to allow calibration of the OGP while the phantom is positioned at the radiation isocenter, reducing the setup uncertainty contained in the calibration. This solution offers a quick, objective method to perform isocenter localization as well as laser alignment, OGP calibration, and other tests on a monthly basis. © 2012 American Association of Physicists in Medicine.
Luo, Yu; Li, Wen-Long; Huang, Wen-Hua; Liu, Xue-Hua; Song, Yan-Gang; Qu, Hai-Bin
2017-05-01
A near infrared spectroscopy (NIRS) approach was established for quality control of the alcohol precipitation liquid in the manufacture of Codonopsis Radix. By applying NIRS with multivariate analysis, it was possible to build variation into the calibration sample set: the Plackett-Burman design, the Box-Behnken design, and a concentrating-diluting method were used to obtain a sample set covering sufficient fluctuation of process parameters and extended concentration information. NIR data were calibrated to predict the four quality indicators using partial least squares regression (PLSR). In the four calibration models, the root mean square errors of prediction (RMSEPs) were 1.22 μg/ml, 10.5 μg/ml, 1.43 μg/ml, and 0.433% for lobetyolin, total flavonoids, pigments, and total solid contents, respectively. The results indicated that multi-component quantification of the alcohol precipitation liquid of Codonopsis Radix can be achieved with an NIRS-based method, which offers a useful tool for real-time release testing (RTRT) of intermediates in the manufacture of Codonopsis Radix.
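A minimal PLSR calibration sketch of the kind described (synthetic spectra and a toy analyte via scikit-learn; the study's four models predict lobetyolin, total flavonoids, pigments, and total solids):

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.random((120, 200))           # absorbance at 200 wavelengths
    y = 3.0 * X[:, 50] + X[:, 120] + rng.normal(0.0, 0.02, 120)   # toy analyte

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
    rmsep = float(np.sqrt(np.mean((pls.predict(X_te).ravel() - y_te) ** 2)))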
Digital dental photography. Part 6: camera settings.
Ahmad, I
2009-07-25
Once the appropriate camera and equipment have been purchased, the next considerations involve setting up and calibrating the equipment. This article provides details regarding depth of field, exposure, colour spaces and white balance calibration, concluding with a synopsis of camera settings for a standard dental set-up.
Improvement of the GERDA Ge Detectors Energy Resolution by an Optimized Digital Signal Processing
NASA Astrophysics Data System (ADS)
Benato, G.; D'Andrea, V.; Cattadori, C.; Riboldi, S.
GERDA is a new-generation experiment searching for neutrinoless double beta decay of 76Ge, operating at the INFN Gran Sasso Laboratories (LNGS) since 2010. Coaxial and Broad Energy Germanium (BEGe) detectors were operated in liquid argon (LAr) in GERDA Phase I. In the framework of the second GERDA experimental phase, the contacting technique and the connection to and location of the front-end readout devices are novel compared to those previously adopted, and several tests have been performed. In this work, starting from considerations on the energy scale stability of the GERDA Phase I calibration and physics data sets, an optimized pulse filtering method has been developed and applied to the Phase II pilot test data sets and to a few GERDA Phase I data sets. In this contribution the detector performances in terms of energy resolution and time stability are presented. The improvement of the energy resolution, compared to the standard Gaussian shaping adopted for Phase I data analysis, is discussed and related to the optimized noise filtering capability. The result is an energy resolution better than 0.1% at 2.6 MeV for the BEGe detectors operated in the Phase II pilot tests, and an improvement of about 8% in the energy resolution in LAr achieved on the GERDA Phase I calibration runs, compared to previous analysis algorithms.
GOES-12 SXI Operational Calibration
NASA Astrophysics Data System (ADS)
Pizzo, V. J.; Hill, S. M.; Balch, C.
2002-12-01
The prototype Solar X-ray Imager (SXI) was lofted into orbit aboard the NOAA GOES-12 spacecraft on 23 July 2001. The results of pre-launch ground-based optical tests have been combined with an extensive set of imagery taken during the post-launch checkout period from late August through mid December 2001 to establish an operational calibration for the full instrument performance. Although the nickel-coated mirror is a conventional Wolter-I grazing incidence optic, the detector consists of an MCP-enhanced CCD configuration not previously used for direct solar imaging. A full set of calibration data for each optical component (mirror, filters, detector) as well as for net system throughput have been derived and are available on the SXI website (http://sec.noaa.gov/sxi/ScienceUserGuide.html). In addition, a wide variety of information on instrument spatial resolution, point-spread function, dynamic range, photon statistics, and gain dependence (related to voltage settings for the MCP) have been derived. An improved background correction has been developed and applied to the recent release of the post-launch data now publicly available in FITS format. Special instrument topics including issues related to solar pointing and image timing aboard a geo-synchronous platform, CCD blooming properties, detector flat-field effects, and response to SEP events are also detailed.
A real-time freehand ultrasound calibration system with automatic accuracy feedback and control.
Chen, Thomas Kuiran; Thurston, Adrian D; Ellis, Randy E; Abolmaesumi, Purang
2009-01-01
This article describes a fully automatic, real-time, freehand ultrasound calibration system. The system was designed to be simple and sterilizable, intended for operating-room usage. The calibration system employed an automatic-error-retrieval and accuracy-control mechanism based on a set of ground-truth data. Extensive validations were conducted on a data set of 10,000 images in 50 independent calibration trials to thoroughly investigate the accuracy, robustness, and performance of the calibration system. On average, the calibration accuracy (measured in three-dimensional reconstruction error against a known ground truth) of all 50 trials was 0.66 mm. In addition, the calibration errors converged to submillimeter in 98% of all trials within 12.5 s on average. Overall, the calibration system was able to consistently, efficiently and robustly achieve high calibration accuracy with real-time performance.
2006 Interferometry Imaging Beauty Contest
NASA Technical Reports Server (NTRS)
Lawson, Peter R.; Cotton, William D.; Hummel, Christian A.; Ireland, Michael; Monnier, John D.; Thiebaut, Eric; Rengaswamy, Sridharan; Baron, Fabien; Young, John S.; Kraus, Stefan;
2006-01-01
We present a formal comparison of the performance of algorithms used for synthesis imaging with optical/infrared long-baseline interferometers. Five different algorithms are evaluated based on their performance with simulated test data. Each set of test data is formatted in the OI-FITS format. The data are calibrated power spectra and bispectra measured with an array intended to be typical of existing imaging interferometers. The strengths and limitations of each algorithm are discussed.
21 CFR 882.1925 - Ultrasonic scanner calibration test block.
Code of Federal Regulations, 2010 CFR
2010-04-01
21 CFR 882.1925 - Ultrasonic scanner calibration test block. (a) Identification. An ultrasonic scanner calibration test block is a block of material with known properties used to calibrate ultrasonic scanning devices (e.g., the...
Base flow calibration in a global hydrological model
NASA Astrophysics Data System (ADS)
van Beek, L. P.; Bierkens, M. F.
2006-12-01
Base flow constitutes an important water resource in many parts of the world. Its provenance and yield over time are governed by the storage capacity of local aquifers and the internal drainage paths, which are difficult to capture at the global scale. To represent the spatial and temporal variability in base flow adequately in a distributed global model at 0.5 degree resolution, we resorted to the conceptual model of aquifer storage of Kraaijenhoff-van de Leur (1958), which yields the reservoir coefficient for a linear groundwater store. This model was parameterised using global information on drainage density, climatology, and lithology. Initial estimates of aquifer thickness, permeability, and specific porosity from the literature were linked to the latter two categories and calibrated to low-flow data by means of simulated annealing so as to conserve the ordinal information contained in them. The observations used stem from the RivDis dataset of monthly discharge. From this dataset 324 stations were selected with at least 10 years of observations in the period 1958-1991 and an areal coverage of at least 10 cells of 0.5 degree. The dataset was split between basins into a calibration and a validation set whilst preserving a representative distribution of lithology types and climate zones. Optimisation involved minimising the absolute differences between the simulated base flow and the lowest 10% of the observed monthly discharge. Subsequently, the reliability of the calibrated parameters was tested by reversing the calibration and validation sets.
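The linear groundwater store at the core of this setup can be sketched in a few lines (synthetic recharge series; the reservoir coefficient j stands in for the Kraaijenhoff-van de Leur parameterization derived from aquifer properties and drainage density):

    import numpy as np

    def linear_store(recharge, j_days, dt=1.0, s0=0.0):
        # Outflow proportional to storage: Q = S/j, dS/dt = R - Q.
        s, q = s0, []
        for r in recharge:
            q_t = s / j_days
            s += (r - q_t) * dt
            q.append(q_t)
        return np.array(q)

    recharge = np.r_[np.full(100, 2.0), np.zeros(265)]   # mm/day: wet, then dry
    baseflow = linear_store(recharge, j_days=60.0)       # ~60-day e-folding recession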
Barsi, Alpar; Jager, Tjalling; Collinet, Marc; Lagadic, Laurent; Ducrot, Virginie
2014-07-01
Toxicokinetic-toxicodynamic (TKTD) modeling offers many advantages in the analysis of ecotoxicity test data. Calibration of TKTD models, however, places different demands on test design compared with classical concentration-response approaches. In the present study, useful complementary information is provided regarding test design for TKTD modeling. A case study is presented for the pond snail Lymnaea stagnalis exposed to the narcotic compound acetone, in which the data on all endpoints were analyzed together using a relatively simple TKTD model called DEBkiss. Furthermore, the influence of the data used for calibration on accuracy and precision of model parameters is discussed. The DEBkiss model described toxic effects on survival, growth, and reproduction over time well, within a single integrated analysis. Regarding the parameter estimates (e.g., no-effect concentration), precision rather than accuracy was affected depending on which data set was used for model calibration. In addition, the present study shows that the intrinsic sensitivity of snails to acetone stays the same across different life stages, including the embryonic stage. In fact, the data on egg development allowed for selection of a unique metabolic mode of action for the toxicant. Practical and theoretical considerations for test design to accommodate TKTD modeling are discussed in the hope that this information will aid other researchers to make the best possible use of their test animals. © 2014 SETAC.
1. VIEW NORTHEAST, LEFT TO RIGHT COLD CALIBRATION TEST STAND ...
1. VIEW NORTHEAST, LEFT TO RIGHT: COLD CALIBRATION TEST STAND, COLD CALIBRATION BLOCKHOUSE IN FOREGROUND. - Marshall Space Flight Center, East Test Area, Cold Calibration Test Stand, Huntsville, Madison County, AL
Robust camera calibration for sport videos using court models
NASA Astrophysics Data System (ADS)
Farin, Dirk; Krabbe, Susanne; de With, Peter H. N.; Effelsberg, Wolfgang
2003-12-01
We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates, or vice versa. Our algorithm uses a model of the arrangement of court lines for calibration; since the court model can be specified by the user, the algorithm can be applied to a variety of different sports. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a priori knowledge about the most probable position. Image pixels are classified as court-line pixels if they pass several tests, including color and local texture constraints. A Hough transform is applied to extract line elements, forming a set of court-line candidates. A subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For succeeding input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes them using a gradient-descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, and shadows.
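A minimal sketch of the line-extraction stage only, assuming OpenCV; the brightness threshold and Hough parameters are illustrative, and the model initialization, combinatorial matching, and gradient-descent refinement stages described above are omitted.

```python
# Illustrative court-line candidate extraction: a simple color (brightness)
# test followed by a probabilistic Hough transform.
import cv2
import numpy as np

def court_line_candidates(frame_bgr, white_thresh=180):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Color constraint: court lines are assumed bright (near-white) pixels.
    mask = cv2.inRange(gray, white_thresh, 255)
    # Probabilistic Hough transform returns segments as (x1, y1, x2, y2).
    segments = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180,
                               threshold=80, minLineLength=60, maxLineGap=10)
    return [] if segments is None else [tuple(s[0]) for s in segments]
```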
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban
2018-05-01
We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.
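The following toy sketch illustrates the emulator idea in one output dimension, assuming scikit-learn; the linear stand-in for the simulator response and all numbers are invented, and the actual study emulates full stress-time curves rather than a scalar.

```python
# Minimal emulator-based calibration sketch: fit a Gaussian-process surrogate
# to "training" simulation runs, then score candidate inputs against the
# experimental observation, combining emulator and experimental uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X_train = rng.uniform(0.0, 1.0, size=(40, 3))       # normalized toy inputs
y_train = (2.0 * X_train[:, 0] + 0.5 * X_train[:, 1]
           - X_train[:, 2] + rng.normal(0, 0.02, 40))  # stand-in scalar output

emulator = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X_train, y_train)

y_exp, sigma_exp = 1.3, 0.05                        # "experiment" and its uncertainty
candidates = rng.uniform(0.0, 1.0, size=(20000, 3))
mu, sd = emulator.predict(candidates, return_std=True)
# Unnormalized Gaussian likelihood; the spread combines both error sources.
like = np.exp(-0.5 * (mu - y_exp) ** 2 / (sd ** 2 + sigma_exp ** 2))
print("posterior-mode calibration (toy):", candidates[np.argmax(like)])
```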
NASA Astrophysics Data System (ADS)
Wuest, Martin; Robinson, David W.; Decoste, Dennis
Calibration is defined as a set of operations that establish, under specified conditions, the relationship between the values of quantities indicated by a measuring instrument or measuring system and the corresponding values realized by standards. Calibrating an instrument means determining by how much the instrument reading is in error by checking it against a measurement standard of known error. Space physics particle instrumentation needs to be calibrated on the ground and in flight to ensure that the data can be properly interpreted. On the ground, calibration is performed by exposing the instrument to a well-characterized incident particle beam. Not only should the nominal range of parameters the instrument is designed to measure be calibrated, but the instrument should also be exposed to out-of-band conditions such as higher energies, angles outside the nominal field of view, and ultraviolet radiation to test susceptibility. There are several challenges to laboratory calibration on the ground. The beam must be well characterized in energy, angle, mass and position; the particle flux must be uniform over the whole aperture area of the instrument to be calibrated; and the beam must be very stable in time and space. One difficulty is that, in order to measure the incident particle flux, the beam monitor is placed upstream in front of the instrument, thereby blocking the incident beam and interrupting detection by the device under test. A beam monitor placed outside the field of view of the instrument to be calibrated often sits at the fringes of the beam, where the beam is not very stable. This effectively prevents measuring the same beam with a trusted reference detector and the instrument under test at the same time. Furthermore, highly sensitive instruments are calibrated at flux levels too low to be detected with stable Faraday cup detectors. Present-day windowless electron multiplier detectors can measure these low flux levels, but they degrade with contamination and the amount of extracted charge and are therefore not very stable reference detectors. This makes it difficult to obtain a reliable absolute calibration traceable to a national measurement institute. Calibration remains a time-consuming process. It involves testing the instrument at the component, subsystem and integrated levels. It is important that the instrument is operated not only in a special calibration configuration to save time, but also in its full flight configuration, exercising the full path of the data through data compression and telemetry. Very seldom is there enough time available to calibrate all the desired points in parameter space; usually only a subset can be calibrated for schedule and economic reasons. The number of calibration points is often further reduced when the available calibration time is cut by development schedule slips and a fixed launch date. This increases the uncertainties, as more parameters have to be interpolated or extrapolated. Calibration data should preferably be evaluated in near-real time to avoid losing valuable calibration time if something in the instrument or facility is not working properly. Computer simulation models should be used to obtain a thorough understanding of the actual flight instrument. In flight, the instrument performance degrades due to contamination (outgassing), environmental effects (atomic oxygen, radiation) or aging.
One of the most sensitive parts of today's instruments is their detectors. Microchannel plate detectors degrade as a function of the extracted charge. Solid-state detectors suffer radiation damage, which increases their noise and raises the lower energy detection threshold. The goal of in-flight calibration is to determine this instrument degradation. Calibration is then performed by comparing measurements taken with different bias voltage or discriminator threshold settings. Where possible, the instrument data are compared with other sensors covering the same measurand, or at least part of it, on the same or a different spacecraft. In-flight calibration is not easy, as no absolute calibration standard for particles exists in space, and measuring the same physical quantity with two different spacecraft under the same environmental conditions is very challenging.
New STS-1 Electronics: Development and Test Results
NASA Astrophysics Data System (ADS)
Uhrhammer, R. A.; Karavas, B.; Friday, J.; Vanzandt, T.; Hutt, C. R.; Wielandt, E.; Romanowicz, B.
2007-12-01
The STS-1 seismometer is currently the principal very broad-band (VBB) seismometer used in global and regional seismic networks operated by members of the Federation of Digital Broad-Band Seismograph Networks (FDSN), and it is widely viewed as the finest VBB sensor in the world. Unfortunately, many of the STS-1s, which were manufactured and installed 10-20 years ago, are encountering both operational failures and age-related degradation. This problem is exacerbated by the fact that the sensors are no longer being produced or supported by the original manufacturer, G. Streckeisen AG. As a first step towards assuring continued high quality of VBB data for decades to come, we have developed and tested new electronics and methods for mechanical repair for the STS-1 very broadband seismometer. This is a collaborative project with Tom VanZandt of Metrozet, LLC (Redondo Beach, CA) and Erhard Wielandt (original designer of the STS-1), and has been funded by a grant from NSF through the IRIS/GSN program. A primary goal of this effort was to develop a fully tested, modern electronics module that is a drop-in replacement for the original electronics. The new design addresses the environmental packaging problems that have led to operational degradation and failures in the existing instruments, and the effort also provided the opportunity to implement a set of electronic improvements that make the installation and operation of the sensors more efficient. Metrozet developed the first prototype electronics for the STS-1, while the BSL engineering staff constructed a test-bed at the Byerly Vault (BKS) and developed the capability to simultaneously test 6-8 STS-1 components. BSL staff then tested successive versions of the electronics. The first-generation prototype did not include centering or calibration functionality; the second generation added remote centering as well as calibration functions. After some observations and refinements, this generation of electronics was operated on two seismometers concurrently and successfully run through swept-sine and step calibration functions on four seismometers. During this final phase, the Metrozet electronics included the ability to initiate and operate the calibrations via a network (Ethernet) connection. Most of the calibration testing was performed remotely from Metrozet's Southern California office over the BSL network: Metrozet was able to log into the Berkeley network remotely, establish a connection to the test bed in the Byerly seismic vault, and initiate control of the seismometer, including remote centering and calibration functions. Finally, after the BSL tests were completed and the development appeared complete and satisfactory, the new electronics were tested at the Albuquerque Seismological Laboratory's seismic vault, which is located in a quieter environment than BKS. The new electronics package was also field-tested at the BDSN broadband station HOPS. We present detailed results of the calibrations.
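As a generic illustration of what a swept-sine calibration can recover (this is not the BSL/Metrozet procedure), the sketch below drives a stand-in sensor with a logarithmic chirp and estimates its frequency response with the standard H1 cross-spectral estimator; all parameters are invented.

```python
# Generic swept-sine response estimation: chirp drive -> H1 transfer function.
import numpy as np
from scipy.signal import chirp, csd, welch

fs = 200.0                                     # sample rate (Hz)
t = np.arange(0, 600, 1 / fs)                  # 10-minute sweep
drive = chirp(t, f0=0.01, f1=10.0, t1=t[-1], method="logarithmic")
response = np.convolve(drive, np.ones(25) / 25, mode="same")  # stand-in sensor

f, p_dr = csd(drive, response, fs=fs, nperseg=4096)   # cross-spectral density
_, p_dd = welch(drive, fs=fs, nperseg=4096)           # drive auto-spectrum
h = p_dr / p_dd                                       # H1 estimate of response
print("gain at ~1 Hz:", np.abs(h[np.argmin(np.abs(f - 1.0))]))
```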
NASA Astrophysics Data System (ADS)
Campbell, J. L.; Lee, M.; Jones, B. N.; Andrushenko, S. M.; Holmes, N. G.; Maxwell, J. A.; Taylor, S. M.
2009-04-01
The detection sensitivities of the Alpha Particle X-ray Spectrometer (APXS) instruments on the Mars Exploration Rovers for a wide range of elements were experimentally determined in 2002 using spectra of geochemical reference materials. A flight spare instrument was similarly calibrated, and the calibration exercise was then continued for this unit with an extended set of geochemical reference materials together with pure elements and simple chemical compounds. The flight spare instrument data are examined in detail here using a newly developed fundamental parameters approach which takes precise account of all the physics inherent in the two X-ray generation techniques involved, namely, X-ray fluorescence and particle-induced X-ray emission. The objectives are to characterize the instrument as fully as possible, to test this new approach, and to determine the accuracy of calibration for major, minor, and trace elements. For some of the lightest elements the resulting calibration exhibits a dependence upon the mineral assemblage of the geological reference material; explanations are suggested for these observations. The results will assist in designing the overall calibration approach for the APXS on the Mars Science Laboratory mission.
40 CFR 91.315 - Analyzer initial calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... in § 91.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers and record the values. The same gas flow rates shall be used as when sampling exhaust. (d) Rechecking of zero...
40 CFR 91.315 - Analyzer initial calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... in § 91.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers and record the values. The same gas flow rates shall be used as when sampling exhaust. (d) Rechecking of zero...
40 CFR 91.315 - Analyzer initial calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... in § 91.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers and record the values. The same gas flow rates shall be used as when sampling exhaust. (d) Rechecking of zero...
40 CFR 91.315 - Analyzer initial calibration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... in § 91.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers and record the values. The same gas flow rates shall be used as when sampling exhaust. (d) Rechecking of zero...
40 CFR 91.315 - Analyzer initial calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... in § 91.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers and record the values. The same gas flow rates shall be used as when sampling exhaust. (d) Rechecking of zero...
First Results of Field Absolute Calibration of the GPS Receiver Antenna at Wuhan University
Hu, Zhigang; Zhao, Qile; Chen, Guo; Wang, Guangxing; Dai, Zhiqiang; Li, Tao
2015-01-01
GNSS receiver antenna phase center variations (PCVs), which arise from the non-spherical phase response of GNSS signals, have to be well corrected for high-precision GNSS applications. Without a precise antenna phase center correction (PCC) model, the estimated position of a station monument can be biased by up to several centimeters. The Chinese large-scale research project “Crustal Movement Observation Network of China” (CMONOC), which requires high-precision positions from a comprehensive GPS observational network, motivated the establishment of an absolute field calibration facility for GPS receiver antennas at Wuhan University. In this paper the calibration facilities are first introduced, and the multipath elimination and PCV estimation strategies currently used are then elaborated. Finally, the estimated PCVs of a test antenna are validated against the International GNSS Service (IGS) type values. Example calibrations of a TRM57971.00 NONE antenna from our facility demonstrate that the derived PCVs and the IGS type mean values agree at the 1 mm level. PMID:26580616
Yang, Jun; Fan, Shangchun; Li, Cheng; Guo, Zhanshe; Li, Bo; Shi, Bo
2016-12-01
A new method based on laser interferometry is used to enhance the traceability of sinusoidal pressure calibration in water. The laser vibrometer measures the dynamic pressure through the acousto-optic effect. The relation of the refractive index of water and the optical path length to the pressure change is built on the Lorentz-Lorenz equation, and the conversion coefficients are tested by static calibration in situ. A device with a piezoelectric transducer and a water-filled resonant pressure pipe is set up to generate sinusoidal pressure up to 20 kHz. With the conversion coefficients, the reference sinusoidal pressure is measured by the laser interferometer for the dynamic calibration of pressure sensors. The experimental results show that below 10 kHz the measurements from the laser vibrometer and a piezoelectric sensor are in basic agreement, indicating that the new method and its measurement system are feasible for sinusoidal pressure calibration. Disturbing components, including small amplitudes, temperature change, pressure maldistribution, and vibration of the glass windows, are also analyzed, especially for dynamic calibrations above 10 kHz.
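One plausible way to write the underlying relation (an illustrative reconstruction, not the paper's derivation): the Lorentz-Lorenz equation ties the refractive index n of water to its density ρ, so a pressure change ΔP perturbs n and hence the interferometric phase accumulated over a path of length L at laser wavelength λ, with κ the isothermal compressibility of water:

\[
\frac{n^{2}-1}{n^{2}+2} = K\rho
\quad\Longrightarrow\quad
\frac{dn}{dP} = \frac{(n^{2}-1)(n^{2}+2)}{6n}\,\kappa,
\qquad
\Delta\phi \approx \frac{2\pi L}{\lambda}\,\frac{dn}{dP}\,\Delta P .
\]

In the paper itself, the corresponding conversion coefficients are determined empirically by static calibration in situ rather than taken from such a linearization alone.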
Land utilization and water resource inventories over extended test sites
NASA Technical Reports Server (NTRS)
Hoffer, R. M.
1972-01-01
In addition to the work on the corn blight this year, several other analysis tests were completed which resulted in significant findings. These aspects are discussed as follows: (1) field spectral measurements of soil conditions; (2) analysis of extended test site data; this discussion involves three different sets of data analysis sequences; (3) urban land use analysis, for studying water runoff potentials; and (4) thermal data quality study, as an expansion of our water resources studies involving temperature calibration techniques.
Field calibration of orifice meters for natural gas flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ting, V.C.; Shen, J.J.S.
1989-03-01
This paper presents the orifice calibration results for nominal 15.24, 10.16, and 5.08-cm (6, 4, 2-in.) orifice meters conducted at Chevron's Sand Hills natural gas flow measurement facility in Crane, Texas. Over 200 test runs were collected in a field environment to study the accuracy of the orifice meters. Data were obtained at beta ratios ranging from 0.12 to 0.74 at nominal conditions of 4576 kPa and 27°C (650 psig and 80°F) with a 0.57 specific gravity processed, pipeline-quality natural gas. A bank of critical flow nozzles was used as the flow rate proving device to calibrate the orifice meters. Orifice discharge coefficients were computed with the ANSI/API 2530-1985 (AGA3) and ISO 5167/ASME MFC-3M-1984 equations for every set of data points. With orifice bore Reynolds numbers ranging from 1 to 9 million, the Sand Hills calibration data bridge the gap between the Ohio State water data at low Reynolds numbers and Chevron's high Reynolds number test data taken at a large test facility in Venice, Louisiana. The test results also demonstrate that orifice meters can be accurately proved with critical flow nozzles under realistic field conditions.
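For orientation, the sketch below shows the ISO 5167-style orifice equation that such a calibration feeds into: once the discharge coefficient C has been established against a proving device, it converts differential pressure to mass flow. This is a generic illustration with toy numbers, not the Sand Hills data reduction.

```python
# Generic orifice-plate mass flow: q_m = C/sqrt(1 - beta^4) * eps * A * sqrt(2*dp*rho)
import math

def orifice_mass_flow(c_discharge, beta, bore_d_m, dp_pa, rho_kg_m3, epsilon=1.0):
    """Mass flow (kg/s) through an orifice plate.

    c_discharge -- discharge coefficient from calibration (e.g., vs. nozzles)
    beta        -- bore-to-pipe diameter ratio
    epsilon     -- gas expansibility factor (1.0 for incompressible flow)
    """
    area = math.pi / 4.0 * bore_d_m ** 2
    return (c_discharge / math.sqrt(1.0 - beta ** 4)
            * epsilon * area * math.sqrt(2.0 * dp_pa * rho_kg_m3))

# Toy numbers only: a 5.08 cm bore at beta = 0.5, 25 kPa differential pressure.
print(orifice_mass_flow(0.61, 0.5, 0.0508, 25e3, 35.0))
```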
New Mexico Standards Based Assessment (NMSBA) Technical Report: 2006 Spring Administration
ERIC Educational Resources Information Center
Griph, Gerald W.
2006-01-01
The purpose of the NMSBA technical report is to provide users and other interested parties with a general overview and the technical characteristics of the 2006 NMSBA. The 2006 technical report contains the following information: (1) Test development; (2) Scoring procedures; (3) Calibration, scaling, and equating procedures; (4) Standard setting;…
Out of lab calibration of a rotating 2D scanner for 3D mapping
NASA Astrophysics Data System (ADS)
Koch, Rainer; Böttcher, Lena; Jahrsdörfer, Maximilian; Maier, Johannes; Trommer, Malte; May, Stefan; Nüchter, Andreas
2017-06-01
Mapping is an essential task in mobile robotics. To fulfil advanced navigation and manipulation tasks, a 3D representation of the environment is required. Applying stereo cameras or time-of-flight (TOF) cameras is one way to achieve this, but they suffer from drawbacks that make proper mapping difficult; therefore, costly 3D laser scanners are applied. An inexpensive way to build a 3D representation is to use a 2D laser scanner and rotate the scan plane around an additional axis. A 3D point cloud acquired with such a custom device consists of multiple 2D line scans, so the scanner pose of each line scan needs to be determined, along with the parameters resulting from a calibration, to generate the 3D point cloud. Using external sensor systems is a common method to determine these calibration parameters, but this is costly and difficult when the robot needs to be calibrated outside the lab. Thus, this work presents a calibration method for a rotating 2D laser scanner. It uses a hardware setup to identify the required calibration parameters; the setup is light, small, and easy to transport, so an out-of-lab calibration is possible. Additionally, a theoretical model was created to test the algorithm and analyse the impact of the scanner accuracy. The hardware components of the 3D scanner system are a HOKUYO UTM-30LX-EW 2D laser scanner, a Dynamixel servo-motor, and a control unit. The calibration system consists of a hemisphere with a circular plate mounted in its interior. The algorithm needs to be provided with a dataset of a single rotation from the laser scanner, and to achieve a proper calibration result the scanner needs to be located in the middle of the hemisphere. By means of geometric formulas the algorithm determines the individual deviations of the placed laser scanner, solving the formulas in an iterative process in order to minimize errors. First, the calibration algorithm was tested with an ideal hemisphere model created in Matlab. Second, the laser scanner was mounted differently, and the scanner position and the rotation axis were modified; every such deviation was compared with the algorithm results. Several measurement settings were tested repeatedly with the 3D scanner system and the calibration system. The results show that the length accuracy of the laser scanner is most critical: it influences both the required size of the hemisphere and the calibration accuracy.
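A hypothetical geometry sketch of the underlying projection (not the authors' algorithm): turning a (range, scan angle) reading from the 2D scanner, rotated about an extra servo axis, into a 3D point, with lever-arm offsets and a mounting tilt of the kind the hemisphere procedure is meant to recover. All parameter names are illustrative.

```python
# (range, scan angle, rotation angle) -> 3D point, with calibration offsets.
import numpy as np

def scan_to_3d(r, scan_angle, rot_angle, dx=0.0, dz=0.0, tilt=0.0):
    """All angles in radians; dx/dz are lever-arm offsets of the scan center."""
    # Point in the 2D scan plane, shifted by the lever-arm offsets.
    p = np.array([r * np.cos(scan_angle) + dx,
                  0.0,
                  r * np.sin(scan_angle) + dz])
    # Small mounting tilt of the scan plane about the x-axis.
    tilt_rot = np.array([[1, 0, 0],
                         [0, np.cos(tilt), -np.sin(tilt)],
                         [0, np.sin(tilt),  np.cos(tilt)]])
    # Rotation of the whole scan plane about the vertical (servo) axis.
    servo = np.array([[np.cos(rot_angle), -np.sin(rot_angle), 0],
                      [np.sin(rot_angle),  np.cos(rot_angle), 0],
                      [0, 0, 1]])
    return servo @ (tilt_rot @ p)

print(scan_to_3d(1.0, np.pi / 6, np.pi / 4, dx=0.01, dz=0.02, tilt=0.005))
```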
14 CFR 33.45 - Calibration tests.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Calibration tests. 33.45 Section 33.45... STANDARDS: AIRCRAFT ENGINES Block Tests; Reciprocating Aircraft Engines § 33.45 Calibration tests. (a) Each engine must be subjected to the calibration tests necessary to establish its power characteristics and...
14 CFR 33.45 - Calibration tests.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Calibration tests. 33.45 Section 33.45... STANDARDS: AIRCRAFT ENGINES Block Tests; Reciprocating Aircraft Engines § 33.45 Calibration tests. (a) Each engine must be subjected to the calibration tests necessary to establish its power characteristics and...
14 CFR 33.45 - Calibration tests.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Calibration tests. 33.45 Section 33.45... STANDARDS: AIRCRAFT ENGINES Block Tests; Reciprocating Aircraft Engines § 33.45 Calibration tests. (a) Each engine must be subjected to the calibration tests necessary to establish its power characteristics and...
40 CFR 90.326 - Pre- and post-test analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Pre- and post-test analyzer calibration... Emission Test Equipment Provisions § 90.326 Pre- and post-test analyzer calibration. Calibrate only the range of each analyzer used during the engine exhaust emission test prior to and after each test in...
Development and test of sets of 3D printed age-specific thyroid phantoms for 131I measurements
NASA Astrophysics Data System (ADS)
Beaumont, Tiffany; Caldeira Ideias, Pedro; Rimlinger, Maeva; Broggio, David; Franck, Didier
2017-06-01
In the case of a nuclear reactor accident, the release contains a high proportion of iodine-131 that can be inhaled or ingested by members of the public. Iodine-131 is naturally retained in the thyroid and increases the thyroid cancer risk. Since the radiation-induced thyroid cancer risk is greater for children than for adults, the thyroid dose to children should be assessed as accurately as possible. For that purpose, direct measurements should be carried out with age-specific calibration factors but, currently, there are no age-specific thyroid phantoms allowing a robust measurement protocol. A set of age-specific thyroid phantoms for 5, 10 and 15 year old children and for the adult has been designed and 3D printed. A realistic thyroid shape has been selected and material properties taken into account to simulate the attenuation of biological tissues. The thyroid volumes follow ICRP recommendations and the phantoms also include the trachea and a spine model. Several versions, with or without spine, with or without trachea, and with or without an age-specific neck, have been manufactured in order to study the influence of these elements on the calibration factors. The calibration factors obtained with the adult phantom and a reference phantom are in reasonable agreement. In vivo calibration experiments with germanium detectors have shown that the difference in counting efficiency, the inverse of the calibration factor, between the 5 year old and adult phantoms is 25% for measurements at contact. It is also experimentally evidenced that the inverse of the calibration factor varies linearly with the thyroid volume. The influence of scattering elements like the neck or spine is not evidenced by experimental measurements.
Development of a EUV Test Facility at the Marshall Space Flight Center
NASA Technical Reports Server (NTRS)
West, Edward; Pavelitz, Steve; Kobayashi, Ken; Robinson, Brian; Cirtain, Johnathan; Gaskin, Jessica; Winebarger, Amy
2011-01-01
This paper describes a new EUV test facility that is being developed at the Marshall Space Flight Center (MSFC) to test EUV telescopes. Two flight programs, HiC, the high resolution coronal imager (sounding rocket), and SUVI, the Solar Ultraviolet Imager (GOES-R), set the requirements for this new facility. This paper discusses those requirements, the EUV source characteristics, the expected wavelength resolution, and the vacuum chambers (Stray Light Facility, X-ray Calibration Facility and the EUV test chamber) where this facility will be used.
Chander, G.; Xiong, X.(J.); Choi, T.(J.); Angal, A.
2010-01-01
The ability to detect and quantify changes in the Earth's environment depends on sensors that can provide calibrated, consistent measurements of the Earth's surface features through time. A critical step in this process is to put image data from different sensors onto a common radiometric scale. This work focuses on monitoring the long-term on-orbit calibration stability of the Terra Moderate Resolution Imaging Spectroradiometer (MODIS) and the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) sensors using the Committee on Earth Observation Satellites (CEOS) reference standard pseudo-invariant test sites (Libya 4, Mauritania 1/2, Algeria 3, Libya 1, and Algeria 5). These sites have been frequently used as radiometric targets because of their temporally stable surface conditions. This study was performed using all cloud-free calibrated images from the Terra MODIS and L7 ETM+ sensors acquired from launch to December 2008. Homogeneous regions of interest (ROI) were selected in the calibrated images, and the mean target statistics were derived from sensor measurements in terms of top-of-atmosphere (TOA) reflectance. For each band pair, a set of fitted coefficients (slope and offset) is provided to monitor the long-term stability over these very stable pseudo-invariant test sites. The average percent differences in intercept from the long-term trends obtained from the ETM+ TOA reflectance estimates relative to MODIS for all the CEOS reference standard test sites range from 2.5% to 15%. This gives an estimate of the collective differences due to the Relative Spectral Response (RSR) characteristics of each sensor, the bi-directional reflectance distribution function (BRDF), the spectral signature of the ground target, and atmospheric composition. The lifetime TOA reflectance trends from both sensors over 10 years are extremely stable, changing by no more than 0.4% per year in TOA reflectance over the CEOS reference standard test sites.
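A minimal sketch of the stability analysis described above: fit a slope and offset to a TOA reflectance time series over a pseudo-invariant site and express the drift in percent per year. The data below are synthetic stand-ins, not MODIS or ETM+ measurements.

```python
# Linear trend fit of a (synthetic) TOA reflectance time series.
import numpy as np

rng = np.random.default_rng(2)
years = np.sort(rng.uniform(0.0, 10.0, 120))            # decimal years since launch
toa_reflectance = 0.55 - 0.001 * years + rng.normal(0, 0.005, years.size)

slope, offset = np.polyfit(years, toa_reflectance, 1)   # slope and intercept
drift_pct_per_year = 100.0 * slope / offset             # drift relative to launch
print(f"drift: {drift_pct_per_year:+.2f} %/yr (offset {offset:.3f})")
```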
Esquinas, Pedro L; Tanguay, Jesse; Gonzalez, Marjorie; Vuckovic, Milan; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O; Celler, Anna
2016-12-01
In the nuclear medicine department, the activity of radiopharmaceuticals is measured using dose calibrators (DCs) prior to patient injection. The DC consists of an ionization chamber that measures the current generated by ionizing radiation (emitted from the radiotracer). In order to obtain an activity reading, the current is converted into units of activity by applying an appropriate calibration factor (also referred to as the DC dial setting). Accurate determination of DC dial settings is crucial to ensure that patients receive the appropriate dose in diagnostic scans or radionuclide therapies. The goals of this study were (1) to describe a practical method to experimentally determine dose calibrator settings using a thyroid probe (TP) and (2) to investigate the accuracy, reproducibility, and uncertainties of the method. As an illustration, the TP method was applied to determine 188Re dial settings for two dose calibrator models: Atomlab 100plus and Capintec CRC-55tR. Using the TP to determine dose calibrator settings involved three measurements. First, the energy-dependent efficiency of the TP was determined from energy spectra of two calibration sources (152Eu and 22Na). Second, the gamma emissions from the investigated isotope (188Re) were measured using the TP and its activity was determined using γ-ray spectroscopy methods. Ambient background, scatter, and source-geometry corrections were applied during the efficiency and activity determination steps. Third, the TP-based 188Re activity was used to determine the dose calibrator settings following the calibration curve method [B. E. Zimmerman et al., J. Nucl. Med. 40, 1508-1516 (1999)]. The interobserver reproducibility of TP measurements was determined by the coefficient of variation (COV), and the uncertainties associated with each step of the measuring process were estimated. The accuracy of activity measurements using the proposed method was evaluated by comparing the TP activity estimates of 99mTc, 188Re, 131I, and 57Co samples to high-purity Ge (HPGe) γ-ray spectroscopy measurements. The experimental 188Re dial settings determined with the TP were 76.5 ± 4.8 and 646 ± 43 for the Atomlab 100plus and Capintec CRC-55tR, respectively. In the case of the Atomlab 100plus, the TP-based dial settings improved the accuracy of 188Re activity measurements (confirmed by HPGe measurements) compared with the manufacturer-recommended settings. For the Capintec CRC-55tR, the TP-based settings were in agreement with previous results [B. E. Zimmerman et al., J. Nucl. Med. 40, 1508-1516 (1999)], which demonstrated that the manufacturer-recommended settings overestimate 188Re activity by more than 20%. The largest source of uncertainty in the experimentally determined dial settings was the application of a geometry correction factor, followed by the uncertainty of the scatter-corrected photopeak counts and the uncertainty of the TP efficiency calibration experiment. When using the most intense photopeak of the sample's emissions, the TP method yielded accurate (within 5% error) and reproducible (COV = 2%) measurements of sample activity. The relative uncertainties associated with such measurements ranged from 6% to 8% (expanded uncertainty at the 95% confidence interval, k = 2). Accurate determination/verification of dose calibrator dial settings can be performed using a thyroid probe in the nuclear medicine department.
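A hedged sketch of the activity-determination step in such a probe-based method: convert net photopeak counts to activity using the energy-dependent efficiency and the gamma emission probability. The function name, the correction factor, and the numbers (a 155 keV photopeak with emission probability near 0.155 for 188Re) are illustrative assumptions, not values from the paper.

```python
# A = N / (t * eps * I_gamma * f_geom), with a rough counting-statistics term.
import math

def activity_bq(net_counts, live_time_s, efficiency, emission_prob,
                geometry_corr=1.0):
    """Activity (Bq) from net photopeak counts, plus a Poisson uncertainty."""
    a = net_counts / (live_time_s * efficiency * emission_prob * geometry_corr)
    rel_u = 1.0 / math.sqrt(net_counts)   # counting term only; other terms add
    return a, a * rel_u

# Toy numbers: 5e4 net counts in 600 s with 0.21% efficiency.
act, u = activity_bq(5.0e4, 600.0, 2.1e-3, 0.155, geometry_corr=0.97)
print(f"activity: {act:.3e} Bq +/- {u:.1e} Bq (counting only)")
```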
14 CFR 33.85 - Calibration tests.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Calibration tests. 33.85 Section 33.85... STANDARDS: AIRCRAFT ENGINES Block Tests; Turbine Aircraft Engines § 33.85 Calibration tests. (a) Each engine must be subjected to those calibration tests necessary to establish its power characteristics and the...
14 CFR 33.85 - Calibration tests.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Calibration tests. 33.85 Section 33.85... STANDARDS: AIRCRAFT ENGINES Block Tests; Turbine Aircraft Engines § 33.85 Calibration tests. (a) Each engine must be subjected to those calibration tests necessary to establish its power characteristics and the...
14 CFR 33.85 - Calibration tests.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Calibration tests. 33.85 Section 33.85... STANDARDS: AIRCRAFT ENGINES Block Tests; Turbine Aircraft Engines § 33.85 Calibration tests. (a) Each engine must be subjected to those calibration tests necessary to establish its power characteristics and the...
A Low Cost Weather Balloon Borne Solar Cell Calibration Payload
NASA Technical Reports Server (NTRS)
Snyder, David B.; Wolford, David S.
2012-01-01
Calibration of standard sets of solar cell sub-cells is an important step in laboratory verification of the on-orbit performance of new solar cell technologies. This paper looks at the potential capabilities of a lightweight weather balloon payload for solar cell calibration. A 1500 g latex weather balloon can lift a 2.7 kg payload to over 100,000 ft altitude, above 99% of the atmosphere. Data taken between atmospheric pressures of about 30 to 15 mbar may be extrapolated via the Langley plot method to 0 mbar, i.e., AM0. This extrapolation, in principle, can have better than 0.1% error. The launch costs of such a payload are significantly less than those of the much larger, higher-altitude balloons or the manned flight facility, and the low cost enables a risk-tolerant approach to payload development. Demonstration of 1% standard deviation flight-to-flight variation is the goal of this project. This paper describes the initial concept of the solar cell calibration payload and reports initial test flight results.
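A sketch of the Langley extrapolation idea mentioned above, under the simplifying assumption that residual pressure can stand in for the optical air mass along the flight: regress the log of the cell's short-circuit current against pressure and extrapolate to 0 mbar for the AM0 response. The numbers are synthetic.

```python
# Langley-plot extrapolation: ln(Isc) vs. pressure, extrapolated to 0 mbar.
import numpy as np

pressure_mbar = np.linspace(30.0, 15.0, 25)      # usable altitude window
rng = np.random.default_rng(3)
isc = 0.480 * np.exp(-0.004 * pressure_mbar) * rng.normal(1, 0.001, 25)

slope, intercept = np.polyfit(pressure_mbar, np.log(isc), 1)
isc_am0 = np.exp(intercept)                      # value at zero pressure (AM0)
print(f"estimated AM0 short-circuit current: {isc_am0:.4f} A")
```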
Research on Geometric Calibration of Spaceborne Linear Array Whiskbroom Camera
Sheng, Qinghong; Wang, Qi; Xiao, Hui; Wang, Qing
2018-01-01
The geometric calibration of a spaceborne thermal-infrared camera with a high spatial resolution and wide coverage can set benchmarks for providing accurate geographical coordinates for the retrieval of land surface temperature. Using linear-array whiskbroom Charge-Coupled Device (CCD) arrays to image the Earth makes it possible to obtain wide-swath thermal-infrared images with high spatial resolution. Focusing on the whiskbroom characteristics of equal time intervals and unequal angles, the present study proposes a spaceborne linear-array-scanning imaging geometric model and calibrates the temporal system parameters and the whiskbroom angle parameters. With the help of the YG-14, China's first satellite equipped with thermal-infrared cameras of high spatial resolution, imagery over Anyang and Taiyuan in China is used to conduct a geometric calibration experiment and a verification test, respectively. Results have shown that the plane positioning accuracy without ground control points (GCPs) is better than 30 pixels and the plane positioning accuracy with GCPs is better than 1 pixel. PMID:29337885
Li, Weiyong; Worosila, Gregory D
2005-05-13
This research note demonstrates the simultaneous quantitation of a pharmaceutical active ingredient and three excipients in a simulated powder blend containing acetaminophen, Prosolv and Crospovidone. An experimental design approach was used to generate a 5-level (%, w/w) calibration sample set that included 125 samples. The samples were prepared by weighing suitable amounts of powders into separate 20-mL scintillation vials and mixing them manually. Partial least squares (PLS) regression was used for calibration model development. The models generated accurate results for quantitation of Crospovidone (at 5%, w/w) and magnesium stearate (at 0.5%, w/w). Further testing demonstrated that 2-level models were as effective as the 5-level ones, which reduced the calibration sample number to 50. The models had a small bias for quantitation of acetaminophen (at 30%, w/w) and Prosolv (at 64.5%, w/w) in the blend. The implication of the bias is discussed.
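A hedged sketch of the chemometric step, assuming scikit-learn: a PLS calibration relating spectra to component concentrations, in the spirit of the blend study above. The spectra below are synthetic (random pure-component loadings), not NIR data.

```python
# PLS calibration of a multi-component powder blend from synthetic spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_samples, n_wavelengths = 125, 400
conc = rng.uniform(0.0, 1.0, size=(n_samples, 4))        # 4 components (w/w)
loadings = rng.normal(0, 1, size=(4, n_wavelengths))     # pure-component spectra
spectra = conc @ loadings + rng.normal(0, 0.01, (n_samples, n_wavelengths))

X_cal, X_test, y_cal, y_test = train_test_split(spectra, conc, random_state=0)
pls = PLSRegression(n_components=6).fit(X_cal, y_cal)
rmsep = np.sqrt(((pls.predict(X_test) - y_test) ** 2).mean(axis=0))
print("per-component RMSEP:", np.round(rmsep, 4))
```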
Pettit works with the SLICE at the MSG in the U.S. Laboratory
2012-03-09
ISS030-E-128918 (9 March 2012) --- NASA astronaut Don Pettit, Expedition 30 flight engineer, works with the Structure and Liftoff In Combustion Experiment (SLICE) at the Microgravity Sciences Glovebox (MSG) in the Destiny laboratory of the International Space Station. Pettit conducted three sets of flame tests, followed by a fan calibration. This test will lead to increased efficiency and reduced pollutant emission for practical combustion devices.
The Initial Atmospheric Transport (IAT) Code: Description and Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrow, Charles W.; Bartel, Timothy James
The Initial Atmospheric Transport (IAT) computer code was developed at Sandia National Laboratories as part of their nuclear launch accident consequences analysis suite of computer codes. The purpose of IAT is to predict the initial puff/plume rise resulting from either a solid rocket propellant or a liquid rocket fuel fire, generating initial conditions for subsequent atmospheric transport calculations. The IAT code has been compared to two data sets which are appropriate to the design space of space launch accident analyses. The primary model uncertainties are the entrainment coefficients for the extended Taylor model. The Titan 34D accident (1986) was used to calibrate these entrainment settings for a prototypic liquid propellant accident, while the recent Johns Hopkins University Applied Physics Laboratory (JHU/APL, or simply APL) large propellant block tests (2012) were used to calibrate the entrainment settings for prototypic solid propellant accidents. North American Mesoscale (NAM) formatted weather data profiles are used by IAT to determine the local buoyancy force balance. The IAT comparisons for the APL solid propellant tests illustrate the sensitivity of the plume elevation to the weather profiles; that is, the weather profile is a dominant factor in determining the plume elevation. The IAT code performed remarkably well and is considered validated for neutral weather conditions.
NASA Astrophysics Data System (ADS)
Wegehenkel, Martin
As a result of a new agricultural funding policy established in 1992 by the European Community, it was assumed that up to 15-20% of arable land would be set aside in the following years in the new federal states of north-eastern Germany, for example Brandenburg. As one potential land use option, afforestation of these set-aside areas was discussed as a way to obtain deciduous forests. Since the mean annual precipitation in Brandenburg in north-eastern Germany is relatively low (480-530 mm y-1), an increase in interception and evapotranspiration loss by forests compared to arable land would lead to a reduction in groundwater recharge. Experimental evidence on the effects of such land use changes is rarely available; therefore, there is a need for indirect methods to estimate the impact of afforestation on the water balance of catchments. In this paper, a conceptual hydrological model was verified and calibrated in two steps using data from the Stobber catchment located in Brandenburg. In the first step, model outputs such as daily evapotranspiration rates and soil water contents were verified on the basis of experimental data sets from two test locations: one site with arable land use located within the Stobber catchment, and another site with pine forest located near the catchment. In the second step, the model was used to estimate the impact of afforestation on the catchment water balance and discharge. For that purpose, the model was calibrated against daily discharge measurements for the period 1995-1997. For a simple afforestation scenario, it was assumed that the area of forest increases from 34% to 80% of the catchment area. The impact of this change in forest cover proportion was analyzed using the calibrated model; for this afforestation scenario, the model predicts a reduction in discharge and an increase in evapotranspiration.
Detection of Unexpected High Correlations between Balance Calibration Loads and Load Residuals
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2014-01-01
An algorithm was developed for the assessment of strain-gage balance calibration data that makes it possible to systematically investigate potential sources of unexpected high correlations between calibration load residuals and applied calibration loads. The algorithm investigates correlations on a load series by load series basis. The linear correlation coefficient is used to quantify the correlations. It is computed for all possible pairs of calibration load residuals and applied calibration loads that can be constructed for the given balance calibration data set. An unexpected high correlation between a load residual and a load is detected if three conditions are met: (i) the absolute value of the correlation coefficient of a residual/load pair exceeds 0.95; (ii) the maximum of the absolute values of the residuals of a load series exceeds 0.25 % of the load capacity; (iii) the load component of the load series is intentionally applied. Data from a baseline calibration of a six-component force balance is used to illustrate the application of the detection algorithm to a real-world data set. This analysis also showed that the detection algorithm can identify load alignment errors as long as repeat load series are contained in the balance calibration data set that do not suffer from load alignment problems.
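The three conditions above translate directly into code. The sketch below applies them to one load series; the data names are hypothetical stand-ins, while the thresholds (0.95 for the correlation coefficient and 0.25% of capacity for the residuals) are the ones given in the text.

```python
# Three-condition test for an unexpected residual/load correlation.
import numpy as np

def flag_high_correlation(residuals, loads, capacity, load_is_applied):
    """Return True if this residual/load pair should be flagged."""
    r = np.corrcoef(residuals, loads)[0, 1]               # linear correlation
    cond1 = abs(r) > 0.95                                 # (i)  high correlation
    cond2 = np.abs(residuals).max() > 0.0025 * capacity   # (ii) > 0.25 % capacity
    cond3 = load_is_applied                               # (iii) intentional load
    return cond1 and cond2 and cond3

rng = np.random.default_rng(5)
loads = np.linspace(0.0, 4000.0, 20)                  # one load series; capacity 4000
residuals = 0.004 * loads + rng.normal(0, 0.5, 20)    # residual tracks the load
print(flag_high_correlation(residuals, loads, 4000.0, True))
```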
Requirements for Calibration in Noninvasive Glucose Monitoring by Raman Spectroscopy
Lipson, Jan; Bernhardt, Jeff; Block, Ueyn; Freeman, William R.; Hofmeister, Rudy; Hristakeva, Maya; Lenosky, Thomas; McNamara, Robert; Petrasek, Danny; Veltkamp, David; Waydo, Stephen
2009-01-01
Background In the development of noninvasive glucose monitoring technology, it is highly desirable to derive a calibration that relies on neither person-dependent calibration information nor supplementary calibration points furnished by an existing invasive measurement technique (universal calibration). Method By appropriate experimental design and associated analytical methods, we establish the sufficiency of multiple factors required to permit such a calibration. Factors considered are the discrimination of the measurement technique, stabilization of the experimental apparatus, physics–physiology-based measurement techniques for normalization, the sufficiency of the size of the data set, and appropriate exit criteria to establish the predictive value of the algorithm. Results For noninvasive glucose measurements, using Raman spectroscopy, the sufficiency of the scale of data was demonstrated by adding new data into an existing calibration algorithm and requiring that (a) the prediction error should be preserved or improved without significant re-optimization, (b) the complexity of the model for optimum estimation not rise with the addition of subjects, and (c) the estimation for persons whose data were removed entirely from the training set should be no worse than the estimates on the remainder of the population. Using these criteria, we established guidelines empirically for the number of subjects (30) and skin sites (387) for a preliminary universal calibration. We obtained a median absolute relative difference for our entire data set of 30 mg/dl, with 92% of the data in the Clarke A and B ranges. Conclusions Because Raman spectroscopy has high discrimination for glucose, a data set of practical dimensions appears to be sufficient for universal calibration. Improvements based on reducing the variance of blood perfusion are expected to reduce the prediction errors substantially, and the inclusion of supplementary calibration points for the wearable device under development will be permissible and beneficial. PMID:20144354
NASA Astrophysics Data System (ADS)
Fournier, A.; Morzfeld, M.; Hulot, G.
2013-12-01
For a suitable choice of parameters, the system of three ordinary differential equations (ODE) presented by Gissinger [1] was shown to exhibit chaotic reversals whose statistics compare well with those from the paleomagnetic record. In order to further assess the geophysical relevance of this low-dimensional model, we resort to data assimilation methods to calibrate it using reconstructions of the fluctuation of the virtual axial dipole moment spanning the past 2 million years. Moreover, we test to what extent a properly calibrated model could be used to predict a reversal of the geomagnetic field. We calibrate the ODE model to the geomagnetic field over the past 2 Ma using the SINT data set of Valet et al. [2]. To this end, we consider four data assimilation algorithms: the ensemble Kalman filter (EnKF), a variational method, and two Monte Carlo (MC) schemes, prior importance sampling and implicit sampling. We observe that the EnKF performs poorly and that prior importance sampling is inefficient. We obtain the most accurate reconstructions of the geomagnetic data using implicit sampling with five data points per assimilation sweep (of duration 5 kyr). The variational scheme performs equally well, but it does not provide quantitative information about the uncertainty of the estimates, which makes it difficult to use for robust prediction under uncertainty. A calibration of the model using the PADM2M data set of Ziegler et al. [3] confirms these findings. We study the predictive capability of the ODE model using statistics computed from synthetic data experiments. For each experiment, we produce 2 Myr of synthetic data (with error levels similar to those found in real data), calibrate the model to this record, and then check whether the calibrated model can correctly and reliably predict a reversal within the next 10 kyr (say). By performing 100 such experiments, we can assess how reliably our calibrated model predicts a (non-)reversal. It is found that the 5 kyr ahead predictions of reversals produced by the model appear to be accurate and reliable. These encouraging results prompted us to also test predictions of the five reversals of the SINT (and PADM2M) data set, using a similarly calibrated model. Results will be presented and discussed. [1] Gissinger, C., 2012, A new deterministic model for chaotic reversals, European Physical Journal B, 85:137. [2] Valet, J.-P., Meynadier, L. and Guyodo, Y., 2005, Geomagnetic field strength and reversal rate over the past 2 Million years, Nature, 435, 802-805. [3] Ziegler, L. B., Constable, C. G., Johnson, C. L. and Tauxe, L., 2011, PADM2M: a penalized maximum likelihood model of the 0-2 Ma paleomagnetic axial dipole moment, Geophysical Journal International, 184, 1069-1089.
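For readers unfamiliar with the EnKF, one of the four algorithms compared above, the following generic stochastic-EnKF analysis step illustrates the kind of update involved; this is not the authors' code, the Gissinger ODEs themselves are not reproduced, and the state and observation operators are invented stand-ins.

```python
# One stochastic-EnKF analysis step for a small (e.g., 3-state) ensemble.
import numpy as np

def enkf_update(ensemble, h_obs, y_obs, obs_var, rng):
    """ensemble: (n_members, n_state); h_obs maps a state to a scalar obs."""
    n = ensemble.shape[0]
    hx = np.apply_along_axis(h_obs, 1, ensemble)            # predicted observations
    x_mean, hx_mean = ensemble.mean(0), hx.mean()
    p_xh = ((ensemble - x_mean).T @ (hx - hx_mean)) / (n - 1)   # state-obs covariance
    p_hh = ((hx - hx_mean) ** 2).sum() / (n - 1)                # obs variance
    gain = p_xh / (p_hh + obs_var)                              # Kalman gain (vector)
    perturbed = y_obs + rng.normal(0, np.sqrt(obs_var), n)      # perturbed observations
    return ensemble + np.outer(perturbed - hx, gain)

rng = np.random.default_rng(6)
ens = rng.normal(0, 1, size=(100, 3))   # 100 members of a 3-state model
ens = enkf_update(ens, lambda x: x[0], y_obs=0.8, obs_var=0.05, rng=rng)
print("analysis ensemble mean:", ens.mean(0))
```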
Calibrated Noise Measurements with Induced Receiver Gain Fluctuations
NASA Technical Reports Server (NTRS)
Racette, Paul; Walker, David; Gu, Dazhen; Rajola, Marco; Spevacek, Ashly
2011-01-01
The lack of well-developed techniques for modeling changing statistical moments in our observations has stymied the application of stochastic process theory in science and engineering. These limitations were encountered when modeling the performance of radiometer calibration architectures and algorithms in the presence of nonstationary receiver fluctuations. Analyses of measured signals have traditionally been limited to a single measurement series, whereas in a radiometer that samples a set of noise references, the data collection can be treated as an ensemble set of measurements of the receiver state. Noise Assisted Data Analysis (NADA) is a growing field of study with significant potential for aiding the understanding and modeling of nonstationary processes. Typically, NADA entails adding noise to a signal to produce an ensemble set on which statistical analysis is performed. Alternatively, as in radiometric measurements, mixing a signal with calibrated noise provides, through the calibration process, the means to detect deviations from the stationarity assumption and thereby a measurement tool to characterize the signal's nonstationary properties. Data sets comprised of calibrated noise measurements have been limited to those collected with naturally occurring fluctuations in the radiometer receiver. To examine the application of NADA using calibrated noise, a Receiver Gain Modulation Circuit (RGMC) was designed and built to modulate the gain of a radiometer receiver using an external signal. In 2010, an RGMC was installed and operated at the National Institute of Standards and Technology (NIST) using their Noise Figure Radiometer (NFRad) and national standard noise references. The data collected are the first known set of calibrated noise measurements from a receiver with an externally modulated gain. As an initial step, sinusoidal and step-function signals were used to modulate the receiver gain, to evaluate the circuit characteristics and to study the performance of a variety of calibration algorithms. The receiver noise temperature and time-bandwidth product of the NFRad are calculated from the data. Statistical analysis using temporal-dependent calibration algorithms reveals that the naturally occurring fluctuations in the receiver are stationary over long intervals (hundreds of seconds); however, the receiver exhibits local nonstationarity over the interval during which one set of reference measurements is collected. A variety of calibration algorithms have been applied to the data to assess their performance with the gain fluctuation signals. This presentation describes the RGMC, the experiment design, and a comparative analysis of calibration algorithms.
Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.
Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin
2015-02-01
To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
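The Pareto-dominance rule in the abstract is easy to state in code: an input set is kept if no other set fits every calibration target at least as well and at least one target strictly better. The sketch below filters a synthetic table of per-target misfits (lower is better); all names and numbers are illustrative, not from the TAVR model.

```python
# Identify the Pareto frontier of calibration input sets by misfit dominance.
import numpy as np

def pareto_frontier(fits):
    """fits: (n_input_sets, n_targets) array of misfit values (lower = better)."""
    n = fits.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # Input set i is dominated if some other set is <= on all targets
        # and strictly < on at least one (i itself never satisfies both).
        dominated = np.all(fits <= fits[i], axis=1) & np.any(fits < fits[i], axis=1)
        if dominated.any():
            keep[i] = False
    return np.flatnonzero(keep)

rng = np.random.default_rng(7)
misfits = rng.uniform(0.0, 1.0, size=(500, 3))   # 500 input sets, 3 targets
frontier = pareto_frontier(misfits)
print(f"{frontier.size} of {misfits.shape[0]} input sets are Pareto-optimal")
```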
ERIC Educational Resources Information Center
Bol, Linda; Hacker, Douglas J.; Walck, Camilla C.; Nunnery, John A.
2012-01-01
A 2 x 2 factorial design was employed in a quasi-experiment to investigate the effects of guidelines in group or individual settings on the calibration accuracy and achievement of 82 high school biology students. Significant main effects indicated that calibration practice with guidelines and practice in group settings increased prediction and…
NASA Technical Reports Server (NTRS)
Ohring, G.; Wielicki, B.; Spencer, R.; Emery, B.; Datla, R.
2004-01-01
Measuring the small changes associated with long-term global climate change from space is a daunting task. To address these problems and recommend directions for improvements in satellite instrument calibration, some 75 scientists, including researchers who develop and analyze long-term data sets from satellites, experts in the field of satellite instrument calibration, and physicists working on state-of-the-art calibration sources and standards, met November 12-14, 2002 to discuss the issues. The workshop defined the absolute accuracies and long-term stabilities of global climate data sets that are needed to detect expected trends, translated these data set accuracies and stabilities into required satellite instrument accuracies and stabilities, and evaluated the ability of current observing systems to meet these requirements. The workshop's recommendations include a set of basic axioms or overarching principles that must guide high-quality climate observations in general, and a roadmap for improving satellite instrument characterization, calibration, inter-calibration, and associated activities to meet the challenge of measuring global climate change. It is also recommended that a follow-up workshop be conducted to discuss implementation of the roadmap developed at this workshop.
Using modern analogues to reconstruct past landcover
NASA Astrophysics Data System (ADS)
Brewer, Simon
2016-04-01
The physical cover of the earth plays an important role in the earth system. It affects the climate through feedbacks such as albedo and surface roughness, forms part of the carbon cycle as both sink and source and is both affected by and can affect human societies. Reconstructing past changes in land use and land cover helps to understand how these interactions may have changed over time, and provides important boundary conditions for paleoclimate models. Pollen assemblages, extracted from sedimentary sequences, provide one of the most abundant sources of information about past changes in land cover over the Holocene period. However, the relationship between plant cover and sedimentary pollen abundance is complex and non-linear, being affected by differential dispersal, production and taxonomic resolution. One method to correct for this and provide quantified estimates of past land cover is to calibrate modern pollen assemblages against contemporary remotely sensed estimates of land cover. Results will be presented from developing such a calibration for a set of European modern pollen samples and AVHRR-based tree cover estimates. An emphasis will be placed on the output of validation tests of the calibration, and what this indicates for the predictive skill of this approach. The calibration will then be applied to a set of pollen sequences for the European continent for the past 11,000 years, and the patterns of reconstructed land cover will be discussed.
Geometric Calibration and Validation of Kompsat-3A AEISS-A Camera
Seo, Doocheon; Oh, Jaehong; Lee, Changno; Lee, Donghan; Choi, Haejin
2016-01-01
Kompsat-3A, which was launched on 25 March 2015, is a sister spacecraft of the Kompsat-3 developed by the Korea Aerospace Research Institute (KARI). Kompsat-3A's AEISS-A (Advanced Electronic Image Scanning System-A) camera is similar to Kompsat-3's AEISS, but it was designed to provide PAN (panchromatic) resolution of 0.55 m, MS (multispectral) resolution of 2.20 m, and TIR (thermal infrared) resolution of 5.5 m. In this paper we present the geometric calibration and validation work of Kompsat-3A that was completed last year. A set of images over the test sites was taken over two months and utilized for the work. The workflow includes the boresight calibration, CCD (charge-coupled device) alignment and focal length determination, the merging of the two CCD lines, and the band-to-band registration. Then, the positional accuracies without any GCPs (ground control points) were validated for hundreds of test sites across the world using various image acquisition modes. In addition, we checked the planimetric accuracy by bundle adjustments with GCPs. PMID:27783054
Calibration of Gimbaled Platforms: The Solar Dynamics Observatory High Gain Antennas
NASA Technical Reports Server (NTRS)
Hashmall, Joseph A.
2006-01-01
Simple parameterization of gimbaled platform pointing produces a complete set of 13 calibration parameters: 9 misalignment angles, 2 scale factors, and 2 biases. By modifying the parameter representation, redundancy can be eliminated and a minimum set of 9 independent parameters defined. These consist of 5 misalignment angles, 2 scale factors, and 2 biases. Of these, only 4 misalignment angles and 2 biases are significant for the Solar Dynamics Observatory (SDO) High Gain Antennas (HGAs). An algorithm to determine these parameters after launch has been developed and tested with simulated SDO data. The algorithm consists of a direct minimization of the root-sum-square of the differences between expected power and measured power. The results show that sufficient parameter accuracy can be attained even when time-dependent thermal distortions are present, if measurements from a pattern of intentional offset pointing positions are included.
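As a rough illustration of the estimation step described above, the sketch below directly minimizes the root-sum-square of power residuals over a small parameter set. The 6-parameter pointing model and Gaussian beam pattern are invented stand-ins, not the actual SDO HGA model, so treat this only as a sketch of the approach:

```python
# Toy version of the calibration algorithm: minimize the root-sum-square
# difference between measured and modeled antenna power.
import numpy as np
from scipy.optimize import minimize

def expected_power(params, az_cmd, el_cmd):
    """Assumed model: 4 misalignment angles + 2 gimbal biases (rad).
    (Note: m1 and b_az enter only as a sum, echoing the redundancy
    the abstract removes by reparameterization.)"""
    m1, m2, m3, m4, b_az, b_el = params
    az_err = m1 + m3 * np.sin(el_cmd) + b_az
    el_err = m2 + m4 * np.cos(az_cmd) + b_el
    off = np.hypot(az_err, el_err)             # boresight offset angle
    return np.exp(-0.5 * (off / 0.01) ** 2)    # Gaussian beam pattern

def rss(params, az, el, p_meas):
    return np.sqrt(np.sum((p_meas - expected_power(params, az, el)) ** 2))

# Synthetic measurements from a pattern of intentional offset pointings
rng = np.random.default_rng(0)
az = rng.uniform(0.0, 2.0 * np.pi, 200)
el = rng.uniform(0.0, np.pi / 2.0, 200)
truth = np.array([2e-3, -1e-3, 5e-4, 1e-3, 3e-4, -2e-4])
p_meas = expected_power(truth, az, el) + rng.normal(0.0, 1e-3, az.size)

fit = minimize(rss, np.zeros(6), args=(az, el, p_meas),
               method="Nelder-Mead", options={"maxiter": 5000})
print(fit.x)  # recovered calibration parameters
```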
Modeling of solid-state and excimer laser processes for 3D micromachining
NASA Astrophysics Data System (ADS)
Holmes, Andrew S.; Onischenko, Alexander I.; George, David S.; Pedder, James E.
2005-04-01
An efficient simulation method has recently been developed for multi-pulse ablation processes. This is based on pulse-by-pulse propagation of the machined surface according to one of several phenomenological models for the laser-material interaction. The technique allows quantitative predictions to be made about the surface shapes of complex machined parts, given only a minimal set of input data for parameter calibration. In the case of direct-write machining of polymers or glasses with ns-duration pulses, this data set can typically be limited to the surface profiles of a small number of standard test patterns. The use of phenomenological models for the laser-material interaction, calibrated by experimental feedback, allows fast simulation, and can achieve a high degree of accuracy for certain combinations of material, laser and geometry. In this paper, the capabilities and limitations of the approach are discussed, and recent results are presented for structures machined in SU8 photoresist.
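The pulse-by-pulse surface propagation idea lends itself to a very compact implementation. The sketch below uses a generic Beer-Lambert-type logarithmic etch law above a fluence threshold; the law and all parameter values are illustrative assumptions, not the authors' calibrated phenomenological models:

```python
# Pulse-by-pulse surface propagation with a generic etch law:
# depth removed per pulse d = ln(F/Fth)/alpha for F > Fth.
import numpy as np

x = np.linspace(-50e-6, 50e-6, 501)         # lateral coordinate (m)
z = np.zeros_like(x)                        # machined depth (m)

F0, w = 2.0, 15e-6                          # peak fluence (J/cm^2), beam radius (m)
Fth, alpha_eff = 0.5, 5e6                   # threshold fluence, eff. absorption (1/m)
fluence = F0 * np.exp(-2.0 * (x / w) ** 2)  # Gaussian fluence profile

for _ in range(100):                        # propagate the surface pulse by pulse
    d = np.where(fluence > Fth, np.log(fluence / Fth) / alpha_eff, 0.0)
    z += d

print(f"crater depth after 100 pulses: {z.max() * 1e6:.1f} um")
```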
Use of a Self-Instructional Radiographic Anatomy Module for Dental Hygiene Faculty Calibration.
Brame, Jennifer L; AlGheithy, Demah Salem; Platin, Enrique; Mitchell, Shannon H
2017-06-01
Purpose: Dental hygiene educators often provide inconsistent instruction in clinical settings, and various attempts to address the lack of consistency have been reported in the literature. The purpose of this pilot study was to determine whether the use of a self-instructional, radiographic anatomy (SIRA) module improved DH faculty calibration regarding the identification of normal intraoral and extraoral radiographic anatomy and whether its effect could be sustained over a period of four months. Methods: A convenience sample consisting of all dental hygiene faculty members involved in clinical instruction (N=23) at the University of North Carolina (UNC) was invited to complete the four parts of this online pilot study: a pre-test, review of the SIRA module, an immediate post-test, and a four-month follow-up post-test. Descriptive analyses, the Friedman's ANOVA, and the exact form of the Wilcoxon signed-rank test were used to analyze the data. The level of significance was set at 0.05. Participants who did not complete all parts of the study were omitted from the data analysis comparing pre- to post-test performance. Results: The pre-test response rate was 73.9% (N=17), and 88.2% (N=15) of those initial participants completed both the immediate and follow-up post-tests. Faculty completing all parts of the study consisted of 5 full-time faculty, 5 part-time faculty, and 5 graduate teaching assistants. The Friedman's ANOVA revealed no statistically significant difference (P=0.179) in percentages of correct responses between the three tests (pre, post, and follow-up). The exact form of the Wilcoxon signed-rank test revealed marginal significance when comparing percent of correct responses at pre-test and immediate post-test (P=0.054), and no statistically significant difference when comparing percent of correct responses at immediate post-test and the follow-up post-test four months later (P=0.106). Conclusions: Use of a SIRA module did not significantly affect DH faculty test performance. The lack of statistical significance in the percentages of correct responses between the three tests may have been affected by the small number of participants completing all four parts of the study (N=15). Additional research is needed to identify and improve methods for faculty calibration. Copyright © 2017 The American Dental Hygienists' Association.
Kimmel, Lara A; Holland, Anne E; Edwards, Elton R; Cameron, Peter A; De Steiger, Richard; Page, Richard S; Gabbe, Belinda
2012-06-01
Accurate prediction of the likelihood of discharge to inpatient rehabilitation following lower limb fracture, made on admission to hospital, may assist patient discharge planning and decrease the burden on the hospital system caused by delays in decision making. The aim was to develop a prognostic model for discharge to inpatient rehabilitation. Isolated lower extremity fracture cases (excluding fractured neck of femur), captured by the Victorian Orthopaedic Trauma Outcomes Registry (VOTOR), were extracted for analysis. A training data set was created for model development and a validation data set for evaluation. A multivariable logistic regression model was developed based on patient and injury characteristics. Models were assessed using measures of discrimination (C-statistic) and calibration (Hosmer-Lemeshow (H-L) statistic). A total of 1429 patients met the inclusion criteria and were randomly split into training and test data sets. Increasing age, more proximal fracture type, compensation or private fund source for the admission, metropolitan location of residence, not working prior to injury and having a self-reported pre-injury disability were included in the final prediction model. The C-statistic for the model was 0.92 (95% confidence interval (CI) 0.88, 0.95) with an H-L statistic of χ² = 11.62, p = 0.17. For the test data set, the C-statistic was 0.86 (95% CI 0.83, 0.90) with an H-L statistic of χ² = 37.98, p < 0.001. A model to predict discharge to inpatient rehabilitation following lower limb fracture was developed with excellent discrimination, although the calibration was reduced in the test data set. This model requires prospective testing but could form an integral part of decision making with regard to discharge disposition, to facilitate timely and accurate referral to rehabilitation and optimise resource allocation. Copyright © 2011 Elsevier Ltd. All rights reserved.
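The workflow in this abstract (fit a multivariable logistic regression on a training set, then check discrimination via the C-statistic and calibration via the Hosmer-Lemeshow statistic on a test set) can be sketched as below. The synthetic data and the decile-based H-L computation are stand-ins, not the VOTOR variables or the exact grouping the authors used:

```python
# Sketch: train/test split, logistic regression, C-statistic (ROC AUC),
# and a simple decile-based Hosmer-Lemeshow statistic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1429, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
p = model.predict_proba(X_te)[:, 1]
print("C-statistic:", round(roc_auc_score(y_te, p), 3))

# Hosmer-Lemeshow: observed vs expected events within risk deciles
edges = np.quantile(p, np.linspace(0.0, 1.0, 11))
group = np.clip(np.digitize(p, edges[1:-1]), 0, 9)
hl = 0.0
for g in range(10):
    obs = y_te[group == g].sum()          # observed events in decile g
    exp = p[group == g].sum()             # expected events in decile g
    pbar = p[group == g].mean()
    hl += (obs - exp) ** 2 / (exp * (1.0 - pbar) + 1e-9)
print("H-L statistic:", round(hl, 2))
```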
NASA Technical Reports Server (NTRS)
Pendleton, Geoffrey N.; Paciesas, William S.; Mallozzi, Robert S.; Koshut, Tom M.; Fishman, Gerald J.; Meegan, Charles A.; Wilson, Robert B.; Horack, John M.; Lestrade, John Patrick
1995-01-01
The detector response matrices for the Burst And Transient Source Experiment (BATSE) on board the Compton Gamma Ray Observatory (CGRO) are described, including their creation and operation in data analysis. These response matrices are a detailed abstract representation of the gamma-ray detectors' operating characteristics that are needed for data analysis. They are constructed from an extensive set of calibration data coupled with a complex geometry electromagnetic cascade Monte Carlo simulation code. The calibration tests and simulation algorithm optimization are described. The characteristics of the BATSE detectors in the spacecraft environment are also described.
NASA Astrophysics Data System (ADS)
Teixeira, Filipe; Melo, André; Cordeiro, M. Natália D. S.
2010-09-01
A linear least-squares methodology was used to determine the vibrational scaling factors for the X3LYP density functional. Uncertainties for these scaling factors were calculated according to the method devised by Irikura et al. [J. Phys. Chem. A 109, 8430 (2005)]. The calibration set was systematically partitioned according to several of its descriptors and the scaling factors for X3LYP were recalculated for each subset. The results show that the scaling factors are only significant up to the second digit, irrespective of the calibration set used. Furthermore, multivariate statistical analysis allowed us to conclude that the scaling factors and the associated uncertainties are independent of the size of the calibration set and strongly suggest the practical impossibility of obtaining vibrational scaling factors with more than two significant digits.
Yu, Shaohui; Xiao, Xue; Ding, Hong; Xu, Ge; Li, Haixia; Liu, Jing
2017-08-05
Quantitative analysis is very difficult for the excitation-emission fluorescence spectroscopy of multi-component mixtures whose fluorescence peaks overlap severely. As an effective method for quantitative analysis, partial least squares can extract the latent variables from both the independent variables and the dependent variables, so it can model multiple correlations between variables. However, several factors usually affect the prediction results of partial least squares, such as noise and the distribution and number of samples in the calibration set. This work focuses on the calibration-set problems mentioned above. Firstly, the outliers in the calibration set are removed by leave-one-out cross-validation. Then, according to two different prediction requirements, the EWPLS method and the VWPLS method are proposed. The independent and dependent variables are weighted in the EWPLS method by the maximum error of the recovery rate and in the VWPLS method by the maximum variance of the recovery rate. Three organic compounds with severely overlapping excitation-emission fluorescence spectra are selected for the experiments. The step adjustment parameter, the iteration number and the sample amount in the calibration set are discussed. The results show that the EWPLS method and the VWPLS method are superior to the PLS method, especially for small calibration sets. Copyright © 2017 Elsevier B.V. All rights reserved.
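The first step the authors describe, screening the calibration set for outliers by leave-one-out cross-validation, can be sketched generically as below; the EWPLS/VWPLS weighting itself is specific to the paper and is not reproduced. Spectra and the rejection threshold are synthetic placeholders:

```python
# Sketch: flag calibration-set outliers by leave-one-out cross-validated
# PLS residuals, then refit without them.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 200))                     # synthetic spectra
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.1, 60)  # synthetic concentrations

pls = PLSRegression(n_components=5)
y_loo = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()
resid = y - y_loo
keep = np.abs(resid) < 3.0 * resid.std()           # drop gross outliers

pls.fit(X[keep], y[keep])                          # refit on cleaned set
print(f"kept {keep.sum()} of {len(y)} calibration samples")
```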
Visible spectroscopy calibration transfer model in determining pH of Sala mangoes
NASA Astrophysics Data System (ADS)
Yahaya, O. K. M.; MatJafri, M. Z.; Aziz, A. A.; Omar, A. F.
2015-05-01
The purpose of this study is to compare the efficiency of calibration transfer procedures between three spectrometers, two from Ocean Optics Inc., namely the QE65000 and the Jaz, and an ASD FieldSpec 3, in measuring the pH of Sala mango by visible reflectance spectroscopy. This study evaluates the ability of these spectrometers to measure the pH of Sala mango by applying similar calibration algorithms through direct calibration transfer. This technique defines one spectrometer as the master instrument and another as the slave. The multiple linear regression (MLR) calibration model generated using the QE65000 spectrometer is transferred to the Jaz spectrometer and vice versa for Set 1. The same technique is applied for Set 2, with the QE65000 spectrometer model transferred to the FieldSpec 3 spectrometer and vice versa. For Set 1, the results showed that the QE65000 spectrometer established a calibration model with higher accuracy than that of the Jaz spectrometer. In addition, the calibration model developed on the Jaz spectrometer successfully predicted the pH of Sala mango measured with the QE65000 spectrometer, with a root mean square error of prediction RMSEP = 0.092 pH and coefficient of determination R² = 0.892. Moreover, the best prediction result was obtained for Set 2, when the calibration model developed on the QE65000 spectrometer was successfully transferred to the FieldSpec 3 with R² = 0.839 and RMSEP = 0.16 pH.
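A minimal sketch of calibration transfer in this spirit follows: an MLR model is built on the master instrument and applied to slave spectra after a simple per-wavelength linear mapping estimated from samples measured on both instruments. The mapping step and all data are illustrative assumptions; the paper's exact transfer procedure may differ (direct transfer may simply apply the master model unchanged):

```python
# Sketch: MLR calibration on the master, applied to mapped slave spectra.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
master = rng.normal(size=(40, 20))                  # master spectra (QE65000 role)
slave = 0.9 * master + 0.05 + rng.normal(0, 0.01, master.shape)
pH = 3.5 + 0.2 * master[:, 10] + rng.normal(0, 0.02, 40)

cal = LinearRegression().fit(master, pH)            # MLR calibration on master

slope = np.empty(20)
offset = np.empty(20)
for j in range(20):                                 # per-wavelength slave->master map
    slope[j], offset[j] = np.polyfit(slave[:, j], master[:, j], 1)
slave_mapped = slave * slope + offset

pred = cal.predict(slave_mapped)                    # transferred predictions
print(f"RMSEP after transfer: {np.sqrt(np.mean((pred - pH) ** 2)):.3f} pH")
```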
Implications of Version 8 TOMS and SBUV Data for Long-Term Trend Analysis
NASA Technical Reports Server (NTRS)
Frith, Stacey M.
2004-01-01
Total ozone data from the Total Ozone Mapping Spectrometer (TOMS) and profile/total ozone data from the Solar Backscatter Ultraviolet (SBUV, SBUV/2) series of instruments have recently been reprocessed using new retrieval algorithms (referred to as Version 8 for both) and updated calibrations. In this paper, we incorporate the Version 8 data into a TOMS/SBUV merged total ozone data set and an SBUV merged profile ozone data set. The Total Merged Ozone Data (Total MOD) combines data from multiple TOMS and SBUV instruments to form an internally consistent global data set with virtually complete time coverage from October 1978 through December 2003. Calibration differences between instruments are accounted for using external adjustments based on instrument intercomparisons during overlap periods. Previous results showed that errors due to aerosol loading and sea glint are significantly reduced in the V8 TOMS retrievals. Using SBUV as a transfer standard, calibration differences between V8 Nimbus 7 and Earth Probe TOMS data are approx. 1.3%, suggesting small errors in calibration remain. We will present updated total ozone long-term trends based on the Version 8 data. The Profile Merged Ozone Data (Profile MOD) data set is constructed using data from the SBUV series of instruments. In previous versions, SAGE data were used to establish the long-term external calibration of the combined data set. For the SBUV Version 8 data, we assess the V8 profile data through comparisons with SAGE and between SBUV instruments in overlap periods. We then construct a consistently calibrated long-term time series. Updated zonal mean trends as a function of altitude and season from the new profile data set will be shown, and uncertainties in determining the best long-term calibration will be discussed.
TOGA/COARE AMMR 1992 data processing
NASA Technical Reports Server (NTRS)
Kunkee, D. B.
1994-01-01
The complete set of Tropical Ocean and Global Atmosphere (TOGA)/Coupled Ocean Atmosphere Response Experiment (COARE) flight data for the 91.65 GHz Airborne Meteorological Radiometer (AMMR92) contains data from nineteen flights: two test flights, four transit flights, and thirteen experimental flights. The flights occurred between December 16, 1992 and February 28, 1993. Data collection from the AMMR92 during the first ten flights of TOGA/COARE was performed using the executable code TSK30041, an IBM PC/XT program used by the NASA Goddard Space Flight Center (GSFC). During one flight, inconsistencies were found in the operation of the AMMR92 using the GSFC data acquisition system. Consequently, the Georgia Tech (GT) data acquisition system was used during all successive TOGA/COARE flights. During data processing, these inconsistencies were found to affect the recorded data as well. Errors are caused by an insufficient pre- and post-calibration settling period for the splash-plate mechanism. The splash-plate operates asynchronously with the data acquisition system (there is no position feedback to the GSFC or GT data system). This condition caused both the calibration and the post-calibration scene measurement to be corrupted on a randomly occurring basis when the GSFC system was used. This problem did not occur with the GT data acquisition system due to sufficient allowance for splash-plate settling. After TOGA/COARE it was determined that calibration of the instrument was a function of the scene brightness temperature. Therefore, the orientation error in the main antenna beam of the AMMR92 is hypothesized to be caused by misalignment of the internal 'splash-plate' responsible for directing the antenna beam toward the scene or toward the calibration loads. Misalignment of the splash-plate is responsible for 'scene feedthrough' during calibration. Laboratory investigation at Georgia Tech found that each polarization is affected differently by the splash-plate alignment error. This is likely to cause significant and unique errors in the absolute calibration of each channel.
NASA Astrophysics Data System (ADS)
Shedekar, Vinayak S.; King, Kevin W.; Fausey, Norman R.; Soboyejo, Alfred B. O.; Harmel, R. Daren; Brown, Larry C.
2016-09-01
Three different models of tipping bucket rain gauges (TBRs), viz. HS-TB3 (Hydrological Services Pty Ltd.), ISCO-674 (Isco, Inc.) and TR-525 (Texas Electronics, Inc.), were calibrated in the lab to quantify measurement errors across a range of rainfall intensities (5 mm·h⁻¹ to 250 mm·h⁻¹) and three different volumetric settings. Instantaneous and cumulative values of simulated rainfall were recorded at 1, 2, 5, 10 and 20-min intervals. All three TBR models showed a substantial deviation (α = 0.05) in measurements from actual rainfall depths, with increasing underestimation errors at greater rainfall intensities. Simple linear regression equations were developed for each TBR to correct the TBR readings based on measured intensities (R² > 0.98). Additionally, two dynamic calibration techniques, viz. a quadratic model (R² > 0.7) and a T vs. 1/Q model (R² > 0.98), were tested and found to be useful in situations when the volumetric settings of TBRs are unknown. The correction models were successfully applied to correct field-collected rainfall data from the respective TBR models. The calibration parameters of the correction models were found to be highly sensitive to changes in the volumetric calibration of TBRs. Overall, the HS-TB3 model (with a better protected tipping bucket mechanism, and consistent measurement errors across a range of rainfall intensities) was found to be the most reliable and consistent for rainfall measurements, followed by the ISCO-674 (with susceptibility to clogging and relatively smaller measurement errors across a range of rainfall intensities) and the TR-525 (with high susceptibility to clogging and frequent changes in volumetric calibration, and highly intensity-dependent measurement errors). The study demonstrated that corrections based on dynamic and volumetric calibration can only help minimize, but not completely eliminate, the measurement errors. The findings from this study will be useful for correcting field data from TBRs, and may have major implications for field- and watershed-scale hydrologic studies.
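The simple linear regression correction described above amounts to a one-line fit; a sketch with made-up readings (not the published gauge-specific calibrations):

```python
# Sketch: fit the linear correction rate = slope * reading + intercept
# for one TBR, then apply it to the gauge output.
import numpy as np

tbr_reading = np.array([4.8, 9.3, 22.8, 45.1, 88.0, 210.5])  # mm/h, gauge output
true_rate = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 250.0])  # mm/h, reference

slope, intercept = np.polyfit(tbr_reading, true_rate, 1)
corrected = slope * tbr_reading + intercept
print(f"rate = {slope:.3f} * reading + {intercept:.2f} mm/h")
```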
40 CFR 91.326 - Pre- and post-test analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
40 CFR, Protection of Environment, Part 91 (Control of Emissions from Marine Spark-Ignition Engines, Emission Test Equipment Provisions), § 91.326 Pre- and post-test analyzer calibration: Calibrate the operating range of each analyzer...
Pilkonis, Paul A.; Yu, Lan; Dodds, Nathan E.; Johnston, Kelly L.; Lawrence, Suzanne; Hilton, Thomas F.; Daley, Dennis C.; Patkar, Ashwin A.; McCarty, Dennis
2015-01-01
Background: Two item banks for substance use were developed as part of the Patient-Reported Outcomes Measurement Information System (PROMIS®): severity of substance use and positive appeal of substance use. Methods: Qualitative item analysis (including focus groups, cognitive interviewing, expert review, and item revision) reduced an initial pool of more than 5,300 items for substance use to 119 items included in field testing. Items were written in a first-person, past-tense format, with 5 response options reflecting frequency or severity. Both 30-day and 3-month time frames were tested. The calibration sample of 1,336 respondents included 875 individuals from the general population (ascertained through an internet panel) and 461 patients from addiction treatment centers participating in the National Drug Abuse Treatment Clinical Trials Network. Results: Final banks of 37 and 18 items were calibrated for severity of substance use and positive appeal of substance use, respectively, using the two-parameter graded response model from item response theory (IRT). Initial calibrations were similar for the 30-day and 3-month time frames, and final calibrations used data combined across the time frames, making the items applicable with either interval. Seven-item static short forms were also developed from each item bank. Conclusions: Test information curves showed that the PROMIS item banks provided substantial information in a broad range of severity, making them suitable for treatment, observational, and epidemiological research in both clinical and community settings. PMID:26423364
Simulation of flow and water quality of the Arroyo Colorado, Texas, 1989-99
Raines, Timothy H.; Miranda, Roger M.
2002-01-01
A model parameter set for use with the Hydrological Simulation Program—FORTRAN watershed model was developed to simulate flow and water quality for selected properties and constituents for the Arroyo Colorado from the city of Mission to the Laguna Madre, Texas. The model simulates flow, selected water-quality properties, and constituent concentrations. The model can be used to estimate a total maximum daily load for selected properties and constituents in the Arroyo Colorado. The model was calibrated and tested for flow with data measured during 1989–99 at three streamflow-gaging stations. The errors for total flow volume ranged from -0.1 to 29.0 percent, and the errors for total storm volume ranged from -15.6 to 8.4 percent. The model was calibrated and tested for water quality for seven properties and constituents with 1989–99 data. The model was calibrated sequentially for suspended sediment, water temperature, biochemical oxygen demand, dissolved oxygen, nitrate nitrogen, ammonia nitrogen, and orthophosphate. The simulated concentrations of the selected properties and constituents generally matched the measured concentrations available for the calibration and testing periods. The model was used to simulate total point- and nonpoint-source loads for selected properties and constituents for 1989–99 for urban, natural, and agricultural land-use types. About one-third to one-half of the biochemical oxygen demand and nutrient loads are from urban point and nonpoint sources, although only 13 percent of the total land use in the basin is urban.
Kramer, Gary H; Guerriere, Steven
2003-02-01
Lung counters are generally used to measure low energy photons (<100 keV). They are usually calibrated with lung sets that are manufactured from a lung tissue substitute material containing homogeneously distributed activity; however, it is difficult to verify either the activity in the phantom or the homogeneity of the activity distribution without destructive testing. Lung sets can have activities that are as much as 25% different from the expected value. An alternative to using whole lungs to calibrate a lung counter is to use a sliced lung with planar inserts. Experimental work has already indicated that this alternative method of calibration can be a satisfactory substitute. This work has extended the experimental study by using Monte Carlo simulation to validate that sliced and whole lungs are equivalent. It has also determined the optimum slice thicknesses that separate the planar sources in the sliced lung. Slice thicknesses were investigated in the range of 0.5 cm to 9.0 cm and at photon energies from 17 keV to 1,000 keV. Results show that there is little difference between sliced and whole lungs at low energies, provided that the slice thickness is 2.0 cm or less. As the photon energy rises, the slice thickness can increase substantially with no degradation in equivalence.
Remote Calibration Procedure and Results for the Ctbto AS109 STS-2HG at Ybh
NASA Astrophysics Data System (ADS)
Uhrhammer, R. A.; Taira, T.; Hellweg, M.
2013-12-01
Berkeley Digital Seismic Network (BDSN) station YBH, located in Yreka, CA, USA, is certified as Auxiliary Seismic Station 109 (AS109) by the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). YBH, sited in an abandoned hard rock mining drift, houses a Streckeisen STS-2HG triaxial broadband seismometer (the AS109 sensor) and a co-sited three-component set of Streckeisen STS-1 broadband seismometers and a Kinemetrics Episensor strong motion accelerometer (the BDSN sensors). CTBTO requested that we perform a remote calibration test of the STS-2HG (20,000 V/(m/s) nominal sensitivity) to verify its response and sensitivity. The remote calibration test was done successfully on June 17, 2013, and we report here on the procedure and results of the calibration. The calibration of the STS-2HG (s/n 30235) was accomplished using two Random Telegraph (RT) stimuli which were applied to the triaxial U, V, W component calibration coils through an appropriate series resistance to limit the drive current. The first was a four-hour RT at 1.25 Hz (to determine the low-frequency response) and the second was a one-hour RT at 25 Hz (to determine the high-frequency response). The RT stimulus signals were generated by the Kinemetrics Q330 data logger, and both the stimuli and the response were recorded simultaneously with synchronous sampling at 100 sps. The RT calibrations were invoked remotely from Berkeley. The response to the 1.25 Hz RT stimulus was used to determine the seismometer natural period, fraction of critical damping and sensitivity of the STS-2HG sensors, and the response to the 25 Hz RT stimulus was used to determine their corresponding high-frequency response. The accuracy of the sensitivity as determined by the response to the RT stimuli is limited by the accuracy of the calibration coil motor constant (2 g/A) provided on the factory calibration sheet. As a check on the accuracy of the sensitivity determined from the response to the RT stimuli, we also compare the ground motions inferred from the STS-2HG with the corresponding ground motions inferred from the co-sited STS-1s and the Episensor strong motion accelerometer, using seismic signals which have adequate signal-to-noise ratios in the passband common to both instruments.
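One generic way to recover a frequency response from a recorded stimulus/response pair like the RT calibration described above is the cross-spectral ratio H(f) = Pxy(f)/Pxx(f). The sketch below applies this to a synthetic random-telegraph stimulus and a stand-in second-order filter; it illustrates the estimation idea only, not the BDSN processing chain or the STS-2HG response:

```python
# Sketch: estimate a frequency response from stimulus/response records
# via the cross-spectral ratio H(f) = Pxy / Pxx.
import numpy as np
from scipy import signal

fs = 100.0                                    # 100 sps, as in the test
rng = np.random.default_rng(3)
flips = rng.random(360_000) < 1.25 / fs       # ~1.25 Hz mean switching rate
stim = (np.cumsum(flips) % 2) * 2.0 - 1.0     # +/-1 random telegraph signal

b, a = signal.butter(2, 0.5, "highpass", fs=fs)       # stand-in "sensor"
resp = signal.lfilter(b, a, stim) + rng.normal(0, 0.01, stim.size)

f, Pxy = signal.csd(stim, resp, fs=fs, nperseg=4096)  # cross-spectrum
_, Pxx = signal.welch(stim, fs=fs, nperseg=4096)      # stimulus auto-spectrum
H = Pxy / Pxx                                         # frequency response
print(f"|H| at 5 Hz ~ {abs(H[np.argmin(np.abs(f - 5.0))]):.2f}")
```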
NASA Astrophysics Data System (ADS)
Ding, Wei; Xu, Qinghai; Tarasov, Pavel E.
2017-09-01
Human impact is a well-known confounder in pollen-based quantitative climate reconstructions, as most terrestrial ecosystems have been artificially affected to varying degrees. In this paper, we use a human-induced pollen dataset (H-set) and a corresponding natural pollen dataset (N-set) to establish pollen-climate calibration sets for temperate eastern China (TEC). The two calibration sets, taking a weighted averaging partial least squares (WA-PLS) approach, are used to reconstruct past climate variables from a fossil record, which is located at the margin of the East Asian summer monsoon in north-central China and covers the late glacial and Holocene from 14.7 ka BP (thousands of years before AD 1950). Ordination results suggest that mean annual precipitation (Pann) is the main explanatory variable of both pollen composition and percentage distributions in both datasets. The Pann reconstructions, based on the two calibration sets, demonstrate consistently similar patterns and general trends, suggesting a relatively strong climate impact on the regional vegetation and pollen spectra. However, our results also indicate that the human impact may obscure climate signals derived from fossil pollen assemblages. In a test with modern climate and pollen data, the Pann influence on pollen distribution decreases in the H-set, while the human influence index (HII) rises. Moreover, the relatively strong human impact reduces woody pollen taxa abundances, particularly in the subhumid forested areas. Consequently, this shifts their model-inferred Pann optima to the arid end of the gradient compared to Pann tolerances in the natural dataset, and further produces distinct deviations when the total tree pollen percentages are high (i.e. about 40 % for the Gonghai area) in the fossil sequence. In summary, the calibration set with human impact used in our experiment can produce a reliable general pattern of past climate, but the human impact on vegetation affects the pollen-climate relationship and biases the pollen-based climate reconstruction. The extent of human-induced bias may be rather small for the entire late glacial and early Holocene interval when we use a reference set called natural. Nevertheless, this potential bias should be kept in mind when conducting quantitative reconstructions, especially for the recent 2 or 3 millennia.
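For orientation, the weighted-averaging core of the WA-PLS approach used here can be sketched in a few lines: taxon Pann optima are abundance-weighted means over the calibration sites, and a reconstruction is the optima-weighted mean over a fossil assemblage. Full WA-PLS adds PLS components and deshrinking, which are omitted; all data below are synthetic:

```python
# Sketch of the weighted-averaging core of WA-PLS (no PLS components,
# no deshrinking): taxon optima and a one-sample reconstruction.
import numpy as np

rng = np.random.default_rng(6)
pollen = rng.dirichlet(np.ones(20), size=100)   # 100 sites x 20 taxa
pann = rng.uniform(100.0, 900.0, 100)           # site Pann (mm)

# Taxon optima: pollen-abundance-weighted means of Pann
optima = (pollen * pann[:, None]).sum(axis=0) / pollen.sum(axis=0)

fossil = rng.dirichlet(np.ones(20))             # one fossil assemblage
pann_hat = (fossil * optima).sum() / fossil.sum()
print(f"reconstructed Pann: {pann_hat:.0f} mm")
```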
SU-E-T-638: Evaluation and Comparison of Landauer Microstar (OSLD) Readers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Souri, S; Ahmed, Y; Cao, Y
2014-06-15
Purpose: To evaluate and compare the characteristic performance of a new Landauer nanodot reader with the previous model. Methods: In order to calibrate and test the reader, a set of nanodots was irradiated using a Varian Truebeam linac. Solid water slabs and bolus were used in the process of irradiation. Calibration sets of nanodots were irradiated for radiation dose ranges of 0 to 10 and 20 to 1000 cGy, using 6 MV photons. Additionally, three sets of nanodots were each irradiated using 6 MV, 10 MV and 15 MV beams. For each beam energy, and selected dose in the range of 3 to 1000 cGy, a pair of nanodots was irradiated and three readings were obtained with both readers. Results: The analysis shows that for the 3 photon beam energies and selected ranges of dose, the calculated absorbed dose agrees well with the expected value. The results illustrate that the new Microstar II reader is a highly consistent system and that the repeated readings provide results with a reasonably small standard deviation. For all practical purposes, the response of the system is linear for all radiation beam energies. Conclusion: The Microstar II nanodot reader is consistent, accurate, and reliable. The new hardware design and corresponding software contain several advantages over the previous model. The automatic repeat reading mechanism, which helps improve reproducibility and reduce processing time, and the smaller unit size, which renders ease of transport, are two such features. The present study shows that for high dose ranges a polynomial calibration equation provides more consistent results. A 3rd-order polynomial calibration curve was used to analyze the readings of dosimeters exposed to high-dose-range radiation, and the results show less error compared to those calculated using the linear calibration curves provided by the Landauer system software for all dose ranges.
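The high-dose calibration choice discussed in this abstract, a 3rd-order polynomial of dose against reader counts rather than a straight line, reduces to a short fit; the counts and doses below are synthetic, not Landauer calibration data:

```python
# Sketch: 3rd-order polynomial calibration of dose against reader counts.
import numpy as np

counts = np.array([1.2e4, 4.1e4, 8.5e4, 1.9e5, 4.6e5, 1.1e6])  # reader counts
dose = np.array([20.0, 50.0, 100.0, 250.0, 500.0, 1000.0])     # delivered cGy

coef = np.polyfit(counts, dose, 3)        # 3rd-order calibration curve
print(f"estimated dose at 2e5 counts: {np.polyval(coef, 2.0e5):.0f} cGy")
```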
Validation of geometric models for fisheye lenses
NASA Astrophysics Data System (ADS)
Schneider, D.; Schwalbe, E.; Maas, H.-G.
The paper focuses on the photogrammetric investigation of geometric models for different types of optical fisheye constructions (equidistant, equisolid-angle, stereographic and orthographic projection). These models were implemented and thoroughly tested in a spatial resection and a self-calibrating bundle adjustment. For this purpose, fisheye images were taken with a Nikkor 8 mm fisheye lens on a Kodak DSC 14n Pro digital camera in a hemispherical calibration room. Both the spatial resection and the bundle adjustment resulted in a standard deviation of unit weight of 1/10 pixel with a suitable set of simultaneous calibration parameters introduced into the camera model. The camera-lens combination was treated with all four basic models mentioned above. Using the same set of additional lens distortion parameters, the differences between the models can largely be compensated, delivering almost the same precision parameters. The relative object space precision obtained from the bundle adjustment was ca. 1:10,000 of the object dimensions. This value can be considered a very satisfying result, as fisheye images generally have a lower geometric resolution as a consequence of their large field of view, and also have an inferior imaging quality in comparison to most central perspective lenses.
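For reference, the four basic fisheye projection models investigated above relate the image radius r to the incidence angle θ of an object ray (with focal length f) in the standard textbook forms below; the paper's camera model adds simultaneous calibration and distortion parameters on top of these.

```latex
\begin{align*}
  r &= f\,\theta              && \text{(equidistant)}\\
  r &= 2f\,\sin(\theta/2)     && \text{(equisolid-angle)}\\
  r &= 2f\,\tan(\theta/2)     && \text{(stereographic)}\\
  r &= f\,\sin\theta          && \text{(orthographic)}
\end{align*}
```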
AN ALTERNATIVE CALIBRATION OF CR-39 DETECTORS FOR RADON DETECTION BEYOND THE SATURATION LIMIT.
Franci, Daniele; Aureli, Tommaso; Cardellini, Francesco
2016-12-01
Time-integrated measurements of indoor radon levels are commonly carried out using solid-state nuclear track detectors (SSNTDs), due to the numerous advantages offered by this radiation detection technique. However, the use of SSNTDs also presents some problems that may affect the accuracy of the results. The effect of overlapping tracks often results in the underestimation of the detected track density, which leads to a reduction of the counting efficiency with increasing radon exposure. This article addresses the effect of overlapping tracks by proposing an alternative calibration technique based on the measurement of the fraction of the detector surface covered by alpha tracks. The method has been tested against a set of Monte Carlo data and then applied to a set of experimental data collected at the radon chamber of the Istituto Nazionale di Metrologia delle Radiazioni Ionizzanti, at the ENEA centre in Casaccia, using CR-39 detectors. The method is shown to extend the detectable range of radon exposure far beyond the intrinsic limit imposed by the standard calibration based on track density. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
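One standard way to relate covered surface fraction to track density, consistent with the idea above though not necessarily the authors' exact calibration curve, is the Poisson (Boolean) overlap model F = 1 - exp(-ρA) for randomly placed tracks of mean area A, which inverts to ρ = -ln(1 - F)/A. The mean track area below is an assumed value:

```python
# Sketch: invert the Poisson overlap model to recover true track density
# from the measured covered fraction, beyond the counting saturation limit.
import numpy as np

A = 5e-6                                     # mean track area, cm^2 (assumed)
F = np.array([0.05, 0.30, 0.70, 0.95])       # measured covered fractions
rho = -np.log(1.0 - F) / A                   # true track density, tracks/cm^2
for f_, r_ in zip(F, rho):
    print(f"covered fraction {f_:.2f} -> density {r_:.3e} tracks/cm^2")
```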
Rainfall Estimation over the Nile Basin using an Adapted Version of the SCaMPR Algorithm
NASA Astrophysics Data System (ADS)
Habib, E. H.; Kuligowski, R. J.; Elshamy, M. E.; Ali, M. A.; Haile, A.; Amin, D.; Eldin, A.
2011-12-01
Management of Egypt's Aswan High Dam is critical not only for flood control on the Nile but also for ensuring adequate water supplies for most of Egypt, since rainfall is scarce over the vast majority of its land area. However, reservoir inflow is driven by rainfall over Sudan, Ethiopia, Uganda, and several other countries from which routine rain gauge data are sparse. Satellite-derived estimates of rainfall offer a much more detailed and timely set of data to form a basis for decisions on the operation of the dam. A single-channel infrared algorithm is currently in operational use at the Egyptian Nile Forecast Center (NFC). This study reports on the adaptation of a multi-spectral, multi-instrument satellite rainfall estimation algorithm (Self-Calibrating Multivariate Precipitation Retrieval, SCaMPR) for operational application over the Nile Basin. The algorithm uses a set of rainfall predictors from multi-spectral infrared cloud top observations and self-calibrates them to a set of predictands from microwave (MW) rain rate estimates. For application over the Nile Basin, the SCaMPR algorithm uses multiple satellite IR channels recently available to the NFC from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). Microwave rain rates are acquired from multiple sources such as SSM/I, SSMIS, AMSU, AMSR-E, and TMI. The algorithm has two main steps: rain/no-rain separation using discriminant analysis, and rain rate estimation using stepwise linear regression. We test two modes of algorithm calibration: real-time calibration with continuous updates of coefficients from newly arriving MW rain rates, and calibration using static coefficients derived from IR-MW data from past observations. We also compare the SCaMPR algorithm to other global-scale satellite rainfall algorithms, e.g., the 'Tropical Rainfall Measuring Mission (TRMM) and other sources' product (TRMM-3B42) and the National Oceanic and Atmospheric Administration Climate Prediction Center (NOAA-CPC) CMORPH product. The algorithm has several potential future applications, such as improving the performance accuracy of hydrologic forecasting models over the Nile Basin, and utilizing the enhanced rainfall datasets and better-calibrated hydrologic models to assess the impacts of climate change on the region's water availability.
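The two-step structure described above can be sketched generically: discriminant analysis for the rain/no-rain mask, then a regression for rain rate on the raining pixels. The predictors and MW "truth" below are synthetic, and plain linear regression stands in for the stepwise selection SCaMPR actually uses:

```python
# Sketch: (1) rain/no-rain separation by discriminant analysis on IR
# predictors, (2) rain-rate regression on the pixels flagged as raining.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
ir = rng.normal(size=(5000, 4))                   # IR-derived predictors
raining = ir[:, 0] + 0.5 * ir[:, 1] > 0.8         # synthetic rain mask
rate = np.where(raining, np.exp(ir[:, 0]), 0.0)   # synthetic MW rain rate

clf = LinearDiscriminantAnalysis().fit(ir, raining)       # step 1
reg = LinearRegression().fit(ir[raining], rate[raining])  # step 2

wet = clf.predict(ir)                             # apply both steps
estimate = np.where(wet, reg.predict(ir), 0.0)
print(f"bias: {np.mean(estimate - rate):.3f} mm/h")
```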
Relative Humidity in Limited Streamer Tubes for Stanford Linear Accelerator Center's BaBar Detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lang, M.I.; /MIT; Convery, M.
2005-12-15
The BABAR Detector at the Stanford Linear Accelerator Center studies the decay of B mesons created in e⁺e⁻ collisions. The outermost layer of the detector, used to detect muons and neutral hadrons created during this process, is being upgraded from Resistive Plate Chambers (RPCs) to Limited Streamer Tubes (LSTs). The standard-size LST tube consists of eight cells, with a silver-plated wire running down the center of each. A large potential difference is placed between the wires and ground. Gas flows through a series of modules connected with tubing, typically four. LSTs must be carefully tested before installation, as it will be extremely difficult to repair any damage once installed in the detector. In the testing process, the count rate in most modules was stable and consistent with the cosmic ray rate over an approximately 500 V operating range between 5400 and 5900 V. The count in some modules, however, was shown to spike unexpectedly near the operation point. In general, the modules through which the gas first flows did not show this problem, but those further along the gas chain were much more likely to do so. The suggestion was that this spike was due to higher humidity in the modules furthest from the fresh, dry inflowing gas, and that the water molecules in more humid modules were adversely affecting the modules' performance. This project studied the effect of humidity in the modules, using a small capacitive humidity sensor (Honeywell). The sensor provided a humidity-dependent output voltage, as well as a temperature measurement from a thermistor. A full-size hygrometer (Panametrics) was used for testing and calibrating the Honeywell sensors. First the relative humidity of the air was measured. For the full calibration, a special gas-mixing setup was used, where the relative humidity of the LST gas mixture could be varied from almost dry to almost fully saturated. With the sensor calibrated, a set of sensors was used to measure humidity vs. time in the LSTs. The sensors were placed in two sets of LST modules, one gas line flowing through each set. These modules were tested for count rate vs. voltage while simultaneously measuring relative humidity in each module. One set produced expected readings, while the other showed the spike in count rate. The relative humidity in the two sets of modules looked very similar, but it rose significantly for modules further along the gas chain.
NASA Astrophysics Data System (ADS)
Zhao, Chun-yan; Li, Xin; Wei, Wei; Zheng, Xiao-bing
2016-10-01
With the progress of quantitative remote sensing, the acquisition of surface BRDF becomes more and more important. In order to improve the accuracy of surface BRDF measurements, we introduce a VNIR-SWIR bidirectional reflectance automatic measurement system, developed by the Hefei Institutes of Physical Science (HIPS), that allows in situ measurements of hyperspectral bidirectional reflectance data. Hyperspectral bidirectional reflectance distribution function data sets taken with the BRDF automatic measurement system nominally cover the spectral range between 390 and 2390 nm in 971 bands. In July 2007, September 2008 and June 2011, we acquired a series of BRDF data covering the Dunhuang radiometric calibration test site using the BRDF measurement system; these are the most comprehensive and accurate such data obtained since the site was established in the 1990s. These data are applied to the calibration of FY-2 and other satellite sensors. Field BRDF data of the Dunhuang site surface reveal a strong spectral variability. An anisotropy factor (ANIF), defined as the ratio between the directional reflectance and the nadir reflectance over the hemisphere, is introduced as a surrogate measurement for the extent of spectral BRDF effects. The ANIF data show a very high correlation with the solar zenith angle due to multiple scattering effects over a desert site. Since surface geometry, multiple scattering, and BRDF effects are related, these findings may help to derive BRDF model parameters from in situ BRDF measurements and remotely sensed hyperspectral data sets.
Development of composite calibration standard for quantitative NDE by ultrasound and thermography
NASA Astrophysics Data System (ADS)
Dayal, Vinay; Benedict, Zach G.; Bhatnagar, Nishtha; Harper, Adam G.
2018-04-01
Inspection of aircraft components for damage utilizing ultrasonic Non-Destructive Evaluation (NDE) is a time-intensive endeavor. Additional time spent during aircraft inspections translates to added cost for the company performing them, and as such, reducing this expenditure is of great importance. There is also great variance in the calibration samples from one entity to another due to the lack of a common calibration set. By characterizing damage types, we can condense the required calibration sets and reduce the time required to perform calibration, while also providing procedures for the fabrication of these standard sets. We present here our effort to fabricate composite samples with known defects and to quantify the size and location of defects such as delaminations and impact damage. Ultrasonic and thermographic images are digitally enhanced to accurately measure the damage size. Ultrasonic NDE is compared with thermography.
Prototype ultrasonic instrument for quantitative testing
NASA Technical Reports Server (NTRS)
Lynnworth, L. C.; Dubois, J. L.; Kranz, P. R.
1972-01-01
A prototype ultrasonic instrument has been designed and developed for quantitative testing. The complete delivered instrument consists of a pulser/receiver which plugs into a standard oscilloscope, an rf power amplifier, a standard decade oscillator, and a set of broadband transducers for typical use at 1, 2, 5 and 10 MHz. The system provides for its own calibration, and on the oscilloscope, presents a quantitative (digital) indication of time base and sensitivity scale factors and some measurement data.
Investigating the Effects of Magnetic Variations on Inertial/Magnetic Orientation Sensors
2007-09-01
caused by test objects, a track was constructed using nonferrous materials and set so that the orientation of an inertial/magnetic sensor module...states ◆ metal filing cabinet ◆ mobile robot, unpowered, powered, and motor engaged. The MicroStrain 3DM-G sensor module is factory calibrated and...triad of the sensor module approached a large metal filing cabinet. The deviations for this test object are the largest of any observed in the
The effect of rainfall measurement uncertainties on rainfall-runoff processes modelling.
Stransky, D; Bares, V; Fatka, P
2007-01-01
Rainfall data are a crucial input for various tasks concerning the wet weather period. Nevertheless, their measurement is affected by random and systematic errors that cause an underestimation of the rainfall volume. Therefore, the general objective of the presented work was to assess the credibility of measured rainfall data and to evaluate the effect of measurement errors on urban drainage modelling tasks. Within the project, a calibration methodology for tipping bucket rain gauges (TBRs) was defined and assessed in terms of uncertainty analysis. A set of 18 TBRs was calibrated and the results were compared to the previous calibration, which enabled the ageing of the TBRs to be evaluated. A propagation of calibration and other systematic errors through the rainfall-runoff model was performed on an experimental catchment. It was found that TBR calibration is important mainly for tasks connected with the assessment of peak values and high flow durations. The omission of calibration leads to up to 30% underestimation, and the effect of other systematic errors can add a further 15%. The TBR calibration should be repeated every two years in order to keep up with the ageing of the TBR mechanics. Further, the authors recommend adjusting the dynamic test duration proportionally to the generated rainfall intensity.
Radiometric analysis of the longwave infrared channel of the Thematic Mapper on LANDSAT 4 and 5
NASA Technical Reports Server (NTRS)
Schott, John R.; Volchok, William J.; Biegel, Joseph D.
1986-01-01
The first objective was to evaluate the postlaunch radiometric calibration of the LANDSAT Thematic Mapper (TM) band 6 data. The second objective was to determine to what extent surface temperatures could be computed from the TM band 6 data using atmospheric propagation models. To accomplish this, ground truth data were compared to a single TM-4 band 6 data set. This comparison indicated satisfactory agreement over a narrow temperature range. The atmospheric propagation model (modified LOWTRAN 5A) was used to predict surface temperature values based on the radiance at the spacecraft. The aircraft data were calibrated using a multi-altitude profile calibration technique which had been extensively tested in previous studies. This aircraft calibration permitted measurement of surface temperatures based on the radiance reaching the aircraft. When these temperature values are evaluated, an error in the satellite's ability to predict surface temperatures can be estimated. This study indicated that, by carefully accounting for various sensor calibration and atmospheric propagation effects, an expected error (1 standard deviation) in surface temperature would be 0.9 K. This assumes no error in surface emissivity and no sampling error due to target location. These results indicate that the satellite calibration is within nominal limits, to within this study's ability to measure error.
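The conversion from at-sensor band 6 radiance to temperature, before any LOWTRAN-style atmospheric correction of the kind evaluated above, is the inverted Planck relation T = K2 / ln(K1/L + 1). The sketch below uses the commonly published Landsat 5 TM thermal constants and omits emissivity and atmospheric terms:

```python
# Sketch: at-sensor brightness temperature from TM band 6 radiance.
import numpy as np

K1 = 607.76   # W / (m^2 sr um), commonly published Landsat 5 TM constant
K2 = 1260.56  # K, commonly published Landsat 5 TM constant

def brightness_temperature(L):
    """Inverted Planck relation: T = K2 / ln(K1 / L + 1)."""
    return K2 / np.log(K1 / L + 1.0)

print(brightness_temperature(np.array([8.0, 9.5, 11.0])))  # roughly 290-315 K
```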
NASA Astrophysics Data System (ADS)
Chang, Vivide Tuan-Chyan; Merisier, Delson; Yu, Bing; Walmer, David K.; Ramanujam, Nirmala
2011-03-01
A significant challenge in detecting cervical pre-cancer in low-resource settings is the lack of effective screening facilities and trained personnel to detect the disease before it is advanced. Light-based technologies, particularly quantitative optical spectroscopy, have the potential to provide an effective, low-cost, and portable solution for cervical pre-cancer screening in these communities. We have developed and characterized a portable USB-powered optical spectroscopic system to quantify the total hemoglobin content, hemoglobin saturation, and reduced scattering coefficient of cervical tissue in vivo. The system consists of a high-power LED as the light source, a bifurcated fiber optic assembly, and two USB spectrometers for sample and calibration spectra acquisition. The system was subsequently tested in Leogane, Haiti, where diffuse reflectance spectra from 33 colposcopically normal sites in 21 patients were acquired. Two different calibration methods, i.e., a post-study diffuse reflectance standard measurement and a real-time self-calibration channel, were studied. Our results suggest that a self-calibration channel enabled more accurate extraction of scattering contrast through simultaneous real-time correction of intensity drifts in the system. A self-calibration system also minimizes operator bias and required training. Hence, future contact spectroscopy or imaging systems should incorporate a self-calibration channel to reliably extract scattering contrast.
13. VIEW FROM COLD CALIBRATION BLOCKHOUSE LOOKING DOWN CONNECTING TUNNEL ...
13. VIEW FROM COLD CALIBRATION BLOCKHOUSE LOOKING DOWN CONNECTING TUNNEL TO COLD CALIBRATION TEST STAND BASEMENT, SHOWING HARD WIRE CONNECTION (INSTRUMENTATION AND CONTROL). - Marshall Space Flight Center, East Test Area, Cold Calibration Test Stand, Huntsville, Madison County, AL
2. VIEW NORTHWEST FROM LEFT TO RIGHT: COLD CALIBRATION BLOCKHOUSE, ...
2. VIEW NORTHWEST FROM LEFT TO RIGHT: COLD CALIBRATION BLOCKHOUSE, COLD CALIBRATION TEST STAND FOR F-1 ENGINE FOR SATURN V. EXHAUST DUCT IN FOREGROUND. - Marshall Space Flight Center, East Test Area, Cold Calibration Test Stand, Huntsville, Madison County, AL
NASA Astrophysics Data System (ADS)
Dillner, A. M.; Takahama, S.
2015-06-01
Elemental carbon (EC) is an important constituent of atmospheric particulate matter because it absorbs solar radiation, influencing climate and visibility, and it adversely affects human health. The EC measured by thermal methods such as Thermal-Optical Reflectance (TOR) is operationally defined as the carbon that volatilizes from quartz filter samples at elevated temperatures in the presence of oxygen. Here, methods are presented to accurately predict TOR EC using Fourier Transform Infrared (FT-IR) absorbance spectra from atmospheric particulate matter collected on polytetrafluoroethylene (PTFE or Teflon) filters. This method is similar to the procedure tested and developed for OC in prior work (Dillner and Takahama, 2015). Transmittance FT-IR analysis is rapid, inexpensive, and non-destructive to the PTFE filter samples, which are routinely collected for mass and elemental analysis in monitoring networks. FT-IR absorbance spectra are obtained from 794 filter samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to collocated TOR EC measurements. The FT-IR spectra are divided into calibration and test sets. Two calibrations are developed: one from a uniform distribution of samples across the EC mass range (Uniform EC) and one from a uniform distribution of low EC mass samples (EC < 2.4 μg; Low Uniform EC). A hybrid approach, which applies the low EC calibration to low EC samples and the Uniform EC calibration to all other samples, produces predictions for low EC samples with mean error on par with parallel TOR EC samples in the same mass range and an estimate of the minimum detection limit (MDL) on par with the TOR EC MDL. For all samples, this hybrid approach leads to precise and accurate TOR EC predictions by FT-IR, as indicated by a high coefficient of determination (R² = 0.96), no bias (0.00 μg m⁻³; concentration value based on the nominal IMPROVE sample volume of 32.8 m³), low error (0.03 μg m⁻³) and reasonable normalized error (21%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. Only the normalized error is higher for the FT-IR EC measurements than for collocated TOR. FT-IR spectra are also divided into calibration and test sets by the ratios OC/EC and ammonium/EC to determine the impact of OC and ammonium on EC prediction. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR EC in IMPROVE network samples, providing complementary information to TOR OC predictions (Dillner and Takahama, 2015) and the organic functional group composition and organic matter (OM) estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
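A minimal sketch of the hybrid prediction scheme follows: two PLS calibrations, one trained on the full EC range and one on low-EC samples only, with predictions routed to the low-EC model below the 2.4 μg cutoff. The routing rule (using the uniform model's own prediction to decide) and all data are assumptions for illustration, since the text does not specify how low-EC samples are identified at prediction time:

```python
# Sketch: hybrid Uniform-EC / Low-Uniform-EC PLS calibration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 300))              # stand-in FT-IR spectra
ec = np.abs(X[:, :10].sum(axis=1)) + 0.1     # stand-in TOR EC (ug)

low = ec < 2.4
pls_uniform = PLSRegression(n_components=8).fit(X, ec)
pls_low = PLSRegression(n_components=8).fit(X[low], ec[low])

X_new = rng.normal(size=(50, 300))
pred = pls_uniform.predict(X_new).ravel()
route_low = pred < 2.4                       # send low predictions to low model
if route_low.any():
    pred[route_low] = pls_low.predict(X_new[route_low]).ravel()
print(pred[:5])
```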
NASA Technical Reports Server (NTRS)
Waterman, A. W.; Huxford, R. L.; Nelson, W. G.
1976-01-01
Molded high temperature plastic first and second stage rod seal elements were evaluated in seal assemblies to determine performance characteristics. These characteristics were compared with the performance of machined seal elements. The 6.35 cm second stage Chevron seal assembly was tested using molded Chevrons fabricated from five molding materials. Impulse screening tests conducted over a range of 311 K to 478 K revealed thermal setting deficiencies in the aromatic polyimide molding materials. Seal elements fabricated from aromatic copolyester materials structurally failed during impulse cycle calibration. Endurance testing of 3.85 million cycles at 450 K using MIL-H-83283 fluid showed poorer seal performance with the unfilled aromatic polyimide material than had been attained with seals machined from Vespel SP-21 material. The 6.35 cm first stage step-cut compression loaded seal ring fabricated from copolyester injection molding material failed structurally during impulse cycle calibration. Molding of complex shape rod seals was shown to be a potentially controllable technique, but additional molding material property testing is recommended.
Multi-model ensemble hydrologic prediction using Bayesian model averaging
NASA Astrophysics Data System (ADS)
Duan, Qingyun; Ajami, Newsha K.; Gao, Xiaogang; Sorooshian, Soroosh
2007-05-01
A multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models. This paper studies the use of a Bayesian model averaging (BMA) scheme to develop more skillful and reliable probabilistic hydrologic predictions from multiple competing predictions made by several hydrologic models. BMA is a statistical procedure that infers consensus predictions by weighing individual predictions based on their probabilistic likelihood measures, with the better performing predictions receiving higher weights than the worse performing ones. Furthermore, BMA provides a more reliable description of the total predictive uncertainty than the original ensemble, leading to a sharper and better calibrated probability density function (PDF) for the probabilistic predictions. In this study, a nine-member ensemble of hydrologic predictions was used to test and evaluate the BMA scheme. This ensemble was generated by calibrating three different hydrologic models using three distinct objective functions. These objective functions were chosen in a way that forces the models to capture certain aspects of the hydrograph well (e.g., peaks, mid-flows and low flows). Two sets of numerical experiments were carried out on three test basins in the US to explore the best way of using the BMA scheme. In the first set, a single set of BMA weights was computed to obtain BMA predictions, while the second set employed multiple sets of weights, with distinct sets corresponding to different flow intervals. In both sets, the streamflow values were transformed using the Box-Cox transformation to ensure that the probability distribution of the prediction errors is approximately Gaussian. A split sample approach was used to obtain and validate the BMA predictions. The test results showed that the BMA scheme generates more skillful and equally reliable probabilistic predictions than the original ensemble. The performance of the expected BMA predictions in terms of daily root mean square error (DRMS) and daily absolute mean error (DABS) is generally superior to that of the best individual predictions. Furthermore, the BMA predictions employing multiple sets of weights are generally better than those using a single set of weights.
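The weight estimation at the heart of BMA can be sketched as an expectation-maximization loop; this is a minimal version assuming Gaussian member error distributions with a single shared variance on the Box-Cox-transformed scale (the paper may use per-member variances), with hypothetical arrays preds (T x K member predictions) and obs (T observations).

    import numpy as np
    from scipy.stats import norm

    def bma_em(preds, obs, n_iter=200, tol=1e-6):
        T, K = preds.shape
        w = np.full(K, 1.0 / K)                      # start from equal weights
        sigma2 = np.var(obs - preds.mean(axis=1))    # initial predictive variance
        ll_old = -np.inf
        for _ in range(n_iter):
            # E-step: responsibility of each member for each observation
            dens = norm.pdf(obs[:, None], loc=preds, scale=np.sqrt(sigma2))
            num = w * dens
            z = num / (num.sum(axis=1, keepdims=True) + 1e-300)
            # M-step: update weights and the shared variance
            w = z.mean(axis=0)
            sigma2 = np.sum(z * (obs[:, None] - preds) ** 2) / T
            ll = np.sum(np.log(num.sum(axis=1) + 1e-300))
            if ll - ll_old < tol:
                break
            ll_old = ll
        return w, sigma2   # BMA PDF: sum_k w[k] * N(preds[:, k], sigma2)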
Ottaway, Josh; Farrell, Jeremy A; Kalivas, John H
2013-02-05
An essential part of calibration is establishing the analyte calibration reference samples. These samples must characterize the sample matrix and measurement conditions (chemical, physical, instrumental, and environmental) of any sample to be predicted. Calibration usually requires measuring spectra for numerous reference samples in addition to determining the corresponding analyte reference values. Both tasks are typically time-consuming and costly. This paper reports on a method named pure component Tikhonov regularization (PCTR) that does not require laboratory prepared or determined reference values. Instead, an analyte pure component spectrum is used in conjunction with nonanalyte spectra for calibration. Nonanalyte spectra can be from different sources including pure component interference samples, blanks, and constant analyte samples. The approach is also applicable to calibration maintenance when the analyte pure component spectrum is measured in one set of conditions and nonanalyte spectra are measured in new conditions. The PCTR method balances the trade-offs between calibration model shrinkage and the degree of orthogonality to the nonanalyte content (model direction) in order to obtain accurate predictions. Using visible and near-infrared (NIR) spectral data sets, the PCTR results are comparable to those obtained using ridge regression (RR) with reference calibration sets. The flexibility of PCTR also allows including reference samples if such samples are available.
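A rough sketch of the PCTR idea under stated assumptions: the regression vector b is chosen to give unit response to the analyte pure-component spectrum while suppressing response to nonanalyte spectra, with Tikhonov shrinkage controlling the trade-off. Enforcing the unit-response condition as a heavily weighted least-squares row is one of several possible formulations, not necessarily the authors' exact one.

    import numpy as np

    def pctr(pure, nonanalyte, lam=1e-3, eta=1.0):
        # pure: (p,) analyte pure-component spectrum
        # nonanalyte: (m, p) blank / interference / constant-analyte spectra
        # Solve min ||eta * N b||^2 + lam^2 ||b||^2 with pure . b ~= 1
        p = pure.size
        big = 1e4  # weight that softly enforces unit response to the pure spectrum
        A = np.vstack([big * pure[None, :], lam * np.eye(p), eta * nonanalyte])
        y = np.concatenate([[big], np.zeros(p + nonanalyte.shape[0])])
        b, *_ = np.linalg.lstsq(A, y, rcond=None)
        return b   # prediction for a new spectrum x: x @ b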
NASA Astrophysics Data System (ADS)
Hernández-Almeida, I.; Cortese, G.; Yu, P.-S.; Chen, M.-T.; Kucera, M.
2017-08-01
Radiolarians are a very diverse microzooplanktonic group, often distributed in regionally restricted assemblages and responding to specific environmental factors. These properties of radiolarian assemblages make the group well suited to the development and application of basin-wide ecological models. Here we use a new surface sediment data set from the western Pacific to demonstrate that ecological patterns derived from basin-wide open-ocean data sets cannot be transferred to semi-restricted marginal seas. The data set consists of 160 surface sediment samples from three tropical-subtropical regions (East China Sea, South China Sea, and western Pacific), combining 54 new assemblage counts with taxonomically harmonized data from previous studies. Multivariate statistical analyses indicate that winter sea surface temperature at 10 m depth (SSTw) was the most significant environmental variable affecting the composition of radiolarian assemblages, allowing the development of an optimal calibration model (Locally Weighted-Weighted Averaging regression with inverse deshrinking, R2cv = 0.88, root-mean-square error of prediction = 1.6°C). The dominant effect of SSTw on radiolarian assemblage composition in the western Pacific is attributed to the East Asian Winter Monsoon (EAWM), which is particularly strong in the marginal seas. To test the applicability of the calibration model to fossil radiolarian assemblages from the marginal seas, it was applied to two downcore records from the Okinawa Trough, covering the last 18 ka. We observe that these assemblages find their most appropriate analogs among modern samples from the marginal basins (East China Sea and South China Sea). Downcore temperature reconstructions at both sites show similarities to known regional SST reconstructions, providing proof of concept for the new radiolarian-based SSTw calibration model.
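The calibration model above is a locally weighted weighted-averaging (LW-WA) regression; the sketch below shows only plain weighted averaging with inverse deshrinking, the core of such transfer functions, assuming hypothetical inputs counts (site-by-taxon relative abundances) and sst (observed winter SST per site). The local weighting step is omitted.

    import numpy as np

    def wa_calibrate(counts, sst):
        # Taxon optima: abundance-weighted mean SST across the training sites
        optima = (counts * sst[:, None]).sum(axis=0) / counts.sum(axis=0)
        # Raw reconstructions shrink toward the mean; inverse deshrinking
        # regresses observed SST on the raw estimates to stretch them back
        raw = (counts * optima[None, :]).sum(axis=1) / counts.sum(axis=1)
        slope, intercept = np.polyfit(raw, sst, 1)
        return optima, slope, intercept

    def wa_predict(counts, optima, slope, intercept):
        raw = (counts * optima[None, :]).sum(axis=1) / counts.sum(axis=1)
        return intercept + slope * raw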
Barry, U; Choubert, J-M; Canler, J-P; Héduit, A; Robin, L; Lessard, P
2012-01-01
This work proposes a procedure to correctly calibrate the parameters of a one-dimensional MBBR dynamic model for nitrification treatment. The study deals with the MBBR configuration with two reactors in series, one for carbon treatment and the other for nitrogen treatment. Because of the influence of the first reactor on the second one, the approach needs a specific calibration strategy. Firstly, a comparison between measured values and simulated ones obtained with default parameters was carried out. Simulated values of filtered COD, NH(4)-N and dissolved oxygen are underestimated and nitrates are overestimated compared with the observed data. Thus, the nitrification rate and oxygen transfer into the biofilm are overestimated. Secondly, a sensitivity analysis was carried out for the parameters and for the COD fractionation. It revealed three classes of sensitive parameters: physical, diffusional and kinetic. A calibration protocol for the MBBR dynamic model was then proposed. It was successfully tested on data recorded at a pilot-scale plant, and a calibrated set of values was obtained for four parameters: the maximum biofilm thickness, the detachment rate, the maximum autotrophic growth rate and the oxygen transfer rate.
NASA Astrophysics Data System (ADS)
Neill, Aaron; Reaney, Sim
2015-04-01
Fully-distributed, physically-based rainfall-runoff models attempt to capture some of the complexity of the runoff processes that operate within a catchment, and have been used to address a variety of issues including water quality and the effect of climate change on flood frequency. Two key issues are prevalent, however, which call into question the predictive capability of such models. The first is the issue of parameter equifinality, which can be responsible for large amounts of uncertainty. The second is whether such models make the right predictions for the right reasons - are the processes operating within a catchment correctly represented, or do the predictive abilities of these models result only from the calibration process? The use of additional data sources, such as environmental tracers, has been shown to help address both of these issues, by allowing multi-criteria model calibration to be undertaken, and by permitting a greater understanding of the processes operating in a catchment and hence a more thorough evaluation of how well catchment processes are represented in a model. Using discharge and oxygen-18 data sets, the ability of the fully-distributed, physically-based CRUM3 model to represent the runoff processes in three sub-catchments in Cumbria, NW England has been evaluated. These catchments (Morland, Dacre and Pow) are part of the River Eden demonstration test catchment project. The oxygen-18 data set was first used to derive transit-time distributions and mean residence times of water for each of the catchments, to gain an integrated overview of the types of processes that were operating. A generalised likelihood uncertainty estimation (GLUE) procedure was then used to calibrate the CRUM3 model for each catchment based on a single discharge data set from each catchment. Transit-time distributions and mean residence times of water obtained from the model using the top 100 behavioural parameter sets for each catchment were then compared to those derived from the oxygen-18 data to see how well the model captured catchment dynamics. The value of incorporating the oxygen-18 data set, as well as discharge data sets from multiple as opposed to single gauging stations in each catchment, into the calibration process to improve the predictive capability of the model was then investigated. This was achieved by assessing how much the identifiability of the model parameters and the ability of the model to represent the runoff processes operating in each catchment improved with the inclusion of the additional data sets, with respect to the likely costs that would be incurred in obtaining the data sets themselves.
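A minimal sketch of the GLUE step described above, assuming a hypothetical simulate(theta) function that runs CRUM3 (or any rainfall-runoff model) and returns a discharge series; a Nash-Sutcliffe likelihood and the top-100 behavioural cutoff follow the abstract, while the uniform priors and sample size are illustrative.

    import numpy as np

    def glue_behavioural(simulate, priors, obs, n_samples=10000, n_keep=100, seed=None):
        rng = np.random.default_rng(seed)
        lo, hi = np.array(priors, dtype=float).T
        thetas = rng.uniform(lo, hi, size=(n_samples, len(priors)))

        def nse(sim):  # Nash-Sutcliffe efficiency as the informal likelihood
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        scores = np.array([nse(simulate(t)) for t in thetas])
        best = np.argsort(scores)[::-1][:n_keep]   # top behavioural parameter sets
        return thetas[best], scores[best]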
MacFarlane, Michael; Wong, Daniel; Hoover, Douglas A; Wong, Eugene; Johnson, Carol; Battista, Jerry J; Chen, Jeff Z
2018-03-01
In this work, we propose a new method of calibrating cone beam computed tomography (CBCT) data sets for radiotherapy dose calculation and plan assessment. The motivation for this patient-specific calibration (PSC) method is to develop an efficient, robust, and accurate CBCT calibration process that is less susceptible to deformable image registration (DIR) errors. Instead of mapping the CT numbers voxel-by-voxel as traditional DIR calibration methods do, the PSC method generates correlation plots between deformably registered planning CT and CBCT voxel values, for each image slice. A linear calibration curve specific to each slice is then obtained by least-squares fitting and applied to the CBCT slice's voxel values. This allows each CBCT slice to be corrected using DIR without altering the patient geometry through regional DIR errors. A retrospective study was performed on 15 head-and-neck cancer patients, each having routine CBCTs and a middle-of-treatment re-planning CT (reCT). The original treatment plan was re-calculated on each patient's reCT image set (serving as the gold standard) as well as on the image sets produced by voxel-to-voxel DIR, density-overriding, and the new PSC calibration methods. The dose accuracy of each calibration method was compared to the reference reCT data set using common dose-volume metrics and 3D gamma analysis. A phantom study was also performed to assess the accuracy of the DIR and PSC CBCT calibration methods compared with planning CT. Compared with the gold standard using reCT, the average dose metric differences were ≤ 1.1% for all three methods (PSC: -0.3%; DIR: -0.7%; density-override: -1.1%). The average gamma pass rates with thresholds 3%, 3 mm were also similar among the three techniques (PSC: 95.0%; DIR: 96.1%; density-override: 94.4%). An automated patient-specific calibration method was thus developed which yielded strong dosimetric agreement with the results obtained using a re-planning CT for head-and-neck patients. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
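The slice-wise calibration lends itself to a very short sketch: for each axial slice, fit a least-squares line from CBCT voxel values to the deformably registered planning-CT values, then apply it to that slice. Array names and shapes are assumptions; real use would need masking of air and failed-registration regions.

    import numpy as np

    def psc_calibrate(cbct, pct_deformed):
        # cbct, pct_deformed: (n_slices, ny, nx) arrays on the same grid
        out = np.empty(cbct.shape, dtype=float)
        for k in range(cbct.shape[0]):
            x = cbct[k].ravel().astype(float)
            y = pct_deformed[k].ravel().astype(float)
            slope, intercept = np.polyfit(x, y, 1)   # per-slice linear calibration curve
            out[k] = slope * cbct[k] + intercept     # geometry untouched, values remapped
        return out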
Comparison of modern icing cloud instruments
NASA Technical Reports Server (NTRS)
Takeuchi, D. M.; Jahnsen, L. J.; Callander, S. M.; Humbert, M. C.
1983-01-01
Intercomparison tests with Particle Measuring Systems (PMS) probes were conducted. Cloud liquid water content (LWC) measurements were also taken with a Johnson and Williams (JW) hot-wire device and an icing rate device (Leigh IDS). Tests included varying cloud LWC (0.5 to 5 g/m3), cloud median volume diameter (MVD) (15 to 26 microns), temperature (-29 to 20 C), and air speeds (50 to 285 mph). Comparisons were based upon evaluating probe estimates of cloud LWC and median volume diameter for given tunnel settings. Variations of plus or minus 10% and plus or minus 5% in LWC and MVD, respectively, were determined for spray clouds between tests made at given tunnel settings (fixed LWC, MVD, and air speed), indicating cloud conditions were highly reproducible. Although LWC measurements from the JW and Leigh devices were consistent with tunnel values, individual probe measurements either consistently over- or underestimated tunnel values by factors ranging from about 0.2 to 2. The range amounted to a factor of 6 difference between LWC estimates of the probes for given cloud conditions. For given cloud conditions, estimates of cloud MVD between probes were within plus or minus 3 microns in 93% of the test cases. Measurements overestimated tunnel values in the range between 10 to 20 microns. The need for improving currently used calibration procedures was indicated. Establishment of a test facility (or facilities), such as an icing tunnel, where instruments can be calibrated against known cloud standards would be a logical choice.
NASA Astrophysics Data System (ADS)
Hawdon, Aaron; McJannet, David; Wallace, Jim
2014-06-01
The cosmic-ray probe (CRP) provides continuous estimates of soil moisture over an area of ~30 ha by counting fast neutrons produced from cosmic rays which are predominantly moderated by water molecules in the soil. This paper describes the setup, measurement correction procedures, and field calibration of CRPs at nine locations across Australia with contrasting soil type, climate, and land cover. These probes form the inaugural Australian CRP network, which is known as CosmOz. CRP measurements require neutron count rates to be corrected for effects of atmospheric pressure, water vapor pressure changes, and variations in incoming neutron intensity. We assess the magnitude and importance of these corrections and present standardized approaches for network-wide analysis. In particular, we present a new approach to correct for incoming neutron intensity variations and test its performance against existing procedures used in other studies. Our field calibration results indicate that a generalized calibration function for relating neutron counts to soil moisture is suitable for all soil types, with the possible exception of very sandy soils with low water content. Using multiple calibration data sets, we demonstrate that the generalized calibration function only applies after accounting for persistent sources of hydrogen in the soil profile. Finally, we demonstrate that by following standardized correction procedures and scaling neutron counting rates of all CRPs to a single reference location, differences in calibrations between sites are related to site biomass. This observation provides a means for estimating biomass at a given location or for deriving coefficients for the calibration function in the absence of field calibration data.
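The corrections have standard published forms, sketched below with the commonly quoted coefficient values; the exact coefficients and reference values used by CosmOz may differ, and the calibration-function constants are the Desilets-style defaults, so treat all numbers as assumptions.

    import numpy as np

    def correct_neutrons(N_raw, P, P_ref, h, h_ref, I, I_ref, L=130.0):
        f_p = np.exp((P - P_ref) / L)        # barometric pressure (hPa); L ~ attenuation length
        f_wv = 1.0 + 0.0054 * (h - h_ref)    # absolute humidity (g m-3)
        f_i = I_ref / I                      # incoming neutron intensity
        return N_raw * f_p * f_wv * f_i

    def soil_moisture(N, N0, a0=0.0808, a1=0.372, a2=0.115):
        # Generalized calibration function; N0 is the count rate over dry soil
        return a0 / (N / N0 - a1) - a2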
Wold, Jens Petter; Veiseth-Kent, Eva; Høst, Vibeke; Løvland, Atle
2017-01-01
The main objective of this work was to develop a method for rapid and non-destructive detection and grading of wooden breast (WB) syndrome in chicken breast fillets. Near-infrared (NIR) spectroscopy was chosen as the detection method, and an industrial NIR scanner was applied and tested for large-scale on-line detection of the syndrome. Two approaches were evaluated for discrimination of WB fillets: 1) linear discriminant analysis based on NIR spectra only, and 2) a regression model for protein based on NIR spectra, with the estimated protein concentrations used for discrimination. A sample set of 197 fillets was used for training and calibration. A test set was recorded under industrial conditions and contained spectra from 79 fillets. The classification methods obtained 99.5-100% correct classification of the calibration set and 100% correct classification of the test set. The NIR scanner was then installed in a commercial chicken processing plant and could detect incidence rates of WB in large batches of fillets. Examples of incidence are shown for three broiler flocks in which a high number of fillets (9063, 6330 and 10483) were effectively measured. Prevalences of WB of 0.1%, 6.6% and 8.5% were estimated for these flocks based on the complete sample volumes. Such an on-line system can be used to alleviate the challenges WB represents to the poultry meat industry. It enables automatic quality sorting of chicken fillets into different product categories. Laborious manual grading can be avoided. Incidences of WB from different farms and flocks can be tracked, and the information can be used to understand and point out the main causes of WB in chicken production. This knowledge can be used to improve production procedures and reduce today's extensive occurrence of WB.
NASA Technical Reports Server (NTRS)
Ulich, B. L.; Rhodes, P. J.; Davis, J. H.; Hollis, J. M.
1980-01-01
Careful observations have been made at 86.1 GHz to derive the absolute brightness temperatures of the sun (7914 ± 192 K), Venus (357.5 ± 13.1 K), Jupiter (179.4 ± 4.7 K), and Saturn (153.4 ± 4.8 K) with a standard error of about three percent. This is a significant improvement in accuracy over previous results at millimeter wavelengths. A stable transmitter and novel superheterodyne receiver were constructed and used to determine the effective collecting area of the Millimeter Wave Observatory (MWO) 4.9-m antenna relative to a previously calibrated standard gain horn. The thermal scale was set by calibrating the radiometer with carefully constructed and tested hot and cold loads. The brightness temperatures may be used to establish an absolute calibration scale and to determine the antenna aperture and beam efficiencies of other radio telescopes at 3.5-mm wavelength.
Photometric calibration of the COMBO-17 survey with the Softassign Procrustes Matching method
NASA Astrophysics Data System (ADS)
Sheikhbahaee, Z.; Nakajima, R.; Erben, T.; Schneider, P.; Hildebrandt, H.; Becker, A. C.
2017-11-01
Accurate photometric calibration of optical data is crucial for photometric redshift estimation. We present the Softassign Procrustes Matching (SPM) method to improve the colour calibration upon the commonly used Stellar Locus Regression (SLR) method for the COMBO-17 survey. Our colour calibration approach can be categorised as a point-set matching method, which is frequently used in medical imaging and pattern recognition. We attain a photometric redshift precision Δz/(1 + zs) of better than 2 per cent. Our method is based on aligning the stellar locus of the uncalibrated stars to that of a spectroscopic sample of the Sloan Digital Sky Survey standard stars. We achieve our goal by finding a correspondence matrix between the two point-sets and applying the matrix to estimate the appropriate translations in multidimensional colour space. The SPM method is able to find the translation between two point-sets, despite the existence of noise and incompleteness of the common structures in the sets, as long as there is a distinct structure in at least one of the colour-colour pairs. We demonstrate the precision of our colour calibration method with a mock catalogue. The SPM colour calibration code is publicly available at https://neuronphysics@bitbucket.org/neuronphysics/spm.git.
Mixed Model Association with Family-Biased Case-Control Ascertainment.
Hayeck, Tristan J; Loh, Po-Ru; Pollack, Samuela; Gusev, Alexander; Patterson, Nick; Zaitlen, Noah A; Price, Alkes L
2017-01-05
Mixed models have become the tool of choice for genetic association studies; however, standard mixed model methods may be poorly calibrated or underpowered under family sampling bias and/or case-control ascertainment. Previously, we introduced a liability threshold-based mixed model association statistic (LTMLM) to address case-control ascertainment in unrelated samples. Here, we consider family-biased case-control ascertainment, where case and control subjects are ascertained non-randomly with respect to family relatedness. Previous work has shown that this type of ascertainment can severely bias heritability estimates; we show here that it also impacts mixed model association statistics. We introduce a family-based association statistic (LT-Fam) that is robust to this problem. Similar to LTMLM, LT-Fam is computed from posterior mean liabilities (PML) under a liability threshold model; however, LT-Fam uses published narrow-sense heritability estimates to avoid the problem of biased heritability estimation, enabling correct calibration. In simulations with family-biased case-control ascertainment, LT-Fam was correctly calibrated (average χ2 = 1.00-1.02 for null SNPs), whereas the Armitage trend test (ATT), standard mixed model association (MLM), and the case-control retrospective association test (CARAT) were mis-calibrated (e.g., average χ2 = 0.50-1.22 for MLM, 0.89-2.65 for CARAT). LT-Fam also attained higher power than other methods in some settings. In 1,259 type 2 diabetes-affected case subjects and 5,765 control subjects from the CARe cohort, downsampled to induce family-biased ascertainment, LT-Fam was correctly calibrated whereas ATT, MLM, and CARAT were again mis-calibrated. Our results highlight the importance of modeling family sampling bias in case-control datasets with related samples. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Cook, M.
1990-01-01
Qualification testing was performed on Combustion Engineering's AMDATA Intraspect/98 Data Acquisition and Imaging System as applied to the redesigned solid rocket motor field joint capture feature case-to-insulation bondline inspection. Testing was performed at M-111, the Thiokol Corp. Inert Parts Preparation Building. The purpose of the inspection was to verify the integrity of the capture feature area case-to-insulation bondline. The capture feature scanner was calibrated over an intentional 1.0 by 1.0 in. case-to-insulation unbond. The capture feature scanner was then used to scan 60 deg of a capture feature field joint. Calibration of the capture feature scanner was then rechecked over the intentional unbond to ensure that the calibration settings did not change during the case scan. This procedure was successfully performed five times to qualify the unbond detection capability of the capture feature scanner. The capture feature scanner qualified in this test contains many points of mechanical instability that can affect the overall ultrasonic signal response. A new generation scanner, designated the sigma scanner, should be implemented to replace the current configuration scanner. The sigma scanner eliminates the unstable connection points of the current scanner and has additional inspection capabilities.
Parastar, Hadi; Mostafapour, Sara; Azimi, Gholamhasan
2016-01-01
Comprehensive two-dimensional gas chromatography with flame ionization detection, combined with unfolded partial least squares, is proposed as a simple, fast and reliable method to assess the quality of gasoline and to detect its potential adulterants. The data for the calibration set are first baseline corrected using a two-dimensional asymmetric least squares algorithm. The number of significant partial least squares components used to build the model is determined using the minimum value of the root-mean-square error of leave-one-out cross-validation, which was 4. In this regard, blends of gasoline with kerosene, white spirit and paint thinner, as frequently used adulterants, are used to make the calibration samples. Appropriate statistical parameters, a regression coefficient of 0.996-0.998, root-mean-square error of prediction of 0.005-0.010 and relative error of prediction of 1.54-3.82% for the calibration set, show the reliability of the developed method. In addition, the developed method is externally validated with three samples in the validation set (with a relative error of prediction below 10.0%). Finally, to test the applicability of the proposed strategy for the analysis of real samples, five real gasoline samples collected from gas stations are used, and the gasoline proportions were in the range of 70-85%. Also, the relative standard deviations were below 8.5% for different samples in the prediction set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
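The asymmetric least squares baseline correction mentioned above is commonly implemented following Eilers; this one-dimensional sketch conveys the idea (the paper applies a two-dimensional variant to the GCxGC surface), with the smoothness lam and asymmetry p given illustrative defaults.

    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import spsolve

    def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
        n = y.size
        D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(n, n - 2))  # 2nd-difference operator
        w = np.ones(n)
        for _ in range(n_iter):
            W = sparse.diags(w)
            z = spsolve(sparse.csc_matrix(W + lam * D @ D.T), w * y)
            w = np.where(y > z, p, 1 - p)   # points above the baseline get low weight
        return z   # subtract from y to baseline-correct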
9. COLD CALIBRATION TEST STAND (H1) FROM LEFT TO RIGHT ...
9. COLD CALIBRATION TEST STAND (H-1) FROM LEFT TO RIGHT - WORK BENCH, CONTROL PANEL, CHEMICAL TANK. - Marshall Space Flight Center, East Test Area, Cold Calibration Test Stand, Huntsville, Madison County, AL
Evaluation of EIT system performance.
Yasin, Mamatjan; Böhm, Stephan; Gaggero, Pascal O; Adler, Andy
2011-07-01
An electrical impedance tomography (EIT) system images internal conductivity from surface electrical stimulation and measurement. Such systems necessarily comprise multiple design choices, from cables and hardware design to calibration and image reconstruction. In order to compare EIT systems and study the consequences of changes in system performance, this paper describes a systematic approach to evaluating the performance of EIT systems. The system to be tested is connected to a saline phantom in which calibrated contrasting test objects are systematically positioned using a position controller. A set of evaluation parameters is proposed which characterize (i) data and image noise, (ii) data accuracy, (iii) detectability of single contrasts and distinguishability of multiple contrasts, and (iv) accuracy of the reconstructed image (amplitude, resolution, position and ringing). Using this approach, we evaluate three different EIT systems and illustrate the use of these tools to evaluate and compare performance. In order to facilitate the use of this approach, all details of the phantom, test objects and position controller design are made publicly available, including the source code of the evaluation and reporting software.
Evanoff, M G; Roehrig, H; Giffords, R S; Capp, M P; Rovinelli, R J; Hartmann, W H; Merritt, C
2001-06-01
This report discusses calibration and set-up procedures for medium-resolution monochrome cathode ray tubes (CRTs) undertaken in preparation for the oral portion of the board examination of the American Board of Radiology (ABR). The board examinations took place in more than 100 rooms of a hotel. There was one display station (a computer and the associated CRT display) in each of the hotel rooms used for the examinations. The examinations covered the radiologic specialties cardiopulmonary, musculoskeletal, gastrointestinal, vascular, pediatric, and genitourinary. The software used for set-up and calibration was the VeriLUM 4.0 package from Image Smiths in Germantown, MD. The set-up included setting minimum and maximum luminance, as well as positioning of the CRT in each examination room with respect to reflections from room lights. The calibration of the grey-scale rendition was done to meet the Digital Imaging and Communications in Medicine (DICOM) Part 14 Standard Display Function. We describe these procedures and present the calibration data in tables and graphs, listing initial values of minimum luminance, maximum luminance, and grey-scale rendition (DICOM Part 14 standard display function). Changes of these parameters over the duration of the examination were observed and recorded on 11 monitors in a particular room. These changes strongly suggest that all calibrated CRTs be monitored over the duration of the examination. In addition, other CRT performance data affecting image quality, such as spatial resolution, should be included in set-up and image quality-control procedures.
Dambergs, Robert G; Mercurio, Meagan D; Kassara, Stella; Cozzolino, Daniel; Smith, Paul A
2012-06-01
Information relating to tannin concentration in grapes and wine is not currently available simply and rapidly enough to inform decision-making by grape growers, winemakers, and wine researchers. Spectroscopy and chemometrics have been implemented for the analysis of critical grape and wine parameters and offer a possible solution for rapid tannin analysis. We report here the development and validation of an ultraviolet (UV) spectral calibration for the prediction of tannin concentration in red wines. Such spectral calibrations reduce the time and resource requirements involved in measuring tannins. A diverse calibration set (n = 204) was prepared with samples of Australian wines of five varieties (Cabernet Sauvignon, Shiraz, Merlot, Pinot Noir, and Durif), from regions spanning the wine grape growing areas of Australia, with varying climate and soils, and with vintages ranging from 1991 to 2007. The relationship between tannin measured by the methyl cellulose precipitation (MCP) reference method at 280 nm and tannin predicted with a multiple linear regression (MLR) calibration, using ultraviolet (UV) absorbance at 250, 270, 280, 290, and 315 nm, was strong (r2val = 0.92; SECV = 0.20 g/L). An independent validation set (n = 85) was predicted using the MLR algorithm developed with the calibration set and gave confidence in the ability to predict new samples, independent of the samples used to prepare the calibration (r2val = 0.94; SEP = 0.18 g/L). The MLR algorithm could also predict tannin in fermenting wines (r2val = 0.76; SEP = 0.18 g/L), but worked best from the second day of fermentation onward. This study also explored instrument-to-instrument transfer of a spectral calibration for MCP tannin. After slope and bias adjustments of the calibration, efficient calibration transfer to other laboratories was clearly demonstrated, with all instruments in the study effectively giving identical results on a transfer set.
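The five-wavelength MLR and the slope/bias transfer step both reduce to ordinary least squares; a minimal sketch under the assumption that absorbances is an (n, 5) array at 250, 270, 280, 290 and 315 nm and tannin holds the MCP reference values in g/L.

    import numpy as np

    def fit_mlr(absorbances, tannin):
        # Least-squares coefficients with an intercept term
        A = np.column_stack([np.ones(len(tannin)), absorbances])
        coef, *_ = np.linalg.lstsq(A, tannin, rcond=None)
        return coef

    def predict_mlr(absorbances, coef):
        return coef[0] + absorbances @ coef[1:]

    def slope_bias_adjust(predicted, reference):
        # Instrument transfer: regress reference values on the new instrument's
        # predictions over a transfer set, then correct future predictions
        slope, bias = np.polyfit(predicted, reference, 1)
        return lambda p: slope * p + bias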
NASA Astrophysics Data System (ADS)
Verardo, E.; Atteia, O.; Rouvreau, L.
2015-12-01
In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is a challenge due to uncertainties associated with source properties and with the contribution and efficiency of concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bioremediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with the ensemble of null-space parameters, creating sets of calibration-constrained parameters for input to follow-on analysis of remedial efficiency. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model calibration was conducted by matching simulated BTEX concentrations to a total of 48 observations from historical data before implementation of treatment. Two different bioremediation designs were then implemented in the calibrated model. The first consists of pumping/injection wells and the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant implementation of the NSMC is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets, and NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, in the context of a nonlinear model, the second variant avoids underestimating parameter uncertainty, whereas underestimation may lead to a poor quantification of predictive uncertainty. Application of the proposed approach to manage bioremediation of groundwater at a real site shows that it is effective in supporting management of in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical methodology for utilizing model predictive uncertainty methods in environmental management.
An Interferometry Imaging Beauty Contest
NASA Technical Reports Server (NTRS)
Lawson, Peter R.; Cotton, William D.; Hummel, Christian A.; Monnier, John D.; Zhaod, Ming; Young, John S.; Thorsteinsson, Hrobjartur; Meimon, Serge C.; Mugnier, Laurent; LeBesnerais, Guy;
2004-01-01
We present a formal comparison of the performance of algorithms used for synthesis imaging with optical/infrared long-baseline interferometers. Six different algorithms are evaluated based on their performance with simulated test data. Each set of test data is formatted in the interferometry Data Exchange Standard and is designed to simulate a specific problem relevant to long-baseline imaging. The data are calibrated power spectra and bispectra measured with a fictitious array, intended to be typical of existing imaging interferometers. The strengths and limitations of each algorithm are discussed.
NASA Technical Reports Server (NTRS)
Mcpherron, R. L.
1977-01-01
Procedures are described for the calibration of a vector magnetometer of high absolute accuracy. It is assumed that the calibration will be performed in the magnetic test facility of Goddard Space Flight Center (GSFC). The first main section of the report describes the test equipment and facility calibrations required. The second presents procedures for calibrating individual sensors. The third discusses the calibration of the sensor assembly. In a final section recommendations are made to GSFC for modification of the test facility required to carry out the calibration procedures.
Electronic test and calibration circuits, a compilation
NASA Technical Reports Server (NTRS)
1972-01-01
A wide variety of simple test and calibration circuits is compiled for the engineer and laboratory technician. The majority of the circuits were found to be inexpensive to assemble. Testing of electronic devices and components, instrument and system tests, calibration and reference circuits, and simple test procedures are presented.
Empirical dual energy calibration (EDEC) for cone-beam computed tomography.
Stenner, Philip; Berkus, Timo; Kachelriess, Marc
2007-09-01
Material-selective imaging using dual energy CT (DECT) relies heavily on well-calibrated material decomposition functions. These require precise knowledge of the detected x-ray spectra, and even if the spectra are exactly known, the reliability of DECT will suffer from scattered radiation. We propose an empirical method to determine the proper decomposition function. In contrast to other decomposition algorithms, our empirical dual energy calibration (EDEC) technique requires neither knowledge of the spectra nor of the attenuation coefficients. The desired material-selective raw data p1 and p2 are obtained as functions of the measured attenuation data q1 and q2 (one DECT scan = two raw data sets) by passing them through a polynomial function. The polynomial's coefficients are determined using a general least squares fit based on thresholded images of a calibration phantom. The calibration phantom's dimensions should be of the same order of magnitude as the test object, but other than that no assumptions on its exact size or positioning are made. Once the decomposition coefficients are determined, DECT raw data can be decomposed by simply passing them through the polynomial. To demonstrate EDEC, simulations of an oval CTDI phantom, a lung phantom, a thorax phantom and a mouse phantom were carried out. The method was further verified by measuring a physical mouse phantom, a half-and-half-cylinder phantom and a Yin-Yang phantom with a dedicated in vivo dual source micro-CT scanner. The raw data were decomposed into their components, reconstructed, and the pixel values obtained were compared to the theoretical values. The determination of the calibration coefficients with EDEC is very robust and depends only slightly on the type of calibration phantom used. The images of the test phantoms (simulations and measurements) show a nearly perfect agreement with the theoretical μ values and density values. Since EDEC is an empirical technique, it inherently compensates for scatter components. The empirical dual energy calibration technique is a pragmatic, simple, and reliable calibration approach that produces highly quantitative DECT images.
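The core of EDEC is an ordinary least-squares fit of polynomial coefficients; a minimal sketch assuming flattened arrays q1, q2 of measured attenuation data and p_template of desired material-selective values taken from the thresholded phantom images. The fourth-order bivariate basis is an illustrative choice, not necessarily the authors' exact one.

    import numpy as np

    def poly_basis(q1, q2, order=4):
        # All monomials q1**i * q2**j with i + j <= order
        return np.column_stack([q1**i * q2**j
                                for i in range(order + 1)
                                for j in range(order + 1 - i)])

    def edec_fit(q1, q2, p_template, order=4):
        A = poly_basis(q1, q2, order)
        c, *_ = np.linalg.lstsq(A, p_template, rcond=None)
        return c

    def edec_apply(q1, q2, c, order=4):
        # Decompose new dual-energy raw data by evaluating the polynomial
        return poly_basis(q1, q2, order) @ c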
Evaluation of a physically based quasi-linear and a conceptually based nonlinear Muskingum methods
NASA Astrophysics Data System (ADS)
Perumal, Muthiah; Tayfur, Gokmen; Rao, C. Madhusudana; Gurarslan, Gurhan
2017-03-01
Two variants of the Muskingum flood routing method formulated to account for the nonlinearity of the channel routing process are investigated in this study. These variants are: (1) the three-parameter conceptual Nonlinear Muskingum (NLM) method advocated by Gill in 1978, and (2) the Variable Parameter McCarthy-Muskingum (VPMM) method recently proposed by Perumal and Price in 2013. The VPMM method does not require the rigorous calibration and validation procedures required in the case of the NLM method, due to established relationships of its parameters with flow and channel characteristics based on hydrodynamic principles. The parameters of the conceptual nonlinear storage equation used in the NLM method were calibrated using Artificial Intelligence Application (AIA) techniques, such as the Genetic Algorithm (GA), Differential Evolution (DE), Particle Swarm Optimization (PSO) and Harmony Search (HS). The calibration was carried out on a given set of hypothetical flood events obtained by routing a given inflow hydrograph through a set of 40 km long prismatic channel reaches using the Saint-Venant (SV) equations. The validation of the calibrated NLM method was investigated using a different set of hypothetical flood hydrographs obtained in the same set of channel reaches used for the calibration studies. Both sets of solutions obtained in the calibration and validation cases using the NLM method were compared with the corresponding solutions of the VPMM method based on some pertinent evaluation measures. The results of the study reveal that the physically based VPMM method is capable of accounting for the nonlinear characteristics of flood wave movement better than the conceptually based NLM method, which requires the use of tedious calibration and validation procedures.
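For concreteness, a sketch of the NLM storage equation S = K [x I + (1 - x) O]^m routed with a simple explicit scheme and calibrated with SciPy's differential evolution; the parameter bounds, time step and steady-state initialization are illustrative assumptions, and a production implementation would use a more careful integration scheme.

    import numpy as np
    from scipy.optimize import differential_evolution

    def route_nlm(inflow, K, x, m, dt=1.0):
        inflow = np.asarray(inflow, dtype=float)
        S = K * inflow[0] ** m                 # steady state: O = I at t = 0
        out = np.empty_like(inflow)
        for t in range(len(inflow)):
            # Invert the storage equation for outflow, then step continuity dS/dt = I - O
            O = ((S / K) ** (1.0 / m) - x * inflow[t]) / (1.0 - x)
            out[t] = O
            S += dt * (inflow[t] - O)
        return out

    def calibrate_nlm(inflow, observed_out):
        def rmse(theta):
            K, x, m = theta
            sim = route_nlm(inflow, K, x, m)
            return np.sqrt(np.mean((sim - observed_out) ** 2))
        res = differential_evolution(rmse, bounds=[(0.01, 1.0), (0.0, 0.49), (1.0, 3.0)])
        return res.x   # calibrated (K, x, m)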
Enhanced ID Pit Sizing Using Multivariate Regression Algorithm
NASA Astrophysics Data System (ADS)
Krzywosz, Kenji
2007-03-01
EPRI is funding a program to enhance and improve the reliability of inside diameter (ID) pit sizing for balance-of-plant heat exchangers, such as condensers and component cooling water heat exchangers. More traditional approaches to ID pit sizing involve the use of frequency-specific amplitudes or phase angles. The enhanced multivariate regression algorithm for ID pit depth sizing incorporates three simultaneous input parameters: frequency, amplitude, and phase angle. A set of calibration data sets consisting of machined pits of various rounded and elongated shapes and depths was acquired in the frequency range of 100 kHz to 1 MHz for stainless steel tubing having a nominal wall thickness of 0.028 inch. To add noise to the acquired data set, each test sample was rotated and test data were acquired at the 3, 6, 9, and 12 o'clock positions. The ID pit depths were estimated using second-order and fourth-order regression functions, relying on normalized amplitude and phase angle information from multiple frequencies. Due to the unique damage morphology associated with microbiologically influenced ID pits, it was necessary to modify the algorithms based on the elongated calibration standard by relying on the algorithm developed solely from the destructive sectioning results. This paper presents the use of a transformed multivariate regression algorithm to estimate ID pit depths and compares the results with the traditional univariate phase angle analysis. Both estimates were then compared with the destructive sectioning results.
NASA Astrophysics Data System (ADS)
Wu, Z.; Luo, Z.; Zhang, Y.; Guo, F.; He, L.
2018-04-01
A Modulation Transfer Function (MTF)-based fuzzy comprehensive evaluation method is proposed in this paper for the purpose of evaluating high-resolution satellite image quality. To establish the factor set, two MTF features and seven radiometric features were extracted from the knife-edge region of each image patch: Nyquist frequency MTF, MTF0.5, entropy, peak signal-to-noise ratio (PSNR), average difference, edge intensity, average gradient, contrast and ground spatial distance (GSD). After analyzing the statistical distributions of the above features, a fuzzy evaluation threshold table and fuzzy evaluation membership functions were established. Experiments for comprehensive quality assessment of different natural and artificial objects were performed with GF2 image patches. The results showed that the calibration field image has the highest quality scores. The water image has the closest image quality to the calibration field; the quality of the building image is somewhat poorer than that of the water image, but much higher than that of the farmland image. In order to test the influence of different features on quality evaluation, experiments with different weights were performed on GF2 and SPOT7 images. The results showed that different weights yield different evaluation outcomes. When the weights emphasize the edge features and GSD, the image quality of GF2 is better than that of SPOT7; however, when MTF and PSNR are set as the main factors, the image quality of SPOT7 is better than that of GF2.
5. VIEW NORTHWEST FROM LEFT TO RIGHT: COLD CALIBRATION OBSERVATION BUNKER BACKGROUND, COLD CALIBRATION TOWER. - Marshall Space Flight Center, East Test Area, Cold Calibration Test Stand, Huntsville, Madison County, AL
Calibration and assessment of full-field optical strain measurement procedures and instrumentation
NASA Astrophysics Data System (ADS)
Kujawinska, Malgorzata; Patterson, E. A.; Burguete, R.; Hack, E.; Mendels, D.; Siebert, T.; Whelan, Maurice
2006-09-01
There are no international standards or norms for the use of optical techniques for full-field strain measurement. In this paper the rationale and design of a reference material and a set of standardized materials for the calibration and evaluation of optical systems for full-field measurements of strain are outlined. A classification system for the steps in the measurement process is also proposed, which allows the development of a unified approach to diagnostic testing of components in an optical system for strain measurement based on any optical technique. The results described arise from a European study known as SPOTS, whose objectives were to begin to fill the gap caused by a lack of standards.
Coedo, A G; Padilla, I; Dorado, M T
2004-12-01
This paper describes a study designed to determine the possibility of using a dried aerosol solution for calibration in laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS). The relative sensitivities of the tested materials mobilized by laser ablation and by aqueous nebulization were established, and the experimentally determined relative sensitivity factors (RSFs) were used in conjunction with aqueous calibration for the analysis of solid steel samples. For this purpose, a set of CRM carbon steel samples (SS-451/1 to SS-460/1) was sampled into an ICP-MS instrument by solution nebulization using a microconcentric nebulizer with membrane desolvation (D-MCN) and by laser ablation (LA). Both systems were applied with the same ICP-MS operating parameters and the analyte signals were compared. The RSF (desolvated aerosol response/ablated solid response) values were close to 1 for the analytes Cr, Ni, Co, V, and W, about 1.3 for Mo, and 1.7 for As, P, and Mn. Complementary tests were carried out using CRM SS-455/1 as a solid standard for one-point calibration, applying LAMTRACE software for data reduction and quantification. The analytical results are in good agreement with the certified values in all cases, showing that calibration with dried aerosol solutions is a good alternative for laser ablation sampling.
Non-contact thrust stand calibration method for repetitively pulsed electric thrusters.
Wong, Andrea R; Toftul, Alexandra; Polzin, Kurt A; Pearson, J Boise
2012-02-01
A thrust stand calibration technique for use in testing repetitively pulsed electric thrusters for in-space propulsion has been developed and tested using a modified hanging pendulum thrust stand. In the implementation of this technique, current pulses are applied to a solenoid to produce a pulsed magnetic field that acts against a permanent magnet mounted to the thrust stand pendulum arm. The force on the magnet is applied in this non-contact manner, with the entire pulsed force transferred to the pendulum arm through a piezoelectric force transducer to provide a time-accurate force measurement. Modeling of the pendulum arm dynamics reveals that after an initial transient in thrust stand motion the quasi-steady average deflection of the thrust stand arm away from the unforced or "zero" position can be related to the average applied force through a simple linear Hooke's law relationship. Modeling demonstrates that this technique is universally applicable except when the pulsing period is increased to the point where it approaches the period of natural thrust stand motion. Calibration data were obtained using a modified hanging pendulum thrust stand previously used for steady-state thrust measurements. Data were obtained for varying impulse bit at constant pulse frequency and for varying pulse frequency. The two data sets exhibit excellent quantitative agreement with each other. The overall error on the linear regression fit used to determine the calibration coefficient was roughly 1%.
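The quasi-steady Hooke's-law relationship reduces the calibration to a one-parameter linear fit through the origin; a minimal sketch assuming arrays of known average applied forces and the corresponding measured average deflections.

    import numpy as np

    def calibration_coefficient(force_avg, deflection_avg):
        # Fit F = k * d through the origin by least squares; k converts the
        # measured quasi-steady deflection into average thrust
        d = np.asarray(deflection_avg, dtype=float)[:, None]
        k, *_ = np.linalg.lstsq(d, np.asarray(force_avg, dtype=float), rcond=None)
        return float(k[0])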
The calibration and flight test performance of the space shuttle orbiter air data system
NASA Technical Reports Server (NTRS)
Dean, A. S.; Mena, A. L.
1983-01-01
The Space Shuttle air data system (ADS) is used by the guidance, navigation and control system (GN&C) to guide the vehicle to a safe landing. In addition, postflight aerodynamic analysis requires a precise knowledge of flight conditions. Since the orbiter is essentially an unpowered vehicle, the conventional methods of obtaining the ADS calibration were not available; therefore, the calibration was derived using a unique and extensive wind tunnel test program. This test program included subsonic tests with a 0.36-scale orbiter model, transonic and supersonic tests with a smaller 0.2-scale model, and numerous ADS probe-alone tests. The wind tunnel calibration was further refined with subsonic results from the approach and landing test (ALT) program, thus producing the ADS calibration for the orbital flight test (OFT) program. The calibration of the Space Shuttle ADS and its performance during flight are discussed in this paper. A brief description of the system is followed by a discussion of the calibration methodology, and then by a review of the wind tunnel and flight test programs. Finally, the flight results are presented, including an evaluation of the system performance for on-board systems use and a description of the calibration refinements developed to provide the best possible air data for postflight analysis work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koepferl, Christine M.; Robitaille, Thomas P.; Dale, James E., E-mail: koepferl@usm.lmu.de
Through an extensive set of realistic synthetic observations (produced in Paper I), we assess in this part of the paper series (Paper III) how the choice of observational techniques affects the measurement of star formation rates (SFRs) in star-forming regions. We test the accuracy of commonly used techniques and construct new methods to extract the SFR, so that these findings can be applied to measure the SFR in real regions throughout the Milky Way. We investigate diffuse infrared SFR tracers such as those using 24 μm, 70 μm and total infrared emission, which have been previously calibrated for global galaxy scales. We set up a toy model of a galaxy and show that the infrared emission is consistent with the intrinsic SFR using extra-galactic calibrated laws (although the consistency does not prove their reliability). For local scales, we show that these techniques produce completely unreliable results for single star-forming regions, which are governed by different characteristic timescales. We show how calibration of these techniques can be improved for single star-forming regions by adjusting the characteristic timescale and the scaling factor, and we give suggestions for new calibrations of the diffuse star formation tracers. We show that star-forming regions that are dominated by high-mass stellar feedback experience a rapid drop in infrared emission once high-mass stellar feedback is turned on, which implies different characteristic timescales. Moreover, we explore the measured SFRs calculated directly from the observed young stellar population. We find that the measured point sources follow the evolutionary pace of star formation more directly than diffuse star formation tracers.
Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model
NASA Astrophysics Data System (ADS)
Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.
2013-12-01
We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily Net Ecosystem Exchange (NEE) observations were available to calibrate the model parameters and test the performance of the model. Posterior distributions show good predictive capabilities for the calibrated model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then proceed to calibrate the model parameters in a Bayesian framework. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians with prescribed daily variance according to the recorded instrument error. All model parameters were assumed to have uninformative priors with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. A set of 4 other parameters, Nitrogen use efficiency (NUE), base rate for maintenance respiration (BR_MR), growth respiration fraction (RG_FRAC), and allocation to plant stem pool (ASTEM) contribute between 5% and 12% to the variance in average NEE, while the rest of the parameters have smaller contributions. The posterior distributions, sampled with a Markov Chain Monte Carlo algorithm, exhibit significant correlations between model parameters. However LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities. The Fisher information matrix values, indicating which parameters are most informed by the experimental observations, are examined to augment the comparison between the calibration and global sensitivity analysis results.
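The Bayesian calibration step can be sketched as a random-walk Metropolis sampler; log_post is assumed to combine the Gaussian daily-NEE likelihood (with the recorded instrument variance) and the uniform priors, and the step sizes and chain length are illustrative.

    import numpy as np

    def metropolis(log_post, theta0, step, n_steps=50000, seed=None):
        rng = np.random.default_rng(seed)
        theta = np.array(theta0, dtype=float)
        lp = log_post(theta)
        chain = np.empty((n_steps, theta.size))
        for i in range(n_steps):
            prop = theta + step * rng.standard_normal(theta.size)
            lp_prop = log_post(prop)
            # Accept with probability min(1, posterior ratio)
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain[i] = theta
        return chain   # posterior samples for the 18 model parameters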
The Very Large Array Data Processing Pipeline
NASA Astrophysics Data System (ADS)
Kent, Brian R.; Masters, Joseph S.; Chandler, Claire J.; Davis, Lindsey E.; Kern, Jeffrey S.; Ott, Juergen; Schinzel, Frank K.; Medlin, Drew; Muders, Dirk; Williams, Stewart; Geers, Vincent C.; Momjian, Emmanuel; Butler, Bryan J.; Nakazato, Takeshi; Sugimoto, Kanako
2018-01-01
We present the VLA Pipeline, software that is part of the larger pipeline processing framework used for the Karl G. Jansky Very Large Array (VLA) and the Atacama Large Millimeter/submillimeter Array (ALMA) for both interferometric and single dish observations. Through a collection of base code jointly used by the VLA and ALMA, the pipeline builds a hierarchy of classes to execute individual atomic pipeline tasks within the Common Astronomy Software Applications (CASA) package. Each pipeline task contains heuristics designed by the team to actively decide the best processing path and execution parameters for calibration and imaging. The pipeline code is developed and written in Python and uses a "context" structure for tracking the heuristic decisions and processing results. The pipeline "weblog" acts as the user interface for verifying the quality assurance of each calibration and imaging stage. The majority of VLA scheduling blocks above 1 GHz are now processed with the standard continuum recipe of the pipeline and offer a calibrated measurement set as a basic data product to observatory users. In addition, the pipeline is used for processing data from the VLA Sky Survey (VLASS), a seven-year community-driven endeavor started in September 2017 to survey the entire sky down to a declination of -40 degrees at S-band (2-4 GHz). This 5500-hour next-generation large radio survey will explore the time and spectral domains, relying on pipeline processing to generate calibrated measurement sets, polarimetry, and imaging data products that are available to the astronomical community with no proprietary period. Here we present an overview of the pipeline design philosophy, heuristics, and the calibration and imaging results produced by the pipeline. Future development will include the testing of spectral line recipes, low signal-to-noise heuristics, and serving as a testing platform for science-ready data products. The pipeline is developed as part of the CASA software package by an international consortium of scientists and software developers based at the National Radio Astronomy Observatory (NRAO), the European Southern Observatory (ESO), and the National Astronomical Observatory of Japan (NAOJ).
40 CFR Appendix B to Part 75 - Quality Assurance and Quality Control Procedures
Code of Federal Regulations, 2012 CFR
2012-07-01
... Systems 1.2.1Calibration Error Test and Linearity Check Procedures Keep a written record of the procedures used for daily calibration error tests and linearity checks (e.g., how gases are to be injected..., and when calibration adjustments should be made). Identify any calibration error test and linearity...
40 CFR Appendix B to Part 75 - Quality Assurance and Quality Control Procedures
Code of Federal Regulations, 2013 CFR
2013-07-01
... Systems 1.2.1 Calibration Error Test and Linearity Check Procedures Keep a written record of the procedures used for daily calibration error tests and linearity checks (e.g., how gases are to be injected..., and when calibration adjustments should be made). Identify any calibration error test and linearity...
42 CFR 493.1255 - Standard: Calibration and calibration verification procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... accuracy of the test system throughout the laboratory's reportable range of test results for the test system. Unless otherwise specified in this subpart, for each applicable test system the laboratory must... test system instructions, using calibration materials provided or specified, and with at least the...
A Ricin Forensic Profiling Approach Based on a Complex Set of Biomarkers
Fredriksson, Sten-Ake; Wunschel, David S.; Lindstrom, Susanne Wiklund; ...
2018-03-28
A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1–PM4), ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected using a range of analytical methods, and robust orthogonal partial least squares-discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By the use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good and a 100% rate of correct predictions of the test set was achieved.
Aero-Thermal Calibration of the NASA Glenn Icing Research Tunnel (2004 and 2005 Tests)
NASA Technical Reports Server (NTRS)
Arrington, E. Allen; Pastor, Christine M.; Gonsalez, Jose C.; Curry, Monroe R., III
2010-01-01
A full aero-thermal calibration of the NASA Glenn Icing Research Tunnel was completed in 2004 following the replacement of the inlet guide vanes upstream of the tunnel drive system and improvement to the facility total temperature instrumentation. This calibration test provided data used to fully document the aero-thermal flow quality in the IRT test section and to construct calibration curves for the operation of the IRT. The 2004 test was also the first to use the 2-D RTD array, an improved total temperature calibration measurement platform.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habte, Aron; Sengupta, Manajit; Andreas, Afshin
Banks financing solar energy projects require assurance that these systems will produce the energy predicted. Furthermore, utility planners and grid system operators need to understand the impact of the variable solar resource on solar energy conversion system performance. Accurate solar radiation data sets reduce the expense associated with mitigating performance risk and assist in understanding the impacts of solar resource variability. The accuracy of solar radiation measured by radiometers depends on the instrument performance specification, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methods provided by radiometric calibration service providers, such as NREL and manufacturers of radiometers, on the resulting calibration responsivity. Some of these radiometers are calibrated indoors and some outdoors. To establish or understand the differences in calibration methodology, we processed and analyzed field-measured data from these radiometers. This study investigates calibration responsivities provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The BORCAL method provides the outdoor calibration responsivity of pyranometers and pyrheliometers at 45 degree solar zenith angle, and as a function of solar zenith angle determined by clear-sky comparisons with reference irradiance. The BORCAL method also employs a thermal offset correction to the calibration responsivity of single-black thermopile detectors used in pyranometers. Indoor calibrations of radiometers by their manufacturers are performed using a stable artificial light source in a side-by-side comparison between the test radiometer under calibration and a reference radiometer of the same type. In both methods, the reference radiometer calibrations are traceable to the World Radiometric Reference (WRR). These different methods of calibration demonstrated +1% to +2% differences in solar irradiance measurement. Analyzing these differences will ultimately help determine the uncertainty of the field radiometer data and guide the development of a consensus standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainty will allow more accurate prediction of solar output and improve the bankability of solar projects.
40 CFR 90.326 - Pre- and post-test analyzer calibration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Pre- and post-test analyzer... Emission Test Equipment Provisions § 90.326 Pre- and post-test analyzer calibration. Calibrate only the range of each analyzer used during the engine exhaust emission test prior to and after each test in...
40 CFR 90.326 - Pre- and post-test analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Pre- and post-test analyzer... Emission Test Equipment Provisions § 90.326 Pre- and post-test analyzer calibration. Calibrate only the range of each analyzer used during the engine exhaust emission test prior to and after each test in...
40 CFR 89.314 - Pre- and post-test calibration of analyzers.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Pre- and post-test calibration of... Test Equipment Provisions § 89.314 Pre- and post-test calibration of analyzers. Each operating range used during the test shall be checked prior to and after each test in accordance with the following...
40 CFR 89.314 - Pre- and post-test calibration of analyzers.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Pre- and post-test calibration of... Test Equipment Provisions § 89.314 Pre- and post-test calibration of analyzers. Each operating range used during the test shall be checked prior to and after each test in accordance with the following...
40 CFR 90.326 - Pre- and post-test analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Pre- and post-test analyzer... Emission Test Equipment Provisions § 90.326 Pre- and post-test analyzer calibration. Calibrate only the range of each analyzer used during the engine exhaust emission test prior to and after each test in...
40 CFR 90.326 - Pre- and post-test analyzer calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Pre- and post-test analyzer... Emission Test Equipment Provisions § 90.326 Pre- and post-test analyzer calibration. Calibrate only the range of each analyzer used during the engine exhaust emission test prior to and after each test in...
40 CFR 89.314 - Pre- and post-test calibration of analyzers.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Pre- and post-test calibration of... Test Equipment Provisions § 89.314 Pre- and post-test calibration of analyzers. Each operating range used during the test shall be checked prior to and after each test in accordance with the following...
40 CFR 89.314 - Pre- and post-test calibration of analyzers.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Pre- and post-test calibration of... Test Equipment Provisions § 89.314 Pre- and post-test calibration of analyzers. Each operating range used during the test shall be checked prior to and after each test in accordance with the following...
40 CFR 89.314 - Pre- and post-test calibration of analyzers.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Pre- and post-test calibration of... Test Equipment Provisions § 89.314 Pre- and post-test calibration of analyzers. Each operating range used during the test shall be checked prior to and after each test in accordance with the following...
Optimal Test Design with Rule-Based Item Generation
ERIC Educational Resources Information Center
Geerlings, Hanneke; van der Linden, Wim J.; Glas, Cees A. W.
2013-01-01
Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the fly directly from calibrated features defining…
NASA Astrophysics Data System (ADS)
Chesterman, Frédérique; Manssens, Hannah; Morel, Céline; Serrell, Guillaume; Piepers, Bastian; Kimpe, Tom
2017-03-01
Medical displays for primary diagnosis are calibrated to the DICOM GSDF, but there is no accepted standard today that describes how display systems for medical modalities involving color should be calibrated. Recently the Color Standard Display Function (CSDF), a calibration using the CIEDE2000 color difference metric to make a display as perceptually linear as possible, has been proposed. In this work we present the results of a first observer study set up to investigate the interpretation accuracy of a rainbow color scale when a medical display is calibrated to CSDF versus DICOM GSDF, and a second observer study set up to investigate the detectability of color differences when a medical display is calibrated to CSDF, DICOM GSDF, and sRGB. The results of the first study indicate that the error when interpreting a rainbow color scale is lower for CSDF than for DICOM GSDF, with a statistically significant difference (Mann-Whitney U test) for eight out of twelve observers. The results correspond to what is expected based on CIEDE2000 color differences between consecutive colors along the rainbow color scale for both calibrations. The results of the second study indicate a statistically significant improvement in detecting color differences when a display is calibrated to CSDF compared to DICOM GSDF, and a (non-significant) trend indicating improved detection for CSDF compared to sRGB. To our knowledge this is the first work that shows the added value of a perceptual color calibration method (CSDF) in interpreting medical color images using the rainbow color scale. Improved interpretation of the rainbow color scale may be beneficial in the area of quantitative medical imaging (e.g. PET SUV, quantitative MRI and CT, and Doppler US), where a medical specialist needs to interpret quantitative medical data based on a color scale and/or detect subtle color differences, and where improved interpretation accuracy and improved detection of color differences may contribute to a better diagnosis. Our results indicate that for diagnostic applications involving both grayscale and color images, CSDF should be chosen over DICOM GSDF and sRGB, as it assures excellent detection for color images and at the same time maintains DICOM GSDF for grayscale images.
NASA Astrophysics Data System (ADS)
Lovejoy, McKenna R.; Wickert, Mark A.
2017-05-01
A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current and amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear, or piecewise linear, models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second-order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher-order polynomial NUC algorithms feasible. This study comprehensively tests higher-order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods, including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating-point results show 30% less non-uniformity in post-corrected data when using a third-order polynomial correction algorithm rather than a second-order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing, across multiple data sets, provides next-generation model validation and performance benchmarks for higher-order polynomial NUC methods.
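A per-pixel higher-order polynomial NUC of the kind tested here can be sketched as follows; the flat-field stack layout and `order=3` (the third-order case reported as roughly 30% better than second order) are illustrative assumptions:

```python
import numpy as np

def fit_nuc_polynomials(raw_stack, target_levels, order=3):
    """Fit an order-N polynomial per pixel mapping raw counts to uniform target
    radiance levels. raw_stack has shape (levels, rows, cols); the number of
    flat-field levels must exceed order for a well-posed fit."""
    levels, rows, cols = raw_stack.shape
    coeffs = np.empty((rows, cols, order + 1))
    for r in range(rows):
        for c in range(cols):
            coeffs[r, c] = np.polyfit(raw_stack[:, r, c], target_levels, order)
    return coeffs

def apply_nuc(frame, coeffs):
    """Apply each pixel's stored polynomial to correct a raw frame."""
    out = np.empty(frame.shape, dtype=float)
    for r in range(frame.shape[0]):
        for c in range(frame.shape[1]):
            out[r, c] = np.polyval(coeffs[r, c], frame[r, c])
    return out
```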
A Maximum-Likelihood Approach to Force-Field Calibration.
Zaborowski, Bartłomiej; Jagieła, Dawid; Czaplewski, Cezary; Hałabis, Anna; Lewandowska, Agnieszka; Żmudzińska, Wioletta; Ołdziej, Stanisław; Karczyńska, Agnieszka; Omieczynski, Christian; Wirecki, Tomasz; Liwo, Adam
2015-09-28
A new approach to the calibration of force fields is proposed, in which the force-field parameters are obtained by maximum-likelihood fitting of the calculated conformational ensembles to the experimental ensembles of training system(s). The maximum-likelihood function is composed of logarithms of the Boltzmann probabilities of the experimental conformations, calculated with the current energy function. Because the theoretical distribution is given in the form of the simulated conformations only, the contributions from all of the simulated conformations, with Gaussian weights in the distances from a given experimental conformation, are added to give the contribution to the target function from this conformation. In contrast to earlier methods for force-field calibration, the approach does not suffer from the arbitrariness of dividing the decoy set into native-like and non-native structures; however, if such a division is made instead of using Gaussian weights, application of the maximum-likelihood method results in the well-known energy-gap maximization. The computational procedure consists of cycles of decoy generation and maximum-likelihood-function optimization, which are iterated until convergence is reached. The method was tested with Gaussian distributions and then applied to the physics-based coarse-grained UNRES force field for proteins. The NMR structures of the tryptophan cage, a small α-helical protein, determined at three temperatures (T = 280, 305, and 313 K) by Hałabis et al. (J. Phys. Chem. B 2012, 116, 6898-6907), were used. Multiplexed replica-exchange molecular dynamics was used to generate the decoys. The iterative procedure exhibited steady convergence. Three variants of optimization were tried: optimization of the energy-term weights alone and use of the experimental ensemble of the folded protein only at T = 280 K (run 1); optimization of the energy-term weights and use of experimental ensembles at all three temperatures (run 2); and optimization of the energy-term weights and the coefficients of the torsional and multibody energy terms and use of experimental ensembles at all three temperatures (run 3). The force fields were subsequently tested with a set of 14 α-helical and two α + β proteins. Optimization run 1 resulted in better agreement with the experimental ensemble at T = 280 K compared with optimization run 2 and in comparable performance on the test set, but poorer agreement of the calculated folding temperature with the experimental folding temperature. Optimization run 3 resulted in the best fit of the calculated ensembles to the experimental ones for the tryptophan cage but in much poorer performance on the test set, suggesting that use of a small α-helical protein for extensive force-field calibration resulted in overfitting of the data for this protein at the expense of transferability. The optimized force field resulting from run 2 was found to fold 13 of the 14 tested α-helical proteins and one small α + β protein with the correct topologies; the average structures of 10 of them were predicted with accuracies of about 5 Å C(α) root-mean-square deviation or better. Test simulations with an additional set of 12 α-helical proteins demonstrated that this force field performed better on α-helical proteins than the previous parametrizations of UNRES.
The proposed approach is applicable to any problem of maximum-likelihood parameter estimation when the contributions to the maximum-likelihood function cannot be evaluated at the experimental points and the dimension of the configurational space is too high to construct histograms of the experimental distributions.
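A minimal sketch of the Gaussian-weighted maximum-likelihood target described above, assuming precomputed decoy energies and decoy-to-experimental distances (the array shapes and the kernel width `sigma` are assumptions for illustration):

```python
import numpy as np

def ml_target(exp_dists, decoy_energies, beta, sigma):
    """
    exp_dists[i, j]: distance of decoy j from experimental conformation i.
    Each experimental conformation contributes the log of the Gaussian-kernel-
    weighted Boltzmann probability mass of the decoys near it.
    """
    w = np.exp(-0.5 * (exp_dists / sigma) ** 2)            # (n_exp, n_decoys)
    boltz = np.exp(-beta * (decoy_energies - decoy_energies.min()))
    boltz /= boltz.sum()                                   # decoy-based Boltzmann probs
    return np.sum(np.log(w @ boltz + 1e-300))              # maximize over parameters
```

In an optimization cycle, the force-field parameters enter through `decoy_energies`; maximizing this target and regenerating decoys is iterated until convergence, as the abstract describes.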
Automatic force balance calibration system
NASA Technical Reports Server (NTRS)
Ferris, Alice T. (Inventor)
1995-01-01
A system for automatically calibrating force balances is provided. The invention uses a reference balance aligned with the balance being calibrated to provide superior accuracy while minimizing the time required to complete the calibration. The reference balance and the test balance are rigidly attached together with closely aligned moment centers. Loads placed on the system equally affect each balance, and the differences in the readings of the two balances can be used to generate the calibration matrix for the test balance. Since the accuracy of the test calibration is determined by the accuracy of the reference balance, and current technology allows for reference balances to be calibrated to within ±0.05%, the entire system has an accuracy of ±0.2%. The entire apparatus is relatively small and can be mounted on a movable base for easy transport between test locations. The system can also accept a wide variety of reference balances, thus allowing calibration under diverse load and size requirements.
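The core computation, recovering a calibration matrix for the test balance from paired readings of the two rigidly attached balances, reduces to a least-squares fit. A minimal sketch, assuming six-component loads for illustration:

```python
import numpy as np

def calibration_matrix(ref_loads, test_readings):
    """
    ref_loads: (n_poses, 6) loads reported by the calibrated reference balance.
    test_readings: (n_poses, 6) raw bridge outputs of the balance under test.
    Returns the 6x6 matrix C minimizing ||test_readings @ C - ref_loads||.
    """
    C, *_ = np.linalg.lstsq(test_readings, ref_loads, rcond=None)
    return C
```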
Automatic force balance calibration system
NASA Technical Reports Server (NTRS)
Ferris, Alice T. (Inventor)
1996-01-01
A system for automatically calibrating force balances is provided. The invention uses a reference balance aligned with the balance being calibrated to provide superior accuracy while minimizing the time required to complete the calibration. The reference balance and the test balance are rigidly attached together with closely aligned moment centers. Loads placed on the system equally affect each balance, and the differences in the readings of the two balances can be used to generate the calibration matrix for the test balance. Since the accuracy of the test calibration is determined by the accuracy of the reference balance, and current technology allows for reference balances to be calibrated to within ±0.05%, the entire system has an accuracy of ±0.2%. The entire apparatus is relatively small and can be mounted on a movable base for easy transport between test locations. The system can also accept a wide variety of reference balances, thus allowing calibration under diverse load and size requirements.
Self-adaptive calibration for staring infrared sensors
NASA Astrophysics Data System (ADS)
Kendall, William B.; Stocker, Alan D.
1993-10-01
This paper presents a new, self-adaptive technique for the correction of non-uniformities (fixed-pattern noise) in high-density infrared focal-plane detector arrays. We have developed a new approach to non-uniformity correction in which we use multiple image frames of the scene itself, and take advantage of the aim-point wander caused by jitter, residual tracking errors, or deliberately induced motion. Such wander causes each detector in the array to view multiple scene elements, and each scene element to be viewed by multiple detectors. It is therefore possible to formulate (and solve) a set of simultaneous equations from which correction parameters can be computed for the detectors. We have tested our approach with actual images collected by the ARPA-sponsored MUSIC infrared sensor. For these tests we employed a 60-frame (0.75-second) sequence of terrain images for which an out-of-date calibration was deliberately used. The sensor was aimed at a point on the ground via an operator-assisted tracking system having a maximum aim-point wander on the order of ten pixels. With these data, we were able to improve the calibration accuracy by a factor of approximately 100.
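The simultaneous-equation formulation can be illustrated with an offset-only, one-dimensional toy version: known integer aim-point shifts make each frame a set of linear equations in the unknown scene radiances and detector offsets. This is a sketch of the idea, not the authors' algorithm; note the solution is determined only up to an additive constant, which a real system would pin down with an extra constraint:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def estimate_offsets(frames, shifts):
    """
    Offset-only illustration: in frame k, detector p observes scene element
    (shifts[k] + p) plus its own offset: y = x[s] + o[p]. Each observation
    gives one row of a sparse linear system in (x, o).
    frames: (n_frames, n_pix) 1-D detector line; shifts: integer shift per frame.
    """
    n_frames, n_pix = frames.shape
    n_scene = n_pix + max(shifts)              # scene support covered by all shifts
    A = lil_matrix((n_frames * n_pix, n_scene + n_pix))
    b = np.empty(n_frames * n_pix)
    row = 0
    for k in range(n_frames):
        for p in range(n_pix):
            A[row, shifts[k] + p] = 1.0        # scene element seen by detector p
            A[row, n_scene + p] = 1.0          # offset of detector p
            b[row] = frames[k, p]
            row += 1
    sol = lsqr(A.tocsr(), b)[0]                # least-squares solution of the system
    return sol[n_scene:]                       # estimated detector offsets
```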
Improved uncertainty quantification in nondestructive assay for nonproliferation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burr, Tom; Croft, Stephen; Jarman, Ken
2016-12-01
This paper illustrates methods to improve uncertainty quantification (UQ) for non-destructive assay (NDA) measurements used in nuclear nonproliferation. First, it is shown that current bottom-up UQ applied to calibration data is not always adequate, for three main reasons: (1) because there are errors in both the predictors and the response, calibration involves a ratio of random quantities, and calibration data sets in NDA usually consist of only a modest number of samples (3–10); therefore, asymptotic approximations involving quantities needed for UQ such as means and variances are often not sufficiently accurate; (2) common practice overlooks that calibration implies a partitioning of total error into random and systematic error; and (3) in many NDA applications, test items exhibit non-negligible departures in physical properties from calibration items, so model-based adjustments are used, but item-specific bias remains in some data. Therefore, improved bottom-up UQ using calibration data should predict the typical magnitude of item-specific bias, and the suggestion is to do so by including sources of item-specific bias in synthetic calibration data that is generated using a combination of modeling and real calibration data. Second, for measurements of the same nuclear material item by both the facility operator and international inspectors, current empirical (top-down) UQ is described for estimating operator and inspector systematic and random error variance components. A Bayesian alternative is introduced that easily accommodates constraints on variance components, and is more robust than current top-down methods to the underlying measurement error distributions.
14 CFR 33.45 - Calibration tests.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Calibration tests. 33.45 Section 33.45 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: AIRCRAFT ENGINES Block Tests; Reciprocating Aircraft Engines § 33.45 Calibration tests. (a) Each...
ROx3: Retinal oximetry utilizing the blue-green oximetry method
NASA Astrophysics Data System (ADS)
Parsons, Jennifer Kathleen Hendryx
The ROx is a retinal oximeter under development with the purpose of non-invasively and accurately measuring oxygen saturation (SO2) in vivo. It is novel in that it utilizes the blue-green oximetry technique with on-axis illumination. ROx calibration tests were performed by inducing hypoxia in live anesthetized swine and comparing ROx measurements to SO2 values measured by a CO-Oximeter. Calibration was not achieved to the precision required for clinical use, but limiting factors were identified and improved. The ROx was used in a set of sepsis experiments on live pigs with the intention of tracking retinal SO2 during the development of sepsis. Though conclusions are qualitative due to insufficient calibration of the device, retinal venous SO2 is shown to trend generally with central venous SO2 as sepsis develops. The novel sepsis model developed in these experiments is also described. The method of cecal ligation and perforation with additional soiling of the abdomen consistently produced controllable severe sepsis/septic shock in a matter of hours. In addition, the ROx was used to collect retinal images from a healthy human volunteer. These experiments served as a bench test for several of the additions and modifications made to the ROx, specifically illuminating problems with various light paths and image acquisition. The analysis procedure for the ROx is under development, particularly automating the process for consistency, accuracy, and time efficiency. The current stage of automation is explained, including data acquisition processes and the automated vessel fit routine. Suggestions for the next generation of device miniaturization are also described.
Galileo SSI/Ida Radiometrically Calibrated Images V1.0
NASA Astrophysics Data System (ADS)
Domingue, D. L.
2016-05-01
This data set includes Galileo Orbiter SSI radiometrically calibrated images of the asteroid 243 Ida, created using ISIS software and assuming nadir pointing. This is an original delivery of radiometrically calibrated files, not an update to existing files. All images archived include the asteroid within the image frame. Calibration was performed in 2013-2014.
Reduced set averaging of face identity in children and adolescents with autism.
Rhodes, Gillian; Neumann, Markus F; Ewing, Louise; Palermo, Romina
2015-01-01
Individuals with autism have difficulty abstracting and updating average representations from their diet of faces. These averages function as perceptual norms for coding faces, and poorly calibrated norms may contribute to face recognition difficulties in autism. Another kind of average, known as an ensemble representation, can be abstracted from briefly glimpsed sets of faces. Here we show for the first time that children and adolescents with autism also have difficulty abstracting ensemble representations from sets of faces. On each trial, participants saw a study set of four identities and then indicated whether a test face was present. The test face could be a set average or a set identity, from either the study set or another set. Recognition of set averages was reduced in participants with autism, relative to age- and ability-matched typically developing participants. This difference, which actually represents more accurate responding, indicates weaker set averaging and thus weaker ensemble representations of face identity in autism. Our finding adds to the growing evidence for atypical abstraction of average face representations from experience in autism. Weak ensemble representations may have negative consequences for face processing in autism, given the importance of ensemble representations in dealing with processing capacity limitations.
NASA Astrophysics Data System (ADS)
Luo, Liancong; Hamilton, David; Lan, Jia; McBride, Chris; Trolle, Dennis
2018-03-01
Automated calibration of complex deterministic water quality models with a large number of biogeochemical parameters can reduce time-consuming iterative simulations involving empirical judgements of model fit. We undertook autocalibration of the one-dimensional hydrodynamic-ecological lake model DYRESM-CAEDYM, using a Monte Carlo sampling (MCS) method, in order to test the applicability of this procedure for shallow, polymictic Lake Rotorua (New Zealand). The calibration procedure involved independently minimizing the root-mean-square error (RMSE), maximizing the Pearson correlation coefficient (r), and maximizing the Nash-Sutcliffe efficiency coefficient (Nr) for comparisons of model state variables against measured data. An assigned number of parameter permutations was used for 10 000 simulation iterations. The "optimal" temperature calibration produced a RMSE of 0.54 °C, an Nr value of 0.99, and an r value of 0.98 through the whole water column, based on comparisons with 540 observed water temperatures collected between 13 July 2007 and 13 January 2009. The modeled bottom dissolved oxygen concentration (20.5 m below surface) was compared with 467 available observations. The calculated RMSE of the simulations compared with the measurements was 1.78 mg L-1, the Nr value was 0.75, and the r value was 0.87. The autocalibrated model was further tested on an independent data set by simulating bottom-water hypoxia events from 15 January 2009 to 8 June 2011 (875 days). This verification produced an accurate simulation of five hypoxic events corresponding to DO < 2 mg L-1 during the summers of 2009-2011. The RMSE was 2.07 mg L-1, the Nr value 0.62, and the r value 0.81, based on the available data set of 738 days. The autocalibration software for DYRESM-CAEDYM developed here is substantially less time-consuming and more efficient in parameter optimization than traditional manual calibration, which has been the standard tool practiced for similar complex water quality models.
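The three objective functions used in the autocalibration (RMSE, Pearson r, and Nash-Sutcliffe efficiency) are standard; a minimal sketch of how each MCS iteration might score a simulated series against observations:

```python
import numpy as np

def fit_metrics(obs, sim):
    """RMSE, Pearson r, and Nash-Sutcliffe efficiency for one candidate run."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    r = np.corrcoef(obs, sim)[0, 1]
    ns = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return rmse, r, ns
```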
Dinç, Erdal; Ozdemir, Abdil
2005-01-01
A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on linear regression equations constructed from the relationship between concentration and peak area at a five-wavelength set. The algorithm of this mathematical calibration model, having a simple mathematical content, is briefly described. This approach is a powerful mathematical tool for optimum chromatographic multivariate calibration and the elimination of fluctuations coming from instrumental and experimental conditions. This multivariate chromatographic calibration involves reduction of the multivariate linear regression functions to a univariate data set. The validation of the model was carried out by analyzing various synthetic binary mixtures and using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The obtained results were compared with those obtained by a classical HPLC method. It was observed that the proposed multivariate chromatographic calibration gives better results than classical HPLC.
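The underlying idea, one univariate concentration-vs-peak-area line per wavelength with predictions combined across the wavelength set, can be sketched as follows under assumed array shapes (an illustration, not the authors' exact algorithm):

```python
import numpy as np

def build_calibration(conc, peak_areas):
    """
    conc: (n_standards,) analyte concentrations of the calibration standards.
    peak_areas: (n_standards, n_wavelengths) chromatographic peak areas
    recorded at each wavelength of the wavelength set.
    Returns slope/intercept per wavelength (one univariate line each).
    """
    fits = [np.polyfit(conc, peak_areas[:, j], 1) for j in range(peak_areas.shape[1])]
    return np.array(fits)                      # (n_wavelengths, 2)

def predict(areas, fits):
    """Invert each wavelength's line and average the per-wavelength estimates."""
    slopes, intercepts = fits[:, 0], fits[:, 1]
    return np.mean((areas - intercepts) / slopes)
```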
14 CFR 33.85 - Calibration tests.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Calibration tests. 33.85 Section 33.85 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: AIRCRAFT ENGINES Block Tests; Turbine Aircraft Engines § 33.85 Calibration tests. (a) Each engine...
Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert M.
2013-01-01
A new regression model search algorithm was developed that may be applied to both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The algorithm is a simplified version of a more complex algorithm that was originally developed for the NASA Ames Balance Calibration Laboratory. The new algorithm performs regression model term reduction to prevent overfitting of data. It has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a regression model search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression model. Therefore, the simplified algorithm is not intended to replace the original algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new search algorithm.
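Regression-model term reduction to prevent overfitting is commonly done by backward elimination on term significance; the following generic sketch illustrates that idea only, not NASA's actual search algorithm, whose constraints and quality metrics are more elaborate:

```python
import numpy as np
from scipy import stats

def reduce_terms(X, y, term_names, p_max=0.05):
    """Backward elimination: repeatedly drop the least significant regression
    term until all remaining terms have two-sided p-values below p_max."""
    keep = list(range(X.shape[1]))
    while len(keep) > 1:
        Xk = X[:, keep]
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        dof = len(y) - len(keep)
        sigma2 = np.sum((y - Xk @ beta) ** 2) / dof
        cov = sigma2 * np.linalg.inv(Xk.T @ Xk)      # parameter covariance estimate
        t = np.abs(beta) / np.sqrt(np.diag(cov))
        p = 2 * (1 - stats.t.cdf(t, dof))
        worst = int(np.argmax(p))
        if p[worst] < p_max:                         # all terms significant: stop
            break
        keep.pop(worst)
    return [term_names[i] for i in keep]
```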
Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2013-01-01
A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.
Youssef, Joseph El; Engle, Julia M.; Massoud, Ryan G.; Ward, W. Kenneth
2010-01-01
Background: A cause of suboptimal accuracy in amperometric glucose sensors is the presence of a background current (current produced in the absence of glucose) that is not accounted for. We hypothesized that a mathematical correction for the estimated background current of a commercially available sensor would lead to greater accuracy compared to a situation in which we assumed the background current to be zero. We also tested whether increasing the frequency of sensor calibration would improve sensor accuracy. Methods: This report includes analysis of 20 sensor datasets from seven human subjects with type 1 diabetes. Data were divided into a training set for algorithm development and a validation set on which the algorithm was tested. A range of potential background currents was tested. Results: Use of the background current correction of 4 nA led to a substantial improvement in accuracy (improvement of absolute relative difference or absolute difference of 3.5-5.5 units). An increase in calibration frequency led to a modest accuracy improvement, with an optimum at every 4 h. Conclusions: Compared to no correction, a correction for the estimated background current of a commercially available glucose sensor led to greater accuracy and better detection of hypoglycemia and hyperglycemia. The accuracy-optimizing scheme presented here can be implemented in real time. PMID:20879968
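The correction itself is simple to state. A sketch using the ~4 nA background reported above; the function names, units, and one-point calibration form are assumptions for illustration:

```python
def glucose_from_current(i_sensor_nA, sensitivity_nA_per_mgdl, i_background_nA=4.0):
    """Subtract the estimated background current (the study found ~4 nA optimal
    for their sensor) before dividing by the calibrated sensitivity."""
    return (i_sensor_nA - i_background_nA) / sensitivity_nA_per_mgdl

def sensitivity_from_calibration(i_cal_nA, glucose_ref_mgdl, i_background_nA=4.0):
    """Recompute sensitivity at each (e.g., 4-hourly) calibration point."""
    return (i_cal_nA - i_background_nA) / glucose_ref_mgdl
```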
P.D. Jones; L.R. Schimleck; G.F. Peter; R.F. Daniels; A. Clark
2005-01-01
Preliminary studies based on small sample sets show that near infrared (NIR) spectroscopy has the potential for rapidly estimating many important wood properties. However, if NIR is to be used operationally, then calibrations using several hundred samples from a wide variety of growing conditions need to be developed and their performance tested on samples from new...
NASA Technical Reports Server (NTRS)
Giveon, Amir; Kern, Brian; Shaklan, Stuart; Wallace, Kent; Noecker, Charley
2012-01-01
Pair-wise estimation has now been used on various testbeds with different coronagraphs, yielding the best contrast results to date. The pinhole estimate has been implemented and is ready to be tested in closed-loop correction; it offers an independent estimation method. We hope to improve the calibration process to obtain better estimates.
Cevenini, Gabriele; Barbini, Emanuela; Scolletta, Sabino; Biagioli, Bonizella; Giomarelli, Pierpaolo; Barbini, Paolo
2007-11-22
Popular predictive models for estimating morbidity probability after heart surgery are compared critically in a unitary framework. The study is divided into two parts. In the first part, modelling techniques and the intrinsic strengths and weaknesses of different approaches were discussed from a theoretical point of view. In this second part, the performances of the same models are evaluated in an illustrative example. Eight models were developed: Bayes linear and quadratic models, a k-nearest neighbour model, a logistic regression model, the Higgins and direct scoring systems, and two feed-forward artificial neural networks with one and two layers. Cardiovascular, respiratory, neurological, renal, infectious and hemorrhagic complications were defined as morbidity. Training and testing sets, each of 545 cases, were used. The optimal set of predictors was chosen among a collection of 78 preoperative, intraoperative and postoperative variables by a stepwise procedure. Discrimination and calibration were evaluated by the area under the receiver operating characteristic curve and the Hosmer-Lemeshow goodness-of-fit test, respectively. Scoring systems and the logistic regression model required the largest set of predictors, while Bayesian and k-nearest neighbour models were much more parsimonious. In testing data, all models showed acceptable discrimination capacities; however, the Bayes quadratic model, using only three predictors, provided the best performance. All models showed satisfactory generalization ability: again the Bayes quadratic model exhibited the best generalization, while artificial neural networks and scoring systems gave the worst results. Finally, poor calibration was obtained when using scoring systems, the k-nearest neighbour model and artificial neural networks, while Bayes (after recalibration) and logistic regression models gave adequate results. Although all the predictive models showed acceptable discrimination performance in the example considered, the Bayes and logistic regression models seemed better than the others, because they also had good generalization and calibration. The Bayes quadratic model seemed to be a convincing alternative to the much more usual Bayes linear and logistic regression models. It showed its capacity to identify a minimum core of predictors generally recognized as essential to pragmatically evaluate the risk of developing morbidity after heart surgery.
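The two evaluation criteria used above can be sketched directly: AUROC via the rank (Mann-Whitney) formulation, ignoring ties, and a decile-based Hosmer-Lemeshow statistic. Both are illustrative implementations, not the authors' code:

```python
import numpy as np
from scipy import stats

def auroc(y_true, p_hat):
    """Rank-based AUROC (equivalent to the Mann-Whitney U statistic); ties ignored."""
    y_true, p_hat = np.asarray(y_true), np.asarray(p_hat, float)
    order = np.argsort(p_hat)
    ranks = np.empty(len(p_hat))
    ranks[order] = np.arange(1, len(p_hat) + 1)
    n1 = y_true.sum()
    n0 = len(y_true) - n1
    return (ranks[y_true == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

def hosmer_lemeshow(y_true, p_hat, n_groups=10):
    """Chi-square over deciles of predicted risk; a high p-value suggests good calibration."""
    y_true, p_hat = np.asarray(y_true), np.asarray(p_hat, float)
    edges = np.quantile(p_hat, np.linspace(0, 1, n_groups + 1))
    idx = np.clip(np.searchsorted(edges, p_hat, side="right") - 1, 0, n_groups - 1)
    chi2 = 0.0
    for g in range(n_groups):
        m = idx == g
        if m.sum() == 0:
            continue
        obs, exp, n = y_true[m].sum(), p_hat[m].sum(), m.sum()
        chi2 += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    return chi2, 1 - stats.chi2.cdf(chi2, n_groups - 2)
```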
Reactive flow calibration for diaminoazoxyfurazan (DAAF) and comparison with experiment
NASA Astrophysics Data System (ADS)
Johnson, Carl; Francois, Elizabeth Green; Morris, John
2012-03-01
Diaminoazoxyfurazan (DAAF) has a number of desirable properties; it is sensitive to shock while being insensitive to initiation by low level impact or friction, it has a small failure diameter, and its manufacturing process is inexpensive with minimal environmental impact. In light of its unique properties, DAAF based materials have gained interest for possible applications in insensitive munitions. In order to facilitate hydrocode modeling of DAAF and DAAF based formulations, we have developed a set of reactive flow parameters which were calibrated using published experimental data as well as recent experiments at LANL. Hydrocode calculations using the DAAF reactive flow parameters developed in the course of this work were compared to rate stick experiments, small scale gap tests, as well as the Onionskin experiment. Hydrocode calculations were compared directly to streak image results using numerous tracer points in conjunction with an external algorithm to match the data sets. The calculations display a reasonable agreement with experiment with the exception of effects related to shock desensitization of explosive.
NASA Technical Reports Server (NTRS)
Giveona, Amir; Shaklan, Stuart; Kern, Brian; Noecker, Charley; Kendrick, Steve; Wallace, Kent
2012-01-01
In a setup similar to the self coherent camera, we have added a set of pinholes in the diffraction ring of the Lyot plane in a high-contrast stellar Lyot coronagraph. We describe a novel complex electric field reconstruction from image plane intensity measurements consisting of light in the coronagraph's dark hole interfering with light from the pinholes. The image plane field is modified by letting light through one pinhole at a time. In addition to estimation of the field at the science camera, this method allows for self-calibration of the probes by letting light through the pinholes in various permutations while blocking the main Lyot opening. We present results of estimation and calibration from the High Contrast Imaging Testbed along with a comparison to the pair-wise deformable mirror diversity based estimation technique. Tests are carried out in narrow-band light and over a composite 10% bandpass.
NASA Astrophysics Data System (ADS)
Khrustalev, K.
2016-12-01
The current process for the calibration of the beta-gamma detectors used for radioxenon isotope measurements for CTBT purposes is laborious and time consuming. It uses a combination of point sources and gaseous sources, resulting in differences between energy and resolution calibrations. The emergence of high-resolution SiPIN-based electron detectors allows improvements to be made in the calibration and analysis process. Thanks to the high electron resolution of SiPIN detectors (~8-9 keV at 129 keV) compared to plastic scintillators (~35 keV at 129 keV), many more conversion-electron (CE) peaks (from radioxenon and radon progenies) can be resolved and used for energy and resolution calibration in the energy range of the CTBT-relevant radioxenon isotopes. The long-term stability of the SiPIN energy calibration allows one to significantly reduce the time of the QC measurements needed for checking the stability of the E/R calibration. The currently used second-order polynomials for the E/R calibration fitting are unphysical and should be replaced by a linear energy calibration for NaI and SiPIN, owing to the high linearity and dynamic range of modern digital DAQ systems, and the resolution calibration functions should be modified to reflect the underlying physical processes. Alternatively, one can completely abandon the use of fitting functions and use only point values of E/R (similar to the efficiency calibration currently used) at the energies relevant for the isotopes of interest (ROI, regions of interest). The current analysis treats the detector as a set of single-channel analysers, with an established set of coefficients relating the positions of the ROIs to the positions of the QC peaks. The analysis of the spectra can be made more robust using peak and background fitting in the ROIs, with a single free parameter (peak area) for the potential peaks from the known isotopes and a fixed set of E/R calibration values.
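The proposed linear energy calibration from resolved CE peaks is a one-line fit. A sketch with made-up channel/energy pairs (the values below are illustrative, not measured calibration points):

```python
import numpy as np

def linear_energy_calibration(channels, energies_keV):
    """Fit E = a*channel + b from known conversion-electron peak positions,
    e.g., radioxenon and radon-progeny CE lines resolved by a SiPIN detector."""
    a, b = np.polyfit(channels, energies_keV, 1)
    return a, b

# Illustrative, invented channel/energy pairs:
a, b = linear_energy_calibration([210, 320, 520], [45.0, 75.0, 129.4])
print(f"E(keV) = {a:.4f} * channel + {b:.2f}")
```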
Pacilio, M; Basile, C; Shcherbinin, S; Caselli, F; Ventroni, G; Aragno, D; Mango, L; Santini, E
2011-06-01
Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging play an important role in the segmentation of functioning parts of organs or tumours, but an accurate and reproducible delineation is still a challenging task. In this work, an innovative iterative thresholding method for tumour segmentation has been proposed and implemented for a SPECT system. This method, which is based on experimental threshold-volume calibrations, also implements the recovery coefficients (RC) of the imaging system, so it has been called the recovering iterative thresholding method (RIThM). The possibility of employing Monte Carlo (MC) simulations for system calibration was also investigated. The RIThM is an iterative algorithm coded in MATLAB: after an initial rough estimate of the volume of interest, the following calculations are repeated: (i) the corresponding source-to-background ratio (SBR) is measured and corrected by means of the RC curve; (ii) the threshold corresponding to the amended SBR value and the volume estimate is then found using threshold-volume data; (iii) a new volume estimate is obtained by image thresholding. The process goes on until convergence. The RIThM was implemented for an Infinia Hawkeye 4 (GE Healthcare) SPECT/CT system, using a Jaszczak phantom and several test objects. Two MC codes were tested to simulate the calibration images: SIMIND and SimSet. For validation, test images consisting of hot spheres and some anatomical structures of the Zubal head phantom were simulated with the SIMIND code. Additional test objects (flasks and vials) were also imaged experimentally. Finally, the RIThM was applied to evaluate three cases of brain metastases and two cases of high-grade gliomas. Comparing experimental thresholds and those obtained by MC simulations, a maximum difference of about 4% was found, within the errors (±2% and ±5%, for volumes ≥ 5 ml and < 5 ml, respectively). Also for the RC data, the comparison showed differences (up to 8%) within the assigned error (±6%). An ANOVA test demonstrated that the calibration results (in terms of thresholds or RCs at various volumes) obtained by MC simulations were indistinguishable from those obtained experimentally. The accuracy in volume determination for the simulated hot spheres was between -9% and 15% in the range 4-270 ml, whereas for volumes less than 4 ml (in the range 1-3 ml) the difference increased abruptly, reaching values greater than 100%. For the Zubal head phantom, errors ranged between 9% and 18%. For the experimental test images, the accuracy level was within ±10% for volumes in the range 20-110 ml. The preliminary test of application on patients evidenced the suitability of the method in a clinical setting. The MC-guided delineation of tumour volume may reduce the acquisition time required for the experimental calibration. Analysis of images of several simulated and experimental test objects, the Zubal head phantom, and clinical cases demonstrated the robustness, suitability, accuracy, and speed of the proposed method. Nevertheless, studies concerning tumours of irregular shape and/or nonuniform distribution of the background activity are still in progress.
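A schematic of the RIThM iteration loop, with the experimental threshold-volume and recovery-coefficient calibrations abstracted as caller-supplied functions (`rc_curve`, `threshold_lut`); the convergence tolerance and the initial 50% threshold are assumptions of this sketch:

```python
import numpy as np

def rithm(image, background, rc_curve, threshold_lut, voxel_ml,
          init_frac=0.5, tol=0.02, max_iter=50):
    """
    Recovering iterative thresholding (illustrative sketch).
    rc_curve(volume) -> recovery coefficient; threshold_lut(sbr, volume) ->
    threshold fraction of the image maximum; both stand in for the system
    calibration. image is a NumPy count array; background a mean count level.
    """
    peak = image.max()
    mask = image >= init_frac * peak                 # rough initial segmentation
    volume = mask.sum() * voxel_ml
    for _ in range(max_iter):
        sbr = image[mask].mean() / background        # (i) measured SBR...
        sbr /= rc_curve(volume)                      # ...corrected by the RC curve
        thr = threshold_lut(sbr, volume)             # (ii) calibrated threshold
        mask = image >= thr * peak                   # (iii) new volume estimate
        new_volume = mask.sum() * voxel_ml
        if abs(new_volume - volume) <= tol * volume: # stop at convergence
            break
        volume = new_volume
    return mask, volume
```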
Test surfaces useful for calibration of surface profilometers
Yashchuk, Valeriy V; McKinney, Wayne R; Takacs, Peter Z
2013-12-31
The present invention provides for test surfaces and methods for calibration of surface profilometers, including interferometric and atomic force microscopes. Calibration is performed using a specially designed test surface, the Binary Pseudo-random (BPR) grating (array). Utilizing the BPR grating (array) to measure the power spectral density (PSD) spectrum, the profilometer is calibrated by determining the instrumental modulation transfer function (MTF).
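The calibration idea, that the measured PSD of a BPR surface differs from its known ideal PSD only by the instrument's transfer function, can be sketched for a 1-D profile as follows (the PSD normalization and the user-supplied ideal PSD are assumptions of this sketch):

```python
import numpy as np

def mtf_from_bpr(measured_profile, ideal_psd, dx):
    """
    Instrumental MTF estimate from a binary pseudo-random grating:
    MTF(f)^2 = PSD_measured(f) / PSD_ideal(f).
    ideal_psd must have length len(measured_profile)//2 + 1 (rfft bins).
    """
    n = len(measured_profile)
    psd = np.abs(np.fft.rfft(measured_profile)) ** 2 * dx / n  # one-sided PSD estimate
    return np.sqrt(psd / ideal_psd)
```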
NASA Astrophysics Data System (ADS)
Liamsuwan, T.; Channuie, J.; Ratanatongchai, W.
2015-05-01
Reliable measurement of neutron radiation is important for monitoring and protection in workplaces where neutrons are present. Although Thailand has been familiar with applications of neutron sources and neutron beams for many decades, no calibration facility dedicated to neutron measuring devices is available in the country. Recently, Thailand Institute of Nuclear Technology (TINT) has set up a multi-purpose irradiation facility equipped with a 50 Ci americium-241/beryllium neutron irradiator. The facility is planned to be used for research, nuclear analytical techniques and, among other applications, calibration of neutron measuring devices. In this work, the neutron calibration fields were investigated in terms of neutron energy spectra and dose equivalent rates using Monte Carlo simulations, an in-house developed neutron spectrometer, and commercial survey meters. The characterized neutron fields can generate neutron dose equivalent rates ranging from 156 μSv/h to 3.5 mSv/h, with nearly 100% of the dose contributed by neutrons of energies larger than 0.01 MeV. The gamma contamination ranged from 4.2% to 7.5%, depending on the irradiation configuration. It is possible to use the described neutron fields for calibration testing and routine quality assurance of neutron dose rate meters and passive dosemeters commonly used in radiation protection dosimetry.
GHRS Cycle 5 Echelle Wavelength Monitor
NASA Astrophysics Data System (ADS)
Soderblom, David
1995-07-01
This proposal defines the spectral lamp tests for Echelle A and Echelle B. These are internal tests which make measurements of the wavelength lamp SC2. They calibrate the carrousel function, Y deflections, resolving power, sensitivity, and scattered light. The wavelength calibration dispersion constants will be updated in the PODPS calibration data base. The test will be run every 4 months. The wavelengths may be out of range according to PEPSI or TRANS; please ignore the errors.
40 CFR 91.326 - Pre- and post-test analyzer calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Pre- and post-test analyzer... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM MARINE SPARK-IGNITION ENGINES Emission Test Equipment Provisions § 91.326 Pre- and post-test analyzer calibration. Calibrate the operating range of each analyzer...
40 CFR 91.326 - Pre- and post-test analyzer calibration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Pre- and post-test analyzer... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM MARINE SPARK-IGNITION ENGINES Emission Test Equipment Provisions § 91.326 Pre- and post-test analyzer calibration. Calibrate the operating range of each analyzer...
40 CFR 91.326 - Pre- and post-test analyzer calibration.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Pre- and post-test analyzer... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM MARINE SPARK-IGNITION ENGINES Emission Test Equipment Provisions § 91.326 Pre- and post-test analyzer calibration. Calibrate the operating range of each analyzer...
40 CFR 91.326 - Pre- and post-test analyzer calibration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Pre- and post-test analyzer... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM MARINE SPARK-IGNITION ENGINES Emission Test Equipment Provisions § 91.326 Pre- and post-test analyzer calibration. Calibrate the operating range of each analyzer...
7 CFR 28.956 - Prescribed fees.
Code of Federal Regulations, 2014 CFR
2014-01-01
.... sample 42.00 3.0 Furnishing standard color tiles for calibrating cotton colormeters, per set of five tiles... outside continental United States 165.00 3.1 Furnishing single color calibration tiles for use with specific instruments or as replacements in above sets, each tile: a. f.o.b. Memphis, Tennessee 22.00 b...
7 CFR 28.956 - Prescribed fees.
Code of Federal Regulations, 2012 CFR
2012-01-01
.... sample 42.00 3.0 Furnishing standard color tiles for calibrating cotton colormeters, per set of five tiles... outside continental United States 165.00 3.1 Furnishing single color calibration tiles for use with specific instruments or as replacements in above sets, each tile: a. f.o.b. Memphis, Tennessee 22.00 b...
The Role of Feedback on Studying, Achievement and Calibration.
ERIC Educational Resources Information Center
Chu, Stephanie T. L.; Jamieson-Noel, Dianne L.; Winne, Philip H.
One set of hypotheses examined in this study was that various types of feedback (outcome, process, and corrective) supply different information about performance and have different effects on studying processes and on achievement. Another set of hypotheses concerned students' calibration, their accuracy in predicting and postdicting achievement…
Check Calibration of the NASA Glenn 10- by 10-Foot Supersonic Wind Tunnel (2014 Test Entry)
NASA Technical Reports Server (NTRS)
Johnson, Aaron; Pastor-Barsi, Christine; Arrington, E. Allen
2016-01-01
A check calibration of the 10- by 10-Foot Supersonic Wind Tunnel (SWT) was conducted in May/June 2014 using an array of five supersonic wedge probes to verify the 1999 calibration. This check calibration was necessary following a control systems upgrade and an integrated systems test (IST), and was required to verify that the tunnel flow quality was unchanged by the upgrade before the next test customer began their test entry. The previous check calibration of the tunnel occurred in 2007, prior to the Mars Science Laboratory test program. Secondary objectives of this test entry included the validation of the new Cobra data acquisition system (DAS) against the current Escort DAS and the creation of statistical process control (SPC) charts through the collection of series of repeated test points at certain predetermined tunnel parameters. The SPC-chart objective was not completed due to schedule constraints; it is hoped that this effort will be readdressed and completed in the near future.
The recalibration of the IUE scientific instrument
NASA Technical Reports Server (NTRS)
Imhoff, Catherine L.; Oliversen, Nancy A.; Nichols-Bohlin, Joy; Casatella, Angelo; Lloyd, Christopher
1988-01-01
The IUE instrument was recalibrated because of long time-scale changes in the scientific instrument, a better understanding of the performance of the instrument, improved sets of calibration data, and improved analysis techniques. Calibrations completed or planned include intensity transfer functions (ITF), low-dispersion absolute calibrations, high-dispersion ripple corrections and absolute calibrations, improved geometric mapping of the ITFs to spectral images, studies to improve the signal-to-noise, enhanced absolute calibrations employing corrections for time, temperature, and aperture dependence, and photometric and geometric calibrations for the FES.
Method and apparatus for calibrating multi-axis load cells in a dexterous robot
NASA Technical Reports Server (NTRS)
Wampler, II, Charles W. (Inventor); Platt, Jr., Robert J. (Inventor)
2012-01-01
A robotic system includes a dexterous robot having robotic joints, angle sensors adapted for measuring joint angles at a corresponding one of the joints, load cells for measuring a set of strain values imparted to a corresponding one of the load cells during a predetermined pose of the robot, and a host machine. The host machine is electrically connected to the load cells and angle sensors, and receives the joint angle values and strain values during the predetermined pose. The robot presses together mating pairs of load cells to form the poses. The host machine executes an algorithm to process the joint angles and strain values and, from the set of all calibration matrices that minimize error in the force balance equations, selects the set of calibration matrices closest to a pre-specified value. A method for calibrating the load cells via the algorithm is also provided.
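The "closest to a pre-specified value" selection can be realized with a minimum-norm least-squares step; the sketch below illustrates the idea under assumed array shapes, and is not the patented algorithm itself:

```python
import numpy as np

def calibrate_load_cell(S, F, C0):
    """
    S: (n_obs, n_gauges) strain readings across poses; F: (n_obs, 6) forces and
    torques implied by the force balance equations of each pose; C0: a
    pre-specified (e.g., factory) calibration matrix of shape (n_gauges, 6).
    All least-squares minimizers of ||S C - F|| differ by null-space terms of S;
    np.linalg.lstsq returns the minimum-norm deviation D, so C0 + D is the
    minimizer closest to C0 in Frobenius norm.
    """
    D, *_ = np.linalg.lstsq(S, F - S @ C0, rcond=None)
    return C0 + D
```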
Naderi, S; Yin, T; König, S
2016-09-01
A simulation study was conducted to investigate the performance of random forest (RF) and genomic BLUP (GBLUP) for genomic predictions of binary disease traits based on cow calibration groups. Training and testing sets were modified in different scenarios according to disease incidence, the quantitative-genetic background of the trait (h(2)=0.30 and h(2)=0.10), and the genomic architecture [725 quantitative trait loci (QTL) and 290 QTL, populations with high and low levels of linkage disequilibrium (LD)]. For all scenarios, 10,005 SNP (depicting a low-density 10K SNP chip) and 50,025 SNP (depicting a 50K SNP chip) were evenly spaced along 29 chromosomes. Training and testing sets included 20,000 cows (4,000 sick, 16,000 healthy, disease incidence 20%) from the last 2 generations. Initially, 4,000 sick cows were assigned to the testing set, and the remaining 16,000 healthy cows represented the training set. In the ongoing allocation schemes, the number of sick cows in the training set increased stepwise by moving 10% of the sick animals from the testing set to the training set, and vice versa. The size of the training and testing sets was kept constant. Evaluation criteria for both GBLUP and RF were the correlations between genomic breeding values and true breeding values (prediction accuracy), and the area under the receiver operating characteristic curve (AUROC). Prediction accuracy and AUROC increased for both methods and all scenarios as increasing percentages of sick cows were allocated to the training set. The highest prediction accuracies were observed for disease incidences in training sets that reflected the population disease incidence of 0.20. For this allocation scheme, the largest prediction accuracies of 0.53 for RF and of 0.51 for GBLUP, and the largest AUROC of 0.66 for RF and of 0.64 for GBLUP, were achieved using 50,025 SNP, a heritability of 0.30, and 725 QTL. Decreasing the heritability from 0.30 to 0.10 and reducing the number of QTL from 725 to 290 were associated with decreasing prediction accuracy and decreasing AUROC for all scenarios. This decrease was more pronounced for RF. Also, the increase in LD had a stronger effect on RF results than on GBLUP results. The highest prediction accuracy from the low LD scenario was 0.30 from RF and 0.36 from GBLUP, and increased to 0.39 for both methods in the high LD population. Random forest successfully identified important SNP in close map distance to QTL explaining a high proportion of the phenotypic trait variations. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
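The stepwise allocation scheme is easy to emulate. Below is a hedged, heavily scaled-down sketch using scikit-learn's RandomForestClassifier on simulated SNP genotypes and a liability-threshold trait; all sizes and parameter values are placeholders rather than the study's data, and the degenerate endpoints (no sick cows in training, none in testing) are skipped so the AUROC is defined.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Toy liability-threshold disease trait (~20% incidence); all sizes are
# scaled down from the paper's 20,000 cows / 50,025 SNP.
n, p, n_qtl, h2 = 2000, 1000, 100, 0.3
X = rng.binomial(2, 0.5, size=(n, p)).astype(float)
beta = np.zeros(p)
qtl = rng.choice(p, n_qtl, replace=False)
beta[qtl] = rng.normal(size=n_qtl)
g = X @ beta
g = (g - g.mean()) / g.std() * np.sqrt(h2)                # genetic values
liab = g + rng.normal(scale=np.sqrt(1.0 - h2), size=n)    # liabilities
y = (liab > np.quantile(liab, 0.8)).astype(int)           # top 20% are sick

sick, healthy = np.where(y == 1)[0], np.where(y == 0)[0]
step = len(sick) // 10
# Move 10% of the sick cows at a time from testing to training, swapping
# healthy cows back so both set sizes stay constant (endpoints skipped).
for k in range(1, 10):
    train = np.concatenate([sick[:k * step], healthy[k * step:]])
    test = np.concatenate([sick[k * step:], healthy[:k * step]])
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(X[train], y[train])
    auc = roc_auc_score(y[test], rf.predict_proba(X[test])[:, 1])
    print(f"{k * 10:3d}% of sick cows in training: AUROC = {auc:.3f}")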
2018-02-01
international proficiency testing sponsored by the Organisation for the Prohibition of Chemical Weapons (The Hague, Netherlands). Traditionally… separate batch of standards at each level, for a total of six analyses at each calibration level. Concentrations of the tested calibration levels are… and ruthenium at each calibration level. References: 1. General Requirements for the Competence of Testing and Calibration Laboratories
Calibrating/testing meters in hot water test bench VM7
NASA Astrophysics Data System (ADS)
Kling, E.; Stolt, K.; Lau, P.; Mattiasson, K.
A Hot Water Test Bench, VM7, has been developed and constructed for the calibration and testing of volume meters and flowmeters, in a project at the National Volume Measurement Laboratory at the Swedish National Testing and Research Institute. The intended uses include serving as a reference for audit measurements, e.g. for accredited laboratories, calibrating meters for industry, and testing hot water meters. The objective of the project, which was initiated in 1989, was to design equipment with stable flow and with a minimal temperature drop even at very low flow rates. The principle of the design is a closed system with two pressure tanks at different pressures. The water is led from the high-pressure tank, through the test object and the volume standard (master meters or, alternatively, a piston prover), to the low-pressure tank. Calibrations/tests are made by comparing the indication of the test object to that of master meters covering the current flow rate; the master meters are, in the same test cycle, calibrated against the piston prover. Alternatively, the test object can be calibrated directly against the piston prover.
NASA Astrophysics Data System (ADS)
Sperling, A.; Meyer, M.; Pendsa, S.; Jordan, W.; Revtova, E.; Poikonen, T.; Renoux, D.; Blattner, P.
2018-04-01
Proper characterization of the test setups used in industry for testing and traceable measurement of lighting devices by the substitution method is an important task. According to new standards for testing LED lamps, luminaires and modules, uncertainty budgets are requested because in many cases the properties of the device under test differ from those of the transfer standard used, which may cause significant errors, for example if an LED-based lamp is tested or calibrated in an integrating sphere which was calibrated with a tungsten lamp. This paper introduces a multiple transfer standard, which was designed not only to transfer a single calibration value (e.g. luminous flux) but also to characterize test setups used for LED measurements, with additional provided and calibrated output features to enable the application of the new standards.
Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction
NASA Technical Reports Server (NTRS)
Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)
2001-01-01
In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.
Hou, Siyuan; Riley, Christopher B; Mitchell, Cynthia A; Shaw, R Anthony; Bryanton, Janet; Bigsby, Kathryn; McClure, J Trenton
2015-09-01
Immunoglobulin G (IgG) is crucial for the protection of the host from invasive pathogens. Due to its importance for human health, tools that enable the monitoring of IgG levels are highly desired. Consequently, there is a need for methods to determine the IgG concentration that are simple, rapid, and inexpensive. This work explored the potential of attenuated total reflectance (ATR) infrared spectroscopy as a method to determine IgG concentrations in human serum samples. Venous blood samples were collected from adults and children, and from the umbilical cord of newborns. The serum was harvested and tested using ATR infrared spectroscopy. Partial least squares (PLS) regression provided the basis for developing the new analytical methods. Three PLS calibrations were determined: one for the combined set of the venous and umbilical cord serum samples, the second for only the umbilical cord samples, and the third for only the venous samples. The number of PLS factors was chosen by critical evaluation of Monte Carlo-based cross-validation results. The predictive performance of each PLS calibration was evaluated using the Pearson correlation coefficient, scatter and Bland-Altman plots, and percent deviations for independent prediction sets. The repeatability was evaluated by standard deviation and relative standard deviation. The results showed that ATR infrared spectroscopy is potentially a simple, quick, and inexpensive method to measure IgG concentrations in human serum samples. The results also showed that it is possible to build a unified calibration curve for the umbilical cord and the venous samples. Copyright © 2015 Elsevier B.V. All rights reserved.
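A minimal sketch of the model-selection step, choosing the number of PLS factors by Monte Carlo cross-validation as described, might look like the following (scikit-learn based; the function name and default settings are assumptions, not the paper's):

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import ShuffleSplit

def choose_pls_factors(X, y, max_factors=15, n_splits=50, test_size=0.3, seed=0):
    # Monte Carlo cross-validation: many random calibration/validation
    # splits; the factor count with the lowest mean validation RMSE wins.
    rmse = np.zeros(max_factors)
    splits = ShuffleSplit(n_splits=n_splits, test_size=test_size, random_state=seed)
    for tr, va in splits.split(X):
        for k in range(1, max_factors + 1):
            pls = PLSRegression(n_components=k).fit(X[tr], y[tr])
            e = y[va] - pls.predict(X[va]).ravel()
            rmse[k - 1] += np.sqrt(np.mean(e ** 2))
    rmse /= n_splits
    return int(np.argmin(rmse)) + 1, rmse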
Kinch, Kjartan M; Bell, James F; Goetz, Walter; Johnson, Jeffrey R; Joseph, Jonathan; Madsen, Morten Bo; Sohl-Dickstein, Jascha
2015-05-01
The Panoramic Cameras on NASA's Mars Exploration Rovers have each returned more than 17,000 images of their calibration targets. In order to make optimal use of this data set for reflectance calibration, a correction must be made for the presence of air fall dust. Here we present an improved dust correction procedure based on a two-layer scattering model, and we present a dust reflectance spectrum derived from long-term trends in the data set. The dust on the calibration targets appears brighter than dusty areas of the Martian surface. We derive detailed histories of dust deposition and removal revealing two distinct environments: At the Spirit landing site, half the year is dominated by dust deposition, the other half by dust removal, usually in brief, sharp events. At the Opportunity landing site the Martian year has a semiannual dust cycle with dust removal happening gradually throughout two removal seasons each year. The highest observed optical depth of settled dust on the calibration target is 1.5 on Spirit and 1.1 on Opportunity (at 601 nm). We derive a general prediction for dust deposition rates of 0.004 ± 0.001 in units of surface optical depth deposited per sol (Martian solar day) per unit atmospheric optical depth. We expect this procedure to lead to improved reflectance-calibration of the Panoramic Camera data set. In addition, it is easily adapted to similar data sets from other missions in order to deliver improved reflectance calibration as well as data on dust reflectance properties and deposition and removal history.
Uncertainty quantification for constitutive model calibration of brain tissue.
Brewick, Patrick T; Teferra, Kirubel
2018-05-31
The results of a study comparing model calibration techniques for Ogden's constitutive model describing the hyperelastic behavior of brain tissue are presented. One- and two-term Ogden models are fit to two different sets of stress-strain experimental data for brain tissue using both least squares optimization and Bayesian estimation. For the Bayesian estimation, the joint posterior distribution of the constitutive parameters is calculated by employing Hamiltonian Monte Carlo (HMC) sampling, a type of Markov chain Monte Carlo method. The HMC method is enriched in this work to intrinsically enforce the Drucker stability criterion by formulating a nonlinear parameter constraint function, which ensures the constitutive model produces physically meaningful results. Through application of the nested sampling technique, 95% confidence bounds on the constitutive model parameters are identified, and these bounds are then propagated through the constitutive model to produce the resultant bounds on the stress-strain response. The behavior of the model calibration procedures and the effect of the characteristics of the experimental data are extensively evaluated. It is demonstrated that increasing model complexity (i.e., adding an additional term in the Ogden model) improves the accuracy of the best-fit set of parameters while also increasing the uncertainty via the widening of the confidence bounds of the calibrated parameters. Despite some similarity between the two data sets, the resulting distributions are noticeably different, highlighting the sensitivity of the calibration procedures to the characteristics of the data. For example, the amount of uncertainty reported on the experimental data plays an essential role in how data points are weighted during the calibration, and this significantly affects how the parameters are calibrated when combining experimental data sets from disparate sources. Published by Elsevier Ltd.
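For orientation, the least-squares baseline of such a calibration can be sketched as below. This is only the optimization step, not the paper's Bayesian HMC procedure with the Drucker stability constraint (which requires μα > 0 for each term); the material values are placeholders, and the strain-energy convention is one common choice among several.

import numpy as np
from scipy.optimize import curve_fit

def ogden1_uniaxial(stretch, mu, alpha):
    # Nominal (first Piola-Kirchhoff) stress for a one-term incompressible
    # Ogden model in uniaxial loading, using the convention
    # W = (2*mu/alpha**2) * (l1**a + l2**a + l3**a - 3).
    return (2.0 * mu / alpha) * (stretch ** (alpha - 1.0)
                                 - stretch ** (-alpha / 2.0 - 1.0))

# Placeholder "experimental" stretches and stresses (Pa); real brain-tissue
# data would come from tension/compression tests like those cited above.
lam = np.linspace(0.85, 1.25, 21)
rng = np.random.default_rng(0)
sigma = ogden1_uniaxial(lam, mu=1.0e3, alpha=3.5)
sigma += rng.normal(scale=10.0, size=lam.size)

popt, pcov = curve_fit(ogden1_uniaxial, lam, sigma, p0=[500.0, 2.0])
perr = np.sqrt(np.diag(pcov))  # crude 1-sigma uncertainties from the LS fit
print(f"mu = {popt[0]:.0f} +/- {perr[0]:.0f} Pa, alpha = {popt[1]:.2f} +/- {perr[1]:.2f}")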
Masili, Alice; Puligheddu, Sonia; Sassu, Lorenzo; Scano, Paola; Lai, Adolfo
2012-11-01
In this work, we report a feasibility study on predicting the properties of neat crude oil samples from 300-MHz NMR spectral data and partial least squares (PLS) regression models. The study was carried out on 64 crude oil samples obtained from 28 different extraction fields and aims at developing a rapid and reliable method for characterizing crude oil in a fast and cost-effective way. The main properties generally employed for evaluating crudes' quality and behavior during refining were measured and used for calibration and testing of the PLS models. Among these, we investigated the UOP characterization factor K (K(UOP)), used to classify crude oils in terms of composition, as well as density (D), total acidity number (TAN), sulfur content (S), and true boiling point (TBP) distillation yields. Test set validation with an independent set of data was used to evaluate model performance on the basis of standard error of prediction (SEP) statistics. Model performances are particularly good for the K(UOP) factor, TAN, and TBP distillation yields, whose standard error of calibration and SEP values match the analytical method precision, while the results obtained for D and S are less accurate but still useful for predictions. Furthermore, a strategy that reduces spectral data preprocessing and sample preparation procedures has been adopted. The models developed with such an ample crude oil set demonstrate that this methodology can be applied with success to modern refining process requirements. Copyright © 2012 John Wiley & Sons, Ltd.
Optimum data weighting and error calibration for estimation of gravitational parameters
NASA Technical Reports Server (NTRS)
Lerch, Francis J.
1989-01-01
A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least-squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in the Goddard Earth Model-T1 (GEM-T1) were employed toward application of this technique for gravity field parameters. GEM-T2 (31 satellites) was recently computed as a direct application of the method and is also summarized. The method adjusts the data weights so that subset solutions of the data agree with the complete solution to within their error estimates. With the adjusted weights, the process provides an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.
Dark Energy Survey Year 1 Results: Weak Lensing Shape Catalogues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuntz, J.; et al.
We present two galaxy shape catalogues from the Dark Energy Survey Year 1 data set, covering 1500 square degrees with a median redshift of 0.59. The catalogues cover two main fields: Stripe 82, and an area overlapping the South Pole Telescope survey region. We describe our data analysis process and in particular our shape measurement using two independent shear measurement pipelines, METACALIBRATION and IM3SHAPE. The METACALIBRATION catalogue uses a Gaussian model with an innovative internal calibration scheme, and was applied to riz bands, yielding 34.8M objects. The IM3SHAPE catalogue uses a maximum-likelihood bulge/disc model calibrated using simulations, and was applied to r-band data, yielding 21.9M objects. Both catalogues pass a suite of null tests that demonstrate their fitness for use in weak lensing science. We estimate the 1σ uncertainties in multiplicative shear calibration to be 0.013 and 0.025 for the METACALIBRATION and IM3SHAPE catalogues, respectively.
Reconstruction method for fringe projection profilometry based on light beams.
Li, Xuexing; Zhang, Zhijiang; Yang, Chen
2016-12-01
A novel reconstruction method for fringe projection profilometry, based on light beams, is proposed and verified by experiments. Commonly used calibration techniques require either projector calibration parameters or reference planes placed at many known positions. Introducing projector calibration can reduce the accuracy of the reconstruction result, and placing the reference planes at many known positions is time consuming. Therefore, in this paper, a reconstruction method that requires no projector parameters and only two reference planes is proposed. A series of light beams, determined by the subpixel point-to-point map on the two reference planes, combined with their reflected light beams, determined by the camera model, are used to calculate the 3D coordinates of reconstruction points. Furthermore, the bundle adjustment strategy and the complementary gray-code phase-shifting method are utilized to ensure accuracy and stability. Qualitative and quantitative comparisons as well as experimental tests demonstrate the performance of the proposed approach, and the measurement accuracy can reach about 0.0454 mm.
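The core geometric step, intersecting a projector light beam (fixed by its hit points on the two reference planes) with the camera's view ray, reduces to triangulating two nearly intersecting lines. A minimal sketch, with all coordinates as placeholder values:

import numpy as np

def intersect_rays(p1, d1, p2, d2):
    # Midpoint of the common perpendicular of two (nearly intersecting)
    # rays p_i + t_i * d_i -- a standard way to triangulate a surface point
    # from a projector light beam and a camera view ray.
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve [d1 -d2] [t1 t2]^T = p2 - p1 in the least-squares sense.
    A = np.stack([d1, -d2], axis=1)
    t, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    return 0.5 * ((p1 + t[0] * d1) + (p2 + t[1] * d2))

# A projector beam is fixed by the two points where the same (phase-decoded)
# fringe pixel lands on the two reference planes; the camera ray comes from
# the calibrated camera model. All numbers below are illustrative.
q0 = np.array([10.0, 5.0, 0.0])       # hit point on reference plane 1
q1 = np.array([12.0, 6.0, 50.0])      # hit point on reference plane 2
cam_c = np.array([0.0, 0.0, 500.0])   # camera centre (from camera calibration)
pix_dir = np.array([0.02, 0.01, -1])  # back-projected pixel direction
print(intersect_rays(q0, q1 - q0, cam_c, pix_dir))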
Aero-Thermal Calibration of the NASA Glenn Icing Research Tunnel (2012 Tests)
NASA Technical Reports Server (NTRS)
Pastor-Barsi, Christine; Allen, Arrington E.
2013-01-01
A full aero-thermal calibration of the NASA Glenn Icing Research Tunnel (IRT) was completed in 2012 following the major modifications to the facility that included replacement of the refrigeration plant and heat exchanger. The calibration test provided data used to fully document the aero-thermal flow quality in the IRT test section and to construct calibration curves for the operation of the IRT.
Initial Radiometric Calibration of the AWiFS using Vicarious Calibration Techniques
NASA Technical Reports Server (NTRS)
Pagnutti, Mary; Thome, Kurtis; Aaron, David; Leigh, Larry
2006-01-01
NASA SSC maintains four ASD FieldSpec FR spectroradiometers, used as (1) laboratory transfer radiometers and (2) ground surface reflectance instruments for V&V field collection activities. Radiometric calibration uses a NIST-calibrated integrating sphere, which serves as a source of known spectral radiance. Spectral calibration uses laser and pen-lamp illumination of the integrating sphere. Environmental testing includes temperature stability tests performed in an environmental chamber.
Sub-Camera Calibration of a Penta-Camera
NASA Astrophysics Data System (ADS)
Jacobsen, K.; Gerke, M.
2016-03-01
Penta cameras, consisting of a nadir and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by the program system BLUH. Dense matching by Pix4Dmapper provided 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high number of images per object point is concentrated in the block centres, while the inclined images outside the block centres are satisfactorily, but not very strongly, connected. This leads to very high values of the Student test (T-test) for the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration by IGI, but there are still radial symmetric distortions for the inclined cameras with a size exceeding 5 μm, even though these were mentioned as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Notable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors for corresponding cameras of both blocks show the same trend, but, as usual for block adjustments with self-calibration, they still show significant differences. Based on the very high number of image points, the remaining image residuals can be safely determined by overlaying and averaging the image residuals according to their image coordinates. The size of the systematic image errors not covered by the used additional parameters is in the range of a square mean of 0.1 pixels, corresponding to 0.6 μm. They are not the same for both blocks, but show some similarities for corresponding cameras. In general, bundle block adjustment with a satisfactory set of additional parameters, checked against remaining systematic errors, is required to exploit the full geometric potential of the penta camera. Especially for object points on facades, often seen in only two images taken with a limited base length, the correct handling of systematic image errors is important. At least in the analyzed data sets, the self-calibration of sub-cameras by bundle block adjustment suffers from the correlation of the inner to the exterior orientation due to missing crossing flight directions. As usual, the systematic image errors differ from block to block, even without the influence of the correlation to the exterior orientation.
21 CFR 864.8185 - Calibrator for red cell and white cell counting.
Code of Federal Regulations, 2010 CFR
2010-04-01
Calibrator for red cell and white cell counting. (a) Identification. A calibrator for red cell and white cell counting is a device that resembles red or white blood cells and that is used to set instruments intended…
Evans, John R.; Jensen, E. Gray; Sell, Russell; Stephens, Christopher D.; Nyman, Douglas J.; Hamilton, Robert C.; Hager, William C.
2006-01-01
In September 2003, the Alyeska Pipeline Service Company (APSC) and the U.S. Geological Survey (USGS) embarked on a joint effort to extract, test, and calibrate the accelerometers, amplifiers, and bandpass filters from the earthquake monitoring systems (EMS) at Pump Stations 09, 10, and 11 of the Trans-Alaska Pipeline System (TAPS). These were the three strong-motion seismographs closest to the Denali fault when it ruptured in the MW 7.9 earthquake of 03 November 2002 (22:12:41 UTC). The surface rupture is only 3.0 km from PS10 and 55.5 km from PS09, while PS11 is 124.2 km from a small rupture splay and 126.9 km from the main trace. Here we briefly describe precision calibration results for all three instruments. Included with this report is a link to the seismograms reprocessed using these new calibrations: http://nsmp.wr.usgs.gov/data_sets/20021103_2212_taps.html The calibration information in this paper applies at the time of the Denali fault earthquake (03 November 2002), but not necessarily at other times, because equipment at these stations is changed by APSC personnel at irregular intervals. In particular, the equipment at PS09, PS10, and PS11 was changed by our joint crew in September 2003 so that we could perform these calibrations. The equipment stayed the same from at least the time of the earthquake until that retrieval, and these calibrations apply for that interval.
1992-01-30
re-examined in order to ascertain whether, after more than two decades of development, it will progress into a stage of innovation or face… Corporation (1974-80) and the Electric Power Research Institute (since 1980) under RP1493. Vito J. Longo is the EPRI project manager. A. Avcisoy drafted the… 10(GT12), 1529-1548 (1982). RELATIVE DENSITY, SET, AND CPT INTERRELATIONSHIPS, FRED H. KULHAWY* and PAUL W. MAYNE**, *School of Civil and
Metallicity calibrations for dwarf stars and giants in the Geneva photometric system
NASA Astrophysics Data System (ADS)
Netopil, Martin
2017-08-01
We use the most homogeneous Geneva seven-colour photometric system to derive new metallicity calibrations for early A- to K-type stars that cover both dwarf stars and giants. The calibrations are based on several spectroscopic data sets that were merged to a common scale, and we applied them to open cluster data to obtain additional proof of the metallicity scale and accuracy. In total, metallicities of 54 open clusters are presented. The accuracy of the calibrations for single stars is in general below 0.1 dex, but for the open cluster sample, with mean values based on several stars, we find a much better precision, with a scatter as low as about 0.03 dex. Furthermore, we combine the new results with another comprehensive photometric data set to present a catalogue of mean metallicities for more than 3000 F- and G-type dwarf stars with σ ∼ 0.06 dex. The list was extended by more than 1200 hotter stars up to about 8500 K (or spectral type A3) by taking advantage of their almost reddening-free behaviour in the new Geneva metallicity calibrations. These two large samples are well suited as primary or secondary calibrators of other data, and we have already identified about 20 spectroscopic data sets that show offsets up to about 0.4 dex.
NASA Technical Reports Server (NTRS)
Siu, Marie-Michele; Martos, Borja; Foster, John V.
2013-01-01
As part of a joint partnership between the NASA Aviation Safety Program (AvSP) and the University of Tennessee Space Institute (UTSI), research on advanced air data calibration methods has been in progress. This research was initiated to expand a novel pitot-static calibration method that was developed to allow rapid in-flight calibration for the NASA Airborne Subscale Transport Aircraft Research (AirSTAR) facility. This approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure error models over a range of airspeed with defined confidence bounds. Subscale flight tests demonstrated small 2-σ error bounds with a significant reduction in test time compared to other methods. Recent UTSI full-scale flight tests have shown airspeed calibrations with the same accuracy as or better than the Federal Aviation Administration (FAA) accepted GPS 'four-leg' method, in a smaller test area and in less time. The current research was motivated by the desire to extend this method to in-flight calibration of angle of attack (AOA) and angle of sideslip (AOS) flow vanes. An instrumented Piper Saratoga research aircraft from UTSI was used to collect the flight test data and evaluate flight test maneuvers. Results showed that the output-error approach produces good results for flow vane calibration. In addition, maneuvers for pitot-static and flow vane calibration can be integrated to enable simultaneous and efficient testing of each system.
A Dynamic Calibration Method for Experimental and Analytical Hub Load Comparison
NASA Technical Reports Server (NTRS)
Kreshock, Andrew R.; Thornburgh, Robert P.; Wilbur, Matthew L.
2017-01-01
This paper presents the results from an ongoing effort to produce improved correlation between analytical hub force and moment predictions and those measured during wind-tunnel testing on the Aeroelastic Rotor Experimental System (ARES), a conventional rotor testbed commonly used at the Langley Transonic Dynamics Tunnel (TDT). A frequency-dependent transformation between loads at the rotor hub and outputs of the testbed balance is produced from frequency response functions measured during vibration testing of the system. The resulting transformation is used as a dynamic calibration of the balance to transform hub loads predicted by comprehensive analysis into predicted balance outputs. In addition to detailing the transformation process, this paper also presents a set of wind-tunnel test cases, with comparisons between the measured balance outputs and transformed predictions from the comprehensive analysis code CAMRAD II. The modal response of the testbed is discussed and compared to a detailed finite-element model. Results reveal that the modal response of the testbed exhibits a number of characteristics that make accurate dynamic balance predictions challenging, even with the use of the balance transformation.
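Conceptually, applying such a dynamic calibration amounts to multiplying predicted hub-load spectra by the measured frequency response matrix. A hedged sketch follows; the array shapes and the assumption that the FRF is sampled at the rfft frequency bins are ours, not the paper's.

import numpy as np

def hub_to_balance(hub_loads, frf):
    # hub_loads: (n_samples, n_hub) predicted hub load time histories
    # frf:       (n_freq, n_bal, n_hub) complex FRF matrix, sampled at the
    #            rfft frequency bins of the histories (n_freq = n//2 + 1)
    n = hub_loads.shape[0]
    HubF = np.fft.rfft(hub_loads, axis=0)          # spectra of hub loads
    BalF = np.einsum('fij,fj->fi', frf, HubF)      # H(f) @ hub(f) per bin
    return np.fft.irfft(BalF, n=n, axis=0)         # balance output histories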
Neighbour-die effect on the measurement of wafer-level flip-chip LED dies in production lines
NASA Astrophysics Data System (ADS)
Chen, Tengfei; Wan, Zirui; Li, Bin
2017-11-01
The light from the side surfaces of a flip-chip light-emitting diode (FCLED) die under test is reflected, refracted or absorbed by neighbour dies during the measurement of wafer-level FCLED dies in production lines. A notable measurement deviation is caused by this neighbour-die effect, which is not considered in current industry practice. In this paper, Monte Carlo ray-tracing simulations are used to study the measurement deviations caused by the neighbour-die effect and the extension ratios of the film. The simulation results show that the maximal deviation of the radiant flux impinging on the photodiode can reach 5.5%, depending on whether the die is tested without any neighbour dies or surrounded by a set of neighbour dies at an extension ratio of 1.1. Moreover, the dependence between the measurement results and neighbour cases for different extension ratios is also investigated. A modified calibration method is then proposed and studied. The proposed technique can be used to improve the calibration and measurement accuracy of the test equipment used for measurement of wafer-level FCLED dies in production lines.
A Testbed for Model Development
NASA Astrophysics Data System (ADS)
Berry, J. A.; Van der Tol, C.; Kornfeld, A.
2014-12-01
Carbon cycle and land-surface models used in global simulations need to be computationally efficient and have a high standard of software engineering. These models also make a number of scaling assumptions to simplify the representation of complex biochemical and structural properties of ecosystems. This makes it difficult to use these models to test new ideas for parameterizations or to evaluate scaling assumptions. The stripped-down nature of these models also makes it difficult to connect with current disciplinary research, which tends to be focused on much more nuanced topics than can be included in the models. In our opinion and experience, this indicates the need for another type of model that can more faithfully represent the complexity of ecosystems and which has the flexibility to change or interchange parameterizations and to run optimization codes for calibration. We have used the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model in this way to develop, calibrate, and test parameterizations for solar-induced chlorophyll fluorescence, OCS exchange and stomatal behavior at the canopy scale. Examples of the data sets and procedures used to develop and test new parameterizations are presented.
Prediction of valid acidity in intact apples with Fourier transform near infrared spectroscopy.
Liu, Yan-De; Ying, Yi-Bin; Fu, Xia-Ping
2005-03-01
To develop nondestructive acidity prediction for intact Fuji apples, the potential of the Fourier transform near infrared (FT-NIR) method with fiber optics in interactance mode was investigated. Interactance in the 800 nm to 2619 nm region was measured for intact apples harvested from early to late maturity stages. Spectral data were analyzed by two multivariate calibration techniques: partial least squares (PLS) and principal component regression (PCR). A total of 120 Fuji apples were tested, and 80 of them were used to form a calibration data set. The influences of different data preprocessing and spectra treatments were also quantified. Calibration models based on smoothed spectra were slightly worse than those based on derivative spectra, and the best result was obtained when the segment length was 5 nm and the gap size was 10 points. Depending on data preprocessing and the PLS method, the best prediction model yielded a coefficient of determination (r2) of 0.759, a low root mean square error of prediction (RMSEP) of 0.0677, and a low root mean square error of calibration (RMSEC) of 0.0562. The results indicated the feasibility of FT-NIR spectral analysis for predicting apple valid acidity in a nondestructive way.
METHODOLOGIES FOR CALIBRATION AND PREDICTIVE ANALYSIS OF A WATERSHED MODEL
The use of a fitted-parameter watershed model to address water quantity and quality management issues requires that it be calibrated under a wide range of hydrologic conditions. However, rarely does model calibration result in a unique parameter set. Parameter nonuniqueness can l...
Towards a global network of gamma-ray detector calibration facilities
NASA Astrophysics Data System (ADS)
Tijs, Marco; Koomans, Ronald; Limburg, Han
2016-09-01
Gamma-ray logging tools are applied worldwide. At various locations, calibration facilities are used to calibrate these gamma-ray logging systems. Several attempts have been made to cross-correlate well-known calibration pits, but this cross-correlation does not include calibration facilities in Europe or private company calibration facilities. Our aim is to set up a framework that makes it possible to interlink all calibration facilities worldwide by using 'tools of opportunity' - tools that have been calibrated in different calibration facilities, whether this usage was on a coordinated basis or by coincidence. To compare the measurements of different tools, it is important to understand the behaviour of the tools in the different calibration pits. Borehole properties, such as diameter, fluid, casing and probe diameter, strongly influence the outcome of gamma-ray borehole logging. Logs need to be properly calibrated and compensated for these borehole properties in order to obtain in-situ grades or to do cross-hole correlation. Some tool providers provide tool-specific correction curves for this purpose. Others rely on reference measurements against sources of known radionuclide concentration and geometry. In this article, we present an attempt to set up a framework for transferring 'local' calibrations to be applied 'globally'. This framework includes corrections for any geometry and detector size to give absolute concentrations of radionuclides from borehole measurements. This model is used to compare measurements in the calibration pits of Grand Junction, located in the USA; Adelaide (previously known as AMDEL), located in Adelaide, Australia; and Stonehenge, located at Medusa Explorations BV in the Netherlands.
Wolski, Witold E; Lalowski, Maciej; Jungblut, Peter; Reinert, Knut
2005-01-01
Background: Peptide Mass Fingerprinting (PMF) is a widely used mass spectrometry (MS) method of analysis of proteins and peptides. It relies on the comparison between experimentally determined and theoretical mass spectra. The PMF process requires calibration, usually performed with external or internal calibrants of known molecular masses. Results: We have introduced two novel MS calibration methods. The first method utilises the local similarity of peptide maps generated after separation of complex protein samples by two-dimensional gel electrophoresis. It computes a multiple peak-list alignment of the data set using a modified Minimum Spanning Tree (MST) algorithm. The second method exploits the idea that hundreds of MS samples are measured in parallel on one sample support. It improves the calibration coefficients by applying a two-dimensional Thin Plate Splines (TPS) smoothing algorithm. We studied the novel calibration methods utilising data generated by three different MALDI-TOF-MS instruments. We demonstrate that a PMF data set can be calibrated without resorting to external or relying on widely occurring internal calibrants. The methods developed here were implemented in R and are part of the BioConductor package mscalib. Conclusion: The MST calibration algorithm is well suited to calibrate MS spectra of protein samples resulting from two-dimensional gel electrophoretic separation. The TPS-based calibration algorithm might be used to correct systematic mass measurement errors observed for large MS sample supports. As compared to other methods, our combined MS spectra calibration strategy increases the peptide/protein identification rate by an additional 5-15%. PMID:16102175
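The TPS idea, smoothing position-dependent mass errors across the sample support, can be sketched with SciPy's thin-plate radial basis functions. This is an illustration in Python rather than the R package itself; the coordinates and error values are toy numbers, and the real method works on whole peak lists rather than single masses.

import numpy as np
from scipy.interpolate import Rbf

# (x, y) are spot coordinates on the MALDI sample support; err_ppm are
# relative mass errors of confidently matched calibrant peaks at those spots.
x = np.array([0.0, 0.0, 10.0, 10.0, 5.0])
y = np.array([0.0, 10.0, 0.0, 10.0, 5.0])
err_ppm = np.array([12.0, 8.0, -5.0, -9.0, 2.0])

# Thin-plate spline surface over the plate, with mild smoothing.
tps = Rbf(x, y, err_ppm, function='thin_plate', smooth=1.0)

def recalibrate(mz, spot_x, spot_y):
    # Correct an observed m/z using the smoothed error surface.
    return mz * (1.0 - tps(spot_x, spot_y) * 1e-6)

print(recalibrate(1500.0, 2.0, 3.0))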
Code of Federal Regulations, 2010 CFR
2010-01-01
Licensed Items, § 32.102 Schedule C—prototype tests for calibration or reference sources containing…: conduct prototype tests, in the order listed, on each of five prototypes of the source, which contains…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Q; Herrick, A; Hoke, S
Purpose: A new readout technology based on pulsed optically stimulated luminescence is introduced (microSTARii, Landauer, Inc., Glenwood, IL 60425). This investigation searches for approaches that maximize dosimetry accuracy in clinical applications. Methods: The sensitivity of each optically stimulated luminescence dosimeter (OSLD) was initially characterized by exposing it to a given radiation beam. After readout, the luminescence signal stored in the OSLD was erased by exposing its sensing area to a 21 W white LED light for 24 hours. A set of OSLDs with consistent sensitivities was selected to calibrate the dose reader. Higher-order nonlinear curves were also derived from the calibration readings. OSLDs with cumulative doses below 15 Gy were reused. Before an in-vivo dosimetry, the OSLD luminescence signal was erased with the white LED light. Results: For a set of 68 manufacturer-screened OSLDs, the measured sensitivities vary over a range of 17.3%. A subset of the OSLDs with sensitivities within ±1% was selected for the reader calibration. Three OSLDs in a group were exposed to a given radiation. Nine groups were exposed to radiation doses ranging from 0 to 13 Gy. Additional verifications demonstrated that the reader uncertainty is about 3%. With an external calibration function derived by fitting the OSLD readings to a 3rd-order polynomial, the dosimetry uncertainty dropped to 0.5%. The dose-luminescence response curves of individual OSLDs were characterized. All curves converge within 1% after the sensitivity correction. With all uncertainties considered, the systematic uncertainty is about 2%. Additional tests emulating in-vivo dosimetry by exposing the OSLDs under different radiation sources confirmed the claim. Conclusion: The sensitivity of each individual OSLD should be characterized initially. A 3rd-order polynomial function is a more accurate representation of the dose-luminescence response curve. The dosimetry uncertainty specified by the manufacturer is 4%; following the proposed approach, it can be controlled to 2%.
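A sketch of the reader-calibration step, a 3rd-order polynomial fitted to sensitivity-corrected readings, is shown below; the readings and doses are illustrative placeholders, not the study's data.

import numpy as np

# Illustrative (reading, dose) pairs from sensitivity-matched calibration
# OSLDs exposed to known doses.
doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0, 13.0])   # Gy
counts = np.array([0.2, 5.1, 10.4, 21.5, 45.0, 70.8, 98.5, 128.0, 175.3])

coeffs = np.polyfit(counts, doses, deg=3)   # dose as a cubic in counts

def dose_from_reading(reading, sensitivity=1.0):
    # Apply the individual OSLD sensitivity correction, then the curve.
    return np.polyval(coeffs, reading / sensitivity)

print(dose_from_reading(60.0, sensitivity=1.02))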
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Acker, James G. (Editor); Firestone, Elaine R. (Editor); Mcclain, Charles R.; Fraser, Robert S.; Mclean, James T.; Darzi, Michael; Firestone, James K.; Patt, Frederick S.; Schieber, Brian D.
1994-01-01
This document provides brief reports, or case studies, on a number of investigations and data set development activities sponsored by the Calibration and Validation Team (CVT) within the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Project. Chapter 1 is a comparison of the atmospheric correction of Coastal Zone Color Scanner (CZCS) data using two independent radiative transfer formulations. Chapter 2 is a study on lunar reflectance at the SeaWiFS wavelengths, which was useful in establishing the SeaWiFS lunar gain. Chapter 3 reports the results of the first ground-based solar calibration of the SeaWiFS instrument. The experiment was repeated in the fall of 1993 after the instrument was modified to reduce stray light; the results from the second experiment will be provided in the next case studies volume. Chapter 4 is a laboratory experiment using trap detectors, which may be useful tools in the calibration round-robin program. Chapter 5 is the original data format evaluation study conducted in 1992, which outlines the technical criteria used in considering three candidate formats: the hierarchical data format (HDF), the common data format (CDF), and the network CDF (netCDF). Chapter 6 summarizes the meteorological data sets accumulated during the first three years of CZCS operation, which are being used for initial testing of the operational SeaWiFS algorithms and systems and would be used during a second global processing of the CZCS data set. Chapter 7 describes how near-real-time surface meteorological and total ozone data required for the atmospheric correction algorithm will be retrieved and processed. Finally, Chapter 8 is a comparison of surface wind products from various operational meteorological centers and field observations. Surface winds are used in the atmospheric correction scheme to estimate glint and foam radiances.
MacDonald, Megan; Lord, Catherine; Ulrich, Dale
2015-01-01
Objective: To determine the relationship of motor skills to the core behaviors of young children with autism, social affective skills and repetitive behaviors, as indicated through calibrated autism severity scores. Design: The univariate GLM tested the relationship of gross and fine motor skills, measured by the gross motor scale and the fine motor scale of the MSEL, with autism symptomology as measured by calibrated autism severity scores. Setting: The majority of the data collection took place in an autism clinic. Participants: A cohort of 159 young children with ASD (n=110), PDD-NOS (n=26) and non-ASD (developmental delay, n=23) between the ages of 12-33 months were recruited from early intervention studies and clinical referrals. Children with non-ASD (developmental delay) were included in this study to provide a range of scores as indicated through calibrated autism severity. Interventions: Not applicable. Main Outcome Measures: The primary outcome measures in this study were calibrated autism severity scores. Results: Fine motor skills and gross motor skills significantly predicted calibrated autism severity (p < 0.01). Children with weaker motor skills displayed higher levels of calibrated autism severity. Conclusions: Fine and gross motor skills are significantly related to autism symptomology. There is more to focus on and new avenues to explore in discovering how to implement early intervention and rehabilitation for young children with autism, and motor skills need to be a part of the discussion. PMID:25774214
Tu, Junchao; Zhang, Liyan
2018-01-12
A new solution to the problem of galvanometric laser scanning (GLS) system calibration is presented. Under the machine learning framework, we build a single-hidden-layer feedforward neural network (SLFN) to represent the GLS system, which takes the digital control signal at the drives of the GLS system as input and the space vector of the corresponding outgoing laser beam as output. The training data set is obtained with the aid of a moving mechanism and a binocular stereo system. The parameters of the SLFN are efficiently solved in closed form by using an extreme learning machine (ELM). By quantitatively analyzing the regression precision with respect to the number of hidden neurons in the SLFN, we demonstrate that the proper number of hidden neurons can be safely chosen from a broad interval to guarantee good generalization performance. Compared to traditional model-driven calibration, the proposed calibration method does not need a complex modeling process and is more accurate and stable. As the output of the network is the space vectors of the outgoing laser beams, it requires much less training time and can provide a uniform solution to both laser projection and 3D reconstruction, in contrast with the existing data-driven calibration method, which only works for the laser triangulation problem. Calibration, projection, and 3D-reconstruction experiments were conducted to test the proposed method, and good results were obtained.
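The ELM closed-form solve is short enough to sketch directly: random input weights and biases are fixed, and only the output weights are obtained by least squares. The shapes and the tanh activation below are assumptions for illustration, not the paper's exact settings.

import numpy as np

def train_elm(X, Y, n_hidden=200, seed=0):
    # X: (n, d_in) control signals of the GLS drives;
    # Y: (n, 3) measured direction vectors of the outgoing beams.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))    # random, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)   # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    out = np.tanh(X @ W + b) @ beta
    # Renormalize to unit beam-direction vectors (assumes nonzero outputs).
    return out / np.linalg.norm(out, axis=1, keepdims=True)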
Description and calibration of the Langley unitary plan wind tunnel
NASA Technical Reports Server (NTRS)
Jackson, C. M., Jr.; Corlett, W. A.; Monta, W. J.
1981-01-01
The two test sections of the Langley Unitary Plan Wind Tunnel were calibrated over the operating Mach number range from 1.47 to 4.63. The results of the calibration are presented along with a description of the facility and its operational capability. The calibrations include Mach number and flow angularity distributions in both test sections at selected Mach numbers and tunnel stagnation pressures. Calibration data are also presented on turbulence, test-section boundary layer characteristics, moisture effects, blockage, and stagnation-temperature distributions. The facility is described in detail, including dimensions and capacities where appropriate, and examples of special test capabilities are presented. The operating parameters are fully defined and the power consumption characteristics are discussed.
NASA Astrophysics Data System (ADS)
McEvoy, Helen C.; Simpson, Robert; Machin, Graham
2004-04-01
The use of infrared tympanic thermometers for monitoring patient health is widespread. However, studies into the performance of these thermometers have questioned their accuracy and repeatability. To give users confidence in these devices, and to provide credibility in the measurements, it is necessary for them to be tested using an accredited, standard blackbody source with a calibration traceable to the International Temperature Scale of 1990 (ITS-90). To address this need, the National Physical Laboratory (NPL), UK, has recently set up a primary ear thermometer calibration (PET-C) source for the evaluation and calibration of tympanic (ear) thermometers over the range from 15 °C to 45 °C. The overall uncertainty of the PET-C source is estimated to be ±0.04 °C at k = 2. The PET-C source meets the requirements of the European Standard EN 12470-5: 2003 Clinical thermometers. It consists of a high-emissivity blackbody cavity immersed in a bath of stirred liquid. The temperature of the blackbody is determined using an ITS-90-calibrated platinum resistance thermometer inserted close to the rear of the cavity. The temperature stability and uniformity of the PET-C source were evaluated and its performance validated. This paper provides a description of the PET-C source along with the results of the validation measurements. To further confirm the performance of the PET-C source, it was compared to the standard ear thermometer calibration sources of the National Metrology Institute of Japan (NMIJ), Japan, and the Physikalisch-Technische Bundesanstalt (PTB), Germany. The results of this comparison will also be briefly discussed. The PET-C source extends the capability for testing ear thermometers offered by the NPL body temperature fixed-point source, described previously. An update on the progress with the commercialisation of the fixed-point source will be given.
12. COLD CALIBRATION BLOCKHOUSE BASEMENT VIEW FROM LEFT TO RIGHT, CABLE TRAYS, RACKS, CABLE CONNECTION TERMINALS. - Marshall Space Flight Center, East Test Area, Cold Calibration Test Stand, Huntsville, Madison County, AL
Computer Generated Hologram System for Wavefront Measurement System Calibration
NASA Technical Reports Server (NTRS)
Olczak, Gene
2011-01-01
Computer Generated Holograms (CGHs) have been used for some time to calibrate interferometers that require nulling optics. A typical scenario is the testing of aspheric surfaces with an interferometer placed near the paraxial center of curvature. Existing CGH technology suffers from a reduced capacity to calibrate middle and high spatial frequencies. The root cause of this shortcoming is as follows: the CGH is not placed at an image conjugate of the asphere due to limitations imposed by the geometry of the test and the allowable size of the CGH. This innovation provides a calibration system where the imaging properties in calibration can be made comparable to the test configuration. Thus, if the test is designed to have good imaging properties, then middle and high spatial frequency errors in the test system can be well calibrated. The improved imaging properties are provided by a rudimentary auxiliary optic as part of the calibration system. The auxiliary optic is simple to characterize and align to the CGH. Use of the auxiliary optic also reduces the size of the CGH required for calibration and the density of the lines required for the CGH. The resulting CGH is less expensive than the existing technology and has reduced write error and alignment error sensitivities. This CGH system is suitable for any kind of calibration using an interferometer when high spatial resolution is required. It is especially well suited for tests that include segmented optical components or large apertures.
Li, Xiang; Arzhantsev, Sergey; Kauffman, John F; Spencer, John A
2011-04-05
Four nominally identical portable NIR instruments from the same manufacturer were programmed with a PLS model for the detection of diethylene glycol (DEG) contamination in propylene glycol (PG)-water mixtures. The model was developed on one spectrometer and used on the other units after a calibration transfer procedure that used piecewise direct standardization. Although quantitative results were produced, in practice the instrument interface was programmed to report in Pass/Fail mode. The Pass/Fail determinations were made within 10 s and were based on a threshold that passed a blank sample with 95% confidence. The detection limit was then established as the concentration at which a sample would fail with 95% confidence. For a 1% DEG threshold, one false negative (Type II) and eight false positive (Type I) errors were found in over 500 samples measured. A representative test set produced standard errors of less than 2%. Since the range of diethylene glycol for economically motivated adulteration (EMA) is expected to be above 1%, the sensitivity of field-calibrated portable NIR instruments is sufficient to rapidly screen out potentially problematic materials. Following method development, the instruments were shipped to different sites around the country for a collaborative study with a fixed protocol to be carried out by different analysts. NIR spectra of replicate sets of calibration transfer, system suitability and test samples were all processed with the same chemometric model on multiple instruments to determine the overall analytical precision of the method. The combined results collected from all participants were statistically analyzed to determine a limit of detection (2.0% DEG) and limit of quantitation (6.5%) that can be expected for a method distributed to multiple field laboratories. Published by Elsevier B.V.
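The two-threshold Pass/Fail logic can be written down compactly. A sketch under the assumption that the PLS-predicted %DEG is approximately normal near zero with constant spread; the blank readings below are toy values, not the study's.

import numpy as np

rng = np.random.default_rng(0)
blank_preds = rng.normal(0.0, 0.3, size=50)   # toy PLS predictions for blanks, in %DEG
mu_b, sd_b = blank_preds.mean(), blank_preds.std(ddof=1)

z95 = 1.645                                # one-sided 95% normal quantile
threshold = mu_b + z95 * sd_b              # a blank passes with 95% confidence
detection_limit = threshold + z95 * sd_b   # a sample here fails with 95% confidence

def pass_fail(predicted_deg):
    return "FAIL" if predicted_deg > threshold else "PASS"

print(f"threshold = {threshold:.2f}% DEG, detection limit = {detection_limit:.2f}% DEG")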
Jiřík, Miroslav; Bartoš, Martin; Tomášek, Petr; Malečková, Anna; Kural, Tomáš; Horáková, Jana; Lukáš, David; Suchý, Tomáš; Kochová, Petra; Hubálek Kalbáčová, Marie; Králíčková, Milena; Tonar, Zbyněk
2018-06-01
Quantification of the structure and composition of biomaterials using micro-CT requires image segmentation due to the low contrast and overlapping radioopacity of biological materials. The amount of bias introduced by segmentation procedures is generally unknown. We aim to develop software that generates three-dimensional models of fibrous and porous structures with known volumes, surfaces, lengths, and object counts in fibrous materials and to provide a software tool that calibrates quantitative micro-CT assessments. Virtual image stacks were generated using the newly developed software TeIGen, enabling the simulation of micro-CT scans of unconnected tubes, connected tubes, and porosities. A realistic noise generator was incorporated. Forty image stacks were evaluated using micro-CT, and the error between the true known and estimated data was quantified. Starting with geometric primitives, the error of the numerical estimation of surfaces and volumes was eliminated, thereby enabling the quantification of volumes and surfaces of colliding objects. Analysis of the sensitivity of the thresholding upon parameters of generated testing image sets revealed the effects of decreasing resolution and increasing noise on the accuracy of the micro-CT quantification. The size of the error increased with decreasing resolution when the voxel size exceeded 1/10 of the typical object size, which simulated the effect of the smallest details that could still be reliably quantified. Open-source software for calibrating quantitative micro-CT assessments by producing and saving virtually generated image data sets with known morphometric data was made freely available to researchers involved in morphometry of three-dimensional fibrillar and porous structures in micro-CT scans. © 2018 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Dillner, A. M.; Takahama, S.
2015-10-01
Elemental carbon (EC) is an important constituent of atmospheric particulate matter because it absorbs solar radiation, influencing climate and visibility, and it adversely affects human health. The EC measured by thermal methods such as thermal-optical reflectance (TOR) is operationally defined as the carbon that volatilizes from quartz filter samples at elevated temperatures in the presence of oxygen. Here, methods are presented to accurately predict TOR EC using Fourier transform infrared (FT-IR) absorbance spectra from atmospheric particulate matter collected on polytetrafluoroethylene (PTFE or Teflon) filters. This method is similar to the procedure developed for OC in prior work (Dillner and Takahama, 2015). Transmittance FT-IR analysis is rapid, inexpensive and nondestructive to the PTFE filter samples, which are routinely collected for mass and elemental analysis in monitoring networks. FT-IR absorbance spectra are obtained from 794 filter samples from seven Interagency Monitoring of PROtected Visual Environments (IMPROVE) sites collected during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to collocated TOR EC measurements. The FT-IR spectra are divided into calibration and test sets. Two calibrations are developed: one from a uniform distribution of samples across the EC mass range (Uniform EC) and one from a uniform distribution of low-EC-mass samples (EC < 2.4 μg, Low Uniform EC). A hybrid approach, which applies the Low EC calibration to low-EC samples and the Uniform EC calibration to all other samples, produces predictions for low-EC samples with mean error on par with parallel TOR EC samples in the same mass range and an estimate of the minimum detection limit (MDL) on par with the TOR EC MDL. For all samples, this hybrid approach leads to precise and accurate TOR EC predictions by FT-IR, as indicated by a high coefficient of determination (R2 = 0.96), no bias (0.00 μg m-3, a concentration value based on the nominal IMPROVE sample volume of 32.8 m3), low error (0.03 μg m-3) and reasonable normalized error (21%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. Only the normalized error is higher for the FT-IR EC measurements than for collocated TOR. FT-IR spectra are also divided into calibration and test sets by the ratios OC/EC and ammonium/EC to determine the impact of OC and ammonium on EC prediction. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR EC in IMPROVE network samples, providing complementary information to TOR OC predictions (Dillner and Takahama, 2015) and the organic functional group composition and organic matter estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
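A minimal sketch of the hybrid prediction scheme, screening with the Uniform EC calibration and re-predicting low-mass samples with the Low EC calibration, assuming two previously fitted PLS models and the 2.4 μg cutoff from the text:

import numpy as np

def hybrid_predict(spectra, pls_uniform, pls_low, cutoff_ug=2.4):
    # Screen every sample with the Uniform EC calibration, then re-predict
    # samples below the low-mass cutoff with the Low EC calibration.
    # pls_uniform and pls_low are previously fitted PLS regression models
    # (e.g. sklearn PLSRegression); spectra is an (n_samples, n_wavenumbers)
    # array of FT-IR absorbances.
    ec = pls_uniform.predict(spectra).ravel()
    low = ec < cutoff_ug
    if low.any():
        ec[low] = pls_low.predict(spectra[low]).ravel()
    return ec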
Aero-Thermal Calibration of the NASA Glenn Icing Research Tunnel (2012 Test)
NASA Technical Reports Server (NTRS)
Pastor-Barsi, Christine M.; Arrington, E. Allen; VanZante, Judith Foss
2012-01-01
A major modification of the refrigeration plant and heat exchanger at the NASA Glenn Icing Research Tunnel (IRT) occurred in autumn of 2011. It is standard practice at NASA Glenn to perform a full aero-thermal calibration of the test section of a wind tunnel facility upon completion of major modifications. This paper will discuss the tools and techniques used to complete an aero-thermal calibration of the IRT and the results that were acquired. The goal of this test entry was to complete a flow quality survey and aero-thermal calibration measurements in the test section of the IRT. Test hardware that was used includes the 2D Resistive Temperature Detector (RTD) array, the 9-ft pressure survey rake, the hot wire survey rake, and the quick check survey rake. This test hardware provides a map of the velocity, Mach number, total and static pressure, total temperature, flow angle, and turbulence intensity. The data acquired were then reduced to examine pressure, temperature, velocity, flow angle, and turbulence intensity. The reduced data have been evaluated to assess how the facility meets flow quality goals. No icing conditions were tested as part of the aero-thermal calibration. However, the effects of the spray bar air injections on the flow quality and aero-thermal calibration measurements were examined as part of this calibration.
Bittante, G; Ferragina, A; Cipolat-Gotet, C; Cecchinato, A
2014-10-01
Cheese yield is an important technological trait in the dairy industry. The aim of this study was to infer the genetic parameters of some cheese yield-related traits predicted using Fourier-transform infrared (FTIR) spectral analysis and compare the results with those obtained using an individual model cheese-producing procedure. A total of 1,264 model cheeses were produced using 1,500-mL milk samples collected from individual Brown Swiss cows, and individual measurements were taken for 10 traits: 3 cheese yield traits (fresh curd, curd total solids, and curd water as a percent of the weight of the processed milk), 4 milk nutrient recovery traits (fat, protein, total solids, and energy of the curd as a percent of the same nutrient in the processed milk), and 3 daily cheese production traits per cow (fresh curd, total solids, and water weight of the curd). Each unprocessed milk sample was analyzed using a MilkoScan FT6000 (Foss, Hillerød, Denmark) over the spectral range from 5,000 to 900 wavenumber × cm(-1). The FTIR spectrum-based prediction models for the previously mentioned traits were developed using modified partial least-squares regression. Cross-validation of the whole data set yielded coefficients of determination between the predicted and measured values of 0.65 to 0.95 for all traits, except for the recovery of fat (0.41). A 3-fold external validation was also used, in which the available data were partitioned into 2 subsets: a training set (one-third of the herds) and a testing set (two-thirds). The training set was used to develop calibration equations, whereas the testing subsets were used for external validation of the calibration equations and to estimate the heritabilities and genetic correlations of the measured and FTIR-predicted phenotypes. The coefficients of determination between the predicted and measured values in cross-validation obtained from the training sets were very similar to those obtained from the whole data set, but the coefficients of determination in external validation were much lower for all traits (0.30 to 0.73), and particularly for fat recovery (0.05 to 0.18). For each testing subset, the (co)variance components for the measured and FTIR-predicted phenotypes were estimated using bivariate Bayesian analyses and linear models. The intraherd heritabilities for the predicted traits obtained from our internal cross-validation using the whole data set ranged from 0.085 for daily yield of curd solids to 0.576 for protein recovery, and were similar to those obtained from the measured traits (0.079 to 0.586, respectively). The heritabilities estimated from the testing data set used for external validation were more variable but similar (on average) to the corresponding values obtained from the whole data set. Moreover, the genetic correlations between the predicted and measured traits were high in general (0.791 to 0.996), and they were always higher than the corresponding phenotypic correlations (0.383 to 0.995), especially for the external validation subset. In conclusion, we herein report that application of the cross-validation technique to the whole data set tended to overestimate the predictive ability of FTIR spectra, to give more precise phenotypic predictions than the calibrations obtained using smaller data sets, and to yield genetic correlations similar to those obtained from the measured traits.
Collectively, our findings indicate that FTIR predictions have the potential to be used as indicator traits for the rapid and inexpensive selection of dairy populations for improvement of cheese yield, milk nutrient recovery in curd, and daily cheese production per cow. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
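The external-validation design above splits by herd rather than by sample, so calibration and test sets share no herds. A minimal sketch of that idea, assuming synthetic spectra and herd labels; scikit-learn's GroupKFold stands in for the authors' 3-fold herd partition:

```python
# Herd-grouped external validation for an FTIR calibration (illustrative data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(1)
X = rng.normal(size=(1264, 1060))                 # stand-in FTIR spectra
herd = rng.integers(0, 85, 1264)                  # herd label per cow (invented)
y = X[:, 500] + 0.5 * rng.normal(size=1264)       # stand-in cheese-yield trait

for train, test in GroupKFold(n_splits=3).split(X, y, groups=herd):
    # No herd appears in both train and test, avoiding optimistic leakage.
    pls = PLSRegression(n_components=10).fit(X[train], y[train])
    r = np.corrcoef(pls.predict(X[test]).ravel(), y[test])[0, 1]
    print("external-validation R2:", r**2)
```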
Advanced fast 3D DSA model development and calibration for design technology co-optimization
NASA Astrophysics Data System (ADS)
Lai, Kafai; Meliorisz, Balint; Muelders, Thomas; Welling, Ulrich; Stock, Hans-Jürgen; Marokkey, Sajan; Demmerle, Wolfgang; Liu, Chi-Chun; Chi, Cheng; Guo, Jing
2017-04-01
Direct Optimization (DO) of a 3D DSA model is a better approach to a DTCO study, in terms of accuracy and speed, than a Cahn-Hilliard equation solver. DO's shorter run time (10X to 100X faster) and linear scaling make it scalable to the area required for a DTCO study. However, the lack of temporal data output, as opposed to prior art, requires a new calibration method. The new method involves a specific set of calibration patterns. The design of the calibration patterns is extremely important for obtaining robust model parameters when temporal data are absent. A model calibrated to a Hybrid DSA system with a set of device-relevant constructs indicates the effectiveness of using nontemporal data. Preliminary model prediction using programmed defects on chemo-epitaxy shows encouraging results and agrees qualitatively well with theoretical predictions from strong segregation theory.
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2018-01-01
Analysis and use of temperature-dependent wind tunnel strain-gage balance calibration data are discussed in the paper. First, three different methods are presented and compared that may be used to process temperature-dependent strain-gage balance data. The first method uses an extended set of independent variables in order to process the data and predict balance loads. The second method applies an extended load iteration equation during the analysis of balance calibration data. The third method uses temperature-dependent sensitivities for the data analysis. Physical interpretations of the most important temperature-dependent regression model terms are provided that relate temperature compensation imperfections and the temperature-dependent nature of the gage factor to sets of regression model terms. Finally, balance calibration recommendations are listed so that temperature-dependent calibration data can be obtained and successfully processed using the reviewed analysis methods.
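The first method above extends the regression variable set with temperature terms. A toy least-squares sketch, reduced to a single gage channel with invented coefficients, illustrates the idea; real balance models use many more terms and channels:

```python
# Extended-variable-set regression: load fitted against gage output plus
# temperature and a temperature-gage interaction (all data synthetic).
import numpy as np

rng = np.random.default_rng(2)
n = 500
g = rng.normal(size=n)                 # gage output, one channel (stand-in)
T = rng.uniform(-20, 40, n)            # temperature, deg C
load = 10.0 * g + 0.05 * T * g + 0.2 * T + rng.normal(0, 0.05, n)

# Extended variable set: intercept, gage, temperature, interaction, quadratic.
A = np.column_stack([np.ones(n), g, T, T * g, g**2])
coef, *_ = np.linalg.lstsq(A, load, rcond=None)
residual_std = np.std(load - A @ coef)
print("fit residual std (should approach the noise level):", residual_std)
```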
Quality control and batch testing of MRPC modules for BESIII ETOF upgrade
NASA Astrophysics Data System (ADS)
Liu, Z.; Li, X.; Sun, Y. J.; Li, C.; Heng, Y. K.; Chen, T. X.; Dai, H. L.; Shao, M.; Sun, S. S.; Tang, Z. B.; Yang, R. X.; Wu, Z.; Wang, X. Z.
2017-12-01
The end-cap time-of-flight (ETOF) system for the Beijing Spectrometer III (BESIII) has been upgraded using the Multi-gap Resistive Plate Chamber (MRPC) technology (Williams et al., 1999; Li et al., 2001; Blanco et al., 2003; Fonte et al., 2013, [1-4]). A set of quality-assurance procedures has been developed to guarantee the performance of the 72 mass-produced MRPC modules installed. The cosmic ray batch testing shows that the average detection efficiency of the MRPC modules is about 95%. Two different calibration methods indicate that the MRPCs' time resolution can reach 60 ps in the cosmic ray test.
NASA Astrophysics Data System (ADS)
Becker, R.; Usman, M.
2017-12-01
A SWAT (Soil Water Assessment Tool) model is applied in the semi-arid Punjab region in Pakistan. The physically based hydrological model is set up to simulate hydrological processes and water resources demands under future land use, climate change and irrigation management scenarios. In order to successfully run the model, particular attention is paid to the calibration procedure. The study deals with the following calibration issues: i. lack of reliable calibration/validation data, ii. difficulty of accurately modeling a highly managed system with a physically based hydrological model, and iii. use of alternative and spatially distributed data sets for model calibration. In our study area field observations are rare, and the entirely human-controlled irrigation system renders central calibration parameters (e.g. runoff/curve number) unsuitable, as it cannot be assumed that they represent the natural behavior of the hydrological system. Principal hydrological processes can, however, still be inferred from evapotranspiration (ET). Usman et al. (2015) derived satellite-based monthly ET data for our study area based on SEBAL (Surface Energy Balance Algorithm) and created a reliable ET data set, which we use in this study to calibrate our SWAT model. The initial SWAT model performance is evaluated with respect to the SEBAL results using correlation coefficients, RMSE, Nash-Sutcliffe efficiencies and mean differences. Particular focus is laid on the spatial patterns, investigating the potential of a spatially differentiated parameterization instead of just using spatially uniform calibration data. A sensitivity analysis reveals the most sensitive parameters with respect to changes in ET, which are then selected for the calibration process. Using the SEBAL-ET product we calibrate the SWAT model for the time period 2005-2006 using a dynamically dimensioned global search algorithm to minimize RMSE. The model improvement after the calibration procedure is finally evaluated, based on the previously chosen evaluation criteria, for the time period 2007-2008. The study reveals the sensitivity of SWAT model parameters to changes in ET in a semi-arid and human-controlled system, and the potential of calibrating those parameters using satellite-derived ET data.
A ricin forensic profiling approach based on a complex set of biomarkers.
Fredriksson, Sten-Åke; Wunschel, David S; Lindström, Susanne Wiklund; Nilsson, Calle; Wahl, Karen; Åstot, Crister
2018-08-15
A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1-PM4), ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected by use of a range of analytical methods, and robust orthogonal partial least squares discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By the use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good, and a 100% rate of correct predictions of the test set was achieved. Copyright © 2018 Elsevier B.V. All rights reserved.
Compensating for Attenuation Differences in Ultrasonic Inspections of Titanium-Alloy Billets
NASA Astrophysics Data System (ADS)
Margetan, F. J.; Thompson, R. B.; Keller, Michael; Hassan, Waled
2004-02-01
Cylindrical billets of titanium alloy are ultrasonically inspected prior to use in fabricating rotating jet-engine components. Although each billet has a cylindrical geometry, its ultrasonic properties are not cylindrically symmetric, due to asymmetries in the process used to produce the billet from the original cast ingot. In the inspection process, a calibration standard of the same diameter containing flat-bottomed hole (FBH) reflectors is used to set the initial inspection gain (i.e., the signal amplification level). If the ultrasonic attenuation of the billet to be inspected differs significantly from that of the calibration standard, the inspection gain must be adjusted to maintain the desired defect detection sensitivity. In this paper we investigate several schemes for attenuation compensation. The gain adjustments fall into two broad categories: "global" adjustments (in dB/inch units), which are applied uniformly throughout the billet under inspection; and "local" adjustments, which vary with axial and circumferential position. The schemes make use of the patterns of reflected back-wall amplitude and backscattered grain noise seen in the calibration standard and test billet. The various compensation schemes are tested using specimens of 6″-diameter Ti-6Al-4V billet into which many FBH targets were drilled. Results are summarized and tentative recommendations for improving billet inspection practices are offered.
Veiseth-Kent, Eva; Høst, Vibeke; Løvland, Atle
2017-01-01
The main objective of this work was to develop a method for rapid and non-destructive detection and grading of wooden breast (WB) syndrome in chicken breast fillets. Near-infrared (NIR) spectroscopy was chosen as the detection method, and an industrial NIR scanner was applied and tested for large-scale on-line detection of the syndrome. Two approaches were evaluated for discrimination of WB fillets: 1) linear discriminant analysis based on NIR spectra only, and 2) a regression model for protein based on NIR spectra, in which the estimated protein concentrations were used for discrimination. A sample set of 197 fillets was used for training and calibration. A test set was recorded under industrial conditions and contained spectra from 79 fillets. The classification methods obtained 99.5–100% correct classification of the calibration set and 100% correct classification of the test set. The NIR scanner was then installed in a commercial chicken processing plant and could detect incidence rates of WB in large batches of fillets. Examples of incidence are shown for three broiler flocks in which a high number of fillets (9063, 6330 and 10483) were effectively measured. Prevalences of WB of 0.1%, 6.6% and 8.5% were estimated for these flocks based on the complete sample volumes. Such an on-line system can be used to alleviate the challenges WB represents to the poultry meat industry. It enables automatic quality sorting of chicken fillets into different product categories. Manual laborious grading can be avoided. Incidences of WB from different farms and flocks can be tracked, and the information can be used to understand and point out the main causes of WB in chicken production. This knowledge can be used to improve production procedures and reduce today’s extensive occurrence of WB. PMID:28278170
Chander, G.; Xiong, X.; Angal, A.; Choi, T.
2009-01-01
The Committee on Earth Observation Satellites (CEOS) Infrared and Visible Optical Sensors (IVOS) subgroup members established a set of CEOS-endorsed, globally distributed reference standard test sites for the postlaunch calibration of space-based optical imaging sensors. This paper discusses the top five African pseudo-invariant sites (Libya 4, Mauritania 1/2, Algeria 3, Libya 1, and Algeria 5) that were identified by the IVOS subgroup. This paper focuses on monitoring the long-term radiometric stability of the Terra Moderate Resolution Imaging Spectroradiometer (MODIS) and the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) sensors using near-simultaneous and cloud-free image pairs acquired from launch to December 2008 over the five African desert sites. Residual errors and coefficients of determination were also generated to support the quality assessment of the calibration differences between the two sensors. An effort was also made to evaluate the relative stability of these sites for long-term monitoring of the optical sensors. ©2009 IEEE.
Kumar, Keshav
2018-03-01
Excitation-emission matrix fluorescence (EEMF) and total synchronous fluorescence spectroscopy (TSFS) are the two fluorescence techniques commonly used for the analysis of multifluorophoric mixtures. These two fluorescence techniques are conceptually different and provide certain advantages over each other. The manual analysis of such highly correlated, large-volume EEMF and TSFS data sets towards developing a calibration model is difficult. Partial least squares (PLS) analysis can analyze large volumes of EEMF and TSFS data by finding important factors that maximize the correlation between the spectral and concentration information for each fluorophore. However, the application of PLS analysis to entire data sets often does not provide a robust calibration model and requires a suitable pre-processing step. The present work evaluates the application of genetic algorithm (GA) analysis prior to PLS analysis of EEMF and TSFS data sets towards improving the precision and accuracy of the calibration model. The GA essentially combines the advantages provided by stochastic methods with those provided by deterministic approaches and can find the set of EEMF and TSFS variables that correlate well with the concentration of each of the fluorophores present in the multifluorophoric mixtures. The utility of the GA-assisted PLS analysis is successfully validated using (i) EEMF data sets acquired for dilute aqueous mixtures of four biomolecules and (ii) TSFS data sets acquired for dilute aqueous mixtures of four carcinogenic polycyclic aromatic hydrocarbons (PAHs). In the present work, it is shown that by using the GA it is possible to significantly improve the accuracy and precision of the PLS calibration models developed for both the EEMF and TSFS data sets. Hence, GA analysis must be considered a useful pre-processing technique when developing EEMF and TSFS calibration models.
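A minimal sketch of GA-assisted variable selection ahead of PLS, in the spirit of the approach above; the data, population size, mutation rate, and fitness definition (cross-validated error) are illustrative assumptions rather than the paper's settings:

```python
# GA wavelength-subset selection followed by a final PLS calibration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 200))           # stand-in unfolded EEMF/TSFS variables
y = X[:, 40:45].sum(axis=1) + 0.1 * rng.normal(size=60)   # synthetic concentration

def fitness(mask):
    # Fitness = negative cross-validated MSE of a PLS model on selected variables.
    if mask.sum() < 5:
        return -np.inf
    pls = PLSRegression(n_components=min(4, int(mask.sum())))
    return cross_val_score(pls, X[:, mask], y, cv=5,
                           scoring="neg_mean_squared_error").mean()

pop = rng.random((20, X.shape[1])) < 0.3                  # initial random subsets
for generation in range(30):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]               # keep the fittest half
    p1 = parents[rng.integers(0, 10, 10)]
    p2 = parents[rng.integers(0, 10, 10)]
    children = np.where(rng.random(p1.shape) < 0.5, p1, p2)  # uniform crossover
    children = children ^ (rng.random(children.shape) < 0.01)  # point mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
final_model = PLSRegression(n_components=4).fit(X[:, best], y)
```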
NASA Astrophysics Data System (ADS)
Ouellette, G., Jr.; DeLong, K. L.
2016-02-01
High-resolution proxy records of sea surface temperature (SST) are increasingly being produced using trace element and isotope variability within the skeletal materials of marine organisms such as corals, mollusks, sclerosponges, and coralline algae. Translating the geochemical variations within these organisms into records of SST requires calibration with SST observations using linear regression methods, preferably with in situ SST records that span several years. However, locations with such records are sparse; therefore, calibration is often accomplished using gridded SST data products such as the Hadley Centre's HadSST (5°) and interpolated HadISST (1°) data sets, NOAA's extended reconstructed SST data set (ERSST; 2°), optimum interpolation SST (OISST; 1°), and the Kaplan SST data set (5°). From these data products, the SST used for proxy calibration is obtained for a single grid cell that includes the proxy's study site. The gridded data sets are based on the International Comprehensive Ocean-Atmosphere Data Set (ICOADS), and each uses different methods of interpolation to produce globally and temporally complete data products, except for HadSST, which is not interpolated but quality controlled. This study compares SST for a single site from these gridded data products, from a high-resolution satellite-based SST data set from NOAA (Pathfinder; 4 km), from in situ SST data, and from coral Sr/Ca variability at our study site in Haiti, to assess differences between these SST records with a focus on seasonal variability. Our results indicate substantial differences, on the order of 1-3°C, in the seasonal variability captured for the same site among these data sets. This analysis suggests that, of the data products, high-resolution satellite SST best captured seasonal variability at the study site. Unfortunately, satellite SST records are limited to the past few decades. If satellite SST are to be used to calibrate proxy records, collecting modern, living samples is desirable.
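The calibration step described above amounts to regressing the proxy against an instrumental SST series and inverting the fit. A tiny sketch with invented slope and intercept values, not site-specific coefficients:

```python
# Proxy calibration by linear regression: Sr/Ca = a + b*SST, then inversion.
import numpy as np

rng = np.random.default_rng(4)
sst = 26 + 2.5 * np.sin(np.linspace(0, 8 * np.pi, 200))    # deg C, four "years"
sr_ca = 10.5 - 0.06 * sst + rng.normal(0, 0.01, 200)       # mmol/mol (synthetic)

b, a = np.polyfit(sst, sr_ca, 1)         # slope first, intercept second
reconstructed_sst = (sr_ca - a) / b      # invert the calibration
amplitude = reconstructed_sst.max() - reconstructed_sst.min()
print("reconstructed seasonal amplitude (deg C):", amplitude)
```

The seasonal amplitude recovered this way is exactly what differs by 1-3°C depending on which SST product supplies the calibration series, which is the point of the comparison above.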
Optics-Only Calibration of a Neural-Net Based Optical NDE Method for Structural Health Monitoring
NASA Technical Reports Server (NTRS)
Decker, Arthur J.
2004-01-01
A calibration process is presented that uses optical measurements alone to calibrate a neural-net based NDE method. The method itself detects small changes in the vibration mode shapes of structures. The optics-only calibration process confirms previous work showing that the sensitivity to vibration-amplitude changes can be as small as 10 nanometers. A more practical value in an NDE service laboratory is shown to be 50 nanometers. Both model-generated and experimental calibrations are demonstrated using two implementations of the calibration technique. The implementations are based on previously published demonstrations of the NDE method and an alternative calibration procedure that depends on comparing neural-net and point-sensor measurements. The optics-only calibration method, unlike the alternative method, does not require modifications of the structure being tested or the creation of calibration objects. The calibration process can be used to test improvements in the NDE process and to develop a vibration-mode independence of damage-detection sensitivity. The calibration effort was intended to support NASA's objective to promote safety in the operations of ground test facilities, or aviation safety in general, by allowing the detection of the gradual onset of structural changes and damage.
NASA Technical Reports Server (NTRS)
Mahaffy, Paul R.
2012-01-01
The measurement goals of the Sample Analysis at Mars (SAM) instrument suite on the "Curiosity" rover of the Mars Science Laboratory (MSL) include chemical and isotopic analysis of organic and inorganic volatiles for both atmospheric and solid samples [1,2]. SAM directly supports the ambitious goals of the MSL mission to provide a quantitative assessment of habitability and preservation in Gale crater by means of a range of chemical and geological measurements [3]. The SAM FM combined calibration and environmental testing took place primarily in 2010, with a limited set of tests implemented after integration into the rover in January 2011. The scope of SAM FM testing was limited both to preserve SAM consumables, such as the lifetime of its electromechanical elements, and to minimize the level of terrestrial contamination in the SAM instrument. A more comprehensive calibration of a SAM-like suite of instruments will be implemented in 2012, with calibration runs planned for the SAM testbed. The SAM testbed is nearly identical to the SAM FM and operates in an ambient-pressure chamber. The SAM Instrument Suite: SAM's instruments are a Quadrupole Mass Spectrometer (QMS), a 6-column Gas Chromatograph (GC), and a 2-channel Tunable Laser Spectrometer (TLS). Gas chromatography mass spectrometry is designed for identification of even trace organic compounds. The TLS [5] secures the C, H, and O isotopic composition in carbon dioxide, water, and methane. Sieved materials are delivered from the MSL sample acquisition and processing system to one of 68 cups of the Sample Manipulation System (SMS). 59 of these cups are fabricated from inert quartz. After sample delivery, a cup is inserted into one of 2 ovens for evolved gas analysis (EGA; ambient to >950°C) by the QMS and TLS. A portion of the gas released can be trapped and subsequently analyzed by GCMS. Nine sealed cups contain liquid solvents and chemical derivatization or thermochemolysis agents to extract and transform polar molecules such as amino acids, nucleobases, and carboxylic acids into compounds that are sufficiently volatile to transmit through the GC columns. The remaining 6 cups contain calibrants. SAM FM Calibration Overview: The SAM FM calibration in the Mars chamber employed a variety of pure gases, gas mixtures, and solid materials. Isotope calibration runs for the TLS utilized 13C-enriched CO2 standards and isotopically enriched CH4. A variety of fluorocarbon compounds that spanned the entire mass range of the QMS, as well as C3-C6 hydrocarbons, were utilized for calibration of the GCMS. Solid samples consisting of a mixture of calcite, melanterite, and inert silica glass, either doped or not with fluorocarbons, were introduced into the SAM FM cups through the SAM inlet funnel/tube system.
NASA Astrophysics Data System (ADS)
Zimmerman, Naomi; Presto, Albert A.; Kumar, Sriniwasa P. N.; Gu, Jason; Hauryliuk, Aliaksei; Robinson, Ellis S.; Robinson, Allen L.; Subramanian, R.
2018-01-01
Low-cost sensing strategies hold the promise of denser air quality monitoring networks, which could significantly improve our understanding of personal air pollution exposure. Additionally, low-cost air quality sensors could be deployed to areas where limited monitoring exists. However, low-cost sensors are frequently sensitive to environmental conditions and pollutant cross-sensitivities, which have historically been poorly addressed by laboratory calibrations, limiting their utility for monitoring. In this study, we investigated different calibration models for the Real-time Affordable Multi-Pollutant (RAMP) sensor package, which measures CO, NO2, O3, and CO2. We explored three methods: (1) laboratory univariate linear regression, (2) empirical multiple linear regression, and (3) machine-learning-based calibration models using random forests (RF). Calibration models were developed for 16-19 RAMP monitors (varied by pollutant) using training and testing windows spanning August 2016 through February 2017 in Pittsburgh, PA, US. The random forest models matched (CO) or significantly outperformed (NO2, CO2, O3) the other calibration models, and their accuracy and precision were robust over time for testing windows of up to 16 weeks. Following calibration, average mean absolute error on the testing data set from the random forest models was 38 ppb for CO (14 % relative error), 10 ppm for CO2 (2 % relative error), 3.5 ppb for NO2 (29 % relative error), and 3.4 ppb for O3 (15 % relative error), and Pearson r versus the reference monitors exceeded 0.8 for most units. Model performance is explored in detail, including a quantification of model variable importance, accuracy across different concentration ranges, and performance in a range of monitoring contexts including the National Ambient Air Quality Standards (NAAQS) and the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. A key strength of the RF approach is that it accounts for pollutant cross-sensitivities. This highlights the importance of developing multipollutant sensor packages (as opposed to single-pollutant monitors); we determined this is especially critical for NO2 and CO2. The evaluation reveals that only the RF-calibrated sensors meet the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. We also demonstrate that the RF-model-calibrated sensors could detect differences in NO2 concentrations between a near-road site and a suburban site less than 1.5 km away. From this study, we conclude that combining RF models with carefully controlled state-of-the-art multipollutant sensor packages as in the RAMP monitors appears to be a very promising approach to address the poor performance that has plagued low-cost air quality sensors.
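A minimal sketch of the random-forest calibration idea, assuming synthetic raw-signal data: reference concentrations are predicted from the sensor's own response plus co-measured signals, so cross-sensitivities (here an O3 term inside the NO2 signal) and temperature/humidity effects enter the model as features. The feature construction and numbers are invented, not the RAMP pipeline:

```python
# RF calibration of a cross-sensitive low-cost NO2 sensor (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 2000
temp = rng.uniform(-5, 35, n)                     # deg C
rh = rng.uniform(10, 95, n)                       # percent
true_no2 = rng.uniform(0, 50, n)                  # ppb, stand-in reference data
true_o3 = rng.uniform(0, 60, n)
# Raw NO2 signal with O3 cross-sensitivity and T/RH drift baked in:
raw_no2 = (0.8 * true_no2 - 0.3 * true_o3 + 0.1 * temp + 0.02 * rh
           + rng.normal(0, 1, n))
raw_o3 = 0.9 * true_o3 + 0.2 * true_no2 + rng.normal(0, 1, n)

X = np.column_stack([raw_no2, raw_o3, temp, rh])  # multipollutant feature set
X_tr, X_te, y_tr, y_te = train_test_split(X, true_no2, test_size=0.3,
                                          random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("NO2 MAE (ppb):", mean_absolute_error(y_te, rf.predict(X_te)))
print("feature importances:", rf.feature_importances_)
```

The importance of the raw O3 and RH features in such a fit is one way to see why a multipollutant package beats a single-pollutant monitor for NO2.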
In-Space Calibration of a Gyro Quadruplet
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Harman, Richard R.
2001-01-01
This work presents a new approach to gyro calibration where, in addition to being used for computing attitude that is needed in the calibration process, the gyro outputs are also used as measurements in a Kalman filter. This work also presents an algorithm for calibrating a quadruplet rather than the customary triad gyro set. In particular, a new misalignment error model is derived for this case. The new calibration algorithm is applied to the EOS-AQUA satellite gyros. The effectiveness of the new algorithm is demonstrated through simulations.
Cárdenas, V; Cordobés, M; Blanco, M; Alcalà, M
2015-10-10
The pharmaceutical industry is under stringent regulations on quality control of its products, because quality is critical for both the production process and consumer safety. According to the framework of "process analytical technology" (PAT), a complete understanding of the process and a stepwise monitoring of manufacturing are required. Near-infrared spectroscopy (NIRS) combined with chemometrics has lately proven efficient, useful and robust for pharmaceutical analysis. One crucial step in developing effective NIRS-based methodologies is selecting an appropriate calibration set to construct models affording accurate predictions. In this work, we developed calibration models for a pharmaceutical formulation during its three manufacturing stages: blending, compaction and coating. A novel methodology, the "process spectrum", is proposed for selecting the calibration set, into which physical changes in the samples at each stage are algebraically incorporated. Also, we established a "model space" defined by Hotelling's T(2) and Q-residuals statistics for outlier identification (inside/outside the defined space) in order to select objectively the factors to be used in calibration set construction. The results obtained confirm the efficacy of the proposed methodology for stepwise pharmaceutical quality control, and the relevance of the study as a guideline for the implementation of this easy and fast methodology in the pharma industry. Copyright © 2015 Elsevier B.V. All rights reserved.
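The "model space" above is bounded by Hotelling's T(2) and Q-residuals statistics. A short sketch of computing both from a PCA model and flagging samples outside the space; the 95th-percentile limits below are a simplifying assumption, not the paper's exact statistical limits:

```python
# Hotelling's T^2 and Q residuals from a PCA model of calibration spectra.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
X = rng.normal(size=(120, 400))                    # stand-in NIR spectra
pca = PCA(n_components=5).fit(X)
scores = pca.transform(X)

# T^2: squared scores scaled by the variance of each retained component.
t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)
# Q: squared reconstruction error outside the retained PCA subspace.
residuals = X - pca.inverse_transform(scores)
q = np.sum(residuals**2, axis=1)

t2_lim, q_lim = np.percentile(t2, 95), np.percentile(q, 95)
inside = (t2 <= t2_lim) & (q <= q_lim)
print(f"{(~inside).sum()} samples fall outside the model space")
```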
NASA Technical Reports Server (NTRS)
Parker, Peter A. (Inventor)
2003-01-01
A single vector calibration system is provided which facilitates the calibration of multi-axis load cells, including wind tunnel force balances. The single vector system provides the capability to calibrate a multi-axis load cell using a single directional load, for example loading solely in the gravitational direction. The system manipulates the load cell in three-dimensional space, while keeping the uni-directional calibration load aligned. The use of a single vector calibration load reduces the set-up time for the multi-axis load combinations needed to generate a complete calibration mathematical model. The system also reduces load application inaccuracies caused by the conventional requirement to generate multiple force vectors. The simplicity of the system reduces calibration time and cost, while simultaneously increasing calibration accuracy.
NASA Technical Reports Server (NTRS)
Smith, Ramsey; Reuter, Dennis; Irons, James; Lunsford, Allen; Montanero, Matthew; Tesfaye, Zelalem; Wenny, Brian; Thome, Kurtis
2011-01-01
The preflight calibration testing of TIRS evaluates the performance of the instrument at the component, subsystem and system level. The overall objective is to provide an instrument that is well calibrated and well characterized, with specification-compliant data that will ensure the data continuity of Landsat from the previous missions to the LDCM. The TIRS flight build unit and the flight instrument were assessed through a series of calibration tests at NASA Goddard Space Flight Center. Instrument-level requirements played a strong role in defining the test equipment and procedures used for the calibration in the thermal/vacuum chamber. The calibration ground support equipment (CGSE), manufactured by MEI and ATK Corporation, was used to measure the optical, radiometric and geometric characteristics of TIRS. The CGSE operates in three test configurations: GeoRad (geometric, radiometric and spatial), flood source, and spectral. TIRS was evaluated through the following tests: bright target recovery, radiometry, spectral response, spatial shape, scatter, stray light, focus, and uniformity. Data were obtained for the instrument and various subsystems under conditions simulating those on orbit. In the spectral configuration, a monochromator system with a blackbody source is used for in-band and out-of-band relative spectral response characterization. In the flood source configuration, the entire focal plane array is illuminated simultaneously to investigate pixel-to-pixel uniformity and dead or inoperable pixels. The remaining tests were executed in the GeoRad configuration and use a NIST-calibrated cavity blackbody source. The NIST calibration is transferred to the TIRS sensor and to the blackbody source on board TIRS. The onboard calibrator will be the primary calibration source for the TIRS sensor on orbit.
Study on rapid valid acidity evaluation of apple by fiber optic diffuse reflectance technique
NASA Astrophysics Data System (ADS)
Liu, Yande; Ying, Yibin; Fu, Xiaping; Jiang, Xuesong
2004-03-01
Some issues related to nondestructive evaluation of valid acidity in intact apples by means of the Fourier transform near-infrared (FTNIR) (800-2631 nm) method were addressed. A relationship was established between the diffuse reflectance spectra recorded with a bifurcated optic fiber and the valid acidity. The data were analyzed by multivariate calibration methods such as partial least squares (PLS) analysis and the principal component regression (PCR) technique. A total of 120 Fuji apples were tested, and 80 of them were used to form a calibration data set. The influence of data preprocessing and different spectral treatments was also investigated. Models based on smoothed spectra were slightly worse than models based on derivative spectra, and the best result was obtained when the segment length was 5 and the gap size was 10. Depending on the data preprocessing and multivariate calibration technique, the best prediction model, obtained by PLS analysis, had a good correlation coefficient (0.871), a low RMSEP (0.0677), a low RMSEC (0.056), and a small difference between RMSEP and RMSEC. The results point out the feasibility of FTNIR spectral analysis for predicting fruit valid acidity non-destructively. The ratio of the data standard deviation to the root mean square error of prediction (SDR) should be no less than 3 for calibration models; however, the results here cannot yet meet the demands of practical application. Therefore, further study is required for better calibration and prediction.
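The tuned segment/gap preprocessing above is a gap-segment derivative. A small sketch, assuming synthetic spectra and treating segment and gap as point counts (the paper's exact convention may differ), followed by a PLS calibration with an 80/40 split mirroring the sample counts:

```python
# Gap-segment first derivative, then PLS calibration (all data synthetic).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def gap_segment_derivative(spectra, segment=5, gap=10):
    """First derivative: difference of two segment means separated by a gap."""
    half = gap // 2
    n = spectra.shape[1]
    out = np.zeros_like(spectra)
    for i in range(n):
        lo = slice(max(i - half - segment, 0), max(i - half, 1))
        hi = slice(min(i + half, n - 1), min(i + half + segment, n))
        out[:, i] = spectra[:, hi].mean(axis=1) - spectra[:, lo].mean(axis=1)
    return out

rng = np.random.default_rng(7)
spectra = np.cumsum(rng.normal(size=(120, 500)), axis=1)   # smooth-ish stand-ins
acidity = spectra[:, 250] * 0.01 + rng.normal(0, 0.05, 120)

X = gap_segment_derivative(spectra)
model = PLSRegression(n_components=8).fit(X[:80], acidity[:80])  # calibration set
rmsep = np.sqrt(np.mean((model.predict(X[80:]).ravel() - acidity[80:])**2))
print("RMSEP on the 40 held-out apples:", rmsep)
```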
NASA Astrophysics Data System (ADS)
Chatzistergos, Theodosios; Ermolli, Ilaria; Solanki, Sami K.; Krivova, Natalie A.
2018-01-01
Context. Historical Ca II K spectroheliograms (SHG) are unique in representing long-term variations of the solar chromospheric magnetic field. They usually suffer from numerous problems and lack photometric calibration. Thus accurate processing of these data is required to get meaningful results from their analysis. Aims: In this paper we aim at developing an automatic processing and photometric calibration method that provides precise and consistent results when applied to historical SHG. Methods: The proposed method is based on the assumption that the centre-to-limb variation of the intensity in quiet Sun regions does not vary with time. We tested the accuracy of the proposed method on various sets of synthetic images that mimic problems encountered in historical observations. We also tested our approach on a large sample of images randomly extracted from seven different SHG archives. Results: The tests carried out on the synthetic data show that the maximum relative errors of the method are generally <6.5%, while the average error is <1%, even if rather poor quality observations are considered. In the absence of strong artefacts the method returns images that differ from the ideal ones by <2% in any pixel. The method gives consistent values for both plage and network areas. We also show that our method returns consistent results for images from different SHG archives. Conclusions: Our tests show that the proposed method is more accurate than other methods presented in the literature. Our method can also be applied to process images from photographic archives of solar observations at other wavelengths than Ca II K.
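The method's core assumption, a time-invariant quiet-Sun centre-to-limb variation (CLV), can be illustrated compactly: estimate the radial intensity profile from the image itself and normalize by it. The toy disc geometry and annular binning below are our stand-ins for a real spectroheliogram, not the authors' pipeline:

```python
# Photometric normalization by an image's own azimuthally averaged CLV profile.
import numpy as np

size, r_disc = 512, 220
yy, xx = np.mgrid[:size, :size]
r = np.hypot(xx - size / 2, yy - size / 2) / r_disc      # fractional disc radius
mu = np.sqrt(np.clip(1 - r**2, 0, None))                 # cos(heliocentric angle)
image = (0.3 + 0.7 * mu) * (r <= 1)                      # toy CLV on a masked disc

# Median intensity in annular bins approximates the quiet-Sun CLV.
bins = np.linspace(0, 1, 50)
idx = np.digitize(r[r <= 1], bins)
clv = np.array([np.median(image[r <= 1][idx == i]) for i in range(1, len(bins))])
profile = np.interp(r, (bins[:-1] + bins[1:]) / 2, clv)  # back onto the pixel grid

contrast = np.zeros_like(image)
mask = (r <= 1) & (profile > 0)
contrast[mask] = image[mask] / profile[mask] - 1.0       # calibrated contrast image
```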
On-ground calibration of the ART-XC/SRG mirror system and detector unit at IKI. Part I
NASA Astrophysics Data System (ADS)
Pavlinsky, M.; Tkachenko, A.; Levin, V.; Krivchenko, A.; Rotin, A.; Kuznetsova, M.; Lapshov, I.; Krivonos, R.; Semena, A.; Semena, N.; Serbinov, D.; Shtykovsky, A.; Yaskovich, A.; Oleinikov, V.; Glushenko, A.; Mereminskiy, I.; Molkov, S.; Sazonov, S.; Arefiev, V.
2018-05-01
From October 2016 to September 2017, we performed tests of the ART-XC/SRG spare mirror system and detector unit at the 60-m-long IKI X-ray test facility. We describe some technical features of this test facility. We also present a brief description of the ART-XC mirror system and focal detectors. The nominal focal length of the ART-XC optics is 2700 mm. The field of view is determined by the combination of the mirror system and the detector unit and is equal to ˜0.31 square degrees. The declared operating energy range is 5-30 keV. During the tests, we illuminated the detector with a 55Fe+241Am calibration source and also with a quasi-parallel X-ray beam. The calibration source is integrated into the detector's collimator. The X-ray beam was generated by a set of Oxford Instruments X-ray tubes with Cr, Cu and Mo targets and an Amptek miniature X-ray tube (Mini-X) with an Ag transmission target. The detector was exposed to the X-ray beam either directly or through the mirror system. We present the obtained results on the detector's energy resolution, the on-ground muon background level and the energy dependence of the W90 value. The accuracy of a mathematical model of the ART-XC mirror system, based on ray-tracing simulations, proves to be within 3.5% in the main energy range of 4-20 keV and 5.4% in the "hard" energy range of 20-40 keV.
Dual-angle, self-calibrating Thomson scattering measurements in RFX-MOD
NASA Astrophysics Data System (ADS)
Giudicotti, L.; Pasqualotto, R.; Fassina, A.
2014-11-01
In the multipoint Thomson scattering (TS) system of the RFX-MOD experiment the signals from a few spatial positions can be observed simultaneously under two different scattering angles. In addition the detection system uses optical multiplexing by signal delays in fiber optic cables of different length so that the two sets of TS signals can be observed by the same polychromator. Owing to the dependence of the TS spectrum on the scattering angle, it was then possible to implement self-calibrating TS measurements in which the electron temperature Te, the electron density ne and the relative calibration coefficients of spectral channels sensitivity Ci were simultaneously determined by a suitable analysis of the two sets of TS data collected at the two angles. The analysis has shown that, in spite of the small difference in the spectra obtained at the two angles, reliable values of the relative calibration coefficients can be determined by the analysis of good S/N dual-angle spectra recorded in a few tens of plasma shots. This analysis suggests that in RFX-MOD the calibration of the entire set of TS polychromators by means of the similar, dual-laser (Nd:YAG/Nd:YLF) TS technique, should be feasible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ovard R. Perry; David L. Georgeson
This report describes the April 2011 calibration of the Accuscan II HpGe In Vivo system for high-energy lung counting. The source used for the calibration was a NIST-traceable lung set manufactured at the University of Cincinnati (UCLL43AMEU and UCSL43AMEU) containing Am-241 and Eu-152, with energies from 26 keV to 1408 keV. The lung set was used in conjunction with a Realistic Torso phantom. The phantom was placed on the RMC II counting table (with pins removed) between the v-ridges on the back wall of the Accuscan II counter. The top of the detector housing was positioned perpendicular to the junction of the phantom clavicle with the sternum. This position places the approximate center line of the detector housing at the center of the lungs. The energy and efficiency calibrations were performed using a Realistic Torso phantom (Appendix I) and the University of Cincinnati lung set. This report includes an overview introduction and records for the energy/FWHM and efficiency calibration, including performance verification and validation counting. The Accuscan II system was successfully calibrated for high-energy lung counting and verified in accordance with ANSI/HPS N13.30-1996 criteria.
Zhang, Hong-guang; Lu, Jian-gang
2016-02-01
To overcome the problems of significant differences among samples and the nonlinearity between the property and spectra of samples in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method was first used to obtain the net analyte signal of the calibration samples and unknown samples; then the Euclidean distance between the net analyte signal of each unknown sample and the net analyte signals of the calibration samples was calculated and utilized as a similarity index. According to the defined similarity index, a local calibration set was individually selected for each unknown sample. Finally, a local PLS regression model was built on each local calibration set for each unknown sample. The proposed method was applied to a set of near-infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and the conventional local regression algorithm based on spectral Euclidean distance.
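A skeleton of the local-regression idea, assuming synthetic data: for each unknown sample, the most similar calibration samples are selected and a small PLS model is fitted on just those. The paper's similarity is the Euclidean distance between net analyte signals; a plain spectral distance stands in for that step here, which is a deliberate simplification:

```python
# Local PLS regression with a per-sample calibration subset (illustrative).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(8)
X_cal = rng.normal(size=(200, 300))                 # calibration spectra
y_cal = X_cal[:, 100] * 2.0 + rng.normal(0, 0.1, 200)
X_unk = rng.normal(size=(10, 300))                  # unknown samples

def local_pls_predict(x, k=40, n_components=5):
    d = np.linalg.norm(X_cal - x, axis=1)           # similarity index (stand-in
                                                    # for the NAS-based distance)
    nearest = np.argsort(d)[:k]                     # local calibration set
    pls = PLSRegression(n_components=n_components)
    pls.fit(X_cal[nearest], y_cal[nearest])
    return float(pls.predict(x[None, :])[0, 0])

predictions = [local_pls_predict(x) for x in X_unk]
```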
Whole-machine calibration approach for phased array radar with self-test
NASA Astrophysics Data System (ADS)
Shen, Kai; Yao, Zhi-Cheng; Zhang, Jin-Chang; Yang, Jian
2017-06-01
The performance of a missile-borne phased array radar is greatly influenced by inter-channel amplitude and phase inconsistencies. In order to ensure its performance, the amplitude and phase characteristics of the radar should be calibrated. Commonly used methods, such as FFT and REV, mainly focus on antenna calibration. However, the radar channel also contains the T/R components, transmission channels, and the ADC. In order to obtain the amplitude and phase information of the phased array radar for rapid whole-machine calibration and compensation, we adopt a high-precision planar scanning test platform for amplitude and phase testing. A calibration approach for the whole channel system, based on the radar frequency source test, is proposed. Finally, the advantages and the application prospects of this approach are analysed.
Reliability of an x-ray system for calibrating and testing personal radiation dosimeters
NASA Astrophysics Data System (ADS)
Guimarães, M. C.; Silva, C. R. E.; Rosado, P. H. G.; Cunha, P. G.; Da Silva, T. A.
2018-03-01
Metrology laboratories are expected to maintain standardized radiation beams and traceable standard dosimeters to provide reliable calibrations or testing of detectors. Results of the characterization of an x-ray system for performing calibration and testing of radiation dosimeters used for individual monitoring are shown in this work.
New NREL Method Reduces Uncertainty in Photovoltaic Module Calibrations
33 CFR 154.2181 - Alternative testing program-Test requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... The CE test must check the calibrated range of each analyzer using a lower (zero) and upper (span) calibration gas; R = reference value of the zero or high-level calibration gas introduced into the monitoring system. [Table: zero and span differences over three runs, their mean difference, and the resulting calibration error in percent.]
40 CFR 1066.145 - Test fuel, engine fluids, analytical gases, and other calibration standards.
Code of Federal Regulations, 2014 CFR
2014-07-01
... (a) Test fuel. Use test fuel as specified in the standard ... Analytical gases must meet the requirements of 40 CFR 1065.750. (e) Mass standards. Use mass standards that meet the requirements of 40 CFR ...
Optical laboratory facilities at the Finnish Meteorological Institute - Arctic Research Centre
NASA Astrophysics Data System (ADS)
Lakkala, Kaisa; Suokanerva, Hanne; Matti Karhu, Juha; Aarva, Antti; Poikonen, Antti; Karppinen, Tomi; Ahponen, Markku; Hannula, Henna-Reetta; Kontu, Anna; Kyrö, Esko
2016-07-01
This paper describes the laboratory facilities at the Finnish Meteorological Institute - Arctic Research Centre (FMI-ARC, http://fmiarc.fmi.fi). They comprise an optical laboratory, a facility for biological studies, and an office. A dark room has been built, in which an optical table and a fixed lamp test system are set up, and the electronics allow high-precision adjustment of the current. The Brewer spectroradiometer, NILU-UV multifilter radiometer, and Analytical Spectral Devices (ASD) spectroradiometer of the FMI-ARC are regularly calibrated or checked for stability in the laboratory. The facilities are ideal for responding to the needs of international multidisciplinary research, giving the possibility to calibrate and characterize the research instruments as well as handle and store samples.
Improvement of Predictive Ability by Uniform Coverage of the Target Genetic Space
Bustos-Korts, Daniela; Malosetti, Marcos; Chapman, Scott; Biddulph, Ben; van Eeuwijk, Fred
2016-01-01
Genome-enabled prediction provides breeders with the means to increase the number of genotypes that can be evaluated for selection. One of the major challenges in genome-enabled prediction is how to construct a training set of genotypes from a calibration set that represents the target population of genotypes, where the calibration set is composed of a training and validation set. A random sampling protocol of genotypes from the calibration set will lead to low quality coverage of the total genetic space by the training set when the calibration set contains population structure. As a consequence, predictive ability will be affected negatively, because some parts of the genotypic diversity in the target population will be under-represented in the training set, whereas other parts will be over-represented. Therefore, we propose a training set construction method that uniformly samples the genetic space spanned by the target population of genotypes, thereby increasing predictive ability. To evaluate our method, we constructed training sets alongside with the identification of corresponding genomic prediction models for four genotype panels that differed in the amount of population structure they contained (maize Flint, maize Dent, wheat, and rice). Training sets were constructed using uniform sampling, stratified-uniform sampling, stratified sampling and random sampling. We compared these methods with a method that maximizes the generalized coefficient of determination (CD). Several training set sizes were considered. We investigated four genomic prediction models: multi-locus QTL models, GBLUP models, combinations of QTL and GBLUPs, and Reproducing Kernel Hilbert Space (RKHS) models. For the maize and wheat panels, construction of the training set under uniform sampling led to a larger predictive ability than under stratified and random sampling. The results of our methods were similar to those of the CD method. For the rice panel, all training set construction methods led to similar predictive ability, a reflection of the very strong population structure in this panel. PMID:27672112
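A hedged sketch of uniform coverage of the genetic space: project genotypes into a low-dimensional space and pick one representative per region, so the training set spans the space evenly rather than following population density. K-means centroids stand in for the paper's uniform sampling scheme, which is our assumption; the marker data are synthetic:

```python
# Uniform-coverage training-set construction in a PCA genetic space.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)
markers = rng.integers(0, 3, size=(500, 1000)).astype(float)  # SNP matrix (0/1/2)
space = PCA(n_components=10).fit_transform(markers)           # genetic space

def uniform_training_set(coords, n_train):
    km = KMeans(n_clusters=n_train, n_init=10, random_state=0).fit(coords)
    # Take the genotype closest to each centroid as the region's representative.
    picks = [int(np.argmin(np.linalg.norm(coords - c, axis=1)))
             for c in km.cluster_centers_]
    return np.unique(picks)

train_idx = uniform_training_set(space, n_train=100)
print("training set size:", len(train_idx))
```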
Automated Attitude Sensor Calibration: Progress and Plans
NASA Technical Reports Server (NTRS)
Sedlak, Joseph; Hashmall, Joseph
2004-01-01
This paper describes ongoing work at NASA/Goddard Space Flight Center to improve the quality of spacecraft attitude sensor calibration and reduce costs by automating parts of the calibration process. The new calibration software can autonomously preview data quality over a given time span, select a subset of the data for processing, perform the requested calibration, and output a report. This level of automation is currently being implemented for two specific applications: inertial reference unit (IRU) calibration and sensor alignment calibration. The IRU calibration utility makes use of a sequential version of the Davenport algorithm. This utility has been successfully tested with simulated and actual flight data. The alignment calibration is still in the early testing stage. Both utilities will be incorporated into the institutional attitude ground support system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Addair, Travis; Barno, Justin; Dodge, Doug
CCT is a Java-based application for calibrating 1D shear wave coda measurement models to observed data using a much smaller set of reference moment magnitudes (MWs) calculated by other means (waveform modeling, etc.). These calibrated measurement models can then be used in other tools to generate coda moment magnitude measurements, source spectra, estimated stress drops, and other useful measurements for any additional events and any new data collected in the calibrated region.
Fang, Cheng; Butler, David Lee
2013-05-01
In this paper, an innovative method for CMM (coordinate measuring machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high-precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact fabricated from commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. In the mathematical formulation, the number of samples can be minimized by eliminating the redundant equations among those configured by the experimental data array. The section lengths of the artefact are measured at arranged positions, with which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented either by measuring the total length of the artefact with a higher-precision CMM or by calibrating the single-point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that with the error compensation curve, the uncertainty of the measurement can be reduced to 50%.
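A toy illustration of why the equation set is one equation short and how a single calibrated point closes it: section-length measurements observe only differences of the position errors, so an absolute reference at one position (here, an interferometer-calibrated endpoint) fixes the remaining constant. Geometry, noise, and numbers below are invented:

```python
# Self-calibration from section-length differences plus one reference point.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(10)
positions = np.arange(0, 11) * 50.0                 # mm, arranged sampling positions
true_error = 0.002 * np.sin(positions / 120.0)      # mm, unknown CMM scale error

# Measuring an artefact section spanning [i, i+1] observes only:
#   (L_measured - L_true) = e[i+1] - e[i]
observed_diff = np.diff(true_error) + rng.normal(0, 1e-5, len(positions) - 1)

e0 = true_error[0]                                  # supplied by the interferometer
estimated = e0 + np.concatenate([[0.0], np.cumsum(observed_diff)])

compensation = CubicSpline(positions, estimated)    # error compensation curve
residual = np.max(np.abs(compensation(positions) - true_error))
print("max residual of the compensation curve (mm):", residual)
```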
Kodak Mirror Assembly Tested at Marshall Space Flight Center
NASA Technical Reports Server (NTRS)
2003-01-01
The Eastman-Kodak mirror assembly is being tested for the James Webb Space Telescope (JWST) project at the X-Ray Calibration Facility at Marshall Space Flight Center (MSFC). In this photo, one of many segments of the mirror assembly is being set up inside the 24-ft vacuum chamber where it will undergo x-ray calibration tests. MSFC is supporting Goddard Space Flight Center (GSFC) in developing the JWST by taking numerous measurements to predict its future performance. The tests are conducted in a vacuum chamber cooled to approximate the super cold temperatures found in space. During its 27 years of operation, the facility has performed testing in support of a wide array of projects, including the Hubble Space Telescope (HST), Solar A, Chandra technology development, Chandra High Resolution Mirror Assembly and science instruments, Constellation X-Ray Mission, and Solar X-Ray Imager, currently operating on a Geostationary Operational Environment Satellite. The JWST is NASA's next generation space telescope, a successor to the Hubble Space Telescope, named in honor of NASA's second administrator, James E. Webb. It is scheduled for launch in 2010 aboard an expendable launch vehicle. It will take about 3 months for the spacecraft to reach its destination, an orbit of 940,000 miles in space.
Stereoscopic 3D reconstruction using motorized zoom lenses within an embedded system
NASA Astrophysics Data System (ADS)
Liu, Pengcheng; Willis, Andrew; Sui, Yunfeng
2009-02-01
This paper describes a novel embedded system capable of estimating the 3D positions of surfaces viewed by a stereoscopic rig consisting of a pair of calibrated cameras. Novel theoretical and technical aspects of the system are tied to two aspects of the design that deviate from typical stereoscopic reconstruction systems: (1) incorporation of a 10x zoom lens (Rainbow H10x8.5) and (2) implementation on an embedded system. The system components include a DSP running μClinux, an embedded version of the Linux operating system, and an FPGA. The DSP orchestrates data flow within the system and performs complex computational tasks, while the FPGA provides an interface to the system devices, which consist of a CMOS camera pair and a pair of servo motors that rotate (pan) each camera. Calibration of the camera pair is accomplished using a collection of stereo images that view a common chessboard calibration pattern at a set of pre-defined zoom positions. Calibration settings for an arbitrary zoom setting are estimated by interpolation of the camera parameters. A low-computational-cost method for dense stereo matching is used to compute depth disparities for the stereo image pairs. Surface reconstruction is accomplished by classical triangulation of the matched points from the depth disparities. This article includes our methods and results for the following problems: (1) automatic computation of the focus and exposure settings for the lens and camera sensor, (2) calibration of the system for various zoom settings, and (3) stereo reconstruction results for several free-form objects.
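The zoom-calibration scheme above (calibrate at predefined zoom stops, interpolate in between) reduces to a small lookup-and-interpolate table per camera parameter. A tiny sketch with invented focal-length values; the real system interpolates the full intrinsic parameter set:

```python
# Interpolating a camera intrinsic between calibrated zoom positions.
import numpy as np

zoom_stops = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])    # calibrated positions
focal_px = np.array([800, 1610, 3190, 4805, 6420, 8010])  # focal length at each

def focal_for_zoom(z):
    """Linear interpolation between the nearest calibrated zoom positions."""
    return float(np.interp(z, zoom_stops, focal_px))

print(focal_for_zoom(3.0))   # estimated focal length at an uncalibrated setting
```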
Tang, Jun; Wang, Qing; Tong, Hong; Liao, Xiang; Zhang, Zheng-fang
2016-03-01
This work aimed to use attenuated total reflectance Fourier transform infrared spectroscopy to identify lavender essential oil by establishing a lavender variety and quality analysis model. In total, 96 samples were tested. For all samples, the raw spectra were pretreated with a second derivative, and the 1750-900 cm(-1) wavelength region was selected for pattern recognition analysis on the basis of a variance calculation. The results showed that principal component analysis (PCA) can basically discriminate lavender oil cultivars, and that the first three principal components mainly represent the ester, alcohol and terpenoid substances. When the orthogonal partial least-squares discriminant analysis (OPLS-DA) model was established, 68 samples were used for the calibration set. The determination coefficients of the OPLS-DA regression curves were 0.959 2, 0.976 4, and 0.958 8, respectively, for the three varieties of lavender essential oil. The root mean square errors of prediction (RMSEP) in the validation set were 0.142 9, 0.127 3, and 0.124 9 for the three varieties, respectively. The discrimination rate for the calibration set and the prediction rate for the validation set both reached 100%. The model has very good recognition capability for detecting the variety and quality of lavender essential oil. The results indicate that a quick, intuitive and feasible model has been built to discriminate lavender oils.
Utilization of advanced calibration techniques in stochastic rock fall analysis of quarry slopes
NASA Astrophysics Data System (ADS)
Preh, Alexander; Ahmadabadi, Morteza; Kolenprat, Bernd
2016-04-01
In order to study rock fall dynamics, a research project was conducted by the Vienna University of Technology and the Austrian Central Labour Inspectorate (Federal Ministry of Labour, Social Affairs and Consumer Protection). Part of this project comprised full-scale drop tests at three different quarries in Austria, with key parameters of the rock fall trajectories recorded. The tests involved a total of 277 boulders ranging from 0.18 to 1.8 m in diameter and from 0.009 to 8.1 Mg in mass. The geology of these sites included strong rock of igneous, metamorphic and volcanic types. In this paper the results of the tests are used for calibration and validation of a new stochastic computer model. It is demonstrated that the error of the model (i.e. the difference between observed and simulated results) has a lognormal distribution. With two model parameters selected for calibration, advanced techniques including Markov chain Monte Carlo, maximum likelihood and root mean square error (RMSE) minimization are utilized to minimize this error. Validation of the model based on the cross-validation technique reveals that, in general, reasonable stochastic approximations of the rock fall trajectories are obtained in all dimensions, including runout, bounce heights and velocities. The approximations are compared to the measured data in terms of median, 95th-percentile and maximum values. The results of the comparisons indicate that approximate first-order predictions, using a single set of input parameters, are possible and can be used to aid practical hazard and risk assessment.
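A sketch of one of the named techniques, random-walk Metropolis (a Markov chain Monte Carlo method), calibrating two model parameters under the lognormal error model the abstract reports. The simulate callable is a hypothetical stand-in for the stochastic rock fall simulator and is assumed to return positive runout values.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_likelihood(simulate, params, observed):
        # Lognormal error model: log(observed/simulated) ~ N(0, s^2),
        # matching the reported lognormal distribution of model error.
        simulated = simulate(params)          # hypothetical simulator call
        r = np.log(observed / simulated)
        s = r.std(ddof=1)
        return -0.5 * np.sum((r / s) ** 2) - r.size * np.log(s)

    def metropolis(simulate, observed, start, step, n_iter=10_000):
        # Random-walk Metropolis over the two calibration parameters.
        theta = np.asarray(start, dtype=float)
        ll = log_likelihood(simulate, theta, observed)
        chain = []
        for _ in range(n_iter):
            prop = theta + rng.normal(0.0, step, size=theta.shape)
            ll_prop = log_likelihood(simulate, prop, observed)
            if np.log(rng.uniform()) < ll_prop - ll:   # accept/reject
                theta, ll = prop, ll_prop
            chain.append(theta.copy())
        return np.asarray(chain)

The posterior sample returned in chain can then be summarized (e.g. by its median and spread) to give the calibrated parameter set and its uncertainty.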
Cryogenic characterization of LEDs for space application
NASA Astrophysics Data System (ADS)
Carron, Jérôme; Philippon, Anne; How, Lip Sun; Delbergue, Audrey; Hassanzadeh, Sahar; Cillierre, David; Danto, Pascale; Boutillier, Mathieu
2017-09-01
In the frame of the EUCLID project, the Calibration Unit of the VIS (VISible Imager) instrument must provide an accurate, well-characterized light source for in-flight instrument calibration, and must emit no photons when switched off. The Calibration Unit consists of a set of LEDs emitting at various wavelengths in the visible towards an integrating sphere, whose output provides uniform illumination over the entire focal plane. Nine LED references from different manufacturers were selected, screened and qualified under cryogenic conditions. Testing this large quantity of samples led to the implementation of automated test equipment with complete in-situ monitoring of optoelectronic parameters as well as temperature and vacuum values. All the electrical and optical parameters of the LEDs were monitored and recorded at ambient and cryogenic temperatures. These results were compiled to show the total deviation of the LED electrical and electro-optical properties over the whole mission and to select the LED references best suited to the mission. This qualification demonstrated the robustness of COTS LEDs operating at low cryogenic temperatures in the space environment. Six wavelengths were then selected and submitted to an EMC sensitivity test at room and cold temperature, counting the number of photons emitted while the LED drivers are OFF. Characterizations were conducted over the full frequency spectrum in order to implement system-level solutions that suppress the emission of photons when the LED drivers are OFF. LED impedance was also characterized at room and cold temperature.
UNIFORMLY MOST POWERFUL BAYESIAN TESTS
Johnson, Valen E.
2014-01-01
Uniformly most powerful tests are statistical hypothesis tests that provide the greatest power against a fixed null hypothesis among all tests of a given size. In this article, the notion of uniformly most powerful tests is extended to the Bayesian setting by defining uniformly most powerful Bayesian tests to be tests that maximize the probability that the Bayes factor, in favor of the alternative hypothesis, exceeds a specified threshold. Like their classical counterparts, uniformly most powerful Bayesian tests are most easily defined in one-parameter exponential family models, although extensions outside of this class are possible. The connection between uniformly most powerful tests and uniformly most powerful Bayesian tests can be used to provide an approximate calibration between p-values and Bayes factors. Finally, issues regarding the strong dependence of the resulting Bayes factors and p-values on sample size are discussed. PMID:24659829
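As a concrete instance of the one-parameter exponential family case, consider a normal mean with known variance, testing H0: mu = 0 against mu > 0. A sketch of the UMPBT(gamma) alternative and the induced p-value/Bayes-factor calibration in this special case (not the article's full generality):

    \[
      \tilde\mu = \sigma\sqrt{\frac{2\ln\gamma}{n}}, \qquad
      \mathrm{BF}_{10} > \gamma \;\iff\;
      z = \frac{\sqrt{n}\,\bar{x}}{\sigma} > \sqrt{2\ln\gamma},
    \]

so a one-sided size-\(\alpha\) z-test corresponds to the evidence threshold \(\gamma = e^{z_\alpha^{2}/2}\); for example, \(\alpha = 0.05\) gives \(z_\alpha \approx 1.645\) and hence \(\gamma \approx 3.87\), which is the sense in which a p-value near 0.05 corresponds to only modest Bayesian evidence.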
Asteroids as Calibration Standards in the Thermal Infrared -- Applications and Results from ISO
NASA Astrophysics Data System (ADS)
Müller, T. G.; Lagerros, J. S. V.
Asteroids have been used extensively as calibration sources for ISO. We summarise the asteroid observational parameters in the thermal infrared and explain the important modelling aspects. Ten selected asteroids were used extensively for the absolute photometric calibration of ISOPHOT in the far-IR. Additionally, the point-like and bright asteroids turned out to be of great interest for many technical tests and calibration aspects. They have been used for testing the calibration of SWS and LWS, for validation of the relative spectral response functions of different bands, and for colour-correction and filter-leak tests. Currently, there is a strong emphasis on ISO cross-calibration, where the asteroids contribute in many fields. Well-known asteroids have also been seen serendipitously in the CAM Parallel Mode and the PHT Serendipity Mode, allowing for validation and improvement of the photometric calibration of these special observing modes.
On the Long-Term Calibration of the TOMS Total Ozone Record
NASA Technical Reports Server (NTRS)
Stolarski, Richard S.; McPeters, Richard; Labow, Gordon J.; Hollandsworth, Stacey; Flynn, Larry; Einaudi, Franco (Technical Monitor)
2000-01-01
Comparison of Total Ozone Mapping Spectrometer (TOMS) data to the network of ground-based Dobson/Brewer measurements reveals differences in the time dependence of the calibration of the two systems. We have been searching for a method to determine the time dependence of the TOMS calibration that is independent of the Dobson/Brewer network. In a separate paper by DeLand et al., calibrations of the Solar Backscatter UV Spectrometer (SBUV) instruments have been rederived using the D-pair (306/313 nm wavelengths) data at the equator. These calibrations have been applied to the data from the Nimbus 7 SBUV and the NOAA 9 and 11 SBUV/2 instruments to derive a new version 7 data set for each instrument. We have used these data to perform a detailed comparison with the Nimbus 7 and Earth Probe TOMS data. Assuming that the D-pair establishes the correct calibration, these comparisons reveal some small calibration drifts (approximately 1%) in the TOMS data. They also reveal an offset in the D-pair calibration with respect to the Dobson network of approximately 8 Dobson units, the Dobson being lower than the D-pair. The D-pair calibration offsets have been used to create a merged ozone data set from TOMS with a calibration determined independently of the Dobson/Brewer network. Trend analyses of these data will be presented and compared to trend analyses using the ground-based data.
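One simple way a relative calibration drift of order 1% can be quantified is to fit a linear trend to the percent differences between coincident measurements from the two instruments. The sketch below uses synthetic stand-in series and is not the authors' actual procedure.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(1979.0, 1993.0, 168)              # synthetic monthly axis
    toms = 300.0 + rng.normal(0.0, 3.0, t.size)       # stand-in total ozone (DU)
    sbuv = 300.0 + rng.normal(0.0, 3.0, t.size)       # stand-in reference (DU)

    # Percent-difference series between the two coincident records.
    pct_diff = 100.0 * (toms - sbuv) / sbuv

    # Least-squares linear fit: slope is the relative drift in %/year.
    drift_per_year, offset = np.polyfit(t - t[0], pct_diff, 1)
    print(f"offset {offset:+.2f} %, drift {drift_per_year:+.3f} %/yr")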
ERIC Educational Resources Information Center
Sahin, Alper; Weiss, David J.
2015-01-01
This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…
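For reference, the three-parameter logistic (3PL) model named in the record has the standard closed form sketched below; the example item parameters are illustrative only.

    import numpy as np

    def p_correct(theta, a, b, c, D=1.7):
        # 3PL: guessing floor c plus a logistic rise governed by
        # discrimination a and difficulty b; D = 1.7 scales the
        # logistic to approximate the normal ogive.
        return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

    # At theta == b the probability is midway between c and 1.
    print(p_correct(theta=0.0, a=1.2, b=0.0, c=0.2))   # -> 0.6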
[Fragmentary record: spectra were acquired using different instrumental settings for the blue and red parts of the spectrum, with separate extractions (based on the optimal extraction algorithm of the IRAF packages) made for systematics checks of the wavelength calibration; wavelength and flux calibration were then applied. Tabulated columns included de-redshifted wavelength, observed flux and statistical error.]
Timothy J. Brady; Vicente J. Monleon; Andrew N. Gray
2010-01-01
We propose using future vascular plant abundances as indicators of future climate in a way analogous to the reconstruction of past environments by many palaeoecologists. To begin monitoring future short-term climate changes in the forests of Oregon and Washington, USA, we developed a set of transfer functions for a present-day calibration set consisting of climate...
On-orbit characterization of hyperspectral imagers
NASA Astrophysics Data System (ADS)
McCorkel, Joel
The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne and satellite-based sensors. Ground-truth measurements at these test sites are not always successful, owing to weather and funding availability, so RSG has also employed automated ground-instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal of the cross-calibration method is to transfer the calibration of a well-known sensor to a different sensor. This dissertation presents a method for determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on a multispectral sensor, the Moderate-resolution Imaging Spectroradiometer (MODIS), as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. A method to predict hyperspectral surface reflectance using a combination of MODIS data and spectral shape information is developed and applied to the characterization of Hyperion. Spectral shape information is based on RSG's historical in situ data for the Railroad Valley test site and on spectral library data for the Libyan site. Average atmospheric parameters, also based on historical measurements, are used in reflectance prediction and transfer to space. Results of several cross-calibration scenarios that differ in image-acquisition coincidence, test site, and reference sensor are found for the characterization of Hyperion. These are compared with results from the reflectance-based approach of vicarious calibration, a well-documented method developed by RSG that serves as a performance baseline for the cross-calibration method developed here. Cross-calibration provides results that are within 2% of the reflectance-based results in most spectral regions. Larger disagreements exist at the shorter wavelengths studied in this work, as well as in spectral regions that experience atmospheric absorption.
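A schematic of the central prediction step under simplifying assumptions: a relative spectral-shape prior for the site is rescaled, band by band, so that its response-weighted averages match the MODIS surface reflectances, and the band-centre scale factors are interpolated back onto the hyperspectral grid. All names are hypothetical, and the dissertation's actual method also folds in atmospheric transfer, which is omitted here.

    import numpy as np

    def predict_hyperspectral(wl, shape, srf, modis_refl):
        # Reflectance each MODIS band would see from the shape prior.
        band_avg = (srf @ shape) / srf.sum(axis=1)
        scale = modis_refl / band_avg                  # per-band correction
        centers = (srf @ wl) / srf.sum(axis=1)         # band-effective wavelengths
        # Interpolate band-centre scales onto the hyperspectral grid
        # (bands assumed ordered by increasing wavelength).
        return shape * np.interp(wl, centers, scale)

    # Toy check: a flat shape prior with flat MODIS reflectance of 0.3
    # should return 0.3 everywhere.
    wl = np.linspace(400.0, 2500.0, 211)               # Hyperion-like grid (nm)
    srf = np.exp(-0.5 * ((wl - np.array([470.0, 860.0, 1640.0])[:, None])
                         / 30.0) ** 2)                 # Gaussian stand-in SRFs
    print(predict_hyperspectral(wl, np.ones_like(wl), srf, np.full(3, 0.3))[:3])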
Uncertainty analysis technique for OMEGA Dante measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, M. J.; Widmann, K.; Sorce, C.
2010-10-15
The Dante is an 18-channel x-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums) at x-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the x-ray diodes, filters and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one-sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
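A compact sketch of the Monte Carlo parameter variation described above. The unfold callable is a hypothetical stand-in for the actual Dante unfold algorithm, and sigma is assumed to hold each channel's combined one-sigma fractional error.

    import numpy as np

    rng = np.random.default_rng(1)

    def flux_uncertainty(unfold, voltages, sigma, n_trials=1000):
        # Perturb each of the 18 channel voltages by its combined
        # one-sigma Gaussian error, unfold every trial voltage set,
        # and take statistics over the resulting fluxes.
        fluxes = []
        for _ in range(n_trials):
            trial = voltages * (1.0 + rng.normal(0.0, sigma))
            fluxes.append(unfold(trial))     # hypothetical unfold call
        fluxes = np.asarray(fluxes)
        return fluxes.mean(axis=0), fluxes.std(axis=0, ddof=1)

The standard deviation over the thousand trial fluxes plays the role of the quoted error bar on the absolute flux measurement.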