Sample records for specific calibration problems

  1. Redundant interferometric calibration as a complex optimization problem

    NASA Astrophysics Data System (ADS)

    Grobler, T. L.; Bernardi, G.; Kenyon, J. S.; Parsons, A. R.; Smirnov, O. M.

    2018-05-01

    Observations of the redshifted 21 cm line from the epoch of reionization have recently motivated the construction of low-frequency radio arrays with highly redundant configurations. These configurations provide an alternative calibration strategy - `redundant calibration' - and boost sensitivity on specific spatial scales. In this paper, we formulate calibration of redundant interferometric arrays as a complex optimization problem. We solve this optimization problem via the Levenberg-Marquardt algorithm. This calibration approach is more robust to initial conditions than current algorithms and, by leveraging an approximate matrix inversion, allows for further optimization and an efficient implementation (`redundant STEFCAL'). We also investigated using the preconditioned conjugate gradient method as an alternative to the approximate matrix inverse, but found that its computational performance is not competitive with respect to `redundant STEFCAL'. The efficient implementation of this new algorithm is made publicly available.
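
    As a hedged illustration of the formulation described above (antenna gains and per-group sky visibilities treated as unknowns and fitted to the observed visibilities with Levenberg-Marquardt), the sketch below sets up the residuals for a toy redundant array and solves them with SciPy. The array layout, the simulated data, and the use of scipy.optimize.least_squares are illustrative assumptions, not the paper's `redundant STEFCAL' implementation.

    ```python
    # Minimal sketch: redundant calibration posed as a least-squares problem and
    # solved with SciPy's Levenberg-Marquardt routine. Toy array and data only.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    n_ant, n_grp = 5, 2
    # Each baseline (i, j) is assigned to a redundant group k with true sky visibility y_k.
    baselines = [(i, j, (j - i) % n_grp) for i in range(n_ant) for j in range(i + 1, n_ant)]
    g_true = np.exp(1j * rng.uniform(-0.3, 0.3, n_ant))            # antenna gains
    y_true = rng.normal(size=n_grp) + 1j * rng.normal(size=n_grp)  # per-group visibilities
    V_obs = np.array([g_true[i] * np.conj(g_true[j]) * y_true[k] for i, j, k in baselines])

    def unpack(x):
        g = x[:n_ant] + 1j * x[n_ant:2 * n_ant]
        y = x[2 * n_ant:2 * n_ant + n_grp] + 1j * x[2 * n_ant + n_grp:]
        return g, y

    def residuals(x):
        g, y = unpack(x)
        model = np.array([g[i] * np.conj(g[j]) * y[k] for i, j, k in baselines])
        r = V_obs - model
        return np.concatenate([r.real, r.imag])  # LM needs real-valued residuals

    x0 = np.concatenate([np.ones(n_ant), np.zeros(n_ant), np.ones(n_grp), np.zeros(n_grp)])
    sol = least_squares(residuals, x0, method="lm")
    print("converged:", sol.success, "final cost:", sol.cost)
    ```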

  2. Organ-specific SPECT activity calibration using 3D printed phantoms for molecular radiotherapy dosimetry.

    PubMed

    Robinson, Andrew P; Tipping, Jill; Cullen, David M; Hamilton, David; Brown, Richard; Flynn, Alex; Oldfield, Christopher; Page, Emma; Price, Emlyn; Smith, Andrew; Snee, Richard

    2016-12-01

    Patient-specific absorbed dose calculations for molecular radiotherapy require accurate activity quantification. This is commonly derived from Single-Photon Emission Computed Tomography (SPECT) imaging using a calibration factor relating detected counts to known activity in a phantom insert. A series of phantom inserts, based on the mathematical models underlying many clinical dosimetry calculations, have been produced using 3D printing techniques. SPECT/CT data for the phantom inserts has been used to calculate new organ-specific calibration factors for (99m)Tc and (177)Lu. The measured calibration factors are compared to predicted values from calculations using a Gaussian kernel. Measured SPECT calibration factors for 3D printed organs display a clear dependence on organ shape for (99m)Tc and (177)Lu. The observed variation in calibration factor is reproduced using Gaussian kernel-based calculation over two orders of magnitude change in insert volume for (99m)Tc and (177)Lu. These new organ-specific calibration factors show a 24, 11 and 8% reduction in absorbed dose for the liver, spleen and kidneys, respectively. Non-spherical calibration factors from 3D printed phantom inserts can significantly improve the accuracy of whole organ activity quantification for molecular radiotherapy, providing a crucial step towards individualised activity quantification and patient-specific dosimetry. 3D printed inserts are found to provide a cost-effective and efficient way for clinical centres to access more realistic phantom data.

  3. 40 CFR 90.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... (a) Dynamometer specifications. The dynamometer test stand and other instruments for measurement of speed and power output must meet the engine speed and torque accuracy requirements shown in Table 2 in... measurement of power output must meet the calibration frequency shown in Table 2 in Appendix A of this subpart...

  4. 40 CFR 90.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... (a) Dynamometer specifications. The dynamometer test stand and other instruments for measurement of speed and power output must meet the engine speed and torque accuracy requirements shown in Table 2 in... measurement of power output must meet the calibration frequency shown in Table 2 in Appendix A of this subpart...

  5. 40 CFR 90.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... (a) Dynamometer specifications. The dynamometer test stand and other instruments for measurement of speed and power output must meet the engine speed and torque accuracy requirements shown in Table 2 in... measurement of power output must meet the calibration frequency shown in Table 2 in Appendix A of this subpart...

  6. 40 CFR 90.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... (a) Dynamometer specifications. The dynamometer test stand and other instruments for measurement of speed and power output must meet the engine speed and torque accuracy requirements shown in Table 2 in... measurement of power output must meet the calibration frequency shown in Table 2 in Appendix A of this subpart...

  7. 40 CFR 90.305 - Dynamometer specifications and calibration accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... (a) Dynamometer specifications. The dynamometer test stand and other instruments for measurement of speed and power output must meet the engine speed and torque accuracy requirements shown in Table 2 in... measurement of power output must meet the calibration frequency shown in Table 2 in Appendix A of this subpart...

  8. Absolute mass scale calibration in the inverse problem of the physical theory of fireballs.

    NASA Astrophysics Data System (ADS)

    Kalenichenko, V. V.

    A method of absolute mass scale calibration is suggested for solving the inverse problem of the physical theory of fireballs. The method is based on data on the masses of fallen meteorites whose fireballs have been photographed in flight. It may be applied to those fireballs whose bodies have not experienced considerable fragmentation during their destruction in the atmosphere and have kept their form well enough. Statistical analysis of the inverse problem solution for a sufficiently representative sample makes it possible to separate a subsample of such fireballs. The data on the Lost City and Innisfree meteorites are used to obtain the calibration coefficients.

  9. The Prediction Properties of Inverse and Reverse Regression for the Simple Linear Calibration Problem

    NASA Technical Reports Server (NTRS)

    Parker, Peter A.; Geoffrey, Vining G.; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.

    2010-01-01

    The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
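
    The distinction the abstract draws can be made concrete with a small numerical example (not taken from the paper): fit the forward model on known standards and invert it algebraically, versus regressing the standards directly on the observed readings. The toy data below are invented.

    ```python
    # Classical (forward-then-inverse) calibration versus reverse regression,
    # illustrated on synthetic standards and readings.
    import numpy as np

    rng = np.random.default_rng(1)
    x_std = np.linspace(1.0, 10.0, 12)                  # known standards (e.g. reference weights)
    y_obs = 2.0 + 0.5 * x_std + rng.normal(0, 0.1, 12)  # instrument readings

    # Classical (forward) fit, then algebraic inversion: x_hat = (y - b0) / b1
    b1, b0 = np.polyfit(x_std, y_obs, 1)
    # Reverse regression: fit x on y directly, x_hat = c0 + c1 * y
    c1, c0 = np.polyfit(y_obs, x_std, 1)

    y_new = 5.1  # a new instrument reading to convert into a measurement
    print("classical/inverse estimate:", (y_new - b0) / b1)
    print("reverse regression estimate:", c0 + c1 * y_new)
    ```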

  10. Commodity-Free Calibration

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Commodity-free calibration is a reaction rate calibration technique that does not require the addition of any commodities. This technique is a specific form of the reaction rate technique, where all of the necessary reactants, other than the sample being analyzed, are either inherent in the analyzing system or specifically added or provided to the system for a reason other than calibration. After introduction, the component of interest is exposed to other reactants or flow paths already present in the system. The instrument detector records one of the following to determine the rate of reaction: the increase in the response of the reaction product, a decrease in the signal of the analyte response, or a decrease in the signal from the inherent reactant. With these data, the initial concentration of the analyte is calculated. This type of system can analyze and calibrate simultaneously, reduce the risk of false positives and exposure to toxic vapors, and improve accuracy. Moreover, having an excess of the reactant already present in the system eliminates the need to add commodities, which further reduces cost, logistic problems, and potential contamination. Also, the calculations involved can be simplified by comparison to those of the reaction rate technique. We conducted tests with hypergols as an initial investigation into the feasibility of the technique.

  11. Solving the robot-world, hand-eye(s) calibration problem with iterative methods

    USDA-ARS?s Scientific Manuscript database

    Robot-world, hand-eye calibration is the problem of determining the transformation between the robot end effector and a camera, as well as the transformation between the robot base and the world coordinate system. This relationship has been modeled as AX = ZB, where X and Z are unknown homogeneous ...

  12. The algorithm for automatic detection of the calibration object

    NASA Astrophysics Data System (ADS)

    Kruglov, Artem; Ugfeld, Irina

    2017-06-01

    The problem of automatic image calibration is considered in this paper. The most challenging task in automatic calibration is proper detection of the calibration object. Solving this problem required applying digital image processing methods and algorithms such as morphology, filtering, edge detection, and shape approximation. The step-by-step development of the algorithm and its adaptation to the specific conditions of log cuts in the image background is presented. Testing of the automatic calibration module was carried out under the production conditions of a logging enterprise. In these tests the calibration object was isolated automatically in 86.1% of cases on average, with no type 1 errors. The algorithm was implemented in the automatic calibration module of mobile software for log deck volume measurement.
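
    The pipeline named above (filtering, morphology, edge detection, shape approximation) might look roughly like the OpenCV sketch below. The file name, threshold choice, kernel size and the assumption of a roughly circular calibration disc are illustrative, not details taken from the paper.

    ```python
    # Illustrative calibration-object detection: smooth, threshold, clean up with
    # morphology, then pick the most circular large contour.
    import cv2
    import numpy as np

    img = cv2.imread("log_deck.jpg", cv2.IMREAD_GRAYSCALE)        # placeholder image path
    blur = cv2.GaussianBlur(img, (7, 7), 0)                       # suppress bark texture
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)         # remove small speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    best = None
    for c in contours:
        area = cv2.contourArea(c)
        if area < 500:
            continue
        peri = cv2.arcLength(c, True)
        circularity = 4 * np.pi * area / (peri * peri)            # 1.0 for a perfect circle
        if best is None or circularity > best[0]:
            best = (circularity, c)

    if best is not None:
        (x, y), r = cv2.minEnclosingCircle(best[1])
        print(f"calibration object candidate at ({x:.0f}, {y:.0f}), radius {r:.0f} px")
    ```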

  13. Calibration of mass spectrometric peptide mass fingerprint data without specific external or internal calibrants

    PubMed Central

    Wolski, Witold E; Lalowski, Maciej; Jungblut, Peter; Reinert, Knut

    2005-01-01

    Background Peptide Mass Fingerprinting (PMF) is a widely used mass spectrometry (MS) method of analysis of proteins and peptides. It relies on the comparison between experimentally determined and theoretical mass spectra. The PMF process requires calibration, usually performed with external or internal calibrants of known molecular masses. Results We have introduced two novel MS calibration methods. The first method utilises the local similarity of peptide maps generated after separation of complex protein samples by two-dimensional gel electrophoresis. It computes a multiple peak-list alignment of the data set using a modified Minimum Spanning Tree (MST) algorithm. The second method exploits the idea that hundreds of MS samples are measured in parallel on one sample support. It improves the calibration coefficients by applying a two-dimensional Thin Plate Splines (TPS) smoothing algorithm. We studied the novel calibration methods utilising data generated by three different MALDI-TOF-MS instruments. We demonstrate that a PMF data set can be calibrated without resorting to external or relying on widely occurring internal calibrants. The methods developed here were implemented in R and are part of the BioConductor package mscalib. Conclusion The MST calibration algorithm is well suited to calibrate MS spectra of protein samples resulting from two-dimensional gel electrophoretic separation. The TPS based calibration algorithm might be used to correct systematic mass measurement errors observed for large MS sample supports. As compared to other methods, our combined MS spectra calibration strategy increases the peptide/protein identification rate by an additional 5-15%. PMID:16102175

  14. Simultaneous calibration phantom commission and geometry calibration in cone beam CT

    NASA Astrophysics Data System (ADS)

    Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong

    2017-09-01

    Geometry calibration is a vital step for describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration phantom commission and geometry calibration are divided into two independent tasks. Small errors in ball-bearing (BB) positioning in the phantom-making step will severely degrade the quality of phantom calibration. To solve this problem, we propose an integrated method to simultaneously realize geometry phantom commission and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method considers BB centers in the phantom as an optimized parameter in the workflow. Specifically, an evaluation phantom and the corresponding evaluation contrast index are used to evaluate geometry artifacts for optimizing the BB coordinates in the geometry phantom. After utilizing particle swarm optimization, the CBCT geometry and BB coordinates in the geometry phantom are calibrated accurately and are then directly used for the next geometry calibration task in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and realistic CBCT data. The spatial resolution of reconstructed images using dental CBCT can reach up to 15 line pair cm-1. The proposed method is also superior to the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry phantom commission and geometry calibration.

  15. Soil specific re-calibration of water content sensors for a field-scale sensor network

    NASA Astrophysics Data System (ADS)

    Gasch, Caley K.; Brown, David J.; Anderson, Todd; Brooks, Erin S.; Yourek, Matt A.

    2015-04-01

    Obtaining accurate soil moisture data from a sensor network requires sensor calibration. Soil moisture sensors are factory calibrated, but multiple site specific factors may contribute to sensor inaccuracies. Thus, sensors should be calibrated for the specific soil type and conditions in which they will be installed. Lab calibration of a large number of sensors prior to installation in a heterogeneous setting may not be feasible, and it may not reflect the actual performance of the installed sensor. We investigated a multi-step approach to retroactively re-calibrate sensor water content data from the dielectric permittivity readings obtained by sensors in the field. We used water content data collected since 2009 from a sensor network installed at 42 locations and 5 depths (210 sensors total) within the 37-ha Cook Agronomy Farm with highly variable soils located in the Palouse region of the Northwest United States. First, volumetric water content was calculated from sensor dielectric readings using three equations: (1) a factory calibration using the Topp equation; (2) a custom calibration obtained empirically from an instrumented soil in the field; and (3) a hybrid equation that combines the Topp and custom equations. Second, we used soil physical properties (particle size and bulk density) and pedotransfer functions to estimate water content at saturation, field capacity, and wilting point for each installation location and depth. We also extracted the same reference points from the sensor readings, when available. Using these reference points, we re-scaled the sensor readings, such that water content was restricted to the range of values that we would expect given the physical properties of the soil. The re-calibration accuracy was assessed with volumetric water content measurements obtained from field-sampled cores taken on multiple dates. In general, the re-calibration was most accurate when all three reference points (saturation, field capacity, and wilting
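
    The re-scaling step described above can be sketched as a simple linear mapping between the sensor's own reference readings and the physically derived reference points; the numbers below are illustrative assumptions, not the study's data.

    ```python
    # Re-scale factory-calibrated volumetric water content (VWC) so that the sensor's
    # apparent wilting-point/saturation readings match pedotransfer-based estimates.
    import numpy as np

    theta_sensor = np.array([0.12, 0.18, 0.27, 0.35, 0.41])  # factory-calibrated VWC series

    # Reference points for this sensor location
    sensor_wp, sensor_sat = 0.10, 0.44    # wilting point / saturation as seen by the sensor
    soil_wp, soil_sat = 0.08, 0.47        # same points from particle size + bulk density

    # Linear re-scaling so the sensor range matches the physically plausible range
    theta_rescaled = soil_wp + (theta_sensor - sensor_wp) * (soil_sat - soil_wp) / (sensor_sat - sensor_wp)
    print(theta_rescaled.round(3))
    ```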

  16. 10 CFR 70.39 - Specific licenses for the manufacture or initial transfer of calibration or reference sources.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Specific licenses for the manufacture or initial transfer... manufacture or initial transfer of calibration or reference sources. (a) An application for a specific license to manufacture or initially transfer calibration or reference sources containing plutonium, for...

  17. Elementary Students' Metacognitive Processes and Post-Performance Calibration on Mathematical Problem-Solving Tasks

    ERIC Educational Resources Information Center

    García, Trinidad; Rodríguez, Celestino; González-Castro, Paloma; González-Pienda, Julio Antonio; Torrance, Mark

    2016-01-01

    Calibration, or the correspondence between perceived performance and actual performance, is linked to students' metacognitive and self-regulatory skills. Making students more aware of the quality of their performance is important in elementary school settings, and more so when math problems are involved. However, many students seem to be poorly…

  18. A GPU-Based Implementation of the Firefly Algorithm for Variable Selection in Multivariate Calibration Problems

    PubMed Central

    de Paula, Lauro C. M.; Soares, Anderson S.; de Lima, Telma W.; Delbem, Alexandre C. B.; Coelho, Clarimar J.; Filho, Arlindo R. G.

    2014-01-01

    Several variable selection algorithms in multivariate calibration can be accelerated using Graphics Processing Units (GPU). Among these algorithms, the Firefly Algorithm (FA) is a recently proposed metaheuristic that may be used for variable selection. This paper presents a GPU-based FA (FA-MLR) with multiobjective formulation for variable selection in multivariate calibration problems and compares it with some traditional sequential algorithms in the literature. The advantage of the proposed implementation is demonstrated in an example involving a relatively large number of variables. The results showed that the FA-MLR, in comparison with the traditional algorithms, is a more suitable choice and a relevant contribution for the variable selection problem. Additionally, the results also demonstrated that the FA-MLR run on a GPU can be five times faster than its sequential implementation. PMID:25493625

  19. A GPU-Based Implementation of the Firefly Algorithm for Variable Selection in Multivariate Calibration Problems.

    PubMed

    de Paula, Lauro C M; Soares, Anderson S; de Lima, Telma W; Delbem, Alexandre C B; Coelho, Clarimar J; Filho, Arlindo R G

    2014-01-01

    Several variable selection algorithms in multivariate calibration can be accelerated using Graphics Processing Units (GPU). Among these algorithms, the Firefly Algorithm (FA) is a recently proposed metaheuristic that may be used for variable selection. This paper presents a GPU-based FA (FA-MLR) with multiobjective formulation for variable selection in multivariate calibration problems and compares it with some traditional sequential algorithms in the literature. The advantage of the proposed implementation is demonstrated in an example involving a relatively large number of variables. The results showed that the FA-MLR, in comparison with the traditional algorithms, is a more suitable choice and a relevant contribution for the variable selection problem. Additionally, the results also demonstrated that the FA-MLR run on a GPU can be five times faster than its sequential implementation.

  20. Using Multiple Calibration Indices in Order to Capture the Complex Picture of What Affects Students' Accuracy of Feeling of Confidence

    ERIC Educational Resources Information Center

    Boekaerts, Monique; Rozendaal, Jeroen S.

    2010-01-01

    The present study used multiple calibration indices to capture the complex picture of fifth graders' calibration of feeling of confidence in mathematics. Specifically, the effects of gender, type of mathematical problem, instruction method, and time of measurement (before and after problem solving) on calibration skills were investigated. Fourteen…

  1. Investigating temporal field sampling strategies for site-specific calibration of three soil moisture-neutron intensity parameterisation methods

    NASA Astrophysics Data System (ADS)

    Iwema, J.; Rosolem, R.; Baatz, R.; Wagener, T.; Bogena, H. R.

    2015-07-01

    The Cosmic-Ray Neutron Sensor (CRNS) can provide soil moisture information at scales relevant to hydrometeorological modelling applications. Site-specific calibration is needed to translate CRNS neutron intensities into sensor footprint average soil moisture contents. We investigated temporal sampling strategies for calibration of three CRNS parameterisations (modified N0, HMF, and COSMIC) by assessing the effects of the number of sampling days and soil wetness conditions on the performance of the calibration results, using actual neutron intensity measurements, for three sites with distinct climate and land use: a semi-arid site, a temperate grassland, and a temperate forest. When calibrated with 1 year of data, both COSMIC and the modified N0 method performed better than HMF. The performance of COSMIC was remarkably good at the semi-arid site in the USA, while the N0mod method performed best at the two temperate sites in Germany. The successful performance of COSMIC at all three sites can be attributed to the benefits of explicitly resolving individual soil layers (which is not accounted for in the other two parameterisations). To better calibrate these parameterisations, we recommend that in situ soil samples be collected on more than a single day. However, little improvement is observed for sampling on more than 6 days. At the semi-arid site, the N0mod method was calibrated better under site-specific average wetness conditions, whereas HMF and COSMIC were calibrated better under drier conditions. Average soil wetness conditions gave better calibration results at the two humid sites. The calibration results for the HMF method were better when calibrated with combinations of days with similar soil wetness conditions, as opposed to N0mod and COSMIC, which profited from using days with distinct wetness conditions. Errors in actual neutron intensities were translated into average errors specific to each site. At the semi-arid site, these errors were below the
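
    For orientation, a single-parameter N0-type calibration can be sketched as below, using the commonly cited Desilets-type shape function theta(N) = a0/(N/N0 - a1) - a2. The shape coefficients and the calibration-day data are assumptions for illustration; this is not the paper's modified N0 parameterisation.

    ```python
    # Fit the site-specific parameter N0 of a standard neutron-to-soil-moisture
    # shape function to soil samples from a few calibration days.
    import numpy as np
    from scipy.optimize import curve_fit

    a0, a1, a2 = 0.0808, 0.372, 0.115   # commonly used shape coefficients (assumed here)

    def theta_from_neutrons(N, N0):
        return a0 / (N / N0 - a1) - a2

    # Calibration-day observations: corrected neutron counts and sampled soil moisture
    N_obs = np.array([2600.0, 2300.0, 2050.0, 1900.0])
    theta_obs = np.array([0.08, 0.14, 0.22, 0.28])

    N0_fit, _ = curve_fit(theta_from_neutrons, N_obs, theta_obs, p0=[3000.0])
    print("calibrated N0:", round(float(N0_fit[0])))
    ```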

  2. Calibration of context-specific survey items to assess youth physical activity behaviour.

    PubMed

    Saint-Maurice, Pedro F; Welk, Gregory J; Bartee, R Todd; Heelan, Kate

    2017-05-01

    This study tests calibration models to re-scale context-specific physical activity (PA) items to accelerometer-derived PA. A total of 195 4th-12th grade children wore an Actigraph monitor and completed the Physical Activity Questionnaire (PAQ) one week later. The relative time spent in moderate-to-vigorous PA (MVPA%) obtained from the Actigraph at recess, PE, lunch, after-school, evening and weekend was matched with the respective item score obtained from the PAQ. Item scores from 145 participants were calibrated against objective MVPA% using multiple linear regression with age and sex as additional predictors. Predicted minutes of MVPA for school, out-of-school and total week were tested in the remaining sample (n = 50) using equivalence testing. The results showed that PAQ β-weights ranged from 0.06 (lunch) to 4.94 (PE) MVPA% (P < 0.05) and model root-mean-square errors ranged from 4.2% (evening) to 20.2% (recess). When applied to an independent sample, differences between PAQ and accelerometer MVPA at school and out-of-school ranged from -15.6 to +3.8 min, and the PAQ was within 10-15% of accelerometer-measured activity. This study demonstrated that context-specific items can be calibrated to predict minutes of MVPA in groups of youth during in- and out-of-school periods.
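
    The calibration step described above amounts to a multiple linear regression of accelerometer MVPA% on a context-specific item score plus age and sex; a minimal sketch with invented data follows (the column layout and numbers are assumptions, not the study's data).

    ```python
    # Calibrate a self-report item against accelerometer-derived MVPA% with a
    # linear model including age and sex as covariates.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    n = 145
    X = np.column_stack([
        rng.integers(1, 6, n),        # PAQ item score (e.g. recess), 1-5
        rng.uniform(9, 18, n),        # age in years
        rng.integers(0, 2, n),        # sex (0/1)
    ])
    mvpa_pct = 2.0 + 3.5 * X[:, 0] - 0.4 * X[:, 1] + 1.2 * X[:, 2] + rng.normal(0, 4, n)

    model = LinearRegression().fit(X, mvpa_pct)
    print("item beta-weight:", round(float(model.coef_[0]), 2))
    # Predicted MVPA% for a new respondent: item score 4, age 12, sex 1
    print("predicted MVPA%:", model.predict([[4, 12.0, 1]]).round(1))
    ```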

  3. Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem

    DOE PAGES

    Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; ...

    2015-01-01

    In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.

  4. Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.

    In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.

  5. Absolute calibration of the mass scale in the inverse problem of the physical theory of fireballs

    NASA Astrophysics Data System (ADS)

    Kalenichenko, V. V.

    1992-08-01

    A method of the absolute calibration of the mass scale is proposed for solving the inverse problem of the physical theory of fireballs. The method is based on data on the masses of fallen meteorites whose fireballs have been photographed in flight. The method can be applied to fireballs whose bodies have not experienced significant fragmentation during their flight in the atmosphere and have kept their shape relatively well. Data on the Lost City and Innisfree meteorites are used to calculate the calibration coefficients.

  6. Synthesis Polarimetry Calibration

    NASA Astrophysics Data System (ADS)

    Moellenbrock, George

    2017-10-01

    Synthesis instrumental polarization calibration fundamentals for both linear (ALMA) and circular (EVLA) feed bases are reviewed, with special attention to the calibration heuristics supported in CASA. Practical problems affecting modern instruments are also discussed.

  7. Correlates of specific childhood feeding problems.

    PubMed

    Field, D; Garland, M; Williams, K

    2003-01-01

    The correlates of specific childhood feeding problems are described to further examine possible predisposing factors for feeding problems. We report our experience with 349 participants evaluated by an interdisciplinary feeding team. A review of records was conducted and each participant was identified as having one or more of five functionally defined feeding problems: food refusal, food selectivity by type, food selectivity by texture, oral motor delays, or dysphagia. The prevalence of predisposing factors for these feeding problems was examined. Predisposing factors included developmental disabilities, gastrointestinal problems, cardiopulmonary problems, neurological problems, renal disease and anatomical anomalies. The frequencies of predisposing factors varied by feeding problem. Differences were found in the prevalence of the five feeding problems among children with three different developmental disabilities: autism, Down syndrome and cerebral palsy. Gastro-oesophageal reflux was the most prevalent condition found among all children in the sample and was the factor most often associated with food refusal. Neurological conditions and anatomical anomalies were highly associated with skill deficits, such as oral motor delays and dysphagia. Specific medical conditions and developmental disabilities are often associated with certain feeding problems. Information concerning predisposing factors of feeding problems can help providers employ appropriate primary, secondary and tertiary prevention measures to decrease the frequency or severity of some feeding problems.

  8. Calibration of the Urbana lidar system

    NASA Technical Reports Server (NTRS)

    Cerny, T.; Sechrist, C. F., Jr.

    1980-01-01

    A method for calibrating data obtained by the Urbana sodium lidar system is presented. First, an expression relating the number of photocounts originating from a specific altitude range to the sodium concentration is developed. This relation is then simplified by normalizing the sodium photocounts with photocounts originating from the Rayleigh region of the atmosphere. To evaluate the calibration expression, the laser linewidth must be known; therefore, a method for measuring the laser linewidth using a Fabry-Perot interferometer is given. The laser linewidth was found to be 6 ± 2.5 pm. Problems due to photomultiplier tube overloading are discussed. Finally, calibrated data are presented. The sodium column abundance exhibits something close to a sinusoidal variation throughout the year, with the winter months showing an enhancement of a factor of 5 to 7 over the summer months.

  9. Patient-specific calibration of cone-beam computed tomography data sets for radiotherapy dose calculations and treatment plan assessment.

    PubMed

    MacFarlane, Michael; Wong, Daniel; Hoover, Douglas A; Wong, Eugene; Johnson, Carol; Battista, Jerry J; Chen, Jeff Z

    2018-03-01

    In this work, we propose a new method of calibrating cone beam computed tomography (CBCT) data sets for radiotherapy dose calculation and plan assessment. The motivation for this patient-specific calibration (PSC) method is to develop an efficient, robust, and accurate CBCT calibration process that is less susceptible to deformable image registration (DIR) errors. Instead of mapping the CT numbers voxel-by-voxel with traditional DIR calibration methods, the PSC method generates correlation plots between deformably registered planning CT and CBCT voxel values, for each image slice. A linear calibration curve specific to each slice is then obtained by least-squares fitting, and applied to the CBCT slice's voxel values. This allows each CBCT slice to be corrected using DIR without altering the patient geometry through regional DIR errors. A retrospective study was performed on 15 head-and-neck cancer patients, each having routine CBCTs and a middle-of-treatment re-planning CT (reCT). The original treatment plan was re-calculated on the patient's reCT image set (serving as the gold standard) as well as the image sets produced by voxel-to-voxel DIR, density-overriding, and the new PSC calibration method. Dose accuracy of each calibration method was compared to the reference reCT data set using common dose-volume metrics and 3D gamma analysis. A phantom study was also performed to assess the accuracy of the DIR and PSC CBCT calibration methods compared with planning CT. Compared with the gold standard using reCT, the average dose metric differences were ≤ 1.1% for all three methods (PSC: -0.3%; DIR: -0.7%; density-override: -1.1%). The average gamma pass rates with thresholds 3%, 3 mm were also similar among the three techniques (PSC: 95.0%; DIR: 96.1%; density-override: 94.4%). An automated patient-specific calibration method was developed which yielded strong dosimetric agreement with the results obtained using a re-planning CT for head-and-neck patients.
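
    A minimal sketch of the per-slice idea described above, assuming registered (n_slices, ny, nx) arrays: fit a least-squares line between CBCT and deformed planning-CT voxel values for each slice and apply it to that CBCT slice. The synthetic volumes are placeholders, not the authors' implementation.

    ```python
    # Per-slice linear calibration of CBCT voxel values against a deformably
    # registered planning CT; numbers are corrected, CBCT geometry is kept.
    import numpy as np

    def patient_specific_calibration(cbct, ct_deformed):
        """cbct, ct_deformed: (n_slices, ny, nx) arrays, already registered slice-wise."""
        out = np.empty_like(cbct, dtype=float)
        for k in range(cbct.shape[0]):
            x = cbct[k].ravel().astype(float)
            y = ct_deformed[k].ravel().astype(float)
            slope, intercept = np.polyfit(x, y, 1)   # least-squares line for this slice
            out[k] = slope * cbct[k] + intercept
        return out

    # Example with synthetic volumes
    rng = np.random.default_rng(3)
    cbct = rng.normal(0, 200, (4, 64, 64))
    ct = 1.1 * cbct + 30 + rng.normal(0, 10, cbct.shape)
    print(patient_specific_calibration(cbct, ct).mean().round(1))
    ```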

  10. Neuromusculoskeletal model self-calibration for on-line sequential bayesian moment estimation

    NASA Astrophysics Data System (ADS)

    Bueno, Diana R.; Montano, L.

    2017-04-01

    Objective. Neuromusculoskeletal models involve many subject-specific physiological parameters that need to be adjusted to adequately represent muscle properties. Traditionally, neuromusculoskeletal models have been calibrated with a forward-inverse dynamic optimization which is time-consuming and unfeasible for rehabilitation therapy. Non self-calibration algorithms have been applied to these models. To the best of our knowledge, the algorithm proposed in this work is the first on-line calibration algorithm for muscle models that allows a generic model to be adjusted to different subjects in a few steps. Approach. In this paper we propose a reformulation of the traditional muscle models that is able to sequentially estimate the kinetics (net joint moments), and also its full self-calibration (subject-specific internal parameters of the muscle from a set of arbitrary uncalibrated data), based on the unscented Kalman filter. The nonlinearity of the model as well as its calibration problem have obliged us to adopt the sum of Gaussians filter suitable for nonlinear systems. Main results. This sequential Bayesian self-calibration algorithm achieves a complete muscle model calibration using as input only a dataset of uncalibrated sEMG and kinematics data. The approach is validated experimentally using data from the upper limbs of 21 subjects. Significance. The results show the feasibility of neuromusculoskeletal model self-calibration. This study will contribute to a better understanding of the generalization of muscle models for subject-specific rehabilitation therapies. Moreover, this work is very promising for rehabilitation devices such as electromyography-driven exoskeletons or prostheses.

  11. Does simultaneous bilingualism aggravate children's specific language problems?

    PubMed

    Korkman, Marit; Stenroos, Maria; Mickos, Annika; Westman, Martin; Ekholm, Pia; Byring, Roger

    2012-09-01

    There is little data on whether or not a bilingual upbringing may aggravate specific language problems in children. This study analysed whether there was an interaction between such problems and simultaneous bilingualism. Participants were 5- to 7-year-old children with specific language problems (LANG group, N = 56) or who were typically developing (CONTR group, N = 60). Seventy-three children were Swedish-Finnish bilingual and 43 were Swedish-speaking monolingual. Assessments (in Swedish) included tests of expressive language, comprehension, repetition and verbal memory. By definition, the LANG group had lower scores than the CONTR group on all language tests. The bilingual group had lower scores than the monolingual group only on a test of body part naming. Importantly, the interaction of group (LANG or CONTR) and bilingualism was not significant on any of the language scores. Simultaneous bilingualism does not aggravate specific language problems but may result in slower development of vocabulary in children both with and without specific language problems. Considering its advantages as well, a bilingual upbringing is an option even for children with specific language problems. In assessment, tests of vocabulary may be sensitive to bilingualism; instead, tests assessing comprehension, syntax and nonword repetition may provide less biased methods.

  12. Tensor-based classification of an auditory mobile BCI without a subject-specific calibration phase

    NASA Astrophysics Data System (ADS)

    Zink, Rob; Hunyadi, Borbála; Van Huffel, Sabine; De Vos, Maarten

    2016-04-01

    Objective. One of the major drawbacks in EEG brain-computer interfaces (BCI) is the need for subject-specific training of the classifier. By removing the need for a supervised calibration phase, new users could potentially explore a BCI faster. In this work we aim to remove this subject-specific calibration phase and allow direct classification. Approach. We explore canonical polyadic decompositions and block term decompositions of the EEG. These methods exploit structure in higher dimensional data arrays called tensors. The BCI tensors are constructed by concatenating ERP templates from other subjects to a target and non-target trial and the inherent structure guides a decomposition that allows accurate classification. We illustrate the new method on data from a three-class auditory oddball paradigm. Main results. The presented approach leads to a fast and intuitive classification with accuracies competitive with a supervised and cross-validated LDA approach. Significance. The described methods are a promising new way of classifying BCI data with a forthright link to the original P300 ERP signal over the conventional and widely used supervised approaches.

  13. Tensor-based classification of an auditory mobile BCI without a subject-specific calibration phase.

    PubMed

    Zink, Rob; Hunyadi, Borbála; Huffel, Sabine Van; Vos, Maarten De

    2016-04-01

    One of the major drawbacks in EEG brain-computer interfaces (BCI) is the need for subject-specific training of the classifier. By removing the need for a supervised calibration phase, new users could potentially explore a BCI faster. In this work we aim to remove this subject-specific calibration phase and allow direct classification. We explore canonical polyadic decompositions and block term decompositions of the EEG. These methods exploit structure in higher dimensional data arrays called tensors. The BCI tensors are constructed by concatenating ERP templates from other subjects to a target and non-target trial and the inherent structure guides a decomposition that allows accurate classification. We illustrate the new method on data from a three-class auditory oddball paradigm. The presented approach leads to a fast and intuitive classification with accuracies competitive with a supervised and cross-validated LDA approach. The described methods are a promising new way of classifying BCI data with a forthright link to the original P300 ERP signal over the conventional and widely used supervised approaches.

  14. Infrared stereo calibration for unmanned ground vehicle navigation

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Strange, Shawn

    2014-06-01

    The problem of calibrating two color cameras as a stereo pair has been heavily researched and many off-the-shelf software packages, such as Robot Operating System and OpenCV, include calibration routines that work in most cases. However, the problem of calibrating two infrared (IR) cameras for the purposes of sensor fusion and point cloud generation is relatively new and many challenges exist. We present a comparison of color camera and IR camera stereo calibration using data from an unmanned ground vehicle. There are two main challenges in IR stereo calibration: the calibration board (material, design, etc.) and the accuracy of calibration pattern detection. We present our analysis of these challenges along with our IR stereo calibration methodology. Finally, we present our results both visually and analytically with computed reprojection errors.
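
    For reference, a standard OpenCV stereo calibration pass (the kind of routine the abstract evaluates for color and IR pairs) is sketched below. The checkerboard geometry, square size, and image paths are assumptions; for IR cameras the thermally contrasting board and reliable corner detection remain the hard parts, which code alone does not address.

    ```python
    # Stereo calibration with OpenCV: per-camera intrinsics first, then the
    # extrinsic (R, T) between the pair with intrinsics held fixed.
    import cv2
    import numpy as np
    import glob

    pattern = (9, 6)                         # inner corners of the calibration board (assumed)
    square = 0.04                            # square size in metres (assumed)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_pts, left_pts, right_pts = [], [], []
    for fl, fr in zip(sorted(glob.glob("left/*.png")), sorted(glob.glob("right/*.png"))):
        imgL = cv2.imread(fl, cv2.IMREAD_GRAYSCALE)
        imgR = cv2.imread(fr, cv2.IMREAD_GRAYSCALE)
        okL, cornersL = cv2.findChessboardCorners(imgL, pattern)
        okR, cornersR = cv2.findChessboardCorners(imgR, pattern)
        if okL and okR:
            obj_pts.append(objp)
            left_pts.append(cornersL)
            right_pts.append(cornersR)

    size = imgL.shape[::-1]                  # (width, height) of the last image
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
    rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    print("stereo reprojection RMS error:", rms)
    ```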

  15. Source calibrations and SDC calorimeter requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.

    Several studies of the problem of calibration of the SDC calorimeter exist. In this note the attempt is made to give a connected account of the requirements on the source calibration from the point of view of the desired, and acceptable, constant term induced in the EM resolution. It is assumed that a "local" calibration resulting from exposing each tower to a beam of electrons is not feasible. It is further assumed that an "in situ" calibration is either not yet performed, or is unavailable due to tracking alignment problems or high luminosity operation rendering tracking inoperative. Therefore, the assumptions used are rather conservative. In this scenario, each scintillator plate of each tower is exposed to a moving radioactive source. That reading is used to "mask" an optical "cookie" in a grey code chosen so as to make the response uniform. The source is assumed to be the sole calibration of the tower. Therefore, the phrase "global" calibration of towers by movable radioactive sources is adopted.

  16. Source calibrations and SDC calorimeter requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.

    Several studies of the problem of calibration of the SDC calorimeter exist. In this note the attempt is made to give a connected account of the requirements on the source calibration from the point of view of the desired, and acceptable, constant term induced in the EM resolution. It is assumed that a "local" calibration resulting from exposing each tower to a beam of electrons is not feasible. It is further assumed that an "in situ" calibration is either not yet performed, or is unavailable due to tracking alignment problems or high luminosity operation rendering tracking inoperative. Therefore, the assumptions used are rather conservative. In this scenario, each scintillator plate of each tower is exposed to a moving radioactive source. That reading is used to "mask" an optical "cookie" in a grey code chosen so as to make the response uniform. The source is assumed to be the sole calibration of the tower. Therefore, the phrase "global" calibration of towers by movable radioactive sources is adopted.

  17. Precise dielectric property measurements and E-field probe calibration for specific absorption rate measurements using a rectangular waveguide

    PubMed Central

    Hakim, B M; Beard, B B; Davis, C C

    2018-01-01

    Specific absorption rate (SAR) measurements require accurate calculations of the dielectric properties of tissue-equivalent liquids and associated calibration of E-field probes. We developed a precise tissue-equivalent dielectric measurement and E-field probe calibration system. The system consists of a rectangular waveguide, electric field probe, and data control and acquisition system. Dielectric properties are calculated using the field attenuation factor inside the tissue-equivalent liquid and power reflectance inside the waveguide at the air/dielectric-slab interface. Calibration factors were calculated using isotropicity measurements of the E-field probe. The frequencies used are 900 MHz and 1800 MHz. The uncertainties of the measured values are within ±3%, at the 95% confidence level. Using the same waveguide for dielectric measurements as well as calibrating E-field probes used in SAR assessments eliminates a source of uncertainty. Moreover, we clearly identified the system parameters that affect the overall uncertainty of the measurement system. PMID:29520129

  18. Calibration of a stochastic health evolution model using NHIS data

    NASA Astrophysics Data System (ADS)

    Gupta, Aparna; Li, Zhisheng

    2011-10-01

    This paper presents and calibrates an individual's stochastic health evolution model. In this health evolution model, the uncertainty of health incidents is described by a stochastic process with a finite number of possible outcomes. We construct a comprehensive health status index (HSI) to describe an individual's health status, as well as a health risk factor system (RFS) to classify individuals into different risk groups. Based on the maximum likelihood estimation (MLE) method and the method of nonlinear least squares fitting, model calibration is formulated in terms of two mixed-integer nonlinear optimization problems. Using the National Health Interview Survey (NHIS) data, the model is calibrated for specific risk groups. Longitudinal data from the Health and Retirement Study (HRS) is used to validate the calibrated model, which displays good validation properties. The end goal of this paper is to provide a model and methodology, whose output can serve as a crucial component of decision support for strategic planning of health related financing and risk management.

  19. Air data position-error calibration using state reconstruction techniques

    NASA Technical Reports Server (NTRS)

    Whitmore, S. A.; Larson, T. J.; Ehernberger, L. J.

    1984-01-01

    During the highly maneuverable aircraft technology (HiMAT) flight test program recently completed at NASA Ames Research Center's Dryden Flight Research Facility, numerous problems were experienced in airspeed calibration. This necessitated the use of state reconstruction techniques to arrive at a position-error calibration. For the HiMAT aircraft, most of the calibration effort was expended on flights in which the air data pressure transducers were not performing accurately. Following discovery of this problem, the air data transducers of both aircraft were wrapped in heater blankets to correct the problem. Additional calibration flights were performed, and from the resulting data a satisfactory position-error calibration was obtained. This calibration and data obtained before installation of the heater blankets were used to develop an alternate calibration method. The alternate approach took advantage of high-quality inertial data that was readily available. A linearized Kalman filter (LKF) was used to reconstruct the aircraft's wind-relative trajectory; the trajectory was then used to separate transducer measurement errors from the aircraft position error. This calibration method is accurate and inexpensive. The LKF technique has an inherent advantage of requiring that no flight maneuvers be specially designed for airspeed calibrations. It is of particular use when the measurements of the wind-relative quantities are suspected to have transducer-related errors.

  20. Introducing global peat-specific temperature and pH calibrations based on brGDGT bacterial lipids

    NASA Astrophysics Data System (ADS)

    Naafs, B. D. A.; Inglis, G. N.; Zheng, Y.; Amesbury, M. J.; Biester, H.; Bindler, R.; Blewett, J.; Burrows, M. A.; del Castillo Torres, D.; Chambers, F. M.; Cohen, A. D.; Evershed, R. P.; Feakins, S. J.; Gałka, M.; Gallego-Sala, A.; Gandois, L.; Gray, D. M.; Hatcher, P. G.; Honorio Coronado, E. N.; Hughes, P. D. M.; Huguet, A.; Könönen, M.; Laggoun-Défarge, F.; Lähteenoja, O.; Lamentowicz, M.; Marchant, R.; McClymont, E.; Pontevedra-Pombal, X.; Ponton, C.; Pourmand, A.; Rizzuti, A. M.; Rochefort, L.; Schellekens, J.; De Vleeschouwer, F.; Pancost, R. D.

    2017-07-01

    Glycerol dialkyl glycerol tetraethers (GDGTs) are membrane-spanning lipids from Bacteria and Archaea that are ubiquitous in a range of natural archives and especially abundant in peat. Previous work demonstrated that the distribution of bacterial branched GDGTs (brGDGTs) in mineral soils is correlated to environmental factors such as mean annual air temperature (MAAT) and soil pH. However, the influence of these parameters on brGDGT distributions in peat is largely unknown. Here we investigate the distribution of brGDGTs in 470 samples from 96 peatlands around the world with a broad mean annual air temperature (-8 to 27 °C) and pH (3-8) range and present the first peat-specific brGDGT-based temperature and pH calibrations. Our results demonstrate that the degree of cyclisation of brGDGTs in peat is positively correlated with pH, pH = 2.49 × CBTpeat + 8.07 (n = 51, R2 = 0.58, RMSE = 0.8) and the degree of methylation of brGDGTs is positively correlated with MAAT, MAATpeat (°C) = 52.18 × MBT′5me - 23.05 (n = 96, R2 = 0.76, RMSE = 4.7 °C). These peat-specific calibrations are distinct from the available mineral soil calibrations. In light of the error in the temperature calibration (∼4.7 °C), we urge caution in any application to reconstruct late Holocene climate variability, where the climatic signals are relatively small, and the duration of excursions could be brief. Instead, these proxies are well-suited to reconstruct large amplitude, longer-term shifts in climate such as deglacial transitions. Indeed, when applied to a peat deposit spanning the late glacial period (∼15.2 kyr), we demonstrate that MAATpeat yields absolute temperatures and relative temperature changes that are consistent with those from other proxies. In addition, the application of MAATpeat to fossil peat (i.e. lignites) has the potential to reconstruct terrestrial climate during the Cenozoic. We conclude that there is clear potential to use brGDGTs in peats and lignites to
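
    The two peat-specific calibration equations quoted above can be written directly as helper functions; computing the CBTpeat and MBT′5me indices from brGDGT fractional abundances is a separate step not shown here, and the example index values are purely illustrative.

    ```python
    # Peat-specific brGDGT calibrations as quoted in the abstract.
    def ph_from_cbt_peat(cbt_peat: float) -> float:
        """pH = 2.49 * CBT_peat + 8.07 (n = 51, R^2 = 0.58, RMSE = 0.8)."""
        return 2.49 * cbt_peat + 8.07

    def maat_from_mbt5me(mbt_5me_prime: float) -> float:
        """MAAT_peat (deg C) = 52.18 * MBT'_5me - 23.05 (n = 96, R^2 = 0.76, RMSE = 4.7 deg C)."""
        return 52.18 * mbt_5me_prime - 23.05

    # Example index values, purely illustrative
    print(ph_from_cbt_peat(-1.0))
    print(maat_from_mbt5me(0.6))
    ```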

  1. Approach to derivation of SIR-C science requirements for calibration. [Shuttle Imaging Radar

    NASA Technical Reports Server (NTRS)

    Dubois, Pascale C.; Evans, Diane; Van Zyl, Jakob

    1992-01-01

    Many of the experiments proposed for the forthcoming SIR-C mission require calibrated data, for example those which emphasize (1) deriving quantitative geophysical information (e.g., surface roughness and dielectric constant), (2) monitoring daily and seasonal changes in the Earth's surface (e.g., soil moisture), (3) extending local case studies to regional and worldwide scales, and (4) using SIR-C data with other spaceborne sensors (e.g., ERS-1, JERS-1, and Radarsat). There are three different aspects to the SIR-C calibration problem: radiometric and geometric calibration, which have been previously reported, and polarimetric calibration. The study described in this paper is an attempt at determining the science requirements for polarimetric calibration for SIR-C. A model describing the effect of miscalibration is presented first, followed by an example describing how to assess the calibration requirements specific to an experiment. The effects of miscalibration on some commonly used polarimetric parameters are also discussed. It is shown that polarimetric calibration requirements are strongly application dependent. In consequence, the SIR-C investigators are advised to assess the calibration requirements of their own experiment. A set of numbers summarizing SIR-C polarimetric calibration goals concludes this paper.

  2. A definitive calibration record for the Landsat-5 thematic mapper anchored to the Landsat-7 radiometric scale

    USGS Publications Warehouse

    Teillet, P.M.; Helder, D.L.; Ruggles, T.A.; Landry, R.; Ahern, F.J.; Higgs, N.J.; Barsi, J.; Chander, G.; Markham, B.L.; Barker, J.L.; Thome, K.J.; Schott, J.R.; Palluconi, Frank Don

    2004-01-01

    A coordinated effort on the part of several agencies has led to the specification of a definitive radiometric calibration record for the Landsat-5 thematic mapper (TM) for its lifetime since launch in 1984. The time-dependent calibration record for Landsat-5 TM has been placed on the same radiometric scale as the Landsat-7 enhanced thematic mapper plus (ETM+). It has been implemented in the National Landsat Archive Production Systems (NLAPS) in use in North America. This paper documents the results of this collaborative effort and the specifications for the related calibration processing algorithms. The specifications include (i) anchoring of the Landsat-5 TM calibration record to the Landsat-7 ETM+ absolute radiometric calibration, (ii) new time-dependent calibration processing equations and procedures applicable to raw Landsat-5 TM data, and (iii) algorithms for recalibration computations applicable to some of the existing processed datasets in the North American context. The cross-calibration between Landsat-5 TM and Landsat-7 ETM+ was achieved using image pairs from the tandem-orbit configuration period that was programmed early in the Landsat-7 mission. The time-dependent calibration for Landsat-5 TM is based on a detailed trend analysis of data from the on-board internal calibrator. The new lifetime radiometric calibration record for Landsat-5 will overcome problems with earlier product generation owing to inadequate maintenance and documentation of the calibration over time and will facilitate the quantitative examination of a continuous, near-global dataset at 30-m scale that spans almost two decades.

  3. A Review of Calibration Transfer Practices and Instrument Differences in Spectroscopy.

    PubMed

    Workman, Jerome J

    2018-03-01

    Calibration transfer for use with spectroscopic instruments, particularly for near-infrared, infrared, and Raman analysis, has been the subject of multiple articles, research papers, book chapters, and technical reviews. There has been a myriad of approaches published and claims made for resolving the problems associated with transferring calibrations; however, the capability of attaining identical results over time from two or more instruments using an identical calibration still eludes technologists. Calibration transfer, in a precise definition, refers to a series of analytical approaches or chemometric techniques used to attempt to apply a single spectral database, and the calibration model developed using that database, for two or more instruments, with statistically retained accuracy and precision. Ideally, one would develop a single calibration for any particular application, and move it indiscriminately across instruments and achieve identical analysis or prediction results. There are many technical aspects involved in such precision calibration transfer, related to the measuring instrument reproducibility and repeatability, the reference chemical values used for the calibration, the multivariate mathematics used for calibration, and sample presentation repeatability and reproducibility. Ideally, a multivariate model developed on a single instrument would provide a statistically identical analysis when used on other instruments following transfer. This paper reviews common calibration transfer techniques, mostly related to instrument differences, and the mathematics of the uncertainty between instruments when making spectroscopic measurements of identical samples. It does not specifically address calibration maintenance or reference laboratory differences.

  4. DEM Calibration Approach: design of experiment

    NASA Astrophysics Data System (ADS)

    Boikov, A. V.; Savelev, R. V.; Payor, V. A.

    2018-05-01

    The problem of DEM model calibration is considered in this article. It is proposed to divide the model input parameters into those that require iterative calibration and those that can be measured directly. A new calibration method based on design of experiments for the iteratively calibrated parameters is proposed. The experiment is conducted using a specially designed stand, and the results are processed with machine vision algorithms. Approximating functions are obtained and the error of the implemented software and hardware complex is estimated. The prospects of the obtained results are discussed.

  5. Applying transport-distance specific SOC distribution to calibrate soil erosion model WaTEM

    NASA Astrophysics Data System (ADS)

    Hu, Yaxian; Heckrath, Goswin J.; Kuhn, Nikolaus J.

    2016-04-01

    Slope-scale soil erosion, transport and deposition fundamentally determine the spatial redistribution of eroded sediments in terrestrial and aquatic systems, which further affects the burial and decomposition of eroded SOC. However, comparisons of SOC contents between upper eroding slopes and lower depositional sites cannot fully reflect the movement of eroded SOC in transit along hillslopes. The actual transport distance of eroded SOC is determined by its settling velocity. So far, the settling velocity distribution of eroded SOC has mostly been calculated from mineral-particle-specific SOC distributions. Yet soil is mostly eroded in the form of aggregates, and the movement of aggregates differs significantly from that of individual mineral particles. This calls for an SOC erodibility parameter based on the actual transport-distance distribution of eroded fractions to better calibrate soil erosion models. A previous field investigation on a freshly seeded cropland in Denmark showed immediate deposition of fast-settling soil fractions and the associated SOC at footslopes, followed by a fining trend at the slope tail. To further quantify the long-term effects of topography on the erosional redistribution of eroded SOC, the actual transport-distance specific SOC distribution observed in the field was applied to the soil erosion model WaTEM (based on the USLE). After integration with a local DEM, our calibrated model succeeded in locating the hotspots of enrichment/depletion of eroded SOC at different topographic positions, corresponding much better to the real-world field observations. When extrapolated to repeated erosion events, our projected spatial distribution of eroded SOC is also adequately consistent with the SOC properties in consecutive sample profiles along the slope.

  6. Signal inference with unknown response: calibration-uncertainty renormalized estimator.

    PubMed

    Dorn, Sebastian; Enßlin, Torsten A; Greiner, Maksim; Selig, Marco; Boehm, Vanessa

    2015-01-01

    The calibration of a measurement device is crucial for every scientific experiment in which a signal has to be inferred from data. We present CURE, the calibration-uncertainty renormalized estimator, which reconstructs a signal and, simultaneously, the instrument's calibration from the same data without knowing the exact calibration, only its covariance structure. The idea of the CURE method, developed in the framework of information field theory, is to start with an assumed calibration and then successively include more and more portions of calibration uncertainty into the signal inference equations, absorbing the resulting corrections into renormalized signal (and calibration) solutions. The signal inference and calibration problem thereby turns into a problem of solving a single system of ordinary differential equations and can be identified with common resummation techniques used in field theories. We verify the CURE method by applying it to a simplistic toy example and compare it against existing self-calibration schemes, Wiener filter solutions, and Markov chain Monte Carlo sampling. We conclude that the method is able to keep up in accuracy with the best self-calibration methods and serves as a non-iterative alternative to them.

  7. Contributions to the problem of piezoelectric accelerometer calibration. [using lock-in voltmeter]

    NASA Technical Reports Server (NTRS)

    Jakab, I.; Bordas, A.

    1974-01-01

    After discussing the principal calibration methods for piezoelectric accelerometers, an experimental setup for accelerometer calibration by the reciprocity method is described. It is shown how the use of a lock-in voltmeter eliminates errors due to viscous damping and electrical loading.

  8. Use of commercial off-the-shelf digital cameras for scientific data acquisition and scene-specific color calibration

    PubMed Central

    Akkaynak, Derya; Treibitz, Tali; Xiao, Bei; Gürkan, Umut A.; Allen, Justine J.; Demirci, Utkan; Hanlon, Roger T.

    2014-01-01

    Commercial off-the-shelf digital cameras are inexpensive and easy-to-use instruments that can be used for quantitative scientific data acquisition if images are captured in raw format and processed so that they maintain a linear relationship with scene radiance. Here we describe the image-processing steps required for consistent data acquisition with color cameras. In addition, we present a method for scene-specific color calibration that increases the accuracy of color capture when a scene contains colors that are not well represented in the gamut of a standard color-calibration target. We demonstrate applications of the proposed methodology in the fields of biomedical engineering, artwork photography, perception science, marine biology, and underwater imaging. PMID:24562030

  9. Calibration of groundwater vulnerability mapping using the generalized reduced gradient method

    NASA Astrophysics Data System (ADS)

    Elçi, Alper

    2017-12-01

    Groundwater vulnerability assessment studies are essential in water resources management. Overlay-and-index methods such as DRASTIC are widely used for mapping groundwater vulnerability; however, these methods suffer mainly from a subjective selection of model parameters. The objective of this study is to introduce a calibration procedure that results in a more accurate assessment of groundwater vulnerability. The improvement of the assessment is formulated as a parameter optimization problem using an objective function based on the correlation between actual groundwater contamination and vulnerability index values. The non-linear optimization problem is solved with the generalized-reduced-gradient (GRG) method, a numerical optimization algorithm. To demonstrate the applicability of the procedure, a vulnerability map for the Tahtali stream basin is calibrated using nitrate concentration data. The calibration procedure is easy to implement and aims to maximize the correlation between observed pollutant concentrations and groundwater vulnerability index values. The influence of each vulnerability parameter on the calculated vulnerability index is assessed by performing a single-parameter sensitivity analysis. Results of the sensitivity analysis show that all factors affect the final vulnerability index. Calibration of the vulnerability map improves the correlation between index values and measured nitrate concentrations by 19%; the regression coefficient increases from 0.280 to 0.485. It is evident that the spatial distribution and the proportions of vulnerability class areas are significantly altered by the calibration process. Although the calibration method is demonstrated on the DRASTIC model, the approach is not specific to a particular model and can easily be applied to other overlay-and-index methods.
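
    The optimization described above can be prototyped in a few lines: the overlay-and-index weights are treated as decision variables and the (negative) correlation between the weighted index and observed nitrate concentrations is minimized. The sketch below uses synthetic factor ratings and scipy's SLSQP solver as a stand-in for the GRG implementation; the bounds, factor count, and data are illustrative assumptions.

      # Hedged sketch: calibrate overlay-and-index weights by maximizing the correlation between
      # the vulnerability index and observed nitrate concentrations (synthetic data, SLSQP solver).
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      n_cells, n_factors = 500, 7                        # e.g. seven DRASTIC-style factor ratings
      ratings = rng.uniform(1, 10, size=(n_cells, n_factors))
      nitrate = ratings @ np.array([5, 4, 3, 2, 1, 5, 3]) + rng.normal(0, 5, n_cells)

      def neg_correlation(weights):
          index = ratings @ weights                      # weighted vulnerability index per cell
          return -np.corrcoef(index, nitrate)[0, 1]

      w0 = np.full(n_factors, 3.0)                       # initial (subjective) weights
      res = minimize(neg_correlation, w0, method="SLSQP", bounds=[(1, 5)] * n_factors)
      print("calibrated weights:", np.round(res.x, 2), " correlation:", round(-res.fun, 3))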

  10. Chaos, Consternation and CALIPSO Calibration: New Strategies for Calibrating the CALIOP 1064 nm Channel

    NASA Technical Reports Server (NTRS)

    Vaughan, Mark; Garnier, Anne; Liu, Zhaoyan; Josset, Damien; Hu, Yongxiang; Lee, Kam-Pui; Hunt, William; Vernier, Jean-Paul; Rodier, Sharon; Pelon, Jaques

    2012-01-01

    The very low signal-to-noise ratios of the 1064 nm CALIOP molecular backscatter signal make it effectively impossible to employ the "clear air" normalization technique typically used to calibrate elastic back-scatter lidars. The CALIPSO mission has thus chosen to cross-calibrate their 1064 nm measurements with respect to the 532 nm data using the two-wavelength backscatter from cirrus clouds. In this paper we discuss several known issues in the version 3 CALIOP 1064 nm calibration procedure, and describe the strategies that will be employed in the version 4 data release to surmount these problems.

  11. Calibration and evaluation of a dispersant application system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shum, J.S.

    1987-05-01

    The report presents recommended methods for calibrating and operating boat-mounted dispersant application systems. Calibration of one commercially-available system and several unusual problems encountered in calibration are described. Charts and procedures for selecting pump rates and other operating parameters in order to achieve a desired dosage are provided. The calibration was performed at the EPA's Oil and Hazardous Materials Simulated Environmental Test Tank (OHMSETT) facility in Leonardo, New Jersey.

  12. An Automated Thermocouple Calibration System

    NASA Technical Reports Server (NTRS)

    Bethea, Mark D.; Rosenthal, Bruce N.

    1992-01-01

    An Automated Thermocouple Calibration System (ATCS) was developed for the unattended calibration of type K thermocouples. This system operates from room temperature to 650 C and has been used for calibration of thermocouples in an eight-zone furnace system which may employ as many as 60 thermocouples simultaneously. It is highly efficient, allowing for the calibration of large numbers of thermocouples in significantly less time than required for manual calibrations. The system consists of a personal computer, a data acquisition/control unit, and a laboratory calibration furnace. The calibration furnace is a microprocessor-controlled multipurpose temperature calibrator with an accuracy of +/- 0.7 C. The accuracy of the calibration furnace is traceable to the National Institute of Standards and Technology (NIST). The computer software is menu-based to give the user flexibility and ease of use. The user needs no programming experience to operate the system. This system was specifically developed for use in the Microgravity Materials Science Laboratory (MMSL) at NASA LeRC.

  13. Does Preschool Self-Regulation Predict Later Behavior Problems in General or Specific Problem Behaviors?

    PubMed

    Lonigan, Christopher J; Spiegel, Jamie A; Goodrich, J Marc; Morris, Brittany M; Osborne, Colleen M; Lerner, Matthew D; Phillips, Beth M

    2017-11-01

    Findings from prior research have consistently indicated significant associations between self-regulation and externalizing behaviors. Significant associations have also been reported between children's language skills and both externalizing behaviors and self-regulation. Few studies to date, however, have examined these relations longitudinally, simultaneously, or with respect to unique clusters of externalizing problems. The current study examined the influence of preschool self-regulation on general and specific externalizing behavior problems in early elementary school and whether these relations were independent of associations between language, self-regulation, and externalizing behaviors in a sample of 815 children (44% female). Additionally, given a general pattern of sex differences in the presentations of externalizing behavior problems, self-regulation, and language skills, sex differences for these associations were examined. Results indicated unique relations of preschool self-regulation and language with both general externalizing behavior problems and specific problems of inattention. In general, self-regulation was a stronger longitudinal correlate of externalizing behavior for boys than it was for girls, and language was a stronger longitudinal predictor of hyperactive/impulsive behavior for girls than it was for boys.

  14. Specific Cognitive Predictors of Early Math Problem Solving

    ERIC Educational Resources Information Center

    Decker, Scott L.; Roberts, Alycia M.

    2015-01-01

    Development of early math skill depends on a prerequisite level of cognitive development. Identification of specific cognitive skills that are important for math development may not only inform instructional approaches but also inform assessment approaches to identifying children with specific learning problems in math. This study investigated the…

  15. Ethnic Variability in Body Size, Proportions and Composition in Children Aged 5 to 11 Years: Is Ethnic-Specific Calibration of Bioelectrical Impedance Required?

    PubMed Central

    Lee, Simon; Bountziouka, Vassiliki; Lum, Sooky; Stocks, Janet; Bonner, Rachel; Naik, Mitesh; Fothergill, Helen; Wells, Jonathan C. K.

    2014-01-01

    Background: Bioelectrical Impedance Analysis (BIA) has the potential to be used widely as a method of assessing body fatness and composition, both in clinical and community settings. BIA provides bioelectrical properties, such as whole-body impedance which ideally needs to be calibrated against a gold-standard method in order to provide accurate estimates of fat-free mass. UK studies in older children and adolescents have shown that, when used in multi-ethnic populations, calibration equations need to include ethnic-specific terms, but whether this holds true for younger children remains to be elucidated. The aims of this study were to examine ethnic differences in body size, proportions and composition in children aged 5 to 11 years, and to establish the extent to which such differences could influence BIA calibration. Methods: In a multi-ethnic population of 2171 London primary school-children (47% boys; 34% White, 29% Black African/Caribbean, 25% South Asian, 12% Other) detailed anthropometric measurements were performed and ethnic differences in body size and proportion were assessed. Ethnic differences in fat-free mass, derived by deuterium dilution, were further evaluated in a subsample of the population (n = 698). Multiple linear regression models were used to calibrate BIA against deuterium dilution. Results: In children <11 years of age, Black African/Caribbean children were significantly taller, heavier and had larger body size than children of other ethnicities. They also had larger waist and limb girths and relatively longer legs. Despite these differences, ethnic-specific terms did not contribute significantly to the BIA calibration equation (Fat-free mass = 1.12+0.71*(height²/impedance)+0.18*weight). Conclusion: Although clear ethnic differences in body size, proportions and composition were evident in this population of young children aged 5 to 11 years, an ethnic-specific BIA calibration equation was not required. PMID:25478928
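
    For convenience, the calibration equation reported above can be evaluated directly; the helper below assumes the conventional units for the impedance index (height in cm, whole-body impedance in ohms, weight in kg, fat-free mass in kg), since the abstract does not state them.

      # Evaluates the fat-free mass calibration equation quoted in the abstract:
      #   FFM = 1.12 + 0.71 * (height^2 / impedance) + 0.18 * weight
      # Units (cm, ohm, kg) are assumed here; the abstract does not state them explicitly.
      def fat_free_mass(height_cm: float, impedance_ohm: float, weight_kg: float) -> float:
          return 1.12 + 0.71 * (height_cm ** 2 / impedance_ohm) + 0.18 * weight_kg

      # Example: a child of height 130 cm, whole-body impedance 600 ohm, weight 28 kg.
      print(round(fat_free_mass(130.0, 600.0, 28.0), 1), "kg fat-free mass")   # ~26.2 kg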

  16. Ethnic variability in body size, proportions and composition in children aged 5 to 11 years: is ethnic-specific calibration of bioelectrical impedance required?

    PubMed

    Lee, Simon; Bountziouka, Vassiliki; Lum, Sooky; Stocks, Janet; Bonner, Rachel; Naik, Mitesh; Fothergill, Helen; Wells, Jonathan C K

    2014-01-01

    Bioelectrical Impedance Analysis (BIA) has the potential to be used widely as a method of assessing body fatness and composition, both in clinical and community settings. BIA provides bioelectrical properties, such as whole-body impedance which ideally needs to be calibrated against a gold-standard method in order to provide accurate estimates of fat-free mass. UK studies in older children and adolescents have shown that, when used in multi-ethnic populations, calibration equations need to include ethnic-specific terms, but whether this holds true for younger children remains to be elucidated. The aims of this study were to examine ethnic differences in body size, proportions and composition in children aged 5 to 11 years, and to establish the extent to which such differences could influence BIA calibration. In a multi-ethnic population of 2171 London primary school-children (47% boys; 34% White, 29% Black African/Caribbean, 25% South Asian, 12% Other) detailed anthropometric measurements were performed and ethnic differences in body size and proportion were assessed. Ethnic differences in fat-free mass, derived by deuterium dilution, were further evaluated in a subsample of the population (n = 698). Multiple linear regression models were used to calibrate BIA against deuterium dilution. In children <11 years of age, Black African/Caribbean children were significantly taller, heavier and had larger body size than children of other ethnicities. They also had larger waist and limb girths and relatively longer legs. Despite these differences, ethnic-specific terms did not contribute significantly to the BIA calibration equation (Fat-free mass = 1.12+0.71*(height²/impedance)+0.18*weight). Although clear ethnic differences in body size, proportions and composition were evident in this population of young children aged 5 to 11 years, an ethnic-specific BIA calibration equation was not required.

  17. Bilinear Inverse Problems: Theory, Algorithms, and Applications

    NASA Astrophysics Data System (ADS)

    Ling, Shuyang

    We will discuss how several important real-world signal processing problems, such as self-calibration and blind deconvolution, can be modeled as bilinear inverse problems and solved by convex and nonconvex optimization approaches. In Chapter 2, we bring together three seemingly unrelated concepts, self-calibration, compressive sensing and biconvex optimization. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where the diagonal matrix D (which models the calibration error) is unknown and x is an unknown sparse signal. By "lifting" this biconvex inverse problem and exploiting sparsity in this model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently. In Chapter 3, we study the question of the joint blind deconvolution and blind demixing, i.e., extracting a sequence of functions [special characters omitted] from observing only the sum of their convolutions [special characters omitted]. In particular, for the special case s = 1, it becomes the well-known blind deconvolution problem. We present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. We discuss several applications of the proposed framework in image processing and wireless communications in connection with the Internet-of-Things. In Chapter 4, we consider three different self-calibration models of practical relevance. We show how their corresponding bilinear inverse problems can be solved by both the simple linear least squares approach and the SVD-based approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus allowing for real-time deployment. Explicit theoretical
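
    A compact statement of the lifting idea described for the self-calibration model y = DAx follows; it is a sketch of the general construction, not a reproduction of the thesis, and the notation d = diag(D), a_i^T for the i-th row of A, and e_i for the i-th standard basis vector is assumed here.

      \[
        y_i \;=\; d_i \,(a_i^{\mathsf T} x)
             \;=\; \bigl\langle d\,x^{\mathsf T},\; e_i a_i^{\mathsf T} \bigr\rangle_F ,
        \qquad i = 1,\dots,m,
      \]
      so each bilinear measurement is linear in the lifted rank-one matrix $Z = d\,x^{\mathsf T}$.
      When $x$ is sparse, $Z$ is column-sparse, and recovery can be posed as the convex program
      \[
        \min_{Z}\ \|Z\|_1
        \quad\text{subject to}\quad
        \bigl\langle Z,\; e_i a_i^{\mathsf T} \bigr\rangle_F = y_i,\ \ i = 1,\dots,m ,
      \]
      after which $d$ and $x$ are read off (up to scaling) from the leading rank-one factorization of $Z$.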

  18. Simple transfer calibration method for a Cimel Sun-Moon photometer: calculating lunar calibration coefficients from Sun calibration constants.

    PubMed

    Li, Zhengqiang; Li, Kaitao; Li, Donghui; Yang, Jiuchun; Xu, Hua; Goloub, Philippe; Victori, Stephane

    2016-09-20

    The Cimel new technologies allow both daytime and nighttime aerosol optical depth (AOD) measurements. Although the daytime AOD calibration protocols are well established, accurate and simple nighttime calibration is still a challenging task. The standard lunar-Langley and intercomparison calibration methods both require specific conditions in terms of atmospheric stability and site conditions. Additionally, the lunar irradiance model has some known limits on its uncertainty. This paper presents a simple calibration method that transfers the direct-Sun calibration constant, V0,Sun, to the lunar irradiance calibration coefficient, CMoon. Our approach is a pure calculation method, independent of site limits, e.g., Moon phase. The method is also not affected by the limitations of the lunar irradiance model, which are the largest error source of traditional calibration methods. Moreover, this new transfer calibration approach is easy to use in the field, since CMoon can be obtained directly once V0,Sun is known. Error analysis suggests that the average uncertainty of CMoon over the 440-1640 nm bands obtained with the transfer method is 2.4%-2.8%, depending on the V0,Sun approach (Langley or intercomparison), which is theoretically comparable with that of the lunar-Langley approach. In this paper, the Sun-Moon transfer and the Langley methods are compared based on site measurements in Beijing, and the day-night measurement continuity and performance are analyzed.

  19. Specific Reading Comprehension Disability: Major Problem, Myth, or Misnomer?

    PubMed Central

    Spencer, Mercedes; Quinn, Jamie M.; Wagner, Richard K.

    2013-01-01

    The goal of the present study was to test three competing hypotheses about the nature of comprehension problems of students who are poor in reading comprehension. Participants in the study were first, second, and third graders, totaling 9 cohorts and over 425,000 participants in all. The pattern of results was consistent across all cohorts: Less than one percent of first- through third-grade students who scored as poor in reading comprehension were adequate in both decoding and vocabulary. Although poor reading comprehension certainly qualifies as a major problem rather than a myth, the term specific reading comprehension disability is a misnomer: Individuals with problems in reading comprehension that are not attributable to poor word recognition have comprehension problems that are general to language comprehension rather than specific to reading. Implications for assessment and intervention are discussed. PMID:25143666

  20. Specific Reading Comprehension Disability: Major Problem, Myth, or Misnomer?

    PubMed

    Spencer, Mercedes; Quinn, Jamie M; Wagner, Richard K

    2014-02-01

    The goal of the present study was to test three competing hypotheses about the nature of comprehension problems of students who are poor in reading comprehension. Participants in the study were first, second, and third graders, totaling 9 cohorts and over 425,000 participants in all. The pattern of results was consistent across all cohorts: Less than one percent of first- through third-grade students who scored as poor in reading comprehension were adequate in both decoding and vocabulary. Although poor reading comprehension certainly qualifies as a major problem rather than a myth, the term specific reading comprehension disability is a misnomer: Individuals with problems in reading comprehension that are not attributable to poor word recognition have comprehension problems that are general to language comprehension rather than specific to reading. Implications for assessment and intervention are discussed.

  1. Calibration of groundwater vulnerability mapping using the generalized reduced gradient method.

    PubMed

    Elçi, Alper

    2017-12-01

    Groundwater vulnerability assessment studies are essential in water resources management. Overlay-and-index methods such as DRASTIC are widely used for mapping groundwater vulnerability; however, these methods suffer mainly from a subjective selection of model parameters. The objective of this study is to introduce a calibration procedure that results in a more accurate assessment of groundwater vulnerability. The improvement of the assessment is formulated as a parameter optimization problem using an objective function based on the correlation between actual groundwater contamination and vulnerability index values. The non-linear optimization problem is solved with the generalized-reduced-gradient (GRG) method, a numerical optimization algorithm. To demonstrate the applicability of the procedure, a vulnerability map for the Tahtali stream basin is calibrated using nitrate concentration data. The calibration procedure is easy to implement and aims to maximize the correlation between observed pollutant concentrations and groundwater vulnerability index values. The influence of each vulnerability parameter on the calculated vulnerability index is assessed by performing a single-parameter sensitivity analysis. Results of the sensitivity analysis show that all factors affect the final vulnerability index. Calibration of the vulnerability map improves the correlation between index values and measured nitrate concentrations by 19%; the regression coefficient increases from 0.280 to 0.485. It is evident that the spatial distribution and the proportions of vulnerability class areas are significantly altered by the calibration process. Although the calibration method is demonstrated on the DRASTIC model, the approach is not specific to a particular model and can easily be applied to other overlay-and-index methods. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. A Robust Bayesian Random Effects Model for Nonlinear Calibration Problems

    PubMed Central

    Fong, Y.; Wakefield, J.; De Rosa, S.; Frahm, N.

    2013-01-01

    Summary: In the context of a bioassay or an immunoassay, calibration means fitting a curve, usually nonlinear, through the observations collected on a set of samples containing known concentrations of a target substance, and then using the fitted curve and observations collected on samples of interest to predict the concentrations of the target substance in these samples. Recent technological advances have greatly improved our ability to quantify minute amounts of substance from a tiny volume of biological sample. This has in turn led to a need to improve statistical methods for calibration. In this paper, we focus on developing calibration methods robust to dependent outliers. We introduce a novel normal mixture model with dependent error terms to model the experimental noise. In addition, we propose a re-parameterization of the five parameter logistic nonlinear regression model that allows us to better incorporate prior information. We examine the performance of our methods with simulation studies and show that they lead to a substantial increase in performance measured in terms of mean squared error of estimation and a measure of the average prediction accuracy. A real data example from the HIV Vaccine Trials Network Laboratory is used to illustrate the methods. PMID:22551415

  3. Radiometer calibration methods and resulting irradiance differences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Accurate measurement of solar radiation by radiometers depends on instrument performance specifications, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methodologies and the resulting differences provided by radiometric calibration service providers such as the National Renewable Energy Laboratory (NREL) and manufacturers of radiometers. Some of these methods calibrate radiometers indoors and some outdoors. To establish or understand the differences in calibration methodologies, we processed and analyzed field-measured data from radiometers deployed for 10 months at NREL's Solar Radiation Research Laboratory. These different methods of calibration resulted in a difference of +/-1% to +/-2% in solar irradiance measurements. Analyzing these differences will ultimately assist in determining the uncertainties of the field radiometer data and will help develop a consensus on a standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainties will help the accurate prediction of the output of planned solar conversion projects and improve the bankability of financing solar projects.

  4. Functional design specification for the problem data system. [space shuttle

    NASA Technical Reports Server (NTRS)

    Boatman, T. W.

    1975-01-01

    The purpose of the Functional Design Specification is to outline the design for the Problem Data System. The Problem Data System is a computer-based data management system designed to track the status of problems and corrective actions pertinent to space shuttle hardware.

  5. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters that need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend that line of study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.

  6. Evaluating Statistical Process Control (SPC) techniques and computing the uncertainty of force calibrations

    NASA Technical Reports Server (NTRS)

    Navard, Sharon E.

    1989-01-01

    In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to calibrating standard uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is with a machine which does automatic calibrations of force. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified when possible. Suggestions for improvement are made.

  7. The analytical calibration in (bio)imaging/mapping of the metallic elements in biological samples--definitions, nomenclature and strategies: state of the art.

    PubMed

    Jurowski, Kamil; Buszewski, Bogusław; Piekoszewski, Wojciech

    2015-01-01

    Studies related to the distribution of metallic elements in biological samples are nowadays among the most important research topics. Many articles are dedicated to specific analytical atomic spectrometry techniques used for mapping/(bio)imaging the metallic elements in various kinds of biological samples. However, this literature lacks articles dedicated to reviewing calibration strategies and their problems, nomenclature, definitions, and the ways and methods used to obtain quantitative distribution maps. The aim of this article was to characterize the analytical calibration in the (bio)imaging/mapping of the metallic elements in biological samples, including (1) nomenclature; (2) definitions; and (3) selected, sophisticated examples of calibration strategies with analytical calibration procedures applied in the different analytical methods currently used to study an element's distribution in biological samples/materials, such as LA ICP-MS, SIMS, EDS, XRF and others. The main emphasis was placed on the procedures and methodology of the analytical calibration strategy. Additionally, this work aims to systematize the nomenclature for the calibration terms: analytical calibration, analytical calibration method, analytical calibration procedure and analytical calibration strategy. The authors also want to popularize a division of calibration methods different from that hitherto used. This article is the first work in the literature that refers to and emphasizes the many different and complex aspects of analytical calibration problems in studies related to (bio)imaging/mapping metallic elements in different kinds of biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Geometrical calibration of an AOTF hyper-spectral imaging system

    NASA Astrophysics Data System (ADS)

    Špiclin, Žiga; Katrašnik, Jaka; Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan

    2010-02-01

    Optical aberrations present an important problem in optical measurements. Geometrical calibration of an imaging system is therefore of the utmost importance for achieving accurate optical measurements. In hyper-spectral imaging systems, the problem of optical aberrations is even more pronounced because the aberrations are wavelength dependent. Geometrical calibration must therefore be performed over the entire spectral range of the hyper-spectral imaging system, which is usually far greater than that of the visible light spectrum. This problem is especially severe in AOTF (Acousto-Optic Tunable Filter) hyper-spectral imaging systems, as the diffraction of light in AOTF filters depends on both wavelength and angle of incidence. Geometrical calibration of the hyper-spectral imaging system was performed with a stable calibration target of known dimensions, which was imaged at different wavelengths over the entire spectral range. The acquired images were then automatically registered to the target model by both parametric and nonparametric transformations based on B-splines and by minimizing the normalized correlation coefficient. The calibration method was tested on an AOTF hyper-spectral imaging system in the near-infrared spectral range. The results indicated substantial wavelength-dependent optical aberration that is especially pronounced in the spectral range closer to the infrared part of the spectrum. The calibration method was able to accurately characterize the aberrations and produce transformations for efficient sub-pixel geometrical calibration over the entire spectral range, finally yielding better spatial resolution of the hyper-spectral imaging system.

  9. Automated Attitude Sensor Calibration: Progress and Plans

    NASA Technical Reports Server (NTRS)

    Sedlak, Joseph; Hashmall, Joseph

    2004-01-01

    This paper describes ongoing work at NASA/Goddard Space Flight Center to improve the quality of spacecraft attitude sensor calibration and reduce costs by automating parts of the calibration process. The new calibration software can autonomously preview data quality over a given time span, select a subset of the data for processing, perform the requested calibration, and output a report. This level of automation is currently being implemented for two specific applications: inertial reference unit (IRU) calibration and sensor alignment calibration. The IRU calibration utility makes use of a sequential version of the Davenport algorithm. This utility has been successfully tested with simulated and actual flight data. The alignment calibration is still in the early testing stage. Both utilities will be incorporated into the institutional attitude ground support system.

  10. Quasi-Static Calibration Method of a High-g Accelerometer

    PubMed Central

    Wang, Yan; Fan, Jinbiao; Zu, Jing; Xu, Peng

    2017-01-01

    To solve the problem of resonance during quasi-static calibration of high-g accelerometers, we deduce the relationship between the minimum excitation pulse width and the resonant frequency of the calibrated accelerometer according to the second-order mathematical model of the accelerometer, and thereby improve the quasi-static calibration theory. We establish a quasi-static calibration testing system, which uses a gas gun to generate high-g acceleration signals and applies a laser interferometer to reproduce the impact acceleration. These signals are used to drive the calibrated accelerometer. By comparing the excitation acceleration signal and the output responses of the calibrated accelerometer to the excitation signals, the impact sensitivity of the calibrated accelerometer is obtained. As indicated by the calibration test results, this calibration system produces excitation acceleration signals with a pulse width of less than 1000 μs and realizes the quasi-static calibration of high-g accelerometers with a resonant frequency above 20 kHz at a calibration error of 3%. PMID:28230743

  11. Calibration method for a large-scale structured light measurement system.

    PubMed

    Wang, Peng; Wang, Jianmei; Xu, Jing; Guan, Yong; Zhang, Guanglie; Chen, Ken

    2017-05-10

    The structured light method is an effective non-contact measurement approach. The calibration greatly affects the measurement precision of structured light systems. To construct a large-scale structured light system with high accuracy, a large-scale and precise calibration gauge is always required, which leads to an increased cost. To this end, in this paper, a calibration method with a planar mirror is proposed to reduce the calibration gauge size and cost. An out-of-focus camera calibration method is also proposed to overcome the defocusing problem caused by the shortened distance during the calibration procedure. The experimental results verify the accuracy of the proposed calibration method.

  12. Spacecraft attitude calibration/verification baseline study

    NASA Technical Reports Server (NTRS)

    Chen, L. C.

    1981-01-01

    A baseline study for a generalized spacecraft attitude calibration/verification system is presented. It can be used to define software specifications for three major functions required by a mission: the pre-launch parameter observability and data collection strategy study; the in-flight sensor calibration; and the post-calibration attitude accuracy verification. Analytical considerations are given for both single-axis and three-axis spacecraft. The three-axis attitudes considered include the inertial-pointing attitudes, the reference-pointing attitudes, and attitudes undergoing specific maneuvers. The attitude sensors and hardware considered include the Earth horizon sensors, the plane-field Sun sensors, the coarse and fine two-axis digital Sun sensors, the three-axis magnetometers, the fixed-head star trackers, and the inertial reference gyros.

  13. SU-C-204-02: Improved Patient-Specific Optimization of the Stopping Power Calibration for Proton Therapy Planning Using a Single Proton Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rinaldi, I; Ludwig Maximilian University, Garching, DE; Heidelberg University Hospital, Heidelberg, DE

    2015-06-15

    Purpose: We present an improved method to calculate patient-specific calibration curves to convert X-ray computed tomography (CT) Hounsfield Unit (HU) to relative stopping powers (RSP) for proton therapy treatment planning. Methods: By optimizing the HU-RSP calibration curve, the difference between a proton radiographic image and a digitally reconstructed X-ray radiography (DRR) is minimized. The feasibility of this approach has previously been demonstrated. This scenario assumes that all discrepancies between proton radiography and DRR originate from uncertainties in the HU-RSP curve. In reality, external factors cause imperfections in the proton radiography, such as misalignment compared to the DRR and unfaithful representation of geometric structures (“blurring”). We analyze these effects based on synthetic datasets of anthropomorphic phantoms and suggest an extended optimization scheme which explicitly accounts for these effects. Performance of the method has been tested for various simulated irradiation parameters. The ultimate purpose of the optimization is to minimize uncertainties in the HU-RSP calibration curve. We therefore suggest and perform a thorough statistical treatment to quantify the accuracy of the optimized HU-RSP curve. Results: We demonstrate that without extending the optimization scheme, spatial blurring (equivalent to FWHM=3mm convolution) in the proton radiographies can cause up to 10% deviation between the optimized and the ground truth HU-RSP calibration curve. Instead, results obtained with our extended method reach 1% or better correspondence. We have further calculated gamma index maps for different acceptance levels. With DTA=0.5mm and RD=0.5%, a passing ratio of 100% is obtained with the extended method, while an optimization neglecting effects of spatial blurring only reaches ∼90%. Conclusion: Our contribution underlines the potential of a single proton radiography to generate a patient-specific calibration curve and to

  14. Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap

    PubMed Central

    Al-Widyan, Khalid

    2017-01-01

    Extrinsic calibration of a camera and a 2D laser range finder (lidar) sensors is crucial in sensor data fusion applications; for example SLAM algorithms used in mobile robot platforms. The fundamental challenge of extrinsic calibration is when the camera-lidar sensors do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot–world hand–eye calibration (RWHE) problem; proven to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX=ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about geometric structure in the calibration environment. The reliability and accuracy of the proposed approach is compared to a state-of-the-art method in extrinsic 2D lidar to camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results with an L2 norm translational and rotational deviations of 314 mm and 0.12∘ respectively. PMID:29036905

  15. Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap.

    PubMed

    Ahmad Yousef, Khalil M; Mohd, Bassam J; Al-Widyan, Khalid; Hayajneh, Thaier

    2017-10-14

    Extrinsic calibration of a camera and a 2D laser range finder (lidar) sensors is crucial in sensor data fusion applications; for example SLAM algorithms used in mobile robot platforms. The fundamental challenge of extrinsic calibration is when the camera-lidar sensors do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot-world hand-eye calibration (RWHE) problem; proven to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX = ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about geometric structure in the calibration environment. The reliability and accuracy of the proposed approach is compared to a state-of-the-art method in extrinsic 2D lidar to camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results with an L2 norm translational and rotational deviations of 314 mm and 0.12∘, respectively.
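
    The linear relationship AX = ZB above can be relaxed to an ordinary homogeneous least-squares problem by vectorizing both sides, since vec(A_i X) = (I ⊗ A_i) vec(X) and vec(Z B_i) = (B_i^T ⊗ I) vec(Z). The sketch below solves that relaxation on synthetic, noise-free transforms; it ignores the orthogonality of the rotation blocks (which a full solver would restore by projection onto SO(3)) and illustrates the linear least-squares approach in general, not the authors' algorithm.

      # Hedged sketch: linear relaxation of the robot-world hand-eye problem A X = Z B (synthetic data).
      import numpy as np

      def random_transform(rng):
          """Random 4x4 homogeneous transform (proper rotation from QR, random translation)."""
          q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
          q *= np.sign(np.linalg.det(q))
          T = np.eye(4)
          T[:3, :3], T[:3, 3] = q, rng.normal(size=3)
          return T

      rng = np.random.default_rng(2)
      X_true, Z_true = random_transform(rng), random_transform(rng)

      rows = []
      for _ in range(10):                                # ten synthetic pose pairs
          A = random_transform(rng)
          B = np.linalg.inv(Z_true) @ A @ X_true         # exact data, so A X = Z B holds
          rows.append(np.hstack([np.kron(np.eye(4), A), -np.kron(B.T, np.eye(4))]))

      # Stack the equations and take the null-space vector ~ [vec(X); vec(Z)] up to scale.
      _, _, Vt = np.linalg.svd(np.vstack(rows))
      v = Vt[-1]
      X_est = v[:16].reshape(4, 4, order="F")
      X_est /= X_est[3, 3]                               # fix scale (and sign) via the homogeneous row
      print("recovered X:", np.allclose(X_est, X_true, atol=1e-6))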

  16. Novel crystal timing calibration method based on total variation

    NASA Astrophysics Data System (ADS)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process was formulated as a linear problem. To robustly optimize the timing resolution, a TV constraint was added to the linear equation. Moreover, to solve the computer memory problem associated with the calculation of the timing calibration factors for systems with a large number of crystals, the merge component was used for obtaining the crystal-level timing calibration values. Compared with other conventional methods, the data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, obtained with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns full width at half maximum (FWHM) to 2.31 ns FWHM.
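
    The formulation described above, a linear timing model with a total-variation penalty, can be written as a small convex program. The sketch below uses synthetic coincidence data and cvxpy as a stand-in solver; the specific data model (each coincidence between crystals i and j observing t_i - t_j plus noise) and the regularization weight are illustrative assumptions rather than the published ‘TV merge’ implementation.

      # Hedged sketch: crystal timing calibration as a TV-regularized linear least-squares problem.
      import numpy as np
      import cvxpy as cp

      rng = np.random.default_rng(3)
      n_crystals, n_pairs = 50, 2000
      t_true = np.cumsum(rng.normal(0, 0.05, n_crystals))      # slowly varying true offsets (ns)

      i = rng.integers(0, n_crystals, n_pairs)
      j = rng.integers(0, n_crystals, n_pairs)
      d = t_true[i] - t_true[j] + rng.normal(0, 0.1, n_pairs)  # measured coincidence differences

      A = np.zeros((n_pairs, n_crystals))
      A[np.arange(n_pairs), i] += 1.0
      A[np.arange(n_pairs), j] -= 1.0

      # Least-squares data term plus a total-variation penalty across neighboring crystals;
      # the zero-mean constraint fixes the global offset that pairwise differences cannot determine.
      t = cp.Variable(n_crystals)
      objective = cp.Minimize(cp.sum_squares(A @ t - d) + 1.0 * cp.tv(t))
      cp.Problem(objective, [cp.sum(t) == 0]).solve()
      print("max calibration error (ns):", np.max(np.abs(t.value - (t_true - t_true.mean()))))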

  17. Axial calibration methods of piezoelectric load sharing dynamometer

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Chang, Qingbing; Ren, Zongjin; Shao, Jun; Wang, Xinlei; Tian, Yu

    2018-06-01

    The relationship between the input and output of a load-sharing dynamometer is strongly non-linear across different loading points in a plane, so precise calibration of this non-linear relationship is essential for accurate force measurement. In this paper, calibration experiments at different loading points in a plane are first performed on a piezoelectric load-sharing dynamometer. The load-sharing testing system is then calibrated using both the BP algorithm and the ELM (Extreme Learning Machine) algorithm. Finally, the results show that the ELM calibration is better than BP at capturing the non-linear relationship between the input and output of the load-sharing dynamometer at the different loading points in a plane, which verifies that the ELM algorithm is feasible for solving the non-linear force measurement problem.
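
    For context on the comparison above: an extreme learning machine keeps its randomly drawn hidden-layer weights fixed and solves only the output weights, so training reduces to a single least-squares problem. The sketch below is a generic ELM regression on synthetic dynamometer-style data (loading position plus a raw sensor reading mapping to applied force); the data, network size, and activation are illustrative assumptions, not the authors' implementation.

      # Hedged sketch: a generic Extreme Learning Machine (ELM) regression on synthetic data.
      import numpy as np

      rng = np.random.default_rng(4)

      # Synthetic stand-in for dynamometer calibration data:
      # inputs = (x, y) loading position and a raw sensor sum; output = applied force.
      X = rng.uniform(-1, 1, size=(400, 3))
      y = 100 * X[:, 2] * (1 + 0.2 * X[:, 0] ** 2 - 0.1 * X[:, 1]) + rng.normal(0, 0.5, 400)

      n_hidden = 60
      W = rng.normal(size=(X.shape[1], n_hidden))      # random input weights (never trained)
      b = rng.normal(size=n_hidden)                    # random hidden biases (never trained)

      H = np.tanh(X @ W + b)                           # hidden-layer activations
      beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # only the output weights are solved for

      y_hat = H @ beta
      print("training RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))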

  18. Excimer laser calibration system.

    PubMed

    Gottsch, J D; Rencs, E V; Cambier, J L; Hall, D; Azar, D T; Stark, W J

    1996-01-01

    Excimer laser photoablation for refractive and therapeutic keratectomies has been demonstrated to be feasible and practicable. However, corneal laser ablations are not without problems, including the delivery and maintenance of a homogeneous beam. We have developed an excimer laser calibration system capable of characterizing a laser ablation profile. Beam homogeneity is determined by the analysis of a polymethylmethacrylate (PMMA)-based thin-film using video capture and image processing. The ablation profile is presented as a color-coded map. Interpolation of excimer calibration system analysis provides a three-dimensional representation of elevation profiles that correlates with two-dimensional scanning profilometry. Excimer calibration analysis was performed before treating a monkey undergoing phototherapeutic keratectomy and two human subjects undergoing myopic spherocylindrical photorefractive keratectomy. Excimer calibration analysis was performed before and after laser refurbishing. Laser ablation profiles in PMMA are resolved by the excimer calibration system to .006 microns/pulse. Correlations with ablative patterns in a monkey cornea were demonstrated with preoperative and postoperative keratometry using corneal topography, and two human subjects using video-keratography. Excimer calibration analysis predicted a central-steep-island ablative pattern with the VISX Twenty/Twenty laser, which was confirmed by corneal topography immediately postoperatively and at 1 week after reepithelialization in the monkey. Predicted central steep islands in the two human subjects were confirmed by video-keratography at 1 week and at 1 month. Subsequent technical refurbishing of the laser resulted in a beam with an overall increased ablation rate measured as microns/pulse with a donut ablation profile. A patient treated after repair of the laser electrodes demonstrated no central island. This excimer laser calibration system can precisely detect laser-beam ablation

  19. Calibrated tree priors for relaxed phylogenetics and divergence time estimation.

    PubMed

    Heled, Joseph; Drummond, Alexei J

    2012-01-01

    The use of fossil evidence to calibrate divergence time estimation has a long history. More recently, Bayesian Markov chain Monte Carlo has become the dominant method of divergence time estimation, and fossil evidence has been reinterpreted as the specification of prior distributions on the divergence times of calibration nodes. These so-called "soft calibrations" have become widely used, but the statistical properties of calibrated tree priors in a Bayesian setting have not been carefully investigated. Here, we clarify that calibration densities, such as those defined in BEAST 1.5, do not represent the marginal prior distribution of the calibration node. We illustrate this with a number of analytical results on small trees. We also describe an alternative construction for a calibrated Yule prior on trees that allows direct specification of the marginal prior distribution of the calibrated divergence time, with or without the restriction of monophyly. This method requires the computation of the Yule prior conditional on the height of the divergence being calibrated. Unfortunately, a practical solution for multiple calibrations remains elusive. Our results suggest that direct estimation of the prior induced by specifying multiple calibration densities should be a prerequisite of any divergence time dating analysis.

  20. Integrated calibration sphere and calibration step fixture for improved coordinate measurement machine calibration

    DOEpatents

    Clifford, Harry J [Los Alamos, NM]

    2011-03-22

    A method and apparatus for mounting a calibration sphere to a calibration fixture for Coordinate Measurement Machine (CMM) calibration and qualification is described, decreasing the time required for such qualification, thus allowing the CMM to be used more productively. A number of embodiments are disclosed that allow for new and retrofit manufacture to perform as integrated calibration sphere and calibration fixture devices. This invention renders unnecessary the removal of a calibration sphere prior to CMM measurement of calibration features on calibration fixtures, thereby greatly reducing the time spent qualifying a CMM.

  1. Calibration-free optical chemical sensors

    DOEpatents

    DeGrandpre, Michael D.

    2006-04-11

    An apparatus and method for taking absorbance-based chemical measurements are described. In a specific embodiment, an indicator-based pCO2 (partial pressure of CO2) sensor displays sensor-to-sensor reproducibility and measurement stability. These qualities are achieved by: 1) renewing the sensing solution, 2) allowing the sensing solution to reach equilibrium with the analyte, and 3) calculating the response from a ratio of the indicator solution absorbances which are determined relative to a blank solution. Careful solution preparation, wavelength calibration, and stray light rejection also contribute to this calibration-free system. Three pCO2 sensors were calibrated and each had response curves which were essentially identical within the uncertainty of the calibration. Long-term laboratory and field studies showed the response had no drift over extended periods (months). The theoretical response, determined from thermodynamic characterization of the indicator solution, also predicted the observed calibration-free performance.

  2. Calibration method for the Model 13145 infrared target projectors

    NASA Astrophysics Data System (ADS)

    Huang, Jianxia; Gao, Yuan; Han, Ying

    2014-11-01

    The SBIR Model 13145 Infrared Target Projector (hereafter, the Evaluation Unit) is used for characterizing the performance of infrared imaging systems. Test items include SiTF, MTF, NETD, MRTD, MDTD, and NPS. The infrared target projector comprises two area blackbodies, a 12-position target wheel, and an all-reflective collimator. It provides high-spatial-frequency differential targets; these precision differential targets are imaged by the infrared imaging system under test and converted photoelectrically into analog or digital signals. Application software (IR Windows TM 2001) then evaluates the performance of the infrared imaging system. For calibration of the Evaluation Unit as a whole, the distributed components are first calibrated separately: the area blackbodies are calibrated according to the area-blackbody calibration specification; error correction factors are applied to calibrate the all-reflective collimator; radiance calibration of the infrared target projector is performed using an SR5000 spectral radiometer; and the systematic errors are analyzed. For the parameters of the infrared imaging system, an integrated evaluation method is needed. Following GJB2340-1995, 'General specification for military thermal imaging sets', the testing parameters of the infrared imaging system are measured and the results are compared with results from the Optical Calibration Testing Laboratory. The goal is a realistic calibration of the performance of the Evaluation Unit.

  3. Active Subspace Methods for Data-Intensive Inverse Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qiqi

    2017-04-27

    The project has developed theory and computational tools to exploit active subspaces to reduce the dimension in statistical calibration problems. This dimension reduction enables MCMC methods to calibrate otherwise intractable models. The same theoretical and computational tools can also reduce the measurement dimension for calibration problems that use large stores of data.

  4. Calibration and assessment of channel-specific biases in microarray data with extended dynamical range.

    PubMed

    Bengtsson, Henrik; Jönsson, Göran; Vallon-Christersson, Johan

    2004-11-12

    Non-linearities in observed log-ratios of gene expressions, also known as intensity dependent log-ratios, can often be accounted for by global biases in the two channels being compared. Any step in a microarray process may introduce such offsets and in this article we study the biases introduced by the microarray scanner and the image analysis software. By scanning the same spotted oligonucleotide microarray at different photomultiplier tube (PMT) gains, we have identified a channel-specific bias present in two-channel microarray data. For the scanners analyzed it was in the range of 15-25 (out of 65,535). The observed bias was very stable between subsequent scans of the same array although the PMT gain was greatly adjusted. This indicates that the bias does not originate from a step preceding the scanner detector parts. The bias varies slightly between arrays. When comparing estimates based on data from the same array, but from different scanners, we have found that different scanners introduce different amounts of bias. So do various image analysis methods. We propose a scanning protocol and a constrained affine model that allows us to identify and estimate the bias in each channel. Backward transformation removes the bias and brings the channels to the same scale. The result is that systematic effects such as intensity dependent log-ratios are removed, but also that signal densities become much more similar. The average scan, which has a larger dynamical range and greater signal-to-noise ratio than individual scans, can then be obtained. The study shows that microarray scanners may introduce a significant bias in each channel. Such biases have to be calibrated for, otherwise systematic effects such as intensity dependent log-ratios will be observed. The proposed scanning protocol and calibration method is simple to use and is useful for evaluating scanner biases or for obtaining calibrated measurements with extended dynamical range and better precision. The
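
    A minimal numerical illustration of the affine single-channel model implied above: if scan k of the same array follows y_k = a + b_k*x (a fixed channel bias a and a scan-dependent PMT gain b_k), then regressing one scan on another gives y2 ~ alpha + beta*y1 with beta = b2/b1 and alpha = a*(1 - beta), so the bias can be recovered as a = alpha/(1 - beta). The sketch below demonstrates this on synthetic intensities; it is an illustration of that model, not the authors' estimator, and the specific constrained fit used in the paper may differ.

      # Hedged sketch: recovering a channel-specific scanner bias from two scans of the same
      # array at different PMT gains, under the assumed affine model y_k = a + b_k * x.
      import numpy as np

      rng = np.random.default_rng(5)
      x = rng.lognormal(mean=6.0, sigma=1.0, size=5000)      # synthetic true spot signals
      a_true, b1, b2 = 20.0, 1.0, 2.5                        # channel bias and two PMT gains

      y1 = a_true + b1 * x + rng.normal(0, 2, x.size)
      y2 = a_true + b2 * x + rng.normal(0, 2, x.size)

      # Fit y2 ~ alpha + beta * y1; under the model, alpha = a * (1 - beta).
      beta, alpha = np.polyfit(y1, y2, 1)
      a_est = alpha / (1.0 - beta)
      print("estimated channel bias:", round(a_est, 1))      # close to 20

      # Backward transformation: subtracting the bias leaves the scans differing only by a
      # multiplicative gain, so they can be rescaled onto a common scale.
      y1_cal, y2_cal = y1 - a_est, (y2 - a_est) / beta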

  5. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras.

    PubMed

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-06-24

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at joint calibration of multi-sensors (more than four devices), which normally is a practical issue in the real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested in a practical platform, and experiment results show that the proposed joint calibration method can achieve a satisfactory performance in a project real-time system and its accuracy is higher than the manufacturer's calibration.

  6. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras

    PubMed Central

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-01-01

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at joint calibration of multi-sensors (more than four devices), which normally is a practical issue in the real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested in a practical platform, and experiment results show that the proposed joint calibration method can achieve a satisfactory performance in a project real-time system and its accuracy is higher than the manufacturer’s calibration. PMID:28672823

  7. Parameterizations for reducing camera reprojection error for robot-world hand-eye calibration

    USDA-ARS?s Scientific Manuscript database

    Accurate robot-world, hand-eye calibration is crucial to automation tasks. In this paper, we discuss the robot-world, hand-eye calibration problem which has been modeled as the linear relationship AX equals ZB, where X and Z are the unknown calibration matrices composed of rotation and translation ...
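
    The AX = ZB model mentioned in this record can be posed as a nonlinear least-squares problem over the unknown transforms X and Z. The sketch below uses an assumed rotation-vector parameterization and noise-free synthetic poses purely for illustration; it is not the paper's parameterization, and a real problem would need a sensible initial guess.

      # Illustrative sketch of the robot-world/hand-eye model A X = Z B solved as
      # nonlinear least squares (parameterization and data here are assumed).
      import numpy as np
      from scipy.optimize import least_squares
      from scipy.spatial.transform import Rotation

      def to_hom(rvec, t):
          """Build a 4x4 homogeneous transform from a rotation vector and translation."""
          T = np.eye(4)
          T[:3, :3] = Rotation.from_rotvec(rvec).as_matrix()
          T[:3, 3] = t
          return T

      rng = np.random.default_rng(1)
      X_true = to_hom(rng.normal(size=3), rng.normal(size=3))
      Z_true = to_hom(rng.normal(size=3), rng.normal(size=3))
      A_list = [to_hom(rng.normal(size=3), rng.normal(size=3)) for _ in range(10)]
      B_list = [np.linalg.inv(Z_true) @ A @ X_true for A in A_list]   # synthetic, noise-free

      def residuals(p):
          X = to_hom(p[0:3], p[3:6])
          Z = to_hom(p[6:9], p[9:12])
          return np.concatenate([(A @ X - Z @ B).ravel() for A, B in zip(A_list, B_list)])

      sol = least_squares(residuals, x0=np.zeros(12))   # identity start; a better guess helps in practice
      print("residual norm:", np.linalg.norm(sol.fun))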

  8. Application of six sigma and AHP in analysis of variable lead time calibration process instrumentation

    NASA Astrophysics Data System (ADS)

    Rimantho, Dino; Rahman, Tomy Abdul; Cahyadi, Bambang; Tina Hernawati, S.

    2017-02-01

    Calibration of instrumentation equipment in the pharmaceutical industry is an important activity to determine the true value of a measurement. Preliminary studies indicated that lead time in calibration resulted in disruption of production and laboratory activities. This study aimed to analyze the causes of the calibration lead time. Several methods were used in this study: Six Sigma was applied to determine the process capability of the equipment calibration. Furthermore, brainstorming, Pareto diagrams, and Fishbone diagrams were used to identify and analyze the problems. Then, the Analytic Hierarchy Process (AHP) was used to create a hierarchical structure and prioritize problems. The results showed a DPMO value of around 40769.23, equivalent to a sigma level of approximately 3.24σ for the calibration process. This indicated the need for improvements in the calibration process. Furthermore, problem-solving strategies for the calibration lead time were determined, such as shortening the preventive maintenance schedule, increasing the number of instrument calibrators, and training personnel. Consistency tests on the full matrices of pairwise comparisons showed consistency ratios (CR) below 0.1.
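
    The DPMO figure quoted above can be converted to a sigma level with the conventional Six Sigma formula, which assumes the customary 1.5σ long-term shift. A minimal sketch (reproducing the ~3.24σ value reported):

      # Conventional DPMO-to-sigma-level conversion (assumes the customary 1.5-sigma shift).
      from scipy.stats import norm

      def sigma_level(dpmo):
          """Convert defects per million opportunities to a short-term sigma level."""
          return norm.ppf(1.0 - dpmo / 1e6) + 1.5

      print(round(sigma_level(40769.23), 2))   # ~3.24, matching the value quoted above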

  9. Multiple Use One-Sided Hypotheses Testing in Univariate Linear Calibration

    NASA Technical Reports Server (NTRS)

    Krishnamoorthy, K.; Kulkarni, Pandurang M.; Mathew, Thomas

    1996-01-01

    Consider a normally distributed response variable, related to an explanatory variable through the simple linear regression model. Data obtained on the response variable, corresponding to known values of the explanatory variable (i.e., calibration data), are to be used for testing hypotheses concerning unknown values of the explanatory variable. We consider the problem of testing an unlimited sequence of one sided hypotheses concerning the explanatory variable, using the corresponding sequence of values of the response variable and the same set of calibration data. This is the situation of multiple use of the calibration data. The tests derived in this context are characterized by two types of uncertainties: one uncertainty associated with the sequence of values of the response variable, and a second uncertainty associated with the calibration data. We derive tests based on a condition that incorporates both of these uncertainties. The solution has practical applications in the decision limit problem. We illustrate our results using an example dealing with the estimation of blood alcohol concentration based on breath estimates of the alcohol concentration. In the example, the problem is to test if the unknown blood alcohol concentration of an individual exceeds a threshold that is safe for driving.
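
    For orientation, the classical point estimate in univariate linear calibration is the inverse prediction step sketched below: fit the regression on calibration data, then invert it for a new response. The numbers are synthetic, and the sketch deliberately omits the multiple-use uncertainty control that the paper develops.

      # Sketch of inverse prediction in univariate linear calibration:
      # fit y = b0 + b1*x on calibration data, then estimate the unknown x0 from a new y0.
      # (Illustrative only; the paper's multiple-use one-sided tests add uncertainty control.)
      import numpy as np

      x_cal = np.array([0.0, 0.02, 0.05, 0.08, 0.10])         # known explanatory values
      y_cal = np.array([0.001, 0.019, 0.052, 0.079, 0.101])   # measured responses (synthetic)
      b1, b0 = np.polyfit(x_cal, y_cal, 1)

      y0 = 0.085                                               # new response measurement
      x0_hat = (y0 - b0) / b1                                  # inverse prediction of x0
      print("estimated x0:", x0_hat, "exceeds 0.08 threshold:", x0_hat > 0.08)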

  10. Self-calibration of robot-sensor system

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu

    1990-01-01

    The process of finding the coordinate transformation between a robot and an external sensor system has been addressed. This calibration is equivalent to solving a nonlinear optimization problem for the parameters that characterize the transformation. A two-step procedure is herein proposed for solving the problem. The first step involves finding a nominal solution that is a good approximation of the final solution. A variational problem is then generated to replace the original problem in the next step. With the assumption that the variational parameters are small compared to unity, the problem can then be solved more readily with relatively small computational effort.

  11. Definition of energy-calibrated spectra for national reachback

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kunz, Christopher L.; Hertz, Kristin L.

    2014-01-01

    Accurate energy calibration is critical for the timeliness and accuracy of analysis results of spectra submitted to National Reachback, particularly for the detection of threat items. Many spectra submitted for analysis include either a calibration spectrum using 137Cs or no calibration spectrum at all. The single line provided by 137Cs is insufficient to adequately calibrate nonlinear spectra. A calibration source that provides several lines that are well-spaced, from the low energy cutoff to the full energy range of the detector, is needed for a satisfactory energy calibration. This paper defines the requirements of an energy calibration for the purposes of National Reachback, outlines a method to validate whether a given spectrum meets that definition, discusses general source considerations, and provides a specific operating procedure for calibrating the GR-135.
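
    A multi-line calibration of the kind called for above is typically a low-order channel-to-energy fit across well-spaced reference lines. The sketch below fits a quadratic map; the line energies are standard gamma lines, but the channel centroids are made up for illustration.

      # Sketch of a multi-line energy calibration: fit a quadratic channel-to-energy map
      # from several well-spaced reference lines (channel values below are hypothetical).
      import numpy as np

      energies_keV = np.array([59.5, 122.1, 661.7, 1173.2, 1332.5])   # Am-241, Co-57, Cs-137, Co-60
      channels     = np.array([61.0, 124.8, 668.0, 1181.5, 1341.0])   # hypothetical peak centroids

      coeffs = np.polyfit(channels, energies_keV, 2)    # E(ch) = a*ch^2 + b*ch + c
      calib = np.poly1d(coeffs)
      print("energy at channel 800:", calib(800.0), "keV")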

  12. Analytic Solution to the Problem of Aircraft Electric Field Mill Calibration

    NASA Technical Reports Server (NTRS)

    Koshak, William

    2003-01-01

    It is by no means a simple task to retrieve storm electric fields from an aircraft instrumented with electric field mill sensors. The presence of the aircraft distorts the ambient field in a complicated way. Before retrievals of the storm field can be made, the field mill measurement system must be "calibrated". In other words, a relationship between impressed (i.e., ambient) electric field and mill output must be established. If this relationship can be determined, it is mathematically inverted so that ambient field can be inferred from the mill outputs. Previous studies have primarily focused on linear theories where the relationship between ambient field and mill output is described by a "calibration matrix" M. Each element of the matrix describes how a particular component of the ambient field is enhanced by the aircraft. For example, the product M(sub ix)E(sub x) is the contribution of the E(sub x) field to the i(th) mill output. Similarly, net aircraft charge (described by a "charge field component" E(sub q)) contributes an amount M(sub iq)E(sub q) to the output of the i(th) sensor. The central difficulty in obtaining M stems from the fact that the impressed field (E(sub x), E(sub y), E(sub z), E(sub q)) is not known but is instead estimated. Typically, the aircraft is flown through a series of roll and pitch maneuvers in fair weather, and the values of the fair weather field and aircraft charge are estimated at each point along the aircraft trajectory. These initial estimates are often highly inadequate, but several investigators have improved the estimates by implementing various (ad hoc) iterative methods. Unfortunately, none of the iterative methods guarantee absolute convergence to correct values (i.e., absolute convergence to correct values has not been rigorously proven). In this work, the mathematical problem is solved directly by analytic means. For m mills installed on an arbitrary aircraft, it is shown that it is possible to solve for a single 2m
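
    The linear mill model described above, outputs = M [Ex, Ey, Ez, Eq], can at least be estimated by ordinary least squares once impressed-field estimates are available along a fair-weather trajectory. The sketch below shows only that standard fit on synthetic data; it is not the analytic solution derived in the paper.

      # Minimal numerical sketch of the linear field-mill model: mill outputs = M @ [Ex, Ey, Ez, Eq].
      # Given estimated impressed fields along a fair-weather trajectory, M is fitted by least squares.
      # (The paper derives an analytic solution; this is only the standard regression.)
      import numpy as np

      rng = np.random.default_rng(2)
      n_samples, n_mills = 200, 5
      M_true = rng.normal(size=(n_mills, 4))
      E = rng.normal(size=(n_samples, 4))                  # [Ex, Ey, Ez, Eq] estimates per sample
      outputs = E @ M_true.T + 0.01 * rng.normal(size=(n_samples, n_mills))

      M_hat, *_ = np.linalg.lstsq(E, outputs, rcond=None)  # solves E @ M.T ≈ outputs
      M_hat = M_hat.T
      print("max |error| in M:", np.abs(M_hat - M_true).max())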

  13. Dose Calculation on KV Cone Beam CT Images: An Investigation of the Hu-Density Conversion Stability and Dose Accuracy Using the Site-Specific Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rong Yi, E-mail: rong@humonc.wisc.ed; Smilowitz, Jennifer; Tewatia, Dinesh

    2010-10-01

    Precise calibration of Hounsfield units (HU) to electron density (HU-density) is essential to dose calculation. On-board kV cone beam computed tomography (CBCT) imaging is used predominantly for patients' positioning, but will potentially be used for dose calculation. The impacts of varying 3 imaging parameters (mAs, source-imager distance [SID], and cone angle) and phantom size on the HU number accuracy and HU-density calibrations for CBCT imaging were studied. We proposed a site-specific calibration method to achieve higher accuracy in CBCT image-based dose calculation. Three configurations of the Computerized Imaging Reference Systems (CIRS) water equivalent electron density phantom were used to simulate sites including head, lungs, and lower body (abdomen/pelvis). The planning computed tomography (CT) scan was used as the baseline for comparisons. CBCT scans of these phantom configurations were performed using the Varian Trilogy system in a precalibrated mode with fixed tube voltage (125 kVp), but varied mAs, SID, and cone angle. An HU-density curve was generated and evaluated for each set of scan parameters. Three HU-density tables generated using different phantom configurations with the same imaging parameter settings were selected for dose calculation on CBCT images for an accuracy comparison. Changing mAs or SID had small impact on HU numbers. For adipose tissue, the HU discrepancy from the baseline was 20 HU in a small phantom, but 5 times larger in a large phantom. Yet, reducing the cone angle significantly decreases the HU discrepancy. The HU-density table was also affected accordingly. By performing dose comparison between CT and CBCT image-based plans, results showed that using the site-specific HU-density tables to calibrate CBCT images of different sites improves the dose accuracy to ~2%. Our phantom study showed that CBCT imaging can be a feasible option for dose computation in adaptive radiotherapy approach if the site-specific
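
    In practice an HU-density table like those discussed above is applied to image voxels by piecewise-linear interpolation. A minimal sketch, with illustrative table values that are not the study's calibration data:

      # Sketch of applying a site-specific HU-to-electron-density table to CBCT voxels
      # via piecewise-linear interpolation (table values below are illustrative only).
      import numpy as np

      hu_points     = np.array([-1000.0, -700.0, -90.0, 0.0, 50.0, 300.0, 1200.0])
      rel_e_density = np.array([0.0, 0.29, 0.95, 1.00, 1.05, 1.16, 1.70])

      def hu_to_density(hu_image):
          """Map an array of HU values to relative electron density."""
          return np.interp(hu_image, hu_points, rel_e_density)

      print(hu_to_density(np.array([-800.0, -50.0, 120.0])))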

  14. Absolute radiometric calibration of advanced remote sensing systems

    NASA Technical Reports Server (NTRS)

    Slater, P. N.

    1982-01-01

    The distinction between the uses of relative and absolute spectroradiometric calibration of remote sensing systems is discussed. The advantages of detector-based absolute calibration are described, and the categories of relative and absolute system calibrations are listed. The limitations and problems associated with three common methods used for the absolute calibration of remote sensing systems are addressed. Two methods are proposed for the in-flight absolute calibration of advanced multispectral linear array systems. One makes use of a sun-illuminated panel in front of the sensor, the radiance of which is monitored by a spectrally flat pyroelectric radiometer. The other uses a large, uniform, high-radiance reference ground surface. The ground and atmospheric measurements required as input to a radiative transfer program to predict the radiance level at the entrance pupil of the orbital sensor are discussed, and the ground instrumentation is described.

  15. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    NASA Technical Reports Server (NTRS)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  16. NICMOS Cycles 13 and 14 Calibration Plans

    NASA Astrophysics Data System (ADS)

    Arribas, Santiago; Bergeron, Eddie; de Jong, Roelof; Malhotra, Sangeeta; Mobasher, Bahram; Noll, Keith; Schultz, Al; Wiklind, Tommy; Xu, Chun

    2005-11-01

    This document summarizes the NICMOS Calibration Plans for Cycles 13 and 14. These plans complement the SMOV3b, the Cycle 10 (interim), and the Cycles 11 and 12 (regular) calibration programs executed after the installation of the NICMOS Cooling System (NCS). These previous programs have shown that the instrument is very stable, which has motivated a further reduction in the frequency of the monitoring programs for Cycle 13. In addition, for Cycle 14 some of these programs were slightly modified to account for 2-Gyro HST operations. The special calibrations in Cycle 13 were focussed on a follow-up of the spectroscopic recalibration initiated in Cycle 12. This program led to the discovery of a possible count rate non-linearity, which has triggered a special program for Cycle 13 and a number of subsequent tests and calibrations during Cycle 14. At the time of writing this is a very active area of research. We also briefly comment on other calibrations defined to address other specific issues, such as the autoreset test, the SPAR sequences tests, and the low-frequency flat residual for NIC1. The calibration programs for the 2-Gyro campaigns are not included here, since they have been described elsewhere. Further details and updates on specific programs can be found via the NICMOS web site.

  17. The Chandra Source Catalog 2.0: Calibrations

    NASA Astrophysics Data System (ADS)

    Graessle, Dale E.; Evans, Ian N.; Rots, Arnold H.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula

    2018-01-01

    Among the many enhancements implemented for the release of Chandra Source Catalog (CSC) 2.0 are improvements in the processing calibration database (CalDB). We have included a thorough overhaul of the CalDB software used in the processing. The software system upgrade, called "CalDB version 4," allows for a more rational and consistent specification of flight configurations and calibration boundary conditions. Numerous improvements in the specific calibrations applied have also been added. Chandra's radiometric and detector response calibrations vary considerably with time, detector operating temperature, and position on the detector. The CalDB has been enhanced to provide the best calibrations possible to each observation over the fifteen-year period included in CSC 2.0. Calibration updates include an improved ACIS contamination model, as well as updated time-varying gain (i.e., photon energy) and quantum efficiency maps for ACIS and HRC-I. Additionally, improved corrections for the ACIS quantum efficiency losses due to CCD charge transfer inefficiency (CTI) have been added for each of the ten ACIS detectors. These CTI corrections are now time- and temperature-dependent, allowing ACIS to maintain a 0.3% energy calibration accuracy over the 0.5-7.0 keV range for any ACIS source in the catalog. Radiometric calibration (effective area) accuracy is estimated at ~4% over that range. We include a few examples where improvements in the Chandra CalDB allow for improved data reduction and modeling for the new CSC. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.

  18. A BPM calibration procedure using TBT data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, M.J.; Crisp, J.; Prieto, P.

    2007-06-01

    Accurate BPM calibration is crucial for lattice analysis. It is also reassuring when the calibration can be independently verified. This paper outlines a procedure that can extract BPM calibration information from TBT orbit data. The procedure is developed as an extension to the Turn-By-Turn lattice analysis [1]. Its application to data from both the Recycler Ring and the Main Injector (MI) at Fermilab has produced very encouraging results. Some specifics of the hardware design will be mentioned to contrast with the analysis results.

  19. A simplified gross primary production and evapotranspiration model for boreal coniferous forests - is a generic calibration sufficient?

    NASA Astrophysics Data System (ADS)

    Minunno, F.; Peltoniemi, M.; Launiainen, S.; Aurela, M.; Lindroth, A.; Lohila, A.; Mammarella, I.; Minkkinen, K.; Mäkelä, A.

    2015-07-01

    The problem of model complexity has been lively debated in environmental sciences as well as in the forest modelling community. Simple models are less input demanding and their calibration involves a lower number of parameters, but they might be suitable only at local scale. In this work we calibrated a simplified ecosystem process model (PRELES) to data from multiple sites and we tested if PRELES can be used at regional scale to estimate the carbon and water fluxes of Boreal conifer forests. We compared a multi-site (M-S) calibration with site-specific (S-S) calibrations. Model calibrations and evaluations were carried out by means of Bayesian methods; Bayesian calibration (BC) and Bayesian model comparison (BMC) were used to quantify the uncertainty in model parameters and model structure. To evaluate model performance, BMC results were combined with a more classical analysis of model-data mismatch (M-DM). Evapotranspiration (ET) and gross primary production (GPP) measurements collected at 10 sites in Finland and Sweden were used in the study. Calibration results showed that similar estimates were obtained for the parameters to which model outputs are most sensitive. No significant differences were encountered in the predictions of the multi-site and site-specific versions of PRELES, with the exception of a site with agricultural history (Alkkia). Although PRELES predicted GPP better than evapotranspiration, we concluded that the model can be reliably used at regional scale to simulate carbon and water fluxes of Boreal forests. Our analyses also underlined the importance of using long and carefully collected flux datasets in model calibration. In fact, even a single site can provide model calibrations that can be applied at a wider spatial scale, since it covers a wide range of variability in climatic conditions.

  20. 40 CFR 90.315 - Analyzer initial calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX. and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...

  1. 40 CFR 90.315 - Analyzer initial calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX. and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...

  2. 40 CFR 90.315 - Analyzer initial calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX. and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...

  3. 40 CFR 90.315 - Analyzer initial calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX. and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...

  4. 40 CFR 90.315 - Analyzer initial calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX. and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...

  5. Calibration of the clumped isotope thermometer for planktic foraminifers

    NASA Astrophysics Data System (ADS)

    Meinicke, N.; Ho, S. L.; Nürnberg, D.; Tripati, A. K.; Jansen, E.; Dokken, T.; Schiebel, R.; Meckler, A. N.

    2017-12-01

    Many proxies for past ocean temperature suffer from secondary influences or require species-specific calibrations that might not be applicable on longer time scales. Being thermodynamically based and thus independent of seawater composition, clumped isotopes in carbonates (Δ47) have the potential to circumvent such issues affecting other proxies and provide reliable temperature reconstructions far back in time and in unknown settings. Although foraminifers are commonly used for paleoclimate reconstructions, their use for clumped isotope thermometry has been hindered so far by large sample-size requirements. Existing calibration studies suggest that data from a variety of foraminifer species agree with synthetic carbonate calibrations (Tripati et al., GCA, 2010; Grauel et al., GCA, 2013). However, these studies did not include a sufficient number of samples to fully assess the existence of species-specific effects, and data coverage was especially sparse in the low temperature range (<10 °C). To expand the calibration database of clumped isotopes in planktic foraminifers, especially for colder temperatures (<10 °C), we present new Δ47 data analysed on 14 species of planktic foraminifers from 13 sites, covering a temperature range of 1-29 °C. Our method allows for analysis of smaller sample sizes (3-5 mg), hence also the measurement of multiple species from the same samples. We analyzed surface-dwelling (0-50 m) species and deep-dwelling (habitat depth up to several hundred meters) planktic foraminifers from the same sites to evaluate species-specific effects and to assess the feasibility of temperature reconstructions for different water depths. We also assess the effects of different techniques in estimating foraminifer calcification temperature on the calibration. Finally, we compare our calibration to existing clumped isotope calibrations. Our results confirm previous findings that indicate no species-specific effects on the Δ47-temperature relationship
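
    Clumped-isotope calibrations of this kind are commonly expressed as a linear relationship between Δ47 and 1/T². The sketch below fits that standard form by ordinary least squares; the numbers are synthetic and are not the study's data or published coefficients.

      # Sketch of the common form of a clumped-isotope calibration, Delta47 = a/T^2 + b,
      # fit by ordinary least squares (numbers below are synthetic, not the study's data).
      import numpy as np

      T_celsius = np.array([1.0, 5.0, 10.0, 15.0, 20.0, 25.0, 29.0])
      T_kelvin = T_celsius + 273.15
      delta47 = np.array([0.745, 0.736, 0.726, 0.716, 0.707, 0.698, 0.691])  # per mil, synthetic

      slope, intercept = np.polyfit(1.0 / T_kelvin**2, delta47, 1)
      print(f"Delta47 = {slope:.0f}/T^2 + {intercept:.3f}")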

  6. Camera calibration correction in shape from inconsistent silhouette

    USDA-ARS?s Scientific Manuscript database

    The use of shape from silhouette for reconstruction tasks is plagued by two types of real-world errors: camera calibration error and silhouette segmentation error. When either error is present, we call the problem the Shape from Inconsistent Silhouette (SfIS) problem. In this paper, we show how sm...

  7. Body composition in Nepalese children using isotope dilution: the production of ethnic-specific calibration equations and an exploration of methodological issues.

    PubMed

    Devakumar, Delan; Grijalva-Eternod, Carlos S; Roberts, Sebastian; Chaube, Shiva Shankar; Saville, Naomi M; Manandhar, Dharma S; Costello, Anthony; Osrin, David; Wells, Jonathan C K

    2015-01-01

    Background. Body composition is important as a marker of both current and future health. Bioelectrical impedance (BIA) is a simple and accurate method for estimating body composition, but requires population-specific calibration equations. Objectives. (1) To generate population-specific calibration equations to predict lean mass (LM) from BIA in Nepalese children aged 7-9 years. (2) To explore methodological changes that may extend the range and improve accuracy. Methods. BIA measurements were obtained from 102 Nepalese children (52 girls) using the Tanita BC-418. Isotope dilution with deuterium oxide was used to measure total body water and to estimate LM. Prediction equations for estimating LM from BIA data were developed using linear regression, and estimates were compared with those obtained from the Tanita system. We assessed the effects of flexing the arms of children to extend the range of coverage towards lower weights. We also estimated potential error if the number of children included in the study was reduced. Findings. Prediction equations were generated, incorporating height, impedance index, weight and sex as predictors (R² = 93%). The Tanita system tended to under-estimate LM, with a mean error of 2.2%, but extending up to 25.8%. Flexing the arms to 90° increased the lower weight range, but produced a small error that was not significant when applied to children <16 kg (p = 0.42). Reducing the number of children increased the error at the tails of the weight distribution. Conclusions. Population-specific isotope calibration of BIA for Nepalese children has high accuracy. Arm position is important and can be used to extend the range of low weight covered. Smaller samples reduce resource requirements, but lead to large errors at the tails of the weight distribution.
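
    The type of prediction equation described above regresses lean mass on height, the impedance index (height²/impedance), weight and sex. A minimal sketch on synthetic numbers; the coding of sex, the coefficients and the data are assumptions, not the published equation.

      # Sketch of a BIA prediction equation: lean mass regressed on height, impedance index,
      # weight and sex (all numbers synthetic; the published coefficients are not reproduced).
      import numpy as np

      rng = np.random.default_rng(3)
      n = 102
      height_cm = rng.uniform(110, 135, n)
      impedance = rng.uniform(600, 900, n)
      weight_kg = rng.uniform(14, 30, n)
      sex = rng.integers(0, 2, n)                      # 0 = boy, 1 = girl (assumed coding)
      index = height_cm**2 / impedance                 # impedance index
      lean_mass = 0.5 * index + 0.2 * weight_kg + 0.02 * height_cm - 0.4 * sex + rng.normal(0, 0.5, n)

      X = np.column_stack([np.ones(n), height_cm, index, weight_kg, sex])
      beta, *_ = np.linalg.lstsq(X, lean_mass, rcond=None)
      print("fitted coefficients:", np.round(beta, 3))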

  8. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to

  9. Calibration of decadal ensemble predictions

    NASA Astrophysics Data System (ADS)

    Pasternack, Alexander; Rust, Henning W.; Bhend, Jonas; Liniger, Mark; Grieger, Jens; Müller, Wolfgang; Ulbrich, Uwe

    2017-04-01

    Decadal climate predictions are of great socio-economic interest due to the corresponding planning horizons of several political and economic decisions. Due to the uncertainties of weather and climate forecasts (e.g. due to initial condition uncertainty), they are issued in a probabilistic way. One issue frequently observed for probabilistic forecasts is that they tend not to be reliable, i.e. the forecasted probabilities are not consistent with the relative frequency of the associated observed events. Thus, these kinds of forecasts need to be re-calibrated. While re-calibration methods for seasonal time scales are available and frequently applied, these methods still have to be adapted for decadal time scales and their characteristic problems, such as climate trend and lead-time dependent bias. Regarding this, we propose a method to re-calibrate decadal ensemble predictions that takes the above-mentioned characteristics into account. Finally, this method is applied to and validated on decadal forecasts from the MiKlip system (Germany's initiative for decadal prediction).
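
    One simple ingredient of such a re-calibration is removing a lead-time dependent mean bias estimated from hindcasts. The sketch below shows only that step on synthetic data; the MiKlip method also handles trends and ensemble spread, which are omitted here.

      # Sketch of lead-time dependent mean-bias removal for decadal hindcasts (synthetic data;
      # trend and spread adjustments used in full re-calibration schemes are omitted).
      import numpy as np

      rng = np.random.default_rng(4)
      n_starts, n_leads = 20, 10
      obs = rng.normal(size=(n_starts, n_leads))
      bias_true = 0.1 * np.arange(n_leads)                      # bias growing with lead time
      fcst = obs + bias_true + 0.3 * rng.normal(size=(n_starts, n_leads))

      lead_bias = (fcst - obs).mean(axis=0)                     # one bias estimate per lead year
      fcst_corrected = fcst - lead_bias
      print("estimated lead-time bias:", np.round(lead_bias, 2))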

  10. Hand-Eye Calibration in Visually-Guided Robot Grinding.

    PubMed

    Li, Wen-Long; Xie, He; Zhang, Gang; Yan, Si-Jie; Yin, Zhou-Ping

    2016-11-01

    Visually-guided robot grinding is a novel and promising automation technique for blade manufacturing. One common problem encountered in robot grinding is hand-eye calibration, which establishes the pose relationship between the end effector (hand) and the scanning sensor (eye). This paper proposes a new calibration approach for robot belt grinding. The main contribution of this paper is its consideration of both joint parameter errors and pose parameter errors in a hand-eye calibration equation. The objective function of the hand-eye calibration is built and solved, from which 30 compensated values (corresponding to 24 joint parameters and six pose parameters) are easily calculated in a closed solution. The proposed approach is economic and simple because only a criterion sphere is used to calculate the calibration parameters, avoiding the need for an expensive and complicated tracking process using a laser tracker. The effectiveness of this method is verified using a calibration experiment and a blade grinding experiment. The code used in this approach is attached in the Appendix.

  11. Mexican national pyranometer network calibration

    NASA Astrophysics Data System (ADS)

    Valdes, M.; Villarreal, L.; Estevez, H.; Riveros, D.

    2013-12-01

    In order to take advantage of solar radiation as an alternate energy source it is necessary to evaluate its spatial and temporal availability. The Mexican National Meteorological Service (SMN) has a network with 136 meteorological stations, each coupled with a pyranometer for measuring the global solar radiation. Some of these stations had not been calibrated in several years. The Mexican Department of Energy (SENER), in order to count on a reliable evaluation of the solar resource, funded this project to calibrate the SMN pyranometer network and validate the data. The calibration of the 136 pyranometers by the intercomparison method recommended by the World Meteorological Organization (WMO) requires lengthy observations and specific environmental conditions such as clear skies and a stable atmosphere, circumstances that determine the site and season of the calibration. The Solar Radiation Section of the Instituto de Geofísica of the Universidad Nacional Autónoma de México is a Regional Center of the WMO and is certified to carry out the calibration procedures and issue certificates. We are responsible for the recalibration of the pyranometer network of the SMN. A continuous emission solar simulator with exposed areas of 30 cm diameter was acquired to reduce the calibration time and remove the dependence on atmospheric conditions. We present the results of the calibration of 10 thermopile pyranometers and one photovoltaic cell by the intercomparison method, with more than 10000 observations each, and those obtained with the solar simulator.
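
    In an intercomparison of this sort, the test sensor's sensitivity is commonly estimated by regressing its signal against the reference irradiance over many simultaneous observations. A minimal sketch with assumed numbers, not the campaign's data:

      # Sketch of a pyranometer intercomparison: estimate the test sensor's sensitivity
      # by a zero-intercept regression of its signal against reference irradiance (synthetic data).
      import numpy as np

      rng = np.random.default_rng(5)
      irradiance_ref = rng.uniform(200, 1000, 10000)            # W/m^2 from the reference instrument
      sensitivity_true = 8.5e-6                                 # V per (W/m^2), assumed
      signal = sensitivity_true * irradiance_ref + 2e-6 * rng.normal(size=irradiance_ref.size)

      sensitivity_hat = np.sum(signal * irradiance_ref) / np.sum(irradiance_ref**2)
      print(f"sensitivity: {sensitivity_hat * 1e6:.2f} uV per W/m^2")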

  12. SLC-off Landsat-7 ETM+ reflective band radiometric calibration

    USGS Publications Warehouse

    Markham, B.L.; Barsi, J.A.; Thome, K.J.; Barker, J.L.; Scaramuzza, P.L.; Helder, D.L.; ,

    2005-01-01

    Since May 31, 2003, when the scan line corrector (SLC) on the Landsat-7 ETM+ failed, the primary foci of Landsat-7 ETM+ analyses have been understanding and attempting to fix the problem and, later, developing composited products to mitigate it. In the meantime, the Image Assessment System personnel and vicarious calibration teams have continued to monitor the radiometric performance of the ETM+ reflective bands. The SLC failure produced no measurable change in the radiometric calibration of the ETM+ bands. No trends in the calibration are definitively present over the mission lifetime and, if present, are less than 0.5% per year. Detector 12 in Band 7 dropped about 0.5% in response relative to the rest of the detectors in the band in May 2004 and recovered back to within 0.1% of its initial relative gain in October 2004.

  13. Evaluation of calibration efficacy under different levels of uncertainty

    DOE PAGES

    Heo, Yeonsook; Graziano, Diane J.; Guzowski, Leah; ...

    2014-06-10

    This study examines how calibration performs under different levels of uncertainty in model input data. It specifically assesses the efficacy of Bayesian calibration to enhance the reliability of EnergyPlus model predictions. A Bayesian approach can be used to update uncertain values of parameters, given measured energy-use data, and to quantify the associated uncertainty. We assess the efficacy of Bayesian calibration under a controlled virtual-reality setup, which enables rigorous validation of the accuracy of calibration results in terms of both calibrated parameter values and model predictions. Case studies demonstrate the performance of Bayesian calibration of base models developed from audit data with differing levels of detail in building design, usage, and operation.

  14. Simbol-X Telescope Scientific Calibrations: Requirements and Plans

    NASA Astrophysics Data System (ADS)

    Malaguti, G.; Angelini, L.; Raimondi, L.; Moretti, A.; Trifoglio, M.

    2009-05-01

    The Simbol-X telescope characteristics and the mission scientific requirements impose a challenging calibration plan with a number of unprecedented issues. The 20 m focal length implies a divergence of the incoming X-ray beam comparable to the incidence angle on the mirror surface, even for 100 m-long facilities. Moreover, this is the first time that a direct-focussing X-ray telescope will be calibrated over an energy band covering about three decades, and with a complex focal plane. These problems require a careful plan and organization of the measurements, together with an evaluation of the calibration needs in terms of both hardware and software.

  15. An episodic specificity induction enhances means-end problem solving in young and older adults.

    PubMed

    Madore, Kevin P; Schacter, Daniel L

    2014-12-01

    Episodic memory plays an important role not only in remembering past experiences, but also in constructing simulations of future experiences and solving means-end social problems. We recently found that an episodic specificity induction-brief training in recollecting details of past experiences-enhances performance of young and older adults on memory and imagination tasks. Here we tested the hypothesis that this specificity induction would also positively impact a means-end problem-solving task on which age-related changes have been linked to impaired episodic memory. Young and older adults received the specificity induction or a control induction before completing a means-end problem-solving task, as well as memory and imagination tasks. Consistent with previous findings, older adults provided fewer relevant steps on problem solving than did young adults, and their responses also contained fewer internal (i.e., episodic) details across the 3 tasks. There was no difference in the number of other (e.g., irrelevant) steps on problem solving or external (i.e., semantic) details generated on the 3 tasks as a function of age. Critically, the specificity induction increased the number of relevant steps and internal details (but not other steps or external details) that both young and older adults generated in problem solving compared with the control induction, as well as the number of internal details (but not external details) generated for memory and imagination. Our findings support the idea that episodic retrieval processes are involved in means-end problem solving, extend the range of tasks on which a specificity induction targets these processes, and show that the problem-solving performance of older adults can benefit from a specificity induction as much as that of young adults. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  16. An episodic specificity induction enhances means-end problem solving in young and older adults

    PubMed Central

    Madore, Kevin P.; Schacter, Daniel L.

    2014-01-01

    Episodic memory plays an important role not only in remembering past experiences, but also in constructing simulations of future experiences and solving means-end social problems. We recently found that an episodic specificity induction- brief training in recollecting details of past experiences- enhances performance of young and older adults on memory and imagination tasks. Here we tested the hypothesis that this specificity induction would also positively impact a means-end problem solving task on which age-related changes have been linked to impaired episodic memory. Young and older adults received the specificity induction or a control induction before completing a means-end problem solving task as well as memory and imagination tasks. Consistent with previous findings, older adults provided fewer relevant steps on problem solving than did young adults, and their responses also contained fewer internal (i.e., episodic) details across the three tasks. There was no difference in the number of other (e.g., irrelevant) steps on problem solving or external (i.e., semantic) details generated on the three tasks as a function of age. Critically, the specificity induction increased the number of relevant steps and internal details (but not other steps or external details) that both young and older adults generated in problem solving compared with the control induction, as well as the number of internal details (but not external details) generated for memory and imagination. Our findings support the idea that episodic retrieval processes are involved in means-end problem solving, extend the range of tasks on which a specificity induction targets these processes, and show that the problem solving performance of older adults can benefit from a specificity induction as much as that of young adults. PMID:25365688

  17. Calibration and characterization of UV sensors for water disinfection

    NASA Astrophysics Data System (ADS)

    Larason, T.; Ohno, Y.

    2006-04-01

    The National Institute of Standards and Technology (NIST), USA is participating in a project with the American Water Works Association Research Foundation (AwwaRF) to develop new guidelines for ultraviolet (UV) sensor characteristics to monitor the performance of UV water disinfection plants. The current UV water disinfection standards, ÖNORM M5873-1 and M5873-2 (Austria) and DVGW W294 3 (Germany), on the requirements for UV sensors for low-pressure mercury (LPM) and medium-pressure mercury (MPM) lamp systems have been studied. Additionally, the characteristics of various types of UV sensors from several different commercial vendors have been measured and analysed. This information will aid in the development of new guidelines to address issues such as sensor requirements, calibration methods, uncertainty and traceability. Practical problems were found in the calibration methods and evaluation of spectral responsivity requirements for sensors designed for MPM lamp systems. To solve the problems, NIST is proposing an alternative sensor calibration method for MPM lamp systems. A future calibration service is described for UV sensors intended for low- and medium-pressure mercury lamp systems used in water disinfection applications.

  18. Bayesian calibration of the Community Land Model using surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi

    2014-02-01

    We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
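
    The core idea of surrogate-based Bayesian calibration can be illustrated in a few lines: replace the expensive simulator with a cheap surrogate and sample the posterior of the uncertain parameter by MCMC. The sketch below uses a toy one-parameter model, a quadratic surrogate and a random-walk Metropolis sampler; every name and number is illustrative, and none of it is the CLM workflow itself.

      # Minimal sketch of surrogate-based Bayesian calibration: a polynomial surrogate of an
      # "expensive" model plus a random-walk Metropolis sampler (all values are illustrative).
      import numpy as np

      rng = np.random.default_rng(8)

      def expensive_model(theta):
          return 2.0 * theta + 0.5 * theta**2        # stand-in for the full simulator

      # Build a quadratic surrogate from a few training runs.
      train_theta = np.linspace(0.0, 2.0, 7)
      surrogate = np.poly1d(np.polyfit(train_theta, expensive_model(train_theta), 2))

      # Synthetic observation generated at theta = 1.2 with noise sigma = 0.1.
      obs = expensive_model(1.2) + rng.normal(0, 0.1)

      def log_post(theta):
          if not 0.0 <= theta <= 2.0:                # uniform prior on [0, 2]
              return -np.inf
          return -0.5 * ((obs - surrogate(theta)) / 0.1) ** 2

      samples, theta = [], 1.0
      lp = log_post(theta)
      for _ in range(5000):                          # random-walk Metropolis
          prop = theta + 0.1 * rng.normal()
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          samples.append(theta)

      print("posterior mean:", np.mean(samples[1000:]))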

  19. Specific Pronunciation Problems.

    ERIC Educational Resources Information Center

    Avery, Peter; And Others

    1987-01-01

    Reviews common pronunciation problems experienced by learners of English as a second language who are native speakers of Vietnamese, Cantonese, Spanish, Portuguese, Italian, Polish, Greek, and Punjabi. (CB)

  20. Development of SIR-C Ground Calibration Equipment

    NASA Technical Reports Server (NTRS)

    Freeman, A.; Azeem, M.; Haub, D.; Sarabandi, K.

    1993-01-01

    SIR-C/X-SAR is currently scheduled for launch in April 1994. SIR-C is an L-Band and C-Band, multi-polarization spaceborne SAR system developed by NASA/JPL. X-SAR is an X-Band SAR system developed by DARA/ASI. One of the problems involved in calibrating the SIR-C instrument is to make sure that the horizontal (H) and vertical (V) polarized beams are aligned in the azimuth direction, i.e., that they are pointing in the same direction. This is important if the polarimetric performance specifications for the system are to be met. To solve this problem, we have designed and built a prototype of a low-cost ground receiver capable of recording received power from two antennas, one H-polarized, the other V-polarized. The two signals are mixed to audio then recorded on the left and right stereo channels of a standard audio cassette player. The audio cassette recording can then be played back directly into a Macintosh computer, where it is digitized. Analysis of.

  1. Holographic Entanglement Entropy, SUSY & Calibrations

    NASA Astrophysics Data System (ADS)

    Colgáin, Eoin Ó.

    2018-01-01

    Holographic calculations of entanglement entropy boil down to identifying minimal surfaces in curved spacetimes. This generically entails solving second-order equations. For higher-dimensional AdS geometries, we demonstrate that supersymmetry and calibrations reduce the problem to first-order equations. We note that minimal surfaces corresponding to disks preserve supersymmetry, whereas strips do not.

  2. 40 CFR 1066.130 - Measurement instrument calibrations and verifications.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Measurement instrument calibrations... (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Equipment, Measurement Instruments, Fuel, and Analytical Gas Specifications § 1066.130 Measurement instrument calibrations and verifications. The...

  3. Uncertainty propagation in the calibration equations for NTC thermistors

    NASA Astrophysics Data System (ADS)

    Liu, Guang; Guo, Liang; Liu, Chunlong; Wu, Qingwen

    2018-06-01

    The uncertainty propagation problem is quite important for temperature measurements, since we rely so much on the sensors and calibration equations. Although uncertainty propagation for platinum resistance or radiation thermometers is well known, there have been few publications concerning negative temperature coefficient (NTC) thermistors. Insight into the propagation characteristics of the uncertainty that arises when calibration equations are determined using the Lagrange interpolation or least-squares fitting method is presented here with respect to several of the most common equations used in NTC thermistor calibration. Within this work, analytical expressions of the propagated uncertainties for both fitting methods are derived for the uncertainties in the measured temperature and resistance at each calibration point. High-precision calibration of an NTC thermistor in a precision water bath was performed by means of the comparison method. Results show that, for both fitting methods, the propagated uncertainty is flat in the interpolation region but rises rapidly beyond the calibration range. Also, for temperatures interpolated between calibration points, the propagated uncertainty is generally no greater than that associated with the calibration points. For least-squares fitting, the propagated uncertainty is significantly reduced by increasing the number of calibration points and can be kept well below the uncertainty of the calibration points.
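
    For context, one of the most common NTC calibration equations is the Steinhart-Hart form 1/T = a + b ln R + c (ln R)³, which can be fitted by least squares as sketched below. The resistance values are typical of a 10 kΩ thermistor but are illustrative; the paper's uncertainty propagation is not reproduced here.

      # Sketch of least-squares fitting of the Steinhart-Hart form for an NTC thermistor,
      # 1/T = a + b*ln(R) + c*ln(R)^3 (synthetic calibration points, illustrative only).
      import numpy as np

      T_K = np.array([273.15, 283.15, 293.15, 303.15, 313.15, 323.15])
      R_ohm = np.array([32650.0, 19900.0, 12490.0, 8057.0, 5327.0, 3603.0])  # illustrative 10k NTC

      lnR = np.log(R_ohm)
      A = np.column_stack([np.ones_like(lnR), lnR, lnR**3])
      a, b, c = np.linalg.lstsq(A, 1.0 / T_K, rcond=None)[0]

      R_new = 10000.0
      T_new = 1.0 / (a + b * np.log(R_new) + c * np.log(R_new)**3)
      print(f"T at 10 kOhm: {T_new - 273.15:.2f} C")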

  4. Calibration facility for environment dosimetry instruments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bercea, Sorin; Celarel, Aurelia; Cenusa, Constantin

    2013-12-16

    In the last ten years, nuclear activities, as well as major nuclear events (see the Fukushima accident), have had an increasing impact on the environment, mainly through contamination with radioactive materials. The most convenient way to quickly identify the presence of radioactive elements in the environment is to measure the dose-equivalent rate H. In this situation, information concerning the values of H due only to the natural radiation background must exist. Usually, the values of H due to the natural radiation background are very low (∼10⁻⁹ - 10⁻⁸ Sv/h). A correct measurement of H in this range requires calibration of the measuring instruments in the range corresponding to the natural radiation background, which leads to significant problems due to the presence of the natural background itself; the best way to overcome this difficulty is to set up the calibration stand in an area with a very low natural radiation background. In Romania, we identified an area with such special conditions at 200 m depth, in a salt mine. This paper deals with the necessary requirements for such a calibration facility, as well as with the calibration stand itself. The paper also includes a description of the calibration stand (and images), as well as the radiological and metrological parameters. This calibration facility for environmental dosimetry is one of the few laboratories in this field in Europe.

  5. Sky camera geometric calibration using solar observations

    DOE PAGES

    Urquhart, Bryan; Kurtz, Ben; Kleissl, Jan

    2016-09-05

    A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun position in the sky is modeled using a solar position algorithm (requiring latitude, longitude, altitude and time as inputs). Sun position on the image plane is detected using a simple image processing algorithm. The performance evaluation focuses on the calibration of a camera employing a fisheye lens with an equisolid angle projection, but the camera model is general enough to treat most fixed focal length, central, dioptric camera systems with a photo objective lens. Calibration errors scale with the noise level of the sun position measurement in the image plane, but the calibration is robust across a large range of noise in the sun position. In conclusion, calibration performance on clear days ranged from 0.94 to 1.24 pixels root mean square error.
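
    A heavily simplified version of the sun-based calibration idea is sketched below: fit an equisolid-angle fisheye projection, r = 2f sin(θ/2) about a principal point, to detected sun positions. The sketch assumes the optical axis points at the zenith and azimuth is aligned with the image axes, which the published model does not require; the data and starting values are made up.

      # Simplified sketch of sun-based fisheye calibration: fit f and the principal point of an
      # equisolid-angle projection to detected sun positions (synthetic data; the published
      # camera model also handles orientation and lens distortion).
      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(6)
      theta = rng.uniform(0.1, 1.3, 50)                 # sun zenith angles from a solar algorithm [rad]
      phi = rng.uniform(0, 2 * np.pi, 50)               # sun azimuth angles [rad]
      f_true, cx_true, cy_true = 300.0, 320.0, 240.0
      r = 2 * f_true * np.sin(theta / 2)
      u_obs = cx_true + r * np.cos(phi) + 0.5 * rng.normal(size=50)
      v_obs = cy_true + r * np.sin(phi) + 0.5 * rng.normal(size=50)

      def residuals(p):
          f, cx, cy = p
          r_model = 2 * f * np.sin(theta / 2)
          return np.concatenate([cx + r_model * np.cos(phi) - u_obs,
                                 cy + r_model * np.sin(phi) - v_obs])

      sol = least_squares(residuals, x0=[250.0, 300.0, 220.0])
      print("fitted f, cx, cy:", np.round(sol.x, 1))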

  6. Automatic colorimetric calibration of human wounds

    PubMed Central

    2010-01-01

    Background Recently, digital photography in medicine is considered an acceptable tool in many clinical domains, e.g. wound care. Although ever higher resolutions are available, reproducibility is still poor and visual comparison of images remains difficult. This is even more the case for measurements performed on such images (colour, area, etc.). This problem is often neglected and images are freely compared and exchanged without further thought. Methods The first experiment checked whether camera settings or lighting conditions could negatively affect the quality of colorimetric calibration. Digital images plus a calibration chart were exposed to a variety of conditions. Precision and accuracy of colours after calibration were quantitatively assessed with a probability distribution for perceptual colour differences (dE_ab). The second experiment was designed to assess the impact of the automatic calibration procedure (i.e. chart detection) on real-world measurements. 40 different images of real wounds were acquired and a region of interest was selected in each image. 3 rotated versions of each image were automatically calibrated and colour differences were calculated. Results 1st experiment: Colour differences between the measurements and real spectrophotometric measurements reveal median dE_ab values of 6.40 for the proper patches of calibrated normal images and 17.75 for uncalibrated images, respectively, demonstrating an important improvement in accuracy after calibration. The reproducibility, visualized by the probability distribution of the dE_ab errors between 2 measurements of the patches of the images, has a median of 3.43 dE_ab for all calibrated images and 23.26 dE_ab for all uncalibrated images. If we restrict ourselves to the proper patches of normal calibrated images the median is only 2.58 dE_ab! Wilcoxon rank-sum testing (p < 0.05) between uncalibrated normal images and calibrated normal images with proper squares were equal to 0 demonstrating a highly
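
    The dE_ab values quoted above are CIE76 colour differences, i.e. Euclidean distances between colours in CIELAB space. A minimal sketch (the example triplets are arbitrary):

      # CIE76 colour difference dE_ab: Euclidean distance between two colours in CIELAB space.
      import numpy as np

      def delta_e_ab(lab1, lab2):
          """CIE76 colour difference between two (L*, a*, b*) triplets."""
          return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

      print(delta_e_ab((52.0, 10.0, -6.0), (50.0, 12.5, -4.0)))   # ~3.8, a visible difference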

  7. 40 CFR 1066.240 - Torque transducer verification and calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Dynamometer Specifications § 1066.240 Torque transducer verification and calibration. Calibrate torque-measurement systems as described in 40 CFR 1065.310. ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Torque transducer verification and...

  8. 40 CFR 1066.240 - Torque transducer verification and calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Dynamometer Specifications § 1066.240 Torque transducer verification and calibration. Calibrate torque-measurement systems as described in 40 CFR 1065.310. ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Torque transducer verification and...

  9. Definition and sensitivity of the conceptual MORDOR rainfall-runoff model parameters using different multi-criteria calibration strategies

    NASA Astrophysics Data System (ADS)

    Garavaglia, F.; Seyve, E.; Gottardi, F.; Le Lay, M.; Gailhard, J.; Garçon, R.

    2014-12-01

    MORDOR is a conceptual hydrological model extensively used in Électricité de France (EDF, French electric utility company) operational applications: (i) hydrological forecasting, (ii) flood risk assessment, (iii) water balance and (iv) climate change studies. MORDOR is a lumped, reservoir, elevation-based model with hourly or daily areal rainfall and air temperature as the driving input data. The principal hydrological processes represented are evapotranspiration, direct and indirect runoff, ground water, snow accumulation and melt, and routing. The model has been intensively used at EDF for more than 20 years, in particular for modeling French mountainous watersheds. Regarding parameter calibration, we propose and test alternative multi-criteria techniques based on two specific approaches: automatic calibration using single-objective functions and a priori parameter calibration founded on hydrological watershed features. The automatic calibration approach uses single-objective functions, based on Kling-Gupta efficiency, to quantify the agreement between the simulated and observed runoff, focusing on four different runoff samples: (i) the time-series sample, (ii) the annual hydrological regime, (iii) monthly cumulative distribution functions and (iv) recession sequences. The primary purpose of this study is to analyze the definition and sensitivity of MORDOR parameters by testing different calibration techniques in order to: (i) simplify the model structure, (ii) increase the calibration-validation performance of the model and (iii) reduce the equifinality problem of the calibration process. We propose an alternative calibration strategy that reaches these goals. The analysis is illustrated by calibrating the MORDOR model to daily data for 50 watersheds located in French mountainous regions.
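
    The Kling-Gupta efficiency mentioned above has a standard formulation, KGE = 1 - sqrt((r - 1)² + (α - 1)² + (β - 1)²), with r the linear correlation, α the ratio of standard deviations and β the ratio of means. A minimal sketch (the runoff series are synthetic):

      # Standard Kling-Gupta efficiency as a single-objective calibration criterion.
      import numpy as np

      def kge(sim, obs):
          sim, obs = np.asarray(sim, float), np.asarray(obs, float)
          r = np.corrcoef(sim, obs)[0, 1]
          alpha = sim.std() / obs.std()
          beta = sim.mean() / obs.mean()
          return 1.0 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)

      obs = np.array([1.2, 3.4, 2.2, 5.6, 4.1, 2.9])
      sim = np.array([1.0, 3.1, 2.5, 5.0, 4.4, 3.2])
      print(round(kge(sim, obs), 3))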

  10. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    NASA Astrophysics Data System (ADS)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of the calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm is automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  11. An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)

    DTIC Science & Technology

    2010-03-01

    An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems to perform 3D motion capture in real time. This integrated technique is a two-step process ... Many researchers have successfully dealt with the problem of camera calibration by taking images from a 2D

  12. ATLAS Tile Calorimeter time calibration, monitoring and performance

    NASA Astrophysics Data System (ADS)

    Davidek, T.; ATLAS Collaboration

    2017-11-01

    The Tile Calorimeter (TileCal) is the hadronic calorimeter covering the central region of the ATLAS experiment at the LHC. This sampling device is made of plastic scintillating tiles alternated with iron plates, and its response is calibrated to the electromagnetic scale by means of several dedicated calibration systems. Accurate time calibration is important for energy reconstruction and non-collision background removal, as well as for specific physics analyses. The initial time calibration with so-called splash events and the subsequent fine-tuning with collision data are presented. The monitoring of the time calibration with the laser system and physics collision data is discussed, as well as the corrections for sudden changes, which are applied before the recorded data are processed for physics analyses. Finally, the time resolution as measured with jets and isolated muons is presented.

  13. Improved infra-red procedure for the evaluation of calibrating units.

    DOT National Transportation Integrated Search

    2011-01-04

    Introduction. The NHTSA Model Specifications for Calibrating Units for Breath Alcohol Testers (FR 72 34742-34748) requires that calibration units submitted for inclusion on the NHTSA Conforming Products List for such devices be evaluated using ...

  14. Volumetric calibration of a plenoptic camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert

    Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.

  15. Volumetric calibration of a plenoptic camera.

    PubMed

    Hall, Elise Munz; Fahringer, Timothy W; Guildenbecher, Daniel R; Thurow, Brian S

    2018-02-01

    The volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.

  16. Volumetric calibration of a plenoptic camera

    DOE PAGES

    Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert; ...

    2018-02-01

    Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
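
    As an illustrative sketch of a polynomial volumetric mapping of this general kind (not the authors' code), one can fit, by least squares, a polynomial from the initially reconstructed dot-card coordinates to their known object-space positions; the cubic degree and NumPy usage below are assumptions.

        import itertools
        import numpy as np

        def poly_features(pts, degree=3):
            """Monomial features x**i * y**j * z**k with i + j + k <= degree for (N, 3) points."""
            x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
            cols = [x ** i * y ** j * z ** k
                    for i, j, k in itertools.product(range(degree + 1), repeat=3)
                    if i + j + k <= degree]
            return np.column_stack(cols)

        def fit_volumetric_mapping(measured, true_xyz, degree=3):
            """Least-squares polynomial mapping from distorted reconstructed coordinates
            of the dot-card targets to their known object-space coordinates."""
            A = poly_features(np.asarray(measured, float), degree)
            coeffs, *_ = np.linalg.lstsq(A, np.asarray(true_xyz, float), rcond=None)
            return coeffs                 # one column of coefficients per output axis

        def apply_mapping(coeffs, pts, degree=3):
            return poly_features(np.asarray(pts, float), degree) @ coeffs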

  17. Hand-eye calibration using a target registration error model.

    PubMed

    Chen, Elvis C S; Morgan, Isabella; Jayarathne, Uditha; Ma, Burton; Peters, Terry M

    2017-10-01

    Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand-eye calibration between the camera and the tracking system. The authors introduce the concept of 'guided hand-eye calibration', where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand-eye calibration as a registration problem between homologous point-line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) is recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera.
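
    A hedged sketch of hand-eye calibration as point-line registration (without the TRE-guided measurement placement described above) is shown below; it assumes a pinhole model with image rays through the camera origin, unit ray directions already derived from the pixel projections, and SciPy for the nonlinear least squares.

        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        def point_line_residuals(params, pts_tracker, ray_dirs_cam):
            """Perpendicular offsets of transformed stylus-tip points from their image rays.
            params = rotation vector (3) + translation (3) taking tracker coordinates into
            the camera frame; each ray passes through the camera origin along a unit vector."""
            R = Rotation.from_rotvec(params[:3]).as_matrix()
            t = params[3:]
            p_cam = pts_tracker @ R.T + t
            along = np.sum(p_cam * ray_dirs_cam, axis=1, keepdims=True) * ray_dirs_cam
            return (p_cam - along).ravel()    # zero when every point lies on its ray

        def hand_eye_point_line(pts_tracker, ray_dirs_cam, x0=None):
            x0 = np.zeros(6) if x0 is None else x0
            sol = least_squares(point_line_residuals, x0, args=(pts_tracker, ray_dirs_cam))
            return sol.x                      # hand-eye rotation vector and translation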

  18. Thematic Mapper. Volume 1: Calibration report flight model, LANDSAT 5

    NASA Technical Reports Server (NTRS)

    Cooley, R. C.; Lansing, J. C.

    1984-01-01

    The calibration of the Flight 1 Model Thematic Mapper is discussed. Spectral response, scan profile, coherent noise, line spread profiles and white light leaks, square wave response, radiometric calibration, and commands and telemetry are specifically addressed.

  19. Signal processing and calibration procedures for in situ diode-laser absorption spectroscopy.

    PubMed

    Werle, P W; Mazzinghi, P; D'Amato, F; De Rosa, M; Maurer, K; Slemr, F

    2004-07-01

    Gas analyzers based on tunable diode-laser spectroscopy (TDLS) provide highly sensitive, fast-response and highly specific in situ measurements of several atmospheric trace gases simultaneously. Under optimum conditions even shot-noise-limited performance can be obtained. For field applications outside the laboratory, practical limitations are important. At ambient mixing ratios below a few parts per billion, spectrometers become increasingly sensitive to noise, interference, drift effects and background changes associated with low-level signals. It is the purpose of this review to address some of the problems encountered at these low levels and to describe a signal processing strategy for trace gas monitoring and a concept for in situ system calibration applicable to tunable diode-laser spectroscopy. To meet the requirement of quality assurance for field measurements and monitoring applications, procedures to check linearity according to International Organization for Standardization (ISO) regulations are described, and some measurements of calibration functions are presented and discussed.

  20. Validation and Calibration of Nuclear Thermal Hydraulics Multiscale Multiphysics Models - Subcooled Flow Boiling Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anh Bui; Nam Dinh; Brian Williams

    In addition to the validation data plan, development of advanced techniques for calibration and validation of complex multiscale, multiphysics nuclear reactor simulation codes is a main objective of the CASL VUQ plan. Advanced modeling of LWR systems normally involves a range of physico-chemical models describing multiple interacting phenomena, such as thermal hydraulics, reactor physics, coolant chemistry, etc., which occur over a wide range of spatial and temporal scales. To a large extent, the accuracy of (and uncertainty in) overall model predictions is determined by the correctness of various sub-models, which are not conservation-law based but empirically derived from measurement data. Such sub-models normally require extensive calibration before the models can be applied to analysis of real reactor problems. This work demonstrates a case study of calibration of a common model of subcooled flow boiling, which is an important multiscale, multiphysics phenomenon in LWR thermal hydraulics. The calibration process is based on a new strategy of model-data integration, in which all sub-models are simultaneously analyzed and calibrated using multiple sets of data of different types. Specifically, both data on large-scale distributions of void fraction and fluid temperature and data on small-scale physics of wall evaporation were used simultaneously in this work's calibration. In a departure from traditional (or common-sense) practice of tuning/calibrating complex models, a modern calibration technique based on statistical modeling and Bayesian inference was employed, which allowed simultaneous calibration of multiple sub-models (and related parameters) using different datasets. Quality of data (relevancy, scalability, and uncertainty) could be taken into consideration in the calibration process. This work presents a step forward in the development and realization of the "CIPS Validation Data Plan" at the Consortium for Advanced Simulation of LWRs to
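
    As a loose illustration of Bayesian calibration against heterogeneous datasets (a toy random-walk Metropolis sampler, not the CASL/VUQ tooling), the sketch below jointly constrains a shared parameter vector with two datasets of different type; data_a, data_b, model_a, model_b, sigma_a and sigma_b are hypothetical placeholders.

        import numpy as np

        rng = np.random.default_rng(0)

        def log_posterior(theta, data_a, data_b, model_a, model_b, sigma_a, sigma_b):
            """Gaussian likelihoods for two heterogeneous datasets that share the parameter
            vector theta, plus a weak Gaussian prior (all placeholders, for illustration)."""
            ra = (data_a - model_a(theta)) / sigma_a
            rb = (data_b - model_b(theta)) / sigma_b
            return -0.5 * (np.sum(ra ** 2) + np.sum(rb ** 2)) - 0.5 * np.sum(theta ** 2) / 100.0

        def metropolis(log_post, theta0, n_steps=5000, step=0.05):
            """Random-walk Metropolis sampler over the joint parameter vector."""
            theta = np.array(theta0, dtype=float)
            lp = log_post(theta)
            chain = []
            for _ in range(n_steps):
                proposal = theta + step * rng.normal(size=theta.size)
                lp_prop = log_post(proposal)
                if np.log(rng.uniform()) < lp_prop - lp:   # accept with Metropolis probability
                    theta, lp = proposal, lp_prop
                chain.append(theta.copy())
            return np.array(chain)

        # Hypothetical usage:
        # chain = metropolis(lambda th: log_posterior(th, data_a, data_b, model_a, model_b,
        #                                             0.1, 0.2), theta0=np.zeros(3))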

  1. Polarimetric SAR calibration experiment using active radar calibrators

    NASA Astrophysics Data System (ADS)

    Freeman, Anthony; Shen, Yuhsyen; Werner, Charles L.

    1990-03-01

    Active radar calibrators are used to derive both the amplitude and phase characteristics of a multichannel polarimetric SAR from the complex image data. Results are presented from an experiment carried out using the NASA/JPL DC-8 aircraft SAR over a calibration site at Goldstone, California. As part of the experiment, polarimetric active radar calibrators (PARCs) with adjustable polarization signatures were deployed. Experimental results demonstrate that the PARCs can be used to calibrate polarimetric SAR images successfully. Restrictions on the application of the PARC calibration procedure are discussed.

  2. Polarimetric SAR calibration experiment using active radar calibrators

    NASA Technical Reports Server (NTRS)

    Freeman, Anthony; Shen, Yuhsyen; Werner, Charles L.

    1990-01-01

    Active radar calibrators are used to derive both the amplitude and phase characteristics of a multichannel polarimetric SAR from the complex image data. Results are presented from an experiment carried out using the NASA/JPL DC-8 aircraft SAR over a calibration site at Goldstone, California. As part of the experiment, polarimetric active radar calibrators (PARCs) with adjustable polarization signatures were deployed. Experimental results demonstrate that the PARCs can be used to calibrate polarimetric SAR images successfully. Restrictions on the application of the PARC calibration procedure are discussed.

  3. Development of a calibration equipment for spectrometer qualification

    NASA Astrophysics Data System (ADS)

    Michel, C.; Borguet, B.; Boueé, A.; Blain, P.; Deep, A.; Moreau, V.; François, M.; Maresi, L.; Myszkowiak, A.; Taccola, M.; Versluys, J.; Stockman, Y.

    2017-09-01

    With the development of new spectrometer concepts, calibration facilities must be adapted to characterize their performances correctly. The main spectro-imaging performances are the modulation transfer function, spectral response, resolution and registration, polarization, straylight and radiometric calibration. The challenge of this calibration development is to achieve better performance than the item under test using mostly standard components. Because only the spectrometer subsystem needs to be calibrated, the calibration facility must simulate the geometrical behaviour of the imaging system. A trade-off study indicated that no commercial device fulfils all the requirements completely, so it was necessary to opt for an in-house telecentric achromatic design. The proposed concept is based on an Offner design, which mainly allows the use of simple spherical mirrors and coverage of the spectral range. The spectral range is covered with a monochromator. Because of the large number of parameters to record, the calibration facility is fully automated. The performances of the calibration system have been verified by analysis and experimentally. Results achieved recently on a free-form grating Offner spectrometer demonstrate the capabilities of this new calibration facility. In this paper, a full calibration facility, developed specifically for a new free-form spectro-imager, is described.

  4. Discrete Regularization for Calibration of Geologic Facies Against Dynamic Flow Data

    NASA Astrophysics Data System (ADS)

    Khaninezhad, Mohammad-Reza; Golmohammadi, Azarang; Jafarpour, Behnam

    2018-04-01

    Subsurface flow model calibration involves many more unknowns than measurements, leading to ill-posed problems with nonunique solutions. To alleviate nonuniqueness, the problem is regularized by constraining the solution space using prior knowledge. In certain sedimentary environments, such as fluvial systems, the contrast in hydraulic properties of different facies types tends to dominate the flow and transport behavior, making the effect of within-facies heterogeneity less significant. Hence, flow model calibration in those formations reduces to delineating the spatial structure and connectivity of different lithofacies types and their boundaries. A major difficulty in calibrating such models is honoring the discrete, or piecewise constant, nature of facies distribution. The problem becomes more challenging when complex spatial connectivity patterns with higher-order statistics are involved. This paper introduces a novel formulation for calibration of complex geologic facies by imposing appropriate constraints to recover plausible solutions that honor the spatial connectivity and discreteness of facies models. To incorporate prior connectivity patterns, plausible geologic features are learned from available training models. This is achieved by learning spatial patterns from training data, e.g., by k-SVD sparse learning or traditional Principal Component Analysis. Discrete regularization is introduced as a penalty function to impose solution discreteness while minimizing the mismatch between observed and predicted data. An efficient gradient-based alternating directions algorithm is combined with variable splitting to minimize the resulting regularized nonlinear least squares objective function. Numerical results show that imposing learned facies connectivity and discreteness as regularization functions leads to geologically consistent solutions that improve facies calibration quality.
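
    As a hedged sketch of the discreteness idea only (the exact penalty and the alternating-directions solver used by the authors may differ), one can add to the data-mismatch objective a term that vanishes only when every cell takes one of the allowed facies values:

        import numpy as np

        def discreteness_penalty(m, facies_values):
            """Zero only where each cell value equals one of the allowed facies values;
            otherwise grows with the squared distance to the nearest allowed value.
            Illustrative stand-in for a discrete regularization term."""
            d = np.abs(np.asarray(m, float)[:, None] - np.asarray(facies_values, float)[None, :])
            return np.sum(np.min(d, axis=1) ** 2)

        def calibration_objective(m, d_obs, forward, lam, facies_values):
            """Data mismatch plus a discreteness regularization term weighted by lam."""
            return np.sum((d_obs - forward(m)) ** 2) + lam * discreteness_penalty(m, facies_values)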

  5. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
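
    A minimal sketch of the model robust regression idea under simplifying assumptions (polynomial parametric model, Gaussian-kernel smoother of the residuals, user-chosen mixing fraction lam) might look like the following; it is not the authors' implementation.

        import numpy as np

        def kernel_smooth(x_train, r_train, x_eval, bandwidth):
            """Nadaraya-Watson smoother of the residuals with a Gaussian kernel."""
            w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / bandwidth) ** 2)
            return (w @ r_train) / np.sum(w, axis=1)

        def model_robust_fit(x, y, degree=2, bandwidth=0.5, lam=0.5):
            """Parametric polynomial fit augmented by a fraction lam of a nonparametric
            fit to its residuals."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            coeffs = np.polyfit(x, y, degree)
            parametric = np.polyval(coeffs, x)
            residual_fit = kernel_smooth(x, y - parametric, x, bandwidth)
            return parametric + lam * residual_fit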

  6. Multi-Dimensional Calibration of Impact Dynamic Models

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Annett, Martin S.; Jackson, Karen E.

    2011-01-01

    NASA Langley, under the Subsonic Rotary Wing Program, recently completed two helicopter tests in support of an in-house effort to study crashworthiness. As part of this effort, work is on-going to investigate model calibration approaches and calibration metrics for impact dynamics models. Model calibration of impact dynamics problems has traditionally assessed model adequacy by comparing time histories from analytical predictions to test at only a few critical locations. Although this approach provides for a direct measure of the model predictive capability, overall system behavior is only qualitatively assessed using full vehicle animations. In order to understand the spatial and temporal relationships of impact loads as they migrate throughout the structure, a more quantitative approach is needed. In this work impact shapes derived from simulated time history data are used to recommend sensor placement and to assess model adequacy using time based metrics and orthogonality multi-dimensional metrics. An approach for model calibration is presented that includes metric definitions, uncertainty bounds, parameter sensitivity, and numerical optimization to estimate parameters to reconcile test with analysis. The process is illustrated using simulated experiment data.

  7. A practical approach to spectral calibration of short wavelength infrared hyper-spectral imaging systems

    NASA Astrophysics Data System (ADS)

    Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan

    2010-02-01

    Near-infrared spectroscopy is a promising, rapidly developing, reliable and noninvasive technique used extensively in biomedicine and in the pharmaceutical industry. With the introduction of acousto-optic tunable filters (AOTF) and highly sensitive InGaAs focal plane sensor arrays, real-time high-resolution hyper-spectral imaging has become feasible for a number of new biomedical in vivo applications. However, due to the specificity of AOTF technology and the lack of spectral calibration standardization, maintaining long-term stability and compatibility of the acquired hyper-spectral images across different systems is still a challenging problem. Efficiently solving both is essential, as the majority of methods for analysis of hyper-spectral images rely on a priori knowledge extracted from large spectral databases, serving as the basis for reliable qualitative or quantitative analysis of various biological samples. In this study, we propose and evaluate fast and reliable spectral calibration of hyper-spectral imaging systems in the short wavelength infrared spectral region. The proposed spectral calibration method is based on light sources or materials exhibiting distinct spectral features, which enable robust non-rigid registration of the acquired spectra. The calibration accounts for all of the components of a typical hyper-spectral imaging system, such as the AOTF, light source, lens and optical fibers. The obtained results indicated that practical, fast and reliable spectral calibration of hyper-spectral imaging systems is possible, thereby assuring long-term stability and inter-system compatibility of the acquired hyper-spectral images.

  8. Bore-sight calibration of the profile laser scanner using a large size exterior calibration field

    NASA Astrophysics Data System (ADS)

    Koska, Bronislav; Křemen, Tomáš; Štroner, Martin

    2014-10-01

    The bore-sight calibration procedure and results for a profile laser scanner using a large exterior calibration field are presented in this paper. The task is part of the Autonomous Mapping Airship (AMA) project, which aims to create a surveying system with specific properties suitable for effective surveying of medium-sized areas (units to tens of square kilometers per day). As is obvious from the project name, an airship is used as a carrier. This vehicle has some specific properties. The most important are its high carrying capacity (15 kg), long flight time (3 hours), high operating safety and special flight characteristics, such as stability of flight in terms of vibrations and the possibility of flying at low speed. The high carrying capacity enables the use of high-quality sensors such as a professional infrared (IR) camera FLIR SC645, a high-end visible spectrum (VIS) digital camera and optics, a tactical-grade INS/GPS sensor iMAR iTracerRT-F200 and a profile laser scanner SICK LD-LRS1000. The calibration method is based on direct laboratory measurement of the coordinate offset (lever arm) and in-flight determination of the rotation offsets (bore-sights). The bore-sight determination is based on the minimization of the squares of individual point distances from measured planar surfaces.

  9. Method and apparatus for calibrating a particle emissions monitor

    DOEpatents

    Flower, W.L.; Renzi, R.F.

    1998-07-07

    The invention discloses a method and apparatus for calibrating particulate emissions monitors, in particular, and sampling probes, in general, without removing the instrument from the system being monitored. A source of one or more specific metals in aerosol (either solid or liquid) or vapor form is housed in the instrument. The calibration operation is initiated by moving a focusing lens, used to focus a light beam onto an analysis location and collect the output light response, from an operating position to a calibration position such that the focal point of the focusing lens is now within a calibration stream issuing from a calibration source. The output light response from the calibration stream can be compared to that derived from an analysis location in the operating position to more accurately monitor emissions within the emissions flow stream. 6 figs.

  10. Method and apparatus for calibrating a particle emissions monitor

    DOEpatents

    Flower, William L.; Renzi, Ronald F.

    1998-07-07

    The instant invention discloses method and apparatus for calibrating particulate emissions monitors, in particular, and sampling probes, in general, without removing the instrument from the system being monitored. A source of one or more specific metals in aerosol (either solid or liquid) or vapor form is housed in the instrument. The calibration operation is initiated by moving a focusing lens, used to focus a light beam onto an analysis location and collect the output light response, from an operating position to a calibration position such that the focal point of the focusing lens is now within a calibration stream issuing from a calibration source. The output light response from the calibration stream can be compared to that derived from an analysis location in the operating position to more accurately monitor emissions within the emissions flow stream.

  11. Calibration of neural networks using genetic algorithms, with application to optimal path planning

    NASA Technical Reports Server (NTRS)

    Smith, Terence R.; Pitney, Gilbert A.; Greenwood, Daniel

    1987-01-01

    Genetic algorithms (GA) are used to search the synaptic weight space of artificial neural systems (ANS) for weight vectors that optimize some network performance function. GAs do not suffer from some of the architectural constraints involved with other techniques and it is straightforward to incorporate terms into the performance function concerning the metastructure of the ANS. Hence GAs offer a remarkably general approach to calibrating ANS. GAs are applied to the problem of calibrating an ANS that finds optimal paths over a given surface. This problem involves training an ANS on a relatively small set of paths and then examining whether the calibrated ANS is able to find good paths between arbitrary start and end points on the surface.
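
    A rough sketch of this approach (not the original 1987 code): a real-valued genetic algorithm searching a synaptic weight vector that maximizes a user-supplied fitness function; the fitness function, population sizes and operators below are assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def genetic_search(fitness, n_weights, pop_size=50, n_gen=200,
                           mutation_scale=0.1, elite=5):
            """Real-valued GA over a synaptic weight vector: keep the fittest individuals,
            refill the population by uniform crossover plus Gaussian mutation."""
            pop = rng.normal(size=(pop_size, n_weights))
            for _ in range(n_gen):
                scores = np.array([fitness(w) for w in pop])
                parents = pop[np.argsort(scores)[::-1][:elite]]   # higher fitness is better
                children = []
                while len(children) < pop_size - elite:
                    a, b = parents[rng.integers(elite, size=2)]
                    mask = rng.random(n_weights) < 0.5            # uniform crossover
                    children.append(np.where(mask, a, b)
                                    + mutation_scale * rng.normal(size=n_weights))
                pop = np.vstack([parents, np.array(children)])
            scores = np.array([fitness(w) for w in pop])
            return pop[np.argmax(scores)]

        # Hypothetical usage: fitness(w) could evaluate the paths produced by a network
        # with weights w on a small training set of start/end points and return, for
        # example, the negative total path cost.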

  12. Actuator-Assisted Calibration of Freehand 3D Ultrasound System.

    PubMed

    Koo, Terry K; Silvia, Nathaniel

    2018-01-01

    Freehand three-dimensional (3D) ultrasound has been used independently of other technologies to analyze complex geometries or registered with other imaging modalities to aid surgical and radiotherapy planning. A fundamental requirement for all freehand 3D ultrasound systems is probe calibration. The purpose of this study was to develop an actuator-assisted approach to facilitate freehand 3D ultrasound calibration using point-based phantoms. We modified the mathematical formulation of the calibration problem to eliminate the need of imaging the point targets at different viewing angles and developed an actuator-assisted approach/setup to facilitate quick and consistent collection of point targets spanning the entire image field of view. The actuator-assisted approach was applied to a commonly used cross wire phantom as well as two custom-made point-based phantoms (original and modified), each containing 7 collinear point targets, and compared the results with the traditional freehand cross wire phantom calibration in terms of calibration reproducibility, point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time. Results demonstrated that the actuator-assisted single cross wire phantom calibration significantly improved the calibration reproducibility and offered similar point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time with respect to the freehand cross wire phantom calibration. On the other hand, the actuator-assisted modified "collinear point target" phantom calibration offered similar precision and accuracy when compared to the freehand cross wire phantom calibration, but it reduced the data acquisition time by 57%. It appears that both actuator-assisted cross wire phantom and modified collinear point target phantom calibration approaches are viable options for freehand 3D ultrasound calibration.

  13. Actuator-Assisted Calibration of Freehand 3D Ultrasound System

    PubMed Central

    2018-01-01

    Freehand three-dimensional (3D) ultrasound has been used independently of other technologies to analyze complex geometries or registered with other imaging modalities to aid surgical and radiotherapy planning. A fundamental requirement for all freehand 3D ultrasound systems is probe calibration. The purpose of this study was to develop an actuator-assisted approach to facilitate freehand 3D ultrasound calibration using point-based phantoms. We modified the mathematical formulation of the calibration problem to eliminate the need of imaging the point targets at different viewing angles and developed an actuator-assisted approach/setup to facilitate quick and consistent collection of point targets spanning the entire image field of view. The actuator-assisted approach was applied to a commonly used cross wire phantom as well as two custom-made point-based phantoms (original and modified), each containing 7 collinear point targets, and compared the results with the traditional freehand cross wire phantom calibration in terms of calibration reproducibility, point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time. Results demonstrated that the actuator-assisted single cross wire phantom calibration significantly improved the calibration reproducibility and offered similar point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time with respect to the freehand cross wire phantom calibration. On the other hand, the actuator-assisted modified “collinear point target” phantom calibration offered similar precision and accuracy when compared to the freehand cross wire phantom calibration, but it reduced the data acquisition time by 57%. It appears that both actuator-assisted cross wire phantom and modified collinear point target phantom calibration approaches are viable options for freehand 3D ultrasound calibration. PMID:29854371

  14. Calibrating and adjusting expectations in life: A grounded theory on how elderly persons with somatic health problems maintain control and balance in life and optimize well-being

    PubMed Central

    Helvik, Anne-Sofie; Iversen, Valentina Cabral; Steiring, Randi; Hallberg, Lillemor R-M

    2011-01-01

    Aim This study aims at exploring the main concern of elderly individuals with somatic health problems and what they do to manage it. Method In total, 14 individuals (mean=74.2 years; range=68–86 years) of both genders, including hospitalized and outpatient persons, participated in the study. Open interviews were conducted and analyzed according to grounded theory, an inductive theory-generating method. Results The main concern for the elderly individuals with somatic health problems was identified as their striving to maintain control and balance in life. The analysis resulted in a substantive theory explaining how elderly individuals with somatic disease calibrate and adjust their expectations in life in order to adapt to their reduced energy level, health problems, and aging. By adjusting their expectations to their actual abilities, the elderly can maintain a sense that they still have control over their lives and create stability. The ongoing adjustment process is facilitated by different strategies and results, despite lower expectations, in subjective well-being. The facilitating strategies are utilizing the network of important others, enjoying cultural heritage, being occupied with interests, having a mission to fulfill, improving the situation by limiting boundaries and, finally, creating meaning in everyday life. Conclusion The main concern of the elderly with somatic health problems was to maintain control and balance in life. The emerging theory explains how elderly people with somatic health problems calibrate their expectations of life in order to adjust to reduced energy, health problems, and aging. This process is facilitated by different strategies and results, despite lower expectations, in subjective well-being. PMID:21468299

  15. SCAMP: Automatic Astrometric and Photometric Calibration

    NASA Astrophysics Data System (ADS)

    Bertin, Emmanuel

    2010-10-01

    Astrometric and photometric calibrations have remained the most tiresome step in the reduction of large imaging surveys. SCAMP has been written to address this problem. The program efficiently computes accurate astrometric and photometric solutions for any arbitrary sequence of FITS images in a completely automatic way. SCAMP is released under the GNU General Public License.

  16. In-flight calibration verification of spaceborne remote sensing instruments

    NASA Astrophysics Data System (ADS)

    LaBaw, Clayton C.

    1990-07-01

    The need to verify the performance of untended instrumentation has been recognized since scientists began sending these instruments into hostile environments to acquire data. The sea floor and the stratosphere have been explored, and the quality and accuracy of the data obtained verified by calibrating the instrumentation in the laboratory, both prior and subsequent to deployment. The inability to make the latter measurements on deep-space missions makes the calibration verification of these instruments a unique problem.

  17. Hand–eye calibration using a target registration error model

    PubMed Central

    Chen, Elvis C. S.; Morgan, Isabella; Jayarathne, Uditha; Ma, Burton; Peters, Terry M.

    2017-01-01

    Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand–eye calibration between the camera and the tracking system. The authors introduce the concept of ‘guided hand–eye calibration’, where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand–eye calibration as a registration problem between homologous point–line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) is recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera. PMID:29184657

  18. Calibration plots for risk prediction models in the presence of competing risks.

    PubMed

    Gerds, Thomas A; Andersen, Per K; Kattan, Michael W

    2014-08-15

    A predicted risk of 17% can be called reliable if it can be expected that the event will occur to about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks such as death due to other causes. For personalized medicine and patient counseling, it is necessary to check that the model is calibrated in the sense that it provides reliable predictions for all subjects. There are three often encountered practical problems when the aim is to display or test if a risk prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with these problems, we propose to estimate calibration curves for competing risks models based on jackknife pseudo-values that are combined with a nearest neighborhood smoother and a cross-validation approach to deal with all three problems. Copyright © 2014 John Wiley & Sons, Ltd.
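
    As a simplified illustration of the pseudo-value idea (uncensored data, single event type; the paper's competing-risks version replaces the sample mean with an Aalen-Johansen estimate and adds cross-validation), a calibration curve can be formed by smoothing jackknife pseudo-values against predicted risk:

        import numpy as np

        def jackknife_pseudo_values(outcome):
            """Leave-one-out pseudo-values of the event probability. With no censoring they
            reduce to the observed 0/1 outcomes; the censored competing-risks version uses
            Aalen-Johansen estimates in place of the sample mean."""
            outcome = np.asarray(outcome, dtype=float)
            n = outcome.size
            theta = outcome.mean()
            theta_loo = (outcome.sum() - outcome) / (n - 1)
            return n * theta - (n - 1) * theta_loo

        def calibration_curve(pred_risk, outcome, k=50):
            """Nearest-neighbour smoothing of pseudo-values against predicted risk."""
            pred_risk = np.asarray(pred_risk, dtype=float)
            pseudo = jackknife_pseudo_values(outcome)
            order = np.argsort(pred_risk)
            p_sorted, ps_sorted = pred_risk[order], pseudo[order]
            observed = np.array([ps_sorted[max(0, i - k // 2): i + k // 2 + 1].mean()
                                 for i in range(p_sorted.size)])
            return p_sorted, observed   # plot observed against p_sorted; the diagonal is ideal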

  19. A frequentist approach to computer model calibration

    DOE PAGES

    Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.

    2016-05-05

    The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. The practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.

  20. Calibration Procedures in Mid Format Camera Setups

    NASA Astrophysics Data System (ADS)

    Pivnicka, F.; Kemper, G.; Geissler, S.

    2012-07-01

    A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow, awareness of the sensitive parts is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform and the specific characteristics of the mid-format camera make a professional setup with various calibration and misalignment operations necessary. An important part is a proper camera calibration. Using aerial images over a well designed test field with 3D structures and/or different flight altitudes enables the determination of calibration values in the Bingo software. It will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted to the camera in a solid and reliable way. Beside the mechanical work, especially in mounting the camera beside the IMU, two lever arms have to be measured to mm accuracy: the lever arm from the GPS antenna to the IMU's calibrated centre and the lever arm from the IMU centre to the camera projection centre. The measurement with a total station is not a difficult task, but the definition of the right centres and the need to use rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can also be used, for which a gyro-based stabilized platform is recommended. As a consequence, the IMU must be mounted beside the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the problematic aspect is that the IMU-to-GPS-antenna lever arm is floating. In fact, an additional data stream has to be handled: the recorded movements of the stabilizer, used to correct the floating lever-arm distances. If the post-processing of the GPS-IMU data, taking the floating lever arms into account, delivers the expected result, the lever arms between IMU and camera can be applied
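
    A minimal sketch of the floating lever-arm correction mentioned above (not the authors' workflow): rotate the nominal IMU-to-antenna lever arm by the recorded stabilizer angles at each epoch; the x-y-z angle convention is an assumption.

        import numpy as np

        def rot_xyz(roll, pitch, yaw):
            """Rotation matrix from stabilizer angles in radians (x-y-z convention assumed)."""
            cr, sr = np.cos(roll), np.sin(roll)
            cp, sp = np.cos(pitch), np.sin(pitch)
            cy, sy = np.cos(yaw), np.sin(yaw)
            Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
            Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
            Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
            return Rz @ Ry @ Rx

        def floating_lever_arm(lever_arm_nominal, stabilizer_angles):
            """Rotate the nominal IMU-to-GPS-antenna lever arm by the recorded stabilizer
            angles at each epoch, giving the time-varying lever arm for post-processing."""
            return np.array([rot_xyz(*angles) @ np.asarray(lever_arm_nominal, float)
                             for angles in stabilizer_angles])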

  1. Preschool children with intellectual disability: syndrome specificity, behaviour problems, and maternal well-being.

    PubMed

    Eisenhower, A S; Baker, B L; Blacher, J

    2005-09-01

    Children with intellectual disability (ID) are at heightened risk for behaviour problems and diagnosed mental disorder. Likewise, mothers of children with ID are more stressed than mothers of typically developing children. Research on behavioural phenotypes suggests that different syndromes of ID may be associated with distinct child behavioural risks and maternal well-being risks. In the present study, maternal reports of child behaviour problems and maternal well-being were examined for syndrome-specific differences. The present authors studied the early manifestation and continuity of syndrome-specific behaviour problems in 215 preschool children belonging to 5 groups (typically developing, undifferentiated developmental delays, Down syndrome, autism, cerebral palsy) as well as the relation of syndrome group to maternal well-being. At age 3, children with autism and cerebral palsy showed the highest levels of behaviour problems, and children with Down syndrome and typically developing children showed the lowest levels. Mothers of children with autism reported more parenting stress than all other groups. These syndrome-specific patterns of behaviour and maternal stress were stable across ages 3, 4 and 5 years, except for relative increases in behaviour problems and maternal stress in the Down syndrome and cerebral palsy groups. Child syndrome contributed to maternal stress even after accounting for differences in behaviour problems and cognitive level. These results, although based on small syndrome groups, suggest that phenotypic expressions of behaviour problems are manifested as early as age 3. These behavioural differences were paralleled by differences in maternal stress, such that mothers of children with autism are at elevated risk for high stress. In addition, there appear to be other unexamined characteristics of these syndromes, beyond behaviour problems, which also contribute to maternal stress.

  2. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least-squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least-squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.

  3. Cryogenic Pressure Calibrator for Wide Temperature Electronically Scanned (ESP) Pressure Modules

    NASA Technical Reports Server (NTRS)

    Faulcon, Nettie D.

    2001-01-01

    Electronically scanned pressure (ESP) modules have been developed that can operate in ambient and in cryogenic environments, particularly Langley's National Transonic Facility (NTF). Because they can operate directly in a cryogenic environment, their use eliminates many of the operational problems associated with using conventional modules at low temperatures. To ensure the accuracy of these new instruments, calibration was conducted in a laboratory simulating the environmental conditions of NTF. This paper discusses the calibration process by means of the simulation laboratory, the system inputs and outputs and the analysis of the calibration data. Calibration results of module M4, a wide temperature ESP module with 16 ports and a pressure range of +/- 4 psid are given.

  4. Specific and social fears in children and adolescents: separating normative fears from problem indicators and phobias.

    PubMed

    Laporte, Paola P; Pan, Pedro M; Hoffmann, Mauricio S; Wakschlag, Lauren S; Rohde, Luis A; Miguel, Euripedes C; Pine, Daniel S; Manfro, Gisele G; Salum, Giovanni A

    2017-01-01

    To distinguish normative fears from problematic fears and phobias. We investigated 2,512 children and adolescents from a large community school-based study, the High Risk Study for Psychiatric Disorders. Parent reports of 18 fears and psychiatric diagnosis were investigated. We used two analytical approaches: confirmatory factor analysis (CFA)/item response theory (IRT) and nonparametric receiver operating characteristic (ROC) curve. According to IRT and ROC analyses, social fears are more likely to indicate problems and phobias than specific fears. Most specific fears were normative when mild; all specific fears indicate problems when pervasive. In addition, the situational fear of toilets and people who look unusual were highly indicative of specific phobia. Among social fears, those not restricted to performance and fear of writing in front of others indicate problems when mild. All social fears indicate problems and are highly indicative of social phobia when pervasive. These preliminary findings provide guidance for clinicians and researchers to determine the boundaries that separate normative fears from problem indicators in children and adolescents, and indicate a differential severity threshold for specific and social fears.

  5. Accuracy evaluation of optical distortion calibration by digital image correlation

    NASA Astrophysics Data System (ADS)

    Gao, Zeren; Zhang, Qingchuan; Su, Yong; Wu, Shangquan

    2017-11-01

    Due to its convenience of operation, the camera calibration algorithm based on a plane template is widely used in image measurement, computer vision and other fields. How to select a suitable distortion model is always a problem to be solved, so there is an urgent need for an experimental evaluation of the accuracy of camera distortion calibrations. This paper presents an experimental method for evaluating camera distortion calibration accuracy which is easy to implement, has high precision, and is suitable for a variety of commonly used lenses. First, we use the digital image correlation method to calculate the in-plane rigid-body displacement field of an image displayed on a liquid crystal display before and after translation, as captured with a camera. Next, we use a calibration board to calibrate the camera and obtain calibration parameters, which are used to correct the calculation points of the image before and after deformation. The displacement fields before and after correction are compared to analyze the distortion calibration results. Experiments were carried out to evaluate the performance of two commonly used industrial camera lenses for four commonly used distortion models.
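
    As a hedged sketch of the evaluation idea, using OpenCV's standard radial-tangential model as a stand-in for the distortion models compared in the paper, matched DIC points can be corrected with the calibrated parameters and the two displacement fields compared:

        import numpy as np
        import cv2

        def displacement_fields(pts_before, pts_after, camera_matrix, dist_coeffs):
            """Displacement field of matched DIC points before and after applying the
            calibrated distortion correction (pixel units)."""
            def correct(pts):
                pts = np.asarray(pts, np.float32).reshape(-1, 1, 2)
                # P=camera_matrix maps the undistorted normalized points back to pixels
                return cv2.undistortPoints(pts, camera_matrix, dist_coeffs,
                                           P=camera_matrix).reshape(-1, 2)
            raw = np.asarray(pts_after, float) - np.asarray(pts_before, float)
            corrected = correct(pts_after) - correct(pts_before)
            # For a rigid in-plane translation the corrected field should be nearly
            # uniform; its scatter measures the residual distortion.
            return raw, corrected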

  6. Waveguide Calibrator for Multi-Element Probe Calibration

    NASA Technical Reports Server (NTRS)

    Sommerfeldt, Scott D.; Blotter, Jonathan D.

    2007-01-01

    A calibrator, referred to as the spider design, can be used to calibrate probes incorporating multiple acoustic sensing elements. The application is an acoustic energy density probe, although the calibrator can be used for other types of acoustic probes. The calibrator relies on acoustic waveguide technology to produce the same acoustic field at each of the sensing elements. As a result, the sensing elements can be separated from each other but still calibrated through use of the acoustic waveguides. Standard calibration techniques involve placement of an individual microphone into a small cavity with a known, uniform pressure to perform the calibration. If a cavity is manufactured with sufficient size to insert the energy density probe, it has been found that a uniform pressure field can only be created at very low frequencies, due to the size of the probe. The size of the energy density probe prevents one from having the same pressure at each microphone in a cavity, due to wave effects. The "spider" design probe is effective in calibrating multiple microphones separated from each other. The spider design ensures that the same wave effects exist for each microphone, each with an individual sound path. The calibrator's speaker is mounted at one end of a small plane-wave tube, 14 cm long and 4.1 cm in diameter. This length was chosen so that the first evanescent cross mode of the plane-wave tube would be attenuated by about 90 dB, thus leaving just the plane wave at the termination plane of the tube. The tube terminates with a small acrylic plate with five holes placed symmetrically about the axis of the speaker. Four ports are included for the four microphones on the probe. The fifth port is included for the pre-calibrated reference microphone. The ports in the acrylic plate are in turn connected to the probe sensing elements via flexible PVC tubes. These five tubes are the same length, so the acoustic wave effects are the same in each tube. The

  7. A Visual Servoing-Based Method for ProCam Systems Calibration

    PubMed Central

    Berry, Francois; Aider, Omar Ait; Mosnier, Jeremie

    2013-01-01

    Projector-camera systems are currently used in a wide field of applications, such as 3D reconstruction and augmented reality, and can provide accurate measurements, depending on the configuration and calibration. Frequently, the calibration task is divided into two steps: camera calibration followed by projector calibration. The latter still poses certain problems that are not easy to solve, such as the difficulty in obtaining a set of 2D–3D points to compute the projection matrix between the projector and the world. Existing methods are either not sufficiently accurate or not flexible. We propose an easy and automatic method to calibrate such systems that consists in projecting a calibration pattern and superimposing it automatically on a known printed pattern. The projected pattern is provided by a virtual camera observing a virtual pattern in an OpenGL model. The projector displays what the virtual camera visualizes. Thus, the projected pattern can be controlled and superimposed on the printed one with the aid of visual servoing. Our experimental results compare favorably with those of other methods considering both usability and accuracy. PMID:24084121

  8. The cost of uniqueness in groundwater model calibration

    NASA Astrophysics Data System (ADS)

    Moore, Catherine; Doherty, John

    2006-04-01

    Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The "cost of uniqueness" is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly "well calibrated". Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for an hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, this possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process to be gained. A comparison of pre- and post-calibration

  9. User guide for the USGS aerial camera Report of Calibration.

    USGS Publications Warehouse

    Tayman, W.P.

    1984-01-01

    Calibration and testing of aerial mapping cameras includes the measurement of optical constants and the check for proper functioning of a number of complicated mechanical and electrical parts. For this purpose the US Geological Survey performs an operational type photographic calibration. This paper is not strictly a scientific paper but rather a 'user guide' to the USGS Report of Calibration of an aerial mapping camera for compliance with both Federal and State mapping specifications. -Author

  10. Development and testing of item response theory-based item banks and short forms for eye, skin and lung problems in sarcoidosis.

    PubMed

    Victorson, David E; Choi, Seung; Judson, Marc A; Cella, David

    2014-05-01

    Sarcoidosis is a multisystem disease that can negatively impact health-related quality of life (HRQL) across generic (e.g., physical, social and emotional wellbeing) and disease-specific (e.g., pulmonary, ocular, dermatologic) domains. Measurement of HRQL in sarcoidosis has largely relied on generic patient-reported outcome tools, with few disease-specific measures available. The purpose of this paper is to present the development and testing of disease-specific item banks and short forms for lung, skin and eye problems, which are part of a new patient-reported outcome (PRO) instrument called the sarcoidosis assessment tool. After prioritizing and selecting the most important disease-specific domains, we wrote new items to reflect disease-specific problems by drawing from patient focus group and clinician expert survey data that were used to create our conceptual model of HRQL in sarcoidosis. Item pools underwent cognitive interviews by sarcoidosis patients (n = 13), and minor modifications were made. These items were administered in a multi-site study (n = 300) to obtain item calibrations and create calibrated short forms using item response theory (IRT) approaches. From the available item pools, we created four new item banks and short forms: (1) skin problems, (2) skin stigma, (3) lung problems, and (4) eye problems. We also created and tested supplemental forms for the most common constitutional symptoms and negative effects of corticosteroids. Several new sarcoidosis-specific PROs were developed and tested using IRT approaches. These new measures can advance more precise and targeted HRQL assessment in sarcoidosis clinical trials and clinical practice.

  11. Model calibration for ice sheets and glaciers dynamics: a general theory of inverse problems in glaciology

    NASA Astrophysics Data System (ADS)

    Giudici, Mauro; Baratelli, Fulvia; Vassena, Chiara; Cattaneo, Laura

    2014-05-01

    Numerical modelling of the dynamic evolution of ice sheets and glaciers requires the solution of discrete equations which are based on physical principles (e.g. conservation of mass, linear momentum and energy) and phenomenological constitutive laws (e.g. Glen's and Fourier's laws). These equations must be accompanied by information on the forcing term and by initial and boundary conditions (IBC) on ice velocity, stress and temperature; on the other hand, the constitutive laws involve many physical parameters, which possibly depend on the ice thermodynamical state. The proper forecast of the dynamics of ice sheets and glaciers (forward problem, FP) requires a precise knowledge of several quantities which appear in the IBCs, in the forcing terms and in the phenomenological laws and which cannot be easily measured in the field at the study scale. Therefore these quantities can be obtained through model calibration, i.e. by the solution of an inverse problem (IP). Roughly speaking, the IP aims at finding the optimal values of the model parameters that yield the best agreement of the model output with the field observations and data. The practical application of IPs is usually formulated as a generalised least squares approach, which can be cast in the framework of Bayesian inference. IPs are well developed in several areas of science and geophysics, and several applications have also been proposed in glaciology. The objective of this paper is to provide a further step towards a thorough and rigorous theoretical framework in cryospheric studies. Although the IP is often claimed to be ill-posed, this is rigorously true for continuous domain models, whereas for numerical models, which require the solution of algebraic equations, the properties of the IP must be analysed with more care. First of all, it is necessary to clarify the role of experimental and monitoring data to determine the calibration targets and the values of the parameters that can be considered to be fixed
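
    As a generic illustration of the generalised least squares formulation mentioned above (a linearised Gaussian special case, not specific to glaciology), the calibrated parameter vector has a closed form; J, the covariances and the prior below are hypothetical inputs.

        import numpy as np

        def generalized_least_squares(J, d_obs, C_d, m_prior, C_m):
            """Closed-form minimizer of
            (d_obs - J m)^T C_d^-1 (d_obs - J m) + (m - m_prior)^T C_m^-1 (m - m_prior),
            i.e. one linearized generalized least squares (Gaussian MAP) update."""
            Cd_inv = np.linalg.inv(C_d)
            Cm_inv = np.linalg.inv(C_m)
            A = J.T @ Cd_inv @ J + Cm_inv
            b = J.T @ Cd_inv @ d_obs + Cm_inv @ m_prior
            return np.linalg.solve(A, b)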

  12. Calibration of Clinical Audio Recording and Analysis Systems for Sound Intensity Measurement.

    PubMed

    Maryn, Youri; Zarowski, Andrzej

    2015-11-01

    Sound intensity is an important acoustic feature of voice/speech signals. Yet recordings are performed with different microphone, amplifier, and computer configurations, and it is therefore crucial to calibrate sound intensity measures of clinical audio recording and analysis systems on the basis of output of a sound-level meter. This study was designed to evaluate feasibility, validity, and accuracy of calibration methods, including audiometric speech noise signals and human voice signals under typical speech conditions. Calibration consisted of 3 comparisons between data from 29 measurement microphone-and-computer systems and data from the sound-level meter: signal-specific comparison with audiometric speech noise at 5 levels, signal-specific comparison with natural voice at 3 levels, and cross-signal comparison with natural voice at 3 levels. Intensity measures from recording systems were then linearly converted into calibrated data on the basis of these comparisons, and validity and accuracy of calibrated sound intensity were investigated. Very strong correlations and quasisimilarity were found between calibrated data and sound-level meter data across calibration methods and recording systems. Calibration of clinical sound intensity measures according to this method is feasible, valid, accurate, and representative for a heterogeneous set of microphones and data acquisition systems in real-life circumstances with distinct noise contexts.
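
    A minimal sketch of the linear conversion step described above, assuming synthetic paired readings from a recording system and a sound-level meter (the values and the fitting code are illustrative, not the study's):

        # Sketch: fit a linear calibration between uncalibrated recording-system intensity
        # readings and reference sound-level meter (SLM) readings, then apply it.
        import numpy as np

        system_db = np.array([52.0, 60.5, 68.8, 77.1, 85.4])   # uncalibrated system readings (dB)
        slm_db = np.array([55.0, 63.0, 71.0, 79.0, 87.0])      # reference SLM readings (dB)

        slope, intercept = np.polyfit(system_db, slm_db, deg=1)

        def calibrate(reading_db):
            # Convert a raw system reading into a calibrated sound intensity level.
            return slope * reading_db + intercept

        print(f"raw 70.0 dB -> calibrated {calibrate(70.0):.1f} dB")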

  13. Improved Radial Velocity Precision with a Tunable Laser Calibrator

    NASA Astrophysics Data System (ADS)

    Cramer, Claire; Brown, S.; Dupree, A. K.; Lykke, K. R.; Smith, A.; Szentgyorgyi, A.

    2010-01-01

    We present radial velocities obtained using a novel laser-based wavelength calibration technique. We have built a prototype laser calibrator for the Hectochelle spectrograph at the MMT 6.5 m telescope. The Hectochelle is a high-dispersion, fiber-fed, multi-object spectrograph capable of recording up to 240 spectra simultaneously with a resolving power of 40000. The standard wavelength calibration method makes use of spectra from thorium-argon hollow cathode lamps shining directly onto the fibers. The difference in light path between calibration and science light as well as the uneven distribution of spectral lines are believed to introduce errors of up to several hundred m/s in the wavelength scale. Our tunable laser wavelength calibrator solves these problems. The laser is bright enough for use with a dome screen, allowing the calibration light path to better match the science light path. Further, the laser is tuned in regular steps across a spectral order to generate a calibration spectrum, creating a comb of evenly-spaced lines on the detector. Using the solar spectrum reflected from the atmosphere to record the same spectrum in every fiber, we show that laser wavelength calibration brings radial velocity uncertainties down below 100 m/s. We present these results as well as an application of tunable laser calibration to stellar radial velocities determined with the infrared Ca triplet in globular clusters M15 and NGC 7492. We also suggest how the tunable laser could be useful for other instruments, including single-object, cross-dispersed echelle spectrographs, and adapted for infrared spectroscopy.

  14. Novel Calibration Algorithm for a Three-Axis Strapdown Magnetometer

    PubMed Central

    Liu, Yan Xia; Li, Xi Sheng; Zhang, Xiao Juan; Feng, Yi Bo

    2014-01-01

    A complete error calibration model with 12 independent parameters is established by analyzing the three-axis magnetometer error mechanism. The said model conforms to an ellipsoid restriction, the parameters of the ellipsoid equation are estimated, and the ellipsoid coefficient matrix is derived. However, the calibration matrix cannot be determined completely, as there are fewer ellipsoid parameters than calibration model parameters. Mathematically, the calibration matrix derived from the ellipsoid coefficient matrix by a different matrix decomposition method is not unique, and there exists an unknown rotation matrix R between them. This paper puts forward a constant intersection angle method (angles between the geomagnetic field and gravitational field are fixed) to estimate R. The Tikhonov method is adopted to solve the problem that rounding errors or other errors may seriously affect the calculation results of R when the condition number of the matrix is very large. The geomagnetic field vector and heading error are further corrected by R. The constant intersection angle method is convenient and practical, as it is free from any additional calibration procedure or coordinate transformation. In addition, the simulation experiment indicates that the heading error declines from ±1° calibrated by classical ellipsoid fitting to ±0.2° calibrated by a constant intersection angle method, and the signal-to-noise ratio is 50 dB. The actual experiment exhibits that the heading error is further corrected from ±0.8° calibrated by the classical ellipsoid fitting to ±0.3° calibrated by a constant intersection angle method. PMID:24831110

  15. Calibration of radio-astronomical data on the cloud. LOFAR, the pathway to SKA

    NASA Astrophysics Data System (ADS)

    Sabater, J.; Sánchez-Expósito, S.; Garrido, J.; Ruiz, J. E.; Best, P. N.; Verdes-Montenegro, L.

    2015-05-01

    The radio interferometer LOFAR (LOw Frequency ARray) is fully operational now. This Square Kilometre Array (SKA) pathfinder allows the observation of the sky at frequencies between 10 and 240 MHz, a relatively unexplored region of the spectrum. LOFAR is a software defined telescope: the data is mainly processed using specialized software running in common computing facilities. That means that the capabilities of the telescope are virtually defined by software and mainly limited by the available computing power. However, the quantity of data produced can quickly reach huge volumes (several Petabytes per day). After the correlation and pre-processing of the data in a dedicated cluster, the final dataset (typically several Terabytes) is handed over to the user. The calibration of these data requires a powerful computing facility in which the specific state-of-the-art software, under heavy continuous development, can be easily installed and updated. That makes this case a perfect candidate for a cloud infrastructure, which adds the advantages of an on-demand, flexible solution. We present our approach to the calibration of LOFAR data using Ibercloud, the cloud infrastructure provided by Ibergrid. With the calibration work-flow adapted to the cloud, we can explore calibration strategies for the SKA and show how private or commercial cloud infrastructures (Ibercloud, Amazon EC2, Google Compute Engine, etc.) can help to solve the problems with big datasets that will be prevalent in the future of astronomy.

  16. Gap Test Calibrations and Their Scaling

    NASA Astrophysics Data System (ADS)

    Sandusky, Harold

    2012-03-01

    Common tests for measuring the threshold for shock initiation are the NOL large scale gap test (LSGT) with a 50.8-mm diameter donor/gap and the expanded large scale gap test (ELSGT) with a 95.3-mm diameter donor/gap. Despite the same specifications for the explosive donor and polymethyl methacrylate (PMMA) gap in both tests, calibration of shock pressure in the gap versus distance from the donor scales by a factor of 1.75, not the 1.875 difference in their sizes. Recently reported model calculations suggest that the scaling discrepancy results from the viscoelastic properties of PMMA in combination with different methods for obtaining shock pressure. This is supported by the consistent scaling of these donors when calibrated in water-filled aquariums. Calibrations and their scaling are compared for other donors with PMMA gaps and for various donors in water.

  17. Identification and testing of countermeasures for specific alcohol accident types and problems. Volume 2, General driver alcohol problem

    DOT National Transportation Integrated Search

    1984-12-01

    This report summarizes work conducted to investigate the feasibility of developing effective countermeasures directed at specific alcohol-related accidents or problems. In Phase I, literature and accident data were reviewed to determine the scope and...

  18. A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times

    PubMed Central

    Heath, Tracy A.

    2012-01-01

    In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343

  19. Automatic alignment method for calibration of hydrometers

    NASA Astrophysics Data System (ADS)

    Lee, Y. J.; Chang, K. H.; Chon, J. C.; Oh, C. Y.

    2004-04-01

    This paper presents a new method to automatically align specific scale-marks for the calibration of hydrometers. A hydrometer calibration system adopting the new method consists of a vision system, a stepping motor, and software to control the system. The vision system is composed of a CCD camera and a frame grabber, and is used to acquire images. The stepping motor moves the camera, which is attached to the vessel containing a reference liquid, along the hydrometer. The operating program has two main functions: to process images from the camera to find the position of the horizontal plane and to control the stepping motor for the alignment of the horizontal plane with a particular scale-mark. Any system adopting this automatic alignment method is a convenient and precise means of calibrating a hydrometer. The performance of the proposed method is illustrated by comparing the calibration results using the automatic alignment method with those obtained using the manual method.

  20. The Importance of Calibration in Clinical Psychology.

    PubMed

    Lindhiem, Oliver; Petersen, Isaac T; Mentch, Lucas K; Youngstrom, Eric A

    2018-02-01

    Accuracy has several elements, not all of which have received equal attention in the field of clinical psychology. Calibration, the degree to which a probabilistic estimate of an event reflects the true underlying probability of the event, has largely been neglected in the field of clinical psychology in favor of other components of accuracy such as discrimination (e.g., sensitivity, specificity, area under the receiver operating characteristic curve). Although it is frequently overlooked, calibration is a critical component of accuracy with particular relevance for prognostic models and risk-assessment tools. With advances in personalized medicine and the increasing use of probabilistic (0% to 100%) estimates and predictions in mental health research, the need for careful attention to calibration has become increasingly important.
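
    As a concrete illustration of calibration in the probabilistic sense discussed above, the sketch below bins simulated risk predictions into a reliability curve and computes a Brier score with scikit-learn; the simulated outcomes and probabilities are not data from the article:

        # Sketch: assess calibration of probabilistic predictions with a reliability curve
        # (observed frequency per bin of predicted probability) and the Brier score.
        import numpy as np
        from sklearn.calibration import calibration_curve
        from sklearn.metrics import brier_score_loss

        rng = np.random.default_rng(1)
        predicted = rng.uniform(0.0, 1.0, size=2000)                  # model risk estimates
        true_prob = np.clip(0.8 * predicted + 0.05, 0.0, 1.0)         # mildly miscalibrated truth
        observed = rng.binomial(1, true_prob)

        prob_true, prob_pred = calibration_curve(observed, predicted, n_bins=10)
        for p_hat, p_obs in zip(prob_pred, prob_true):
            print(f"predicted {p_hat:.2f} -> observed {p_obs:.2f}")
        print("Brier score:", round(brier_score_loss(observed, predicted), 4))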

  1. Calibrating LOFAR using the Black Board Selfcal System

    NASA Astrophysics Data System (ADS)

    Pandey, V. N.; van Zwieten, J. E.; de Bruyn, A. G.; Nijboer, R.

    2009-09-01

    The Black Board SelfCal (BBS) system is designed as the final processing system to carry out the calibration of LOFAR in an efficient way. In this paper we give a brief description of its architectural and software design including its distributed computing approach. A confusion limited deep all sky image (from 38-62 MHz) by calibrating LOFAR test data with the BBS suite is shown as a sample result. The present status and future directions of development of BBS suite are also touched upon. Although BBS is mainly developed for LOFAR, it may also be used to calibrate other instruments once their specific algorithms are plugged in.

  2. Radiometric calibration of Landsat Thematic Mapper multispectral images

    USGS Publications Warehouse

    Chavez, P.S.

    1989-01-01

    A main problem encountered in radiometric calibration of satellite image data is correcting for atmospheric effects. Without this correction, an image digital number (DN) cannot be converted to a surface reflectance value. In this paper the accuracy of a calibration procedure, which includes a correction for atmospheric scattering, is tested. Two simple methods, a stand-alone and an in situ sky radiance measurement technique, were used to derive the HAZE DN values for each of the six reflectance Thematic Mapper (TM) bands. The DNs of two Landsat TM images of Phoenix, Arizona were converted to surface reflectances. -from Author
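
    The haze (dark-object) correction described above fits into the usual DN-to-reflectance chain; the sketch below uses placeholder gains, offsets and solar geometry rather than the calibrated TM coefficients from the paper:

        # Schematic DN -> surface reflectance conversion with a dark-object (haze) subtraction.
        # Gain, offset, ESUN and solar elevation are placeholders, not calibrated TM values.
        import numpy as np

        def dn_to_reflectance(dn, haze_dn, gain, offset, esun, sun_elev_deg, d_au=1.0):
            radiance = gain * dn + offset                # at-sensor radiance
            haze_radiance = gain * haze_dn + offset      # path radiance estimated from the dark object
            theta = np.radians(90.0 - sun_elev_deg)      # solar zenith angle
            return np.pi * (radiance - haze_radiance) * d_au**2 / (esun * np.cos(theta))

        dn = np.array([60, 85, 120])
        print(dn_to_reflectance(dn, haze_dn=55, gain=0.8, offset=-2.0,
                                esun=1550.0, sun_elev_deg=45.0))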

  3. The influence of eating psychopathology on autobiographical memory specificity and social problem-solving.

    PubMed

    Ridout, Nathan; Matharu, Munveen; Sanders, Elizabeth; Wallis, Deborah J

    2015-08-30

    The primary aim was to examine the influence of subclinical disordered eating on autobiographical memory specificity (AMS) and social problem solving (SPS). A further aim was to establish if AMS mediated the relationship between eating psychopathology and SPS. A non-clinical sample of 52 females completed the autobiographical memory test (AMT), where they were asked to retrieve specific memories of events from their past in response to cue words, and the means-end problem-solving task (MEPS), where they were asked to generate means of solving a series of social problems. Participants also completed the Eating Disorders Inventory (EDI) and Hospital Anxiety and Depression Scale. After controlling for mood, high scores on the EDI subscales, particularly Drive-for-Thinness, were associated with the retrieval of fewer specific and a greater proportion of categorical memories on the AMT and with the generation of fewer and less effective means on the MEPS. Memory specificity fully mediated the relationship between eating psychopathology and SPS. These findings have implications for individuals exhibiting high levels of disordered eating, as poor AMS and SPS are likely to impact negatively on their psychological wellbeing and everyday social functioning and could represent a risk factor for the development of clinically significant eating disorders. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  4. Matrix Factorisation-based Calibration For Air Quality Crowd-sensing

    NASA Astrophysics Data System (ADS)

    Dorffer, Clement; Puigt, Matthieu; Delmaire, Gilles; Roussel, Gilles; Rouvoy, Romain; Sagnier, Isabelle

    2017-04-01

    Internet of Things (IoT) is extending internet to physical objects and places. The internet-enabled objects are thus able to communicate with each other and with their users. One main interest of IoT is the ease of production of huge masses of data (Big Data) using distributed networks of connected objects, thus making possible a fine-grained yet accurate analysis of physical phenomena. Mobile crowdsensing is a way to collect data using IoT. It basically consists of acquiring geolocalized data from the sensors (from or connected to the mobile devices, e.g., smartphones) of a crowd of volunteers. The sensed data are then collectively shared using wireless connection—such as GSM or WiFi—and stored on a dedicated server to be processed. One major application of mobile crowdsensing is environment monitoring. Indeed, with the proliferation of miniaturized yet sensitive sensors on one hand and, on the other hand, of low-cost microcontrollers/single-card PCs, it is easy to extend the sensing abilities of smartphones. Alongside the conventional, regulated, bulky and expensive instruments used in authoritative air quality stations, it is then possible to create a large-scale mobile sensor network providing insightful information about air quality. In particular, the finer spatial sampling rate due to such a dense network should allow air quality models to take into account local effects such as street canyons. However, one key issue with low-cost air quality sensors is the lack of trust in the sensed data. In most crowdsensing scenarios, the sensors (i) cannot be calibrated in a laboratory before or during their deployment and (ii) might be sparsely or continuously faulty (thus providing outliers in the data). Such issues should be automatically handled from the sensor readings. Indeed, due to the masses of generated data, solving the above issues cannot be performed by experts but requires specific data processing techniques. In this work, we assume that some mobile

  5. Experimental Demonstration of In-Place Calibration for Time Domain Microwave Imaging System

    NASA Astrophysics Data System (ADS)

    Kwon, S.; Son, S.; Lee, K.

    2018-04-01

    In this study, the experimental demonstration of in-place calibration was conducted using the developed time domain measurement system. Experiments were conducted using three calibration methods—in-place calibration and two existing calibrations, that is, array rotation and differential calibration. The in-place calibration uses dual receivers located at an equal distance from the transmitter. The received signals at the dual receivers contain similar unwanted signals, that is, the directly received signal and antenna coupling. In contrast to the simulations, the antennas are not perfectly matched and there might be unexpected environmental errors. Thus, we experimented with the developed experimental system to demonstrate the proposed method. The possible problems with low signal-to-noise ratio and clock jitter, which may exist in time domain systems, were rectified by averaging repeatedly measured signals. The tumor was successfully detected using the three calibration methods according to the experimental results. The cross correlation was calculated using the reconstructed image of the ideal differential calibration for a quantitative comparison between the existing rotation calibration and the proposed in-place calibration. The mean value of cross correlation between the in-place calibration and ideal differential calibration was 0.80, and the mean value of cross correlation of the rotation calibration was 0.55. Furthermore, the results of simulation were compared with the experimental results to verify the in-place calibration method. A quantitative analysis was also performed, and the experimental results show a tendency similar to the simulation.

  6. Preparation of the calibration unit for LINC-NIRVANA

    NASA Astrophysics Data System (ADS)

    Labadie, Lucas; de Bonis, Fulvio; Egner, Sebastian; Herbst, Tom; Bizenberger, Peter; Kürster, Martin; Delboulé, Alain

    2008-07-01

    We present in this paper the status of the calibration unit for the interferometric infrared imager LINC-NIRVANA that will be installed on the Large Binocular Telescope, Arizona. LINC-NIRVANA will combine high angular resolution (~10 mas in J), and wide field-of-view (up to 2'×2') thanks to the conjunct use of interferometry and MCAO. The goal of the calibration unit is to provide calibration tools for the different sub-systems of the instrument. We give an overview of the different tasks that are foreseen as well as of the preliminary detailed design. We show some interferometric results obtained with specific fiber splitters optimized for LINC-NIRVANA. The different components of the calibration unit will be used either during the integration phase on site, or during the science exploitation phase of the instrument.

  7. The Geostationary Lightning Mapper: Its Performance and Calibration

    NASA Astrophysics Data System (ADS)

    Christian, H. J., Jr.

    2015-12-01

    The Geostationary Lightning Mapper (GLM) has been developed to be an operational instrument on the GOES-R series of spacecraft. The GLM is a unique instrument, unlike other meteorological instruments, both in how it operates and in the information content that it provides. Instrumentally, it is an event detector, rather than an imager. While processing almost a billion pixels per second with 14 bits of resolution, the event detection process reduces the required telemetry bandwidth by a factor of almost 10^5, thus keeping the telemetry requirements modest and enabling efficient ground processing that leads to rapid data distribution to operational users. The GLM was designed to detect about 90 percent of the total lightning flashes within its almost hemispherical field of view. Based on laboratory calibration, we expect the on-orbit detection efficiency to be closer to 85%, making it the highest performing, large area coverage total lightning detector. It has a number of unique design features that will enable it to have near-uniform spatial resolution over most of its field of view and to operate with minimal impact on performance during solar eclipses. The GLM has no dedicated on-orbit calibration system, thus the ground-based calibration provides the basis for the predicted radiometric performance. A number of problems were encountered during the calibration of Flight Model 1. The issues arose from GLM design features, including its wide field of view, fast lens, the narrow-band interference filters located in both object and collimated space, and the fact that the GLM is inherently an event detector, yet the calibration procedures required calibration of both images and events. The GLM calibration techniques were based on those developed for the Lightning Imaging Sensor calibration, but there are enough differences between the sensors that the initial GLM calibration suggested that it is significantly more sensitive than its design parameters. The calibration discrepancies have

  8. Development of a 300 L Calibration Bath for Oceanographic Thermometers

    NASA Astrophysics Data System (ADS)

    Baba, S.; Yamazawa, K.; Nakano, T.; Saito, I.; Tamba, J.; Wakimoto, T.; Katoh, K.

    2017-11-01

    The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) has been developing a 300 L calibration bath to calibrate 24 oceanographic thermometers (OT) simultaneously and thereby reduce the calibration work load necessary to service more than 180 OT every year. This study investigated characteristics of the developed 300 L calibration bath using an SBE 3plus thermometer produced by an OT manufacturer. We also used 11 thermistor thermometers that were calibrated to be traceable to the international temperature scale of 1990 (ITS-90) within 1 mK of standard uncertainty through a collaboration of JAMSTEC and NMIJ/AIST. Results show that the time stability of temperature of the developed bath was within ± 1 mK. Furthermore, the temperature uniformity was ± 1.3 mK. The expanded uncertainty (k=2) obtained by combining the characterized components of the developed 300 L calibration bath was estimated as 2.9 mK, which is much less than the value of 10 mK, the required specification for the calibration uncertainty of the OT. These results demonstrated the utility of this 300 L calibration bath as a device for use with a new calibration system.
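
    The expanded uncertainty quoted above comes from combining standard-uncertainty components in quadrature and applying a coverage factor of k = 2; the sketch below assumes an illustrative budget (only the stability and uniformity figures come from the abstract, the remaining term and the rectangular-distribution treatment are assumptions):

        # Sketch: combine standard-uncertainty components in quadrature, then expand with k = 2.
        # The third component and the divisor sqrt(3) (rectangular bounds) are assumptions.
        import math

        components_mk = {
            "time stability": 1.0 / math.sqrt(3),      # +/- 1 mK bound
            "spatial uniformity": 1.3 / math.sqrt(3),  # +/- 1.3 mK bound
            "reference thermometers": 1.0,             # assumed standard uncertainty (mK)
        }

        combined = math.sqrt(sum(u ** 2 for u in components_mk.values()))
        expanded = 2.0 * combined                      # coverage factor k = 2
        print(f"combined: {combined:.2f} mK, expanded (k=2): {expanded:.2f} mK")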

  9. Hierarchical calibration and validation for modeling bench-scale solvent-based carbon capture. Part 1: Non-reactive physical mass transfer across the wetted wall column: Original Research Article: Hierarchical calibration and validation for modeling bench-scale solvent-based carbon capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Canhai

    A hierarchical model calibration and validation is proposed for quantifying the confidence level of mass transfer prediction using a computational fluid dynamics (CFD) model, where the solvent-based carbon dioxide (CO2) capture is simulated and simulation results are compared to the parallel bench-scale experimental data. Two unit problems with increasing level of complexity are proposed to break down the complex physical/chemical processes of solvent-based CO2 capture into relatively simpler problems to separate the effects of physical transport and chemical reaction. This paper focuses on the calibration and validation of the first unit problem, i.e. the CO2 mass transfer across a falling ethanolamine (MEA) film in the absence of chemical reaction. This problem is investigated both experimentally and numerically using nitrous oxide (N2O) as a surrogate for CO2. To capture the motion of the gas-liquid interface, a volume of fluid method is employed together with a one-fluid formulation to compute the mass transfer between the two phases. Bench-scale parallel experiments are designed and conducted to validate and calibrate the CFD models using a general Bayesian calibration. Two important transport parameters, e.g. Henry's constant and gas diffusivity, are calibrated to produce the posterior distributions, which will be used as the input for the second unit problem to address the chemical adsorption of CO2 across the MEA falling film, where both mass transfer and chemical reaction are involved.

  10. Calibration of Lévy Processes with American Options

    NASA Astrophysics Data System (ADS)

    Achdou, Yves

    We study options on financial assets whose discounted prices are exponential of Lévy processes. The price of an American vanilla option as a function of the maturity and the strike satisfies a linear complementarity problem involving a non-local partial integro-differential operator. It leads to a variational inequality in a suitable weighted Sobolev space. Calibrating the Lévy process may be done by solving an inverse least square problem where the state variable satisfies the previously mentioned variational inequality. We first assume that the volatility is positive: after carefully studying the direct problem, we propose necessary optimality conditions for the least square inverse problem. We also consider the direct problem when the volatility is zero.

  11. An automatic calibration procedure for remote eye-gaze tracking systems.

    PubMed

    Model, Dmitri; Guestrin, Elias D; Eizenman, Moshe

    2009-01-01

    Remote gaze estimation systems use calibration procedures to estimate subject-specific parameters that are needed for the calculation of the point-of-gaze. In these procedures, subjects are required to fixate on a specific point or points at specific time instances. Advanced remote gaze estimation systems can estimate the optical axis of the eye without any personal calibration procedure, but use a single calibration point to estimate the angle between the optical axis and the visual axis (line-of-sight). This paper presents a novel automatic calibration procedure that does not require active user participation. To estimate the angles between the optical and visual axes of each eye, this procedure minimizes the distance between the intersections of the visual axes of the left and right eyes with the surface of a display while subjects look naturally at the display (e.g., watching a video clip). Simulation results demonstrate that the performance of the algorithm improves as the range of viewing angles increases. For a subject sitting 75 cm in front of an 80 cm x 60 cm display (40" TV) the standard deviation of the error in the estimation of the angles between the optical and visual axes is 0.5 degrees.

  12. Calibration of Safecast dose rate measurements.

    PubMed

    Cervone, Guido; Hultquist, Carolynne

    2018-10-01

    A methodology is presented to calibrate contributed Safecast dose rate measurements acquired between 2011 and 2016 in the Fukushima prefecture of Japan. The Safecast data are calibrated using observations acquired by the U.S. Department of Energy at the time of the 2011 Fukushima Daiichi power plant nuclear accident. The methodology performs a series of interpolations between the U.S. government and contributed datasets at specific temporal windows and at corresponding spatial locations. The coefficients found for all the different temporal windows are aggregated and interpolated using quadratic regressions to generate a time dependent calibration function. Normal background radiation, decay rates, and missing values are taken into account during the analysis. Results show that the standard Safecast static transformation function overestimates the official measurements because it fails to capture the presence of two different Cesium isotopes and their changing magnitudes with time. A model is created to predict the ratio of the isotopes from the time of the accident through 2020. The proposed time dependent calibration takes into account this Cesium isotopes ratio, and it is shown to reduce the error between U.S. government and contributed data. The proposed calibration is needed through 2020, after which date the errors introduced by ignoring the presence of different isotopes will become negligible. Copyright © 2018 Elsevier Ltd. All rights reserved.
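
    A minimal sketch of the time-dependent calibration idea, fitting a quadratic regression to per-window calibration coefficients so the conversion varies smoothly with time; the window times and coefficients below are synthetic, not the Safecast/DOE values:

        # Sketch: fit a quadratic trend to per-window calibration coefficients and use it to
        # build a time-dependent calibration of contributed dose-rate readings. Synthetic values.
        import numpy as np

        window_days = np.array([30, 120, 300, 600, 1000, 1500])          # days since the accident
        window_coeffs = np.array([0.95, 0.88, 0.80, 0.71, 0.66, 0.63])   # per-window scale factors

        quad = np.polyfit(window_days, window_coeffs, deg=2)             # quadratic regression

        def calibrate(dose_rate, days_since_accident):
            # Scale a contributed reading by the time-dependent calibration factor.
            return np.polyval(quad, days_since_accident) * dose_rate

        print(round(calibrate(dose_rate=0.35, days_since_accident=800), 3))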

  13. Self-calibration performance in stereoscopic PIV acquired in a transonic wind tunnel

    DOE PAGES

    Beresh, Steven J.; Wagner, Justin L.; Smith, Barton L.

    2016-03-16

    Three stereoscopic PIV experiments have been examined to test the effectiveness of self-calibration under varied circumstances. Measurements taken in a streamwise plane yielded a robust self-calibration that returned common results regardless of the specific calibration procedure, but measurements in the crossplane exhibited substantial velocity bias errors whose nature was sensitive to the particulars of the self-calibration approach. Self-calibration is complicated by thick laser sheets and large stereoscopic camera angles and further exacerbated by small particle image diameters and high particle seeding density. In spite of the different answers obtained by varied self-calibrations, each implementation locked onto an apparently valid solution with small residual disparity and converged adjustment of the calibration plane. Thus, the convergence of self-calibration on a solution with small disparity is not sufficient to indicate negligible velocity error due to the stereo calibration.

  14. Calibration of hydrological models using flow-duration curves

    NASA Astrophysics Data System (ADS)

    Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.

    2011-07-01

    The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. While the method appears less sensitive to epistemic input/output errors than previous use of limits of acceptability applied
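
    A minimal sketch of the evaluation-point check on the flow-duration curve; the synthetic flows and acceptability limits below stand in for the observed uncertain FDC and are not the WASMOD/Dynamic TOPMODEL setups used in the paper:

        # Sketch: compare a simulated flow-duration curve (FDC) against limits of acceptability
        # at selected evaluation points (EPs). Flows and limits are synthetic.
        import numpy as np

        def flow_duration_curve(q):
            # Return exceedance probabilities and the corresponding sorted flows.
            q_sorted = np.sort(q)[::-1]
            exceedance = np.arange(1, q.size + 1) / (q.size + 1)
            return exceedance, q_sorted

        rng = np.random.default_rng(2)
        q_sim = rng.lognormal(mean=1.0, sigma=0.9, size=3650)     # simulated daily discharge

        exceedance, q_fdc = flow_duration_curve(q_sim)

        # EPs (exceedance probabilities) with lower/upper limits of acceptability, which in the
        # real method are derived from the uncertainty in the observed discharge data.
        eval_points = np.array([0.05, 0.30, 0.70, 0.95])
        lower = np.array([8.0, 2.0, 1.0, 0.3])
        upper = np.array([25.0, 6.0, 3.0, 1.5])

        q_at_ep = np.interp(eval_points, exceedance, q_fdc)
        accepted = np.all((q_at_ep >= lower) & (q_at_ep <= upper))
        print("simulated FDC at EPs:", np.round(q_at_ep, 2), "accepted:", bool(accepted))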

  15. Calibration of hydrological models using flow-duration curves

    NASA Astrophysics Data System (ADS)

    Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.

    2010-12-01

    The degree of belief we have in predictions from hydrologic models depends on how well they can reproduce observations. Calibrations with traditional performance measures such as the Nash-Sutcliffe model efficiency are challenged by problems including: (1) uncertain discharge data, (2) variable importance of the performance with flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. A new calibration method using flow-duration curves (FDCs) was developed which addresses these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) of the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments without resulting in overpredicted simulated uncertainty. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application e.g. using more/less EPs at high/low flows. While the new method is less sensitive to epistemic input/output errors than the normal use of limits of

  16. Maternal anxiety versus depressive disorders: specific relations to infants' crying, feeding and sleeping problems.

    PubMed

    Petzoldt, J; Wittchen, H-U; Einsle, F; Martini, J

    2016-03-01

    Maternal depression has been associated with excessive infant crying, feeding and sleeping problems, but the specificity of maternal depression, as compared with maternal anxiety remains unclear and manifest disorders prior to pregnancy have been widely neglected. In this prospective longitudinal study, the specific associations of maternal anxiety and depressive disorders prior to, during and after pregnancy and infants' crying, feeding and sleeping problems were investigated in the context of maternal parity. In the Maternal Anxiety in Relation to Infant Development (MARI) Study, n = 306 primiparous and multiparous women were repeatedly interviewed from early pregnancy until 16 months post partum with the Composite International Diagnostic Interview for Women (CIDI-V) to assess DSM-IV anxiety and depressive disorders. Information on excessive infant crying, feeding and sleeping problems was obtained from n = 286 mothers during postpartum period via questionnaire and interview (Baby-DIPS). Findings from this study revealed syndrome-specific risk constellations for maternal anxiety and depressive disorders as early as prior to pregnancy: Excessive infant crying (10.1%) was specifically associated with maternal anxiety disorders, especially in infants of younger and lower educated first-time mothers. Feeding problems (36.4%) were predicted by maternal anxiety (and comorbid depressive) disorders in primiparous mothers and infants with lower birth weight. Infant sleeping problems (12.2%) were related to maternal depressive (and comorbid anxiety) disorders irrespective of maternal parity. Primiparous mothers with anxiety disorders may be more prone to anxious misinterpretations of crying and feeding situations leading to an escalation of mother-infant interactions. The relation between maternal depressive and infant sleeping problems may be better explained by a transmission of unsettled maternal sleep to the fetus during pregnancy or a lack of daily

  17. MODIS calibration

    NASA Technical Reports Server (NTRS)

    Barker, John L.

    1992-01-01

    The MODIS/MCST (MODIS Characterization Support Team) Status Report contains an outline of the calibration strategy, handbook, and plan. It also contains an outline of the MODIS/MCST action item from the 4th EOS Cal/Val Meeting, for which the objective was to locate potential MODIS calibration targets on the Earth's surface that are radiometrically homogeneous on a scale of 3 by 3 Km. As appendices, draft copies of the handbook table of contents, calibration plan table of contents, and detailed agenda for MODIS calibration working group are included.

  18. Ring Laser Gyro G-Sensitive Misalignment Calibration in Linear Vibration Environments.

    PubMed

    Wang, Lin; Wu, Wenqi; Li, Geng; Pan, Xianfei; Yu, Ruihang

    2018-02-16

    The ring laser gyro (RLG) dither axis will bend and exhibit errors due to the specific forces acting on the instrument; these errors are known as g-sensitive misalignments of the gyros. The g-sensitive misalignments of the RLG triad will cause severe attitude error in vibration or maneuver environments where large-amplitude specific forces and angular rates coexist. However, g-sensitive misalignments are usually ignored when calibrating the strapdown inertial navigation system (SINS). This paper proposes a novel method to calibrate the g-sensitive misalignments of an RLG triad in linear vibration environments. With the SINS attached to a linear vibration bench through outer rubber dampers, rocking of the SINS occurs when linear vibration is applied. Linear vibration environments can therefore be created to simulate the harsh environment during aircraft flight. By analyzing the mathematical model of g-sensitive misalignments, the relationship between attitude errors and specific forces as well as angular rates is established, whereby a calibration scheme with approximately optimal observations is designed. Vibration experiments are conducted to calibrate the g-sensitive misalignments of the RLG triad. The vibration tests also show that the SINS velocity error decreases significantly after g-sensitive misalignment compensation.

  19. Automated Mounting Bias Calibration for Airborne LIDAR System

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Jiang, W.; Jiang, S.

    2012-07-01

    Mounting bias is the major error source of an airborne LIDAR system. In this paper, an automated calibration method for estimating LIDAR system mounting parameters is introduced. The LIDAR direct geo-referencing model is used to calculate systematic errors. Because LIDAR footprints are discretely sampled, truly corresponding laser points hardly exist between different strips, so the traditional corresponding-point methodology does not apply well to LIDAR strip registration. We propose a Virtual Corresponding Point Model (VCPM) to resolve the correspondence problem among discrete laser points. Each VCPM contains a corresponding point and three real laser footprints. Two rules are defined to calculate the tie point coordinates from the real laser footprints. The Scale Invariant Feature Transform (SIFT) is used to extract corresponding points in LIDAR strips, and the automatic work flow of LIDAR system calibration based on the VCPM is described in detail. Practical examples illustrate the feasibility and effectiveness of the proposed calibration method.

  20. The Calibration and error analysis of Shallow water (less than 100m) Multibeam Echo-Sounding System

    NASA Astrophysics Data System (ADS)

    Lin, M.

    2016-12-01

    Multibeam echo-sounders (MBES) have been developed to gather bathymetric and acoustic data for more efficient and more exact mapping of the oceans. This gain in efficiency does not come without drawbacks: the finer the resolution of remote sensing instruments, the harder they are to calibrate, and this is the case for multibeam echo-sounding systems. We are no longer dealing with single sounding lines between which the bathymetry must be interpolated to produce consistent representations of the seafloor; we now need to match together strips (swaths) of totally ensonified seabed. As a consequence, misalignment and time lag problems emerge as artifacts in the bathymetry from adjacent or overlapping swaths, particularly when operating in shallow water. More importantly, one must still verify that bathymetric data meet the accuracy requirements. This paper aims to summarize the system integration involved with MBES and to identify the various sources of error pertaining to shallow water surveys (100 m and less). A systematic method for the calibration of shallow water MBES is proposed and presented as a set of field procedures. The procedures aim at detecting, quantifying and correcting systematic instrumental and installation errors. Hence, calibrating for variations of the speed of sound in the water column, which are natural in origin, is not addressed in this document. The data used in the calibration are compared against International Hydrographic Organization (IHO) and other related standards. This paper also aims to establish a model for the specific survey area that can calibrate the errors due to the instruments. We construct a patch-test procedure, identify the possible sources of error in the sounding data, and then calculate the error values needed to compensate for them. In general, the problems to be solved are the four patch-test corrections in the Hypack system: (1) roll, (2) GPS latency, (3) pitch and (4) yaw. Because these four corrections affect each other, we run each survey line

  1. A calibration method for patient specific IMRT QA using a single therapy verification film

    PubMed Central

    Shukla, Arvind Kumar; Oinam, Arun S.; Kumar, Sanjeev; Sandhu, I.S.; Sharma, S.C.

    2013-01-01

    Aim The aim of the present study is to develop and verify a single-film calibration procedure for use in intensity-modulated radiation therapy (IMRT) quality assurance. Background Radiographic films have been regularly used in routine commissioning of treatment modalities and verification of treatment planning systems (TPS). Radiation dosimetry based on radiographic films is able to give absolute two-dimensional dose distributions and is preferred for IMRT quality assurance. A single therapy verification film provides a quick and reliable method for IMRT verification. Materials and methods A single extended dose range (EDR 2) film was used to generate the sensitometric curve relating film optical density to radiation dose. The EDR 2 film was exposed with nine 6 cm × 6 cm fields of a 6 MV photon beam obtained from a medical linear accelerator at 5-cm depth in a solid water phantom. The nine regions of the single film were exposed with radiation doses ranging from 10 to 362 cGy. The actual dose measurements inside the field regions were performed using a 0.6 cm3 ionization chamber. The exposed film was processed after irradiation, digitized using a VIDAR film scanner, and the optical density value was noted for each region. Ten IMRT plans of head and neck carcinoma were used for verification with a dynamic IMRT technique and evaluated against the TPS-calculated dose distribution using the gamma index method. Results A sensitometric curve was generated from the single film exposed at nine field regions to enable quantitative dose verification of IMRT treatments. The radiation scatter factor was observed to decrease exponentially with increasing distance from the centre of each field region. The IMRT plans verified against this calibration curve using the gamma index method were found to be within the acceptance criteria. Conclusion The single-film method proved to be superior to the traditional calibration method and produces fast daily film calibration for highly
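
    The single-film sensitometric step can be sketched as fitting dose against optical density over the nine regions and inverting the fit for measured films; the OD/dose pairs and the polynomial order below are illustrative assumptions, not the study's data:

        # Sketch: build a sensitometric (dose vs. optical density) calibration from nine film
        # regions and use it to convert measured optical densities to dose. Values are invented.
        import numpy as np

        dose_cgy = np.array([10, 50, 90, 135, 180, 225, 270, 315, 362])
        optical_density = np.array([0.21, 0.58, 0.92, 1.24, 1.52, 1.78, 2.01, 2.22, 2.41])

        # A low-order polynomial of dose as a function of OD is one pragmatic choice of fit.
        coeffs = np.polyfit(optical_density, dose_cgy, deg=3)

        def od_to_dose(od):
            return np.polyval(coeffs, od)

        print(f"OD 1.40 -> {od_to_dose(1.40):.1f} cGy")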

  2. A new polarimetric active radar calibrator and calibration technique

    NASA Astrophysics Data System (ADS)

    Tang, Jianguo; Xu, Xiaojian

    2015-10-01

    A polarimetric active radar calibrator (PARC) is one of the most important calibrators with high radar cross section (RCS) for polarimetry measurement. In this paper, a new double-antenna polarimetric active radar calibrator (DPARC) is proposed, which consists of two rotatable antennas with wideband electromagnetic polarization filters (EMPF) to achieve lower cross-polarization for transmission and reception. With two antennas that are rotatable around the radar line of sight (LOS), the DPARC provides a variety of standard polarimetric scattering matrices (PSM) through the rotation combination of receiving and transmitting polarization, which are useful for polarimetric calibration in different applications. In addition, a technique based on Fourier analysis is proposed for calibration processing. Numerical simulation results are presented to demonstrate the superior performance of the proposed DPARC and processing technique.

  3. The Role of Problem Specification Workshops in Extension: An IPM Example.

    ERIC Educational Resources Information Center

    Foster, John; And Others

    1995-01-01

    Of three extension models--top-down technology transfer, farmers-first approach, and participatory research--the latter extends elements of the other two into a more comprehensive analysis of a problem and specification of solution strategies. An Australian integrated pest management (IPM) example illustrates how structured workshops are a useful…

  4. Two combinatorial optimization problems for SNP discovery using base-specific cleavage and mass spectrometry.

    PubMed

    Chen, Xin; Wu, Qiong; Sun, Ruimin; Zhang, Louxin

    2012-01-01

    The discovery of single-nucleotide polymorphisms (SNPs) has important implications in a variety of genetic studies on human diseases and biological functions. One valuable approach proposed for SNP discovery is based on base-specific cleavage and mass spectrometry. However, it is still very challenging to achieve the full potential of this SNP discovery approach. In this study, we formulate two new combinatorial optimization problems. While both problems are aimed at reconstructing the sample sequence that would attain the minimum number of SNPs, they search over different candidate sequence spaces. The first problem, denoted as SNP - MSP, limits its search to sequences whose in silico predicted mass spectra have all their signals contained in the measured mass spectra. In contrast, the second problem, denoted as SNP - MSQ, limits its search to sequences whose in silico predicted mass spectra instead contain all the signals of the measured mass spectra. We present an exact dynamic programming algorithm for solving the SNP - MSP problem and also show that the SNP - MSQ problem is NP-hard by a reduction from a restricted variation of the 3-partition problem. We believe that an efficient solution to either problem above could offer a seamless integration of information in four complementary base-specific cleavage reactions, thereby improving the capability of the underlying biotechnology for sensitive and accurate SNP discovery.

  5. A tunable laser system for precision wavelength calibration of spectra

    NASA Astrophysics Data System (ADS)

    Cramer, Claire

    2010-02-01

    We present a novel laser-based wavelength calibration technique that improves the precision of astronomical spectroscopy, and solves a calibration problem inherent to multi-object spectroscopy. We have tested a prototype with the Hectochelle spectrograph at the MMT 6.5 m telescope. The Hectochelle is a high-dispersion, fiber-fed, multi-object spectrograph capable of recording up to 240 spectra simultaneously with a resolving power of 40000. The standard wavelength calibration method makes use of spectra from ThAr hollow-cathode lamps shining directly onto the fibers. The difference in light path between calibration and science light as well as the uneven distribution of spectral lines are believed to introduce errors of up to several hundred m/s in the wavelength scale. Our tunable laser wavelength calibrator is bright enough for use with a dome screen, allowing the calibration light path to better match the science light path. Further, the laser is tuned in regular steps across a spectral order, creating a comb of evenly-spaced lines on the detector. Using the solar spectrum reflected from the atmosphere to record the same spectrum in every fiber, we show that laser wavelength calibration brings radial velocity uncertainties down below 100 m/s. We also present results from studies of globular clusters, and explain how the calibration technique can aid in stellar age determinations, studies of young stars, and searches for dark matter clumping in the galactic halo.

  6. Preserving Flow Variability in Watershed Model Calibrations

    EPA Science Inventory

    Background/Question/Methods Although watershed modeling flow calibration techniques often emphasize a specific flow mode, ecological conditions that depend on flow-ecology relationships often emphasize a range of flow conditions. We used informal likelihood methods to investig...

  7. Gap Test Calibrations and Their Scaling

    NASA Astrophysics Data System (ADS)

    Sandusky, Harold

    2011-06-01

    Common tests for measuring the threshold for shock initiation are the NOL large scale gap test (LSGT) with a 50.8-mm diameter donor/gap and the expanded large scale gap test (ELSGT) with a 95.3-mm diameter donor/gap. Despite the same specifications for the explosive donor and polymethyl methacrylate (PMMA) gap in both tests, calibration of shock pressure in the gap versus distance from the donor scales by a factor of 1.75, not the 1.875 difference in their sizes. Recently reported model calculations suggest that the scaling discrepancy results from the viscoelastic properties of PMMA in combination with different methods for obtaining shock pressure. This is supported by the consistent scaling of these donors when calibrated in water-filled aquariums. Calibrations with water gaps will be provided and compared with PMMA gaps. Scaling for other donor systems will also be provided. Shock initiation data with water gaps will be reviewed.

  8. Perceptual Calibration for Immersive Display Environments

    PubMed Central

    Ponto, Kevin; Gleicher, Michael; Radwin, Robert G.; Shin, Hyun Joon

    2013-01-01

    The perception of objects, depth, and distance has been repeatedly shown to be divergent between virtual and physical environments. We hypothesize that many of these discrepancies stem from incorrect geometric viewing parameters, specifically that physical measurements of eye position are insufficiently precise to provide proper viewing parameters. In this paper, we introduce a perceptual calibration procedure derived from geometric models. While most research has used geometric models to predict perceptual errors, we instead use these models inversely to determine perceptually correct viewing parameters. We study the advantages of these new psychophysically determined viewing parameters compared to the commonly used measured viewing parameters in an experiment with 20 subjects. The perceptually calibrated viewing parameters for the subjects generally produced new virtual eye positions that were wider and deeper than standard practices would estimate. Our study shows that perceptually calibrated viewing parameters can significantly improve depth acuity, distance estimation, and the perception of shape. PMID:23428454

  9. OPTICAL–NEAR-INFRARED PHOTOMETRIC CALIBRATION OF M DWARF METALLICITY AND ITS APPLICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hejazi, N.; Robertis, M. M. De; Dawson, P. C., E-mail: nedahej@yorku.ca, E-mail: mmdr@yorku.ca, E-mail: pdawson@trentu.ca

    2015-04-15

    Based on a carefully constructed sample of dwarf stars, a new optical–near-infrared photometric calibration to estimate the metallicity of late-type K and early-to-mid-type M dwarfs is presented. The calibration sample has two parts; the first part includes 18 M dwarfs with metallicities determined by high-resolution spectroscopy and the second part contains 49 dwarfs with metallicities obtained through moderate-resolution spectra. By applying this calibration to a large sample of around 1.3 million M dwarfs from the Sloan Digital Sky Survey and 2MASS, the metallicity distribution of this sample is determined and compared with those of previous studies. Using photometric parallaxes, the Galactic heights of M dwarfs in the large sample are also estimated. Our results show that stars farther from the Galactic plane, on average, have lower metallicity, which can be attributed to the age–metallicity relation. A scarcity of metal-poor dwarf stars in the metallicity distribution relative to the Simple Closed Box Model indicates the existence of the “M dwarf problem,” similar to the previously known G and K dwarf problems. Several more complicated Galactic chemical evolution models which have been proposed to resolve the G and K dwarf problems are tested and it is shown that these models could, to some extent, mitigate the M dwarf problem as well.

  10. Trend analysis of Terra/ASTER/VNIR radiometric calibration coefficient through onboard and vicarious calibrations as well as cross calibration with MODIS

    NASA Astrophysics Data System (ADS)

    Arai, Kohei

    2012-07-01

    More than 11 years of Radiometric Calibration Coefficients (RCC) derived from onboard and vicarious calibrations are compared, together with a cross comparison to the well-calibrated MODIS RCC. Fault Tree Analysis (FTA) is also conducted to clarify possible causes of the RCC degradation, together with a sensitivity analysis for vicarious calibration. One suspected cause of the RCC degradation is identified through the FTA. The test-site dependency of the vicarious calibration is quite obvious. This is because the vicarious calibration RCC is sensitive to the surface reflectance measurement accuracy rather than to the atmospheric optical depth. The results of the cross calibration with MODIS confirm the significant sensitivity of the vicarious calibration to the surface reflectance measurements.

  11. A Theoretical Framework for Calibration in Computer Models: Parametrization, Estimation and Convergence Properties

    DOE PAGES

    Tuo, Rui; Jeff Wu, C. F.

    2016-07-19

    Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are not available in physical experiments. Here, an approach is presented to estimate them by using data from physical experiments and computer simulations. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original KO method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called the L2 calibration, is proposed and proven to be L2-consistent and to enjoy an optimal convergence rate. Furthermore, a numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.
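
    The L2 calibration idea, choosing the calibration parameter that minimises the L2 distance between the physical response surface and the computer model over the input domain, can be sketched with toy functions; the true process, simulator and design points below are invented for illustration:

        # Sketch of L2 calibration: pick theta minimising the (approximate) integrated squared
        # discrepancy between the physical response and the computer model. Toy functions only.
        import numpy as np
        from scipy.optimize import minimize_scalar

        x_design = np.linspace(0.0, 1.0, 50)

        def physical_response(x):
            return np.sin(2 * np.pi * x) + 0.3 * x     # stand-in for the (estimated) true process

        def simulator(x, theta):
            return np.sin(2 * np.pi * x) + theta * x   # computer model with calibration parameter

        def l2_discrepancy(theta):
            diff = physical_response(x_design) - simulator(x_design, theta)
            return np.mean(diff ** 2)                  # proxy for the squared L2 distance

        result = minimize_scalar(l2_discrepancy, bounds=(-2.0, 2.0), method="bounded")
        print("L2-calibrated theta:", round(result.x, 3))   # close to 0.3 for this toy setup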

  12. Calibration Matters: Advances in Strapdown Airborne Gravimetry

    NASA Astrophysics Data System (ADS)

    Becker, D.

    2015-12-01

    Using a commercial navigation-grade strapdown inertial measurement unit (IMU) for airborne gravimetry can be advantageous in terms of cost, handling, and space consumption compared to the classical stable-platform spring gravimeters. Up to now, however, large sensor errors made it impossible to reach the mGal level using IMUs of this type, as they are not designed or optimized for this kind of application. Apart from proper error modeling in the filtering process, specific calibration methods that are tailored to the application of aerogravity may help to bridge this gap and to improve their performance. Based on simulations, a quantitative analysis is presented of how much IMU sensor errors, such as biases, scale factors, cross couplings, and thermal drifts, distort the determination of gravity and the deflection of the vertical (DOV). Several lab and in-field calibration methods are briefly discussed, and calibration results are shown for an iMAR RQH unit. In particular, a thermal lab calibration of its QA2000 accelerometers greatly improved the long-term drift behavior. The latest results from four recent airborne gravimetry campaigns confirm the effectiveness of the calibrations applied, with cross-over accuracies reaching 1.0 mGal (0.6 mGal after cross-over adjustment) and DOV accuracies reaching 1.1 arc seconds after cross-over adjustment.

  13. Four years of Landsat-7 on-orbit geometric calibration and performance

    USGS Publications Warehouse

    Lee, D.S.; Storey, James C.; Choate, M.J.; Hayes, R.W.

    2004-01-01

    Unlike its predecessors, Landsat-7 has undergone regular geometric and radiometric performance monitoring and calibration since launch in April 1999. This ongoing activity, which includes issuing quarterly updates to calibration parameters, has generated a wealth of geometric performance data over the four-year on-orbit period of operations. A suite of geometric characterization (measurement and evaluation procedures) and calibration (procedures to derive improved estimates of instrument parameters) methods are employed by the Landsat-7 Image Assessment System to maintain the geometric calibration and to track specific aspects of geometric performance. These include geodetic accuracy, band-to-band registration accuracy, and image-to-image registration accuracy. These characterization and calibration activities maintain image product geometric accuracy at a high level - by monitoring performance to determine when calibration is necessary, generating new calibration parameters, and verifying that new parameters achieve desired improvements in accuracy. Landsat-7 continues to meet and exceed all geometric accuracy requirements, although aging components have begun to affect performance.

  14. Recent Loads Calibration Experience With a Delta Wing Airplane

    NASA Technical Reports Server (NTRS)

    Jenkins, Jerald M.; Kuhl, Albert E.

    1977-01-01

    Aircraft which are designed for supersonic and hypersonic flight are evolving with delta wing configurations. An integral part of the evolution of all new aircraft is the flight test phase. Included in the flight test phase is an effort to identify and evaluate the loads environment of the aircraft. The most effective way of examining the loads environment is to utilize calibrated strain gages to provide load magnitudes. Using strain gage data to accomplish this has turned out to be anything but a straightforward task. The delta wing configuration has turned out to be a very difficult type of wing structure to calibrate. Elevated structural temperatures result in thermal effects which contaminate strain gage data being used to deduce flight loads. The concept of thermally calibrating a strain gage system is an approach to solving this problem. This paper will address how these problems were approached on a program directed toward measuring loads on the wing of a large, flexible supersonic aircraft. Structural configurations typical of high-speed delta wing aircraft will be examined. The temperature environment will be examined to see how it induces thermal stresses which subsequently cause errors in loads equations used to deduce the flight loads.

  15. The calibration of video cameras for quantitative measurements

    NASA Technical Reports Server (NTRS)

    Snow, Walter L.; Childers, Brooks A.; Shortis, Mark R.

    1993-01-01

    Several different recent applications of velocimetry at Langley Research Center are described in order to show the need for video camera calibration for quantitative measurements. Problems peculiar to video sensing are discussed, including synchronization and timing, targeting, and lighting. The extension of the measurements to include radiometric estimates is addressed.

  16. Efficient Calibration of Distributed Catchment Models Using Perceptual Understanding and Hydrologic Signatures

    NASA Astrophysics Data System (ADS)

    Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.

    2015-12-01

    Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels), may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information together to identify a behavioural region of state-space, and efficiently search a large, complex parameter space to identify behavioural parameter sets that produce predictions that fall within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The state space is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
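
    As a minimal sketch of the interval-based screening described above (our own construction; the metric names, limits, and values are hypothetical, and the real study couples such checks to the Borg MOEA and the PIHM model), the following Python checks whether a candidate parameter set's simulated signatures fall inside behavioural intervals and reports a per-metric distance that a multi-objective search could drive to zero:

        # Minimal sketch of interval-based behavioural screening for model calibration.
        # All names, metrics, and limits are illustrative; they are not taken from the paper.

        def interval_distance(value, lower, upper):
            """Return 0.0 if value lies inside [lower, upper], else its distance to the interval."""
            if value < lower:
                return lower - value
            if value > upper:
                return value - upper
            return 0.0

        def behavioural_objectives(simulated_signatures, behavioural_limits):
            """One objective per signature: distance to its behavioural interval (0 = behavioural)."""
            return {name: interval_distance(simulated_signatures[name], *behavioural_limits[name])
                    for name in behavioural_limits}

        if __name__ == "__main__":
            # Hypothetical hydrologic signatures simulated by one candidate parameter set.
            simulated = {"runoff_ratio": 0.42, "baseflow_index": 0.71, "peak_flow_m3s": 12.5}
            # Behavioural intervals, e.g. from perceptual understanding or regionalised signatures.
            limits = {"runoff_ratio": (0.30, 0.50), "baseflow_index": (0.40, 0.65), "peak_flow_m3s": (8.0, 20.0)}
            objectives = behavioural_objectives(simulated, limits)
            print(objectives)                                   # distance 0.0 means that signature is behavioural
            print(all(d == 0.0 for d in objectives.values()))   # True only if the parameter set is fully behavioural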

  17. Exploiting semantics for sensor re-calibration in event detection systems

    NASA Astrophysics Data System (ADS)

    Vaisenberg, Ronen; Ji, Shengyue; Hore, Bijit; Mehrotra, Sharad; Venkatasubramanian, Nalini

    2008-01-01

    Event detection from a video stream is becoming an important and challenging task in surveillance and sentient systems. While computer vision has been extensively studied to solve different kinds of detection problems over time, it remains a hard problem, and even in a controlled environment only simple events can be detected with a high degree of accuracy. Instead of struggling to improve event detection using image processing alone, we bring in semantics to direct traditional image processing. Semantics are the underlying facts that hide beneath video frames and cannot be "seen" directly by image processing. In this work we demonstrate that time sequence semantics can be exploited to guide unsupervised re-calibration of the event detection system. We present an instantiation of our ideas using an appliance as an example - coffee pot level detection based on video data - to show that semantics can guide the re-calibration of the detection model. This work exploits time sequence semantics to detect when re-calibration is required, to automatically relearn a new detection model for the newly evolved system state, and to resume monitoring with a higher rate of accuracy.

  18. MIRO Calibration Switch Mechanism

    NASA Technical Reports Server (NTRS)

    Suchman, Jason; Salinas, Yuki; Kubo, Holly

    2001-01-01

    The Jet Propulsion Laboratory has designed, analyzed, built, and tested a calibration switch mechanism for the MIRO instrument on the ROSETTA spacecraft. MIRO, the Microwave Instrument for the Rosetta Orbiter, is intended to investigate the origin of the solar system by studying the origin of comets. Specifically, the instrument will be the first to use submillimeter and millimeter wave heterodyne receivers to remotely examine comet 46P/Wirtanen. In order to calibrate the instrument, it needs to view a hot and a cold target. The purpose of the mechanism is to divert the instrument's field of view from the hot target to the cold target, and then back into space. This cycle is to be repeated every 30 minutes for the duration of the 1.5-year mission. The paper describes the development of the mechanism, as well as analysis and testing techniques.

  19. Goal specificity and knowledge acquisition in statistics problem solving: evidence for attentional focus.

    PubMed

    Trumpower, David L; Goldsmith, Timothy E; Guynn, Melissa J

    2004-12-01

    Solving training problems with nonspecific goals (NG; i.e., solving for all possible unknown values) often results in better transfer than solving training problems with standard goals (SG; i.e., solving for one particular unknown value). In this study, we evaluated an attentional focus explanation of the goal specificity effect. According to the attentional focus view, solving NG problems causes attention to be directed to local relations among successive problem states, whereas solving SG problems causes attention to be directed to relations between the various problem states and the goal state. Attention to the former is thought to enhance structural knowledge about the problem domain and thus promote transfer. Results supported this view because structurally different transfer problems were solved faster following NG training than following SG training. Moreover, structural knowledge representations revealed more links depicting local relations following NG training and more links to the training goal following SG training. As predicted, these effects were obtained only by domain novices.

  20. Numerical Analysis of a Radiant Heat Flux Calibration System

    NASA Technical Reports Server (NTRS)

    Jiang, Shanjuan; Horn, Thomas J.; Dhir, V. K.

    1998-01-01

    A radiant heat flux gage calibration system exists in the Flight Loads Laboratory at NASA's Dryden Flight Research Center. This calibration system must be well understood if the heat flux gages calibrated in it are to provide useful data during radiant heating ground tests or flight tests of high speed aerospace vehicles. A part of the calibration system characterization process is to develop a numerical model of the flat plate heater element and heat flux gage, which will help identify errors due to convection, heater element erosion, and other factors. A 2-dimensional mathematical model of the gage-plate system has been developed to simulate the combined problem involving convection, radiation and mass loss by chemical reaction. A fourth order finite difference scheme is used to solve the steady state governing equations and determine the temperature distribution in the gage and plate, incident heat flux on the gage face, and flat plate erosion. Initial gage heat flux predictions from the model are found to be within 17% of experimental results.

  1. Cognitive arithmetic and problem solving: a comparison of children with specific and general mathematics difficulties.

    PubMed

    Jordan, N C; Montani, T O

    1997-01-01

    This study examined problem-solving and number-fact skills in two subgroups of third-grade children with mathematics difficulties (MD): MD-specific (n = 12) and MD-general (n = 12). The MD-specific group had difficulties in mathematics but not in reading, and the MD-general group had difficulties in reading as well as in mathematics. A comparison group of nonimpaired children (n = 24) also was included. The findings showed that on both story and number-fact problems, the MD-specific group performed worse than the nonimpaired group in timed conditions but not in untimed conditions. The MD-general group, on the other hand, performed worse than the nonimpaired group, regardless of whether tasks were timed or not. An analysis of children's strategies in untimed conditions showed that both the MD-specific and the MD-general groups relied more on backup strategies than the nonimpaired group. However, children in the MD-specific group executed backup strategies more skillfully than children in the MD-general group, allowing them to achieve parity with children in the nonimpaired group when tasks were not timed. The findings suggest that children with specific MD have circumscribed deficits associated with fact retrieval, whereas children with general MD have more basic delays associated with problem conceptualization and execution of calculation procedures.

  2. 40 CFR 89.307 - Dynamometer calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... master load-cell for each in-use range used. (5) The in-use torque measurement must be within 2 percent... torque measurement for each range used by the following method: (1) Warm up the dynamometer following the dynamometer manufacturer's specifications. (2) Determine the dynamometer calibration moment arm (a distance...

  3. External calibration of polarimetric radars using point and distributed targets

    NASA Technical Reports Server (NTRS)

    Yueh, S. H.; Kong, J. A.; Shin, R. T.

    1991-01-01

    Polarimetric calibration algorithms using combinations of point targets and reciprocal distributed targets are developed. From the reciprocity relations of distributed targets, an equivalent point target response is derived. Then the problem of polarimetric calibration using two point targets and one distributed target reduces to that using three point targets, which has been previously solved. For calibration using one point target and one reciprocal distributed target, two cases are analyzed with the point target being a trihedral reflector or a polarimetric active radar calibrator (PARC). For both cases, the general solutions of the system distortion matrices are written as a product of a particular solution and a matrix with one free parameter. For the trihedral-reflector case, this free parameter is determined by assuming azimuthal symmetry for the distributed target. For the PARC case, knowledge of one ratio of two covariance matrix elements of the distributed target is required to solve for the free parameter. Numerical results are simulated to demonstrate the usefulness of the developed algorithms.
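
    As a schematic of the solution structure just described (the notation is ours, not the authors'), the measured scattering matrix can be written with receive and transmit distortion matrices, and the admissible calibration solutions then form a one-parameter family built from any particular solution:

        % Schematic notation (ours): M = measured scattering matrix, S = true scattering matrix,
        % R and T = receive and transmit system distortion matrices.
        \[
          \mathbf{M} = \mathbf{R}\,\mathbf{S}\,\mathbf{T},
          \qquad
          \mathbf{R} = \mathbf{R}_{0}\,\mathbf{Q}_{R}(\alpha),
          \qquad
          \mathbf{T} = \mathbf{Q}_{T}(\alpha)\,\mathbf{T}_{0},
        \]
        % where (R_0, T_0) is any particular solution and Q_R, Q_T carry a single shared free
        % parameter alpha, fixed either by assuming azimuthal symmetry of the distributed target
        % (trihedral case) or from a known ratio of two covariance-matrix elements (PARC case).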

  4. External calibration of polarimetric radars using point and distributed targets

    NASA Astrophysics Data System (ADS)

    Yueh, S. H.; Kong, J. A.; Shin, R. T.

    1991-08-01

    Polarimetric calibration algorithms using combinations of point targets and reciprocal distributed targets are developed. From the reciprocity relations of distributed targets, an equivalent point target response is derived. Then the problem of polarimetric calibration using two point targets and one distributed target reduces to that using three point targets, which has been previously solved. For calibration using one point target and one reciprocal distributed target, two cases are analyzed with the point target being a trihedral reflector or a polarimetric active radar calibrator (PARC). For both cases, the general solutions of the system distortion matrices are written as a product of a particular solution and a matrix with one free parameter. For the trihedral-reflector case, this free parameter is determined by assuming azimuthal symmetry for the distributed target. For the PARC case, knowledge of one ratio of two covariance matrix elements of the distributed target is required to solve for the free parameter. Numerical results are simulated to demonstrate the usefulness of the developed algorithms.

  5. Extrinsic Calibration of a Laser Galvanometric Setup and a Range Camera.

    PubMed

    Sels, Seppe; Bogaerts, Boris; Vanlanduit, Steve; Penne, Rudi

    2018-05-08

    Currently, galvanometric scanning systems (like the one used in a scanning laser Doppler vibrometer) rely on a planar calibration procedure between a two-dimensional (2D) camera and the laser galvanometric scanning system to automatically aim a laser beam at a particular point on an object. In the case of nonplanar or moving objects, this calibration is not sufficiently accurate anymore. In this work, a three-dimensional (3D) calibration procedure that uses a 3D range sensor is proposed. The 3D calibration is valid for all types of objects and retains its accuracy when objects are moved between subsequent measurement campaigns. The proposed 3D calibration uses a Non-Perspective-n-Point (NPnP) problem solution. The 3D range sensor is used to calculate the position of the object under test relative to the laser galvanometric system. With this extrinsic calibration, the laser galvanometric scanning system can automatically aim a laser beam to this object. In experiments, the mean accuracy of aiming the laser beam on an object is below 10 mm for 95% of the measurements. This achieved accuracy is mainly determined by the accuracy and resolution of the 3D range sensor. The new calibration method is significantly better than the original 2D calibration method, which in our setup achieves errors below 68 mm for 95% of the measurements.

  6. Using the cloud to speed-up calibration of watershed-scale hydrologic models (Invited)

    NASA Astrophysics Data System (ADS)

    Goodall, J. L.; Ercan, M. B.; Castronova, A. M.; Humphrey, M.; Beekwilder, N.; Steele, J.; Kim, I.

    2013-12-01

    This research focuses on using the cloud to address computational challenges associated with hydrologic modeling. One example is calibration of a watershed-scale hydrologic model, which can take days of execution time on typical computers. While parallel algorithms for model calibration exist and some researchers have used multi-core computers or clusters to run these algorithms, these solutions do not fully address the challenge because (i) calibration can still be too time consuming even on multicore personal computers and (ii) few in the community have the time and expertise needed to manage a compute cluster. Given this, another option for addressing this challenge that we are exploring through this work is the use of the cloud for speeding up calibration of watershed-scale hydrologic models. The cloud used in this capacity provides a means for renting a specific number and type of machines for only the time needed to perform a calibration run. The cloud allows one to precisely balance the duration of the calibration against the financial costs so that, if the budget allows, the calibration can be performed more quickly by renting more machines. Focusing specifically on the SWAT hydrologic model and a parallel version of the DDS calibration algorithm, we show significant speed-ups across a range of watershed sizes using up to 256 cores to perform a model calibration. The tool provides a simple web-based user interface and the ability to monitor job submission and progress during calibration. Finally, this talk concludes with initial work to leverage the cloud for other tasks associated with hydrologic modeling, including preparing inputs for constructing place-based hydrologic models.

  7. Towards a global network of gamma-ray detector calibration facilities

    NASA Astrophysics Data System (ADS)

    Tijs, Marco; Koomans, Ronald; Limburg, Han

    2016-09-01

    Gamma-ray logging tools are applied worldwide. At various locations, calibration facilities are used to calibrate these gamma-ray logging systems. Several attempts have been made to cross-correlate well known calibration pits, but this cross-correlation does not include calibration facilities in Europe or private company calibration facilities. Our aim is to set-up a framework that gives the possibility to interlink all calibration facilities worldwide by using `tools of opportunity' - tools that have been calibrated in different calibration facilities, whether this usage was on a coordinated basis or by coincidence. To compare the measurement of different tools, it is important to understand the behaviour of the tools in the different calibration pits. Borehole properties, such as diameter, fluid, casing and probe diameter strongly influence the outcome of gamma-ray borehole logging. Logs need to be properly calibrated and compensated for these borehole properties in order to obtain in-situ grades or to do cross-hole correlation. Some tool providers provide tool-specific correction curves for this purpose. Others rely on reference measurements against sources of known radionuclide concentration and geometry. In this article, we present an attempt to set-up a framework for transferring `local' calibrations to be applied `globally'. This framework includes corrections for any geometry and detector size to give absolute concentrations of radionuclides from borehole measurements. This model is used to compare measurements in the calibration pits of Grand Junction, located in the USA; Adelaide (previously known as AMDEL), located in Adelaide Australia; and Stonehenge, located at Medusa Explorations BV in the Netherlands.

  8. Automatic Astrometric and Photometric Calibration with SCAMP

    NASA Astrophysics Data System (ADS)

    Bertin, E.

    2006-07-01

    Astrometric and photometric calibrations have remained the most tiresome step in the reduction of large imaging surveys. I present a new software package, SCAMP which has been written to address this problem. SCAMP efficiently computes accurate astrometric and photometric solutions for any arbitrary sequence of FITS images in a completely automatic way. SCAMP is released under the GNU General Public Licence.

  9. Apparatus for in-situ calibration of instruments that measure fluid depth

    DOEpatents

    Campbell, M.D.

    1994-01-11

    The present invention provides a method and apparatus for in-situ calibration of distance measuring equipment. The method comprises obtaining a first distance measurement in a first location, then obtaining at least one other distance measurement in at least one other location of a precisely known distance from the first location, and calculating a calibration constant. The method is applied specifically to calculating a calibration constant for obtaining fluid level and embodied in an apparatus using a pressure transducer and a spacer of precisely known length. The calibration constant is used to calculate the depth of a fluid from subsequent single pressure measurements at any submerged position. 8 figures.
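
    To illustrate the arithmetic described above, the short Python sketch below (our own; the function names and readings are hypothetical) derives a calibration constant from two pressure readings taken a precisely known spacer length apart, then converts a later single reading into a depth:

        # Sketch of the in-situ calibration arithmetic described in the abstract.
        # Function names and numerical readings are illustrative, not from the patent.

        def calibration_constant(p_first, p_second, spacer_length_m):
            """Depth change per unit pressure change, from two readings a known distance apart."""
            return spacer_length_m / (p_second - p_first)

        def depth_from_pressure(pressure, p_reference, k, depth_reference_m=0.0):
            """Depth below the reference position inferred from a single later reading."""
            return depth_reference_m + k * (pressure - p_reference)

        if __name__ == "__main__":
            p1, p2, spacer = 101.8, 106.7, 0.500      # two readings (arbitrary units) 0.500 m apart
            k = calibration_constant(p1, p2, spacer)  # metres of fluid per pressure unit
            print(round(k, 4))                        # ~0.102
            print(round(depth_from_pressure(109.3, p1, k), 3))  # depth of a later reading, ~0.765 m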

  10. Apparatus for in-situ calibration of instruments that measure fluid depth

    DOEpatents

    Campbell, Melvin D.

    1994-01-01

    The present invention provides a method and apparatus for in-situ calibration of distance measuring equipment. The method comprises obtaining a first distance measurement in a first location, then obtaining at least one other distance measurement in at least one other location of a precisely known distance from the first location, and calculating a calibration constant. The method is applied specifically to calculating a calibration constant for obtaining fluid level and embodied in an apparatus using a pressure transducer and a spacer of precisely known length. The calibration constant is used to calculate the depth of a fluid from subsequent single pressure measurements at any submerged position.

  11. The Impact of Indoor and Outdoor Radiometer Calibration on Solar Measurements: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    2016-07-01

    Accurate solar radiation data sets are critical to reducing the expenses associated with mitigating performance risk for solar energy conversion systems, and they help utility planners and grid system operators understand the impacts of solar resource variability. The accuracy of solar radiation measured by radiometers depends on the instrument performance specification, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of calibration methodologies and the resulting calibration responsivities provided by radiometric calibration service providers such as the National Renewable Energy Laboratory (NREL) and manufacturers of radiometers. Some of these radiometers are calibrated indoors, and some are calibrated outdoors. To establish or understand the differences in calibration methodology, we processed and analyzed field-measured data from these radiometers. This study investigates calibration responsivities provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The reference radiometer calibrations are traceable to the World Radiometric Reference. These different methods of calibration demonstrated 1% to 2% differences in solar irradiance measurement. Analyzing these values will ultimately assist in determining the uncertainties of the radiometer data and will assist in developing consensus on a standard for calibration.

  12. Bayesian SEM for Specification Search Problems in Testing Factorial Invariance.

    PubMed

    Shi, Dexin; Song, Hairong; Liao, Xiaolan; Terry, Robert; Snyder, Lori A

    2017-01-01

    Specification search problems refer to two important but under-addressed issues in testing for factorial invariance: how to select proper reference indicators and how to locate specific non-invariant parameters. In this study, we propose a two-step procedure to solve these issues. Step 1 is to identify a proper reference indicator using the Bayesian structural equation modeling approach. An item is selected if it is associated with the highest likelihood to be invariant across groups. Step 2 is to locate specific non-invariant parameters, given that a proper reference indicator has already been selected in Step 1. A series of simulation analyses show that the proposed method performs well under a variety of data conditions, and optimal performance is observed under conditions of large magnitude of non-invariance, low proportion of non-invariance, and large sample sizes. We also provide an empirical example to demonstrate the specific procedures to implement the proposed method in applied research. The importance and influences are discussed regarding the choices of informative priors with zero mean and small variances. Extensions and limitations are also pointed out.

  13. Accuracy of airspeed measurements and flight calibration procedures

    NASA Technical Reports Server (NTRS)

    Huston, Wilber B

    1948-01-01

    The sources of error that may enter into the measurement of airspeed by pitot-static methods are reviewed in detail together with methods of flight calibration of airspeed installations. Special attention is given to the problem of accurate measurements of airspeed under conditions of high speed and maneuverability required of military airplanes. (author)

  14. Setup and Calibration of SLAC's Peripheral Monitoring Stations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooper, C.

    2004-09-03

    The goals of this project were to troubleshoot, repair, calibrate, and establish documentation regarding SLAC's (Stanford Linear Accelerator Center's) PMS (Peripheral Monitoring Station) system. The PMS system consists of seven PMSs that continuously monitor skyshine (neutron and photon) radiation levels in SLAC's environment. Each PMS consists of a boron trifluoride (BF3) neutron detector (model RS-P1-0802-104 or NW-G-20-12) and a Geiger-Mueller (GM) gamma ray detector (model TGM N107 or LND 719) together with their respective electronics. Electronics for each detector are housed in Nuclear Instrument Modules (NIMs) and are plugged into a NIM bin in the station. All communication lines from the stations to the Main Control Center (MCC) were tested prior to troubleshooting. To test communication with MCC, a pulse generator (Systron Donner model 100C) was connected to each channel in the PMS and the data at MCC were checked for consistency. If MCC displayed no data, the communication cables to MCC or the CAMAC (Computer Automated Measurement and Control) crates were in need of repair. If MCC did display data, then it was known that the communication lines were intact. All electronics from each station were brought into the lab for troubleshooting. Troubleshooting usually consisted of connecting an oscilloscope or scaler (Ortec model 871 or 775) at different points in the circuit of each detector to record simulated pulses produced by a pulse generator; the input and output pulses were compared to establish the location of any problems in the circuit. Once any problems were isolated, repairs were made accordingly. The detectors and electronics were then calibrated in the field using radioactive sources. Calibration is a process that determines the response of the detector. Detector response is defined as the ratio of the number of counts per minute interpreted by the detector to the dose equivalent rate (in mrem per hour), either calculated or measured.

  15. Detection of Unexpected High Correlations between Balance Calibration Loads and Load Residuals

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Volden, T.

    2014-01-01

    An algorithm was developed for the assessment of strain-gage balance calibration data that makes it possible to systematically investigate potential sources of unexpected high correlations between calibration load residuals and applied calibration loads. The algorithm investigates correlations on a load series by load series basis. The linear correlation coefficient is used to quantify the correlations. It is computed for all possible pairs of calibration load residuals and applied calibration loads that can be constructed for the given balance calibration data set. An unexpected high correlation between a load residual and a load is detected if three conditions are met: (i) the absolute value of the correlation coefficient of a residual/load pair exceeds 0.95; (ii) the maximum of the absolute values of the residuals of a load series exceeds 0.25 % of the load capacity; (iii) the load component of the load series is intentionally applied. Data from a baseline calibration of a six-component force balance is used to illustrate the application of the detection algorithm to a real-world data set. This analysis also showed that the detection algorithm can identify load alignment errors as long as repeat load series are contained in the balance calibration data set that do not suffer from load alignment problems.
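
    The three detection conditions lend themselves to a compact sketch; the Python below (our own illustration on synthetic data, with hypothetical names) flags a residual/load pair within one load series only when the correlation, residual-magnitude, and intentional-load conditions are all met:

        import numpy as np

        # Sketch of the three-condition test described in the abstract. The thresholds follow
        # the text (|r| > 0.95, max |residual| > 0.25 % of capacity); the data are synthetic.

        def unexpected_correlation(residuals, loads, capacity, load_is_intentional,
                                   r_limit=0.95, residual_limit_fraction=0.0025):
            """Return True if this residual/load pair of one load series should be flagged."""
            r = np.corrcoef(residuals, loads)[0, 1]             # linear correlation coefficient
            high_correlation = abs(r) > r_limit
            large_residuals = np.max(np.abs(residuals)) > residual_limit_fraction * capacity
            return high_correlation and large_residuals and load_is_intentional

        if __name__ == "__main__":
            loads = np.linspace(0.0, 1000.0, 21)                # applied calibration load series
            residuals = 0.004 * loads + np.random.default_rng(0).normal(0.0, 0.2, loads.size)
            print(unexpected_correlation(residuals, loads, capacity=1000.0, load_is_intentional=True))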

  16. Automatic calibration system for analog instruments based on DSP and CCD sensor

    NASA Astrophysics Data System (ADS)

    Lan, Jinhui; Wei, Xiangqin; Bai, Zhenlong

    2008-12-01

    Currently, the calibration of analog measurement instruments is mostly performed manually, and many problems remain to be solved. In this paper, an automatic calibration system (ACS) based on a Digital Signal Processor (DSP) and a Charge Coupled Device (CCD) sensor is developed and a real-time calibration algorithm is presented. In the ACS, a TI DM643 DSP processes the data received by the CCD sensor and the outcome is displayed on a Liquid Crystal Display (LCD) screen. In the algorithm, the pointer region is first extracted to improve calibration speed; a mathematical model of the pointer is then built to thin the pointer and determine the instrument's reading. In numerous experiments, a single reading took no more than 20 milliseconds, compared with several seconds when performed manually, while the reading error satisfied the instruments' accuracy requirements. These results show that the automatic calibration system can effectively accomplish the calibration of analog measurement instruments.

  17. Data analysis and calibration for a bulk-refractive-index-compensated surface plasmon resonance affinity sensor

    NASA Astrophysics Data System (ADS)

    Chinowsky, Timothy M.; Yee, Sinclair S.

    2002-02-01

    Surface plasmon resonance (SPR) affinity sensing, the problem of bulk refractive index (RI) interference in SPR sensing, and a sensor developed to overcome this problem are briefly reviewed. The sensor uses a design based on Texas Instruments' Spreeta SPR sensor to simultaneously measure both bulk and surface RI. The bulk RI measurement is then used to compensate the surface measurement and remove the effects of bulk RI interference. To achieve accurate compensation, robust data analysis and calibration techniques are necessary. Simple linear data analysis techniques derived from measurements of the sensor response were found to provide a versatile, low noise method for extracting measurements of bulk and surface refractive index from the raw sensor data. Automatic calibration using RI gradients was used to correct the linear estimates, enabling the sensor to produce accurate data even when the sensor has a complicated nonlinear response which varies with time. The calibration procedure is described, and the factors influencing calibration accuracy are discussed. Data analysis and calibration principles are illustrated with an experiment in which sucrose and detergent solutions are used to produce changes in bulk and surface RI, respectively.
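
    A toy version of the linear compensation idea (ours; the sensitivities, noise levels, and calibration segment are invented and much simplified relative to the paper's gradient-based calibration): estimate the surface channel's sensitivity to bulk RI from a segment where only the bulk RI changes, then subtract the scaled bulk signal.

        import numpy as np

        # Toy illustration of bulk-refractive-index compensation of an SPR surface channel.
        # All signals, sensitivities, and the calibration segment are synthetic.

        rng = np.random.default_rng(3)
        t = np.arange(600)                                                     # time, s

        bulk_ri = 1e-4 * (t > 200) * np.minimum(t - 200, 100) / 100.0          # sucrose step (bulk only)
        surface_binding = 2e-5 * (1 - np.exp(-np.maximum(t - 400, 0) / 60.0))  # later surface binding

        bulk_channel = bulk_ri + 5e-7 * rng.normal(size=t.size)
        surface_channel = 0.8 * bulk_ri + surface_binding + 5e-7 * rng.normal(size=t.size)

        # Estimate the bulk sensitivity k on a segment where only the bulk RI changes (t = 200..350 s).
        seg = slice(200, 350)
        k = np.polyfit(bulk_channel[seg], surface_channel[seg], 1)[0]

        compensated = surface_channel - k * bulk_channel        # bulk-compensated surface signal
        print(round(k, 2))                                      # ~0.8 for this synthetic example
        print(round(compensated[-1] / surface_binding[-1], 2))  # ~1.0: binding signal recovered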

  18. Two laboratory methods for the calibration of GPS speed meters

    NASA Astrophysics Data System (ADS)

    Bai, Yin; Sun, Qiao; Du, Lei; Yu, Mei; Bai, Jie

    2015-01-01

    The set-ups of two calibration systems are presented to investigate calibration methods for GPS speed meters. The GPS speed meter under calibration is a special type of high-accuracy speed meter for vehicles which uses Doppler demodulation of GPS signals to calculate the speed of a moving target. Three experiments are performed: simulated calibration, field-test signal replay calibration, and an in-field comparison with an optical speed meter. The experiments are conducted at specific speeds in the range of 40-180 km/h with the same GPS speed meter as the device under calibration. The evaluation of measurement results validates both methods for calibrating GPS speed meters. The relative deviations between the measurement results of the GPS-based high-accuracy speed meter and those of the optical speed meter are analyzed, and the equivalent uncertainty of the comparison is evaluated. The comparison results justify the utilization of GPS speed meters as reference equipment if no fewer than seven satellites are available. This study contributes to the widespread use of GPS-based high-accuracy speed meters as legal reference equipment in traffic speed metrology.

  19. Hierarchical calibration and validation of computational fluid dynamics models for solid sorbent-based carbon capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Canhai; Xu, Zhijie; Pan, Wenxiao

    2016-01-01

    To quantify the predictive confidence of a solid sorbent-based carbon capture design, a hierarchical validation methodology, consisting of basic unit problems with increasing physical complexity coupled with filtered model-based geometric upscaling, has been developed and implemented. This paper describes the computational fluid dynamics (CFD) multi-phase reactive flow simulations and the associated data flows among different unit problems performed within this hierarchical validation approach. The bench-top experiments used in this calibration and validation effort were carefully designed to follow the desired simple-to-complex unit problem hierarchy, with corresponding data acquisition to support model parameter calibrations at each unit problem level. A Bayesian calibration procedure is employed and the posterior model parameter distributions obtained at one unit-problem level are used as prior distributions for the same parameters in the next-tier simulations. Overall, the results have demonstrated that the multiphase reactive flow models within MFIX can be used to capture the bed pressure, temperature, CO2 capture capacity, and kinetics with quantitative accuracy. The CFD modeling methodology and associated uncertainty quantification techniques presented herein offer a solid framework for estimating the predictive confidence in the virtual scale-up of a larger carbon capture device.
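
    A toy sketch of the hierarchy described above, in which the posterior over a parameter from one unit problem becomes the prior for the next tier (the grid-based update, parameter, models, and data below are invented stand-ins, not the MFIX/Bayesian machinery actually used):

        import numpy as np

        # Toy hierarchical calibration: the posterior over a single model parameter obtained
        # from unit problem 1 is reused as the prior for unit problem 2. Everything is synthetic.

        theta_grid = np.linspace(0.0, 2.0, 401)          # candidate parameter values

        def bayes_update(prior, observed, model, sigma):
            """Grid-based posterior: prior times Gaussian likelihood, renormalised on the grid."""
            likelihood = np.exp(-0.5 * ((observed - model(theta_grid)) / sigma) ** 2)
            posterior = prior * likelihood
            return posterior / posterior.sum()

        if __name__ == "__main__":
            prior = np.ones_like(theta_grid) / theta_grid.size        # flat prior for tier 1
            post1 = bayes_update(prior, observed=1.2, model=lambda t: t, sigma=0.3)
            post2 = bayes_update(post1, observed=2.1, model=lambda t: t + 0.5 * t**2, sigma=0.2)
            # Posterior modes sharpen and shift as each tier's data are assimilated.
            print(theta_grid[np.argmax(post1)], theta_grid[np.argmax(post2)])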

  20. A proposed standard method for polarimetric calibration and calibration verification

    NASA Astrophysics Data System (ADS)

    Persons, Christopher M.; Jones, Michael W.; Farlow, Craig A.; Morell, L. Denise; Gulley, Michael G.; Spradley, Kevin D.

    2007-09-01

    Accurate calibration of polarimetric sensors is critical to reducing and analyzing phenomenology data, producing uniform polarimetric imagery for deployable sensors, and ensuring predictable performance of polarimetric algorithms. It is desirable to develop a standard calibration method, including verification reporting, in order to increase credibility with customers and foster communication and understanding within the polarimetric community. This paper seeks to facilitate discussions within the community on arriving at such standards. Both the calibration and verification methods presented here are performed easily with common polarimetric equipment, and are applicable to visible and infrared systems with either partial Stokes or full Stokes sensitivity. The calibration procedure has been used on infrared and visible polarimetric imagers over a six year period, and resulting imagery has been presented previously at conferences and workshops. The proposed calibration method involves the familiar calculation of the polarimetric data reduction matrix by measuring the polarimeter's response to a set of input Stokes vectors. With this method, however, linear combinations of Stokes vectors are used to generate highly accurate input states. This allows the direct measurement of all system effects, in contrast with fitting modeled calibration parameters to measured data. This direct measurement of the data reduction matrix allows higher order effects that are difficult to model to be discovered and corrected for in calibration. This paper begins with a detailed tutorial on the proposed calibration and verification reporting methods. Example results are then presented for a LWIR rotating half-wave retarder polarimeter.
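
    As a numerical illustration of measuring the data reduction matrix directly from the response to a set of known input Stokes vectors (the generator states, channel matrix, and noise level here are invented for the example and are not the authors' configuration):

        import numpy as np

        # Sketch of direct polarimetric calibration: from the responses to known input Stokes
        # vectors, estimate the measurement matrix W and its pseudoinverse (the data reduction
        # matrix). The input states, channel matrix, and noise below are synthetic.

        rng = np.random.default_rng(1)

        # Known input Stokes vectors as columns: unpolarised, +Q, +U, +V, and a linear combination.
        S_in = np.array([[1.0, 1.0, 1.0, 1.0, 1.0],
                         [0.0, 1.0, 0.0, 0.0, 0.5],
                         [0.0, 0.0, 1.0, 0.0, 0.5],
                         [0.0, 0.0, 0.0, 1.0, 0.0]])

        # "True" measurement matrix of a four-channel polarimeter (unknown in practice).
        W_true = np.array([[0.5,  0.5, 0.0, 0.0],
                           [0.5, -0.5, 0.0, 0.0],
                           [0.5,  0.0, 0.5, 0.0],
                           [0.5,  0.0, 0.0, 0.5]])

        P = W_true @ S_in + 1e-3 * rng.normal(size=(4, S_in.shape[1]))   # measured channel intensities

        W_est = P @ np.linalg.pinv(S_in)       # least-squares estimate of the measurement matrix
        D = np.linalg.pinv(W_est)              # data reduction matrix: S_hat = D @ measured intensities
        print(np.round(D @ P[:, :1], 3))       # approximately recovers the first input Stokes vector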

  1. Wavelength calibration of arc spectra using intensity modelling

    NASA Astrophysics Data System (ADS)

    Balona, L. A.

    2010-12-01

    Wavelength calibration for astronomical spectra usually involves the use of different arc lamps for different resolving powers to reduce the problem of line blending. We present a technique which eliminates the necessity of different lamps. A lamp producing a very rich spectrum, normally used only at high resolving powers, can be used at the lowest resolving power as well. This is accomplished by modelling the observed arc spectrum and solving for the wavelength calibration as part of the modelling procedure. Line blending is automatically incorporated as part of the model. The method has been implemented and successfully tested on spectra taken with the Robert Stobie spectrograph of the Southern African Large Telescope.

  2. Calibration of the APEX Model to Simulate Management Practice Effects on Runoff, Sediment, and Phosphorus Loss.

    PubMed

    Bhandari, Ammar B; Nelson, Nathan O; Sweeney, Daniel W; Baffaut, Claire; Lory, John A; Senaviratne, Anomaa; Pierzynski, Gary M; Janssen, Keith A; Barnes, Philip L

    2017-11-01

    Process-based computer models have been proposed as a tool to generate data for Phosphorus (P) Index assessment and development. Although models are commonly used to simulate P loss from agriculture using managements that are different from the calibration data, this use of models has not been fully tested. The objective of this study is to determine if the Agricultural Policy Environmental eXtender (APEX) model can accurately simulate runoff, sediment, total P, and dissolved P loss from 0.4- to 1.5-ha agricultural fields with managements that are different from the calibration data. The APEX model was calibrated with field-scale data from eight different managements at two locations (management-specific models). The calibrated models were then validated, either with the same management used for calibration or with different managements. Location models were also developed by calibrating APEX with data from all managements. The management-specific models resulted in satisfactory performance when used to simulate runoff, total P, and dissolved P within their respective systems, with R² > 0.50, Nash-Sutcliffe efficiency > 0.30, and percent bias within ±35% for runoff and ±70% for total and dissolved P. When applied outside the calibration management, the management-specific models only met the minimum performance criteria in one-third of the tests. The location models had better model performance when applied across all managements compared with the management-specific models. Our results suggest that models should only be applied within the managements used for calibration and that data from multiple management systems be included for calibration when using models to assess management effects on P loss or evaluate P Indices. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
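
    For reference, the performance statistics quoted above can be computed as in the sketch below (our own code on synthetic observed/simulated values; only the acceptance thresholds for runoff are taken from the abstract):

        import numpy as np

        # Sketch of the goodness-of-fit statistics referenced in the abstract, on synthetic data.

        def nash_sutcliffe(obs, sim):
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def percent_bias(obs, sim):
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 100.0 * np.sum(sim - obs) / np.sum(obs)

        if __name__ == "__main__":
            observed  = [2.1, 5.4, 0.8, 3.3, 7.9, 1.2]      # e.g. event runoff depths, mm
            simulated = [1.8, 5.9, 1.1, 2.9, 7.1, 1.5]
            nse, pbias = nash_sutcliffe(observed, simulated), percent_bias(observed, simulated)
            print(round(nse, 2), round(pbias, 1))
            # Runoff acceptance criteria quoted in the abstract: NSE > 0.30 and |PBIAS| <= 35 %.
            print(nse > 0.30 and abs(pbias) <= 35.0)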

  3. Calibration of GPS based high accuracy speed meter for vehicles

    NASA Astrophysics Data System (ADS)

    Bai, Yin; Sun, Qiao; Du, Lei; Yu, Mei; Bai, Jie

    2015-02-01

    GPS based high accuracy speed meter for vehicles is a special type of GPS speed meter which uses Doppler Demodulation of GPS signals to calculate the speed of a moving target. It is increasingly used as reference equipment in the field of traffic speed measurement, but acknowledged standard calibration methods are still lacking. To solve this problem, this paper presents the set-ups of simulated calibration, field test signal replay calibration, and in-field test comparison with an optical sensor based non-contact speed meter. All the experiments were carried out on particular speed values in the range of (40-180) km/h with the same GPS speed meter. The speed measurement errors of simulated calibration fall in the range of +/-0.1 km/h or +/-0.1%, with uncertainties smaller than 0.02% (k=2). The errors of replay calibration fall in the range of +/-0.1% with uncertainties smaller than 0.10% (k=2). The calibration results justify the effectiveness of the two methods. The relative deviations of the GPS speed meter from the optical sensor based noncontact speed meter fall in the range of +/-0.3%, which validates the use of GPS speed meter as reference instruments. The results of this research can provide technical basis for the establishment of internationally standard calibration methods of GPS speed meters, and thus ensures the legal status of GPS speed meters as reference equipment in the field of traffic speed metrology.

  4. VS2DI: Model use, calibration, and validation

    USGS Publications Warehouse

    Healy, Richard W.; Essaid, Hedeff I.

    2012-01-01

    VS2DI is a software package for simulating water, solute, and heat transport through soils or other porous media under conditions of variable saturation. The package contains a graphical preprocessor for constructing simulations, a postprocessor for displaying simulation results, and numerical models that solve for flow and solute transport (VS2DT) and flow and heat transport (VS2DH). Flow is described by the Richards equation, and solute and heat transport are described by advection-dispersion equations; the finite-difference method is used to solve these equations. Problems can be simulated in one, two, or three (assuming radial symmetry) dimensions. This article provides an overview of calibration techniques that have been used with VS2DI; included is a detailed description of calibration procedures used in simulating the interaction between groundwater and a stream fed by drainage from agricultural fields in central Indiana. Brief descriptions of VS2DI and the various types of problems that have been addressed with the software package are also presented.

  5. 40 CFR 86.126-90 - Calibration of other equipment.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... according to good practice. Specific equipment requiring calibration are the gas chromatograph and flame ionization detector used in measuring methanol and the high pressure liquid chromatograph (HPLC) and...

  6. 40 CFR 86.526-90 - Calibration of other equipment.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... necessary according to good practice. Specific equipment requiring calibration is the gas chromatograph and flame ionization detector used in measuring methanol and the high pressure liquid chromatograph (HPLC...

  7. 40 CFR 86.126-90 - Calibration of other equipment.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... according to good practice. Specific equipment requiring calibration are the gas chromatograph and flame ionization detector used in measuring methanol and the high pressure liquid chromatograph (HPLC) and...

  8. 40 CFR 86.526-90 - Calibration of other equipment.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... necessary according to good practice. Specific equipment requiring calibration is the gas chromatograph and flame ionization detector used in measuring methanol and the high pressure liquid chromatograph (HPLC...

  9. Radiometric Calibration Techniques for Signal-of-Opportunity Reflectometers

    NASA Technical Reports Server (NTRS)

    Piepmeier, Jeffrey R.; Shah, Rashmi; Deshpande, Manohar; Johnson, Carey

    2014-01-01

    Bi-static reflection measurements utilizing global navigation satellite system (GNSS) signals or other signals of opportunity (SoOp) can be used to sense ocean and terrestrial surface properties. End-to-end calibration of GNSS-R has been performed using a well-characterized reflection surface (e.g., water), a direct-path antenna, and receiver gain characterization. We propose an augmented approach using on-board receiver electronics for radiometric calibration of SoOp reflectometers utilizing direct and reflected signal receiving antennas. The method calibrates receiver and correlator gains and offsets utilizing a reference switch and a common noise source. On-board electronic calibration sources, such as reference switches, noise diodes, and loop-back circuits, have shown great utility in stabilizing total-power and correlation microwave radiometer and scatterometer receiver electronics in L-band spaceborne instruments. Application to SoOp instruments is likely to bring several benefits. For example, providing short- and long-time-scale calibration stability of the direct-path channel, especially in low signal-to-noise-ratio configurations, is directly analogous to the microwave radiometer problem. The direct-path channel is analogous to the loop-back path in a scatterometer, providing a reference for the transmitted power, although the receiver is independent from the reflected-path channel. Thus, a common noise source can be used to measure the gain ratio of the two paths. Using these techniques, long-term (days to weeks) calibration stability better than 0.1 has been achieved for spaceborne L-band scatterometers and radiometers. Similar long-term stability would likely be needed for a spaceborne reflectometer mission to measure terrestrial properties such as soil moisture.
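
    A toy numerical sketch of the gain-ratio idea in the last part of the abstract (all gains, temperatures, and counts below are invented; this is not the proposed instrument design): each channel observes the reference load with a common noise diode off and on, and the deflections give the per-channel gains and hence their ratio.

        import numpy as np

        # Toy illustration of using a common noise source to estimate the gain ratio of the
        # direct-path and reflected-path receiver channels. All numbers are synthetic.

        rng = np.random.default_rng(4)
        T_ND = 100.0                                    # common noise-diode excess temperature, K
        T_REF = 300.0                                   # reference (switch) load temperature, K

        def channel_counts(gain, offset, scene_temperature, n=1000):
            """Simulated total-power counts for a channel observing a given scene temperature."""
            return gain * scene_temperature + offset + rng.normal(0.0, 0.5, n)

        g_direct, g_reflected = 1.8, 2.3                # "true" (unknown) channel gains, counts/K
        d_off = channel_counts(g_direct, 40.0, T_REF)
        d_on  = channel_counts(g_direct, 40.0, T_REF + T_ND)
        r_off = channel_counts(g_reflected, 55.0, T_REF)
        r_on  = channel_counts(g_reflected, 55.0, T_REF + T_ND)

        gain_d = (d_on.mean() - d_off.mean()) / T_ND    # per-channel gain from the noise-diode deflection
        gain_r = (r_on.mean() - r_off.mean()) / T_ND
        print(round(gain_d / gain_r, 3))                # estimated gain ratio, ~1.8 / 2.3 = 0.783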

  10. Investigating the Effects of Variable Water Type for VIIRS Calibration

    NASA Astrophysics Data System (ADS)

    Bowers, J.; Ladner, S.; Martinolich, P.; Arnone, R.; Lawson, A.; Crout, R. L.; Vandermeulen, R. A.

    2016-02-01

    The Naval Research Laboratory - Stennis Space Center (NRL-SSC) currently provides calibration and validation support for the Visible Infrared Imaging Radiometer Suite (VIIRS) satellite ocean color products. NRL-SSC utilizes the NASA Ocean Biology Processing Group (OBPG) methodology for on-orbit vicarious calibration with in situ data collected in blue ocean water by the Marine Optical Buoy (MOBY). An acceptable calibration consists of 20-40 satellite to in situ matchups that establish the radiance correlation at specific points within the operating range of the VIIRS instrument. While the current method improves the VIIRS performance, the MOBY data alone does not represent the full range of radiance values seen in the coastal oceans. However, by utilizing data from the AERONET-OC coastal sites we expand our calibration matchups to cover a more realistic range of continuous values particularly in the green and red spectral regions of the sensor. Improved calibration will provide more accurate data to support daily operations and enable construction of valid climatology for future reference.

  11. TWSTFT Link Calibration Report

    DTIC Science & Technology

    2015-09-01

    Annex II: TWSTFT link calibration with a GPS calibrator (calibration reference CI-888-2015; final version 1 September 2015). This report includes the calibration results of the Lab(k)-PTB TWSTFT link and closure measurements by the BIPM.

  12. [A plane-based hand-eye calibration method for surgical robots].

    PubMed

    Zeng, Bowei; Meng, Fanle; Ding, Hui; Liu, Wenbo; Wu, Di; Wang, Guangzhi

    2017-04-01

    In order to calibrate the hand-eye transformation between the surgical robot and a laser range finder (LRF), a calibration algorithm based on a planar template was designed. A mathematical model of the planar template is given and the approach to solving the resulting equations is derived. To address measurement error in a practical system, we propose a new algorithm for selecting coplanar data, which can effectively eliminate data with considerable measurement error and thus improve the calibration accuracy. Furthermore, three orthogonal planes are used, together with a nonlinear optimization of the hand-eye calibration, to improve the accuracy further. To verify the calibration precision, we used the LRF to measure fixed points from different directions as well as the surfaces of a cuboid. Experimental results indicate that the precision of the single planar template method is (1.37±0.24) mm, and that of the three orthogonal planes method is (0.37±0.05) mm. Moreover, the mean fiducial registration error (FRE) of the three-dimensional (3D) points was 0.24 mm and the mean target registration error (TRE) was 0.26 mm. The maximum angle measurement error was 0.4 degrees. These results show that the method presented in this paper is effective, with high accuracy, and can meet the requirements of precise surgical robot localization.
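
    The coplanar-data selection step can be illustrated with a simple least-squares reading of the idea (ours, not the authors' algorithm; the tolerance and the synthetic points are invented): fit a plane, discard points whose distance to it exceeds a tolerance, and refit.

        import numpy as np

        # Illustrative coplanar-point selection: fit a plane by total least squares (SVD) and
        # iteratively drop points farther than a tolerance from it. Data and tolerance are synthetic.

        def fit_plane(points):
            """Return (centroid, unit normal) of the best-fit plane through an Nx3 point set."""
            centroid = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - centroid)
            return centroid, vt[-1]                     # singular vector of the smallest singular value

        def select_coplanar(points, tol, passes=3):
            kept = points
            for _ in range(passes):                     # refit after each rejection pass
                centroid, normal = fit_plane(kept)
                distances = np.abs((kept - centroid) @ normal)
                kept = kept[distances <= tol]
            return kept

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            xy = rng.uniform(-50.0, 50.0, size=(30, 2))
            z = 0.1 * xy[:, 0] - 0.2 * xy[:, 1] + rng.normal(0.0, 0.1, 30)   # points near a plane (mm)
            points = np.column_stack([xy, z])
            points[::10, 2] += 5.0                                           # inject gross outliers
            kept = select_coplanar(points, tol=1.0)
            print(points.shape[0], "->", kept.shape[0], "points kept")       # outliers are rejected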

  13. Problems of millipound thrust measurement. The "Hansen Suspension"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carta, David G.

    Considered in detail are problems which led to the need and use of the 'Hansen Suspension'. Also discussed are problems which are likely to be encountered in any low-level thrust measuring system. The methods of calibration and the accuracies involved are given careful attention. With all parameters optimized and calibration techniques perfected, the system was found capable of a resolution of 10 μlb. A comparison of thrust measurements made by the 'Hansen Suspension' with measurements of a less sophisticated device leads to some surprising results.

  14. Spitzer/JWST Cross Calibration: IRAC Observations of Potential Calibrators for JWST

    NASA Astrophysics Data System (ADS)

    Carey, Sean J.; Gordon, Karl D.; Lowrance, Patrick; Ingalls, James G.; Glaccum, William J.; Grillmair, Carl J.; Krick, Jessica E.; Laine, Seppo J.; Fazio, Giovanni G.; Hora, Joseph L.; Bohlin, Ralph

    2017-06-01

    We present observations at 3.6 and 4.5 microns using IRAC on the Spitzer Space Telescope of a set of main sequence A stars and white dwarfs that are potential calibrators across the JWST instrument suite. The stars range from brightnesses of 4.4 to 15 mag in K band. The calibration observations use a similar redundancy to the observing strategy for the IRAC primary calibrators (Reach et al. 2005) and the photometry is obtained using identical methods and instrumental photometric corrections as those applied to the IRAC primary calibrators (Carey et al. 2009). The resulting photometry is then compared to the predictions based on spectra from the CALSPEC Calibration Database (http://www.stsci.edu/hst/observatory/crds/calspec.html) and the IRAC bandpasses. These observations are part of an ongoing collaboration between IPAC and STScI investigating absolute calibration in the infrared.

  15. Two Approaches to Calibration in Metrology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark

    2014-04-01

    Inferring mathematical relationships with quantified uncertainty from measurement data is common to computational science and metrology. Sufficient knowledge of measurement process noise enables Bayesian inference. Otherwise, an alternative approach is required, here termed compartmentalized inference, because collection of uncertain data and model inference occur independently. Bayesian parameterized model inference is compared to a Bayesian-compatible compartmentalized approach for ISO-GUM compliant calibration problems in renewable energy metrology. In either approach, model evidence can help reduce model discrepancy.

  16. Camera Geo calibration Using an MCMC Approach (Author’s Manuscript)

    DTIC Science & Technology

    2016-08-19

    The paper formulates a camera geo-calibration problem that supports priors over camera parameters and constraints relating image annotations, camera geometry, and geographic information. Its MCMC proposal distribution is $q(\Theta_{i+1} \mid \Theta_i) = \prod_{j=1}^{N_{\mathrm{dim}}} \frac{1}{\sigma_j}\,\phi\!\left(\frac{\Theta_{i+1,j}-\Theta_{i,j}}{\sigma_j}\right)$, where $\sigma_j$ denotes the sampling step size on the $j$-th dimension and $\phi(x)$ denotes the PDF of the standard normal distribution.

  17. An in-situ Mobile pH Calibrator for application with HOV and ROV platform in deep sea environments

    NASA Astrophysics Data System (ADS)

    Tan, C.; Ding, K.; Seyfried, W. E., Jr.

    2014-12-01

    Recently, a novel in-situ sensor calibration instrument, the Mobile pH Calibrator (MpHC), was developed for application with HOV Alvin. It was specifically designed to conduct in-situ pH measurements in deep-sea hydrothermal diffuse fluids with an in-situ calibration function. The sensor calibrator involves three integrated electrodes (pH, dissolved H2, and H2S) and a temperature sensor, all of which are installed in a cell with a volume of ~1 ml. A PEEK check-valve cartridge is installed at the inlet end of the cell to guide the flow path during the measurement and calibration processes. Two PEEK tubes are connected at the outlet end of the cell for drawing out hydrothermal fluid and delivering pH buffer fluids. During measurement, the pump draws in hydrothermal fluid, which passes through the check valve directly into the sensing cell. In calibration mode, the pump delivers pH buffers into the cell while the check valve to the outside environment closes automatically. This probe has two advantages over our previous unit used during the KNOX18RR MAR cruise in 2008 and the MARS cabled observatory deployment in 2012. First, the former design was equipped with a 5 cm solenoid valve, whose size prevented its use at specific points or in small areas; in this version the probe measures only 1.6 cm, allowing easy access to hydrothermal biological environments. Second, the maximum operating temperature of the earlier system was limited by the solenoid valve, precluding operation above 50 ºC. The new design avoids this problem and improves the temperature tolerance; the upper limit is now 100 ºC, enabling broader application to hydrothermal diffuse-flow systems on the seafloor. During the SVC cruise (AT26-12) in the Gulf of Mexico this year, the MpHC was successfully tested on Alvin dives at depths up to 2600 m, measuring pH with in-situ calibration in seafloor environments.

  18. Calibrated birth-death phylogenetic time-tree priors for bayesian inference.

    PubMed

    Heled, Joseph; Drummond, Alexei J

    2015-05-01

    Here we introduce a general class of multiple calibration birth-death tree priors for use in Bayesian phylogenetic inference. All tree priors in this class separate ancestral node heights into a set of "calibrated nodes" and "uncalibrated nodes" such that the marginal distribution of the calibrated nodes is user-specified whereas the density ratio of the birth-death prior is retained for trees with equal values for the calibrated nodes. We describe two formulations, one in which the calibration information informs the prior on ranked tree topologies, through the (conditional) prior, and the other which factorizes the prior on divergence times and ranked topologies, thus allowing uniform, or any arbitrary prior distribution on ranked topologies. Although the first of these formulations has some attractive properties, the algorithm we present for computing its prior density is computationally intensive. However, the second formulation is always faster and computationally efficient for up to six calibrations. We demonstrate the utility of the new class of multiple-calibration tree priors using both small simulations and a real-world analysis and compare the results to existing schemes. The two new calibrated tree priors described in this article offer greater flexibility and control of prior specification in calibrated time-tree inference and divergence time dating, and will remove the need for indirect approaches to the assessment of the combined effect of calibration densities and tree priors in Bayesian phylogenetic inference. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  19. A Self-Calibrating Radar Sensor System for Measuring Vital Signs.

    PubMed

    Huang, Ming-Chun; Liu, Jason J; Xu, Wenyao; Gu, Changzhan; Li, Changzhi; Sarrafzadeh, Majid

    2016-04-01

    Vital signs (i.e., heartbeat and respiration) are crucial physiological signals that are useful in numerous medical applications. The process of measuring these signals should be simple, reliable, and comfortable for patients. In this paper, a noncontact self-calibrating vital signs monitoring system based on the Doppler radar is presented. The system hardware and software were designed with a four-tiered layer structure. To enable accurate vital signs measurement, baseband signals in the radar sensor were modeled and a framework for signal demodulation was proposed. Specifically, a signal model identification method was formulated into a quadratically constrained l1 minimization problem and solved using the upper bound and linear matrix inequality (LMI) relaxations. The performance of the proposed system was comprehensively evaluated using three experimental sets, and the results indicated that this system can be used to effectively measure human vital signs.
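
    The signal-model identification step is described as a quadratically constrained l1 minimization. The paper's own upper-bound/LMI relaxation is not reproduced here; the sketch below only illustrates the generic problem class (minimize an l1 norm subject to a quadratic constraint) using the cvxpy modeling library, with made-up problem data.

    ```python
    import cvxpy as cp
    import numpy as np

    # Generic quadratically constrained l1 minimization:
    #   minimize ||x||_1  subject to  (x - x0)^T P (x - x0) <= r
    # (a stand-in for the signal-model identification step; P, x0, r are illustrative)
    rng = np.random.default_rng(0)
    n = 8
    A = rng.standard_normal((n, n))
    P = A @ A.T + np.eye(n)          # positive definite weighting matrix
    x0 = rng.standard_normal(n)      # nominal parameter estimate
    r = 4.0                          # quadratic constraint level

    x = cp.Variable(n)
    problem = cp.Problem(cp.Minimize(cp.norm1(x)),
                         [cp.quad_form(x - x0, P) <= r])
    problem.solve()
    print("status:", problem.status)
    print("sparse estimate:", np.round(x.value, 3))
    ```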

  20. Specificity of interpersonal problems in generalized anxiety disorder versus other anxiety disorders and depression.

    PubMed

    Uhmann, Stefan; Beesdo-Baum, Katja; Becker, Eni S; Hoyer, Jürgen

    2010-11-01

    We examined the diagnostic specificity of interpersonal problems (IP) in generalized anxiety disorder (GAD). We expected generally higher interpersonal distress, and specifically higher levels of nonassertive, exploitable, overly nurturant, and intrusive behavior, in n = 58 patients with Diagnostic and Statistical Manual of Mental Disorders, 4th Edition GAD compared with patients with post-traumatic stress disorder (n = 46), other anxiety disorders (n = 47), and unipolar depressive disorders (n = 47). IP were assessed with the Inventory of Interpersonal Problems. Specificity in the sense of heightened interpersonal distress for GAD was not supported on any of the aforementioned scales, either for pure or for comorbid GAD. This finding persisted after accounting for the degree of depressiveness (Beck Depression Inventory). GAD patients are thus not characterized by more self-ascribed IPs, although they may worry more about interpersonal issues in general.

  1. Calibration of the optical torque wrench.

    PubMed

    Pedaci, Francesco; Huang, Zhuangxiong; van Oene, Maarten; Dekker, Nynke H

    2012-02-13

    The optical torque wrench is a laser trapping technique that expands the capability of standard optical tweezers to torque manipulation and measurement, using the laser linear polarization to orient tailored microscopic birefringent particles. The ability to measure torque of the order of kBT (∼4 pN nm) is especially important in the study of biophysical systems at the molecular and cellular level. Quantitative torque measurements rely on an accurate calibration of the instrument. Here we describe and implement a set of calibration approaches for the optical torque wrench, including methods that have direct analogs in linear optical tweezers as well as introducing others that are specifically developed for the angular variables. We compare the different methods, analyze their differences, and make recommendations regarding their implementations.

  2. The Effect of Inappropriate Calibration: Three Case Studies in Molecular Ecology

    PubMed Central

    Ho, Simon Y. W.; Saarma, Urmas; Barnett, Ross; Haile, James; Shapiro, Beth

    2008-01-01

    Time-scales estimated from sequence data play an important role in molecular ecology. They can be used to draw correlations between evolutionary and palaeoclimatic events, to measure the tempo of speciation, and to study the demographic history of an endangered species. In all of these studies, it is paramount to have accurate estimates of time-scales and substitution rates. Molecular ecological studies typically focus on intraspecific data that have evolved on genealogical scales, but often these studies inappropriately employ deep fossil calibrations or canonical substitution rates (e.g., 1% per million years for birds and mammals) for calibrating estimates of divergence times. These approaches can yield misleading estimates of molecular time-scales, with significant impacts on subsequent evolutionary and ecological inferences. We illustrate this calibration problem using three case studies: avian speciation in the late Pleistocene, the demographic history of bowhead whales, and the Pleistocene biogeography of brown bears. For each data set, we compare the date estimates that are obtained using internal and external calibration points. In all three cases, the conclusions are significantly altered by the application of revised, internally-calibrated substitution rates. Collectively, the results emphasise the importance of judicious selection of calibrations for analyses of recent evolutionary events. PMID:18286172

  3. The effect of inappropriate calibration: three case studies in molecular ecology.

    PubMed

    Ho, Simon Y W; Saarma, Urmas; Barnett, Ross; Haile, James; Shapiro, Beth

    2008-02-20

    Time-scales estimated from sequence data play an important role in molecular ecology. They can be used to draw correlations between evolutionary and palaeoclimatic events, to measure the tempo of speciation, and to study the demographic history of an endangered species. In all of these studies, it is paramount to have accurate estimates of time-scales and substitution rates. Molecular ecological studies typically focus on intraspecific data that have evolved on genealogical scales, but often these studies inappropriately employ deep fossil calibrations or canonical substitution rates (e.g., 1% per million years for birds and mammals) for calibrating estimates of divergence times. These approaches can yield misleading estimates of molecular time-scales, with significant impacts on subsequent evolutionary and ecological inferences. We illustrate this calibration problem using three case studies: avian speciation in the late Pleistocene, the demographic history of bowhead whales, and the Pleistocene biogeography of brown bears. For each data set, we compare the date estimates that are obtained using internal and external calibration points. In all three cases, the conclusions are significantly altered by the application of revised, internally-calibrated substitution rates. Collectively, the results emphasise the importance of judicious selection of calibrations for analyses of recent evolutionary events.

  4. SPRT Calibration Uncertainties and Internal Quality Control at a Commercial SPRT Calibration Facility

    NASA Astrophysics Data System (ADS)

    Wiandt, T. J.

    2008-06-01

    The Hart Scientific Division of the Fluke Corporation operates two accredited standard platinum resistance thermometer (SPRT) calibration facilities, one at the Hart Scientific factory in Utah, USA, and the other at a service facility in Norwich, UK. The US facility is accredited through National Voluntary Laboratory Accreditation Program (NVLAP), and the UK facility is accredited through UKAS. Both provide SPRT calibrations using similar equipment and procedures, and at similar levels of uncertainty. These uncertainties are among the lowest available commercially. To achieve and maintain low uncertainties, it is required that the calibration procedures be thorough and optimized. However, to minimize customer downtime, it is also important that the instruments be calibrated in a timely manner and returned to the customer. Consequently, subjecting the instrument to repeated calibrations or extensive repeated measurements is not a viable approach. Additionally, these laboratories provide SPRT calibration services involving a wide variety of SPRT designs. These designs behave differently, yet predictably, when subjected to calibration measurements. To this end, an evaluation strategy involving both statistical process control and internal consistency measures is utilized to provide confidence in both the instrument calibration and the calibration process. This article describes the calibration facilities, procedure, uncertainty analysis, and internal quality assurance measures employed in the calibration of SPRTs. Data will be reviewed and generalities will be presented. Finally, challenges and considerations for future improvements will be discussed.

  5. CubiCal - Fast radio interferometric calibration suite exploiting complex optimisation

    NASA Astrophysics Data System (ADS)

    Kenyon, J. S.; Smirnov, O. M.; Grobler, T. L.; Perkins, S. J.

    2018-05-01

    It has recently been shown that radio interferometric gain calibration can be expressed succinctly in the language of complex optimisation. In addition to providing an elegant framework for further development, it exposes properties of the calibration problem which can be exploited to accelerate traditional non-linear least squares solvers such as Gauss-Newton and Levenberg-Marquardt. We extend existing derivations to chains of Jones terms: products of several gains which model different aberrant effects. In doing so, we find that the useful properties found in the single term case still hold. We also develop several specialised solvers which deal with complex gains parameterised by real values. The newly developed solvers have been implemented in a Python package called CubiCal, which uses a combination of Cython, multiprocessing and shared memory to leverage the power of modern hardware. We apply CubiCal to both simulated and real data, and perform both direction-independent and direction-dependent self-calibration. Finally, we present the results of some rudimentary profiling to show that CubiCal is competitive with respect to existing calibration tools such as MeqTrees.
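
    As a hedged illustration of how the complex-optimisation view leads to cheap per-antenna updates, the sketch below implements a StEFCal-style direction-independent gain solver for the model V_pq ≈ g_p g_q* M_pq. This is not CubiCal's implementation (which handles Jones chains, parameterised gains, weights, flags, and time/frequency axes); it only shows the basic alternating closed-form update.

    ```python
    import numpy as np

    def stefcal(V_obs, V_model, n_iter=50, tol=1e-8):
        """StEFCal-style alternating gain solver for V_obs ≈ g_p * conj(g_q) * V_model.

        V_obs, V_model: (n_ant, n_ant) complex visibility matrices.
        Returns per-antenna complex gains (up to a global phase)."""
        n_ant = V_obs.shape[0]
        g = np.ones(n_ant, dtype=complex)
        for it in range(n_iter):
            g_old = g.copy()
            # Each antenna update is a closed-form linear least-squares solution
            # with the other gains held fixed (the "diagonal" Gauss-Newton step).
            Z = V_model * g_old[np.newaxis, :].conj()   # model corrupted by current gains
            num = np.sum(Z.conj() * V_obs, axis=1)
            den = np.sum(np.abs(Z) ** 2, axis=1)
            g = num / den
            if it % 2 == 1:                             # average successive iterates for stability
                g = 0.5 * (g + g_old)
            if np.linalg.norm(g - g_old) / np.linalg.norm(g) < tol:
                break
        return g

    # Quick self-check with synthetic gains.
    rng = np.random.default_rng(42)
    n = 7
    g_true = np.exp(1j * rng.uniform(-0.5, 0.5, n)) * rng.uniform(0.8, 1.2, n)
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    M = M + M.conj().T                                  # Hermitian model visibilities
    R = np.outer(g_true, g_true.conj()) * M
    g_est = stefcal(R, M)
    print(np.abs(g_est / g_true))                       # ~1: amplitudes recovered
    ```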

  6. Updated radiometric calibration for the Landsat-5 thematic mapper reflective bands

    USGS Publications Warehouse

    Helder, D.L.; Markham, B.L.; Thome, K.J.; Barsi, J.A.; Chander, G.; Malla, R.

    2008-01-01

    The Landsat-5 Thematic Mapper (TM) has been the workhorse of the Landsat system. Launched in 1984, it continues collecting data through the time frame of this paper. Thus, it provides an invaluable link to the past history of the land features of the Earth's surface, and it becomes imperative to provide an accurate radiometric calibration of the reflective bands to the user community. Previous calibration has been based on information obtained from prelaunch measurements, the onboard calibrator, vicarious calibration attempts, and cross-calibration with Landsat-7. Currently, additional data sources are available to improve this calibration. Specifically, improvements in vicarious calibration methods and development of the use of pseudoinvariant sites for trending provide two additional independent calibration sources. The use of these additional estimates has resulted in a consistent calibration approach that ties together all of the available calibration data sources. Results from this analysis indicate that a simple exponential or a constant model may be used for all bands throughout the lifetime of Landsat-5 TM. Whereas previously the time constants for the exponential models were approximately one year, the updated model has significantly longer time constants in bands 1-3. In contrast, bands 4, 5, and 7 are shown to be best modeled by a constant. The models proposed in this paper indicate calibration knowledge of 5% or better early in life, decreasing to nearly 2% later in life. These models have been implemented at the U.S. Geological Survey Earth Resources Observation and Science (EROS) and are the default calibration used for all Landsat TM data now distributed through EROS. © 2008 IEEE.
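
    A small sketch of the kind of trending model described above: a constant-plus-exponential gain model fitted to a calibration time series with scipy.optimize.curve_fit. The data, coefficients, and time constant below are synthetic and purely illustrative, not the published Landsat-5 TM values.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gain_model(t, a0, a1, tau):
        """Exponential-plus-constant responsivity trend, gain(t) = a0 + a1*exp(-t/tau),
        with t in years since launch (illustrative form for the reflective bands)."""
        return a0 + a1 * np.exp(-t / tau)

    # Synthetic trending data standing in for vicarious / pseudoinvariant-site estimates.
    rng = np.random.default_rng(1)
    t_years = np.linspace(0, 22, 60)
    true = gain_model(t_years, 1.00, 0.12, 4.0)
    observed = true * (1 + 0.01 * rng.standard_normal(t_years.size))   # ~1% scatter

    params, cov = curve_fit(gain_model, t_years, observed, p0=(1.0, 0.1, 2.0))
    a0, a1, tau = params
    print(f"fitted gain(t) = {a0:.3f} + {a1:.3f} * exp(-t/{tau:.2f} yr)")
    ```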

  7. Rivaroxaban Levels in Patients' Plasmas are Comparable by Using Two Different Anti Xa Assay/Coagulometer Systems Calibrated with Two Different Calibrators.

    PubMed

    Martinuzzo, Marta E; Duboscq, Cristina; Lopez, Marina S; Barrera, Luis H; Vinuales, Estela S; Ceresetto, Jose; Forastiero, Ricardo R; Oyhamburu, Jose

    2018-06-01

    Rivaroxaban oral anticoagulant does not need laboratory monitoring, but in some situations plasma level measurement is useful. The objective of this paper was to verify the analytical performance of, and to compare, two rivaroxaban-calibrated anti-Xa assay/coagulometer systems using specific or other-brand calibrators. In 59 samples drawn at trough or peak from patients taking rivaroxaban, plasma levels were measured by HemosIL Liquid Anti-Xa on the ACL TOP 300/500 and STA Liquid Anti-Xa on the TCoag Destiny Plus. HemosIL and STA rivaroxaban calibrators and controls were used. CLSI guideline procedures EP15A3 for precision and trueness, EP6 for linearity, and EP9 for methods comparison were followed. Within-run and total-precision coefficients of variation (CVR and CVWL, respectively) of plasma rivaroxaban were < 4.2% and < 4.85%, and bias was < 7.4% and < 6.5%, for the HemosIL-ACL TOP and STA-Destiny systems, respectively. Linearity was verified over 8-525 ng/mL. Deming regression for methods comparison presented R = 0.963, 0.968, and 0.982, with a mean CV of 13.3% when using different systems and calibrations. The analytical performance for plasma rivaroxaban was acceptable on both systems, and results from the reagent/coagulometer systems are comparable even when calibrating with material from a different brand.
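
    For readers unfamiliar with the method-comparison statistic quoted above, the sketch below implements the classic Deming regression estimator (errors in both variables). The error-variance ratio and the example concentrations are assumptions for illustration, not data from the study.

    ```python
    import numpy as np

    def deming_regression(x, y, delta=1.0):
        """Deming regression of y on x with measurement error in both variables.

        delta is the ratio of the y-error variance to the x-error variance
        (delta = 1 gives orthogonal regression). Returns (intercept, slope)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        xbar, ybar = x.mean(), y.mean()
        sxx = np.sum((x - xbar) ** 2)
        syy = np.sum((y - ybar) ** 2)
        sxy = np.sum((x - xbar) * (y - ybar))
        slope = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2
                 + 4 * delta * sxy ** 2)) / (2 * sxy)
        intercept = ybar - slope * xbar
        return intercept, slope

    # Illustrative comparison of two anti-Xa systems (ng/mL); values are made up.
    system_a = np.array([10, 45, 90, 150, 220, 310, 400, 480])
    system_b = np.array([12, 43, 95, 145, 228, 300, 410, 470])
    b0, b1 = deming_regression(system_a, system_b)
    print(f"system_b ≈ {b0:.1f} + {b1:.3f} * system_a")
    ```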

  8. Method for Accurately Calibrating a Spectrometer Using Broadband Light

    NASA Technical Reports Server (NTRS)

    Simmons, Stephen; Youngquist, Robert

    2011-01-01

    A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. This new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
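
    A minimal sketch of the calibration idea: predict the channelled spectrum I(λ) ∝ 1 + V·cos(2π·OPD/λ) produced by the unbalanced Michelson interferometer and find the wavelength correction that best aligns the measured spectrum with the prediction. The optical path difference, shift, and grid below are assumed values, and a single global shift stands in for the per-pixel correction a real calibration would produce.

    ```python
    import numpy as np

    def channelled_spectrum(wavelengths_nm, opd_nm, visibility=0.8):
        """Predicted white-light spectrum after an unbalanced Michelson interferometer:
        I(lambda) ∝ 1 + V*cos(2*pi*OPD/lambda), with OPD the optical path difference."""
        return 1.0 + visibility * np.cos(2 * np.pi * opd_nm / wavelengths_nm)

    # Synthetic example: the factory calibration is off by a small wavelength shift.
    opd = 25_000.0                                   # 25 um path difference (assumed)
    true_wl = np.linspace(400, 900, 2000)            # nm, "true" pixel wavelengths
    measured = channelled_spectrum(true_wl, opd)     # what the CCD actually records
    factory_wl = true_wl + 0.35                      # factory table off by 0.35 nm

    # Recover the shift by matching the measured fringes to the predicted pattern.
    shifts = np.linspace(-1.0, 1.0, 801)             # candidate corrections, nm
    errors = [np.sum((measured - channelled_spectrum(factory_wl + s, opd)) ** 2)
              for s in shifts]
    best = shifts[int(np.argmin(errors))]
    print(f"estimated wavelength correction: {best:+.3f} nm")
    ```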

  9. An overview of sensor calibration inter-comparison and applications

    USGS Publications Warehouse

    Xiong, Xiaoxiong; Cao, Changyong; Chander, Gyanesh

    2010-01-01

    Long-term climate data records (CDR) are often constructed using observations made by multiple Earth observing sensors over a broad range of spectra and a large scale in both time and space. These sensors can be of the same or different types operated on the same or different platforms. They can be developed and built with different technologies and are likely operated over different time spans. It has been known that the uncertainty of climate models and data records depends not only on the calibration quality (accuracy and stability) of individual sensors, but also on their calibration consistency across instruments and platforms. Therefore, sensor calibration inter-comparison and validation have become increasingly demanding and will continue to play an important role for a better understanding of the science product quality. This paper provides an overview of different methodologies, which have been successfully applied for sensor calibration inter-comparison. Specific examples using different sensors, including MODIS, AVHRR, and ETM+, are presented to illustrate the implementation of these methodologies.

  10. A High Precision $3.50 Open Source 3D Printed Rain Gauge Calibrator

    NASA Astrophysics Data System (ADS)

    Lopez Alcala, J. M.; Udell, C.; Selker, J. S.

    2017-12-01

    Currently available rain gauge calibrators tend to be designed for specific rain gauges, are expensive, employ low-precision water reservoirs, and do not offer the flexibility needed to test the ever more popular small-aperture rain gauges. The objective of this project was to develop and validate a freely downloadable, open-source, 3D printed rain gauge calibrator that can be adjusted for a wide range of gauges. The proposed calibrator provides for applying low, medium, and high intensity flow, and allows the user to modify the design to conform to unique system specifications based on parametric design, which may be modified and printed using CAD software. To overcome the fact that different 3D printers yield different print qualities, we devised a simple post-printing step that controlled critical dimensions to assure robust performance. Specifically, the three orifices of the calibrator are drilled to reach the three target flow rates. Laboratory tests showed that flow rates were consistent between prints, and between trials of each part, while the total applied water was precisely controlled by the use of a volumetric flask as the reservoir.

  11. Stress moderates the relationships between problem-gambling severity and specific psychopathologies.

    PubMed

    Ronzitti, Silvia; Kraus, Shane W; Hoff, Rani A; Potenza, Marc N

    2018-01-01

    The purpose of this study was to examine the extent to which stress moderated the relationships between problem-gambling severity and psychopathologies. We analyzed Wave-1 data from 41,869 participants of the National Epidemiologic Survey of Alcohol and Related Conditions (NESARC). Logistic regression showed that, as compared to a non-gambling (NG) group, individuals with at-risk gambling (ARG) and problem/pathological gambling (PPG) demonstrated higher odds of multiple Axis-I and Axis-II disorders in both high- and low-stress groups. Interaction odds ratios were statistically significant for stress moderating the relationships between at-risk gambling (versus non-gambling) and Any Axis-I and Any Axis-II disorder, with substance-use and Cluster-A and Cluster-B disorders contributing significantly. Some similar patterns were observed for pathological gambling (versus non-gambling), with stress moderating relationships with Cluster-B disorders. In all cases, a stronger relationship was observed between problem-gambling severity and psychopathology in the low-stress versus high-stress groups. The findings suggest that perceived stress accounts for some of the variance in the relationship between problem-gambling severity and specific forms of psychopathology, particularly with respect to lower intensity, subsyndromal levels of gambling. Findings suggest that stress may be particularly important to consider in the relationships between problem-gambling severity and substance use and Cluster-B disorders. Published by Elsevier B.V.

  12. SAR calibration technology review

    NASA Technical Reports Server (NTRS)

    Walker, J. L.; Larson, R. W.

    1981-01-01

    Synthetic Aperture Radar (SAR) calibration technology including a general description of the primary calibration techniques and some of the factors which affect the performance of calibrated SAR systems are reviewed. The use of reference reflectors for measurement of the total system transfer function along with an on-board calibration signal generator for monitoring the temporal variations of the receiver to processor output is a practical approach for SAR calibration. However, preliminary error analysis and previous experimental measurements indicate that reflectivity measurement accuracies of better than 3 dB will be difficult to achieve. This is not adequate for many applications and, therefore, improved end-to-end SAR calibration techniques are required.

  13. Calibration methods influence quantitative material decomposition in photon-counting spectral CT

    NASA Astrophysics Data System (ADS)

    Curtis, Tyler E.; Roeder, Ryan K.

    2017-03-01

    Photon-counting detectors and nanoparticle contrast agents can potentially enable molecular imaging and material decomposition in computed tomography (CT). Material decomposition has been investigated using both simulated and acquired data sets. However, the effect of calibration methods on material decomposition has not been systematically investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on quantitative material decomposition. A commercially available photon-counting spectral micro-CT (MARS Bioimaging) was used to acquire images with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material basis matrix values were determined using multiple linear regression models and material decomposition was performed using a maximum a posteriori estimator. The accuracy of quantitative material decomposition was evaluated by the root mean squared error (RMSE), specificity, sensitivity, and area under the curve (AUC). An increased maximum concentration (range) in the calibration significantly improved RMSE, specificity and AUC. The effects of an increased number of concentrations in the calibration were not statistically significant for the conditions in this study. The overall results demonstrated that the accuracy of quantitative material decomposition in spectral CT is significantly influenced by calibration methods, which must therefore be carefully considered for the intended diagnostic imaging application.
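
    A simplified sketch of the calibration-and-decomposition pipeline described above: the material basis matrix is estimated by linear regression from vials of known concentration, and a new multi-bin measurement is decomposed against that basis. Ordinary least squares stands in for the maximum a posteriori estimator used in the study, and all numbers are synthetic.

    ```python
    import numpy as np

    # Calibration: measured attenuation in five energy bins for vials of known
    # contrast-agent and water-equivalent composition (values are illustrative).
    # Rows = calibration samples, columns = [agent mg/mL, water fraction].
    concentrations = np.array([[0.0, 1.0], [2.0, 1.0], [4.0, 1.0],
                               [8.0, 1.0], [16.0, 1.0]])
    rng = np.random.default_rng(2)
    true_basis = np.array([[0.30, 0.22, 0.15, 0.40, 0.25],    # agent response per bin
                           [0.20, 0.18, 0.16, 0.15, 0.14]])   # water response per bin
    measurements = concentrations @ true_basis
    measurements += 0.01 * rng.standard_normal(measurements.shape)

    # Multiple linear regression for the material basis matrix (materials x bins).
    basis, *_ = np.linalg.lstsq(concentrations, measurements, rcond=None)

    # Decomposition of a new voxel's multi-bin measurement back into materials.
    voxel = np.array([6.0, 1.0]) @ true_basis
    estimate, *_ = np.linalg.lstsq(basis.T, voxel, rcond=None)
    print("estimated [agent, water]:", np.round(estimate, 2))
    ```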

  14. A study and meta-analysis of lay attributions of cures for overcoming specific psychological problems.

    PubMed

    Furnham, A; Hayward, R

    1997-09-01

    Lay beliefs about the importance of 24 different contributors to overcoming 4 disorders that constitute primarily cognitive deficits were studied. A meta-analysis of previous programmatic studies in the area was performed so that 22 different psychological problems could be compared. In the present study, 107 participants completed a questionnaire indicating how effective 24 factors were in overcoming 4 specific problems: dyslexia, fear of flying, amnesia, and learning difficulties. Factor analysis revealed almost identical clusters (inner control, social consequences, understanding, receiving help, and fate) for each problem. The perceived relevance of those factors differed significantly between problems. Some individual difference factors (sex and religion) were found to predict certain factor attributions for specific disorders. A meta-analysis of the 5 studies in this series yielded a 6-factor structure comparable to those of the individual studies and provided results indicating the benefits and limitations of this kind of investigation. The clinical relevance of studying attributions for cure is considered.

  15. Input variable selection and calibration data selection for storm water quality regression models.

    PubMed

    Sun, Siao; Bertrand-Krajewski, Jean-Luc

    2013-01-01

    Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data for developing models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available. Model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems interact with each other. A procedure is developed to fulfil the two selection tasks in sequence. The procedure first selects model input variables using a cross-validation method. An appropriate number of variables are identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the model input selection results, calibration data selection is studied. Uncertainty of model performance due to calibration data selection is investigated with a random selection method. A cluster-based approach is then applied to enhance model calibration, based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content in the calibration data is important in addition to the size of the calibration data set.
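
    A hedged sketch of the two selection steps using scikit-learn: greedy forward input-variable selection scored by cross-validation, followed by cluster-based selection of representative calibration events. The variable names, event data, and cluster count are invented for illustration and do not reproduce the study's procedure in detail.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    names = ["rain_depth", "intensity", "dry_days", "runoff_vol", "season"]
    X = rng.standard_normal((120, len(names)))
    y = 2.0 * X[:, 0] + 1.0 * X[:, 3] + 0.3 * rng.standard_normal(120)  # synthetic pollutant load

    # 1) Input-variable selection: greedy forward selection on cross-validated R^2,
    #    stopping when adding a variable no longer improves the score.
    selected, best_score = [], -np.inf
    while len(selected) < len(names):
        trial = {j: cross_val_score(LinearRegression(), X[:, selected + [j]], y, cv=5).mean()
                 for j in range(len(names)) if j not in selected}
        j_best = max(trial, key=trial.get)
        if trial[j_best] <= best_score + 1e-3:
            break
        selected.append(j_best)
        best_score = trial[j_best]
    print("selected inputs:", [names[j] for j in selected])

    # 2) Calibration data selection: cluster events in the input space and keep the
    #    event closest to each centroid as a representative calibration point.
    km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X[:, selected])
    calib = sorted({int(np.argmin(np.linalg.norm(X[:, selected] - c, axis=1)))
                    for c in km.cluster_centers_})
    verif = [i for i in range(len(y)) if i not in calib]
    model = LinearRegression().fit(X[np.ix_(calib, selected)], y[calib])
    print("verification R^2:", round(model.score(X[np.ix_(verif, selected)], y[verif]), 3))
    ```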

  16. Objective calibration of regional climate models

    NASA Astrophysics Data System (ADS)

    Bellprat, O.; Kotlarski, S.; Lüthi, D.; Schär, C.

    2012-12-01

    Climate models are subject to high parametric uncertainty induced by poorly confined parameters of parameterized physical processes. Uncertain model parameters are typically calibrated in order to increase the agreement of the model with available observations. The common practice is to adjust uncertain model parameters manually, often referred to as expert tuning, which lacks objectivity and transparency in the use of observations. These shortcomings often haze model inter-comparisons and hinder the implementation of new model parameterizations. Methods that would allow model parameters to be calibrated systematically are unfortunately often not applicable to state-of-the-art climate models, owing to the computational constraints imposed by the high dimensionality and non-linearity of the problem. Here we present an approach to objectively calibrate a regional climate model, using reanalysis-driven simulations and building upon a quadratic metamodel presented by Neelin et al. (2010) that serves as a computationally cheap surrogate of the model. Five model parameters originating from different parameterizations are selected for the optimization according to their influence on the model performance. The metamodel accurately estimates spatial averages of 2 m temperature, precipitation, and total cloud cover, with an uncertainty of similar magnitude to the internal variability of the regional climate model. The non-linearities of the parameter perturbations are well captured, such that only a limited number of 20-50 simulations are needed to estimate optimal parameter settings. Parameter interactions are small, which allows the number of simulations to be reduced further. In comparison to an ensemble of the same model which has undergone expert tuning, the calibration yields similar optimal model configurations, but leads to an additional reduction of the model error. The performance range captured is much wider than sampled with the expert-tuned ensemble and the presented
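
    A minimal sketch of the metamodel idea, under the assumption that the performance metric is well approximated by a quadratic in the parameters: fit a quadratic surrogate to a small ensemble of (parameter set, model error) pairs and minimise the surrogate instead of the full model. The toy error function and design below are illustrative, not the regional climate model or the Neelin et al. formulation.

    ```python
    import numpy as np
    from itertools import combinations
    from scipy.optimize import minimize

    def expensive_model_error(p):
        """Stand-in for a regional-climate-model performance metric evaluated by a
        full simulation (here: an arbitrary smooth function of 3 parameters)."""
        a, b, c = p
        return (a - 0.3) ** 2 + 2 * (b + 0.1) ** 2 + (c - 0.6) ** 2 + 0.5 * a * b

    def quadratic_features(P):
        """Design matrix with constant, linear, squared and pairwise interaction terms."""
        cols = [np.ones(len(P))] + [P[:, i] for i in range(P.shape[1])]
        cols += [P[:, i] ** 2 for i in range(P.shape[1])]
        cols += [P[:, i] * P[:, j] for i, j in combinations(range(P.shape[1]), 2)]
        return np.column_stack(cols)

    # Small ensemble of "simulations" (a few tens of runs for a quadratic in few parameters).
    rng = np.random.default_rng(4)
    design = rng.uniform(-1, 1, size=(30, 3))
    errors = np.array([expensive_model_error(p) for p in design])

    coeffs, *_ = np.linalg.lstsq(quadratic_features(design), errors, rcond=None)
    surrogate = lambda p: quadratic_features(np.atleast_2d(p)) @ coeffs

    opt = minimize(lambda p: float(surrogate(p)), x0=np.zeros(3), bounds=[(-1, 1)] * 3)
    print("metamodel-optimal parameters:", np.round(opt.x, 3))
    print("surrogate error at optimum:", round(float(surrogate(opt.x)), 4))
    ```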

  17. Limits of Predictability in Commuting Flows in the Absence of Data for Calibration

    PubMed Central

    Yang, Yingxiang; Herrera, Carlos; Eagle, Nathan; González, Marta C.

    2014-01-01

    The estimation of commuting flows at different spatial scales is a fundamental problem for different areas of study. Many current methods rely on parameters requiring calibration from empirical trip volumes. Their values are often not generalizable to cases without calibration data. To solve this problem we develop a statistical expression to calculate commuting trips with a quantitative functional form to estimate the model parameter when empirical trip data is not available. We calculate commuting trip volumes at scales from within a city to an entire country, introducing a scaling parameter α to the recently proposed parameter free radiation model. The model requires only widely available population and facility density distributions. The parameter can be interpreted as the influence of the region scale and the degree of heterogeneity in the facility distribution. We explore in detail the scaling limitations of this problem, namely under which conditions the proposed model can be applied without trip data for calibration. On the other hand, when empirical trip data is available, we show that the proposed model's estimation accuracy is as good as other existing models. We validated the model in different regions in the U.S., then successfully applied it in three different countries. PMID:25012599
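
    For reference, a sketch of the baseline parameter-free radiation model on which the paper builds; the scaling exponent α introduced in the paper is not reproduced here, and the populations, coordinates, and trip totals are made up.

    ```python
    import numpy as np

    def radiation_flows(pop, xy, trips_out):
        """Parameter-free radiation model for commuting flows between locations.

        pop:       (N,) populations m_i
        xy:        (N, 2) coordinates
        trips_out: (N,) total commuters leaving each location (T_i)
        Returns an (N, N) matrix of predicted flows T_ij. (The paper adds a scaling
        exponent alpha to this baseline; that extension is not reproduced here.)"""
        N = len(pop)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        T = np.zeros((N, N))
        for i in range(N):
            for j in range(N):
                if i == j:
                    continue
                # s_ij: population strictly inside the circle of radius d_ij around i,
                # excluding the source i and the destination j.
                inside = d[i] < d[i, j]
                s = pop[inside].sum() - pop[i] - (pop[j] if inside[j] else 0.0)
                T[i, j] = trips_out[i] * pop[i] * pop[j] / ((pop[i] + s) * (pop[i] + pop[j] + s))
        return T

    rng = np.random.default_rng(5)
    pop = rng.integers(1_000, 100_000, size=6).astype(float)
    xy = rng.uniform(0, 50, size=(6, 2))
    flows = radiation_flows(pop, xy, trips_out=0.5 * pop)
    print(np.round(flows, 1))
    ```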

  18. Kalman Filter for Calibrating a Telescope Focal Plane

    NASA Technical Reports Server (NTRS)

    Kang, Bryan; Bayard, David

    2006-01-01

    The instrument-pointing frame (IPF) Kalman filter, and an algorithm that implements this filter, have been devised for calibrating the focal plane of a telescope. As used here, calibration signifies, more specifically, a combination of measurements and calculations directed toward ensuring accuracy in aiming the telescope and determining the locations of objects imaged in various arrays of photodetectors in instruments located on the focal plane. The IPF Kalman filter was originally intended for application to a spaceborne infrared astronomical telescope, but can also be applied to other spaceborne and ground-based telescopes. In the traditional approach to calibration of a telescope, (1) one team of experts concentrates on estimating parameters (e.g., pointing alignments and gyroscope drifts) that are classified as being of primarily an engineering nature, (2) another team of experts concentrates on estimating calibration parameters (e.g., plate scales and optical distortions) that are classified as being primarily of a scientific nature, and (3) the two teams repeatedly exchange data in an iterative process in which each team refines its estimates with the help of the data provided by the other team. This iterative process is inefficient and uneconomical because it is time-consuming and entails the maintenance of two survey teams and the development of computer programs specific to the requirements of each team. Moreover, theoretical analysis reveals that the engineering/science iterative approach is not optimal in that it does not yield the best estimates of focal-plane parameters and, depending on the application, may not even enable convergence toward a set of estimates.

  19. (abstract) A VLBI Test of Tropospheric Delay Calibration with WVRs

    NASA Technical Reports Server (NTRS)

    Linfield, R. P.; Teitelbaum, L. P.; Keihm, S. J.; Resch, G. M.; Mahoney, M. J.; Treuhaft, R. N.

    1994-01-01

    Dual frequency (S/X band) very long baseline interferometry (VLBI) observations were used to test troposphere calibration by water vapor radiometers (WVRs). Comparison of the VLBI and WVR measurements shows statistical agreement (specifically, their structure functions agree) on time scales less than 700 seconds. On longer time scales, VLBI instrumental errors become important. The improvement in VLBI residual delays from WVR calibration was consistent with the measured level of tropospheric fluctuations.
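
    A small sketch of the statistic used for the comparison: the temporal structure function D(τ) = ⟨[x(t+τ) − x(t)]²⟩ of a residual-delay series, evaluated at a few lags. The synthetic delay series and units below are illustrative only.

    ```python
    import numpy as np

    def structure_function(x, dt, lags_s):
        """Temporal structure function D(tau) = <[x(t + tau) - x(t)]^2> of a regularly
        sampled series x with sample spacing dt (seconds)."""
        x = np.asarray(x, float)
        D = []
        for tau in lags_s:
            k = int(round(tau / dt))
            if k < 1 or k >= x.size:
                D.append(np.nan)
                continue
            diffs = x[k:] - x[:-k]
            D.append(np.mean(diffs ** 2))
        return np.array(D)

    # Synthetic "residual delay" series with random-walk-like fluctuations (illustrative).
    rng = np.random.default_rng(6)
    dt = 10.0                                              # seconds per sample
    delay_ps = np.cumsum(rng.standard_normal(1000)) * 0.2  # picoseconds
    lags = np.array([50, 100, 200, 400, 700])
    print(dict(zip(lags, np.round(structure_function(delay_ps, dt, lags), 2))))
    ```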

  20. Hybrid Geometric Calibration Method for Multi-Platform Spaceborne SAR Image with Sparse Gcps

    NASA Astrophysics Data System (ADS)

    Lv, G.; Tang, X.; Ai, B.; Li, T.; Chen, Q.

    2018-04-01

    Geometric calibration is able to provide high-accuracy geometric coordinates for spaceborne SAR images through accurate geometric parameters in the Range-Doppler model, determined using ground control points (GCPs). However, it is very difficult to obtain GCPs covering large-scale areas, especially in mountainous regions. In addition, the traditional calibration method is only applicable to single-platform SAR images and cannot support hybrid geometric calibration of multi-platform images. To solve the above problems, a hybrid geometric calibration method for multi-platform spaceborne SAR images with sparse GCPs is proposed in this paper. First, we calibrate the master image that contains GCPs. Secondly, the point tracking algorithm is used to obtain the tie points (TPs) between the master and slave images. Finally, we calibrate the slave images using the TPs as GCPs. We take the Beijing-Tianjin-Hebei region as an example to study the SAR image hybrid geometric calibration method using 3 TerraSAR-X images, 3 TanDEM-X images and 5 GF-3 images covering more than 235 kilometers in the north-south direction. Geometric calibration of all images is completed using only 5 GCPs. The GPS data extracted from a GNSS receiver are used to assess the plane accuracy after calibration. The results after geometric calibration with sparse GCPs show that the geometric positioning accuracy is 3 m for TSX/TDX images and 7.5 m for GF-3 images.

  1. Calibration and use of filter test facility orifice plates

    NASA Astrophysics Data System (ADS)

    Fain, D. E.; Selby, T. W.

    1984-07-01

    There are three official DOE filter test facilities. These test facilities are used by the DOE, and others, to test nuclear grade HEPA filters to provide Quality Assurance that the filters meet the required specifications. The filters are tested for both filter efficiency and pressure drop. In the test equipment, standard orifice plates are used to set the specified flow rates for the tests. There has existed a need to calibrate the orifice plates from the three facilities with a common calibration source to assure that the facilities have comparable tests. A project has been undertaken to calibrate these orifice plates. In addition to reporting the results of the calibrations of the orifice plates, the means for using the calibration results will be discussed. A comparison of the orifice discharge coefficients for the orifice plates used at the seven facilities will be given. The pros and cons for the use of mass flow or volume flow rates for testing will be discussed. It is recommended that volume flow rates be used as a more practical and comparable means of testing filters. The rationale for this recommendation will be discussed.
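
    As a worked illustration of how an orifice plate sets a test flow rate (and of the mass-flow versus volume-flow distinction raised above), the sketch below evaluates the simplified incompressible orifice equation Q = C_d·A·√(2ΔP/ρ). The discharge coefficient, bore, pressure drop, and air density are assumed values, not calibration results from the facilities.

    ```python
    import math

    def orifice_volume_flow(c_d, orifice_diameter_m, delta_p_pa, air_density_kg_m3):
        """Volumetric flow through an orifice plate, Q = C_d * A * sqrt(2*dP/rho).
        (Simplified incompressible form; real test-stand practice adds an expansion
        factor and a beta-ratio correction.)"""
        area = math.pi * (orifice_diameter_m / 2) ** 2
        return c_d * area * math.sqrt(2 * delta_p_pa / air_density_kg_m3)

    rho = 1.20                      # kg/m^3 at the test condition (assumed)
    q = orifice_volume_flow(c_d=0.62, orifice_diameter_m=0.10, delta_p_pa=500.0,
                            air_density_kg_m3=rho)
    print(f"volume flow: {q:.3f} m^3/s  ({q * 3600:.0f} m^3/h)")
    print(f"mass flow:   {q * rho:.3f} kg/s (density-dependent, hence the volume-flow recommendation)")
    ```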

  2. Assessing groundwater vulnerability in the Kinshasa region, DR Congo, using a calibrated DRASTIC model

    NASA Astrophysics Data System (ADS)

    Mfumu Kihumba, Antoine; Vanclooster, Marnik; Ndembo Longo, Jean

    2017-02-01

    This study assessed the vulnerability of groundwater to pollution in the Kinshasa region, DR Congo, in support of a groundwater protection program. The parametric vulnerability model (DRASTIC) was modified and calibrated to predict the intrinsic vulnerability as well as the groundwater pollution risk. The method uses groundwater-body-specific parameters for the calibration of the factor ratings and weightings of the original DRASTIC model. These groundwater-specific parameters are inferred from the statistical relation between the original DRASTIC model and observed nitrate pollution for a specific period. In addition, site-specific land use parameters are integrated into the method. The method is fully embedded in a Geographic Information System (GIS). Following these modifications, the correlation coefficient between groundwater pollution risk and observed nitrate concentrations for the 2013-2014 survey improved from r = 0.42, for the original DRASTIC model, to r = 0.61 for the calibrated model. To validate this pollution risk map, observed nitrate concentrations from another survey (2008) were compared to the pollution risk indices, showing a good degree of agreement (r = 0.51). The study shows that calibration of a vulnerability model is recommended when vulnerability maps are used for groundwater resource management and land-use planning at the regional scale, and that the calibrated model is adapted to the specific area.
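
    A hedged sketch of a DRASTIC-style index and one simple way weights might be adjusted against observed nitrate; the correlation-based rescaling below is an illustrative stand-in, not the statistical calibration procedure used in the study, and all ratings and nitrate values are synthetic.

    ```python
    import numpy as np

    # DRASTIC-style index: weighted sum of seven factor ratings per grid cell.
    factors = ["Depth", "Recharge", "Aquifer", "Soil", "Topography", "Impact", "Conductivity"]
    original_weights = np.array([5, 4, 3, 2, 1, 5, 3], dtype=float)

    rng = np.random.default_rng(7)
    ratings = rng.integers(1, 11, size=(200, 7)).astype(float)   # ratings 1-10 per cell
    nitrate = ratings @ np.array([6, 3, 3, 2, 1, 4, 2]) + 5 * rng.standard_normal(200)

    def vulnerability_index(ratings, weights):
        return ratings @ weights

    # Simple calibration: rescale each weight by the correlation between that factor's
    # rating and observed nitrate (one of several possible adjustment rules).
    corr = np.array([np.corrcoef(ratings[:, j], nitrate)[0, 1] for j in range(7)])
    calibrated_weights = original_weights * np.clip(corr, 0, None) / np.clip(corr, 0, None).mean()

    for name, w0, w1 in zip(factors, original_weights, calibrated_weights):
        print(f"{name:12s} weight: {w0:.1f} -> {w1:.2f}")

    r_orig = np.corrcoef(vulnerability_index(ratings, original_weights), nitrate)[0, 1]
    r_cal = np.corrcoef(vulnerability_index(ratings, calibrated_weights), nitrate)[0, 1]
    print(f"correlation with nitrate: {r_orig:.2f} (original) vs {r_cal:.2f} (calibrated)")
    ```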

  3. Information Retrieval Performance of Probabilistically Generated, Problem-Specific Computerized Provider Order Entry Pick-Lists: A Pilot Study

    PubMed Central

    Rothschild, Adam S.; Lehmann, Harold P.

    2005-01-01

    Objective: The aim of this study was to preliminarily determine the feasibility of probabilistically generating problem-specific computerized provider order entry (CPOE) pick-lists from a database of explicitly linked orders and problems from actual clinical cases. Design: In a pilot retrospective validation, physicians reviewed internal medicine cases consisting of the admission history and physical examination and orders placed using CPOE during the first 24 hours after admission. They created coded problem lists and linked orders from individual cases to the problem for which they were most indicated. Problem-specific order pick-lists were generated by including a given order in a pick-list if the probability of linkage of order and problem (PLOP) equaled or exceeded a specified threshold. PLOP for a given linked order-problem pair was computed as its prevalence among the other cases in the experiment with the given problem. The orders that the reviewer linked to a given problem instance served as the reference standard to evaluate its system-generated pick-list. Measurements: Recall, precision, and length of the pick-lists. Results: Average recall reached a maximum of .67 with a precision of .17 and pick-list length of 31.22 at a PLOP threshold of 0. Average precision reached a maximum of .73 with a recall of .09 and pick-list length of .42 at a PLOP threshold of .9. Recall varied inversely with precision in classic information retrieval behavior. Conclusion: We preliminarily conclude that it is feasible to generate problem-specific CPOE pick-lists probabilistically from a database of explicitly linked orders and problems. Further research is necessary to determine the usefulness of this approach in real-world settings. PMID:15684134
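
    A minimal sketch of the pick-list generation and evaluation described above: PLOP for an order-problem pair is estimated as its prevalence among the other cases with that problem, a pick-list keeps orders at or above a threshold, and recall/precision are scored against the orders the reviewer actually linked. The toy cases below are invented.

    ```python
    from collections import defaultdict

    # Toy database of cases: each case maps a problem to the set of orders linked to it.
    cases = [
        {"pneumonia": {"chest xray", "blood cultures", "ceftriaxone", "azithromycin"}},
        {"pneumonia": {"chest xray", "ceftriaxone", "oxygen"}},
        {"pneumonia": {"blood cultures", "ceftriaxone", "sputum culture"}},
        {"heart failure": {"furosemide", "chest xray", "bnp"}},
    ]

    def picklist(problem, threshold, exclude_case):
        """Orders whose probability of linkage to `problem` (PLOP), estimated as the
        prevalence among the *other* cases with that problem, meets the threshold."""
        counts, n = defaultdict(int), 0
        for i, case in enumerate(cases):
            if i == exclude_case or problem not in case:
                continue
            n += 1
            for order in case[problem]:
                counts[order] += 1
        return {o for o, c in counts.items() if n and c / n >= threshold}

    # Evaluate against case 0's actual linked orders (the reference standard).
    reference = cases[0]["pneumonia"]
    for thr in (0.0, 0.5, 1.0):
        pl = picklist("pneumonia", thr, exclude_case=0)
        recall = len(pl & reference) / len(reference)
        precision = len(pl & reference) / len(pl) if pl else 0.0
        print(f"PLOP >= {thr:.1f}: list={sorted(pl)}, recall={recall:.2f}, precision={precision:.2f}")
    ```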

  4. Multispectral scanner flight model (F-1) radiometric calibration and alignment handbook

    NASA Technical Reports Server (NTRS)

    1981-01-01

    This handbook on the calibration of the MSS-D flight model (F-1) provides both the relevant data and a summary description of how the data were obtained for the system radiometric calibration, system relative spectral response, and the filter response characteristics for all 24 channels of the four band MSS-D F-1 scanner. The calibration test procedure and resulting test data required to establish the reference light levels of the MSS-D internal calibration system are discussed. The final set of data ("nominal" calibration wedges for all 24 channels) for the internal calibration system is given. The system relative spectral response measurements for all 24 channels of MSS-D F-1 are included. These data are the spectral response of the complete scanner, which are the composite of the spectral responses of the scan mirror primary and secondary telescope mirrors, fiber optics, optical filters, and detectors. Unit level test data on the measurements of the individual channel optical transmission filters are provided. Measured performance is compared to specification values.

  5. Automated Calibration of Atmospheric Oxidized Mercury Measurements.

    PubMed

    Lyman, Seth; Jones, Colleen; O'Neil, Trevor; Allen, Tanner; Miller, Matthieu; Gustin, Mae Sexauer; Pierce, Ashley M; Luke, Winston; Ren, Xinrong; Kelley, Paul

    2016-12-06

    The atmosphere is an important reservoir for mercury pollution, and understanding of oxidation processes is essential to elucidating the fate of atmospheric mercury. Several recent studies have shown that a low bias exists in a widely applied method for atmospheric oxidized mercury measurements. We developed an automated, permeation tube-based calibrator for elemental and oxidized mercury, and we integrated this calibrator with atmospheric mercury instrumentation (Tekran 2537/1130/1135 speciation systems) in Reno, Nevada and at Mauna Loa Observatory, Hawaii, U.S.A. While the calibrator has limitations, it was able to routinely inject stable amounts of HgCl2 and HgBr2 into atmospheric mercury measurement systems over periods of several months. In Reno, recovery of injected mercury compounds as gaseous oxidized mercury (as opposed to elemental mercury) decreased with increasing specific humidity, as has been shown in other studies, although this trend was not observed at Mauna Loa, likely due to differences in atmospheric chemistry at the two locations. Recovery of injected mercury compounds as oxidized mercury was greater at Mauna Loa than in Reno, and greater still for a cation-exchange membrane-based measurement system. These results show that routine calibration of atmospheric oxidized mercury measurements is both feasible and necessary.

  6. The photomultiplier tube calibration system of the MicroBooNE experiment

    DOE PAGES

    Conrad, J.; Jones, B. J. P.; Moss, Z.; ...

    2015-06-03

    Here, we report on the design and construction of an LED-based fiber calibration system for large liquid argon time projection detectors. This system was developed to calibrate the optical systems of the MicroBooNE experiment. As well as detailing the materials and installation procedure, we provide technical drawings and specifications so that the system may be easily replicated in future LArTPC detectors.

  7. Geometrical Calibration of the Photo-Spectral System and Digital Maps Retrieval

    NASA Astrophysics Data System (ADS)

    Bruchkouskaya, S.; Skachkova, A.; Katkovski, L.; Martinov, A.

    2013-12-01

    Imaging systems for remote sensing of the Earth are required to demonstrate high metric accuracy of the imagery, which can be achieved through preliminary geometrical calibration of the optical systems. The parameters of interior and exterior orientation of the cameras, determined through geometrical calibration, are needed for solving image-processing problems such as orthotransformation, geometrical correction, geographical coordinate fixing, scale adjustment, registration of images from various channels and cameras, creation of image mosaics of surveyed territories, and determination of geometrical characteristics of objects in the images. Geometrical calibration also helps to eliminate image deformations arising from manufacturing defects and errors in the installation of camera elements and photoreceiving matrices, as well as those resulting from lens distortion. A Photo-Spectral System (PhSS), intended for registering reflected radiation spectra of underlying surfaces in the wavelength range from 350 nm to 1050 nm and recording images of high spatial resolution, has been developed at the A.N. Sevchenko Research Institute of Applied Physical Problems of the Belarusian State University. The PhSS has undergone flight tests over the territory of Belarus onboard an Antonov AN-2 aircraft with the aim of obtaining visible-range images of the underlying surface. We then performed the geometrical calibration of the PhSS and carried out the correction of images obtained during the flight tests. Furthermore, we have plotted digital maps of the terrain using stereo pairs of images acquired from the PhSS and evaluated the accuracy of the created maps. Having obtained the calibration parameters, we apply them to correct images from another identical PhSS device, which is located at the Russian Orbital Segment of the International Space Station (ROS ISS), aiming to retrieve digital maps of the terrain with higher accuracy.

  8. Worrying about the Future: An Episodic Specificity Induction Impacts Problem Solving, Reappraisal, and Well-Being

    PubMed Central

    Jing, Helen G.; Madore, Kevin P.; Schacter, Daniel L.

    2015-01-01

    Previous research has demonstrated that an episodic specificity induction – brief training in recollecting details of a recent experience – enhances performance on various subsequent tasks thought to draw upon episodic memory processes. Existing work has also shown that mental simulation can be beneficial for emotion regulation and coping with stressors. Here we focus on understanding how episodic detail can affect problem solving, reappraisal, and psychological well-being regarding worrisome future events. In Experiment 1, an episodic specificity induction significantly improved participants’ performance on a subsequent means-end problem solving task (i.e., more relevant steps) and an episodic reappraisal task (i.e., more episodic details) involving personally worrisome future events compared with a control induction not focused on episodic specificity. Imagining constructive behaviors with increased episodic detail via the specificity induction was also related to significantly larger decreases in anxiety, perceived likelihood of a bad outcome, and perceived difficulty to cope with a bad outcome, as well as larger increases in perceived likelihood of a good outcome and indicated use of active coping behaviors compared with the control. In Experiment 2, we extended these findings using a more stringent control induction, and found preliminary evidence that the specificity induction was related to an increase in positive affect and decrease in negative affect compared with the control. Our findings support the idea that episodic memory processes are involved in means-end problem solving and episodic reappraisal, and that increasing the episodic specificity of imagining constructive behaviors regarding worrisome events may be related to improved psychological well-being. PMID:26820166

  9. Radiometric calibration status of Landsat-7 and Landsat-5

    USGS Publications Warehouse

    Barsi, J.A.; Markham, B.L.; Helder, D.L.; Chander, G.

    2007-01-01

    Launched in April 1999, Landsat-7 ETM+ continues to acquire data globally. The Scan Line Corrector failure in 2003 has affected ground coverage, and the recent switch to Bumper Mode operations in April 2007 has degraded the internal geometric accuracy of the data, but the radiometry has been unaffected. The best of the three on-board calibrators for the reflective bands, the Full Aperture Solar Calibrator (FASC), has indicated slow changes in the ETM+, but this is believed to be due to contamination on the panel rather than instrument degradation. The Internal Calibrator lamp 2, though it has not been used regularly throughout the whole mission, indicates smaller changes than the FASC since 2003. The changes indicated by lamp 2 are only statistically significant in band 1, circa 0.3% per year, and may be lamp as opposed to instrument degradation. Regular observations of desert targets in the Saharan and Arabian deserts indicate no change in the ETM+ reflective band response, though the uncertainty is larger and does not preclude the small changes indicated by lamp 2. The thermal band continues to be stable and well-calibrated since an offset error was corrected in late 2000. Launched in 1984, Landsat-5 TM also continues to acquire global data, though without the benefit of an on-board recorder, data can only be acquired where a ground station is within range. Historically, the calibration of the TM reflective bands has used an onboard calibration system with multiple lamps. The calibration procedure for the TM reflective bands was updated in 2003 based on the best estimate at the time, using only one of the three lamps and a cross-calibration with Landsat-7 ETM+. Since then, the Saharan desert sites have been used to validate this calibration model. Problems of up to 13% in band 1 were found with the lamp-based model. Using the Saharan data, a new model was developed and implemented in the US processing system in April 2007. The TM thermal band was found to have a

  10. Absolute radiometric calibration of Landsat using a pseudo invariant calibration site

    USGS Publications Warehouse

    Helder, D.; Thome, K.J.; Mishra, N.; Chander, G.; Xiong, Xiaoxiong; Angal, A.; Choi, Tae-young

    2013-01-01

    Pseudo invariant calibration sites (PICS) have been used for on-orbit radiometric trending of optical satellite systems for more than 15 years. This approach to vicarious calibration has demonstrated a high degree of reliability and repeatability at the level of 1-3% depending on the site, spectral channel, and imaging geometries. A variety of sensors have used this approach for trending because it is broadly applicable and easy to implement. Models to describe the surface reflectance properties, as well as the intervening atmosphere have also been developed to improve the precision of the method. However, one limiting factor of using PICS is that an absolute calibration capability has not yet been fully developed. Because of this, PICS are primarily limited to providing only long term trending information for individual sensors or cross-calibration opportunities between two sensors. This paper builds an argument that PICS can be used more extensively for absolute calibration. To illustrate this, a simple empirical model is developed for the well-known Libya 4 PICS based on observations by Terra MODIS and EO-1 Hyperion. The model is validated by comparing model predicted top-of-atmosphere reflectance values to actual measurements made by the Landsat ETM+ sensor reflective bands. Following this, an outline is presented to develop a more comprehensive and accurate PICS absolute calibration model that can be Système international d'unités (SI) traceable. These initial concepts suggest that absolute calibration using PICS is possible on a broad scale and can lead to improved on-orbit calibration capabilities for optical satellite sensors.

  11. Omnidirectional Underwater Camera Design and Calibration

    PubMed Central

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David

    2015-01-01

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach. PMID:25774707

  12. Iodine-Containing Mass-Defect-Tuned Dendrimers for Use as Internal Mass Spectrometry Calibrants

    NASA Astrophysics Data System (ADS)

    Giesen, Joseph A.; Diament, Benjamin J.; Grayson, Scott M.

    2018-03-01

    Calibrants based on synthetic dendrimers have been recently proposed as a versatile alternative to peptides and proteins for both MALDI and ESI mass spectrometry calibration. Because of their modular synthetic platform, dendrimer calibrants are particularly amenable to tailoring for specific applications. Utilizing this versatility, a set of dendrimers has been designed as an internal calibrant with a tailored mass defect to differentiate them from the majority of natural peptide analytes. This was achieved by incorporating a tris-iodinated aromatic core as an initiator for the dendrimer synthesis, thereby affording multiple calibration points (m/z range 600-2300) with an optimized mass-defect offset relative to all peptides composed of the 20 most common proteinogenic amino acids.

  13. Self-Calibrating Pressure Transducer

    NASA Technical Reports Server (NTRS)

    Lueck, Dale E. (Inventor)

    2006-01-01

    A self-calibrating pressure transducer is disclosed. The device uses an embedded zirconia membrane which pumps a determined quantity of oxygen into the device. The associated pressure can be determined, and thus, the transducer pressure readings can be calibrated. The zirconia membrane obtains oxygen from the surrounding environment when possible. Otherwise, an oxygen reservoir or other source is utilized. In another embodiment, a reversible fuel cell assembly is used to pump oxygen and hydrogen into the system. Since a known amount of gas is pumped across the cell, the pressure produced can be determined, and thus, the device can be calibrated. An isolation valve system is used to allow the device to be calibrated in situ. Calibration is optionally automated so that calibration can be continuously monitored. The device is preferably a fully integrated MEMS device. Since the device can be calibrated without removing it from the process, reductions in costs and down time are realized.
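
    A worked example of the calibration principle: pumping a known quantity of oxygen into a known sealed volume fixes the reference pressure through the ideal gas law, P = nRT/V, against which the transducer reading can be checked. The gas quantity, dead volume, and indicated reading below are assumed numbers.

    ```python
    R = 8.314  # J/(mol*K)

    def calibration_pressure_pa(moles_o2, volume_m3, temperature_k):
        """Pressure produced by pumping a known quantity of gas into a sealed,
        known volume (ideal-gas approximation): P = nRT/V."""
        return moles_o2 * R * temperature_k / volume_m3

    # Example with assumed numbers: 2 micromoles of O2 pumped into a 0.5 cm^3 dead volume.
    p_ref = calibration_pressure_pa(moles_o2=2e-6, volume_m3=0.5e-6, temperature_k=300.0)
    p_indicated = 10_100.0   # what the transducer reports, Pa (made up)
    print(f"reference: {p_ref:.0f} Pa, indicated: {p_indicated:.0f} Pa, "
          f"correction factor: {p_ref / p_indicated:.4f}")
    ```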

  14. Accuracy and Calibration of High Explosive Thermodynamic Equations of State

    NASA Astrophysics Data System (ADS)

    Baker, Ernest L.; Capellos, Christos; Stiel, Leonard I.; Pincay, Jack

    2010-10-01

    The Jones-Wilkins-Lee-Baker (JWLB) equation of state (EOS) was developed to more accurately describe overdriven detonation while maintaining an accurate description of the expansion work output of high explosive products. The increased mathematical complexity of the JWLB high explosive equations of state provides improved accuracy for practical problems of interest. Larger numbers of parameters are often justified by improved physics descriptions, but they can also mean increased calibration complexity. A generalized extent-of-aluminum-reaction Jones-Wilkins-Lee (JWL)-based EOS was developed in order to more accurately describe the observed behavior of aluminized explosives detonation products expansion. A calibration method was developed to describe the unreacted, partially reacted, and completely reacted explosive using nonlinear optimization. A reasonable calibration of a generalized extent-of-aluminum-reaction JWLB EOS as a function of aluminum reaction fraction has not yet been achieved, due to the increased mathematical complexity of the JWLB form.

  15. A new calibration code for the JET polarimeter.

    PubMed

    Gelfusa, M; Murari, A; Gaudio, P; Boboc, A; Brombin, M; Orsitto, F P; Giovannozzi, E

    2010-05-01

    An equivalent model of the JET polarimeter is presented, which overcomes the drawbacks of previous versions of the fitting procedures used to provide calibrated results. First, the signal-processing electronics were simulated to confirm that they are still working within the original specifications. Then the effective optical path of both the vertical and lateral chords was implemented to produce the calibration curves. This principled approach to the model yields a unique procedure that can be applied to any manual calibration and remains valid until the following one. The optical model of the chords is then applied to derive the plasma measurements. The results are in good agreement with the estimates of the most advanced full-wave propagation code available and have been benchmarked against other diagnostics. The devised procedure has also proved to work properly for the most recent campaigns and high-current experiments.

  16. Co-calibrating quality-of-life scores from three pulmonary disorders: implications for comparative-effectiveness research.

    PubMed

    Rouse, M; Twiss, J; McKenna, S P

    2016-06-01

    Background Efficient use of health resources requires accurate outcome assessment. Disease-specific patient-reported outcome (PRO) measures are designed to be highly relevant to patients with a specific disease. They have advantages over generic PROs that lack relevance to patient groups and miss crucial impacts of illness. It is thought that disease-specific measurement cannot be used in comparative effectiveness research (CER). The present study provides further evidence of the value of disease-specific measures in making valid comparisons across diseases. Methods The Asthma Life Impact Scale (ALIS, 22 items), Living with Chronic Obstructive Pulmonary Disease (LCOPD, 22 items) scale, and Cambridge Pulmonary Hypertension Outcome Review (CAMPHOR, 25 items) were completed by 140, 162, and 91 patients, respectively. The three samples were analyzed for fit to the Rasch model, then combined into a scale consisting of 58 unique items and re-analyzed. Raw scores on the three measures were co-calibrated and a transformation table produced. Results The scales fit the Rasch model individually (ALIS χ² probability value (p-χ²) = 0.05; LCOPD p-χ² = 0.38; CAMPHOR p-χ² = 0.92). The combined data also fit the Rasch model (p-χ² = 0.22). There was no differential item functioning related to age, gender, or disease. The co-calibrated scales successfully distinguished between perceived severity groups (p < 0.001). Limitations The samples were drawn from different sources. For scales to be co-calibrated using a common item design, they must be based on the same theoretical construct, be unidimensional, and have overlapping items. Conclusions The results showed that it is possible to co-calibrate scores from disease-specific PRO measures. This will permit more accurate and sensitive outcome measurement to be incorporated into CER. The co-calibration of needs-based disease-specific measures allows the calculation of γ scores that can be

  17. A Genre-Specific Investigation of Video Game Engagement and Problem Play in the Early Life Course

    PubMed Central

    Ream, Geoffrey L.; Elliott, Luther C.; Dunlap, Eloise

    2013-01-01

    This study explored predictors of engagement with specific video game genres, and degree of problem play experienced by players of specific genres, during the early life course. Video game players ages 18–29 (n = 692) were recruited in and around video game retail outlets, arcades, conventions, and other video game related contexts in New York City. Participants completed a Computer-Assisted Personal Interview (CAPI) of contemporaneous demographic and personality measures and a Life-History Calendar (LHC) measuring video gaming, school/work engagement, and caffeine and sugar consumption for each year of life ages 6 - present. Findings were that likelihood of engagement with most genres rose during childhood, peaked at some point during the second decade of life, and declined through emerging adulthood. Cohort effects on engagement also emerged, which were probably attributable to changes in the availability and popularity of various genres over the 12-year age range of our participants. The relationship between age and problem play of most genres was either negative or non-significant. Sensation-seeking was the only consistent positive predictor of problem play. Relationships between other variables and engagement with and problem play of specific genres are discussed in detail. PMID:24688802

  18. A Genre-Specific Investigation of Video Game Engagement and Problem Play in the Early Life Course.

    PubMed

    Ream, Geoffrey L; Elliott, Luther C; Dunlap, Eloise

    2013-05-21

    This study explored predictors of engagement with specific video game genres, and degree of problem play experienced by players of specific genres, during the early life course. Video game players ages 18-29 (n = 692) were recruited in and around video game retail outlets, arcades, conventions, and other video game related contexts in New York City. Participants completed a Computer-Assisted Personal Interview (CAPI) of contemporaneous demographic and personality measures and a Life-History Calendar (LHC) measuring video gaming, school/work engagement, and caffeine and sugar consumption for each year of life ages 6 - present. Findings were that likelihood of engagement with most genres rose during childhood, peaked at some point during the second decade of life, and declined through emerging adulthood. Cohort effects on engagement also emerged, which were probably attributable to changes in the availability and popularity of various genres over the 12-year age range of our participants. The relationship between age and problem play of most genres was either negative or non-significant. Sensation-seeking was the only consistent positive predictor of problem play. Relationships between other variables and engagement with and problem play of specific genres are discussed in detail.

  19. Online geometrical calibration of a mobile C-arm using external sensors

    NASA Astrophysics Data System (ADS)

    Mitschke, Matthias M.; Navab, Nassir; Schuetz, Oliver

    2000-04-01

    3D tomographic reconstruction of high contrast objects such as contrast agent enhanced blood vessels or bones from x-ray images acquired by isocentric C-arm systems has recently gained interest. For tomographic reconstruction, a sequence of images is captured during the C-arm rotation around the patient and the precise projection geometry has to be determined for each image. This is a difficult task, as C-arms usually do not provide accurate information about their projection geometry. Standard methods propose the use of an x-ray calibration phantom and an offline calibration, where the motion of the C-arm is supposed to be reproducible between calibration and patient run. However, mobile C-arms usually do not have this desirable property. Therefore, an online recovery of projection geometry is necessary. Here, we study the use of external tracking systems such as Polaris or Optotrak from Northern Digital, Inc., for online calibration. In order to use the external tracking system for recovery of x-ray projection geometry, two unknown transformations have to be estimated: the relation between the x-ray imaging system and the marker plate of the tracking system, and the relation between the world and sensor coordinate systems. We describe our attempt to solve this calibration problem. Experimental results on anatomical data are presented and visually compared with the results of estimating the projection geometry with an x-ray calibration phantom.

  20. Maximum likelihood estimation in calibrating a stereo camera setup.

    PubMed

    Muijtjens, A M; Roos, J M; Arts, T; Hasman, A

    1999-02-01

    Motion and deformation of the cardiac wall may be measured by following the positions of implanted radiopaque markers in three dimensions, using two x-ray cameras simultaneously. Regularly, calibration of the position measurement system is obtained by registration of the images of a calibration object, containing 10-20 radiopaque markers at known positions. Unfortunately, an accidental change of the position of a camera after calibration requires complete recalibration. Alternatively, redundant information in the measured image positions of stereo pairs can be used for calibration. Thus, a separate calibration procedure can be avoided. In the current study a model is developed that describes the geometry of the camera setup by five dimensionless parameters. Maximum Likelihood (ML) estimates of these parameters were obtained in an error analysis. It is shown that the ML estimates can be found by application of a nonlinear least squares procedure. Compared to the standard unweighted least squares procedure, the ML method resulted in more accurate estimates without noticeable bias. The accuracy of the ML method was investigated in relation to the object aperture. The reconstruction problem appeared well conditioned as long as the object aperture is larger than 0.1 rad. The angle between the two viewing directions appeared to be the parameter that was most likely to cause major inaccuracies in the reconstruction of the 3-D positions of the markers. Hence, attempts to improve the robustness of the method should primarily focus on reduction of the error in this parameter.
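
    The estimation strategy described above reduces to a weighted nonlinear least-squares fit under Gaussian image noise. The sketch below (Python/SciPy) shows that reduction in generic form; the projection model, its parameters, and the noise level are user-supplied assumptions, not the paper's specific five-parameter formulation.

      import numpy as np
      from scipy.optimize import least_squares

      def ml_calibrate(project, params0, observed_xy, sigma_xy):
          """ML estimate of geometry parameters: minimize weighted reprojection residuals.
          `project(params)` must return an (N, 2) array of predicted image coordinates."""
          def residuals(params):
              return ((project(params) - observed_xy) / sigma_xy).ravel()
          return least_squares(residuals, params0, method="lm")

      # Usage: fit = ml_calibrate(my_projection_model, p0, measured_xy, 0.5); fit.x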

  1. Application of coordinate transform on ball plate calibration

    NASA Astrophysics Data System (ADS)

    Wei, Hengzheng; Wang, Weinong; Ren, Guoying; Pei, Limei

    2015-02-01

    For the ball plate calibration method with a coordinate measuring machine (CMM) equipped with a laser interferometer, it is essential to adjust the ball plate parallel to the direction of the laser beam, which is very time-consuming. To solve this problem, a method based on the coordinate transformation between the machine system and the object system is presented. With the coordinates of fixed points on the ball plate measured in both the object system and the machine system, the transformation matrix between the two coordinate systems is calculated. The laser interferometer measurement error due to the placement of the ball plate can then be corrected with this transformation matrix. Experimental results indicate that this method is consistent with the manual adjustment method while avoiding the complexity of ball plate adjustment. It can also be applied to ball beam calibration.
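
    The core step, estimating the transformation between the machine and object coordinate systems from a set of common points, can be done with the standard SVD (Kabsch) solution for a rigid transform; the sketch below is a generic implementation of that step, not the authors' specific code.

      import numpy as np

      def rigid_transform(points_obj, points_mach):
          """Least-squares R, t mapping object-system points to machine-system points.
          Both inputs are (N, 3) arrays of the same fixed points."""
          c_obj, c_mach = points_obj.mean(axis=0), points_mach.mean(axis=0)
          H = (points_obj - c_obj).T @ (points_mach - c_mach)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
          R = Vt.T @ D @ U.T
          t = c_mach - R @ c_obj
          return R, t  # machine = R @ object + t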

  2. SUMS calibration test report

    NASA Technical Reports Server (NTRS)

    Robertson, G.

    1982-01-01

    Calibration was performed on the shuttle upper atmosphere mass spectrometer (SUMS). The results of the calibration and the as-run test procedures are presented. The output data are described, and engineering data conversion factors, tables and curves, and calibration of instrument gauges are included. Static calibration results are given, including: instrument sensitivity versus external pressure for N2 and O2, data from each calibration scan, data plots for N2 and O2, sensitivity of SUMS at the inlet for N2 and O2, and the 14/28 ratio for nitrogen and the 16/32 ratio for oxygen.

  3. Calibration of X-Ray Observatories

    NASA Technical Reports Server (NTRS)

    Weisskopf, Martin C.; O'Dell, Stephen L.

    2011-01-01

    Accurate calibration of x-ray observatories has proved an elusive goal. Inaccuracies and inconsistencies amongst on-ground measurements, differences between on-ground and in-space performance, in-space performance changes, and the absence of cosmic calibration standards whose physics we truly understand have precluded absolute calibration better than several percent and relative spectral calibration better than a few percent. The philosophy "the model is the calibration" relies upon a complete high-fidelity model of performance and an accurate verification and calibration of this model. As high-resolution x-ray spectroscopy begins to play a more important role in astrophysics, additional issues in accurately calibrating at high spectral resolution become more evident. Here we review the challenges of accurately calibrating the absolute and relative response of x-ray observatories. On-ground x-ray testing by itself is unlikely to achieve a high-accuracy calibration of in-space performance, especially when the performance changes with time. Nonetheless, it remains an essential tool in verifying functionality and in characterizing and verifying the performance model. In the absence of verified cosmic calibration sources, we also discuss the notion of an artificial, in-space x-ray calibration standard.

  4. Evaluation of “Autotune” calibration against manual calibration of building energy models

    DOE PAGES

    Chaudhary, Gaurav; New, Joshua; Sanyal, Jibonananda; ...

    2016-08-26

    Our paper demonstrates the application of Autotune, a methodology aimed at automatically producing calibrated building energy models using measured data, in two case studies. In the first case, a building model is de-tuned by deliberately injecting faults into more than 60 parameters. This model was then calibrated using Autotune and its accuracy with respect to the original model was evaluated in terms of the industry-standard normalized mean bias error and coefficient of variation of root mean squared error metrics set forth in ASHRAE Guideline 14. In addition to whole-building energy consumption, outputs including lighting, plug load profiles, HVAC energy consumption, zone temperatures, and other variables were analyzed. In the second case, Autotune calibration is compared directly to experts’ manual calibration of an emulated-occupancy, full-size residential building with comparable calibration results in much less time. Lastly, our paper concludes with a discussion of the key strengths and weaknesses of auto-calibration approaches.

  5. Calibrated intercepts for solar radiometers used in remote sensor calibration

    NASA Technical Reports Server (NTRS)

    Gellman, David I.; Biggar, Stuart F.; Slater, Philip N.; Bruegge, Carol J.

    1991-01-01

    Calibrated solar radiometer intercepts allow spectral optical depths to be determined for days with intermittently clear skies. This is of particular importance on satellite sensor calibration days that are cloudy except at the time of image acquisition. This paper describes the calibration of four solar radiometers using the Langley-Bouguer technique for data collected on days with a clear, stable atmosphere. Intercepts are determined with an uncertainty of less than six percent, corresponding to a maximum uncertainty of 0.06 in optical depth. The spread of voltage intercepts calculated in this process is carried through three methods of radiometric calibration of satellite sensors to yield an uncertainty in radiance at the top of the atmosphere of less than one percent associated with the uncertainty in solar radiometer intercepts for a range of ground reflectances.

  6. An Enclosed Laser Calibration Standard

    NASA Astrophysics Data System (ADS)

    Adams, Thomas E.; Fecteau, M. L.

    1985-02-01

    We have designed, evaluated and calibrated an enclosed, safety-interlocked laser calibration standard for use in US Army Secondary Reference Calibration Laboratories. This Laser Test Set Calibrator (LTSC) represents the Army's first-generation field laser calibration standard. Twelve LTSC's are now being fielded world-wide. The main requirement on the LTSC is to provide calibration support for the Test Set (TS3620) which, in turn, is a GO/NO GO tester of the Hand-Held Laser Rangefinder (AN/GVS-5). However, we believe its design is flexible enough to accommodate the calibration of other laser test, measurement and diagnostic equipment (TMDE) provided that single-shot capability is adequate to perform the task. In this paper we describe the salient aspects and calibration requirements of the AN/GVS-5 Rangefinder and the Test Set which drove the basic LTSC design. Also, we detail our evaluation and calibration of the LTSC, in particular, the LTSC system standards. We conclude with a review of our error analysis from which uncertainties were assigned to the LTSC calibration functions.

  7. Effective Data-Driven Calibration for a Galvanometric Laser Scanning System Using Binocular Stereo Vision.

    PubMed

    Tu, Junchao; Zhang, Liyan

    2018-01-12

    A new solution to the problem of galvanometric laser scanning (GLS) system calibration is presented. Under the machine learning framework, we build a single-hidden-layer feedforward neural network (SLFN) to represent the GLS system, which takes the digital control signal at the drives of the GLS system as input and the space vector of the corresponding outgoing laser beam as output. The training data set is obtained with the aid of a moving mechanism and a binocular stereo system. The parameters of the SLFN are efficiently solved in closed form by using an extreme learning machine (ELM). By quantitatively analyzing the regression precision with respect to the number of hidden neurons in the SLFN, we demonstrate that the proper number of hidden neurons can be safely chosen from a broad interval to guarantee good generalization performance. Compared to traditional model-driven calibration, the proposed calibration method does not need a complex modeling process and is more accurate and stable. As the output of the network is the space vector of the outgoing laser beam, the method requires much less training time and provides a uniform solution to both laser projection and 3D reconstruction, in contrast with the existing data-driven calibration method, which only works for the laser triangulation problem. Calibration, projection, and 3D reconstruction experiments were conducted to test the proposed method, and good results were obtained.
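
    The closed-form training step named in the abstract (extreme learning machine for a single-hidden-layer network) is sketched below in generic form; the network size, activation, and input/output shapes are assumptions for illustration, not the authors' settings.

      import numpy as np

      def train_elm(X, Y, n_hidden=500, seed=0):
          """Random hidden layer, output weights solved in closed form by least squares.
          X: (N, d) control signals; Y: (N, 3) outgoing-beam space vectors."""
          rng = np.random.default_rng(seed)
          W = rng.standard_normal((X.shape[1], n_hidden))
          b = rng.standard_normal(n_hidden)
          H = np.tanh(X @ W + b)                        # hidden-layer activations
          beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # closed-form output weights
          return W, b, beta

      def predict_elm(X, W, b, beta):
          return np.tanh(X @ W + b) @ beta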

  8. Calibrating Wide Field Surveys

    NASA Astrophysics Data System (ADS)

    González Fernández, Carlos; Irwin, M.; Lewis, J.; González Solares, E.

    2017-09-01

    "In this talk I will review the strategies in CASU to calibrate wide field surveys, in particular applied to data taken with the VISTA telescope. These include traditional night-by-night calibrations along with the search for a global, coherent calibration of all the data once observations are finished. The difficulties of obtaining photometric accuracy of a few percent and a good absolute calibration will also be discussed."

  9. A calibration method based on virtual large planar target for cameras with large FOV

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Han, Yangyang; Nie, Hong; Ou, Qiaofeng; Xiong, Bangshu

    2018-02-01

    In order to obtain high precision in camera calibration, a target should be large enough to cover the whole field of view (FOV). For cameras with a large FOV, using a small target seriously reduces the precision of calibration, while a large target is difficult to make, carry, and employ. In order to solve this problem, a calibration method based on a virtual large planar target (VLPT), which is virtually constructed from multiple small targets (STs), is proposed for cameras with large FOV. In the VLPT-based calibration method, first, the positions and directions of the STs are changed several times to obtain a number of calibration images. Secondly, the VLPT of each calibration image is created by finding the virtual points corresponding to the feature points of the STs. Finally, the intrinsic and extrinsic parameters of the camera are calculated by using the VLPTs. Experimental results show that the proposed method not only achieves calibration precision similar to that obtained with a large target, but also has good stability over the whole measurement area. Thus, the difficulties of accurately calibrating cameras with large FOV can be effectively tackled by the proposed method with good operability.

  10. One-calibrant kinetic calibration for on-site water sampling with solid-phase microextraction.

    PubMed

    Ouyang, Gangfeng; Cui, Shufen; Qin, Zhipei; Pawliszyn, Janusz

    2009-07-15

    The existing solid-phase microextraction (SPME) kinetic calibration technique, which uses the desorption of preloaded standards to calibrate the extraction of the analytes, requires that the physicochemical properties of the standard be similar to those of the analyte, which has limited the application of the technique. In this study, a new method, termed the one-calibrant kinetic calibration technique, which can use the desorption of a single standard to calibrate all extracted analytes, was proposed. The theoretical considerations were validated by passive water sampling in the laboratory and rapid water sampling in the field. To mimic the variability of the environment, such as temperature, turbulence, and the concentration of the analytes, the flow-through system for the generation of standard aqueous polycyclic aromatic hydrocarbon (PAH) solutions was modified. The experimental results of the passive samplings in the flow-through system illustrated that the effect of the environmental variables was successfully compensated for by the kinetic calibration technique, and all extracted analytes could be calibrated through the desorption of a single calibrant. On-site water sampling with rotated SPME fibers also illustrated the feasibility of the new technique for rapid on-site sampling of hydrophobic organic pollutants in water. This technique will accelerate the application of the kinetic calibration method and will also be useful for other microextraction techniques.
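
    The baseline kinetic-calibration relation that the one-calibrant method builds on assumes isotropy of absorption and desorption, n/ne + Q/q0 = 1; the sketch below applies that baseline relation only, and does not reproduce the paper's extension by which a single calibrant corrects analytes with different properties.

      def equilibrium_amount(n_extracted, q_remaining, q_preloaded):
          """SPME kinetic calibration: ne = n / (1 - Q/q0) under isotropic kinetics."""
          desorbed_fraction = 1.0 - q_remaining / q_preloaded
          return n_extracted / desorbed_fraction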

  11. Geometric calibration of Colour and Stereo Surface Imaging System of ESA's Trace Gas Orbiter

    NASA Astrophysics Data System (ADS)

    Tulyakov, Stepan; Ivanov, Anton; Thomas, Nicolas; Roloff, Victoria; Pommerol, Antoine; Cremonese, Gabriele; Weigel, Thomas; Fleuret, Francois

    2018-01-01

    There are many geometric calibration methods for "standard" cameras. These methods, however, cannot be used for the calibration of telescopes with large focal lengths and complex off-axis optics. Moreover, specialized calibration methods for telescopes are scarce in the literature. We describe the calibration method that we developed for the Colour and Stereo Surface Imaging System (CaSSIS) telescope, on board the ExoMars Trace Gas Orbiter (TGO). Although our method is described in the context of CaSSIS, with camera-specific experiments, it is general and can be applied to other telescopes. We further encourage re-use of the proposed method by making our calibration code and data available on-line.

  12. Initial Radiometric Calibration of the AWiFS using Vicarious Calibration Techniques

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary; Thome, Kurtis; Aaron, David; Leigh, Larry

    2006-01-01

    NASA SSC maintains four ASD FieldSpec FR spectroradiometers, used as laboratory transfer radiometers and for ground surface reflectance measurements during V&V field collection activities. Radiometric calibration is based on a NIST-calibrated integrating sphere, which serves as a source of known spectral radiance. Spectral calibration is performed with laser and pen-lamp illumination of the integrating sphere. Environmental testing includes temperature stability tests performed in an environmental chamber.

  13. An investigation into force-moment calibration techniques applicable to a magnetic suspension and balance system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Eskins, Jonathan

    1988-01-01

    The problem of determining the forces and moments acting on a wind tunnel model suspended in a Magnetic Suspension and Balance System is addressed. Two calibration methods were investigated for three types of model cores, i.e., Alnico, Samarium-Cobalt, and a superconducting solenoid. Both methods involve calibrating the currents in the electromagnetic array against known forces and moments. The first is a static calibration method using calibration weights and a system of pulleys. The other method, dynamic calibration, involves oscillating the model and using its inertia to provide calibration forces and moments. Static calibration data, found to produce the most reliable results, is presented for three degrees of freedom at 0, 15, and -10 deg angle of attack. Theoretical calculations are hampered by the inability to represent iron-cored electromagnets. Dynamic calibrations, despite being quicker and easier to perform, are not as accurate as static calibrations. Data for dynamic calibrations at 0 and 15 deg is compared with the relevant static data acquired. Distortion of oscillation traces is cited as a major source of error in dynamic calibrations.

  14. Ecologically-focused Calibration of Hydrological Models for Environmental Flow Applications

    NASA Astrophysics Data System (ADS)

    Adams, S. K.; Bledsoe, B. P.

    2015-12-01

    Hydrologic alteration resulting from watershed urbanization is a common cause of aquatic ecosystem degradation. Developing environmental flow criteria for urbanizing watersheds requires quantitative flow-ecology relationships that describe biological responses to streamflow alteration. Ideally, gaged flow data are used to develop flow-ecology relationships; however, biological monitoring sites are frequently ungaged. For these ungaged locations, hydrologic models must be used to predict streamflow characteristics through calibration and testing at gaged sites, followed by extrapolation to ungaged sites. Physically-based modeling of rainfall-runoff response has frequently utilized "best overall fit" calibration criteria, such as the Nash-Sutcliffe Efficiency (NSE), that do not necessarily focus on specific aspects of the flow regime relevant to biota of interest. This study investigates the utility of employing flow characteristics known a priori to influence regional biological endpoints as "ecologically-focused" calibration criteria compared to traditional, "best overall fit" criteria. For this study, 19 continuous HEC-HMS 4.0 models were created in coastal southern California and calibrated to hourly USGS streamflow gages with nearby biological monitoring sites using one "best overall fit" and three "ecologically-focused" criteria: NSE, Richards-Baker Flashiness Index (RBI), percent of time when the flow is < 1 cfs (%<1), and a Combined Calibration (RBI and %<1). Calibrated models were compared using calibration accuracy, environmental flow metric reproducibility, and the strength of flow-ecology relationships. Results indicate that "ecologically-focused" criteria can be calibrated with high accuracy and may provide stronger flow-ecology relationships than "best overall fit" criteria, especially when multiple "ecologically-focused" criteria are used in concert, despite inabilities to accurately reproduce additional types of ecological flow metrics to which the
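
    Two of the "ecologically-focused" criteria named above have simple closed forms; the sketch below shows one common formulation of the Richards-Baker Flashiness Index and the percent-of-time-below-threshold metric (the study's exact implementations may differ).

      import numpy as np

      def richards_baker_flashiness(q):
          """RBI: summed absolute changes between successive flows divided by total flow."""
          q = np.asarray(q, dtype=float)
          return np.abs(np.diff(q)).sum() / q.sum()

      def pct_below(q, threshold=1.0):
          """Percent of time steps with flow below a threshold (e.g. 1 cfs)."""
          q = np.asarray(q, dtype=float)
          return 100.0 * (q < threshold).mean()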

  15. Comparison Between One-Point Calibration and Two-Point Calibration Approaches in a Continuous Glucose Monitoring Algorithm

    PubMed Central

    Mahmoudi, Zeinab; Johansen, Mette Dencker; Christiansen, Jens Sandahl

    2014-01-01

    Background: The purpose of this study was to investigate the effect of using a 1-point calibration approach instead of a 2-point calibration approach on the accuracy of a continuous glucose monitoring (CGM) algorithm. Method: A previously published real-time CGM algorithm was compared with its updated version, which used a 1-point calibration instead of a 2-point calibration. In addition, the contribution of the corrective intercept (CI) to the calibration performance was assessed. Finally, the sensor background current was estimated both in real time and retrospectively. The study was performed on 132 type 1 diabetes patients. Results: Replacing the 2-point calibration with the 1-point calibration improved the CGM accuracy, with the greatest improvement achieved in hypoglycemia (18.4% median absolute relative difference [MARD] in hypoglycemia for the 2-point calibration, and 12.1% MARD in hypoglycemia for the 1-point calibration). Using the 1-point calibration increased the percentage of sensor readings in zone A+B of the Clarke error grid analysis (EGA) in the full glycemic range, and also enhanced hypoglycemia sensitivity. Exclusion of CI from calibration reduced hypoglycemia accuracy, while slightly increasing euglycemia accuracy. Both real-time and retrospective estimation of the sensor background current suggest that the background current can be considered zero in the calibration of the SCGM1 sensor. Conclusions: The sensor readings calibrated with the 1-point calibration approach showed higher accuracy than those calibrated with the 2-point calibration approach. PMID:24876420
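
    As a minimal sketch of the difference between the two approaches: with the background current taken as zero (as the abstract suggests for this sensor), a 1-point calibration needs only one paired reference value to fix the sensitivity. The corrective intercept used by the published algorithm is not reproduced here.

      def one_point_sensitivity(raw_current, reference_glucose, background_current=0.0):
          """Sensor sensitivity from a single paired calibration point."""
          return (raw_current - background_current) / reference_glucose

      def calibrated_glucose(raw_current, sensitivity, background_current=0.0):
          """Convert a raw sensor current to a glucose estimate."""
          return (raw_current - background_current) / sensitivity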

  16. On constraining pilot point calibration with regularization in PEST

    USGS Publications Warehouse

    Fienen, M.N.; Muffels, C.T.; Hunt, R.J.

    2009-01-01

    Ground water model calibration has made great advances in recent years with practical tools such as PEST being instrumental for making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.
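
    The role Tikhonov regularization plays in constraining pilot points can be seen in a single regularized normal-equations step, sketched below; PEST's actual implementation, with its regularization control variables and weight adjustment, is considerably more elaborate than this illustration.

      import numpy as np

      def tikhonov_step(J, residual, L, weight):
          """Solve min ||J p - residual||^2 + weight * ||L p||^2 for the update p.
          For pilot points, L typically encodes preferred homogeneity
          (differences between neighbouring pilot-point values)."""
          A = J.T @ J + weight * (L.T @ L)
          b = J.T @ residual
          return np.linalg.solve(A, b)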

  17. Speech comprehension and emotional/behavioral problems in children with specific language impairment (SLI).

    PubMed

    Gregl, Ana; Kirigin, Marin; Bilać, Snjeiana; Sućeska Ligutić, Radojka; Jaksić, Nenad; Jakovljević, Miro

    2014-09-01

    This research aims to investigate differences in speech comprehension between children with specific language impairment (SLI) and their developmentally normal peers, and the relationship between speech comprehension and emotional/behavioral problems on Achenbach's Child Behavior Checklist (CBCL) and Caregiver-Teacher Report Form (C-TRF) according to the DSM-IV. The clinical sample comprised 97 preschool children with SLI, while the peer sample comprised 60 developmentally normal preschool children. Children with SLI had significant delays in speech comprehension and more emotional/behavioral problems than peers. In children with SLI, speech comprehension significantly correlated with scores on the Attention Deficit/Hyperactivity Problems (CBCL and C-TRF) and Pervasive Developmental Problems (CBCL) scales (p < 0.05). In the peer sample, speech comprehension significantly correlated with scores on the Affective Problems and Attention Deficit/Hyperactivity Problems (C-TRF) scales. Regression analysis showed that 12.8% of the variance in speech comprehension is explained by 5 CBCL variables, of which Attention Deficit/Hyperactivity (beta = -0.281) and Pervasive Developmental Problems (beta = -0.280) are statistically significant (p < 0.05). In the reduced regression model, Attention Deficit/Hyperactivity explains 7.3% of the variance in speech comprehension (beta = -0.270, p < 0.01). It is possible that, to a certain degree, the same neurodevelopmental process lies in the background of problems with speech comprehension, problems with attention and hyperactivity, and pervasive developmental problems. This study confirms the importance of triage for behavioral problems and attention training in the rehabilitation of children with SLI and children with normal language development who exhibit ADHD symptoms.

  18. Calibration method for spectroscopic systems

    DOEpatents

    Sandison, David R.

    1998-01-01

    Calibration spots of optically-characterized material placed in the field of view of a spectroscopic system allow calibration of the spectroscopic system. Response from the calibration spots is measured and used to calibrate for varying spectroscopic system operating parameters. The accurate calibration achieved allows quantitative spectroscopic analysis of responses taken at different times, different excitation conditions, and of different targets.

  19. Calibration method for spectroscopic systems

    DOEpatents

    Sandison, D.R.

    1998-11-17

    Calibration spots of optically-characterized material placed in the field of view of a spectroscopic system allow calibration of the spectroscopic system. Response from the calibration spots is measured and used to calibrate for varying spectroscopic system operating parameters. The accurate calibration achieved allows quantitative spectroscopic analysis of responses taken at different times, different excitation conditions, and of different targets. 3 figs.

  20. Coda Calibration Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Addair, Travis; Barno, Justin; Dodge, Doug

    CCT is a Java based application for calibrating 1D shear wave coda measurement models to observed data using a much smaller set of reference moment magnitudes (MWs) calculated from other means (waveform modeling, etc.). These calibrated measurement models can then be used in other tools to generate coda moment magnitude measurements, source spectra, estimated stress drop, and other useful measurements for any additional events and any new data collected in the calibrated region.

  1. Calibration Procedures on Oblique Camera Setups

    NASA Astrophysics Data System (ADS)

    Kemper, G.; Melykuti, B.; Yu, C.

    2016-06-01

    Besides the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires a precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibrations and other mechanical effects. The shown sensor (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and a 50 mm lens while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms that had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots of all 5 cameras and registered GPS/IMU data. This specific mission was designed with two different altitudes and additional cross lines at each flying height. The five images from each exposure position have no overlap, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfying number for the camera calibration. In a first step with the help of

  2. Balance Calibration – A Method for Assigning a Direct-Reading Uncertainty to an Electronic Balance.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mike Stears

    2010-07-01

    Paper Title: Balance Calibration – A method for assigning a direct-reading uncertainty to an electronic balance. Intended Audience: Those who calibrate or use electronic balances. Abstract: As a calibration facility, we provide on-site (at the customer’s location) calibrations of electronic balances for customers within our company. In our experience, most of our customers are not using their balance as a comparator, but simply putting an unknown quantity on the balance and reading the displayed mass value. Manufacturer’s specifications for balances typically include specifications such as readability, repeatability, linearity, and sensitivity temperature drift, but what does this all mean when the balance user simply reads the displayed mass value and accepts the reading as the true value? This paper discusses a method for assigning a direct-reading uncertainty to a balance based upon the observed calibration data and the environment where the balance is being used. The method requires input from the customer regarding the environment where the balance is used and encourages discussion with the customer regarding sources of uncertainty and possible means for improvement; the calibration process becomes an educational opportunity for the balance user as well as calibration personnel. This paper will cover the uncertainty analysis applied to the calibration weights used for the field calibration of balances; the uncertainty is calculated over the range of environmental conditions typically encountered in the field and the resulting range of air density. The temperature stability in the area of the balance is discussed with the customer and the temperature range over which the balance calibration is valid is decided upon; the decision is based upon the uncertainty needs of the customer and the desired rigor in monitoring by the customer. Once the environmental limitations are decided, the calibration is performed and the measurement data is entered into
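
    One common way to turn the observed calibration data and agreed environmental limits into a single direct-reading uncertainty is a root-sum-of-squares combination of the individual standard-uncertainty components, expanded with a coverage factor; the sketch below shows only that arithmetic, and the component list and values are illustrative, not taken from the paper.

      import math

      def direct_reading_uncertainty(components, k=2.0):
          """Root-sum-of-squares of independent standard uncertainties, expanded by k."""
          u_c = math.sqrt(sum(u * u for u in components))
          return k * u_c

      # e.g. readability, repeatability, calibration-weight uncertainty (grams):
      # direct_reading_uncertainty([0.05e-3, 0.08e-3, 0.03e-3])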

  3. MATE: Machine Learning for Adaptive Calibration Template Detection

    PubMed Central

    Donné, Simon; De Vylder, Jonas; Goossens, Bart; Philips, Wilfried

    2016-01-01

    The problem of camera calibration is two-fold. On the one hand, the parameters are estimated from known correspondences between the captured image and the real world. On the other, these correspondences themselves—typically in the form of chessboard corners—need to be found. Many distinct approaches for this feature template extraction are available, often of large computational and/or implementational complexity. We exploit the generalized nature of deep learning networks to detect checkerboard corners: our proposed method is a convolutional neural network (CNN) trained on a large set of example chessboard images, which generalizes several existing solutions. The network is trained explicitly against noisy inputs, as well as inputs with large degrees of lens distortion. The trained network that we evaluate is as accurate as existing techniques while offering improved execution time and increased adaptability to specific situations with little effort. The proposed method is not only robust against the types of degradation present in the training set (lens distortions, and large amounts of sensor noise), but also to perspective deformations, e.g., resulting from multi-camera set-ups. PMID:27827920

  4. Self calibrating autoTRAC

    NASA Technical Reports Server (NTRS)

    Everett, Louis J.

    1994-01-01

    The work reported here demonstrates how to automatically compute the position and attitude of a targeting reflective alignment concept (TRAC) camera relative to the robot end effector. In the robotics literature this is known as the sensor registration problem. The registration problem is important to solve if TRAC images need to be related to robot position. Previously, when TRAC operated on the end of a robot arm, the camera had to be precisely located at the correct orientation and position. If this location is in error, then the robot may not be able to grapple an object even though the TRAC sensor indicates it should. In addition, if the camera is significantly far from the alignment it is expected to be at, TRAC may give incorrect feedback for the control of the robot. A simple example is if the robot operator thinks the camera is right side up but the camera is actually upside down, the camera feedback will tell the operator to move in an incorrect direction. The automatic calibration algorithm requires the operator to translate and rotate the robot arbitrary amounts along (about) two coordinate directions. After the motion, the algorithm determines the transformation matrix from the robot end effector to the camera image plane. This report discusses the TRAC sensor registration problem.

  5. Tradeoffs among watershed model calibration targets for parameter estimation

    EPA Science Inventory

    Hydrologic models are commonly calibrated by optimizing a single objective function target to compare simulated and observed flows, although individual targets are influenced by specific flow modes. Nash-Sutcliffe efficiency (NSE) emphasizes flood peaks in evaluating simulation f...

  6. Development of a problem solving evaluation instrument; untangling of specific problem solving assets

    NASA Astrophysics Data System (ADS)

    Adams, Wendy Kristine

    The purpose of my research was to produce a problem solving evaluation tool for physics. To do this it was necessary to gain a thorough understanding of how students solve problems. Although physics educators highly value problem solving and have put extensive effort into understanding successful problem solving, there is currently no efficient way to evaluate problem solving skill. Attempts have been made in the past; however, knowledge of the principles required to solve the subject problem are so absolutely critical that they completely overshadow any other skills students may use when solving a problem. The work presented here is unique because the evaluation tool removes the requirement that the student already have a grasp of physics concepts. It is also unique because I picked a wide range of people and picked a wide range of tasks for evaluation. This is an important design feature that helps make things emerge more clearly. This dissertation includes an extensive literature review of problem solving in physics, math, education and cognitive science as well as descriptions of studies involving student use of interactive computer simulations, the design and validation of a beliefs about physics survey and finally the design of the problem solving evaluation tool. I have successfully developed and validated a problem solving evaluation tool that identifies 44 separate assets (skills) necessary for solving problems. Rigorous validation studies, including work with an independent interviewer, show these assets identified by this content-free evaluation tool are the same assets that students use to solve problems in mechanics and quantum mechanics. Understanding this set of component assets will help teachers and researchers address problem solving within the classroom.

  7. The Calibration Reference Data System

    NASA Astrophysics Data System (ADS)

    Greenfield, P.; Miller, T.

    2016-07-01

    We describe a software architecture and implementation for using rules to determine which calibration files are appropriate for calibrating a given observation. This new system, the Calibration Reference Data System (CRDS), replaces what had been previously used for the Hubble Space Telescope (HST) calibration pipelines, the Calibration Database System (CDBS). CRDS will be used for the James Webb Space Telescope (JWST) calibration pipelines, and is currently being used for HST calibration pipelines. CRDS can be easily generalized for use in similar applications that need a rules-based system for selecting the appropriate item for a given dataset; we give some examples of such generalizations that will likely be used for JWST. The core functionality of the Calibration Reference Data System is available under an Open Source license. CRDS is briefly contrasted with a sampling of other similar systems used at other observatories.
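
    The idea of rules-based selection of reference files can be illustrated in a few lines of Python; the rule format and file names below are hypothetical stand-ins, not the actual CRDS rules syntax.

      def select_reference(dataset_header, rules):
          """Return the reference file of the first rule whose criteria all match."""
          for criteria, reference_file in rules:
              if all(dataset_header.get(k) == v for k, v in criteria.items()):
                  return reference_file
          raise LookupError("no applicable reference file")

      # Hypothetical rules, more specific criteria listed first.
      rules = [({"INSTRUME": "WFC3", "FILTER": "F606W"}, "wfc3_f606w_flat.fits"),
               ({"INSTRUME": "WFC3"}, "wfc3_default_flat.fits")]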

  8. Calibration procedure for a laser triangulation scanner with uncertainty evaluation

    NASA Astrophysics Data System (ADS)

    Genta, Gianfranco; Minetola, Paolo; Barbato, Giulio

    2016-11-01

    Most low-cost 3D scanning devices that are nowadays available on the market are sold without a user calibration procedure to correct measurement errors related to changes in environmental conditions. In addition, there is no specific international standard defining a procedure to check the performance of a 3D scanner over time. This paper aims at detailing a thorough methodology to calibrate a 3D scanner and assess its measurement uncertainty. The proposed procedure is based on the use of a reference ball plate and applied to a triangulation laser scanner. Experimental results show that the metrological performance of the instrument can be greatly improved by the application of the calibration procedure, which corrects systematic errors and reduces the device's measurement uncertainty.

  9. A Photometric (griz) Metallicity Calibration for Cool Stars

    NASA Astrophysics Data System (ADS)

    West, Andrew A.; Davenport, James R. A.; Dhital, Saurav; Mann, Andrew; Massey, Angela P

    2014-06-01

    We present results from a study that uses wide pairs as tools for estimating and constraining the metal content of cool stars from their spectra and broad band colors. Specifically, we will present results that optimize the Mann et al. M dwarf metallicity calibrations (derived using wide binaries) for the optical regime covered by SDSS spectra. We will demonstrate the robustness of the new calibrations using a sample of wide, low-mass binaries for which both components have an SDSS spectrum. Using these new spectroscopic metallicity calibrations, we will present relations between the metallicities (from optical spectra) and the Sloan colors derived using more than 20,000 M dwarfs in the SDSS DR7 spectroscopic catalog. These relations have important ramifications for studies of Galactic chemical evolution, the search for exoplanets and subdwarfs, and are essential for surveys such as Pan-STARRS and LSST, which use griz photometry but have no spectroscopic component.
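
    Calibrations of this kind ultimately reduce to fitting a low-order relation between a color index and spectroscopic metallicity for a calibration sample; the sketch below shows that generic fit, where the polynomial degree and choice of color are assumptions for illustration, not the relations derived in the study.

      import numpy as np

      def fit_color_metallicity(color, feh, degree=2):
          """Fit [Fe/H] = f(color) to a calibration sample; returns a callable."""
          return np.poly1d(np.polyfit(color, feh, degree))

      # Usage: photometric_feh = fit_color_metallicity(g_minus_z, spec_feh)(new_g_minus_z)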

  10. GPI Calibrations

    NASA Astrophysics Data System (ADS)

    Rantakyrö, Fredrik T.

    2017-09-01

    "The Gemini Planet Imager requires a large set of Calibrations. These can be split into two major sets, one set associated with each observation and one set related to biweekly calibrations. The observation set is to optimize the correction of miscroshifts in the IFU spectra and the latter set is for correction of detector and instrument cosmetics."

  11. Calibration of stereo rigs based on the backward projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui; Zhao, Zixin

    2016-08-01

    High-accuracy 3D measurement based on a binocular vision system is heavily dependent on the accurate calibration of the two rigidly fixed cameras. In most traditional calibration methods, stereo parameters are iteratively optimized through the forward imaging process (FIP). However, the results can only guarantee minimal 2D pixel errors, not minimal 3D reconstruction errors. To address this problem, a simple method to calibrate a stereo rig based on the backward projection process (BPP) is proposed. The position of a spatial point can be determined separately from each camera by planar constraints provided by the planar pattern target. Then, combined with pre-defined spatial points, the intrinsic and extrinsic parameters of the stereo rig can be optimized by minimizing the total 3D errors of both left and right cameras. An extensive performance study of the method in the presence of image noise and lens distortions is implemented. Experiments conducted on synthetic and real data demonstrate the accuracy and robustness of the proposed method.

  12. Hybrid dynamic radioactive particle tracking (RPT) calibration technique for multiphase flow systems

    NASA Astrophysics Data System (ADS)

    Khane, Vaibhav; Al-Dahhan, Muthanna H.

    2017-04-01

    The radioactive particle tracking (RPT) technique has been utilized to measure three-dimensional hydrodynamic parameters for multiphase flow systems. An analytical solution to the inverse problem of the RPT technique, i.e. finding the instantaneous tracer positions based upon the instantaneous counts received in the detectors, is not possible. Therefore, a calibration to obtain a counts-distance map is needed. There are major shortcomings in the conventional RPT calibration method due to which it has limited applicability in practical applications. In this work, the design and development of a novel dynamic RPT calibration technique are carried out to overcome the shortcomings of the conventional RPT calibration method. The dynamic RPT calibration technique has been implemented around a test reactor 1 foot in diameter and 1 foot in height using Cobalt-60 as an isotopic tracer particle. Two sets of experiments have been carried out to test the capability of the novel dynamic RPT calibration. In the first set of experiments, a manual calibration apparatus was used to hold the tracer particle at known static locations. In the second set of experiments, the tracer particle was moved vertically downwards along a straight-line path in a controlled manner. The obtained reconstruction results for the tracer particle position were compared with the actual known position and the reconstruction errors were estimated. The obtained results revealed that the dynamic RPT calibration technique is capable of identifying tracer particle positions with a reconstruction error between 1 and 5.9 mm for the conditions studied, which could be improved depending on various factors outlined here.
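
    Once a counts-position map exists, a simple way to invert it is a grid search over candidate tracer positions; the sketch below illustrates that inversion only and is not the reconstruction algorithm used in the paper.

      import numpy as np

      def reconstruct_position(observed_counts, grid_positions, counts_map):
          """grid_positions: (n_grid, 3); counts_map: (n_grid, n_detectors) of expected
          counts from the calibration. Returns the best-matching grid position."""
          errs = ((counts_map - np.asarray(observed_counts)) ** 2).sum(axis=1)
          return grid_positions[np.argmin(errs)]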

  13. Integrating a Gravity Simulation and Groundwater Modeling on the Calibration of Specific Yield for Choshui Alluvial Fan

    NASA Astrophysics Data System (ADS)

    Chang, Liang Cheng; Tsai, Jui pin; Chen, Yu Wen; Way Hwang, Chein; Chung Cheng, Ching; Chiang, Chung Jung

    2014-05-01

    For sustainable management, accurate estimation of recharge can provide critical information. The accuracy of this estimation is highly related to the uncertainty of the specific yield (Sy). Because the Sy value is traditionally obtained by a multi-well pumping test, the available Sy values are usually limited due to the high installation cost. This information insufficiency may therefore cause high uncertainty in recharge estimation. Because gravity is a function of material mass and the inverse square of distance, gravity measurements can help to determine the mass variation of a shallow groundwater system. Thus, groundwater level observation data and gravity measurements are used for the calibration of Sy in a groundwater model. The calibration procedure includes four steps. First, gravity variations at three groundwater-monitoring wells, Si-jhou, Tu-ku and Ke-cuo, were observed in May, August and November 2012. To obtain the gravity caused by groundwater variation, this study filters out the noise from other sources, such as ocean tide and land subsidence, in the collected data. The refined data, with these noise sources removed, are called gravity residuals. Second, this study develops a groundwater model using MODFLOW 2005 to simulate the water mass variation of the groundwater system. Third, we use the Newton gravity integral to simulate the gravity variation caused by the simulated water mass variation during each of the observation periods. Fourth, the ratio of the gravity variation between the two data sets, i.e. the observed gravity residuals and the simulated gravities, is compared. The value of Sy is continuously modified until the gravity variation ratios of the two data sets are the same. The Sy value of Si-jhou is 0.216, which was obtained by the multi-well pumping test. This Sy value is assigned to the simulation model. The simulation results show that the simulated gravity can fit the observed gravity residual well without parameter calibration. This result indicates
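
    The Newton gravity integral mentioned in step three is often approximated as a sum of point-mass contributions from the model cells; the sketch below shows that approximation only, and the coordinate convention and cell discretization are assumptions for illustration.

      import numpy as np

      G = 6.674e-11  # m^3 kg^-1 s^-2

      def gravity_change(station_xyz, cell_xyz, cell_mass_change):
          """Vertical gravity change at a station from water-mass changes in model cells,
          with z taken positive downward so that mass below the station increases g."""
          d = np.asarray(cell_xyz) - np.asarray(station_xyz)   # (n_cells, 3) offsets
          r = np.linalg.norm(d, axis=1)
          return np.sum(G * np.asarray(cell_mass_change) * d[:, 2] / r ** 3)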

  14. A variable acceleration calibration system

    NASA Astrophysics Data System (ADS)

    Johnson, Thomas H.

    2011-12-01

    A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems are designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three component calibration experiments with an approximate applied load error on the order of 1% of the full scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable acceleration based system are shown to be potentially equivalent to current methods. The production quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long term research objectives include a demonstration of a six degree of freedom calibration, and a large capacity balance calibration.
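
    The centripetal part of the loading has a simple closed form, F = m * omega^2 * r, which the short sketch below evaluates; the mass, radius, and rotation rate are illustrative values, not the system's design parameters.

      import math

      def centripetal_load(mass_kg, rpm, radius_m):
          """Calibration force (N) from spinning a known mass at a known radius."""
          omega = rpm * 2.0 * math.pi / 60.0   # rad/s
          return mass_kg * omega ** 2 * radius_m

      # e.g. centripetal_load(1.0, 60.0, 0.5) gives roughly 19.7 N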

  15. The DFMS sensor of ROSINA onboard Rosetta: A computer-assisted approach to resolve mass calibration, flux calibration, and fragmentation issues

    NASA Astrophysics Data System (ADS)

    Dhooghe, Frederik; De Keyser, Johan; Altwegg, Kathrin; Calmonte, Ursina; Fuselier, Stephen; Hässig, Myrtha; Berthelier, Jean-Jacques; Mall, Urs; Gombosi, Tamas; Fiethe, Björn

    2014-05-01

    Rosetta will rendezvous with comet 67P/Churyumov-Gerasimenko in May 2014. The Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) instrument comprises three sensors: the pressure sensor (COPS) and two mass spectrometers (RTOF and DFMS). The double focusing mass spectrometer DFMS is optimized for mass resolution and consists of an ion source, a mass analyser and a detector package operated in analogue mode. The magnetic sector of the analyser provides the mass dispersion needed for use with the position-sensitive microchannel plate (MCP) detector. Ions that hit the MCP release electrons that are recorded digitally using a linear electron detector array with 512 pixels. Raw data for a given commanded mass are obtained as ADC counts as a function of pixel number. We have developed a computer-assisted approach to address the problem of calibrating such raw data. Mass calibration: Ion identification is based on their mass-over-charge (m/Z) ratio and requires an accurate correlation of pixel number and m/Z. The m/Z scale depends on the commanded mass and the magnetic field and can be described by an offset of the pixel associated with the commanded mass from the centre of the detector array and a scaling factor. Mass calibration is aided by the built-in gas calibration unit (GCU), which allows one to inject a known gas mixture into the instrument. In a first, fully automatic step of the mass calibration procedure, the calibration uses all GCU spectra and extracts information about the mass peak closest to the centre pixel, since those peaks can be identified unambiguously. This preliminary mass-calibration relation can then be applied to all spectra. Human-assisted identification of additional mass peaks further improves the mass calibration. Ion flux calibration: ADC counts per pixel are converted to ion counts per second using the overall gain, the individual pixel gain, and the total data accumulation time. DFMS can perform an internal scan to determine
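
    The mass-calibration step described above amounts to mapping detector pixel number to m/Z using an offset from the array centre and a scaling factor; the sketch below assumes a simple linear dispersion purely for illustration (the real DFMS dispersion relation is instrument-specific and not reproduced here).

      def pixel_to_moz(pixel, commanded_mass, pixel_offset, scale, center_pixel=255.5):
          """Illustrative pixel-to-m/Z mapping for a 512-pixel array."""
          return commanded_mass * (1.0 + scale * (pixel - center_pixel - pixel_offset))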

  16. Streamflow characteristics from modelled runoff time series: Importance of calibration criteria selection

    USGS Publications Warehouse

    Poole, Sandra; Vis, Marc; Knight, Rodney; Seibert, Jan

    2017-01-01

    Ecologically relevant streamflow characteristics (SFCs) of ungauged catchments are often estimated from simulated runoff of hydrologic models that were originally calibrated on gauged catchments. However, SFC estimates of the gauged donor catchments and subsequently the ungauged catchments can be substantially uncertain when models are calibrated using traditional approaches based on optimization of statistical performance metrics (e.g., Nash–Sutcliffe model efficiency). An improved calibration strategy for gauged catchments is therefore crucial to help reduce the uncertainties of estimated SFCs for ungauged catchments. The aim of this study was to improve SFC estimates from modeled runoff time series in gauged catchments by explicitly including one or several SFCs in the calibration process. Different types of objective functions were defined consisting of the Nash–Sutcliffe model efficiency, single SFCs, or combinations thereof. We calibrated a bucket-type runoff model (HBV – Hydrologiska Byråns Vattenavdelning – model) for 25 catchments in the Tennessee River basin and evaluated the proposed calibration approach on 13 ecologically relevant SFCs representing major flow regime components and different flow conditions. While the model generally tended to underestimate the tested SFCs related to mean and high-flow conditions, SFCs related to low flow were generally overestimated. The highest estimation accuracies were achieved by a SFC-specific model calibration. Estimates of SFCs not included in the calibration process were of similar quality when comparing a multi-SFC calibration approach to a traditional model efficiency calibration. For practical applications, this implies that SFCs should preferably be estimated from targeted runoff model calibration, and modeled estimates need to be carefully interpreted.
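
    The contrast between a "best overall fit" target and an SFC-aware target can be made concrete with a small sketch: Nash-Sutcliffe efficiency plus a blended objective that also penalizes the relative error of a chosen streamflow characteristic. The weighting and exact objective definitions used in the study may differ.

      import numpy as np

      def nash_sutcliffe(sim, obs):
          """NSE = 1 - SSE / variance of the observations."""
          sim, obs = np.asarray(sim, float), np.asarray(obs, float)
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def blended_objective(sim, obs, sfc_sim, sfc_obs, weight=0.5):
          """Lower is better: mix (1 - NSE) with the relative error of one SFC."""
          sfc_err = abs(sfc_sim - sfc_obs) / abs(sfc_obs)
          return weight * (1.0 - nash_sutcliffe(sim, obs)) + (1.0 - weight) * sfc_err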

  17. Psychophysical contrast calibration

    PubMed Central

    To, Long; Woods, Russell L; Goldstein, Robert B; Peli, Eli

    2013-01-01

    Electronic displays and computer systems offer numerous advantages for clinical vision testing. Laboratory and clinical measurements of various functions and in particular of (letter) contrast sensitivity require accurately calibrated display contrast. In the laboratory this is achieved using expensive light meters. We developed and evaluated a novel method that uses only psychophysical responses of a person with normal vision to calibrate the luminance contrast of displays for experimental and clinical applications. Our method combines psychophysical techniques (1) for detection (and thus elimination or reduction) of display saturating nonlinearities; (2) for luminance (gamma function) estimation and linearization without use of a photometer; and (3) to measure without a photometer the luminance ratios of the display’s three color channels that are used in a bit-stealing procedure to expand the luminance resolution of the display. Using a photometer we verified that the calibration achieved with this procedure is accurate for both LCD and CRT displays enabling testing of letter contrast sensitivity to 0.5%. Our visual calibration procedure enables clinical, internet and home implementation and calibration verification of electronic contrast testing. PMID:23643843
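
    The gamma value itself is estimated psychophysically in this method; the sketch below only illustrates the subsequent linearization step, building an inverse-gamma lookup table from an already-estimated gamma. The power-law display model and the function name are assumptions for illustration.

    ```python
    import numpy as np

    def linearizing_lut(gamma, levels=256):
        """Inverse-gamma lookup table: desired linear luminance fraction
        (0..1) -> digital drive value (0..levels-1), assuming the display
        follows L/L_max = (v / (levels - 1)) ** gamma."""
        target = np.linspace(0.0, 1.0, levels)            # desired linear output
        drive = (target ** (1.0 / gamma)) * (levels - 1)
        return np.round(drive).astype(int)

    lut = linearizing_lut(gamma=2.2)   # gamma taken from the psychophysical estimate
    # Displaying lut[v] instead of v makes luminance approximately linear in v.
    ```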

  18. Link calibration against receiver calibration: an assessment of GPS time transfer uncertainties

    NASA Astrophysics Data System (ADS)

    Rovera, G. D.; Torre, J.-M.; Sherwood, R.; Abgrall, M.; Courde, C.; Laas-Bourez, M.; Uhrich, P.

    2014-10-01

    We present a direct comparison between two different techniques for the relative calibration of time transfer between remote time scales when using the signals transmitted by the Global Positioning System (GPS). Relative calibration estimates the delay of equipment or the delay of a time transfer link with respect to reference equipment. It is based on the circulation of travelling GPS equipment between the stations in the network, against which the local equipment is measured. Two techniques can be considered: first, a station calibration by computation of the hardware delays of the local GPS equipment; second, the computation of a global hardware delay offset for the time transfer between the reference points of two remote time scales. The second technique is called a ‘link’ calibration, in contrast to the first, which is a ‘receiver’ calibration. The two techniques require different measurements on site, which change the uncertainty budgets, and we discuss this and related issues. We report on one calibration campaign organized during Autumn 2013 between Observatoire de Paris (OP), Paris, France, Observatoire de la Côte d'Azur (OCA), Calern, France, and NERC Space Geodesy Facility (SGF), Herstmonceux, United Kingdom. The travelling equipment comprised two GPS receivers of different types, along with the required signal generator and distribution amplifier, and one time interval counter. We show the different ways to compute uncertainty budgets, leading to improvement factors of 1.2 to 1.5 on the hardware delay uncertainties when comparing the relative link calibration to the relative receiver calibration.

  19. In pursuit of precision: the calibration of minds and machines in late nineteenth-century psychology.

    PubMed

    Benschop, R; Draaisma, D

    2000-01-01

    A prominent feature of late nineteenth-century psychology was its intense preoccupation with precision. Precision was at once an ideal and an argument: the quest for precision helped psychology to establish its status as a mature science, sharing a characteristic concern with the natural sciences. We will analyse how psychologists set out to produce precision in 'mental chronometry', the measurement of the duration of psychological processes. In his Leipzig laboratory, Wundt inaugurated an elaborate research programme on mental chronometry. We will look at the problem of calibration of experimental apparatus and will describe the intricate material, literary, and social technologies involved in the manufacture of precision. First, we shall discuss some of the technical problems involved in the measurement of ever shorter time-spans. Next, the Cattell-Berger experiments will help us to argue against the received view that all the precision went into the hardware, and practically none into the social organization of experimentation. Experimenters made deliberate efforts to bring themselves and their subjects under a regime of control and calibration similar to that which reigned over the experimental machinery. In Leipzig psychology, the particular blend of material and social technology resulted in a specific object of study: the generalized mind. We will then show that the distribution of precision in experimental psychology outside Leipzig demanded a concerted effort of instruments, texts, and people. It will appear that the forceful attempts to produce precision and uniformity had some rather paradoxical consequences.

  20. Fission foil detector calibrations with high energy protons

    NASA Technical Reports Server (NTRS)

    Benton, E. V.; Frank, A. L.

    1995-01-01

    Fission foil detectors (FFDs) are passive devices composed of heavy metal foils in contact with muscovite mica films. The heavy metal nuclei have significant cross sections for fission when irradiated with neutrons and protons. Each isotope is characterized by threshold energies for the fission reactions and particular energy-dependent cross sections. In the FFDs, fission fragments produced by the reactions are emitted from the foils and create latent particle tracks in the adjacent mica films. When the films are processed, surface tracks are formed that can be optically counted. The track densities are indications of the fluences and spectra of neutrons and/or protons. In the past, detection efficiencies have been calculated using dosimeters calibrated with low energy neutrons and published fission cross sections for neutrons and protons. The problem is that the addition of a large kinetic energy to the (n,nucleus) or (p,nucleus) reaction could increase the energies and ranges of emitted fission fragments and increase the detector sensitivity as compared with lower energy neutron calibrations. High energy calibrations are the only method of resolving the uncertainties in detector efficiencies. At high energies, either proton or neutron calibrations are sufficient since the cross section data show that the proton and neutron fission cross sections are approximately equal. High energy proton beams have been utilized (1.8 and 4.9 GeV, 80 and 140 MeV) for measuring the tracks of fission fragments emitted backward and forward.

  1. Exploring a Three-Level Model of Calibration Accuracy

    ERIC Educational Resources Information Center

    Schraw, Gregory; Kuch, Fred; Gutierrez, Antonio P.; Richmond, Aaron S.

    2014-01-01

    We compared 5 different statistics (i.e., G index, gamma, "d'", sensitivity, specificity) used in the social sciences and medical diagnosis literatures to assess calibration accuracy in order to examine the relationship among them and to explore whether one statistic provided a best fitting general measure of accuracy. College…

  2. A method of calibrating wind velocity sensors with a modified gas flow calibrator

    NASA Technical Reports Server (NTRS)

    Stump, H. P.

    1978-01-01

    A procedure was described for calibrating air velocity sensors in the exhaust flow of a gas flow calibrator. The average velocity in the test section located at the calibrator exhaust was verified from the mass flow rate accurately measured by the calibrator's precision sonic nozzles. Air at elevated pressures flowed through a series of screens, diameter changes, and flow straighteners, resulting in a smooth flow through the open test section. The modified system generated air velocities of 2 to 90 meters per second with an uncertainty of about two percent for speeds below 15 meters per second and four percent for the higher speeds. Wind tunnel data correlated well with that taken in the flow calibrator.

  3. New blackbody calibration source for low temperatures from -20 C to +350 C

    NASA Astrophysics Data System (ADS)

    Mester, Ulrich; Winter, Peter

    2001-03-01

    Calibration procedures for infrared thermometers and thermal imaging systems require radiation sources of precisely known radiation properties. Since an ideal Planckian radiator cannot be physically realized, the German Committee VDI/VDE-GMA FA 2.51, 'Applied Radiation Thermometry', agreed upon desirable specifications and limiting parameters for a blackbody calibration source with, among others, a temperature range from -20 °C to +350 °C, a spectral range from 2 to 15 microns, an emissivity greater than 0.999 and a useful source aperture of 60 mm. As a result of the subsequent design and development performed with the support of the laboratory '7.31 Thermometry' of the German national institute of natural and engineering sciences (PTB), the Mester ME20 Blackbody Calibration Source is presented. The ME20 meets or exceeds all of the specifications formulated by the VDI/VDE committee.

  4. Calibration of the ARID robot

    NASA Technical Reports Server (NTRS)

    Doty, Keith L

    1992-01-01

    The author has formulated a new, general model for specifying the kinematic properties of serial manipulators. The new model kinematic parameters do not suffer discontinuities when nominally parallel adjacent axes deviate from exact parallelism. From this new theory the author develops a first-order, lumped-parameter, calibration-model for the ARID manipulator. Next, the author develops a calibration methodology for the ARID based on visual and acoustic sensing. A sensor platform, consisting of a camera and four sonars attached to the ARID end frame, performs calibration measurements. A calibration measurement consists of processing one visual frame of an accurately placed calibration image and recording four acoustic range measurements. A minimum of two measurement protocols determine the kinematics calibration-model of the ARID for a particular region: assuming the joint displacements are accurately measured, the calibration surface is planar, and the kinematic parameters do not vary rapidly in the region. No theoretical or practical limitations appear to contra-indicate the feasibility of the calibration method developed here.

  5. Research on camera on orbit radial calibration based on black body and infrared calibration stars

    NASA Astrophysics Data System (ADS)

    Wang, YuDu; Su, XiaoFeng; Zhang, WanYing; Chen, FanSheng

    2018-05-01

    The response of a space camera is attenuated by the launch process and the space environment, so on-orbit radiometric calibration is necessary. In this paper, we propose a calibration method based on accurate infrared standard stars to increase infrared radiation measurement precision. Because stars can be treated as point targets, we use them as the radiometric calibration source and establish a Taylor expansion method and an energy extrapolation model based on the WISE and 2MASS catalogs. We then update the calibration results obtained from the black body. Finally, the calibration mechanism is designed and verified by an on-orbit test. The experimental results show that the irradiance extrapolation error is about 3% and the accuracy of the calibration method is about 10%, which satisfies the requirements of on-orbit calibration.

  6. A parallel calibration utility for WRF-Hydro on high performance computers

    NASA Astrophysics Data System (ADS)

    Wang, J.; Wang, C.; Kotamarthi, V. R.

    2017-12-01

    Successful modeling of complex hydrological processes comprises establishing an integrated hydrological model that simulates the hydrological processes in each water regime, calibrating and validating the model performance against observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters, such as those in the input table files (GENPARM.TBL, SOILPARM.TBL, and CHANPARM.TBL) and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against observations in order to obtain good modeling performance. A parameter calibration tool designed specifically for automated calibration and uncertainty estimation of the WRF-Hydro model can therefore provide significant convenience for the modeling community. In this study, we developed a customized tool using the parallel version of the model-independent parameter estimation and uncertainty analysis tool PEST, enabling it to run on HPC systems with the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates specifically for WRF-Hydro model calibration and uncertainty analysis. Here we present a flood case study that occurred in April 2013 over the Midwest. The sensitivity and uncertainties are analyzed using the customized PEST tool we developed.

  7. Space environment's effect on MODIS calibration

    NASA Astrophysics Data System (ADS)

    Dodd, J. L.; Wenny, B. N.; Chiang, K.; Xiong, X.

    2010-09-01

    The MODerate resolution Imaging Spectroradiometer (MODIS) flies on board the Earth Observing System (EOS) satellites Terra and Aqua in a sun-synchronous orbit that crosses the equator at 10:30 AM and 2:30 PM, respectively, at a low earth orbit (LEO) altitude of 705 km. Terra was launched on December 18, 1999, and Aqua was launched on May 4, 2002. As the MODIS instruments on board these satellites continue to operate beyond the design lifetime of six years, the cumulative effect of the space environment on MODIS and its calibration is of increasing importance. There are several aspects of the space environment that impact both the top of atmosphere (TOA) calibration and, therefore, the final science products of MODIS. The south Atlantic anomaly (SAA), spacecraft drag, extreme radiative and thermal environment, and the presence of orbital debris have the potential to significantly impact both MODIS and the spacecraft, either directly or indirectly, possibly resulting in data loss. Efforts from the Terra and Aqua Flight Operations Teams (FOT), the MODIS Instrument Operations Team (IOT), and the MODIS Characterization Support Team (MCST) prevent or minimize external impact on the TOA calibrated data. This paper discusses specific effects of the space environment on MODIS and how they are minimized.

  8. POLCAL - POLARIMETRIC RADAR CALIBRATION

    NASA Technical Reports Server (NTRS)

    Vanzyl, J.

    1994-01-01

    Calibration of polarimetric radar systems is a field of research in which great progress has been made over the last few years. POLCAL (Polarimetric Radar Calibration) is a software tool intended to assist in the calibration of Synthetic Aperture Radar (SAR) systems. In particular, POLCAL calibrates Stokes matrix format data produced as the standard product by the NASA/Jet Propulsion Laboratory (JPL) airborne imaging synthetic aperture radar (AIRSAR). POLCAL was designed to be used in conjunction with data collected by the NASA/JPL AIRSAR system. AIRSAR is a multifrequency (6 cm, 24 cm, and 68 cm wavelength), fully polarimetric SAR system which produces 12 x 12 km imagery at 10 m resolution. AIRSAR was designed as a testbed for NASA's Spaceborne Imaging Radar program. While the images produced after 1991 are thought to be calibrated (phase calibrated, cross-talk removed, channel imbalance removed, and absolutely calibrated), POLCAL can and should still be used to check the accuracy of the calibration and to correct it if necessary. Version 4.0 of POLCAL is an upgrade of POLCAL version 2.0 released to AIRSAR investigators in June 1990. New options in version 4.0 include automatic absolute calibration of 89/90 data, distributed target analysis, calibration of nearby scenes with calibration parameters from a scene with corner reflectors, altitude or roll angle corrections, and calibration of errors introduced by known topography. Many sources of error can lead to false conclusions about the nature of scatterers on the surface. Errors in the phase relationship between polarization channels result in incorrect synthesis of polarization states. Cross-talk, caused by imperfections in the radar antenna itself, can also lead to error. POLCAL reduces cross-talk and corrects phase calibration without the use of ground calibration equipment. Removing the antenna patterns during SAR processing also forms a very important part of the calibration of SAR data. Errors in the

  9. Challenges in the Development of a Self-Calibrating Network of Ceilometers.

    NASA Astrophysics Data System (ADS)

    Hervo, Maxime; Wagner, Frank; Mattis, Ina; Baars, Holger; Haefele, Alexander

    2015-04-01

    self-calibration method. For 3 CALIPSO overpasses the agreement was on average 20.0%. It is less accurate due to the large uncertainties of CALIPSO data close to the surface. In contrast to the Rayleigh method, the cloud calibration method uses the complete attenuation of the transmitter beam by a liquid water cloud to calculate the lidar constant (O'Connor 2004). The main challenge is the selection of accurately measured water clouds. These clouds should not contain any ice crystals and the detector should not be driven into saturation. The first problem is especially important during winter and the second problem is especially important for low clouds. Furthermore, the overlap function should be known accurately, especially when the water cloud is located at a distance where the overlap between laser beam and telescope field-of-view is still incomplete. In the E-PROFILE pilot network, the Rayleigh calibration is already performed automatically. This demonstration network makes available, in real time, calibrated ALC measurements from 8 instruments of 4 different types in 6 countries. In collaboration with TOPROF and 20 national weather services, E-PROFILE will provide, in 2017, near-real-time ALC measurements in most of Europe.

  10. Improved dewpoint-probe calibration

    NASA Technical Reports Server (NTRS)

    Stephenson, J. G.; Theodore, E. A.

    1978-01-01

    Relatively-simple pressure-control apparatus calibrates dewpoint probes considerably faster than conventional methods, with no loss of accuracy. Technique requires only pressure measurement at each calibration point and single absolute-humidity measurement at beginning of run. Several probes can be calibrated simultaneously and points can be checked above room temperature.

  11. Using Active Learning for Speeding up Calibration in Simulation Models.

    PubMed

    Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2016-07-01

    Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
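
    A minimal sketch of the general idea (not the UWBCS implementation): a surrogate classifier is trained on the parameter combinations evaluated so far and used to pick the next batch most likely to match the calibration targets. The pool layout, batch sizes, the hypothetical `simulate` and `matches_targets` callables, and the use of scikit-learn's MLPClassifier are assumptions; the sketch also assumes the seed set contains both matching and non-matching combinations.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def active_calibration(pool, simulate, matches_targets, n_seed=200, batch=50, budget=5000):
        """Evaluate only a fraction of `pool` (n_combos x n_params): a surrogate
        classifier proposes the combinations most likely to reproduce the targets."""
        rng = np.random.default_rng(0)
        idx = rng.choice(len(pool), n_seed, replace=False).tolist()
        evaluated = {i: matches_targets(simulate(pool[i])) for i in idx}  # seed evaluations

        while len(evaluated) < budget:
            X = pool[list(evaluated)]
            y = np.array(list(evaluated.values()), dtype=int)
            clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
            remaining = [i for i in range(len(pool)) if i not in evaluated]
            p_match = clf.predict_proba(pool[remaining])[:, 1]      # prob. of matching
            for i in np.array(remaining)[np.argsort(p_match)[-batch:]]:
                evaluated[int(i)] = matches_targets(simulate(pool[int(i)]))

        return [i for i, ok in evaluated.items() if ok]   # accepted combinations
    ```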

  12. Using Active Learning for Speeding up Calibration in Simulation Models

    PubMed Central

    Cevik, Mucahit; Ali Ergun, Mehmet; Stout, Natasha K.; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2015-01-01

    Background Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Methods Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We develop an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin Breast Cancer Simulation Model (UWBCS). Results In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Conclusion Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. PMID:26471190

  13. Re-calibration of the magnetic compass in hand-raised European robins (Erithacus rubecula)

    PubMed Central

    Alert, Bianca; Michalik, Andreas; Thiele, Nadine; Bottesch, Michael; Mouritsen, Henrik

    2015-01-01

    Migratory birds can use a variety of environmental cues for orientation. A primary calibration between the celestial and magnetic compasses seems to be fundamental prior to a bird’s first autumn migration. Releasing hand-raised or rescued young birds back into the wild might therefore be a problem because they might not have established a functional orientation system during their first calendar year. Here, we test whether hand-raised European robins that did not develop any functional compass before or during their first autumn migration could relearn to orient if they were exposed to natural celestial cues during the subsequent winter and spring. When tested in the geomagnetic field without access to celestial cues, these birds could orient in their species-specific spring migratory direction. In contrast, control birds that were deprived of any natural celestial cues throughout remained unable to orient. Our experiments suggest that European robins are still capable of establishing a functional orientation system after their first autumn. Although the external reference remains speculative, most likely, natural celestial cues enabled our birds to calibrate their magnetic compass. Our data suggest that avian compass systems are more flexible than previously believed and have implications for the release of hand-reared migratory birds. PMID:26388258

  14. Lyman alpha SMM/UVSP absolute calibration and geocoronal correction

    NASA Technical Reports Server (NTRS)

    Fontenla, Juan M.; Reichmann, Edwin J.

    1987-01-01

    Lyman alpha observations from the Ultraviolet Spectrometer Polarimeter (UVSP) instrument of the Solar Maximum Mission (SMM) spacecraft were analyzed and provide instrumental calibration details. Specific values of the instrument quantum efficiency, Lyman alpha absolute intensity, and correction for geocoronal absorption are presented.

  15. Specifying and calibrating instrumentations for wideband electronic power measurements. [in switching circuits

    NASA Technical Reports Server (NTRS)

    Lesco, D. J.; Weikle, D. H.

    1980-01-01

    The wideband electric power measurement related topics of electronic wattmeter calibration and specification are discussed. Tested calibration techniques are described in detail. Analytical methods used to determine the bandwidth requirements of instrumentation for switching circuit waveforms are presented and illustrated with examples from electric vehicle type applications. Analog multiplier wattmeters, digital wattmeters and calculating digital oscilloscopes are compared. The instrumentation characteristics which are critical to accurate wideband power measurement are described.

  16. Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata

    USGS Publications Warehouse

    Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.

    2012-01-01

    Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).
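
    The second step, fitting a Laplace-domain pole-zero-gain model to a measured response by nonlinear least squares, can be sketched as below. The single complex-conjugate pole pair and the variable names are assumptions for illustration, not the published procedure.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def pz_response(freqs, zeros, poles, gain):
        """Laplace-domain pole-zero model evaluated at s = i*2*pi*f."""
        s = 2j * np.pi * np.asarray(freqs)
        num = np.prod([s - z for z in zeros], axis=0) if len(zeros) else 1.0
        den = np.prod([s - p for p in poles], axis=0)
        return gain * num / den

    def fit_poles(freqs, measured, p0):
        """Fit a complex-conjugate pole pair (re, im) and gain to a measured
        complex response, minimizing stacked real/imaginary residuals."""
        def residuals(x):
            re, im, gain = x
            r = pz_response(freqs, [], [re + 1j * im, re - 1j * im], gain) - measured
            return np.concatenate([r.real, r.imag])
        return least_squares(residuals, p0).x

    # Usage (starting values hypothetical): fit_poles(freqs, response, p0=[-0.04, 0.2, 1.0])
    ```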

  17. SURFplus Model Calibration for PBX 9502

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph

    2017-12-06

    The SURFplus reactive burn model is calibrated for the TATB-based explosive PBX 9502 at three initial temperatures: hot (75 C), ambient (23 C) and cold (-55 C). The CJ state depends on the initial temperature due to the variation in the initial density and initial specific energy of the PBX reactants. For the reactants, a porosity model for full density TATB is used. This allows the initial PBX density to be set to its measured value even though the coefficient of thermal expansion for the TATB and the PBX differ. The PBX products EOS is taken as independent of the initial PBX state. The initial temperature also affects the sensitivity to shock initiation. The model rate parameters are calibrated to Pop plot data, the failure diameter, the limiting detonation speed just above the failure diameter, and curvature effect data for small curvature.

  18. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, estimated from the light-field image, and the metric object distance. These two methods are compared to a well-known curve fitting approach. Both model-based methods show significant advantages compared to the curve fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused and thus finding stereo correspondences is enhanced.
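
    The "Kalman-like" update of a virtual-depth hypothesis amounts to inverse-variance weighting of two estimates, as in this minimal sketch (function name and example numbers are assumptions):

    ```python
    def fuse_depth(d1, var1, d2, var2):
        """Kalman-like fusion of two depth hypotheses for the same pixel:
        inverse-variance weighted mean; the combined variance is always
        smaller than either input variance."""
        w1, w2 = 1.0 / var1, 1.0 / var2
        d = (w1 * d1 + w2 * d2) / (w1 + w2)
        return d, 1.0 / (w1 + w2)

    # Example: an uncertain estimate refined by a second micro-image observation
    d, var = fuse_depth(3.10, 0.20, 2.95, 0.05)
    ```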

  19. Progressive calibration and averaging for tandem mass spectrometry statistical confidence estimation: Why settle for a single decoy?

    PubMed Central

    Keich, Uri; Noble, William Stafford

    2017-01-01

    Estimating the false discovery rate (FDR) among a list of tandem mass spectrum identifications is mostly done through target-decoy competition (TDC). Here we offer two new methods that can use an arbitrarily small number of additional randomly drawn decoy databases to improve TDC. Specifically, “Partial Calibration” utilizes a new meta-scoring scheme that allows us to gradually benefit from the increase in the number of identifications calibration yields and “Averaged TDC” (a-TDC) reduces the liberal bias of TDC for small FDR values and its variability throughout. Combining a-TDC with “Progressive Calibration” (PC), which attempts to find the “right” number of decoys required for calibration we see substantial impact in real datasets: when analyzing the Plasmodium falciparum data it typically yields almost the entire 17% increase in discoveries that “full calibration” yields (at FDR level 0.05) using 60 times fewer decoys. Our methods are further validated using a novel realistic simulation scheme and importantly, they apply more generally to the problem of controlling the FDR among discoveries from searching an incomplete database. PMID:29326989
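
    For reference, the baseline these methods improve on, plain target-decoy competition, estimates the FDR among target identifications at a score threshold roughly as follows (a hedged sketch; variable names are assumptions):

    ```python
    import numpy as np

    def tdc_fdr(target_scores, decoy_scores, threshold):
        """Standard TDC estimate of the FDR among target PSMs scoring at or
        above `threshold`: (decoys + 1) / targets, capped at 1."""
        t = np.sum(np.asarray(target_scores) >= threshold)
        d = np.sum(np.asarray(decoy_scores) >= threshold)
        return min(1.0, (d + 1) / max(t, 1))
    ```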

  20. Calibration Issues and Operating System Requirements for Electron-Probe Microanalysis

    NASA Technical Reports Server (NTRS)

    Carpenter, P.

    2006-01-01

    Instrument purchase requirements and dialogue with manufacturers have established hardware parameters for alignment, stability, and reproducibility, which have helped improve the precision and accuracy of electron microprobe analysis (EPMA). The development of correction algorithms and the accurate solution to quantitative analysis problems requires the minimization of systematic errors and relies on internally consistent data sets. Improved hardware and computer systems have resulted in better automation of vacuum systems, stage and wavelength-dispersive spectrometer (WDS) mechanisms, and x-ray detector systems which have improved instrument stability and precision. Improved software now allows extended automated runs involving diverse setups and better integrates digital imaging and quantitative analysis. However, instrumental performance is not regularly maintained, as WDS are aligned and calibrated during installation but few laboratories appear to check and maintain this calibration. In particular, detector deadtime (DT) data is typically assumed rather than measured, due primarily to the difficulty and inconvenience of the measurement process. This is a source of fundamental systematic error in many microprobe laboratories and is unknown to the analyst, as the magnitude of DT correction is not listed in output by microprobe operating systems. The analyst must remain vigilant to deviations in instrumental alignment and calibration, and microprobe system software must conveniently verify the necessary parameters. Microanalysis of mission critical materials requires an ongoing demonstration of instrumental calibration. Possible approaches to improvements in instrument calibration, quality control, and accuracy will be discussed. Development of a set of core requirements based on discussions with users, researchers, and manufacturers can yield documents that improve and unify the methods by which instruments can be calibrated. These results can be used to
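
    As an illustration of why an assumed rather than measured deadtime matters, a simple non-paralyzable deadtime correction looks like this (the deadtime value and function name are assumptions, not the correction applied by any particular microprobe operating system):

    ```python
    def deadtime_correct(measured_cps, tau_s):
        """Non-paralyzable dead-time correction:
        true rate N = n / (1 - n * tau), with n the measured count rate."""
        return measured_cps / (1.0 - measured_cps * tau_s)

    # Example: 50 kcps measured with a 1.5 microsecond dead time -> ~54.1 kcps true rate
    true_cps = deadtime_correct(50_000.0, 1.5e-6)
    ```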

  1. Space Power Facility Reverberation Chamber Calibration Report

    NASA Technical Reports Server (NTRS)

    Lewis, Catherine C.; Dolesh, Robert J.; Garrett, Michael J.

    2014-01-01

    This document describes the process and results of calibrating the Space Environmental Test EMI Test facility at NASA Plum Brook Space Power Facility according to the specifications of IEC61000-4-21 for susceptibility testing from 100 MHz to 40 GHz. The chamber passed the field uniformity test, in both the empty and loaded conditions, making it the world's largest Reverberation Chamber.

  2. Technique for calibrating angular measurement devices when calibration standards are unavailable

    NASA Technical Reports Server (NTRS)

    Finley, Tom D.

    1991-01-01

    A calibration technique is proposed that will allow the calibration of certain angular measurement devices without requiring the use of absolute standard. The technique assumes that the device to be calibrated has deterministic bias errors. A comparison device must be available that meets the same requirements. The two devices are compared; one device is then rotated with respect to the other, and a second comparison is performed. If the data are reduced using the described technique, the individual errors of the two devices can be determined.

  3. Sensor-independent approach to the vicarious calibration of satellite ocean color radiometry.

    PubMed

    Franz, Bryan A; Bailey, Sean W; Werdell, P Jeremy; McClain, Charles R

    2007-08-01

    The retrieval of ocean color radiometry from space-based sensors requires on-orbit vicarious calibration to achieve the level of accuracy desired for quantitative oceanographic applications. The approach developed by the NASA Ocean Biology Processing Group (OBPG) adjusts the integrated instrument and atmospheric correction system to retrieve normalized water-leaving radiances that are in agreement with ground truth measurements. The method is independent of the satellite sensor or the source of the ground truth data, but it is specific to the atmospheric correction algorithm. The OBPG vicarious calibration approach is described in detail, and results are presented for the operational calibration of SeaWiFS using data from the Marine Optical Buoy (MOBY) and observations of clear-water sites in the South Pacific and southern Indian Ocean. It is shown that the vicarious calibration allows SeaWiFS to reproduce the MOBY radiances and achieve good agreement with radiometric and chlorophyll a measurements from independent in situ sources. We also find that the derived vicarious gains show no significant temporal or geometric dependencies, and that the mission-average calibration reaches stability after approximately 20-40 high-quality calibration samples. Finally, we demonstrate that the performance of the vicariously calibrated retrieval system is relatively insensitive to the assumptions inherent in our approach.
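
    At its core, a vicarious gain for one band is the mission-average ratio of the TOA radiance predicted from the in situ (e.g., MOBY) measurement, propagated forward through the atmospheric correction, to the radiance the sensor actually measured. A minimal sketch under that assumption; the numbers and names are illustrative, not the OBPG implementation.

    ```python
    import numpy as np

    def vicarious_gain(expected_toa, measured_toa):
        """Mission-average vicarious gain for one band: mean ratio of the TOA
        radiance predicted from ground truth (forward-modelled through the
        atmospheric correction) to the measured TOA radiance, with its
        standard error."""
        ratios = np.asarray(expected_toa) / np.asarray(measured_toa)
        return ratios.mean(), ratios.std(ddof=1) / np.sqrt(len(ratios))

    gain, gain_se = vicarious_gain([10.21, 10.18, 10.25], [10.30, 10.24, 10.33])
    ```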

  4. PV Calibration Insights | NREL

    Science.gov Websites

    The Photovoltaic (PV) Calibration Insights blog will provide updates on the testing done by the NREL PV Device Performance group. This NREL research group measures the performance of any and all technologies and sizes of PV devices from around the world.

  5. Emotional and Meta-Emotional Intelligence as Predictors of Adjustment Problems in Students with Specific Learning Disorders

    ERIC Educational Resources Information Center

    D'Amico, Antonella; Guastaferro, Teresa

    2017-01-01

    The purpose of this study was to analyse adjustment problems in a group of adolescents with a Specific Learning Disorder (SLD), examining to what extent they depend on the severity level of the learning disorder and/or on the individual's level of emotional intelligence. Adjustment problems, perceived severity levels of SLD, and emotional and…

  6. Machine-Learning Based Co-adaptive Calibration: A Perspective to Fight BCI Illiteracy

    NASA Astrophysics Data System (ADS)

    Vidaurre, Carmen; Sannelli, Claudia; Müller, Klaus-Robert; Blankertz, Benjamin

    "BCI illiteracy" is one of the biggest problems and challenges in BCI research. It means that BCI control cannot be achieved by a non-negligible number of subjects (estimated 20% to 25%). There are two main causes for BCI illiteracy in BCI users: either no SMR idle rhythm is observed over motor areas, or this idle rhythm is not attenuated during motor imagery, resulting in a classification performance lower than 70% (criterion level) already for offline calibration data. In a previous work of the same authors, the concept of machine learning based co-adaptive calibration was introduced. This new type of calibration provided substantially improved performance for a variety of users. Here, we use a similar approach and investigate to what extent co-adapting learning enables substantial BCI control for completely novice users and those who suffered from BCI illiteracy before.

  7. Calibrating the Spatiotemporal Root Density Distribution for Macroscopic Water Uptake Models Using Tikhonov Regularization

    NASA Astrophysics Data System (ADS)

    Li, N.; Yue, X. Y.

    2018-03-01

    Macroscopic root water uptake models proportional to a root density distribution function (RDDF) are most commonly used to model water uptake by plants. As the water uptake is difficult and labor intensive to measure, these models are often calibrated by inverse modeling. Most previous inversion studies assume RDDF to be constant with depth and time or dependent on only depth for simplification. However, under field conditions, this function varies with type of soil and root growth and thus changes with both depth and time. This study proposes an inverse method to calibrate both spatially and temporally varying RDDF in unsaturated water flow modeling. To overcome the difficulty imposed by the ill-posedness, the calibration is formulated as an optimization problem in the framework of the Tikhonov regularization theory, adding additional constraint to the objective function. Then the formulated nonlinear optimization problem is numerically solved with an efficient algorithm on the basis of the finite element method. The advantage of our method is that the inverse problem is translated into a Tikhonov regularization functional minimization problem and then solved based on the variational construction, which circumvents the computational complexity in calculating the sensitivity matrix involved in many derivative-based parameter estimation approaches (e.g., Levenberg-Marquardt optimization). Moreover, the proposed method features optimization of RDDF without any prior form, which is applicable to a more general root water uptake model. Numerical examples are performed to illustrate the applicability and effectiveness of the proposed method. Finally, discussions on the stability and extension of this method are presented.
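
    In generic notation (the symbols here are illustrative, not necessarily the paper's), the Tikhonov-regularized calibration problem has the form

    ```latex
    \min_{g}\; J(g) \;=\; \sum_{i}\left[\theta_i^{\mathrm{sim}}(g)-\theta_i^{\mathrm{obs}}\right]^{2}
    \;+\; \lambda\,\lVert L\,g \rVert^{2},
    ```

    where g is the discretized RDDF, the first term penalizes the misfit between simulated and observed states, L is a smoothing (e.g., second-difference) operator, and λ controls the regularization strength.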

  8. Optical Comb from a Whispering Gallery Mode Resonator for Spectroscopy and Astronomy Instruments Calibration

    NASA Technical Reports Server (NTRS)

    Strekalov, Dmitry V.; Yu, Nam; Thompson, Robert J.

    2012-01-01

    The most accurate astronomical data is available from space-based observations that are not impeded by the Earth's atmosphere. Such measurements may require spectral samples taken as long as decades apart, with the 1 cm/s velocity precision integrated over a broad wavelength range. This raises the requirements specifically for instruments used in astrophysics research missions -- their stringent wavelength resolution and accuracy must be maintained over years and possibly decades. Therefore, a stable and broadband optical calibration technique compatible with spaceflights becomes essential. The space-based spectroscopic instruments need to be calibrated in situ, which puts forth specific requirements to the calibration sources, mainly concerned with their mass, power consumption, and reliability. A high-precision, high-resolution reference wavelength comb source for astronomical and astrophysics spectroscopic observations has been developed that is deployable in space. The optical comb will be used for wavelength calibrations of spectrographs and will enable Doppler measurements to better than 10 cm/s precision, one hundred times better than the current state-of-the- art.

  9. Automatic camera to laser calibration for high accuracy mobile mapping systems using INS

    NASA Astrophysics Data System (ADS)

    Goeman, Werner; Douterloigne, Koen; Gautama, Sidharta

    2013-09-01

    A mobile mapping system (MMS) is a mobile multi-sensor platform developed by the geoinformation community to support the acquisition of huge amounts of geodata in the form of georeferenced high resolution images and dense laser clouds. Since data fusion and data integration techniques are increasingly able to combine the complementary strengths of different sensor types, the external calibration of a camera to a laser scanner is a common pre-requisite on today's mobile platforms. The methods of calibration, nevertheless, are often relatively poorly documented, are almost always time-consuming, demand expert knowledge and often require a carefully constructed calibration environment. A new methodology is studied and explored to provide a high quality external calibration for a pinhole camera to a laser scanner which is automatic, easy to perform, robust and foolproof. The method presented here, uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well studied absolute orientation problem needs to be solved. In many cases, the camera and laser sensor are calibrated in relation to the INS system. Therefore, the transformation from camera to laser contains the cumulated error of each sensor in relation to the INS. Here, the calibration of the camera is performed in relation to the laser frame using the time synchronization between the sensors for data association. In this study, the use of the inertial relative movement will be explored to collect more useful calibration data. This results in a better intersensor calibration allowing better coloring of the clouds and a more accurate depth mask for images, especially on the edges of objects in the scene.
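
    The absolute orientation step mentioned above can be solved in closed form with the SVD-based Kabsch/Umeyama construction; a minimal generic sketch (not the paper's pipeline), with names assumed:

    ```python
    import numpy as np

    def absolute_orientation(P, Q):
        """Rigid transform (R, t) minimizing ||R @ P_i + t - Q_i||^2 for
        corresponding 3-D points P (e.g., laser frame) and Q (e.g., camera
        frame), via the SVD-based Kabsch/Umeyama construction."""
        P, Q = np.asarray(P, float), np.asarray(Q, float)
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)                       # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = cQ - R @ cP
        return R, t
    ```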

  10. A game-theoretic approach for calibration of low-cost magnetometers under noise uncertainty

    NASA Astrophysics Data System (ADS)

    Siddharth, S.; Ali, A. S.; El-Sheimy, N.; Goodall, C. L.; Syed, Z. F.

    2012-02-01

    Pedestrian heading estimation is a fundamental challenge in Global Navigation Satellite System (GNSS)-denied environments. Additionally, the heading observability considerably degrades in low-speed mode of operation (e.g. walking), making this problem even more challenging. The goal of this work is to improve the heading solution when hand-held personal/portable devices, such as cell phones, are used for positioning and to improve the heading estimation in GNSS-denied signal environments. Most smart phones are now equipped with self-contained, low cost, small size and power-efficient sensors, such as magnetometers, gyroscopes and accelerometers. A magnetometer needs calibration before it can be properly employed for navigation purposes. Magnetometers play an important role in absolute heading estimation and are embedded in many smart phones. Before the users navigate with the phone, a calibration is invoked to ensure an improved signal quality. This signal is used later in the heading estimation. In most of the magnetometer-calibration approaches, the motion modes are seldom described to achieve a robust calibration. Also, suitable calibration approaches fail to discuss the stopping criteria for calibration. In this paper, the following three topics are discussed in detail that are important to achieve proper magnetometer-calibration results and in turn the most robust heading solution for the user while taking care of the device misalignment with respect to the user: (a) game-theoretic concepts to attain better filter parameter tuning and robustness in noise uncertainty, (b) best maneuvers with focus on 3D and 2D motion modes and related challenges and (c) investigation of the calibration termination criteria leveraging the calibration robustness and efficiency.
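
    A common simplified building block of magnetometer calibration (not the game-theoretic filter proposed here) is a least-squares sphere fit that removes the hard-iron offset; a sketch under that assumption, ignoring soft-iron distortion:

    ```python
    import numpy as np

    def hard_iron_offset(samples):
        """Least-squares sphere fit to raw 3-axis magnetometer samples (N x 3).
        Rewrites |m - c|^2 = r^2 as the linear system 2*m.c + (r^2 - |c|^2) = |m|^2."""
        m = np.asarray(samples, float)
        A = np.hstack([2.0 * m, np.ones((len(m), 1))])
        b = np.sum(m ** 2, axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        centre = sol[:3]
        radius = np.sqrt(sol[3] + centre @ centre)
        return centre, radius   # subtract `centre` from every raw sample
    ```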

  11. OLI Radiometric Calibration

    NASA Technical Reports Server (NTRS)

    Markham, Brian; Morfitt, Ron; Kvaran, Geir; Biggar, Stuart; Leisso, Nathan; Czapla-Myers, Jeff

    2011-01-01

    Goals: (1) Present an overview of the pre-launch radiance, reflectance & uniformity calibration of the Operational Land Imager (OLI) (1a) Transfer to orbit/heliostat (1b) Linearity (2) Discuss on-orbit plans for radiance, reflectance and uniformity calibration of the OLI

  12. Abundances of isotopologues and calibration of CO2 greenhouse gas measurements

    NASA Astrophysics Data System (ADS)

    Tans, Pieter P.; Crotwell, Andrew M.; Thoning, Kirk W.

    2017-07-01

    We have developed a method to calculate the fractional distribution of CO2 across all of its component isotopologues based on measured δ13C and δ18O values. The fractional distribution can be used with known total CO2 to calculate the amount of substance fraction (mole fraction) of each component isotopologue in air individually. The technique is applicable to any molecule where isotopologue-specific values are desired. We used it with a new CO2 calibration system to account for isotopic differences among the primary CO2 standards that define the WMO X2007 CO2-in-air calibration scale and between the primary standards and standards in subsequent levels of the calibration hierarchy. The new calibration system uses multiple laser spectroscopic techniques to measure mole fractions of the three major CO2 isotopologues (16O12C16O, 16O13C16O, and 16O12C18O) individually. The three measured values are then combined into total CO2 (accounting for the rare unmeasured isotopologues), δ13C, and δ18O values. The new calibration system significantly improves our ability to transfer the WMO CO2 calibration scale with low uncertainty through our role as the World Meteorological Organization Global Atmosphere Watch Central Calibration Laboratory for CO2. Our current estimates for reproducibility of the new calibration system are ±0.01 µmol mol-1 CO2, ±0.2 ‰ δ13C, and ±0.2 ‰ δ18O, all at 68 % confidence interval (CI).
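
    A hedged sketch of the delta-to-isotopologue conversion described above, neglecting 17O-bearing and doubly substituted species; the reference ratios are approximate standard values and the function name is an assumption.

    ```python
    def co2_isotopologue_fractions(delta13C, delta18O,
                                   R13_ref=0.011180, R18_ref=0.0020052):
        """Approximate fractional abundances of the three major CO2
        isotopologues from delta values (per mil), ignoring 17O-bearing
        and doubly substituted species."""
        R13 = R13_ref * (1.0 + delta13C / 1000.0)      # 13C/12C
        R18 = R18_ref * (1.0 + delta18O / 1000.0)      # 18O/16O
        x13 = R13 / (1.0 + R13)                        # atom fractions
        x12 = 1.0 - x13
        x18 = R18 / (1.0 + R18)
        x16 = 1.0 - x18
        f_626 = x12 * x16 ** 2          # 16O12C16O
        f_636 = x13 * x16 ** 2          # 16O13C16O
        f_628 = 2.0 * x12 * x16 * x18   # 16O12C18O (two equivalent O positions)
        return f_626, f_636, f_628

    # Multiplying each fraction by total CO2 gives the per-isotopologue mole fraction.
    f626, f636, f628 = co2_isotopologue_fractions(-8.5, 0.0)
    ```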

  13. Self-calibration techniques of underwater gamma ray spectrometers.

    PubMed

    Vlachos, D S

    2005-01-01

    In situ continuous monitoring of radioactivity in the water environment has many advantages compared to sampling and analysis techniques but a few shortcomings as well. Apart from the problems encountered in the assembly of the carrying autonomous systems, continuous operation sometimes alters the response function of the detectors. For example, the continuous operation of a photomultiplier tube results in a shift in the measured spectrum towards lower energies, thus making re-calibration of the detector necessary. In this work, it is shown that, when measuring radioactivity in seawater, a photo peak around 50 keV will always be present in the measured spectrum. This peak is stable, depends only on the scattering rates of photons in seawater and, when it is detectable, can be used in conjunction with other peaks (40K and/or 208Tl) as a reference peak for the continuous calibration of the detector.
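
    Once reference peaks such as the ~50 keV scattering peak, 40K (1461 keV) and 208Tl (2615 keV) have been located, the recalibration itself can be a simple polynomial fit of channel against energy, as in this sketch (the example channel positions are assumptions):

    ```python
    import numpy as np

    def energy_calibration(peak_channels, peak_energies_keV, order=1):
        """Fit E(channel) as a polynomial through the reference peaks.
        Returns a callable mapping channel number -> energy [keV]."""
        coeffs = np.polyfit(peak_channels, peak_energies_keV, order)
        return np.poly1d(coeffs)

    cal = energy_calibration([38, 1105, 1978], [50.0, 1461.0, 2615.0])
    energies = cal(np.arange(2048))   # energy axis for a 2048-channel spectrum
    ```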

  14. Calibration strategy for the COROT photometry

    NASA Astrophysics Data System (ADS)

    Buey, J.-T.; Auvergne, M.; Lapeyrere, V.; Boumier, P.

    2004-01-01

    Like Eddington, the COROT photometer will measure very small fluctuations on a large signal: the amplitudes of planetary transits and solar-like oscillations are expressed in ppm (parts per million). For such an instrument, specific calibration has to be done during the different phases of the development of the instrument and of all the subsystems. Two main things have to be taken into account: - the calibration during the study phase; - the calibration of the sub-systems and building of numerical models. The first item allows us to clearly understand all the perturbations (internal and external) and to identify their relative impacts on the expected signal (by numerical models including expected values of perturbations and sensitivity of the instrument). Methods and a schedule for the calibration process can also be introduced, in good agreement with the development plan of the instrument. The second item is more related to the measurement of the sensitivity of the instrument and all its sub-systems. As the instrument is designed to be as stable as possible, we have to mix measurements (with larger fluctuations of parameters than expected) and numerical models. Some typical reasons for that are: - there are many parameters to introduce in the measurements and results from some models (bread-board for example) may be extrapolated to the flight model; - larger fluctuations than expected are used (to measure precisely the sensitivity) and numerical models give the real value of noise with the expected fluctuations. - Characteristics of sub-systems may be measured and models used to give the sensitivity of the whole system built with them, as end-to-end measurements may be impossible (time, budget, physical limitations). Also, house-keeping measurements have to be set up on the critical parts of the sub-systems: measurements on thermal probes, power supply, pointing, etc. All these house-keeping data are used during ground calibration and during the flight, so that

  15. NASA Metrology and Calibration, 1980

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The proceedings of the fourth annual NASA Metrology and Calibration Workshop are presented. This workshop covered (1) review and assessment of NASA metrology and calibration activities by NASA Headquarters, (2) results of audits by the Office of Inspector General, (3) review of a proposed NASA Equipment Management System, (4) current and planned field center activities, (5) National Bureau of Standards (NBS) calibration services for NASA, (6) review of NBS's Precision Measurement and Test Equipment Project activities, (7) NASA instrument loan pool operations at two centers, (8) mobile cart calibration systems at two centers, (9) calibration intervals and decals, (10) NASA Calibration Capabilities Catalog, and (11) development of plans and objectives for FY 1981. Several papers in this proceedings are slide presentations only.

  16. Calibrated FMRI.

    PubMed

    Hoge, Richard D

    2012-08-15

    Functional magnetic resonance imaging with blood oxygenation level-dependent (BOLD) contrast has had a tremendous influence on human neuroscience in the last twenty years, providing a non-invasive means of mapping human brain function with often exquisite sensitivity and detail. However the BOLD method remains a largely qualitative approach. While the same can be said of anatomic MRI techniques, whose clinical and research impact has not been diminished in the slightest by the lack of a quantitative interpretation of their image intensity, the quantitative expression of BOLD responses as a percent of the baseline T2*- weighted signal has been viewed as necessary since the earliest days of fMRI. Calibrated MRI attempts to dissociate changes in oxygen metabolism from changes in blood flow and volume, the latter three quantities contributing jointly to determine the physiologically ambiguous percent BOLD change. This dissociation is typically performed using a "calibration" procedure in which subjects inhale a gas mixture containing small amounts of carbon dioxide or enriched oxygen to produce changes in blood flow and BOLD signal which can be measured under well-defined hemodynamic conditions. The outcome is a calibration parameter M which can then be substituted into an expression providing the fractional change in oxygen metabolism given changes in blood flow and BOLD signal during a task. The latest generation of calibrated MRI methods goes beyond fractional changes to provide absolute quantification of resting-state oxygen consumption in micromolar units, in addition to absolute measures of evoked metabolic response. This review discusses the history, challenges, and advances in calibrated MRI, from the personal perspective of the author. Copyright © 2012 Elsevier Inc. All rights reserved.
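
    The dissociation described above is commonly written in the Davis-model form shown below; the exponents quoted are typical literature values and are given here as assumptions, not values from this review.

    ```latex
    \frac{\Delta S}{S_{0}}
    \;=\;
    M\left[1-\left(\frac{\mathrm{CBF}}{\mathrm{CBF}_{0}}\right)^{\alpha-\beta}
    \left(\frac{\mathrm{CMRO}_{2}}{\mathrm{CMRO}_{2,0}}\right)^{\beta}\right],
    \qquad \alpha\approx 0.38,\ \beta\approx 1.5 .
    ```

    Once M has been measured with the gas-breathing calibration, inverting this expression during a task yields the fractional change in CMRO2 from the measured BOLD and CBF changes.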

  17. Early Evaluation of the VIIRS Calibration, Cloud Mask and Surface Reflectance Earth Data Records

    NASA Technical Reports Server (NTRS)

    Vermote, Eric; Justice, Chris; Csiszar, Ivan

    2014-01-01

    Surface reflectance is one of the key products from VIIRS and, as with MODIS, is used in developing several higher-order land products. The VIIRS Surface Reflectance (SR) Intermediate Product (IP) is based on the heritage MODIS Collection 5 product (Vermote, El Saleous, & Justice, 2002). The quality and character of surface reflectance depend on the accuracy of the VIIRS Cloud Mask (VCM), the aerosol algorithms and the adequate calibration of the sensor. The focus of this paper is the early evaluation of the VIIRS SR product in the context of the maturity of the operational processing system, the Interface Data Processing System (IDPS). After a brief introduction, the paper presents the calibration performance and the role of the surface reflectance in calibration monitoring. The analysis of the performance of the cloud mask with a focus on vegetation monitoring (no snow conditions) shows typical problems over bright surfaces and high elevation sites. Also discussed is the performance of the aerosol input used in the atmospheric correction and in particular the artifacts generated by the use of the Navy Aerosol Analysis and Prediction System. Early quantitative results of the performance of the SR product over the AERONET sites show that, with the few adjustments recommended, the accuracy is within the threshold specifications. The analysis of the adequacy of the SR product (Land PEATE adjusted version) in applications of societal benefits is then presented. We conclude with a set of recommendations to ensure consistency and continuity of the JPSS mission with the MODIS Land Climate Data Record.

  18. Automatic force balance calibration system

    NASA Technical Reports Server (NTRS)

    Ferris, Alice T. (Inventor)

    1995-01-01

    A system for automatically calibrating force balances is provided. The invention uses a reference balance aligned with the balance being calibrated to provide superior accuracy while minimizing the time required to complete the calibration. The reference balance and the test balance are rigidly attached together with closely aligned moment centers. Loads placed on the system equally affect each balance, and the differences in the readings of the two balances can be used to generate the calibration matrix for the test balance. Since the accuracy of the test calibration is determined by the accuracy of the reference balance, and current technology allows for reference balances to be calibrated to within ±0.05%, the entire system has an accuracy of ±0.2%. The entire apparatus is relatively small and can be mounted on a movable base for easy transport between test locations. The system can also accept a wide variety of reference balances, thus allowing calibration under diverse load and size requirements.

  19. Automatic force balance calibration system

    NASA Technical Reports Server (NTRS)

    Ferris, Alice T. (Inventor)

    1996-01-01

    A system for automatically calibrating force balances is provided. The invention uses a reference balance aligned with the balance being calibrated to provide superior accuracy while minimizing the time required to complete the calibration. The reference balance and the test balance are rigidly attached together with closely aligned moment centers. Loads placed on the system equally affect each balance, and the differences in the readings of the two balances can be used to generate the calibration matrix for the test balance. Since the accuracy of the test calibration is determined by the accuracy of the reference balance, and current technology allows for reference balances to be calibrated to within ±0.05%, the entire system has an accuracy of ±0.2%. The entire apparatus is relatively small and can be mounted on a movable base for easy transport between test locations. The system can also accept a wide variety of reference balances, thus allowing calibration under diverse load and size requirements.

  20. A Common Calibration Source Framework for Fully-Polarimetric and Interferometric Radiometers

    NASA Technical Reports Server (NTRS)

    Kim, Edward J.; Davis, Brynmor; Piepmeier, Jeff; Zukor, Dorothy J. (Technical Monitor)

    2000-01-01

    Two types of microwave radiometry--synthetic thinned array radiometry (STAR) and fully-polarimetric (FP) radiometry--have received increasing attention during the last several years. STAR radiometers offer a technological solution to achieving high spatial resolution imaging from orbit without requiring a filled aperture or a moving antenna, and FP radiometers measure extra polarization state information upon which entirely new or more robust geophysical retrieval algorithms can be based. Radiometer configurations used for both STAR and FP instruments share one fundamental feature that distinguishes them from more 'standard' radiometers, namely, they measure correlations between pairs of microwave signals. The calibration requirements for correlation radiometers are broader than those for standard radiometers. Quantities of interest include total powers, complex correlation coefficients, various offsets, and possible nonlinearities. A candidate for an ideal calibration source would be one that injects test signals with precisely controllable correlation coefficients and absolute powers simultaneously into a pair of receivers, permitting all of these calibration quantities to be measured. The complex nature of correlation radiometer calibration, coupled with certain inherent similarities between STAR and FP instruments, suggests significant leverage in addressing both problems together. Recognizing this, a project was recently begun at NASA Goddard Space Flight Center to develop a compact low-power subsystem for spaceflight STAR or FP receiver calibration. We present a common theoretical framework for the design of signals for a controlled correlation calibration source. A statistical model is described, along with temporal and spectral constraints on such signals. Finally, a method for realizing these signals is demonstrated using a Matlab-based implementation.
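    The paper's signal-design framework is not reproduced in the abstract; the core idea, injecting a pair of noise-like signals whose complex correlation coefficient is set by construction, can be sketched in a few lines of Python (a rough stand-in for the Matlab-based implementation mentioned above; all names and the mixing scheme are illustrative assumptions):

```python
import numpy as np

def correlated_noise_pair(n, rho, power=1.0, rng=None):
    """Generate two complex Gaussian noise signals x, y of equal power whose
    complex correlation coefficient E[conj(x) * y] / power is ~rho (|rho| <= 1)."""
    rng = np.random.default_rng(rng)
    a = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    b = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    x = a                                         # first injected test signal
    y = rho * a + np.sqrt(1 - abs(rho) ** 2) * b  # mixed to give correlation rho
    return np.sqrt(power) * x, np.sqrt(power) * y

x, y = correlated_noise_pair(1_000_000, 0.3 * np.exp(1j * np.pi / 4))
print(np.mean(np.conj(x) * y))   # close to the requested 0.3*exp(j*pi/4)
```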

  1. SDSS-IV/MaNGA: SPECTROPHOTOMETRIC CALIBRATION TECHNIQUE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Renbin; Sánchez-Gallego, José R.; Tremonti, Christy

    2016-01-15

    Mapping Nearby Galaxies at Apache Point Observatory (MaNGA), one of three core programs in the Sloan Digital Sky Survey-IV, is an integral-field spectroscopic survey of roughly 10,000 nearby galaxies. It employs dithered observations using 17 hexagonal bundles of 2″ fibers to obtain resolved spectroscopy over a wide wavelength range of 3600–10300 Å. To map the internal variations within each galaxy, we need to perform accurate spectral surface photometry, which is to calibrate the specific intensity at every spatial location sampled by each individual aperture element of the integral field unit. The calibration must correct only for the flux loss due to atmospheric throughput and the instrument response, but not for losses due to the finite geometry of the fiber aperture. This requires the use of standard star measurements to strictly separate these two flux loss factors (throughput versus geometry), a difficult challenge with standard single-fiber spectroscopy techniques due to various practical limitations. Therefore, we developed a technique for spectral surface photometry using multiple small fiber-bundles targeting standard stars simultaneously with galaxy observations. We discuss the principles of our approach and how they compare to previous efforts, and we demonstrate the precision and accuracy achieved. MaNGA's relative calibration between the wavelengths of Hα and Hβ has an rms of 1.7%, while that between [N ii] λ6583 and [O ii] λ3727 has an rms of 4.7%. Using extinction-corrected star formation rates and gas-phase metallicities as an illustration, this level of precision guarantees that flux calibration errors will be sub-dominant when estimating these quantities. The absolute calibration is better than 5% for more than 89% of MaNGA's wavelength range.

  2. Enhanced anatomical calibration in human movement analysis.

    PubMed

    Donati, Marco; Camomilla, Valentina; Vannozzi, Giuseppe; Cappozzo, Aurelio

    2007-07-01

    The representation of human movement requires knowledge of both movement and morphology of bony segments. The determination of subject-specific morphology data and their registration with movement data is accomplished through an anatomical calibration procedure (calibrated anatomical systems technique: CAST). This paper describes a novel approach to this calibration (UP-CAST) which, as compared with normally used techniques, achieves better repeatability, a shorter application time, and can be effectively performed by non-skilled examiners. Instead of the manual location of prominent bony anatomical landmarks, the description of which is affected by subjective interpretation, a large number of unlabelled points is acquired over prominent parts of the subject's bone, using a wand fitted with markers. A digital model of a template-bone is then submitted to isomorphic deformation and re-orientation to optimally match the above-mentioned points. The locations of anatomical landmarks are automatically made available. The UP-CAST was validated considering the femur as a paradigmatic case. Intra- and inter-examiner repeatability of the identification of anatomical landmarks was assessed both in vivo, using average weight subjects, and on bare bones. Accuracy of the identification was assessed using the anatomical landmark locations manually located on bare bones as reference. The repeatability of this method was markedly higher than that reported in the literature and obtained using the conventional palpation (ranges: 0.9-7.6 mm and 13.4-17.9, respectively). Accuracy resulted, on average, in a maximal error of 11 mm. Results suggest that the principal source of variability resides in the discrepancy between subject's and template bone morphology and not in the inter-examiner differences. The UP-CAST anatomical calibration could be considered a promising alternative to conventional calibration contributing to a more repeatable 3D human movement analysis.
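    The UP-CAST procedure deforms and re-orients a digital template bone to match the cloud of unlabelled probed points; the abstract does not give the algorithm, and the sketch below shows only a much-simplified rigid registration step (Kabsch/Procrustes alignment with assumed point correspondences), as one way the template-to-subject mapping of landmarks could be illustrated:

```python
import numpy as np

def rigid_align(probed, template):
    """Least-squares rigid transform (R, t) mapping `template` points onto
    `probed` points; both are (N, 3) arrays with assumed point-to-point
    correspondence.  The real UP-CAST approach also deforms the template
    bone; this sketch keeps only the rigid part."""
    p_mean, t_mean = probed.mean(axis=0), template.mean(axis=0)
    P, T = probed - p_mean, template - t_mean
    U, _, Vt = np.linalg.svd(T.T @ P)                        # Kabsch / Procrustes
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    R = (U @ D @ Vt).T
    t = p_mean - R @ t_mean
    return R, t

# Template anatomical landmarks can then be mapped into the subject's frame:
#   landmarks_subject = landmarks_template @ R.T + t
```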

  3. SDSS-IV/MaNGA: Spectrophotometric calibration technique

    DOE PAGES

    Yan, Renbin; Tremonti, Christy; Bershady, Matthew A.; ...

    2015-12-21

    Mapping Nearby Galaxies at Apache Point Observatory (MaNGA), one of three core programs in the Sloan Digital Sky Survey-IV, is an integral-field spectroscopic survey of roughly 10,000 nearby galaxies. It employs dithered observations using 17 hexagonal bundles of 2'' fibers to obtain resolved spectroscopy over a wide wavelength range of 3600-10300 Å. To map the internal variations within each galaxy, we need to perform accurate spectral surface photometry, which is to calibrate the specific intensity at every spatial location sampled by each individual aperture element of the integral field unit. The calibration must correct only for the flux loss due to atmospheric throughput and the instrument response, but not for losses due to the finite geometry of the fiber aperture. This then requires the use of standard star measurements to strictly separate these two flux loss factors (throughput versus geometry), a difficult challenge with standard single-fiber spectroscopy techniques due to various practical limitations. Thus, we developed a technique for spectral surface photometry using multiple small fiber-bundles targeting standard stars simultaneously with galaxy observations. We discuss the principles of our approach and how they compare to previous efforts, and we demonstrate the precision and accuracy achieved. MaNGA's relative calibration between the wavelengths of Hα and Hβ has an rms of 1.7%, while that between [N ii] λ6583 and [O ii] λ3727 has an rms of 4.7%. Using extinction-corrected star formation rates and gas-phase metallicities as an illustration, this level of precision guarantees that flux calibration errors will be sub-dominant when estimating these quantities. The absolute calibration is better than 5% for more than 89% of MaNGA's wavelength range.

  4. Calibration of an electronic nose for poultry farm

    NASA Astrophysics Data System (ADS)

    Abdullah, A. H.; Shukor, S. A.; Kamis, M. S.; Shakaff, A. Y. M.; Zakaria, A.; Rahim, N. A.; Mamduh, S. M.; Kamarudin, K.; Saad, F. S. A.; Masnan, M. J.; Mustafa, H.

    2017-03-01

    Malodour from poultry farms can cause air pollution and is therefore potentially dangerous to humans' and animals' health. This issue also poses a sustainability risk to the poultry industry due to objections from the local community. The aim of this paper is to develop and calibrate a cost-effective and efficient electronic nose for poultry farm air monitoring. The instrument's main components include a sensor chamber, an array of specific sensors, a microcontroller, signal-conditioning circuits and a wireless sensor network. The instrument was calibrated to allow classification of different concentrations of the main volatile compounds in poultry farm malodour. The outcome of the process also confirms the device's reliability prior to being used for poultry farm malodour assessment. Multivariate analysis (HCA and KNN) and Artificial Neural Network (ANN) pattern recognition techniques were used to process the acquired data. The results show that the instrument is able to classify the calibration samples with high accuracy using the ANN classification model. The findings verify that the instrument can be used as an effective poultry farm malodour monitor.
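    Neither the network architecture nor the feature set is given in the abstract; a generic sketch of the ANN classification step, using scikit-learn on hypothetical sensor-array data (shapes and class labels are placeholders, not the authors' data), might look like this:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical data: one row per sampling cycle, one column per gas sensor
# in the array; labels encode the odour-concentration class.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))        # 300 placeholder samples, 8 sensors
y = rng.integers(0, 3, size=300)     # 3 placeholder concentration classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,),
                                    max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("hold-out accuracy:", model.score(X_te, y_te))
```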

  5. Results from Source-Based and Detector-Based Calibrations of a CLARREO Calibration Demonstration System

    NASA Technical Reports Server (NTRS)

    Angal, Amit; Mccorkel, Joel; Thome, Kurt

    2016-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is formulated to determine long-term climate trends using SI-traceable measurements. The CLARREO mission will include instruments operating in the reflected solar (RS) wavelength region from 320 nm to 2300 nm. The Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS) is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO and facilitates testing and evaluation of calibration approaches. The basis of CLARREO and SOLARIS calibration is the Goddard Laser for Absolute Measurement of Response (GLAMR) that provides a radiance-based calibration at reflective solar wavelengths using continuously tunable lasers. SI-traceability is achieved via detector-based standards that, in GLAMR's case, are a set of NIST-calibrated transfer radiometers. A portable version of SOLARIS, Suitcase SOLARIS, is used to evaluate GLAMR's calibration accuracy. The calibration of Suitcase SOLARIS using GLAMR agrees with that obtained from source-based results of the Remote Sensing Group (RSG) at the University of Arizona to better than 5% (k = 2) in the 720-860 nm spectral range. The differences are within the uncertainties of the NIST-calibrated FEL lamp-based approach of RSG and give confidence that GLAMR is operating at 5% (k = 2) absolute uncertainties. Limitations of the Suitcase SOLARIS instrument are also discussed, and the next edition of the SOLARIS instrument (Suitcase SOLARIS-2) is expected to provide an improved mechanism to further assess GLAMR and CLARREO calibration approaches. (2016) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  6. Compact Radar Transceiver with Included Calibration

    NASA Technical Reports Server (NTRS)

    McLinden, Matthew; Rincon, Rafael

    2013-01-01

    The Digital Beamforming Synthetic Aperture Radar (DBSAR) is an eight-channel phased array radar system that employs solid-state radar transceivers, a microstrip patch antenna, and a reconfigurable waveform generator and processor unit. The original DBSAR transceiver design utilizes connectorized electronic components that tend to be physically large and heavy. To achieve increased functionality in a smaller volume, PCB (printed circuit board) transceivers were designed to replace the large connectorized transceivers. One of the most challenging problems in designing the transceivers in a PCB format was achieving proper performance in the calibration path. For a radar loop-back calibration path, a portion of the transmit signal is coupled out of the antenna feed and fed back into the receiver. This is achieved using passive components for stability and repeatability. Some signal also leaks through the receive path. As these two signal paths are correlated via an unpredictable phase, the leakage through the receive path during transmit must be 30 dB below the calibration path. For DBSAR's design, this requirement called for 100-dB isolation in the receiver path during transmit. A total of 16 solid-state L-band transceivers on a PCB format were designed. The transceivers include frequency conversion stages, T/R switching, and a calibration path capable of measuring the transmit power-receiver gain product during transmit for pulse-by-pulse calibration or matched filtering. In particular, this calibration path achieves 100-dB isolation between the transmitted signal and the low-noise amplifier through the use of a switching network and a section of physical walls achieving attenuation of radiated leakage. The transceivers were designed in microstrip PCBs with lumped elements and individually packaged components for compactness. Each transceiver was designed on a single PCB with a custom enclosure providing interior walls and compartments to isolate transceiver

  7. Stepwise Regression Analysis of MDOE Balance Calibration Data Acquired at DNW

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Philipsen, Iwan

    2007-01-01

    This paper reports a comparison of two experiment design methods applied in the calibration of a strain-gage balance. One features a 734-point test matrix in which loads are varied systematically according to a method commonly applied in aerospace research and known in the literature of experiment design as One Factor At a Time (OFAT) testing. Two variations of an alternative experiment design were also executed on the same balance, each with different features of an MDOE experiment design. The Modern Design of Experiments (MDOE) is an integrated process of experiment design, execution, and analysis applied at NASA's Langley Research Center to achieve significant reductions in cycle time, direct operating cost, and experimental uncertainty in aerospace research generally and in balance calibration experiments specifically. Personnel in the Instrumentation and Controls Department of the German Dutch Wind Tunnels (DNW) have applied MDOE methods to evaluate them in the calibration of a balance using an automated calibration machine. The data have been sent to Langley Research Center for analysis and comparison. This paper reports key findings from this analysis. The chief result is that a 100-point calibration exploiting MDOE principles delivered quality comparable to a 700+ point OFAT calibration with significantly reduced cycle time and attendant savings in direct and indirect costs. While the DNW test matrices implemented key MDOE principles and produced excellent results, additional MDOE concepts implemented in balance calibrations at Langley Research Center are also identified and described.
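    The regression machinery itself is standard: given applied loads (here taken from the reference data) and gauge outputs, a calibration matrix can be fitted by least squares. The sketch below is a generic linear fit, not the stepwise-regression analysis of the paper, and omits the higher-order terms a real balance calibration would include:

```python
import numpy as np

def fit_calibration_matrix(loads, outputs):
    """Least-squares linear calibration  loads ~ c0 + outputs @ C.
    `loads` is (n_points, n_components) of applied loads and `outputs` is
    (n_points, n_gauges) of bridge readings from the balance under test.
    Returns intercepts c0 and matrix C."""
    A = np.hstack([np.ones((outputs.shape[0], 1)), outputs])
    coef, *_ = np.linalg.lstsq(A, loads, rcond=None)
    return coef[0], coef[1:]

# Usage: predicted_loads = c0 + readings @ C
```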

  8. 40 CFR 89.313 - Initial calibration of analyzers.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the HFID analyzer shall be optimized in order to meet the specifications in § 89.319(b)(2). (c) Zero... analyzers shall be set at zero. (2) Introduce the appropriate calibration gases to the analyzers and the values recorded. The same gas flow rates shall be used as when sampling exhaust. (d) Rechecking of zero...

  9. 40 CFR 89.313 - Initial calibration of analyzers.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the HFID analyzer shall be optimized in order to meet the specifications in § 89.319(b)(2). (c) Zero... analyzers shall be set at zero. (2) Introduce the appropriate calibration gases to the analyzers and the values recorded. The same gas flow rates shall be used as when sampling exhaust. (d) Rechecking of zero...

  10. 40 CFR 89.313 - Initial calibration of analyzers.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the HFID analyzer shall be optimized in order to meet the specifications in § 89.319(b)(2). (c) Zero... analyzers shall be set at zero. (2) Introduce the appropriate calibration gases to the analyzers and the values recorded. The same gas flow rates shall be used as when sampling exhaust. (d) Rechecking of zero...

  11. 40 CFR 89.313 - Initial calibration of analyzers.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the HFID analyzer shall be optimized in order to meet the specifications in § 89.319(b)(2). (c) Zero... analyzers shall be set at zero. (2) Introduce the appropriate calibration gases to the analyzers and the values recorded. The same gas flow rates shall be used as when sampling exhaust. (d) Rechecking of zero...

  12. 40 CFR 89.313 - Initial calibration of analyzers.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the HFID analyzer shall be optimized in order to meet the specifications in § 89.319(b)(2). (c) Zero... analyzers shall be set at zero. (2) Introduce the appropriate calibration gases to the analyzers and the values recorded. The same gas flow rates shall be used as when sampling exhaust. (d) Rechecking of zero...

  13. Improvements of VIIRS and MODIS Solar Diffuser and Lunar Calibration

    NASA Technical Reports Server (NTRS)

    Xiong, Xiaoxiong; Butler, James J.; Lei, Ning; Sun, Junqiang; Fulbright, Jon; Wang, Zhipeng; McIntire, Jeff; Angal, Amit Avinash

    2013-01-01

    Both VIIRS and MODIS instruments use solar diffuser (SD) and lunar observations to calibrate their reflective solar bands (RSB). A solar diffuser stability monitor (SDSM) is used to track the SD on-orbit degradation. On-orbit observations have shown similar wavelength-dependent SD degradation (larger at shorter VIS wavelengths) and SDSM detector response degradation (larger at longer NIR wavelengths) for both VIIRS and MODIS instruments. In general, the MODIS scan mirror has experienced more degradation in the VIS spectral region whereas the VIIRS rotating telescope assembly (RTA) mirrors have seen more degradation in the NIR and SWIR spectral region. Because of this wavelength dependent mirror degradation, the sensor's relative spectral response (RSR) needs to be modulated. Due to differences between the solar and lunar spectral irradiance, the modulated RSR could have different effects on the SD and lunar calibration. In this paper, we identify various factors that should be considered for the improvements of VIIRS and MODIS solar and lunar calibration and examine their potential impact. Specifically, we will characterize and assess the calibration impact due to SD and SDSM attenuation screen transmission (uncertainty), SD BRF uncertainty and on-orbit degradation, SDSM detector response degradation, and modulated RSR resulting from the sensor's optics degradation. Also illustrated and discussed in this paper are the calibration strategies implemented in the VIIRS and MODIS SD and lunar calibrations and efforts that could be made for future improvements.

  14. Reversed inverse regression for the univariate linear calibration and its statistical properties derived using a new methodology

    NASA Astrophysics Data System (ADS)

    Kang, Pilsang; Koo, Changhoi; Roh, Hokyu

    2017-11-01

    Since simple linear regression theory was established at the beginning of the 1900s, it has been used in a variety of fields. Unfortunately, it cannot be used directly for calibration. In practical calibrations, the observed measurements (the inputs) are subject to errors, and hence they vary, thus violating the assumption that the inputs are fixed. Therefore, in the case of calibration, the regression line fitted using the method of least squares is not consistent with the statistical properties of simple linear regression as already established based on this assumption. To resolve this problem, "classical regression" and "inverse regression" have been proposed. However, they do not completely resolve the problem. As a fundamental solution, we introduce "reversed inverse regression" along with a new methodology for deriving its statistical properties. In this study, the statistical properties of this regression are derived using the "error propagation rule" and the "method of simultaneous error equations" and are compared with those of the existing regression approaches. The accuracy of the statistical properties thus derived is investigated in a simulation study. We conclude that the newly proposed regression and methodology constitute the complete regression approach for univariate linear calibrations.
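    For context, the two existing approaches the paper contrasts, classical and inverse regression, reduce to a few lines each; the sketch below shows these standard estimators only, not the proposed reversed inverse regression (variable names are mine):

```python
import numpy as np

def classical_estimate(x_std, y_obs, y_new):
    """Classical calibration: regress y on x (y = a + b*x) using the standard
    values x_std, then invert the fitted line to predict x for a new y."""
    b, a = np.polyfit(x_std, y_obs, 1)
    return (y_new - a) / b

def inverse_estimate(x_std, y_obs, y_new):
    """Inverse calibration: regress x directly on y and evaluate at y_new."""
    d, c = np.polyfit(y_obs, x_std, 1)
    return c + d * y_new
```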

  15. Design of experiments and data analysis challenges in calibration for forensics applications

    DOE PAGES

    Anderson-Cook, Christine M.; Burr, Thomas L.; Hamada, Michael S.; ...

    2015-07-15

    Forensic science aims to infer characteristics of source terms using measured observables. Our focus is on statistical design of experiments and data analysis challenges arising in nuclear forensics. More specifically, we focus on inferring aspects of experimental conditions (of a process to produce product Pu oxide powder), such as temperature, nitric acid concentration, and Pu concentration, using measured features of the product Pu oxide powder. The measured features, Y, include trace chemical concentrations and particle morphology such as particle size and shape of the produced Pu oxide powder particles. Making inferences about the nature of inputs X that were used to create nuclear materials having particular characteristics, Y, is an inverse problem. Therefore, statistical analysis can be used to identify the best set (or sets) of Xs for a new set of observed responses Y. One can fit a model (or models) such as Y = f(X) + error, for each of the responses, based on a calibration experiment and then “invert” to solve for the best set of Xs for a new set of Ys. This perspectives paper uses archived experimental data to consider aspects of data collection and experiment design for the calibration data to maximize the quality of the predicted Ys in the forward models; that is, we assume that well-estimated forward models are effective in the inverse problem. In addition, we consider how to identify a best solution for the inferred X, and evaluate the quality of the result and its robustness to a variety of initial assumptions, and different correlation structures between the responses. Finally, we briefly review recent advances in metrology issues related to characterizing particle morphology measurements used in the response vector, Y.
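    A toy, single-response version of the fit-then-invert workflow described above might look as follows; the polynomial forward model, the grid inversion, and all numbers are illustrative assumptions, not the paper's models or data:

```python
import numpy as np

# Toy calibration data: process condition x (e.g. temperature) and one
# measured product feature y, from a designed calibration experiment.
rng = np.random.default_rng(1)
x_cal = np.linspace(300.0, 500.0, 21)
y_cal = 0.02 * x_cal + 1.5 + rng.normal(0.0, 0.05, x_cal.size)

# Forward model y = f(x): here a simple quadratic fit stands in for f.
forward = np.poly1d(np.polyfit(x_cal, y_cal, 2))

def invert(y_new, x_grid=np.linspace(300.0, 500.0, 2001)):
    """Return the grid value of x whose predicted response best matches y_new."""
    return x_grid[np.argmin((forward(x_grid) - y_new) ** 2)]

print(invert(9.0))   # inferred process condition for a new observed feature
```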

  16. Design of experiments and data analysis challenges in calibration for forensics applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson-Cook, Christine M.; Burr, Thomas L.; Hamada, Michael S.

    Forensic science aims to infer characteristics of source terms using measured observables. Our focus is on statistical design of experiments and data analysis challenges arising in nuclear forensics. More specifically, we focus on inferring aspects of experimental conditions (of a process to produce product Pu oxide powder), such as temperature, nitric acid concentration, and Pu concentration, using measured features of the product Pu oxide powder. The measured features, Y, include trace chemical concentrations and particle morphology such as particle size and shape of the produced Pu oxide powder particles. Making inferences about the nature of inputs X that were used to create nuclear materials having particular characteristics, Y, is an inverse problem. Therefore, statistical analysis can be used to identify the best set (or sets) of Xs for a new set of observed responses Y. One can fit a model (or models) such as Y = f(X) + error, for each of the responses, based on a calibration experiment and then “invert” to solve for the best set of Xs for a new set of Ys. This perspectives paper uses archived experimental data to consider aspects of data collection and experiment design for the calibration data to maximize the quality of the predicted Ys in the forward models; that is, we assume that well-estimated forward models are effective in the inverse problem. In addition, we consider how to identify a best solution for the inferred X, and evaluate the quality of the result and its robustness to a variety of initial assumptions, and different correlation structures between the responses. Finally, we briefly review recent advances in metrology issues related to characterizing particle morphology measurements used in the response vector, Y.

  17. Calibration of a parsimonious distributed ecohydrological daily model in a data-scarce basin by exclusively using the spatio-temporal variation of NDVI

    NASA Astrophysics Data System (ADS)

    Ruiz-Pérez, Guiomar; Koch, Julian; Manfreda, Salvatore; Caylor, Kelly; Francés, Félix

    2017-12-01

    Ecohydrological modeling studies in developing regions, such as sub-Saharan Africa, often face the problem of extensive parametrical requirements and limited available data. Satellite remote sensing data may be able to fill this gap, but require novel methodologies to exploit their spatio-temporal information that could potentially be incorporated into model calibration and validation frameworks. The present study tackles this problem by suggesting an automatic calibration procedure, based on empirical orthogonal functions, for distributed ecohydrological daily models. The procedure is tested with the support of remote sensing data in a data-scarce environment - the upper Ewaso Ngiro river basin in Kenya. In the present application, the TETIS-VEG model is calibrated using only NDVI (Normalized Difference Vegetation Index) data derived from MODIS. The results demonstrate that (1) satellite data of vegetation dynamics can be used to calibrate and validate ecohydrological models in water-controlled and data-scarce regions, (2) the model calibrated using only satellite data is able to reproduce both the spatio-temporal vegetation dynamics and the observed discharge at the outlet and (3) the proposed automatic calibration methodology works satisfactorily and allows for a straightforward incorporation of spatio-temporal data into the calibration and validation framework of a model.
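    The abstract does not spell out how the empirical orthogonal functions enter the objective function; one plausible sketch of the general idea, decomposing observed and simulated NDVI anomaly fields with an SVD and penalizing the mismatch of the leading patterns, is given below (this is an assumed formulation, not necessarily the one used in the paper):

```python
import numpy as np

def leading_eofs(field, k=3):
    """EOF decomposition of a (n_pixels, n_times) NDVI matrix.
    Returns the k leading spatial patterns and their explained-variance shares."""
    anomalies = field - field.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(anomalies, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    return U[:, :k], explained[:k]

def eof_mismatch(obs_ndvi, sim_ndvi, k=3):
    """One possible calibration objective: variance-weighted distance between
    observed and simulated leading EOF patterns (EOF signs are arbitrary,
    so each pair is compared up to a sign flip)."""
    eof_o, var_o = leading_eofs(obs_ndvi, k)
    eof_s, _ = leading_eofs(sim_ndvi, k)
    diffs = [min(np.linalg.norm(eof_o[:, i] - eof_s[:, i]),
                 np.linalg.norm(eof_o[:, i] + eof_s[:, i])) for i in range(k)]
    return float(np.dot(var_o, diffs))
```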

  18. ASTER preflight and inflight calibration and the validation of level 2 products

    USGS Publications Warehouse

    Thome, K.; Aral, K.; Hook, S.; Kieffer, H.; Lang, H.; Matsunaga, T.; Ono, A.; Palluconi, F. D.; Sakuma, H.; Slater, P.; Takashima, T.; Tonooka, H.; Tsuchida, S.; Welch, R.M.; Zalewski, E.

    1998-01-01

    This paper describes the preflight and inflight calibration approaches used for the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). The system is a multispectral, high-spatial-resolution sensor on the Earth Observing System (EOS) AM-1 platform. Preflight calibration of ASTER uses well-characterized sources to provide calibration, and preflight round-robin exercises to understand biases between the calibration sources of ASTER and other EOS sensors. These round-robins rely on well-characterized, ultra-stable radiometers. An experiment held in Yokohama, Japan, showed that the output from the source used for the visible and near-infrared (VNIR) subsystem of ASTER may be underestimated by 1.5%, but this is still within the 4% specification for the absolute, radiometric calibration of these bands. Inflight calibration will rely on vicarious techniques and onboard blackbodies and lamps. Vicarious techniques include ground-reference methods using desert and water sites. A recent joint field campaign gives confidence that these methods currently provide absolute calibration to better than 5%, and indications are that uncertainties less than the required 4% should be achievable at launch. The EOS AM-1 platform will also provide a spacecraft maneuver that will allow ASTER to see the moon, allowing further characterization of the sensor. A method for combining the results of these independent calibrations is presented. The paper also describes the plans for validating the Level 2 data products from ASTER. These plans rely heavily upon field campaigns using methods similar to those used for the ground-reference, vicarious calibration methods. © 1998 IEEE.

  19. A New Approach to the Internal Calibration of Reverberation-Mapping Spectra

    NASA Astrophysics Data System (ADS)

    Fausnaugh, M. M.

    2017-02-01

    We present a new procedure for the internal (night-to-night) calibration of time-series spectra, with specific applications to optical AGN reverberation mapping data. The traditional calibration technique assumes that the narrow [O iii] λ5007 emission-line profile is constant in time; given a reference [O iii] λ5007 line profile, nightly spectra are aligned by fitting for a wavelength shift, a flux rescaling factor, and a change in the spectroscopic resolution. We propose the following modifications to this procedure: (1) we stipulate a constant spectral resolution for the final calibrated spectra, (2) we employ a more flexible model for changes in the spectral resolution, and (3) we use a Bayesian modeling framework to assess uncertainties in the calibration. In a test case using data for MCG+08-11-011, these modifications result in a calibration precision of ˜1 millimagnitude, which is approximately a factor of five improvement over the traditional technique. At this level, other systematic issues (e.g., the nightly sensitivity functions and Fe ii contamination) limit the final precision of the observed light curves. We implement this procedure as a python package (mapspec), which we make available to the community.
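    The traditional rescaling step described above (fit a wavelength shift and a flux factor so the nightly [O iii] profile matches a reference) can be sketched with scipy; the authors' mapspec package additionally models resolution changes and works in a Bayesian framework, both of which this toy version omits:

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import minimize

def align_night(wave, night_flux, ref_flux):
    """Fit a wavelength shift and a flux rescaling that map a nightly
    [O iii] profile onto the reference profile.  Resolution changes and the
    Bayesian uncertainty assessment of mapspec are deliberately omitted."""
    ref = interp1d(wave, ref_flux, bounds_error=False, fill_value=0.0)

    def chi2(params):
        shift, scale = params
        shifted = interp1d(wave + shift, night_flux,
                           bounds_error=False, fill_value=0.0)
        return np.sum((scale * shifted(wave) - ref(wave)) ** 2)

    result = minimize(chi2, x0=[0.0, 1.0], method="Nelder-Mead")
    return result.x   # best-fit (shift, scale)
```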

  20. Timing Calibration in PET Using a Time Alignment Probe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moses, William W.; Thompson, Christopher J.

    2006-05-05

    We evaluate the Scanwell Time Alignment Probe for performing the timing calibration for the LBNL Prostate-Specific PET Camera. We calibrate the time delay correction factors for each detector module in the camera using two methods--using the Time Alignment Probe (which measures the time difference between the probe and each detector module) and using the conventional method (which measures the timing difference between all module-module combinations in the camera). These correction factors, which are quantized in 2 ns steps, are compared on a module-by-module basis. The values are in excellent agreement--of the 80 correction factors, 62 agree exactly, 17 differ by 1 step, and 1 differs by 2 steps. We also measure on-time and off-time counting rates when the two sets of calibration factors are loaded into the camera and find that they agree within statistical error. We conclude that the performance of the Time Alignment Probe and conventional methods is equivalent.

  1. Blind calibration of radio interferometric arrays using sparsity constraints and its implications for self-calibration

    NASA Astrophysics Data System (ADS)

    Chiarucci, Simone; Wijnholds, Stefan J.

    2018-02-01

    Blind calibration, i.e. calibration without a priori knowledge of the source model, is robust to the presence of unknown sources such as transient phenomena or (low-power) broad-band radio frequency interference that escaped detection. In this paper, we present a novel method for blind calibration of a radio interferometric array assuming that the observed field only contains a small number of discrete point sources. We show the huge computational advantage over previous blind calibration methods and we assess its statistical efficiency and robustness to noise and the quality of the initial estimate. We demonstrate the method on actual data from a Low-Frequency Array low-band antenna station showing that our blind calibration is able to recover the same gain solutions as the regular calibration approach, as expected from theory and simulations. We also discuss the implications of our findings for the robustness of regular self-calibration to poor starting models.

  2. Thermodynamically consistent model calibration in chemical kinetics

    PubMed Central

    2011-01-01

    Background The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new models. Furthermore, TCMC can

  3. Uncertainty in Calibration, Detection and Estimation of Metal Concentrations in Engine Plumes Using OPAD

    NASA Technical Reports Server (NTRS)

    Hopkins, Randall C.; Benzing, Daniel A.

    1998-01-01

    Improvements in the uncertainties of the radiant intensity (I) values can be accomplished mainly by improvements in the calibration process and by minimizing the difference between the background and engine plume radiance. For engine tests in which the plume is extremely bright, the difference in luminance between the calibration lamp and the engine plume radiance can be so large as to cause relatively large uncertainties in the values of R. This is due to the small aperture necessary on the receiving optics to avoid saturating the instrument. However, this is not a problem with the SSME engine since the liquid oxygen/hydrogen combustion is not as bright as some other fuels. Applying the instrumentation to other types of engine tests may require a much brighter calibration lamp.

  4. Emergence of Coding and its Specificity as a Physico-Informatic Problem

    NASA Astrophysics Data System (ADS)

    Wills, Peter R.; Nieselt, Kay; McCaskill, John S.

    2015-06-01

    We explore the origin-of-life consequences of the view that biological systems are demarcated from inanimate matter by their possession of referential information, which is processed computationally to control choices of specific physico-chemical events. Cells are cybernetic: they use genetic information in processes of communication and control, subjecting physical events to a system of integrated governance. The genetic code is the most obvious example of how cells use information computationally, but the historical origin of the usefulness of molecular information is not well understood. Genetic coding made information useful because it imposed a modular metric on the evolutionary search and thereby offered a general solution to the problem of finding catalysts of any specificity. We use the term "quasispecies symmetry breaking" to describe the iterated process of self-organisation whereby the alphabets of distinguishable codons and amino acids increased, step by step.

  5. VIIRS thermal emissive bands on-orbit calibration coefficient performance using vicarious calibration results

    NASA Astrophysics Data System (ADS)

    Moyer, D.; Moeller, C.; De Luccia, F.

    2013-09-01

    The Visible Infrared Imager Radiometer Suite (VIIRS), a primary sensor on-board the Suomi-National Polar-orbiting Partnership (SNPP) spacecraft, was launched October 28, 2011. It has 22 bands: 7 thermal emissive bands (TEBs), 14 reflective solar bands (RSBs) and a Day Night Band (DNB). The TEBs cover the spectral wavelengths between 3.7 to 12 μm and have two 371 m and five 742 m spatial resolution bands. A VIIRS Key Performance Parameter (KPP) is the sea surface temperature (SST) which uses bands M12 (3.7 μm), M15 (10.8 μm) and M16's (12.0 μm) calibrated Science Data Records (SDRs). The TEB SDRs rely on pre-launch calibration coefficients used in a quadratic algorithm to convert the detector's response to calibrated radiance. This paper will evaluate the performance of these prelaunch calibration coefficients using vicarious calibration information from the Cross-track Infrared Sounder (CrIS) also onboard the SNPP spacecraft and the Infrared Atmospheric Sounding Interferometer (IASI) on-board the Meteorological Operational (MetOp) satellite. Changes to the pre-launch calibration coefficients' offset term c0 to improve the SDR's performance at cold scene temperatures will also be discussed.
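    The quadratic count-to-radiance form referred to above can be written down directly; the sketch below uses the c0, c1, c2 naming of the abstract's offset term but leaves out the on-orbit gain and background handling of the operational VIIRS algorithm:

```python
import numpy as np

def teb_radiance(dn, c0, c1, c2):
    """Quadratic conversion of background-subtracted detector counts `dn` to
    calibrated radiance, L = c0 + c1*dn + c2*dn**2.  The operational VIIRS
    algorithm also applies on-orbit gain factors, which are omitted here."""
    dn = np.asarray(dn, dtype=float)
    return c0 + c1 * dn + c2 * dn ** 2

# An adjustment of the offset term, as discussed in the paper for cold scenes,
# amounts to re-evaluating with an updated c0:
#   L_adjusted = teb_radiance(dn, c0_new, c1, c2)
```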

  6. A Comparison of Radiometric Calibration Techniques for Lunar Impact Flashes

    NASA Technical Reports Server (NTRS)

    Suggs, R.

    2016-01-01

    Video observations of lunar impact flashes have been made by a number of researchers since the late 1990s, and the problem of determining the impact energies has been approached in different ways (Bellot Rubio et al., 2000 [1]; Bouley et al., 2012 [2]; Suggs et al., 2014 [3]; Rembold and Ryan, 2015 [4]; Ortiz et al., 2015 [5]). The wide spectral response of the unfiltered video cameras in use for all published measurements necessitates color correction for the standard filter magnitudes available for the comparison stars. An estimate of the color of the impact flash is also needed to correct it to the chosen passband. Magnitudes corrected to standard filters are then used to determine the luminous energy in the filter passband according to the stellar atmosphere calibrations of Bessell et al., 1998 [6]. Figure 1 illustrates the problem. The camera pass band is the wide black curve and the blue, green, red, and magenta curves show the band passes of the Johnson-Cousins B, V, R, and I filters for which we have calibration star magnitudes. The blackbody curve of an impact flash of temperature 2800 K (Nemtchinov et al., 1998 [7]) is the dashed line. This paper compares the various photometric calibration techniques and how they address the color corrections necessary for the calculation of luminous energy (radiometry) of impact flashes. This issue has significant implications for determination of luminous efficiency, predictions of impact crater sizes for observed flashes, and the flux of meteoroids in the tens of grams to kilogram size range.
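    The color-correction problem ultimately comes down to how a roughly 2800 K Planck spectrum is weighted by the wide camera response versus a standard filter passband. A crude numerical sketch, with idealized rectangular passbands standing in for the real response curves shown in the paper's Figure 1, is:

```python
import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23   # SI constants

def planck(wl_m, T):
    """Blackbody spectral radiance at wavelength wl_m (metres), temperature T (K)."""
    return (2 * H * C ** 2 / wl_m ** 5) / np.expm1(H * C / (wl_m * KB * T))

def band_fraction(T, lo_nm, hi_nm, cam_lo_nm=400.0, cam_hi_nm=900.0):
    """Fraction of the (idealized, flat-response) camera-band blackbody signal
    that falls inside a rectangular filter band lo_nm..hi_nm."""
    wl = np.linspace(cam_lo_nm, cam_hi_nm, 5000) * 1e-9
    B = planck(wl, T)
    in_band = (wl >= lo_nm * 1e-9) & (wl <= hi_nm * 1e-9)
    return np.trapz(B[in_band], wl[in_band]) / np.trapz(B, wl)

# Rough share of a 2800 K flash signal landing in an R-like 550-800 nm band
print(band_fraction(2800.0, 550.0, 800.0))
```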

  7. Energy calibration issues in nuclear resonant vibrational spectroscopy: observing small spectral shifts and making fast calibrations.

    PubMed

    Wang, Hongxin; Yoda, Yoshitaka; Dong, Weibing; Huang, Songping D

    2013-09-01

    The conventional energy calibration for nuclear resonant vibrational spectroscopy (NRVS) is usually long. Meanwhile, taking NRVS samples out of the cryostat increases the chance of sample damage, which makes it impossible to carry out an energy calibration during one NRVS measurement. In this study, by manipulating the 14.4 keV beam through the main measurement chamber without moving out the NRVS sample, two alternative calibration procedures have been proposed and established: (i) an in situ calibration procedure, which measures the main NRVS sample at stage A and the calibration sample at stage B simultaneously, and calibrates the energies for observing extremely small spectral shifts; for example, the 0.3 meV energy shift between the 100%-(57)Fe-enriched [Fe4S4Cl4](=) and 10%-(57)Fe and 90%-(54)Fe labeled [Fe4S4Cl4](=) has been well resolved; (ii) a quick-switching energy calibration procedure, which reduces each calibration time from 3-4 h to about 30 min. Although the quick-switching calibration is not in situ, it is suitable for normal NRVS measurements.

  8. Automated Heat-Flux-Calibration Facility

    NASA Technical Reports Server (NTRS)

    Liebert, Curt H.; Weikle, Donald H.

    1989-01-01

    Computer control speeds operation of equipment and processing of measurements. A new heat-flux-calibration facility has been developed at Lewis Research Center. It is used for fast-transient heat-transfer testing, durability testing, and calibration of heat-flux gauges. Calibrations are performed at constant or transient heat fluxes ranging from 1 to 6 MW/m² and at temperatures ranging from 80 K to the melting temperatures of most materials. The facility was developed because of the need to build and calibrate very small heat-flux gauges for the Space Shuttle main engine (SSME). It includes a lamp head attached to the side of a service module, an argon-gas-recirculation module, a reflector, a heat exchanger, and a high-speed positioning system. This type of automated heat-flux-calibration facility can be installed in industrial plants for onsite calibration of heat-flux gauges measuring fluxes of heat in advanced gas-turbine and rocket engines.

  9. Accelerated dynamic cardiac MRI exploiting sparse-Kalman-smoother self-calibration and reconstruction (k  -  t SPARKS)

    NASA Astrophysics Data System (ADS)

    Park, Suhyung; Park, Jaeseok

    2015-05-01

    Accelerated dynamic MRI, which exploits spatiotemporal redundancies in k  -  t space and coil dimension, has been widely used to reduce the number of signal encodings and thus increase imaging efficiency with minimal loss of image quality. Nonetheless, particularly in cardiac MRI it still suffers from artifacts and amplified noise in the presence of time-drifting coil sensitivity due to relative motion between coil and subject (e.g. free breathing). Furthermore, a substantial number of additional calibrating signals must be acquired to warrant accurate calibration of coil sensitivity. In this work, we propose a novel, accelerated dynamic cardiac MRI with sparse-Kalman-smoother self-calibration and reconstruction (k  -  t SPARKS), which is robust to time-varying coil sensitivity even with a small number of calibrating signals. The proposed k  -  t SPARKS incorporates Kalman-smoother self-calibration in k  -  t space and sparse signal recovery in x  -   f space into a single optimization problem, leading to iterative, joint estimation of time-varying convolution kernels and missing signals in k  -  t space. In the Kalman-smoother calibration, motion-induced uncertainties over the entire time frames were included in modeling the state transition, while a coil-dependent noise statistic was used in describing the measurement process. The sparse signal recovery iteratively alternates with the self-calibration to tackle the ill-conditioning problem potentially resulting from insufficient calibrating signals. Simulations and experiments were performed using both the proposed and conventional methods for comparison, revealing that the proposed k  -  t SPARKS yields higher signal-to-error ratio and superior temporal fidelity in both breath-hold and free-breathing cardiac applications over all reduction factors.

  10. Accelerated dynamic cardiac MRI exploiting sparse-Kalman-smoother self-calibration and reconstruction (k  -  t SPARKS).

    PubMed

    Park, Suhyung; Park, Jaeseok

    2015-05-07

    Accelerated dynamic MRI, which exploits spatiotemporal redundancies in k  -  t space and coil dimension, has been widely used to reduce the number of signal encodings and thus increase imaging efficiency with minimal loss of image quality. Nonetheless, particularly in cardiac MRI it still suffers from artifacts and amplified noise in the presence of time-drifting coil sensitivity due to relative motion between coil and subject (e.g. free breathing). Furthermore, a substantial number of additional calibrating signals must be acquired to warrant accurate calibration of coil sensitivity. In this work, we propose a novel, accelerated dynamic cardiac MRI with sparse-Kalman-smoother self-calibration and reconstruction (k  -  t SPARKS), which is robust to time-varying coil sensitivity even with a small number of calibrating signals. The proposed k  -  t SPARKS incorporates Kalman-smoother self-calibration in k  -  t space and sparse signal recovery in x  -   f space into a single optimization problem, leading to iterative, joint estimation of time-varying convolution kernels and missing signals in k  -  t space. In the Kalman-smoother calibration, motion-induced uncertainties over the entire time frames were included in modeling the state transition, while a coil-dependent noise statistic was used in describing the measurement process. The sparse signal recovery iteratively alternates with the self-calibration to tackle the ill-conditioning problem potentially resulting from insufficient calibrating signals. Simulations and experiments were performed using both the proposed and conventional methods for comparison, revealing that the proposed k  -  t SPARKS yields higher signal-to-error ratio and superior temporal fidelity in both breath-hold and free-breathing cardiac applications over all reduction factors.

  11. Common problems in the elicitation and analysis of expert opinion affecting probabilistic safety assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, M.A.; Booker, J.M.

    1990-01-01

    Expert opinion is frequently used in probabilistic safety assessment (PSA), particularly in estimating low probability events. In this paper, we discuss some of the common problems encountered in eliciting and analyzing expert opinion data and offer solutions or recommendations. The problems are: that experts are not naturally Bayesian (people fail to update their existing information to account for new information as it becomes available, as would be predicted by the Bayesian philosophy); that experts cannot be fully calibrated (to calibrate experts, the feedback from the known quantities must be immediate, frequent, and specific to the task); that experts are limited in the number of things that they can mentally juggle at a time to 7 ± 2; that data gatherers and analysts can introduce bias by unintentionally altering the expert's thinking or answers; that the level of detail of the data, or granularity, can affect the analyses; and that the conditioning effect poses difficulties in gathering and analyzing the expert data. The data that the expert gives can be conditioned on a variety of factors that can affect the analysis and the interpretation of the results. 31 refs.

  12. The specification-based validation of reliable multicast protocol: Problem Report. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Wu, Yunqing

    1995-01-01

    Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this report, we develop formal models for RMP using existing automated verification systems, and perform validation on the formal RMP specifications. The validation analysis helped identify some minor specification and design problems. We also use the formal models of RMP to generate a test suite for conformance testing of the implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress of implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding of the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.

  13. Identification and testing of countermeasures for specific alcohol accident types and problems. Volume 4, Appendices

    DOT National Transportation Integrated Search

    1984-12-01

    This report summarizes work conducted to investigate the feasibility of developing effective countermeasures directed at specific alcohol-related accidents or problems. In Phase I, literature and accident data were reviewed to determine the scope and...

  14. An accurate on-site calibration system for electronic voltage transformers using a standard capacitor

    NASA Astrophysics Data System (ADS)

    Hu, Chen; Chen, Mian-zhou; Li, Hong-bin; Zhang, Zhu; Jiao, Yang; Shao, Haiming

    2018-05-01

    Ordinarily, electronic voltage transformers (EVTs) are calibrated off-line, and the calibration procedure requires complex switching operations, which will influence the reliability of the power grid and induce large economic losses. To overcome this problem, this paper investigates a 110 kV on-site calibration system for EVTs, including a standard channel, a calibrated channel and a PC equipped with the LabView environment. The standard channel employs a standard capacitor and an analogue integrating circuit to reconstruct the primary voltage signal. Moreover, an adaptive full-phase discrete Fourier transform (DFT) algorithm is proposed to extract electrical parameters. The algorithm involves the process of extracting the frequency of the grid, adjusting the operation points, and calculating the results using the DFT. In addition, an insulated automatic lifting device is designed to realize the live connection of the standard capacitor, which is driven by a wireless remote controller. A performance test of the capacitor verifies the accuracy of the standard capacitor. A system calibration test shows that the system ratio error is less than 0.04% and the phase error is below 2′, which meets the requirement of the 0.2 accuracy class. Finally, the developed calibration system was used in a substation, and the field test data validate the usability of the system.
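    The ratio and phase errors follow from comparing the fundamental phasors of the standard-capacitor channel and the EVT channel. The sketch below assumes the grid frequency has already been estimated (the operating-point adjustment step) and that both channels are scaled to the same nominal primary value; it is a generic single-bin DFT, not the authors' implementation:

```python
import numpy as np

def fundamental_phasor(samples, fs, f0):
    """Single-bin DFT at the (already estimated) grid frequency f0, with the
    window trimmed to a whole number of cycles to limit spectral leakage."""
    n = int(np.floor(len(samples) * f0 / fs) * fs / f0)
    t = np.arange(n) / fs
    x = np.asarray(samples[:n], dtype=float)
    return 2.0 / n * np.sum(x * np.exp(-2j * np.pi * f0 * t))

def ratio_and_phase_error(std_samples, evt_samples, fs, f0):
    """Ratio error (percent) and phase error (arcminutes) of the EVT channel
    relative to the standard-capacitor channel."""
    p_std = fundamental_phasor(std_samples, fs, f0)
    p_evt = fundamental_phasor(evt_samples, fs, f0)
    ratio_err = (abs(p_evt) - abs(p_std)) / abs(p_std) * 100.0
    phase_err = np.angle(p_evt / p_std) * 180.0 / np.pi * 60.0
    return ratio_err, phase_err
```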

  15. Assisting problem drinkers to change on their own: effect of specific and non-specific advice.

    PubMed

    Spivak, K; Sanchez-Craig, M; Davila, R

    1994-09-01

    Problem drinkers (99 males, 41 females) wishing to quit or cut down without professional help received a 60-minute session during which they were assessed and given at random one of these materials: Guidelines, a two-page pamphlet outlining specific methods for achieving abstinence or moderate drinking; Manual, a 30-page booklet describing the methods in the Guidelines; or General Information, a package about alcohol effects. At 12 months follow-up, subjects in the Guidelines and Manual conditions showed significantly greater reductions of heavy days (of 5+ drinks) than subjects in General Information (70% vs. 24%); in addition, significantly fewer subjects in the Guidelines and the Manual conditions expressed need for professional assistance with their drinking (25% vs. 46% in General Information). No main effect of condition or gender was observed on rates of moderate drinkers. At 12 months follow-up, 31% of the men and 43% of the women were rated as moderate drinkers. It was concluded that drinkers intending to cut down on their own derive greater benefit (in terms of their alcohol use) from materials containing specific instructions to develop moderate drinking than from those providing general information on alcohol effects. Clinical and research implications of the findings are discussed.

  16. Mesh refinement and numerical sensitivity analysis for parameter calibration of partial differential equations

    NASA Astrophysics Data System (ADS)

    Becker, Roland; Vexler, Boris

    2005-06-01

    We consider the calibration of parameters in physical models described by partial differential equations. This task is formulated as a constrained optimization problem with a cost functional of least squares type using information obtained from measurements. An important issue in the numerical solution of this type of problem is the control of the errors introduced, first, by discretization of the equations describing the physical model, and second, by measurement errors or other perturbations. Our strategy is as follows: we suppose that the user defines an interest functional I, which might depend on both the state variable and the parameters and which represents the goal of the computation. First, we propose an a posteriori error estimator which measures the error with respect to this functional. This error estimator is used in an adaptive algorithm to construct economic meshes by local mesh refinement. The proposed estimator requires the solution of an auxiliary linear equation. Second, we address the question of sensitivity. Applying similar techniques as before, we derive quantities which describe the influence of small changes in the measurements on the value of the interest functional. These numbers, which we call relative condition numbers, give additional information on the problem under consideration. They can be computed by means of the solution of the auxiliary problem determined before. Finally, we demonstrate our approach at hand of a parameter calibration problem for a model flow problem.
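    The abstract does not reproduce the estimator, but goal-oriented (dual-weighted-residual) estimates of this general type are usually written as follows, where z solves the auxiliary linear (adjoint) problem mentioned above; the notation here is generic and not taken from the paper:

```latex
% Generic goal-oriented (dual-weighted-residual) error estimate: the error in
% the interest functional I is approximated by the primal residual rho weighted
% by the adjoint solution z of one auxiliary linear problem.
\[
  I(u, q) - I(u_h, q_h) \;\approx\; \eta_h := \rho(u_h, q_h)(z - i_h z),
\]
% where (u_h, q_h) are the discrete state and parameters, i_h z is an
% interpolant of z, and the local contributions to \eta_h drive the adaptive
% mesh refinement.
```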

  17. A Spectralon BRF Data Base for MISR Calibration Application

    NASA Technical Reports Server (NTRS)

    Bruegge, C.; Chrien, N.; Haner, D.

    1999-01-01

    The Multi-angle Imaging SpectroRadiometer (MISR) is an Earth observing sensor which will provide global retrievals of aerosols, clouds, and land surface parameters. Instrument specifications require high accuracy absolute calibration, as well as accurate camera-to-camera, band-to-band and pixel-to-pixel relative response determinations.

  18. MODIS airborne simulator visible and near-infrared calibration, 1992 ASTEX field experiment. Calibration version: ASTEX King 1.0

    NASA Technical Reports Server (NTRS)

    Arnold, G. Thomas; Fitzgerald, Michael; Grant, Patrick S.; King, Michael D.

    1994-01-01

    Calibration of the visible and near-infrared (near-IR) channels of the MODIS Airborne Simulator (MAS) is derived from observations of a calibrated light source. For the 1992 Atlantic Stratocumulus Transition Experiment (ASTEX) field deployment, the calibrated light source was the NASA Goddard 48-inch integrating hemisphere. Tests during the ASTEX deployment were conducted to calibrate the hemisphere and then the MAS. This report summarizes the ASTEX hemisphere calibration, and then describes how the MAS was calibrated from the hemisphere data. All MAS calibration measurements are presented and determination of the MAS calibration coefficients (raw counts to radiance conversion) is discussed. In addition, comparisons to an independent MAS calibration by Ames personnel using their 30-inch integrating sphere are discussed.

  19. Predicting the performance uncertainty of a 1-MW pilot-scale carbon capture system after hierarchical laboratory-scale calibration and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zhijie; Lai, Canhai; Marcy, Peter William

    2017-05-01

    A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design's predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.
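
    A minimal Python sketch of the ensemble step described above: draw samples from (stand-in) calibrated parameter distributions, push them through a toy capture-efficiency surrogate, and read off the lowest flow rate at which 90% capture is achieved with 95% confidence. The parameter names, distributions and surrogate model are invented placeholders, not the multiphase reactive flow model used in the study.

        import numpy as np

        rng = np.random.default_rng(0)
        n_samples = 2000
        # Stand-ins for calibrated (posterior) samples of two physical parameters.
        k_rxn = rng.normal(1.0, 0.10, n_samples)    # reaction-rate multiplier
        k_mass = rng.normal(1.0, 0.15, n_samples)   # mass-transfer multiplier

        def capture_efficiency(flow_rate, k1, k2):
            """Toy surrogate: efficiency rises with the scanned flow rate."""
            return 1.0 - np.exp(-0.8 * k1 * k2 * flow_rate)

        for q in np.linspace(1.0, 6.0, 51):
            eff = capture_efficiency(q, k_rxn, k_mass)
            if np.percentile(eff, 5) >= 0.90:       # target met with 95% confidence
                print(f"minimum flow rate meeting the target: {q:.2f} (arbitrary units)")
                break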

  20. VSHEC—A program for the automatic spectrum calibration

    NASA Astrophysics Data System (ADS)

    Zlokazov, V. B.; Utyonkov, V. K.; Tsyganov, Yu. S.

    2013-02-01

    Calibration is the transformation of the output channels of a measuring device into physical values (energies, times, angles, etc.). If performed manually, it is a labor- and time-consuming procedure even when only a few detectors are used. The situation changes appreciably, however, when calibration of multi-detector systems is required, where the number of registering devices extends to hundreds (Tsyganov et al. (2004) [1]). The calibration is aggravated by the fact that the required pivotal channel numbers must be determined from peak-like distributions. A peak distribution is an informal pattern, so a pattern-recognition procedure should be employed to avoid operator interference. Automatic calibration is the determination of the calibration curve parameters on the basis of a reference quantity list and data which are partially characterized by these quantities (energies, angles, etc.). The program allows the physicist to perform the calibration of spectrometric detectors for both cases: that of a single tract and that of many.
    Program summary
    Program title: VSHEC
    Catalogue identifier: AENN_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENN_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 6403
    No. of bytes in distributed program, including test data, etc.: 325847
    Distribution format: tar.gz
    Programming language: DELPHI-5 and higher.
    Computer: Any IBM PC compatible.
    Operating system: Windows XX.
    Classification: 2.3, 4.9.
    Nature of problem: Automatic conversion of detector channels into their energy equivalents.
    Solution method: Automatic decomposition of a spectrum into geometric figures such as peaks and an envelope of peaks from below, estimation of peak centers and search for the maximum peak center subsequence which matches the
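
    A minimal Python sketch of the underlying task, not the VSHEC algorithm: locate peak centres in a channel spectrum, pair them with a reference energy list, and fit a linear calibration curve E = a*channel + b. The peak detection and the naive matching strategy are deliberately simple stand-ins for the pattern-recognition procedure described above.

        import numpy as np
        from scipy.signal import find_peaks

        def calibrate(spectrum, reference_energies):
            # Peak centres in channel units (crude: prominent local maxima).
            centres, _ = find_peaks(spectrum, prominence=0.2 * spectrum.max())
            if len(centres) < len(reference_energies):
                raise ValueError("fewer detected peaks than reference lines")
            # Naive matching: keep the strongest peaks, in increasing channel order.
            strongest = centres[np.argsort(spectrum[centres])[-len(reference_energies):]]
            strongest.sort()
            a, b = np.polyfit(strongest, sorted(reference_energies), 1)
            return a, b   # energy = a * channel + b

        # Synthetic spectrum with three Gaussian lines at known channels.
        ch = np.arange(1024)
        spectrum = sum(h * np.exp(-0.5 * ((ch - c) / 4.0) ** 2)
                       for c, h in [(200, 1.0), (500, 0.8), (860, 0.6)])
        print(calibrate(spectrum, [300.0, 750.0, 1290.0]))   # illustrative energies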

  1. Ground-based automated radiometric calibration system in Baotou site, China

    NASA Astrophysics Data System (ADS)

    Wang, Ning; Li, Chuanrong; Ma, Lingling; Liu, Yaokai; Meng, Fanrong; Zhao, Yongguang; Pang, Bo; Qian, Yonggang; Li, Wei; Tang, Lingli; Wang, Dongjin

    2017-10-01

    Post-launch vicarious calibration not only can be used to evaluate on-board calibrators but also provides traceable knowledge of the absolute accuracy, although it has the drawback of a low data-collection frequency owing to its cost in personnel and equipment. To overcome these problems, the CEOS Working Group on Calibration and Validation (WGCV) Infrared Visible Optical Sensors (IVOS) subgroup has proposed the Automated Radiometric Calibration Network (RadCalNet) project. The Baotou site is one of the four demonstration sites of RadCalNet. A distinguishing characteristic of the Baotou site is its combination of various natural scenes and artificial targets. At each artificial target and the desert area, an automated spectral measurement instrument has been deployed to obtain the surface-reflected radiance spectra every 2 minutes with a spectral resolution of 2 nm. The aerosol optical thickness and column water vapour content are measured by an automatic sun photometer. To meet the requirements of RadCalNet, a surface reflectance spectrum retrieval method is used to generate the standard input files, with the support of the surface and atmospheric measurements. The top-of-atmosphere reflectance spectra are then derived from the input files. Results for the demonstration satellites, including Landsat 8 and Sentinel-2A, show good agreement between observed and calculated values.

  2. Photogrammetric camera calibration

    USGS Publications Warehouse

    Tayman, W.P.; Ziemann, H.

    1984-01-01

    Section 2 (Calibration) of the document "Recommended Procedures for Calibrating Photogrammetric Cameras and Related Optical Tests" from the International Archives of Photogrammetry, Vol. XIII, Part 4, is reviewed in the light of recent practical work, and suggestions for changes are made. These suggestions are intended as a basis for a further discussion. © 1984.

  3. Uncertainty Analysis of Inertial Model Attitude Sensor Calibration and Application with a Recommended New Calibration Method

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.
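
    A minimal Python sketch of the general idea, not the NASA procedure: fit a nonlinear sensor model to replicated calibration data by least squares and derive approximate 95% intervals from the parameter covariance. The sensor model, angle range and noise level are illustrative assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def sensor_model(pitch_deg, scale, bias, nonlin):
            """Assumed sensor output as a function of applied pitch angle."""
            p = np.radians(pitch_deg)
            return scale * np.sin(p) + nonlin * np.sin(p) ** 3 + bias

        rng = np.random.default_rng(1)
        pitch = np.linspace(-30, 30, 25)                      # applied angles (deg)
        truth = sensor_model(pitch, 1.02, 0.003, -0.01)
        readings = truth + rng.normal(0, 5e-4, pitch.size)    # replicated cal. data

        popt, pcov = curve_fit(sensor_model, pitch, readings, p0=[1.0, 0.0, 0.0])
        stderr = np.sqrt(np.diag(pcov))
        for name, val, se in zip(["scale", "bias", "nonlin"], popt, stderr):
            # ~95% confidence interval from the asymptotic covariance.
            print(f"{name}: {val:.5f} +/- {1.96 * se:.5f}")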

  4. Experimental Results of Site Calibration and Sensitivity Measurements in OTR for UWB Systems

    NASA Astrophysics Data System (ADS)

    Viswanadham, Chandana; Rao, P. Mallikrajuna

    2017-06-01

    System calibration and parameter accuracy measurement of electronic support measures (ESM) systems are major activities carried out by electronic warfare (EW) engineers. These activities are critical and require a good understanding of microwaves, antennas, wave propagation, and the digital and communication domains. EW systems are broadband, built with state-of-the-art electronic hardware, and installed on a variety of military platforms to guard a country's security. EW systems operate over wide frequency ranges, typically on the order of thousands of MHz; hence they are ultra-wide-band (UWB) systems. A few calibration activities are carried out within the system and at the test sites to meet the accuracies of the final specifications. After calibration, parameters are measured for their accuracies either in feed mode, by injecting RF signals into the front end, or in radiation mode, by transmitting RF signals onto the system antenna. To carry out these activities in radiation mode, a calibrated open test range (OTR) is necessary in the frequency band of interest, so site calibration of the OTR must be carried out before system calibration and parameter measurements. This paper presents the experimental results of OTR site calibration and sensitivity measurements of UWB systems in radiation mode.

  5. Pattern sampling for etch model calibration

    NASA Astrophysics Data System (ADS)

    Weisbuch, François; Lutich, Andrey; Schatz, Jirka

    2017-06-01

    Successful patterning requires good control of the photolithography and etch processes. While compact litho models, mainly based on rigorous physics, can predict very well the contours printed in photoresist, purely empirical etch models are less accurate and more unstable. Compact etch models are based on geometrical kernels to compute the litho-etch biases that measure the distance between litho and etch contours. The definition of the kernels, as well as the choice of calibration patterns, is critical to obtaining a robust etch model. This work proposes a set of independent and anisotropic etch kernels ("internal", "external", "curvature", "Gaussian", "z_profile") designed to capture the finest details of the resist contours and to represent precisely any etch bias. By evaluating the etch kernels on various structures it is possible to map their etch signatures in a multi-dimensional space and analyze them to find an optimal sampling of structures with which to train an etch model. The method was applied specifically to a contact layer containing many different geometries and was used to successfully select appropriate calibration structures. The proposed kernels evaluated on these structures were combined to train an etch model significantly better than the standard one. We also illustrate the usage of the specific kernel "z_profile", which adds a third dimension to the description of the resist profile.

  6. Scheduling and calibration strategy for continuous radio monitoring of 1700 sources every three days

    NASA Astrophysics Data System (ADS)

    Max-Moerbeck, Walter

    2014-08-01

    The Owens Valley Radio Observatory 40 meter telescope is currently monitoring a sample of about 1700 blazars every three days at 15 GHz, with the main scientific goal of determining the relation between the variability of blazars at radio and gamma-rays as observed with the Fermi Gamma-ray Space Telescope. The time domain relation between radio and gamma-ray emission, in particular its correlation and time lag, can help us determine the location of the high-energy emission site in blazars, a current open question in blazar research. To achieve this goal, continuous observation of a large sample of blazars on a time scale of less than a week is indispensable. Since we only look at bright targets, the time available for target observations is mostly limited by source observability, calibration requirements and slewing of the telescope. Here I describe the implementation of a practical solution to this scheduling, calibration, and slewing time minimization problem. This solution combines ideas from optimization, in particular the traveling salesman problem, with astronomical and instrumental constraints. A heuristic solution, using well-established optimization techniques and astronomical insights particular to this situation, allows us to observe all the sources at the required three-day cadence while obtaining reliable calibration of the radio flux densities. Problems of this nature will only become more common in the future, and the ideas presented here can be relevant for other observing programs.
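
    A minimal Python sketch of the slewing side of the problem: a greedy nearest-neighbour tour over source positions, the standard heuristic starting point for travelling-salesman-style scheduling. The coordinates and the angular-distance slew cost (which ignores azimuth wrap-around and observability windows) are illustrative assumptions, not the OVRO scheduler.

        import numpy as np

        def greedy_tour(az_el, start=0):
            """Visit every source once, always slewing to the nearest unvisited one."""
            n = len(az_el)
            unvisited = set(range(n)) - {start}
            tour, current = [start], start
            while unvisited:
                nxt = min(unvisited,
                          key=lambda j: np.linalg.norm(az_el[j] - az_el[current]))
                tour.append(nxt)
                unvisited.remove(nxt)
                current = nxt
            return tour

        rng = np.random.default_rng(2)
        sources = np.column_stack([rng.uniform(0, 360, 50),    # azimuth (deg)
                                   rng.uniform(20, 80, 50)])   # elevation (deg)
        tour = greedy_tour(sources)
        slew = sum(np.linalg.norm(sources[a] - sources[b])
                   for a, b in zip(tour, tour[1:]))
        print(f"greedy tour visits {len(tour)} sources, total slew ~{slew:.0f} deg")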

  7. Non-convex optimization for self-calibration of direction-dependent effects in radio interferometric imaging

    NASA Astrophysics Data System (ADS)

    Repetti, Audrey; Birdi, Jasleen; Dabbech, Arwa; Wiaux, Yves

    2017-10-01

    Radio interferometric imaging aims to estimate an unknown sky intensity image from degraded observations, acquired through an antenna array. In the theoretical case of a perfectly calibrated array, it has been shown that solving the corresponding imaging problem by iterative algorithms based on convex optimization and compressive sensing theory can be competitive with classical algorithms such as CLEAN. However, in practice, antenna-based gains are unknown and have to be calibrated. Future radio telescopes, such as the Square Kilometre Array, aim at improving imaging resolution and sensitivity by orders of magnitude. At this precision level, the direction-dependency of the gains must be accounted for, and radio interferometric imaging can be understood as a blind deconvolution problem. In this context, the underlying minimization problem is non-convex, and adapted techniques have to be designed. In this work, leveraging recent developments in non-convex optimization, we propose the first joint calibration and imaging method in radio interferometry with proven convergence guarantees. Our approach, based on a block-coordinate forward-backward algorithm, jointly accounts for visibilities and suitable priors on both the image and the direction-dependent effects (DDEs). As demonstrated in recent works, sparsity remains the prior of choice for the image, while DDEs are modelled as smooth functions of the sky, i.e. spatially band-limited. Finally, we show through simulations the efficiency of our method for the reconstruction of both images of point sources and complex extended sources. MATLAB code is available on GitHub.
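
    A minimal Python sketch of a block-coordinate forward-backward alternation on a toy joint calibration-and-imaging problem y = diag(g)*A*x + n, with a soft-threshold (sparsity) prox on the image block and a plain gradient step on the gain block. It illustrates only the alternation pattern; the DDE model, the visibility operator and the convergence safeguards of the paper are not reproduced.

        import numpy as np

        rng = np.random.default_rng(3)
        m, n = 200, 100
        A = rng.normal(size=(m, n)) / np.sqrt(m)               # toy measurement operator
        x_true = np.zeros(n)
        x_true[rng.choice(n, 8, replace=False)] = rng.uniform(1.0, 3.0, 8)
        g_true = 1.0 + 0.1 * rng.normal(size=m)                # unknown per-measurement gains
        y = g_true * (A @ x_true) + 0.01 * rng.normal(size=m)

        soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
        lip_A = np.linalg.norm(A, 2) ** 2                      # spectral norm squared
        x, g = np.zeros(n), np.ones(m)
        for _ in range(300):
            # Image block: gradient (forward) step on the data fit,
            # then a soft-threshold (backward/prox) step enforcing sparsity.
            r = g * (A @ x) - y
            step_x = 1.0 / (np.max(g ** 2) * lip_A)
            x = soft(x - step_x * (A.T @ (g * r)), step_x * 0.01)
            # Calibration block: gradient step on the gains.
            Ax = A @ x
            g = g - (g * Ax - y) * Ax / (np.max(Ax ** 2) + 1e-12)
        print("relative image error:",
              round(np.linalg.norm(x - x_true) / np.linalg.norm(x_true), 3))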

  8. Dutch X-band SLAR calibration

    NASA Technical Reports Server (NTRS)

    Groot, J. S.

    1990-01-01

    In August 1989 the NASA/JPL airborne P/L/C-band DC-8 SAR participated in several remote sensing campaigns in Europe. Amongst other test sites, data were obtained over the Flevopolder test site in the Netherlands on 16 August. The Dutch X-band SLAR was flown on the same date and imaged parts of the same area as the SAR. To calibrate the two imaging radars, a set of 33 calibration devices was deployed; 16 trihedrals were used to calibrate a part of the SLAR data. This short paper outlines the X-band SLAR characteristics, the experimental set-up and the calibration method used to calibrate the SLAR data. Finally, some preliminary results are given.

  9. Automatic and robust extrinsic camera calibration for high-accuracy mobile mapping

    NASA Astrophysics Data System (ADS)

    Goeman, Werner; Douterloigne, Koen; Bogaert, Peter; Pires, Rui; Gautama, Sidharta

    2012-10-01

    A mobile mapping system (MMS) is the geoinformation community's answer to the exponentially growing demand for varied geospatial data, captured by multiple sensors at increasingly higher accuracies. As mobile mapping technology is pushed to explore its use for various applications on water, rail, or road, the need emerges for an external sensor calibration procedure which is portable, fast and easy to perform. This way, sensors can be mounted and demounted depending on the application requirements without the need for time-consuming calibration procedures. A new methodology is presented to provide a high-quality external calibration of cameras which is automatic, robust and foolproof. The MMS uses an Applanix POSLV420, which is a tightly coupled GPS/INS positioning system. The cameras used are Point Grey color video cameras synchronized with the GPS/INS system. The method uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. Here, a mutual-information-based image registration technique is studied for automatic alignment of the ranging pole. Finally, a few benchmarking tests performed under various lighting conditions prove the methodology's robustness, showing high absolute stereo measurement accuracies of a few centimeters.
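
    A minimal Python sketch of the histogram-based mutual-information similarity measure on which such registration schemes rely; registration would maximise this quantity over candidate alignments. This is a generic implementation, not the authors' pipeline.

        import numpy as np

        def mutual_information(img_a, img_b, bins=32):
            joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

        # Two synthetic, partially correlated "images"; registration would
        # maximise the measure over shifts/rotations of one relative to the other.
        a = np.random.default_rng(4).random((64, 64))
        b = np.clip(a + 0.05 * np.random.default_rng(5).normal(size=a.shape), 0, 1)
        print(mutual_information(a, b))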

  10. Innovative anisotropic phantoms for calibration of diffusion tensor imaging sequences.

    PubMed

    Kłodowski, Krzysztof; Krzyżak, Artur Tadeusz

    2016-05-01

    The paper describes a novel type of anisotropic phantom designed for b-matrix spatial distribution diffusion tensor imaging (BSD-DTI). A cubic plate anisotropic phantom, a cylinder capillary phantom and a water reference phantom are described as the complete set necessary for calibration, validation and normalization of BSD-DTI. An innovative phantom design, based on enclosing the anisotropic cores in liquid-filled glass balls, made BSD calibration possible for the first time with an echo planar imaging (EPI) sequence. Susceptibility artifacts, prone to occur in EPI sequences, were visibly reduced in the central region of the phantoms. The phantoms were designed for use in a clinical scanner's head coil, but can be scaled for other coil or scanner types. The phantoms can also be used for pre-calibration of imaging of other phantom types having more specific applications. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Parallel computing for automated model calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.

    2002-07-29

    Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, the magnitude and timing of the stream flow peak). An automated calibration process that allows real-time updating of data and models is needed, allowing scientists to focus their effort on improving the models. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data and output only a small amount of statistical information for each calibration run. A typical auto-calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes, and the process was run on a single computer using a simple iterative process. We have completed two auto-calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed-computing, cross-platform environment. They allow incorporation of 'smart' calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
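
    A minimal Python sketch of farming out many independent calibration runs in parallel, in the spirit described above (each run needs only a small parameter set and returns a summary statistic). The placeholder model and parameter ranges are assumptions, not the natural-resources models in question.

        import numpy as np
        from multiprocessing import Pool

        def run_model(params):
            """Placeholder model run: return the parameters and a summary error statistic."""
            a, b = params
            simulated = a * np.arange(100) + b
            observed = 2.0 * np.arange(100) + 5.0
            return params, float(np.sqrt(np.mean((simulated - observed) ** 2)))

        if __name__ == "__main__":
            rng = np.random.default_rng(6)
            trial_params = [(rng.uniform(0, 4), rng.uniform(0, 10)) for _ in range(10_000)]
            with Pool() as pool:   # distribute the 10,000 runs across local workers
                results = pool.map(run_model, trial_params, chunksize=100)
            best_params, best_rmse = min(results, key=lambda r: r[1])
            print("best parameters:", best_params, "RMSE:", best_rmse)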

  12. Time dependent calibration of a sediment extraction scheme.

    PubMed

    Roychoudhury, Alakendra N

    2006-04-01

    Sediment extraction methods to quantify metal concentration in aquatic sediments usually present limitations in accuracy and reproducibility because metal concentration in the supernatant is controlled to a large extent by the physico-chemical properties of the sediment that result in a complex interplay between the solid and the solution phase. It is suggested here that standardization of sediment extraction methods using pure mineral phases or reference material is futile and instead the extraction processes should be calibrated using site-specific sediments before their application. For calibration, time dependent release of metals should be observed for each leachate to ascertain the appropriate time for a given extraction step. Although such an approach is tedious and time consuming, using iron extraction as an example, it is shown here that apart from quantitative data such an approach provides additional information on factors that play an intricate role in metal dynamics in the environment. Single step ascorbate, HCl, oxalate and dithionite extractions were used for targeting specific iron phases from saltmarsh sediments and their response was observed over time in order to calibrate the extraction times for each extractant later to be used in a sequential extraction. For surficial sediments, an extraction time of 24 h, 1 h, 2 h and 3 h was ascertained for ascorbate, HCl, oxalate and dithionite extractions, respectively. Fluctuations in iron concentration in the supernatant over time were ubiquitous. The adsorption-desorption behavior is possibly controlled by the sediment organic matter, formation or consumption of active exchange sites during extraction and the crystallinity of iron mineral phase present in the sediments.

  13. Computerized tomography calibrator

    NASA Technical Reports Server (NTRS)

    Engel, Herbert P. (Inventor)

    1991-01-01

    A set of interchangeable pieces comprising a computerized tomography calibrator, and a method of use thereof, permits focusing of a computerized tomographic (CT) system. The interchangeable pieces include a plurality of nestable, generally planar mother rings, adapted for the receipt of planar inserts of predetermined sizes, and of predetermined material densities. The inserts further define openings therein for receipt of plural sub-inserts. All pieces are of known sizes and densities, permitting the assembling of different configurations of materials of known sizes and combinations of densities, for calibration (i.e., focusing) of a computerized tomographic system through variation of operating variables thereof. Rather than serving as a phantom, which is intended to be representative of a particular workpiece to be tested, the set of interchangeable pieces permits simple and easy standardized calibration of a CT system. The calibrator and its related method of use further includes use of air or of particular fluids for filling various openings, as part of a selected configuration of the set of pieces.

  14. Thermocouple Calibration and Accuracy in a Materials Testing Laboratory

    NASA Technical Reports Server (NTRS)

    Lerch, B. A.; Nathal, M. V.; Keller, D. J.

    2002-01-01

    A consolidation of information has been provided that can be used to define procedures for enhancing and maintaining accuracy in temperature measurements in materials testing laboratories. These studies were restricted to type R and K thermocouples (TCs) tested in air. Thermocouple accuracies, as influenced by calibration methods, thermocouple stability, and manufacturers' tolerances, were all quantified in terms of statistical confidence intervals. By calibrating specific TCs, the benefits in accuracy can be as great as 6 °C, or five times better than relying on manufacturers' tolerances. The results emphasize strict reliance on the defined testing protocol and on the need to establish recalibration frequencies in order to maintain these levels of accuracy.

  15. Simultaneous estimation of diet composition and calibration coefficients with fatty acid signature data

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.; Budge, Suzanne M.; Thiemann, Gregory W.; Rode, Karyn D.

    2017-01-01

    Knowledge of animal diets provides essential insights into their life history and ecology, although diet estimation is challenging and remains an active area of research. Quantitative fatty acid signature analysis (QFASA) has become a popular method of estimating diet composition, especially for marine species. A primary assumption of QFASA is that constants called calibration coefficients, which account for the differential metabolism of individual fatty acids, are known. In practice, however, calibration coefficients are not known, but rather have been estimated in feeding trials with captive animals of a limited number of model species. The impossibility of verifying the accuracy of feeding trial derived calibration coefficients to estimate the diets of wild animals is a foundational problem with QFASA that has generated considerable criticism. We present a new model that allows simultaneous estimation of diet composition and calibration coefficients based only on fatty acid signature samples from wild predators and potential prey. Our model performed almost flawlessly in four tests with constructed examples, estimating both diet proportions and calibration coefficients with essentially no error. We also applied the model to data from Chukchi Sea polar bears, obtaining diet estimates that were more diverse than estimates conditioned on feeding trial calibration coefficients. Our model avoids bias in diet estimates caused by conditioning on inaccurate calibration coefficients, invalidates the primary criticism of QFASA, eliminates the need to conduct feeding trials solely for diet estimation, and consequently expands the utility of fatty acid data to investigate aspects of ecology linked to animal diets.
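
    A minimal Python sketch of the QFASA-style mixing model referred to above: a predator signature is modelled as a calibration-adjusted mixture of prey signatures, and diet proportions are estimated by least squares on the simplex. For brevity the calibration coefficients are held fixed here, whereas the model described above estimates them jointly with the diet; all data are synthetic.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(7)
        n_fa, n_prey = 12, 3
        prey = rng.dirichlet(np.ones(n_fa), size=n_prey)   # prey signatures (rows sum to 1)
        calib = rng.uniform(0.7, 1.4, n_fa)                # assumed calibration coefficients
        true_diet = np.array([0.6, 0.3, 0.1])

        def predicted_signature(diet):
            mix = calib * (diet @ prey)                    # apply coefficients to the mixture
            return mix / mix.sum()

        predator = predicted_signature(true_diet)          # synthetic predator signature

        def loss(z):                                       # softmax keeps the diet on the simplex
            diet = np.exp(z) / np.exp(z).sum()
            return np.sum((predicted_signature(diet) - predator) ** 2)

        res = minimize(loss, np.zeros(n_prey), method="Nelder-Mead")
        est = np.exp(res.x) / np.exp(res.x).sum()
        print("estimated diet proportions:", np.round(est, 3))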

  16. Calibration Software for Use with Jurassicprok

    NASA Technical Reports Server (NTRS)

    Chapin, Elaine; Hensley, Scott; Siqueira, Paul

    2004-01-01

    The Jurassicprok Interferometric Calibration Software (also called "Calibration Processor" or simply "CP") estimates the calibration parameters of an airborne synthetic-aperture-radar (SAR) system, the raw measurement data of which are processed by the Jurassicprok software described in the preceding article. Calibration parameters estimated by CP include time delays, baseline offsets, phase screens, and radiometric offsets. CP examines raw radar-pulse data, single-look complex image data, and digital elevation map data. For each type of data, CP compares the actual values with values expected on the basis of ground-truth data. CP then converts the differences between the actual and expected values into updates for the calibration parameters in an interferometric calibration file (ICF) and a radiometric calibration file (RCF) for the particular SAR system. The updated ICF and RCF are used as inputs to both Jurassicprok and to the companion Motion Measurement Processor software (described in the following article) for use in generating calibrated digital elevation maps.

  17. Theoretical foundation, methods, and criteria for calibrating human vibration models using frequency response functions

    PubMed Central

    Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.

    2015-01-01

    While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726

  18. Ability of calibration phantom to reduce the interscan variability in electron beam computed tomography.

    PubMed

    Budoff, Matthew J; Mao, Songshou; Lu, Bin; Takasu, Junichiro; Child, Janis; Carson, Sivi; Fisher, Hans

    2002-01-01

    To test the hypothesis that a calibration phantom would improve interpatient and interscan variability in coronary artery calcium (CAC) studies. We scanned 144 patients twice with or without the calibration phantom and then scanned 93 patients with a single calcific lesion twice and, finally, scanned a cork heart with calcific foci. There were no linear correlations in computed tomography Hounsfield unit (CT HU) and CT HU interscan variation between blood pool and phantom plugs at any slice level in patient groups (p > 0.05). The CT HU interscan variation in phantom plugs (2.11 HU) was less than that of the blood pool (3.47 HU; p < 0.05) and CAC lesion (20.39; p < 0.001). Comparing images with and without a calibration phantom, there was a significant decrease in CT HU as well as an increase in noise and peak values in patient studies and the cork phantom study. The CT HU attenuation variations of the interpatient and interscan blood pool, calibration phantom plug, and cork coronary arteries were not parallel. Therefore, the ability to adjust the CT HU variation of calcific lesions by a calibration phantom is problematic and may worsen the problem.

  19. NASA Glenn Icing Research Tunnel: 2014 and 2015 Cloud Calibration Procedures and Results

    NASA Technical Reports Server (NTRS)

    Steen, Laura E.; Ide, Robert F.; Van Zante, Judith F.; Acosta, Waldo J.

    2015-01-01

    This report summarizes the current status of the NASA Glenn Research Center (GRC) Icing Research Tunnel cloud calibration: specifically, the cloud uniformity, liquid water content, and drop-size calibration results from both the January-February 2014 full cloud calibration and the January 2015 interim cloud calibration. Some aspects of the cloud have remained the same as what was reported for the 2014 full calibration, including the cloud uniformity from the Standard nozzles, the drop-size equations for Standard and Mod1 nozzles, and the liquid water content for large-drop conditions. Overall, the tests performed in January 2015 showed good repeatability to 2014, but there is new information to report as well. There have been minor updates to the Mod1 cloud uniformity on the north side of the test section. Also, successful testing with the OAP-230Y has allowed the IRT to re-expand its operating envelopes for large-drop conditions to a maximum median volumetric diameter of 270 microns. Lastly, improvements to the collection-efficiency correction for the SEA multi-wire have resulted in new calibration equations for Standard- and Mod1-nozzle liquid water content.

  20. Global Space-Based Inter-Calibration System Reflective Solar Calibration Reference: From Aqua MODIS to S-NPP VIIRS

    NASA Technical Reports Server (NTRS)

    Xiong, Xiaoxiong; Angal, Amit; Butler, James; Cao, Changyong; Doelling, Daivd; Wu, Aisheng; Wu, Xiangqian

    2016-01-01

    MODIS has successfully operated on board NASA's EOS Terra and Aqua spacecraft for more than 16 and 14 years, respectively. The MODIS instrument was designed with stringent calibration requirements and a comprehensive on-board calibration capability. In the reflective solar spectral region, Aqua MODIS has performed better than Terra MODIS and, therefore, has been chosen by the Global Space-based Inter-Calibration System (GSICS) operational community as the calibration reference sensor for cross-sensor calibration and calibration inter-comparisons. For the same reason, it has also been used by a number of earth observing sensors as their calibration reference. Considering that Aqua MODIS has already operated for nearly 14 years, it is essential to transfer its calibration to a follow-on reference sensor with a similar calibration capability and stable performance. VIIRS is a follow-on instrument to MODIS and has many similar design features, including its on-board calibrators (OBC). As a result, VIIRS is an ideal candidate to replace MODIS as the future GSICS reference sensor. Since launch, the S-NPP VIIRS has operated for more than 4 years and its overall performance has been extensively characterized and demonstrated to meet its design requirements. This paper provides an overview of the Aqua MODIS and S-NPP VIIRS reflective solar bands (RSB) calibration methodologies and strategies, their traceability, and their on-orbit performance. It describes and illustrates different methods and approaches that can be used to facilitate the calibration reference transfer, including the use of desert and Antarctic sites, deep convective clouds (DCC), and lunar observations.

  1. MODIS Instrument Operation and Calibration Improvements

    NASA Technical Reports Server (NTRS)

    Xiong, X.; Angal, A.; Madhavan, S.; Link, D.; Geng, X.; Wenny, B.; Wu, A.; Chen, H.; Salomonson, V.

    2014-01-01

    Terra and Aqua MODIS have successfully operated for over 14 and 12 years since their respective launches in 1999 and 2002. The MODIS on-orbit calibration is performed using a set of on-board calibrators, which include a solar diffuser for calibrating the reflective solar bands (RSB) and a blackbody for the thermal emissive bands (TEB). On-orbit changes in the sensor responses as well as key performance parameters are monitored using the measurements of these on-board calibrators. This paper provides an overview of MODIS on-orbit operation and calibration activities, and instrument long-term performance. It presents a brief summary of the calibration enhancements made in the latest MODIS data collection 6 (C6). Future improvements in the MODIS calibration and their potential applications to the S-NPP VIIRS are also discussed.

  2. Software For Calibration Of Polarimetric SAR Data

    NASA Technical Reports Server (NTRS)

    Van Zyl, Jakob; Zebker, Howard; Freeman, Anthony; Holt, John; Dubois, Pascale; Chapman, Bruce

    1994-01-01

    POLCAL (Polarimetric Radar Calibration) is a software tool intended to assist in the calibration of synthetic-aperture radar (SAR) systems. In particular, it calibrates Stokes-matrix-format data produced as a standard product by the NASA/Jet Propulsion Laboratory (JPL) airborne imaging synthetic aperture radar (AIRSAR). Version 4.0 of POLCAL is an upgrade of version 2.0. New options include automatic absolute calibration of 89/90 data, distributed-target analysis, calibration of nearby scenes with corner reflectors, altitude or roll-angle corrections, and calibration of errors introduced by known topography. It reduces crosstalk and corrects phase calibration without the use of ground calibration equipment. Written in FORTRAN 77.

  3. TIME CALIBRATED OSCILLOSCOPE SWEEP CIRCUIT

    DOEpatents

    Smith, V.L.; Carstensen, H.K.

    1959-11-24

    An improved time calibrated sweep circuit is presented, which extends the range of usefulness of conventional oscilloscopes as utilized for time calibrated display applications in accordance with U. S. Patent No. 2,832,002. Principal novelty resides in the provision of a pair of separate signal paths, each of which is phase and amplitude adjustable, to connect a high-frequency calibration oscillator to the output of a sawtooth generator also connected to the respective horizontal deflection plates of an oscilloscope cathode ray tube. The amplitude and phase of the calibration oscillator signals in the two signal paths are adjusted to balance out feedthrough currents capacitively coupled at high frequencies of the calibration oscillator from each horizontal deflection plate to the vertical plates of the cathode ray tube.

  4. Single Vector Calibration System for Multi-Axis Load Cells and Method for Calibrating a Multi-Axis Load Cell

    NASA Technical Reports Server (NTRS)

    Parker, Peter A. (Inventor)

    2003-01-01

    A single vector calibration system is provided which facilitates the calibration of multi-axis load cells, including wind tunnel force balances. The single vector system provides the capability to calibrate a multi-axis load cell using a single directional load, for example loading solely in the gravitational direction. The system manipulates the load cell in three-dimensional space, while keeping the uni-directional calibration load aligned. The use of a single vector calibration load reduces the set-up time for the multi-axis load combinations needed to generate a complete calibration mathematical model. The system also reduces load application inaccuracies caused by the conventional requirement to generate multiple force vectors. The simplicity of the system reduces calibration time and cost, while simultaneously increasing calibration accuracy.

  5. Deformation Monitoring of the Submillimetric UPV Calibration Baseline

    NASA Astrophysics Data System (ADS)

    García-Asenjo, Luis; Baselga, Sergio; Garrigues, Pascual

    2017-06-01

    A 330 m calibration baseline was established at the Universitat Politècnica de València (UPV) in 2007. Absolute scale was subsequently transferred in 2012 from the Nummela Standard Baseline in Finland, and distances between pillars were determined with uncertainties ranging from 0.1 mm to 0.3 mm. In order to assess the long-term stability of the baseline, three field campaigns were carried out from 2013 to 2015 in a co-operative effort with the Universidad Complutense de Madrid (UCM), which provided the only Mekometer ME5000 distance meter available in Spain. Since application of the full ISO 17123-4 procedure did not suffice to reach a definite conclusion about possible displacements of the pillars, we opted for the traditional geodetic network approach. This approach had to be adapted to the case at hand in order to deal with problems such as the geometric weakness inherent to calibration baselines and the scale uncertainty derived from both the use of different instruments and the high correlation between the meteorological correction and the scale determination. Additionally, the so-called maximum number of stable points method was also tested. This contribution describes the process followed to assess the stability of the UPV submillimetric calibration baseline over the period from 2012 to 2015.

  6. Approaches on calibration of bolometer and establishment of bolometer calibration device

    NASA Astrophysics Data System (ADS)

    Xia, Ming; Gao, Jianqiang; Ye, Jun'an; Xia, Junwen; Yin, Dejin; Li, Tiecheng; Zhang, Dong

    2015-10-01

    Bolometers are mainly used for measuring thermal radiation in public places, labor hygiene, heating and ventilation, and building energy conservation. The working principle of a bolometer is that, under exposure to thermal radiation, the temperature of the black absorbing layer of the detector rises as the radiation is absorbed, generating a thermoelectric electromotive force. The white reflective layer of the detector does not absorb thermal radiation, so its thermoelectric electromotive force is almost zero. Comparing the electromotive forces of the black absorbing layer and the white reflective layer eliminates the influence of the potential produced by changes in the substrate background temperature. After the electromotive force produced by the thermal radiation is processed by the signal processing unit, the reading is shown on the indication display unit. Thermal radiation intensity is usually expressed in W/m2 or kW/m2, and its accurate and reliable measurement is important for high-temperature operations and for the graded management of labor safety and hygiene. The bolometer calibration device is mainly composed of an absolute radiometer, a reference light source and electrical measuring instruments. The absolute radiometer is a self-calibrating radiometer; its working principle is to substitute electrical power, which can be measured accurately, for radiant power, so that the radiant power is measured absolutely. The absolute radiometer is the standard apparatus of the laser low-power standard device, so measurement traceability is guaranteed. Using a comparison calibration method, the absolute radiometer and the bolometer alternately measure the reference light source at the same position, which yields the correction factor for the irradiance indication. This paper mainly describes the design and calibration method of the bolometer calibration device. The uncertainty of the calibration result is also evaluated.

  7. LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter

    PubMed Central

    Liu, Wanli

    2017-01-01

    The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for its applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on iterative closest point (ICP) and iterated sigma point Kalman filter (ISPKF), which combines the advantages of ICP and ISPKF. The ICP algorithm can precisely determine the unknown transformation between LiDAR-IMU; and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First of all, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate the time delay error can be accurately calibrated. PMID:28282897
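
    A minimal Python sketch of one standard flavour of the ICP step referred to above: repeatedly pair each point with its nearest neighbour in the reference cloud and solve for the rigid transform in closed form (Kabsch/SVD). The point clouds are synthetic, and the ISPKF time-delay filtering part is not shown.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp(source, target, iters=20):
            """Estimate R, t such that R @ p + t maps source points onto target."""
            R, t = np.eye(3), np.zeros(3)
            tree = cKDTree(target)
            src = source.copy()
            for _ in range(iters):
                _, idx = tree.query(src)                       # nearest-neighbour pairing
                matched = target[idx]
                mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
                U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
                R_step = Vt.T @ U.T
                if np.linalg.det(R_step) < 0:                  # guard against reflections
                    Vt[-1] *= -1.0
                    R_step = Vt.T @ U.T
                t_step = mu_m - R_step @ mu_s
                src = src @ R_step.T + t_step
                R, t = R_step @ R, R_step @ t + t_step
            return R, t

        rng = np.random.default_rng(8)
        target = rng.random((500, 3))
        th = 0.05                                              # ~2.9 degree rotation
        R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                           [np.sin(th),  np.cos(th), 0.0],
                           [0.0,         0.0,        1.0]])
        source = (target - 0.01) @ R_true                      # rotated/shifted copy
        R_est, t_est = icp(source, target)
        print(np.round(R_est, 3), np.round(t_est, 3))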

  8. LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter.

    PubMed

    Liu, Wanli

    2017-03-08

    The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for its applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on iterative closest point (ICP) and iterated sigma point Kalman filter (ISPKF), which combines the advantages of ICP and ISPKF. The ICP algorithm can precisely determine the unknown transformation between LiDAR-IMU; and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First of all, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate the time delay error can be accurately calibrated.

  9. When to Make Mountains out of Molehills: The Pros and Cons of Simple and Complex Model Calibration Procedures

    NASA Astrophysics Data System (ADS)

    Smith, K. A.; Barker, L. J.; Harrigan, S.; Prudhomme, C.; Hannaford, J.; Tanguy, M.; Parry, S.

    2017-12-01

    Earth and environmental models are relied upon to investigate system responses that cannot otherwise be examined. In simulating physical processes, models have adjustable parameters which may, or may not, have a physical meaning. Determining the values to assign to these model parameters is an enduring challenge for earth and environmental modellers. Selecting different error metrics by which the model results are compared to observations will lead to different sets of calibrated model parameters, and thus different model results. Furthermore, models may exhibit 'equifinal' behaviour, where multiple combinations of model parameters lead to equally acceptable model performance against observations. These decisions in model calibration introduce uncertainty that must be considered when model results are used to inform environmental decision-making. This presentation focuses on the uncertainties that derive from the calibration of a four-parameter lumped catchment hydrological model (GR4J). The GR models contain an inbuilt automatic calibration algorithm that can satisfactorily calibrate against four error metrics in only a few seconds. However, a single, deterministic model result does not provide information on parameter uncertainty. Furthermore, a modeller interested in extreme events, such as droughts, may wish to calibrate against more low-flow-specific error metrics. In a comprehensive assessment, the GR4J model has been run with 500,000 Latin hypercube sampled parameter sets across 303 catchments in the United Kingdom. These parameter sets have been assessed against six error metrics, including two drought-specific metrics. This presentation compares the two approaches, and demonstrates that the inbuilt automatic calibration can outperform the Latin hypercube experiment approach in single-metric assessed performance. However, it is also shown that there are many merits of the more comprehensive assessment, which allows for probabilistic model results, multi
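
    A minimal Python sketch of this type of experiment: Latin hypercube sampling of a four-parameter model and scoring every parameter set against two error metrics, one of which weights low flows more heavily. The "model" is a trivial placeholder, not GR4J, and the observations are synthetic.

        import numpy as np
        from scipy.stats import qmc

        rng = np.random.default_rng(9)
        obs = rng.gamma(2.0, 1.0, 365)                         # stand-in "observed" daily flows

        def toy_model(params, n=365):
            a, b, c, d = params
            t = np.arange(n)
            return a + b * np.sin(2 * np.pi * t / 365.0 + c) ** 2 + d

        def nse(sim, obs):                                     # Nash-Sutcliffe efficiency
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        sampler = qmc.LatinHypercube(d=4, seed=0)
        lower = np.array([0.1, 0.0, 0.0, 0.0])
        upper = np.array([5.0, 5.0, 2.0 * np.pi, 1.0])
        params = qmc.scale(sampler.random(5000), lower, upper)

        # Score every sampled parameter set against plain NSE and an NSE on
        # log-transformed flows, which emphasises low-flow performance.
        scores = np.array([[nse(toy_model(p), obs),
                            nse(np.log1p(toy_model(p)), np.log1p(obs))]
                           for p in params])
        best = params[scores[:, 0].argmax()]
        print("best NSE:", round(scores[:, 0].max(), 3), "at parameters", np.round(best, 2))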

  10. The influence of the spectral emissivity of flat-plate calibrators on the calibration of IR thermometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cárdenas-García, D.; Méndez-Lango, E.

    Flat calibrators (FCs) are an option for the calibration of infrared thermometers (ITs) with a fixed large target. FCs are neither blackbodies nor gray-bodies; their spectral emissivity is lower than one and depends on wavelength. Nevertheless, they are used as gray-bodies with a nominal emissivity value. FCs can be calibrated radiometrically using as reference a calibrated IR thermometer (RT). If an FC will be used to calibrate ITs that work in the same spectral range as the RT, then its calibration is straightforward: the actual FC spectral emissivity is not required. This result is valid for any fixed emissivity assigned to the FC. On the other hand, when the RT working spectral range does not match that of the ITs to be calibrated with the FC, then the FC spectral emissivity must be known as part of the calibration process. For this purpose, at CENAM, we developed an experimental setup to measure spectral emissivity in the infrared spectral range, based on a Fourier transform infrared spectrometer. Not all laboratories have emissivity measurement capability in the appropriate wavelength and temperature ranges to obtain the spectral emissivity. Thus, we present an estimation of the error introduced when the spectral range of the RT used to calibrate an FC and the spectral ranges of the ITs to be calibrated with the FC do not match. Some examples are developed for the cases when the RT and IT spectral ranges are [8,13] μm and [8,14] μm, respectively.

  11. Self-Calibration of CMB Polarimeters

    NASA Astrophysics Data System (ADS)

    Keating, Brian

    2013-01-01

    Precision measurements of the polarization of the cosmic microwave background (CMB) radiation, especially experiments seeking to detect the odd-parity "B-modes", have far-reaching implications for cosmology. To detect the B-modes generated during inflation the flux response and polarization angle of these experiments must be calibrated to exquisite precision. While suitable flux calibration sources abound, polarization angle calibrators are deficient in many respects. Man-made polarized sources are often not located in the antenna's far-field, have spectral properties that are radically different from the CMB's, are cumbersome to implement and may be inherently unstable over the (long) duration these searches require to detect the faint signature of the inflationary epoch. Astrophysical sources suffer from time, frequency and spatial variability, are not visible from all CMB observatories, and none are understood with sufficient accuracy to calibrate future CMB polarimeters seeking to probe inflationary energy scales of ~1000 TeV. CMB TB and EB modes, expected to identically vanish in the standard cosmological model, can be used to calibrate CMB polarimeters. By enforcing the observed EB and TB power spectra to be consistent with zero, CMB polarimeters can be calibrated to levels not possible with man-made or astrophysical sources. All of this can be accomplished without any loss of observing time using a calibration source which is spectrally identical to the CMB B-modes. The calibration procedure outlined here can be used for any CMB polarimeter.
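
    A minimal Python sketch of the idea behind EB-nulling: a global polarization-angle error dpsi mixes E and B, producing an observed EB spectrum proportional to 0.5*sin(4*dpsi)*(C_EE - C_BB), so the angle can be recovered by fitting the observed EB spectrum to that template and then removed. The spectra below are synthetic placeholders, not real CMB data.

        import numpy as np

        rng = np.random.default_rng(10)
        ell = np.arange(2, 1500)
        c_ee = 4000.0 / (ell + 10.0) ** 1.5                    # toy EE spectrum
        c_bb = 0.01 * c_ee                                     # toy (much smaller) BB spectrum
        dpsi_true = np.radians(0.5)                            # 0.5 deg angle miscalibration

        template = 0.5 * (c_ee - c_bb)                         # expected EB per unit sin(4*dpsi)
        c_eb_obs = np.sin(4.0 * dpsi_true) * template + rng.normal(0.0, 0.02 * template)

        # Least-squares amplitude of the template, then invert sin(4*dpsi).
        amp = np.sum(c_eb_obs * template) / np.sum(template ** 2)
        dpsi_est = 0.25 * np.arcsin(np.clip(amp, -1.0, 1.0))
        print(f"recovered angle: {np.degrees(dpsi_est):.3f} deg")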

  12. Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Bekele, E. G.; Nicklow, J. W.

    2005-12-01

    Hydrologic simulation models need to be calibrated and validated before they are used for operational predictions. Spatially distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a very tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, which includes knowledge of the basic approaches and interactions in the model. In order to alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as the particle swarm optimizer (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm inspired by the social behavior of bird flocking and fish schooling. The newly developed calibration model is integrated with the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically based, semi-distributed hydrologic model that was developed to predict the long-term impacts of land management practices on water, sediment and agricultural chemical yields in large complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. In order to reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
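
    A minimal, generic particle swarm optimizer in Python, applied to a placeholder objective standing in for a model-error metric; the inertia and acceleration constants are common textbook choices, and nothing here is specific to SWAT.

        import numpy as np

        def pso(objective, lower, upper, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            rng = np.random.default_rng(11)
            dim = len(lower)
            x = rng.uniform(lower, upper, (n_particles, dim))   # particle positions
            v = np.zeros_like(x)                                # particle velocities
            pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
            gbest = pbest[pbest_val.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lower, upper)
                vals = np.array([objective(p) for p in x])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                gbest = pbest[pbest_val.argmin()].copy()
            return gbest, pbest_val.min()

        # Placeholder objective standing in for a simulation-error metric.
        sphere = lambda p: float(np.sum((p - 1.5) ** 2))
        print(pso(sphere, lower=np.array([0.0, 0.0, 0.0]), upper=np.array([5.0, 5.0, 5.0])))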

  13. Development of landsat-5 thematic mapper internal calibrator gain and offset table

    USGS Publications Warehouse

    Barsi, J.A.; Chander, G.; Micijevic, E.; Markham, B.L.; Haque, Md. O.

    2008-01-01

    The National Landsat Archive Production System (NLAPS) has been the primary processing system for Landsat data since the U.S. Geological Survey (USGS) Earth Resources Observation and Science Center (EROS) started archiving Landsat data. NLAPS converts raw satellite data into radiometrically and geometrically calibrated products. NLAPS has historically used the Internal Calibrator (IC) to calibrate the reflective bands of the Landsat-5 Thematic Mapper (TM), even though the lamps in the IC were less stable than the TM detectors, as evidenced by vicarious calibration results. In 2003, a major effort was made to model the actual TM gain change and to update NLAPS to use this model rather than the unstable IC data for radiometric calibration. The model coefficients were revised in 2007 to reflect greater understanding of the changes in the TM responsivity. While the calibration updates are important to users with recently processed data, the processing system no longer calculates the original IC gain or offset. For specific applications, it is useful to have a record of the gain and offset actually applied to the older data. Thus, the NLAPS calibration database was used to generate estimated daily values for the radiometric gain and offset that might have been applied to TM data. This paper discusses the need for and generation of the NLAPS IC gain and offset tables. A companion paper covers the application of and errors associated with using these tables.

  14. Base flow calibration in a global hydrological model

    NASA Astrophysics Data System (ADS)

    van Beek, L. P.; Bierkens, M. F.

    2006-12-01

    Base flow constitutes an important water resource in many parts of the world. Its provenance and yield over time are governed by the storage capacity of local aquifers and the internal drainage paths, which are difficult to capture at the global scale. To represent the spatial and temporal variability in base flow adequately in a distributed global model at 0.5 degree resolution, we resorted to the conceptual model of aquifer storage of Kraaijenhoff van de Leur (1958), which yields the reservoir coefficient for a linear groundwater store. This model was parameterised using global information on drainage density, climatology and lithology. Initial estimates of aquifer thickness, permeability and specific porosity from the literature were linked to the latter two categories and calibrated to low-flow data by means of simulated annealing so as to conserve the ordinal information contained in them. The observations used stem from the RivDis dataset of monthly discharge. From this dataset, 324 stations were selected with at least 10 years of observations in the period 1958-1991 and an areal coverage of at least 10 cells of 0.5 degree. The dataset was split between basins into a calibration and a validation set whilst preserving a representative distribution of lithology types and climate zones. Optimisation involved minimising the absolute differences between the simulated base flow and the lowest 10% of the observed monthly discharge. Subsequently, the reliability of the calibrated parameters was tested by reversing the calibration and validation sets.
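
    A minimal Python sketch of the linear groundwater store underlying this approach: recharge fills a storage S and base flow drains it as Q = S/J, where J is the reservoir coefficient to be calibrated. The recharge series and coefficient values are illustrative assumptions.

        import numpy as np

        def linear_reservoir(recharge, J, s0=0.0, dt=1.0):
            """Daily base flow from a linear store with reservoir coefficient J (days)."""
            s, q = s0, np.empty_like(recharge)
            for i, r in enumerate(recharge):
                s += r * dt                  # recharge fills the store
                q[i] = s / J                 # linear outflow
                s -= q[i] * dt               # drain the store
            return q

        rng = np.random.default_rng(12)
        recharge = np.maximum(rng.normal(1.0, 1.5, 365), 0.0)   # mm/day, toy series
        baseflow_fast = linear_reservoir(recharge, J=10.0)      # flashy aquifer
        baseflow_slow = linear_reservoir(recharge, J=60.0)      # slow, damped aquifer
        print(baseflow_fast[:5].round(2), baseflow_slow[:5].round(2))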

  15. Calibration of water-velocity meters

    USGS Publications Warehouse

    Kaehrle, William R.; Bowie, James E.

    1988-01-01

    The U.S. Geological Survey, Department of the Interior, as part of its responsibility to appraise the quantity of water resources in the United States, maintains facilities for the calibration of water-velocity meters at the Gulf Coast Hydroscience Center's Hydraulic Laboratory Facility, NSTL, Mississippi. These meters are used in hydrologic studies by the Geological Survey, U.S. Army Corps of Engineers, U.S. Department of Energy, state agencies, universities, and others in the public and private sector. This paper describes calibration facilities, types of water-velocity meters calibrated, and calibration standards, methods and results.

  16. (Mis)use of (133)Ba as a calibration surrogate for (131)I in clinical activity calibrators.

    PubMed

    Zimmerman, B E; Bergeron, D E

    2016-03-01

    Using NIST-calibrated solutions of (133)Ba and (131)I in the 5 mL NIST ampoule geometry, measurements were made in three NIST-maintained Capintec activity calibrators and the NIST Vinten 671 ionization chamber to evaluate the suitability of using (133)Ba as a calibration surrogate for (131)I. For the Capintec calibrators, the (133)Ba response was a factor of about 300% higher than that of the same amount of (131)I. For the Vinten 671, the (133)Ba response was about 7% higher than that of (131)I. These results demonstrate that (133)Ba is a poor surrogate for (131)I. New calibration factors for these radionuclides in the ampoule geometry for the Vinten 671 and Capintec activity calibrators were also determined. Published by Elsevier Ltd.

  17. Absolute calibration for complex-geometry biomedical diffuse optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Mastanduno, Michael A.; Jiang, Shudong; El-Ghussein, Fadi; diFlorio-Alexander, Roberta; Pogue, Brian W.; Paulsen, Keith D.

    2013-03-01

    We have presented methodology to calibrate data in NIRS/MRI imaging versus an absolute reference phantom and results in both phantoms and healthy volunteers. This method directly calibrates data to a diffusion-based model, takes advantage of patient specific geometry from MRI prior information, and generates an initial guess without the need for a large data set. This method of calibration allows for more accurate quantification of total hemoglobin, oxygen saturation, water content, scattering, and lipid concentration as compared with other, slope-based methods. We found the main source of error in the method to be derived from incorrect assignment of reference phantom optical properties rather than initial guess in reconstruction. We also present examples of phantom and breast images from a combined frequency domain and continuous wave MRI-coupled NIRS system. We were able to recover phantom data within 10% of expected contrast and within 10% of the actual value using this method and compare these results with slope-based calibration methods. Finally, we were able to use this technique to calibrate and reconstruct images from healthy volunteers. Representative images are shown and discussion is provided for comparison with existing literature. These methods work towards fully combining the synergistic attributes of MRI and NIRS for in-vivo imaging of breast cancer. Complete software and hardware integration in dual modality instruments is especially important due to the complexity of the technology and success will contribute to complex anatomical and molecular prognostic information that can be readily obtained in clinical use.

  18. LaCl3:Ce Coincidence Signatures to Calibrate Gamma-ray Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McIntyre, Justin I.; Schrom, Brian T.; Cooper, Matthew W.

    Calibrating the gamma-ray detection efficiency of radiation detectors in a field environment is difficult under most circumstances. To counter this problem we have developed a technique that uses a cerium-doped lanthanum tri-chloride (LaCl3:Ce) scintillation detector to provide gated gammas. Exploiting the inherent radioactivity of the LaCl3:Ce due to the long-lived radioactive isotope 138La (t1/2 = 1.06 x 10^11 yr) allows the use of the 788- and 1436-keV gammas as a measure of efficiency. In this paper we explore the effectiveness of using the beta-gamma coincidence radiation of the LaCl3:Ce detector to calibrate the energy and efficiency of a number of gamma-ray detectors.
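
    For orientation, the efficiency calculation that such gated gammas enable reduces to comparing net peak counts with the number of gammas emitted by the intrinsic 138La activity. In the sketch below the activity, live time, and peak areas are invented, and the branching fractions are nominal approximate literature values rather than figures from the paper.

    ```python
    # Full-energy-peak efficiency from an intrinsic 138La source (illustrative numbers).
    activity_bq = 35.0            # assumed 138La activity in the crystal [Bq]
    live_time_s = 3600.0          # counting live time [s]

    # Approximate 138La branching fractions per decay (nominal values):
    branching = {788.7: 0.34, 1435.8: 0.66}      # keV -> gammas emitted per decay
    net_counts = {788.7: 2.1e3, 1435.8: 5.6e3}   # hypothetical net peak areas [counts]

    for energy_kev, br in branching.items():
        emitted = activity_bq * live_time_s * br
        efficiency = net_counts[energy_kev] / emitted
        print(f"{energy_kev:.1f} keV: efficiency ~ {efficiency:.3f}")
    ```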

  19. High Gain Antenna Calibration on Three Spacecraft

    NASA Technical Reports Server (NTRS)

    Hashmall, Joseph A.

    2011-01-01

    This paper describes the alignment calibration of spacecraft High Gain Antennas (HGAs) for three missions. For two of the missions (the Lunar Reconnaissance Orbiter and the Solar Dynamics Observatory) the calibration was performed on orbit. For the third mission (the Global Precipitation Measurement core satellite) ground simulation of the calibration was performed in a calibration feasibility study. These three satellites provide a range of calibration situations: lunar orbit transmitting to a ground antenna for LRO, geosynchronous orbit transmitting to a ground antenna for SDO, and low Earth orbit transmitting to TDRS satellites for GPM. The calibration results depend strongly on the quality and quantity of calibration data. With insufficient data the calibration function may give erroneous solutions. Manual intervention in the calibration allowed reliable parameters to be generated for all three missions.

  20. Stand level height-diameter mixed effects models: parameters fitted using loblolly pine but calibrated for sweetgum

    Treesearch

    Curtis L. Vanderschaaf

    2008-01-01

    Mixed effects models can be used to obtain site-specific parameters through the use of model calibration that often produces better predictions of independent data. This study examined whether parameters of a mixed effect height-diameter model estimated using loblolly pine plantation data but calibrated using sweetgum plantation data would produce reasonable...

  1. Cross-calibration between airborne SAR sensors

    NASA Technical Reports Server (NTRS)

    Zink, Manfred; Olivier, Philippe; Freeman, Anthony

    1993-01-01

    As Synthetic Aperture Radar (SAR) system performance and experience in SAR signature evaluation increase, quantitative analysis becomes more and more important. Such analyses require an absolute radiometric calibration of the complete SAR system. To keep the expenditure on calibration of future multichannel and multisensor remote sensing systems (e.g., X-SAR/SIR-C) within a tolerable level, data from different tracks and different sensors (channels) must be cross calibrated. The 1989 joint E-SAR/DC-8 SAR calibration campaign gave a first opportunity for such an experiment, including cross sensor and cross track calibration. A basic requirement for successful cross calibration is the stability of the SAR systems. The calibration parameters derived from different tracks and the polarimetric properties of the uncalibrated data are used to describe this stability. Quality criteria for a successful cross calibration are the agreement of alpha degree values and the consistency of radar cross sections of equally sized corner reflectors. Channel imbalance and cross talk provide additional quality in case of the polarimetric DC-8 SAR.

  2. The development of an electrochemical technique for in situ calibrating of combustible gas detectors

    NASA Technical Reports Server (NTRS)

    Shumar, J. W.; Lantz, J. B.; Schubert, F. H.

    1976-01-01

    A program to determine the feasibility of performing in situ calibration of combustible gas detectors was successfully completed. Several possible techniques for performing the in situ calibration were proposed. The approach that showed the most promise involved the use of a miniature water vapor electrolysis cell for the generation of hydrogen within the flame arrestor of a combustible gas detector to be used for the purpose of calibrating the combustible gas detectors. A preliminary breadboard of the in situ calibration hardware was designed, fabricated and assembled. The breadboard equipment consisted of a commercially available combustible gas detector, modified to incorporate a water vapor electrolysis cell, and the instrumentation required for controlling the water vapor electrolysis and controlling and calibrating the combustible gas detector. The results showed that operation of the water vapor electrolysis at a given current density for a specific time period resulted in the attainment of a hydrogen concentration plateau within the flame arrestor of the combustible gas detector.

  3. Multiple-Objective Stepwise Calibration Using Luca

    USGS Publications Warehouse

    Hay, Lauren E.; Umemoto, Makiko

    2007-01-01

    This report documents Luca (Let us calibrate), a multiple-objective, stepwise, automated procedure for hydrologic model calibration and the associated graphical user interface (GUI). Luca is a wizard-style user-friendly GUI that provides an easy systematic way of building and executing a calibration procedure. The calibration procedure uses the Shuffled Complex Evolution global search algorithm to calibrate any model compiled with the U.S. Geological Survey's Modular Modeling System. This process assures that intermediate and final states of the model are simulated consistently with measured values.

  4. Antenna Calibration and Measurement Equipment

    NASA Technical Reports Server (NTRS)

    Rochblatt, David J.; Cortes, Manuel Vazquez

    2012-01-01

    A document describes the Antenna Calibration & Measurement Equipment (ACME) system that will provide the Deep Space Network (DSN) with instrumentation enabling a trained RF engineer at each complex to perform antenna calibration measurements and to generate antenna calibration data. This data includes continuous-scan auto-bore-based data acquisition with all-sky data gathering in support of 4th order pointing model generation requirements. Other data include antenna subreflector focus, system noise temperature and tipping curves, antenna efficiency, system linearity, and instrument calibration. The ACME system design is based on the on-the-fly (OTF) mapping technique and architecture. ACME has contributed to the improved RF performance of the DSN by approximately a factor of two. It improved the pointing performance of the DSN antennas and the productivity of its personnel and calibration engineers.

  5. Quality Management and Calibration

    NASA Astrophysics Data System (ADS)

    Merkus, Henk G.

    Good specification of a product’s performance requires adequate characterization of relevant properties. Particulate products are usually characterized by some PSD, shape or porosity parameter(s). For proper characterization, adequate sampling, dispersion, and measurement procedures should be available or developed and skilful personnel should use appropriate, well-calibrated/qualified equipment. The characterization should be executed, in agreement with customers, in a well-organized laboratory. All related aspects should be laid down in a quality handbook. The laboratory should provide proof for its capability to perform the characterization of stated products and/or reference materials within stated confidence limits. This can be done either by internal validation and audits or by external GLP accreditation.

  6. Calibrating Historical IR Sensors Using GEO, and AVHRR Infrared Tropical Mean Calibration Models

    NASA Technical Reports Server (NTRS)

    Scarino, Benjamin; Doelling, David R.; Minnis, Patrick; Gopalan, Arun; Haney, Conor; Bhatt, Rajendra

    2014-01-01

    Long-term, remote-sensing-based climate data records (CDRs) are highly dependent on having consistent, well-calibrated satellite instrument measurements of the Earth's radiant energy. Therefore, by making historical satellite calibrations consistent with those of today's imagers, the Earth-observing community can benefit from a CDR that spans a minimum of 30 years. Most operational meteorological satellites rely on an onboard blackbody and space looks to provide on-orbit IR calibration, but neither target is traceable to absolute standards. The IR channels can also be affected by ice on the detector window, angle dependency of the scan mirror emissivity, stray light, and detector-to-detector striping. Being able to quantify and correct such degradations would mean IR data from any satellite imager could contribute to a CDR. Recent efforts have focused on utilizing well-calibrated modern hyper-spectral sensors to intercalibrate concurrent operational IR imagers to a single reference. In order to consistently calibrate both historical and current IR imagers to the same reference, however, another strategy is needed. Large, well-characterized tropical-domain Earth targets have the potential of providing an Earth-view reference accuracy of within 0.5 K. To that effort, NASA Langley is developing an IR tropical mean calibration model in order to calibrate historical Advanced Very High Resolution Radiometer (AVHRR) instruments. Using Meteosat-9 (Met-9) as a reference, empirical models are built based on spatially/temporally binned Met-9 and AVHRR tropical IR brightness temperatures. By demonstrating the stability of the Met-9 tropical models, NOAA-18 AVHRR can be calibrated to Met-9 by matching the AVHRR monthly histogram averages with the Met-9 model. This method is validated with ray-matched AVHRR and Met-9 bias-difference time series. Establishing the validity of this empirical model will allow for the calibration of historical AVHRR sensors to within 0.5 K, and thereby

  7. Preschool-Age Male Psychiatric Patients with Specific Developmental Disorders and Those Without: Do They Differ in Behavior Problems and Treatment Outcome?

    ERIC Educational Resources Information Center

    Achtergarde, Sandra; Becke, Johanna; Beyer, Thomas; Postert, Christian; Romer, Georg; Müller, Jörg Michael

    2014-01-01

    Specific developmental disorders of speech, language, and motor function in children are associated with a wide range of mental health problems. We examined whether preschool-age psychiatric patients with specific developmental disorders and those without differed in the severity of emotional and behavior problems. In addition, we examined whether…

  8. Attitude Sensor and Gyro Calibration for Messenger

    NASA Technical Reports Server (NTRS)

    O'Shaughnessy, Daniel; Pittelkau, Mark E.

    2007-01-01

    The Redundant Inertial Measurement Unit Attitude Determination/Calibration (RADICAL(TM)) filter was used to estimate star tracker and gyro calibration parameters using MESSENGER telemetry data from three calibration events. We present an overview of the MESSENGER attitude sensors and their configuration, describe the calibration maneuvers, compare the results with previous calibrations, and examine variations and trends in the estimated calibration parameters. The warm restart and covariance bump features of the RADICAL(TM) filter were used to estimate calibration parameters from two disjoint telemetry streams. Results show that the calibration parameters converge faster with much less transient variation during convergence than when the filter is cold-started at the start of each telemetry stream.

  9. Laser Calibration of an Impact Disdrometer

    NASA Technical Reports Server (NTRS)

    Lane, John E.; Kasparis, Takis; Metzger, Philip T.; Jones, W. Linwood

    2014-01-01

    A practical approach to developing an operational low-cost disdrometer hinges on implementing an effective in situ adaptive calibration strategy. This calibration strategy lowers the cost of the device and provides a method to guarantee continued automatic calibration. In previous work, a collocated tipping bucket rain gauge was utilized to provide a calibration signal to the disdrometer's digital signal processing software. Rainfall rate is proportional to the 11/3 moment of the drop size distribution (a 7/2 moment can also be assumed, depending on the choice of terminal velocity relationship). In the previous case, the disdrometer calibration was characterized and weighted to the 11/3 moment of the drop size distribution (DSD). Optical extinction by rainfall is proportional to the 2nd moment of the DSD. Using visible laser light as a means to focus and generate an auxiliary calibration signal, the adaptive calibration processing is significantly improved.
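
    To make the moment relationships concrete, the sketch below computes DSD moments numerically for an arbitrary exponential distribution; it is a generic illustration, not the disdrometer's processing code, and the intercept and slope parameters are made up. The 2nd moment corresponds to the optical-extinction weighting and the 11/3 moment to the rain-rate weighting under a power-law fall-speed assumption.

    ```python
    import numpy as np

    def dsd_moment(D, N_D, n):
        """n-th moment of a drop size distribution: integral of D**n * N(D) dD."""
        return np.trapz(D ** n * N_D, D)

    # Arbitrary exponential (Marshall-Palmer-like) DSD for illustration
    D = np.linspace(0.1, 6.0, 500)            # drop diameter [mm]
    N0, Lam = 8000.0, 2.3                     # intercept and slope (illustrative)
    N_D = N0 * np.exp(-Lam * D)

    M_2 = dsd_moment(D, N_D, 2.0)             # ~ optical extinction weighting
    M_11_3 = dsd_moment(D, N_D, 11.0 / 3.0)   # ~ rainfall-rate weighting
    ```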

  10. Insecure Attachment Styles, Relationship-Drinking Contexts, and Marital Alcohol Problems: Testing the Mediating Role of Relationship-Specific Drinking-to-Cope Motives

    PubMed Central

    Levitt, Ash; Leonard, Kenneth E.

    2015-01-01

    Research and theory suggest that romantic couple members are motivated to drink to cope with interpersonal distress. Additionally, this behavior and its consequences appear to be differentially associated with insecure attachment styles. However, no research has directly examined drinking to cope that is specific to relationship problems, or with relationship-specific drinking outcomes. Based on alcohol motivation and attachment theories, the current study examines relationship-specific drinking-to-cope processes over the early years of marriage. Specifically, it was hypothesized that drinking to cope with a relationship problem would mediate the associations between insecure attachment styles (i.e., anxious and avoidant) and frequencies of drinking with and apart from one’s partner and marital alcohol problems in married couples. Multilevel models were tested via the Actor-Partner Interdependence Model using reports of both members of 470 couples over the first 9 years of marriage. As expected, relationship-specific drinking-to-cope motives mediated the effects of actor anxious attachment on drinking apart from one’s partner and on marital alcohol problems, but, unexpectedly, not on drinking with the partner. No mediated effects were found for attachment avoidance. Results suggest that anxious (but not avoidant) individuals are motivated to use alcohol to cope specifically with relationship problems in certain contexts, which may exacerbate relationship difficulties associated with attachment anxiety. Implications for theory and future research on relationship-motivated drinking are discussed. PMID:25799439

  11. Daytime sky polarization calibration limitations

    NASA Astrophysics Data System (ADS)

    Harrington, David M.; Kuhn, Jeffrey R.; Ariste, Arturo López

    2017-01-01

    The daytime sky has recently been demonstrated as a useful calibration tool for deriving polarization cross-talk properties of large astronomical telescopes. The Daniel K. Inouye Solar Telescope and other large telescopes under construction can benefit from precise polarimetric calibration of large mirrors. Several atmospheric phenomena and instrumental errors potentially limit the technique's accuracy. At the 3.67-m AEOS telescope on Haleakala, we performed a large observing campaign with the HiVIS spectropolarimeter to identify limitations and develop algorithms for extracting consistent calibrations. Effective sampling of the telescope optical configurations and filtering of data for several derived parameters provide robustness to the derived Mueller matrix calibrations. Second-order scattering models of the sky show that this method is relatively insensitive to multiple-scattering in the sky, provided calibration observations are done in regions of high polarization degree. The technique is also insensitive to assumptions about telescope-induced polarization, provided the mirror coatings are highly reflective. Zemax-derived polarization models show agreement between the functional dependence of polarization predictions and the corresponding on-sky calibrations.

  12. Task Complexity, Epistemological Beliefs and Metacognitive Calibration: An Exploratory Study

    ERIC Educational Resources Information Center

    Stahl, Elmar; Pieschl, Stephanie; Bromme, Rainer

    2006-01-01

    This article presents an explorative study, which is part of a comprehensive project to examine the impact of epistemological beliefs on metacognitive calibration during learning processes within a complex hypermedia information system. More specifically, this study investigates: 1) if learners differentiate between tasks of different complexity,…

  13. Gender-specific mediational links between parenting styles, parental monitoring, impulsiveness, drinking control, and alcohol-related problems.

    PubMed

    Patock-Peckham, Julie A; King, Kevin M; Morgan-Lopez, Antonio A; Ulloa, Emilio C; Moses, Jennifer M Filson

    2011-03-01

    Recently, it has been suggested that traits may dynamically change as conditions change. One possible mechanism that may influence impulsiveness is parental monitoring. Parental monitoring reflects a knowledge regarding one's offspring's whereabouts and social connections. The aim of this investigation was to examine potential gender-specific parental influences on impulsiveness (general behavioral control), control over one's own drinking (specific behavioral control), and alcohol-related problems among individuals in a period of emerging adulthood. Direct and mediational links between parenting styles (permissive, authoritarian, and authoritative), parental monitoring, impulsiveness, drinking control, and alcohol-related problems were investigated. A multiple-group SEM model with university students (316 women, 265 men) was examined. In general, the overall pattern among male and female respondents was distinct. For daughters, perceptions of a permissive father were indirectly linked to more alcohol-related problems through lower levels of monitoring by fathers and more impulsive symptoms. Perceptions of an authoritative father were also indirectly linked to fewer impulsive symptoms through higher levels of monitoring by fathers among daughters. For men, perceptions of a permissive mother were indirectly linked to more alcohol-related problems through lower levels of monitoring by mothers and more impulsive symptoms. For sons, perceptions of mother authoritativeness were indirectly linked to fewer alcohol-related problems through more monitoring by mothers and fewer impulsive symptoms. Monitoring by an opposite-gender parent mediated the link between parenting styles (i.e., permissive, authoritative) on impulsiveness.

  14. Shortwave Radiometer Calibration Methods Comparison and Resulting Solar Irradiance Measurement Differences: A User Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Banks financing solar energy projects require assurance that these systems will produce the energy predicted. Furthermore, utility planners and grid system operators need to understand the impact of the variable solar resource on solar energy conversion system performance. Accurate solar radiation data sets reduce the expense associated with mitigating performance risk and assist in understanding the impacts of solar resource variability. The accuracy of solar radiation measured by radiometers depends on the instrument performance specification, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methods provided by radiometric calibration service providers, such as NREL and manufacturers of radiometers, on the resulting calibration responsivity. Some of these radiometers are calibrated indoors and some outdoors. To establish or understand the differences in calibration methodology, we processed and analyzed field-measured data from these radiometers. This study investigates calibration responsivities provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The BORCAL method provides the outdoor calibration responsivity of pyranometers and pyrheliometers at 45 degree solar zenith angle, and as a function of solar zenith angle determined by clear-sky comparisons with reference irradiance. The BORCAL method also employs a thermal offset correction to the calibration responsivity of single-black thermopile detectors used in pyranometers. Indoor calibrations of radiometers by their manufacturers are performed using a stable artificial light source in a side-by-side comparison between the test radiometer under calibration and a reference radiometer of the same type. In both methods, the reference radiometer calibrations are traceable to the World Radiometric Reference (WRR). These

  15. Re-calibration of coronary risk prediction: an example of the Seven Countries Study.

    PubMed

    Puddu, Paolo Emilio; Piras, Paolo; Kromhout, Daan; Tolonen, Hanna; Kafatos, Anthony; Menotti, Alessandro

    2017-12-14

    We aimed at performing a calibration and re-calibration process using six standard risk factors from Northern (NE, N = 2360) or Southern European (SE, N = 2789) middle-aged men of the Seven Countries Study, whose parameters and data were fully known, to establish whether re-calibration gave the right answer. The Greenwood-Nam-D'Agostino technique as modified by Demler (GNDD) in 2015 produced chi-squared statistics using 10 deciles of observed/expected CHD mortality risk, corresponding to the Hosmer-Lemeshow chi-squared employed for multiple logistic equations whereby binary data are used. Instead of the number of events, the GNDD test uses survival probabilities of observed and predicted events. The exercise applied, in five different ways, the parameters of the NE-predictive model to SE (and vice-versa) and compared the outcome of the simulated re-calibration with the real data. Good re-calibration could be obtained only when risk factor coefficients were substituted, being similar in magnitude and not significantly different between NE-SE. In all other ways, a good re-calibration could not be obtained. This is enough to call for an overall re-evaluation of most investigations that, without GNDD or another proper technique for statistically assessing the potential differences, concluded that re-calibration is a fair method and might therefore be used, with no specific caution.
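
    For reference, the binary-data analogue invoked above is the Hosmer-Lemeshow statistic over G risk groups; the GNDD test follows the same decile structure but replaces observed and expected event counts with observed and predicted survival probabilities.

    ```latex
    \hat{C} \;=\; \sum_{g=1}^{G} \frac{\left(O_g - n_g\,\bar{\pi}_g\right)^2}{n_g\,\bar{\pi}_g\left(1-\bar{\pi}_g\right)} \;\sim\; \chi^2_{G-2}
    ```

    where O_g is the number of observed events, n_g the group size, and \bar{\pi}_g the mean predicted risk in decile g.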

  16. More efficient evolutionary strategies for model calibration with watershed model for demonstration

    NASA Astrophysics Data System (ADS)

    Baggett, J. S.; Skahill, B. E.

    2008-12-01

    Evolutionary strategies allow automatic calibration of more complex models than traditional gradient based approaches, but they are more computationally intensive. We present several efficiency enhancements for evolution strategies, many of which are not new, but when combined have been shown to dramatically decrease the number of model runs required for calibration of synthetic problems. To reduce the number of expensive model runs we employ a surrogate objective function for an adaptively determined fraction of the population at each generation (Kern et al., 2006). We demonstrate improvements to the adaptive ranking strategy that increase its efficiency while sacrificing little reliability and further reduce the number of model runs required in densely sampled parts of parameter space. Furthermore, we include a gradient individual in each generation that is usually not selected when the search is in a global phase or when the derivatives are poorly approximated, but when selected near a smooth local minimum can dramatically increase convergence speed (Tahk et al., 2007). Finally, the selection of the gradient individual is used to adapt the size of the population near local minima. We show, by incorporating these enhancements into the Covariance Matrix Adaptation Evolution Strategy (CMAES; Hansen, 2006), that their synergetic effect is greater than their individual parts. This hybrid evolutionary strategy exploits smooth structure when it is present but degrades to an ordinary evolutionary strategy, at worst, if smoothness is not present. Calibration of 2D-3D synthetic models with the modified CMAES requires approximately 10%-25% of the model runs of ordinary CMAES. Preliminary demonstration of this hybrid strategy will be shown for watershed model calibration problems. Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañaga, I. Inza and E. Bengoetxea (Eds.). Towards a new evolutionary computation. Advances in estimation of
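
    The central efficiency idea, ranking offspring with a cheap surrogate and spending expensive model runs only on the most promising fraction, can be illustrated independently of CMA-ES itself. The toy below is not the authors' code: the quadratic stand-in for a watershed model, the inverse-distance surrogate, and every constant are assumptions chosen only to show the pre-screening pattern.

    ```python
    import numpy as np

    def expensive_model(x):                        # stand-in for a costly watershed model run
        return float(np.sum((x - 1.0) ** 2))

    def surrogate(x, X_arch, f_arch):              # cheap inverse-distance surrogate from an archive
        d = np.linalg.norm(X_arch - x, axis=1) + 1e-12
        w = 1.0 / d
        return float(np.dot(w, f_arch) / w.sum())

    rng = np.random.default_rng(1)
    dim, lam, evaluated_frac, sigma = 4, 16, 0.25, 0.3
    mean = rng.uniform(-2, 2, dim)
    X_arch = np.array([mean])
    f_arch = np.array([expensive_model(mean)])

    for gen in range(50):
        cand = mean + sigma * rng.standard_normal((lam, dim))
        # 1) rank all offspring with the surrogate (no expensive calls)
        s_rank = np.argsort([surrogate(c, X_arch, f_arch) for c in cand])
        # 2) evaluate only the top fraction with the expensive model
        top = cand[s_rank[: max(1, int(evaluated_frac * lam))]]
        f_top = np.array([expensive_model(c) for c in top])
        X_arch = np.vstack([X_arch, top])
        f_arch = np.r_[f_arch, f_top]
        # 3) move the search mean toward the best truly evaluated offspring
        mean = top[np.argmin(f_top)]

    print("best objective:", f_arch.min(), "after", len(f_arch), "expensive runs")
    ```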

  17. A New Method for Calibrating Perceptual Salience across Dimensions in Infants: The Case of Color vs. Luminance

    ERIC Educational Resources Information Center

    Kaldy, Zsuzsa; Blaser, Erik A.; Leslie, Alan M.

    2006-01-01

    We report a new method for calibrating differences in perceptual salience across feature dimensions, in infants. The problem of inter-dimensional salience arises in many areas of infant studies, but a general method for addressing the problem has not previously been described. Our method is based on a preferential looking paradigm, adapted to…

  18. Dry calibration of electromagnetic flowmeters based on numerical models combining multiple physical phenomena (multiphysics)

    NASA Astrophysics Data System (ADS)

    Fu, X.; Hu, L.; Lee, K. M.; Zou, J.; Ruan, X. D.; Yang, H. Y.

    2010-10-01

    This paper presents a method for dry calibration of an electromagnetic flowmeter (EMF). This method, which determines the voltage induced in the EMF as conductive liquid flows through a magnetic field, numerically solves a coupled set of multiphysical equations with measured boundary conditions for the magnetic, electric, and flow fields in the measuring pipe of the flowmeter. Specifically, this paper details the formulation of dry calibration and an efficient algorithm (that adaptively minimizes the number of measurements and requires only the normal component of the magnetic flux density as boundary conditions on the pipe surface to reconstruct the magnetic field involved) for computing the sensitivity of EMF. Along with an in-depth discussion on factors that could significantly affect the final precision of a dry calibrated EMF, the effects of flow disturbance on measuring errors have been experimentally studied by installing a baffle at the inflow port of the EMF. Results of the dry calibration on an actual EMF were compared against flow-rig calibration; excellent agreements (within 0.3%) between dry calibration and flow-rig tests verify the multiphysical computation of the fields and the robustness of the method. Since it requires no actual flow, dry calibration is particularly useful for calibrating large-diameter EMFs where conventional flow-rig methods are often costly and difficult to implement.
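
    What such a dry calibration evaluates is, in essence, the classical weight-vector form of the flowmeter signal (a textbook relation given here for orientation, not necessarily the paper's exact formulation):

    ```latex
    U \;=\; \int_{V} \mathbf{W}(\mathbf{r}) \cdot \bigl(\mathbf{v}(\mathbf{r}) \times \mathbf{B}(\mathbf{r})\bigr)\, \mathrm{d}V
    ```

    where U is the electrode voltage, v the velocity field, B the magnetic flux density, and W a weight vector fixed by the electrode and pipe geometry; the flowmeter sensitivity follows from evaluating this integral with the numerically reconstructed fields.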

  19. Airdata Measurement and Calibration

    NASA Technical Reports Server (NTRS)

    Haering, Edward A., Jr.

    1995-01-01

    This memorandum provides a brief introduction to airdata measurement and calibration. Readers will learn about typical test objectives, quantities to measure, and flight maneuvers and operations for calibration. The memorandum informs readers about tower-flyby, trailing cone, pacer, radar-tracking, and dynamic airdata calibration maneuvers. Readers will also begin to understand how some data analysis considerations and special airdata cases, including high-angle-of-attack flight, high-speed flight, and nonobtrusive sensors are handled. This memorandum is not intended to be all inclusive; this paper contains extensive reference and bibliography sections.

  20. On the prospects of cross-calibrating the Cherenkov Telescope Array with an airborne calibration platform

    NASA Astrophysics Data System (ADS)

    Brown, Anthony M.

    2018-01-01

    Recent advances in unmanned aerial vehicle (UAV) technology have made UAVs an attractive possibility as an airborne calibration platform for astronomical facilities. This is especially true for arrays of telescopes spread over a large area such as the Cherenkov Telescope Array (CTA). In this paper, the feasibility of using UAVs to calibrate CTA is investigated. Assuming a UAV at 1 km altitude above CTA, operating on astronomically clear nights with stratified, low atmospheric dust content, appropriate thermal protection for the calibration light source and an onboard photodiode to monitor its absolute light intensity, inter-calibration of CTA's telescopes of the same size class is found to be achievable with a 6 - 8 % uncertainty. For cross-calibration of different telescope size classes, a systematic uncertainty of 8 - 10 % is found to be achievable. Importantly, equipping the UAV with a multi-wavelength calibration light source affords us the ability to monitor the wavelength-dependent degradation of CTA telescopes' optical system, allowing us to not only maintain this 6 - 10 % uncertainty after the first few years of telescope deployment, but also to accurately account for the effect of multi-wavelength degradation on the cross-calibration of CTA by other techniques, namely with images of air showers and local muons. A UAV-based system thus provides CTA with several independent and complementary methods of cross-calibrating the optical throughput of individual telescopes. Furthermore, housing environmental sensors on the UAV system allows us not only to minimise the systematic uncertainty associated with the atmospheric transmission of the calibration signal, but also to map the dust content above CTA and to monitor the temperature, humidity and pressure profiles of the first kilometre of atmosphere above CTA with each UAV flight.
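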

  1. Model Calibration with Censored Data

    DOE PAGES

    Cao, Fang; Ba, Shan; Brenneman, William A.; ...

    2017-06-28

    Here, the purpose of model calibration is to make the model predictions closer to reality. The classical Kennedy-O'Hagan approach is widely used for model calibration, which can account for the inadequacy of the computer model while simultaneously estimating the unknown calibration parameters. In many applications, the phenomenon of censoring occurs when the exact outcome of the physical experiment is not observed, but is only known to fall within a certain region. In such cases, the Kennedy-O'Hagan approach cannot be used directly, and we propose a method to incorporate the censoring information when performing model calibration. The method is applied to study the compression phenomenon of liquid inside a bottle. The results show significant improvement over the traditional calibration methods, especially when the number of censored observations is large.
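
    Although the paper's formulation is not reproduced here, the generic way censoring enters a calibration likelihood is by replacing the density of each censored observation with the probability of its censoring region:

    ```latex
    L(\theta) \;=\; \prod_{i \in \mathcal{O}} f\!\left(y_i \mid \theta\right)
    \prod_{j \in \mathcal{C}} \bigl[ F\!\left(u_j \mid \theta\right) - F\!\left(l_j \mid \theta\right) \bigr]
    ```

    where f and F are the predictive density and distribution function, \mathcal{O} and \mathcal{C} index the exactly observed and censored outcomes, and [l_j, u_j] is the region in which the j-th censored outcome is known to fall.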

  2. Problems in the use of interference filters for spectrophotometric determination of total ozone

    NASA Technical Reports Server (NTRS)

    Basher, R. E.; Matthews, W. A.

    1977-01-01

    An analysis of the use of ultraviolet narrow-band interference filters for total ozone determination is given with reference to the New Zealand filter spectrophotometer under the headings of filter monochromaticity, temperature dependence, orientation dependence, aging, and specification tolerances and nonuniformity. Quantitative details of each problem are given, together with the means used to overcome them in the New Zealand instrument. The tuning of the instrument's filter center wavelengths to a common set of values by tilting the filters is also described, along with a simple calibration method used to adjust and set these center wavelengths.

  3. Development and calibration of an air-floating six-axis force measurement platform using self-calibration

    NASA Astrophysics Data System (ADS)

    Huang, Bin; Wang, Xiaomeng; Li, Chengwei; Yi, Jiajing; Lu, Rongsheng; Tao, Jiayue

    2016-09-01

    This paper describes the design, working principle, as well as calibration of an air-floating six-axis force measurement platform, where the floating plate and nozzles were connected without contact, preventing inter-dimensional coupling and increasing precision significantly. The measurement repeatability error of the force size in the platform is less than 0.2% full scale (FS), which is significantly better than the precision of 1% FS in the six-axis force sensors on the current market. We overcame the difficulties of weight loading device in high-precision calibration by proposing a self-calibration method based on the floating plate gravity and met the calibration precision requirement of 0.02% FS. This study has general implications for the development and calibration of high-precision multi-axis force sensors. In particular, the air-floating six-axis force measurement platform could be applied to the calibration of some special sensors such as flexible tactile sensors and may be used as a micro-nano mechanical assembly platform for real-time assembly force testing.

  4. Mixture EMOS model for calibrating ensemble forecasts of wind speed.

    PubMed

    Baran, S; Lerch, S

    2016-03-01

    Ensemble model output statistics (EMOS) is a statistical tool for post-processing forecast ensembles of weather variables obtained from multiple runs of numerical weather prediction models in order to produce calibrated predictive probability density functions. The EMOS predictive probability density function is given by a parametric distribution with parameters depending on the ensemble forecasts. We propose an EMOS model for calibrating wind speed forecasts based on weighted mixtures of truncated normal (TN) and log-normal (LN) distributions where model parameters and component weights are estimated by optimizing the values of proper scoring rules over a rolling training period. The new model is tested on wind speed forecasts of the 50-member European Centre for Medium-range Weather Forecasts ensemble, the 11-member Aire Limitée Adaptation dynamique Développement International-Hungary Ensemble Prediction System ensemble of the Hungarian Meteorological Service, and the eight-member University of Washington mesoscale ensemble, and its predictive performance is compared with that of various benchmark EMOS models based on single parametric families and combinations thereof. The results indicate improved calibration of probabilistic forecasts and accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. The mixture EMOS model significantly outperforms the TN and LN EMOS methods; moreover, it provides better calibrated forecasts than the TN-LN combination model and offers an increased flexibility while avoiding covariate selection problems. © 2016 The Authors Environmetrics Published by John Wiley & Sons Ltd.
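
    As a concrete illustration of the mixture predictive distribution (not the authors' fitted model: the component parameters and weight below are invented, and in practice they would be linked to the ensemble through EMOS regression and estimated by minimizing a proper score such as the CRPS over the training window), the predictive CDF is a weighted combination of a zero-truncated normal and a log-normal component:

    ```python
    import numpy as np
    from scipy.stats import truncnorm, lognorm

    def mixture_cdf(x, w, mu_tn, sig_tn, mu_ln, sig_ln):
        """Predictive CDF of a weighted TN0/LN mixture for wind speed (x >= 0)."""
        a = (0.0 - mu_tn) / sig_tn                       # truncation at zero, standardized
        tn = truncnorm(a, np.inf, loc=mu_tn, scale=sig_tn)
        ln = lognorm(s=sig_ln, scale=np.exp(mu_ln))
        return w * tn.cdf(x) + (1.0 - w) * ln.cdf(x)

    # Hypothetical parameter values for one forecast case
    x = np.linspace(0.0, 25.0, 200)                      # wind speed [m/s]
    F = mixture_cdf(x, w=0.6, mu_tn=6.0, sig_tn=2.5, mu_ln=1.7, sig_ln=0.35)
    p_exceed_10 = 1.0 - mixture_cdf(10.0, 0.6, 6.0, 2.5, 1.7, 0.35)
    ```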

  5. A standard stellar library for evolutionary synthesis. III. Metallicity calibration

    NASA Astrophysics Data System (ADS)

    Westera, P.; Lejeune, T.; Buser, R.; Cuisinier, F.; Bruzual, G.

    2002-01-01

    We extend the colour calibration of the widely used BaSeL standard stellar library (Lejeune et al. 1997, 1998) to non-solar metallicities, down to [Fe/H] ~ -2.0 dex. Surprisingly, we find that at the present epoch it is virtually impossible to establish a unique calibration of UBVRIJHKL colours in terms of stellar metallicity [Fe/H] which is consistent simultaneously with both colour-temperature relations and colour-absolute magnitude diagrams (CMDs) based on observed globular cluster photometry data and on published, currently popular standard stellar evolutionary tracks and isochrones. The problem appears to be related to the long-standing incompleteness in our understanding of convection in late-type stellar evolution, but is also due to a serious lack of relevant observational calibration data that would help resolve, or at least further significant progress towards resolving, this issue. In view of the most important applications of the BaSeL library, we here propose two different metallicity calibration versions: (1) the "WLBC 99" library, which consistently matches empirical colour-temperature relations and which, therefore, should make an ideal tool for the study of individual stars; and (2) the "PADOVA 2000" library, which provides isochrones from the Padova 2000 grid (Girardi et al. 2000) that successfully reproduce Galactic globular-cluster colour-absolute magnitude diagrams and which thus should prove particularly useful for studies of collective phenomena in stellar populations in clusters and galaxies.

  6. Marine04 Marine radiocarbon age calibration, 26 - 0 ka BP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughen, K; Baille, M; Bard, E

    2004-11-01

    New radiocarbon calibration curves, IntCal04 and Marine04, have been constructed and internationally ratified to replace the terrestrial and marine components of IntCal98. The new calibration datasets extend an additional 2000 years, from 0-26 ka cal BP (Before Present, 0 cal BP = AD 1950), and provide much higher resolution, greater precision and more detailed structure than IntCal98. For the Marine04 curve, dendrochronologically dated tree-ring samples, converted with a box-diffusion model to marine mixed-layer ages, cover the period from 0-10.5 ka cal BP. Beyond 10.5 ka cal BP, high-resolution marine data become available from foraminifera in varved sediments and U/Th-dated corals. The marine records are corrected with site-specific (14)C reservoir age information to provide a single global marine mixed-layer calibration from 10.5-26.0 ka cal BP. A substantial enhancement relative to IntCal98 is the introduction of a random walk model, which takes into account the uncertainty in both the calendar age and the radiocarbon age to calculate the underlying calibration curve. The marine datasets and calibration curve for marine samples from the surface mixed layer (Marine04) are discussed here. The tree-ring datasets, sources of uncertainty, and regional offsets are presented in detail in a companion paper by Reimer et al.

  7. K-edge energy-based calibration method for photon counting detectors

    NASA Astrophysics Data System (ADS)

    Ge, Yongshuai; Ji, Xu; Zhang, Ran; Li, Ke; Chen, Guang-Hong

    2018-01-01

    In recent years, potential applications of energy-resolved photon counting detectors (PCDs) in the x-ray medical imaging field have been actively investigated. Unlike conventional x-ray energy integration detectors, PCDs count the number of incident x-ray photons within certain energy windows. For PCDs, the interactions between x-ray photons and photoconductor generate electronic voltage pulse signals. The pulse height of each signal is proportional to the energy of the incident photons. By comparing the pulse height with the preset energy threshold values, x-ray photons with specific energies are recorded and sorted into different energy bins. To quantitatively understand the meaning of the energy threshold values, and thus to assign an absolute energy value to each energy bin, energy calibration is needed to establish the quantitative relationship between the threshold values and the corresponding effective photon energies. In practice, the energy calibration is not always easy, due to the lack of well-calibrated energy references for the working energy range of the PCDs. In this paper, a new method was developed to use the precise knowledge of the characteristic K-edge energy of materials to perform energy calibration. The proposed method was demonstrated using experimental data acquired from three K-edge materials (viz., iodine, gadolinium, and gold) on two different PCDs (Hydra and Flite, XCounter, Sweden). Finally, the proposed energy calibration method was further validated using a radioactive isotope (Am-241) with a known decay energy spectrum.
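
    The calibration step itself amounts to a linear fit of threshold settings against known K-edge energies. In the sketch below the K-edge energies are standard tabulated values, while the threshold (DAC) readings and the assumption of a linear threshold-energy response are illustrative only.

    ```python
    import numpy as np

    # Tabulated K-edge energies [keV]
    k_edges = {"iodine": 33.17, "gadolinium": 50.24, "gold": 80.72}

    # Hypothetical threshold (DAC) settings at which each K-edge feature was located
    thresholds = {"iodine": 41.0, "gadolinium": 58.5, "gold": 90.2}

    t = np.array([thresholds[m] for m in k_edges])
    E = np.array([k_edges[m] for m in k_edges])

    slope, intercept = np.polyfit(t, E, 1)   # effective energy = slope * DAC + intercept

    def threshold_to_energy(dac):
        """Map a threshold setting to its effective photon energy [keV]."""
        return slope * dac + intercept

    print(f"E(keV) = {slope:.3f} * DAC + {intercept:.2f}; DAC 70 -> {threshold_to_energy(70):.1f} keV")
    ```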

  8. Internalizing and externalizing problems in adolescence: general and dimension-specific effects of familial loadings and preadolescent temperament traits.

    PubMed

    Ormel, J; Oldehinkel, A J; Ferdinand, R F; Hartman, C A; De Winter, A F; Veenstra, R; Vollebergh, W; Minderaa, R B; Buitelaar, J K; Verhulst, F C

    2005-12-01

    We investigated the links between familial loading, preadolescent temperament, and internalizing and externalizing problems in adolescence, hereby distinguishing effects on maladjustment in general versus dimension-specific effects on either internalizing or externalizing problems. In a population-based sample of 2230 preadolescents (10-11 years) familial loading (parental lifetime psychopathology) and offspring temperament were assessed at baseline by parent report, and offspring psychopathology at 2.5-years follow-up by self-report, teacher report and parent report. We used purified measures of temperament and psychopathology and partialled out shared variance between internalizing and externalizing problems. Familial loading of internalizing psychopathology predicted offspring internalizing but not externalizing problems, whereas familial loading of externalizing psychopathology predicted offspring externalizing but not internalizing problems. Both familial loadings were associated with Frustration, low Effortful Control, and Fear. Frustration acted as a general risk factor predicting severity of maladjustment; low Effortful Control and Fear acted as dimension-specific risk factors that predicted a particular type of psychopathology; whereas Shyness, High-Intensity Pleasure, and Affiliation acted as direction markers that steered the conditional probability of internalizing versus externalizing problems, in the event of maladjustment. Temperament traits mediated one-third of the association between familial loading and psychopathology. Findings were robust across different composite measures of psychopathology, and applied to girls as well as boys. With regard to familial loading and temperament, it is important to distinguish general risk factors (Frustration) from dimension-specific risk factors (familial loadings, Effortful Control, Fear), and direction markers that act as pathoplastic factors (Shyness, High-Intensity Pleasure, Affiliation) from both types of

  9. Cross-calibration of liquid and solid QCT calibration standards: corrections to the UCSF normative data

    NASA Technical Reports Server (NTRS)

    Faulkner, K. G.; Gluer, C. C.; Grampp, S.; Genant, H. K.

    1993-01-01

    Quantitative computed tomography (QCT) has been shown to be a precise and sensitive method for evaluating spinal bone mineral density (BMD) and skeletal response to aging and therapy. Precise and accurate determination of BMD using QCT requires a calibration standard to compensate for and reduce the effects of beam-hardening artifacts and scanner drift. The first standards were based on dipotassium hydrogen phosphate (K2HPO4) solutions. Recently, several manufacturers have developed stable solid calibration standards based on calcium hydroxyapatite (CHA) in water-equivalent plastic. Due to differences in attenuating properties of the liquid and solid standards, the calibrated BMD values obtained with each system do not agree. In order to compare and interpret the results obtained on both systems, cross-calibration measurements were performed in phantoms and patients using the University of California San Francisco (UCSF) liquid standard and the Image Analysis (IA) solid standard on the UCSF GE 9800 CT scanner. From the phantom measurements, a highly linear relationship was found between the liquid- and solid-calibrated BMD values. No influence on the cross-calibration due to simulated variations in body size or vertebral fat content was seen, though a significant difference in the cross-calibration was observed between scans acquired at 80 and 140 kVp. From the patient measurements, a linear relationship between the liquid (UCSF) and solid (IA) calibrated values was derived for GE 9800 CT scanners at 80 kVp (IA = [1.15 x UCSF] - 7.32).(ABSTRACT TRUNCATED AT 250 WORDS).

  10. Evaluation of factors affecting CGMS calibration.

    PubMed

    Buckingham, Bruce A; Kollman, Craig; Beck, Roy; Kalajian, Andrea; Fiallo-Scharer, Rosanna; Tansey, Michael J; Fox, Larry A; Wilson, Darrell M; Weinzimer, Stuart A; Ruedy, Katrina J; Tamborlane, William V

    2006-06-01

    The optimal number/timing of calibrations entered into the CGMS (Medtronic MiniMed, Northridge, CA) continuous glucose monitoring system have not been previously described. Fifty subjects with Type 1 diabetes mellitus (10-18 years old) were hospitalized in a clinical research center for approximately 24 h on two separate days. CGMS and OneTouch Ultra meter (LifeScan, Milpitas, CA) data were obtained. The CGMS was retrospectively recalibrated using the Ultra data varying the number and timing of calibrations. Resulting CGMS values were compared against laboratory reference values. There was a modest improvement in accuracy with increasing number of calibrations. The median relative absolute deviation (RAD) was 14%, 15%, 13%, and 13% when using three, four, five, and seven calibration values, respectively (P < 0.001). Corresponding percentages of CGMS-reference pairs meeting the International Organisation for Standardisation criteria were 66%, 67%, 71%, and 72% (P < 0.001). Nighttime accuracy improved when daytime calibrations (pre-lunch and pre-dinner) were removed leaving only two calibrations at 9 p.m. and 6 a.m. (median difference, -2 vs. -9 mg/dL, P < 0.001; median RAD, 12% vs. 15%, P = 0.001). Accuracy was better on visits where the average absolute rate of glucose change at the times of calibration was lower. On visits with average absolute rates <0.5, 0.5 to <1.0, 1.0 to <1.5, and ≥1.5 mg/dL/min, median RAD values were 13% versus 14% versus 17% versus 19%, respectively (P = 0.05). Although accuracy is slightly improved with more calibrations, the timing of the calibrations appears more important. Modifying the algorithm to put less weight on daytime calibrations for nighttime values and calibrating during times of relative glucose stability may have greater impact on accuracy.

  11. Evaluation of Factors Affecting CGMS Calibration

    PubMed Central

    2006-01-01

    Background The optimal number/timing of calibrations entered into the Continuous Glucose Monitoring System (“CGMS”; Medtronic MiniMed, Northridge, CA) have not been previously described. Methods Fifty subjects with T1DM (10–18y) were hospitalized in a clinical research center for ~24h on two separate days. CGMS and OneTouch® Ultra® Meter (“Ultra”; LifeScan, Milpitas, CA) data were obtained. The CGMS was retrospectively recalibrated using the Ultra data varying the number and timing of calibrations. Resulting CGMS values were compared against laboratory reference values. Results There was a modest improvement in accuracy with increasing number of calibrations. The median relative absolute deviation (RAD) was 14%, 15%, 13% and 13% when using 3, 4, 5 and 7 calibration values, respectively (p<0.001). Corresponding percentages of CGMS-reference pairs meeting the ISO criteria were 66%, 67%, 71% and 72% (p<0.001). Nighttime accuracy improved when daytime calibrations (pre-lunch and pre-dinner) were removed leaving only two calibrations at 9p.m. and 6a.m. (median difference: −2 vs. −9mg/dL, p<0.001; median RAD: 12% vs. 15%, p=0.001). Accuracy was better on visits where the average absolute rate of glucose change at the times of calibration was lower. On visits with average absolute rates <0.5, 0.5-<1.0, 1.0-<1.5 and ≥1.5mg/dL/min, median RAD values were 13% vs. 14% vs. 17% vs. 19%, respectively (p=0.05). Conclusions Although accuracy is slightly improved with more calibrations, the timing of the calibrations appears more important. Modifying the algorithm to put less weight on daytime calibrations for nighttime values and calibrating during times of relative glucose stability may have greater impact on accuracy. PMID:16800753

  12. Physical resist models and their calibration: their readiness for accurate EUV lithography simulation

    NASA Astrophysics Data System (ADS)

    Klostermann, U. K.; Mülders, T.; Schmöller, T.; Lorusso, G. F.; Hendrickx, E.

    2010-04-01

    In this paper, we discuss the performance of EUV resist models in terms of predictive accuracy, and we assess the readiness of the corresponding model calibration methodology. The study is done on an extensive OPC data set collected at IMEC for the ShinEtsu resist SEVR-59 on the ASML EUV Alpha Demo Tool (ADT), with the data set including more than a thousand CD values. We address practical aspects such as the speed of calibration and selection of calibration patterns. The model is calibrated on 12 process window data series varying in pattern width (32, 36, 40 nm), orientation (H, V) and pitch (dense, isolated). The minimum measured feature size at nominal process condition is a 32 nm CD at a dense pitch of 64 nm. Mask metrology is applied to verify and eventually correct nominal width of the drawn CD. Cross-sectional SEM information is included in the calibration to tune the simulated resist loss and sidewall angle. The achieved calibration RMS is ~ 1.0 nm. We show what elements are important to obtain a well-calibrated model. We discuss the impact of 3D mask effects on the Bossung tilt. We demonstrate that a correct representation of the flare level during the calibration is important to achieve a high predictability at various flare conditions. Although the model calibration is performed on a limited subset of the measurement data (one dimensional structures only), its accuracy is validated based on a large number of OPC patterns (at nominal dose and focus conditions) not included in the calibration; validation RMS results as small as 1 nm can be reached. Furthermore, we study the model's extendibility to two-dimensional end of line (EOL) structures. Finally, we correlate the experimentally observed fingerprint of the CD uniformity to a model, where EUV tool specific signatures are taken into account.

  13. MIRO Continuum Calibration for Asteroid Mode

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon

    2011-01-01

    MIRO (Microwave Instrument for the Rosetta Orbiter) is a lightweight, uncooled, dual-frequency heterodyne radiometer. The MIRO encountered asteroid Steins in 2008, and during the flyby, MIRO used the Asteroid Mode to measure the emission spectrum of Steins. The Asteroid Mode is one of the seven modes of the MIRO operation, and is designed to increase the length of time that a spectral line is in the MIRO pass-band during a flyby of an object. This software is used to calibrate the continuum measurement of Steins emission power during the asteroid flyby. The MIRO raw measurement data need to be calibrated in order to obtain physically meaningful data. This software calibrates the MIRO raw measurements in digital units to the brightness temperature in Kelvin. The software uses two calibration sequences that are included in the Asteroid Mode. One sequence is at the beginning of the mode, and the other at the end. The first six frames contain the measurement of a cold calibration target, while the last six frames measure a warm calibration target. The targets have known temperatures and are used to provide reference power and gain, which can be used to convert MIRO measurements into brightness temperature. The software was developed to calibrate MIRO continuum measurements from Asteroid Mode. The software determines the relationship between the raw digital unit measured by MIRO and the equivalent brightness temperature by analyzing data from calibration frames. The found relationship is applied to non-calibration frames, which are the measurements of an object of interest such as asteroids and other planetary objects that MIRO encounters during its operation. This software characterizes the gain fluctuations statistically and determines which method to estimate gain between calibration frames. For example, if the fluctuation is lower than a statistically significant level, the averaging method is used to estimate the gain between the calibration frames. If the
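
    The cold/warm target sequence described above reduces to a standard two-point gain/offset calibration; the counts and target temperatures in this sketch are invented, and it is not the MIRO processing software.

    ```python
    import numpy as np

    def two_point_calibration(counts_cold, counts_warm, T_cold, T_warm):
        """Return a function mapping raw counts to brightness temperature [K]."""
        gain = (np.mean(counts_warm) - np.mean(counts_cold)) / (T_warm - T_cold)  # counts per K
        offset = np.mean(counts_cold) - gain * T_cold                             # counts at 0 K
        return lambda counts: (np.asarray(counts) - offset) / gain

    # Hypothetical calibration frames (six cold + six warm) bracketing the science frames
    cal = two_point_calibration(counts_cold=[1012, 1008, 1011, 1009, 1010, 1013],
                                counts_warm=[2490, 2494, 2488, 2493, 2491, 2492],
                                T_cold=80.0, T_warm=300.0)
    T_b = cal([1650, 1702, 1688])   # brightness temperatures of the target frames [K]
    ```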

  14. The MeqTrees software system and its use for third-generation calibration of radio interferometers

    NASA Astrophysics Data System (ADS)

    Noordam, J. E.; Smirnov, O. M.

    2010-12-01

    Context. The formulation of the radio interferometer measurement equation (RIME) for a generic radio telescope by Hamaker et al. has provided us with an elegant mathematical apparatus for better understanding, simulation and calibration of existing and future instruments. The calibration of the new radio telescopes (LOFAR, SKA) would be unthinkable without the RIME formalism, and new software to exploit it. Aims: The MeqTrees software system is designed to implement numerical models, and to solve for arbitrary subsets of their parameters. It may be applied to many problems, but was originally geared towards implementing Measurement Equations in radio astronomy for the purposes of simulation and calibration. The technical goal of MeqTrees is to provide a tool for rapid implementation of such models, while offering performance comparable to hand-written code. We are also pursuing the wider goal of increasing the rate of evolution of radio astronomical software, by offering a tool that facilitates rapid experimentation, and exchange of ideas (and scripts). Methods: MeqTrees is implemented as a Python-based front-end called the meqbrowser, and an efficient (C++-based) computational back-end called the meqserver. Numerical models are defined on the front-end via a Python-based Tree Definition Language (TDL), then rapidly executed on the back-end. The use of TDL facilitates an extremely short turn-around time (hours rather than weeks or months) for experimentation with new ideas. This is also helped by unprecedented visualization capabilities for all final and intermediate results. A flexible data model and a number of important optimizations in the back-end ensures that the numerical performance is comparable to that of hand-written code. Results: MeqTrees is already widely used as the simulation tool for new instruments (LOFAR, SKA) and technologies (focal plane arrays). It has demonstrated that it can achieve a noise-limited dynamic range in excess of a million, on

  15. Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2011-01-01

    A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain gage outputs. The iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match. The new method circumvents this limitation: it adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained, fitting each gage output as a function of both the original and the additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example discussed in the paper illustrates the application of the new method to a realistic simulation of a temperature-dependent calibration data set for a six-component balance.
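    A schematic illustration of the augmentation idea follows, using synthetic data and linear terms only; the variable names, the synthetic coefficients, and the single-step (non-iterative) load prediction are assumptions for clarity, not the paper's actual formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_points = 200
    loads = rng.uniform(-1.0, 1.0, size=(n_points, 6))        # six balance load components
    temperature = rng.uniform(15.0, 35.0, size=(n_points, 1))  # extra independent variable
    X = np.hstack([loads, temperature])                        # 7 independent variables

    # Synthetic gage outputs: dominated by the loads, with a small temperature effect.
    true_C = rng.normal(size=(7, 6))
    true_C[6, :] *= 0.01
    outputs = X @ true_C + 1e-4 * rng.normal(size=(n_points, 6))  # six gage outputs

    # Augment the dependent side with the temperature itself, so that the number of
    # dependent variables (6 outputs + temperature) matches the 7 independent variables.
    Y = np.hstack([outputs, temperature])

    # Least-squares fit of a square 7x7 coefficient matrix; the columns belonging to the
    # six gage outputs are unchanged by the augmentation.
    C, *_ = np.linalg.lstsq(X, Y, rcond=None)

    # Inverting gives a data-reduction matrix that predicts loads (and temperature) from
    # measured outputs plus temperature; the real method applies this iteratively and
    # with higher-order regression terms.
    data_reduction = np.linalg.inv(C)
    predicted = Y @ data_reduction
    print(np.max(np.abs(predicted[:, :6] - loads)))  # small residual on the predicted loads
    ```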

  16. The Characterization of a Piston Displacement-Type Flowmeter Calibration Facility and the Calibration and Use of Pulsed Output Type Flowmeters

    PubMed Central

    Mattingly, G. E.

    1992-01-01

    Critical measurement performance of fluid flowmeters requires proper and quantified verification data. These data should be generated using calibration and traceability techniques established for these verification purposes. In these calibration techniques, the calibration facility should be well characterized and its components and performance properly traced to pertinent higher standards. The use of this calibrator to calibrate flowmeters should be appropriately established, and the manner in which the calibrated flowmeter is used should be specified in accord with the conditions of the calibration. Three steps are described, and the pertinent equations are given, for an encoded-stroke, piston displacement-type calibrator and a pulsed output flowmeter: 1) characterizing the calibration facility itself, 2) using the characterized facility to calibrate a flowmeter, and 3) using the calibrated flowmeter to make a measurement. It is concluded that, given these equations and proper instrumentation of this type of calibrator, very high levels of performance can be attained and, in turn, used to achieve high fluid flow rate measurement accuracy with pulsed output flowmeters. PMID:28053444
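    A hedged sketch of the basic relations behind those three steps is shown below: deriving the collected volume from an encoded piston stroke, computing the meter K-factor (pulses per unit volume) during calibration, and then using that K-factor in a measurement. The function names and the neglect of temperature, pressure, and timing corrections are simplifying assumptions, not the paper's full equations.

    ```python
    import math

    def collected_volume_litres(bore_m, encoder_counts, metres_per_count):
        """Volume swept by the calibrator piston for a given encoder stroke."""
        stroke_m = encoder_counts * metres_per_count
        area_m2 = math.pi * (bore_m / 2.0) ** 2
        return area_m2 * stroke_m * 1000.0  # m^3 -> litres

    def k_factor(meter_pulses, volume_litres):
        """Pulses per litre established during calibration against the calibrator."""
        return meter_pulses / volume_litres

    def measured_volume(meter_pulses, k):
        """Volume inferred from the calibrated pulsed-output flowmeter in service."""
        return meter_pulses / k

    # Illustrative numbers only.
    V = collected_volume_litres(bore_m=0.10, encoder_counts=50_000, metres_per_count=1e-5)
    K = k_factor(meter_pulses=26_430, volume_litres=V)
    print(V, K, measured_volume(13_215, K))
    ```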

  17. Identification and testing of countermeasures for specific alcohol accident types and problems. Volume 1, Executive summary

    DOT National Transportation Integrated Search

    1984-12-01

    This report summarizes work conducted to investigate the feasibility of developing effective countermeasures directed at specific alcohol-related accidents or problems. In Phase I, literature and accident data were reviewed to determine the scope and...

  18. Iterative Calibration: A Novel Approach for Calibrating the Molecular Clock Using Complex Geological Events.

    PubMed

    Loeza-Quintana, Tzitziki; Adamowicz, Sarah J

    2018-02-01

    During the past 50 years, the molecular clock has become one of the main tools for providing a time scale for the history of life. In the era of robust molecular evolutionary analysis, clock calibration is still one of the most basic steps needing attention. When fossil records are limited, well-dated geological events are the main resource for calibration. However, biogeographic calibrations have often been used in a simplistic manner, for example assuming simultaneous vicariant divergence of multiple sister lineages. Here, we propose a novel iterative calibration approach to define the most appropriate calibration date by seeking congruence between the dates assigned to multiple allopatric divergences and the geological history. Exploring patterns of molecular divergence in 16 trans-Bering sister clades of echinoderms, we demonstrate that the iterative calibration is predominantly advantageous when using complex geological or climatological events-such as the opening/reclosure of the Bering Strait-providing a powerful tool for clock dating that can be applied to other biogeographic calibration systems and further taxa. Using Bayesian analysis, we observed that evolutionary rate variability in the COI-5P gene is generally distributed in a clock-like fashion for Northern echinoderms. The results reveal a large range of genetic divergences, consistent with multiple pulses of trans-Bering migrations. A resulting rate of 2.8% pairwise Kimura-2-parameter sequence divergence per million years is suggested for the COI-5P gene in Northern echinoderms. Given that molecular rates may vary across latitudes and taxa, this study provides a new context for dating the evolutionary history of Arctic marine life.
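    The clock arithmetic implied by the abstract can be illustrated with a toy sketch: with a rate of 2.8% pairwise K2P divergence per million years, a clade's divergence date is simply its percent divergence divided by that rate. The divergence values and the congruence check below are illustrative assumptions, not the study's data or its Bayesian analysis.

    ```python
    RATE_PCT_PER_MY = 2.8  # pairwise K2P divergence per million years (from the abstract)

    def divergence_age_my(k2p_divergence_pct, rate=RATE_PCT_PER_MY):
        """Estimated divergence date in millions of years under a strict clock."""
        return k2p_divergence_pct / rate

    # Hypothetical trans-Bering sister clades with made-up divergences.
    clades = {"clade_A": 5.6, "clade_B": 9.8, "clade_C": 15.4}
    ages = {name: round(divergence_age_my(d), 2) for name, d in clades.items()}

    # The iterative-calibration idea: check whether these dates are congruent with known
    # openings/reclosures of the Bering Strait and, if not, revise the calibration date
    # assigned to the calibrating clade and repeat.
    print(ages)
    ```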

  19. Thin film surface treatments for lowering dust adhesion on Mars Rover calibration targets

    NASA Astrophysics Data System (ADS)

    Sabri, F.; Werhner, T.; Hoskins, J.; Schuerger, A. C.; Hobbs, A. M.; Barreto, J. A.; Britt, D.; Duran, R. A.

    The current generation of calibration targets on the Mars Rovers serves as a color and radiometric reference for the panoramic camera. The targets consist of a transparent silicon-based polymer tinted with either color or grey-scale pigments and cast with a microscopically rough Lambertian surface for a diffuse reflectance pattern. This material has successfully withstood the harsh conditions on Mars. However, the inherent roughness of the Lambertian surface (relative to the particle size of the Martian airborne dust) and the tackiness of the polymer have led to a serious dust accumulation problem. In this work, non-invasive thin film technology was successfully implemented in the design of future-generation calibration targets, leading to a significant reduction in dust adhesion and capture. The new design consists of a μm-thick interfacial layer capped with a nm-thick, optically transparent layer of pure metal. The combination of these two additional layers is effective in burying the relatively rough Lambertian surface while maintaining the diffuse properties of the samples, which is central to their correct operation as calibration targets. A set of these targets is scheduled for flight on the Mars Phoenix mission.

  20. Solid laboratory calibration of a nonimaging spectroradiometer.

    PubMed

    Schaepman, M E; Dangel, S

    2000-07-20

    Field-based nonimaging spectroradiometers are often used in vicarious calibration experiments for airborne or spaceborne imaging spectrometers. The calibration uncertainties associated with these ground measurements contribute substantially to the overall modeling error in radiance- or reflectance-based vicarious calibration experiments. Because of limitations in the radiometric stability of compact field spectroradiometers, vicarious calibration experiments are based primarily on reflectance measurements rather than on radiance measurements. To characterize the overall uncertainty of radiance-based approaches and assess the sources of uncertainty, we carried out a full laboratory calibration. This laboratory calibration of a nonimaging spectroradiometer is based on a measurement plan targeted at achieving a full radiance calibration. The individual calibration steps include characterization of the signal-to-noise ratio, the noise equivalent signal, the dark current, the wavelength calibration, the spectral sampling interval, the nonlinearity, directional and positional effects, the spectral scattering, the field of view, the polarization, the size-of-source effects, and the temperature dependence of a particular instrument. The traceability of the radiance calibration is established to a secondary National Institute of Standards and Technology calibration standard by use of a 95% confidence interval and results in an uncertainty of less than ±7.1% for all spectroradiometer bands.
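    As an illustration of how such individual characterization results are commonly rolled up into an overall calibration uncertainty, the sketch below combines component standard uncertainties in quadrature and expands the result with a coverage factor of about 2 for a 95% confidence interval. This is the generic GUM-style approach under assumed, illustrative component values, not the paper's actual uncertainty budget.

    ```python
    import math

    # Illustrative relative standard uncertainties (in %) for a few calibration steps.
    components_pct = {
        "traceable lamp standard": 2.0,
        "noise / dark current": 1.0,
        "nonlinearity": 0.5,
        "wavelength calibration": 0.8,
        "polarization and size-of-source": 0.7,
        "temperature dependence": 1.2,
    }

    # Root-sum-square combination, then expansion with coverage factor k = 2 (approx. 95%).
    combined_std = math.sqrt(sum(u ** 2 for u in components_pct.values()))
    expanded_95 = 2.0 * combined_std
    print(f"combined: {combined_std:.2f}%  expanded (k=2): {expanded_95:.2f}%")
    ```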