Sample records for software calibration procedures

  1. Improving Building Energy Simulation Programs Through Diagnostic Testing (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2012-02-01

    New test procedure evaluates quality and accuracy of energy analysis tools for the residential building retrofit market. Reducing the energy use of existing homes in the United States offers significant energy-saving opportunities, which can be identified through building simulation software tools that calculate optimal packages of efficiency measures. To improve the accuracy of energy analysis for residential buildings, the National Renewable Energy Laboratory's (NREL) Buildings Research team developed the Building Energy Simulation Test for Existing Homes (BESTEST-EX), a method for diagnosing and correcting errors in building energy audit software and calibration procedures. BESTEST-EX consists of building physics and utility bill calibration test cases, which software developers can use to compare their tools' simulation findings to reference results generated with state-of-the-art simulation tools. Overall, the BESTEST-EX methodology: (1) Tests software predictions of retrofit energy savings in existing homes; (2) Ensures building physics calculations and utility bill calibration procedures perform to a minimum standard; and (3) Quantifies impacts of uncertainties in input audit data and occupant behavior. BESTEST-EX is helping software developers identify and correct bugs in their software, as well as develop and test utility bill calibration procedures.

  2. Automation is an Effective Way to Improve Quality of Verification (Calibration) of Measuring Instruments

    NASA Astrophysics Data System (ADS)

    Golobokov, M.; Danilevich, S.

    2018-04-01

    To assess calibration reliability and automate such assessment, procedures for data collection and a simulation study of a thermal imager calibration procedure have been developed. The existing calibration techniques do not always provide high reliability. A new method for analyzing existing calibration techniques and developing new, efficient ones has been suggested and tested. A type of software has been studied that automatically generates instrument calibration reports, monitors their proper configuration, processes measurement results, and assesses instrument validity. The use of such software reduces the man-hours spent finalizing calibration data by a factor of 2 to 5 and eliminates a whole set of typical operator errors.

  3. Jeff Cheatham, senior metrologist

    NASA Image and Video Library

    2015-01-27

    Jeff Cheatham, senior metrologist at the Marshall Metrology and Calibration Laboratory, spent 12 years developing 2,400 automated software procedures used for calibration and testing of space vehicles and equipment.

  4. NREL Improves Building Energy Simulation Programs Through Diagnostic Testing (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2012-01-01

    This technical highlight describes NREL research to develop Building Energy Simulation Test for Existing Homes (BESTEST-EX) to increase the quality and accuracy of energy analysis tools for the building retrofit market. Researchers at the National Renewable Energy Laboratory (NREL) have developed a new test procedure to increase the quality and accuracy of energy analysis tools for the building retrofit market. The Building Energy Simulation Test for Existing Homes (BESTEST-EX) is a test procedure that enables software developers to evaluate the performance of their audit tools in modeling energy use and savings in existing homes when utility bills are available for model calibration. Similar to NREL's previous energy analysis tests, such as HERS BESTEST and other BESTEST suites included in ANSI/ASHRAE Standard 140, BESTEST-EX compares software simulation findings to reference results generated with state-of-the-art simulation tools such as EnergyPlus, SUNREL, and DOE-2.1E. The BESTEST-EX methodology: (1) Tests software predictions of retrofit energy savings in existing homes; (2) Ensures building physics calculations and utility bill calibration procedures perform to a minimum standard; and (3) Quantifies impacts of uncertainties in input audit data and occupant behavior. BESTEST-EX includes building physics and utility bill calibration test cases. The diagram illustrates the utility bill calibration test cases. Participants are given input ranges and synthetic utility bills. Software tools use the utility bills to calibrate key model inputs and predict energy savings for the retrofit cases. Participant energy savings predictions using calibrated models are compared to NREL predictions using state-of-the-art building energy simulation programs.

  5. BESTEST-EX | Buildings | NREL

    Science.gov Websites

    BESTEST-EX is a method for testing home energy audit software and associated calibration methods. When completed, the ANSI/RESNET SMOT will specify test procedures for evaluating calibration methods used in conjunction with predicting building energy use…

  6. Liquid Scintillation Counting - Packard Triple-Label Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torretto, P. A.

    2017-03-23

    The Radiological Measurements Laboratory (RML) maintains and operates nine Packard Liquid Scintillation Counters (LSCs). These counters were obtained through various sources and were generally purchased as 2500, 2700 or 3100 series counters. In 2004/2005 the software and firmware on the counters were upgraded. The counters are now designated as 3100 series counters running the Quantasmart software package. Thus, a single procedure can be used to calibrate and operate the Packard LSCs.

  7. Calibration of the ROSAT HRI Spectral Response

    NASA Technical Reports Server (NTRS)

    Prestwich, Andrea

    1998-01-01

    The ROSAT High Resolution Imager has a limited (2-band) spectral response. This spectral capability can give X-ray hardness ratios on spatial scales of 5 arcseconds. The spectral response of the center of the detector was calibrated before the launch of ROSAT, but the gain decreases with time and also is a function of position on the detector. To complicate matters further, the satellite is "wobbled", possibly moving a source across several spatial gain states. These difficulties have prevented the spectral response of the ROSAT HRI from being used for scientific measurements. We have used Bright Earth data and in-flight calibration sources to map the spatial and temporal gain changes, and written software which will allow ROSAT users to generate a calibrated XSPEC response matrix and hence determine a calibrated hardness ratio. In this report, we describe the calibration procedure and show how to obtain a response matrix. In Section 2 we give an overview of the calibration procedure, and in Section 3 we give a summary of HRI spatial and temporal gain variations. Section 4 describes the routines used to determine the gain distribution of a source. In Sections 5 and 6, we describe in detail how the Bright Earth database and calibration sources are used to derive a corrected response matrix for a given observation. Finally, Section 7 describes how to use the software.

  8. Calibration of the ROSAT HRI Spectral Response

    NASA Technical Reports Server (NTRS)

    Prestwich, Andrea H.; Silverman, John; McDowell, Jonathan; Callanan, Paul; Snowden, Steve

    2000-01-01

    The ROSAT High Resolution Imager has a limited (2-band) spectral response. This spectral capability can give X-ray hardness ratios on spatial scales of 5 arcseconds. The spectral response of the center of the detector was calibrated before the launch of ROSAT, but the gain decreases with time and also is a function of position on the detector. To complicate matters further, the satellite is 'wobbled', possibly moving a source across several spatial gain states. These difficulties have prevented the spectral response of the ROSAT High Resolution Imager (HRI) from being used for scientific measurements. We have used Bright Earth data and in-flight calibration sources to map the spatial and temporal gain changes, and written software which will allow ROSAT users to generate a calibrated XSPEC (an x ray spectral fitting package) response matrix and hence determine a calibrated hardness ratio. In this report, we describe the calibration procedure and show how to obtain a response matrix. In Section 2 we give an overview of the calibration procedure, and in Section 3 we give a summary of HRI spatial and temporal gain variations. Section 4 describes the routines used to determine the gain distribution of a source. In Sections 5 and 6, we describe in detail how the Bright Earth database and calibration sources are used to derive a corrected response matrix for a given observation. Finally, Section 7 describes how to use the software.

  9. Obtaining continuous BrAC/BAC estimates in the field: A hybrid system integrating transdermal alcohol biosensor, Intellidrink smartphone app, and BrAC Estimator software tools.

    PubMed

    Luczak, Susan E; Hawkins, Ashley L; Dai, Zheng; Wichmann, Raphael; Wang, Chunming; Rosen, I Gary

    2018-08-01

    Biosensors have been developed to measure transdermal alcohol concentration (TAC), but converting TAC into interpretable indices of blood/breath alcohol concentration (BAC/BrAC) is difficult because of variations that occur in TAC across individuals, drinking episodes, and devices. We have developed mathematical models and the BrAC Estimator software for calibrating and inverting TAC into quantifiable BrAC estimates (eBrAC). The calibration protocol to determine the individualized parameters for a specific individual wearing a specific device requires a drinking session in which BrAC and TAC measurements are obtained simultaneously. This calibration protocol was originally conducted in the laboratory with breath analyzers used to produce the BrAC data. Here we develop and test an alternative calibration protocol using drinking diary data collected in the field with the smartphone app Intellidrink to produce the BrAC calibration data. We compared BrAC Estimator software results for 11 drinking episodes collected by an expert user when using Intellidrink versus breath analyzer measurements as BrAC calibration data. Inversion phase results indicated the Intellidrink calibration protocol produced similar eBrAC curves and captured peak eBrAC to within 0.0003%, time of peak eBrAC to within 18 min, and area under the eBrAC curve to within 0.025% alcohol-hours as the breath analyzer calibration protocol. This study provides evidence that drinking diary data can be used in place of breath analyzer data in the BrAC Estimator software calibration procedure, which can reduce participant and researcher burden and expand the potential software user pool beyond researchers studying participants who can drink in the laboratory. Copyright © 2017. Published by Elsevier Ltd.
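    The comparison metrics named in this abstract (peak eBrAC, time of peak, and area under the eBrAC curve) can be summarized with a short script. The sketch below is not the authors' BrAC Estimator code; the curves, names and numbers are hypothetical, and it only illustrates how two calibration protocols might be compared on those three summary statistics.

    ```python
    import numpy as np

    def brac_curve_metrics(t_hours, ebrac):
        """Summarize an estimated BrAC curve: peak value, time of peak, and AUC."""
        t = np.asarray(t_hours, dtype=float)
        y = np.asarray(ebrac, dtype=float)
        i_peak = int(np.argmax(y))
        # trapezoidal area under the curve, in % alcohol-hours
        auc = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))
        return y[i_peak], t[i_peak], auc

    # Hypothetical eBrAC curves from two calibration protocols (illustrative shapes only)
    t = np.linspace(0.0, 6.0, 61)                        # hours
    curve_a = 0.05 * np.exp(-((t - 2.0) ** 2) / 1.5)     # e.g. breath-analyzer calibrated
    curve_b = 0.0503 * np.exp(-((t - 2.2) ** 2) / 1.5)   # e.g. drinking-diary calibrated

    pa, ta, aa = brac_curve_metrics(t, curve_a)
    pb, tb, ab = brac_curve_metrics(t, curve_b)
    print(f"delta peak = {abs(pa - pb):.4f} %, delta t_peak = {abs(ta - tb) * 60:.1f} min, "
          f"delta AUC = {abs(aa - ab):.4f} %-hours")
    ```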

  10. Development of a new calibration procedure and its experimental validation applied to a human motion capture system.

    PubMed

    Royo Sánchez, Ana Cristina; Aguilar Martín, Juan José; Santolaria Mazo, Jorge

    2014-12-01

    Motion capture systems are often used for checking and analyzing human motion in biomechanical applications. It is important, in this context, that the systems provide the best possible accuracy. Among existing capture systems, optical systems are those with the highest accuracy. In this paper, the development of a new calibration procedure for optical human motion capture systems is presented. The performance and effectiveness of that new calibration procedure are also checked by experimental validation. The new calibration procedure consists of two stages. In the first stage, initial estimators of intrinsic and extrinsic parameters are sought. The camera calibration method used in this stage is the one proposed by Tsai. These parameters are determined from the camera characteristics, the spatial position of the camera, and the center of the capture volume. In the second stage, a simultaneous nonlinear optimization of all parameters is performed to identify the optimal values, which minimize the objective function. The objective function, in this case, minimizes two errors. The first error is the distance error between two markers placed in a wand. The second error is the error of position and orientation of the retroreflective markers of a static calibration object. The real co-ordinates of the two objects are calibrated in a co-ordinate measuring machine (CMM). The OrthoBio system is used to validate the new calibration procedure. Results are 90% lower than those from the previous calibration software and broadly comparable with results from a similarly configured Vicon system.
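    The second calibration stage described above combines two kinds of residuals (the wand inter-marker distance error and the position error of a CMM-measured static object) into one nonlinear least-squares objective. The sketch below reduces that idea to a toy model, refining only a global scale and offset rather than the full intrinsic and extrinsic camera parameters; all data and parameter names are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Illustrative data: mis-scaled reconstructions of wand markers and of a static
    # calibration object whose true marker coordinates were measured on a CMM.
    rng = np.random.default_rng(0)
    true_scale, true_offset = 1.002, np.array([0.5, -0.3, 0.2])   # mm-level errors
    wand_length = 500.0                                            # mm, known

    wand_a_true = rng.uniform(0, 2000, size=(50, 3))
    direction = rng.normal(size=(50, 3))
    direction /= np.linalg.norm(direction, axis=1, keepdims=True)
    wand_b_true = wand_a_true + wand_length * direction
    object_true = rng.uniform(0, 2000, size=(10, 3))

    def distort(p):
        """What the (miscalibrated) capture system reports."""
        return (p - true_offset) / true_scale

    wand_a_raw, wand_b_raw, object_raw = map(distort, (wand_a_true, wand_b_true, object_true))

    def residuals(theta):
        scale, offset = theta[0], theta[1:4]
        correct = lambda p: scale * p + offset
        # (1) wand inter-marker distance error
        d = np.linalg.norm(correct(wand_a_raw) - correct(wand_b_raw), axis=1)
        r_wand = d - wand_length
        # (2) position error of the static object's markers against the CMM reference
        r_obj = (correct(object_raw) - object_true).ravel()
        return np.concatenate([r_wand, r_obj])

    fit = least_squares(residuals, x0=np.array([1.0, 0.0, 0.0, 0.0]))
    print("estimated scale and offset:", fit.x)
    ```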

  11. New software tools for enhanced precision in robot-assisted laser phonomicrosurgery.

    PubMed

    Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G

    2012-01-01

    This paper describes a new software package created to enhance precision during robot-assisted laser phonomicrosurgery procedures. The new software is composed of three tools for camera calibration, automatic tumor segmentation, and laser tracking. These were designed and developed to improve the outcome of this demanding microsurgical technique, and were tested herein to produce quantitative performance data. The experimental setup was based on the motorized laser micromanipulator created by Istituto Italiano di Tecnologia and the experimental protocols followed are fully described in this paper. The results show the new tools are robust and effective: The camera calibration tool reduced residual errors (RMSE) to 0.009 ± 0.002 mm under 40× microscope magnification; the automatic tumor segmentation tool resulted in deep lesion segmentations comparable to manual segmentations (RMSE= 0.160 ± 0.028 mm under 40× magnification); and the laser tracker tool proved to be reliable even during cutting procedures (RMSE= 0.073 ± 0.023 mm under 40× magnification). These results demonstrate the new software package can provide excellent improvements to the previous microsurgical system, leading to important enhancements in surgical outcome.

  12. Building Energy Simulation Test for Existing Homes (BESTEST-EX) (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judkoff, R.; Neymark, J.; Polly, B.

    2011-12-01

    This presentation discusses the goals of NREL Analysis Accuracy R&D; BESTEST-EX goals; what BESTEST-EX is; how it works; 'Building Physics' cases; 'Building Physics' reference results; 'utility bill calibration' cases; limitations and potential future work. Goals of NREL Analysis Accuracy R&D are: (1) Provide industry with the tools and technical information needed to improve the accuracy and consistency of analysis methods; (2) Reduce the risks associated with purchasing, financing, and selling energy efficiency upgrades; and (3) Enhance software and input collection methods considering impacts on accuracy, cost, and time of energy assessments. BESTEST-EX goals are: (1) Test software predictions of retrofit energy savings in existing homes; (2) Ensure building physics calculations and utility bill calibration procedures perform up to a minimum standard; and (3) Quantify impact of uncertainties in input audit data and occupant behavior. BESTEST-EX is a repeatable procedure that tests how well audit software predictions compare to the current state of the art in building energy simulation. There is no direct truth standard. However, the reference software tools have been subjected to validation testing, including comparisons with empirical data.
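    One way a developer might use the reference results is to check whether their tool's predicted retrofit savings fall within the spread of the reference programs. The sketch below assumes made-up savings numbers and a simple min-max range check; the actual BESTEST-EX acceptance ranges and example results are defined in the published test procedure.

    ```python
    # Hypothetical reference savings (kWh) from three state-of-the-art engines
    reference_savings = {
        "air_sealing":      [1450.0, 1520.0, 1490.0],
        "attic_insulation": [2310.0, 2450.0, 2380.0],
    }
    # Hypothetical predictions from the audit tool under test
    tool_savings = {"air_sealing": 1505.0, "attic_insulation": 2290.0}

    for case, refs in reference_savings.items():
        lo, hi = min(refs), max(refs)
        pred = tool_savings[case]
        status = "within" if lo <= pred <= hi else "outside"
        print(f"{case}: prediction {pred:.0f} kWh is {status} the reference range [{lo:.0f}, {hi:.0f}]")
    ```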

  13. LV software support for supersonic flow analysis

    NASA Technical Reports Server (NTRS)

    Bell, William A.

    1991-01-01

    During 1991, the software developed allowed an operator to configure and check out the TSI, Inc. laser velocimeter (LV) system prior to a run. This setup procedure established the operating conditions for the TSI MI-990 multichannel interface and the RMR-1989 rotating machinery resolver. In addition to initializing the instruments, the software package provides a means of specifying LV calibration constants, controlling the sampling process, and identifying the test parameters.

  14. Users manual for an expert system (HSPEXP) for calibration of the Hydrological Simulation Program--Fortran

    USGS Publications Warehouse

    Lumb, A.M.; McCammon, R.B.; Kittle, J.L.

    1994-01-01

    Expert system software was developed to assist less experienced modelers with calibration of a watershed model and to facilitate the interaction between the modeler and the modeling process not provided by mathematical optimization. A prototype was developed with artificial intelligence software tools, a knowledge engineer, and two domain experts. The manual procedures used by the domain experts were identified and the prototype was then coded by the knowledge engineer. The expert system consists of a set of hierarchical rules designed to guide the calibration of the model through a systematic evaluation of model parameters. When the prototype was completed and tested, it was rewritten for portability and operational use and was named HSPEXP. The watershed model Hydrological Simulation Program--Fortran (HSPF) is used in the expert system. This report is the users manual for HSPEXP and contains a discussion of the concepts and detailed steps and examples for using the software. The system has been tested on watersheds in the States of Washington and Maryland, and the system correctly identified the model parameters to be adjusted and the adjustments led to improved calibration.

  15. In Search of Easy-to-Use Methods for Calibrating ADCP's for Velocity and Discharge Measurements

    USGS Publications Warehouse

    Oberg, K.; ,

    2002-01-01

    A cost-effective procedure for calibrating acoustic Doppler current profilers (ADCPs) in the field was presented. The advantages and disadvantages of various methods used for calibrating ADCPs were discussed. The proposed method requires the use of a differential global positioning system (DGPS) with sub-meter accuracy and standard software for collecting ADCP data. The method involves traversing a long (400-800 meter) course at a constant compass heading and speed while collecting simultaneous DGPS and ADCP data.
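    A minimal sketch of the kind of comparison such a field calibration enables is shown below: the mean boat speed derived from sub-meter DGPS fixes over the course is compared with the mean bottom-track speed reported by the ADCP to estimate a bias. The data, names and the simple averaging are assumptions for illustration, not the authors' procedure.

    ```python
    import numpy as np

    def mean_speed_from_dgps(times_s, easting_m, northing_m):
        """Mean boat speed over a straight course from successive DGPS fixes (m/s)."""
        dx = np.diff(easting_m)
        dy = np.diff(northing_m)
        dt = np.diff(times_s)
        return float(np.sum(np.hypot(dx, dy)) / np.sum(dt))

    # Hypothetical 600 m course traversed at roughly constant heading and speed
    t = np.arange(0.0, 301.0, 1.0)                                       # seconds
    east = 2.0 * t + np.random.default_rng(1).normal(0, 0.3, t.size)     # ~2 m/s, 0.3 m fix noise
    north = np.zeros_like(t)

    v_dgps = mean_speed_from_dgps(t, east, north)
    v_adcp_bt = 1.96        # hypothetical mean bottom-track speed reported by the ADCP
    bias_pct = (v_adcp_bt - v_dgps) / v_dgps * 100.0
    print(f"DGPS {v_dgps:.3f} m/s vs ADCP bottom track {v_adcp_bt:.3f} m/s -> bias {bias_pct:+.1f}%")
    ```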

  16. Development of an automated film-reading system for ballistic ranges

    NASA Technical Reports Server (NTRS)

    Yates, Leslie A.

    1992-01-01

    Software for an automated film-reading system that uses personal computers and digitized shadowgraphs is described. The software identifies pixels associated with fiducial-line and model images, and least-squares procedures are used to calculate the positions and orientations of the images. Automated position and orientation readings for sphere and cone models are compared to those obtained using a manual film reader. When facility calibration errors are removed from these readings, the accuracy of the automated readings is better than the pixel resolution, and it is equal to, or better than, the manual readings. The effects of film-reading and facility-calibration errors on calculated aerodynamic coefficients are discussed.

  17. Nitrous oxide emissions from cropland: a procedure for calibrating the DayCent biogeochemical model using inverse modelling

    USGS Publications Warehouse

    Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.

    2013-01-01

    DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon and nutrients in crop, grassland, forest and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional “trial and error” approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data were used for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved the model performance by reducing the sum of weighted squared residual differences between measured and modelled outputs by up to 67 %. For the calibration period, simulation with the default model parameter values underestimated mean daily N2O flux by 98 %. After parameter estimation, the model underestimated the mean daily fluxes by 35 %. During the validation period, the calibrated model reduced the sum of weighted squared residuals by 20 % relative to the default simulation. The sensitivity analysis provides important insights into the model structure and guidance for model improvement.
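    PEST minimizes a weighted sum of squared residuals between observations and model outputs. The sketch below, with hypothetical flux values and unit weights, shows how that objective and the percentage reduction quoted in the abstract are computed; it is not the DayCent/PEST coupling itself.

    ```python
    import numpy as np

    def weighted_phi(observed, simulated, weights):
        """PEST-style objective: sum of squared weighted residuals."""
        r = weights * (np.asarray(observed) - np.asarray(simulated))
        return float(np.sum(r ** 2))

    obs = np.array([12.0, 30.5, 8.2, 45.1, 22.3])            # hypothetical daily N2O fluxes
    sim_default = np.array([2.1, 10.0, 1.5, 14.0, 6.0])       # run with default parameters
    sim_calibrated = np.array([9.5, 24.0, 6.8, 38.0, 18.5])   # run after inverse modelling
    w = np.ones_like(obs)                                      # unit weights for illustration

    phi_default = weighted_phi(obs, sim_default, w)
    phi_calibrated = weighted_phi(obs, sim_calibrated, w)
    print(f"objective reduced by {(phi_default - phi_calibrated) / phi_default * 100:.0f}%")
    ```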

  18. The CCD Photometric Calibration Cookbook

    NASA Astrophysics Data System (ADS)

    Palmer, J.; Davenhall, A. C.

    This cookbook presents simple recipes for the photometric calibration of CCD frames. Using these recipes you can calibrate the brightness of objects measured in CCD frames into magnitudes in standard photometric systems, such as the Johnson-Morgan UBV system. The recipes use standard software available at all Starlink sites. The topics covered include: selecting standard stars, measuring instrumental magnitudes and calibrating instrumental magnitudes into a standard system. The recipes are appropriate for use with data acquired with optical CCDs and filters, operated in standard ways, and describe the usual calibration technique of observing standard stars. The software is robust and reliable, but the techniques are usually not suitable where very high accuracy is required. In addition to the recipes and scripts, sufficient background material is presented to explain the procedures and techniques used. The treatment is deliberately practical rather than theoretical, in keeping with the aim of providing advice on the actual calibration of observations. This cookbook is aimed firmly at people who are new to astronomical photometry. Typical readers might have a set of photometric observations to reduce (perhaps observed by a colleague) or be planning a programme of photometric observations, perhaps for the first time. No prior knowledge of astronomical photometry is assumed. The cookbook is not aimed at experts in astronomical photometry. Many finer points are omitted for clarity and brevity. Also, in order to make the most accurate possible calibration of high-precision photometry, it is usually necessary to use bespoke software tailored to the observing programme and photometric system you are using.
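    The core of the calibration recipe is fitting a transformation from instrumental to standard magnitudes using observations of standard stars. The sketch below assumes the common form V = v_inst + zp - k*X + c*(B-V) and made-up measurements; the cookbook's actual recipes use Starlink software rather than this ad-hoc least-squares fit.

    ```python
    import numpy as np

    # Hypothetical standard-star observations: instrumental magnitude, airmass, catalogue colour
    v_inst = np.array([14.21, 13.05, 15.10, 12.80, 14.65])
    airmass = np.array([1.10, 1.35, 1.80, 1.20, 1.55])
    b_minus_v = np.array([0.45, 0.80, 0.30, 1.10, 0.60])
    v_catalogue = np.array([13.95, 12.72, 14.78, 12.40, 14.33])

    # Model: V = v_inst + zp - k * X + c * (B - V); solve for zp, k, c by least squares
    A = np.column_stack([np.ones_like(airmass), -airmass, b_minus_v])
    coeffs, *_ = np.linalg.lstsq(A, v_catalogue - v_inst, rcond=None)
    zp, k, c = coeffs
    print(f"zero point = {zp:.3f}, extinction k = {k:.3f} mag/airmass, colour term c = {c:.3f}")

    # Apply the calibration to a programme star (hypothetical values)
    v_cal = 14.02 + zp - k * 1.25 + c * 0.55
    print(f"calibrated V = {v_cal:.3f}")
    ```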

  19. Psychophysica: Mathematica notebooks for psychophysical experiments (cinematica--psychometrica--quest)

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Solomon, J. A.

    1997-01-01

    Psychophysica is a set of software tools for psychophysical research. Functions are provided for calibrated visual displays, for fitting and plotting of psychometric functions, and for the QUEST adaptive staircase procedure. The functions are written in the Mathematica programming language.

  20. Precise Haptic Device Co-Location for Visuo-Haptic Augmented Reality.

    PubMed

    Eck, Ulrich; Pankratz, Frieder; Sandor, Christian; Klinker, Gudrun; Laga, Hamid

    2015-12-01

    Visuo-haptic augmented reality systems enable users to see and touch digital information that is embedded in the real world. PHANToM haptic devices are often employed to provide haptic feedback. Precise co-location of computer-generated graphics and the haptic stylus is necessary to provide a realistic user experience. Previous work has focused on calibration procedures that compensate for the non-linear position error caused by inaccuracies in the joint angle sensors. In this article we present a more complete procedure that additionally compensates for errors in the gimbal sensors and improves position calibration. The proposed procedure further includes software-based temporal alignment of sensor data and a method for the estimation of a reference for position calibration, resulting in increased robustness against haptic device initialization and external tracker noise. We designed our procedure to require minimal user input to maximize usability. We conducted an extensive evaluation with two different PHANToMs, two different optical trackers, and a mechanical tracker. Compared to state-of-the-art calibration procedures, our approach significantly improves the co-location of the haptic stylus. This results in higher fidelity visual and haptic augmentations, which are crucial for fine-motor tasks in areas such as medical training simulators, assembly planning tools, or rapid prototyping applications.

  1. Improved accuracy in ground-based facilities - Development of an automated film-reading system for ballistic ranges

    NASA Technical Reports Server (NTRS)

    Yates, Leslie A.

    1992-01-01

    Software for an automated film-reading system that uses personal computers and digitized shadowgraphs is described. The software identifies pixels associated with fiducial-line and model images, and least-squares procedures are used to calculate the positions and orientations of the images. Automated position and orientation readings for sphere and cone models are compared to those obtained using a manual film reader. When facility calibration errors are removed from these readings, the accuracy of the automated readings is better than the pixel resolution, and it is equal to, or better than, the manual readings. The effects of film-reading and facility-calibration errors on calculated aerodynamic coefficients are discussed.

  2. Calibration of areal surface topography measuring instruments

    NASA Astrophysics Data System (ADS)

    Seewig, J.; Eifler, M.

    2017-06-01

    The ISO standards which are related to the calibration of areal surface topography measuring instruments are the ISO 25178-6xx series, which defines the relevant metrological characteristics for the calibration of different measuring principles, and the ISO 25178-7xx series, which defines the actual calibration procedures. As the field of areal measurement is however not yet fully standardized, there are still open questions to be addressed which are subject to current research. Based on this, selected research results of the authors in this area are presented. This includes the design and fabrication of areal material measures. For this topic, two examples are presented: the direct laser writing of a stepless material measure for the calibration of the height axis, which is based on the Abbott curve, and the manufacturing of a Siemens star for the determination of the lateral resolution limit. Based on these results, a new definition for the resolution criterion, the small-scale fidelity, which is still under discussion, is also presented. Additionally, a software solution for automated calibration procedures is outlined.

  3. Medical color displays and their calibration

    NASA Astrophysics Data System (ADS)

    Fan, Jiahua; Roehrig, Hans; Dallas, W.; Krupinski, Elizabeth

    2009-08-01

    Color displays are increasingly used for medical imaging, replacing the traditional monochrome displays in radiology for multi-modality applications, 3D representation applications, etc. Color displays are also used increasingly because of the widespread application of Tele-Medicine, Tele-Dermatology and Digital Pathology. At this time, there is no concerted effort to establish calibration procedures for this diverse range of color displays in Telemedicine and in other areas of the medical field. Using a colorimeter to measure the display luminance and chrominance properties, as well as some processing software, we developed a first attempt at a color calibration protocol for the medical imaging field.

  4. SU-E-T-749: Thorough Calibration of MOSFET Dosimeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plenkovich, D; Thomas, J

    Purpose: To improve the accuracy of the MOSFET calibration procedure by performing the measurement several times and calculating the average value of the calibration factor for various photon and electron energies. Methods: The output of three photon and six electron beams of Varian Trilogy linear accelerator SN 5878 was calibrated. Five reinforced standard sensitivity MOSFET dosimeters were placed in the calibration jig and connected to the Reader Module. Seven centimeters of Virtual Water was used as backscatter material. The MOSFET dosimeters were covered with 1.5 cm thick bolus for the regular and SRS 6 MV beams, 3 cm bolus for the 15 MV beam, 1.5 cm bolus for the 6 MeV electron beam, and 2 cm bolus for the electron energies of 9, 12, 15, 18, and 22 MeV. The dosimeters were exposed to 100 MU, and the calibration factor was determined using the mobileMOSFET software. To improve the accuracy of calibration, this procedure was repeated ten times and the calibration factors were averaged. Results: As the number of calibrations increased, the variability of the calibration factors of different dosimeters decreased. After ten calibrations, the calibration factors for all five dosimeters were within 1% of one another for all energies, except 6 MV SRS photons and 6 MeV electrons, for which the variability was 2%. Conclusions: The described process results in calibration factors which are almost independent of modality or energy. Once calibrated, the dosimeters may be used for in-vivo dosimetry or for daily verification of the beam output. Measurement of the radiation dose under bolus and scatter to the eye are examples of frequent use of calibrated MOSFET dosimeters. The calibration factor determined for full build-up is used under these circumstances. To the best of our knowledge, such a thorough procedure for calibrating MOSFET dosimeters has not been reported previously. Best Medical Canada provided MOSFET dosimeters for this project.
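    A hedged sketch of the averaging step is given below: a per-exposure calibration factor (sensor reading per unit dose) is computed for each of the ten 100 MU exposures and then averaged, with the spread used as a consistency check. The readings and the assumption that 100 MU delivers a nominal 100 cGy at the calibration geometry are illustrative, not data from this work.

    ```python
    import numpy as np

    # Hypothetical threshold-voltage shifts (mV) for one dosimeter over ten 100 MU exposures,
    # each assumed to deliver a nominal 100 cGy at the calibration depth
    readings_mv = np.array([278.1, 280.5, 279.2, 281.0, 277.8,
                            279.9, 280.2, 278.6, 279.5, 280.8])
    delivered_cgy = 100.0

    cf_each = readings_mv / delivered_cgy        # mV per cGy, one value per exposure
    cf_mean = cf_each.mean()
    spread_pct = (cf_each.max() - cf_each.min()) / cf_mean * 100.0
    print(f"mean calibration factor = {cf_mean:.3f} mV/cGy, run-to-run spread = {spread_pct:.1f}%")
    ```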

  5. Crossed hot-wire data acquisition and reduction system

    NASA Technical Reports Server (NTRS)

    Westphal, R. V.; Mehta, R. D.

    1984-01-01

    A system for rapid computerized calibration, acquisition, and processing of data from a crossed hot-wire anemometer is described. Advantages of the system are its speed, minimal use of analog electronics, and improved accuracy of the resulting data. Two components of mean velocity and turbulence statistics up to third order are provided by the data reduction. Details of the hardware, calibration procedures, response equations, software, and sample results from measurements in a turbulent plane mixing layer are presented.

  6. Simulation of streamflow in the Pleasant, Narraguagus, Sheepscot, and Royal Rivers, Maine, using watershed models

    USGS Publications Warehouse

    Dudley, Robert W.; Nielsen, Martha G.

    2011-01-01

    The U.S. Geological Survey (USGS) began a study in 2008 to investigate anticipated changes in summer streamflows and stream temperatures in four coastal Maine river basins and the potential effects of those changes on populations of endangered Atlantic salmon. To achieve this purpose, it was necessary to characterize the quantity and timing of streamflow in these rivers by developing and evaluating a distributed-parameter watershed model for a part of each river basin by using the USGS Precipitation-Runoff Modeling System (PRMS). The GIS (geographic information system) Weasel, a USGS software application, was used to delineate the four study basins and their many subbasins, and to derive parameters for their geographic features. The models were calibrated using a four-step optimization procedure in which model output was evaluated against four datasets for calibrating solar radiation, potential evapotranspiration, annual and seasonal water balances, and daily streamflows. The calibration procedure involved thousands of model runs that used the USGS software application Luca (Let us calibrate). Luca uses the Shuffled Complex Evolution (SCE) global search algorithm to calibrate the model parameters. The calibrated watershed models performed satisfactorily, in that Nash-Sutcliffe efficiency (NSE) statistic values for the calibration periods ranged from 0.59 to 0.75 (on a scale of negative infinity to 1) and NSE statistic values for the evaluation periods ranged from 0.55 to 0.73. The calibrated watershed models simulate daily streamflow at many locations in each study basin. These models enable natural resources managers to characterize the timing and amount of streamflow in order to support a variety of water-resources efforts including water-quality calculations, assessments of water use, modeling of population dynamics and migration of Atlantic salmon, modeling and assessment of habitat, and simulation of anticipated changes to streamflow and water temperature resulting from changes forecast for air temperature and precipitation.
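    The goodness-of-fit statistic quoted above, the Nash-Sutcliffe efficiency, compares the squared model errors with the variance of the observations. A minimal sketch with hypothetical daily streamflows follows; it is independent of the PRMS and Luca software named in the abstract.

    ```python
    import numpy as np

    def nash_sutcliffe(observed, simulated):
        """Nash-Sutcliffe efficiency: 1 is a perfect fit, values <= 0 indicate a poor model."""
        obs = np.asarray(observed, dtype=float)
        sim = np.asarray(simulated, dtype=float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # Hypothetical observed and simulated daily streamflows (m^3/s)
    obs = np.array([12.0, 15.5, 30.2, 22.1, 18.4, 14.9, 13.0])
    sim = np.array([11.2, 16.8, 27.5, 23.9, 17.1, 15.6, 12.4])
    print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
    ```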

  7. The ISO SWS on-line system

    NASA Technical Reports Server (NTRS)

    Roelfsema, P. R.; Kester, D. J. M.; Wesselius, P. R.; Wieprech, E.; Sym, N.

    1992-01-01

    The software which is currently being developed for the Short Wavelength Spectrometer (SWS) of the Infrared Space Observatory (ISO) is described. The spectrometer has a wide range of capabilities in the 2-45 micron infrared band. SWS contains two independent gratings, one for the long and one for the short wavelength section of the band. With the gratings, a spectral resolution of approximately 1000 to 2500 can be obtained. The instrument also contains two Fabry-Perot interferometers yielding a resolution between approximately 1000 and 20000. Software is currently being developed for the acquisition, calibration, and analysis of SWS data. The software is firstly required to run in a pipeline mode without human interaction, to process data as they are received from the telescope. However, both for testing and calibration of the instrument as well as for evaluation of the planned operating procedures, the software should also be suitable for interactive use. Thirdly, the same software will be used for long-term characterization of the instrument. The software must work properly within the environment designed by the European Space Agency (ESA) for the spacecraft operations. As a result, strict constraints are put on I/O devices, throughput, etc.

  8. Geometric Calibration and Validation of Ultracam Aerial Sensors

    NASA Astrophysics Data System (ADS)

    Gruber, Michael; Schachinger, Bernhard; Muick, Marc; Neuner, Christian; Tschemmernegg, Helfried

    2016-03-01

    We present details of the calibration and validation procedure of UltraCam Aerial Camera systems. Results from the laboratory calibration and from validation flights are presented for both the large-format nadir cameras and the oblique cameras. Thus in this contribution we show results from the UltraCam Eagle and the UltraCam Falcon, both nadir mapping cameras, and the UltraCam Osprey, our oblique camera system. This sensor offers a mapping-grade nadir component together with four oblique camera heads. The geometric processing after the flight mission is covered by the UltraMap software product, so we present details about the workflow as well. The first part consists of the initial post-processing, which combines image information with camera parameters derived from the laboratory calibration. The second part, the traditional automated aerial triangulation (AAT), is the step from single images to blocks and enables an additional optimization process. We also present some special features of our software, which are designed to better support the operator in analyzing large blocks of aerial images and judging the quality of the photogrammetric set-up.

  9. TweezPal - Optical tweezers analysis and calibration software

    NASA Astrophysics Data System (ADS)

    Osterman, Natan

    2010-11-01

    Optical tweezers, a powerful tool for optical trapping, micromanipulation and force transduction, have in recent years become a standard technique commonly used in many research laboratories and university courses. Knowledge about the optical force acting on a trapped object can be gained only after a calibration procedure which has to be performed (by an expert) for each type of trapped objects. In this paper we present TweezPal, a user-friendly, standalone Windows software tool for optical tweezers analysis and calibration. Using TweezPal, the procedure can be performed in a matter of minutes even by non-expert users. The calibration is based on the Brownian motion of a particle trapped in a stationary optical trap, which is being monitored using video or photodiode detection. The particle trajectory is imported into the software which instantly calculates position histogram, trapping potential, stiffness and anisotropy. Program summary: Program title: TweezPal; Catalogue identifier: AEGR_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGR_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 44 891; No. of bytes in distributed program, including test data, etc.: 792 653; Distribution format: tar.gz; Programming language: Borland Delphi; Computer: Any PC running Microsoft Windows; Operating system: Windows 95, 98, 2000, XP, Vista, 7; RAM: 12 Mbytes; Classification: 3, 4.14, 18, 23. Nature of problem: Quick, robust and user-friendly calibration and analysis of optical tweezers; the optical trap is calibrated from the trajectory of a trapped particle undergoing Brownian motion in a stationary optical trap (input data) using two methods. Solution method: Elimination of the experimental drift in position data; direct calculation of the trap stiffness from the positional variance; calculation of the 1D optical trapping potential from the positional distribution of data points; trap stiffness calculation by fitting a parabola to the trapping potential; presentation of X-Y positional density for close inspection of the 2D trapping potential; calculation of the trap anisotropy. Running time: seconds.
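    The two calibration methods listed in the solution summary (stiffness from the positional variance and from a parabolic fit to the trapping potential) follow directly from the equipartition theorem and Boltzmann statistics. The sketch below applies both to a simulated bead trajectory; it is an independent illustration, not TweezPal's Delphi implementation.

    ```python
    import numpy as np

    kB, T = 1.380649e-23, 295.0        # Boltzmann constant (J/K), temperature (K)

    # Hypothetical trajectory of a trapped bead (metres); in practice this comes from
    # video or photodiode tracking after drift removal
    rng = np.random.default_rng(2)
    k_true = 1.0e-5                    # N/m, trap stiffness used to simulate the data
    x = rng.normal(0.0, np.sqrt(kB * T / k_true), size=200_000)

    # (1) Stiffness from the positional variance (equipartition theorem)
    k_var = kB * T / np.var(x)

    # (2) Stiffness from a parabolic fit to the Boltzmann potential U(x) = -kB*T*ln p(x)
    counts, edges = np.histogram(x, bins=80)
    centres_um = 0.5 * (edges[:-1] + edges[1:]) * 1e6   # micrometres, for a well-conditioned fit
    mask = counts > 0
    U_red = -np.log(counts[mask] / counts.sum())        # potential in units of kB*T
    a = np.polyfit(centres_um[mask], U_red, 2)[0]       # quadratic coefficient (1/um^2)
    k_pot = 2.0 * a * kB * T * 1e12                     # convert 1/um^2 to 1/m^2

    print(f"stiffness from variance: {k_var:.2e} N/m, from potential fit: {k_pot:.2e} N/m")
    ```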

  10. A frequentist approach to computer model calibration

    DOE PAGES

    Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.

    2016-05-05

    The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. The practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.

  11. Multiple IMU system test plan, volume 4. [subroutines for space shuttle requirements

    NASA Technical Reports Server (NTRS)

    Landey, M.; Vincent, K. T., Jr.; Whittredge, R. S.

    1974-01-01

    Operating procedures for this redundant system are described. A test plan is developed with two objectives. First, performance of the hardware and software delivered is demonstrated. Second, applicability of multiple IMU systems to the space shuttle mission is shown through detailed experiments with FDI algorithms and other multiple IMU software: gyrocompassing, calibration, and navigation. Gimbal flip is examined in light of its possible detrimental effects on FDI and navigation. For Vol. 3, see N74-10296.

  12. VS2DI: Model use, calibration, and validation

    USGS Publications Warehouse

    Healy, Richard W.; Essaid, Hedeff I.

    2012-01-01

    VS2DI is a software package for simulating water, solute, and heat transport through soils or other porous media under conditions of variable saturation. The package contains a graphical preprocessor for constructing simulations, a postprocessor for displaying simulation results, and numerical models that solve for flow and solute transport (VS2DT) and flow and heat transport (VS2DH). Flow is described by the Richards equation, and solute and heat transport are described by advection-dispersion equations; the finite-difference method is used to solve these equations. Problems can be simulated in one, two, or three (assuming radial symmetry) dimensions. This article provides an overview of calibration techniques that have been used with VS2DI; included is a detailed description of calibration procedures used in simulating the interaction between groundwater and a stream fed by drainage from agricultural fields in central Indiana. Brief descriptions of VS2DI and the various types of problems that have been addressed with the software package are also presented.

  13. Performance analysis of a film dosimetric quality assurance procedure for IMRT with regard to the employment of quantitative evaluation methods.

    PubMed

    Winkler, Peter; Zurl, Brigitte; Guss, Helmuth; Kindl, Peter; Stuecklschweiger, Georg

    2005-02-21

    A system for dosimetric verification of intensity-modulated radiotherapy (IMRT) treatment plans using absolute calibrated radiographic films is presented. At our institution this verification procedure is performed for all IMRT treatment plans prior to patient irradiation. Therefore clinical treatment plans are transferred to a phantom and recalculated. Composite treatment plans are irradiated to a single film. Film density to absolute dose conversion is performed automatically based on a single calibration film. A software application encompassing film calibration, 2D registration of measured and calculated dose distributions, image fusion, and a number of visual and quantitative evaluation utilities was developed. The main topic of this paper is a performance analysis of this quality assurance procedure, with regard to the specification of tolerance levels for quantitative evaluations. Spatial and dosimetric precision and accuracy were determined for the entire procedure, comprising all possible sources of error. The overall dosimetric and spatial measurement uncertainties obtained thereby were 1.9% and 0.8 mm, respectively. Based on these results, we specified 5% dose difference and 3 mm distance-to-agreement as our tolerance levels for patient-specific quality assurance for IMRT treatments.
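    The 5% / 3 mm tolerance can be applied point by point as a composite dose-difference / distance-to-agreement test. The sketch below is a simplified one-dimensional version with synthetic profiles, not the evaluation implemented in the authors' software; the crossing-based distance-to-agreement is only approximate to within the sample spacing.

    ```python
    import numpy as np

    def passes_tolerance(x_mm, dose_meas, dose_calc, dose_tol=0.05, dta_tol_mm=3.0):
        """Simplified 1D composite check: each point passes if the local dose difference
        is within dose_tol (fraction of the calculated value) OR the calculated profile
        reaches the measured dose within dta_tol_mm (distance-to-agreement)."""
        passed = np.zeros(x_mm.size, dtype=bool)
        for i, (xi, dm) in enumerate(zip(x_mm, dose_meas)):
            if abs(dm - dose_calc[i]) <= dose_tol * dose_calc[i]:
                passed[i] = True
                continue
            # distance to the nearest sample where the calculated profile crosses dm
            crossings = np.where(np.diff(np.sign(dose_calc - dm)) != 0)[0]
            if crossings.size:
                passed[i] = np.min(np.abs(x_mm[crossings] - xi)) <= dta_tol_mm
        return passed

    # Hypothetical measured (film) and calculated relative dose profiles across a field edge
    x = np.linspace(-30, 30, 61)                          # mm
    calc = 1.0 / (1.0 + np.exp((x - 10.0) / 2.0))
    meas = 1.0 / (1.0 + np.exp((x - 11.0) / 2.0)) * 1.02  # 1 mm shift, 2% dose offset
    ok = passes_tolerance(x, meas, calc)
    print(f"{ok.mean() * 100:.1f}% of points within 5% / 3 mm")
    ```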

  14. Instrumentation & Data Acquisition System (DAS) Engineer

    NASA Technical Reports Server (NTRS)

    Jackson, Markus Deon

    2015-01-01

    The primary job of an Instrumentation and Data Acquisition System (DAS) Engineer is to properly measure physical phenomenon of hardware using appropriate instrumentation and DAS equipment designed to record data during a specified test of the hardware. A DAS system includes a CPU or processor, a data storage device such as a hard drive, a data communication bus such as Universal Serial Bus, software to control the DAS system processes like calibrations, recording of data and processing of data. It also includes signal conditioning amplifiers, and certain sensors for specified measurements. My internship responsibilities have included testing and adjusting Pacific Instruments Model 9355 signal conditioning amplifiers, writing and performing checkout procedures, writing and performing calibration procedures while learning the basics of instrumentation.

  15. Calibration of medium-resolution monochrome cathode ray tube displays for the purpose of board examinations.

    PubMed

    Evanoff, M G; Roehrig, H; Giffords, R S; Capp, M P; Rovinelli, R J; Hartmann, W H; Merritt, C

    2001-06-01

    This report discusses calibration and set-up procedures for medium-resolution monochrome cathode ray tubes (CRTs) taken in preparation for the oral portion of the board examination of the American Board of Radiology (ABR). The board examinations took place in more than 100 rooms of a hotel. There was one display station (a computer and the associated CRT display) in each of the hotel rooms used for the examinations. The examinations covered the radiologic specialties cardiopulmonary, musculoskeletal, gastrointestinal, vascular, pediatric, and genitourinary. The software used for set-up and calibration was the VeriLUM 4.0 package from Image Smiths in Germantown, MD. The set-up included setting minimum luminance and maximum luminance, as well as positioning of the CRT in each examination room with respect to reflections of room lights. The calibration of the grey scale rendition was done to meet the Digital Imaging and Communications in Medicine (DICOM) Part 14 Standard Display Function. We describe these procedures and present the calibration data in tables and graphs, listing initial values of minimum luminance, maximum luminance, and grey scale rendition (DICOM Part 14 standard display function). Changes of these parameters over the duration of the examination were observed and recorded on 11 monitors in a particular room. These changes strongly suggest that all calibrated CRTs be monitored over the duration of the examination. In addition, other CRT performance data affecting image quality, such as spatial resolution, should be included in set-up and image quality-control procedures.

  16. The M68HC11 gripper controller electronics

    NASA Technical Reports Server (NTRS)

    Kelley, Robert B.; Bethel, Jeffrey

    1991-01-01

    This document describes the instrumentation, operational theory, circuit implementation, calibration procedures, and general notes for the CIRSSE general purpose pneumatic hand. The mechanical design and the control software are discussed. The circuit design, PCB layout, hand instrumentation, and controller construction described in detail in this document are the result of a senior project.

  17. Improving the Traceability of Meteorological Measurements at Automatic Weather Stations in Thailand

    NASA Astrophysics Data System (ADS)

    Keawprasert, T.; Sinhaneti, T.; Phuuntharo, P.; Phanakulwijit, S.; Nimsamer, A.

    2017-08-01

    A joint project between the National Institute of Metrology Thailand (NIMT) and the Thai Meteorological Department (TMD) was established to improve the traceability of meteorological measurements at automatic weather stations (AWSs) in Thailand. The project aimed to improve the traceability of air temperature, relative humidity and atmospheric pressure by implementing on-site calibration facilities and developing new calibration procedures. First, new portable calibration facilities for air temperature, humidity and pressure were set up as working standards of the TMD. A portable humidity calibrator was applied as a uniform and stable source for calibration of thermo-hygrometers. A dew-point hygrometer was employed as the reference hygrometer and a platinum resistance thermometer (PRT) traceable to NIMT was used as the reference thermometer. The uniformity and stability in both temperature and relative humidity were characterized at NIMT. A transportable pressure calibrator was used for calibration of the air pressure sensor. The estimated overall uncertainty of the calibration setup is 0.2 K for air temperature, 1.0 % for relative humidity and 0.2 hPa for atmospheric pressure. Second, on-site calibration procedures were developed, and four AWSs in the central and northern parts of Thailand were chosen as pilot stations for on-site calibration using the new calibration setups and the developed procedures. At each station, air temperature was calibrated at the minimum, average and maximum temperatures of the year; relative humidity at 20 %, 55 % and 90 % at the average air temperature of that station; and atmospheric pressure over a pressure range based on one-year station statistics, at ambient temperature. Additional in-field uncertainty contributions, such as the temperature dependence of the relative humidity measurement, were evaluated and included in the overall uncertainty budget. Preliminary calibration results showed that using a separate PRT probe at these AWSs would be recommended for improving the accuracy of air temperature measurement. For relative humidity measurement, the data logger software needs to be upgraded to achieve an accuracy better than 3 %. For atmospheric pressure measurement, a higher-accuracy barometer traceable to NIMT could be used to reduce the calibration uncertainty to below 0.2 hPa.
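    The overall uncertainties quoted above are typically obtained by combining individual standard-uncertainty contributions in quadrature, GUM-style. The sketch below uses hypothetical contributions for an on-site air-temperature calibration; the values and labels are assumptions, not NIMT's actual budget.

    ```python
    import math

    # Hypothetical standard-uncertainty contributions for an on-site air-temperature
    # calibration (all in kelvin)
    contributions = {
        "reference PRT calibration": 0.05,
        "source uniformity": 0.06,
        "source stability": 0.04,
        "sensor resolution": 0.03,
        "ambient / self-heating effects": 0.05,
    }

    u_combined = math.sqrt(sum(u ** 2 for u in contributions.values()))
    U_expanded = 2.0 * u_combined        # coverage factor k = 2 (approx. 95 % coverage)
    print(f"combined u = {u_combined:.3f} K, expanded U (k=2) = {U_expanded:.2f} K")
    ```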

  18. Procedure for the Selection and Validation of a Calibration Model I-Description and Application.

    PubMed

    Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D

    2017-05-01

    Calibration model selection is required for all quantitative methods in toxicology and more broadly in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and weighting factor correctly modeling the data. A mis-selection of the calibration model will degrade quality control (QC) accuracy, with errors of up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. The success rate of this scheme is on average 40% higher than a traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the upper limit of quantification and lower limit of quantification replicate measurements. When weighting was required, the choice between 1/x and 1/x² was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, model order was selected through a partial F-test. The chosen calibration model was validated through Cramer-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
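    The first decision in the scheme, whether weighting is needed, rests on an F-test of the replicate variances at the upper and lower limits of quantification. A minimal sketch with hypothetical replicates follows; the significance level (here 0.01) and the data are assumptions, not values from the paper.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical replicate instrument responses at the LLOQ and ULOQ
    lloq = np.array([0.101, 0.098, 0.105, 0.097, 0.103])
    uloq = np.array([10.2, 9.6, 10.9, 9.4, 10.7])

    f_stat = np.var(uloq, ddof=1) / np.var(lloq, ddof=1)
    df = lloq.size - 1
    f_crit = stats.f.ppf(0.99, dfn=df, dfd=df)       # one-sided test at alpha = 0.01 (assumed)

    if f_stat > f_crit:
        print(f"F = {f_stat:.1f} > {f_crit:.1f}: heteroscedastic data, weighting required")
    else:
        print(f"F = {f_stat:.1f} <= {f_crit:.1f}: homoscedastic data, no weighting needed")
    ```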

  19. Quantitative X-ray Map Analyser (Q-XRMA): A new GIS-based statistical approach to Mineral Image Analysis

    NASA Astrophysics Data System (ADS)

    Ortolano, Gaetano; Visalli, Roberto; Godard, Gaston; Cirrincione, Rosolino

    2018-06-01

    We present a new ArcGIS®-based tool developed in the Python programming language for calibrating EDS/WDS X-ray element maps, with the aim of acquiring quantitative information of petrological interest. The calibration procedure is based on a multiple linear regression technique that takes into account interdependence among elements and is constrained by the stoichiometry of minerals. The procedure requires an appropriate number of spot analyses for use as internal standards and provides several test indexes for a rapid check of calibration accuracy. The code is based on an earlier image-processing tool designed primarily for classifying minerals in X-ray element maps; the original Python code has now been enhanced to yield calibrated maps of mineral end-members or the chemical parameters of each classified mineral. The semi-automated procedure can be used to extract a dataset that is automatically stored within queryable tables. As a case study, the software was applied to an amphibolite-facies garnet-bearing micaschist. The calibrated images obtained for both anhydrous (i.e., garnet and plagioclase) and hydrous (i.e., biotite) phases show a good fit with corresponding electron microprobe analyses. This new GIS-based tool package can thus find useful application in petrology and materials science research. Moreover, the huge quantity of data extracted opens new opportunities for the development of a thin-section microchemical database that, using a GIS platform, can be linked with other major global geoscience databases.

  20. Estimating BrAC from transdermal alcohol concentration data using the BrAC estimator software program.

    PubMed

    Luczak, Susan E; Rosen, I Gary

    2014-08-01

    Transdermal alcohol sensor (TAS) devices have the potential to allow researchers and clinicians to unobtrusively collect naturalistic drinking data for weeks at a time, but the transdermal alcohol concentration (TAC) data these devices produce do not consistently correspond with breath alcohol concentration (BrAC) data. We present and test the BrAC Estimator software, a program designed to produce individualized estimates of BrAC from TAC data by fitting mathematical models to a specific person wearing a specific TAS device. Two TAS devices were worn simultaneously by 1 participant for 18 days. The trial began with a laboratory alcohol session to calibrate the model and was followed by a field trial with 10 drinking episodes. Model parameter estimates and fit indices were compared across drinking episodes to examine the calibration phase of the software. Software-generated estimates of peak BrAC, time of peak BrAC, and area under the BrAC curve were compared with breath analyzer data to examine the estimation phase of the software. In this single-subject design with breath analyzer peak BrAC scores ranging from 0.013 to 0.057, the software created consistent models for the 2 TAS devices, despite differences in raw TAC data, and was able to compensate for the attenuation of peak BrAC and latency of the time of peak BrAC that are typically observed in TAC data. This software program represents an important initial step for making it possible for non-mathematician researchers and clinicians to obtain estimates of BrAC from TAC data in naturalistic drinking environments. Future research with more participants and greater variation in alcohol consumption levels and patterns, as well as examination of gain scheduling calibration procedures and nonlinear models of diffusion, will help to determine how precise these software models can become. Copyright © 2014 by the Research Society on Alcoholism.

  1. New Software for Ensemble Creation in the Spitzer-Space-Telescope Operations Database

    NASA Technical Reports Server (NTRS)

    Laher, Russ; Rector, John

    2004-01-01

    Some of the computer pipelines used to process digital astronomical images from NASA's Spitzer Space Telescope require multiple input images, in order to generate high-level science and calibration products. The images are grouped into ensembles according to well documented ensemble-creation rules by making explicit associations in the operations Informix database at the Spitzer Science Center (SSC). The advantage of this approach is that a simple database query can retrieve the required ensemble of pipeline input images. New and improved software for ensemble creation has been developed. The new software is much faster than the existing software because it uses pre-compiled database stored-procedures written in Informix SPL (SQL programming language). The new software is also more flexible because the ensemble creation rules are now stored in and read from newly defined database tables. This table-driven approach was implemented so that ensemble rules can be inserted, updated, or deleted without modifying software.

  2. Calibration Software for Use with Jurassicprok

    NASA Technical Reports Server (NTRS)

    Chapin, Elaine; Hensley, Scott; Siqueira, Paul

    2004-01-01

    The Jurassicprok Interferometric Calibration Software (also called "Calibration Processor" or simply "CP") estimates the calibration parameters of an airborne synthetic-aperture-radar (SAR) system, the raw measurement data of which are processed by the Jurassicprok software described in the preceding article. Calibration parameters estimated by CP include time delays, baseline offsets, phase screens, and radiometric offsets. CP examines raw radar-pulse data, single-look complex image data, and digital elevation map data. For each type of data, CP compares the actual values with values expected on the basis of ground-truth data. CP then converts the differences between the actual and expected values into updates for the calibration parameters in an interferometric calibration file (ICF) and a radiometric calibration file (RCF) for the particular SAR system. The updated ICF and RCF are used as inputs to both Jurassicprok and to the companion Motion Measurement Processor software (described in the following article) for use in generating calibrated digital elevation maps.

  3. Improving the Thermal, Radial and Temporal Accuracy of the Analytical Ultracentrifuge through External References

    PubMed Central

    Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H.; Lewis, Marc S.; Brautigam, Chad A.; Schuck, Peter; Zhao, Huaying

    2013-01-01

    Sedimentation velocity (SV) is a method based on first-principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton® temperature logger to directly measure the temperature of a spinning rotor, and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration, which were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., doi 10.1016/j.ab.2013.02.011) and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from eleven instruments displayed a significantly reduced standard deviation of ∼ 0.7 %. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. PMID:23711724
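
    Schematically, the external-reference corrections enter the reported sedimentation coefficient as multiplicative factors; the sketch below is a simplified illustration with hypothetical factor names and example values, not the full standard-condition reduction.

      # Simplified sketch: apply temperature, radial and time calibration factors.
      def correct_s(s_measured, temp_factor, radial_factor, time_factor):
          """temp_factor  : correction derived from the iButton rotor-temperature log
          radial_factor   : correction derived from the precision radial mask
          time_factor     : true/recorded elapsed-time ratio from the time calibration"""
          return s_measured * temp_factor * radial_factor * time_factor

      # e.g. a 1% error in the recorded elapsed time alone biases s by 1%.
      s_corrected = correct_s(4.30, temp_factor=1.012, radial_factor=0.996, time_factor=1.002)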

  4. Improving the thermal, radial, and temporal accuracy of the analytical ultracentrifuge through external references.

    PubMed

    Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H; Lewis, Marc S; Brautigam, Chad A; Schuck, Peter; Zhao, Huaying

    2013-09-01

    Sedimentation velocity (SV) is a method based on first principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton temperature logger to directly measure the temperature of a spinning rotor and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration that were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., Anal. Biochem., 437 (2013) 104-108), and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from 11 instruments displayed a significantly reduced standard deviation of approximately 0.7%. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. Published by Elsevier Inc.

  5. Analysis of Photogrammetry Data from ISIM Mockup, June 1, 2007

    NASA Technical Reports Server (NTRS)

    Nowak, Maria; Hill, Mike

    2007-01-01

    During ground testing of the Integrated Science Instrument Module (ISIM) for the James Webb Space Telescope (JWST), the ISIM Optics group plans to use a Photogrammetry Measurement System for cryogenic calibration of specific target points on the ISIM composite structure, the Science Instrument optical benches, and other ground support equipment (GSE). This testing will occur in the Space Environmental Systems (SES) chamber at Goddard Space Flight Center. Close-range photogrammetry is a three-dimensional metrology technique that uses triangulation to locate custom targets in three coordinates from a collection of digital photographs taken at various locations and orientations. The photographs are tied together using coded targets (special targets recognized by the software, which allow the images to be correlated into a three-dimensional map of the targets) and are scaled via well-calibrated scale bars. Photogrammetry solves for the camera locations and the coordinates of the targets simultaneously through the bundling procedure contained in the V-STARS software.

  6. Quality control and assurance for validation of DOS/I measurements

    NASA Astrophysics Data System (ADS)

    Cerussi, Albert; Durkin, Amanda; Kwong, Richard; Quang, Timothy; Hill, Brian; Tromberg, Bruce J.; MacKinnon, Nick; Mantulin, William W.

    2010-02-01

    Ongoing multi-center clinical trials are crucial for Biophotonics to gain acceptance in medical imaging. In these trials, quality control (QC) and assurance (QA) are key to success and provide "data insurance". Quality control and assurance deal with standardization, validation, and compliance of procedures, materials and instrumentation. Specifically, QC/QA involves systematic assessment of testing materials, instrumentation performance, standard operating procedures, data logging, analysis, and reporting. QC and QA are important for FDA accreditation and acceptance by the clinical community. Our Biophotonics research in the Network for Translational Research in Optical Imaging (NTROI) program for breast cancer characterization focuses on QA/QC issues primarily related to the broadband Diffuse Optical Spectroscopy and Imaging (DOS/I) instrumentation, because this is an emerging technology with limited standardized QC/QA in place. In the multi-center trial environment, we implement QA/QC procedures: 1. Standardize and validate calibration standards and procedures. (DOS/I technology requires both frequency domain and spectral calibration procedures using tissue simulating phantoms and reflectance standards, respectively.) 2. Standardize and validate data acquisition, processing and visualization (optimize instrument software-EZDOS; centralize data processing) 3. Monitor, catalog and maintain instrument performance (document performance; modularize maintenance; integrate new technology) 4. Standardize and coordinate trial data entry (from individual sites) into centralized database 5. Monitor, audit and communicate all research procedures (database, teleconferences, training sessions) between participants ensuring "calibration". This manuscript describes our ongoing efforts, successes and challenges implementing these strategies.

  7. APEX/SPIN: a free test platform to measure speech intelligibility.

    PubMed

    Francart, Tom; Hofmann, Michael; Vanthornhout, Jonas; Van Deun, Lieselot; van Wieringen, Astrid; Wouters, Jan

    2017-02-01

    Measuring speech intelligibility in quiet and noise is important in clinical practice and research. An easy-to-use free software platform for conducting speech tests is presented, called APEX/SPIN. The APEX/SPIN platform allows the use of any speech material in combination with any noise. A graphical user interface provides control over a large range of parameters, such as number of loudspeakers, signal-to-noise ratio and parameters of the procedure. An easy-to-use graphical interface is provided for calibration and storage of calibration values. To validate the platform, perception of words in quiet and of sentences in noise was measured both with APEX/SPIN and with an audiometer and CD player, which is a conventional setup in current clinical practice. Five normal-hearing listeners participated in the experimental evaluation. Speech perception results were similar for the APEX/SPIN platform and conventional procedures. APEX/SPIN is a freely available and open-source platform that allows the administration of all kinds of custom speech perception tests and procedures.

  8. DIY soundcard based temperature logging system. Part I: design

    NASA Astrophysics Data System (ADS)

    Nunn, John

    2016-11-01

    This paper aims to enable schools to make their own low-cost temperature logging instrument and to learn something about its calibration in the process. It describes how a thermistor can be integrated into a simple potential divider circuit which is powered with the sound output of a computer and monitored by the microphone input. The voltage across a fixed resistor is recorded and scaled to convert it into a temperature reading in the range 0-100 °C. The calibration process is described with reference to fixed points and the effects of non-linearity are highlighted. An optimised calibration procedure is described which enables sub-degree resolution, and a software program was written which makes it possible to log, display and save temperature changes over a user-determined period of time.
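
    A minimal sketch of the voltage-to-temperature conversion is given below, assuming a Beta-parameter thermistor model; the component values are illustrative examples, not those used in the paper.

      # Convert the voltage across the fixed resistor of the divider to temperature.
      import math

      V_DRIVE = 1.0            # amplitude of the soundcard drive signal (arbitrary units)
      R_FIXED = 10e3           # fixed resistor in the divider, ohms
      R0, T0 = 10e3, 298.15    # thermistor resistance at 25 degC, and 25 degC in kelvin
      BETA = 3950.0            # Beta constant from a typical thermistor datasheet

      def temperature_c(v_fixed):
          """v_fixed: measured voltage across the fixed resistor (same units as V_DRIVE)."""
          r_therm = R_FIXED * (V_DRIVE - v_fixed) / v_fixed   # potential-divider equation
          inv_t = 1.0 / T0 + math.log(r_therm / R0) / BETA    # Beta model, non-linear in T
          return 1.0 / inv_t - 273.15

      # Two-point calibration against fixed points (ice water, boiling water) can then
      # rescale the readings onto the 0-100 degC range described in the paper.
      print(temperature_c(0.5))   # ~25 degC when the two resistances are equal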

  9. A Short Open Calibration (SOC) Technique to Calculate the Propagation Characteristics of Substrate Integrated Waveguide

    DTIC Science & Technology

    2015-07-01

    integrated with the commercial electromagnetic software for accurate extraction of the propagation constant of substrate integrated waveguide (SIW) with... respectively. After that, three distinctive equivalent circuit networks are described for the SOC de-embedding procedure. The propagation constants of SIW with... Finally, the phase and attenuation constants of SIW are derived to demonstrate the propagation and leakage characteristics of SIW.

  10. Achieving consistent color and grayscale presentation on medical color displays

    NASA Astrophysics Data System (ADS)

    Fan, Jiahua; Roehrig, Hans; Dallas, William; Krupinski, Elizabeth A.

    2008-03-01

    Color displays are increasingly used for medical imaging, replacing the traditional monochrome displays in radiology for multi-modality applications, 3D representation applications, etc. Color displays are also used increasingly because of the widespread application of Tele-Medicine, Tele-Dermatology and Digital Pathology. At this time, there is no concerted effort toward calibration procedures for this diverse range of color displays in Telemedicine and in other areas of the medical field. Using a colorimeter to measure the display luminance and chrominance properties, together with some processing software, we developed a first attempt at a color calibration protocol for the medical imaging field.

  11. Development of a calibration protocol and identification of the most sensitive parameters for the particulate biofilm models used in biological wastewater treatment.

    PubMed

    Eldyasti, Ahmed; Nakhla, George; Zhu, Jesse

    2012-05-01

    Biofilm models are valuable tools for process engineers to simulate biological wastewater treatment. In order to enhance the use of biofilm models implemented in contemporary simulation software, model calibration is both necessary and helpful. The aim of this work was to develop a calibration protocol for the particulate biofilm model with the help of a sensitivity analysis of the most important parameters in the biofilm model implemented in BioWin®, and to verify the predictability of the calibration protocol. A case study of a circulating fluidized bed bioreactor (CFBBR) system used for biological nutrient removal (BNR), with a fluidized bed respirometric study of the biofilm stoichiometry and kinetics, was used to verify and validate the proposed calibration protocol. Applying the five stages of the biofilm calibration procedure enhanced the applicability of BioWin®, which was capable of predicting most of the performance parameters with an average percentage error (APE) of 0-20%. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Sensor Integration in a Low Cost Land Mobile Mapping System

    PubMed Central

    Madeira, Sergio; Gonçalves, José A.; Bastos, Luísa

    2012-01-01

    Mobile mapping is a multidisciplinary technique which requires several pieces of dedicated equipment, calibration procedures that must be as rigorous as possible, time synchronization of all acquired data, and software for data processing and extraction of additional information. To decrease the cost and complexity of Mobile Mapping Systems (MMS), the use of less expensive sensors and the simplification of procedures for calibration and data acquisition are mandatory features. This article refers to the use of MMS technology, focusing on the main aspects that need to be addressed to guarantee proper data acquisition and describing the way those aspects were handled in a terrestrial MMS developed at the University of Porto. In this case, the main aim was to implement a low-cost system while maintaining good quality standards of the acquired georeferenced information. The results discussed here show that this goal has been achieved. PMID:22736985

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.

    The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. The practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.

  14. STIS Data Handbook v. 6.0

    NASA Astrophysics Data System (ADS)

    Bostroem, K. A.; Proffitt, C.

    2011-05-01

    This handbook describes data from the Space Telescope Imaging Spectrograph (STIS) onboard the Hubble Space Telescope (HST), and how to manipulate, calibrate, and analyze those data. The current version of the STIS Data Handbook is presented as an independent and self-contained document, extensively built on the contents of version 6 of the HST Data Handbook. Users are referred to a companion volume, Introduction to the HST Data Handbooks, for more general information about the details of acquiring data from the HST archive, HST file formats, and general purpose software for displaying and processing HST data. For detailed information on the capabilities of the instrument, and how to plan observations, users should refer to the STIS Instrument Handbook. For further information and timely updates, users should consult the STIS Web page (http://www.stsci.edu/hst/stis), especially the Document Archive link. In particular, the STScI Analysis Newsletters (STANs) highlight changes in code and calibration procedures and provide other instrument-related news. The Instrument Science Reports (ISRs) present in-depth characterizations of the instrument and detailed explanations of calibration code and procedures. The current edition of the STIS Data Handbook was completed in early-2011. The last major revision was published in January 2002, following the failure of the Side-1 electronics and the successful resumption of operations using Side-2 electronics in the summer of 2001. STIS continued to perform well until the Side-2 electronics failed on 3 August 2004. STIS was successfully repaired during the fourth HST servicing mission (SM4) in May 2009 and has resumed science operations with all channels. A static archive of all STIS data taken prior to the Side-2 failure was prepared in 2006 using the latest calibration code and reference files, and has now replaced On-the-Fly Reprocessing (OTFR) of STIS data. At that time, substantial improvements were made to calibration and pipeline codes and reference files (see Section 1.5). New STIS data taken after the 2009 repair will be processed through OTFR when requested from the HST archive. This will allow the data to be calibrated with the most up-to-date versions of the software and reference files.

  15. Meteor44 Video Meteor Photometry

    NASA Technical Reports Server (NTRS)

    Swift, Wesley R.; Suggs, Robert M.; Cooke, William J.

    2004-01-01

    Meteor44 is a software system developed at MSFC for the calibration and analysis of video meteor data. The dynamic range of the (8-bit) video data is extended by approximately 4 magnitudes for both meteors and stellar images using saturation compensation. Camera- and lens-specific saturation compensation coefficients are derived from artificial variable star laboratory measurements. Saturation compensation significantly increases the number of meteors with measured intensity and improves the estimation of meteoroid mass distribution. Astrometry is automated to determine each image's plate coefficient using appropriate star catalogs. The images are simultaneously intensity calibrated from the contained stars to determine the photon sensitivity and the saturation level referenced above the atmosphere. The camera's spectral response is used to compensate for stellar color index and typical meteor spectra in order to report meteor light curves in traditional visual magnitude units. Recent efforts include improved camera calibration procedures, long-focal-length "streak" meteor photometry and two-station track determination. Meteor44 has been used to analyze data from the 2001, 2002, and 2003 MSFC Leonid observational campaigns as well as several lesser showers. The software is interactive and can be demonstrated using data from recent Leonid campaigns.
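
    The star-based intensity calibration reduces to a photometric zero-point fit; the sketch below omits saturation compensation and the color-index correction described above, and the array names are illustrative.

      # Derive a zero point from field stars, then convert meteor counts to magnitudes.
      import numpy as np

      def zero_point(star_counts, star_catalog_mags):
          inst_mags = -2.5 * np.log10(star_counts)         # instrumental magnitudes
          return np.median(star_catalog_mags - inst_mags)  # robust zero-point estimate

      def meteor_magnitude(meteor_counts, zp):
          return -2.5 * np.log10(meteor_counts) + zp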

  16. MIRO Continuum Calibration for Asteroid Mode

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon

    2011-01-01

    MIRO (Microwave Instrument for the Rosetta Orbiter) is a lightweight, uncooled, dual-frequency heterodyne radiometer. MIRO encountered asteroid Steins in 2008, and during the flyby, MIRO used the Asteroid Mode to measure the emission spectrum of Steins. The Asteroid Mode is one of the seven modes of MIRO operation, and is designed to increase the length of time that a spectral line is in the MIRO pass-band during a flyby of an object. This software is used to calibrate the continuum measurement of Steins' emission power during the asteroid flyby. The MIRO raw measurement data need to be calibrated in order to obtain physically meaningful data. This software calibrates the MIRO raw measurements in digital units to the brightness temperature in Kelvin. The software uses two calibration sequences that are included in the Asteroid Mode. One sequence is at the beginning of the mode, and the other at the end. The first six frames contain the measurement of a cold calibration target, while the last six frames measure a warm calibration target. The targets have known temperatures and are used to provide reference power and gain, which can be used to convert MIRO measurements into brightness temperature. The software was developed to calibrate MIRO continuum measurements from the Asteroid Mode. The software determines the relationship between the raw digital units measured by MIRO and the equivalent brightness temperature by analyzing data from the calibration frames. The resulting relationship is applied to non-calibration frames, which are the measurements of an object of interest such as asteroids and other planetary objects that MIRO encounters during its operation. The software characterizes the gain fluctuations statistically and determines which method to use to estimate the gain between calibration frames. For example, if the fluctuation is lower than a statistically significant level, the averaging method is used to estimate the gain between the calibration frames. If the fluctuation is found to be statistically significant, a linear interpolation of gain and reference power is used to estimate the gain between the calibration frames.
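
    The core of such a two-point calibration can be sketched as follows; the variable names are illustrative, and the gain-stability test that selects between averaging and linear interpolation is indicated only in the closing comment.

      # Two-point (cold/warm target) radiometer calibration sketch.
      import numpy as np

      def gain_offset(counts_cold, counts_warm, t_cold, t_warm):
          """Counts are the mean digital units measured on the two calibration targets."""
          gain = (t_warm - t_cold) / (counts_warm - counts_cold)   # kelvin per count
          offset = t_cold - gain * counts_cold
          return gain, offset

      def to_brightness_temperature(counts, gain, offset):
          return gain * np.asarray(counts) + offset

      # If the gain change between the opening and closing calibration sequences is
      # statistically significant, interpolate the gain linearly in time between them;
      # otherwise use the average of the two estimates.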

  17. Radiometer Calibration and Characterization (RCC) User's Manual: Windows Version 4.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andreas, Afshin M.; Wilcox, Stephen M.

    2016-02-29

    The Radiometer Calibration and Characterization (RCC) software is a data acquisition and data archival system for performing Broadband Outdoor Radiometer Calibrations (BORCAL). RCC provides a unique method of calibrating broadband atmospheric longwave and solar shortwave radiometers using techniques that reduce measurement uncertainty and better characterize a radiometer's response profile. The RCC software automatically monitors and controls many of the components that contribute to uncertainty in an instrument's responsivity. This is a user's manual and guide to the RCC software.

  18. Calibration of a COTS Integration Cost Model Using Local Project Data

    NASA Technical Reports Server (NTRS)

    Boland, Dillard; Coon, Richard; Byers, Kathryn; Levitt, David

    1997-01-01

    The software measures and estimation techniques appropriate to a Commercial Off the Shelf (COTS) integration project differ from those commonly used for custom software development. Labor and schedule estimation tools that model COTS integration are available. Like all estimation tools, they must be calibrated with the organization's local project data. This paper describes the calibration of a commercial model using data collected by the Flight Dynamics Division (FDD) of the NASA Goddard Spaceflight Center (GSFC). The model calibrated is SLIM Release 4.0 from Quantitative Software Management (QSM). By adopting the SLIM reuse model and by treating configuration parameters as lines of code, we were able to establish a consistent calibration for COTS integration projects. The paper summarizes the metrics, the calibration process and results, and the validation of the calibration.

  19. Altimeter waveform software design

    NASA Technical Reports Server (NTRS)

    Hayne, G. S.; Miller, L. S.; Brown, G. S.

    1977-01-01

    Techniques are described for preprocessing raw return waveform data from the GEOS-3 radar altimeter. Topics discussed include: (1) general altimeter data preprocessing to be done at the GEOS-3 Data Processing Center to correct altimeter waveform data for temperature calibrations, to convert between engineering and final data units and to convert telemetered parameter quantities to more appropriate final data distribution values: (2) time "tagging" of altimeter return waveform data quantities to compensate for various delays, misalignments and calculational intervals; (3) data processing procedures for use in estimating spacecraft attitude from altimeter waveform sampling gates; and (4) feasibility of use of a ground-based reflector or transponder to obtain in-flight calibration information on GEOS-3 altimeter performance.

  20. Software For Calibration Of Polarimetric SAR Data

    NASA Technical Reports Server (NTRS)

    Van Zyl, Jakob; Zebker, Howard; Freeman, Anthony; Holt, John; Dubois, Pascale; Chapman, Bruce

    1994-01-01

    POLCAL (Polarimetric Radar Calibration) software tool intended to assist in calibration of synthetic-aperture radar (SAR) systems. In particular, calibrates Stokes-matrix-format data produced as standard product by NASA/Jet Propulsion Laboratory (JPL) airborne imaging synthetic aperture radar (AIRSAR). Version 4.0 of POLCAL is upgrade of version 2.0. New options include automatic absolute calibration of 89/90 data, distributed-target analysis, calibration of nearby scenes with corner reflectors, altitude or roll-angle corrections, and calibration of errors introduced by known topography. Reduces crosstalk and corrects phase calibration without use of ground calibration equipment. Written in FORTRAN 77.

  1. AMBER instrument control software

    NASA Astrophysics Data System (ADS)

    Le Coarer, Etienne P.; Zins, Gerard; Gluck, Laurence; Duvert, Gilles; Driebe, Thomas; Ohnaka, Keiichi; Heininger, Matthias; Connot, Claus; Behrend, Jan; Dugue, Michel; Clausse, Jean Michel; Millour, Florentin

    2004-09-01

    AMBER (Astronomical Multiple BEam Recombiner) is a 3-aperture interferometric recombiner operating between 1 and 2.5 um, for the Very Large Telescope Interferometer (VLTI). The control software of the instrument, based on the VLT Common Software, has been written to comply with specific features of the AMBER hardware, such as the infrared detector read-out modes or piezo stage drivers, as well as with the very specific operation modes of an interferometric instrument. In this respect, the AMBER control software was designed to ensure that all operations, from the preparation of the observations to the control/command of the instrument during the observations, would be kept as simple as possible for the users and operators, opening the use of an interferometric instrument to the largest community of astronomers. Particular attention was given to internal checks and calibration procedures, both to evaluate data quality in real time and to improve the success of long-term UV-plane coverage observations.

  2. Calibration of work zone impact analysis software for Missouri.

    DOT National Transportation Integrated Search

    2013-12-01

    This project calibrated two software programs used for estimating the traffic impacts of work zones. The WZ Spreadsheet : and VISSIM programs were recommended in a previous study by the authors. The two programs were calibrated using : field data fro...

  3. Generating standardized image data for testing and calibrating quantification of volumes, surfaces, lengths, and object counts in fibrous and porous materials using X-ray microtomography.

    PubMed

    Jiřík, Miroslav; Bartoš, Martin; Tomášek, Petr; Malečková, Anna; Kural, Tomáš; Horáková, Jana; Lukáš, David; Suchý, Tomáš; Kochová, Petra; Hubálek Kalbáčová, Marie; Králíčková, Milena; Tonar, Zbyněk

    2018-06-01

    Quantification of the structure and composition of biomaterials using micro-CT requires image segmentation due to the low contrast and overlapping radioopacity of biological materials. The amount of bias introduced by segmentation procedures is generally unknown. We aim to develop software that generates three-dimensional models of fibrous and porous structures with known volumes, surfaces, lengths, and object counts in fibrous materials and to provide a software tool that calibrates quantitative micro-CT assessments. Virtual image stacks were generated using the newly developed software TeIGen, enabling the simulation of micro-CT scans of unconnected tubes, connected tubes, and porosities. A realistic noise generator was incorporated. Forty image stacks were evaluated using micro-CT, and the error between the true known and estimated data was quantified. Starting with geometric primitives, the error of the numerical estimation of surfaces and volumes was eliminated, thereby enabling the quantification of volumes and surfaces of colliding objects. Analysis of the sensitivity of the thresholding upon parameters of generated testing image sets revealed the effects of decreasing resolution and increasing noise on the accuracy of the micro-CT quantification. The size of the error increased with decreasing resolution when the voxel size exceeded 1/10 of the typical object size, which simulated the effect of the smallest details that could still be reliably quantified. Open-source software for calibrating quantitative micro-CT assessments by producing and saving virtually generated image data sets with known morphometric data was made freely available to researchers involved in morphometry of three-dimensional fibrillar and porous structures in micro-CT scans. © 2018 Wiley Periodicals, Inc.
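
    The underlying idea of testing a segmentation step against known ground truth can be sketched in a few lines of NumPy; the example below voxelizes a sphere rather than the tubes and porosities generated by TeIGen, and all sizes and noise levels are arbitrary.

      # Generate a noisy voxelized sphere and compare the thresholded volume
      # with the analytic value.
      import numpy as np

      shape, voxel = (128, 128, 128), 0.05                 # voxel size in mm (example)
      radius = 1.5                                         # sphere radius in mm
      zyx = np.indices(shape) * voxel
      center = np.array(shape)[:, None, None, None] * voxel / 2
      dist = np.sqrt(((zyx - center) ** 2).sum(axis=0))
      volume = (dist <= radius).astype(float)              # ground-truth object

      noisy = volume + np.random.normal(0.0, 0.3, shape)   # simulated acquisition noise
      estimated = (noisy > 0.5).sum() * voxel ** 3         # volume after thresholding
      true = 4.0 / 3.0 * np.pi * radius ** 3
      print(f"relative volume error: {(estimated - true) / true:.3%}")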

  4. Calibration of the LHAASO-KM2A electromagnetic particle detectors using charged particles within the extensive air showers

    NASA Astrophysics Data System (ADS)

    Lv, Hongkui; He, Huihai; Sheng, Xiangdong; Liu, Jia; Chen, Songzhan; Liu, Ye; Hou, Chao; Zhao, Jing; Zhang, Zhongquan; Wu, Sha; Wang, Yaping; Lhaaso Collaboration

    2018-07-01

    In the Large High Altitude Air Shower Observatory (LHAASO), the one square kilometer array (KM2A), with 5242 electromagnetic particle detectors (EDs) and 1171 muon detectors (MDs), is designed to study ultra-high energy gamma-ray astronomy and cosmic ray physics. The remote site and the large number of detectors demand a robust, automatic calibration procedure. In this paper, a self-calibration method which relies on the measurement of charged particles within the extensive air showers is proposed. The method is fully validated by Monte Carlo simulation and successfully applied in a KM2A prototype array experiment. Experimental results show that the self-calibration method can be used to determine the detector time offset constants at the sub-nanosecond level and the number density of particles collected by each ED with an accuracy of a few percent, which are adequate to meet the physical requirements of the LHAASO experiment. This software calibration also offers an ideal method to monitor detector performance in real time for next-generation ground-based EAS experiments covering areas above the square-kilometer scale.

  5. Real-time self-calibration of a tracked augmented reality display

    NASA Astrophysics Data System (ADS)

    Baum, Zachary; Lasso, Andras; Ungi, Tamas; Fichtinger, Gabor

    2016-03-01

    PURPOSE: Augmented reality systems have been proposed for image-guided needle interventions but they have not become widely used in clinical practice due to restrictions such as limited portability, low display refresh rates, and tedious calibration procedures. We propose a handheld tablet-based self-calibrating image overlay system. METHODS: A modular handheld augmented reality viewbox was constructed from a tablet computer and a semi-transparent mirror. A consistent and precise self-calibration method, without the use of any temporary markers, was designed to achieve an accurate calibration of the system. Markers attached to the viewbox and patient are simultaneously tracked using an optical pose tracker to report the position of the patient with respect to a displayed image plane that is visualized in real-time. The software was built using the open-source 3D Slicer application platform's SlicerIGT extension and the PLUS toolkit. RESULTS: The accuracy of the image overlay with image-guided needle interventions yielded a mean absolute position error of 0.99 mm (95th percentile 1.93 mm) in-plane of the overlay and a mean absolute position error of 0.61 mm (95th percentile 1.19 mm) out-of-plane. This accuracy is clinically acceptable for tool guidance during various procedures, such as musculoskeletal injections. CONCLUSION: A self-calibration method was developed and evaluated for a tracked augmented reality display. The results show potential for the use of handheld image overlays in clinical studies with image-guided needle interventions.

  6. Analysis-Software for Hyperspectral Algal Reflectance Probes v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timlin, Jerilyn A.; Reichardt, Thomas A.; Jenson, Travis J.

    This software provides onsite analysis of the hyperspectral reflectance data acquired on an outdoor algal pond by a multichannel, fiber-coupled spectroradiometer. The analysis algorithm is based on numerical inversion of a reflectance model, in which the above-water reflectance is expressed as a function of the single backscattering albedo, which is dependent on the backscatter and absorption coefficients of the algal culture, which are in turn related to the algal biomass and pigment optical activity, respectively. Prior to the development of this software, while raw multichannel data were displayed in real time, analysis required a post-processing procedure to extract the relevant parameters. This software provides the capability to track the temporal variation of such culture parameters in real time, as raw data are being acquired, or can be run in a post-processing mode. The software allows the user to select between different algal species, incorporate the appropriate calibration data, and observe the quality of the resulting model inversions.
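
    A minimal sketch of the model-inversion step is given below, assuming a simple quadratic relationship R(u) = g1*u + g2*u**2 between above-water reflectance and the single backscattering albedo u; the coefficients and the root-finding approach are illustrative placeholders, not those used in the delivered software.

      # Invert R(u) channel by channel for the single backscattering albedo.
      import numpy as np
      from scipy.optimize import brentq

      G1, G2 = 0.0949, 0.0794   # example coefficients only

      def albedo_from_reflectance(r_measured):
          """Valid for reflectances between 0 and G1 + G2."""
          f = lambda u, r: G1 * u + G2 * u ** 2 - r
          return np.array([brentq(f, 0.0, 1.0, args=(r,))
                           for r in np.atleast_1d(r_measured)])

      # With u = b_b / (a + b_b) in hand, the backscatter and absorption coefficients
      # (and hence biomass and pigment proxies) follow once one of them is constrained.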

  7. Development of Eye Dosimeter Using Additive Manufacturing Techniques to Monitor Occupational Eye Lens Exposures to Interventional Radiologists

    NASA Astrophysics Data System (ADS)

    Choi, JungHwan

    In this project, an eye dosimeter was designed for monitoring occupational lens of the eye exposures targeted to interventional radiologists who are often indirectly exposed to scattered radiation from the patient while performing image-guided procedures. The dosimeter was designed with a computer-aided design software to facilitate additive manufacturing techniques to make the dosimeter. The dosimeter consisted of three separate components that are attached to the hinges and the bridge of the occupational worker's protective eyewear. The produced dosimeter was radiologically calibrated to measure the lens dose on an anthropomorphic phantom of the human head. To supplement the physical design, an algorithm was written that prompts the user to input the element responses of the dosimeter, then estimates the average angle, energy, and resulting lens dose of the exposure by comparing the input with the data acquired during the dosimeter calibration procedure. The performance of the calibrated dosimeter (and the algorithm) was evaluated according to guidelines of the American National Standards Institute, and the dosimeter demonstrated a performance that was in compliance with the standard's performance criteria which suggests that the design of the eye dosimeter is feasible.

  8. Energy Reconstruction for Events Detected in TES X-ray Detectors

    NASA Astrophysics Data System (ADS)

    Ceballos, M. T.; Cardiel, N.; Cobo, B.

    2015-09-01

    The processing of the X-ray events detected by a TES (Transition Edge Sensor) device (such as the one that will be proposed in the ESA AO call for instruments for the Athena mission (Nandra et al. 2013) as a high spectral resolution instrument, X-IFU (Barret et al. 2013)), is a multi-step procedure that starts with the detection of the current pulses in a noisy signal and ends up with their energy reconstruction. For this last stage, an energy calibration process is required to convert the pseudo-energies measured in the detector to the real energies of the incoming photons, accounting for possible nonlinearity effects in the detector. We present the details of the energy calibration algorithm we implemented as the last part of the Event Processing software that we are developing for the X-IFU instrument, which permits the calculation of the calibration constants in an analytical way.
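
    As a generic illustration of a gain-scale correction (not the analytical X-IFU algorithm described above), pseudo-energies can be mapped to true energies with a low-order polynomial fitted to known calibration lines; the line energies and pseudo-energy values below are invented for the example.

      # Fit a quadratic gain scale to calibration lines to absorb detector nonlinearity.
      import numpy as np

      known_lines_ev = np.array([5898.75, 6490.45, 8047.78])   # e.g. Mn Ka, Mn Kb, Cu Ka
      measured_pseudo = np.array([5120.0, 5610.0, 6890.0])     # pseudo-energy units (made up)

      coeffs = np.polyfit(measured_pseudo, known_lines_ev, deg=2)

      def calibrate_energy(pseudo):
          """Convert reconstructed pseudo-energies to photon energies in eV."""
          return np.polyval(coeffs, pseudo)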

  9. Photogrammetric 3d Building Reconstruction from Thermal Images

    NASA Astrophysics Data System (ADS)

    Maset, E.; Fusiello, A.; Crosilla, F.; Toldo, R.; Zorzetto, D.

    2017-08-01

    This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that commercial Computer Vision software can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data on image position and attitude or any camera calibration parameters. Moreover, we propose a procedure based on the Iterative Closest Point (ICP) algorithm to create a model that combines the high resolution and geometric accuracy of RGB images with the thermal information derived from TIR images. The process can be carried out entirely by the aforesaid software in a simple and efficient way.

  10. SERPent: Automated reduction and RFI-mitigation software for e-MERLIN

    NASA Astrophysics Data System (ADS)

    Peck, Luke W.; Fenech, Danielle M.

    2013-08-01

    The Scripted E-merlin Rfi-mitigation PipelinE for iNTerferometry (SERPent) is an automated reduction and RFI-mitigation procedure utilising the SumThreshold methodology (Offringa et al., 2010a), originally developed for the LOFAR pipeline. SERPent is written in the Parseltongue language enabling interaction with the Astronomical Image Processing Software (AIPS) program. Moreover, SERPent is a simple 'out of the box' Python script, which is easy to set up and is free of compilers. In addition to the flagging of RFI-affected visibilities, the script also flags antenna zero-amplitude dropouts and Lovell telescope phase calibrator stationary scans inherent to the e-MERLIN system. Both the flagging and computational performances of SERPent are presented here, for e-MERLIN commissioning datasets for both L-band (1.3-1.8 GHz) and C-band (4-8 GHz) observations. RFI typically amounts to <20%-25% for the more problematic L-band observations and <5% for the generally RFI-quieter C-band. The level of RFI detection and flagging is more accurate and delicate than visual manual flagging, with the output immediately ready for AIPS calibration. SERPent is fully parallelised and has been tested on a range of computing systems. The current flagging rate is 110 GB day-1 on a 'high-end' computer (16 CPUs, 100 GB memory), which amounts to ~6.9 GB CPU-1 day-1, with an expected increase in performance when e-MERLIN has completed its commissioning. The refining of automated reduction and calibration procedures is essential for the e-MERLIN legacy projects and future interferometers such as the SKA and the associated pathfinders (MeerKAT and ASKAP), where the vast data sizes (>TB) make traditional astronomer interactions unfeasible.
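
    The flagging core of the SumThreshold method can be sketched in one dimension as follows; the window sizes and the threshold scaling factor rho are illustrative defaults rather than SERPent's tuned values.

      # 1-D SumThreshold: flag runs of samples whose sum exceeds a length-dependent threshold.
      import numpy as np

      def sumthreshold_1d(amp, chi1, rho=1.5, windows=(1, 2, 4, 8, 16)):
          flags = np.zeros(amp.size, dtype=bool)
          for m in windows:
              chi_m = chi1 / rho ** np.log2(m)          # lower threshold for longer runs
              data = np.where(flags, chi_m, amp)        # flagged samples count as chi_m
              csum = np.concatenate(([0.0], np.cumsum(data)))
              window_sums = csum[m:] - csum[:-m]        # sum over every length-m run
              for start in np.flatnonzero(window_sums > m * chi_m):
                  flags[start:start + m] = True         # flag the whole offending run
          return flags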

  11. Cellular Oxygen and Nutrient Sensing in Microgravity Using Time-Resolved Fluorescence Microscopy

    NASA Technical Reports Server (NTRS)

    Szmacinski, Henryk

    2003-01-01

    Oxygen and nutrient sensing is fundamental to the understanding of cell growth and metabolism. This requires identification of optical probes and suitable detection technology without complex calibration procedures. Under this project, Microcosm developed an experimental technique that allows for simultaneous imaging of intra- and inter-cellular events. The technique consists of frequency-domain Fluorescence Lifetime Imaging Microscopy (FLIM), a set of identified oxygen and pH probes, and methods for fabrication of microsensors. Specifications for electronic and optical components of FLIM instrumentation are provided. Hardware and software were developed for data acquisition and analysis. Principles, procedures, and representative images are demonstrated. Suitable lifetime-sensitive oxygen, pH, and glucose probes for intra- and extra-cellular measurements of analyte concentrations have been identified and tested. Lifetime sensing and imaging have been performed using PBS buffer, culture media, and yeast cells as model systems. Spectral specifications, calibration curves, and probe availability are also provided in the report.

  12. TH-CD-207B-11: Multi-Vendor Phantom Study of CT Lung Density Metrics: Is a Reproducibility of Less Than 1 HU Achievable?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen-Mayer, H; Judy, P; Fain, S

    Purpose: To standardize the calibration procedures of CT lung density measurements using low-density reference foams in a phantom, and to demonstrate a reproducibility of less than 1 HU for lung equivalent foam densities measured across CT vendor platforms and protocols. Methods: A phantom study was conducted on CT scanner models from 4 vendors at 100, 120, and 135/140 kVp and 1.5, 3, and 6 mGy dose settings, using a lung density phantom containing air, water, and 3 reference foams (indirectly calibrated) with discrete densities simulating a 5-cm slice of the human chest. Customized segmentation software was used to analyze the images and generate a mean HU and variance for each density for the 22 vendor/protocol combinations. A 3-step calibration process was devised to remove a scanner-dependent parameter using linear regression of the HU value vs the relative electron density. The results were mapped to a single energy (80 keV) for final comparison. Results: The heterogeneity across vendor platforms for each density assessed by a random effects model was reduced by 50% after re-calibration, while the standard deviation of the mean HU values also improved by about the same amount. The 95% CI of the final HU value was within +/−1 HU for all 3 reference foam densities. For the backing lung foam in the phantom (which served as an “unknown”), this CI is +/− 1.6 HU. The kVp and dose settings did not appear to have significant contributions to the variability. Conclusion: With the proposed calibration procedures, the inter-scanner reproducibility of better than 1 HU is demonstrated in the current phantom study for the reference foam densities, but not yet achieved for a test density. The sources of error are being investigated in the next round of scanning with a certified Standard Reference Material for direct calibration. Fain: research funding from GE Healthcare to develop pulmonary MRI techniques. Hoppel: employee of Toshiba Medical Research Institute USA/financial interest with GE Healthcare. M. Fuld: employee of Siemens Healthcare for medical device equipment and software. This project is supported partially by RSNA QIBA Concept Award (Fain), NIH/NIBIB, HHSN268201300071C (Y).
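
    The re-calibration step can be sketched as a per-scanner linear regression of measured HU against the known relative electron densities of the reference foams, followed by re-expression on a common scale; the functions below are a simplified illustration, not the study's full three-step procedure.

      # Per-scanner HU vs relative electron density regression, then rescaling.
      import numpy as np

      def fit_scanner(hu_measured, rel_electron_density):
          slope, intercept = np.polyfit(rel_electron_density, hu_measured, 1)
          return slope, intercept

      def to_common_scale(hu, scanner_fit, reference_fit):
          s_s, i_s = scanner_fit
          s_r, i_r = reference_fit
          red = (hu - i_s) / s_s        # back out the relative electron density
          return s_r * red + i_r        # re-express as HU on the reference scale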

  13. Instruments and Methodologies for the Underwater Tridimensional Digitization and Data Musealization

    NASA Astrophysics Data System (ADS)

    Repola, L.; Memmolo, R.; Signoretti, D.

    2015-04-01

    In research started within the SINAPSIS project of the Università degli Studi Suor Orsola Benincasa, an underwater stereoscopic scanning system has been developed for surveying submerged archaeological sites; it can be integrated with standard systems for geomorphological mapping of the coast. The project involves the construction of hardware consisting of an aluminum frame supporting a pair of GoPro Hero Black Edition cameras, and software for the production of point clouds and the initial processing of data. The software has features for calibrating the stereoscopic vision system, reducing the noise and distortion of images captured underwater, searching for corresponding points in the stereoscopic images using dense and sparse stereo-matching algorithms, and generating and filtering point clouds. Mastery of the methods needed for efficient data acquisition was achieved only after various calibration and survey tests carried out during the excavations envisaged in the project. The current development of the system has allowed the generation of portions of digital models of real submerged scenes. A semi-automatic procedure for global registration of the partial models is under development as a useful aid for the study and musealization of sites.

  14. SU-F-E-19: A Novel Method for TrueBeam Jaw Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corns, R; Zhao, Y; Huang, V

    2016-06-15

    Purpose: A simple jaw calibration method is proposed for Varian TrueBeam using an EPID-Encoder combination that gives accurate field sizes and a homogeneous junction dose. This benefits clinical applications such as mono-isocentric half-beam block breast cancer or head and neck cancer treatment with junction/field matching. Methods: We use an EPID imager with pixel size 0.392 mm × 0.392 mm to determine the radiation jaw position as measured from radio-opaque markers aligned with the crosshair. We acquire two images with different symmetric field sizes and record each individual jaw's encoder values. A linear relationship between each jaw's position and its encoder value is established, from which we predict the encoder values that produce the jaw positions required by TrueBeam's calibration procedure. During TrueBeam's jaw calibration procedure, we move the jaw with the pendant to set the jaw into position using the predicted encoder value. The overall accuracy is under 0.1 mm. Results: Our in-house software analyses images and provides sub-pixel accuracy to determine the field centre and radiation edges (50% dose of the profile). We verified that the TrueBeam encoder provides a reliable linear relationship for each individual jaw position (R²>0.9999), from which the encoder values necessary to set the jaw calibration points (1 cm and 19 cm) are predicted. Junction matching dose inhomogeneities were improved from >±20% to <±6% using this new calibration protocol. However, one technical challenge exists for junction matching, if the collimator walkout is large. Conclusion: Our new TrueBeam jaw calibration method can systematically calibrate the jaws to the crosshair within sub-pixel accuracy and provides both good junction doses and field sizes. This method does not compensate for a larger collimator walkout, but can be used as the underlying foundation for addressing the walkout issue.
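
    The encoder-prediction step reduces to a straight-line fit per jaw; a minimal sketch, with made-up encoder readings in the usage comment, is shown below.

      # Fit encoder value vs measured field edge, then predict calibration-point encoders.
      import numpy as np

      def predict_encoder(encoder_values, measured_edges_cm, targets_cm=(1.0, 19.0)):
          """encoder_values, measured_edges_cm: readings for one jaw from the two
          symmetric-field EPID images (edges taken at the 50% profile level)."""
          slope, intercept = np.polyfit(measured_edges_cm, encoder_values, 1)
          return {t: slope * t + intercept for t in targets_cm}

      # e.g. predict_encoder([12040, 15210], [5.0, 10.0]) -> encoder values for 1 cm and 19 cm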

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, K.; Tsai, H.; Decision and Information Sciences

    The technical basis for extending the Model 9977 shipping package periodic maintenance beyond the one-year interval to a maximum of five years is based on the performance of the O-ring seals and the environmental conditions. The DOE Packaging Certification Program (PCP) has tasked Argonne National Laboratory to develop a Radio-Frequency Identification (RFID) temperature monitoring system for use by the facility personnel at DAF/NTS. The RFID temperature monitoring system consists of the Mk-1 RFID tags, a reader, and a control computer mounted on a mobile platform that can operate as a stand-alone system, or it can be connected to the local IT network. As part of the Conditions of Approval of the CoC, the user must complete the prescribed training to become qualified and be certified for operation of the RFID temperature monitoring system. The training course will be administered by Argonne National Laboratory on behalf of the Headquarters Certifying Official. This is a complete documentation package for the RFID temperature monitoring system of the Model 9977 packagings at NTS. The documentation package will be used for training and certification. The table of contents is: Acceptance Testing Procedure of MK-1 RFID Tags for DOE/EM Nuclear Materials Management Applications; Acceptance Testing Result of MK-1 RFID Tags for DOE/EM Nuclear Materials Management Applications; Performance Test of the Single Bolt Seal Sensor for the Model 9977 Packaging; Calibration of Built-in Thermistors in RFID Tags for Nevada Test Site; Results of Calibration of Built-in Thermistors in RFID Tags; Results of Thermal Calibration of Second Batch of MK-I RFID Tags; Procedure for Installing and Removing MK-1 RFID Tag on Model 9977 Drum; User Guide for RFID Reader and Software for Temperature Monitoring of Model 9977 Drums at NTS; Software Quality Assurance Plan (SQAP) for the ARG-US System; Quality Category for the RFID Temperature Monitoring System; The Documentation Package for the RFID Temperature Monitoring System; Software Test Plan and Results for ARG-US OnSite; Configuration Management Plan (CMP) for the ARG-US System; Requirements Management Plan for the ARG-US System; and Design Management Plan for ARG-US.

  16. A hardware-software system for the automation of verification and calibration of oil metering units secondary equipment

    NASA Astrophysics Data System (ADS)

    Boyarnikov, A. V.; Boyarnikova, L. V.; Kozhushko, A. A.; Sekachev, A. F.

    2017-08-01

    The article considers the process of verification (calibration) of the secondary equipment of oil metering units. The purpose of the work is to increase the reliability and reduce the complexity of this process by developing a software and hardware system that provides automated verification and calibration. The hardware part of this complex performs the switching of the measuring channels of the verified controller and the reference channels of the calibrator in accordance with the introduced algorithm. The developed software controls the switching of channels, sets values on the calibrator, reads the measured data from the controller, calculates errors and compiles protocols. This system can be used for checking the controllers of the secondary equipment of the oil metering units in the automatic verification mode (with the open communication protocol) or in the semi-automatic verification mode (without it). The peculiar feature of the approach used is the development of a universal signal switch operating under software control, which can be configured for various verification (calibration) methods and thus covers the entire range of controllers of metering units' secondary equipment. The use of automatic verification with the help of a hardware and software system shortens the verification time by a factor of 5-10 and increases the reliability of measurements by excluding the influence of the human factor.

  17. Calibration of Wide-Field Deconvolution Microscopy for Quantitative Fluorescence Imaging

    PubMed Central

    Lee, Ji-Sook; Wee, Tse-Luen (Erika); Brown, Claire M.

    2014-01-01

    Deconvolution enhances contrast in fluorescence microscopy images, especially in low-contrast, high-background wide-field microscope images, improving characterization of features within the sample. Deconvolution can also be combined with other imaging modalities, such as confocal microscopy, and most software programs seek to improve resolution as well as contrast. Quantitative image analyses require instrument calibration and with deconvolution, necessitate that this process itself preserves the relative quantitative relationships between fluorescence intensities. To ensure that the quantitative nature of the data remains unaltered, deconvolution algorithms need to be tested thoroughly. This study investigated whether the deconvolution algorithms in AutoQuant X3 preserve relative quantitative intensity data. InSpeck Green calibration microspheres were prepared for imaging, z-stacks were collected using a wide-field microscope, and the images were deconvolved using the iterative deconvolution algorithms with default settings. Afterwards, the mean intensities and volumes of microspheres in the original and the deconvolved images were measured. Deconvolved data sets showed higher average microsphere intensities and smaller volumes than the original wide-field data sets. In original and deconvolved data sets, intensity means showed linear relationships with the relative microsphere intensities given by the manufacturer. Importantly, upon normalization, the trend lines were found to have similar slopes. In original and deconvolved images, the volumes of the microspheres were quite uniform for all relative microsphere intensities. We were able to show that AutoQuant X3 deconvolution software data are quantitative. In general, the protocol presented can be used to calibrate any fluorescence microscope or image processing and analysis procedure. PMID:24688321

  18. VSO For Dummies

    NASA Astrophysics Data System (ADS)

    Schwartz, Richard A.; Zarro, D.; Csillaghy, A.; Dennis, B.; Tolbert, A. K.; Etesi, L.

    2009-05-01

    We report on our activities to integrate VSO search and retrieval capabilities into standard data access, display, and analysis tools. In addition to its standard Web-based search form, the VSO provides an Interactive Data Language (IDL) client (vso_search) that is available through the Solar Software (SSW) package. We have incorporated this client into an IDL-widget interface program (show_synop) that allows for more simplified searching and downloading of VSO datasets directly into a user's IDL data analysis environment. In particular, we have provided the capability to read VSO datasets into a general purpose IDL package (plotman) that can display different datatypes (lightcurves, images, and spectra) and perform basic data operations such as zooming, image overlays, solar rotation, etc. Currently, the show_synop tool supports access to ground-based and space-based (SOHO, STEREO, and Hinode) observations, and has the capability to include new datasets as they become available. A user encounters two major hurdles when using the VSO: (1) Instrument-specific software (such as level-0 file readers and data-prepping procedures) may not be available in the user's local SSW distribution. (2) Recent calibration files (such as flat-fields) are not automatically distributed with the analysis software. To address these issues, we have developed a dedicated server (prepserver) that incorporates all the latest instrument-specific software libraries and calibration files. The prepserver uses an IDL-Java bridge to read and implement data processing requests from a client and return a processed data file that can be readily displayed with the show_synop/plotman package. The advantage of the prepserver is that the user is only required to install the general branch (gen) of the SSW tree, and is freed from the more onerous task of installing instrument-specific libraries and calibration files. We will demonstrate how the prepserver can be used to read, process, and overlay SOHO/EIT, TRACE, SECCHI/EUVI, and RHESSI images.

  19. Auto-calibration of a one-dimensional hydrodynamic-ecological model using a Monte Carlo approach: simulation of hypoxic events in a polymictic lake

    NASA Astrophysics Data System (ADS)

    Luo, L.

    2011-12-01

    Automated calibration of complex deterministic water quality models with a large number of biogeochemical parameters can reduce time-consuming iterative simulations involving empirical judgements of model fit. We undertook auto-calibration of the one-dimensional hydrodynamic-ecological lake model DYRESM-CAEDYM, using a Monte Carlo sampling (MCS) method, in order to test the applicability of this procedure for shallow, polymictic Lake Rotorua (New Zealand). The calibration procedure involved independently minimising the root-mean-square error (RMSE) and maximising the Pearson correlation coefficient (r) and the Nash-Sutcliffe efficiency coefficient (Nr) for comparisons of model state variables against measured data. An assigned number of parameter permutations was used for 10,000 simulation iterations. The 'optimal' temperature calibration produced an RMSE of 0.54 °C, Nr-value of 0.99 and r-value of 0.98 through the whole water column based on comparisons with 540 observed water temperatures collected between 13 July 2007 and 13 January 2009. The modeled bottom dissolved oxygen concentration (20.5 m below surface) was compared with 467 available observations. The calculated RMSE of the simulations compared with the measurements was 1.78 mg L-1, the Nr-value was 0.75 and the r-value was 0.87. The auto-calibrated model was further tested on an independent data set by simulating bottom-water hypoxia events for the period 15 January 2009 to 8 June 2011 (875 days). This verification produced an accurate simulation of five hypoxic events corresponding to DO < 2 mg L-1 during the summers of 2009-2011. The RMSE was 2.07 mg L-1, Nr-value 0.62 and r-value 0.81, based on the available data set of 738 days. The auto-calibration software for DYRESM-CAEDYM developed here is substantially less time-consuming and more efficient in parameter optimisation than traditional manual calibration, which has been the standard practice for similar complex water quality models.
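
    The Monte Carlo sampling loop itself is straightforward; the sketch below uses a placeholder run_model function standing in for a DYRESM-CAEDYM simulation, samples parameters uniformly within their ranges, and keeps the best-scoring set.

      # Monte Carlo auto-calibration: random parameter sampling scored by RMSE and NSE.
      import numpy as np

      def nse(obs, sim):
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

      def monte_carlo_calibrate(run_model, param_ranges, obs, n_iter=10000, seed=0):
          rng = np.random.default_rng(seed)
          best = {"rmse": np.inf}
          for _ in range(n_iter):
              params = {k: rng.uniform(lo, hi) for k, (lo, hi) in param_ranges.items()}
              sim = run_model(params)                      # placeholder model call
              rmse = float(np.sqrt(np.mean((obs - sim) ** 2)))
              if rmse < best["rmse"]:
                  best = {"rmse": rmse, "nse": nse(obs, sim), "params": params}
          return best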

  20. Endoscopic Stone Measurement During Ureteroscopy.

    PubMed

    Ludwig, Wesley W; Lim, Sunghwan; Stoianovici, Dan; Matlaga, Brian R

    2018-01-01

    Currently, stone size cannot be accurately measured while performing flexible ureteroscopy (URS). We developed novel software for ureteroscopic stone size measurement and then evaluated its performance. A novel application capable of measuring stone fragment size, based on the known distance of the basket tip in the ureteroscope's visual field, was designed and calibrated in a laboratory setting. Complete URS procedures were recorded, and 30 stone fragments were extracted and measured using digital calipers. The novel software program was applied to the recorded URS footage to obtain ureteroscope-derived stone size measurements. These ureteroscope-derived measurements were then compared with the caliper-measured fragment sizes. The median longitudinal and transversal errors were 0.14 mm (95% confidence interval [CI] 0.1, 0.18) and 0.09 mm (95% CI 0.02, 0.15), respectively. The overall software accuracy and precision were 0.17 and 0.15 mm, respectively. The longitudinal and transversal measurements obtained by the software and digital calipers were highly correlated (r = 0.97 and 0.93). Neither stone size nor stone type was correlated with measurement error. This novel method and software reliably measured stone fragment size during URS. The software ultimately has the potential to make URS safer and more efficient.
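
    The record does not give the measurement algorithm itself; as a minimal sketch, under a simple pinhole-camera approximation an object at a known working distance subtends size ≈ pixels × distance / focal length (in pixels), which is the general idea of using a known in-scene reference such as the basket tip (all numbers below are hypothetical):

      def stone_size_mm(stone_px, distance_mm, focal_length_px):
          """Approximate physical size of an object spanning stone_px pixels at a
          known working distance, assuming an idealized pinhole camera."""
          return stone_px * distance_mm / focal_length_px

      # Hypothetical values: 120-pixel fragment, basket tip 5 mm from the lens,
      # effective focal length of 300 pixels -> about 2 mm.
      print(stone_size_mm(stone_px=120, distance_mm=5.0, focal_length_px=300.0))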

  1. A procedure for accurate calibration of the orientation of the three sensors in a vector magnetometer. [at the Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Mcpherron, R. L.

    1977-01-01

    Procedures are described for the calibration of a vector magnetometer of high absolute accuracy. It is assumed that the calibration will be performed in the magnetic test facility of Goddard Space Flight Center (GSFC). The first main section of the report describes the test equipment and facility calibrations required. The second presents procedures for calibrating individual sensors. The third discusses the calibration of the sensor assembly. In a final section recommendations are made to GSFC for modification of the test facility required to carry out the calibration procedures.

  2. Characterizing the scientific potential of satellite sensors. [San Francisco, California

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Eleven thematic mapper (TM) radiometric calibration programs were tested and evaluated in support of the task to characterize the potential of LANDSAT TM digital imagery for scientific investigations in the Earth sciences and terrestrial physics. Three software errors related to integer overflow, division by zero, and a nonexistent file group were found and resolved. Raw, calibrated, and corrected image groups that were created and stored on the Barker2 disk are enumerated. Black and white pixel print files were created for various subscenes of a San Francisco scene (ID 40392-18152). The development of linear regression software is discussed. The output of the software and its function are described. Future work in TM radiometric calibration, image processing, and software development is outlined.

  3. NASA/MSFC ground experiment for large space structure control verification

    NASA Technical Reports Server (NTRS)

    Waites, H. B.; Seltzer, S. M.; Tollison, D. K.

    1984-01-01

    Marshall Space Flight Center has developed a facility in which closed loop control of Large Space Structures (LSS) can be demonstrated and verified. The main objective of the facility is to verify LSS control system techniques so that on orbit performance can be ensured. The facility consists of an LSS test article which is connected to a payload mounting system that provides control torque commands. It is attached to a base excitation system which will simulate disturbances most likely to occur for Orbiter and DOD payloads. A control computer will contain the calibration software, the reference system, the alignment procedures, the telemetry software, and the control algorithms. The total system will be suspended in such a fashion that LSS test article has the characteristics common to all LSS.

  4. RGB color calibration for quantitative image analysis: the "3D thin-plate spline" warping approach.

    PubMed

    Menesatti, Paolo; Angelini, Claudio; Pallottino, Federico; Antonucci, Francesca; Aguzzi, Jacopo; Costa, Corrado

    2012-01-01

    In the last years the need to numerically define color by its coordinates in n-dimensional space has increased strongly. Colorimetric calibration is fundamental in food processing and other biological disciplines to quantitatively compare samples' color during workflow with many devices. Several software programmes are available to perform standardized colorimetric procedures, but they are often too imprecise for scientific purposes. In this study, we applied the Thin-Plate Spline interpolation algorithm to calibrate colours in sRGB space (the corresponding Matlab code is reported in the Appendix). This was compared with two other approaches. The first is based on a commercial calibration system (ProfileMaker) and the second on a Partial Least Square analysis. Moreover, to explore device variability and resolution two different cameras were adopted and for each sensor, three consecutive pictures were acquired under four different light conditions. According to our results, the Thin-Plate Spline approach reported a very high efficiency of calibration allowing the possibility to create a revolution in the in-field applicative context of colour quantification not only in food sciences, but also in other biological disciplines. These results are of great importance for scientific color evaluation when lighting conditions are not controlled. Moreover, it allows the use of low cost instruments while still returning scientifically sound quantitative data.
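
    A minimal Python sketch of the same idea (the paper's own code is Matlab, reported in its Appendix): a 3D thin-plate-spline warp from measured device RGB to reference RGB, here via scipy's RBFInterpolator; all patch values are hypothetical:

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      # Hypothetical colour-chart patches: camera-measured RGB vs. reference sRGB.
      measured = np.array([[ 50,  40,  38], [200, 198, 190], [120,  60,  50],
                           [ 60, 110, 160], [160,  90, 120], [ 90, 150,  70]], float)
      reference = np.array([[ 45,  35,  35], [210, 205, 200], [130,  55,  45],
                            [ 55, 115, 170], [165,  95, 125], [ 85, 155,  75]], float)

      # Thin-plate-spline mapping from device RGB space to reference RGB space.
      warp = RBFInterpolator(measured, reference, kernel="thin_plate_spline")

      # Calibrate new pixels (an image of shape (h, w, 3) would be reshaped to (-1, 3)).
      pixels = np.array([[100, 80, 70], [180, 170, 160]], float)
      print(warp(pixels))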

  5. Rapid mapping of landslide disaster using UAV- photogrammetry

    NASA Astrophysics Data System (ADS)

    Cahyono, A. B.; Zayd, R. A.

    2018-03-01

    Unmanned Aerial Vehicle (UAV) systems offer many advantages in mapping applications such as slope mapping and geohazard studies. This study utilizes a UAV system for a landslide disaster that occurred in Jombang Regency, East Java. The study uses a rotor-wing UAV because rotor-wing units are stable and able to capture images easily. Aerial photographs were acquired in strips following standard aerial photography procedures, with 60 photos taken. Secondary data consisted of ground control points surveyed with geodetic GPS and check points established with a total station. The digital camera was calibrated using close-range photogrammetric software, and the recovered camera calibration parameters were then used in the processing of the digital images. All the aerial photographs were processed using digital photogrammetric software, and an orthophoto was produced as output. The final result is a 1:1500-scale orthophoto map produced with an SfM algorithm, with a GSD accuracy of 3.45 cm. The volume calculated from contour-line delineation was 10,527.03 m3, which differs from the terrestrial-method result by 964.67 m3, or 8.4%.
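
    The study's own calibration used dedicated close-range photogrammetric software; as an illustration only, a typical target-based camera calibration can be sketched with OpenCV (the folder name and chessboard size below are assumptions):

      import glob
      import cv2
      import numpy as np

      pattern = (9, 6)                               # inner corners of a chessboard target
      objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
      objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

      obj_points, img_points = [], []
      for fname in glob.glob("calib_images/*.jpg"):  # hypothetical image folder
          gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
          found, corners = cv2.findChessboardCorners(gray, pattern)
          if found:
              obj_points.append(objp)
              img_points.append(corners)

      # Recover focal length, principal point and lens distortion coefficients.
      rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
          obj_points, img_points, gray.shape[::-1], None, None)
      print("RMS reprojection error:", rms)
      print("Camera matrix:", K, sep="\n")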

  6. Numerical simulation of damage evolution for ductile materials and mechanical properties study

    NASA Astrophysics Data System (ADS)

    El Amri, A.; Hanafi, I.; Haddou, M. E. Y.; Khamlichi, A.

    2015-12-01

    This paper presents results of a numerical modelling of ductile fracture and failure of elements made of 5182H111 aluminium alloy subjected to dynamic traction. The analysis was performed using the Johnson-Cook model in the ABAQUS software. The modelling difficulty related to prediction of ductile fracture arises mainly because there is a tremendous span of length scales from the structural problem to the micro-mechanics problem governing the material separation process. This study used experimental results to calibrate a simple crack propagation criterion for shell elements, which is often used in practical analyses. The performance of the proposed model is in general good, and it is believed that the presented results and the experimental-numerical calibration procedure can be of use in practical finite-element simulations.
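
    For reference, the Johnson-Cook flow stress used in such analyses is sigma = (A + B*eps_p^n) * (1 + C*ln(eps_dot/eps_dot_ref)) * (1 - T*^m); the sketch below evaluates it with illustrative parameter values that are not the calibrated 5182H111 constants:

      import numpy as np

      def johnson_cook_stress(eps_p, eps_dot, T, A, B, n, C, m,
                              eps_dot_ref=1.0, T_room=293.0, T_melt=893.0):
          """Johnson-Cook flow stress for plastic strain eps_p, strain rate
          eps_dot (1/s) and temperature T (K); constants are material-specific."""
          T_star = (T - T_room) / (T_melt - T_room)
          return (A + B * eps_p ** n) \
                 * (1.0 + C * np.log(eps_dot / eps_dot_ref)) \
                 * (1.0 - T_star ** m)

      # Illustrative (uncalibrated) parameter values, stress in MPa.
      print(johnson_cook_stress(eps_p=0.05, eps_dot=100.0, T=293.0,
                                A=130.0, B=280.0, n=0.3, C=0.015, m=1.0))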

  7. Chromatography related performance of the Monitor for AeRosols and GAses in ambient air (MARGA): laboratory and field-based evaluation

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Walker, John T.; Geron, Chris

    2017-10-01

    Evaluation of the semi-continuous Monitor for AeRosols and GAses in ambient air (MARGA, Metrohm Applikon B.V.) was conducted with an emphasis on examination of accuracy and precision associated with processing of chromatograms. Using laboratory standards and atmospheric measurements, analytical accuracy, precision and method detection limits derived using the commercial MARGA software were compared to an alternative chromatography procedure consisting of a custom Java script to reformat raw MARGA conductivity data and Chromeleon (Thermo Scientific Dionex) software for peak integration. Our analysis revealed issues with accuracy and precision resulting from misidentification and misintegration of chromatograph peaks by the MARGA automated software as well as a systematic bias at low concentrations for anions. Reprocessing and calibration of raw MARGA data using the alternative chromatography method lowered method detection limits and reduced variability (precision) between parallel sampler boxes. Instrument performance was further evaluated during a 1-month intensive field campaign in the fall of 2014, including analysis of diurnal patterns of gaseous and particulate water-soluble species (NH3, SO2, HNO3, NH4+, SO42- and NO3-), gas-to-particle partitioning and particle neutralization state. At ambient concentrations below ~1 µg m-3, concentrations determined using the MARGA software are biased +30% and +10% for NO3- and SO42-, respectively, compared to concentrations determined using the alternative chromatography procedure. Differences between the two methods increase at lower concentrations. We demonstrate that positively biased NO3- and SO42- measurements result in overestimation of aerosol acidity and introduce nontrivial errors to ion balances of inorganic aerosol. Though the source of the bias is uncertain, it is not corrected by the MARGA online single-point internal LiBr standard. Our results show that calibration and verification of instrument accuracy by multilevel external standards is required to adequately control analytical accuracy. During the field intensive, the MARGA was able to capture rapid compositional changes in PM2.5 due to changes in meteorology and air mass history relative to known source regions of PM precursors, including a fine NO3- aerosol event associated with intrusion of Arctic air into the southeastern US.
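
    A minimal sketch of the multilevel external calibration the authors recommend: fit a calibration line to standards of known concentration, then invert it to convert peak areas from ambient samples (all values below are hypothetical):

      import numpy as np

      # Hypothetical external standards: known concentrations vs. integrated peak areas.
      conc = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 5.0])    # standard concentrations
      area = np.array([0.8, 3.1, 5.6, 10.9, 21.5, 53.0])  # chromatogram peak areas

      slope, intercept = np.polyfit(conc, area, 1)

      def area_to_conc(a):
          """Invert the linear calibration to convert a peak area to concentration."""
          return (a - intercept) / slope

      print(area_to_conc(4.0))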

  8. Development and Characterization of a Low-Pressure Calibration System for Hypersonic Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Green, Del L.; Everhart, Joel L.; Rhode, Matthew N.

    2004-01-01

    Minimization of uncertainty is essential for accurate ESP measurements at very low free-stream static pressures found in hypersonic wind tunnels. Statistical characterization of environmental error sources requires a well defined and controlled calibration method. A calibration system has been constructed and environmental control software developed to control experimentation to eliminate human induced error sources. The initial stability study of the calibration system shows a high degree of measurement accuracy and precision in temperature and pressure control. Control manometer drift and reference pressure instabilities induce uncertainty into the repeatability of voltage responses measured from the PSI System 8400 between calibrations. Methods of improving repeatability are possible through software programming and further experimentation.

  9. BAT3 Analyzer: Real-Time Data Display and Interpretation Software for the Multifunction Bedrock-Aquifer Transportable Testing Tool (BAT3)

    USGS Publications Warehouse

    Winston, Richard B.; Shapiro, Allen M.

    2007-01-01

    The BAT3 Analyzer provides real-time display and interpretation of fluid pressure responses and flow rates measured during geochemical sampling, hydraulic testing, or tracer testing conducted with the Multifunction Bedrock-Aquifer Transportable Testing Tool (BAT3) (Shapiro, 2007). Real-time display of the data collected with the Multifunction BAT3 allows the user to ensure that the downhole apparatus is operating properly, and that test procedures can be modified to correct for unanticipated hydraulic responses during testing. The BAT3 Analyzer can apply calibrations to the pressure transducer and flow meter data to display physically meaningful values. Plots of the time-varying data can be formatted for a specified time interval, and either saved to files, or printed. Libraries of calibrations for the pressure transducers and flow meters can be created, updated and reloaded to facilitate the rapid set up of the software to display data collected during testing with the Multifunction BAT3. The BAT3 Analyzer also has the functionality to estimate calibrations for pressure transducers and flow meters using data collected with the Multifunction BAT3 in conjunction with corroborating check measurements. During testing with the Multifunction BAT3, and also after testing has been completed, hydraulic properties of the test interval can be estimated by comparing fluid pressure responses with model results; a variety of hydrogeologic conceptual models of the formation are available for interpreting fluid-withdrawal, fluid-injection, and slug tests.

  10. 40 CFR Appendix B to Part 75 - Quality Assurance and Quality Control Procedures

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Systems 1.2.1 Calibration Error Test and Linearity Check Procedures Keep a written record of the procedures used for daily calibration error tests and linearity checks (e.g., how gases are to be injected..., and when calibration adjustments should be made). Identify any calibration error test and linearity...

  11. 40 CFR Appendix B to Part 75 - Quality Assurance and Quality Control Procedures

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Systems 1.2.1 Calibration Error Test and Linearity Check Procedures Keep a written record of the procedures used for daily calibration error tests and linearity checks (e.g., how gases are to be injected..., and when calibration adjustments should be made). Identify any calibration error test and linearity...

  12. Multiple-Objective Stepwise Calibration Using Luca

    USGS Publications Warehouse

    Hay, Lauren E.; Umemoto, Makiko

    2007-01-01

    This report documents Luca (Let us calibrate), a multiple-objective, stepwise, automated procedure for hydrologic model calibration and the associated graphical user interface (GUI). Luca is a wizard-style user-friendly GUI that provides an easy systematic way of building and executing a calibration procedure. The calibration procedure uses the Shuffled Complex Evolution global search algorithm to calibrate any model compiled with the U.S. Geological Survey's Modular Modeling System. This process assures that intermediate and final states of the model are simulated consistently with measured values.
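
    A minimal sketch of the stepwise idea (not Luca itself): each step frees one parameter group and minimises an objective function with a global optimiser; scipy's differential_evolution stands in for the Shuffled Complex Evolution algorithm, which scipy does not provide, and the toy model and data are hypothetical:

      import numpy as np
      from scipy.optimize import differential_evolution

      def run_model(params, forcing):
          """Placeholder for a watershed model; returns simulated streamflow."""
          a, b = params
          return a * forcing + b

      observed = np.array([2.0, 3.1, 4.9, 6.8])
      forcing = np.array([1.0, 1.5, 2.5, 3.5])

      param_values = {"a": 1.0, "b": 0.0}
      steps = [("a", (0.1, 5.0)), ("b", (-2.0, 2.0))]   # one parameter (group) per step

      for name, bounds in steps:
          def objective(x, name=name):
              trial = dict(param_values, **{name: x[0]})
              sim = run_model((trial["a"], trial["b"]), forcing)
              return np.sqrt(np.mean((sim - observed) ** 2))   # RMSE objective
          result = differential_evolution(objective, [bounds], seed=1)
          param_values[name] = result.x[0]

      print(param_values)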

  13. A Practical Guide to Calibration of a GSSHA Hydrologic Model Using ERDC Automated Model Calibration Software - Efficient Local Search

    DTIC Science & Technology

    2012-02-01

    use the ERDC software implementation of the secant LM method that accommodates the PEST model independent interface to calibrate a GSSHA...how the method works. We will also demonstrate how our LM/SLM implementation compares with its counterparts as implemented in the popular PEST ...function values and total model calls for local search to converge) associated with Examples 1 and 3 using the PEST LM/SLM implementations

  14. Root zone water quality model (RZWQM2): Model use, calibration and validation

    USGS Publications Warehouse

    Ma, Liwang; Ahuja, Lajpat; Nolan, B.T.; Malone, Robert; Trout, Thomas; Qi, Z.

    2012-01-01

    The Root Zone Water Quality Model (RZWQM2) has been used widely for simulating agricultural management effects on crop production and soil and water quality. Although it is a one-dimensional model, it has many desirable features for the modeling community. This article outlines the principles of calibrating the model component by component with one or more datasets and validating the model with independent datasets. Users should consult the RZWQM2 user manual distributed along with the model and a more detailed protocol on how to calibrate RZWQM2 provided in a book chapter. Two case studies (or examples) are included in this article. One is from an irrigated maize study in Colorado to illustrate the use of field and laboratory measured soil hydraulic properties on simulated soil water and crop production. It also demonstrates the interaction between soil and plant parameters in simulated plant responses to water stresses. The other is from a maize-soybean rotation study in Iowa to show a manual calibration of the model for crop yield, soil water, and N leaching in tile-drained soils. Although the commonly used trial-and-error calibration method works well for experienced users, as shown in the second example, an automated calibration procedure is more objective, as shown in the first example. Furthermore, the incorporation of the Parameter Estimation Software (PEST) into RZWQM2 made the calibration of the model more efficient than a grid (ordered) search of model parameters. In addition, PEST provides sensitivity and uncertainty analyses that should help users in selecting the right parameters to calibrate.

  15. Development of a High Accuracy Angular Measurement System for Langley Research Center Hypersonic Wind Tunnel Facilities

    NASA Technical Reports Server (NTRS)

    Newman, Brett; Yu, Si-bok; Rhew, Ray D. (Technical Monitor)

    2003-01-01

    Modern experimental and test activities demand innovative and adaptable procedures to maximize data content and quality while working within severely constrained budgetary and facility resource environments. This report describes development of a high accuracy angular measurement capability for NASA Langley Research Center hypersonic wind tunnel facilities to overcome these deficiencies. Specifically, utilization of micro-electro-mechanical sensors, including accelerometers and gyros, coupled with software-driven data acquisition hardware and integrated within a prototype measurement system, is considered. The development methodology addresses basic design requirements formulated from wind tunnel facility constraints and current operating procedures, as well as engineering and scientific test objectives. A description of the analytical framework governing relationships between time-dependent multi-axis acceleration and angular rate sensor data and the desired three-dimensional Eulerian angular state of the test model is given. Calibration procedures for identifying and estimating critical parameters in the sensor hardware are also addressed.
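
    As one small example of the kind of relationship involved, static pitch and roll can be recovered from a three-axis accelerometer when gravity is the only sensed acceleration; this sketch is generic and is not the report's calibration procedure:

      import numpy as np

      def pitch_roll_from_accel(ax, ay, az):
          """Static pitch and roll (degrees) from accelerometer components in g,
          assuming the sensor measures gravity only (no dynamic acceleration)."""
          pitch = np.degrees(np.arctan2(-ax, np.sqrt(ay**2 + az**2)))
          roll = np.degrees(np.arctan2(ay, az))
          return pitch, roll

      # Hypothetical reading: model pitched nose-up by roughly 10 degrees.
      print(pitch_roll_from_accel(-0.17, 0.0, 0.985))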

  16. Bayesian regression models outperform partial least squares methods for predicting milk components and technological properties using infrared spectral data

    PubMed Central

    Ferragina, A.; de los Campos, G.; Vazquez, A. I.; Cecchinato, A.; Bittante, G.

    2017-01-01

    The aim of this study was to assess the performance of Bayesian models commonly used for genomic selection to predict “difficult-to-predict” dairy traits, such as milk fatty acid (FA) expressed as percentage of total fatty acids, and technological properties, such as fresh cheese yield and protein recovery, using Fourier-transform infrared (FTIR) spectral data. Our main hypothesis was that Bayesian models that can estimate shrinkage and perform variable selection may improve our ability to predict FA traits and technological traits above and beyond what can be achieved using the current calibration models (e.g., partial least squares, PLS). To this end, we assessed a series of Bayesian methods and compared their prediction performance with that of PLS. The comparison between models was done using the same sets of data (i.e., same samples, same variability, same spectral treatment) for each trait. Data consisted of 1,264 individual milk samples collected from Brown Swiss cows for which gas chromatographic FA composition, milk coagulation properties, and cheese-yield traits were available. For each sample, 2 spectra in the infrared region from 5,011 to 925 cm−1 were available and averaged before data analysis. Three Bayesian models: Bayesian ridge regression (Bayes RR), Bayes A, and Bayes B, and 2 reference models: PLS and modified PLS (MPLS) procedures, were used to calibrate equations for each of the traits. The Bayesian models used were implemented in the R package BGLR (http://cran.r-project.org/web/packages/BGLR/index.html), whereas the PLS and MPLS were those implemented in the WinISI II software (Infrasoft International LLC, State College, PA). Prediction accuracy was estimated for each trait and model using 25 replicates of a training-testing validation procedure. Compared with PLS, which is currently the most widely used calibration method, MPLS and the 3 Bayesian methods showed significantly greater prediction accuracy. Accuracy increased in moving from calibration to external validation methods, and in moving from PLS and MPLS to Bayesian methods, particularly Bayes A and Bayes B. The maximum R2 value of validation was obtained with Bayes B and Bayes A. For the FA, C10:0 (% of each FA on total FA basis) had the highest R2 (0.75, achieved with Bayes A and Bayes B), and among the technological traits, fresh cheese yield R2 of 0.82 (achieved with Bayes B). These 2 methods have proven to be useful instruments in shrinking and selecting very informative wavelengths and inferring the structure and functions of the analyzed traits. We conclude that Bayesian models are powerful tools for deriving calibration equations, and, importantly, these equations can be easily developed using existing open-source software. As part of our study, we provide scripts based on the open source R software BGLR, which can be used to train customized prediction equations for other traits or populations. PMID:26387015
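
    A rough Python analogue of the comparison (the study itself used WinISI PLS/MPLS and the R package BGLR): partial least squares versus a Bayesian shrinkage regression on spectral predictors, here with simulated data and scikit-learn models standing in for the original software:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.linear_model import BayesianRidge
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 200))                    # stand-in for FTIR absorbances
      beta = np.zeros(200)
      beta[:10] = rng.normal(size=10)                    # only a few informative wavelengths
      y = X @ beta + rng.normal(scale=0.5, size=300)     # stand-in for a milk trait

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
      bay = BayesianRidge().fit(X_tr, y_tr)

      print("PLS            validation R2:", pls.score(X_te, y_te))
      print("Bayesian ridge validation R2:", bay.score(X_te, y_te))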

  17. Developing of an automation for therapy dosimetry systems by using labview software

    NASA Astrophysics Data System (ADS)

    Aydin, Selim; Kam, Erol

    2018-06-01

    Traceability, accuracy and consistency of radiation measurements are essential in radiation dosimetry, particularly in radiotherapy, where the outcome of treatments is highly dependent on the radiation dose delivered to patients. It is therefore very important to provide reliable, accurate and fast calibration services for therapy dosimeters, since the radiation dose delivered to a radiotherapy patient is directly related to the accuracy and reliability of these devices. In this study, we report the performance of in-house developed, computer-controlled data acquisition and monitoring software for commercially available radiation therapy electrometers. The LabVIEW® software suite is used to provide reliable, fast and accurate calibration services. The software also collects environmental data such as temperature, pressure and humidity in order to use them in correction factor calculations. By using this software tool, better control over the calibration process is achieved and the need for human intervention is reduced. This is the first software that can control dosimeter systems frequently used in hospital radiotherapy, such as Unidos Webline, Unidos E, Dose-1 and PC Electrometers.
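
    The record does not list the specific correction factors; one common environmental correction in therapy dosimetry is the air-density factor k_TP for vented ionization chambers, sketched below with the usual reference conditions of 20 °C and 101.325 kPa assumed:

      def k_tp(temp_c, pressure_kpa, t_ref=20.0, p_ref=101.325):
          """Temperature-pressure correction factor,
          k_TP = ((273.15 + T) / (273.15 + T_ref)) * (P_ref / P)."""
          return (273.15 + temp_c) / (273.15 + t_ref) * (p_ref / pressure_kpa)

      # Example: laboratory at 22.5 degC and 100.2 kPa.
      print(k_tp(22.5, 100.2))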

  18. Coleman performs VO2 Max PFS Software Calibrations and Instrument Check

    NASA Image and Video Library

    2011-02-24

    ISS026-E-029180 (24 Feb. 2011) --- NASA astronaut Catherine (Cady) Coleman, Expedition 26 flight engineer, performs VO2max portable Pulmonary Function System (PFS) software calibrations and instrument check while using the Cycle Ergometer with Vibration Isolation System (CEVIS) in the Destiny laboratory of the International Space Station.

  19. On-ground calibration of the BEPICOLOMBO/SIMBIO-SYS at instrument level

    NASA Astrophysics Data System (ADS)

    Rodriguez-Ferreira, J.; Poulet, F.; Eng, P.; Longval, Y.; Dassas, K.; Arondel, A.; Langevin, Y.; Capaccioni, F.; Filacchione, G.; Palumbo, P.; Cremonese, G.; Dami, M.

    2012-04-01

    The Mercury Planetary Orbiter/BepiColombo carries an integrated suite of instruments, the Spectrometer and Imagers for MPO BepiColombo-Integrated Observatory SYStem (SIMBIO-SYS). SIMBIO-SYS has 3 channels: a stereo imaging system (STC), a high-resolution imager (HRIC) and a visible-near-infrared imaging spectrometer (VIHI). SIMBIO-SYS will scan the surface of Mercury with these three channels and determine the physical, morphological and compositional properties of the entire planet. Before integration on the S/C, on-ground calibration at the channel and instrument levels will be performed so as to describe the instrumental responses as a function of various parameters that might evolve while the instruments are operating [1]. The Institut d'Astrophysique Spatiale (IAS) is responsible for the on-ground calibration at the instrument level. During the 4 weeks of the calibration campaign planned for June 2012, the instrument will be maintained in a mechanical and thermal environment simulating space conditions. Four optical stimuli (a QTH lamp, an integrating sphere, a blackbody with temperature variable from 50 to 1200°C and a monochromator) are placed on an optical bench to illuminate the four channels so as to perform the radiometric calibration, straylight monitoring and spectral proofing based on laboratory mineral samples. The instrument will be mounted on a hexapod placed inside a thermal vacuum chamber during the calibration campaign. The hexapod will move the channels within the well-characterized incoming beam. We will present the key activities of the preparation of this calibration: the derivation of the instrument radiometric model, the implementation of the optical, mechanical and software interfaces of the calibration assembly, the characterization of the optical bench and the definition of the calibration procedures.

  20. Development of procedures for programmable proximity aperture lithography

    NASA Astrophysics Data System (ADS)

    Whitlow, H. J.; Gorelick, S.; Puttaraksa, N.; Napari, M.; Hokkanen, M. J.; Norarat, R.

    2013-07-01

    Programmable proximity aperture lithography (PPAL) with MeV ions has been used at the Jyväskylä and Chiang Mai universities for a number of years. Here we describe a number of innovations and procedures that have been incorporated into the LabVIEW-based software. The basic operation involves coordinating the beam blanker and five motor-actuated translators with high accuracy, close to the minimum step size, with proper anti-collision algorithms. By using special approaches, such as writing calibration patterns, linearising position and carefully correcting backlash, the absolute accuracy of the aperture size and position can be improved beyond the standard afforded by the repeatability of the translator end-point switches. Another area of consideration has been the fluence control procedures. These involve control of the uniformity of the beam, where different approaches for fluence measurement are used, such as simultaneous measurement of the aperture current and of the ion current passing through the aperture using a Faraday cup. Microfluidic patterns may contain many elements that make up mixing sections, reaction chambers, separation columns and fluid reservoirs. To facilitate conception and planning we have implemented an .svg file interpreter that allows the use of scalable vector graphics files produced by standard drawing software for the generation of patterns made up of rectangular elements.
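
    A minimal sketch of the .svg interpretation step (the actual LabVIEW interpreter is not described in detail here): read every rectangular element from a drawing and return its position and size in SVG user units, ready to be mapped to aperture openings; the file name and the expose_aperture call are placeholders:

      import xml.etree.ElementTree as ET

      def rects_from_svg(path):
          """Return (x, y, width, height) tuples for every <rect> element,
          whether or not the file declares the SVG namespace."""
          root = ET.parse(path).getroot()
          rects = []
          for elem in root.iter():
              if elem.tag.split("}")[-1] == "rect":
                  rects.append(tuple(float(elem.get(k, "0"))
                                     for k in ("x", "y", "width", "height")))
          return rects

      # Hypothetical usage:
      # for x, y, w, h in rects_from_svg("microfluidic_pattern.svg"):
      #     expose_aperture(x, y, w, h)   # expose_aperture is a placeholder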

  1. EOS MLS Level 1B Data Processing Software. Version 3

    NASA Technical Reports Server (NTRS)

    Perun, Vincent S.; Jarnot, Robert F.; Wagner, Paul A.; Cofield, Richard E., IV; Nguyen, Honghanh T.; Vuu, Christina

    2011-01-01

    This software is an improvement on Version 2, which was described in EOS MLS Level 1B Data Processing, Version 2.2, NASA Tech Briefs, Vol. 33, No. 5 (May 2009), p. 34. It accepts the EOS MLS Level 0 science/engineering data, and the EOS Aura spacecraft ephemeris/attitude data, and produces calibrated instrument radiances and associated engineering and diagnostic data. This version makes the code more robust, improves calibration, provides more diagnostics outputs, defines the Galactic core more finely, and fixes the equator crossing. The Level 1 processing software manages several different tasks. It qualifies each data quantity using instrument configuration and checksum data, as well as data transmission quality flags. Statistical tests are applied for data quality and reasonableness. The instrument engineering data (e.g., voltages, currents, temperatures, and encoder angles) is calibrated by the software, and the filter channel space reference measurements are interpolated onto the times of each limb measurement with the interpolates being differenced from the measurements. Filter channel calibration target measurements are interpolated onto the times of each limb measurement, and are used to compute radiometric gain. The total signal power is determined and analyzed by each digital autocorrelator spectrometer (DACS) during each data integration. The software converts each DACS data integration from an autocorrelation measurement in the time domain into a spectral measurement in the frequency domain, and estimates separately the spectrally, smoothly varying and spectrally averaged components of the limb port signal arising from antenna emission and scattering effects. Limb radiances are also calibrated.
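
    A minimal sketch of the two-point radiometric calibration implied above: a gain is derived from calibration-target and space-reference views, and limb radiance follows from the space-referenced limb counts (all numbers and units are hypothetical, and the real MLS processing is considerably more involved):

      def radiometric_gain(target_counts, space_counts, target_radiance):
          """Counts-per-radiance gain from views of a calibration target and cold
          space, taking the space radiance as ~0 for this sketch."""
          return (target_counts - space_counts) / target_radiance

      def limb_radiance(limb_counts, space_counts_interp, gain):
          """Limb radiance from space-referenced limb counts."""
          return (limb_counts - space_counts_interp) / gain

      g = radiometric_gain(target_counts=5200.0, space_counts=400.0, target_radiance=160.0)
      print(limb_radiance(limb_counts=1800.0, space_counts_interp=410.0, gain=g))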

  2. Development of an automated ultrasonic testing system

    NASA Astrophysics Data System (ADS)

    Shuxiang, Jiao; Wong, Brian Stephen

    2005-04-01

    Non-destructive testing is necessary in areas where defects in structures emerge over time due to wear and tear and where structural integrity must be maintained for continued usability. However, manual testing has many limitations: high training cost, long training procedures and, worse, inconsistent test results. A prime objective of this project is to develop an automatic non-destructive testing system for a shaft of the wheel axle of a railway carriage. Various methods, such as neural networks, pattern recognition methods and knowledge-based systems, are used for this kind of artificial intelligence problem. In this paper, a statistical pattern recognition approach, the classification tree, is applied. Before feature selection, a thorough study of the ultrasonic signals produced was carried out. Based on the analysis of the ultrasonic signals, three signal processing methods were developed to enhance them: cross-correlation, zero-phase filtering and averaging. The aim of this step is to reduce noise and make the signal character more distinguishable. Four features are selected: (1) autoregressive model coefficients, (2) standard deviation, (3) Pearson correlation and (4) dispersion uniformity degree. A classification tree is then created and applied to recognize the peak positions and amplitudes. A local-maximum search is carried out before feature computation; this greatly reduces computation time in real-time testing. Based on this algorithm, a software package called SOFRA was developed to recognize the peaks, calibrate automatically and test a simulated shaft automatically. The automatic calibration procedure and the automatic shaft testing procedure were developed.
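
    A minimal sketch of two of the signal-enhancement steps named above, zero-phase band-pass filtering and cross-correlation against a reference echo; the sampling rate, band edges and synthetic signal are hypothetical:

      import numpy as np
      from scipy.signal import butter, filtfilt

      fs = 10e6                                   # hypothetical 10 MHz sampling rate
      b, a = butter(4, [1e6, 4e6], btype="band", fs=fs)

      t = np.arange(0, 2e-5, 1 / fs)
      reference = np.sin(2 * np.pi * 2.5e6 * t) * np.exp(-((t - 1e-5) / 2e-6) ** 2)
      noisy = reference + 0.3 * np.random.default_rng(0).normal(size=t.size)

      filtered = filtfilt(b, a, noisy)            # filtfilt = zero-phase filtering
      xcorr = np.correlate(filtered, reference, mode="full")
      lag = xcorr.argmax() - (reference.size - 1) # echo alignment in samples
      print("peak lag (samples):", lag)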

  3. Semi-Automatic Determination of Rockfall Trajectories

    PubMed Central

    Volkwein, Axel; Klette, Johannes

    2014-01-01

    Determining rockfall trajectories in the field is essential for calibrating and validating rockfall simulation software. This contribution presents an in situ device and a complementary Local Positioning System (LPS) that allow the determination of parts of the trajectory. An assembly of sensors (herein called the rockfall sensor) is installed in the falling block, recording the 3D accelerations and rotational velocities. The LPS automatically calculates the position of the block along the slope over time based on Wi-Fi signals emitted from the rockfall sensor. The velocity of the block over time is determined through post-processing. The setup of the rockfall sensor is presented, followed by proposed calibration and validation procedures. The performance of the LPS is evaluated by means of different experiments. The results allow for a quality analysis of both the obtained field data and the usability of the rockfall sensor for future applications in the field. PMID:25268916

  4. Simulation of the Quantity, Variability, and Timing of Streamflow in the Dennys River Basin, Maine, by Use of a Precipitation-Runoff Watershed Model

    USGS Publications Warehouse

    Dudley, Robert W.

    2008-01-01

    The U.S. Geological Survey (USGS), in cooperation with the Maine Department of Marine Resources Bureau of Sea Run Fisheries and Habitat, began a study in 2004 to characterize the quantity, variability, and timing of streamflow in the Dennys River. The study included a synoptic summary of historical streamflow data at a long-term streamflow gage, collecting data from an additional four short-term streamflow gages, and the development and evaluation of a distributed-parameter watershed model for the Dennys River Basin. The watershed model used in this investigation was the USGS Precipitation-Runoff Modeling System (PRMS). The Geographic Information System (GIS) Weasel was used to delineate the Dennys River Basin and subbasins and derive parameters for their physical geographic features. Calibration of the models used in this investigation involved a four-step procedure in which model output was evaluated against four calibration data sets using computed objective functions for solar radiation, potential evapotranspiration, annual and seasonal water budgets, and daily streamflows. The calibration procedure involved thousands of model runs and was carried out using the USGS software application Luca (Let us calibrate). Luca uses the Shuffled Complex Evolution (SCE) global search algorithm to calibrate the model parameters. The SCE method reliably produces satisfactory solutions for large, complex optimization problems. The primary calibration effort went into the Dennys main stem watershed model. Calibrated parameter values obtained for the Dennys main stem model were transferred to the Cathance Stream model, and a similar four-step SCE calibration procedure was performed; this effort was undertaken to determine the potential to transfer modeling information to a nearby basin in the same region. The calibrated Dennys main stem watershed model performed with Nash-Sutcliffe efficiency (NSE) statistic values for the calibration period and evaluation period of 0.79 and 0.76, respectively. The Cathance Stream model had an NSE value of 0.68. The Dennys River Basin models make use of limited streamflow-gaging station data and provide information to characterize subbasin hydrology. The calibrated PRMS watershed models of the Dennys River Basin provide simulated daily streamflow time series from October 1, 1985, through September 30, 2006, for nearly any location within the basin. These models enable natural-resources managers to characterize the timing and quantity of water moving through the basin to support many endeavors including geochemical calculations, water-use assessment, Atlantic salmon population dynamics and migration modeling, habitat modeling and assessment, and other resource-management scenario evaluations. Characterizing streamflow contributions from subbasins in the basin and the relative amounts of surface- and ground-water contributions to streamflow throughout the basin will lead to a better understanding of water quantity and quality in the basin. Improved water-resources information will support Atlantic salmon protection efforts.

  5. Release of the gPhoton Database of GALEX Photon Events

    NASA Astrophysics Data System (ADS)

    Fleming, Scott W.; Million, Chase; Shiao, Bernie; Tucker, Michael; Loyd, R. O. Parke

    2016-01-01

    The GALEX spacecraft surveyed much of the sky in two ultraviolet bands between 2003 and 2013 with non-integrating microchannel plate detectors. The Mikulski Archive for Space Telescopes (MAST) has made more than one trillion photon events observed by the spacecraft available, stored as a 130 TB database, along with an open-source, python-based software package to query this database and create calibrated lightcurves or images from these data at user-defined spatial and temporal scales. In particular, MAST users can now conduct photometry at the intra-visit level (timescales of seconds and minutes). The software, along with the fully populated database, was officially released in Aug. 2015, and improvements to both software functionality and data calibration are ongoing. We summarize the current calibration status of the gPhoton software, along with examples of early science enabled by gPhoton that include stellar flares, AGN, white dwarfs, exoplanet hosts, novae, and nearby galaxies.
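
    A generic sketch of the time-binning behind such light curves, building count rates from photon arrival times at a user-chosen step (gPhoton itself does this through its own query tools and applies the full calibration; the event times below are simulated):

      import numpy as np

      def bin_lightcurve(event_times, step):
          """Return bin centres and count rates for photon arrival times (seconds)."""
          edges = np.arange(event_times.min(), event_times.max() + step, step)
          counts, _ = np.histogram(event_times, bins=edges)
          centres = 0.5 * (edges[:-1] + edges[1:])
          return centres, counts / step

      rng = np.random.default_rng(1)
      times = np.sort(rng.uniform(0.0, 300.0, size=5000))   # simulated photon events
      centres, rate = bin_lightcurve(times, step=30.0)
      print(rate)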

  6. WE-D-9A-06: Open Source Monitor Calibration and Quality Control Software for Enterprise Display Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bevins, N; Vanderhoek, M; Lang, S

    2014-06-15

    Purpose: Medical display monitor calibration and quality control present challenges to medical physicists. The purpose of this work is to demonstrate and share experiences with an open source package that allows for both initial monitor setup and routine performance evaluation. Methods: A software package, pacsDisplay, has been developed over the last decade to aid in the calibration of all monitors within the radiology group in our health system. The software is used to calibrate monitors to follow the DICOM Grayscale Standard Display Function (GSDF) via lookup tables installed on the workstation. Additional functionality facilitates periodic evaluations of both primary and secondary medical monitors to ensure satisfactory performance. This software is installed on all radiology workstations, and can also be run as a stand-alone tool from a USB disk. Recently, a database has been developed to store and centralize the monitor performance data and to provide long-term trends for compliance with internal standards and various accrediting organizations. Results: Implementation and utilization of pacsDisplay has resulted in improved monitor performance across the health system. Monitor testing is now performed at regular intervals and the software is being used across multiple imaging modalities. Monitor performance characteristics such as maximum and minimum luminance, ambient luminance and illuminance, color tracking, and GSDF conformity are loaded into a centralized database for system performance comparisons. Compliance reports for organizations such as MQSA, ACR, and TJC are generated automatically and stored in the same database. Conclusion: An open source software solution has simplified and improved the standardization of displays within our health system. This work serves as an example method for calibrating and testing monitors within an enterprise health system.

  7. A Practical Guide to Calibration of a GSSHA Hydrologic Model Using ERDC Automated Model Calibration Software - Effective and Efficient Stochastic Global Optimization

    DTIC Science & Technology

    2012-02-01

    parameter estimation method, but rather to carefully describe how to use the ERDC software implementation of MLSL that accommodates the PEST model...model independent LM method based parameter estimation software PEST (Doherty, 2004, 2007a, 2007b), which quantifies model to measurement misfit...et al. (2011) focused on one drawback associated with LM-based model independent parameter estimation as implemented in PEST; viz., that it requires

  8. Automatic calibration and signal switching system for the particle beam fusion research data acquisition facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyer, W.B.

    1979-09-01

    This report describes both the hardware and software components of an automatic calibration and signal system (Autocal) for the data acquisition system for the Sandia particle beam fusion research accelerators Hydra, Proto I, and Proto II. The Autocal hardware consists of off-the-shelf commercial equipment. The various hardware components, special modifications and overall system configuration are described. Special software has been developed to support the Autocal hardware. Software operation and maintenance are described.

  9. Reconstructing the calibrated strain signal in the Advanced LIGO detectors

    NASA Astrophysics Data System (ADS)

    Viets, A. D.; Wade, M.; Urban, A. L.; Kandhasamy, S.; Betzwieser, J.; Brown, Duncan A.; Burguet-Castell, J.; Cahillane, C.; Goetz, E.; Izumi, K.; Karki, S.; Kissel, J. S.; Mendell, G.; Savage, R. L.; Siemens, X.; Tuyenbayev, D.; Weinstein, A. J.

    2018-05-01

    Advanced LIGO’s raw detector output needs to be calibrated to compute dimensionless strain h(t) . Calibrated strain data is produced in the time domain using both a low-latency, online procedure and a high-latency, offline procedure. The low-latency h(t) data stream is produced in two stages, the first of which is performed on the same computers that operate the detector’s feedback control system. This stage, referred to as the front-end calibration, uses infinite impulse response (IIR) filtering and performs all operations at a 16 384 Hz digital sampling rate. Due to several limitations, this procedure currently introduces certain systematic errors in the calibrated strain data, motivating the second stage of the low-latency procedure, known as the low-latency gstlal calibration pipeline. The gstlal calibration pipeline uses finite impulse response (FIR) filtering to apply corrections to the output of the front-end calibration. It applies time-dependent correction factors to the sensing and actuation components of the calibrated strain to reduce systematic errors. The gstlal calibration pipeline is also used in high latency to recalibrate the data, which is necessary due mainly to online dropouts in the calibrated data and identified improvements to the calibration models or filters.
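
    An illustrative sketch only of the FIR stage described above: apply an FIR correction filter and a time-dependent scale factor to a sampled channel (the filter, the factor and the data here are stand-ins; the real LIGO filters and corrections are far more involved):

      import numpy as np
      from scipy.signal import firwin, lfilter

      fs = 16384.0                            # front-end sample rate (Hz)
      fir = firwin(numtaps=257, cutoff=5000.0, fs=fs)   # hypothetical correction FIR

      rng = np.random.default_rng(0)
      raw = rng.normal(size=int(fs))          # one second of stand-in channel data

      corrected = lfilter(fir, 1.0, raw)      # FIR filtering (causal, adds group delay)
      kappa = 1.02                            # hypothetical time-dependent correction factor
      strain = kappa * corrected
      print(strain[:5])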

  10. An Innovative Software Tool Suite for Power Plant Model Validation and Parameter Calibration using PMU Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yuanyuan; Diao, Ruisheng; Huang, Renke

    Maintaining good quality of power plant stability models is of critical importance to ensure the secure and economic operation and planning of today's power grid with its increasing stochastic and dynamic behavior. According to North American Electric Reliability (NERC) standards, all generators in North America with capacities larger than 10 MVA are required to validate their models every five years. Validation is quite costly and can significantly affect the revenue of generator owners, because the traditional staged testing requires generators to be taken offline. Over the past few years, validating and calibrating parameters using online measurements including phasor measurement units (PMUs) and digital fault recorders (DFRs) has been proven to be a cost-effective approach. In this paper, an innovative open-source tool suite is presented for validating power plant models using the PPMV tool, identifying bad parameters with trajectory sensitivity analysis, and finally calibrating parameters using an ensemble Kalman filter (EnKF) based algorithm. The architectural design and the detailed procedures to run the tool suite are presented, with results of a test on a realistic hydro power plant using PMU measurements for 12 different events. The calibrated parameters of the machine, exciter, governor and PSS models demonstrate much better performance than the original models for all the events and show the robustness of the proposed calibration algorithm.
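
    A minimal, generic ensemble Kalman filter parameter-update sketch (not the tool suite's implementation); the stand-in model, observation and parameter values are hypothetical:

      import numpy as np

      def enkf_update(ensemble, h, observation, obs_var, rng):
          """One EnKF update of a parameter ensemble (n_members x n_params) given a
          scalar measurement and a forward model h(params) -> predicted measurement."""
          predictions = np.array([h(m) for m in ensemble])
          p_mean, d_mean = ensemble.mean(axis=0), predictions.mean()
          cov_pd = (ensemble - p_mean).T @ (predictions - d_mean) / (len(ensemble) - 1)
          var_d = predictions.var(ddof=1) + obs_var
          gain = cov_pd / var_d
          perturbed = observation + rng.normal(0.0, np.sqrt(obs_var), len(ensemble))
          return ensemble + np.outer(perturbed - predictions, gain)

      rng = np.random.default_rng(0)
      ens = rng.normal([2.0, 0.5], 0.3, size=(50, 2))   # prior ensemble of 2 parameters
      h = lambda p: 1.5 * p[0] + p[1]                   # stand-in model response
      print(enkf_update(ens, h, observation=3.6, obs_var=0.01, rng=rng).mean(axis=0))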

  11. RGB Color Calibration for Quantitative Image Analysis: The “3D Thin-Plate Spline” Warping Approach

    PubMed Central

    Menesatti, Paolo; Angelini, Claudio; Pallottino, Federico; Antonucci, Francesca; Aguzzi, Jacopo; Costa, Corrado

    2012-01-01

    In the last years the need to numerically define color by its coordinates in n-dimensional space has increased strongly. Colorimetric calibration is fundamental in food processing and other biological disciplines to quantitatively compare samples' color during workflow with many devices. Several software programmes are available to perform standardized colorimetric procedures, but they are often too imprecise for scientific purposes. In this study, we applied the Thin-Plate Spline interpolation algorithm to calibrate colours in sRGB space (the corresponding Matlab code is reported in the Appendix). This was compared with two other approaches. The first is based on a commercial calibration system (ProfileMaker) and the second on a Partial Least Square analysis. Moreover, to explore device variability and resolution two different cameras were adopted and for each sensor, three consecutive pictures were acquired under four different light conditions. According to our results, the Thin-Plate Spline approach reported a very high efficiency of calibration allowing the possibility to create a revolution in the in-field applicative context of colour quantification not only in food sciences, but also in other biological disciplines. These results are of great importance for scientific color evaluation when lighting conditions are not controlled. Moreover, it allows the use of low cost instruments while still returning scientifically sound quantitative data. PMID:22969337

  12. Determination of 241Am in soil using an automated nuclear radiation measurement laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engstrom, D.E.; White, M.G.; Dunaway, P.B.

    The recent completion of REECo's Automated Laboratory and associated software systems has provided a significant increase in capability while reducing manpower requirements. The system is designed to perform gamma spectrum analyses on the large numbers of samples required by the current Nevada Applied Ecology Group (NAEG) and Plutonium Distribution Inventory Program (PDIP) soil sampling programs while maintaining sufficient sensitivities as defined by earlier investigations of the same type. The hardware and systems are generally described in this paper, with emphasis being placed on spectrum reduction and the calibration procedures used for soil samples. (auth)

  13. Evaluation of the geometric stability and the accuracy potential of digital cameras — Comparing mechanical stabilisation versus parameterisation

    NASA Astrophysics Data System (ADS)

    Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia

    Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accomplished accuracies in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accord to a German guideline for evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ringflashes which was considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ringflash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens where the focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ringflash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introduction of an image variant interior orientation in the calibration process improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space). Extending the parameter model with FiBun software to model not only an image variant interior orientation, but also deformations in the sensor domain of the cameras, showed significant improvements only for a small group of cameras. The Nikon D3 camera yielded the best overall accuracy (25 μm maximum absolute length measurement error in object space) with this calibration procedure indicating at the same time the presence of image invariant error in the sensor domain. Overall, calibration results showed that digital cameras can be applied for an accurate photogrammetric survey and that only a little effort was sufficient to greatly improve the accuracy potential of digital cameras.

  14. Photogrammetric measurement of 3D freeform millimetre-sized objects with micro features: an experimental validation of the close-range camera calibration model for narrow angles of view

    NASA Astrophysics Data System (ADS)

    Percoco, Gianluca; Sánchez Salmerón, Antonio J.

    2015-09-01

    The measurement of millimetre and micro-scale features is performed by high-cost systems based on technologies with narrow working ranges to accurately control the position of the sensors. Photogrammetry would lower the costs of 3D inspection of micro-features and would be applicable to the inspection of non-removable micro parts of large objects too. Unfortunately, the behaviour of photogrammetry is not known when photogrammetry is applied to micro-features. In this paper, the authors address these issues towards the application of digital close-range photogrammetry (DCRP) to the micro-scale, taking into account that in literature there are research papers stating that an angle of view (AOV) around 10° is the lower limit to the application of the traditional pinhole close-range calibration model (CRCM), which is the basis of DCRP. At first a general calibration procedure is introduced, with the aid of an open-source software library, to calibrate narrow AOV cameras with the CRCM. Subsequently the procedure is validated using a reflex camera with a 60 mm macro lens, equipped with extension tubes (20 and 32 mm) achieving magnification of up to 2 times approximately, to verify literature findings with experimental photogrammetric 3D measurements of millimetre-sized objects with micro-features. The limitation experienced by the laser printing technology, used to produce the bi-dimensional pattern on common paper, has been overcome using an accurate pattern manufactured with a photolithographic process. The results of the experimental activity prove that the CRCM is valid for AOVs down to 3.4° and that DCRP results are comparable with the results of existing and more expensive commercial techniques.

  15. Pre-Launch Algorithm and Data Format for the Level 1 Calibration Products for the EOS AM-1 Moderate Resolution Imaging Spectroradiometer (MODIS)

    NASA Technical Reports Server (NTRS)

    Guenther, Bruce W.; Godden, Gerald D.; Xiong, Xiao-Xiong; Knight, Edward J.; Qiu, Shi-Yue; Montgomery, Harry; Hopkins, M. M.; Khayat, Mohammad G.; Hao, Zhi-Dong; Smith, David E. (Technical Monitor)

    2000-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) radiometric calibration product is described for the thermal emissive and the reflective solar bands. Specific sensor design characteristics are identified to assist in understanding how the calibration algorithm software product is designed. The reflected solar band software products of radiance and reflectance factor both are described. The product file format is summarized and the MODIS Characterization Support Team (MCST) Homepage location for the current file format is provided.

  16. X-Band Acquisition Aid Software

    NASA Technical Reports Server (NTRS)

    Britcliffe, Michael J.; Strain, Martha M.; Wert, Michael

    2011-01-01

    The X-band Acquisition Aid (AAP) software is a low-cost acquisition aid for the Deep Space Network (DSN) antennas, and is used while acquiring a spacecraft shortly after it has launched. When enabled, the acquisition aid provides corrections to the antenna-predicted trajectory of the spacecraft to compensate for the variations that occur during the actual launch. The AAP software also provides the corrections to the antenna-predicted trajectory to the navigation team that uses the corrections to refine their model of the spacecraft in order to produce improved antenna-predicted trajectories for each spacecraft that passes over each complex. The software provides an automated Acquisition Aid receiver calibration, and provides graphical displays to the operator and remote viewers via an Ethernet connection. It has a Web server, and the remote workstations use the Firefox browser to view the displays. At any given time, only one operator can control any particular display in order to avoid conflicting commands from more than one control point. The configuration and control is accomplished solely via the graphical displays. The operator does not have to remember any commands. Only a few configuration parameters need to be changed, and can be saved to the appropriate spacecraft-dependent configuration file on the AAP's hard disk. AAP automates the calibration sequence by first commanding the antenna to the correct position, starting the receiver calibration sequence, and then providing the operator with the option of accepting or rejecting the new calibration parameters. If accepted, the new parameters are stored in the appropriate spacecraft-dependent configuration file. The calibration can be performed on the Sun, greatly expanding the window of opportunity for calibration. The spacecraft traditionally used for calibration is in view typically twice per day, and only for about ten minutes each pass.

  17. Research on radiation exposure from CT part of hybrid camera and diagnostic CT

    NASA Astrophysics Data System (ADS)

    Solný, Pavel; Zimák, Jaroslav

    2014-11-01

    Research on radiation exposure from the CT part of hybrid cameras was conducted in seven different Departments of Nuclear Medicine (DNM). Processed data and effective dose (E) estimations led to the idea of phantom verification and comparison of absorbed doses with software estimations. Anonymous data from about 100 examinations from each DNM were gathered. The acquired data were processed and used in dose estimation programs (ExPACT, ImPACT, ImpactDose) with respect to the type of examination and examination procedures. Individual effective doses were calculated using the listed programs. Preserving the same procedure in the dose estimation process allows us to compare the resulting E. Some differences and disproportions during dose estimation led to the idea of verifying the estimated E. Consequently, two different sets of about 100 TLD 100H detectors were calibrated for measurement inside the Alderson RANDO anthropomorphic phantom. Standard examination protocols were examined using the 2-slice CT part of a hybrid SPECT/CT. Moreover, phantom exposure from a body examination protocol for 32-slice and 64-slice diagnostic CT scanners was also verified. The absorbed dose (DT,R) measured using TLD detectors was compared with the software estimates of equivalent dose HT values computed by the E estimation software. Although the limited number of detector cavities enabled measurements only within the lung, liver, thyroid and spleen-pancreas regions, some basic comparison was possible.

  18. Calibration of 4π NaI(Tl) detectors with coincidence summing correction using new numerical procedure and ANGLE4 software

    NASA Astrophysics Data System (ADS)

    Badawi, Mohamed S.; Jovanovic, Slobodan I.; Thabet, Abouzeid A.; El-Khatib, Ahmed M.; Dlabac, Aleksandar D.; Salem, Bohaysa A.; Gouda, Mona M.; Mihaljevic, Nikola N.; Almugren, Kholud S.; Abbas, Mahmoud I.

    2017-03-01

    The 4π NaI(Tl) γ-ray detectors consist of a well cavity with a cylindrical cross section and an enclosing measurement geometry with a large detection angle. This leads to an exceptionally high efficiency level and a significant coincidence summing effect, much larger than for a single cylindrical or coaxial detector, especially in very low activity measurements. In the present work, the detection effective solid angle, as well as the full-energy peak and total efficiencies of well-type detectors, were calculated by a new numerical simulation method (NSM) and by the ANGLE4 software. To obtain the coincidence summing correction factors with these methods, the coincident emission of photons was modeled mathematically, based on analytical equations and complex integrations over the radioactive volumetric sources, including the self-attenuation factor. The full-energy peak efficiencies and correction factors were measured using 152Eu; an exact adjustment of the detector efficiency curve is required, because neglecting the coincidence summing effect makes the results inconsistent. In general, the efficiency calibration process and the coincidence summing corrections appear jointly. The full-energy peak and total efficiencies from the two methods typically agree within a discrepancy of 10%, and the discrepancy between the simulation, ANGLE4 and the measured full-energy peak efficiencies after correction for the coincidence summing effect did not, on average, exceed 14%. Therefore, this technique can be easily applied in establishing the efficiency calibration curves of well-type detectors.

  19. Calibration process of highly parameterized semi-distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Vidmar, Andrej; Brilly, Mitja

    2017-04-01

    Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since every river basin, with its own natural characteristics, and every hydrological event therein are unique, this is a complex process that has not been researched enough. Calibration is the procedure of determining the parameters of a model that are not known well enough. The input and output variables and the mathematical model expressions are known, while only some parameters are unknown; these are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that leave the modeller little possibility to manage the process, and the results are often not the best. We developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command-line interface, and couple it with PEST. PEST is a parameter estimation tool that is widely used in groundwater modelling and can also be applied to surface waters. A calibration process managed directly by the expert affects the outcome of the inversion procedure in proportion to the expert's knowledge, and achieves better results than if the procedure had been left to the selected optimization algorithm. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena, such as karstic, alluvial and forest areas; this step requires the geological, meteorological, hydraulic and hydrological knowledge of the modeller. The second step is to set initial parameter values to their preferred values based on expert knowledge, and to define all parameter and observation groups; peak data are essential in the calibration process if we are mainly interested in flood events, and each sub-catchment in the model has its own observation group. The third step is to set appropriate bounds on the parameters within their range of realistic values. The fourth step is to use singular value decomposition (SVD), which ensures that PEST maintains numerical stability regardless of how ill-posed the inverse problem is. The fifth step is to run PWTADJ1, which creates a new PEST control file in which weights are adjusted so that the contribution made to the total objective function by each observation group is the same; this prevents the information content of any group from being invisible to the inversion process. The sixth step is to add Tikhonov regularization to the PEST control file by running the ADDREG1 utility (Doherty, J, 2013); in doing so, ADDREG1 automatically provides a prior-information equation for each parameter in which the preferred value of that parameter is equated to its initial value. The last step is to run PEST; we run BeoPEST, a parallel version of PEST that can be run on multiple computers simultaneously over TCP communication, which speeds up the calibration process. A case study with the results of calibration and validation of the model will be presented.
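
    The step-by-step workflow above is easy to script around the PEST command-line utilities. The sketch below is a minimal illustration, assuming a control file 'hbv_case.pst' has already been prepared (steps one to four); the exact arguments of PWTADJ1, ADDREG1 and BeoPEST may differ between PEST versions, so the calls are placeholders rather than the authors' setup.

```python
import subprocess

def run(cmd):
    """Run an external PEST utility and stop if it reports an error."""
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Steps five to seven of the expert-driven procedure described above.
run(["pwtadj1", "hbv_case.pst", "hbv_wtadj.pst"])   # equalize observation-group contributions
run(["addreg1", "hbv_wtadj.pst", "hbv_reg.pst"])    # add Tikhonov regularization
run(["beopest", "hbv_reg.pst", "/H", ":4004"])      # BeoPEST master; workers connect over TCP
```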

  20. Benefits of Oxygen Saturation Targeting Trials: Oximeter Calibration Software Revision and Infant Saturations.

    PubMed

    Whyte, Robin K; Nelson, Harvey; Roberts, Robin S; Schmidt, Barbara

    2017-03-01

    It has been reported in the 3 Benefits of Oxygen Saturation Targeting (BOOST-II) trials that changes in oximeter calibration software resulted in clearer separation between the oxygen saturations in the two trial target groups. A revised analysis of the published BOOST-II data does not support this conclusion. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Calibration Procedures on Oblique Camera Setups

    NASA Astrophysics Data System (ADS)

    Kemper, G.; Melykuti, B.; Yu, C.

    2016-06-01

    Besides the creation of virtual animated 3D city models and analyses for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices, which in turn requires precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight together with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors relative to the nadir camera; all camera images enter the aerial triangulation (AT) process as individually pre-oriented data. This enables a better post-calibration, in order to detect variations in the individual camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on five Phase One cameras, where the nadir camera has 80 MPix and a 50 mm lens, while the oblique cameras capture 50 MPix images using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU, which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms; these had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was based on a special calibration flight with 351 exposures of all five cameras and registered GPS/IMU data. This specific mission was designed with two different altitudes and additional cross lines at each flying height. The five images from each exposure position have no overlap, but within the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfactory number for the camera calibration. In a first step, with the help of the nadir camera and the GPS/IMU data, an initial orientation correction and radial correction were calculated. With this approach, the whole project was calculated and calibrated in one step. During the iteration process, the radial and tangential parameters were switched on individually for the camera heads, and after that the camera constants and principal point positions were checked and finally calibrated. Besides that, the boresight calibration can be performed either on the basis of the nadir camera and its offsets, or independently for each camera without correlation to the others; this must in any case be performed on a complete mission to obtain stability between the single camera heads. Determining the lever arms from the nodal points to the IMU centre needs more caution than for a single camera, especially due to the strong tilt angles. With all these steps prepared, one obtains a highly accurate sensor that enables fully automated data extraction and rapid updates of existing data. Frequent monitoring of urban dynamics is then possible in a fully 3D environment.

  2. Method to improve the blade tip-timing accuracy of fiber bundle sensor under varying tip clearance

    NASA Astrophysics Data System (ADS)

    Duan, Fajie; Zhang, Jilong; Jiang, Jiajia; Guo, Haotian; Ye, Dechao

    2016-01-01

    Blade vibration measurement based on the blade tip-timing method has become an industry-standard procedure. Fiber bundle sensors are widely used for tip-timing measurement. However, the variation of clearance between the sensor and the blade will bring a tip-timing error to fiber bundle sensors due to the change in signal amplitude. This article presents methods based on software and hardware to reduce the error caused by the tip clearance change. The software method utilizes both the rising and falling edges of the tip-timing signal to determine the blade arrival time, and a calibration process suitable for asymmetric tip-timing signals is presented. The hardware method uses an automatic gain control circuit to stabilize the signal amplitude. Experiments are conducted and the results prove that both methods can effectively reduce the impact of tip clearance variation on the blade tip-timing and improve the accuracy of measurements.
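
    The software correction, which uses both the rising and the falling edge of the tip-timing pulse, can be sketched as follows. This illustrates the general idea only (threshold choice and linear interpolation are assumptions), not the authors' implementation.

```python
import numpy as np

def arrival_time(t, v, threshold):
    """Blade arrival time taken as the midpoint of the rising and falling
    threshold crossings, which is less sensitive to amplitude changes caused
    by tip-clearance variation than a single-edge trigger."""
    above = v >= threshold
    edges = np.flatnonzero(np.diff(above.astype(int)))   # rising and falling edges
    i_rise, i_fall = edges[0], edges[-1]

    def crossing(i):
        # linear interpolation between the two samples bracketing the crossing
        frac = (threshold - v[i]) / (v[i + 1] - v[i])
        return t[i] + frac * (t[i + 1] - t[i])

    return 0.5 * (crossing(i_rise) + crossing(i_fall))
```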

  3. Photogrammetry Tool for Forensic Analysis

    NASA Technical Reports Server (NTRS)

    Lane, John

    2012-01-01

    A system gives crime scene and accident scene investigators the ability to acquire visual scene data using cameras for processing at a later time. This system uses a COTS digital camera, a photogrammetry calibration cube, and 3D photogrammetry processing software. In a previous instrument developed by NASA, the laser scaling device made use of parallel laser beams to provide a photogrammetry solution in 2D. This device and associated software work well under certain conditions, but in order to make use of a full 3D photogrammetry system, a different approach was needed. When using multiple cubes whose locations relative to each other are unknown, a procedure to merge the data from each cube is as follows: 1. Mark a reference point on cube 1, then mark points on cube 2 as unknowns; this locates cube 2 in cube 1's coordinate system. 2. Mark reference points on cube 2, then mark points on cube 1 as unknowns; this locates cube 1 in cube 2's coordinate system. 3. Continue this procedure for all combinations of cubes. 4. The coordinates of all of the found coordinate systems are then merged into a single global coordinate system. In order to achieve maximum accuracy, measurements are done in one of two ways, depending on scale: when measuring the size of objects, the coordinate system corresponding to the nearest cube is used; when measuring the location of objects relative to a global coordinate system, a merged coordinate system is used. Presently, traffic accident analysis is time-consuming and not very accurate. Using cubes with differential GPS would give absolute positions of the cubes in the accident area, so that individual cubes would provide local photogrammetry calibration for objects near a cube.
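
    Merging the per-cube coordinate systems into a single global frame amounts to chaining rigid transformations. A minimal sketch under that assumption (NumPy, with hypothetical poses; not the NASA tool's code):

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_global(points_local, T_cube_to_global):
    """Map Nx3 points expressed in a cube's frame into the global frame."""
    pts = np.hstack([points_local, np.ones((len(points_local), 1))])
    return (T_cube_to_global @ pts.T).T[:, :3]

# Example: cube 2's pose expressed in cube 1's frame, chained with cube 1's
# (e.g. differential-GPS-derived) pose in the global frame.
T_c1_global = make_transform(np.eye(3), np.array([100.0, 50.0, 0.0]))
T_c2_in_c1  = make_transform(np.eye(3), np.array([2.0, 0.5, 0.0]))
T_c2_global = T_c1_global @ T_c2_in_c1
```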

  4. Approaches in highly parameterized inversion: TSPROC, a general time-series processor to assist in model calibration and result summarization

    USGS Publications Warehouse

    Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.

    2012-01-01

    The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.

  5. ATLAS tile calorimeter cesium calibration control and analysis software

    NASA Astrophysics Data System (ADS)

    Solovyanov, O.; Solodkov, A.; Starchenko, E.; Karyukhin, A.; Isaev, A.; Shalanda, N.

    2008-07-01

    An online control system to calibrate and monitor the ATLAS barrel hadronic calorimeter (TileCal) with a movable radioactive source, driven by liquid flow, is described. To read out and control the system, online software has been developed using ATLAS TDAQ components such as DVS (Diagnostic and Verification System) to verify the hardware before running, IS (Information Server) for data and status exchange between networked computers, and other components such as DDC (DCS to DAQ Connection) to connect to the PVSS-based slow-control systems of the Tile Calorimeter, high voltage and low voltage. A system of scripting facilities, based on the Python language, is used to handle all the calibration and monitoring processes, from the hardware level to final data storage, including various abnormal situations. A Qt-based graphical user interface to display the status of the calibration system during the cesium source scan is described. The software for analysis of the detector response, using online data, is discussed. The performance of the system and first experience from the ATLAS pit are presented.

  6. Calibration Of Partial-Pressure-Of-Oxygen Sensors

    NASA Technical Reports Server (NTRS)

    Yount, David W.; Heronimus, Kevin

    1995-01-01

    A report is released giving an analysis of, and discussing improvements in, the procedure for calibrating partial-pressure-of-oxygen sensors to satisfy Spacelab calibration requirements. The sensors exhibit fast drift, which results in a short calibration period not suitable for Spacelab. By assessing the complete process of determining the total available drift range, the calibration procedure was modified to eliminate errors while still satisfying the requirements without compromising the integrity of the system.

  7. Legato: Personal Computer Software for Analyzing Pressure-Sensitive Paint Data

    NASA Technical Reports Server (NTRS)

    Schairer, Edward T.

    2001-01-01

    'Legato' is personal computer software for analyzing radiometric pressure-sensitive paint (PSP) data. The software is written in the C programming language and executes under Windows 95/98/NT operating systems. It includes all operations normally required to convert pressure-paint image intensities to normalized pressure distributions mapped to physical coordinates of the test article. The program can analyze data from both single- and bi-luminophore paints and provides for both in situ and a priori paint calibration. In addition, there are functions for determining paint calibration coefficients from calibration-chamber data. The software is designed as a self-contained, interactive research tool that requires as input only the bare minimum of information needed to accomplish each function, e.g., images, model geometry, and paint calibration coefficients (for a priori calibration) or pressure-tap data (for in situ calibration). The program includes functions that can be used to generate needed model geometry files for simple model geometries (e.g., airfoils, trapezoidal wings, rotor blades) based on the model planform and airfoil section. All data files except images are in ASCII format and thus are easily created, read, and edited. The program does not use database files. This simplifies setup but makes the program inappropriate for analyzing massive amounts of data from production wind tunnels. Program output consists of Cartesian plots, false-colored real and virtual images, pressure distributions mapped to the surface of the model, assorted ASCII data files, and a text file of tabulated results. Graphical output is displayed on the computer screen and can be saved as publication-quality (PostScript) files.
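
    Radiometric PSP analysis is commonly based on a Stern-Volmer-type relation, I_ref/I = A + B·(p/p_ref), whose coefficients are fitted from calibration-chamber (a priori) or pressure-tap (in situ) data. The sketch below is a generic least-squares fit of that relation with made-up numbers; it is not Legato's internal routine.

```python
import numpy as np

# Calibration-chamber data (illustrative values): pressure ratios and intensity ratios.
p_over_pref = np.array([0.6, 0.8, 1.0, 1.2, 1.4])
iref_over_i = np.array([0.71, 0.85, 1.00, 1.14, 1.29])

# Fit I_ref/I = A + B * (p/p_ref) by linear least squares.
B, A = np.polyfit(p_over_pref, iref_over_i, 1)

def pressure_from_intensity(i_ratio, p_ref):
    """Invert the calibration to map a measured intensity ratio to pressure."""
    return p_ref * (i_ratio - A) / B
```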

  8. Galileo SSI/Ida Radiometrically Calibrated Images V1.0

    NASA Astrophysics Data System (ADS)

    Domingue, D. L.

    2016-05-01

    This data set includes Galileo Orbiter SSI radiometrically calibrated images of the asteroid 243 Ida, created using ISIS software and assuming nadir pointing. This is an original delivery of radiometrically calibrated files, not an update to existing files. All images archived include the asteroid within the image frame. Calibration was performed in 2013-2014.

  9. Experimental calibration procedures for rotating Lorentz-force flowmeters

    DOE PAGES

    Hvasta, M. G.; Slighton, N. T.; Kolemen, E.; ...

    2017-07-14

    Rotating Lorentz-force flowmeters are a novel and useful technology with a range of applications in a variety of different industries. However, calibrating these flowmeters can be challenging, time-consuming, and expensive. In this paper, simple calibration procedures for rotating Lorentz-force flowmeters are presented. These procedures eliminate the need for expensive equipment, numerical modeling, redundant flowmeters, and system down-time. Finally, the calibration processes are explained in a step-by-step manner and compared to experimental results.

  10. Experimental calibration procedures for rotating Lorentz-force flowmeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hvasta, M. G.; Slighton, N. T.; Kolemen, E.

    Rotating Lorentz-force flowmeters are a novel and useful technology with a range of applications in a variety of different industries. However, calibrating these flowmeters can be challenging, time-consuming, and expensive. In this paper, simple calibration procedures for rotating Lorentz-force flowmeters are presented. These procedures eliminate the need for expensive equipment, numerical modeling, redundant flowmeters, and system down-time. Finally, the calibration processes are explained in a step-by-step manner and compared to experimental results.

  11. Control Program for an Optical-Calibration Robot

    NASA Technical Reports Server (NTRS)

    Johnston, Albert

    2005-01-01

    A computer program provides semiautomatic control of a moveable robot used to perform optical calibration of video-camera-based optoelectronic sensor systems that will be used to guide automated rendezvous maneuvers of spacecraft. The function of the robot is to move a target and hold it at specified positions. With the help of limit switches, the software first centers or finds the target. Then the target is moved to a starting position. Thereafter, with the help of an intuitive graphical user interface, an operator types in coordinates of specified positions, and the software responds by commanding the robot to move the target to the positions. The software has capabilities for correcting errors and for recording data from the guidance-sensor system being calibrated. The software can also command that the target be moved in a predetermined sequence of motions between specified positions and can be run in an advanced control mode in which, among other things, the target can be moved beyond the limits set by the limit switches.

  12. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR HARVARD PM IMPACTOR CALIBRATION AND LEAK TESTING (UA-L-7.1)

    EPA Science Inventory

    The purpose of this SOP is to describe the procedures for the periodic calibration and leak testing of Harvard particulate matter (PM) impactor units. This procedure applies directly to the calibration and leak testing of Harvard PM impactor units used during the Arizona NHEXAS ...

  13. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR HARVARD PM IMPACTOR CALIBRATION AND LEAK TESTING (UA-L-7.1)

    EPA Science Inventory

    The purpose of this SOP is to describe the procedures for the periodic calibration and leak testing of Harvard particulate matter (PM) impactor units. This procedure applies directly to the calibration and leak testing of Harvard PM impactor units used during the Arizona NHEXAS ...

  14. Bayesian regression models outperform partial least squares methods for predicting milk components and technological properties using infrared spectral data.

    PubMed

    Ferragina, A; de los Campos, G; Vazquez, A I; Cecchinato, A; Bittante, G

    2015-11-01

    The aim of this study was to assess the performance of Bayesian models commonly used for genomic selection to predict "difficult-to-predict" dairy traits, such as milk fatty acid (FA) expressed as percentage of total fatty acids, and technological properties, such as fresh cheese yield and protein recovery, using Fourier-transform infrared (FTIR) spectral data. Our main hypothesis was that Bayesian models that can estimate shrinkage and perform variable selection may improve our ability to predict FA traits and technological traits above and beyond what can be achieved using the current calibration models (e.g., partial least squares, PLS). To this end, we assessed a series of Bayesian methods and compared their prediction performance with that of PLS. The comparison between models was done using the same sets of data (i.e., same samples, same variability, same spectral treatment) for each trait. Data consisted of 1,264 individual milk samples collected from Brown Swiss cows for which gas chromatographic FA composition, milk coagulation properties, and cheese-yield traits were available. For each sample, 2 spectra in the infrared region from 5,011 to 925 cm(-1) were available and averaged before data analysis. Three Bayesian models: Bayesian ridge regression (Bayes RR), Bayes A, and Bayes B, and 2 reference models: PLS and modified PLS (MPLS) procedures, were used to calibrate equations for each of the traits. The Bayesian models used were implemented in the R package BGLR (http://cran.r-project.org/web/packages/BGLR/index.html), whereas the PLS and MPLS were those implemented in the WinISI II software (Infrasoft International LLC, State College, PA). Prediction accuracy was estimated for each trait and model using 25 replicates of a training-testing validation procedure. Compared with PLS, which is currently the most widely used calibration method, MPLS and the 3 Bayesian methods showed significantly greater prediction accuracy. Accuracy increased in moving from calibration to external validation methods, and in moving from PLS and MPLS to Bayesian methods, particularly Bayes A and Bayes B. The maximum R(2) value of validation was obtained with Bayes B and Bayes A. For the FA, C10:0 (% of each FA on total FA basis) had the highest R(2) (0.75, achieved with Bayes A and Bayes B), and among the technological traits, fresh cheese yield R(2) of 0.82 (achieved with Bayes B). These 2 methods have proven to be useful instruments in shrinking and selecting very informative wavelengths and inferring the structure and functions of the analyzed traits. We conclude that Bayesian models are powerful tools for deriving calibration equations, and, importantly, these equations can be easily developed using existing open-source software. As part of our study, we provide scripts based on the open source R software BGLR, which can be used to train customized prediction equations for other traits or populations. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
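
    The models in the study were fitted with the R package BGLR and with WinISI; as a rough, non-equivalent Python illustration of the same kind of comparison (Bayesian shrinkage regression versus PLS on many spectral predictors), scikit-learn can be used as below. The data here are synthetic placeholders, not the milk spectra.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import cross_val_score

# X: samples x wavenumbers (stand-in for FTIR spectra); y: a trait to predict.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))
y = X[:, :10].sum(axis=1) + rng.normal(scale=0.5, size=200)

models = {
    "PLS (20 components)": PLSRegression(n_components=20),
    "Bayesian ridge":      BayesianRidge(),   # shrinkage, analogous in spirit to Bayes RR
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.2f}")
```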

  15. A BPM calibration procedure using TBT data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, M.J.; Crisp, J.; Prieto, P.

    2007-06-01

    Accurate BPM calibration is crucial for lattice analysis, and it is reassuring when the calibration can be independently verified. This paper outlines a procedure that can extract BPM calibration information from TBT orbit data. The procedure is developed as an extension to the Turn-By-Turn lattice analysis [1]. Its application to data from both the Recycler Ring and the Main Injector (MI) at Fermilab has produced very encouraging results. Some specifics of the hardware design are mentioned to contrast with the analysis results.

  16. Radiometric recalibration procedure for Landsat-5 Thematic Mapper data

    USGS Publications Warehouse

    Chander, G.; Micijevic, E.; Hayes, R.W.; Barsi, J.A.

    2008-01-01

    The Landsat-5 (L5) satellite was launched on March 1, 1984, with a design life of three years. Incredibly, the L5 Thematic Mapper (TM) has collected data for 23 years. Over this time the detectors have aged, and the instrument's radiometric characteristics have changed since launch. The calibration procedures and parameters have also changed with time. Revised radiometric calibrations have improved the radiometric accuracy of recently processed data; however, users with data that were processed prior to the calibration update do not benefit from the revisions. A procedure has been developed to give users the ability to recalibrate their existing Level 1 (L1) products without having to purchase reprocessed data from the U.S. Geological Survey (USGS). The accuracy of the recalibration depends on knowledge of the prior calibration applied to the data. The "Work Order" file, included with standard National Land Archive Production System (NLAPS) data products, gives the parameters that define the applied calibration: either the Internal Calibrator (IC) calibration parameters or, if there were problems with the IC calibration, the default prelaunch calibration. This paper details the recalibration procedure for data processed using the IC, for which users have the Work Order file. © 2008 IEEE.

  17. FlowCal: A user-friendly, open source software tool for automatically converting flow cytometry data from arbitrary to calibrated units

    PubMed Central

    Castillo-Hair, Sebastian M.; Sexton, John T.; Landry, Brian P.; Olson, Evan J.; Igoshin, Oleg A.; Tabor, Jeffrey J.

    2017-01-01

    Flow cytometry is widely used to measure gene expression and other molecular biological processes with single cell resolution via fluorescent probes. Flow cytometers output data in arbitrary units (a.u.) that vary with the probe, instrument, and settings. Arbitrary units can be converted to the calibrated unit molecules of equivalent fluorophore (MEF) using commercially available calibration particles. However, there is no convenient, non-proprietary tool available to perform this calibration. Consequently, most researchers report data in a.u., limiting interpretation. Here, we report a software tool named FlowCal to overcome current limitations. FlowCal can be run using an intuitive Microsoft Excel interface, or customizable Python scripts. The software accepts Flow Cytometry Standard (FCS) files as inputs and is compatible with different calibration particles, fluorescent probes, and cell types. Additionally, FlowCal automatically gates data, calculates common statistics, and produces publication quality plots. We validate FlowCal by calibrating a.u. measurements of E. coli expressing superfolder GFP (sfGFP) collected at 10 different detector sensitivity (gain) settings to a single MEF value. Additionally, we reduce day-to-day variability in replicate E. coli sfGFP expression measurements due to instrument drift by 33%, and calibrate S. cerevisiae mVenus expression data to MEF units. Finally, we demonstrate a simple method for using FlowCal to calibrate fluorescence units across different cytometers. FlowCal should ease the quantitative analysis of flow cytometry data within and across laboratories and facilitate the adoption of standard fluorescence units in synthetic biology and beyond. PMID:27110723
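
    The heart of bead-based calibration is a standard curve relating bead peak positions in arbitrary units to their assigned MEF values. The sketch below illustrates the idea with a log-log linear fit and made-up bead values; it is a simplification, not FlowCal's actual implementation, which also handles FCS parsing, gating and automatic bead-peak detection.

```python
import numpy as np

# Illustrative values: assigned MEF of the bead subpopulations and the
# arbitrary-unit peak positions measured for them on a given cytometer.
bead_mef = np.array([800.0, 2100.0, 6600.0, 16000.0, 47000.0, 140000.0])
bead_au  = np.array([120.0, 310.0, 980.0, 2500.0, 7200.0, 21000.0])

# Standard curve: log10(MEF) = m * log10(a.u.) + b
m, b = np.polyfit(np.log10(bead_au), np.log10(bead_mef), 1)

def au_to_mef(values_au):
    """Convert arbitrary-unit fluorescence values to MEF via the bead standard curve."""
    return 10 ** (m * np.log10(np.asarray(values_au, dtype=float)) + b)

print(au_to_mef([150.0, 5000.0]))
```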

  18. Calibration of radio-astronomical data on the cloud. LOFAR, the pathway to SKA

    NASA Astrophysics Data System (ADS)

    Sabater, J.; Sánchez-Expósito, S.; Garrido, J.; Ruiz, J. E.; Best, P. N.; Verdes-Montenegro, L.

    2015-05-01

    The radio interferometer LOFAR (LOw Frequency ARray) is now fully operational. This Square Kilometre Array (SKA) pathfinder allows observation of the sky at frequencies between 10 and 240 MHz, a relatively unexplored region of the spectrum. LOFAR is a software-defined telescope: the data are mainly processed using specialized software running on common computing facilities. That means the capabilities of the telescope are virtually defined by software and mainly limited by the available computing power. However, the quantity of data produced can quickly reach huge volumes (several petabytes per day). After correlation and pre-processing of the data in a dedicated cluster, the final dataset is handed to the user (typically several terabytes). The calibration of these data requires a powerful computing facility in which the specific state-of-the-art software, which is under heavy continuous development, can be easily installed and updated. That makes this case a perfect candidate for a cloud infrastructure, which adds the advantages of an on-demand, flexible solution. We present our approach to the calibration of LOFAR data using Ibercloud, the cloud infrastructure provided by Ibergrid. With the calibration workflow adapted to the cloud, we can explore calibration strategies for the SKA and show how private or commercial cloud infrastructures (Ibercloud, Amazon EC2, Google Compute Engine, etc.) can help to solve the problems with big datasets that will be prevalent in the future of astronomy.

  19. A Review of LIDAR Radiometric Processing: From Ad Hoc Intensity Correction to Rigorous Radiometric Calibration.

    PubMed

    Kashani, Alireza G; Olsen, Michael J; Parrish, Christopher E; Wilson, Nicholas

    2015-11-06

    In addition to precise 3D coordinates, most light detection and ranging (LIDAR) systems also record "intensity", loosely defined as the strength of the backscattered echo for each measured point. To date, LIDAR intensity data have proven beneficial in a wide range of applications because they are related to surface parameters, such as reflectance. While numerous procedures have been introduced in the scientific literature, and even commercial software, to enhance the utility of intensity data through a variety of "normalization", "correction", or "calibration" techniques, the current situation is complicated by a lack of standardization, as well as confusing, inconsistent use of terminology. In this paper, we first provide an overview of basic principles of LIDAR intensity measurements and applications utilizing intensity information from terrestrial, airborne topographic, and airborne bathymetric LIDAR. Next, we review effective parameters on intensity measurements, basic theory, and current intensity processing methods. We define terminology adopted from the most commonly-used conventions based on a review of current literature. Finally, we identify topics in need of further research. Ultimately, the presented information helps lay the foundation for future standards and specifications for LIDAR radiometric calibration.
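
    One of the simplest corrections reviewed in this literature is a range normalization derived from the lidar equation, scaling intensity by the squared ratio of range to a reference range, optionally with an incidence-angle term for near-Lambertian surfaces. A minimal sketch under those assumptions (not a rigorous radiometric calibration):

```python
import numpy as np

def range_normalize(intensity, rng, ref_range=1000.0, incidence_rad=None):
    """Range-normalize LIDAR intensity: I_norm = I * (R / R_ref)^2,
    optionally divided by cos(incidence angle) for a Lambertian target."""
    corrected = np.asarray(intensity, dtype=float) * (np.asarray(rng) / ref_range) ** 2
    if incidence_rad is not None:
        corrected = corrected / np.cos(incidence_rad)
    return corrected
```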

  20. Calibration of Action Cameras for Photogrammetric Purposes

    PubMed Central

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

    The use of action cameras for photogrammetric purposes is not widespread, because until recently the images provided by these sensors, in either still or video capture mode, were not large enough to support analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to use the sensor of an action camera, a careful and reliable self-calibration must be applied prior to any photogrammetric procedure; this is a relatively difficult scenario because of the short focal length of the camera and the wide-angle lens used to obtain the maximum possible image resolution. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capture modes of a novel action camera, the GoPro Hero 3, which can provide still images of up to 12 Mp and video of up to 8 Mp resolution. PMID:25237898

  1. Calibration of action cameras for photogrammetric purposes.

    PubMed

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetric purposes is not widespread, because until recently the images provided by these sensors, in either still or video capture mode, were not large enough to support analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to use the sensor of an action camera, a careful and reliable self-calibration must be applied prior to any photogrammetric procedure; this is a relatively difficult scenario because of the short focal length of the camera and the wide-angle lens used to obtain the maximum possible image resolution. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capture modes of a novel action camera, the GoPro Hero 3, which can provide still images of up to 12 Mp and video of up to 8 Mp resolution.
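
    A typical OpenCV self-calibration for such a camera uses images of a planar chessboard target, as in the condensed sketch below. It uses the standard pinhole-plus-distortion model (for very wide-angle action-camera lenses the cv2.fisheye module is often preferred); the image path and board size are placeholders, and this is not the authors' software.

```python
import glob
import cv2
import numpy as np

board = (9, 6)                                     # inner chessboard corners (placeholder)
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calib_images/*.jpg"):      # placeholder path
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board, None)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)

# Remove lens distortion from a frame with the estimated intrinsics.
frame = cv2.imread("calib_images/frame_000.jpg")
undistorted = cv2.undistort(frame, K, dist)
```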

  2. Calibration of steady-state car-following models using macroscopic loop detector data.

    DOT National Transportation Integrated Search

    2010-05-01

    The paper develops procedures for calibrating the steady-state component of various car-following models using macroscopic loop detector data. The calibration procedures are developed for a number of commercially available microscopic traffic sim...

  3. Calibration procedure for a laser triangulation scanner with uncertainty evaluation

    NASA Astrophysics Data System (ADS)

    Genta, Gianfranco; Minetola, Paolo; Barbato, Giulio

    2016-11-01

    Most of the low-cost 3D scanning devices available on the market today are sold without a user calibration procedure to correct measurement errors related to changes in environmental conditions. In addition, there is no specific international standard defining a procedure to check the performance of a 3D scanner over time. This paper details a thorough methodology to calibrate a 3D scanner and assess its measurement uncertainty. The proposed procedure is based on the use of a reference ball plate and is applied to a triangulation laser scanner. Experimental results show that the metrological performance of the instrument can be greatly improved by applying the calibration procedure, which corrects systematic errors and reduces the device's measurement uncertainty.

  4. Continuous Odour Measurement with Chemosensor Systems

    NASA Astrophysics Data System (ADS)

    Boeker, Peter; Haas, T.; Diekmann, B.; Lammer, P. Schulze

    2009-05-01

    Continuous odour measurement is a challenging task for chemosensor systems. Firstly, a long-term, stable measurement mode must be guaranteed in order to preserve the validity of the time-consuming and expensive olfactometric calibration data. Secondly, a method is needed to deal with the incoming sensor data: continuous online detection of signal patterns, the correlated gas emission and the assigned odour data is essential for continuous odour measurement. Thirdly, there is a severe danger of over-fitting in the odour calibration process because of the high measurement uncertainty of olfactometry. In this contribution we present a technical solution for continuous measurements comprising a hybrid QMB sensor array and electrochemical cells. A set of software tools enables efficient data processing and calibration and computes the calibration parameters. The internal software of the measurement system's microcontroller processes the calibration parameters online for the output of the desired odour information.

  5. Stereo Vision Inside Tire

    DTIC Science & Technology

    2015-08-21

    using the Open Computer Vision (OpenCV) libraries [6] for computer vision and the Qt library [7] for the user interface. The software has the... depth. The software application calibrates the cameras using the plane-based calibration model from the OpenCV calib3D module and allows the... [6] OpenCV. 2015. OpenCV Open Source Computer Vision. [Online]. Available at: opencv.org [Accessed]: 09/01/2015. [7] Qt. 2015. Qt Project home

  6. Analyses of Field Test Data at the Atucha-1 Spent Fuel Pools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitaraman, S.

    A field test was conducted at the Atucha-1 spent nuclear fuel pools to validate a software package for gross defect detection that is used in conjunction with the inspection tool, the Spent Fuel Neutron Counter (SFNC). A set of measurements was taken with the SFNC, and the software predictions were compared with these data and analyzed. The data spanned a wide range of cooling times and a set of burnup levels, leading to count rates from several hundred down to around twenty per second. The current calibration in the software, which uses linear fitting, required multiple calibration factors to cover the entire range of count rates recorded. The solution was to use power regression data fitting to normalize the predicted response and derive a single calibration factor that can be applied to the entire set of data. The resulting comparisons between the predicted and measured responses were generally good and provided a quantitative method of detecting missing fuel in virtually all situations. Since the current version of the software uses the linear calibration method, it would need to be updated with the new power regression method to make it more user-friendly for real-time verification and fieldable for the range of responses that will be encountered.
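
    The single-factor power-regression normalization described above can be reproduced with a log-log least-squares fit; the sketch below is illustrative only, and the numbers are hypothetical rather than SFNC field data.

```python
import numpy as np

# Hypothetical values: software-predicted responses vs. measured count rates (1/s).
predicted = np.array([15.0, 40.0, 120.0, 300.0, 700.0])
measured  = np.array([22.0, 52.0, 140.0, 330.0, 810.0])

# Power regression: measured ~ a * predicted**b, fitted in log-log space.
b, log_a = np.polyfit(np.log(predicted), np.log(measured), 1)
a = np.exp(log_a)

def normalized_prediction(p):
    """Predicted response normalized with the single power-law calibration."""
    return a * np.asarray(p, dtype=float) ** b

ratios = measured / normalized_prediction(predicted)
print(a, b, ratios)   # ratios near 1 indicate the single factor covers the whole range
```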

  7. Behavior driven testing in ALMA telescope calibration software

    NASA Astrophysics Data System (ADS)

    Gil, Juan P.; Garces, Mario; Broguiere, Dominique; Shen, Tzu-Chiang

    2016-07-01

    The ALMA software development cycle includes well-defined testing stages that involve developers, testers and scientists. We adapted Behavior Driven Development (BDD) to the testing activities applied to the Telescope Calibration (TELCAL) software. BDD is an agile technique that encourages communication between roles by defining test cases in natural language to specify features and scenarios, which allows participants to share a common language and provides a high-level set of automated tests. This work describes how we implemented and maintain BDD testing for TELCAL, the infrastructure needed to support it, and proposals to expand this technique to other subsystems.
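
    In Python, the same BDD pattern can be expressed with the behave library, which binds natural-language scenario steps to step functions. The scenario and steps below are generic placeholders, not the actual TELCAL test suite.

```python
# features/telcal.feature (placeholder scenario, written in Gherkin):
#   Scenario: A calibration result is produced for every antenna
#     Given a completed calibration scan
#     When the calibration is computed
#     Then a solution is reported for every antenna
#
# features/steps/telcal_steps.py:
from behave import given, when, then

@given("a completed calibration scan")
def step_scan(context):
    context.scan = {"antennas": ["DA41", "DA42", "DV03"]}      # stub data

@when("the calibration is computed")
def step_compute(context):
    # Stand-in for the real computation; records one solution per antenna.
    context.solutions = {ant: 0.0 for ant in context.scan["antennas"]}

@then("a solution is reported for every antenna")
def step_check(context):
    assert set(context.solutions) == set(context.scan["antennas"])
```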

  8. Methods for Calibration of Prout-Tompkins Kinetics Parameters Using EZM Iteration and GLO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wemhoff, A P; Burnham, A K; de Supinski, B

    2006-11-07

    This document contains information regarding the standard procedures used to calibrate chemical kinetics parameters for the extended Prout-Tompkins model to match experimental data. Two methods of calibration are described: EZM calibration and GLO calibration. EZM calibration matches kinetics parameters to three data points, while GLO calibration slightly adjusts kinetic parameters to match multiple points. Information is provided on the theoretical approach and application procedure for both of these calibration algorithms. It is recommended that, for the calibration process, the user begin with EZM calibration to provide a good estimate and then fine-tune the parameters using GLO. Two examples are provided to guide the reader through a general calibration process.

  9. A fully automated calibration method for an optical see-through head-mounted operating microscope with variable zoom and focus.

    PubMed

    Figl, Michael; Ede, Christopher; Hummel, Johann; Wanschitz, Felix; Ewers, Rolf; Bergmann, Helmar; Birkfellner, Wolfgang

    2005-11-01

    Ever since the development of the first applications in image-guided therapy (IGT), the use of head-mounted displays (HMDs) has been considered an important extension of existing IGT technologies. Several approaches to utilizing HMDs and modified medical devices for augmented reality (AR) visualization have been implemented, including video see-through systems, semitransparent mirrors, modified endoscopes, and modified operating microscopes. Common to all these devices is the fact that a precise calibration between the display and three-dimensional coordinates in the patient's frame of reference is compulsory. In optical see-through devices based on complex optical systems such as operating microscopes or operating binoculars, as in the case of the system presented in this paper, this procedure can become increasingly difficult, since precise camera calibration is required for every focus and zoom position. We present a method for fully automatic calibration of the operating binocular Varioscope M5 AR for the full range of zoom and focus settings available. Our method uses a special calibration pattern, a linear guide driven by a stepping motor, and special calibration software. The overlay error in the calibration plane was found to be 0.14-0.91 mm, which is less than 1% of the field of view. Using the motorized calibration rig presented in the paper, we were also able to assess the dynamic latency when viewing augmentation graphics on a mobile target; the spatial displacement due to latency was found to be at most in the range of 1.1-2.8 mm, and the disparity between the true object and its computed overlay corresponded to a latency of 0.1 s. We conclude that the automatic calibration method presented in this paper is sufficient in terms of accuracy and time requirements for standard uses of optical see-through systems in a clinical environment.

  10. Line fiducial material and thickness considerations for ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Ameri, Golafsoun; McLeod, A. J.; Baxter, John S. H.; Chen, Elvis C. S.; Peters, Terry M.

    2015-03-01

    Ultrasound calibration is a necessary procedure in many image-guided interventions, relating the position of tools and anatomical structures in the ultrasound image to a common coordinate system. This is a necessary component of augmented reality environments in image-guided interventions as it allows for a 3D visualization where other surgical tools outside the imaging plane can be found. Accuracy of ultrasound calibration fundamentally affects the total accuracy of this interventional guidance system. Many ultrasound calibration procedures have been proposed based on a variety of phantom materials and geometries. These differences lead to differences in representation of the phantom on the ultrasound image which subsequently affect the ability to accurately and automatically segment the phantom. For example, taut wires are commonly used as line fiducials in ultrasound calibration. However, at large depths or oblique angles, the fiducials appear blurred and smeared in ultrasound images making it hard to localize their cross-section with the ultrasound image plane. Intuitively, larger diameter phantoms with lower echogenicity are more accurately segmented in ultrasound images in comparison to highly reflective thin phantoms. In this work, an evaluation of a variety of calibration phantoms with different geometrical and material properties for the phantomless calibration procedure was performed. The phantoms used in this study include braided wire, plastic straws, and polyvinyl alcohol cryogel tubes with different diameters. Conventional B-mode and synthetic aperture images of the phantoms at different positions were obtained. The phantoms were automatically segmented from the ultrasound images using an ellipse fitting algorithm, the centroid of which is subsequently used as a fiducial for calibration. Calibration accuracy was evaluated for these procedures based on the leave-one-out target registration error. It was shown that larger diameter phantoms with lower echogenicity are more accurately segmented in comparison to highly reflective thin phantoms. This improvement in segmentation accuracy leads to a lower fiducial localization error, which ultimately results in low target registration error. This would have a profound effect on calibration procedures and the feasibility of different calibration procedures in the context of image-guided procedures.
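
    The automatic segmentation step, fitting an ellipse to the fiducial's cross-section and using its centroid as the calibration fiducial, can be sketched with OpenCV. The thresholding here is an assumption for illustration; the paper's actual segmentation pipeline is not reproduced.

```python
import cv2
import numpy as np

def fiducial_centroid(bmode_image):
    """Segment the brightest blob in an 8-bit B-mode image, fit an ellipse to it,
    and return the ellipse centre (x, y) to be used as the fiducial point."""
    blur = cv2.GaussianBlur(bmode_image, (5, 5), 0)
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:                     # cv2.fitEllipse needs at least 5 points
        return None
    (cx, cy), _, _ = cv2.fitEllipse(largest)
    return np.array([cx, cy])
```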

  11. New approach in the treatment of data from an acid-base potentiometric titration: I. Monocomponent systems of monofunctional acids and bases.

    PubMed

    Maslarska, Vania; Tencheva, Jasmina; Budevsky, Omortag

    2003-01-01

    Based on precise analysis of the acid-base equilibrium, a new approach in the treatment of experimental data from a potentiometric titration is proposed. A new general formula giving explicitly the relation V=f([H(+)]) is derived, valid for every acid-base titration, which includes mono- and polyfunctional protolytes and their mixtures. The present study is the first practical application of this formula for the simplest case, the analysis of one monofunctional protolyte. The collected mV data during the titration are converted into pH-values by means of an auto pH-calibration procedure, thus avoiding preliminary preparation of the measuring system. The mentioned pH-calibration method is applicable also in water-organic mixtures and allows the quantitative determination of sparingly soluble substances (particularly pharmaceuticals). The treatment of the data is performed by means of ready-to-use software products, which makes the proposed approach accessible for a wide range of applications.
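
    For the simplest case treated here, a weak monoprotic acid HA (analytical concentration C_a, initial volume V_a) titrated with a strong base of concentration C_b, the explicit relation V = f([H+]) follows from the charge and mass balances. A standard closed form, given for orientation and not necessarily in the authors' notation, is:

```latex
V_b \;=\; V_a\,
\frac{C_a\,\dfrac{K_a}{K_a + [\mathrm{H}^+]} \;-\; [\mathrm{H}^+] \;+\; \dfrac{K_w}{[\mathrm{H}^+]}}
     {C_b \;+\; [\mathrm{H}^+] \;-\; \dfrac{K_w}{[\mathrm{H}^+]}}
```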

  12. Calibration procedures to test the feasibility of heated fiber optics for measuring soil water content in field conditions.

    NASA Astrophysics Data System (ADS)

    Benítez, Javier; Sayde, Chadi; Rodríguez Sinobas, Leonor; Sánchez, Raúl; Gil, María; Selker, John

    2013-04-01

    This research provides insights into the calibration procedures carried out at the agricultural field of La Nava de Arévalo (Spain). The suitability of heat pulse theory applied to fiber optics for measuring soil water content under field conditions is analyzed, the major findings are highlighted, and the weaknesses to be addressed in future studies are identified. Within a corn field, in a 500 m2 plot of bare soil, 600 m of fiber optic cable (BruggSteal) were buried in a zig-zag deployment at two depths, 30 cm and 60 cm. Electrical heat pulses of 20 W/m were applied to the stainless steel shield of the fiber optic cable for 2 minutes. The resulting thermal response was captured by means of Distributed Fiber Optic Temperature sensing (DFOT), with a spatial and temporal resolution of up to 25 cm and 1 s, respectively. The soil thermal response was then correlated with the soil water content using undisturbed soil samples and soil moisture sensors (Decagon ECHO 5TM). The process was also modeled with the numerical software Hydrus 2D, and the soil thermal properties were measured in situ using a dual heat pulse probe (Decagon Kd2Pro). Although the work is ongoing, the first results show the suitability of heated fiber optics for measuring soil water content under real field conditions. They also highlight the usefulness of Hydrus 2D as a complementary tool for calibration purposes and for reducing uncertainty in addressing soil spatial variability.

  13. Optimal Experimental Design of Borehole Locations for Bayesian Inference of Past Ice Sheet Surface Temperatures

    NASA Astrophysics Data System (ADS)

    Davis, A. D.; Huan, X.; Heimbach, P.; Marzouk, Y.

    2017-12-01

    Borehole data are essential for calibrating ice sheet models. However, field expeditions for acquiring borehole data are often time-consuming, expensive, and dangerous. It is thus essential to plan the sampling locations that maximize the value of the data while minimizing costs and risks. We present an uncertainty quantification (UQ) workflow based on a rigorous probabilistic framework to achieve these objectives. First, we employ an optimal experimental design (OED) procedure to compute borehole locations that yield the highest expected information gain. We take into account practical considerations of location accessibility (e.g., proximity to research sites, terrain, and ice velocity may affect the feasibility of drilling) and robustness (e.g., real-time constraints such as weather may force researchers to drill at sub-optimal locations near those originally planned) by incorporating a penalty reflecting accessibility as well as sensitivity to deviations from the optimal locations. Next, we extract vertical temperature profiles from these boreholes and formulate a Bayesian inverse problem to reconstruct past surface temperatures. Using a model of temperature advection/diffusion, the top boundary condition (corresponding to surface temperatures) is calibrated via efficient Markov chain Monte Carlo (MCMC). The overall procedure can then be iterated to choose new optimal borehole locations for the next expeditions. Through this work, we demonstrate powerful UQ methods for designing experiments, calibrating models, making predictions, and assessing sensitivity, all performed under an uncertain environment. We develop a theoretical framework as well as practical software within an intuitive workflow, and illustrate their usefulness for combining data and models for environmental and climate research.

  14. Multi-sensor calibration of low-cost magnetic, angular rate and gravity systems.

    PubMed

    Lüken, Markus; Misgeld, Berno J E; Rüschen, Daniel; Leonhardt, Steffen

    2015-10-13

    We present a new calibration procedure for low-cost nine degrees-of-freedom (9DOF) magnetic, angular rate and gravity (MARG) sensor systems, which relies on a calibration cube, a reference table and a body sensor network (BSN). The 9DOF MARG sensor is part of our recently-developed "Integrated Posture and Activity Network by Medit Aachen" (IPANEMA) BSN. The advantage of this new approach is the use of the calibration cube, which allows for easy integration of two sensor nodes of the IPANEMA BSN. One 9DOF MARG sensor node is thereby used for calibration; the second 9DOF MARG sensor node is used for reference measurements. A novel algorithm uses these measurements to further improve the performance of the calibration procedure by processing arbitrarily-executed motions. In addition, the calibration routine can be used in an alignment procedure to minimize errors in the orientation between the 9DOF MARG sensor system and a motion capture inertial reference system. A two-stage experimental study is conducted to underline the performance of our calibration procedure. In both stages of the proposed calibration procedure, the BSN data, as well as reference tracking data are recorded. In the first stage, the mean values of all sensor outputs are determined as the absolute measurement offset to minimize integration errors in the derived movement model of the corresponding body segment. The second stage deals with the dynamic characteristics of the measurement system where the dynamic deviation of the sensor output compared to a reference system is corrected. In practical validation experiments, this procedure showed promising results with a maximum RMS error of 3.89°.
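
    The first calibration stage, taking the mean of each sensor channel during a static recording as its absolute measurement offset, is straightforward to express; a minimal sketch is given below (the dynamic second stage against the motion-capture reference is omitted).

```python
import numpy as np

def static_offsets(static_samples):
    """static_samples: N x 9 array of raw MARG readings (accel, gyro, magnetometer)
    recorded at rest; returns the per-channel mean used as the absolute offset."""
    return static_samples.mean(axis=0)

def remove_offsets(samples, offsets):
    """Subtract the static offsets before deriving the body-segment movement model."""
    return samples - offsets
```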

  15. Multi-Sensor Calibration of Low-Cost Magnetic, Angular Rate and Gravity Systems

    PubMed Central

    Lüken, Markus; Misgeld, Berno J.E.; Rüschen, Daniel; Leonhardt, Steffen

    2015-01-01

    We present a new calibration procedure for low-cost nine degrees-of-freedom (9DOF) magnetic, angular rate and gravity (MARG) sensor systems, which relies on a calibration cube, a reference table and a body sensor network (BSN). The 9DOF MARG sensor is part of our recently-developed “Integrated Posture and Activity Network by Medit Aachen” (IPANEMA) BSN. The advantage of this new approach is the use of the calibration cube, which allows for easy integration of two sensor nodes of the IPANEMA BSN. One 9DOF MARG sensor node is thereby used for calibration; the second 9DOF MARG sensor node is used for reference measurements. A novel algorithm uses these measurements to further improve the performance of the calibration procedure by processing arbitrarily-executed motions. In addition, the calibration routine can be used in an alignment procedure to minimize errors in the orientation between the 9DOF MARG sensor system and a motion capture inertial reference system. A two-stage experimental study is conducted to underline the performance of our calibration procedure. In both stages of the proposed calibration procedure, the BSN data, as well as reference tracking data are recorded. In the first stage, the mean values of all sensor outputs are determined as the absolute measurement offset to minimize integration errors in the derived movement model of the corresponding body segment. The second stage deals with the dynamic characteristics of the measurement system where the dynamic deviation of the sensor output compared to a reference system is corrected. In practical validation experiments, this procedure showed promising results with a maximum RMS error of 3.89°. PMID:26473873

  16. Automated Camera Array Fine Calibration

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel; Padgett, Curtis; Ansar, Adnan; Cheng, Yang

    2008-01-01

    Using aerial imagery, the JPL FineCalibration (JPL FineCal) software automatically tunes a set of existing CAHVOR camera models for an array of cameras. The software finds matching features in the overlap region between images from adjacent cameras, and uses these features to refine the camera models. It is not necessary to take special imagery of a known target and no surveying is required. JPL FineCal was developed for use with an aerial, persistent surveillance platform.

  17. Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?

    NASA Technical Reports Server (NTRS)

    Lum, Karen; Hihn, Jairus; Menzies, Tim

    2006-01-01

    While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models, both because of the large variance problem inherent in cost data and because far more effort multipliers are included than the data support. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, a leading cause of cost model brittleness or instability.

  18. Landsat-5 TM reflective-band absolute radiometric calibration

    USGS Publications Warehouse

    Chander, G.; Helder, D.L.; Markham, B.L.; Dewald, J.D.; Kaita, E.; Thome, K.J.; Micijevic, E.; Ruggles, T.A.

    2004-01-01

    The Landsat-5 Thematic Mapper (TM) sensor provides the longest running continuous dataset of moderate spatial resolution remote sensing imagery, dating back to its launch in March 1984. Historically, the radiometric calibration procedure for this imagery used the instrument's response to the Internal Calibrator (IC) on a scene-by-scene basis to determine the gain and offset of each detector. Due to observed degradations in the IC, a new procedure was implemented for U.S.-processed data in May 2003. This new calibration procedure is based on a lifetime radiometric calibration model for the instrument's reflective bands (1-5 and 7) and is derived, in part, from the IC response without the related degradation effects and is tied to the cross calibration with the Landsat-7 Enhanced Thematic Mapper Plus. Reflective-band absolute radiometric accuracy of the instrument tends to be on the order of 7% to 10%, based on a variety of calibration methods.
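
    For context, the calibration parameters produced by such a model feed into the standard reflective-band DN-to-radiance rescaling; a generic sketch, with the band- and epoch-specific Lmin/Lmax values left as inputs rather than reproduced here:

    ```python
    def dn_to_radiance(dn, lmin, lmax, qcal_min=1.0, qcal_max=255.0):
        """Spectral radiance from calibrated digital number using per-band
        rescaling limits; lmin/lmax depend on band and processing epoch."""
        return (lmax - lmin) / (qcal_max - qcal_min) * (dn - qcal_min) + lmin
    ```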

  19. Technical note: The US Dobson station network data record prior to 2015, re-evaluation of NDACC and WOUDC archived records with WinDobson processing software

    NASA Astrophysics Data System (ADS)

    Evans, Robert D.; Petropavlovskikh, Irina; McClure-Begley, Audra; McConville, Glen; Quincy, Dorothy; Miyagawa, Koji

    2017-10-01

    The United States government has operated Dobson ozone spectrophotometers at various sites, starting during the International Geophysical Year (1 July 1957 to 31 December 1958). A network of stations for long-term monitoring of the total column content (thickness of the ozone layer) of the atmosphere was established in the early 1960s and eventually grew to 16 stations, 14 of which are still operational and submit data to the United States of America's National Oceanic and Atmospheric Administration (NOAA). Seven of these sites are also part of the Network for the Detection of Atmospheric Composition Change (NDACC), an organization that maintains its own data archive. Due to recent changes in data processing software the entire dataset was re-evaluated for possible changes. To evaluate and minimize potential changes caused by the new processing software, the reprocessed data record was compared to the original data record archived in the World Ozone and UV Data Center (WOUDC) in Toronto, Canada. The history of the observations at the individual stations, the instruments used for the NOAA network monitoring at the station, the method for reducing zenith-sky observations to total ozone, and calibration procedures were re-evaluated using data quality control tools built into the new software. At the completion of the evaluation, the new datasets are to be published as an update to the WOUDC and NDACC archives, and the entire dataset is to be made available to the scientific community. The procedure for reprocessing Dobson data and the results of the reanalysis on the archived record are presented in this paper. A summary of historical changes to 14 station records is also provided.

  20. Energy calibration issues in nuclear resonant vibrational spectroscopy: observing small spectral shifts and making fast calibrations.

    PubMed

    Wang, Hongxin; Yoda, Yoshitaka; Dong, Weibing; Huang, Songping D

    2013-09-01

The conventional energy calibration for nuclear resonant vibrational spectroscopy (NRVS) is usually a lengthy process. In addition, taking NRVS samples out of the cryostat increases the chance of sample damage, which makes it impossible to carry out an energy calibration during one NRVS measurement. In this study, by manipulating the 14.4 keV beam through the main measurement chamber without removing the NRVS sample, two alternative calibration procedures have been proposed and established: (i) an in situ calibration procedure, which measures the main NRVS sample at stage A and the calibration sample at stage B simultaneously, and calibrates the energies for observing extremely small spectral shifts; for example, the 0.3 meV energy shift between the 100%-(57)Fe-enriched [Fe4S4Cl4](=) and the 10%-(57)Fe and 90%-(54)Fe labeled [Fe4S4Cl4](=) has been well resolved; (ii) a quick-switching energy calibration procedure, which reduces each calibration time from 3-4 h to about 30 min. Although the quick-switching calibration is not in situ, it is suitable for normal NRVS measurements.

  1. The DFMS sensor of ROSINA onboard Rosetta: A computer-assisted approach to resolve mass calibration, flux calibration, and fragmentation issues

    NASA Astrophysics Data System (ADS)

    Dhooghe, Frederik; De Keyser, Johan; Altwegg, Kathrin; Calmonte, Ursina; Fuselier, Stephen; Hässig, Myrtha; Berthelier, Jean-Jacques; Mall, Urs; Gombosi, Tamas; Fiethe, Björn

    2014-05-01

    Rosetta will rendezvous with comet 67P/Churyumov-Gerasimenko in May 2014. The Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) instrument comprises three sensors: the pressure sensor (COPS) and two mass spectrometers (RTOF and DFMS). The double focusing mass spectrometer DFMS is optimized for mass resolution and consists of an ion source, a mass analyser and a detector package operated in analogue mode. The magnetic sector of the analyser provides the mass dispersion needed for use with the position-sensitive microchannel plate (MCP) detector. Ions that hit the MCP release electrons that are recorded digitally using a linear electron detector array with 512 pixels. Raw data for a given commanded mass are obtained as ADC counts as a function of pixel number. We have developed a computer-assisted approach to address the problem of calibrating such raw data. Mass calibration: Ion identification is based on their mass-over-charge (m/Z) ratio and requires an accurate correlation of pixel number and m/Z. The m/Z scale depends on the commanded mass and the magnetic field and can be described by an offset of the pixel associated with the commanded mass from the centre of the detector array and a scaling factor. Mass calibration is aided by the built-in gas calibration unit (GCU), which allows one to inject a known gas mixture into the instrument. In a first, fully automatic step of the mass calibration procedure, the calibration uses all GCU spectra and extracts information about the mass peak closest to the centre pixel, since those peaks can be identified unambiguously. This preliminary mass-calibration relation can then be applied to all spectra. Human-assisted identification of additional mass peaks further improves the mass calibration. Ion flux calibration: ADC counts per pixel are converted to ion counts per second using the overall gain, the individual pixel gain, and the total data accumulation time. DFMS can perform an internal scan to determine the pixel gain and related detector aging. The software automatically corrects for these effects to calibrate the fluxes. The COPS sensor can be used for an a posteriori calibration of the fluxes. Neutral gas number densities: Neutrals are ionized in the ion source before they are transferred to the mass analyser, but during this process fragmentation may occur. Our software allows one to identify which neutrals entered the instrument, given the ion fragments that are detected. First, multiple spectra with a limited mass range are combined to provide an overview of as many ion fragments as possible. We then exploit a fragmentation database to assist in figuring out the relation between entering species and recorded fragments. Finally, using experimentally determined sensitivities, gas number densities are obtained. The instrument characterisation (experimental determination of sensitivities, fragmentation patterns for the most common neutral species, etc.) has been conducted by the consortium using an instrument copy in the University of Bern test facilities during the cruise phase of the mission.
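
    A highly simplified sketch of the two conversions described above: a linearized pixel-to-m/Z mapping governed by an offset from the detector centre and a scaling factor, and the ADC-to-ion-rate conversion. The constants and function names are placeholders, not actual DFMS calibration values:

    ```python
    import numpy as np

    def pixel_to_mz(pixels, m_commanded, centre_pixel=256, offset=0.0, scale=3.0e-4):
        """Linearized sketch of the mass scale: m/Z around the commanded mass,
        parameterized by the peak offset from the detector centre and a scale factor."""
        return m_commanded * (1.0 + scale * (pixels - centre_pixel - offset))

    def ion_rate(adc_counts, overall_gain, pixel_gain, accumulation_time):
        """ADC counts per pixel -> ion counts per second, using the overall gain,
        the individual pixel gain and the total accumulation time."""
        return adc_counts / (overall_gain * pixel_gain * accumulation_time)

    # Example: m/Z values across the 512-pixel array for a commanded mass of 28
    mz = pixel_to_mz(np.arange(512), m_commanded=28.0)
    ```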

  2. Analysis of Photogrammetry Data from ISIM Mockup

    NASA Technical Reports Server (NTRS)

    Nowak, Maria; Hill, Mike

    2007-01-01

During ground testing of the Integrated Science Instrument Module (ISIM) for the James Webb Space Telescope (JWST), the ISIM Optics group plans to use a Photogrammetry Measurement System for cryogenic calibration of specific target points on the ISIM composite structure, the Science Instrument optical benches, and other GSE equipment. This testing will occur in the Space Environmental Systems (SES) chamber at Goddard Space Flight Center. Close-range photogrammetry is a 3-dimensional metrology technique that uses triangulation to locate custom targets in 3 coordinates via a collection of digital photographs taken from various locations and orientations. These photos are connected using coded targets, special targets that are recognized by the software and can thus correlate the images to provide a 3-dimensional map of the targets, and are scaled via well-calibrated scale bars. Photogrammetry solves for the camera location and coordinates of the targets simultaneously through the bundling procedure contained in the V-STARS software, proprietary software owned by Geodetic Systems Inc. The primary objectives of the metrology performed on the ISIM mock-up were (1) to quantify the accuracy of the INCA3 photogrammetry camera on a representative full-scale version of the ISIM structure at ambient temperature by comparing the measurements obtained with this camera to measurements using the Leica laser tracker system and (2) to empirically determine the smallest increment of target-position movement that can be resolved by the PG camera in the test setup, i.e., its precision or resolution. In addition, the geometrical details of the test setup defined during the mockup testing, such as target locations and camera positions, will contribute to the final design of the photogrammetry system to be used on the ISIM Flight Structure.

  3. Hazardous Environment Robotics

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Jet Propulsion Laboratory (JPL) developed video overlay calibration and demonstration techniques for ground-based telerobotics. Through a technology sharing agreement with JPL, Deneb Robotics added this as an option to its robotics software, TELEGRIP. The software is used for remotely operating robots in nuclear and hazardous environments in industries including automotive and medical. The option allows the operator to utilize video to calibrate 3-D computer models with the actual environment, and thus plan and optimize robot trajectories before the program is automatically generated.

  4. Utility Bill Calibration Test Cases | Buildings | NREL

    Science.gov Websites

This web page illustrates the utility bill calibration test cases in BESTEST-EX and provides a diagram giving an overview of the BESTEST-EX utility bill calibration case process.

  5. New Method of Calibrating IRT Models.

    ERIC Educational Resources Information Center

    Jiang, Hai; Tang, K. Linda

This discussion of new methods for calibrating item response theory (IRT) models looks into new optimization procedures, such as the Genetic Algorithm (GA), to improve on the use of the Newton-Raphson procedure. The advantage of using a global optimization procedure like GA is that this kind of procedure is not easily affected by local optima and…

  6. Step wise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin

    USGS Publications Warehouse

    Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.

    2006-01-01

The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a stepwise, multiple-objective, automated procedure for hydrologic model calibration. This procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. The procedure uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process ensures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph, are simulated consistently with measured values.

  7. Calibration of 3D ultrasound to an electromagnetic tracking system

    NASA Astrophysics Data System (ADS)

    Lang, Andrew; Parthasarathy, Vijay; Jain, Ameet

    2011-03-01

Electromagnetic (EM) tracking is an important guidance tool that can be used to aid procedures requiring accurate localization, such as needle injections or catheter guidance. Using EM tracking, the information from different modalities can be easily combined using pre-procedural calibration information. These calibrations are performed individually, per modality, allowing different imaging systems to be mixed and matched according to the procedure at hand. In this work, a framework for the calibration of a 3D transesophageal echocardiography probe to EM tracking is developed. The complete calibration framework includes three required steps: data acquisition, needle segmentation, and calibration. Ultrasound (US) images of an EM-tracked needle must be acquired, with the position of the needle in each volume subsequently extracted by segmentation. The calibration transformation is determined through a registration between the segmented points and the recorded EM needle positions. Additionally, the speed of sound is compensated for, since calibration is performed in water, which has a different speed of sound than is assumed by the US machine. A statistical validation framework has also been developed to provide further information related to the accuracy and consistency of the calibration. Further validation of the calibration showed an accuracy of 1.39 mm.
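
    The registration step between the segmented needle points and the recorded EM positions can be sketched as a standard least-squares rigid fit (Kabsch/Procrustes). This is a generic illustration under stated assumptions, not the authors' implementation, and the speed-of-sound compensation is reduced to a simple scale factor:

    ```python
    import numpy as np

    def rigid_register(src, dst, scale=1.0):
        """Least-squares rigid transform (R, t) mapping scaled src points onto dst.
        `scale` can hold a speed-of-sound correction (c_water / c_assumed)."""
        src = scale * np.asarray(src, float)
        dst = np.asarray(dst, float)
        sc, dc = src.mean(axis=0), dst.mean(axis=0)
        H = (src - sc).T @ (dst - dc)                 # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                            # proper rotation (det = +1)
        t = dc - R @ sc
        return R, t
    ```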

  8. Installation and calibration of Kayzero-assisted NAA in three Central European countries via a Copernicus project.

    PubMed

    De Corte, F; van Sluijs, R; Simonits, A; Kucera, J; Smodis, B; Byrne, A R; De Wispelaere, A; Bossus, D; Frána, J; Horák, Z; Jaćimović, R

    2001-09-01

An account is given of the installation and calibration of k0-based NAA--assisted by the DSM Kayzero/Solcoi software package--at the KFKI-AEKI, Budapest, the NPI, Rez and the IJS, Ljubljana. Not only are the calibration of the Ge detectors and the irradiation facilities discussed, but also other important topics such as gamma-spectrometric hardware and software, QC/QA of the IRMM-530 Al-Au flux monitor and the upgrade of the Kayzero/Solcoi code. The work was performed in the framework of a European Copernicus JRP, coordinated by the Laboratory of Analytical Chemistry, Gent, with DSM Research, Geleen, as the industrial partner.

  9. An automatic calibration procedure for remote eye-gaze tracking systems.

    PubMed

    Model, Dmitri; Guestrin, Elias D; Eizenman, Moshe

    2009-01-01

    Remote gaze estimation systems use calibration procedures to estimate subject-specific parameters that are needed for the calculation of the point-of-gaze. In these procedures, subjects are required to fixate on a specific point or points at specific time instances. Advanced remote gaze estimation systems can estimate the optical axis of the eye without any personal calibration procedure, but use a single calibration point to estimate the angle between the optical axis and the visual axis (line-of-sight). This paper presents a novel automatic calibration procedure that does not require active user participation. To estimate the angles between the optical and visual axes of each eye, this procedure minimizes the distance between the intersections of the visual axes of the left and right eyes with the surface of a display while subjects look naturally at the display (e.g., watching a video clip). Simulation results demonstrate that the performance of the algorithm improves as the range of viewing angles increases. For a subject sitting 75 cm in front of an 80 cm x 60 cm display (40" TV) the standard deviation of the error in the estimation of the angles between the optical and visual axes is 0.5 degrees.

  10. Autocalibration of a one-dimensional hydrodynamic-ecological model (DYRESM 4.0-CAEDYM 3.1) using a Monte Carlo approach: simulations of hypoxic events in a polymictic lake

    NASA Astrophysics Data System (ADS)

    Luo, Liancong; Hamilton, David; Lan, Jia; McBride, Chris; Trolle, Dennis

    2018-03-01

Automated calibration of complex deterministic water quality models with a large number of biogeochemical parameters can reduce time-consuming iterative simulations involving empirical judgements of model fit. We undertook autocalibration of the one-dimensional hydrodynamic-ecological lake model DYRESM-CAEDYM, using a Monte Carlo sampling (MCS) method, in order to test the applicability of this procedure for shallow, polymictic Lake Rotorua (New Zealand). The calibration procedure involved independently minimizing the root-mean-square error (RMSE) and maximizing the Pearson correlation coefficient (r) and Nash-Sutcliffe efficiency coefficient (Nr) for comparisons of model state variables against measured data. An assigned number of parameter permutations was used for 10 000 simulation iterations. The "optimal" temperature calibration produced an RMSE of 0.54 °C, an Nr value of 0.99, and an r value of 0.98 through the whole water column, based on comparisons with 540 observed water temperatures collected between 13 July 2007 and 13 January 2009. The modeled bottom dissolved oxygen concentration (20.5 m below surface) was compared with 467 available observations. The calculated RMSE of the simulations compared with the measurements was 1.78 mg L-1, the Nr value was 0.75, and the r value was 0.87. The autocalibrated model was further tested against an independent data set by simulating bottom-water hypoxia events from 15 January 2009 to 8 June 2011 (875 days). This verification produced an accurate simulation of five hypoxic events corresponding to DO < 2 mg L-1 during the summers of 2009-2011. The RMSE was 2.07 mg L-1, the Nr value 0.62, and the r value 0.81, based on the available data set of 738 days. The autocalibration software for DYRESM-CAEDYM developed here is substantially less time-consuming and more efficient in parameter optimization than the traditional manual calibration that has been standard practice for similar complex water quality models.
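
    A stripped-down sketch of the Monte Carlo sampling loop and the three fit criteria named above (RMSE, r, and the Nash-Sutcliffe coefficient); `run_model` stands in for a user-supplied DYRESM-CAEDYM wrapper and is not part of the actual software:

    ```python
    import numpy as np

    def rmse(sim, obs):
        return float(np.sqrt(np.mean((sim - obs) ** 2)))

    def nash_sutcliffe(sim, obs):
        return float(1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2))

    def pearson_r(sim, obs):
        return float(np.corrcoef(sim, obs)[0, 1])

    def monte_carlo_calibrate(run_model, bounds, obs, n_iter=10000, seed=0):
        """Draw parameter sets uniformly within bounds, run the model,
        and keep the set with the lowest RMSE (r and Nr can be tracked likewise)."""
        rng = np.random.default_rng(seed)
        best_params, best_score = None, np.inf
        for _ in range(n_iter):
            params = {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
            sim = run_model(params)          # hypothetical model wrapper
            score = rmse(sim, obs)
            if score < best_score:
                best_params, best_score = params, score
        return best_params, best_score
    ```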

  11. SPRT Calibration Uncertainties and Internal Quality Control at a Commercial SPRT Calibration Facility

    NASA Astrophysics Data System (ADS)

    Wiandt, T. J.

    2008-06-01

    The Hart Scientific Division of the Fluke Corporation operates two accredited standard platinum resistance thermometer (SPRT) calibration facilities, one at the Hart Scientific factory in Utah, USA, and the other at a service facility in Norwich, UK. The US facility is accredited through National Voluntary Laboratory Accreditation Program (NVLAP), and the UK facility is accredited through UKAS. Both provide SPRT calibrations using similar equipment and procedures, and at similar levels of uncertainty. These uncertainties are among the lowest available commercially. To achieve and maintain low uncertainties, it is required that the calibration procedures be thorough and optimized. However, to minimize customer downtime, it is also important that the instruments be calibrated in a timely manner and returned to the customer. Consequently, subjecting the instrument to repeated calibrations or extensive repeated measurements is not a viable approach. Additionally, these laboratories provide SPRT calibration services involving a wide variety of SPRT designs. These designs behave differently, yet predictably, when subjected to calibration measurements. To this end, an evaluation strategy involving both statistical process control and internal consistency measures is utilized to provide confidence in both the instrument calibration and the calibration process. This article describes the calibration facilities, procedure, uncertainty analysis, and internal quality assurance measures employed in the calibration of SPRTs. Data will be reviewed and generalities will be presented. Finally, challenges and considerations for future improvements will be discussed.

  12. Java Tool Framework for Automation of Hardware Commissioning and Maintenance Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, J C; Fisher, J M; Gordon, J B

    2007-10-02

The National Ignition Facility (NIF) is a 192-beam laser system designed to study high energy density physics. Each beam line contains a variety of line replaceable units (LRUs) that contain optics, stepping motors, sensors and other devices to control and diagnose the laser. During commissioning and subsequent maintenance of the laser, LRUs undergo a qualification process using the Integrated Computer Control System (ICCS) to verify and calibrate the equipment. The commissioning processes are both repetitive and tedious when we use remote manual computer controls, making them ideal candidates for software automation. Maintenance and Commissioning Tool (MCT) software was developed to improve the efficiency of the qualification process. The tools are implemented in Java, leveraging ICCS services and CORBA to communicate with the control devices. The framework provides easy-to-use mechanisms for handling configuration data, task execution, task progress reporting, and generation of commissioning test reports. The tool framework design and application examples will be discussed.

  13. PRISM: Processing routines in IDL for spectroscopic measurements (installation manual and user's guide, version 1.0)

    USGS Publications Warehouse

    Kokaly, Raymond F.

    2011-01-01

    This report describes procedures for installing and using the U.S. Geological Survey Processing Routines in IDL for Spectroscopic Measurements (PRISM) software. PRISM provides a framework to conduct spectroscopic analysis of measurements made using laboratory, field, airborne, and space-based spectrometers. Using PRISM functions, the user can compare the spectra of materials of unknown composition with reference spectra of known materials. This spectroscopic analysis allows the composition of the material to be identified and characterized. Among its other functions, PRISM contains routines for the storage of spectra in database files, import/export of ENVI spectral libraries, importation of field spectra, correction of spectra to absolute reflectance, arithmetic operations on spectra, interactive continuum removal and comparison of spectral features, correction of imaging spectrometer data to ground-calibrated reflectance, and identification and mapping of materials using spectral feature-based analysis of reflectance data. This report provides step-by-step instructions for installing the PRISM software and running its functions.

  14. NRL Hyperspectral Imagery Trafficability Tool (HITT): Software andSpectral-Geotechnical Look-up Tables for Estimation and Mapping of Soil Bearing Strength from Hyperspectral Imagery

    DTIC Science & Technology

    2012-09-28

Spectral-geotechnical libraries and models were developed during remote sensing and calibration/validation campaigns conducted by NRL and collaborating institutions in four... 2010; Bachmann, Fry, et al., 2012a). The NRL HITT tool is a model for how we develop and validate software, and the future development of tools by...

  15. Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight

    NASA Technical Reports Server (NTRS)

    Suorsa, Raymond; Sridhar, Banavar

    1991-01-01

A validation facility in use at the NASA Ames Research Center is described, which is aimed at testing vision-based obstacle detection and range estimation algorithms suitable for low-level helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera 6 degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms along with truth data using powerful window-based visualization software.

  16. Development of an in situ calibration technique for combustible gas detectors

    NASA Technical Reports Server (NTRS)

    Shumar, J. W.; Wynveen, R. A.; Lance, N., Jr.; Lantz, J. B.

    1977-01-01

    This paper describes the development of an in situ calibration procedure for combustible gas detectors (CGD). The CGD will be a necessary device for future space vehicles as many subsystems in the Environmental Control/Life Support System utilize or produce hydrogen (H2) gas. Existing calibration techniques are time-consuming and require support equipment such as an environmental chamber and calibration gas supply. The in situ calibration procedure involves utilization of a water vapor electrolysis cell for the automatic in situ generation of a H2/air calibration mixture within the flame arrestor of the CGD. The development effort concluded with the successful demonstration of in situ span calibrations of a CGD.

  17. Standardization of gamma-glutamyltransferase assays by intermethod calibration. Effect on determining common reference limits.

    PubMed

    Steinmetz, Josiane; Schiele, Françoise; Gueguen, René; Férard, Georges; Henny, Joseph

    2007-01-01

The improvement of the consistency of gamma-glutamyltransferase (GGT) activity results among different assays after calibration with a common material was estimated. We evaluated whether this harmonization could lead to reference limits common to different routine methods. Seven laboratories measured GGT activity using their own routine analytical system both according to the manufacturer's recommendation and after calibration with a multi-enzyme calibrator [value assigned by the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) reference procedure]. All samples were re-measured using the IFCC reference procedure. Two groups of subjects were selected in each laboratory: a group of healthy men aged 18-25 years without long-term medication and with alcohol consumption less than 44 g/day and a group of subjects with elevated GGT activity. The day-to-day coefficients of variation were less than 2.9% in each laboratory. The means obtained in the group of healthy subjects without common calibration (range of the means 16-23 U/L) were significantly different from those obtained by the IFCC procedure in five laboratories. After calibration, the means remained significantly different from the IFCC procedure results in only one laboratory. For three calibrated methods, the slope values of linear regression vs. the IFCC procedure were not different from the value 1. The results obtained with these three methods for healthy subjects (n=117) were gathered and reference limits were calculated. These were 11-49 U/L (2.5th-97.5th percentiles). The calibration also improved the consistency of elevated results when compared to the IFCC procedure. The common calibration improved the level of consistency between different routine methods. It made it possible to define common reference limits, which are quite similar to those proposed by the IFCC. This approach should lead to a real benefit in terms of prevention, screening, diagnosis, therapeutic monitoring and epidemiological studies.

  18. A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer.

    PubMed

    Pagoulatos, N; Haynor, D R; Kim, Y

    2001-09-01

    We have developed a fast calibration method for computing the position and orientation of 2-D ultrasound (US) images in 3-D space where a position sensor is mounted on the US probe. This calibration is required in the fields of 3-D ultrasound and registration of ultrasound with other imaging modalities. Most of the existing calibration methods require a complex and tedious experimental procedure. Our method is simple and it is based on a custom-built phantom. Thirty N-fiducials (markers in the shape of the letter "N") embedded in the phantom provide the basis for our calibration procedure. We calibrated a 3.5-MHz sector phased-array probe with a magnetic position sensor, and we studied the accuracy and precision of our method. A typical calibration procedure requires approximately 2 min. We conclude that we can achieve accurate and precise calibration using a single US image, provided that a large number (approximately ten) of N-fiducials are captured within the US image, enabling a representative sampling of the imaging plane.

  19. Polymers for Traveling Wave Ion Mobility Spectrometry Calibration

    NASA Astrophysics Data System (ADS)

    Duez, Quentin; Chirot, Fabien; Liénard, Romain; Josse, Thomas; Choi, ChangMin; Coulembier, Olivier; Dugourd, Philippe; Cornil, Jérôme; Gerbaux, Pascal; De Winter, Julien

    2017-07-01

One of the main issues when using traveling wave ion mobility spectrometry (TWIMS) for the determination of collisional cross-section (CCS) concerns the need for a robust calibration procedure built from reference ions of known CCS. Here, we implement synthetic polymer ions as CCS calibrants in positive ion mode. Owing to their intrinsic polydispersity, polymers offer in a single sample the opportunity to generate, upon electrospray ionization, numerous ions covering a broad mass range and a large CCS window across several charge states. In addition, the key advantage of polymer ions as CCS calibrants lies in the robustness of their gas-phase structure with respect to the instrumental conditions, making them less prone to collision-induced unfolding (CIU) than protein ions. In this paper, we present a CCS calibration procedure using sodium-cationized polylactide and polyethylene glycol, PLA and PEG, as calibrants with reference CCS determined on a home-made drift tube. Our calibration procedure is further validated by using the polymer calibration to determine the CCS of numerous different ions for which CCS values are reported in the literature.
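
    TWIMS CCS calibration is commonly done by fitting a power law between charge- and reduced-mass-corrected reference CCS values and drift times; the sketch below shows that generic fit (drift-time dead-time corrections omitted) and should not be read as the exact procedure of this study:

    ```python
    import numpy as np

    def fit_tw_ccs_calibration(drift_times, ref_ccs, charges, ion_masses, gas_mass=28.0):
        """Fit ln(CCS') = ln(A) + B*ln(t_D), where CCS' = CCS / (z * sqrt(1/mu))
        and mu is the ion-gas reduced mass. Returns (A, B)."""
        mu = ion_masses * gas_mass / (ion_masses + gas_mass)
        ccs_prime = ref_ccs / (charges * np.sqrt(1.0 / mu))
        slope, intercept = np.polyfit(np.log(drift_times), np.log(ccs_prime), 1)
        return np.exp(intercept), slope

    def predict_ccs(a, b, drift_time, charge, ion_mass, gas_mass=28.0):
        """Apply the calibration to an analyte ion."""
        mu = ion_mass * gas_mass / (ion_mass + gas_mass)
        return a * drift_time ** b * charge * np.sqrt(1.0 / mu)
    ```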

  20. Parallel computing for automated model calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.

    2002-07-29

Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, the magnitude and timing of the stream flow peak). An automated calibration process that allows real-time updating of data and models, letting scientists focus their effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null-cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data and output only a small amount of statistical information for each calibration run. A typical autocalibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two Auto Calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed, cross-platform computing environment. They allow incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null-cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
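
    A toy illustration of distributing such a parameter sweep across processor cores with Python's multiprocessing; the objective function here is a placeholder quadratic, not a natural resources model:

    ```python
    import numpy as np
    from multiprocessing import Pool

    def evaluate(params):
        """Placeholder objective: a real run would launch the resource model
        with these parameters and return a summary fit statistic."""
        a, b = params
        return (a - 1.5) ** 2 + (b + 0.3) ** 2

    if __name__ == "__main__":
        rng = np.random.default_rng(42)
        candidates = [tuple(rng.uniform(-5, 5, 2)) for _ in range(10000)]
        with Pool() as pool:                      # distribute runs across cores
            scores = pool.map(evaluate, candidates)
        best = candidates[int(np.argmin(scores))]
        print("best parameters:", best)
    ```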

  1. Students' Calibration of Knowledge and Learning Processes: Implications for Designing Powerful Software Learning Environments

    ERIC Educational Resources Information Center

    Winne, Philip H.

    2004-01-01

    Calibration concerns (a) the deviation of a person's judgment from fact, introducing notions of bias and accuracy; and metric issues regarding (b) the validity of cues' contributions to judgments and (c) the grain size of cues. Miscalibration hinders self-regulated learning (SRL). Considering calibration in the context of Winne and Hadwin's…

  2. Self calibrating monocular camera measurement of traffic parameters.

    DOT National Transportation Integrated Search

    2009-12-01

This proposed project will extend the work of previous projects that have developed algorithms and software to measure traffic speed under adverse conditions using un-calibrated cameras. The present implementation uses the WSDOT CCTV cameras moun...

  3. A methodology to evaluate occupational internal exposure to fluorine-18.

    PubMed

    Oliveira, C M; Dantas, A L A; Dantas, B M

    2009-11-15

The objective of this work is to develop procedures for internal monitoring of (18)F to be applied in cases of possible incorporation of fluoride and (18)FDG, using in vivo and in vitro methods of measurement. The NaI(Tl) 8" x 4" scintillation detector installed at the IRD-Whole Body Counter was calibrated for measurements with a whole-body anthropomorphic phantom, simulating homogeneous distribution of (18)F in the body. The NaI(Tl) 3" x 3" scintillation detector installed at the IRD-Whole Body Counter was calibrated for in vivo measurements with a brain phantom inserted in an artificial skull, simulating (18)FDG incorporation. The HPGe detection system installed at the IRD-Bioassay Laboratory was calibrated for in vitro measurements of urine samples with 1-liter plastic bottles containing a standard liquid source. A methodology for bioassay data interpretation, based on standard ICRP models edited with the software AIDE, version 6, was established. It is concluded that in vivo measurements have sufficient sensitivity for monitoring (18)F in the forms of fluoride and (18)FDG. The use of both in vitro and in vivo bioassay data can provide useful information for the interpretation of bioassay data in cases of accidental incorporation, in order to identify the chemical form of (18)F incorporated.

  4. NASA Glenn Icing Research Tunnel: 2012 Cloud Calibration Procedure and Results

    NASA Technical Reports Server (NTRS)

    VanZante, Judith Foss; Ide, Robert F.; Steen, Laura E.

    2012-01-01

In 2011, NASA Glenn's Icing Research Tunnel underwent a major modification to its refrigeration plant and heat exchanger. This paper presents the results of the subsequent full cloud calibration; details of the calibration procedure and results are presented herein. The steps include developing a nozzle transfer map, establishing a uniform cloud, conducting a drop-sizing calibration and, finally, a liquid water content calibration. The goal of the calibration is to develop a uniform cloud and to build a transfer map from the inputs of airspeed, spray bar atomizing air pressure and water pressure to the outputs of median volumetric droplet diameter and liquid water content.
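
    A hedged sketch of what fitting such a transfer map might look like once calibration data exist: a least-squares polynomial surface from the spray-bar inputs to liquid water content. The terms and names are purely illustrative, not the IRT's actual map:

    ```python
    import numpy as np

    def fit_lwc_map(airspeed, p_air, p_water, lwc):
        """Least-squares fit of an illustrative transfer map:
        LWC ~ c0 + c1*V + c2*P_air + c3*P_water + c4*(P_water/P_air)."""
        A = np.column_stack([np.ones_like(lwc), airspeed, p_air, p_water,
                             p_water / p_air])
        coef, *_ = np.linalg.lstsq(A, lwc, rcond=None)
        return coef
    ```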

  5. AOT Retrieval Procedure for Distributed Measurements With Low-Cost Sun Photometers

    NASA Astrophysics Data System (ADS)

    Toledo, F.; Garrido, C.; Díaz, M.; Rondanelli, R.; Jorquera, S.; Valdivieso, P.

    2018-01-01

    We propose a new application of inexpensive light-emitting diode (LED)-based Sun photometers, consisting of measuring the aerosol optical thickness (AOT) with high resolution within metropolitan scales. Previously, these instruments have been used at continental scales by the GLOBE program, but this extension is already covered by more expensive and higher-precision instruments of the AERONET global network. For this we built an open source two-channeled LED-based Sun photometer based on previous developments, with improvements in the hardware, software, and modifications on the calibration procedure. Among these we highlight the use of MODTRAN to characterize the effect introduced by using LED sensors in the AOT retrieval, an open design available for the scientific community and a calibration procedure that takes advantage of a CIMEL Sun photometer located within the city, enables the intercomparison of several LED Sun photometers with a common reference. We estimated the root-mean-square error in the AOT retrieved by the prototypes as 0.006 at the 564 nm and 0.009 at the 408 nm. This error is way under the magnitude of the AOT daily cycle variability measured by us in our campaigns, even for distances closer than 15 km. In addition to inner city campaigns, we also show aerosol-tracing applications by measuring AOT variations from the city of Santiago to the Andes glaciers. Measuring AOT at high spatial resolution in urban areas can improve our understanding of urban scale aerosol circulation, providing information for solar energy planning, health policies, and climatological studies, among others.
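
    The retrieval behind such an instrument follows the Beer-Lambert relation; a minimal sketch, assuming the extraterrestrial constant V0 has already been transferred from the reference CIMEL and that Rayleigh and gas optical depths are supplied separately:

    ```python
    import numpy as np

    def aot_from_voltage(v, v0, airmass, tau_rayleigh, tau_gas=0.0):
        """Beer-Lambert retrieval (sketch):
        tau_aerosol = ln(V0/V)/m - tau_Rayleigh - tau_gas."""
        return np.log(v0 / v) / airmass - tau_rayleigh - tau_gas

    # Hypothetical single-channel example
    print(aot_from_voltage(v=1.20, v0=2.05, airmass=1.5, tau_rayleigh=0.10))
    ```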

  6. Procedures for establishing and maintaining consistent air-kerma strength standards for low-energy, photon-emitting brachytherapy sources: recommendations of the Calibration Laboratory Accreditation Subcommittee of the American Association of Physicists in Medicine.

    PubMed

    DeWerd, Larry A; Huq, M Saiful; Das, Indra J; Ibbott, Geoffrey S; Hanson, William F; Slowey, Thomas W; Williamson, Jeffrey F; Coursey, Bert M

    2004-03-01

    Low dose rate brachytherapy is being used extensively for the treatment of prostate cancer. As of September 2003, there are a total of thirteen 125I and seven 103Pd sources that have calibrations from the National Institute of Standards and Technology (NIST) and the Accredited Dosimetry Calibration Laboratories (ADCLs) of the American Association of Physicists in Medicine (AAPM). The dosimetry standards for these sources are traceable to the NIST wide-angle free-air chamber. Procedures have been developed by the AAPM Calibration Laboratory Accreditation Subcommittee to standardize quality assurance and calibration, and to maintain the dosimetric traceability of these sources to ensure accurate clinical dosimetry. A description of these procedures is provided to the clinical users for traceability purposes as well as to provide guidance to the manufacturers of brachytherapy sources and ADCLs with regard to these procedures.

  7. 40 CFR 91.320 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

... required (see following table). A table lists example calibration points (percent) and whether each is acceptable for calibration (e.g., 20, 30, 40...), and the regulation references the periodic interference, system check, and calibration test procedures specified in 40 CFR part 1065...

  8. More flexibility in representing geometric distortion in astronomical images

    NASA Astrophysics Data System (ADS)

    Shupe, David L.; Laher, Russ R.; Storrie-Lombardi, Lisa; Surace, Jason; Grillmair, Carl; Levitan, David; Sesar, Branimir

    2012-09-01

A number of popular software tools in the public domain are used by astronomers, professional and amateur alike, but some of the tools that have similar purposes cannot be easily interchanged, owing to the lack of a common standard. For the case of image distortion, SCAMP and SExtractor, available from Astromatic.net, perform astrometric calibration and source-object extraction on image data, and image-data geometric distortion is computed in celestial coordinates with polynomial coefficients stored in the FITS header with the PVi_j keywords. Another widely-used astrometric-calibration service, Astrometry.net, solves for distortion in pixel coordinates using the SIP convention that was introduced by the Spitzer Science Center. Up until now, due to the complexity of these distortion representations, it was very difficult to use the output of one of these packages as input to the other. New Python software, along with faster-computing C-language translations, has been developed at the Infrared Processing and Analysis Center (IPAC) to convert FITS-image headers from PV to SIP and vice versa. It is now possible to straightforwardly use Astrometry.net for astrometric calibration and then SExtractor for source-object extraction. The new software also enables astrometric calibration by SCAMP followed by image visualization with tools that support SIP distortion, but not PV. The software has been incorporated into the image-processing pipelines of the Palomar Transient Factory (PTF), which generate FITS images with headers containing both distortion representations. The software permits the conversion of archived images, such as from the Spitzer Heritage Archive and NASA/IPAC Infrared Science Archive, from SIP to PV or vice versa. This new capability renders unnecessary any new representation, such as the proposed TPV distortion convention.
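
    For reference, the SIP convention expresses distortion as polynomial corrections added to pixel offsets from the reference pixel; a minimal sketch of the forward evaluation only (FITS-header parsing and the PV form are omitted, and the coefficient dictionaries are hypothetical inputs):

    ```python
    def apply_sip(u, v, a_coeffs, b_coeffs):
        """SIP forward distortion: corrected intermediate pixel coordinates are
        u + sum_pq A[p,q]*u^p*v^q and v + sum_pq B[p,q]*u^p*v^q,
        with (u, v) measured relative to CRPIX."""
        du = sum(a * u**p * v**q for (p, q), a in a_coeffs.items())
        dv = sum(b * u**p * v**q for (p, q), b in b_coeffs.items())
        return u + du, v + dv

    # Hypothetical second-order coefficients
    A = {(2, 0): 1.0e-6, (1, 1): -2.0e-7, (0, 2): 3.0e-7}
    B = {(2, 0): -1.5e-7, (1, 1): 4.0e-7, (0, 2): 2.0e-6}
    print(apply_sip(512.0, -300.0, A, B))
    ```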

  9. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR CALIBRATION OF HARVARD PM SAMPLERS (UA-L-6.1)

    EPA Science Inventory

    The purpose of this SOP is to describe the procedures for calibrating Harvard particulate matter (PM) samplers. This procedure applies directly to the Harvard particulate matter (PM) samplers used during the Arizona NHEXAS project and the "Border" study. Keywords: lab; equipmen...

  10. Calculated and measured [Ca(2+)] in buffers used to calibrate Ca(2+) macroelectrodes.

    PubMed

    McGuigan, John A S; Stumpff, Friederike

    2013-05-01

The ionized concentration of calcium in physiological buffers ([Ca(2+)]) is normally calculated using either tabulated constants or software programs. To investigate the accuracy of such calculations, the [Ca(2+)] in EGTA [ethylene glycol-bis(β-aminoethylether)-N,N,N′,N′-tetraacetic acid], BAPTA [1,2-bis(o-aminophenoxy) ethane-N,N,N′,N′-tetraacetic acid], HEDTA [N-(2-hydroxyethyl)-ethylenediamine-N,N′,N′-triacetic acid], and NTA [N,N-bis(carboxymethyl)glycine] buffers was estimated using the ligand optimization method, and these measured values were compared with calculated values. All measurements overlapped in the pCa range of 3.51 (NTA) to 8.12 (EGTA). In all four buffer solutions, there was no correlation between measured and calculated values; the calculated values differed among themselves by factors varying from 1.3 (NTA) to 6.9 (EGTA). Independent measurements of EGTA purity and the apparent dissociation constants for HEDTA and NTA were not significantly different from the values estimated by the ligand optimization method, further substantiating the method. Using two calibration solutions of pCa 2.0 and 3.01 and seven buffers in the pCa range of 4.0-7.5, calibration of a Ca(2+) electrode over the pCa range of 2.0-7.5 became a routine procedure. It is proposed that such Ca(2+) calibration/buffer solutions be internationally defined and made commercially available to allow the precise measurement of [Ca(2+)] in biology. Copyright © 2013 Elsevier Inc. All rights reserved.
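
    The core of such calculations for a single 1:1 Ca-ligand buffer reduces to a quadratic in the free concentration. A minimal sketch of that textbook step; real programs additionally correct the apparent Kd for pH, ionic strength and temperature, which is where the calculated values diverge:

    ```python
    import numpy as np

    def free_ca(ca_total, ligand_total, kd_app):
        """Free [Ca2+] for a 1:1 Ca-ligand buffer from total Ca, total ligand and
        the apparent dissociation constant (all in the same concentration units)."""
        b = ligand_total - ca_total + kd_app
        return (-b + np.sqrt(b * b + 4.0 * kd_app * ca_total)) / 2.0

    # pCa of a hypothetical EGTA buffer: 2 mM total Ca, 5 mM total EGTA, Kd 1e-7 M
    print(-np.log10(free_ca(2e-3, 5e-3, 1e-7)))
    ```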

  11. Modulated CMOS camera for fluorescence lifetime microscopy.

    PubMed

    Chen, Hongtao; Holst, Gerhard; Gratton, Enrico

    2015-12-01

Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high costs involved in construction of such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high frequency modulated CMOS image sensor, QMFLIM2. Here we tested the camera and provide operational procedures to calibrate it and to improve accuracy using corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and needed for every camera setting, e.g., modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed, and hence an additional calibration is needed for each pixel depending on the pixel intensity level. These corrections are important not only for the fundamental frequency, but also for the higher harmonics when using the pulsed supercontinuum laser. With these post-data-acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring a large frame and high speed acquisition. © 2015 Wiley Periodicals, Inc.

  12. [Application of AOTF in spectral analysis. 1. Hardware and software designs for the self-constructed visible AOTF spectrophotometer].

    PubMed

    He, Jia-yao; Peng, Rong-fei; Zhang, Zhan-xia

    2002-02-01

A self-constructed visible spectrophotometer using an acousto-optic tunable filter (AOTF) as a dispersing element is described. Two different AOTFs (one from The Institute for Silicate (Shanghai, China) and the other from Brimrose (USA)) are tested. The software, written in Visual C++ and run on a Windows 98 platform, is an application program with a dual database and multiple windows. Four independent windows, namely scanning, quantitative, calibration and result, are incorporated. The Fourier self-deconvolution algorithm is also incorporated to improve the spectral resolution. The wavelengths are calibrated using the polynomial curve fitting method. The spectra and calibration curves of soluble aniline blue and phenol red are presented to show the feasibility of the constructed spectrophotometer.

  13. Investigation of cloud properties and atmospheric stability with MODIS

    NASA Technical Reports Server (NTRS)

    Menzel, P.; Ackerman, S.; Moeller, C.; Gumley, L.; Strabala, K.; Frey, R.; Prins, E.; LaPorte, D.; Lynch, M.

    1996-01-01

    The last half year was spent in preparing Version 1 software for delivery, and culminated in transfer of the Level 2 cloud mask production software to the SDST in April. A simulated MODIS test data set with good radiometric integrity was produced using MAS data for a clear ocean scene. ER-2 flight support and MAS data processing were provided by CIMSS personnel during the Apr-May 96 SUCCESS field campaign in Salina, Kansas. Improvements have been made in the absolute calibration of the MAS, including better characterization of the spectral response for all 50 channels. Plans were laid out for validating and testing the MODIS calibration techniques; these plans were further refined during a UW calibration meeting with MCST.

  14. SU-E-T-421: Failure Mode and Effects Analysis (FMEA) of Xoft Electronic Brachytherapy for the Treatment of Superficial Skin Cancers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoisak, J; Manger, R; Dragojevic, I

Purpose: To perform a failure mode and effects analysis (FMEA) of the process for treating superficial skin cancers with the Xoft Axxent electronic brachytherapy (eBx) system, given the recent introduction of expanded quality control (QC) initiatives at our institution. Methods: A process map was developed listing all steps in superficial treatments with Xoft eBx, from the initial patient consult to the completion of the treatment course. The process map guided the FMEA to identify the failure modes for each step in the treatment workflow and assign Risk Priority Numbers (RPN), calculated as the product of the failure mode's probability of occurrence (O), severity (S) and lack of detectability (D). FMEA was done with and without the inclusion of recent QC initiatives such as increased staffing, physics oversight, standardized source calibration, treatment planning and documentation. The failure modes with the highest RPNs were identified and contrasted before and after introduction of the QC initiatives. Results: Based on the FMEA, the failure modes with the highest RPN were related to source calibration, treatment planning, and patient setup/treatment delivery (Fig. 1). The introduction of additional physics oversight, standardized planning and safety initiatives such as checklists and time-outs reduced the RPNs of these failure modes. High-risk failure modes that could be mitigated with improved hardware and software interlocks were identified. Conclusion: The FMEA analysis identified the steps in the treatment process presenting the highest risk. The introduction of enhanced QC initiatives mitigated the risk of some of these failure modes by decreasing their probability of occurrence and increasing their detectability. This analysis demonstrates the importance of well-designed QC policies, procedures and oversight in a Xoft eBx programme for treatment of superficial skin cancers. Unresolved high risk failure modes highlight the need for non-procedural quality initiatives such as improved planning software and more robust hardware interlock systems.

  15. Workcell calibration for effective offline programming

    NASA Technical Reports Server (NTRS)

    Stiles, Roger D.; Jones, Clyde S.

    1989-01-01

In the application of graphics systems for off-line programming (OLP) of robotic systems, the inevitability of errors in the model representation of real-world situations requires that a method to map these differences be incorporated as an integral part of the overall system programming procedures. This paper discusses several proven robot-to-positioner calibration techniques necessary to reflect real-world parameters in a work-cell model. Particular attention is given to the procedures used to adjust a graphics model to an acceptable degree of accuracy for integration of OLP for the Space Shuttle Main Engine welding automation. Consideration is given to the levels of calibration, requirements, special considerations for coordinated motion, and calibration procedures.

  16. An image-processing software package: UU and Fig for optical metrology applications

    NASA Astrophysics Data System (ADS)

    Chen, Lujie

    2013-06-01

Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], Fourier transform [2], digital image correlation [3], camera calibration [4], etc., in which image processing is a critical and indispensable component. While it is not too difficult to obtain a wide variety of image-processing programs from the internet, few cater to the relatively specialized area of optical metrology. This paper introduces an image-processing software package, UU (data processing) and Fig (data rendering), that incorporates many useful functions for processing optical metrological data. The cross-platform programs UU and Fig are developed based on wxWidgets. At the time of writing, the package has been tested on Windows, Linux and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, fitting of data, 3D image processing, vector image processing, precision device control (rotary stage, PZT stage, etc.), point cloud to surface reconstruction, volume rendering, batch processing, etc. The software package is currently used in a number of universities for teaching and research.

  17. The March 1985 demonstration of the fiducial network concept for GPS geodesy: A preliminary report

    NASA Technical Reports Server (NTRS)

    Davidson, J. M.; Thornton, C. L.; Dixon, T. H.; Vegos, C. J.; Young, L. E.; Yunck, T. P.

    1986-01-01

The first field tests in preparation for the NASA Global Positioning System (GPS) Caribbean Initiative were conducted in late March and early April of 1985. The GPS receivers were located at the POLARIS Very Long Baseline Interferometry (VLBI) stations at Westford, Massachusetts; Richmond, Florida; and Ft. Davis, Texas; and at the Mojave, Owens Valley, and Hat Creek VLBI stations in California. Other mobile receivers were placed near Mammoth Lakes, California; Pt. Mugu, California; Austin, Texas; and Dahlgren, Virginia. These sites were equipped with a combination of GPS receiver types, including SERIES-X, TI-4100 and AFGL dual frequency receivers. The principal objectives of these tests were the demonstration of the fiducial network concept for precise GPS geodesy, the performance assessment of the participating GPS receiver types, and the conduct of the first in a series of experiments to monitor ground deformation in the Mammoth Lakes-Long Valley caldera region in California. Other objectives included the testing of the water vapor radiometers for the calibration of GPS data, the development of efficient procedures for planning and coordinating GPS field exercises, the establishment of institutional interfaces for future cooperative ventures, the testing of the GPS Data Analysis Software (GIPSY, for GPS Inferred Positioning SYstem), and the establishment of a set of calibration baselines in California. Preliminary reports on the success of the field tests, including receiver performance and data quality, and on the status of the data analysis software are given.

  18. DEM Calibration Approach: design of experiment

    NASA Astrophysics Data System (ADS)

    Boikov, A. V.; Savelev, R. V.; Payor, V. A.

    2018-05-01

The problem of DEM model calibration is considered in this article. It is proposed to divide a model's input parameters into those that require iterative calibration and those that are best measured directly. A new method for model calibration, based on design of experiments for the iteratively calibrated parameters, is proposed. The experiment is conducted using a specially designed test stand, and the results are processed with machine vision algorithms. Approximating functions are obtained and the error of the implemented hardware and software system is estimated. The prospects of the obtained results are discussed.

  19. Strain Gauge Balance Calibration and Data Reduction at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Ferris, A. T. Judy

    1999-01-01

    This paper will cover the standard force balance calibration and data reduction techniques used at Langley Research Center. It will cover balance axes definition, balance type, calibration instrumentation, traceability of standards to NIST, calibration loading procedures, balance calibration mathematical model, calibration data reduction techniques, balance accuracy reporting, and calibration frequency.
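
    A first-order sketch of the data-reduction idea behind balance calibration: solving for a sensitivity matrix by least squares over the calibration loadings. The full Langley mathematical model includes second-order and interaction terms and iterative corrections, which are not shown; the names here are illustrative:

    ```python
    import numpy as np

    def fit_balance_matrix(readings, applied_loads):
        """First-order calibration: solve applied_loads ~ readings @ C for the
        sensitivity matrix C by least squares.
        readings: (n_points, n_gauges), applied_loads: (n_points, n_components)."""
        C, *_ = np.linalg.lstsq(readings, applied_loads, rcond=None)
        return C

    # Later, during data reduction: estimated_loads = new_readings @ C
    ```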

  20. Sensitivity and Calibration of Non-Destructive Evaluation Method That Uses Neural-Net Processing of Characteristic Fringe Patterns

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Weiland, Kenneth E.

    2003-01-01

    This paper answers some performance and calibration questions about a non-destructive-evaluation (NDE) procedure that uses artificial neural networks to detect structural damage or other changes from sub-sampled characteristic patterns. The method shows increasing sensitivity as the number of sub-samples increases from 108 to 6912. The sensitivity of this robust NDE method is not affected by noisy excitations of the first vibration mode. A calibration procedure is proposed and demonstrated where the output of a trained net can be correlated with the outputs of the point sensors used for vibration testing. The calibration procedure is based on controlled changes of fastener torques. A heterodyne interferometer is used as a displacement sensor for a demonstration of the challenges to be handled in using standard point sensors for calibration.

  1. Calibration and Validation of the Checkpoint Model to the Air Force Electronic Systems Center Software Database

    DTIC Science & Technology

    1997-09-01

    Illinois Institute of Technology Research Institute (IITRI) calibrated seven parametric models including SPQR/20, the forerunner of CHECKPOINT. The...a semicolon); thus, SPQR/20 was calibrated using SLOC sizing data (IITRI, 1989: 3-4). The results showed only slight overall improvements in accuracy...even when validating the calibrated models with the same data sets. The IITRI study demonstrated SPQR/20 to be one of two models that were most

  2. Software Tools for Design and Performance Evaluation of Intelligent Systems

    DTIC Science & Technology

    2004-08-01

    Self-calibration of Three-Legged Modular Reconfigurable Parallel Robots Based on Leg-End Distance Errors," Robotica, Vol. 19, pp. 187-198. [4...9] Lintott, A. B., and Dunlop, G. R., "Parallel Topology Robot Calibration," Robotica. [10] Vischer, P., and Clavel, R., "Kinematic Calibration...of the Parallel Delta Robot," Robotica, Vol. 16, pp. 207-218, 1998. [11] Joshi, S.A., and Surianarayan, A., "Calibration of a 6-DOF Cable Robot Using

  3. Non-uniformity calibration for MWIR polarization imagery obtained with integrated microgrid polarimeters

    NASA Astrophysics Data System (ADS)

    Liu, Hai-Zheng; Shi, Ze-Lin; Feng, Bin; Hui, Bin; Zhao, Yao-Hong

    2016-03-01

    Integrating microgrid polarimeters on the focal plane array (FPA) of an infrared detector causes non-uniformity of the polarization response. In order to reduce the effect of polarization non-uniformity, this paper constructs an experimental setup for capturing raw flat-field images and proposes a procedure for acquiring a non-uniformity calibration (NUC) matrix and calibrating raw polarization images. The proposed procedure takes the incident radiation as a polarization vector and offers a calibration matrix for each pixel. Both our matrix calibration and two-point calibration are applied to our mid-wavelength infrared (MWIR) polarization imaging system with integrated microgrid polarimeters. Compared with two-point calibration, our matrix calibration reduces non-uniformity by 30-40% in flat-field tests with polarized input. The outdoor scene observation experiment indicates that our calibration can effectively reduce polarization non-uniformity and improve the image quality of our MWIR polarization imaging system.

  4. Data collection procedures for the Software Engineering Laboratory (SEL) database

    NASA Technical Reports Server (NTRS)

    Heller, Gerard; Valett, Jon; Wild, Mary

    1992-01-01

    This document is a guidebook to collecting software engineering data on software development and maintenance efforts, as practiced in the Software Engineering Laboratory (SEL). It supersedes the document entitled Data Collection Procedures for the Rehosted SEL Database, number SEL-87-008 in the SEL series, which was published in October 1987. It presents procedures to be followed on software development and maintenance projects in the Flight Dynamics Division (FDD) of Goddard Space Flight Center (GSFC) for collecting data in support of SEL software engineering research activities. These procedures include detailed instructions for the completion and submission of SEL data collection forms.

  5. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR CALIBRATION OF HARVARD PM SAMPLERS (UA-L-6.1)

    EPA Science Inventory

    The purpose of this SOP is to describe the procedures for calibrating Harvard particulate matter (PM) samplers. This procedure applies directly to the Harvard particulate matter (PM) samplers used during the Arizona NHEXAS project and the Border study. Keywords: lab; equipment;...

  6. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR CALIBRATION, MAINTENANCE AND OPERATION OF ELECTRONIC BALANCES (BCO-L-23.0)

    EPA Science Inventory

    The purpose of this SOP is to describe the general procedures to be followed for the operation, calibration and maintenance of electronic balances. This procedure was followed to ensure consistent data retrieval during the Arizona NHEXAS project and the "Border" study. Keywords: ...

  7. The Site-Scale Saturated Zone Flow Model for Yucca Mountain

    NASA Astrophysics Data System (ADS)

    Al-Aziz, E.; James, S. C.; Arnold, B. W.; Zyvoloski, G. A.

    2006-12-01

    This presentation provides a reinterpreted conceptual model of the Yucca Mountain site-scale flow system subject to all quality assurance procedures. The results are based on a numerical model of the site-scale saturated zone beneath Yucca Mountain, which is used for performance assessment predictions of radionuclide transport and to guide future data collection and modeling activities. This effort started from the ground up with a revised and updated hydrogeologic framework model, which incorporates the latest lithology data, and increased grid resolution that better resolves the hydrogeologic framework throughout the model domain. In addition, faults are much better represented using 250 m × 250 m grid spacing (compared to the previous model's 500 m × 500 m spacing). Data collected since the previous model calibration effort have been included; they comprise all Nye County water-level data through Phase IV of their Early Warning Drilling Program. Target boundary fluxes are derived from the newest (2004) Death Valley Regional Flow System model from the U.S. Geological Survey. A consistent weighting scheme assigns importance to each measured water-level datum and boundary flux extracted from the regional model. The numerical model is calibrated by matching these weighted water-level measurements and boundary fluxes using parameter estimation techniques, along with more informal comparisons of the model to hydrologic and geochemical information. The model software (the hydrologic simulation code FEHM v2.24 and the parameter estimation software PEST v5.5) and model setup facilitate efficient calibration of multiple conceptual models. Analyses evaluate the impact of these updates and additional data on the modeled potentiometric surface and the flowpaths emanating from below the repository. After examining the heads and permeabilities obtained from the calibrated models, we present particle pathways from the proposed repository and compare them to those from the previous model calibration. Specific discharge at a point 5 km from the repository is also examined and found to be within acceptable uncertainty. The results show that the updated model yields a calibration with smaller residuals than the previous model revision while ensuring that flowpaths follow measured gradients and paths derived from hydrochemical analyses. This work was supported by the Yucca Mountain Site Characterization Office as part of the Civilian Radioactive Waste Management Program, which is managed by the U.S. Department of Energy, Yucca Mountain Site Characterization Project. Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under Contract DE-AC04-94AL85000.

  8. ACTS (Advanced Communications Technology Satellite) Propagation Experiment: Preprocessing Software User's Manual

    NASA Technical Reports Server (NTRS)

    Crane, Robert K.; Wang, Xuhe; Westenhaver, David

    1996-01-01

    The preprocessing software manual describes the Actspp program, originally developed to observe and diagnose Advanced Communications Technology Satellite (ACTS) propagation terminal/receiver problems. However, it has proven quite useful for automating the preprocessing functions needed to convert the terminal output to useful attenuation estimates. Before the data are acceptable for archival functions, the individual receiver system must be calibrated and the power level shifts caused by ranging tone modulation must be removed. Actspp provides three output files: the daylog, the diurnal coefficient file, and the file that contains calibration information.

  9. Improving mass measurement accuracy in mass spectrometry based proteomics by combining open source tools for chromatographic alignment and internal calibration.

    PubMed

    Palmblad, Magnus; van der Burgt, Yuri E M; Dalebout, Hans; Derks, Rico J E; Schoenmaker, Bart; Deelder, André M

    2009-05-02

    Accurate mass determination enhances peptide identification in mass spectrometry based proteomics. We here describe the combination of two previously published open source software tools to improve mass measurement accuracy in Fourier transform ion cyclotron resonance mass spectrometry (FTICRMS). The first program, msalign, aligns one MS/MS dataset with one FTICRMS dataset. The second software, recal2, uses peptides identified from the MS/MS data for automated internal calibration of the FTICR spectra, resulting in sub-ppm mass measurement errors.
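
    As a rough illustration of the internal-calibration step (recal2 itself applies an FTICR-specific calibration law; the simple linear correction and all masses below are assumptions made for this sketch), peptides identified from the aligned MS/MS data supply calibrant masses against which the measured m/z axis is refitted:

        # Illustrative sketch only: internal recalibration of measured m/z values using
        # peptides identified from the aligned MS/MS data. recal2 applies an
        # FTICR-specific calibration law; a first-order correction stands in here.
        import numpy as np

        observed_mz    = np.array([500.2512, 1021.5048, 1479.7602])   # measured calibrants
        theoretical_mz = np.array([500.2489, 1021.4998, 1479.7535])   # from identified peptides

        # Fit observed -> theoretical with a linear model
        a, b = np.polyfit(observed_mz, theoretical_mz, 1)

        def recalibrate(mz):
            """Apply the fitted correction to an arbitrary m/z value or array."""
            return a * mz + b

        spectrum_mz = np.array([612.3301, 899.4712, 1355.6320])
        print(recalibrate(spectrum_mz))   # corrected m/z axis for the full spectrum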

  10. Calibration and GEANT4 Simulations of the Phase II Proton Compute Tomography (pCT) Range Stack Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uzunyan, S. A.; Blazey, G.; Boi, S.

    Northern Illinois University, in collaboration with Fermi National Accelerator Laboratory (FNAL) and Delhi University, has been designing and building a proton CT scanner for applications in proton treatment planning. The Phase II proton CT scanner consists of eight planes of tracking detectors with two X and two Y coordinate measurements both before and after the patient. In addition, a range stack detector consisting of a stack of thin scintillator tiles, arranged in twelve eight-tile frames, is used to determine the water equivalent path length (WEPL) of each track through the patient. The X-Y coordinates and WEPL are required input for image reconstruction software to find the relative (proton) stopping power (RSP) value of each voxel in the patient and generate a corresponding 3D image. In this Note we describe tests conducted in 2015 at the proton beam at the Central DuPage Hospital in Warrenville, IL, focusing on the range stack calibration procedure and comparisons with the GEANT4 range stack simulation.

  11. A Testbed for Model Development

    NASA Astrophysics Data System (ADS)

    Berry, J. A.; Van der Tol, C.; Kornfeld, A.

    2014-12-01

    Carbon cycle and land-surface models used in global simulations need to be computationally efficient and have a high standard of software engineering. These models also make a number of scaling assumptions to simplify the representation of complex biochemical and structural properties of ecosystems. This makes it difficult to use these models to test new ideas for parameterizations or to evaluate scaling assumptions. The stripped-down nature of these models also makes it difficult to "connect" with current disciplinary research, which tends to be focused on much more nuanced topics than can be included in the models. In our opinion and experience, this indicates the need for another type of model that can more faithfully represent the complexity of ecosystems and which has the flexibility to change or interchange parameterizations and to run optimization codes for calibration. We have used the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model in this way to develop, calibrate, and test parameterizations for solar-induced chlorophyll fluorescence, OCS exchange, and stomatal behavior at the canopy scale. Examples of the data sets and procedures used to develop and test new parameterizations are presented.

  12. Calibration and correction procedures for cosmic-ray neutron soil moisture probes located across Australia

    NASA Astrophysics Data System (ADS)

    Hawdon, Aaron; McJannet, David; Wallace, Jim

    2014-06-01

    The cosmic-ray probe (CRP) provides continuous estimates of soil moisture over an area of ˜30 ha by counting fast neutrons produced from cosmic rays which are predominantly moderated by water molecules in the soil. This paper describes the setup, measurement correction procedures, and field calibration of CRPs at nine locations across Australia with contrasting soil type, climate, and land cover. These probes form the inaugural Australian CRP network, which is known as CosmOz. CRP measurements require neutron count rates to be corrected for effects of atmospheric pressure, water vapor pressure changes, and variations in incoming neutron intensity. We assess the magnitude and importance of these corrections and present standardized approaches for network-wide analysis. In particular, we present a new approach to correct for incoming neutron intensity variations and test its performance against existing procedures used in other studies. Our field calibration results indicate that a generalized calibration function for relating neutron counts to soil moisture is suitable for all soil types, with the possible exception of very sandy soils with low water content. Using multiple calibration data sets, we demonstrate that the generalized calibration function only applies after accounting for persistent sources of hydrogen in the soil profile. Finally, we demonstrate that by following standardized correction procedures and scaling neutron counting rates of all CRPs to a single reference location, differences in calibrations between sites are related to site biomass. This observation provides a means for estimating biomass at a given location or for deriving coefficients for the calibration function in the absence of field calibration data.
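
    A minimal numerical sketch of the workflow described above is given below. It assumes the commonly cited shape of the CRP calibration function and generic correction factors for pressure, humidity and incoming neutron intensity; the coefficient values are literature defaults, not the site-specific CosmOz parameters.

        # Sketch of the widely used CRP calibration shape (after Desilets et al.), with
        # the neutron count rate first corrected for pressure, humidity and incoming
        # intensity. Coefficients and correction forms are commonly cited defaults,
        # not the CosmOz network's site-specific values.
        import math

        def correct_counts(N_raw, P, P_ref, beta, e_abs, e_ref, I, I_ref):
            f_pressure  = math.exp(beta * (P - P_ref))        # barometric correction
            f_humidity  = 1.0 + 0.0054 * (e_abs - e_ref)      # absolute humidity (g/m^3)
            f_intensity = I_ref / I                           # incoming neutron intensity
            return N_raw * f_pressure * f_humidity * f_intensity

        def soil_moisture(N, N0, a0=0.0808, a1=0.372, a2=0.115):
            """Gravimetric water content from corrected counts N and dry-soil count N0."""
            return a0 / (N / N0 - a1) - a2

        N = correct_counts(N_raw=2450, P=1003.0, P_ref=1013.25, beta=0.0077,
                           e_abs=9.0, e_ref=0.0, I=150.0, I_ref=145.0)
        print(soil_moisture(N, N0=3200.0))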

  13. Geometrical Characterisation of a 2D Laser System and Calibration of a Cross-Grid Encoder by Means of a Self-Calibration Methodology

    PubMed Central

    Torralba, Marta; Díaz-Pérez, Lucía C.

    2017-01-01

    This article presents a self-calibration procedure and the experimental results for the geometrical characterisation of a 2D laser system operating along a large working range (50 mm × 50 mm) with submicrometre uncertainty. Its purpose is to correct the geometric errors of the 2D laser system setup generated when positioning the two laser heads and the plane mirrors used as reflectors. The non-calibrated artefact used in this procedure is a commercial grid encoder that is also a measuring instrument. Therefore, the self-calibration procedure also allows the determination of the geometrical errors of the grid encoder, including its squareness error. The precision of the proposed algorithm is tested using virtual data. Actual measurements are subsequently registered, and the algorithm is applied. Once the laser system is characterised, the error of the grid encoder is calculated along the working range, resulting in an expanded submicrometre calibration uncertainty (k = 2) for the X and Y axes. The results of the grid encoder calibration are comparable to the errors provided by the calibration certificate for its main central axes. It is, therefore, possible to confirm the suitability of the self-calibration methodology proposed in this article. PMID:28858239

  14. Java-Library for the Access, Storage and Editing of Calibration Metadata of Optical Sensors

    NASA Astrophysics Data System (ADS)

    Firlej, M.; Kresse, W.

    2016-06-01

    The standardization of the calibration of optical sensors in photogrammetry and remote sensing has been discussed for more than a decade. Projects of the German DGPF and the European EuroSDR led to the abstract International Technical Specification ISO/TS 19159-1:2014 "Calibration and validation of remote sensing imagery sensors and data - Part 1: Optical sensors". This article presents the first software interface providing read and write access to all metadata elements standardized in ISO/TS 19159-1. The interface is based on an XML schema that was automatically derived by ShapeChange from the UML model of the Specification. The software interface serves two cases. First, the more than 300 standardized metadata elements are stored individually according to the XML schema. Secondly, camera manufacturers use many administrative data that are not part of ISO/TS 19159-1. The new software interface provides a mechanism for input, storage, editing, and output of both types of data. Finally, an output channel towards a usual calibration protocol is provided. The interface is written in Java. The article also addresses observations made when analysing ISO/TS 19159-1 and compiles a list of proposals for maturing the document, i.e. for an updated version of the Specification.

  15. Fermentation process tracking through enhanced spectral calibration modeling.

    PubMed

    Triadaphillou, Sophia; Martin, Elaine; Montague, Gary; Norden, Alison; Jeffkins, Paul; Stimpson, Sarah

    2007-06-15

    The FDA process analytical technology (PAT) initiative will materialize in a significant increase in the number of installations of spectroscopic instrumentation. However, to attain the greatest benefit from the data generated, there is a need for calibration procedures that extract the maximum information content. For example, in fermentation processes, the interpretation of the resulting spectra is challenging as a consequence of the large number of wavelengths recorded, the underlying correlation structure that is evident between the wavelengths and the impact of the measurement environment. Approaches to the development of calibration models have been based on the application of partial least squares (PLS) either to the full spectral signature or to a subset of wavelengths. This paper presents a new approach to calibration modeling that combines a wavelength selection procedure, spectral window selection (SWS), where windows of wavelengths are automatically selected which are subsequently used as the basis of the calibration model. However, due to the non-uniqueness of the windows selected when the algorithm is executed repeatedly, multiple models are constructed and these are then combined using stacking thereby increasing the robustness of the final calibration model. The methodology is applied to data generated during the monitoring of broth concentrations in an industrial fermentation process from on-line near-infrared (NIR) and mid-infrared (MIR) spectrometers. It is shown that the proposed calibration modeling procedure outperforms traditional calibration procedures, as well as enabling the identification of the critical regions of the spectra with regard to the fermentation process.
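
    The sketch below illustrates the general idea of window selection plus stacking rather than the authors' SWS algorithm: random wavelength windows are drawn, a PLS calibration model is fitted to each window, and the window models are combined by averaging their predictions. All data, window counts and component settings are synthetic placeholders.

        # Minimal sketch (not the published SWS procedure): random spectral windows,
        # one PLS model per window, and stacking by averaging the window predictions.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        n_samples, n_wavelengths = 60, 400
        X = rng.normal(size=(n_samples, n_wavelengths))                         # stand-in NIR spectra
        y = X[:, 100:120].sum(axis=1) + rng.normal(scale=0.1, size=n_samples)   # analyte concentration

        window_width, n_models = 50, 10
        models = []
        for _ in range(n_models):
            start = rng.integers(0, n_wavelengths - window_width)
            pls = PLSRegression(n_components=5).fit(X[:, start:start + window_width], y)
            models.append((start, pls))

        def stacked_predict(X_new):
            preds = [pls.predict(X_new[:, s:s + window_width]).ravel() for s, pls in models]
            return np.mean(preds, axis=0)      # simple stacking by averaging

        print(stacked_predict(X[:5]))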

  16. Toward a standard line for use in multibeam echo sounder calibration

    NASA Astrophysics Data System (ADS)

    Weber, Thomas C.; Rice, Glen; Smith, Michael

    2018-06-01

    A procedure is suggested in which a relative calibration of the intensity output of a multibeam echo sounder (MBES) can be performed. This procedure identifies a common survey line (i.e., a standard line) over which acoustic backscatter from the seafloor is collected with multiple MBES systems or by the same system multiple times. A location on the standard line which exhibits temporal stability in its seafloor backscatter response is used to bring the intensity output of the multiple MBES systems to a common reference. This relative calibration procedure has utility for MBES users wishing to generate an aggregate seafloor backscatter mosaic using multiple systems, to revisit an area to detect changes in substrate type, or to compare substrate types in the same general area with different systems or different system settings. The calibration procedure is demonstrated using three different MBES systems over 3 different years in New Castle, NH, USA.
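
    A minimal numerical sketch of the relative-calibration step follows, assuming the stable reference patch yields a mean backscatter level per system (all values are illustrative): each system's mosaic is shifted by the dB offset that brings its patch mean to the chosen reference level.

        # Hedged sketch: bring backscatter mosaics from several MBES systems to a
        # common intensity reference using the temporally stable patch of the
        # standard line. System names and dB levels are invented.
        import numpy as np

        # Mean backscatter (dB) each system records over the stable reference patch
        patch_means = {"system_A": -22.4, "system_B": -25.1, "system_C": -20.9}
        reference_level = patch_means["system_A"]          # choose one system as reference

        offsets = {name: reference_level - level for name, level in patch_means.items()}

        def calibrate(mosaic_dB, system):
            """Apply the relative offset so all mosaics share a common intensity reference."""
            return mosaic_dB + offsets[system]

        line_B = np.array([-27.0, -24.5, -26.2])
        print(calibrate(line_B, "system_B"))               # shifted by +2.7 dB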

  17. The analytical calibration in (bio)imaging/mapping of the metallic elements in biological samples--definitions, nomenclature and strategies: state of the art.

    PubMed

    Jurowski, Kamil; Buszewski, Bogusław; Piekoszewski, Wojciech

    2015-01-01

    Nowadays, studies related to the distribution of metallic elements in biological samples are among the most important issues. There are many articles dedicated to specific analytical atomic spectrometry techniques used for mapping/(bio)imaging the metallic elements in various kinds of biological samples. However, in such literature there is a lack of articles dedicated to reviewing calibration strategies and their problems, nomenclature, definitions, and the ways and methods used to obtain quantitative distribution maps. The aim of this article was to characterize the analytical calibration in the (bio)imaging/mapping of the metallic elements in biological samples, including (1) nomenclature; (2) definitions; and (3) selected and sophisticated examples of calibration strategies with analytical calibration procedures applied in the different analytical methods currently used to study an element's distribution in biological samples/materials, such as LA ICP-MS, SIMS, EDS, XRF and others. The main emphasis was placed on the procedures and methodology of the analytical calibration strategy. Additionally, the aim of this work is to systematize the nomenclature for the calibration terms: analytical calibration, analytical calibration method, analytical calibration procedure and analytical calibration strategy. The authors also want to popularize a division of calibration methods different from that hitherto used. This article is the first work in the literature that refers to and emphasizes the many different and complex aspects of analytical calibration problems in studies related to (bio)imaging/mapping metallic elements in different kinds of biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Teaching Camera Calibration by a Constructivist Methodology

    ERIC Educational Resources Information Center

    Samper, D.; Santolaria, J.; Pastor, J. J.; Aguilar, J. J.

    2010-01-01

    This article describes the Metrovisionlab simulation software and practical sessions designed to teach the most important machine vision camera calibration aspects in courses for senior undergraduate students. By following a constructivist methodology, having received introductory theoretical classes, students use the Metrovisionlab application to…
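
    Metrovisionlab itself is the simulation environment described in the article; as a language-neutral illustration of the underlying pinhole calibration exercise taught in such sessions, the hedged OpenCV sketch below estimates camera intrinsics and lens distortion from chessboard images (the image folder and pattern size are assumptions).

        # Illustrative only: standard chessboard-based camera calibration with OpenCV,
        # shown here as a stand-in for the calibration concepts taught with Metrovisionlab.
        import glob
        import cv2
        import numpy as np

        pattern = (9, 6)                                    # inner chessboard corners (cols, rows)
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

        obj_points, img_points, image_size = [], [], None
        for fname in glob.glob("calib_images/*.png"):       # hypothetical image folder
            gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
            image_size = gray.shape[::-1]
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                obj_points.append(objp)
                img_points.append(corners)

        # Estimate the camera matrix K and distortion coefficients from all detections
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_points, img_points, image_size, None, None)
        print("reprojection RMS:", rms)
        print("camera matrix:\n", K)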

  19. A multi-objective approach to improve SWAT model calibration in alpine catchments

    NASA Astrophysics Data System (ADS)

    Tuo, Ye; Marcolini, Giorgia; Disse, Markus; Chiogna, Gabriele

    2018-04-01

    Multi-objective hydrological model calibration can represent a valuable solution to reduce model equifinality and parameter uncertainty. The Soil and Water Assessment Tool (SWAT) model is widely applied to investigate water quality and water management issues in alpine catchments. However, the model calibration is generally based on discharge records only, and most of the previous studies have defined a unique set of snow parameters for an entire basin. Only a few studies have considered snow observations to validate model results or have taken into account the possible variability of snow parameters for different subbasins. This work presents and compares three possible calibration approaches. The first two procedures are single-objective calibration procedures, for which all parameters of the SWAT model were calibrated according to river discharge alone. Procedures I and II differ from each other by the assumption used to define snow parameters: The first approach assigned a unique set of snow parameters to the entire basin, whereas the second approach assigned different subbasin-specific sets of snow parameters to each subbasin. The third procedure is a multi-objective calibration, in which we considered snow water equivalent (SWE) information at two different spatial scales (i.e. subbasin and elevation band), in addition to discharge measurements. We tested these approaches in the Upper Adige river basin where a dense network of snow depth measurement stations is available. Only the set of parameters obtained with this multi-objective procedure provided an acceptable prediction of both river discharge and SWE. These findings offer the large community of SWAT users a strategy to improve SWAT modeling in alpine catchments.
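
    A minimal sketch of the multi-objective idea is given below; the use of Nash-Sutcliffe efficiency for both criteria and the equal weights are assumptions made for illustration, not the exact formulation calibrated in the paper.

        # Sketch of a combined discharge + snow water equivalent (SWE) objective.
        # Weights and the choice of NSE are illustrative assumptions.
        import numpy as np

        def nse(obs, sim):
            """Nash-Sutcliffe efficiency."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def multi_objective(q_obs, q_sim, swe_obs, swe_sim, w_q=0.5, w_swe=0.5):
            # Maximising this aggregate rewards parameter sets that reproduce both
            # river discharge and subbasin/elevation-band SWE.
            return w_q * nse(q_obs, q_sim) + w_swe * nse(swe_obs, swe_sim)

        print(multi_objective([10, 12, 9], [11, 12, 8], [100, 150, 90], [105, 140, 95]))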

  20. Complete scanpaths analysis toolbox.

    PubMed

    Augustyniak, Piotr; Mikrut, Zbigniew

    2006-01-01

    This paper presents a complete open software environment for the control, data processing and assessment of visual experiments. Visual experiments are widely used in research on human perception physiology, and the results are applicable to various visual information-based man-machine interfacing, human-emulated automatic visual systems or scanpath-based learning of perceptual habits. The toolbox is designed for the Matlab platform and supports an infra-red reflection-based eyetracker in calibration and scanpath analysis modes. Toolbox procedures are organized in three layers: the lower layer communicates with the eyetracker output file, the middle layer detects scanpath events on a physiological basis, and the upper layer consists of experiment schedule scripts, statistics and summaries. Several examples of visual experiments carried out with the presented toolbox complete the paper.

  1. The development of a dynamic software for the user interaction from the geographic information system environment with the database of the calibration site of the satellite remote electro-optic sensors

    NASA Astrophysics Data System (ADS)

    Zyelyk, Ya. I.; Semeniv, O. V.

    2015-12-01

    The state of the problem of post-launch calibration of satellite electro-optic remote sensors, and its solutions in Ukraine, is analyzed. The database is improved, and dynamic services are created for user interaction with the database from the environment of the open geographic information system Quantum GIS (QGIS) to provide information support for calibration activities. A dynamic application under QGIS is developed that implements these services for entering, editing and extracting data from the database, using object-oriented programming and modern program design patterns. The functional and algorithmic support of this dynamic software and its interface are developed.

  2. The Calibration of dc Voltage Standards at NIST

    PubMed Central

    Field, Bruce F.

    1990-01-01

    This document describes the procedures used at NIST to calibrate dc voltage standards in terms of the NIST volt. Three calibration services are offered by the Electricity Division: the Regular Calibration Service (RCS), in which client standard cells are calibrated at NIST; the Volt Transfer Program (VTP), a process to determine the difference between the NIST volt and the volt as maintained by a group of standard cells in a client laboratory; and the calibration of client solid-state dc voltage standards at NIST. The operational procedures used to compare these voltage standards to NIST voltage standards and to maintain the NIST volt via the ac Josephson effect are discussed. PMID:28179777

  3. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR OPERATION, CALIBRATION AND MAINTENANCE OF FIXED AND ADJUSTABLE VOLUME PIPETTE GUNS (BCO-L-9.0)

    EPA Science Inventory

    The purpose of this SOP is to describe the general procedures for the operation, calibration, and maintenance of fixed- and adjustable-volume pipette guns. This procedure was followed to ensure consistent data retrieval during the Arizona NHEXAS project and the "Border" study. Ke...

  4. Mathematical calibration procedure of a capacitive sensor-based indexed metrology platform

    NASA Astrophysics Data System (ADS)

    Brau-Avila, A.; Santolaria, J.; Acero, R.; Valenzuela-Galvan, M.; Herrera-Jimenez, V. M.; Aguilar, J. J.

    2017-03-01

    The demand for faster and more reliable measuring tasks for the control and quality assurance of modern production systems has created new challenges for the field of coordinate metrology. Thus, the search for new solutions in coordinate metrology systems and the need for the development of existing ones still persist. One example of such a system is the portable coordinate measuring machine (PCMM), the use of which in industry has considerably increased in recent years, mostly due to its flexibility for accomplishing in-line measuring tasks as well as its reduced cost and operational advantages compared to traditional coordinate measuring machines. Nevertheless, PCMMs have a significant drawback derived from the techniques applied in the verification and optimization procedures of their kinematic parameters. These techniques are based on the capture of data with the measuring instrument from a calibrated gauge object, fixed successively in various positions so that most of the instrument measuring volume is covered, which results in time-consuming, tedious and expensive verification and optimization procedures. In this work the mathematical calibration procedure of a capacitive sensor-based indexed metrology platform (IMP) is presented. This calibration procedure is based on the readings and geometric features of six capacitive sensors and their targets with nanometer resolution. The final goal of the IMP calibration procedure is to optimize the geometric features of the capacitive sensors and their targets in order to use the optimized data in the verification procedures of PCMMs.

  5. Automated Attitude Sensor Calibration: Progress and Plans

    NASA Technical Reports Server (NTRS)

    Sedlak, Joseph; Hashmall, Joseph

    2004-01-01

    This paper describes ongoing work at NASA/Goddard Space Flight Center to improve the quality of spacecraft attitude sensor calibration and reduce costs by automating parts of the calibration process. The new calibration software can autonomously preview data quality over a given time span, select a subset of the data for processing, perform the requested calibration, and output a report. This level of automation is currently being implemented for two specific applications: inertial reference unit (IRU) calibration and sensor alignment calibration. The IRU calibration utility makes use of a sequential version of the Davenport algorithm. This utility has been successfully tested with simulated and actual flight data. The alignment calibration is still in the early testing stage. Both utilities will be incorporated into the institutional attitude ground support system.

  6. Infrared stereo calibration for unmanned ground vehicle navigation

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Strange, Shawn

    2014-06-01

    The problem of calibrating two color cameras as a stereo pair has been heavily researched, and many off-the-shelf software packages, such as Robot Operating System and OpenCV, include calibration routines that work in most cases. However, the problem of calibrating two infrared (IR) cameras for the purposes of sensor fusion and point cloud generation is relatively new and many challenges exist. We present a comparison of color camera and IR camera stereo calibration using data from an unmanned ground vehicle. There are two main challenges in IR stereo calibration: the calibration board (material, design, etc.) and the accuracy of calibration pattern detection. We present our analysis of these challenges along with our IR stereo calibration methodology. Finally, we present our results both visually and analytically with computed reprojection errors.

  7. Deep space network software cost estimation model

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1981-01-01

    A parametric software cost estimation model prepared for Jet Propulsion Laboratory (JPL) Deep Space Network (DSN) Data System implementation tasks is described. The resource estimation model modifies and combines a number of existing models. The model calibrates the task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software life-cycle statistics.

  8. Enabling image fusion for a CT guided needle placement robot

    NASA Astrophysics Data System (ADS)

    Seifabadi, Reza; Xu, Sheng; Aalamifar, Fereshteh; Velusamy, Gnanasekar; Puhazhendi, Kaliyappan; Wood, Bradford J.

    2017-03-01

    Purpose: This study presents the development and integration of hardware and software that enables ultrasound (US) and computed tomography (CT) fusion for an FDA-approved CT-guided needle placement robot. Having a real-time US image registered to an a priori-taken intraoperative CT image provides more anatomic information during needle insertion, in order to target hard-to-see lesions or avoid critical structures invisible to CT, track target motion, and better monitor the ablation treatment zone in relation to the tumor location. Method: A passive encoded mechanical arm was developed for the robot in order to hold and track an abdominal US transducer. This 4-degree-of-freedom (DOF) arm is designed to attach to the robot end-effector. The arm is locked by default and is released by a press of a button. The arm is designed such that the needle is always in plane with the US image. The articulated arm is calibrated to improve its accuracy. Custom-designed software (OncoNav, NIH) was developed to fuse the real-time US image to the a priori-taken CT. Results: The accuracy of the end effector before and after passive arm calibration was 7.07 ± 4.14 mm and 1.74 ± 1.60 mm, respectively. The accuracy of the US image to the arm calibration was 5 mm. The feasibility of US-CT fusion using the proposed hardware and software was demonstrated in an abdominal commercial phantom. Conclusions: Calibration significantly improved the accuracy of the arm in US image tracking. Fusion of US to CT using the proposed hardware and software was feasible.

  9. Texas flexible pavements and overlays : calibration plans for M-E models and related software.

    DOT National Transportation Integrated Search

    2013-06-01

    This five-year project was initiated to collect materials and pavement performance data on a minimum of 100 highway test sections around the State of Texas, incorporating flexible pavements and overlays. Besides being used to calibrate and validate m...

  10. Hydrodynamic Energy Saving Enhancements for DDG 51 Class Ships

    DTIC Science & Technology

    2012-02-01

    temperature and pressure in the hydraulic pitch control system, expansion and contraction of the pitch control rods, improper pitch calibration procedure ...outdated pitch calibration, etc. Experience during hot pitch calibration procedures conducted by NSWCCD prior to powering trials has indicated that...18% increase in power. Sea trials conducted during a long-term evaluation on the USS WHIPPLE (FF 1062) showed that by 800 days out of drydock

  11. SWAT: Model use, calibration, and validation

    USDA-ARS?s Scientific Manuscript database

    SWAT (Soil and Water Assessment Tool) is a comprehensive, semi-distributed river basin model that requires a large number of input parameters which complicates model parameterization and calibration. Several calibration techniques have been developed for SWAT including manual calibration procedures...

  12. 40 CFR 1065.325 - Intake-flow calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Section 1065.325 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calibrations and Verifications Flow-Related Measurements § 1065.325 Intake-flow calibration. (a) Calibrate intake-air flow meters upon initial installation. Follow the...

  13. Simulation model calibration and validation : phase II : development of implementation handbook and short course.

    DOT National Transportation Integrated Search

    2006-01-01

    A previous study developed a procedure for microscopic simulation model calibration and validation and evaluated the procedure via two relatively simple case studies using three microscopic simulation models. Results showed that default parameters we...

  14. 40 CFR 1066.215 - Summary of verification and calibration procedures for chassis dynamometers.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Dynamometer... manufacturer instructions and good engineering judgment. (c) Automated dynamometer verifications and... accomplish the verifications and calibrations specified in this subpart. You may use these automated...

  15. 40 CFR 1066.215 - Summary of verification and calibration procedures for chassis dynamometers.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Dynamometer... manufacturer instructions and good engineering judgment. (c) Automated dynamometer verifications and... accomplish the verifications and calibrations specified in this subpart. You may use these automated...

  16. The MeqTrees software system and its use for third-generation calibration of radio interferometers

    NASA Astrophysics Data System (ADS)

    Noordam, J. E.; Smirnov, O. M.

    2010-12-01

    Context. The formulation of the radio interferometer measurement equation (RIME) for a generic radio telescope by Hamaker et al. has provided us with an elegant mathematical apparatus for better understanding, simulation and calibration of existing and future instruments. The calibration of the new radio telescopes (LOFAR, SKA) would be unthinkable without the RIME formalism, and new software to exploit it. Aims: The MeqTrees software system is designed to implement numerical models, and to solve for arbitrary subsets of their parameters. It may be applied to many problems, but was originally geared towards implementing Measurement Equations in radio astronomy for the purposes of simulation and calibration. The technical goal of MeqTrees is to provide a tool for rapid implementation of such models, while offering performance comparable to hand-written code. We are also pursuing the wider goal of increasing the rate of evolution of radio astronomical software, by offering a tool that facilitates rapid experimentation, and exchange of ideas (and scripts). Methods: MeqTrees is implemented as a Python-based front-end called the meqbrowser, and an efficient (C++-based) computational back-end called the meqserver. Numerical models are defined on the front-end via a Python-based Tree Definition Language (TDL), then rapidly executed on the back-end. The use of TDL facilitates an extremely short turn-around time (hours rather than weeks or months) for experimentation with new ideas. This is also helped by unprecedented visualization capabilities for all final and intermediate results. A flexible data model and a number of important optimizations in the back-end ensures that the numerical performance is comparable to that of hand-written code. Results: MeqTrees is already widely used as the simulation tool for new instruments (LOFAR, SKA) and technologies (focal plane arrays). It has demonstrated that it can achieve a noise-limited dynamic range in excess of a million, on WSRT data. It is the only package that is specifically designed to handle what we propose to call third-generation calibration (3GC), which is needed for the new generation of giant radio telescopes, but can also improve the calibration of existing instruments.
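
    For reference, a commonly quoted 2×2 (Jones) form of the RIME that systems such as MeqTrees implement numerically is sketched below; the notation follows the Hamaker et al. formalism in broad strokes rather than any specific MeqTrees tree.

        % A commonly quoted 2x2 (Jones) form of the RIME; J_p collects the accumulated
        % propagation and instrumental effects (delay/phase, gains, leakage, beam, ...)
        % along the signal path to antenna p, and B is the source brightness
        % (coherency) matrix.
        V_{pq} = J_p \, B \, J_q^{H}
        % For a sky of discrete sources s, with per-source Jones chains J_{sp}:
        V_{pq} = \sum_{s} J_{sp} \, B_s \, J_{sq}^{H}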

  17. 40 CFR 90.320 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (64 percent) is required (see following table). Example calibration points (%) Acceptable for... periodic interference, system check, and calibration test procedures specified in 40 CFR part 1065...

  18. 40 CFR 90.320 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (64 percent) is required (see following table). Example calibration points (%) Acceptable for... periodic interference, system check, and calibration test procedures specified in 40 CFR part 1065...

  19. 40 CFR 90.320 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (64 percent) is required (see following table). Example calibration points (%) Acceptable for... periodic interference, system check, and calibration test procedures specified in 40 CFR part 1065...

  20. A methodology to develop computational phantoms with adjustable posture for WBC calibration

    NASA Astrophysics Data System (ADS)

    Ferreira Fonseca, T. C.; Bogaerts, R.; Hunt, John; Vanhavere, F.

    2014-11-01

    A Whole Body Counter (WBC) is a facility to routinely assess the internal contamination of exposed workers, especially in the case of radiation release accidents. The calibration of the counting device is usually done by using anthropomorphic physical phantoms representing the human body. Due to such a challenge of constructing representative physical phantoms a virtual calibration has been introduced. The use of computational phantoms and the Monte Carlo method to simulate radiation transport have been demonstrated to be a worthy alternative. In this study we introduce a methodology developed for the creation of realistic computational voxel phantoms with adjustable posture for WBC calibration. The methodology makes use of different software packages to enable the creation and modification of computational voxel phantoms. This allows voxel phantoms to be developed on demand for the calibration of different WBC configurations. This in turn helps to study the major source of uncertainty associated with the in vivo measurement routine which is the difference between the calibration phantoms and the real persons being counted. The use of realistic computational phantoms also helps the optimization of the counting measurement. Open source codes such as MakeHuman and Blender software packages have been used for the creation and modelling of 3D humanoid characters based on polygonal mesh surfaces. Also, a home-made software was developed whose goal is to convert the binary 3D voxel grid into a MCNPX input file. This paper summarizes the development of a library of phantoms of the human body that uses two basic phantoms called MaMP and FeMP (Male and Female Mesh Phantoms) to create a set of male and female phantoms that vary both in height and in weight. Two sets of MaMP and FeMP phantoms were developed and used for efficiency calibration of two different WBC set-ups: the Doel NPP WBC laboratory and AGM laboratory of SCK-CEN in Mol, Belgium.

  1. A methodology to develop computational phantoms with adjustable posture for WBC calibration.

    PubMed

    Fonseca, T C Ferreira; Bogaerts, R; Hunt, John; Vanhavere, F

    2014-11-21

    A Whole Body Counter (WBC) is a facility to routinely assess the internal contamination of exposed workers, especially in the case of radiation release accidents. The calibration of the counting device is usually done by using anthropomorphic physical phantoms representing the human body. Due to such a challenge of constructing representative physical phantoms a virtual calibration has been introduced. The use of computational phantoms and the Monte Carlo method to simulate radiation transport have been demonstrated to be a worthy alternative. In this study we introduce a methodology developed for the creation of realistic computational voxel phantoms with adjustable posture for WBC calibration. The methodology makes use of different software packages to enable the creation and modification of computational voxel phantoms. This allows voxel phantoms to be developed on demand for the calibration of different WBC configurations. This in turn helps to study the major source of uncertainty associated with the in vivo measurement routine which is the difference between the calibration phantoms and the real persons being counted. The use of realistic computational phantoms also helps the optimization of the counting measurement. Open source codes such as MakeHuman and Blender software packages have been used for the creation and modelling of 3D humanoid characters based on polygonal mesh surfaces. Also, a home-made software was developed whose goal is to convert the binary 3D voxel grid into a MCNPX input file. This paper summarizes the development of a library of phantoms of the human body that uses two basic phantoms called MaMP and FeMP (Male and Female Mesh Phantoms) to create a set of male and female phantoms that vary both in height and in weight. Two sets of MaMP and FeMP phantoms were developed and used for efficiency calibration of two different WBC set-ups: the Doel NPP WBC laboratory and AGM laboratory of SCK-CEN in Mol, Belgium.

  2. Automated response matching for organic scintillation detector arrays

    NASA Astrophysics Data System (ADS)

    Aspinall, M. D.; Joyce, M. J.; Cave, F. D.; Plenteda, R.; Tomanin, A.

    2017-07-01

    This paper identifies a digitizer technology with unique features that facilitates feedback control for the realization of a software-based technique for automatically calibrating detector responses. Three such auto-calibration techniques have been developed and are described, along with an explanation of the main configuration settings and potential pitfalls. Automating this process increases repeatability, simplifies user operation, and enables remote and periodic system calibration where consistency across detector responses is critical.
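
    The stub below sketches one plausible form of such a feedback loop (hardware access is faked and all numbers are invented): each channel's gain is adjusted until a reference spectral feature lands at a common target position, which is the essence of response matching.

        # Hypothetical sketch of software-driven response matching: iteratively adjust
        # each channel's gain until a reference spectral feature (e.g. a Compton edge)
        # sits at a common target position. Hardware access is represented by stubs.
        def measure_feature_position(channel, gain):
            """Stub for acquiring a spectrum and locating the reference feature."""
            true_gain = {0: 1.00, 1: 0.92, 2: 1.07}[channel]
            return 480.0 * gain * true_gain          # stand-in detector response

        def auto_calibrate(channel, target=480.0, tol=1.0, max_iter=50):
            gain = 1.0
            for _ in range(max_iter):
                pos = measure_feature_position(channel, gain)
                if abs(pos - target) < tol:
                    break
                gain *= target / pos                 # proportional feedback update
            return gain

        matched_gains = {ch: auto_calibrate(ch) for ch in (0, 1, 2)}
        print(matched_gains)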

  3. Autotune Calibrates Models to Building Use Data

    ScienceCinema

    None

    2018-01-16

    Models of existing buildings are currently unreliable unless calibrated manually by a skilled professional. Autotune, as the name implies, automates this process by calibrating the model of an existing building to measured data, and is now available as open source software. This enables private businesses to incorporate Autotune into their products so that their customers can more effectively estimate cost savings of reduced energy consumption measures in existing buildings.
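
    Autotune's own machinery relies on machine learning and evolutionary search over large parameter spaces; the sketch below only illustrates the underlying idea of tuning uncertain model inputs against measured utility data with a toy model and a generic optimizer, so every parameter, bound and data value is an assumption.

        # Not the Autotune algorithm: a toy calibration that adjusts two uncertain
        # building-model inputs to minimize the CV(RMSE) against measured monthly use.
        import numpy as np
        from scipy.optimize import minimize

        measured_kwh = np.array([820, 760, 690, 540, 430, 510,
                                 620, 640, 520, 480, 600, 790])   # monthly bills (illustrative)

        def simulate(params):
            """Stand-in for a building simulation; params = (infiltration, insulation R)."""
            infiltration, r_value = params
            hdd = np.array([900, 800, 650, 400, 200, 80, 30, 40, 150, 350, 550, 820])
            return 300 + hdd * infiltration * 2.0 / r_value

        def cvrmse(params):
            sim = simulate(params)
            return np.sqrt(np.mean((measured_kwh - sim) ** 2)) / measured_kwh.mean()

        result = minimize(cvrmse, x0=[0.5, 10.0], bounds=[(0.1, 2.0), (5.0, 40.0)])
        print("calibrated parameters:", result.x, " CV(RMSE):", cvrmse(result.x))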

  4. The Chandra Source Catalog 2.0: Calibrations

    NASA Astrophysics Data System (ADS)

    Graessle, Dale E.; Evans, Ian N.; Rots, Arnold H.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula

    2018-01-01

    Among the many enhancements implemented for the release of Chandra Source Catalog (CSC) 2.0 are improvements in the processing calibration database (CalDB). We have included a thorough overhaul of the CalDB software used in the processing. The software system upgrade, called "CalDB version 4," allows for a more rational and consistent specification of flight configurations and calibration boundary conditions. Numerous improvements in the specific calibrations applied have also been added. Chandra's radiometric and detector response calibrations vary considerably with time, detector operating temperature, and position on the detector. The CalDB has been enhanced to provide the best calibrations possible to each observation over the fifteen-year period included in CSC 2.0. Calibration updates include an improved ACIS contamination model, as well as updated time-varying gain (i.e., photon energy) and quantum efficiency maps for ACIS and HRC-I. Additionally, improved corrections for the ACIS quantum efficiency losses due to CCD charge transfer inefficiency (CTI) have been added for each of the ten ACIS detectors. These CTI corrections are now time and temperature-dependent, allowing ACIS to maintain a 0.3% energy calibration accuracy over the 0.5-7.0 keV range for any ACIS source in the catalog. Radiometric calibration (effective area) accuracy is estimated at ~4% over that range. We include a few examples where improvements in the Chandra CalDB allow for improved data reduction and modeling for the new CSC.This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.

  5. 40 CFR Appendix B to Part 75 - Quality Assurance and Quality Control Procedures

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... in section 2.3 of this appendix and the Hg emission tests described in §§ 75.81(c) and 75.81(d)(4). 1.2 Specific Requirements for Continuous Emissions Monitoring Systems. 1.2.1 Calibration Error Test and Linearity Check Procedures. Keep a written record of the procedures used for daily calibration error tests and...

  6. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR OPERATION, CALIBRATION AND MAINTENANCE OF FIXED AND ADJUSTABLE VOLUME PIPETTE GUNS (BCO-L-9.0)

    EPA Science Inventory

    The purpose of this SOP is to describe the general procedures for the operation, calibration, and maintenance of fixed- and adjustable-volume pipette guns. This procedure was followed to ensure consistent data retrieval during the Arizona NHEXAS project and the Border study. Keyw...

  7. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR OPERATION, CALIBRATION AND ROUTINE USE OF THE SPECTRACE 9000 FIELD PORTABLE X-RAY FLUORESCENCE ANALYZER (UA-L-10.1)

    EPA Science Inventory

    The purpose of this SOP is to describe the procedures for operating and calibrating the Spectrace 9000 field portable X-ray fluorescence analyzer. This procedure applies to the determination of metal concentrations in samples during the Arizona NHEXAS project and the "Border" st...

  8. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR OPERATION, CALIBRATION AND MAINTENANCE OF THE PERKIN-ELMER 1100B ATOMIC ABSORPTION SPECTROMETER (BCO-L-5.1)

    EPA Science Inventory

    The purpose of this SOP is to outline the start-up, calibration, operation, and maintenance procedures for the Perkin-Elmer 5100 PC Atomic Absorption Spectrophotometer (PE 5100). These procedures are used for the determination of the target trace metal, as in soil, house dust, f...

  9. 40 CFR 91.320 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (64 percent) is required (see following table). Example calibration points (percent) Acceptable for...) The initial and periodic interference, system check, and calibration test procedures specified in 40...

  10. 40 CFR 91.320 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (64 percent) is required (see following table). Example calibration points (percent) Acceptable for...) The initial and periodic interference, system check, and calibration test procedures specified in 40...

  11. 40 CFR 91.320 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (64 percent) is required (see following table). Example calibration points (percent) Acceptable for...) The initial and periodic interference, system check, and calibration test procedures specified in 40...

  12. Calibrating SANS data for instrument geometry and pixel sensitivity effects: access to an extended Q range

    PubMed Central

    Karge, Lukas; Gilles, Ralph

    2017-01-01

    An improved data-reduction procedure is proposed and demonstrated for small-angle neutron scattering (SANS) measurements. Its main feature is the correction of geometry- and wavelength-dependent intensity variations on the detector in a separate step from the different pixel sensitivities: the geometric and wavelength effects can be corrected analytically, while pixel sensitivities have to be calibrated to a reference measurement. The geometric effects are treated for position-sensitive 3He proportional counter tubes, where they are anisotropic owing to the cylindrical geometry of the gas tubes. For the calibration of pixel sensitivities, a procedure is developed that is valid for isotropic and anisotropic signals. The proposed procedure can save a significant amount of beamtime which has hitherto been used for calibration measurements. PMID:29021734

  13. Neural-Net Based Optical NDE Method for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Weiland, Kenneth E.

    2003-01-01

    This paper answers some performance and calibration questions about a non-destructive-evaluation (NDE) procedure that uses artificial neural networks to detect structural damage or other changes from sub-sampled characteristic patterns. The method shows increasing sensitivity as the number of sub-samples increases from 108 to 6912. The sensitivity of this robust NDE method is not affected by noisy excitations of the first vibration mode. A calibration procedure is proposed and demonstrated where the output of a trained net can be correlated with the outputs of the point sensors used for vibration testing. The calibration procedure is based on controlled changes of fastener torques. A heterodyne interferometer is used as a displacement sensor for a demonstration of the challenges to be handled in using standard point sensors for calibration.

  14. IceCube

    Science.gov Websites

    This group has written the portions of the offline software and simulations that involve the electronics and calibrations. It is responsible for the pieces of the detector calibration and simulation that are connected to the electronics that process and capture the signal produced by Cerenkov light in the photomultiplier tubes.

  15. 40 CFR 204.54 - Test procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... frequency response calibration and an attenuator (gain control) calibration plus a measurement of dynamic... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Test procedures. 204.54 Section 204.54 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) NOISE ABATEMENT PROGRAMS NOISE EMISSION...

  16. Psychophysical contrast calibration

    PubMed Central

    To, Long; Woods, Russell L; Goldstein, Robert B; Peli, Eli

    2013-01-01

    Electronic displays and computer systems offer numerous advantages for clinical vision testing. Laboratory and clinical measurements of various functions and in particular of (letter) contrast sensitivity require accurately calibrated display contrast. In the laboratory this is achieved using expensive light meters. We developed and evaluated a novel method that uses only psychophysical responses of a person with normal vision to calibrate the luminance contrast of displays for experimental and clinical applications. Our method combines psychophysical techniques (1) for detection (and thus elimination or reduction) of display saturating nonlinearities; (2) for luminance (gamma function) estimation and linearization without use of a photometer; and (3) to measure without a photometer the luminance ratios of the display’s three color channels that are used in a bit-stealing procedure to expand the luminance resolution of the display. Using a photometer we verified that the calibration achieved with this procedure is accurate for both LCD and CRT displays enabling testing of letter contrast sensitivity to 0.5%. Our visual calibration procedure enables clinical, internet and home implementation and calibration verification of electronic contrast testing. PMID:23643843
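
    The display model underlying such a calibration can be summarized with a short sketch; the gamma value and contrast level below are illustrative, the psychophysical estimation of gamma itself is not reproduced, and the bit-stealing step for extended luminance resolution is omitted.

        # Sketch of the standard display model: once the gamma exponent has been
        # estimated (psychophysically in the paper, photometrically elsewhere), the
        # lookup is inverted so requested contrast maps linearly to luminance.
        import numpy as np

        gamma = 2.2                                    # estimated display nonlinearity (illustrative)

        def luminance(v, gamma=gamma):
            """Normalized luminance of gray level v (0-255)."""
            return (v / 255.0) ** gamma

        def linearized_level(target_luminance, gamma=gamma):
            """Gray level that produces the requested normalized luminance."""
            return int(round(255.0 * target_luminance ** (1.0 / gamma)))

        # A 1% contrast decrement around a 0.5 luminance background:
        background = linearized_level(0.5)
        letter     = linearized_level(0.5 * (1 - 0.01))
        print(background, letter, luminance(background) - luminance(letter))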

  17. 2016 Research Outreach Program report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hye Young; Kim, Yangkyu

    2016-10-13

    This paper is the research activity report for four weeks at LANL. Under the guidance of Dr. Lee, who performs nuclear physics research at LANSCE, LANL, I studied the Low Energy NZ (LENZ) setup and how to use it. First, I studied the LENZ chamber and Si detectors and worked on detector calibrations using the computer software ROOT (a CERN-developed data analysis tool) and Excel (Microsoft Office software). I also performed calibration experiments that measure alpha particles emitted from a Th-229 source using an S1-type Si detector. Finally, with Dr. Lee, I checked the results.

  18. The Calibration Reference Data System

    NASA Astrophysics Data System (ADS)

    Greenfield, P.; Miller, T.

    2016-07-01

    We describe a software architecture and implementation for using rules to determine which calibration files are appropriate for calibrating a given observation. This new system, the Calibration Reference Data System (CRDS), replaces what had been previously used for the Hubble Space Telescope (HST) calibration pipelines, the Calibration Database System (CDBS). CRDS will be used for the James Webb Space Telescope (JWST) calibration pipelines, and is currently being used for HST calibration pipelines. CRDS can be easily generalized for use in similar applications that need a rules-based system for selecting the appropriate item for a given dataset; we give some examples of such generalizations that will likely be used for JWST. The core functionality of the Calibration Reference Data System is available under an Open Source license. CRDS is briefly contrasted with a sampling of other similar systems used at other observatories.
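
    The core of CRDS is a rules-based lookup: given a dataset's metadata, the system walks an ordered set of matching rules and returns the first applicable reference file. The sketch below illustrates only that general idea; the rule structure, keyword names, and file names are invented for illustration and are not the actual CRDS rules format or API.

      # Minimal sketch of rules-based reference-file selection; the rules and
      # file names below are illustrative placeholders, not real CRDS content.
      RULES = [
          # (match criteria, reference file)
          ({"instrument": "NICMOS", "detector": "NIC1", "filter": "F110W"}, "nic1_f110w_flat.fits"),
          ({"instrument": "NICMOS", "detector": "NIC1", "filter": "*"},     "nic1_generic_flat.fits"),
      ]

      def select_reference(dataset_header):
          """Return the first reference file whose criteria match the dataset header."""
          for criteria, ref_file in RULES:
              if all(value == "*" or dataset_header.get(key) == value
                     for key, value in criteria.items()):
                  return ref_file
          raise LookupError("no applicable reference file for this dataset")

      print(select_reference({"instrument": "NICMOS", "detector": "NIC1", "filter": "F110W"}))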

  19. Analyzing Serendipitous Asteroid Observations in Imaging Data using PHOTOMETRYPIPELINE

    NASA Astrophysics Data System (ADS)

    Ard, Christopher; Mommert, Michael; Trilling, David E.

    2016-10-01

    Asteroids are nearly ubiquitous in the night sky, making them present in the majority of imaging data taken every night. Serendipitous asteroid observations represent a treasure trove to Solar System researchers: accurate positional measurements of asteroids provide important constraints on their sometimes highly uncertain orbits, whereas calibrated photometric measurements can be used to establish rotational periods, intrinsic colors, or photometric phase curves. We present an add-on to the PHOTOMETRYPIPELINE (PP, github.com/mommermi/photometrypipeline, see Poster presentation 123.42) that identifies asteroids that have been observed serendipitously and extracts astrometry and calibrated photometry for these objects. PP is an open-source Python 2.7 software suite that provides image registration, aperture photometry, photometric calibration, and target identification with only minimal human interaction. Asteroids are identified based on approximate positions that are pre-calculated for a range of dates. Using interpolated coordinates, we identify potential asteroids that might be in the observed field and query their exact positions and positional uncertainties from the JPL Horizons system. The method results in robust astrometry and calibrated photometry for all asteroids in the field as a function of time. Our measurements will supplement existing photometric databases of asteroids and improve their orbits. We present first results using this procedure based on imaging data from the Vatican Advanced Technology Telescope. This work was done in the framework of NAU's REU summer program that is supported by NSF grant AST-1461200. PP was developed in the framework of the "Mission Accessible Near-Earth Object Survey" (MANOS) and is supported by NASA SSO grants NNX15AE90G and NNX14AN82G.
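
    For the Horizons lookup step, a minimal sketch of how such a query can be made from Python is shown below, assuming the third-party astroquery package (the pipeline add-on itself may implement this differently); the target, observatory code, and epoch are placeholder example values.

      from astroquery.jplhorizons import Horizons

      # Query the predicted position of one object at one epoch from one site;
      # replace the three placeholder values per observation.
      obj = Horizons(id="Ceres",            # target designation
                     location="568",        # MPC observatory code (Mauna Kea)
                     epochs=2458133.33546)  # observation epoch as a Julian Date
      eph = obj.ephemerides()
      print(eph["datetime_str", "RA", "DEC"])  # predicted coordinates to match against detections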

  20. 40 CFR 91.425 - CVS calibration frequency.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) CONTROL OF EMISSIONS FROM MARINE SPARK-IGNITION ENGINES Gaseous Exhaust Test Procedures § 91.425 CVS calibration frequency. Calibrate the CVS positive displacement pump or critical flow venturi...

  1. 40 CFR 91.425 - CVS calibration frequency.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) CONTROL OF EMISSIONS FROM MARINE SPARK-IGNITION ENGINES Gaseous Exhaust Test Procedures § 91.425 CVS calibration frequency. Calibrate the CVS positive displacement pump or critical flow venturi...

  2. 40 CFR 91.425 - CVS calibration frequency.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) CONTROL OF EMISSIONS FROM MARINE SPARK-IGNITION ENGINES Gaseous Exhaust Test Procedures § 91.425 CVS calibration frequency. Calibrate the CVS positive displacement pump or critical flow venturi...

  3. 40 CFR 91.425 - CVS calibration frequency.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) CONTROL OF EMISSIONS FROM MARINE SPARK-IGNITION ENGINES Gaseous Exhaust Test Procedures § 91.425 CVS calibration frequency. Calibrate the CVS positive displacement pump or critical flow venturi...

  4. 40 CFR 91.425 - CVS calibration frequency.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) CONTROL OF EMISSIONS FROM MARINE SPARK-IGNITION ENGINES Gaseous Exhaust Test Procedures § 91.425 CVS calibration frequency. Calibrate the CVS positive displacement pump or critical flow venturi...

  5. Modular modeling system for building distributed hydrologic models with a user-friendly software package

    NASA Astrophysics Data System (ADS)

    Wi, S.; Ray, P. A.; Brown, C.

    2015-12-01

    A software package developed to facilitate building distributed hydrologic models in a modular modeling system is presented. The software package provides a user-friendly graphical user interface that eases its practical use in water resources-related research and practice. The modular modeling system organizes the options available to users when assembling models according to the stages of hydrological cycle, such as potential evapotranspiration, soil moisture accounting, and snow/glacier melting processes. The software is intended to be a comprehensive tool that simplifies the task of developing, calibrating, validating, and using hydrologic models through the inclusion of intelligent automation to minimize user effort, and reduce opportunities for error. Processes so far automated include the definition of system boundaries (i.e., watershed delineation), climate and geographical input generation, and parameter calibration. Built-in post-processing toolkits greatly improve the functionality of the software as a decision support tool for water resources system management and planning. Example post-processing toolkits enable streamflow simulation at ungauged sites with predefined model parameters, and perform climate change risk assessment by means of the decision scaling approach. The software is validated through application to watersheds representing a variety of hydrologic regimes.
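
    As an illustration of the modular idea only (the component names and the toy water balance below are not the package's actual API), stages of the hydrological cycle can be represented as interchangeable functions that are chained into a single model:

      def simple_pet(temp_c):
          # crude temperature-based potential-evapotranspiration stand-in (mm/day)
          return max(0.0, 0.5 * temp_c)

      def bucket_soil(storage, precip, pet, capacity=150.0):
          # single-bucket soil moisture accounting: fill, evaporate, spill excess as runoff
          storage = storage + precip - min(pet, storage + precip)
          runoff = max(0.0, storage - capacity)
          return min(storage, capacity), runoff

      def run_model(forcings, stages):
          storage, flows = 50.0, []
          for precip, temp in forcings:
              pet = stages["pet"](temp)
              storage, runoff = stages["soil"](storage, precip, pet)
              flows.append(runoff)
          return flows

      stages = {"pet": simple_pet, "soil": bucket_soil}   # swap components to change the model structure
      print(run_model([(10.0, 15.0), (0.0, 20.0), (180.0, 5.0)], stages))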

  6. Automatic Camera Calibration for Cultural Heritage Applications Using Unstructured Planar Objects

    NASA Astrophysics Data System (ADS)

    Adam, K.; Kalisperakis, I.; Grammatikopoulos, L.; Karras, G.; Petsa, E.

    2013-07-01

    As a rule, image-based documentation of cultural heritage relies today on ordinary digital cameras and commercial software. As such projects often involve researchers not familiar with photogrammetry, the question of camera calibration is important. Freely available, open-source, user-friendly software for automatic camera calibration, often based on simple 2D chess-board patterns, is an answer to the demand for simplicity and automation. However, such tools cannot respond to all requirements met in cultural heritage conservation regarding possible imaging distances and focal lengths. Here we investigate the practical possibility of camera calibration from unknown planar objects, i.e. any planar surface with adequate texture; we have focused on the example of urban walls covered with graffiti. Images are connected pair-wise with inter-image homographies, which are estimated automatically through a RANSAC-based approach after extracting and matching interest points with the SIFT operator. All valid points are identified on all images on which they appear. Provided that the image set includes a "fronto-parallel" view, inter-image homographies with this image are regarded as emulations of image-to-world homographies and allow computing initial estimates for the interior and exterior orientation elements. Following this initialization step, the estimates are introduced into a final self-calibrating bundle adjustment. Measures are taken to discard unsuitable images and verify object planarity. Results from practical experimentation indicate that this method may produce satisfactory results. The authors intend to incorporate the described approach into their freely available user-friendly software tool, which relies on chess-boards, to assist non-experts in their projects with image-based approaches.
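
    A minimal sketch of the pair-wise step described above, matching SIFT interest points between two images and estimating their inter-image homography with a RANSAC-based approach, is given below using OpenCV; the file names are placeholders and the tool's actual implementation details may differ.

      import cv2
      import numpy as np

      # Placeholder image files; any two overlapping views of a textured plane will do.
      img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
      img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(img1, None)
      kp2, des2 = sift.detectAndCompute(img2, None)

      # Lowe ratio test keeps only distinctive matches
      matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
      good = [m for m, n in matches if m.distance < 0.75 * n.distance]

      src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
      dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

      # RANSAC-based homography; the inlier mask flags the valid correspondences
      H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
      print(H, int(inliers.sum()), "inliers")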

  7. AN EVALUATION OF FIVE COMMERCIAL IMMUNOASSAY DATA ANALYSIS SOFTWARE SYSTEMS

    EPA Science Inventory

    An evaluation of five commercial software systems used for immunoassay data analysis revealed numerous deficiencies. Often, the utility of statistical output was compromised by poor documentation. Several data sets were run through each system using a four-parameter calibration f...

  8. Algorithms for Coastal-Zone Color-Scanner Data

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Software for Nimbus-7 Coastal-Zone Color-Scanner (CZCS) derived products consists of a set of scientific algorithms for extracting information from CZCS-gathered data. The software uses a CZCS-generated Calibrated Radiance Temperature (CRT) tape as input and outputs a computer-compatible tape and film product.

  9. Identification of the most sensitive parameters in the activated sludge model implemented in BioWin software.

    PubMed

    Liwarska-Bizukojc, Ewa; Biernacki, Rafal

    2010-10-01

    In order to simulate biological wastewater treatment processes, data concerning wastewater and sludge composition, process kinetics and stoichiometry are required. Selection of the most sensitive parameters is an important step of model calibration. The aim of this work is to verify the predictability of the activated sludge model, which is implemented in BioWin software, and select its most influential kinetic and stoichiometric parameters with the help of a sensitivity analysis approach. Two different measures of sensitivity are applied: the normalised sensitivity coefficient (S(i,j)) and the mean square sensitivity measure (delta(j)(msqr)). It turns out that 17 kinetic and stoichiometric parameters of the BioWin activated sludge (AS) model can be regarded as influential on the basis of S(i,j) calculations. Half of the influential parameters are associated with growth and decay of phosphorus accumulating organisms (PAOs). The identification of the set of the most sensitive parameters should support the users of this model and initiate the elaboration of determination procedures for the parameters, for which it has not been done yet. Copyright 2010 Elsevier Ltd. All rights reserved.
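
    The normalised sensitivity coefficient has the standard form S(i,j) = (dy_i/dp_j)*(p_j/y_i); the sketch below computes it by central finite differences, with a toy Monod-style decay model standing in for the BioWin activated sludge model (parameter names, values, and the model itself are illustrative only).

      import numpy as np

      def model(params, t=np.linspace(0.0, 10.0, 50)):
          # crude substrate-decay stand-in whose rate depends on the two parameters
          mu_max, K_s, S0 = params["mu_max"], params["K_s"], 100.0
          return S0 * np.exp(-mu_max * S0 / (K_s + S0) * t)

      def normalised_sensitivity(params, name, rel_step=0.01):
          p0 = params[name]
          up = dict(params, **{name: p0 * (1 + rel_step)})
          dn = dict(params, **{name: p0 * (1 - rel_step)})
          dy_dp = (model(up) - model(dn)) / (2 * p0 * rel_step)   # central difference
          return dy_dp * p0 / model(params)                       # one S(i,j) per output point

      params = {"mu_max": 4.0, "K_s": 10.0}
      for name in params:
          S = normalised_sensitivity(params, name)
          print(name, "mean |S| =", np.abs(S).mean())   # rank parameters by mean absolute sensitivity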

  10. Kepler Science Operations Center Architecture

    NASA Technical Reports Server (NTRS)

    Middour, Christopher; Klaus, Todd; Jenkins, Jon; Pletcher, David; Cote, Miles; Chandrasekaran, Hema; Wohler, Bill; Girouard, Forrest; Gunter, Jay P.; Uddin, Kamal; hide

    2010-01-01

    We give an overview of the operational concepts and architecture of the Kepler Science Data Pipeline. Designed, developed, operated, and maintained by the Science Operations Center (SOC) at NASA Ames Research Center, the Kepler Science Data Pipeline is a central element of the Kepler Ground Data System. The SOC charter is to analyze stellar photometric data from the Kepler spacecraft and report results to the Kepler Science Office for further analysis. We describe how this is accomplished via the Kepler Science Data Pipeline, including the hardware infrastructure, scientific algorithms, and operational procedures. The SOC consists of an office at Ames Research Center, software development and operations departments, and a data center that hosts the computers required to perform data analysis. We discuss the high-performance, parallel computing software modules of the Kepler Science Data Pipeline that perform transit photometry, pixel-level calibration, systematic error correction, attitude determination, stellar target management, and instrument characterization. We explain how data processing environments are divided to support operational processing and test needs. We explain the operational timelines for data processing and the data constructs that flow into the Kepler Science Data Pipeline.

  11. 40 CFR 1065.640 - Flow meter calibration calculations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Flow meter calibration calculations... POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.640 Flow meter calibration calculations. This section describes the calculations for calibrating various flow meters. After...

  12. Finding trap stiffness of optical tweezers using digital filters.

    PubMed

    Almendarez-Rangel, Pedro; Morales-Cruzado, Beatriz; Sarmiento-Gómez, Erick; Pérez-Gutiérrez, Francisco G

    2018-02-01

    Obtaining trap stiffness and calibration of the position detection system is the basis of a force measurement using optical tweezers. Both calibration quantities can be calculated using several experimental methods available in the literature. In most cases, stiffness determination and detection system calibration are performed separately, often requiring procedures under very different conditions, so the consistency of the calibration is not assured because the environment may change between procedures. In this work, a new method to simultaneously obtain both the detection system calibration and the trap stiffness is presented. The method is based on the calculation of the power spectral density of positions through digital filters to obtain the harmonic contributions of the position signal. This method has the advantage of calculating both trap stiffness and the photodetector calibration factor from the same dataset in situ. It also provides a direct way to avoid unwanted frequencies, such as electrical noise, that could greatly affect the calibration procedure.
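
    For orientation, the sketch below shows the standard power-spectrum route to trap stiffness (fit a Lorentzian to the position power spectral density and use the corner frequency), which is related to, but not the same as, the digital-filter procedure of the paper; the sampling rate, bead size, and simulated position trace are all placeholders.

      import numpy as np
      from scipy.signal import welch
      from scipy.optimize import curve_fit

      fs, fc_true, dt = 50_000.0, 500.0, 1.0 / 50_000.0
      # crude simulated position trace of an overdamped bead in a harmonic trap
      x = np.zeros(2**18)
      noise = np.random.randn(x.size)
      for n in range(x.size - 1):
          x[n + 1] = x[n] * (1 - 2 * np.pi * fc_true * dt) + 0.01 * noise[n]

      f, Pxx = welch(x, fs=fs, nperseg=2**14)
      keep = (f > 10) & (f < 10_000)        # ignore low-frequency drift and the region near Nyquist

      def lorentzian(f, D, fc):
          return D / (np.pi**2 * (fc**2 + f**2))

      (D, fc), _ = curve_fit(lorentzian, f[keep], Pxx[keep], p0=[Pxx[keep].max(), 100.0])

      gamma = 6 * np.pi * 1.0e-3 * 0.5e-6   # Stokes drag of a 1 um bead in water (kg/s)
      stiffness = 2 * np.pi * gamma * fc    # trap stiffness in N/m (independent of the detector volts-to-metres factor)
      print(f"corner frequency {fc:.0f} Hz, stiffness {stiffness:.2e} N/m")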

  13. a Contemporary Approach for Evaluation of the best Measurement Capability of a Force Calibration Machine

    NASA Astrophysics Data System (ADS)

    Kumar, Harish

    The present paper discusses a procedure for evaluating the best measurement capability of a force calibration machine. The best measurement capability is evaluated by comparing the force calibration machine to force standard machines through precision force transfer standards. The force transfer standards are calibrated first by the force standard machine and then by the force calibration machine, following a similar procedure. The results are reported and discussed for a force calibration machine of 200 kN capacity, using force transfer standards of 20 kN, 50 kN and 200 kN nominal capacity. It is found that there are significant variations in the uncertainty of force realization by the force calibration machine according to the proposed method in comparison to the method adopted earlier.

  14. EOS MLS Level 2 Data Processing Software Version 3

    NASA Technical Reports Server (NTRS)

    Livesey, Nathaniel J.; VanSnyder, Livesey W.; Read, William G.; Schwartz, Michael J.; Lambert, Alyn; Santee, Michelle L.; Nguyen, Honghanh T.; Froidevaux, Lucien; wang, Shuhui; Manney, Gloria L.; hide

    2011-01-01

    This software accepts the calibrated EOS MLS microwave radiance products and operational meteorological data, and produces a set of estimates of atmospheric temperature and composition. This version has been designed to be as flexible as possible. The software is controlled by a Level 2 Configuration File that controls all aspects of the software: defining the contents of state and measurement vectors, defining the configurations of the various forward models available, reading appropriate a priori spectroscopic and calibration data, performing retrievals, post-processing results, computing diagnostics, and outputting results in appropriate files. In production mode, the software operates in a parallel form, with one instance of the program acting as a master, coordinating the work of multiple slave instances on a cluster of computers, each computing the results for individual chunks of data. In addition to performing conventional retrieval calculations and producing geophysical products, the Level 2 Configuration File can instruct the software to produce files of simulated radiances based on a state vector formed from a set of geophysical product files taken as input. Combining both the retrieval and simulation tasks in a single piece of software makes it far easier to ensure that identical forward model algorithms and parameters are used in both tasks. This also dramatically reduces the complexity of the code maintenance effort.

  15. Radiometer calibration methods and resulting irradiance differences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    The accuracy of solar radiation measured by radiometers depends on instrument performance specifications, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methodologies and the resulting differences provided by radiometric calibration service providers such as the National Renewable Energy Laboratory (NREL) and manufacturers of radiometers. Some of these methods calibrate radiometers indoors and some outdoors. To establish and understand the differences in calibration methodologies, we processed and analyzed field-measured data from radiometers deployed for 10 months at NREL's Solar Radiation Research Laboratory. These different methods of calibration resulted in a difference of +/-1% to +/-2% in solar irradiance measurements. Analyzing these differences will ultimately assist in determining the uncertainties of the field radiometer data and will help develop a consensus on a standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainties will help the accurate prediction of the output of planned solar conversion projects and improve the bankability of financing solar projects.

  16. Effects of line fiducial parameters and beamforming on ultrasound calibration

    PubMed Central

    Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Peters, Terry M.; Chen, Elvis C. S.

    2017-01-01

    Ultrasound (US)-guided interventions are often enhanced via integration with an augmented reality environment, a necessary component of which is US calibration. Calibration requires the segmentation of fiducials, i.e., a phantom, in US images. Fiducial localization error (FLE) can decrease US calibration accuracy, which fundamentally affects the total accuracy of the interventional guidance system. Here, we investigate the effects of US image reconstruction techniques as well as phantom material and geometry on US calibration. It was shown that the FLE was reduced by 29% with synthetic transmit aperture imaging compared with conventional B-mode imaging in a Z-bar calibration, resulting in a 10% reduction of calibration error. In addition, an evaluation of a variety of calibration phantoms with different geometrical and material properties was performed. The phantoms included braided wire, plastic straws, and polyvinyl alcohol cryogel tubes with different diameters. It was shown that these properties have a significant effect on calibration error, which is a variable based on US beamforming techniques. These results would have important implications for calibration procedures and their feasibility in the context of image-guided procedures. PMID:28331886

  17. Effects of line fiducial parameters and beamforming on ultrasound calibration.

    PubMed

    Ameri, Golafsoun; Baxter, John S H; McLeod, A Jonathan; Peters, Terry M; Chen, Elvis C S

    2017-01-01

    Ultrasound (US)-guided interventions are often enhanced via integration with an augmented reality environment, a necessary component of which is US calibration. Calibration requires the segmentation of fiducials, i.e., a phantom, in US images. Fiducial localization error (FLE) can decrease US calibration accuracy, which fundamentally affects the total accuracy of the interventional guidance system. Here, we investigate the effects of US image reconstruction techniques as well as phantom material and geometry on US calibration. It was shown that the FLE was reduced by 29% with synthetic transmit aperture imaging compared with conventional B-mode imaging in a Z-bar calibration, resulting in a 10% reduction of calibration error. In addition, an evaluation of a variety of calibration phantoms with different geometrical and material properties was performed. The phantoms included braided wire, plastic straws, and polyvinyl alcohol cryogel tubes with different diameters. It was shown that these properties have a significant effect on calibration error, which is a variable based on US beamforming techniques. These results would have important implications for calibration procedures and their feasibility in the context of image-guided procedures.

  18. Uncertainty Analysis of Inertial Model Attitude Sensor Calibration and Application with a Recommended New Calibration Method

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.

  19. Raman water vapor lidar calibration

    NASA Astrophysics Data System (ADS)

    Landulfo, E.; Da Costa, R. F.; Torres, A. S.; Lopes, F. J. S.; Whiteman, D. N.; Venable, D. D.

    2009-09-01

    We show new results of a Raman LIDAR calibration methodology effort, placing emphasis on assessing the cross-section ratio between water vapor and nitrogen using a calibrated, NIST-traceable tungsten lamp. We give a step-by-step procedure for employing such a lamp by mapping/scanning it over the receiving optics of a water vapor Raman LIDAR. This methodology has been used independently at the Howard University Raman LIDAR and at the IPEN Raman LIDAR, which strongly supports its reproducibility and points toward an independent calibration methodology that can be carried out as part of an experimental routine.

  20. Automated magnification calibration in transmission electron microscopy using Fourier analysis of replica images.

    PubMed

    van der Laak, Jeroen A W M; Dijkman, Henry B P M; Pahlplatz, Martin M M

    2006-03-01

    The magnification factor in transmission electron microscopy is not very precise, hampering for instance quantitative analysis of specimens. Calibration of the magnification is usually performed interactively using replica specimens, containing line or grating patterns with known spacing. In the present study, a procedure is described for automated magnification calibration using digital images of a line replica. This procedure is based on analysis of the power spectrum of Fourier transformed replica images, and is compared to interactive measurement in the same images. Images were used with magnification ranging from 1,000 x to 200,000 x. The automated procedure deviated on average 0.10% from interactive measurements. Especially for catalase replicas, the coefficient of variation of automated measurement was considerably smaller (average 0.28%) compared to that of interactive measurement (average 3.5%). In conclusion, calibration of the magnification in digital images from transmission electron microscopy may be performed automatically, using the procedure presented here, with high precision and accuracy.
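
    A minimal sketch of the Fourier approach is shown below: take the power spectrum of a line-replica image, locate the dominant spatial-frequency peak, and convert it to a line spacing in pixels, from which the image scale (and hence the magnification) follows. The synthetic image and the nominal replica spacing are placeholders.

      import numpy as np

      replica_spacing_nm = 463.0                         # nominal line spacing of the replica (placeholder)
      ny, nx = 512, 512
      xx = np.tile(np.arange(nx), (ny, 1))
      image = 1 + np.sin(2 * np.pi * xx / 24.0)          # synthetic replica image: lines every 24 px

      profile = image.mean(axis=0)                       # average rows to get a 1-D profile across the lines
      power = np.abs(np.fft.rfft(profile))
      power[0] = 0.0                                     # discard the DC term
      k = np.argmax(power)                               # index of the dominant spatial frequency
      spacing_px = nx / k                                # line spacing in pixels

      nm_per_pixel = replica_spacing_nm / spacing_px     # image scale, from which the magnification follows
      print(f"spacing = {spacing_px:.2f} px, scale = {nm_per_pixel:.2f} nm/pixel")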

  1. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR OPERATION, CALIBRATION, AND ROUTINE USE OF THE SPECTRACE 9000 FIELD PORTABLE X-RAY FLUORESCENCE ANALYZER (UA-L-10.1)

    EPA Science Inventory

    The purpose of this SOP is to describe the procedures for operating and calibrating the Spectrace 9000 field portable X-ray fluorescence analyzer. This procedure applies to the determination of metal concentrations in samples during the Arizona NHEXAS project and the Border stud...

  2. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR OPERATION, CALIBRATION, AND MAINTENANCE OF THE PERKIN-ELMER 5100 PC ATOMIC ABSORPTION SPECTROMETER (BCO-L-5.1)

    EPA Science Inventory

    The purpose of this SOP is to outline the start-up, calibration, operation, and maintenance procedures for the Perkin-Elmer 5100 PC Atomic Absorption Spectrophotometer (PE 5100). These procedures are used for the determination of the target trace metal, as in soil, house dust, f...

  3. 40 CFR 90.425 - CVS calibration frequency.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Gaseous Exhaust Test Procedures § 90.425 CVS calibration frequency. Calibrate the CVS positive displacement pump or...

  4. 40 CFR 90.425 - CVS calibration frequency.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Gaseous Exhaust Test Procedures § 90.425 CVS calibration frequency. Calibrate the CVS positive displacement pump or...

  5. 40 CFR 90.425 - CVS calibration frequency.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Gaseous Exhaust Test Procedures § 90.425 CVS calibration frequency. Calibrate the CVS positive displacement pump or...

  6. 40 CFR 90.425 - CVS calibration frequency.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Gaseous Exhaust Test Procedures § 90.425 CVS calibration frequency. Calibrate the CVS positive displacement pump or...

  7. 40 CFR 90.425 - CVS calibration frequency.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Gaseous Exhaust Test Procedures § 90.425 CVS calibration frequency. Calibrate the CVS positive displacement pump or...

  8. Supporting the Copernicus POD Service

    NASA Astrophysics Data System (ADS)

    Peter, Heike; Springer, Tim; Otten, Michiel; Fernandez, Jaime; Escobar, Diego; Femenias, Pierre

    2015-12-01

    The Copernicus POD (Precise Orbit Determination) Service is part of the Copernicus PDGS (Payload Data Ground Segment) of the Sentinel missions. A GMV-led consortium operates the Copernicus POD Service and is in charge of generating precise orbital products and auxiliary data files for use as part of the processing chains of the respective Sentinel PDGS. As part of the consortium, PosiTim is responsible for implementing and testing software and model updates thoroughly before integrating them into the operational chain of the Copernicus POD Service. The NAPEOS (Navigation Package for Earth Observation Satellites) software is used for the generation of the orbit products within the Copernicus POD Service. The test procedures and results obtained for a recent software and model update to the IERS 2010 Conventions are presented. Tests have also shown that the 72-hour arc length for the non-time-critical (NTC) orbit solutions can be shortened to 48 hours without losing accuracy. Orbit comparisons to external solutions help to validate the different orbit solutions. GPS antenna phase centre variations (PCVs) are one of the largest systematic error sources in POD. Since the satellite body may cause signal multipath, a ground calibration of the GPS antenna that does not take the satellite body into account might not be sufficient to quantify the PCVs. The PCVs are therefore obtained by an in-flight calibration. A first PCV map, determined from a limited amount of data at the beginning of the mission, showed significant multipath signals in parts of the antenna for code and carrier phase measurements. Since the satellite has moving parts, it was checked carefully whether these multipath regions move as well or are antenna-fixed. Normally the correction maps are applied only to the carrier phase measurements. Since significant multipath was also spotted in the code measurements, investigations are being performed to study the impact of additionally applying code correction maps in the POD process.

  9. SU-F-T-489: 4-Years Experience of QA in TomoTherapy MVCT: What Do We Look Out For?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, F; Chan, K

    2016-06-15

    Purpose: To evaluate the QA results of TomoTherapy MVCT from March 2012 to February 2016, and to identify issues that may affect consistency in HU numbers and reconstructed treatment dose in MVCT. Methods: Monthly QA was performed on our TomoHD system. A phantom with rod inserts of various mass densities was imaged in MVCT and compared to baseline to evaluate HU number consistency. To evaluate treatment dose reconstructed from the delivered sinogram and MVCT, a treatment plan was designed on a humanoid skull phantom. The phantom was imaged with MVCT and the treatment plan was delivered to obtain the sinogram. The dose reconstructed with the Planned Adaptive software was compared to the dose in the original plan. The QA tolerance for HU numbers was ±30 HU, and ±2% for the discrepancy between original plan dose and reconstructed dose. Tolerances were referenced to AAPM TG-148. Results: Several technical modifications or maintenance activities to the system were identified which affected QA results: 1) an upgrade of the console system software, which added a weekly HU calibration procedure; 2) linac or MLC replacement leading to a change in Accelerator Output Machine (AOM) parameters; 3) an upgrade of the planning system algorithm affecting MVCT dose reconstruction. These events caused abrupt changes in QA results, especially for the reconstructed dose. In the past 9 months, when no such modifications were made to the system, the reconstructed dose was consistent, with a maximum deviation from baseline of less than 0.6%. The HU numbers deviated by less than 5 HU. Conclusion: Routine QA is essential for MVCT, especially if the MVCT is used for daily dose reconstruction to monitor the dose delivered to patients. Several technical events which may affect its consistency are software changes and linac or MLC replacement. QA results reflected changes which justify re-calibration or system adjustment. In normal circumstances, the system should be relatively stable and quarterly QA may be sufficient.

  10. CrossTalk: The Journal of Defense Software Engineering. Volume 20, Number 6, June 2007

    DTIC Science & Technology

    2007-06-01

    California. He has co-authored the book Software Cost Estimation With COCOMO II with Barry Boehm and others. Clark helped define the COCOMO II model...Software Engineering at the University of Southern California. She worked with Barry Boehm and Chris Abts to develop and calibrate a cost-estimation...2003/02/ schorsch.html>. 2. See “Software Engineering, A Practitioners Approach” by Roger Pressman for a good description of coupling, cohesion

  11. Preliminary design of the HARMONI science software

    NASA Astrophysics Data System (ADS)

    Piqueras, Laure; Jarno, Aurelien; Pécontal-Rousset, Arlette; Loupias, Magali; Richard, Johan; Schwartz, Noah; Fusco, Thierry; Sauvage, Jean-François; Neichel, Benoît; Correia, Carlos M.

    2016-08-01

    This paper introduces the science software of HARMONI. The Instrument Numerical Model simulates the instrument from the optical point of view and provides synthetic exposures simulating detector readouts from data-cubes containing astrophysical scenes. The Data Reduction Software converts raw-data frames into a fully calibrated, scientifically usable data cube. We present the functionalities and the preliminary design of this software, describe some of the methods and algorithms used and highlight the challenges that we will have to face.

  12. 40 CFR 86.1524 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 20 2012-07-01 2012-07-01 false Carbon dioxide analyzer calibration. 86.1524 Section 86.1524 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Test Procedures § 86.1524 Carbon dioxide analyzer calibration. (a) The calibration requirements for the...

  13. 40 CFR 86.1524 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 19 2014-07-01 2014-07-01 false Carbon dioxide analyzer calibration. 86.1524 Section 86.1524 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Procedures § 86.1524 Carbon dioxide analyzer calibration. (a) The calibration requirements for the dilute...

  14. 40 CFR 86.1325-94 - Methane analyzer calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 19 2010-07-01 2010-07-01 false Methane analyzer calibration. 86.1325... Procedures § 86.1325-94 Methane analyzer calibration. Prior to introduction into service and monthly thereafter, the methane analyzer shall be calibrated: (a) Follow the manufacturer's instructions for...

  15. 40 CFR 86.1325-94 - Methane analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 20 2012-07-01 2012-07-01 false Methane analyzer calibration. 86.1325... Procedures § 86.1325-94 Methane analyzer calibration. Prior to introduction into service and monthly thereafter, the methane analyzer shall be calibrated: (a) Follow the manufacturer's instructions for...

  16. 40 CFR 86.1325-94 - Methane analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 20 2013-07-01 2013-07-01 false Methane analyzer calibration. 86.1325... Procedures § 86.1325-94 Methane analyzer calibration. Prior to introduction into service and monthly thereafter, the methane analyzer shall be calibrated: (a) Follow the manufacturer's instructions for...

  17. 40 CFR 86.1325-94 - Methane analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 19 2011-07-01 2011-07-01 false Methane analyzer calibration. 86.1325... Procedures § 86.1325-94 Methane analyzer calibration. Prior to introduction into service and monthly thereafter, the methane analyzer shall be calibrated: (a) Follow the manufacturer's instructions for...

  18. Four years of Landsat-7 on-orbit geometric calibration and performance

    USGS Publications Warehouse

    Lee, D.S.; Storey, James C.; Choate, M.J.; Hayes, R.W.

    2004-01-01

    Unlike its predecessors, Landsat-7 has undergone regular geometric and radiometric performance monitoring and calibration since launch in April 1999. This ongoing activity, which includes issuing quarterly updates to calibration parameters, has generated a wealth of geometric performance data over the four-year on-orbit period of operations. A suite of geometric characterization (measurement and evaluation procedures) and calibration (procedures to derive improved estimates of instrument parameters) methods is employed by the Landsat-7 Image Assessment System to maintain the geometric calibration and to track specific aspects of geometric performance. These include geodetic accuracy, band-to-band registration accuracy, and image-to-image registration accuracy. These characterization and calibration activities maintain image product geometric accuracy at a high level by monitoring performance to determine when calibration is necessary, generating new calibration parameters, and verifying that new parameters achieve the desired improvements in accuracy. Landsat-7 continues to meet and exceed all geometric accuracy requirements, although aging components have begun to affect performance.

  19. The polyGeVero® software for fast and easy computation of 3D radiotherapy dosimetry data

    NASA Astrophysics Data System (ADS)

    Kozicki, Marek; Maras, Piotr

    2015-01-01

    The polyGeVero® software package was developed for calculations of 3D dosimetry data such as polymer gel dosimetry. It comprises four workspaces designed for: i) calculating calibrations, ii) storing calibrations in a database, iii) calculating dose distribution 3D cubes, and iv) comparing two datasets, e.g. one measured with 3D dosimetry and one calculated with a treatment planning system. To accomplish these calculations the software was equipped with a number of tools, such as a brachytherapy isotopes database, brachytherapy dose versus distance calculation based on the line approximation approach, automatic spatial alignment of two 3D dose cubes for comparison purposes, 3D gamma index, 3D gamma angle, 3D dose difference, Pearson's coefficient, histogram calculations, isodose superimposition for two datasets, and profile calculations in any desired direction. This communication briefly presents the main functions of the software and reports on the speed of calculations performed by polyGeVero®.
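
    As one example of the comparison tools listed, the gamma index combines a dose-difference criterion with a distance-to-agreement criterion; a simplified 1-D sketch is shown below (the software operates on full 3-D dose cubes, and the 3%/3 mm criteria used here are typical values, not necessarily its defaults).

      import numpy as np

      def gamma_1d(reference, evaluated, spacing_mm, dose_tol=0.03, dta_mm=3.0):
          x = np.arange(reference.size) * spacing_mm
          d_max = reference.max()
          gammas = np.empty(reference.size)
          for i, (xi, d_ref) in enumerate(zip(x, reference)):
              dose_term = (evaluated - d_ref) / (dose_tol * d_max)   # dose-difference criterion
              dist_term = (x - xi) / dta_mm                          # distance-to-agreement criterion
              gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
          return gammas

      ref = np.exp(-((np.arange(100) - 50) / 15.0) ** 2)             # synthetic measured profile
      eva = np.exp(-((np.arange(100) - 51) / 15.0) ** 2) * 1.02      # slightly shifted and scaled plan
      g = gamma_1d(ref, eva, spacing_mm=1.0)
      print(f"gamma pass rate: {100.0 * (g <= 1.0).mean():.1f}%")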

  20. 40 CFR 205.174 - Remedial orders.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... calibrated with the acoustic calibrator as often as is necessary throughout testing to maintain the accuracy... TRANSPORTATION EQUIPMENT NOISE EMISSION CONTROLS Motorcycle Exhaust Systems § 205.174 Remedial orders. The... Noise Emission Test Procedures Appendix I-1 to Subparts D and E—Test Procedure for Street and off-road...

  1. Hand-Eye Calibration of Robonaut

    NASA Technical Reports Server (NTRS)

    Nickels, Kevin; Huber, Eric

    2004-01-01

    NASA's Human Space Flight program depends heavily on Extra-Vehicular Activities (EVAs) performed by human astronauts. EVA is a high-risk environment that requires extensive training and ground support. In collaboration with the Defense Advanced Research Projects Agency (DARPA), NASA is conducting a ground development project to produce a robotic astronaut's assistant, called Robonaut, that could help reduce human EVA time and workload. The project described in this paper designed and implemented a hand-eye calibration scheme for Robonaut, Unit A. The intent of this calibration scheme is to improve hand-eye coordination of the robot. The basic approach is to use kinematic and stereo vision measurements, namely the joint angles self-reported by the right arm and 3-D positions of a calibration fixture as measured by vision, to estimate the transformation from Robonaut's base coordinate system to its hand coordinate system and to its vision coordinate system. Two methods of gathering data sets have been developed, along with software to support each. In the first, the system observes the robotic arm and neck angles as the robot is operated under external control, measures the 3-D position of a calibration fixture using Robonaut's stereo cameras, and logs these data. In the second, the system drives the arm and neck through a set of pre-recorded configurations, and data are again logged. Two variants of the calibration scheme have been developed. The full calibration scheme is a batch procedure that estimates all relevant kinematic parameters of the arm and neck of the robot. The daily calibration scheme estimates only joint offsets for each rotational joint on the arm and neck, which are assumed to change from day to day. The schemes have been designed to be automatic and easy to use so that the robot can be fully recalibrated when needed, such as after repair, upgrade, etc., and can be partially recalibrated after each power cycle. The scheme has been implemented on Robonaut Unit A and has been shown to reduce the mismatch between kinematically derived positions and visually derived positions from a mean of 13.75 cm using the previous calibration to means of 1.85 cm using a full calibration and 2.02 cm using a suboptimal but faster daily calibration. This improved calibration has already enabled the robot to more accurately reach for and grasp objects that it sees within its workspace. The system has been used to support an autonomous wrench-grasping experiment and significantly improved the workspace positioning of the hand based on visually derived wrench position estimates.
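
    A common building block for this kind of estimation, shown only as an illustration and not claimed to be Robonaut's actual calibration code, is a least-squares rigid transform between corresponding 3-D points, for example fixture positions predicted from arm kinematics versus positions measured by stereo vision:

      import numpy as np

      def rigid_transform(P, Q):
          """Return R, t minimising ||R @ P_i + t - Q_i|| over corresponding rows of P and Q (Kabsch/SVD)."""
          cp, cq = P.mean(axis=0), Q.mean(axis=0)
          H = (P - cp).T @ (Q - cq)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:          # guard against a reflection solution
              Vt[-1] *= -1
              R = Vt.T @ U.T
          return R, cq - R @ cp

      # synthetic check: recover a known rotation and translation from noisy correspondences
      rng = np.random.default_rng(0)
      P = rng.uniform(-0.5, 0.5, (20, 3))                            # e.g. kinematically predicted points (m)
      R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
      Q = P @ R_true.T + np.array([0.1, 0.2, 0.0]) + 0.001 * rng.standard_normal(P.shape)
      R, t = rigid_transform(P, Q)
      print("mean residual (m):", np.linalg.norm(P @ R.T + t - Q, axis=1).mean())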

  2. Spectral method for the correction of the Cerenkov light effect in plastic scintillation detectors: A comparison study of calibration procedures and validation in Cerenkov light-dominated situations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guillot, Mathieu; Gingras, Luc; Archambault, Louis

    2011-04-15

    Purpose: The purposes of this work were: (1) To determine if a spectral method can accurately correct the Cerenkov light effect in plastic scintillation detectors (PSDs) for situations where the Cerenkov light is dominant over the scintillation light and (2) to develop a procedural guideline for accurately determining the calibration factors of PSDs. Methods: The authors demonstrate, by using the equations of the spectral method, that the condition for accurately correcting the effect of Cerenkov light is that the ratio of the two calibration factors must be equal to the ratio of the Cerenkov light measured within the two different spectral regions used for analysis. Based on this proof, the authors propose two new procedures to determine the calibration factors of PSDs, which were designed to respect this condition. A PSD that consists of a cylindrical polystyrene scintillating fiber (1.6 mm³) coupled to a plastic optical fiber was calibrated by using these new procedures and the two reference procedures described in the literature. To validate the extracted calibration factors, relative dose profiles and output factors for a 6 MV photon beam from a medical linac were measured with the PSD and an ionization chamber. Emphasis was placed on situations where the Cerenkov light is dominant over the scintillation light and on situations dissimilar to the calibration conditions. Results: The authors found that the accuracy of the spectral method depends on the procedure used to determine the calibration factors of the PSD and on the attenuation properties of the optical fiber used. The results from the relative dose profile measurements showed that the spectral method can correct the Cerenkov light effect with an accuracy level of 1%. The results obtained also indicate that PSDs measure output factors that are lower than those measured with ionization chambers for square field sizes larger than 25x25 cm², in general agreement with previously published Monte Carlo results. Conclusions: The authors conclude that the spectral method can be used to accurately correct the Cerenkov light effect in PSDs. The authors confirmed the importance of maximizing the difference of Cerenkov light production between calibration measurements. The authors also found that the attenuation of the optical fiber, which is assumed to be constant in the original formulation of the spectral method, may cause a variation of the calibration factors in some experimental setups.
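
    For context, the two-window formulation commonly used in the literature for the spectral (chromatic removal) method can be written as follows; the symbols are generic and not necessarily the authors' exact notation.

      \begin{align}
        M_1 &= k_1\,S + l_1\,C, \qquad M_2 = k_2\,S + l_2\,C,\\
        D   &= a\,M_1 + b\,M_2,
      \end{align}

    where $M_1$ and $M_2$ are the signals integrated over the two spectral windows, $S$ and $C$ are the scintillation and Cerenkov contributions, and $a$ and $b$ are the calibration factors obtained from two irradiations with different proportions of Cerenkov light. Removing $C$ exactly requires $a\,l_1 + b\,l_2 = 0$, so the ratio $a/b$ is fixed by the ratio of the Cerenkov light collected in the two windows, which is the condition referred to above.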

  3. Assuring Software Cost Estimates: Is it an Oxymoron?

    NASA Technical Reports Server (NTRS)

    Hihn, Jarius; Tregre, Grant

    2013-01-01

    The software industry repeatedly observes cost growth of well over 100% even after decades of cost estimation research and well-known best practices, so "What's the problem?" In this paper we provide an overview of the current state of software cost estimation best practice. We then explore whether applying some of the methods used in software assurance might improve the quality of software cost estimates. This paper especially focuses on issues associated with model calibration, estimate review, and the development and documentation of estimates as part of an integrated plan.

  4. Environmental Health Monitor: Advanced Development of Temperature Sensor Suite.

    DTIC Science & Technology

    1995-07-30

    systems was implemented using program code existing at Veritay. The software, written in Microsoft® QuickBASIC, facilitated program changes for...currently unforeseen reason re-calibration is needed, this can be readily accommodated by a straightforward change in the software program---without...unit. A linear relationship between these differences was obtained using curve fitting software. The ½-inch globe to 6-inch globe correlation was

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voorhees, D.R.; Rossmassler, R.L.; Zimmer, G.

    The tritium analytical system at TFTR is used to determine the purity of tritium bearing gas streams in order to provide inventory and accountability measurements. The system includes a quadrupole mass spectrometer (QMS) and beta scintillator originally configured at Monsanto Mound Research Laboratory. The system was commissioned and tested in 1992 and is used daily for analysis of calibration standards, incoming tritium shipments, gases evolved from uranium storage beds and effluent gases from the tokamak. The instruments are controlled by a personal computer with customized software written with a graphical programming system designed for data acquisition and control. A discussion of the instrumentation, control systems, system parameters, procedural methods, algorithms, and operational issues will be presented. Measurements of gas holding tanks and tritiated water waste streams using ion chamber instrumentation are discussed elsewhere. 7 refs., 3 figs.

  6. Forensic examination of ink by high-performance thin layer chromatography--the United States Secret Service Digital Ink Library.

    PubMed

    Neumann, Cedric; Ramotowski, Robert; Genessay, Thibault

    2011-05-13

    Forensic examinations of ink have been performed since the beginning of the 20th century. Since the 1960s, the International Ink Library, maintained by the United States Secret Service, has supported those analyses. Until 2009, the search and identification of inks were essentially performed manually. This paper describes the results of a project designed to improve ink samples' analytical and search processes. The project focused on the development of improved standardization procedures to ensure the best possible reproducibility between analyses run on different HPTLC plates. The successful implementation of this new calibration method enabled the development of mathematical algorithms and of a software package to complement the existing ink library. Copyright © 2010 Elsevier B.V. All rights reserved.

  7. 40 CFR 86.125-94 - Methane analyzer calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Methane analyzer calibration. 86.125... Complete Heavy-Duty Vehicles; Test Procedures § 86.125-94 Methane analyzer calibration. Prior to introduction into service and monthly thereafter, the methane analyzer shall be calibrated: (a) Follow the...

  8. 40 CFR 86.125-94 - Methane analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Methane analyzer calibration. 86.125... Complete Heavy-Duty Vehicles; Test Procedures § 86.125-94 Methane analyzer calibration. Prior to introduction into service and monthly thereafter, the methane analyzer shall be calibrated: (a) Follow the...

  9. 48 CFR 227.7203-11 - Contractor procedures and records.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Rights in Computer Software and Computer Software Documentation 227.7203-11 Contractor procedures and records. (a) The clause at 252.227-7014, Rights in Noncommercial Computer Software and Noncommercial Computer Software Documentation, requires a contractor, and its subcontractors or suppliers that will...

  10. 48 CFR 227.7203-11 - Contractor procedures and records.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Rights in Computer Software and Computer Software Documentation 227.7203-11 Contractor procedures and records. (a) The clause at 252.227-7014, Rights in Noncommercial Computer Software and Noncommercial Computer Software Documentation, requires a contractor, and its subcontractors or suppliers that will...

  11. 48 CFR 227.7203-11 - Contractor procedures and records.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Rights in Computer Software and Computer Software Documentation 227.7203-11 Contractor procedures and records. (a) The clause at 252.227-7014, Rights in Noncommercial Computer Software and Noncommercial Computer Software Documentation, requires a contractor, and its subcontractors or suppliers that will...

  12. 48 CFR 227.7203-11 - Contractor procedures and records.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Rights in Computer Software and Computer Software Documentation 227.7203-11 Contractor procedures and records. (a) The clause at 252.227-7014, Rights in Noncommercial Computer Software and Noncommercial Computer Software Documentation, requires a contractor, and its subcontractors or suppliers that will...

  13. 48 CFR 227.7203-11 - Contractor procedures and records.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Rights in Computer Software and Computer Software Documentation 227.7203-11 Contractor procedures and records. (a) The clause at 252.227-7014, Rights in Noncommercial Computer Software and Noncommercial Computer Software Documentation, requires a contractor, and its subcontractors or suppliers that will...

  14. Quality assurance software inspections at NASA Ames: Metrics for feedback and modification

    NASA Technical Reports Server (NTRS)

    Wenneson, G.

    1985-01-01

    Software inspections, a set of formal technical review procedures held at selected key points during software development in order to find defects in software documents, are described in terms of history, participants, tools, procedures, statistics, and database analysis.

  15. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR OPERATION, CALIBRATION, AND MAINTENANCE OF THE THERMO JARRELL ASH ICAP 61-975 PLASMA ATOMCOMP EMISSION SPECTROMETER (BCO-L-8.0)

    EPA Science Inventory

    The purpose of this SOP is to detail the procedures for the start-up, operation, calibration, shut-down, and maintenance of the Thermo Jarrell Ash ICAP 61-975 Plasma AtomComp Emission Spectrometer. These procedures were used in determining the trace target metals Al, As, Ba, Cd,...

  16. 40 CFR 91.316 - Hydrocarbon analyzer calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... operating adjustment using the appropriate fuel (see § 91.312) and purified synthetic air or zero-grade nitrogen. (2) One of the following procedures is required for FID or HFID optimization: (i) The procedure...) Initial and periodic calibration. Prior to introduction into service and monthly thereafter, or within one...

  17. 16 CFR 1209.8 - Procedure for calibration of radiation instrumentation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... SAFETY ACT REGULATIONS INTERIM SAFETY STANDARD FOR CELLULOSE INSULATION The Standard § 1209.8 Procedure... radiation pyrometer. Repeat for each temperature. (b) Total heat flux meter. The total flux meter shall be... meter. This latter calibration shall make use of the radiant panel tester as the heat source...

  18. 16 CFR 1209.8 - Procedure for calibration of radiation instrumentation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... SAFETY ACT REGULATIONS INTERIM SAFETY STANDARD FOR CELLULOSE INSULATION The Standard § 1209.8 Procedure... radiation pyrometer. Repeat for each temperature. (b) Total heat flux meter. The total flux meter shall be... meter. This latter calibration shall make use of the radiant panel tester as the heat source...

  19. 16 CFR 1209.8 - Procedure for calibration of radiation instrumentation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... SAFETY ACT REGULATIONS INTERIM SAFETY STANDARD FOR CELLULOSE INSULATION The Standard § 1209.8 Procedure... radiation pyrometer. Repeat for each temperature. (b) Total heat flux meter. The total flux meter shall be... meter. This latter calibration shall make use of the radiant panel tester as the heat source...

  20. 16 CFR 1209.8 - Procedure for calibration of radiation instrumentation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... SAFETY ACT REGULATIONS INTERIM SAFETY STANDARD FOR CELLULOSE INSULATION The Standard § 1209.8 Procedure... radiation pyrometer. Repeat for each temperature. (b) Total heat flux meter. The total flux meter shall be... meter. This latter calibration shall make use of the radiant panel tester as the heat source...

  1. Video-guided calibration of an augmented reality mobile C-arm.

    PubMed

    Chen, Xin; Naik, Hemal; Wang, Lejing; Navab, Nassir; Fallavollita, Pascal

    2014-11-01

    The augmented reality (AR) fluoroscope augments an X-ray image by video and provides the surgeon with a real-time in situ overlay of the anatomy. The overlay alignment is crucial for diagnostic and intra-operative guidance, so precise calibration of the AR fluoroscope is required. The first and most complex step of the calibration procedure is the determination of the X-ray source position. Currently, this is achieved using a biplane phantom with movable metallic rings on its top layer and fixed X-ray opaque markers on its bottom layer. The metallic rings must be moved to positions where at least two pairs of rings and markers are isocentric in the X-ray image. The current "trial and error" calibration process requires acquisition of many X-ray images, a task that is both time consuming and radiation intensive. An improved process was developed and tested for C-arm calibration. Video guidance was used to drive the calibration procedure to minimize both X-ray exposure and the time involved. For this, a homography between X-ray and video images is estimated. This homography is valid for the plane at which the metallic rings are positioned and is employed to guide the calibration procedure. Eight users with varying calibration experience (i.e., 2 experts, 2 semi-experts, 4 novices) were asked to participate in the evaluation. The video-guided technique reduced the number of intra-operative X-ray calibration images by 89% and decreased the total time required by 59%. A video-based C-arm calibration method has been developed that improves the usability of the AR fluoroscope with a friendlier interface, reduced calibration time and clinically acceptable radiation doses.

  2. Design and realization of photoelectric instrument binocular optical axis parallelism calibration system

    NASA Astrophysics Data System (ADS)

    Ying, Jia-ju; Chen, Yu-dan; Liu, Jie; Wu, Dong-sheng; Lu, Jun

    2016-10-01

    Maladjustment of the binocular optical axis parallelism of a photoelectric instrument directly degrades observation quality. A digital calibration system for binocular optical axis parallelism is designed. On the basis of the calibration principle for the optical axes of binocular photoelectric instruments, the system scheme is designed and the digital calibration system is realized, comprising four modules: a multiband parallel light tube, optical axis translation, an image acquisition system, and a software system. According to the different characteristics of thermal infrared imagers and low-light-level night viewers, different algorithms are used to localize the center of the cross reticle, and binocular optical axis parallelism calibration is realized for both low-light-level night viewers and thermal infrared imagers.

  3. APPLICATION OF SOFTWARE QUALITY ASSURANCE CONCEPTS AND PROCEDURES TO ENVIRONMENTAL RESEARCH INVOLVING SOFTWARE DEVELOPMENT

    EPA Science Inventory

    As EPA’s environmental research expands into new areas that involve the development of software, quality assurance concepts and procedures that were originally developed for environmental data collection may not be appropriate. Fortunately, software quality assurance is a ...

  4. A method for estimation of bias and variability of continuous gas monitor data: application to carbon monoxide monitor accuracy.

    PubMed

    Shulman, Stanley A; Smith, Jerome P

    2002-01-01

    A method is presented for the evaluation of the bias, variability, and accuracy of gas monitors. This method is based on using the parameters for the fitted response curves of the monitors. Thereby, variability between calibrations, between dates within each calibration period, and between different units can be evaluated at several different standard concentrations. By combining variability information with bias information, accuracy can be assessed. An example using carbon monoxide monitor data is provided. Although the most general statistical software required for these tasks is not available on a spreadsheet, when the same number of dates in a calibration period are evaluated for each monitor unit, the calculations can be done on a spreadsheet. An example of such calculations, together with the formulas needed for their implementation, is provided. In addition, the methods can be extended by use of appropriate statistical models and software to evaluate monitor trends within calibration periods, as well as consider the effects of other variables, such as humidity and temperature, on monitor variability and bias.
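
    A hedged sketch of the general idea follows (it is not the authors' spreadsheet formulas): at a single standard concentration, readings from several monitor units, calibration periods, and dates are summarized into bias and variability estimates, which are then combined into an accuracy figure. The data layout and the accuracy definition (|bias| + 2 SD) are illustrative assumptions.

        # Hedged sketch (not the authors' formulas): combine bias and variability of
        # monitor readings at one standard concentration into an accuracy estimate.
        import numpy as np

        standard_ppm = 50.0
        # readings[unit, calibration_period, date] at the 50 ppm standard (synthetic data).
        rng = np.random.default_rng(0)
        readings = standard_ppm + rng.normal(loc=1.0, scale=2.0, size=(3, 2, 5))

        bias = readings.mean() - standard_ppm                    # overall bias
        between_units = readings.mean(axis=(1, 2)).std(ddof=1)   # unit-to-unit spread
        between_cals = readings.mean(axis=2).std(ddof=1)         # calibration-to-calibration spread
        within = readings.std(ddof=1)                            # total spread of single readings

        accuracy = abs(bias) + 2.0 * within                      # illustrative accuracy definition
        print(f"bias={bias:.2f} ppm  unit SD={between_units:.2f}  "
              f"calibration SD={between_cals:.2f}  accuracy~{accuracy:.2f} ppm")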

  5. The Sixth SeaWiFS/SIMBIOS Intercalibration Round-Robin Experiment (SIRREX-6)

    NASA Technical Reports Server (NTRS)

    Riley, Thomas; Bailey, Sean

    1998-01-01

    For the sixth Sea-Viewing Wide Field-of-View Sensor (SeaWiFS) Intercalibration Round-Robin Experiment (SIRREX-6), NASA personnel carried the same four Satlantic in-water radiometers to nine separate laboratories and calibrated them. Two of the sensors were seven-channel radiance heads and two were seven-channel irradiance heads. The calibration and data reduction procedures used at each site followed that laboratory's normal procedures. The reference lamps normally used for the calibration of these types of instruments by the various laboratories were also used for this experiment. NASA personnel processed the data to produce calibration parameters from the various laboratories

  6. Parameter estimation procedure for complex non-linear systems: calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch.

    PubMed

    Abusam, A; Keesman, K J; van Straten, G; Spanjers, H; Meinema, K

    2001-01-01

    When applied to large simulation models, the process of parameter estimation is also called calibration. Calibration of complex non-linear systems, such as activated sludge plants, is often not an easy task. On the one hand, manual calibration of such complex systems is usually time-consuming, and its results are often not reproducible. On the other hand, conventional automatic calibration methods are not always straightforward and are often hampered by local minima problems. In this paper a new straightforward and automatic procedure, which is based on the response surface method (RSM) for selecting the best identifiable parameters, is proposed. In RSM, the process response (output) is related to the levels of the input variables in terms of a first- or second-order regression model. Usually, RSM is used to relate measured process output quantities to process conditions. However, in this paper RSM is used for selecting the dominant parameters, by evaluating parameter sensitivity in a predefined region. The good results obtained in calibrating ASM No. 1 for N-removal in a full-scale oxidation ditch proved that the proposed procedure is successful and reliable.
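
    The screening idea can be sketched as follows (this is not the authors' code): run the model over a small factorial design of parameter perturbations, fit a first-order response surface, and rank parameters by the magnitude of their regression coefficients. The run_model() function and the parameter set are hypothetical stand-ins for the ASM No. 1 simulator.

        # RSM-style parameter screening sketch: fit a first-order regression of a model
        # output against coded parameter perturbations and rank parameters by the size
        # of their coefficients. run_model() is a placeholder, not an ASM simulator.
        import numpy as np

        param_names = ["mu_H", "K_S", "b_H", "mu_A"]          # illustrative ASM-type parameters
        nominal = np.array([6.0, 20.0, 0.62, 0.8])

        def run_model(p):
            # Placeholder response; in practice this would be a full plant simulation.
            return 10.0 + 1.5 * p[0] - 0.02 * p[1] + 4.0 * p[2] + 0.1 * p[3]

        # Two-level factorial design in coded units (-1/+1), with 10% perturbations.
        levels = np.array(np.meshgrid(*[[-1.0, 1.0]] * 4)).T.reshape(-1, 4)
        X = np.column_stack([np.ones(len(levels)), levels])
        y = np.array([run_model(nominal * (1.0 + 0.1 * row)) for row in levels])

        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sensitivity = dict(zip(param_names, np.abs(coef[1:])))
        print(sorted(sensitivity.items(), key=lambda kv: -kv[1]))   # most identifiable first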

  7. A New Approach to the Internal Calibration of Reverberation-Mapping Spectra

    NASA Astrophysics Data System (ADS)

    Fausnaugh, M. M.

    2017-02-01

    We present a new procedure for the internal (night-to-night) calibration of time-series spectra, with specific applications to optical AGN reverberation mapping data. The traditional calibration technique assumes that the narrow [O iii] λ5007 emission-line profile is constant in time; given a reference [O iii] λ5007 line profile, nightly spectra are aligned by fitting for a wavelength shift, a flux rescaling factor, and a change in the spectroscopic resolution. We propose the following modifications to this procedure: (1) we stipulate a constant spectral resolution for the final calibrated spectra, (2) we employ a more flexible model for changes in the spectral resolution, and (3) we use a Bayesian modeling framework to assess uncertainties in the calibration. In a test case using data for MCG+08-11-011, these modifications result in a calibration precision of ~1 millimagnitude, which is approximately a factor of five improvement over the traditional technique. At this level, other systematic issues (e.g., the nightly sensitivity functions and Fe II contamination) limit the final precision of the observed light curves. We implement this procedure as a python package (mapspec), which we make available to the community.
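
    For orientation, the traditional alignment step that the paper builds on can be sketched as a least-squares fit of a wavelength shift and a flux rescaling of a nightly spectrum against a reference [O iii] profile. This is not the mapspec implementation; the resolution/smoothing term and the Bayesian error model are omitted for brevity, and the spectra below are synthetic.

        # Sketch of the traditional night-to-night alignment step: fit a wavelength
        # shift and flux rescaling against a reference [O III] profile. Not the mapspec
        # code; the resolution change and Bayesian uncertainty model are omitted.
        import numpy as np
        from scipy.optimize import minimize

        wave = np.linspace(4950.0, 5060.0, 400)                      # wavelength grid (Angstroms)
        ref = np.exp(-0.5 * ((wave - 5007.0) / 3.0) ** 2)            # reference [O III] profile
        night = 0.8 * np.exp(-0.5 * ((wave - 5007.4) / 3.0) ** 2)    # nightly spectrum, shifted and scaled

        def cost(params):
            shift, scale = params
            model = scale * np.interp(wave, wave + shift, night)     # shift and rescale the nightly spectrum
            return np.sum((model - ref) ** 2)

        result = minimize(cost, x0=[0.0, 1.0], method="Nelder-Mead")
        print("best-fit shift (A) and flux rescaling:", result.x)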

  8. Velocity precision measurements using laser Doppler anemometry

    NASA Astrophysics Data System (ADS)

    Dopheide, D.; Taux, G.; Narjes, L.

    1985-07-01

    A Laser Doppler Anemometer (LDA) was calibrated to determine its applicability to high-pressure measurements (up to 10 bars) for industrial purposes. The measurement procedure with the LDA and the computerized experimental layouts are presented. The calibration procedure is based on the absolute accuracy of the Doppler frequency and on calibration of the interference fringe spacing. A four-quadrant detector allows comparison of the fringe-spacing measurements with computed profiles. Further development of the LDA is recommended to increase accuracy (to an inaccuracy of 0.1%) and to apply the method industrially.

  9. The Strategic Organization of Skill

    NASA Technical Reports Server (NTRS)

    Roberts, Ralph

    1996-01-01

    Eye-movement software was developed in addition to several studies that focused on expert-novice differences in the acquisition and organization of skill. These studies focused on how increasingly complex strategies utilize and incorporate visual look-ahead to calibrate action. Software for collecting, calibrating, and scoring eye-movements was refined and updated. Some new algorithms were developed for analyzing corneal-reflection eye movement data that detect the location of saccadic eye movements in space and time. Two full-scale studies were carried out which examined how experts use foveal and peripheral vision to acquire information about upcoming environmental circumstances in order to plan future action(s) accordingly.
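
    A generic velocity-threshold (I-VT-style) saccade detector conveys the flavor of such algorithms; the sketch below is not the software developed in the study, and the sampling rate, threshold, and synthetic gaze trace are assumptions.

        # Illustrative velocity-threshold saccade detector for calibrated gaze data
        # (a generic sketch, not the study's algorithms).
        import numpy as np

        def detect_saccades(x_deg, y_deg, fs=250.0, vel_thresh=30.0):
            # Return (start, end) sample indices of segments whose angular
            # velocity exceeds vel_thresh (deg/s).
            vx = np.gradient(x_deg) * fs
            vy = np.gradient(y_deg) * fs
            fast = np.hypot(vx, vy) > vel_thresh
            edges = np.flatnonzero(np.diff(fast.astype(int)))
            return list(zip(edges[::2] + 1, edges[1::2] + 1))

        # Synthetic calibrated gaze trace: fixation, a rapid 5-degree shift, fixation.
        rng = np.random.default_rng(3)
        t = np.arange(0, 1.0, 1 / 250.0)
        x = np.where(t < 0.5, 0.0, 5.0) + 0.02 * rng.standard_normal(t.size)
        y = np.zeros_like(x)
        print(detect_saccades(x, y))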

  10. 40 CFR 86.124-78 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Carbon dioxide analyzer calibration... Complete Heavy-Duty Vehicles; Test Procedures § 86.124-78 Carbon dioxide analyzer calibration. Prior to its introduction into service and monthly thereafter the NDIR carbon dioxide analyzer shall be calibrated: (a...

  11. 40 CFR 86.124-78 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 19 2014-07-01 2014-07-01 false Carbon dioxide analyzer calibration... Complete Heavy-Duty Vehicles; Test Procedures § 86.124-78 Carbon dioxide analyzer calibration. Prior to its introduction into service and monthly thereafter the NDIR carbon dioxide analyzer shall be calibrated: (a...

  12. 40 CFR 86.124-78 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Carbon dioxide analyzer calibration... Complete Heavy-Duty Vehicles; Test Procedures § 86.124-78 Carbon dioxide analyzer calibration. Prior to its introduction into service and monthly thereafter the NDIR carbon dioxide analyzer shall be calibrated: (a...

  13. 40 CFR 86.519-90 - Constant volume sampler calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Constant volume sampler calibration... Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.519-90 Constant volume sampler calibration. (a) The CVS (Constant Volume Sampler) is calibrated using an accurate flowmeter and restrictor...

  14. 40 CFR 86.519-90 - Constant volume sampler calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Constant volume sampler calibration... Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.519-90 Constant volume sampler calibration. (a) The CVS (Constant Volume Sampler) is calibrated using an accurate flowmeter and restrictor...

  15. 40 CFR 86.519-90 - Constant volume sampler calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Constant volume sampler calibration... Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.519-90 Constant volume sampler calibration. (a) The CVS (Constant Volume Sampler) is calibrated using an accurate flowmeter and restrictor...

  16. 40 CFR 86.519-90 - Constant volume sampler calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 19 2014-07-01 2014-07-01 false Constant volume sampler calibration... Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.519-90 Constant volume sampler calibration. (a) The CVS (Constant Volume Sampler) is calibrated using an accurate flowmeter and restrictor...

  17. 40 CFR 86.519-90 - Constant volume sampler calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Constant volume sampler calibration... Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.519-90 Constant volume sampler calibration. (a) The CVS (Constant Volume Sampler) is calibrated using an accurate flowmeter and restrictor...

  18. 40 CFR 86.516-90 - Calibrations, frequency and overview.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... be checked at a frequency consistent with observed column life or when the indicator of the column... Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.516-90 Calibrations, frequency and...) At least monthly or after any maintenance which could alter calibration, the following calibrations...

  19. 40 CFR 86.516-90 - Calibrations, frequency and overview.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... be checked at a frequency consistent with observed column life or when the indicator of the column... Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.516-90 Calibrations, frequency and...) At least monthly or after any maintenance which could alter calibration, the following calibrations...

  20. Software for simulation of a computed tomography imaging spectrometer using optical design software

    NASA Astrophysics Data System (ADS)

    Spuhler, Peter T.; Willer, Mark R.; Volin, Curtis E.; Descour, Michael R.; Dereniak, Eustace L.

    2000-11-01

    Our imaging spectrometer simulation software, known as Eikon, should improve and speed up the design of a Computed Tomography Imaging Spectrometer (CTIS). Eikon uses existing ray-tracing software to simulate a virtual instrument. Eikon enables designers to virtually run through the design, calibration, and data acquisition, saving significant cost and time when designing an instrument. We anticipate that Eikon simulations will improve future designs of CTIS by allowing engineers to explore more instrument options.

  1. A transition matrix approach to the Davenport gyro calibration scheme

    NASA Technical Reports Server (NTRS)

    Natanson, G. A.

    1998-01-01

    The in-flight gyro calibration scheme commonly used by NASA Goddard Space Flight Center (GSFC) attitude ground support teams closely follows an original version of the Davenport algorithm developed in the late seventies. Its basic idea is to minimize the least-squares differences between attitudes gyro-propagated over the course of a maneuver and those determined using post-maneuver sensor measurements. The paper represents the scheme in a recursive form by combining the necessary partials into a rectangular matrix, which is propagated in exactly the same way as a Kalman filter's square transition matrix. The nontrivial structure of the propagation matrix arises from the fact that attitude errors are not included in the state vector, and therefore their derivatives with respect to the estimated gyro parameters do not appear in the transition matrix defined in the conventional way. In cases when the required accuracy can be achieved by a single iteration, representation of the Davenport gyro calibration scheme in a recursive form allows one to discard each gyro measurement immediately after it has been used to propagate the attitude and state transition matrix. Another advantage of the new approach is that it utilizes the same expression for the error sensitivity matrix as that used by the Kalman filter. As a result, the suggested modification of the Davenport algorithm makes it possible to reuse software modules implemented in the Kalman filter estimator, where both attitude errors and gyro calibration parameters are included in the state vector. The new approach has been implemented in the ground calibration utilities used to support the Tropical Rainfall Measuring Mission (TRMM). The paper analyzes some preliminary results of gyro calibration performed by the TRMM ground attitude support team. It is demonstrated that the effect of a second iteration on the estimated values of the calibration parameters is negligibly small, and therefore there is no need to store processed gyro data. This opens a promising opportunity for onboard implementation of the suggested recursive procedure by combining it with the Kalman filter used to obtain the necessary attitude solutions at the beginning and end of each maneuver.
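
    The recursive bookkeeping emphasized above can be sketched as a running accumulation of least-squares normal equations, in which each measurement updates an information matrix and vector and can then be discarded. The partials model below is a placeholder, not the actual Davenport/TRMM attitude formulation.

        # Sketch of recursive accumulation: each measurement updates the running
        # normal equations and can then be discarded. The sensitivity model H_k is a
        # placeholder, not the Davenport gyro calibration partials.
        import numpy as np

        n_params = 3                      # e.g., three gyro bias components (illustrative)
        info = np.zeros((n_params, n_params))
        vec = np.zeros(n_params)

        rng = np.random.default_rng(1)
        true_bias = np.array([0.01, -0.02, 0.005])

        for k in range(1000):             # stream of gyro-derived residuals
            H_k = rng.normal(size=(1, n_params))          # sensitivity (partials) for this sample
            z_k = H_k @ true_bias + 1e-3 * rng.normal()   # residual measurement
            info += H_k.T @ H_k                           # accumulate normal equations ...
            vec += (H_k.T @ z_k).ravel()                  # ... then this sample can be discarded

        estimate = np.linalg.solve(info, vec)
        print("estimated bias:", estimate)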

  2. MOtoNMS: A MATLAB toolbox to process motion data for neuromusculoskeletal modeling and simulation.

    PubMed

    Mantoan, Alice; Pizzolato, Claudio; Sartori, Massimo; Sawacha, Zimi; Cobelli, Claudio; Reggiani, Monica

    2015-01-01

    Neuromusculoskeletal modeling and simulation enable investigation of the neuromusculoskeletal system and its role in human movement dynamics. These methods are progressively being introduced into daily clinical practice. However, a major factor limiting this translation is the lack of robust tools for the pre-processing of experimental movement data for use in neuromusculoskeletal modeling software. This paper presents MOtoNMS (matlab MOtion data elaboration TOolbox for NeuroMusculoSkeletal applications), a toolbox freely available to the community that aims to fill this gap. MOtoNMS processes experimental data from different motion analysis devices and generates input data for neuromusculoskeletal modeling and simulation software, such as OpenSim and CEINMS (Calibrated EMG-Informed NMS Modelling Toolbox). MOtoNMS implements commonly required processing steps, and its generic architecture simplifies the integration of new user-defined processing components. MOtoNMS allows users to set up their laboratory configurations and processing procedures through user-friendly graphical interfaces, without requiring advanced computer skills. Finally, configuration choices can be stored, enabling full reproduction of the processing steps. MOtoNMS is released under the GNU General Public License and is available at the SimTK website and from the GitHub repository. Motion data collected at four institutions demonstrate that, despite differences in laboratory instrumentation and procedures, MOtoNMS succeeds in processing the data and producing consistent inputs for OpenSim and CEINMS. MOtoNMS fills the gap between motion analysis and neuromusculoskeletal modeling and simulation. Its support for several devices, its complete implementation of the pre-processing procedures, its simple extensibility, the available user interfaces, and its free availability can boost the translation of neuromusculoskeletal methods into daily clinical practice.

  3. A new calibration code for the JET polarimeter.

    PubMed

    Gelfusa, M; Murari, A; Gaudio, P; Boboc, A; Brombin, M; Orsitto, F P; Giovannozzi, E

    2010-05-01

    An equivalent model of the JET polarimeter is presented, which overcomes the drawbacks of previous versions of the fitting procedures used to provide calibrated results. First of all, the signal processing electronics has been simulated to confirm that it is still working within the original specifications. Then the effective optical path of both the vertical and lateral chords has been implemented to produce the calibration curves. This principled approach to the model yields a single procedure that can be applied after any manual calibration and remains valid until the following one. The optical model of the chords is then applied to derive the plasma measurements. The results are in good agreement with the estimates of the most advanced full-wave propagation code available and have been benchmarked against other diagnostics. The devised procedure has proved to work properly also for the most recent campaigns and high-current experiments.

  4. A Review of LIDAR Radiometric Processing: From Ad Hoc Intensity Correction to Rigorous Radiometric Calibration

    PubMed Central

    Kashani, Alireza G.; Olsen, Michael J.; Parrish, Christopher E.; Wilson, Nicholas

    2015-01-01

    In addition to precise 3D coordinates, most light detection and ranging (LIDAR) systems also record “intensity”, loosely defined as the strength of the backscattered echo for each measured point. To date, LIDAR intensity data have proven beneficial in a wide range of applications because they are related to surface parameters, such as reflectance. While numerous procedures have been introduced in the scientific literature, and even commercial software, to enhance the utility of intensity data through a variety of “normalization”, “correction”, or “calibration” techniques, the current situation is complicated by a lack of standardization, as well as confusing, inconsistent use of terminology. In this paper, we first provide an overview of basic principles of LIDAR intensity measurements and applications utilizing intensity information from terrestrial, airborne topographic, and airborne bathymetric LIDAR. Next, we review the parameters that affect intensity measurements, the basic theory, and current intensity processing methods. We define terminology adopted from the most commonly-used conventions based on a review of current literature. Finally, we identify topics in need of further research. Ultimately, the presented information helps lay the foundation for future standards and specifications for LIDAR radiometric calibration. PMID:26561813
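
    One widely used ad hoc normalization that the review places in context is a range and incidence-angle correction relative to a reference range; a hedged sketch follows. The range exponent and reference range are assumptions, and rigorous radiometric calibration involves additional terms (transmitted power, atmospheric losses, and so on).

        # Hedged sketch of a simple range and incidence-angle intensity normalization.
        # The exponent and reference range are assumptions; full radiometric
        # calibration requires more terms than are shown here.
        import numpy as np

        def normalize_intensity(intensity, range_m, incidence_rad, ref_range_m=100.0, range_exp=2.0):
            corrected = intensity * (range_m / ref_range_m) ** range_exp   # undo range falloff
            corrected /= np.cos(incidence_rad)                             # undo cosine incidence falloff
            return corrected

        intensity = np.array([1200.0, 800.0, 950.0])
        range_m = np.array([80.0, 150.0, 120.0])
        incidence = np.deg2rad([10.0, 35.0, 20.0])
        print(normalize_intensity(intensity, range_m, incidence))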

  5. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    NASA Astrophysics Data System (ADS)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of those data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of test data sets are staged for the algorithm once and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, new data sets are continually run through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm is automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  6. Efficient multi-objective calibration of a computationally intensive hydrologic model with parallel computing software in Python

    USDA-ARS?s Scientific Manuscript database

    With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...

  7. Integrating model behavior, optimization, and sensitivity/uncertainty analysis: overview and application of the MOUSE software toolbox

    USDA-ARS?s Scientific Manuscript database

    This paper provides an overview of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) software application, an open-source, Java-based toolbox of visual and numerical analysis components for the evaluation of environmental models. MOUSE is based on the OPTAS model calibration syst...

  8. Improved infra-red procedure for the evaluation of calibrating units.

    DOT National Transportation Integrated Search

    2011-01-04

    Introduction. The NHTSA Model Specifications for Calibrating Units for Breath Alcohol Testers (FR 72 34742-34748) requires that calibration units submitted for inclusion on the NHTSA Conforming Products List for such devices be evaluated using ...

  9. Wind Tunnel Balance Calibration: Are 1,000,000 Data Points Enough?

    NASA Technical Reports Server (NTRS)

    Rhew, Ray D.; Parker, Peter A.

    2016-01-01

    Measurement systems are typically calibrated based on standard practices established by a metrology standards laboratory, for example the National Institute of Standards and Technology (NIST), or dictated by an organization's metrology manual. Therefore, the calibration is designed and executed according to an established procedure. However, for many aerodynamic research measurement systems a universally accepted, traceable standard approach does not exist. Therefore, the strategy for developing a calibration protocol is left to the developer or user to define based on experience and recommended practice in their respective industry. Wind tunnel balances are one such measurement system. Many different calibration systems, load schedules, and procedures have been developed for balances, with little consensus on a recommended approach. Especially lacking is guidance on the number of calibration data points needed. Regrettably, the number of data points tends to be correlated with the perceived quality of the calibration. Often, the number of data points is associated with one's ability to generate the data rather than with a defined need in support of measurement objectives. Hence the title of the paper was conceived to challenge recent observations in the wind tunnel balance community that show an ever-increasing desire for more data points per calibration, absent guidance to determine when there are enough. This paper presents fundamental concepts and theory to aid in the development of calibration procedures for wind tunnel balances and provides a framework that is generally applicable to the characterization and calibration of other measurement systems. Questions that need to be answered are, for example: What constitutes an adequate calibration? How much data are needed in the calibration? How good is the calibration? This paper will assist a practitioner in answering these questions by presenting an underlying theory on how to evaluate a calibration based on objective measures. This will enable the developer and user to design calibrations with quantified performance in terms of their capability to meet the user's objectives, and a basis for comparing existing calibrations that may have been developed in an ad hoc manner.

  10. Using LabVIEW to facilitate calibration and verification for respiratory impedance plethysmography.

    PubMed

    Ellis, W S; Jones, R T

    1991-12-01

    A system for calibrating the Respitrace impedance plethysmograph was developed with the capacity to quantitatively verify the accuracy of calibration. LabVIEW software was used on a Macintosh II computer to create a user-friendly environment, with the added benefit of reducing development time. The system developed enabled a research assistant to calibrate the Respitrace within 15 min while achieving an accuracy within the normally accepted 10% deviation when the Respitrace output is compared to a water spirometer standard. The system and methods described were successfully used in a study of 10 subjects smoking cigarettes containing marijuana or cocaine under four conditions, calibrating all subjects to 10% accuracy within 15 min.

  11. Calibration of the computer model describing flows in the water supply system; example of the application of a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Orłowska-Szostak, Maria; Orłowski, Ryszard

    2017-11-01

    The paper discusses some relevant aspects of the calibration of a computer model describing flows in a water supply system. The authors described an exemplary water supply system and used it as a practical illustration of calibration. A range of measures was discussed and applied to improve the convergence and effective use of the calculations in the calibration process, and thereby the validity of the calibration results. The measurement results were processed, i.e., pipe roughnesses were estimated, using a genetic algorithm implemented in software developed by the Resan Labs company from Brazil.
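
    A minimal genetic-algorithm sketch of the roughness-estimation idea is given below: candidate pipe-roughness vectors are evolved to minimize the misfit between simulated and measured pressures. The simulate_pressures() function is a hypothetical stand-in for the hydraulic solver and is not the Resan Labs software; all values are synthetic.

        # Minimal genetic-algorithm sketch for pipe-roughness calibration. The hydraulic
        # model is a placeholder; only the GA structure is illustrated.
        import numpy as np

        rng = np.random.default_rng(42)
        n_pipes, pop_size, n_gen = 5, 40, 60
        true_rough = np.array([0.5, 1.2, 0.8, 2.0, 0.3])        # mm, synthetic "truth"

        def simulate_pressures(roughness):
            # Placeholder model: pressures fall with roughness (purely illustrative).
            return 50.0 - 3.0 * roughness - 0.1 * roughness ** 2

        measured = simulate_pressures(true_rough)

        def misfit(candidate):
            return np.sum((simulate_pressures(candidate) - measured) ** 2)

        pop = rng.uniform(0.1, 3.0, size=(pop_size, n_pipes))
        for _ in range(n_gen):
            scores = np.array([misfit(ind) for ind in pop])
            parents = pop[np.argsort(scores)[: pop_size // 2]]      # keep the best half
            n_child = pop_size - len(parents)
            a = parents[rng.integers(0, len(parents), n_child)]
            b = parents[rng.integers(0, len(parents), n_child)]
            mask = rng.random((n_child, n_pipes)) < 0.5             # uniform crossover
            children = np.where(mask, a, b) + rng.normal(scale=0.05, size=(n_child, n_pipes))
            pop = np.vstack([parents, children])

        best = pop[np.argmin([misfit(ind) for ind in pop])]
        print("estimated roughness (mm):", np.round(best, 2))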

  12. Laser Calibration of an Impact Disdrometer

    NASA Technical Reports Server (NTRS)

    Lane, John E.; Kasparis, Takis; Metzger, Philip T.; Jones, W. Linwood

    2014-01-01

    A practical approach to developing an operational low-cost disdrometer hinges on implementing an effective in situ adaptive calibration strategy. This calibration strategy lowers the cost of the device and provides a method to guarantee continued automatic calibration. In previous work, a collocated tipping bucket rain gauge was utilized to provide a calibration signal to the disdrometer's digital signal processing software. Rainfall rate is proportional to the 11/3 moment of the drop size distribution (a 7/2 moment can also be assumed, depending on the choice of terminal velocity relationship). In the previous case, the disdrometer calibration was characterized and weighted to the 11/3 moment of the drop size distribution (DSD). Optical extinction by rainfall is proportional to the 2nd moment of the DSD. Using visible laser light as a means to focus and generate an auxiliary calibration signal, the adaptive calibration processing is significantly improved.
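
    The moment relationships quoted above can be made concrete with a short sketch: the rain-rate calibration signal scales with the 11/3 moment of the drop size distribution, while the optical-extinction signal scales with the 2nd moment. The bin edges, counts, and implicit proportionality constants are illustrative assumptions.

        # Sketch of the DSD moment relationships: rain rate ~ 11/3 moment,
        # optical extinction ~ 2nd moment. All numbers are illustrative.
        import numpy as np

        diam_mm = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])              # bin centers
        n_per_m3_mm = np.array([800.0, 400.0, 150.0, 60.0, 10.0, 2.0])  # N(D) in each bin
        bin_width_mm = 0.5

        def dsd_moment(order):
            return np.sum(n_per_m3_mm * diam_mm ** order) * bin_width_mm

        m_11_3 = dsd_moment(11.0 / 3.0)      # drives the rain-rate calibration signal
        m_2 = dsd_moment(2.0)                # drives the optical-extinction calibration signal
        print(f"11/3 moment: {m_11_3:.1f}   2nd moment: {m_2:.1f}")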

  13. [Fundamental aspects for accrediting medical equipment calibration laboratories in Colombia].

    PubMed

    Llamosa-Rincón, Luis E; López-Isaza, Giovanni A; Villarreal-Castro, Milton F

    2010-02-01

    Analysing the fundamental methodological aspects which should be considered when drawing up calibration procedures for electro-medical equipment, thereby permitting international standard-based accreditation of electro-medical metrology laboratories in Colombia. NTC-ISO-IEC 17025:2005 and GTC-51-based procedures for calibrating electro-medical equipment were implemented and then used as patterns. The mathematical model for determining the estimated uncertainty value when calibrating electro-medical equipment for accreditation by the Electrical Variable Metrology Laboratory's Electro-medical Equipment Calibration Area (accredited in compliance with Superintendence of Industry and Commerce Resolution 25771 of May 26, 2009) consists of two equations depending on the case; they are: E = (Ai + sigmaAi) - (Ar + sigmaAr + deltaAr1) and E = (Ai + sigmaAi) - (Ar + sigmaA + deltaAr1). The mathematical modelling implemented for measuring uncertainty in the Universidad Tecnológica de Pereira's Electrical Variable Metrology Laboratory (Electro-medical Equipment Calibration Area) will become a good guide for calibrations initiated in other laboratories in Colombia and Latin America.

  14. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR CALIBRATION AND OPERATION OF BALANCES (UA-L-1.2)

    EPA Science Inventory

    The purpose of this SOP is to describe the procedures used when calibrating and operating balances during the Arizona NHEXAS project and the "Border" study. Keywords: lab; equipment; balances.

    The National Human Exposure Assessment Survey (NHEXAS) is a federal interagency rese...

  15. Application of the Reference Method Isotope Dilution Gas Chromatography Mass Spectrometry (ID/GC/MS) to Establish Metrological Traceability for Calibration and Control of Blood Glucose Test Systems

    PubMed Central

    Andreis, Elisabeth; Küllmer, Kai

    2014-01-01

    Self-monitoring of blood glucose (BG) by means of handheld BG systems is a cornerstone in diabetes therapy. The aim of this article is to describe a procedure with proven traceability for calibration and evaluation of BG systems to guarantee reliable BG measurements. Isotope dilution gas chromatography mass spectrometry (ID/GC/MS) is a method that fulfills all requirements to be used in a higher-order reference measurement procedure. However, this method is not applicable for routine measurements because of the time-consuming sample preparation. A hexokinase method with perchloric acid (PCA) sample pretreatment is used in a measurement procedure for such purposes. This method is directly linked to the ID/GC/MS method by calibration with a glucose solution that has an ID/GC/MS-determined target value. BG systems are calibrated with whole blood samples. The glucose levels in such samples are analyzed by this ID/GC/MS-linked hexokinase method to establish traceability to higher-order reference material. For method comparison, the glucose concentrations in 577 whole blood samples were measured using the PCA-hexokinase method and the ID/GC/MS method; this resulted in a mean deviation of 0.1%. The mean deviation between BG levels measured in >500 valid whole blood samples with BG systems and the ID/GC/MS was 1.1%. BG systems allow a reliable glucose measurement if a true reference measurement procedure, with a noninterrupted traceability chain using ID/GC/MS linked hexokinase method for calibration of BG systems, is implemented. Systems should be calibrated by means of a traceable and defined measurement procedure to avoid bias. PMID:24876614

  16. Cross modality registration of video and magnetic tracker data for 3D appearance and structure modeling

    NASA Astrophysics Data System (ADS)

    Sargent, Dusty; Chen, Chao-I.; Wang, Yuan-Fang

    2010-02-01

    The paper reports a fully-automated, cross-modality sensor data registration scheme between video and magnetic tracker data. This registration scheme is intended for use in computerized imaging systems to model the appearance, structure, and dimension of human anatomy in three dimensions (3D) from endoscopic videos, particularly colonoscopic videos, for cancer research and clinical practices. The proposed cross-modality calibration procedure operates this way: Before a colonoscopic procedure, the surgeon inserts a magnetic tracker into the working channel of the endoscope or otherwise fixes the tracker's position on the scope. The surgeon then maneuvers the scope-tracker assembly to view a checkerboard calibration pattern from a few different viewpoints for a few seconds. The calibration procedure is then completed, and the relative pose (translation and rotation) between the reference frames of the magnetic tracker and the scope is determined. During the colonoscopic procedure, the readings from the magnetic tracker are used to automatically deduce the pose (both position and orientation) of the scope's reference frame over time, without complicated image analysis. Knowing the scope movement over time then allows us to infer the 3D appearance and structure of the organs and tissues in the scene. While there are other well-established mechanisms for inferring the movement of the camera (scope) from images, they are often sensitive to mistakes in image analysis, error accumulation, and structure deformation. The proposed method using a magnetic tracker to establish the camera motion parameters thus provides a robust and efficient alternative for 3D model construction. Furthermore, the calibration procedure does not require special training or expensive calibration equipment (except for a camera calibration pattern, i.e., a checkerboard pattern that can be printed on any laser or inkjet printer).

  17. PSA discriminator influence on (222)Rn efficiency detection in waters by liquid scintillation counting.

    PubMed

    Stojković, Ivana; Todorović, Nataša; Nikolov, Jovana; Tenjović, Branislava

    2016-06-01

    A procedure for the (222)Rn determination in aqueous samples using liquid scintillation counting (LSC) was evaluated and optimized. Measurements were performed with the ultra-low background spectrometer Quantulus 1220™ equipped with a PSA (Pulse Shape Analysis) circuit that discriminates alpha and beta spectra. Since the calibration procedure is carried out with a (226)Ra standard, which has both alpha- and beta-emitting progeny, the PSA discriminator is of vital importance for precise spectral separation. The calibration procedure was improved by investigating the influence of the PSA discriminator level and, consequently, of the (226)Ra calibration standard activity on the (222)Rn detection efficiency. Quench effects on the generated spectra, i.e., on the determination of the radon detection efficiency, were also investigated and a quench calibration curve was obtained. Radon determination in waters, based on the procedure modified according to the activity of the (226)Ra standard used and dependent on the PSA setup, was evaluated with prepared (226)Ra solution samples and drinking water samples, including an assessment of the variation in measurement uncertainty. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. An Automated Thermocouple Calibration System

    NASA Technical Reports Server (NTRS)

    Bethea, Mark D.; Rosenthal, Bruce N.

    1992-01-01

    An Automated Thermocouple Calibration System (ATCS) was developed for the unattended calibration of type K thermocouples. This system operates from room temperature to 650 C and has been used for calibration of thermocouples in an eight-zone furnace system which may employ as many as 60 thermocouples simultaneously. It is highly efficient, allowing for the calibration of large numbers of thermocouples in significantly less time than is required for manual calibrations. The system consists of a personal computer, a data acquisition/control unit, and a laboratory calibration furnace. The calibration furnace is a microprocessor-controlled multipurpose temperature calibrator with an accuracy of +/- 0.7 C. The accuracy of the calibration furnace is traceable to the National Institute of Standards and Technology (NIST). The computer software is menu-based to give the user flexibility and ease of use. The user needs no programming experience to operate the system. This system was specifically developed for use in the Microgravity Materials Science Laboratory (MMSL) at the NASA LeRC.
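
    The per-thermocouple correction such a system produces can be sketched as a low-order polynomial fit mapping indicated temperature to the reference temperature of the calibration furnace over the working range; the data points below are synthetic.

        # Sketch of a per-thermocouple correction fit: map indicated temperature to the
        # calibration furnace reference with a low-order polynomial. Synthetic data.
        import numpy as np

        reference_c = np.array([25.0, 100.0, 200.0, 300.0, 400.0, 500.0, 650.0])
        indicated_c = reference_c + np.array([0.4, 0.6, 0.9, 1.1, 1.5, 1.8, 2.3])   # raw type-K readings

        coeffs = np.polyfit(indicated_c, reference_c, deg=2)     # correction polynomial
        correct = np.poly1d(coeffs)

        print("corrected reading for an indicated 301.1 C:", round(correct(301.1), 2))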

  19. Mass separation of deuterium and helium with conventional quadrupole mass spectrometer by using varied ionization energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Yaowei; Hu, Jiansheng, E-mail: hujs@ipp.ac.cn; Wan, Zhao

    2016-03-15

    Deuterium pressure in a deuterium-helium gas mixture is successfully measured by a common quadrupole mass spectrometer (model: RGA200) with a resolution of ∼0.5 atomic mass unit (AMU), by using varied ionization energy together with newly developed software and a dedicated calibration for the RGA200. The new software is developed using MATLAB with the new functions: electron energy (EE) scanning, deuterium partial pressure measurement, and automatic data saving. The RGA200 with the new software is calibrated in pure deuterium and pure helium from 1.0 × 10⁻⁶ to 5.0 × 10⁻² Pa, and the relation between pressure and the ion current of AMU 4 under EE = 25 eV and EE = 70 eV is obtained. From the calibration result and RGA200 scanning with varied ionization energy in the deuterium-helium mixture gas, both the deuterium partial pressure (P_D2) and the helium partial pressure (P_He) can be obtained. The results show that the deuterium partial pressure can be measured if P_D2 > 10⁻⁶ Pa (limited by the ultimate pressure of the calibration vessel), that the helium pressure can be measured only if P_He/P_D2 > 0.45, and that the measurement error is evaluated as 15%. This method was successfully employed in the EAST 2015 summer campaign to monitor deuterium outgassing/desorption during helium discharge cleaning.

  20. Technical Note: Procedure for the calibration and validation of kilo-voltage cone-beam CT models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilches-Freixas, Gloria; Létang, Jean Michel; Rit,

    2016-09-15

    Purpose: The aim of this work is to propose a general and simple procedure for the calibration and validation of kilo-voltage cone-beam CT (kV CBCT) models against experimental data. Methods: The calibration and validation of the CT model is a two-step procedure: the source model then the detector model. The source is described by the direction dependent photon energy spectrum at each voltage while the detector is described by the pixel intensity value as a function of the direction and the energy of incident photons. The measurements for the source consist of a series of dose measurements in air performed at each voltage with varying filter thicknesses and materials in front of the x-ray tube. The measurements for the detector are acquisitions of projection images using the same filters and several tube voltages. The proposed procedure has been applied to calibrate and assess the accuracy of simple models of the source and the detector of three commercial kV CBCT units. If the CBCT system models had been calibrated differently, the current procedure would have been exclusively used to validate the models. Several high-purity attenuation filters of aluminum, copper, and silver combined with a dosimeter which is sensitive to the range of voltages of interest were used. A sensitivity analysis of the model has also been conducted for each parameter of the source and the detector models. Results: Average deviations between experimental and theoretical dose values are below 1.5% after calibration for the three x-ray sources. The predicted energy deposited in the detector agrees with experimental data within 4% for all imaging systems. Conclusions: The authors developed and applied an experimental procedure to calibrate and validate any model of the source and the detector of a CBCT unit. The present protocol has been successfully applied to three x-ray imaging systems. The minimum requirements in terms of material and equipment would make its implementation suitable in most clinical environments.

  1. Open source pipeline for ESPaDOnS reduction and analysis

    NASA Astrophysics Data System (ADS)

    Martioli, Eder; Teeple, Doug; Manset, Nadine; Devost, Daniel; Withington, Kanoa; Venne, Andre; Tannock, Megan

    2012-09-01

    OPERA is a Canada-France-Hawaii Telescope (CFHT) open source collaborative software project currently under development for an ESPaDOnS echelle spectro-polarimetric image reduction pipeline. OPERA is designed to be fully automated, performing calibrations and reduction, producing one-dimensional intensity and polarimetric spectra. The calibrations are performed on two-dimensional images. Spectra are extracted using an optimal extraction algorithm. While primarily designed for CFHT ESPaDOnS data, the pipeline is being written to be extensible to other echelle spectrographs. A primary design goal is to make use of fast, modern object-oriented technologies. Processing is controlled by a harness, which manages a set of processing modules, that make use of a collection of native OPERA software libraries and standard external software libraries. The harness and modules are completely parametrized by site configuration and instrument parameters. The software is open-ended, permitting users of OPERA to extend the pipeline capabilities. All these features have been designed to provide a portable infrastructure that facilitates collaborative development, code re-usability and extensibility. OPERA is free software with support for both GNU/Linux and MacOSX platforms. The pipeline is hosted on SourceForge under the name "opera-pipeline".

  2. Force Measurement Services at Kebs: AN Overview of Equipment, Procedures and Uncertainty

    NASA Astrophysics Data System (ADS)

    Bangi, J. O.; Maranga, S. M.; Nganga, S. P.; Mutuli, S. M.

    This paper describes the facilities, instrumentation, and procedures currently used in the force laboratory at the Kenya Bureau of Standards (KEBS) for force measurement services. The laboratory uses the Force Calibration Machine (FCM) to calibrate force-measuring instruments. The FCM derives its traceability via comparisons using reference transfer force transducers calibrated by the Force Standard Machines (FSM) of a National Metrology Institute (NMI). The force laboratory is accredited to ISO/IEC 17025 by the German Accreditation Body (DAkkS). The accredited measurement scope of the laboratory extends to 1 MN for the calibration of force transducers in both compression and tension modes. ISO 376 procedures are used when calibrating force transducers. The KEBS reference transfer standards have capacities of 10, 50, 300 and 1000 kN to cover the full range of the FCM. The uncertainties in the forces measured by the FCM were reviewed and determined in accordance with the new EURAMET calibration guide. The relative expanded uncertainty W of the force realized by the FCM was evaluated over the range from 10 kN to 1 MN and was found to be 5.0 × 10⁻⁴ with a coverage factor k equal to 2. The overall normalized error (En) of the comparison results was also found to be less than 1. The accredited Calibration and Measurement Capability (CMC) of the KEBS force laboratory was based on the results of those intercomparisons. The FCM enables KEBS to provide traceability for the calibration of class '1' force instruments as per ISO 376.

  3. 40 CFR 92.121 - Oxides of nitrogen analyzer calibration and check.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Oxides of nitrogen analyzer calibration... Procedures § 92.121 Oxides of nitrogen analyzer calibration and check. (a) Quench checks; NO X analyzer. (1... performed in step in paragraph (a)(3)(i) this section. (b) Oxides of nitrogen analyzer calibration. (1...

  4. Issues concerning international comparison of free-field calibrations of acoustical standards

    NASA Astrophysics Data System (ADS)

    Nedzelnitsky, Victor

    2002-11-01

    Primary free-field calibrations of laboratory standard microphones by the reciprocity method establish these microphones as reference standard devices for calibrating working standard microphones, other measuring microphones, and practical instruments such as sound level meters and personal sound exposure meters (noise dosimeters). These primary, secondary, and other calibrations are indispensable to the support of regulatory requirements, standards, and product characterization and quality control procedures important for industry, commerce, health, and safety. International Electrotechnical Commission (IEC) Technical Committee 29 Electroacoustics produces international documentary standards, including standards for primary and secondary free-field calibration and measurement procedures and their critically important application to practical instruments. This paper addresses some issues concerning calibrations, standards activities, and the international key comparison of primary free-field calibrations of IEC-type LS2 laboratory standard microphones that is being planned by the Consultative Committee for Acoustics, Ultrasound, and Vibration (CCAUV) of the International Committee for Weights and Measures (CIPM). This comparison will include free-field calibrations by the reciprocity method at participating major national metrology laboratories throughout the world.

  5. Statistical photocalibration of photodetectors for radiometry without calibrated light sources

    NASA Astrophysics Data System (ADS)

    Yielding, Nicholas J.; Cain, Stephen C.; Seal, Michael D.

    2018-01-01

    Calibration of CCD arrays for identifying bad pixels and achieving nonuniformity correction is commonly accomplished using dark frames. This kind of calibration technique does not achieve radiometric calibration of the array since only the relative response of the detectors is computed. For this, a second calibration is sometimes utilized by looking at sources with known radiances. This process can be used to calibrate photodetectors as long as a calibration source is available and is well-characterized. A previous attempt at creating a procedure for calibrating a photodetector using the underlying Poisson nature of the photodetection required calculations of the skewness of the photodetector measurements. Reliance on the third moment of measurement meant that thousands of samples would be required in some cases to compute that moment. A photocalibration procedure is defined that requires only first and second moments of the measurements. The technique is applied to image data containing a known light source so that the accuracy of the technique can be surmised. It is shown that the algorithm can achieve accuracy of nearly 2.7% of the predicted number of photons using only 100 frames of image data.
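
    The first- and second-moment idea can be sketched with the photon-transfer relation: for Poisson-limited photodetection, frame-to-frame variance is proportional to the mean, so a per-pixel gain (and hence a photon estimate) follows from a stack of frames. Read noise and offsets are ignored in this sketch, which is not the authors' exact estimator.

        # Sketch of calibration from first and second moments: for Poisson-limited
        # pixels, variance/mean gives the gain (DN per photon). Read noise and offsets
        # are ignored; synthetic data only.
        import numpy as np

        rng = np.random.default_rng(7)
        true_gain = 2.5                                   # DN per detected photon
        mean_photons = rng.uniform(50, 500, size=(64, 64))
        frames = true_gain * rng.poisson(mean_photons, size=(100, 64, 64))   # 100 frames

        pixel_mean = frames.mean(axis=0)
        pixel_var = frames.var(axis=0, ddof=1)

        gain_map = pixel_var / pixel_mean                 # per-pixel gain estimate
        photons_map = pixel_mean / gain_map               # radiometric estimate in photons
        print("median estimated gain:", round(float(np.median(gain_map)), 2))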

  6. Calibration of Viking imaging system pointing, image extraction, and optical navigation measure

    NASA Technical Reports Server (NTRS)

    Breckenridge, W. G.; Fowler, J. W.; Morgan, E. M.

    1977-01-01

    Pointing control and knowledge accuracy of Viking Orbiter science instruments is controlled by the scan platform. Calibration of the scan platform and the imaging system was accomplished through mathematical models. The calibration procedure and results obtained for the two Viking spacecraft are described. Included are both ground and in-flight scan platform calibrations, and the additional calibrations unique to optical navigation.

  7. Application of Calibrated Peer Review (CPR) Writing Assignments to Enhance Experiments with an Environmental Chemistry Focus

    ERIC Educational Resources Information Center

    Margerum, Lawrence D.; Gulsrud, Maren; Manlapez, Ronald; Rebong, Rachelle; Love, Austin

    2007-01-01

    The browser-based software program, Calibrated Peer Review (CPR) developed by the Molecular Science Project enables instructors to create structured writing assignments in which students learn by writing and reading for content. Though the CPR project covers only one experiment in general chemistry, it might provide lab instructors with a method…

  8. Calibration of Low Cost Digital Camera Using Data from Simultaneous LIDAR and Photogrammetric Surveys

    NASA Astrophysics Data System (ADS)

    Mitishita, E.; Debiasi, P.; Hainosz, F.; Centeno, J.

    2012-07-01

    Digital photogrammetric products from the integration of imagery and lidar datasets are a reality nowadays. When the imagery and lidar surveys are performed together and the camera is connected to the lidar system, direct georeferencing can be applied to compute the exterior orientation parameters of the images. Direct georeferencing of the images requires accurate interior orientation parameters for photogrammetric applications. Camera calibration is a procedure applied to compute the interior orientation parameters (IOPs). Calibration research has established that, to obtain accurate IOPs, the calibration must be performed under the same or equivalent conditions as the photogrammetric survey. This paper presents the methodology and experimental results from in situ self-calibration using a simultaneous image block and lidar dataset. The calibration results are analyzed and discussed. To perform this research, a test field was established in an urban area. A set of signalized points was placed on the test field for use as check points or control points. The photogrammetric images and the lidar dataset of the test field were acquired simultaneously. Four flight strips were used to obtain a cross layout. The strips were flown in opposite directions (W-E, E-W, N-S and S-N). The Kodak DSC Pro SLR/c digital camera was connected to the lidar system. The coordinates of the exposure stations were computed from the lidar trajectory. Different layouts of vertical control points were used in the calibration experiments. The experiments use vertical coordinates from a precise differential GPS survey or coordinates computed by an interpolation procedure using the lidar dataset. The positions of the exposure stations are used as control points in the calibration procedure to eliminate the linear dependency within the group of interior and exterior orientation parameters. This linear dependency arises in the calibration procedure when vertical images and a flat test field are used. The mathematical correlations between the interior and exterior orientation parameters are analyzed and discussed. The accuracies of the calibration experiments are also analyzed and discussed.

  9. The Very Large Array Data Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.; Masters, Joseph S.; Chandler, Claire J.; Davis, Lindsey E.; Kern, Jeffrey S.; Ott, Juergen; Schinzel, Frank K.; Medlin, Drew; Muders, Dirk; Williams, Stewart; Geers, Vincent C.; Momjian, Emmanuel; Butler, Bryan J.; Nakazato, Takeshi; Sugimoto, Kanako

    2018-01-01

    We present the VLA Pipeline, software that is part of the larger pipeline processing framework used for the Karl G. Jansky Very Large Array (VLA), and Atacama Large Millimeter/sub-millimeter Array (ALMA) for both interferometric and single dish observations. Through a collection of base code jointly used by the VLA and ALMA, the pipeline builds a hierarchy of classes to execute individual atomic pipeline tasks within the Common Astronomy Software Applications (CASA) package. Each pipeline task contains heuristics designed by the team to actively decide the best processing path and execution parameters for calibration and imaging. The pipeline code is developed and written in Python and uses a "context" structure for tracking the heuristic decisions and processing results. The pipeline "weblog" acts as the user interface in verifying the quality assurance of each calibration and imaging stage. The majority of VLA scheduling blocks above 1 GHz are now processed with the standard continuum recipe of the pipeline and offer a calibrated measurement set as a basic data product to observatory users. In addition, the pipeline is used for processing data from the VLA Sky Survey (VLASS), a seven year community-driven endeavor started in September 2017 to survey the entire sky down to a declination of -40 degrees at S-band (2-4 GHz). This 5500 hour next-generation large radio survey will explore the time and spectral domains, relying on pipeline processing to generate calibrated measurement sets, polarimetry, and imaging data products that are available to the astronomical community with no proprietary period. Here we present an overview of the pipeline design philosophy, heuristics, and calibration and imaging results produced by the pipeline. Future development will include the testing of spectral line recipes, low signal-to-noise heuristics, and serving as a testing platform for science-ready data products. The pipeline is developed as part of the CASA software package by an international consortium of scientists and software developers based at the National Radio Astronomy Observatory (NRAO), the European Southern Observatory (ESO), and the National Astronomical Observatory of Japan (NAOJ).

  10. Object oriented development of engineering software using CLIPS

    NASA Technical Reports Server (NTRS)

    Yoon, C. John

    1991-01-01

    Engineering applications involve numeric complexity and the manipulation of large amounts of data. Traditionally, numeric computation has been the main concern in developing engineering software. As engineering application software became larger and more complex, management of resources such as data, rather than numeric complexity, has become the major software design problem. Object-oriented design and implementation methodologies can improve the reliability, flexibility, and maintainability of the resulting software; however, some tasks are better solved with the traditional procedural paradigm. The C Language Integrated Production System (CLIPS), with its deffunction and defgeneric constructs, supports the procedural paradigm. The natural blending of the object-oriented and procedural paradigms has been cited as the reason for the popularity of the C++ language. The CLIPS Object Oriented Language's (COOL) object-oriented features are more versatile than those of C++. A software design methodology based on object-oriented and procedural approaches, appropriate for engineering software and to be implemented in CLIPS, was outlined. A method for sensor placement for Space Station Freedom is being implemented in COOL as a sample problem.

  11. U.K. National Radiological Protection Board Radon Calibration Procedures

    PubMed Central

    Cliff, K. D.

    1990-01-01

    A procedure for the calibration of instruments for the detection of 222Rn in air is described. The method is based on the alpha-spectrometric determination of the concentration in air of 218Po in the calibration chamber. The calibration chamber is described, together with the method of maintaining a high aerosol concentration. The 218Po concentration at steady state in the chamber is typically found to be 98% of the 222Rn concentration. An assessment of the sources of uncertainty in the method presented indicates that the 222Rn concentration in the chamber can be determined with an overall uncertainty of about 7% at the 95% confidence level. PMID:28179765

  12. L5 TM radiometric recalibration procedure using the internal calibration trends from the NLAPS trending database

    USGS Publications Warehouse

    Chander, G.; Haque, Md. O.; Micijevic, E.; Barsi, J.A.

    2008-01-01

    From the Landsat program's inception in 1972 to the present, the earth science user community has benefited from a historical record of remotely sensed data. The multispectral data from the Landsat 5 (L5) Thematic Mapper (TM) sensor provide the backbone for this extensive archive. Historically, the radiometric calibration procedure for this imagery used the instrument's response to the Internal Calibrator (IC) on a scene-by-scene basis to determine the gain and offset for each detector. The IC system degraded with time causing radiometric calibration errors up to 20 percent. In May 2003 the National Landsat Archive Production System (NLAPS) was updated to use a gain model rather than the scene acquisition specific IC gains to calibrate TM data processed in the United States. Further modification of the gain model was performed in 2007. L5 TM data that were processed using IC prior to the calibration update do not benefit from the recent calibration revisions. A procedure has been developed to give users the ability to recalibrate their existing Level-1 products. The best recalibration results are obtained if the work order report that was originally included in the standard data product delivery is available. However, many users may not have the original work order report. In such cases, the IC gain look-up table that was generated using the radiometric gain trends recorded in the NLAPS database can be used for recalibration. This paper discusses the procedure to recalibrate L5 TM data when the work order report originally used in processing is not available. A companion paper discusses the generation of the NLAPS IC gain and bias look-up tables required to perform the recalibration.

  13. Estimating multilevel logistic regression models when the number of clusters is low: a comparison of different statistical software procedures.

    PubMed

    Austin, Peter C

    2010-04-22

    Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.
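
    The data-generating process behind such a Monte Carlo comparison is a random-intercept logistic model. The sketch below simulates clustered binary outcomes from that model with deliberately few clusters; all parameter values are illustrative assumptions, not those used in the study.

      # Sketch of a random-intercept logistic data-generating process for clustered data.
      import numpy as np

      rng = np.random.default_rng(0)
      n_clusters, n_per_cluster = 5, 50          # deliberately few clusters
      beta0, beta1, sigma_u = -1.0, 0.5, 1.0     # fixed effects and random-intercept SD

      u = rng.normal(0.0, sigma_u, n_clusters)               # cluster-level random intercepts
      x = rng.normal(size=(n_clusters, n_per_cluster))       # subject-level covariate
      logit = beta0 + beta1 * x + u[:, None]                 # linear predictor
      y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))      # binary outcomes

      print(y.mean(), y.shape)   # overall event rate and (clusters, subjects) layout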

  14. Synthetic aperture imaging in ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Jayaranthe, Uditha L.; Chen, Elvis C. S.; Peters, Terry M.

    2014-03-01

    Ultrasound calibration allows for ultrasound images to be incorporated into a variety of interventional applications. Traditional Z-bar calibration procedures rely on wired phantoms with an a priori known geometry. The line fiducials produce small, localized echoes which are then segmented from an array of ultrasound images from different tracked probe positions. In conventional B-mode ultrasound, the wires at greater depths appear blurred and are difficult to segment accurately, limiting the accuracy of ultrasound calibration. This paper presents a novel ultrasound calibration procedure that takes advantage of synthetic aperture imaging to reconstruct high resolution ultrasound images at arbitrary depths. In these images, line fiducials are much more readily and accurately segmented, leading to decreased calibration error. The proposed calibration technique is compared to one based on B-mode ultrasound. The fiducial localization error was improved from 0.21 mm in conventional B-mode images to 0.15 mm in synthetic aperture images, corresponding to an improvement of 29%. This resulted in an overall reduction of calibration error from a target registration error of 2.00 mm to 1.78 mm, an improvement of 11%. Synthetic aperture images display greatly improved segmentation capabilities due to their improved resolution and interpretability, resulting in improved calibration.

  15. Design, installation, and performance evaluation of a custom dye matrix standard for automated capillary electrophoresis.

    PubMed

    Cloete, Kevin Wesley; Ristow, Peter Gustav; Kasu, Mohaimin; D'Amato, Maria Eugenia

    2017-03-01

    CE equipment detects and deconvolutes mixtures containing up to six fluorescently labeled DNA fragments. This deconvolution is done by the collection software that requires a spectral calibration file. The calibration file is used to adjust for the overlap that occurs between the emission spectra of fluorescence dyes. All commercial genotyping and sequencing kits require the installation of a corresponding matrix standard to generate a calibration file. Due to the differences in emission spectrum overlap between fluorescent dyes, the application of existing commercial matrix standards to the electrophoretic separation of DNA labeled with other fluorescent dyes can yield undesirable results. Currently, the number of fluorescent dyes available for oligonucleotide labeling surpasses the availability of commercial matrix standards. Therefore, in this study we developed and evaluated a customized matrix standard using ATTO 633, ATTO 565, ATTO 550, ATTO Rho6G, and 6-FAM dyes for which no commercial matrix standard is available. We highlighted the potential genotyping errors of using an incorrect matrix standard by evaluating the relative performance of our custom dye set using six matrix standards. The specific performance of two genotyping kits (UniQTyper™ Y-10 version 1.0 and PowerPlex® Y23 System) was also evaluated using their specific matrix standards. The procedure we followed for the construction of our custom dye matrix standard can be extended to other fluorescent dyes. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Radiometer Calibrations: Saving Time by Automating the Gathering and Analysis Procedures

    NASA Technical Reports Server (NTRS)

    Sadino, Jeffrey L.

    2005-01-01

    Mr. Abtahi custom-designs radiometers for Mr. Hook's research group. Inherently, when the radiometers report the temperature of arbitrary surfaces, the results are affected by errors in accuracy. This problem can be reduced if the errors can be accounted for in a polynomial. This is achieved by pointing the radiometer at a constant-temperature surface. We have been using a Hartford Scientific WaterBath. The measurements from the radiometer are collected at many different temperatures and compared to the measurements made by a Hartford Chubb thermometer with four-decimal-place resolution. The data are analyzed and fit to a fifth-order polynomial. This formula is then uploaded into the radiometer software, enabling accurate data gathering. Traditionally, Mr. Abtahi has done this by hand, spending several hours of his time setting the temperature, waiting for stabilization, taking measurements, and then repeating for other temperatures. My program, written in the Python language, has enabled the data gathering and analysis process to be handed off to a less-senior member of the team. Simply by entering several initial settings, the program will simultaneously control all three instruments and organize the data in a form suitable for computer analysis, thus giving the desired fifth-order polynomial. This will save time, allow for a more complete calibration data set, and allow for base calibrations to be developed. The program is expandable to simultaneously take any type of measurement from up to nine distinct instruments.
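
    A minimal version of the fitting step is sketched below: the radiometer readings are regressed against the reference bath temperatures with a fifth-order polynomial, and the polynomial is then used as the correction. The temperature arrays are placeholders, not measurements from the setup described above.

      # Sketch of the fifth-order polynomial correction; arrays are placeholder data.
      import numpy as np

      bath_temp = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0])  # reference thermometer (deg C)
      radiometer_temp = bath_temp + 0.02 * (bath_temp - 20.0) ** 2 / 20.0    # stand-in radiometer readings

      coeffs = np.polyfit(radiometer_temp, bath_temp, deg=5)   # correction polynomial
      corrected = np.polyval(coeffs, radiometer_temp)          # corrected readings
      print(np.max(np.abs(corrected - bath_temp)))             # residual after calibration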

  17. gPhoton: The GALEX Photon Data Archive

    NASA Astrophysics Data System (ADS)

    Million, Chase; Fleming, Scott W.; Shiao, Bernie; Seibert, Mark; Loyd, Parke; Tucker, Michael; Smith, Myron; Thompson, Randy; White, Richard L.

    2016-12-01

    gPhoton is a new database product and software package that enables analysis of GALEX ultraviolet data at the photon level. The project’s stand-alone, pure-Python calibration pipeline reproduces the functionality of the original mission pipeline to reduce raw spacecraft data to lists of time-tagged, sky-projected photons, which are then hosted in a publicly available database by the Mikulski Archive for Space Telescopes (MAST). This database contains approximately 130 terabytes of data describing approximately 1.1 trillion sky-projected events with a timestamp resolution of five milliseconds. A handful of Python and command-line modules serve as a front end to interact with the database and to generate calibrated light curves and images from the photon-level data at user-defined temporal and spatial scales. The gPhoton software and source code are in active development and publicly available under a permissive license. We describe the motivation, design, and implementation of the calibration pipeline, database, and tools, with emphasis on divergence from prior work, as well as challenges created by the large data volume. We summarize the astrometric and photometric performance of gPhoton relative to the original mission pipeline. For a brief example of short time-domain science capabilities enabled by gPhoton, we show new flares from the known M-dwarf flare star CR Draconis. The gPhoton software has permanent object identifiers with the ASCL (ascl:1603.004) and DOI (doi:10.17909/T9CC7G). This paper describes the software as of version v1.27.2.

  18. 40 CFR 1065.330 - Exhaust-flow calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Section 1065.330 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calibrations and Verifications Flow-Related Measurements § 1065.330... ultrasonic flow meter for raw exhaust flow measurement, we recommend that you calibrate it as described in...

  19. 40 CFR 1066.130 - Measurement instrument calibrations and verifications.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Measurement instrument calibrations... (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Equipment, Measurement Instruments, Fuel, and Analytical Gas Specifications § 1066.130 Measurement instrument calibrations and verifications. The...

  20. US Army Test and Evaluation Command, Test Operations Procedure, Gamma Ray Source Calibration

    DTIC Science & Technology

    1980-03-28

    SUPPLEMENTARY NOTES Safety, HTP 6-2-507 and AMCR-385-25. Related MTPs: 8-3-171, AD 871790, 26 May 70, Radiation Detection Equipment; 8-3-172, AD 728455, 1...perienced in radiac calibration equipment and for that reason the procedures are not step-by-step but are planned to be interpreted by the operator in...is to be used by personnel trained and experienced in the use of radiac calibration equipment and radiation principles. It cannot be interpreted

  1. Penn State University ground software support for X-ray missions.

    NASA Astrophysics Data System (ADS)

    Townsley, L. K.; Nousek, J. A.; Corbet, R. H. D.

    1995-03-01

    The X-ray group at Penn State is charged with two software development efforts in support of X-ray satellite missions. As part of the ACIS instrument team for AXAF, the authors are developing part of the ground software to support the instrument's calibration. They are also designing a translation program for Ginga data, to change it from the non-standard FRF format, which closely parallels the original telemetry format, to FITS.

  2. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR CALIBRATION AND OPERATION OF NHEXAS BALANCES (UA-L-1.2)

    EPA Science Inventory

    The purpose of this SOP is to describe the procedures used when calibrating and operating balances during the Arizona NHEXAS project and the Border study. Keywords: lab; equipment; balances.

    The U.S.-Mexico Border Program is sponsored by the Environmental Health Workgroup of t...

  3. A dose-response curve for biodosimetry from a 6 MV electron linear accelerator

    PubMed Central

    Lemos-Pinto, M.M.P.; Cadena, M.; Santos, N.; Fernandes, T.S.; Borges, E.; Amaral, A.

    2015-01-01

    Biological dosimetry (biodosimetry) is based on the investigation of radiation-induced biological effects (biomarkers), mainly dicentric chromosomes, in order to correlate them with radiation dose. To interpret the dicentric score in terms of absorbed dose, a calibration curve is needed. Each curve should be constructed with respect to basic physical parameters, such as the type of ionizing radiation characterized by low or high linear energy transfer (LET) and dose rate. This study was designed to obtain dose calibration curves by scoring of dicentric chromosomes in peripheral blood lymphocytes irradiated in vitro with a 6 MV electron linear accelerator (Mevatron M, Siemens, USA). Two software programs, CABAS (Chromosomal Aberration Calculation Software) and Dose Estimate, were used to generate the curve. The two software programs are discussed; the results obtained were compared with each other and with other published low LET radiation curves. Both software programs resulted in identical linear and quadratic terms for the curve presented here, which was in good agreement with published curves for similar radiation quality and dose rates. PMID:26445334
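
    The fitted model is the usual linear-quadratic dose response, Y = C + alpha*D + beta*D^2. The sketch below performs an ordinary least-squares fit of that form; the dicentric yields are fabricated placeholders, and dedicated tools such as CABAS and Dose Estimate additionally apply Poisson weighting and report coefficient uncertainties.

      # Sketch of a linear-quadratic fit to dicentric yields; data are placeholders.
      import numpy as np
      from scipy.optimize import curve_fit

      dose = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])             # Gy
      yields = np.array([0.001, 0.02, 0.06, 0.20, 0.42, 0.70])    # dicentrics per cell

      def model(d, c, alpha, beta):
          return c + alpha * d + beta * d ** 2

      (c, alpha, beta), _ = curve_fit(model, dose, yields)
      print(f"C = {c:.4f}, alpha = {alpha:.4f} /Gy, beta = {beta:.4f} /Gy^2")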

  4. Calibrating a tensor magnetic gradiometer using spin data

    USGS Publications Warehouse

    Bracken, Robert E.; Smith, David V.; Brown, Philip J.

    2005-01-01

    Scalar magnetic data are often acquired to discern characteristics of geologic source materials and buried objects. It is evident that a great deal can be done with scalar data, but there are significant advantages to direct measurement of the magnetic gradient tensor in applications with nearby sources, such as unexploded ordnance (UXO). To explore these advantages, we adapted a prototype tensor magnetic gradiometer system (TMGS) and successfully implemented a data-reduction procedure. One of several critical reduction issues is the precise determination of a large group of calibration coefficients for the sensors and sensor array. To resolve these coefficients, we devised a spin calibration method, after similar methods of calibrating space-based magnetometers (Snare, 2001). The spin calibration procedure consists of three parts: (1) collecting data by slowly revolving the sensor array in the Earth's magnetic field, (2) deriving a comprehensive set of coefficients from the spin data, and (3) applying the coefficients to the survey data. To show that the TMGS functions as a tensor gradiometer, we conducted an experimental survey that verified that the reduction procedure was effective (Bracken and Brown, in press). Therefore, because it was an integral part of the reduction, it can be concluded that the spin calibration was correctly formulated with acceptably small errors.

  5. A measurement technique to determine the calibration accuracy of an electromagnetic tracking system to radiation isocenter.

    PubMed

    Litzenberg, Dale W; Gallagher, Ian; Masi, Kathryn J; Lee, Choonik; Prisciandaro, Joann I; Hamstra, Daniel A; Ritter, Timothy; Lam, Kwok L

    2013-08-01

    To present and characterize a measurement technique to quantify the calibration accuracy of an electromagnetic tracking system to radiation isocenter. This technique was developed as a quality assurance method for electromagnetic tracking systems used in a multi-institutional clinical hypofractionated prostate study. In this technique, the electromagnetic tracking system is calibrated to isocenter with the manufacturer's recommended technique, using laser-based alignment. A test patient is created with a transponder at isocenter whose position is measured electromagnetically. Four portal images of the transponder are taken with collimator rotations of 45°, 135°, 225°, and 315°, at each of four gantry angles (0°, 90°, 180°, 270°) using a 3×6 cm² radiation field. In each image, the center of the copper-wrapped iron core of the transponder is determined. All measurements are made relative to this transponder position to remove gantry and imager sag effects. For each of the 16 images, the 50% collimation edges are identified and used to find a ray representing the rotational axis of each collimation edge. The 16 collimator rotation rays from four gantry angles pass through and bound the radiation isocenter volume. The center of the bounded region, relative to the transponder, is calculated and then transformed to tracking system coordinates using the transponder position, allowing the tracking system's calibration offset from radiation isocenter to be found. All image analysis and calculations are automated with in-house software for user-independent accuracy. Three different tracking systems at two different sites were evaluated for this study. The magnitude of the calibration offset was always less than the manufacturer's stated accuracy of 0.2 cm using their standard clinical calibration procedure, and ranged from 0.014 to 0.175 cm. On three systems in clinical use, the magnitude of the offset was found to be 0.053±0.036, 0.121±0.023, and 0.093±0.013 cm. The method presented here provides an independent technique to verify the calibration of an electromagnetic tracking system to radiation isocenter. The calibration accuracy of the system was better than the 0.2 cm accuracy stated by the manufacturer. However, it should not be assumed to be zero, especially for stereotactic radiation therapy treatments where planning target volume margins are very small.
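
    The geometric core of the analysis, locating the point best bounded by the 16 collimator-edge rays, can be posed as a least-squares problem: find the point minimizing the summed squared distance to a set of 3D lines. The sketch below solves that problem for two toy rays; the ray data are placeholders, and the authors' in-house software automates the full image analysis described above.

      # Sketch: least-squares point closest to a set of 3D rays (toy data).
      import numpy as np

      def closest_point_to_lines(points, directions):
          """Solve sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) a_i for p."""
          A = np.zeros((3, 3))
          b = np.zeros(3)
          for a, d in zip(points, directions):
              d = d / np.linalg.norm(d)
              P = np.eye(3) - np.outer(d, d)    # projector onto the plane normal to d
              A += P
              b += P @ a
          return np.linalg.solve(A, b)

      pts = np.array([[0.10, 0.00, -10.0], [-10.0, 0.05, 0.0]])   # points on two toy rays
      dirs = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])         # their directions
      print(closest_point_to_lines(pts, dirs))                    # lands near the origin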

  6. Technical note on the validation of a semi-automated image analysis software application for estrogen and progesterone receptor detection in breast cancer.

    PubMed

    Krecsák, László; Micsik, Tamás; Kiszler, Gábor; Krenács, Tibor; Szabó, Dániel; Jónás, Viktor; Császár, Gergely; Czuni, László; Gurzó, Péter; Ficsor, Levente; Molnár, Béla

    2011-01-18

    The immunohistochemical detection of estrogen (ER) and progesterone (PR) receptors in breast cancer is routinely used for prognostic and predictive testing. Whole slide digitalization supported by dedicated software tools allows quantization of the image objects (e.g. cell membrane, nuclei) and an unbiased analysis of immunostaining results. Validation studies of image analysis applications for the detection of ER and PR in breast cancer specimens provided strong concordance between the pathologist's manual assessment of slides and scoring performed using different software applications. The effectiveness of two connected semi-automated image analysis software tools (the NuclearQuant v. 1.13 application for Pannoramic™ Viewer v. 1.14) for determination of ER and PR status in formalin-fixed paraffin embedded breast cancer specimens immunostained with the automated Leica Bond Max system was studied. First, the detection algorithm was calibrated to the scores provided by an independent assessor (a pathologist), using selected areas from 38 small digital slides (created from 16 cases) containing a mean number of 195 cells. Each cell was manually marked and scored according to the Allred system, which combines frequency and intensity scores. The performance of the calibrated algorithm was tested on 16 cases (14 invasive ductal carcinoma, 2 invasive lobular carcinoma) against the pathologist's manual scoring of digital slides. Calibration improved the results from slight or moderate agreement with the uncalibrated algorithm at the start of the study to 87 percent object detection agreement and almost perfect Total Score agreement (Cohen's kappa 0.859, quadratic weighted kappa 0.986). The performance of the application was tested against the pathologist's manual scoring of digital slides on 53 regions of interest of 16 ER and PR slides covering all positivity ranges, and the quadratic weighted kappa provided almost perfect agreement (κ = 0.981) between the two scoring schemes. The NuclearQuant v. 1.13 application for Pannoramic™ Viewer v. 1.14 proved to be a reliable image analysis tool for pathologists testing ER and PR status in breast cancer. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
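
    The agreement statistics quoted above are standard and easy to reproduce. The sketch below computes unweighted and quadratic weighted Cohen's kappa with scikit-learn; the Allred-style total scores are fabricated placeholders, not study data.

      # Sketch of Cohen's kappa and quadratic weighted kappa on placeholder scores.
      from sklearn.metrics import cohen_kappa_score

      pathologist = [8, 7, 6, 8, 5, 3, 0, 2, 8, 7]   # manual Allred total scores
      software = [8, 7, 7, 8, 5, 4, 0, 2, 8, 6]      # scores from the image analysis tool

      print(cohen_kappa_score(pathologist, software))                        # unweighted
      print(cohen_kappa_score(pathologist, software, weights="quadratic"))   # quadratic weighted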

  7. Calibration and evaluation of a dispersant application system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shum, J.S.

    1987-05-01

    The report presents recommended methods for calibrating and operating boat-mounted dispersant application systems. Calibration of one commercially-available system and several unusual problems encountered in calibration are described. Charts and procedures for selecting pump rates and other operating parameters in order to achieve a desired dosage are provided. The calibration was performed at the EPA's Oil and Hazardous Materials Simulated Environmental Test Tank (OHMSETT) facility in Leonardo, New Jersey.

  8. Calibration procedure for Slocum glider deployed optical instruments.

    PubMed

    Cetinić, Ivona; Toro-Farmer, Gerardo; Ragan, Matthew; Oberg, Carl; Jones, Burton H

    2009-08-31

    Recent developments in the field of autonomous underwater vehicles allow the wide usage of these platforms as part of scientific experiments, monitoring campaigns and more. The vehicles are often equipped with sensors measuring temperature, conductivity, chlorophyll a fluorescence (Chl a), colored dissolved organic matter (CDOM) fluorescence, phycoerythrin (PE) fluorescence and the spectral volume scattering function at 117 degrees, providing users with high resolution, real time data. However, calibration of these instruments can be problematic. Most in situ calibrations are performed by deploying complementary instrument packages or water samplers in the proximity of the glider. Laboratory calibrations of the mounted sensors are difficult due to the placement of the instruments within the body of the vehicle. For the laboratory calibrations of the Slocum glider instruments we developed a small calibration chamber where we can perform precise calibrations of the optical instruments aboard our glider, as well as sensors from other deployment platforms. These procedures enable us to obtain pre- and post-deployment calibrations for optical fluorescence instruments, which may differ due to biofouling and other physical damage that can occur during long-term glider deployments. We found that biofouling caused significant changes in the calibration scaling factors of fluorescent sensors, suggesting the need for consistent and repetitive calibrations for gliders as proposed in this paper.

  9. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method.

    PubMed

    Tuta, Jure; Juric, Matjaz B

    2016-12-06

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments: some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive, thus maintenance free, and based on Wi-Fi only. We have employed two well-known propagation models, the free space path loss and ITU models, which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware besides Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi only self-adaptive approaches that do not require the mobile terminal to be in the access-point mode. The only input requirements of the method are Wi-Fi access point positions, and positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean error of 2-3 and 3-4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method that relies on simple hardware and software requirements.
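
    For a sense of how a propagation model is used in such methods, the sketch below inverts a generic log-distance path-loss model to estimate range from a received signal strength reading. The authors' extended free-space and ITU models include further parameters, so the function and the numbers here are purely illustrative assumptions.

      # Sketch: range from RSSI under a generic log-distance path-loss model.
      def distance_from_rssi(rssi_dbm, rssi_at_d0=-40.0, path_loss_exponent=3.0, d0=1.0):
          """rssi_at_d0 is the RSSI expected at the reference distance d0 (metres)."""
          return d0 * 10.0 ** ((rssi_at_d0 - rssi_dbm) / (10.0 * path_loss_exponent))

      print(distance_from_rssi(-67.0))   # roughly 8 m for these illustrative parameters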

  10. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method

    PubMed Central

    Tuta, Jure; Juric, Matjaz B.

    2016-01-01

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments—some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive thus maintenance free and based on Wi-Fi only. We have employed two well-known propagation models—free space path loss and ITU models—which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi only self-adaptive approaches that do not require the mobile terminal to be in the access-point mode. The only input requirements of the method are Wi-Fi access point positions, and positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean error of 2–3 and 3–4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method that relies on simple hardware and software requirements. PMID:27929453

  11. [Determination of radioactivity by smartphones].

    PubMed

    Hartmann, H; Freudenberg, R; Andreeff, M; Kotzerke, J

    2013-01-01

    Interest in the detection of radioactive materials increased strongly after the accident at the Fukushima nuclear power plant and has led to a shortage of suitable measuring instruments. Smartphones equipped with a commercially available software tool can be used for dose rate measurements after a calibration specific to the camera module. We examined whether such measurements provide reliable data for typical activities and radionuclides in nuclear medicine. For the nuclides 99mTc (10 - 1000 MBq), 131I (3.7 - 1800 MBq, therapy capsule) and 68Ga (50 - 600 MBq), radioactivity with defined geometry was measured at different distances. The smartphones Milestone Droid 1 (Motorola) and HTC Desire (HTC Corporation) were compared with the standard instruments AD6 (automess) and DoseGUARD (AEA Technology). Measurements with the smartphones and the other devices show good agreement: a linear signal increase with rising activity and dose rate. The long-term measurement (131I, 729 MBq, 0.5 m, 60 min) demonstrates a considerably higher variation (by 20%) of the measured smartphone values compared with the AD6. For low dose rates (< 1 µGy/h), the sensitivity decreases, so that measurements of, e.g., natural background radiation do not yield valid results. The calibration of the camera responsivity has a strong influence on the results because of the small detector area of the camera sensor. With commercial software, the camera module of a smartphone can be used for the measurement of radioactivity. Dose rates resulting from typical nuclear medicine procedures can be measured reliably (e.g., the discharge dose rate after radioiodine therapy).

  12. Calibrating LOFAR using the Black Board Selfcal System

    NASA Astrophysics Data System (ADS)

    Pandey, V. N.; van Zwieten, J. E.; de Bruyn, A. G.; Nijboer, R.

    2009-09-01

    The Black Board SelfCal (BBS) system is designed as the final processing system to carry out the calibration of LOFAR in an efficient way. In this paper we give a brief description of its architectural and software design, including its distributed computing approach. A confusion-limited deep all-sky image (38-62 MHz), obtained by calibrating LOFAR test data with the BBS suite, is shown as a sample result. The present status and future directions of development of the BBS suite are also touched upon. Although BBS is mainly developed for LOFAR, it may also be used to calibrate other instruments once their specific algorithms are plugged in.

  13. Participation in the Infrared Space Observatory (ISO) Mission

    NASA Technical Reports Server (NTRS)

    Joseph, Robert D.

    2002-01-01

    All the Infrared Space Observatory (ISO) data have been transmitted from the ISO Data Centre, reduced, and calibrated. This has been rather labor-intensive as new calibrations for both the ISOPHOT and ISOCAM data have been released and the algorithms for data reduction have improved. We actually discovered errors in the calibration in earlier versions of the software. However the data reduction improvements have now converged and we have a self-consistent, well-calibrated database. It has also been a major effort to obtain the ground-based JHK imaging, 450 micrometer and 850 micrometer imaging and the 1-2.5 micrometer near-infrared spectroscopy for most of the sample galaxies.

  14. Simbol-X Telescope Scientific Calibrations: Requirements and Plans

    NASA Astrophysics Data System (ADS)

    Malaguti, G.; Angelini, L.; Raimondi, L.; Moretti, A.; Trifoglio, M.

    2009-05-01

    The Simbol-X telescope characteristics and the mission scientific requirements impose a challenging calibration plan with a number of unprecedented issues. The 20 m focal length implies that, even in 100-m-long facilities, the incoming X-ray beam has a divergence comparable to the incidence angle on the mirror surface. Moreover, this is the first time that a direct-focussing X-ray telescope will be calibrated over an energy band covering about three decades, and with a complex focal plane. These problems require a careful plan and organization of the measurements, together with an evaluation of the calibration needs in terms of both hardware and software.

  15. Real-time calibration and alignment of the LHCb RICH detectors

    NASA Astrophysics Data System (ADS)

    HE, Jibo

    2017-12-01

    In 2015, the LHCb experiment established a new and unique software trigger strategy with the purpose of increasing the purity of the signal events by applying the same algorithms online and offline. To achieve this, real-time calibration and alignment of all LHCb sub-systems is needed to provide vertexing, tracking, and particle identification of the best possible quality. The calibration of the refractive index of the RICH radiators, the calibration of the Hybrid Photon Detector image, and the alignment of the RICH mirror system, are reported in this contribution. The stability of the RICH performance and the particle identification performance are also discussed.

  16. Electronic test and calibration circuits, a compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A wide variety of simple test and calibration circuits is compiled for the engineer and laboratory technician. The majority of the circuits were found to be inexpensive to assemble. Testing electronic devices and components, instrument and system test, calibration and reference circuits, and simple test procedures are presented.

  17. The algorithm for automatic detection of the calibration object

    NASA Astrophysics Data System (ADS)

    Artem, Kruglov; Irina, Ugfeld

    2017-06-01

    The problem of automatic image calibration is considered in this paper. The most challenging task in automatic calibration is proper detection of the calibration object. Solving this problem required applying digital image processing methods and algorithms such as morphology, filtering, edge detection, and shape approximation. The step-by-step development of the algorithm and its adaptation to the specific conditions of log cuts in the image background is presented. Testing of the automatic calibration module was carried out under the production conditions of a logging enterprise. In these tests the calibration object was isolated automatically in 86.1% of cases on average, with no type I errors. The algorithm was implemented in the automatic calibration module within mobile software for log deck volume measurement.

  18. Software quality assurance plan for GCS

    NASA Technical Reports Server (NTRS)

    Duncan, Stephen E.; Bailey, Elizabeth K.

    1990-01-01

    The software quality assurance (SQA) function for the Guidance and Control Software (GCS) project which is part of a software error studies research program is described. The SQA plan outlines all of the procedures, controls, and audits to be carried out by the SQA organization to ensure adherence to the policies, procedures, and standards for the GCS project.

  19. Evaluation on Radiometric Capability of Chinese Optical Satellite Sensors

    PubMed Central

    Yang, Aixia; Zhong, Bo; Wu, Shanlong; Liu, Qinhuo

    2017-01-01

    The radiometric capability of on-orbit sensors should be updated on time due to changes induced by space environmental factors and instrument aging. Some sensors, such as Moderate Resolution Imaging Spectroradiometer (MODIS), have onboard calibrators, which enable real-time calibration. However, most Chinese remote sensing satellite sensors lack onboard calibrators. Their radiometric calibrations have been updated once a year based on a vicarious calibration procedure, which has affected the applications of the data. Therefore, a full evaluation of the sensors’ radiometric capabilities is essential before quantitative applications can be made. In this study, a comprehensive procedure for evaluating the radiometric capability of several Chinese optical satellite sensors is proposed. In this procedure, long-term radiometric stability and radiometric accuracy are the two major indicators for radiometric evaluation. The radiometric temporal stability is analyzed by the tendency of long-term top-of-atmosphere (TOA) reflectance variation; the radiometric accuracy is determined by comparison with the TOA reflectance from MODIS after spectrally matching. Three Chinese sensors including the Charge-Coupled Device (CCD) camera onboard Huan Jing 1 satellite (HJ-1), as well as the Visible and Infrared Radiometer (VIRR) and Medium-Resolution Spectral Imager (MERSI) onboard the Feng Yun 3 satellite (FY-3) are evaluated in reflective bands based on this procedure. The results are reasonable, and thus can provide reliable reference for the sensors’ application, and as such will promote the development of Chinese satellite data. PMID:28117745

  20. Evaluation on Radiometric Capability of Chinese Optical Satellite Sensors.

    PubMed

    Yang, Aixia; Zhong, Bo; Wu, Shanlong; Liu, Qinhuo

    2017-01-22

    The radiometric capability of on-orbit sensors should be updated on time due to changes induced by space environmental factors and instrument aging. Some sensors, such as Moderate Resolution Imaging Spectroradiometer (MODIS), have onboard calibrators, which enable real-time calibration. However, most Chinese remote sensing satellite sensors lack onboard calibrators. Their radiometric calibrations have been updated once a year based on a vicarious calibration procedure, which has affected the applications of the data. Therefore, a full evaluation of the sensors' radiometric capabilities is essential before quantitative applications can be made. In this study, a comprehensive procedure for evaluating the radiometric capability of several Chinese optical satellite sensors is proposed. In this procedure, long-term radiometric stability and radiometric accuracy are the two major indicators for radiometric evaluation. The radiometric temporal stability is analyzed by the tendency of long-term top-of-atmosphere (TOA) reflectance variation; the radiometric accuracy is determined by comparison with the TOA reflectance from MODIS after spectrally matching. Three Chinese sensors including the Charge-Coupled Device (CCD) camera onboard Huan Jing 1 satellite (HJ-1), as well as the Visible and Infrared Radiometer (VIRR) and Medium-Resolution Spectral Imager (MERSI) onboard the Feng Yun 3 satellite (FY-3) are evaluated in reflective bands based on this procedure. The results are reasonable, and thus can provide reliable reference for the sensors' application, and as such will promote the development of Chinese satellite data.

  1. Polarimetric SAR calibration experiment using active radar calibrators

    NASA Astrophysics Data System (ADS)

    Freeman, Anthony; Shen, Yuhsyen; Werner, Charles L.

    1990-03-01

    Active radar calibrators are used to derive both the amplitude and phase characteristics of a multichannel polarimetric SAR from the complex image data. Results are presented from an experiment carried out using the NASA/JPL DC-8 aircraft SAR over a calibration site at Goldstone, California. As part of the experiment, polarimetric active radar calibrators (PARCs) with adjustable polarization signatures were deployed. Experimental results demonstrate that the PARCs can be used to calibrate polarimetric SAR images successfully. Restrictions on the application of the PARC calibration procedure are discussed.

  2. Polarimetric SAR calibration experiment using active radar calibrators

    NASA Technical Reports Server (NTRS)

    Freeman, Anthony; Shen, Yuhsyen; Werner, Charles L.

    1990-01-01

    Active radar calibrators are used to derive both the amplitude and phase characteristics of a multichannel polarimetric SAR from the complex image data. Results are presented from an experiment carried out using the NASA/JPL DC-8 aircraft SAR over a calibration site at Goldstone, California. As part of the experiment, polarimetric active radar calibrators (PARCs) with adjustable polarization signatures were deployed. Experimental results demonstrate that the PARCs can be used to calibrate polarimetric SAR images successfully. Restrictions on the application of the PARC calibration procedure are discussed.

  3. SEPAC software configuration control plan and procedures, revision 1

    NASA Technical Reports Server (NTRS)

    1981-01-01

    SEPAC Software Configuration Control Plan and Procedures are presented. The objective of the software configuration control is to establish the process for maintaining configuration control of the SEPAC software beginning with the baselining of SEPAC Flight Software Version 1 and encompass the integration and verification tests through Spacelab Level IV Integration. They are designed to provide a simplified but complete configuration control process. The intent is to require a minimum amount of paperwork but provide total traceability of SEPAC software.

  4. 48 CFR 208.7403 - Acquisition procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SYSTEM, DEPARTMENT OF DEFENSE ACQUISITION PLANNING REQUIRED SOURCES OF SUPPLIES AND SERVICES Enterprise Software Agreements 208.7403 Acquisition procedures. Follow the procedures at PGI 208.7403 when acquiring commercial software and related services. [71 FR 39005, July 11, 2006] ...

  5. Employing an Incentive Spirometer to Calibrate Tidal Volumes Estimated from a Smartphone Camera.

    PubMed

    Reyes, Bersain A; Reljin, Natasa; Kong, Youngsun; Nam, Yunyoung; Ha, Sangho; Chon, Ki H

    2016-03-18

    A smartphone-based tidal volume (V(T)) estimator was recently introduced by our research group, where an Android application provides a chest movement signal whose peak-to-peak amplitude is highly correlated with reference V(T) measured by a spirometer. We found a Normalized Root Mean Squared Error (NRMSE) of 14.998% ± 5.171% (mean ± SD) when the smartphone measures were calibrated using spirometer data. However, the availability of a spirometer device for calibration is not realistic outside clinical or research environments. In order to be used by the general population on a daily basis, a simple calibration procedure not relying on specialized devices is required. In this study, we propose taking advantage of the linear correlation between smartphone measurements and V(T) to obtain a calibration model using information computed while the subject breathes through a commercially-available incentive spirometer (IS). Experiments were performed on twelve (N = 12) healthy subjects. In addition to corroborating findings from our previous study using a spirometer for calibration, we found that the calibration procedure using an IS resulted in a fixed bias of -0.051 L and a RMSE of 0.189 ± 0.074 L corresponding to 18.559% ± 6.579% when normalized. Although it has a small underestimation and slightly increased error, the proposed calibration procedure using an IS has the advantages of being simple, fast, and affordable. This study supports the feasibility of developing a portable smartphone-based breathing status monitor that provides information about breathing depth, in addition to the more commonly estimated respiratory rate, on a daily basis.
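
    The calibration itself reduces to a simple linear regression between the chest-signal amplitude and the volumes breathed through the incentive spirometer. The sketch below fits and applies that model; the amplitudes and volumes are placeholders, not the study's measurements.

      # Sketch of the linear IS-based calibration; numbers are placeholders.
      import numpy as np

      is_volume_l = np.array([0.5, 1.0, 1.5, 2.0])   # volumes set on the incentive spirometer (L)
      amplitude = np.array([0.9, 1.8, 2.6, 3.5])     # smartphone peak-to-peak amplitudes (a.u.)

      slope, intercept = np.polyfit(amplitude, is_volume_l, deg=1)   # calibration model
      vt_estimate = slope * 2.1 + intercept                          # apply to a new breath
      print(f"estimated tidal volume = {vt_estimate:.2f} L")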

  6. Employing an Incentive Spirometer to Calibrate Tidal Volumes Estimated from a Smartphone Camera

    PubMed Central

    Reyes, Bersain A.; Reljin, Natasa; Kong, Youngsun; Nam, Yunyoung; Ha, Sangho; Chon, Ki H.

    2016-01-01

    A smartphone-based tidal volume (VT) estimator was recently introduced by our research group, where an Android application provides a chest movement signal whose peak-to-peak amplitude is highly correlated with reference VT measured by a spirometer. We found a Normalized Root Mean Squared Error (NRMSE) of 14.998% ± 5.171% (mean ± SD) when the smartphone measures were calibrated using spirometer data. However, the availability of a spirometer device for calibration is not realistic outside clinical or research environments. In order to be used by the general population on a daily basis, a simple calibration procedure not relying on specialized devices is required. In this study, we propose taking advantage of the linear correlation between smartphone measurements and VT to obtain a calibration model using information computed while the subject breathes through a commercially-available incentive spirometer (IS). Experiments were performed on twelve (N = 12) healthy subjects. In addition to corroborating findings from our previous study using a spirometer for calibration, we found that the calibration procedure using an IS resulted in a fixed bias of −0.051 L and a RMSE of 0.189 ± 0.074 L corresponding to 18.559% ± 6.579% when normalized. Although it has a small underestimation and slightly increased error, the proposed calibration procedure using an IS has the advantages of being simple, fast, and affordable. This study supports the feasibility of developing a portable smartphone-based breathing status monitor that provides information about breathing depth, in addition to the more commonly estimated respiratory rate, on a daily basis. PMID:26999152

  7. Automatic Astrometric and Photometric Calibration with SCAMP

    NASA Astrophysics Data System (ADS)

    Bertin, E.

    2006-07-01

    Astrometric and photometric calibrations have remained the most tiresome step in the reduction of large imaging surveys. I present a new software package, SCAMP which has been written to address this problem. SCAMP efficiently computes accurate astrometric and photometric solutions for any arbitrary sequence of FITS images in a completely automatic way. SCAMP is released under the GNU General Public Licence.

  8. A framework for assessing the adequacy and effectiveness of software development methodologies

    NASA Technical Reports Server (NTRS)

    Arthur, James D.; Nance, Richard E.

    1990-01-01

    Tools, techniques, environments, and methodologies dominate the software engineering literature, but relatively little research in the evaluation of methodologies is evident. This work reports an initial attempt to develop a procedural approach to evaluating software development methodologies. Prominent in this approach are: (1) an explication of the role of a methodology in the software development process; (2) the development of a procedure based on linkages among objectives, principles, and attributes; and (3) the establishment of a basis for reduction of the subjective nature of the evaluation through the introduction of properties. An application of the evaluation procedure to two Navy methodologies has provided consistent results that demonstrate the utility and versatility of the evaluation procedure. Current research efforts focus on the continued refinement of the evaluation procedure through the identification and integration of product quality indicators reflective of attribute presence, and the validation of metrics supporting the measure of those indicators. The consequent refinement of the evaluation procedure offers promise of a flexible approach that admits to change as the field of knowledge matures. In conclusion, the procedural approach presented in this paper represents a promising path toward the end goal of objectively evaluating software engineering methodologies.

  9. An IMU-to-Body Alignment Method Applied to Human Gait Analysis.

    PubMed

    Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo

    2016-12-10

    This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.

  10. An experimental protocol for the definition of upper limb anatomical frames on children using magneto-inertial sensors.

    PubMed

    Ricci, L; Formica, D; Tamilia, E; Taffoni, F; Sparaci, L; Capirci, O; Guglielmelli, E

    2013-01-01

    Motion capture based on magneto-inertial sensors is a technology enabling data collection in unstructured environments, allowing "out of the lab" motion analysis. This technology is a good candidate for motion analysis of children thanks to its reduced weight and size as well as the use of wireless communication, which has improved its wearability and reduced its obtrusiveness. A key issue in the application of such technology for motion analysis is its calibration, i.e. a process that allows mapping orientation information from each sensor to a physiological reference frame. To date, even though there are several calibration procedures available for adults, no specific calibration procedures have been developed for children. This work addresses this specific issue by presenting a calibration procedure for motion capture of the thorax and upper limbs in healthy children. Reported results suggest comparable performance with similar studies on adults and emphasize some critical issues, opening the way to further improvements.

  11. Method for 3D noncontact measurements of cut trees package area

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Vizilter, Yuri V.

    2001-02-01

    Progress in imaging sensors and computers creates the basis for numerous 3D imaging applications across a wide variety of manufacturing activities. The wood industry has many demands for automated precise measurements. One of them is accurate volume determination for cut trees carried on a truck. The key point for volume estimation is determination of the front area of the cut-tree package. To eliminate the slow and inaccurate manual measurements currently in use, an experimental system for automated non-contact wood measurement was developed. The system includes two non-metric CCD video cameras, a PC as the central processing unit, frame grabbers, and original software for image processing and 3D measurements. The proposed measurement method is based on capturing a stereo pair of the front of the tree package and performing an orthotransformation of the image into the front plane. This technique allows the transformed image to be processed for recognition of circular shapes and calculation of their area. The metric characteristics of the system are provided by a special camera calibration procedure. The paper presents the developed method of 3D measurement, describes the hardware used for image acquisition and the software implementing the developed algorithms, and gives the productivity and precision characteristics of the system.

  12. Multiple calibrator measurements improve accuracy and stability estimates of automated assays.

    PubMed

    Akbas, Neval; Budd, Jeffrey R; Klee, George G

    2016-01-01

    The effects of combining multiple calibrations on assay accuracy (bias) and measurement of calibration stability were investigated for total triiodothyronine (TT3), vitamin B12 and luteinizing hormone (LH) using Beckman Coulter's Access 2 analyzer. Three calibration procedures (CC1, CC2 and CC3) combined 12, 34 and 56 calibrator measurements over 1, 2, and 3 days. Bias was calculated between target values and the average measured value over 3 consecutive days after calibration. Using regression analysis of calibrator measurements versus measurement date, calibration stability was determined as the maximum number of days before a calibrator measurement exceeded 5% tolerance limits. Competitive assays (TT3, vitamin B12) had positive time regression slopes, while the sandwich assay (LH) had a negative slope. Bias values for TT3 were -2.49%, 1.49%, and -0.50% using CC1, CC2 and CC3 respectively, with calibrator stability of 32, 20, and 30 days. Bias values for vitamin B12 were 2.44%, 0.91%, and -0.50%, with calibrator stability of 4, 9, and 12 days. Bias values for LH were 2.26%, 1.44% and -0.29%, with calibrator stability of >43, 39 and 36 days. Measured stability was more consistent across calibration procedures using percent change rather than difference from target: 26 days for TT3, 12 days for B12 and 31 days for LH. Averaging over multiple calibrations produced smaller bias, consistent with improved accuracy. Time regression slopes in percent change were unaffected by the number of calibration measurements, but calibrator stability measured from the target value was highly affected by the calibrator value at time zero.
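
    The stability calculation described above amounts to regressing calibrator percent change against days since calibration and finding when the fitted drift first exceeds the 5% tolerance. The sketch below does exactly that on fabricated drift data; the numbers are placeholders, not measurements from the study.

      # Sketch of estimating calibrator stability from a drift regression; placeholder data.
      import numpy as np

      days = np.array([0, 3, 6, 9, 12, 15, 18])
      pct_change = np.array([0.0, 0.8, 1.7, 2.4, 3.3, 4.1, 4.9])   # percent change vs. time zero

      slope, intercept = np.polyfit(days, pct_change, deg=1)
      stability_days = (5.0 - intercept) / slope                   # day the 5% limit is reached
      print(f"estimated calibrator stability of about {stability_days:.0f} days")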

  13. 40 CFR 1066.240 - Torque transducer verification and calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Dynamometer Specifications § 1066.240 Torque transducer verification and calibration. Calibrate torque-measurement systems as described in 40 CFR 1065.310. ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Torque transducer verification and...

  14. 40 CFR 1066.240 - Torque transducer verification and calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Dynamometer Specifications § 1066.240 Torque transducer verification and calibration. Calibrate torque-measurement systems as described in 40 CFR 1065.310. ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Torque transducer verification and...

  15. Online C-arm calibration using a marked guide wire for 3D reconstruction of pulmonary arteries

    NASA Astrophysics Data System (ADS)

    Vachon, Étienne; Miró, Joaquim; Duong, Luc

    2017-03-01

    3D reconstruction of vessels from 2D X-ray angiography is highly relevant to improve the visualization and the assessment of vascular structures such as pulmonary arteries by interventional cardiologists. However, to ensure a robust and accurate reconstruction, C-arm gantry parameters must be properly calibrated to provide clinically acceptable results. Calibration procedures often rely on calibration objects and complex protocols that are not adapted to an interventional context. In this study, a novel calibration algorithm for the C-arm gantry is presented using instrumentation such as catheters and guide wires. This ensures the availability of a minimum set of correspondences and implies minimal changes to the clinical workflow. The method was evaluated on simulated data and on retrospective patient datasets. Experimental results on simulated datasets demonstrate a calibration that allows a 3D reconstruction of the guide wire up to a geometric transformation. Experiments with patient datasets show a significant decrease in the reprojection error, to 0.17 mm 2D RMS. Consequently, such a procedure might help identify any calibration drift during the intervention.

  16. Gaussian process based modeling and experimental design for sensor calibration in drifting environments

    PubMed Central

    Geng, Zongyu; Yang, Feng; Chen, Xi; Wu, Nianqiang

    2016-01-01

    It remains a challenge to accurately calibrate a sensor subject to environmental drift. The calibration task for such a sensor is to quantify the relationship between the sensor’s response and its exposure condition, which is specified by not only the analyte concentration but also the environmental factors such as temperature and humidity. This work developed a Gaussian Process (GP)-based procedure for the efficient calibration of sensors in drifting environments. Adopted as the calibration model, GP is not only able to capture the possibly nonlinear relationship between the sensor responses and the various exposure-condition factors, but also able to provide valid statistical inference for uncertainty quantification of the target estimates (e.g., the estimated analyte concentration of an unknown environment). Built on GP’s inference ability, an experimental design method was developed to achieve efficient sampling of calibration data in a batch sequential manner. The resulting calibration procedure, which integrates the GP-based modeling and experimental design, was applied on a simulated chemiresistor sensor to demonstrate its effectiveness and its efficiency over the traditional method. PMID:26924894
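
    A minimal version of such a GP calibration model can be set up with scikit-learn, treating the exposure condition (analyte concentration, temperature, humidity) as the input and the sensor response as the output; predictions then come with a standard deviation that supports the uncertainty quantification mentioned above. The kernel choice and all training values below are illustrative assumptions, not the paper's design.

      # Sketch of a GP model of sensor response vs. exposure condition (placeholder data).
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(1)
      X = rng.uniform([0.0, 15.0, 20.0], [10.0, 35.0, 80.0], size=(40, 3))            # concentration, temp, humidity
      y = 2.0 * X[:, 0] + 0.05 * X[:, 1] - 0.01 * X[:, 2] + rng.normal(0.0, 0.1, 40)  # synthetic response

      kernel = RBF(length_scale=[5.0, 10.0, 30.0]) + WhiteKernel(noise_level=0.01)
      gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
      mean, std = gp.predict([[5.0, 25.0, 50.0]], return_std=True)   # predicted response and its uncertainty
      print(mean[0], std[0])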

  17. Assessment and certification of neonatal incubator sensors through an inferential neural network.

    PubMed

    de Araújo, José Medeiros; de Menezes, José Maria Pires; Moura de Albuquerque, Alberto Alexandre; da Mota Almeida, Otacílio; Ugulino de Araújo, Fábio Meneghetti

    2013-11-15

    Measurement and diagnostic systems based on electronic sensors have been increasingly essential in the standardization of hospital equipment. The technical standard IEC (International Electrotechnical Commission) 60601-2-19 establishes requirements for neonatal incubators and specifies the calibration procedure and validation tests for such devices using sensors systems. This paper proposes a new procedure based on an inferential neural network to evaluate and calibrate a neonatal incubator. The proposal presents significant advantages over the standard calibration process, i.e., the number of sensors is drastically reduced, and it runs with the incubator under operation. Since the sensors used in the new calibration process are already installed in the commercial incubator, no additional hardware is necessary; and the calibration necessity can be diagnosed in real time without the presence of technical professionals in the neonatal intensive care unit (NICU). Experimental tests involving the aforementioned calibration system are carried out in a commercial incubator in order to validate the proposal.

  18. Assessment and Certification of Neonatal Incubator Sensors through an Inferential Neural Network

    PubMed Central

    de Araújo Júnior, José Medeiros; de Menezes Júnior, José Maria Pires; de Albuquerque, Alberto Alexandre Moura; Almeida, Otacílio da Mota; de Araújo, Fábio Meneghetti Ugulino

    2013-01-01

    Measurement and diagnostic systems based on electronic sensors have been increasingly essential in the standardization of hospital equipment. The technical standard IEC (International Electrotechnical Commission) 60601-2-19 establishes requirements for neonatal incubators and specifies the calibration procedure and validation tests for such devices using sensors systems. This paper proposes a new procedure based on an inferential neural network to evaluate and calibrate a neonatal incubator. The proposal presents significant advantages over the standard calibration process, i.e., the number of sensors is drastically reduced, and it runs with the incubator under operation. Since the sensors used in the new calibration process are already installed in the commercial incubator, no additional hardware is necessary; and the calibration necessity can be diagnosed in real time without the presence of technical professionals in the neonatal intensive care unit (NICU). Experimental tests involving the aforementioned calibration system are carried out in a commercial incubator in order to validate the proposal. PMID:24248278

  19. Ability Estimation and Item Calibration Using the One and Three Parameter Logistic Models: A Comparative Study. Research Report 77-1.

    ERIC Educational Resources Information Center

    Reckase, Mark D.

    Latent trait model calibration procedures were used on data obtained from a group testing program. The one-parameter model of Wright and Panchapakesan and the three-parameter logistic model of Wingersky, Wood, and Lord were selected for comparison. These models and their corresponding estimation procedures were compared, using actual and simulated…
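
    For readers unfamiliar with the two models being compared, the sketch below evaluates the standard one-parameter (Rasch) and three-parameter logistic item response functions; the item parameters are illustrative, not the calibrated values from the report.

```python
# Item response functions compared in the report: the one-parameter (Rasch)
# and three-parameter logistic models. D = 1.7 is the usual scaling constant.
import numpy as np

def p_1pl(theta, b):
    """Probability of a correct response under the Rasch / 1PL model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def p_3pl(theta, a, b, c, D=1.7):
    """Probability of a correct response under the 3PL model
    (a: discrimination, b: difficulty, c: pseudo-guessing)."""
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

theta = np.linspace(-3, 3, 7)          # ability grid
print(p_1pl(theta, b=0.0))             # illustrative item parameters
print(p_3pl(theta, a=1.2, b=0.0, c=0.2))
```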

  20. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR PREPARATION OF CALIBRATION AND SURROGATE RECOVERY SOLUTIONS FOR GC/MS ANALYSIS OF PESTICIDES (BCO-L-21.1)

    EPA Science Inventory

    The purpose of this SOP is to describe procedures for preparing calibration curve solutions used for gas chromatography/mass spectrometry (GC/MS) analysis of chlorpyrifos, diazinon, malathion, DDT, DDE, DDD, a-chlordane, and g-chlordane in dust, soil, air, and handwipe sample ext...

  1. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR OPERATION, CALIBRATION AND MAINTENANCE OF THE PERKIN-ELMER ZEEMAN/5000 SYSTEM ATOMIC ABSORPTION SPECTROMETER (BCO-L-6.0)

    EPA Science Inventory

    The purpose of this SOP is to outline the start-up, calibration, operation, and maintenance procedures for the Perkin-Elmer 5000 atomic absorption spectrophotometer (PE 5000 AA), and the Perkin Elmer 5000 Zeeman graphite furnace atomic absorption spectrophotometer (PE 5000Z GFAA)...

  2. Metafitting: Weight optimization for least-squares fitting of PTTI data

    NASA Technical Reports Server (NTRS)

    Douglas, Rob J.; Boulanger, J.-S.

    1995-01-01

    For precise time intercomparisons between a master frequency standard and a slave time scale, we have found it useful to quantitatively compare different fitting strategies by examining the standard uncertainty in time or average frequency. It is particularly useful when designing procedures which use intermittent intercomparisons, with some parameterized fit used to interpolate or extrapolate from the calibrating intercomparisons. We use the term 'metafitting' for the choices that are made before a fitting procedure is operationally adopted. We present methods for calculating the standard uncertainty for general, weighted least-squares fits and a method for optimizing these weights for a general noise model suitable for many PTTI applications. We present the results of the metafitting of procedures for the use of a regular schedule of (hypothetical) high-accuracy frequency calibration of a maser time scale. We have identified a cumulative series of improvements that give a significant reduction of the expected standard uncertainty, compared to the simplest procedure of resetting the maser synthesizer after each calibration. The metafitting improvements presented include the optimum choice of weights for the calibration runs, optimized over a period of a week or 10 days.
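
    A minimal sketch of the quantity being optimized in "metafitting": the standard uncertainty of a weighted least-squares fit used to extrapolate a time offset past the last calibration. The linear drift model, weights, and synthetic data are assumptions for illustration.

```python
# Standard uncertainty of a weighted least-squares prediction, from the
# parameter covariance (X^T W X)^{-1}. Model and data are illustrative.
import numpy as np

t = np.arange(0.0, 10.0, 1.0)             # calibration epochs (days)
offsets = 2.0 + 0.3 * t + np.random.default_rng(2).normal(0, 0.1, t.size)
w = np.full(t.size, 1.0 / 0.1**2)         # weights = 1 / sigma^2

X = np.column_stack([np.ones_like(t), t]) # model: offset = a + b * t
W = np.diag(w)
cov = np.linalg.inv(X.T @ W @ X)          # parameter covariance matrix
beta = cov @ X.T @ W @ offsets            # weighted least-squares estimate

t_pred = 12.0                             # extrapolate past the last calibration
x_pred = np.array([1.0, t_pred])
u_pred = np.sqrt(x_pred @ cov @ x_pred)   # standard uncertainty of the prediction
print(f"extrapolated offset {x_pred @ beta:.3f}, standard uncertainty {u_pred:.3f}")
```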

  3. NASA Tech Briefs, October 2003

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Topics covered include: Cryogenic Temperature-Gradient Foam/Substrate Tensile Tester; Flight Test of an Intelligent Flight-Control System; Slat Heater Boxes for Thermal Vacuum Testing; System for Testing Thermal Insulation of Pipes; Electrical-Impedance-Based Ice-Thickness Gauges; Simulation System for Training in Laparoscopic Surgery; Flasher Powered by Photovoltaic Cells and Ultracapacitors; Improved Autoassociative Neural Networks; Toroidal-Core Microinductors Biased by Permanent Magnets; Using Correlated Photons to Suppress Background Noise; Atmospheric-Fade-Tolerant Tracking and Pointing in Wireless Optical Communication; Curved Focal-Plane Arrays Using Back-Illuminated High-Purity Photodetectors; Software for Displaying Data from Planetary Rovers; Software for Refining or Coarsening Computational Grids; Software for Diagnosis of Multiple Coordinated Spacecraft; Software Helps Retrieve Information Relevant to the User; Software for Simulating a Complex Robot; Software for Planning Scientific Activities on Mars; Software for Training in Pre-College Mathematics; Switching and Rectification in Carbon-Nanotube Junctions; Scandia-and-Yttria-Stabilized Zirconia for Thermal Barriers; Environmentally Safer, Less Toxic Fire-Extinguishing Agents; Multiaxial Temperature- and Time-Dependent Failure Model; Cloverleaf Vibratory Microgyroscope with Integrated Post; Single-Vector Calibration of Wind-Tunnel Force Balances; Microgyroscope with Vibrating Post as Rotation Transducer; Continuous Tuning and Calibration of Vibratory Gyroscopes; Compact, Pneumatically Actuated Filter Shuttle; Improved Bearingless Switched-Reluctance Motor; Fluorescent Quantum Dots for Biological Labeling; Growing Three-Dimensional Corneal Tissue in a Bioreactor; Scanning Tunneling Optical Resonance Microscopy; The Micro-Arcsecond Metrology Testbed; Detecting Moving Targets by Use of Soliton Resonances; and Finite-Element Methods for Real-Time Simulation of Surgery.

  4. Angle Measurement System (AMS) for Establishing Model Pitch and Roll Zero, and Performing Single Axis Angle Comparisons

    NASA Technical Reports Server (NTRS)

    Crawford, Bradley L.

    2007-01-01

    The angle measurement system (AMS) developed at NASA Langley Research Center (LaRC) is a system for many uses. It was originally developed to check taper fits in the wind tunnel model support system. The system was further developed to measure simultaneous pitch and roll angles using 3 orthogonally mounted accelerometers (3-axis). This 3-axis arrangement is used as a transfer standard from the calibration standard to the wind tunnel facility. It is generally used to establish model pitch and roll zero and to perform in-situ calibration of model attitude devices. The AMS originally used a laptop computer running DOS-based software but has recently been upgraded to operate in a Windows environment. Other improvements have also been made to the software to enhance its accuracy and add features. This paper will discuss the accuracy and calibration methodologies used in this system and some of the features that have contributed to its popularity.
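
    The static-tilt relation behind a three-axis accelerometer package is sketched below (textbook pitch and roll from the measured gravity vector); sign conventions vary, and this is not LaRC's calibrated AMS algorithm.

```python
# Pitch and roll from static 3-axis accelerometer readings (in g).
# Axis and sign conventions are assumed for illustration.
import math

def pitch_roll_deg(ax, ay, az):
    """Return (pitch, roll) in degrees from accelerometer readings in g."""
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay**2 + az**2)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

print(pitch_roll_deg(0.0, 0.0, 1.0))     # level: (0.0, 0.0)
print(pitch_roll_deg(-0.5, 0.0, 0.866))  # roughly 30 degrees nose-up pitch
```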

  5. WDR-PK-AK-018

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollister, R

    2009-08-26

    Method - CES SOP-HW-P556 'Field and Bulk Gamma Analysis'. Detector - High-purity germanium, 40% relative efficiency. Calibration - The detector was calibrated on February 8, 2006 using a NIST-traceable sealed source, and the calibration was verified using an independent sealed source. Count Time and Geometry - The sample was counted for 20 minutes at 72 inches from the detector. A lead collimator was used to limit the field-of-view to the region of the sample. The drum was rotated 180 degrees halfway through the count time. Date and Location of Scans - June 1, 2006 in Building 235 Room 1136. Spectral Analysis - Spectra were analyzed with ORTEC GammaVision software. Matrix and geometry corrections were calculated using ORTEC Isotopic software. A background spectrum was measured at the counting location. No man-made radioactivity was observed in the background. Results were determined from the sample spectra without background subtraction. Minimum detectable activities were calculated by the NUREG 4.16 method. Results - Detected Pu-238, Pu-239, Am-241 and Am-243.

  6. Wireless Acoustic Measurement System

    NASA Technical Reports Server (NTRS)

    Anderson, Paul D.; Dorland, Wade D.; Jolly, Ronald L.

    2007-01-01

    A prototype wireless acoustic measurement system (WAMS) is one of two main subsystems of the Acoustic Prediction/Measurement Tool, which comprises software, acoustic instrumentation, and electronic hardware combined to afford integrated capabilities for predicting and measuring noise emitted by rocket and jet engines. The other main subsystem is described in the article on page 8. The WAMS includes analog acoustic measurement instrumentation and analog and digital electronic circuitry combined with computer wireless local-area networking to enable (1) measurement of sound-pressure levels at multiple locations in the sound field of an engine under test and (2) recording and processing of the measurement data. At each field location, the measurements are taken by a portable unit, denoted a field station. There are ten field stations, each of which can take two channels of measurements. Each field station is equipped with two instrumentation microphones, a micro-ATX computer, a wireless network adapter, an environmental enclosure, a directional radio antenna, and a battery power supply. The environmental enclosure shields the computer from weather and from extreme acoustically induced vibrations. The power supply is based on a marine-service lead-acid storage battery that has enough capacity to support operation for as long as 10 hours. A desktop computer serves as a control server for the WAMS. The server is connected to a wireless router for communication with the field stations via a wireless local-area network that complies with wireless-network standard 802.11b of the Institute of Electrical and Electronics Engineers. The router and the wireless network adapters are controlled by use of Linux-compatible driver software. The server runs custom Linux software for synchronizing the recording of measurement data in the field stations. The software includes a module that provides an intuitive graphical user interface through which an operator at the control server can control the operations of the field stations for calibration and for recording of measurement data. A test engineer positions and activates the WAMS. The WAMS automatically establishes the wireless network. Next, the engineer performs pretest calibrations. Then the engineer executes the test and measurement procedures. After the test, the raw measurement files are copied and transferred, through the wireless network, to a hard disk in the control server. Subsequently, the data are processed into 1/3-octave spectrograms.

  7. Wireless Acoustic Measurement System

    NASA Technical Reports Server (NTRS)

    Anderson, Paul D.; Dorland, Wade D.

    2005-01-01

    A prototype wireless acoustic measurement system (WAMS) is one of two main subsystems of the Acoustic Prediction/Measurement Tool, which comprises software, acoustic instrumentation, and electronic hardware combined to afford integrated capabilities for predicting and measuring noise emitted by rocket and jet engines. The other main subsystem is described in "Predicting Rocket or Jet Noise in Real Time" (SSC-00215-1), which appears elsewhere in this issue of NASA Tech Briefs. The WAMS includes analog acoustic measurement instrumentation and analog and digital electronic circuitry combined with computer wireless local-area networking to enable (1) measurement of sound-pressure levels at multiple locations in the sound field of an engine under test and (2) recording and processing of the measurement data. At each field location, the measurements are taken by a portable unit, denoted a field station. There are ten field stations, each of which can take two channels of measurements. Each field station is equipped with two instrumentation microphones, a micro-ATX computer, a wireless network adapter, an environmental enclosure, a directional radio antenna, and a battery power supply. The environmental enclosure shields the computer from weather and from extreme acoustically induced vibrations. The power supply is based on a marine-service lead-acid storage battery that has enough capacity to support operation for as long as 10 hours. A desktop computer serves as a control server for the WAMS. The server is connected to a wireless router for communication with the field stations via a wireless local-area network that complies with wireless-network standard 802.11b of the Institute of Electrical and Electronics Engineers. The router and the wireless network adapters are controlled by use of Linux-compatible driver software. The server runs custom Linux software for synchronizing the recording of measurement data in the field stations. The software includes a module that provides an intuitive graphical user interface through which an operator at the control server can control the operations of the field stations for calibration and for recording of measurement data. A test engineer positions and activates the WAMS. The WAMS automatically establishes the wireless network. Next, the engineer performs pretest calibrations. Then the engineer executes the test and measurement procedures. After the test, the raw measurement files are copied and transferred, through the wireless network, to a hard disk in the control server. Subsequently, the data are processed into 1/3-octave spectrograms.
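
    A rough sketch of the final reduction step, computing band sound-pressure levels in 1/3-octave bands from a recorded waveform. The sample rate, band centers, and reference-pressure handling are assumptions; production tools would use standard fractional-octave filters.

```python
# Band sound-pressure levels (dB re 20 uPa) in 1/3-octave bands, estimated
# from an FFT. Band edges at f_c * 2^(+/-1/6); values are illustrative.
import numpy as np

def third_octave_levels(x, fs, centers):
    """Return SPL in each 1/3-octave band for a pressure signal x (Pa)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    # One-sided mean-square contribution per bin (DC and Nyquist not doubled)
    ms = (np.abs(X) ** 2) / x.size**2
    ms[1:-1] *= 2.0
    p_ref = 20e-6
    levels = []
    for fc in centers:
        lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)
        p2 = ms[(freqs >= lo) & (freqs < hi)].sum()  # band mean-square pressure
        levels.append(10 * np.log10(max(p2, 1e-30) / p_ref**2))
    return levels

fs = 51200
t = np.arange(fs) / fs
x = 2.0 * np.sin(2 * np.pi * 1000 * t)               # 1 kHz tone, 2 Pa amplitude
print(third_octave_levels(x, fs, centers=[500, 1000, 2000]))
```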

  8. Development of dynamic calibration methods for POGO pressure transducers. [for space shuttle

    NASA Technical Reports Server (NTRS)

    Hilten, J. S.; Lederer, P. S.; Vezzetti, C. F.; Mayo-Wells, J. F.

    1976-01-01

    Two dynamic pressure sources are described for the calibration of pogo pressure transducers used to measure oscillatory pressures generated in the propulsion system of the space shuttle. Rotation of a mercury-filled tube in a vertical plane at frequencies below 5 Hz generates sinusoidal pressures up to 48 kPa, peak-to-peak; vibrating the same mercury-filled tube sinusoidally in the vertical plane extends the frequency response from 5 Hz to 100 Hz at pressures up to 140 kPa, peak-to-peak. The sinusoidal pressure fluctuations can be generated by both methods in the presence of high pressures (bias) up to 55 MPa. Calibration procedures are given in detail for the use of both sources. The dynamic performance of selected transducers was evaluated using these procedures; the results of these calibrations are presented. Calibrations made with the two sources near 5 Hz agree to within 3% of each other.
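
    A back-of-the-envelope check of the rotating-tube source, assuming a purely hydrostatic head from a mercury column of an assumed effective length and ignoring the steady centrifugal term.

```python
# Rough hydrostatic check: a mercury column of length L swept through the
# vertical gives a peak-to-peak pressure of about rho * g * 2L.
# The column length is an assumption chosen to match the quoted 48 kPa.
rho_hg = 13_595.0        # kg/m3, density of mercury
g = 9.81                 # m/s2
L = 0.18                 # m, assumed effective column length
print(f"peak-to-peak pressure ~ {rho_hg * g * 2 * L / 1000:.0f} kPa")
```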

  9. 30 CFR 90.203 - Certified person; maintenance and calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... COAL MINE SAFETY AND HEALTH MANDATORY HEALTH STANDARDS-COAL MINERS WHO HAVE EVIDENCE OF THE DEVELOPMENT OF PNEUMOCONIOSIS Sampling Procedures § 90.203 Certified person; maintenance and calibration. (a) Approved sampling devices shall be maintained and calibrated by a certified person. (b) To be certified, a...

  10. Procedure for calibrating the Technicon Colorimeter I.

    PubMed

    Black, J C; Furman, W B

    1975-05-01

    We describe a rapid method for calibrating the Technicon AutoAnalyzer colorimeter I. Test solutions of bromphenol blue are recommended for the calibration, in preference to solutions of potassium dichromate, based on considerations of the instrument's working range and of the stray light characteristics of the associated filters.

  11. 42 CFR 493.1255 - Standard: Calibration and calibration verification procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... accuracy of the test system throughout the laboratory's reportable range of test results for the test system. Unless otherwise specified in this subpart, for each applicable test system the laboratory must... test system instructions, using calibration materials provided or specified, and with at least the...

  12. Calibration Issues and Operating System Requirements for Electron-Probe Microanalysis

    NASA Technical Reports Server (NTRS)

    Carpenter, P.

    2006-01-01

    Instrument purchase requirements and dialogue with manufacturers have established hardware parameters for alignment, stability, and reproducibility, which have helped improve the precision and accuracy of electron microprobe analysis (EPMA). The development of correction algorithms and the accurate solution to quantitative analysis problems requires the minimization of systematic errors and relies on internally consistent data sets. Improved hardware and computer systems have resulted in better automation of vacuum systems, stage and wavelength-dispersive spectrometer (WDS) mechanisms, and x-ray detector systems which have improved instrument stability and precision. Improved software now allows extended automated runs involving diverse setups and better integrates digital imaging and quantitative analysis. However, instrumental performance is not regularly maintained, as WDS are aligned and calibrated during installation but few laboratories appear to check and maintain this calibration. In particular, detector deadtime (DT) data is typically assumed rather than measured, due primarily to the difficulty and inconvenience of the measurement process. This is a source of fundamental systematic error in many microprobe laboratories and is unknown to the analyst, as the magnitude of DT correction is not listed in output by microprobe operating systems. The analyst must remain vigilant to deviations in instrumental alignment and calibration, and microprobe system software must conveniently verify the necessary parameters. Microanalysis of mission critical materials requires an ongoing demonstration of instrumental calibration. Possible approaches to improvements in instrument calibration, quality control, and accuracy will be discussed. Development of a set of core requirements based on discussions with users, researchers, and manufacturers can yield documents that improve and unify the methods by which instruments can be calibrated. These results can be used to continue improvements of EPMA.
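
    The deadtime issue raised above can be made concrete with the standard non-extending deadtime correction; the sketch below shows the systematic error introduced when an assumed deadtime differs from the true value (the rates and deadtimes are illustrative).

```python
# Non-extending (non-paralyzable) deadtime correction and the systematic
# error from an assumed rather than measured tau. Values are illustrative.
def deadtime_correct(observed_rate, tau):
    """Correct an observed count rate (cps) for non-extending deadtime tau (s)."""
    return observed_rate / (1.0 - observed_rate * tau)

observed = 40_000.0                           # cps measured by the detector
true = deadtime_correct(observed, 3.0e-6)     # with the true deadtime, 3.0 us
assumed = deadtime_correct(observed, 2.0e-6)  # with an assumed 2.0 us value
print(f"true rate {true:.0f} cps, assumed-tau rate {assumed:.0f} cps, "
      f"systematic error {100 * (assumed - true) / true:.1f}%")
```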

  13. Accurate calibration of waveform data measured by the Plasma Wave Experiment on board the ARASE satellite

    NASA Astrophysics Data System (ADS)

    Kitahara, M.; Katoh, Y.; Hikishima, M.; Kasahara, Y.; Matsuda, S.; Kojima, H.; Ozaki, M.; Yagitani, S.

    2017-12-01

    The Plasma Wave Experiment (PWE) is installed on board the ARASE satellite to measure the electric field in the frequency range from DC to 10 MHz, and the magnetic field in the frequency range from a few Hz to 100 kHz, using two dipole wire-probe antennas (WPT) and three magnetic search coils (MSC), respectively. In particular, the Waveform Capture (WFC), one of the receivers of the PWE, can detect electromagnetic field waveforms in the frequency range from a few Hz to 20 kHz. The Software-type Wave Particle Interaction Analyzer (S-WPIA) is installed on the ARASE satellite to measure the energy exchange between plasma waves and particles. Since S-WPIA uses the waveform data measured by WFC to calculate the relative phase angle between the wave magnetic field and the velocity of energetic electrons, high accuracy is required in the calibration of both the amplitude and the phase of the waveform data. Generally, the calibration procedure for a signal passed through a receiver consists of three steps: transformation into spectra, calibration by the transfer function of the receiver, and inverse transformation of the calibrated spectra into the time domain. In practice, to reduce side-lobe effects, the raw data are filtered by a window function in the time domain before applying the Fourier transform. However, when the first-order differential coefficient of the phase transfer function of the system is not negligible, the phase of the window function convolved into the calibrated spectra is shifted differently at each frequency, resulting in a discontinuity in the time domain of the calibrated waveform data. To eliminate the effect of this phase shift of the window function, we suggest several methods to calibrate waveform data accurately and carry out simulations assuming simple sinusoidal waves as the input signal and using transfer functions of WPT, MSC, and WFC obtained in pre-flight tests. We conclude that the following two methods can reduce the error introduced by the calibration to less than 0.1% of the input wave amplitude: (1) a Tukey-type window function with a flat-top region of one-third of the window length, and (2) modification of the window function at each frequency based on an estimate of the phase shift due to the first-order differential coefficient of the transfer functions.
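
    A minimal sketch of the three calibration steps and the Tukey window of method (1), assuming a toy gain/phase transfer function rather than the measured WPT/MSC/WFC responses.

```python
# Window -> FFT -> divide by transfer function -> inverse FFT, with a Tukey
# window whose flat top is one third of the record. Transfer function is a toy.
import numpy as np
from scipy.signal import windows

fs, n = 65536, 8192
t = np.arange(n) / fs
recorded = np.sin(2 * np.pi * 2000 * t)                 # waveform as recorded (toy)

win = windows.tukey(n, alpha=2 / 3)                     # flat top of ~n/3 samples
spec = np.fft.rfft(recorded * win)                      # 1) transform to spectrum
freqs = np.fft.rfftfreq(n, 1.0 / fs)

# 2) divide by the receiver transfer function (toy: gain 0.5, linear phase delay)
transfer = 0.5 * np.exp(-1j * 2 * np.pi * freqs * 1e-4)
calibrated_spec = np.divide(spec, transfer, out=np.zeros_like(spec),
                            where=np.abs(transfer) > 1e-12)

calibrated = np.fft.irfft(calibrated_spec, n)           # 3) back to time domain
print(f"recovered amplitude in flat-top region ~ {np.abs(calibrated).max():.2f}")
```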

  14. Calibration and parameterization of a semi-distributed hydrological model to support sub-daily ensemble flood forecasting; a watershed in southeast Brazil

    NASA Astrophysics Data System (ADS)

    de Almeida Bressiani, D.; Srinivasan, R.; Mendiondo, E. M.

    2013-12-01

    The use of distributed or semi-distributed models to represent the processes and dynamics of a watershed has increased in the last few years. These models are important tools to predict and forecast the hydrological responses of watersheds, and they can support disaster risk management and planning. However, they usually have many parameters which, due to the spatial and temporal variability of the processes, are not well known, especially in developing countries; therefore a robust and sensible calibration is very important. This study conducted a sub-daily calibration and parameterization of the Soil & Water Assessment Tool (SWAT) for a 12,600 km2 watershed in southeast Brazil, and uses ensemble forecasts to evaluate whether the model can be used as a tool for flood forecasting. The Piracicaba Watershed, in São Paulo State, is mainly rural, but has about 4 million inhabitants in highly relevant urban areas, and three cities on the list of critical cities of the National Center for Natural Disasters Monitoring and Alerts. For calibration, the watershed was divided into areas with similar hydrological characteristics, and for each of these areas one gauge station was chosen for calibration; this procedure was performed to evaluate the effectiveness of calibrating in fewer places, since areas with the same combination of groundwater, soil, land use and slope characteristics should have similar parameters, making calibration a less time-consuming task. The sensitivity analysis and calibration were performed with the software SWAT-CUP using the optimization algorithm Sequential Uncertainty Fitting Version 2 (SUFI-2), which uses a Latin hypercube sampling scheme in an iterative process. Model performance for calibration and validation was evaluated with the Nash-Sutcliffe efficiency coefficient (NSE), determination coefficient (r2), root mean square error (RMSE), and percent bias (PBIAS), with monthly average values of NSE around 0.70, r2 of 0.9, normalized RMSE of 0.01, and PBIAS of 10. Past events were analysed to evaluate the possibility of using the SWAT model developed for the Piracicaba watershed as a tool for ensemble flood forecasting. For the ensemble evaluation, members from the numerical model Eta were used. Eta is an atmospheric model used for research and operational purposes, with 5 km resolution, updated twice a day (00 and 12 UTC) for a ten-day horizon, with precipitation and weather estimates for each hour. Overall, the parameterized SWAT model performed well for ensemble flood forecasting.
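
    For reference, the sketch below computes the goodness-of-fit statistics quoted above (NSE, r2, PBIAS) for a placeholder observed/simulated discharge series; note that sign conventions for PBIAS differ between authors.

```python
# Goodness-of-fit metrics used in the calibration: NSE, r2, and PBIAS.
# The observed and simulated discharge values are placeholders.
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias (positive = underestimation with this sign convention)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

obs = [120.0, 95.0, 410.0, 230.0, 60.0]     # observed monthly discharge (m3/s)
sim = [110.0, 100.0, 380.0, 250.0, 55.0]    # simulated values (placeholder)
print(f"NSE = {nse(obs, sim):.2f}, PBIAS = {pbias(obs, sim):.1f}%, "
      f"r2 = {np.corrcoef(obs, sim)[0, 1] ** 2:.2f}")
```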

  15. Input variable selection and calibration data selection for storm water quality regression models.

    PubMed

    Sun, Siao; Bertrand-Krajewski, Jean-Luc

    2013-01-01

    Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data for developing models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available. Model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems are mutually dependent. A procedure is developed to carry out the two selection tasks in sequence. The procedure first selects model input variables using a cross-validation method. An appropriate number of variables are identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the model input selection results, calibration data selection is studied. Uncertainty in model performance due to calibration data selection is investigated with a random selection method. A clustering-based approach is applied to enhance model calibration practice, based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content of the calibration data is important in addition to its size.
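
    A hedged sketch of the two selection steps performed in sequence, using generic scikit-learn tools (sequential feature selection by cross-validation, then k-means to pick representative calibration events); the data, model, and cluster count are assumptions, not the authors' procedure.

```python
# (1) choose model inputs by cross-validated sequential selection,
# (2) choose calibration events by clustering instead of at random.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(80, 6))                              # candidate explanatory variables
y = 2 * X[:, 0] - X[:, 2] + rng.normal(0, 0.3, 80)        # storm water quality target (toy)

# 1) input variable selection: keep the subset that performs best under CV
selector = SequentialFeatureSelector(LinearRegression(), n_features_to_select=2,
                                     cv=5).fit(X, y)
X_sel = X[:, selector.get_support()]

# 2) calibration data selection: one representative event per cluster
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X_sel)
calib_idx = [int(np.argmin(np.linalg.norm(X_sel - c, axis=1)))
             for c in km.cluster_centers_]
model = LinearRegression().fit(X_sel[calib_idx], y[calib_idx])
print("verification R2:", model.score(np.delete(X_sel, calib_idx, axis=0),
                                      np.delete(y, calib_idx)))
```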

  16. Contributed Review: Absolute spectral radiance calibration of fiber-optic shock-temperature pyrometers using a coiled-coil irradiance standard lamp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fat’yanov, O. V., E-mail: fatyan1@gps.caltech.edu; Asimow, P. D., E-mail: asimow@gps.caltech.edu

    2015-10-15

    We describe an accurate and precise calibration procedure for multichannel optical pyrometers such as the 6-channel, 3-ns temporal resolution instrument used in the Caltech experimental geophysics laboratory. We begin with a review of calibration sources for shock temperatures in the 3000-30 000 K range. High-power, coiled tungsten halogen standards of spectral irradiance appear to be the only practical alternative to NIST-traceable tungsten ribbon lamps, which are no longer available with large enough calibrated area. However, non-uniform radiance complicates the use of such coiled lamps for reliable and reproducible calibration of pyrometers that employ imaging or relay optics. Careful analysis of documented methods of shock pyrometer calibration to coiled irradiance standard lamps shows that only one technique, not directly applicable in our case, is free of major radiometric errors. We provide a detailed description of the modified Caltech pyrometer instrument and a procedure for its absolute spectral radiance calibration, accurate to ±5%. We employ a designated central area of a 0.7× demagnified image of a coiled-coil tungsten halogen lamp filament, cross-calibrated against a NIST-traceable tungsten ribbon lamp. We give the results of the cross-calibration along with descriptions of the optical arrangement, data acquisition, and processing. We describe a procedure to characterize the difference between the static and dynamic response of amplified photodetectors, allowing time-dependent photodiode correction factors for spectral radiance histories from shock experiments. We validate correct operation of the modified Caltech pyrometer with actual shock temperature experiments on single-crystal NaCl and MgO and obtain very good agreement with the literature data for these substances. We conclude with a summary of the most essential requirements for error-free calibration of a fiber-optic shock-temperature pyrometer using a high-power coiled tungsten halogen irradiance standard lamp.

  17. gPhoton: THE GALEX PHOTON DATA ARCHIVE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Million, Chase; Fleming, Scott W.; Shiao, Bernie

    gPhoton is a new database product and software package that enables analysis of GALEX ultraviolet data at the photon level. The project’s stand-alone, pure-Python calibration pipeline reproduces the functionality of the original mission pipeline to reduce raw spacecraft data to lists of time-tagged, sky-projected photons, which are then hosted in a publicly available database by the Mikulski Archive for Space Telescopes. This database contains approximately 130 terabytes of data describing approximately 1.1 trillion sky-projected events with a timestamp resolution of five milliseconds. A handful of Python and command-line modules serve as a front end to interact with the database and to generate calibrated light curves and images from the photon-level data at user-defined temporal and spatial scales. The gPhoton software and source code are in active development and publicly available under a permissive license. We describe the motivation, design, and implementation of the calibration pipeline, database, and tools, with emphasis on divergence from prior work, as well as challenges created by the large data volume. We summarize the astrometric and photometric performance of gPhoton relative to the original mission pipeline. For a brief example of short time-domain science capabilities enabled by gPhoton, we show new flares from the known M-dwarf flare star CR Draconis. The gPhoton software has permanent object identifiers with the ASCL (ascl:1603.004) and DOI (doi:10.17909/T9CC7G). This paper describes the software as of version v1.27.2.

  18. Calibration of X-Ray diffractometer by the experimental comparison method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dudka, A. P., E-mail: dudka@ns.crys.ras.ru

    2015-07-15

    Software for calibrating an X-ray diffractometer with an area detector has been developed. It is proposed to search for detector and goniometer calibration models whose parameters are reproduced in a series of measurements on a reference crystal. Reference (standard) crystals are prepared during the investigation; they should provide agreement of structural models in repeated analyses. The technique developed has been used to calibrate Xcalibur Sapphire and Eos, Gemini Ruby (Agilent) and Apex x8 and Apex Duo (Bruker) diffractometers. The main conclusions are as follows: the calibration maps are stable for several years and can be used to improve structural results, verified CCD detectors exhibit significant inhomogeneity of the efficiency (response) function, and a Bruker goniometer introduces smaller distortions than an Agilent goniometer.

  19. 40 CFR 86.1422 - Analyzer calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Trucks; Certification Short Test Procedures § 86.1422 Analyzer calibration. (a) Determine that the... check. Prior to its introduction into service and at specified periods thereafter, the analyzer must...

  20. 40 CFR 86.1323-2007 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 19 2011-07-01 2011-07-01 false Oxides of nitrogen analyzer... Exhaust Test Procedures § 86.1323-2007 Oxides of nitrogen analyzer calibration. This section describes the initial and periodic calibration of the chemiluminescent oxides of nitrogen analyzer. (a) Prior to...

  1. 40 CFR 86.1323-2007 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 19 2010-07-01 2010-07-01 false Oxides of nitrogen analyzer... Exhaust Test Procedures § 86.1323-2007 Oxides of nitrogen analyzer calibration. This section describes the initial and periodic calibration of the chemiluminescent oxides of nitrogen analyzer. (a) Prior to...

  2. 40 CFR 86.1323-2007 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 20 2013-07-01 2013-07-01 false Oxides of nitrogen analyzer... Exhaust Test Procedures § 86.1323-2007 Oxides of nitrogen analyzer calibration. This section describes the initial and periodic calibration of the chemiluminescent oxides of nitrogen analyzer. (a) Prior to...

  3. 40 CFR 86.1323-2007 - Oxides of nitrogen analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 20 2012-07-01 2012-07-01 false Oxides of nitrogen analyzer... Exhaust Test Procedures § 86.1323-2007 Oxides of nitrogen analyzer calibration. This section describes the initial and periodic calibration of the chemiluminescent oxides of nitrogen analyzer. (a) Prior to...

  4. 40 CFR 86.524-78 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Carbon dioxide analyzer calibration... Regulations for 1978 and Later New Motorcycles; Test Procedures § 86.524-78 Carbon dioxide analyzer calibration. (a) Prior to its introduction into service and monthly thereafter the NDIR carbon dioxide...

  5. 40 CFR 86.1416 - Calibration; frequency and overview.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Emission Regulations for New Gasoline-Fueled Otto-Cycle Light-Duty Vehicles and New Gasoline-Fueled Otto-Cycle Light-Duty Trucks; Certification Short Test Procedures § 86.1416 Calibration; frequency and... calibration of the analyzer must be checked. The analyzer must be adjusted or repaired as necessary. (c) Water...

  6. 40 CFR 86.1416 - Calibration; frequency and overview.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) Emission Regulations for New Gasoline-Fueled Otto-Cycle Light-Duty Vehicles and New Gasoline-Fueled Otto-Cycle Light-Duty Trucks; Certification Short Test Procedures § 86.1416 Calibration; frequency and... calibration of the analyzer must be checked. The analyzer must be adjusted or repaired as necessary. (c) Water...

  7. 40 CFR 86.1416 - Calibration; frequency and overview.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Emission Regulations for New Gasoline-Fueled Otto-Cycle Light-Duty Vehicles and New Gasoline-Fueled Otto-Cycle Light-Duty Trucks; Certification Short Test Procedures § 86.1416 Calibration; frequency and... calibration of the analyzer must be checked. The analyzer must be adjusted or repaired as necessary. (c) Water...

  8. 40 CFR 86.1416 - Calibration; frequency and overview.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Emission Regulations for New Gasoline-Fueled Otto-Cycle Light-Duty Vehicles and New Gasoline-Fueled Otto-Cycle Light-Duty Trucks; Certification Short Test Procedures § 86.1416 Calibration; frequency and... calibration of the analyzer must be checked. The analyzer must be adjusted or repaired as necessary. (c) Water...

  9. 40 CFR 86.124-78 - Carbon dioxide analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Carbon dioxide analyzer calibration... PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission... Complete Heavy-Duty Vehicles; Test Procedures § 86.124-78 Carbon dioxide analyzer calibration. Prior to its...

  10. 40 CFR 90.315 - Analyzer initial calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX. and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...

  11. 40 CFR 90.315 - Analyzer initial calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX. and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...

  12. 40 CFR 90.315 - Analyzer initial calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX. and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...

  13. 40 CFR 90.315 - Analyzer initial calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX. and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...

  14. 40 CFR 90.315 - Analyzer initial calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... specifications in § 90.316(b). (c) Zero setting and calibration. Using purified synthetic air (or nitrogen), set the CO, CO2, NOX. and HC analyzers at zero. Connect the appropriate calibrating gases to the analyzers...) Rechecking of zero setting. Recheck the zero setting and, if necessary, repeat the procedure described in...

  15. Photogrammetric camera calibration

    USGS Publications Warehouse

    Tayman, W.P.; Ziemann, H.

    1984-01-01

    Section 2 (Calibration) of the document "Recommended Procedures for Calibrating Photogrammetric Cameras and Related Optical Tests" from the International Archives of Photogrammetry, Vol. XIII, Part 4, is reviewed in the light of recent practical work, and suggestions for changes are made. These suggestions are intended as a basis for a further discussion. © 1984.

  16. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR OPERATION, CALIBRATION, AND MAINTENANCE OF THE PERKIN-ELMER ZEEMAN/5000 SYSTEM ATOMIC ABSORPTION SPECTROPHOTOMETER (BCO-L-6.0)

    EPA Science Inventory

    The purpose of this SOP is to outline the start-up, calibration, operation, and maintenance procedures for the Perkin-Elmer 5000 atomic absorption spectrophotometer (PE 5000 AA), and the Perkin Elmer 5000 Zeeman graphite furnace atomic absorption spectrophotometer (PE 5000Z GFAA)...

  17. LANDSAT-D conical scanner evaluation plan

    NASA Technical Reports Server (NTRS)

    Bilanow, S.; Chen, L. C. (Principal Investigator)

    1982-01-01

    The planned activities involved in the inflight sensor calibration and performance evaluation are discussed and the supporting software requirements are specified. The possible sensor error sources and their effects on sensor measurements are summarized. The methods by which the inflight sensor performance will be analyzed and the sensor modeling parameters will be calibrated are presented. In addition, a brief discussion on the data requirement for the study is provided.

  18. Performance of ground attitude determination procedures for HEAO-1

    NASA Technical Reports Server (NTRS)

    Fallon, L., III; Sturch, C. R.

    1978-01-01

    Ground attitude support for HEAO 1 provided at GSFC by the HEAO 1 Attitude Ground Support System (AGSS) is described. Information telemetered from Sun sensors, gyroscopes, star trackers, and an onboard computer is used by the AGSS to compute updates to the onboard attitude reference and gyro calibration parameters. The onboard computer utilizes these updates in providing continuous attitudes (accurate to 0.25 degree) for use in the observatory's attitude control procedures. The relationship between HEAO 1 onboard and ground processing, the procedures used by the AGSS in computing attitude and gyro calibration updates, and the performance of these procedures in the HEAO 1 postlaunch environment are discussed.

  19. Calibration method for a large-scale structured light measurement system.

    PubMed

    Wang, Peng; Wang, Jianmei; Xu, Jing; Guan, Yong; Zhang, Guanglie; Chen, Ken

    2017-05-10

    The structured light method is an effective non-contact measurement approach. The calibration greatly affects the measurement precision of structured light systems. To construct a large-scale structured light system with high accuracy, a large-scale and precise calibration gauge is always required, which leads to an increased cost. To this end, in this paper, a calibration method with a planar mirror is proposed to reduce the calibration gauge size and cost. An out-of-focus camera calibration method is also proposed to overcome the defocusing problem caused by the shortened distance during the calibration procedure. The experimental results verify the accuracy of the proposed calibration method.

  20. Spacecraft attitude calibration/verification baseline study

    NASA Technical Reports Server (NTRS)

    Chen, L. C.

    1981-01-01

    A baseline study for a generalized spacecraft attitude calibration/verification system is presented. It can be used to define software specifications for three major functions required by a mission: the pre-launch parameter observability and data collection strategy study; the in-flight sensor calibration; and the post-calibration attitude accuracy verification. Analytical considerations are given for both single-axis and three-axis spacecraft. The three-axis attitudes considered include the inertial-pointing attitudes, the reference-pointing attitudes, and attitudes undergoing specific maneuvers. The attitude sensors and hardware considered include the Earth horizon sensors, the plane-field Sun sensors, the coarse and fine two-axis digital Sun sensors, the three-axis magnetometers, the fixed-head star trackers, and the inertial reference gyros.

  1. Stormwater quality modelling in combined sewers: calibration and uncertainty analysis.

    PubMed

    Kanso, A; Chebbo, G; Tassin, B

    2005-01-01

    Estimating the level of uncertainty in urban stormwater quality models is vital for their utilization. This paper presents the results of the application of a Monte Carlo Markov Chain method based on Bayesian theory for the calibration and uncertainty analysis of a storm water quality model commonly used in available software. The tested model uses a hydrologic/hydrodynamic scheme to estimate the accumulation, the erosion and the transport of pollutants on surfaces and in sewers. It was calibrated for four different initial conditions of in-sewer deposits. Calibration results showed large variability in the model's responses as a function of the initial conditions. They demonstrated that the model's predictive capacity is very low.
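
    A minimal sketch of the Bayesian calibration idea with a random-walk Metropolis sampler for a single washoff parameter of a toy buildup/washoff model; the model, prior, and data are illustrative, not the sewer quality model tested in the paper.

```python
# Random-walk Metropolis sampling of the posterior of one washoff parameter k,
# given synthetic observed pollutant loads. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(4)
rain = rng.gamma(2.0, 2.0, 50)                                   # rainfall intensities
obs = 5.0 * (1 - np.exp(-0.4 * rain)) + rng.normal(0, 0.3, 50)   # observed loads

def log_post(k):
    if not 0.0 < k < 2.0:                  # uniform prior on (0, 2)
        return -np.inf
    sim = 5.0 * (1 - np.exp(-k * rain))
    return -0.5 * np.sum((obs - sim) ** 2) / 0.3**2

k, lp_k, samples = 1.0, log_post(1.0), []
for _ in range(20_000):
    prop = k + rng.normal(0, 0.05)         # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp_k:
        k, lp_k = prop, lp_prop            # accept
    samples.append(k)

post = np.array(samples[5000:])            # discard burn-in
print(f"k = {post.mean():.3f} +/- {post.std():.3f}")
```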

  2. Optimization of Skill Retention in the U.S. Army through Initial Training Analysis and Design. Volume 1.

    DTIC Science & Technology

    1983-05-01

    observed end-of-course scores for tasks trained to criterion. The MGA software was calibrated to provide retention estimates at two levels of...exceed the MGA estimates. Thirty-five out of forty, or 87.5%, of the tasks met this expectation. For these first trial data, MGA software predicts...Objective: The objective of this effort was to perform an operational test of the capability of MGA Skill Training and Retention (STAR©) software to

  3. Reference software implementation for GIFTS ground data processing

    NASA Astrophysics Data System (ADS)

    Garcia, R. K.; Howell, H. B.; Knuteson, R. O.; Martin, G. D.; Olson, E. R.; Smuga-Otto, M. J.

    2006-08-01

    Future satellite weather instruments such as high spectral resolution imaging interferometers pose a challenge to the atmospheric science and software development communities due to the immense data volumes they will generate. An open-source, scalable reference software implementation demonstrating the calibration of radiance products from an imaging interferometer, the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), is presented. This paper covers essential design principles laid out in summary system diagrams, lessons learned during implementation and preliminary test results from the GIFTS Information Processing System (GIPS) prototype.

  4. Deep space network software cost estimation model

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1981-01-01

    A parametric software cost estimation model prepared for Deep Space Network (DSN) Data Systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit DSN software life cycle statistics. The estimation model output scales a standard DSN Work Breakdown Structure skeleton, which is then input into a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.

  5. SU-E-T-24: Development and Implementation of an Automated Algorithm to Determine Radiation Isocenter, Radiation vs. Light Field Coincidence, and Analyze Strip Tests.

    PubMed

    Hyer, D; Mart, C

    2012-06-01

    The aim of this study was to develop a phantom and analysis software that could be used to quickly and accurately determine the location of radiation isocenter using the Electronic Portal Imaging Device (EPID). The phantom could then be used as a static reference point for performing other tests including: radiation vs. light field coincidence, MLC and Jaw strip tests, and Varian Optical Guidance Platform (OGP) calibration. The solution proposed uses a collimator setting of 10×10 cm to acquire EPID images of the new phantom constructed from LEGO® blocks. Images from a number of gantry and collimator angles are analyzed by the software to determine the position of the jaws and center of the phantom in each image. The distance between a chosen jaw and the phantom center is then compared to the same distance measured after a 180 degree collimator rotation to determine if the phantom is centered in the dimension being investigated. The accuracy of the algorithm's measurements was verified by independent measurement to be approximately equal to the detector's pitch. Light versus radiation field as well as MLC and Jaw strip tests are performed using measurements based on the phantom center once located at the radiation isocenter. Reproducibility tests show that the algorithm's results were objectively repeatable. Additionally, the phantom and software are completely independent of linac vendor and this study presents results from two major linac manufacturers. An OGP calibration array was also integrated into the phantom to allow calibration of the OGP while the phantom is positioned at radiation isocenter to reduce setup uncertainty contained in the calibration. This solution offers a quick, objective method to perform isocenter localization as well as laser alignment, OGP calibration, and other tests on a monthly basis. © 2012 American Association of Physicists in Medicine.
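
    The centering test described above can be sketched as a simple symmetry argument: the offset of the phantom center from the collimator rotation axis along one direction is half the change in the jaw-to-center distance after a 180 degree collimator rotation. The distances below are illustrative, not measured values.

```python
# Symmetry check sketch: jaw-to-phantom-centre distance at collimator 0 and
# 180 degrees; half the difference is the centring offset on that axis.
def isocenter_offset(dist_col0, dist_col180):
    """Offset of the phantom centre from the rotation axis (same units as inputs);
    zero means the phantom is centred along the axis being tested."""
    return 0.5 * (dist_col0 - dist_col180)

print(isocenter_offset(dist_col0=50.8, dist_col180=49.6))  # 0.6 (illustrative mm)
```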

  6. WE-D-204-03: A Six-Year Longitudinal Evaluation of the DICOM GSDF Conformance Stability of LCD Monitors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKenney, S; Bevins, N; Flynn, M

    2015-06-15

    Purpose: The calibration of monitors in radiology is critical to ensure a standardized reading environment. If left unchecked, monitors initially calibrated to follow the DICOM Grayscale Standard Display Function (GSDF) can fall out of calibration. This work presents a quantitative evaluation of the stability of a cohort of monitors with similar deployment times and clinical utilization. Methods: Fifty-four liquid crystal display (LCD) monitors (NEC L200ME) were deployed for clinical use in 2009. At that time, a subset of eight of these monitors was used to generate a look-up table (LUT) using the open-source software pacsDisplay. The software was used to load the LUT to the graphics card of the computer in order to make the monitors compliant with the GSDF. The luminance response of the monitors was evaluated twice over six years, once in 2011 and again in 2015. Results: As expected, the maximum luminance of the monitors decreased over time, with an average reduction from 2009 of 35% in 2011, and 53% in 2015. The luminance ratio (maximum luminance divided by the minimum) also decreased, with all of the decrease occurring in the first two years (average 20%). There was an overall increase in relative error compared with the DICOM GSDF from measurement to measurement, indicating that deviation from the GSDF increases with monitor luminance reduction. Along with changes in luminance, several other issues were identified during the testing, including non-uniformities, bad pixels, and missing calibration software. Conclusion: From the initial installation of these monitors, most of the degradation occurred during the first two years, highlighting the importance of routine clinical testing of displays. Following such quality assurance, displays could be either re-calibrated or replaced depending on different thresholds. In addition, other issues not related to luminance could be identified and corrected.
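
    A small sketch of the kind of trending reported above: maximum luminance, luminance ratio, and loss relative to the baseline year. The cd/m2 values are invented to mirror the reported percentages, not the measured NEC L200ME data.

```python
# Track maximum luminance, luminance ratio, and relative loss over the
# measurement campaign. All luminance values below are illustrative.
measurements = {
    2009: {"lmax": 400.0, "lmin": 0.80},
    2011: {"lmax": 260.0, "lmin": 0.65},
    2015: {"lmax": 188.0, "lmin": 0.47},
}

base = measurements[2009]
for year, m in measurements.items():
    ratio = m["lmax"] / m["lmin"]
    drop = 100.0 * (base["lmax"] - m["lmax"]) / base["lmax"]
    print(f"{year}: Lmax {m['lmax']:.0f} cd/m2, ratio {ratio:.0f}, "
          f"loss vs. 2009 {drop:.0f}%")
```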

  7. Advanced communications technology satellite high burst rate link evaluation terminal communication protocol software user's guide, version 1.0

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    1993-01-01

    The Communication Protocol Software was developed at the NASA Lewis Research Center to support the Advanced Communications Technology Satellite High Burst Rate Link Evaluation Terminal (ACTS HBR-LET). The HBR-LET is an experimenters terminal to communicate with the ACTS for various experiments by government, university, and industry agencies. The Communication Protocol Software is one segment of the Control and Performance Monitor (C&PM) Software system of the HBR-LET. The Communication Protocol Software allows users to control and configure the Intermediate Frequency Switch Matrix (IFSM) on board the ACTS to yield a desired path through the spacecraft payload. Besides IFSM control, the C&PM Software System is also responsible for instrument control during HBR-LET experiments, uplink power control of the HBR-LET to demonstrate power augmentation during signal fade events, and data display. The Communication Protocol Software User's Guide, Version 1.0 (NASA CR-189162) outlines the commands and procedures to install and operate the Communication Protocol Software. Configuration files used to control the IFSM, operator commands, and error recovery procedures are discussed. The Communication Protocol Software Maintenance Manual, Version 1.0 (NASA CR-189163, to be published) is a programmer's guide to the Communication Protocol Software. This manual details the current implementation of the software from a technical perspective. Included is an overview of the Communication Protocol Software, computer algorithms, format representations, and computer hardware configuration. The Communication Protocol Software Test Plan (NASA CR-189164, to be published) provides a step-by-step procedure to verify the operation of the software. Included in the Test Plan is command transmission, telemetry reception, error detection, and error recovery procedures.

  8. 40 CFR 1065.330 - Exhaust-flow calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... CONTROLS ENGINE-TESTING PROCEDURES Calibrations and Verifications Flow-Related Measurements § 1065.330... use other reference meters such as laminar flow elements, which are not commonly designed to withstand...

  9. Experimental techniques for the calibration of lidar depolarization channels in EARLINET

    NASA Astrophysics Data System (ADS)

    Belegante, Livio; Bravo-Aranda, Juan Antonio; Freudenthaler, Volker; Nicolae, Doina; Nemuc, Anca; Ene, Dragos; Alados-Arboledas, Lucas; Amodeo, Aldo; Pappalardo, Gelsomina; D'Amico, Giuseppe; Amato, Francesco; Engelmann, Ronny; Baars, Holger; Wandinger, Ulla; Papayannis, Alexandros; Kokkalis, Panos; Pereira, Sérgio N.

    2018-02-01

    Particle depolarization ratios retrieved from lidar measurements are commonly used for aerosol-typing studies, microphysical inversion, or mass concentration retrievals. The particle depolarization ratio is one of the primary parameters that can differentiate several major aerosol components, but only if the measurements are accurate enough. The accuracy related to the retrieval of particle depolarization ratios is the driving factor for assessing and improving the uncertainties of the depolarization products. This paper presents different depolarization calibration procedures used to improve the quality of the depolarization data. The results illustrate a significant improvement of the depolarization lidar products for all the selected lidar stations that have implemented depolarization calibration procedures. The calibrated volume and particle depolarization profiles at 532 nm show values that fall within a range that is generally accepted in the literature.
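
    The quantities being calibrated can be sketched as follows: a calibrated volume depolarization ratio from the cross/parallel signal ratio and a calibration factor, and a particle depolarization ratio following the relation commonly used in the lidar literature (e.g., Freudenthaler et al., 2009). The signal values, calibration factor, molecular depolarization, and backscatter ratio are assumptions.

```python
# Volume and particle linear depolarization ratios from calibrated channel
# signals. All numerical values are illustrative assumptions.
def volume_depol(p_cross, p_parallel, calib_factor):
    """Volume linear depolarization ratio from the channel signal ratio."""
    return (p_cross / p_parallel) / calib_factor

def particle_depol(delta_v, backscatter_ratio, delta_mol=0.004):
    """Particle linear depolarization ratio (assumed molecular value at 532 nm)."""
    r, dm, dv = backscatter_ratio, delta_mol, delta_v
    return ((1 + dm) * dv * r - (1 + dv) * dm) / ((1 + dm) * r - (1 + dv))

dv = volume_depol(p_cross=0.9, p_parallel=12.0, calib_factor=1.8)
print(f"volume depol {dv:.3f}, particle depol {particle_depol(dv, 5.0):.3f}")
```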

  10. PC based temporary shielding administrative procedure (TSAP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olsen, D.E.; Pederson, G.E.; Hamby, P.N.

    1995-03-01

    A completely new Administrative Procedure for temporary shielding was developed for use at Commonwealth Edison's six nuclear stations. This procedure promotes the use of shielding, and addresses industry requirements for the use and control of temporary shielding. The importance of an effective procedure has increased since more temporary shielding is being used as ALARA goals become more ambitious. To help implement the administrative procedure, a personal computer software program was written to incorporate the procedural requirements. This software incorporates the usability of a Windows graphical user interface with extensive help and database features. This combination of a comprehensive administrative procedure and user-friendly software promotes the effective use and management of temporary shielding while ensuring that industry requirements are met.

  11. Node-to-node field calibration of wireless distributed air pollution sensor network.

    PubMed

    Kizel, Fadi; Etzion, Yael; Shafran-Nathan, Rakefet; Levy, Ilan; Fishbain, Barak; Bartonova, Alena; Broday, David M

    2018-02-01

    Low-cost air quality sensors offer high-resolution spatiotemporal measurements that can be used for air resources management and exposure estimation. Yet, such sensors require frequent calibration to provide reliable data, since even after a laboratory calibration they might not report correct values when they are deployed in the field, due to interference with other pollutants, as a result of sensitivity to environmental conditions and due to sensor aging and drift. Field calibration has been suggested as a means for overcoming these limitations, with the common strategy involving periodical collocations of the sensors at an air quality monitoring station. However, the cost and complexity involved in relocating numerous sensor nodes back and forth, and the loss of data during the repeated calibration periods make this strategy inefficient. This work examines an alternative approach, a node-to-node (N2N) calibration, where only one sensor in each chain is directly calibrated against the reference measurements and the rest of the sensors are calibrated sequentially one against the other while they are deployed and collocated in pairs. The calibration can be performed multiple times as a routine procedure. This procedure minimizes the total number of sensor relocations, and enables calibration while simultaneously collecting data at the deployment sites. We studied N2N chain calibration and the propagation of the calibration error analytically, computationally and experimentally. The in-situ N2N calibration is shown to be generic and applicable for different pollutants, sensing technologies, sensor platforms, chain lengths, and sensor order within the chain. In particular, we show that chain calibration of three nodes, each calibrated for a week, propagate calibration errors that are similar to those found in direct field calibration. Hence, N2N calibration is shown to be suitable for calibration of distributed sensor networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
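
    A simplified sketch of the N2N chain: node 0 is regressed against the reference, and each later node against its already-calibrated neighbour during collocation. Linear gain/offset sensors and the synthetic readings are assumptions for illustration.

```python
# Node-to-node chain calibration with linear per-node corrections.
# Sensors, gains, and pollutant levels are synthetic.
import numpy as np

rng = np.random.default_rng(5)
true = rng.uniform(10, 80, 200)                        # true pollutant levels

def raw(node_gain, node_offset, x):
    """Raw reading of a node with unknown gain/offset plus noise."""
    return node_gain * x + node_offset + rng.normal(0, 0.5, x.size)

nodes = [(1.10, 3.0), (0.92, -2.0), (1.05, 1.5)]       # unknown gain/offset per node
calibrations = []

# Node 0 is calibrated against the reference; node k against calibrated node k-1
for k, (g, o) in enumerate(nodes):
    target = true if k == 0 else np.polyval(calibrations[k - 1],
                                            raw(*nodes[k - 1], true))
    calibrations.append(np.polyfit(raw(g, o, true), target, deg=1))

for k, (g, o) in enumerate(nodes):
    corrected = np.polyval(calibrations[k], raw(g, o, true))
    print(f"node {k}: RMSE after chain calibration "
          f"{np.sqrt(np.mean((corrected - true) ** 2)):.2f}")
```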

  12. Coronal Magnetography of Solar Active Regions Using Coordinated SOHO/CDS and VLA Observations

    NASA Technical Reports Server (NTRS)

    Brosius, Jeffrey W.

    1999-01-01

    The purpose of this project is to apply the coronal magnetographic technique to SOHO (Solar and Heliospheric Observatory)/CDS (Coronal Diagnostic Spectrometer) EUV (extreme ultraviolet) and coordinated VLA microwave observations of solar active regions to derive the strength and structure of the coronal magnetic field. A CDS observing plan was developed for obtaining spectra needed to derive active region differential emission measures (DEMs) required for coronal magnetography. VLA observations were proposed and obtained. SOHO JOP 100 was developed, tested, approved, and implemented to obtain coordinated CDS (Coronal Diagnostic Spectrometer)/EIT (Extreme ultraviolet Imaging Telescope)/VLA (Very Large Array)/TRACE (Transition Region and Coronal Explorer)/SXT (Soft X-ray Telescope) observations of active regions on April 12, May 9, May 13, and May 23. Analysis of all four data sets began, with heaviest concentration on CDS data. It is found that 200-pixel (14 Å in NIS1) wavelength windows are appropriate for extracting broadened Gaussian line profile fit parameters for lines including Fe XIV at 334.2, Fe XVI at 335.4, Fe XVI at 360.8, and Mg IX at 368.1 over the 4 arcmin by 4 arcmin CDS field of view. Extensive efforts were focused on learning and applying CDS software, and including it in new IDL procedures to carry out calculations relating to coronal magnetography. An important step is to extract Gaussian profile fits to all the lines needed to derive the DEM in each spatial pixel of any given active region. The standard CDS absolute intensity calibration software was applied to derived intensity images, revealing that ratios between density-insensitive lines like Fe XVI 360.8/335.4 yield good agreement with theory. However, the resulting absolute intensities of those lines are very high, indicating that revisions to the CDS absolute intensity calibrations remain to be included in the CDS software, an essential step to deriving reliable coronal magnetograms. With lessons learned and high quality data obtained during the past year, coronal magnetography will be successfully pursued under my new SOHO GI program.

  13. A procedure for radiometric recalibration of Landsat 5 TM reflective-band data

    USGS Publications Warehouse

    Chander, G.; Haque, M.O.; Micijevic, E.; Barsi, J.A.

    2010-01-01

    From the Landsat program's inception in 1972 to the present, the Earth science user community has been benefiting from a historical record of remotely sensed data. The multispectral data from the Landsat 5 (L5) Thematic Mapper (TM) sensor provide the backbone for this extensive archive. Historically, the radiometric calibration procedure for the L5 TM imagery used the detectors' response to the internal calibrator (IC) on a scene-by-scene basis to determine the gain and offset for each detector. The IC system degraded with time, causing radiometric calibration errors up to 20%. In May 2003, the L5 TM data processed and distributed by the U.S. Geological Survey (USGS) Earth Resources Observation and Science Center through the National Landsat Archive Production System (NLAPS) were updated to use a lifetime lookup-table (LUT) gain model to radiometrically calibrate TM data instead of using scene-specific IC gains. Further modification of the gain model was performed in 2007. The L5 TM data processed using IC prior to the calibration update do not benefit from the recent calibration revisions. A procedure has been developed to give users the ability to recalibrate their existing level-1 products. The best recalibration results are obtained if the work-order report that was included in the original standard data product delivery is available. However, if users do not have the original work-order report, the IC trends can be used for recalibration. The IC trends were generated using the radiometric gain trends recorded in the NLAPS database. This paper provides the details of the recalibration procedure for the following: 1) data processed using IC where users have the work-order file; 2) data processed using IC where users do not have the work-order file; 3) data processed using prelaunch calibration parameters; and 4) data processed using the previous version of the LUT (e.g., LUT03) that was released before April 2, 2007.
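
    The idea of the recalibration can be sketched as undoing the original per-band gain/bias (e.g., taken from the work-order report or the NLAPS IC trends) and re-applying the revised lifetime LUT coefficients. The numbers below are placeholders, not actual L5 TM calibration coefficients, and the sketch glosses over the product's DN scaling details:

      # Illustrative sketch of the recalibration concept described above.
      import numpy as np

      def recalibrate_radiance(radiance_old, gain_old, bias_old, gain_new, bias_new):
          """Map radiance produced with old coefficients to the revised calibration."""
          dn = (radiance_old - bias_old) / gain_old      # recover the raw detector DN
          return gain_new * dn + bias_new                # re-apply the LUT calibration

      band1 = np.array([[52.1, 60.4], [48.7, 55.0]])     # old-calibration radiance
      band1_recal = recalibrate_radiance(band1, gain_old=0.76, bias_old=-1.52,
                                         gain_new=0.72, bias_new=-1.52)
      print(band1_recal)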

  14. Methods and Best Practice to Intercompare Dissolved Oxygen Sensors and Fluorometers/Turbidimeters for Oceanographic Applications.

    PubMed

    Pensieri, Sara; Bozzano, Roberto; Schiano, M Elisabetta; Ntoumas, Manolis; Potiris, Emmanouil; Frangoulis, Constantin; Podaras, Dimitrios; Petihakis, George

    2016-05-17

    In European seas, ocean monitoring strategies in terms of key parameters, space and time scales vary widely for a range of technical and economic reasons. Nonetheless, the growing interest in the ocean interior promotes the investigation of processes such as oxygen consumption, primary productivity and ocean acidity, requiring that close attention is paid to the instruments in terms of measurement setup, configuration, calibration, maintenance procedures and quality assessment. To this aim, two separate hardware and software tools were developed to test and simultaneously intercompare several oxygen probes and fluorometers/turbidimeters, respectively, under the same environmental conditions, with a configuration as close as possible to real in-situ deployment. The chamber designed to perform chlorophyll-a and turbidity tests allowed the simultaneous acquisition of analogue and digital signals from several sensors and was sufficiently compact to be used both in the laboratory and onboard vessels. Methodologies and best practices for the intercomparison of dissolved oxygen sensors and fluorometers/turbidimeters are described, which aid in promoting interoperability and access to key infrastructures, such as ocean observatories and calibration facilities. Results from laboratory tests as well as field tests in the Mediterranean Sea are presented.

  15. Morphometric analysis - Cone beam computed tomography to predict bone quality and quantity.

    PubMed

    Hohlweg-Majert, B; Metzger, M C; Kummer, T; Schulze, D

    2011-07-01

    Modified quantitative computed tomography is a method used to predict bone quality and quantify the bone mass of the jaw. The aim of this study was to determine whether bone quantity or quality can be detected by cone beam computed tomography (CBCT) combined with image analysis. MATERIALS AND PROCEDURES: Different measurements recorded on two phantoms (Siemens phantom, Comac phantom) were evaluated on images taken with the Somatom VolumeZoom (Siemens Medical Solutions, Erlangen, Germany) and the NewTom 9000 (NIM s.r.l., Verona, Italy) in order to calculate a calibration curve. The spatial relationships of six sample cylinders and the repositioning from four pig skull halves relative to adjacent defined anatomical structures were assessed by means of three-dimensional visualization software. The calibration curves for computed tomography (CT) and cone beam computed tomography (CBCT) using the Siemens phantom showed a linear correlation in both modalities between the Hounsfield units (HU) and bone morphology. A correction factor for CBCT was calculated. Exact information about the micromorphology of the bone cylinders was only available using micro computed tomography. Cone beam computed tomography is a suitable choice for analysing bone mass, but it does not give any information about bone quality. 2010 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
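
    The calibration-curve and correction-factor step can be sketched as two linear fits against phantom inserts of known density, with CBCT grey values mapped onto the CT HU scale. The densities and measured values below are invented for illustration and are not the study's data:

      # Hedged sketch of deriving calibration curves and a CBCT correction factor.
      import numpy as np

      density = np.array([0.0, 0.2, 0.4, 0.8, 1.2])         # g/cm^3, phantom inserts
      hu_ct   = np.array([-1000, -690, -380, 240, 860])     # mean HU measured on CT
      hu_cbct = np.array([-950, -640, -310, 300, 980])      # mean grey values on CBCT

      # Linear calibration curves: image value as a function of insert density
      slope_ct, icept_ct = np.polyfit(density, hu_ct, 1)
      slope_cbct, icept_cbct = np.polyfit(density, hu_cbct, 1)

      def cbct_to_ct(value_cbct):
          """Map a CBCT grey value onto the CT HU scale via the two fits."""
          density_est = (value_cbct - icept_cbct) / slope_cbct
          return slope_ct * density_est + icept_ct

      print(cbct_to_ct(hu_cbct))   # should roughly reproduce hu_ct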

  16. Methods and Best Practice to Intercompare Dissolved Oxygen Sensors and Fluorometers/Turbidimeters for Oceanographic Applications

    PubMed Central

    Pensieri, Sara; Bozzano, Roberto; Schiano, M. Elisabetta; Ntoumas, Manolis; Potiris, Emmanouil; Frangoulis, Constantin; Podaras, Dimitrios; Petihakis, George

    2016-01-01

    In European seas, ocean monitoring strategies in terms of key parameters, space and time scales vary widely for a range of technical and economic reasons. Nonetheless, the growing interest in the ocean interior promotes the investigation of processes such as oxygen consumption, primary productivity and ocean acidity, requiring that close attention is paid to the instruments in terms of measurement setup, configuration, calibration, maintenance procedures and quality assessment. To this aim, two separate hardware and software tools were developed to test and simultaneously intercompare several oxygen probes and fluorometers/turbidimeters, respectively, under the same environmental conditions, with a configuration as close as possible to real in-situ deployment. The chamber designed to perform chlorophyll-a and turbidity tests allowed the simultaneous acquisition of analogue and digital signals from several sensors and was sufficiently compact to be used both in the laboratory and onboard vessels. Methodologies and best practices for the intercomparison of dissolved oxygen sensors and fluorometers/turbidimeters are described, which aid in promoting interoperability and access to key infrastructures, such as ocean observatories and calibration facilities. Results from laboratory tests as well as field tests in the Mediterranean Sea are presented. PMID:27196908

  17. Characterization of large-area pressure sensitive robot skin

    NASA Astrophysics Data System (ADS)

    Saadatzi, Mohammad Nasser; Baptist, Joshua R.; Wijayasinghe, Indika B.; Popa, Dan O.

    2017-05-01

    Sensorized robot skin has considerable promise to enhance robots' tactile perception of surrounding environments. For physical human-robot interaction (pHRI) or autonomous manipulation, a high spatial sensor density is required, typically driven by the skin location on the robot. In our previous study, a 4x4 flexible array of strain sensors was printed and packaged onto Kapton sheets and silicone encapsulants. In this paper, we extend the surface area of the patch to larger arrays with up to 128 tactel elements. To address scalability, sensitivity, and calibration challenges, a novel electronic module, free of the traditional signal conditioning circuitry, was created. The electronic design relies on a software-based calibration scheme using high-resolution analog-to-digital converters with internal programmable gain amplifiers. We first show the efficacy of the proposed method with a 4x4 skin array using controlled pressure tests, and then perform procedures to evaluate each sensor's characteristics, such as dynamic force-to-strain behavior, repeatability, and signal-to-noise ratio. In order to handle larger sensor surfaces, an automated force-controlled test cycle was carried out. Results demonstrate that our approach leads to reliable and efficient methods for extracting tactile models for use in future interaction with collaborative robots.
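
    The software-side characterization of a single taxel can be sketched as a linear force-to-reading fit plus simple repeatability and signal-to-noise metrics computed from repeated force-controlled test cycles. Array sizes, forces, and readings below are hypothetical, not data from the printed skin patches:

      # Illustrative per-taxel characterization from force-controlled test cycles.
      import numpy as np

      rng = np.random.default_rng(2)
      applied_force = np.tile(np.linspace(0.0, 5.0, 11), (10, 1))    # N, 10 cycles
      readings = 310.0 * applied_force + 1200.0 + rng.normal(0, 8, applied_force.shape)

      # Sensitivity (counts per newton) and offset from all cycles pooled together
      sensitivity, offset = np.polyfit(applied_force.ravel(), readings.ravel(), 1)

      # Repeatability: worst-case spread of readings across cycles at any force level
      repeatability = readings.std(axis=0).max()

      # Signal-to-noise ratio: full-scale signal swing over the zero-force noise floor
      noise_floor = readings[:, 0].std()
      snr_db = 20 * np.log10((readings.mean(axis=0)[-1] - offset) / noise_floor)

      print(f"sensitivity={sensitivity:.1f} counts/N, "
            f"repeatability={repeatability:.1f} counts, SNR={snr_db:.1f} dB")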

  18. Estimation of River Discharge at Ungauged Catchment using GIS Map Correlation Method as Applied in Sta. Lucia River in Mauban, Quezon, Philippines

    NASA Astrophysics Data System (ADS)

    Monjardin, Cris Edward F.; Uy, Francis Aldrine A.; Tan, Fibor J.

    2017-06-01

    This paper presents the use of the GIS Map Correlation (GMC) Method, a novel method for prediction in ungauged basins (PUB), to estimate river flow at an ungauged catchment. The PUB method used here is intended to reduce the time and cost of data gathering, since it relies on a calibrated reference watershed with nearly the same characteristics in terms of slope, curve number, land cover, climatic condition, and average basin elevation. Furthermore, it utilized a set of modelling software driven by digital elevation models (DEMs), rainfall, and discharge data. The researchers estimated the river flow of the Sta. Lucia River in Quezon province, the ungauged catchment. They assessed 11 gauged catchments and determined which basin could be correlated with Sta. Lucia. After finding the most correlated basin, the researchers used its data and adjusted parameters for the ungauged catchment. To evaluate the accuracy of the method, they simulated a rainfall event in the catchment and compared the actual discharge with the discharge generated by HEC-HMS. The researchers found that the method showed a good fit between the compared results, indicating that the GMC Method is effective for the calibration of ungauged catchments.

  19. A versatile calibration procedure for portable coded aperture gamma cameras and RGB-D sensors

    NASA Astrophysics Data System (ADS)

    Paradiso, V.; Crivellaro, A.; Amgarou, K.; de Lanaute, N. Blanc; Fua, P.; Liénard, E.

    2018-04-01

    The present paper proposes a versatile procedure for the geometrical calibration of coded aperture gamma cameras and RGB-D depth sensors, using only one radioactive point source and a simple experimental set-up. Calibration data is then used for accurately aligning radiation images retrieved by means of the γ-camera with the respective depth images computed with the RGB-D sensor. The system resulting from such a combination is thus able to retrieve, automatically, the distance of radioactive hotspots by means of pixel-wise mapping between gamma and depth images. This procedure is of great interest for a wide number of applications, ranging from precise automatic estimation of the shape and distance of radioactive objects to Augmented Reality systems. Incidentally, the corresponding results validated the choice of a perspective design model for a coded aperture γ-camera.

  20. TRACC: an open source software for processing sap flux data from thermal dissipation probes

    Treesearch

    Eric J. Ward; Jean-Christophe Domec; John King; Ge Sun; Steve McNulty; Asko Noormets

    2017-01-01

    Key message: TRACC is open-source software for standardizing the cleaning, conversion, and calibration of sap flux density data from thermal dissipation probes, which addresses issues of nighttime transpiration and water storage. Abstract: Thermal dissipation probes (TDPs) have become a widely used method of monitoring plant water use in recent years. The use of TDPs...
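
    The core conversion that such tools standardize can be sketched with the widely cited Granier calibration for thermal dissipation probes. This is a generic illustration, not necessarily TRACC's exact implementation, and the zero-flow baseline handling below (a simple maximum ΔT) glosses over the nighttime-transpiration and storage issues the abstract mentions:

      # Minimal sketch of the thermal-dissipation conversion to sap flux density.
      import numpy as np

      def sap_flux_density(delta_t, delta_t_max):
          """Granier (1985): Fd = 118.99e-6 * K^1.231 (m3 m-2 s-1)."""
          k = (delta_t_max - delta_t) / delta_t
          return 118.99e-6 * np.clip(k, 0.0, None) ** 1.231

      delta_t = np.array([10.2, 9.6, 8.8, 9.1, 10.1])   # deg C, probe temperature difference
      delta_t_max = delta_t.max()                        # assumed zero-flow baseline
      print(sap_flux_density(delta_t, delta_t_max))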

  1. Preliminary Evaluation of the Radiometric Calibration of LANDSAT-4 Thematic Mapper Data by the Canada Centre for Remote Sensing

    NASA Technical Reports Server (NTRS)

    Murphy, J.; Park, W.; Fitzgerald, A.

    1985-01-01

    The radiometric characteristics of the LANDSAT-4 TM sensor are being studied with a view to developing absolute and relative radiometric calibration procedures. Preliminary results from several different approaches to the relative correction of all detectors within each band are reported. Topics covered include: the radiometric correction method; absolute calibration; the relative radiometric calibration algorithm; relative gain and offset calibration; relative gain and offset observations; and residual radiometric striping.

  2. Automatic Calibration of Global Flow Routing Model Parameters in the Amazon Basin Using Virtual SWOT Data

    NASA Astrophysics Data System (ADS)

    Mouffe, Melodie; Getirana, Augusto; Ricci, Sophie; Lion, Christine; Biancamaria, Sylvian; Boone, Aaron; Mognard, Nelly; Rogel, Philippe

    2013-09-01

    The Surface Water and Ocean Topography (SWOT) wide-swath altimetry mission will provide measurements of water surface elevation (WSE) at a global scale. The aim of this study is to investigate the potential of these satellite data for the calibration of the hydrological model HyMAP over the Amazon river basin. Since SWOT has not yet been launched, synthetic observations are used to calibrate the river bed depth and width, the Manning coefficient, and the baseflow concentration time. The calibration process consists of minimizing a cost function that describes the difference between the simulated and observed WSE, using an evolutionary, global, multi-objective algorithm. We found that the calibration procedure is able to retrieve an optimal set of parameters that brings the simulated WSE closer to the observations. Still, with a global calibration procedure in which a uniform correction is applied, the improvement is limited to a mean correction over the catchment and the simulation period. We conclude that, in order to benefit from the high resolution and complete coverage of the SWOT mission, the calibration process should be carried out sequentially in time over sub-domains as observations become available.
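
    The kind of cost function involved can be sketched as the RMSE between simulated and virtual observed WSE, minimized here with SciPy's differential evolution as a stand-in for the multi-objective evolutionary algorithm; the toy "model", parameters, and observations below are invented and are not HyMAP:

      # Hedged sketch of a WSE-based calibration objective and a global optimizer.
      import numpy as np
      from scipy.optimize import differential_evolution

      obs_wse = np.array([12.3, 12.8, 13.5, 13.1, 12.6])        # virtual SWOT WSE (m)

      def toy_model(params):
          river_depth, manning_n = params
          # Placeholder stage response standing in for the routing model output
          return river_depth + 5.0 * manning_n * np.array([1.0, 1.2, 1.5, 1.3, 1.1])

      def cost(params):
          sim = toy_model(params)
          return np.sqrt(np.mean((sim - obs_wse) ** 2))          # RMSE on WSE

      result = differential_evolution(cost, bounds=[(5.0, 20.0), (0.01, 0.08)], seed=0)
      print(result.x, result.fun)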

  3. Calibration of transonic and supersonic wind tunnels

    NASA Technical Reports Server (NTRS)

    Reed, T. D.; Pope, T. C.; Cooksey, J. M.

    1977-01-01

    State-of-the-art instrumentation and procedures for calibrating transonic (0.6 less than M less than 1.4) and supersonic (M less than or equal to 3.5) wind tunnels were reviewed and evaluated. Major emphasis was given to transonic tunnels. Continuous, blowdown and intermittent tunnels were considered. The required measurements of pressure, temperature, flow angularity, noise and humidity were discussed, and the effects of measurement uncertainties were summarized. A comprehensive review of instrumentation currently used to calibrate empty tunnel flow conditions was included. The recent results of relevant research are noted and recommendations for achieving improved data accuracy are made where appropriate. It is concluded, for general testing purposes, that satisfactory calibration measurements can be achieved in both transonic and supersonic tunnels. The goal of calibrating transonic tunnels to within 0.001 in centerline Mach number appears to be feasible with existing instrumentation, provided correct calibration procedures are carefully followed. A comparable accuracy can be achieved off-centerline with carefully designed, conventional probes, except near Mach 1. In the range 0.95 less than M less than 1.05, the laser Doppler velocimeter appears to offer the most promise for improved calibration accuracy off-centerline.
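
    The basic reduction used in such calibrations can be sketched with the isentropic relation that infers centerline Mach number from measured stagnation and static pressure (valid subsonically; a Rayleigh pitot correction is needed behind a normal shock at supersonic speeds). The pressures below are illustrative:

      # Small sketch: subsonic Mach number from the isentropic p0/p ratio.
      import math

      def mach_from_pressures(p0, p_static, gamma=1.4):
          """Subsonic Mach number from stagnation and static pressure."""
          ratio = (p0 / p_static) ** ((gamma - 1.0) / gamma)
          return math.sqrt(2.0 / (gamma - 1.0) * (ratio - 1.0))

      print(mach_from_pressures(p0=101.3e3, p_static=68.0e3))   # ~0.78 for this ratio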

  4. 10 CFR 26.127 - Procedures.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., implement, and maintain written standard operating procedures for each assay performed for drug and specimen... facility shall develop, implement, and maintain written standard operating procedures for each test. The...; (2) Preparation of reagents, standards, and controls; (3) Calibration procedures; (4) Derivation of...

  5. 10 CFR 26.127 - Procedures.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., implement, and maintain written standard operating procedures for each assay performed for drug and specimen... facility shall develop, implement, and maintain written standard operating procedures for each test. The...; (2) Preparation of reagents, standards, and controls; (3) Calibration procedures; (4) Derivation of...

  6. 10 CFR 26.127 - Procedures.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., implement, and maintain written standard operating procedures for each assay performed for drug and specimen... facility shall develop, implement, and maintain written standard operating procedures for each test. The...; (2) Preparation of reagents, standards, and controls; (3) Calibration procedures; (4) Derivation of...

  7. NASA-6 atmospheric measuring station. [calibration, functional checks, and operation of measuring instruments

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Information required to calibrate, functionally check, and operate the Instrumentation Branch equipment on the NASA-6 aircraft is provided. All procedures required for preflight checks and in-flight operation of the NASA-6 atmospheric measuring station are given. The calibration section is intended for only that portion of the system maintained and calibrated by IN-MSD-12 Systems Operation contractor personnel. Maintenance is not included.

  8. Flush Airdata Sensing (FADS) System Calibration Procedures and Results for Blunt Forebodies

    NASA Technical Reports Server (NTRS)

    Cobleigh, Brent R.; Whitmore, Stephen A.; Haering, Edward A., Jr.; Borrer, Jerry; Roback, V. Eric

    1999-01-01

    Blunt-forebody pressure data are used to study the behavior of the NASA Dryden Flight Research Center flush airdata sensing (FADS) pressure model and solution algorithm. The model relates surface pressure measurements to the airdata state. Spliced from the potential flow solution for uniform flow over a sphere and the modified Newtonian impact theory, the model was shown to apply to a wide range of blunt-forebody shapes and Mach numbers. Calibrations of a sphere, spherical cones, a Rankine half body, and the F-14, F/A-18, X-33, X-34, and X-38 configurations are shown. The three calibration parameters are well-behaved from Mach 0.25 to Mach 5.0, an angle-of-attack range extending to greater than 30 deg, and an angle-of-sideslip range extending to greater than 15 deg. Contrary to the sharp calibration changes found on traditional pitot-static systems at transonic speeds, the FADS calibrations are smooth, monotonic functions of Mach number and effective angles of attack and sideslip. Because the FADS calibration is sensitive to pressure port location, detailed measurements of the actual pressure port locations on the flight vehicle are required and the wind-tunnel calibration model should have pressure ports in similar locations. The procedure for calibrating a FADS system is outlined.
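
    The blunt-body pressure model discussed above is commonly published in the form of a blend of modified Newtonian flow and potential flow over a sphere through a calibration parameter epsilon, with each port's local incidence angle computed from the effective angles of attack and sideslip and the port's cone and clock angles. The sketch below uses that commonly cited form with illustrative port locations and airdata states, and is not the flight code:

      # Sketch of a blunt-body FADS-style pressure model (illustrative values).
      import numpy as np

      def incidence_angle(alpha_e, beta_e, cone, clock):
          """Local flow incidence angle at a port located by cone/clock angles."""
          return np.arccos(
              np.cos(alpha_e) * np.cos(beta_e) * np.cos(cone)
              + np.sin(beta_e) * np.sin(clock) * np.sin(cone)
              + np.sin(alpha_e) * np.cos(beta_e) * np.cos(clock) * np.sin(cone)
          )

      def port_pressure(qc, p_inf, epsilon, theta):
          """Surface pressure: p = qc*(cos^2(theta) + eps*sin^2(theta)) + p_inf."""
          return qc * (np.cos(theta) ** 2 + epsilon * np.sin(theta) ** 2) + p_inf

      cone = np.radians([0.0, 20.0, 20.0, 20.0, 20.0])     # port cone angles
      clock = np.radians([0.0, 0.0, 90.0, 180.0, 270.0])   # port clock angles
      theta = incidence_angle(np.radians(4.0), np.radians(1.0), cone, clock)
      print(port_pressure(qc=9.5e3, p_inf=57.0e3, epsilon=0.1, theta=theta))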

  9. User-friendly freehand ultrasound calibration using Lego bricks and automatic registration.

    PubMed

    Xiao, Yiming; Yan, Charles Xiao Bo; Drouin, Simon; De Nigris, Dante; Kochanowska, Anna; Collins, D Louis

    2016-09-01

    As an inexpensive, noninvasive, and portable clinical imaging modality, ultrasound (US) has been widely employed in many interventional procedures for monitoring potential tissue deformation, surgical tool placement, and locating surgical targets. The application requires the spatial mapping between 2D US images and 3D coordinates of the patient. Although positions of the devices (i.e., the ultrasound transducer) and the patient can be easily recorded by a motion tracking system, the spatial relationship between the US image and the tracker attached to the US transducer needs to be estimated through a US calibration procedure. Previously, various calibration techniques have been proposed, where a spatial transformation is computed to match the coordinates of corresponding features in a physical phantom and those seen in the US scans. However, most of these methods are difficult for novice users to use. We proposed an ultrasound calibration method by constructing a phantom from simple Lego bricks and applying an automated multi-slice 2D-3D registration scheme without volumetric reconstruction. The method was validated for its calibration accuracy and reproducibility. Our method yields a calibration accuracy of [Formula: see text] mm and a calibration reproducibility of 1.29 mm. We have proposed a robust, inexpensive, and easy-to-use ultrasound calibration method.
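
    Once estimated, a freehand US calibration matrix is typically used by scaling an image pixel to millimetres and pushing it through the image-to-probe (calibration) and probe-to-world (tracker) transforms. The sketch below shows that chain with made-up matrices and scale factors; it is not the authors' registration scheme:

      # Hedged sketch of applying a freehand ultrasound calibration transform.
      import numpy as np

      def to_homogeneous(rotation, translation):
          t = np.eye(4)
          t[:3, :3] = rotation
          t[:3, 3] = translation
          return t

      pixel_to_mm = np.diag([0.2, 0.2, 1.0, 1.0])                       # image scaling (mm/px)
      image_to_probe = to_homogeneous(np.eye(3), [12.0, -4.5, 30.0])    # calibration result
      probe_to_world = to_homogeneous(np.eye(3), [150.0, 80.0, -20.0])  # from the tracker

      pixel = np.array([256.0, 128.0, 0.0, 1.0])                        # (u, v) in the US image
      world_point = probe_to_world @ image_to_probe @ pixel_to_mm @ pixel
      print(world_point[:3])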

  10. Calibration of a parsimonious distributed ecohydrological daily model in a data-scarce basin by exclusively using the spatio-temporal variation of NDVI

    NASA Astrophysics Data System (ADS)

    Ruiz-Pérez, Guiomar; Koch, Julian; Manfreda, Salvatore; Caylor, Kelly; Francés, Félix

    2017-12-01

    Ecohydrological modeling studies in developing countries, such as sub-Saharan Africa, often face the problem of extensive parametrical requirements and limited available data. Satellite remote sensing data may be able to fill this gap, but require novel methodologies to exploit their spatio-temporal information that could potentially be incorporated into model calibration and validation frameworks. The present study tackles this problem by suggesting an automatic calibration procedure, based on the empirical orthogonal function, for distributed ecohydrological daily models. The procedure is tested with the support of remote sensing data in a data-scarce environment - the upper Ewaso Ngiro river basin in Kenya. In the present application, the TETIS-VEG model is calibrated using only NDVI (Normalized Difference Vegetation Index) data derived from MODIS. The results demonstrate that (1) satellite data of vegetation dynamics can be used to calibrate and validate ecohydrological models in water-controlled and data-scarce regions, (2) the model calibrated using only satellite data is able to reproduce both the spatio-temporal vegetation dynamics and the observed discharge at the outlet and (3) the proposed automatic calibration methodology works satisfactorily and it allows for a straightforward incorporation of spatio-temporal data into the calibration and validation framework of a model.
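
    The EOF-based comparison underlying such a calibration can be sketched as computing the leading EOFs of the observed and simulated NDVI fields with an SVD and using the mismatch between the leading spatial loadings as the objective to minimize. Array shapes and data below are synthetic, and this is a conceptual illustration rather than the study's procedure:

      # Brief sketch of an EOF-based objective for spatio-temporal NDVI fields.
      import numpy as np

      rng = np.random.default_rng(3)
      n_time, n_cells = 240, 500                    # e.g., NDVI time steps over basin cells
      ndvi_obs = rng.random((n_time, n_cells))
      ndvi_sim = ndvi_obs + rng.normal(0, 0.05, ndvi_obs.shape)   # imperfect model output

      def leading_eofs(field, n_modes=3):
          anomalies = field - field.mean(axis=0)
          u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
          return vt[:n_modes]                        # spatial patterns (EOF loadings)

      def eof_objective(sim, obs, n_modes=3):
          # (EOF sign ambiguity is ignored here for brevity)
          eof_obs = leading_eofs(obs, n_modes)
          eof_sim = leading_eofs(sim, n_modes)
          return np.sqrt(np.mean((eof_sim - eof_obs) ** 2))

      print(eof_objective(ndvi_sim, ndvi_obs))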

  11. 40 CFR 600.110-08 - Equipment calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy and Carbon-Related Exhaust Emission Test Procedures § 600.110-08 Equipment calibration. The equipment used for fuel economy...

  12. 40 CFR 600.110-08 - Equipment calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy and Carbon-Related Exhaust Emission Test Procedures § 600.110-08 Equipment calibration. The equipment used for fuel economy...

  13. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The accuracy and precision of ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to be: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, and vertical precision 3 cm.

  14. Calibration of hydrological model with programme PEST

    NASA Astrophysics Data System (ADS)

    Brilly, Mitja; Vidmar, Andrej; Kryžanowski, Andrej; Bezak, Nejc; Šraj, Mojca

    2016-04-01

    PEST is a tool based on the minimization of an objective function related to the root mean square error between the model output and the measurements. We use the "singular value decomposition" (SVD) section of the PEST control file and the Tikhonov regularization method for successful estimation of model parameters. PEST sometimes fails if the inverse problem is ill-posed, but SVD ensures that PEST maintains numerical stability. The choice of the initial guess for the parameter values is an important issue in PEST and needs expert knowledge. The flexible nature of the PEST software and its ability to be applied to whole catchments at once allowed the calibration to perform extremely well across a high number of sub-catchments. The parallel computing version of PEST, called BeoPEST, was successfully used to speed up the calibration process. BeoPEST employs smart slaves and point-to-point communications to transfer data between the master and slave computers. The HBV-light model is a simple multi-tank-type model for simulating precipitation-runoff. It is a conceptual balance model of catchment hydrology which simulates discharge using rainfall, temperature and estimates of potential evaporation. The HBV-light-CLI version allows the user to run HBV-light from the command line. Input and result files are in XML form, which allows it to be easily connected with other applications such as pre- and post-processing utilities and PEST itself. The procedure was applied to a hydrological model of the Savinja catchment (1852 km2), which consists of twenty-one sub-catchments. Data are processed on an hourly basis.
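
    The kind of objective function involved can be sketched as a weighted sum of squared residuals between observed and simulated discharge, optionally with a Tikhonov regularization term pulling parameters toward preferred values. This is an illustration of the concept, not PEST's internal code; the discharge values and parameters below are invented:

      # Minimal sketch of a PEST-like weighted least-squares objective.
      import numpy as np

      def pest_like_objective(observed, simulated, weights,
                              params=None, preferred=None, reg_weight=0.0):
          residuals = weights * (observed - simulated)
          phi_measurement = np.sum(residuals ** 2)
          phi_regularization = 0.0
          if params is not None and preferred is not None:
              phi_regularization = reg_weight * np.sum(
                  (np.asarray(params) - np.asarray(preferred)) ** 2)
          return phi_measurement + phi_regularization

      q_obs = np.array([12.0, 15.5, 30.2, 22.1, 18.0])    # hourly discharge (m3/s)
      q_sim = np.array([11.2, 16.0, 27.9, 23.5, 17.4])
      print(pest_like_objective(q_obs, q_sim, weights=np.ones(5),
                                params=[0.8, 120.0], preferred=[1.0, 100.0],
                                reg_weight=0.1))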

  15. Measurement of electromagnetic tracking error in a navigated breast surgery setup

    NASA Astrophysics Data System (ADS)

    Harish, Vinyas; Baksh, Aidan; Ungi, Tamas; Lasso, Andras; Baum, Zachary; Gauvin, Gabrielle; Engel, Jay; Rudan, John; Fichtinger, Gabor

    2016-03-01

    PURPOSE: The measurement of tracking error is crucial to ensure the safety and feasibility of electromagnetically tracked, image-guided procedures. Measurement should occur in a clinical environment because electromagnetic field distortion depends on positioning relative to the field generator and metal objects. However, we could not find an accessible and open-source system for calibration, error measurement, and visualization. We developed such a system and tested it in a navigated breast surgery setup. METHODS: A pointer tool was designed for concurrent electromagnetic and optical tracking. Software modules were developed for automatic calibration of the measurement system, real-time error visualization, and analysis. The system was taken to an operating room to test for field distortion in a navigated breast surgery setup. Positional and rotational electromagnetic tracking errors were then calculated using optical tracking as a ground truth. RESULTS: Our system is quick to set up and can be rapidly deployed. The process from calibration to visualization also only takes a few minutes. Field distortion was measured in the presence of various surgical equipment. Positional and rotational error in a clean field was approximately 0.90 mm and 0.31°. The presence of a surgical table, an electrosurgical cautery, and an anesthesia machine increased the error by up to a few tenths of a millimeter and a tenth of a degree. CONCLUSION: In a navigated breast surgery setup, measurement and visualization of tracking error defines a safe working area in the presence of surgical equipment. Our system is available as an extension for the open-source 3D Slicer platform.
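
    The error metrics described above can be sketched as the distance between electromagnetically and optically tracked positions and the angle of the relative rotation, with optical tracking taken as ground truth. The poses below are invented for illustration and the sketch is not the 3D Slicer extension itself:

      # Hedged sketch of positional and rotational tracking error computation.
      import numpy as np
      from scipy.spatial.transform import Rotation as R

      def tracking_errors(pos_em, rot_em, pos_opt, rot_opt):
          positional_error = np.linalg.norm(pos_em - pos_opt)            # mm
          relative = rot_opt.inv() * rot_em                              # relative rotation
          rotational_error = np.degrees(relative.magnitude())            # degrees
          return positional_error, rotational_error

      pos_em = np.array([102.4, 55.1, -30.2])
      pos_opt = np.array([101.8, 55.6, -30.9])
      rot_em = R.from_euler("xyz", [30.2, 10.1, -5.0], degrees=True)
      rot_opt = R.from_euler("xyz", [30.0, 10.0, -5.3], degrees=True)
      print(tracking_errors(pos_em, rot_em, pos_opt, rot_opt))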

  16. Calibration and Temperature Profile of a Tungsten Filament Lamp

    ERIC Educational Resources Information Center

    de Izarra, Charles; Gitton, Jean-Michel

    2010-01-01

    The goal of this work, proposed for undergraduate students and teachers, is the calibration of a tungsten filament lamp from electrical measurements that are both simple and precise, allowing the temperature of the tungsten filament to be determined as a function of the current intensity. This calibration procedure was first applied to a conventional filament…

  17. 40 CFR 92.121 - Oxides of nitrogen analyzer calibration and check.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Oxides of nitrogen analyzer... Procedures § 92.121 Oxides of nitrogen analyzer calibration and check. (a) Quench checks; NO X analyzer. (1... performed in step in paragraph (a)(3)(i) this section. (b) Oxides of nitrogen analyzer calibration. (1...

  18. 40 CFR 92.121 - Oxides of nitrogen analyzer calibration and check.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Oxides of nitrogen analyzer... Procedures § 92.121 Oxides of nitrogen analyzer calibration and check. (a) Quench checks; NO X analyzer. (1... performed in step in paragraph (a)(3)(i) this section. (b) Oxides of nitrogen analyzer calibration. (1...

  19. 40 CFR 92.121 - Oxides of nitrogen analyzer calibration and check.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Oxides of nitrogen analyzer... Procedures § 92.121 Oxides of nitrogen analyzer calibration and check. (a) Quench checks; NO X analyzer. (1... performed in step in paragraph (a)(3)(i) this section. (b) Oxides of nitrogen analyzer calibration. (1...

  20. Hydrological processes and model representation: impact of soft data on calibration

    Treesearch

    J.G. Arnold; M.A. Youssef; H. Yen; M.J. White; A.Y. Sheshukov; A.M. Sadeghi; D.N. Moriasi; J.L. Steiner; Devendra Amatya; R.W. Skaggs; E.B. Haney; J. Jeong; M. Arabi; P.H. Gowda

    2015-01-01

    Hydrologic and water quality models are increasingly used to determine the environmental impacts of climate variability and land management. Due to differing model objectives and differences in monitored data, there are currently no universally accepted procedures for model calibration and validation in the literature. In an effort to develop accepted model calibration...
