Sample records for calorimetry software calibration

  1. Calibration Software for Use with Jurassicprok

    NASA Technical Reports Server (NTRS)

    Chapin, Elaine; Hensley, Scott; Siqueira, Paul

    2004-01-01

    The Jurassicprok Interferometric Calibration Software (also called "Calibration Processor" or simply "CP") estimates the calibration parameters of an airborne synthetic-aperture-radar (SAR) system, the raw measurement data of which are processed by the Jurassicprok software described in the preceding article. Calibration parameters estimated by CP include time delays, baseline offsets, phase screens, and radiometric offsets. CP examines raw radar-pulse data, single-look complex image data, and digital elevation map data. For each type of data, CP compares the actual values with values expected on the basis of ground-truth data. CP then converts the differences between the actual and expected values into updates for the calibration parameters in an interferometric calibration file (ICF) and a radiometric calibration file (RCF) for the particular SAR system. The updated ICF and RCF are used as inputs to both Jurassicprok and to the companion Motion Measurement Processor software (described in the following article) for use in generating calibrated digital elevation maps.
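The update step described above (compare a measurement against its ground-truth expectation, then fold the difference into a calibration parameter) can be sketched for the simplest case, a time-delay term. The function name and the numbers below are illustrative assumptions, not taken from CP itself.

```python
# Hypothetical sketch of a calibration-parameter update: convert a
# range discrepancy against ground truth into a time-delay correction.
# One-way range is assumed; names and values are illustrative only.

C = 299_792_458.0  # speed of light, m/s

def update_time_delay(current_delay_s, measured_range_m, truth_range_m):
    """Fold the measured-vs-expected range difference into the delay."""
    range_error_m = measured_range_m - truth_range_m
    return current_delay_s + range_error_m / C

# A 0.3 m range excess maps to roughly a 1 ns delay correction:
new_delay = update_time_delay(1.0e-6, 15_000.3, 15_000.0)
```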

  2. Software For Calibration Of Polarimetric SAR Data

    NASA Technical Reports Server (NTRS)

    Van Zyl, Jakob; Zebker, Howard; Freeman, Anthony; Holt, John; Dubois, Pascale; Chapman, Bruce

    1994-01-01

    POLCAL (Polarimetric Radar Calibration) is a software tool intended to assist in the calibration of synthetic-aperture radar (SAR) systems. In particular, it calibrates Stokes-matrix-format data produced as a standard product by the NASA/Jet Propulsion Laboratory (JPL) airborne imaging synthetic aperture radar (AIRSAR). Version 4.0 of POLCAL is an upgrade of version 2.0. New options include automatic absolute calibration of 89/90 data, distributed-target analysis, calibration of nearby scenes with corner reflectors, altitude or roll-angle corrections, and calibration of errors introduced by known topography. POLCAL reduces crosstalk and corrects phase calibration without the use of ground calibration equipment. Written in FORTRAN 77.

  3. Dynamic Calorimetry for Students

    ERIC Educational Resources Information Center

    Kraftmakher, Yaakov

    2007-01-01

    A student experiment on dynamic calorimetry is described. Dynamic calorimetry is a powerful technique for calorimetric studies, especially at high temperatures and pressures. A low-power incandescent lamp serves as the sample. The ScienceWorkshop data-acquisition system with DataStudio software from PASCO Scientific displays the results of the…

  4. Calibration of work zone impact analysis software for Missouri.

    DOT National Transportation Integrated Search

    2013-12-01

    This project calibrated two software programs used for estimating the traffic impacts of work zones. The WZ Spreadsheet and VISSIM programs were recommended in a previous study by the authors. The two programs were calibrated using field data fro...

  5. Validation of a new mixing chamber system for breath-by-breath indirect calorimetry.

    PubMed

    Kim, Do-Yeon; Robergs, Robert Andrew

    2012-02-01

    Limited validation research exists for applications of breath-by-breath systems of expired gas analysis indirect calorimetry (EGAIC) during exercise. We developed improved hardware and software for breath-by-breath indirect calorimetry (NEW) and validated this system as well as a commercial system (COM) against 2 methods: (i) mechanical ventilation with known calibration gas, and (ii) human subjects testing for 5 min each at rest and cycle ergometer exercise at 100 and 175 W. Mechanical calibration consisted of medical grade and certified calibration gases (4.95% CO2, 12.01% O2, balance N2; room air: 20.95% O2, 0.03% CO2, balance N2; and 100% nitrogen) and an air flow turbine calibrated with a 3-L calibration syringe. Ventilation was mimicked manually using complete 3-L calibration syringe maneuvers at a rate of 10 per minute from a Douglas bag reservoir of calibration gas. The testing of human subjects was completed in a counterbalanced sequence based on 5 repeated tests of all conditions for a single subject. Rest periods of 5 and 10 min followed the 100 and 175 W conditions, respectively. COM and NEW had similar accuracy when tested with known ventilation and gas fractions. However, during human subjects testing COM significantly under-measured carbon dioxide gas fractions, over-measured oxygen gas fractions and minute ventilation, and produced errors in each of oxygen uptake, carbon dioxide output, and respiratory exchange ratio. These discrepant findings reveal that controlled ventilation and gas fractions are insufficient to validate breath-by-breath, and perhaps even time-averaged, systems of EGAIC. The errors of the COM system reveal the need for concern over the validity of commercial systems of EGAIC.
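The gas-exchange arithmetic underlying any EGAIC system can be sketched as follows. This is the generic textbook calculation using the Haldane transformation, not the NEW or COM implementation, and STPD and temperature corrections are omitted for brevity.

```python
# Breath-by-breath gas-exchange arithmetic: infer inspired volume from
# the inert-gas (N2) balance (Haldane transformation), then compute
# VO2, VCO2, and the respiratory exchange ratio. Volumes in L/min.

FIO2, FICO2 = 0.2095, 0.0003          # room-air fractions, as quoted above
FIN2 = 1.0 - FIO2 - FICO2

def gas_exchange(ve_lpm, feo2, feco2):
    """Return (VO2, VCO2, RER) from expired ventilation and mean
    expired gas fractions."""
    fen2 = 1.0 - feo2 - feco2
    vi = ve_lpm * fen2 / FIN2          # Haldane: N2 volume is conserved
    vo2 = vi * FIO2 - ve_lpm * feo2
    vco2 = ve_lpm * feco2 - vi * FICO2
    return vo2, vco2, vco2 / vo2

# Illustrative moderate-exercise values: VE = 60 L/min, FEO2 = 16%, FECO2 = 4.5%
vo2, vco2, rer = gas_exchange(60.0, 0.16, 0.045)
```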

  6. ATLAS tile calorimeter cesium calibration control and analysis software

    NASA Astrophysics Data System (ADS)

    Solovyanov, O.; Solodkov, A.; Starchenko, E.; Karyukhin, A.; Isaev, A.; Shalanda, N.

    2008-07-01

    An online control system to calibrate and monitor the ATLAS barrel hadronic calorimeter (TileCal) with a movable radioactive source, driven by liquid flow, is described. To read out and control the system, online software has been developed using ATLAS TDAQ components such as DVS (Diagnostic and Verification System), used to verify the hardware before running, IS (Information Server), used for data and status exchange between networked computers, and DDC (DCS-to-DAQ Connection), used to connect to the PVSS-based slow-control systems of the Tile Calorimeter, high voltage and low voltage. A system of scripting facilities, based on the Python language, handles all the calibration and monitoring processes, from the hardware level to final data storage, including various abnormal situations. A Qt-based graphical user interface that displays the status of the calibration system during the cesium source scan is described. The software for analysis of the detector response, using online data, is discussed. Performance of the system and first experience from the ATLAS pit are presented.

  7. Isothermal Titration Calorimetry Can Provide Critical Thinking Opportunities

    ERIC Educational Resources Information Center

    Moore, Dale E.; Goode, David R.; Seney, Caryn S.; Boatwright, Jennifer M.

    2016-01-01

    College chemistry faculties might not have considered including isothermal titration calorimetry (ITC) in their majors' curriculum because experimental data from this instrumental method are often analyzed via automation (software). However, the software-based data analysis can be replaced with a spreadsheet-based analysis that is readily…

  8. Behavior driven testing in ALMA telescope calibration software

    NASA Astrophysics Data System (ADS)

    Gil, Juan P.; Garces, Mario; Broguiere, Dominique; Shen, Tzu-Chiang

    2016-07-01

    The ALMA software development cycle includes well defined testing stages that involve developers, testers and scientists. We adapted Behavior Driven Development (BDD) to the testing activities applied to the Telescope Calibration (TELCAL) software. BDD is an agile technique that encourages communication between roles by defining test cases in natural language to specify features and scenarios, which allows participants to share a common language and provides a high-level set of automated tests. This work describes how we implemented and maintain BDD testing for TELCAL, the infrastructure needed to support it, and proposals to expand this technique to other subsystems.
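The BDD mechanism described above, natural-language scenarios bound to executable step functions, can be illustrated with a minimal stand-alone runner. Real TELCAL tests would use a full framework such as behave or Cucumber; the scenario text and step functions here are invented for illustration only.

```python
import re

# Toy BDD runner: Given/When/Then lines are matched to registered step
# functions by pattern, so the scenario doubles as an automated test.

SCENARIO = """
Given an antenna pointing at a calibration source
When the system computes the phase calibration
Then the calibration result is marked valid
"""

steps = {}

def step(pattern):
    """Register a step function under a natural-language pattern."""
    def register(fn):
        steps[pattern] = fn
        return fn
    return register

state = {}

@step(r"an antenna pointing at a calibration source")
def given_antenna():
    state["source"] = "calibrator"

@step(r"the system computes the phase calibration")
def when_compute():
    state["result"] = {"valid": state.get("source") == "calibrator"}

@step(r"the calibration result is marked valid")
def then_valid():
    assert state["result"]["valid"]

def run(scenario):
    for line in filter(None, map(str.strip, scenario.splitlines())):
        text = re.sub(r"^(Given|When|Then|And)\s+", "", line)
        for pattern, fn in steps.items():
            if re.fullmatch(pattern, text):
                fn()
                break
        else:
            raise KeyError(f"no step matches: {text!r}")

run(SCENARIO)
```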

  9. TweezPal - Optical tweezers analysis and calibration software

    NASA Astrophysics Data System (ADS)

    Osterman, Natan

    2010-11-01

    Optical tweezers, a powerful tool for optical trapping, micromanipulation and force transduction, have in recent years become a standard technique commonly used in many research laboratories and university courses. Knowledge about the optical force acting on a trapped object can be gained only after a calibration procedure, which has to be performed (by an expert) for each type of trapped object. In this paper we present TweezPal, a user-friendly, standalone Windows software tool for optical tweezers analysis and calibration. Using TweezPal, the procedure can be performed in a matter of minutes even by non-expert users. The calibration is based on the Brownian motion of a particle trapped in a stationary optical trap, which is monitored using video or photodiode detection. The particle trajectory is imported into the software, which instantly calculates the position histogram, trapping potential, stiffness and anisotropy.

    Program summary
    Program title: TweezPal
    Catalogue identifier: AEGR_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGR_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 44 891
    No. of bytes in distributed program, including test data, etc.: 792 653
    Distribution format: tar.gz
    Programming language: Borland Delphi
    Computer: Any PC running Microsoft Windows
    Operating system: Windows 95, 98, 2000, XP, Vista, 7
    RAM: 12 Mbytes
    Classification: 3, 4.14, 18, 23
    Nature of problem: Quick, robust and user-friendly calibration and analysis of optical tweezers. The optical trap is calibrated from the trajectory of a trapped particle undergoing Brownian motion in a stationary optical trap (input data) using two methods.
    Solution method: Elimination of the experimental drift in position data. Direct calculation of the trap stiffness from the positional
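One standard way to perform the calibration TweezPal automates is the equipartition method: for a harmonically trapped bead, kB*T = k*var(x), so the trap stiffness k follows directly from the variance of the drift-corrected position trace. The sketch below simulates a trace instead of importing real tracking data; the temperature and trace parameters are illustrative.

```python
import random
import statistics

# Equipartition calibration of an optical trap: stiffness from the
# variance of a trapped particle's Brownian position fluctuations.

KB = 1.380649e-23   # Boltzmann constant, J/K

def trap_stiffness(positions_m, temperature_k=293.15):
    """k = kB*T / var(x), after removing the mean (offset/drift)."""
    xs = [x - statistics.fmean(positions_m) for x in positions_m]
    return KB * temperature_k / statistics.pvariance(xs)

# Simulated trace: Gaussian positions with 25 nm spread, as might come
# from video tracking of a bead in a stationary trap.
random.seed(1)
trace = [random.gauss(0.0, 25e-9) for _ in range(20000)]
k_trap = trap_stiffness(trace)   # expect ~ kB*T / (25 nm)^2, about 6.5e-6 N/m
```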

  10. Coleman performs VO2 Max PFS Software Calibrations and Instrument Check

    NASA Image and Video Library

    2011-02-24

    ISS026-E-029180 (24 Feb. 2011) --- NASA astronaut Catherine (Cady) Coleman, Expedition 26 flight engineer, performs VO2max portable Pulmonary Function System (PFS) software calibrations and instrument check while using the Cycle Ergometer with Vibration Isolation System (CEVIS) in the Destiny laboratory of the International Space Station.

  11. A hardware-software system for the automation of verification and calibration of oil metering units secondary equipment

    NASA Astrophysics Data System (ADS)

    Boyarnikov, A. V.; Boyarnikova, L. V.; Kozhushko, A. A.; Sekachev, A. F.

    2017-08-01

    In this article, the process of verification (calibration) of the secondary equipment of oil metering units is considered. The purpose of the work is to increase the reliability and reduce the complexity of this process by developing a hardware-software system that provides automated verification and calibration. The hardware part of this complex switches between the measuring channels of the verified controller and the reference channels of the calibrator in accordance with the specified algorithm. The developed software controls the switching of channels, sets values on the calibrator, reads the measured data from the controller, calculates errors, and compiles protocols. The system can be used to check the controllers of the secondary equipment of oil metering units in automatic verification mode (with an open communication protocol) or in semi-automatic verification mode (without one). A distinctive feature of the approach is a universal signal switch operating under software control, which can be configured for various verification (calibration) methods and thus covers the entire range of secondary-equipment controllers. Automatic verification with the hardware-software system shortens the verification time by a factor of 5 to 10 and increases the reliability of measurements by excluding the influence of the human factor.
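The error calculation such a system automates can be sketched as follows: each controller reading is compared against the calibrator's reference value and the error is expressed as a percentage of the channel span, then checked against a tolerance. The channel values and the tolerance below are illustrative, not taken from the article.

```python
# Per-channel verification: reduced error as a percentage of span,
# compared against an acceptance tolerance. Values are illustrative.

def reduced_error_pct(measured, reference, span_lo, span_hi):
    return 100.0 * (measured - reference) / (span_hi - span_lo)

def verify(channels, tolerance_pct=0.1):
    """channels: list of (measured, reference, lo, hi); returns a
    pass/fail flag per channel."""
    return [abs(reduced_error_pct(m, r, lo, hi)) <= tolerance_pct
            for m, r, lo, hi in channels]

report = verify([
    (4.002, 4.000, 4.0, 20.0),   # 4-20 mA loop, 0.0125% of span: passes
    (20.05, 20.00, 4.0, 20.0),   # 0.3125% of span: fails the 0.1% limit
])
```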

  12. WE-D-9A-06: Open Source Monitor Calibration and Quality Control Software for Enterprise Display Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bevins, N; Vanderhoek, M; Lang, S

    2014-06-15

    Purpose: Medical display monitor calibration and quality control present challenges to medical physicists. The purpose of this work is to demonstrate and share experiences with an open source package that allows for both initial monitor setup and routine performance evaluation. Methods: A software package, pacsDisplay, has been developed over the last decade to aid in the calibration of all monitors within the radiology group in our health system. The software is used to calibrate monitors to follow the DICOM Grayscale Standard Display Function (GSDF) via lookup tables installed on the workstation. Additional functionality facilitates periodic evaluations of both primary and secondary medical monitors to ensure satisfactory performance. This software is installed on all radiology workstations, and can also be run as a stand-alone tool from a USB disk. Recently, a database has been developed to store and centralize the monitor performance data and to provide long-term trends for compliance with internal standards and various accrediting organizations. Results: Implementation and utilization of pacsDisplay has resulted in improved monitor performance across the health system. Monitor testing is now performed at regular intervals and the software is being used across multiple imaging modalities. Monitor performance characteristics such as maximum and minimum luminance, ambient luminance and illuminance, color tracking, and GSDF conformity are loaded into a centralized database for system performance comparisons. Compliance reports for organizations such as MQSA, ACR, and TJC are generated automatically and stored in the same database. Conclusion: An open source software solution has simplified and improved the standardization of displays within our health system. This work serves as an example method for calibrating and testing monitors within an enterprise health system.
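A much-simplified stand-in for the GSDF conformity test such software performs is sketched below: measure luminance at a ramp of gray levels and verify that the response is monotonic and that the per-step relative contrast is uniform within a tolerance. The real DICOM GSDF test compares the response against the Barten-model JND curve; that lookup is omitted here, and the ramp and tolerance are invented.

```python
# Simplified luminance-response check (illustrative, not the DICOM
# GSDF test): monotonic response with uniform per-step contrast dL/L.

def contrast_response(luminances_cdm2):
    """Relative contrast dL/L between adjacent gray levels."""
    return [(b - a) / a for a, b in zip(luminances_cdm2, luminances_cdm2[1:])]

def check_display(luminances_cdm2, max_spread=0.15):
    steps = contrast_response(luminances_cdm2)
    monotonic = all(s > 0 for s in steps)
    mean = sum(steps) / len(steps)
    uniform = all(abs(s - mean) / mean <= max_spread for s in steps)
    return monotonic and uniform

# 18-point ramp from 0.5 to ~350 cd/m^2, roughly geometric in luminance
ramp = [0.5 * 1.47**i for i in range(18)]
ok = check_display(ramp)
```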

  13. A Practical Guide to Calibration of a GSSHA Hydrologic Model Using ERDC Automated Model Calibration Software - Efficient Local Search

    DTIC Science & Technology

    2012-02-01

    use the ERDC software implementation of the secant LM method that accommodates the PEST model independent interface to calibrate a GSSHA...how the method works. We will also demonstrate how our LM/SLM implementation compares with its counterparts as implemented in the popular PEST ...function values and total model calls for local search to converge) associated with Examples 1 and 3 using the PEST LM/SLM implementations

  14. Benefits of Oxygen Saturation Targeting Trials: Oximeter Calibration Software Revision and Infant Saturations.

    PubMed

    Whyte, Robin K; Nelson, Harvey; Roberts, Robin S; Schmidt, Barbara

    2017-03-01

    It has been reported in the 3 Benefits of Oxygen Saturation Targeting (BOOST-II) trials that changes in oximeter calibration software resulted in clearer separation between the oxygen saturations in the two trial target groups. A revised analysis of the published BOOST-II data does not support this conclusion. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Development of a calibrated software reliability model for flight and supporting ground software for avionic systems

    NASA Technical Reports Server (NTRS)

    Lawrence, Stella

    1991-01-01

    The object of this project was to develop and calibrate quantitative models for predicting the quality of software. Reliable flight and supporting ground software is a highly important factor in the successful operation of the space shuttle program. The models used in the present study consisted of SMERFS (Statistical Modeling and Estimation of Reliability Functions for Software). There are ten models in SMERFS. For a first run, the results obtained in modeling the cumulative number of failures versus execution time showed fairly good results for our data. Plots of cumulative software failures versus calendar weeks were made and the model results were compared with the historical data on the same graph. If the model agrees with actual historical behavior for a set of data then there is confidence in future predictions for this data. Considering the quality of the data, the models have given some significant results, even at this early stage. With better care in data collection, data analysis, recording of the fixing of failures and CPU execution times, the models should prove extremely helpful in making predictions regarding the future pattern of failures, including an estimate of the number of errors remaining in the software and the additional testing time required for the software quality to reach acceptable levels. It appears that there is no one 'best' model for all cases. It is for this reason that the aim of this project was to test several models. One of the recommendations resulting from this study is that great care must be taken in the collection of data. When using a model, the data should satisfy the model assumptions.
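As a concrete example of the kind of model such a package contains (not necessarily one of the ten in SMERFS), the Goel-Okumoto model predicts cumulative failures as mu(t) = a*(1 - exp(-b*t)), where a is the total number of errors that will eventually be found, so a minus the failures observed so far estimates the errors remaining. The weekly failure history below is invented, and the coarse grid-search fit is only a sketch.

```python
import math

# Fit the Goel-Okumoto NHPP model mu(t) = a*(1 - exp(-b*t)) to a
# made-up cumulative weekly failure history by grid search, then use
# the fitted asymptote a to estimate residual errors.

weeks = list(range(1, 11))
cum_failures = [5, 9, 13, 16, 18, 20, 21, 22, 23, 23]

def sse(a, b):
    """Sum of squared errors of the model against the history."""
    return sum((a * (1 - math.exp(-b * t)) - y) ** 2
               for t, y in zip(weeks, cum_failures))

a_fit, b_fit = min(((a, b) for a in range(20, 41)
                    for b in [i / 100 for i in range(5, 61)]),
                   key=lambda p: sse(*p))
remaining = a_fit - cum_failures[-1]   # estimated errors still latent
```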

  16. The MeqTrees software system and its use for third-generation calibration of radio interferometers

    NASA Astrophysics Data System (ADS)

    Noordam, J. E.; Smirnov, O. M.

    2010-12-01

    Context. The formulation of the radio interferometer measurement equation (RIME) for a generic radio telescope by Hamaker et al. has provided us with an elegant mathematical apparatus for better understanding, simulation and calibration of existing and future instruments. The calibration of the new radio telescopes (LOFAR, SKA) would be unthinkable without the RIME formalism, and new software to exploit it. Aims: The MeqTrees software system is designed to implement numerical models, and to solve for arbitrary subsets of their parameters. It may be applied to many problems, but was originally geared towards implementing Measurement Equations in radio astronomy for the purposes of simulation and calibration. The technical goal of MeqTrees is to provide a tool for rapid implementation of such models, while offering performance comparable to hand-written code. We are also pursuing the wider goal of increasing the rate of evolution of radio astronomical software, by offering a tool that facilitates rapid experimentation, and exchange of ideas (and scripts). Methods: MeqTrees is implemented as a Python-based front-end called the meqbrowser, and an efficient (C++-based) computational back-end called the meqserver. Numerical models are defined on the front-end via a Python-based Tree Definition Language (TDL), then rapidly executed on the back-end. The use of TDL facilitates an extremely short turn-around time (hours rather than weeks or months) for experimentation with new ideas. This is also helped by unprecedented visualization capabilities for all final and intermediate results. A flexible data model and a number of important optimizations in the back-end ensures that the numerical performance is comparable to that of hand-written code. Results: MeqTrees is already widely used as the simulation tool for new instruments (LOFAR, SKA) and technologies (focal plane arrays). It has demonstrated that it can achieve a noise-limited dynamic range in excess of a million, on

  17. Using direct calorimetry to test the accuracy of indirect calorimetry in an ectotherm.

    PubMed

    Walsberg, Glenn E; Hoffman, Ty C M

    2006-01-01

    We previously demonstrated that the relationship between respiratory gas exchange and metabolic heat production is unexpectedly variable and that conventional approaches to estimating energy expenditure by indirect calorimetry can incorporate large errors. Prior studies, however, comparing direct and indirect calorimetry of animals focused only on endothermic organisms. Given that endothermy and ectothermy represent a fundamental dichotomy of animal energetics, in this analysis we explore how these contrasting physiologies correlate with the relationship between heat production and respiratory gas exchange. Simultaneous indirect and direct calorimetry in an ectotherm, the ball python (Python regius Shaw), revealed that the relationships between gas exchange and heat production were within 1% of those expected when analyses using indirect calorimetry were based on the assumption that the fasting animal catabolized only protein. This accuracy of indirect calorimetry contrasts sharply with our previous conclusions for three species of birds and mammals.
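The arithmetic at issue in this comparison can be sketched as follows: indirect calorimetry converts oxygen consumption into heat production via a substrate-dependent thermal equivalent, so assuming the wrong substrate mix biases the estimate. The per-litre values below are commonly tabulated approximations (they vary slightly between sources), and the VO2 figure is invented.

```python
# Heat production from oxygen uptake via approximate thermal
# equivalents of O2 (kJ per litre O2); exact values vary by source.

THERMAL_EQUIV_KJ_PER_L_O2 = {
    "carbohydrate": 21.1,
    "fat": 19.6,
    "protein": 18.7,
}

def heat_rate_watts(vo2_l_per_min, substrate):
    kj_per_min = vo2_l_per_min * THERMAL_EQUIV_KJ_PER_L_O2[substrate]
    return kj_per_min * 1000.0 / 60.0

# Same measured VO2, different assumed substrate:
h_protein = heat_rate_watts(0.012, "protein")       # fasting, protein-only
h_carb = heat_rate_watts(0.012, "carbohydrate")
bias_pct = 100.0 * (h_carb - h_protein) / h_protein  # ~13% spread
```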

  18. Standard Procedure for Calibrating an Areal Calorimetry Based Dosimeter

    DTIC Science & Technology

    2015-05-01

    detector target surface. In this case, the source was on for approximately 2.5 s, shortly after which the data acquisition ends. For this shot, the...

  19. Absolute dosimetry on a dynamically scanned sample for synchrotron radiotherapy using graphite calorimetry and ionization chambers

    NASA Astrophysics Data System (ADS)

    Lye, J. E.; Harty, P. D.; Butler, D. J.; Crosbie, J. C.; Livingstone, J.; Poole, C. M.; Ramanathan, G.; Wright, T.; Stevenson, A. W.

    2016-06-01

    The absolute dose delivered to a dynamically scanned sample in the Imaging and Medical Beamline (IMBL) on the Australian Synchrotron was measured with a graphite calorimeter anticipated to be established as a primary standard for synchrotron dosimetry. The calorimetry was compared to measurements using a free-air chamber (FAC), a PTW 31014 PinPoint ionization chamber, and a PTW 34001 Roos ionization chamber. The IMBL beam height is limited to approximately 2 mm. To produce clinically useful beams of a few centimetres the beam must be scanned in the vertical direction. In practice it is the patient/detector that is scanned and the scanning velocity defines the dose that is delivered. The calorimeter, FAC, and Roos chamber measure the dose area product which is then converted to central axis dose with the scanned beam area derived from Monte Carlo (MC) simulations and film measurements. The PinPoint chamber measures the central axis dose directly and does not require beam area measurements. The calorimeter and FAC measure dose from first principles. The calorimetry requires conversion of the measured absorbed dose to graphite to absorbed dose to water using MC calculations with the EGSnrc code. Air kerma measurements from the free air chamber were converted to absorbed dose to water using the AAPM TG-61 protocol. The two ionization chambers are secondary standards requiring calibration with kilovoltage x-ray tubes. The Roos and PinPoint chambers were calibrated against the Australian primary standard for air kerma at the Australian Radiation Protection and Nuclear Safety Agency (ARPANSA). Agreement of order 2% or better was obtained between the calorimetry and ionization chambers. The FAC measured a dose 3-5% higher than the calorimetry, within the stated uncertainties.

  20. FlowCal: A user-friendly, open source software tool for automatically converting flow cytometry data from arbitrary to calibrated units

    PubMed Central

    Castillo-Hair, Sebastian M.; Sexton, John T.; Landry, Brian P.; Olson, Evan J.; Igoshin, Oleg A.; Tabor, Jeffrey J.

    2017-01-01

    Flow cytometry is widely used to measure gene expression and other molecular biological processes with single cell resolution via fluorescent probes. Flow cytometers output data in arbitrary units (a.u.) that vary with the probe, instrument, and settings. Arbitrary units can be converted to the calibrated unit molecules of equivalent fluorophore (MEF) using commercially available calibration particles. However, there is no convenient, non-proprietary tool available to perform this calibration. Consequently, most researchers report data in a.u., limiting interpretation. Here, we report a software tool named FlowCal to overcome current limitations. FlowCal can be run using an intuitive Microsoft Excel interface, or customizable Python scripts. The software accepts Flow Cytometry Standard (FCS) files as inputs and is compatible with different calibration particles, fluorescent probes, and cell types. Additionally, FlowCal automatically gates data, calculates common statistics, and produces publication quality plots. We validate FlowCal by calibrating a.u. measurements of E. coli expressing superfolder GFP (sfGFP) collected at 10 different detector sensitivity (gain) settings to a single MEF value. Additionally, we reduce day-to-day variability in replicate E. coli sfGFP expression measurements due to instrument drift by 33%, and calibrate S. cerevisiae mVenus expression data to MEF units. Finally, we demonstrate a simple method for using FlowCal to calibrate fluorescence units across different cytometers. FlowCal should ease the quantitative analysis of flow cytometry data within and across laboratories and facilitate the adoption of standard fluorescence units in synthetic biology and beyond. PMID:27110723
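The bead-based calibration FlowCal automates can be sketched as a straight-line fit between the logarithm of the measured bead peaks (in a.u.) and the logarithm of their manufacturer-assigned MEF values, after which cell measurements are mapped through the fitted curve. The bead peak positions and MEF values below are invented for illustration.

```python
import math

# a.u.-to-MEF calibration: least-squares line in log-log space through
# calibration-bead peaks, then applied to a cell measurement.

def fit_loglog(au_peaks, mef_values):
    xs = [math.log10(a) for a in au_peaks]
    ys = [math.log10(m) for m in mef_values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def au_to_mef(au, slope, intercept):
    return 10 ** (slope * math.log10(au) + intercept)

# Hypothetical bead peaks (a.u.) and their assigned MEF values:
beads_au = [120.0, 480.0, 1900.0, 7600.0]
beads_mef = [800.0, 3200.0, 12800.0, 51200.0]
s, b = fit_loglog(beads_au, beads_mef)
cell_mef = au_to_mef(1000.0, s, b)     # a cell event at 1000 a.u.
```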

  1. MIRO Continuum Calibration for Asteroid Mode

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon

    2011-01-01

    MIRO (Microwave Instrument for the Rosetta Orbiter) is a lightweight, uncooled, dual-frequency heterodyne radiometer. MIRO encountered asteroid Steins in 2008, and during the flyby, MIRO used the Asteroid Mode to measure the emission spectrum of Steins. The Asteroid Mode is one of the seven modes of MIRO operation, and is designed to increase the length of time that a spectral line is in the MIRO pass-band during a flyby of an object. This software is used to calibrate the continuum measurement of Steins emission power during the asteroid flyby. The MIRO raw measurement data need to be calibrated in order to obtain physically meaningful data. This software calibrates the MIRO raw measurements in digital units to the brightness temperature in kelvin. The software uses two calibration sequences that are included in the Asteroid Mode. One sequence is at the beginning of the mode, and the other at the end. The first six frames contain the measurement of a cold calibration target, while the last six frames measure a warm calibration target. The targets have known temperatures and are used to provide reference power and gain, which can be used to convert MIRO measurements into brightness temperature. The software was developed to calibrate MIRO continuum measurements from Asteroid Mode. The software determines the relationship between the raw digital unit measured by MIRO and the equivalent brightness temperature by analyzing data from calibration frames. The resulting relationship is applied to non-calibration frames, which are the measurements of an object of interest such as asteroids and other planetary objects that MIRO encounters during its operation. This software characterizes the gain fluctuations statistically and determines which method to use to estimate the gain between calibration frames. For example, if the fluctuation is lower than a statistically significant level, the averaging method is used to estimate the gain between the calibration frames. If the
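The two-point calibration described here, with cold and warm target frames of known brightness temperature fixing the radiometer's gain and offset, can be sketched as follows. The counts and temperatures are illustrative, not MIRO values.

```python
# Two-point radiometric calibration: known cold/warm targets fix the
# counts-per-kelvin gain and offset, which then convert raw counts
# from science frames to brightness temperature.

def two_point_cal(c_cold, c_warm, t_cold_k, t_warm_k):
    gain = (c_warm - c_cold) / (t_warm_k - t_cold_k)   # counts per kelvin
    offset = c_cold - gain * t_cold_k
    return gain, offset

def counts_to_tb(counts, gain, offset):
    return (counts - offset) / gain

g, o = two_point_cal(c_cold=5200.0, c_warm=9400.0, t_cold_k=80.0, t_warm_k=290.0)
tb = counts_to_tb(7300.0, g, o)   # halfway in counts maps to halfway in kelvin
```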

  2. pytc: Open-Source Python Software for Global Analyses of Isothermal Titration Calorimetry Data.

    PubMed

    Duvvuri, Hiranmayi; Wheeler, Lucas C; Harms, Michael J

    2018-05-08

    Here we describe pytc, an open-source Python package for global fits of thermodynamic models to multiple isothermal titration calorimetry experiments. Key features include simplicity, the ability to implement new thermodynamic models, a robust maximum likelihood fitter, a fast Bayesian Markov-Chain Monte Carlo sampler, rigorous implementation, extensive documentation, and full cross-platform compatibility. pytc fitting can be done using an application program interface or via a graphical user interface. It is available for download at https://github.com/harmslab/pytc.

  3. Calorimetry of electron beams and the calibration of dosimeters at high doses

    NASA Astrophysics Data System (ADS)

    Humphreys, J. C.; McLaughlin, W. L.

    Graphite or metal calorimeters are used to make absolute dosimetric measurements of high-energy electron beams. These calibrated beams are then used to calibrate several types of dosimeters for high-dose applications such as medical-product sterilization, polymer modification, food processing, or electronic-device hardness testing. The electron beams are produced either as continuous high-power beams at approximately 4.5 MeV by d.c. type accelerators or in the energy range of approximately 8 to 50 MeV using pulsed microwave linear accelerators (linacs). The continuous beams are generally magnetically scanned to produce a broad, uniform radiation environment for the processing of materials of extended lateral dimensions. The higher-energy pulsed beams may also be scanned for processing applications or may be used in an unscanned, tightly-focused mode to produce maximum absorbed dose rates such as may be required for electronic-device radiation hardness testing. The calorimeters are used over an absorbed dose range of 10² to 10⁴ Gy. Intercomparison studies are reported between National Institute of Standards and Technology (NIST) and UK National Physical Laboratory (NPL) graphite disk calorimeters at high doses, using the NPL 10-MeV linac, and agreement was found within 1.5%. It was also shown that the electron-beam responses of radiochromic film dosimeters and alanine pellet dosimeters can be accurately calibrated by comparison with calorimeter readings.
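The first-principles measurement behind such calorimeters reduces to a one-line relation: absorbed dose is energy per unit mass, so for a thermally isolated core D = c_p * dT. The specific heat below is an approximate room-temperature figure for graphite, used only for illustration.

```python
# Calorimetric absorbed dose from temperature rise: D = c_p * dT,
# since Gy = J/kg. Graphite c_p is an approximate room-temperature value.

C_P_GRAPHITE = 710.0   # J/(kg K), approximate

def absorbed_dose_gy(delta_t_k, c_p=C_P_GRAPHITE):
    return c_p * delta_t_k

# A dose near 10 kGy corresponds to a core temperature rise of ~14 K:
dose = absorbed_dose_gy(delta_t_k=14.1)
```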

  4. The CCD Photometric Calibration Cookbook

    NASA Astrophysics Data System (ADS)

    Palmer, J.; Davenhall, A. C.

    This cookbook presents simple recipes for the photometric calibration of CCD frames. Using these recipes you can calibrate the brightness of objects measured in CCD frames into magnitudes in standard photometric systems, such as the Johnson-Morgan UBV system. The recipes use standard software available at all Starlink sites. The topics covered include: selecting standard stars, measuring instrumental magnitudes and calibrating instrumental magnitudes into a standard system. The recipes are appropriate for use with data acquired with optical CCDs and filters, operated in standard ways, and describe the usual calibration technique of observing standard stars. The software is robust and reliable, but the techniques are usually not suitable where very high accuracy is required. In addition to the recipes and scripts, sufficient background material is presented to explain the procedures and techniques used. The treatment is deliberately practical rather than theoretical, in keeping with the aim of providing advice on the actual calibration of observations. This cookbook is aimed firmly at people who are new to astronomical photometry. Typical readers might have a set of photometric observations to reduce (perhaps observed by a colleague) or be planning a programme of photometric observations, perhaps for the first time. No prior knowledge of astronomical photometry is assumed. The cookbook is not aimed at experts in astronomical photometry. Many finer points are omitted for clarity and brevity. Also, in order to make the most accurate possible calibration of high-precision photometry, it is usually necessary to use bespoke software tailored to the observing programme and photometric system you are using.
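The standard-star calibration such recipes walk through can be sketched as solving m_std = m_inst + zp - k*X for the photometric zero point zp and atmospheric extinction coefficient k, by least squares over standards observed at different airmasses X. The star data below are fabricated to be consistent with zp = 25.0 and k = 0.15 for illustration.

```python
# Fit zero point and extinction: m_std - m_inst = zp - k*X is a
# straight line in airmass X, solved by ordinary least squares.

def solve_zp_k(observations):
    """observations: list of (m_inst, airmass, m_std) tuples."""
    xs = [x for _, x, _ in observations]
    ys = [m_std - m_inst for m_inst, _, m_std in observations]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, -slope    # zp, k

# Standards generated with zp = 25.0, k = 0.15:
obs = [(-11.85, 1.0, 13.0), (-10.925, 1.5, 13.85), (-12.075, 2.5, 12.55)]
zp, k = solve_zp_k(obs)
```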

  5. Calibration of a COTS Integration Cost Model Using Local Project Data

    NASA Technical Reports Server (NTRS)

    Boland, Dillard; Coon, Richard; Byers, Kathryn; Levitt, David

    1997-01-01

    The software measurement and estimation techniques appropriate to a Commercial Off the Shelf (COTS) integration project differ from those commonly used for custom software development. Labor and schedule estimation tools that model COTS integration are available. Like all estimation tools, they must be calibrated with the organization's local project data. This paper describes the calibration of a commercial model using data collected by the Flight Dynamics Division (FDD) of the NASA Goddard Space Flight Center (GSFC). The model calibrated is SLIM Release 4.0 from Quantitative Software Management (QSM). By adopting the SLIM reuse model and by treating configuration parameters as lines of code, we were able to establish a consistent calibration for COTS integration projects. The paper summarizes the metrics, the calibration process and results, and the validation of the calibration.

  6. Radiometer Calibration and Characterization (RCC) User's Manual: Windows Version 4.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andreas, Afshin M.; Wilcox, Stephen M.

    2016-02-29

    The Radiometer Calibration and Characterization (RCC) software is a data acquisition and data archival system for performing Broadband Outdoor Radiometer Calibrations (BORCAL). RCC provides a unique method of calibrating broadband atmospheric longwave and solar shortwave radiometers using techniques that reduce measurement uncertainty and better characterize a radiometer's response profile. The RCC software automatically monitors and controls many of the components that contribute to uncertainty in an instrument's responsivity. This is a user's manual and guide to the RCC software.

  7. The importance of calorimetry for highly-boosted jet substructure

    DOE PAGES

    Coleman, Evan; Freytsis, Marat; Hinzmann, Andreas; ...

    2018-01-09

    Here, jet substructure techniques are playing an essential role in exploring the TeV scale at the Large Hadron Collider (LHC), since they facilitate the efficient reconstruction and identification of highly-boosted objects. Both for the LHC and for future colliders, there is a growing interest in using jet substructure methods based only on charged-particle information. The reason is that silicon-based tracking detectors offer excellent granularity and precise vertexing, which can improve the angular resolution on highly-collimated jets and mitigate the impact of pileup. In this paper, we assess how much jet substructure performance degrades by using track-only information, and we demonstrate physics contexts in which calorimetry is most beneficial. Specifically, we consider five different hadronic final states - W bosons, Z bosons, top quarks, light quarks, gluons - and test the pairwise discrimination power with a multi-variate combination of substructure observables. In the idealized case of perfect reconstruction, we quantify the loss in discrimination performance when using just charged particles compared to using all detected particles. We also consider the intermediate case of using charged particles plus photons, which provides valuable information about neutral pions. In the more realistic case of a segmented calorimeter, we assess the potential performance gains from improving calorimeter granularity and resolution, comparing a CMS-like detector to more ambitious future detector concepts. Broadly speaking, we find large performance gains from neutral-particle information and from improved calorimetry in cases where jet mass resolution drives the discrimination power, whereas the gains are more modest if an absolute mass scale calibration is not required.

  10. The Calibration Reference Data System

    NASA Astrophysics Data System (ADS)

    Greenfield, P.; Miller, T.

    2016-07-01

    We describe a software architecture and implementation for using rules to determine which calibration files are appropriate for calibrating a given observation. This new system, the Calibration Reference Data System (CRDS), replaces what had been previously used for the Hubble Space Telescope (HST) calibration pipelines, the Calibration Database System (CDBS). CRDS will be used for the James Webb Space Telescope (JWST) calibration pipelines, and is currently being used for HST calibration pipelines. CRDS can be easily generalized for use in similar applications that need a rules-based system for selecting the appropriate item for a given dataset; we give some examples of such generalizations that will likely be used for JWST. The core functionality of the Calibration Reference Data System is available under an Open Source license. CRDS is briefly contrasted with a sampling of other similar systems used at other observatories.
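
    The rules-based selection described above can be illustrated with a toy selector: each rule matches observation metadata and carries a "useafter" date, and among the applicable rules the most recent one wins. The instrument, detector, and file names below are invented for illustration, not actual CRDS rules:

```python
# Toy rules-based reference selector in the spirit of CRDS: filter rules by
# matching metadata, then pick the newest applicable "useafter" date.
from datetime import date

RULES = [
    {"instrument": "CAM", "detector": "DET1", "useafter": date(2008, 1, 1),
     "file": "dark_v2.fits"},
    {"instrument": "CAM", "detector": "DET1", "useafter": date(2010, 6, 1),
     "file": "dark_v3.fits"},
    {"instrument": "CAM", "detector": "DET2", "useafter": date(2008, 1, 1),
     "file": "dark_det2.fits"},
]

def best_reference(instrument, detector, obs_date):
    """Pick the newest reference file applicable to the observation."""
    applicable = [r for r in RULES
                  if r["instrument"] == instrument
                  and r["detector"] == detector
                  and r["useafter"] <= obs_date]
    if not applicable:
        raise LookupError("no applicable reference file")
    return max(applicable, key=lambda r: r["useafter"])["file"]

print(best_reference("CAM", "DET1", date(2012, 3, 15)))   # dark_v3.fits
```

    Keeping the rules as data rather than code is what makes such a system easy to generalize to other rules-based selection problems.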

  11. Dual-readout calorimetry

    NASA Astrophysics Data System (ADS)

    Lee, Sehwook; Livan, Michele; Wigmans, Richard

    2018-04-01

    In the past 20 years, dual-readout calorimetry has emerged as a technique for measuring the properties of high-energy hadrons and hadron jets that offers considerable advantages compared with the instruments that are currently used for this purpose in experiments at the high-energy frontier. The status of this experimental technique and the challenges faced for its further development are reviewed.

  12. Parallel computing for automated model calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.

    2002-07-29

    Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, magnitude and timing of stream flow peak). An automated calibration process that allows real-time updating of data and models is needed, freeing scientists to focus effort on improving the models themselves. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two auto calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed, cross-platform computing environment. They allow incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
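
    Because each calibration run is independent, with small inputs and outputs and no inter-process communication, the parameter search parallelizes trivially. A minimal sketch of that pattern, in which the linear "model" and the observed value are hypothetical stand-ins for a real simulation:

```python
# Embarrassingly parallel calibration sketch: evaluate a model over a grid of
# parameter sets in parallel and keep the best fit to an observed value.
from multiprocessing import Pool
import itertools

OBSERVED = 42.0  # e.g. observed peak stream flow, hypothetical units

def run_model(params):
    a, b = params
    simulated = a * 6.0 + b          # stand-in for an expensive model run
    return params, abs(simulated - OBSERVED)

def calibrate(grid_a, grid_b, workers=4):
    """Evaluate every (a, b) pair in parallel and return the best fit."""
    with Pool(workers) as pool:
        results = pool.map(run_model, itertools.product(grid_a, grid_b))
    return min(results, key=lambda r: r[1])

if __name__ == "__main__":
    best, err = calibrate([5.0, 6.0, 7.0], [0.0, 1.0, 2.0])
    print(best, err)   # (7.0, 0.0) reproduces OBSERVED exactly, so err is 0.0
```

    With 10,000 runs of a seconds-long model, Pool.map spreads the work across local cores; distributed or null-cycle frameworks generalize the same map-then-reduce pattern across machines.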

  13. An Automated Thermocouple Calibration System

    NASA Technical Reports Server (NTRS)

    Bethea, Mark D.; Rosenthal, Bruce N.

    1992-01-01

    An Automated Thermocouple Calibration System (ATCS) was developed for the unattended calibration of type K thermocouples. This system operates from room temperature to 650 C and has been used for calibration of thermocouples in an eight-zone furnace system which may employ as many as 60 thermocouples simultaneously. It is highly efficient, allowing for the calibration of large numbers of thermocouples in significantly less time than required for manual calibrations. The system consists of a personal computer, a data acquisition/control unit, and a laboratory calibration furnace. The calibration furnace is a microprocessor-controlled multipurpose temperature calibrator with an accuracy of +/- 0.7 C. The accuracy of the calibration furnace is traceable to the National Institute of Standards and Technology (NIST). The computer software is menu-based to give the user flexibility and ease of use. The user needs no programming experience to operate the system. This system was specifically developed for use in the Microgravity Materials Science Laboratory (MMSL) at the NASA LeRC.
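
    The heart of such a system is a loop that steps the reference furnace through set points, records each thermocouple, and fits a correction from reading to true temperature. A sketch with invented readings and a simple linear correction (a real system would poll the data acquisition unit and might fit higher-order polynomials):

```python
# Fit a linear correction true ~= a * reading + b from furnace set points and
# the corresponding thermocouple readings. All numbers are illustrative.
from statistics import fmean

SET_POINTS = [100.0, 200.0, 300.0, 400.0, 500.0, 650.0]  # furnace ref, deg C
READINGS   = [101.2, 201.9, 302.7, 403.4, 504.1, 655.2]  # one type K channel

def fit_linear(x, y):
    """Ordinary least squares for y = a * x + b; returns (a, b)."""
    mx, my = fmean(x), fmean(y)
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

a, b = fit_linear(READINGS, SET_POINTS)

def correct(reading):
    """Corrected temperature for a raw thermocouple reading."""
    return a * reading + b
```

    Repeating the fit per channel is what lets 60 thermocouples be calibrated in one unattended furnace run.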

  14. Current status of tritium calorimetry at TLK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buekki-Deme, A.; Alecu, C.G.; Kloppe, B.

    2015-03-15

    Inside a tritium facility, calorimetry is an important analytical method, as it is the only reference method for accountancy (it is based on the measurement of the heat generated by the radioactive decay). Presently, at Tritium Laboratory Karlsruhe (TLK), 4 calorimeters are in operation, one of isothermal type and three of inertial guidance control (IGC) type. The volume of the calorimeters varies between 0.5 and 20.6 liters. About two years ago we started extensive work to improve our calorimeters with regard to reliability and precision. We were forced to upgrade 3 of our 4 calorimeters due to their outdated interfaces and software. This work involved creating new LabView programs driving the devices, re-tuning control loops, and replacing obsolete hardware components. In this paper we review the current performance of our calorimeters, comparing it with devices recently available on the market and reported in the literature. We also present some ideas for a next-generation calorimeter based on experience with our IGC calorimeters and other devices reported in the literature. (authors)

  15. DEM Calibration Approach: design of experiment

    NASA Astrophysics Data System (ADS)

    Boikov, A. V.; Savelev, R. V.; Payor, V. A.

    2018-05-01

    The problem of DEM model calibration is considered in this article. It is proposed to divide the model's input parameters into those that require iterative calibration and those that are better measured directly. A new method for model calibration, based on design of experiment for the iteratively calibrated parameters, is proposed. The experiment is conducted using a specially designed stand, and the results are processed with computer vision algorithms. Approximating functions are obtained and the error of the implemented software and hardware complex is estimated. The prospects of the obtained results are discussed.

  16. X-Band Acquisition Aid Software

    NASA Technical Reports Server (NTRS)

    Britcliffe, Michael J.; Strain, Martha M.; Wert, Michael

    2011-01-01

    The X-band Acquisition Aid (AAP) software is a low-cost acquisition aid for the Deep Space Network (DSN) antennas, and is used while acquiring a spacecraft shortly after it has launched. When enabled, the acquisition aid provides corrections to the antenna-predicted trajectory of the spacecraft to compensate for the variations that occur during the actual launch. The AAP software also provides the corrections to the antenna-predicted trajectory to the navigation team that uses the corrections to refine their model of the spacecraft in order to produce improved antenna-predicted trajectories for each spacecraft that passes over each complex. The software provides an automated Acquisition Aid receiver calibration, and provides graphical displays to the operator and remote viewers via an Ethernet connection. It has a Web server, and the remote workstations use the Firefox browser to view the displays. At any given time, only one operator can control any particular display in order to avoid conflicting commands from more than one control point. The configuration and control is accomplished solely via the graphical displays. The operator does not have to remember any commands. Only a few configuration parameters need to be changed, and can be saved to the appropriate spacecraft-dependent configuration file on the AAP's hard disk. AAP automates the calibration sequence by first commanding the antenna to the correct position, starting the receiver calibration sequence, and then providing the operator with the option of accepting or rejecting the new calibration parameters. If accepted, the new parameters are stored in the appropriate spacecraft-dependent configuration file. The calibration can be performed on the Sun, greatly expanding the window of opportunity for calibration. The spacecraft traditionally used for calibration is in view typically twice per day, and only for about ten minutes each pass.

  17. Fast Scanning Calorimetry Studies of Supercooled Liquids and Glasses

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Deepanjan

    This dissertation is a compilation of research results of extensive Fast Scanning Calorimetry studies of two non-crystalline materials: toluene and water. Motivation for fundamental studies of non-crystalline phases and a brief overview of glassy materials, along with concepts and definitions related to them, is provided in Chapter 1. Chapter 2 provides the fundamentals and details of the experimental apparatus, experimental protocol, and calibration procedure. Chapters 3 and 4 provide extensive studies of stable non-crystalline toluene films of micrometer and nanometer thicknesses grown by vapor deposition at distinct deposition rates and temperatures and probed by Fast Scanning Calorimetry. Fast scanning calorimetry is shown to be extremely sensitive to the structure of the vapor-deposited phase and was used to characterize simultaneously its kinetic stability and its thermodynamic properties. According to our analysis, the transformation of vapor-deposited samples of toluene during heating at rates in excess of 100,000 K/s follows zero-order kinetics. The transformation rate correlates strongly with the initial enthalpy of the sample, which increases with the deposition rate according to a sub-linear law. Analysis of the transformation kinetics of vapor-deposited toluene films of various thicknesses reveals a sudden increase in the transformation rate for films thinner than 250 nm. The change in kinetics correlates with the surface roughness scale of the substrate, which is interpreted as evidence for kinetic anisotropy of the samples. We also show that the out-of-equilibrium relaxation kinetics and possibly the enthalpy of vapor-deposited (VD) films of toluene are distinct from those of the ordinary supercooled (OS) phase even when the deposition takes place at temperatures above the glass softening temperature (Tg). The implications of these findings for the formation mechanism and structure of vapor-deposited stable glasses are discussed.
Chapters 5 and 6 provide detailed Fast Scanning Calorimetry studies

  18. Calibration of the ROSAT HRI Spectral Response

    NASA Technical Reports Server (NTRS)

    Prestwich, Andrea

    1998-01-01

    The ROSAT High Resolution Imager has a limited (2-band) spectral response. This spectral capability can give X-ray hardness ratios on spatial scales of 5 arcseconds. The spectral response of the center of the detector was calibrated before the launch of ROSAT, but the gain decreases with time and also is a function of position on the detector. To complicate matters further, the satellite is "wobbled", possibly moving a source across several spatial gain states. These difficulties have prevented the spectral response of the ROSAT HRI from being used for scientific measurements. We have used Bright Earth data and in-flight calibration sources to map the spatial and temporal gain changes, and written software which will allow ROSAT users to generate a calibrated XSPEC response matrix and hence determine a calibrated hardness ratio. In this report, we describe the calibration procedure and show how to obtain a response matrix. In Section 2 we give an overview of the calibration procedure; in Section 3 we give a summary of HRI spatial and temporal gain variations. Section 4 describes the routines used to determine the gain distribution of a source. In Sections 5 and 6, we describe in detail how the Bright Earth database and calibration sources are used to derive a corrected response matrix for a given observation. Finally, Section 7 describes how to use the software.
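
    Given hard- and soft-band counts extracted with a properly gain-corrected response, the hardness ratio itself is a one-liner; the hard work described above is in getting the band responses right. A sketch with invented counts, using the common normalized-difference definition:

```python
# Two-band hardness ratio from hard- and soft-band source counts, using the
# common normalized-difference definition HR = (H - S) / (H + S).
def hardness_ratio(hard_counts, soft_counts):
    total = hard_counts + soft_counts
    if total == 0:
        raise ValueError("no counts in either band")
    return (hard_counts - soft_counts) / total

print(hardness_ratio(150, 250))   # -0.25: the source is relatively soft
```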

  19. Titration Calorimetry Standards and the Precision of Isothermal Titration Calorimetry Data

    PubMed Central

    Baranauskienė, Lina; Petrikaitė, Vilma; Matulienė, Jurgita; Matulis, Daumantas

    2009-01-01

    Current Isothermal Titration Calorimetry (ITC) data in the literature have relatively high errors in the measured enthalpies of protein-ligand binding reactions. There is a need for universal validation standards for titration calorimeters. Several inorganic salt co-precipitation and buffer protonation reactions have been suggested as possible enthalpy standards. The performances of several commercial calorimeters, including the VP-ITC, ITC200, and Nano ITC-III, were validated using these suggested standard reactions. PMID:19582227

  20. Automated Attitude Sensor Calibration: Progress and Plans

    NASA Technical Reports Server (NTRS)

    Sedlak, Joseph; Hashmall, Joseph

    2004-01-01

    This paper describes ongoing work at NASA/Goddard Space Flight Center to improve the quality of spacecraft attitude sensor calibration and reduce costs by automating parts of the calibration process. The new calibration software can autonomously preview data quality over a given time span, select a subset of the data for processing, perform the requested calibration, and output a report. This level of automation is currently being implemented for two specific applications: inertial reference unit (IRU) calibration and sensor alignment calibration. The IRU calibration utility makes use of a sequential version of the Davenport algorithm. This utility has been successfully tested with simulated and actual flight data. The alignment calibration is still in the early testing stage. Both utilities will be incorporated into the institutional attitude ground support system.

  1. Detectors for Linear Colliders: Calorimetry at a Future Electron-Positron Collider (3/4)

    ScienceCinema

    Thomson, Mark

    2018-04-16

    Calorimetry will play a central role in determining the physics reach at a future e+e- collider. The requirements for calorimetry place the emphasis on achieving an excellent jet energy resolution. The currently favoured option for calorimetry at a future e+e- collider is the concept of high granularity particle flow calorimetry. Here granularity and a high pattern recognition capability are more important than the single particle calorimetric response. In this lecture I will describe the recent progress in understanding the reach of high granularity particle flow calorimetry and the related R&D efforts which concentrate on test beam demonstrations of the technological options for highly granular calorimeters. I will also discuss alternatives to particle flow, for example the technique of dual readout calorimetry.

  2. Galileo SSI/Ida Radiometrically Calibrated Images V1.0

    NASA Astrophysics Data System (ADS)

    Domingue, D. L.

    2016-05-01

    This data set includes Galileo Orbiter SSI radiometrically calibrated images of the asteroid 243 Ida, created using ISIS software and assuming nadir pointing. This is an original delivery of radiometrically calibrated files, not an update to existing files. All images archived include the asteroid within the image frame. Calibration was performed in 2013-2014.

  3. POLCAL - POLARIMETRIC RADAR CALIBRATION

    NASA Technical Reports Server (NTRS)

    Vanzyl, J.

    1994-01-01

    Calibration of polarimetric radar systems is a field of research in which great progress has been made over the last few years. POLCAL (Polarimetric Radar Calibration) is a software tool intended to assist in the calibration of Synthetic Aperture Radar (SAR) systems. In particular, POLCAL calibrates Stokes matrix format data produced as the standard product by the NASA/Jet Propulsion Laboratory (JPL) airborne imaging synthetic aperture radar (AIRSAR). POLCAL was designed to be used in conjunction with data collected by the NASA/JPL AIRSAR system. AIRSAR is a multifrequency (6 cm, 24 cm, and 68 cm wavelength), fully polarimetric SAR system which produces 12 x 12 km imagery at 10 m resolution. AIRSAR was designed as a testbed for NASA's Spaceborne Imaging Radar program. While the images produced after 1991 are thought to be calibrated (phase calibrated, cross-talk removed, channel imbalance removed, and absolutely calibrated), POLCAL can and should still be used to check the accuracy of the calibration and to correct it if necessary. Version 4.0 of POLCAL is an upgrade of POLCAL version 2.0 released to AIRSAR investigators in June, 1990. New options in version 4.0 include automatic absolute calibration of 89/90 data, distributed target analysis, calibration of nearby scenes with calibration parameters from a scene with corner reflectors, altitude or roll angle corrections, and calibration of errors introduced by known topography. Many sources of error can lead to false conclusions about the nature of scatterers on the surface. Errors in the phase relationship between polarization channels result in incorrect synthesis of polarization states. Cross-talk, caused by imperfections in the radar antenna itself, can also lead to error. POLCAL reduces cross-talk and corrects phase calibration without the use of ground calibration equipment. Removing the antenna patterns during SAR processing also forms a very important part of the calibration of SAR data. Errors in the

  4. Differential Scanning Calorimetry Techniques: Applications in Biology and Nanoscience

    PubMed Central

    Gill, Pooria; Moghadam, Tahereh Tohidi; Ranjbar, Bijan

    2010-01-01

    This paper reviews the best-known differential scanning calorimetries (DSCs), such as conventional DSC, microelectromechanical systems-DSC, infrared-heated DSC, modulated-temperature DSC, gas flow-modulated DSC, parallel-nano DSC, pressure perturbation calorimetry, self-reference DSC, and high-performance DSC. Also, we describe here the most extensive applications of DSC in biology and nanoscience. PMID:21119929

  5. Automation is an Effective Way to Improve Quality of Verification (Calibration) of Measuring Instruments

    NASA Astrophysics Data System (ADS)

    Golobokov, M.; Danilevich, S.

    2018-04-01

    In order to assess calibration reliability and to automate such assessment, procedures for data collection and a simulation study of a thermal-imager calibration procedure have been developed. The existing calibration techniques do not always provide high reliability. A new method for analyzing the existing calibration techniques and developing new, efficient ones has been suggested and tested. A type of software has been studied that generates instrument calibration reports automatically, monitors their proper configuration, processes measurement results, and assesses instrument validity. The use of such software reduces the man-hours spent finalizing calibration data by a factor of 2 to 5 and eliminates a whole set of typical operator errors.

  6. Laser Calibration of an Impact Disdrometer

    NASA Technical Reports Server (NTRS)

    Lane, John E.; Kasparis, Takis; Metzger, Philip T.; Jones, W. Linwood

    2014-01-01

    A practical approach to developing an operational low-cost disdrometer hinges on implementing an effective in situ adaptive calibration strategy. This calibration strategy lowers the cost of the device and provides a method to guarantee continued automatic calibration. In previous work, a collocated tipping bucket rain gauge was utilized to provide a calibration signal to the disdrometer's digital signal processing software. Rainfall rate is proportional to the 11/3 moment of the drop size distribution (a 7/2 moment can also be assumed, depending on the choice of terminal velocity relationship). In the previous case, the disdrometer calibration was characterized and weighted to the 11/3 moment of the drop size distribution (DSD). Optical extinction by rainfall is proportional to the 2nd moment of the DSD. Using visible laser light as a means to focus and generate an auxiliary calibration signal, the adaptive calibration processing is significantly improved.
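
    The moment relationships above are easy to make concrete: each weighted sum over the measured drop size distribution (DSD) emphasizes different drop sizes, which is why a laser-extinction signal (2nd moment) complements a rain-gauge signal (11/3 moment) as a calibration source. A sketch with invented bin data:

```python
# Moments of a measured drop size distribution. Rainfall rate scales with the
# 11/3 moment and optical extinction with the 2nd moment of the DSD.
DIAM_MM = [0.5, 1.0, 1.5, 2.0, 3.0]   # bin-centre drop diameters, invented
COUNTS  = [120,  80,  35,  12,   3]   # drops per bin (per unit volume)

def dsd_moment(order):
    return sum(n * d ** order for d, n in zip(DIAM_MM, COUNTS))

m2   = dsd_moment(2.0)          # proportional to optical extinction
m113 = dsd_moment(11.0 / 3.0)   # proportional to rainfall rate
```

    Because the 11/3 moment weights large drops far more heavily than the 2nd moment, the two signals constrain different parts of the DSD, which is what makes the laser a useful auxiliary calibration channel.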

  7. Application of solution calorimetry in pharmaceutical and biopharmaceutical research.

    PubMed

    Royall, P G; Gaisford, S

    2005-06-01

    In solution calorimetry the heat of solution (Delta(sol)H) is recorded as a solute (usually a solid) dissolves in an excess of solvent. Such measurements are valuable during all the phases of pharmaceutical formulation and the number of applications of the technique is growing. For instance, solution calorimetry is extremely useful during preformulation for the detection and quantification of polymorphs, degrees of crystallinity and percent amorphous content; knowledge of all of these parameters is essential in order to exert control over the manufacture and subsequent performance of a solid pharmaceutical. Careful experimental design and data interpretation also allows the measurement of the enthalpy of transfer (Delta(trans)H) of a solute between two phases. Because solution calorimetry does not require optically transparent solutions, and can be used to study cloudy or turbid solutions or suspensions directly, measurement of Delta(trans)H affords the opportunity to study the partitioning of drugs into, and across, biological membranes. It also allows the in-situ study of cellular systems. Furthermore, novel experimental methodologies have led to the increasing use of solution calorimetry to study a wider range of phenomena, such as the precipitation of drugs from supersaturated solutions or the formation of liposomes from phospholipid films. It is the purpose of this review to discuss some of these applications, in the context of pharmaceutical formulation and preformulation, and highlight some of the potential future areas where solution calorimetry might find applications.
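
    The basic reduction in solution calorimetry is small: the measured heat divided by the amount of solute gives the molar enthalpy of solution. A sketch using KCl, a common validation standard with a Delta(sol)H of about +17.2 kJ/mol near 25 C; the run values themselves are illustrative:

```python
# Molar enthalpy of solution from a calorimetric run: measured heat divided
# by moles of solute dissolved. Run values are illustrative, not from the
# review above.
def enthalpy_of_solution(q_joules, mass_g, molar_mass_g_mol):
    """Return Delta(sol)H in kJ/mol (endothermic positive)."""
    moles = mass_g / molar_mass_g_mol
    return q_joules / moles / 1000.0

# 0.5 g of KCl (74.55 g/mol) absorbing 115 J on dissolution gives about
# 17.1 kJ/mol, close to the accepted ~17.2 kJ/mol for KCl near 25 C:
print(enthalpy_of_solution(115.0, 0.5, 74.55))
```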

  8. Calibration of the ROSAT HRI Spectral Response

    NASA Technical Reports Server (NTRS)

    Prestwich, Andrea H.; Silverman, John; McDowell, Jonathan; Callanan, Paul; Snowden, Steve

    2000-01-01

    The ROSAT High Resolution Imager has a limited (2-band) spectral response. This spectral capability can give X-ray hardness ratios on spatial scales of 5 arcseconds. The spectral response of the center of the detector was calibrated before the launch of ROSAT, but the gain decreases with time and also is a function of position on the detector. To complicate matters further, the satellite is 'wobbled', possibly moving a source across several spatial gain states. These difficulties have prevented the spectral response of the ROSAT High Resolution Imager (HRI) from being used for scientific measurements. We have used Bright Earth data and in-flight calibration sources to map the spatial and temporal gain changes, and written software which will allow ROSAT users to generate a calibrated XSPEC (an x ray spectral fitting package) response matrix and hence determine a calibrated hardness ratio. In this report, we describe the calibration procedure and show how to obtain a response matrix. In Section 2 we give an overview of the calibration procedure, in Section 3 we give a summary of HRI spatial and temporal gain variations. Section 4 describes the routines used to determine the gain distribution of a source. In Sections 5 and 6, we describe in detail how, the Bright Earth database and calibration sources are used to derive a corrected response matrix for a given observation. Finally, Section 7 describes how to use the software.

  9. Utility Bill Calibration Test Cases | Buildings | NREL

    Science.gov Websites

    Illustrates the utility bill calibration test cases in BESTEST-EX, in which participants calibrate their model inputs against utility bill data after software results have been generated. An accompanying diagram provides an overview of the BESTEST-EX utility bill calibration case process.

  10. Direct Animal Calorimetry, the Underused Gold Standard for Quantifying the Fire of Life*

    PubMed Central

    Kaiyala, Karl J.; Ramsay, Douglas S.

    2012-01-01

    Direct animal calorimetry, the gold standard method for quantifying animal heat production (HP), has been largely supplanted by respirometric indirect calorimetry owing to the relative ease and ready commercial availability of the latter technique. Direct calorimetry, however, can accurately quantify HP and thus metabolic rate (MR) in both metabolically normal and abnormal states, whereas respirometric indirect calorimetry relies on important assumptions that apparently have never been tested in animals with genetic or pharmacologically-induced alterations that dysregulate metabolic fuel partitioning and storage so as to promote obesity and/or diabetes. Contemporary obesity and diabetes research relies heavily on metabolically abnormal animals. Recent data implicating individual and group variation in the gut microbiome in obesity and diabetes raise important questions about transforming aerobic gas exchange into HP because 99% of gut bacteria are anaerobic and they outnumber eukaryotic cells in the body by ~10-fold. Recent credible work in non-standard laboratory animals documents substantial errors in respirometry-based estimates of HP. Accordingly, it seems obvious that new research employing simultaneous direct and indirect calorimetry (total calorimetry) will be essential to validate respirometric MR phenotyping in existing and future pharmacological and genetic models of obesity and diabetes. We also detail the use of total calorimetry with simultaneous core temperature assessment as a model for studying homeostatic control in a variety of experimental situations, including acute and chronic drug administration. Finally, we offer some tips on performing direct calorimetry, both singly and in combination with indirect calorimetry and core temperature assessment. PMID:20427023

  11. Temperature calibration of cryoscopic solutions used in the milk industry by adiabatic calorimetry

    NASA Astrophysics Data System (ADS)

    Méndez-Lango, E.; Lira-Cortes, L.; Quiñones-Ibarra, R.

    2013-09-01

    One method to detect extraneous water in milk is cryoscopy, which measures the freezing point of milk. Cryoscopes are calibrated with a set of standardized solutions of known freezing point. These freezing-point values are related to the solute concentration and are based on data nearly a century old; no more recent results were found. Moreover, the reference solutions are not certified in temperature: they have no traceability to the temperature unit or standards. We prepared four solutions and measured them with both a cryoscope and an adiabatic calorimeter, and found that the results obtained with the two techniques do not coincide.

  12. Infrared stereo calibration for unmanned ground vehicle navigation

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Strange, Shawn

    2014-06-01

    The problem of calibrating two color cameras as a stereo pair has been heavily researched, and many off-the-shelf software packages, such as Robot Operating System and OpenCV, include calibration routines that work in most cases. However, the problem of calibrating two infrared (IR) cameras for the purposes of sensor fusion and point cloud generation is relatively new and many challenges exist. We present a comparison of color camera and IR camera stereo calibration using data from an unmanned ground vehicle. There are two main challenges in IR stereo calibration: the calibration board (material, design, etc.) and the accuracy of calibration pattern detection. We present our analysis of these challenges along with our IR stereo calibration methodology. Finally, we present our results both visually and analytically with computed reprojection errors.
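The reprojection error the authors use to evaluate their calibrations is straightforward to compute once pattern detections and model reprojections are in hand; a minimal numpy sketch (the pixel coordinates below are hypothetical, not from the paper):

```python
import numpy as np

def rms_reprojection_error(observed, reprojected):
    """RMS distance (pixels) between detected calibration-pattern
    corners and their reprojections through the fitted camera model."""
    observed = np.asarray(observed, dtype=float)
    reprojected = np.asarray(reprojected, dtype=float)
    residuals = np.linalg.norm(observed - reprojected, axis=1)
    return float(np.sqrt(np.mean(residuals ** 2)))

# Hypothetical corner detections and their model reprojections (pixels).
obs = [[100.0, 200.0], [150.0, 200.0], [100.0, 250.0]]
rep = [[100.3, 199.8], [149.6, 200.2], [100.1, 250.4]]
err = rms_reprojection_error(obs, rep)
```

In practice the reprojected points would come from the fitted stereo model (e.g., OpenCV's calibration routines); only the error metric is shown here.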

  13. Control Program for an Optical-Calibration Robot

    NASA Technical Reports Server (NTRS)

    Johnston, Albert

    2005-01-01

    A computer program provides semiautomatic control of a moveable robot used to perform optical calibration of video-camera-based optoelectronic sensor systems that will be used to guide automated rendezvous maneuvers of spacecraft. The function of the robot is to move a target and hold it at specified positions. With the help of limit switches, the software first centers or finds the target. Then the target is moved to a starting position. Thereafter, with the help of an intuitive graphical user interface, an operator types in coordinates of specified positions, and the software responds by commanding the robot to move the target to the positions. The software has capabilities for correcting errors and for recording data from the guidance-sensor system being calibrated. The software can also command that the target be moved in a predetermined sequence of motions between specified positions and can be run in an advanced control mode in which, among other things, the target can be moved beyond the limits set by the limit switches.

  14. The Chandra Source Catalog 2.0: Calibrations

    NASA Astrophysics Data System (ADS)

    Graessle, Dale E.; Evans, Ian N.; Rots, Arnold H.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula

    2018-01-01

    Among the many enhancements implemented for the release of Chandra Source Catalog (CSC) 2.0 are improvements in the processing calibration database (CalDB). We have included a thorough overhaul of the CalDB software used in the processing. The software system upgrade, called "CalDB version 4," allows for a more rational and consistent specification of flight configurations and calibration boundary conditions. Numerous improvements in the specific calibrations applied have also been added. Chandra's radiometric and detector response calibrations vary considerably with time, detector operating temperature, and position on the detector. The CalDB has been enhanced to provide the best calibrations possible to each observation over the fifteen-year period included in CSC 2.0. Calibration updates include an improved ACIS contamination model, as well as updated time-varying gain (i.e., photon energy) and quantum efficiency maps for ACIS and HRC-I. Additionally, improved corrections for the ACIS quantum efficiency losses due to CCD charge transfer inefficiency (CTI) have been added for each of the ten ACIS detectors. These CTI corrections are now time- and temperature-dependent, allowing ACIS to maintain a 0.3% energy calibration accuracy over the 0.5-7.0 keV range for any ACIS source in the catalog. Radiometric calibration (effective area) accuracy is estimated at ~4% over that range. We include a few examples where improvements in the Chandra CalDB allow for improved data reduction and modeling for the new CSC. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.

  15. Automated Camera Array Fine Calibration

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel; Padgett, Curtis; Ansar, Adnan; Cheng, Yang

    2008-01-01

    Using aerial imagery, the JPL FineCalibration (JPL FineCal) software automatically tunes a set of existing CAHVOR camera models for an array of cameras. The software finds matching features in the overlap region between images from adjacent cameras, and uses these features to refine the camera models. It is not necessary to take special imagery of a known target and no surveying is required. JPL FineCal was developed for use with an aerial, persistent surveillance platform.

  16. Comparing Single-Point and Multi-point Calibration Methods in Modulated DSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Buskirk, Caleb Griffith

    2017-06-14

    Heat capacity measurements for High Density Polyethylene (HDPE) and Ultra-high Molecular Weight Polyethylene (UHMWPE) were performed using Modulated Differential Scanning Calorimetry (mDSC) over a wide temperature range, -70 to 115 °C, with a TA Instruments Q2000 mDSC. The default calibration method for this instrument involves measuring the heat capacity of a sapphire standard at a single temperature near the middle of the temperature range of interest. However, this method often fails for temperature ranges that exceed a 50 °C interval, likely because of drift or non-linearity in the instrument's heat capacity readings over time or over the temperature range. Therefore, in this study a method was developed to calibrate the instrument using multiple temperatures and the same sapphire standard.
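The difference between the two methods comes down to how the sapphire-derived correction factor is applied across temperature: one constant factor versus a factor interpolated over the range. A sketch with assumed (hypothetical) calibration values, not TA Instruments data:

```python
import numpy as np

# Hypothetical sapphire calibration data: at each set-point temperature,
# K(T) = Cp_literature / Cp_measured for the sapphire standard.
cal_temps = np.array([-60.0, -20.0, 20.0, 60.0, 100.0])   # deg C
cal_factors = np.array([1.052, 1.031, 1.010, 0.996, 0.984])

def single_point_factor(t_mid=20.0):
    """Default approach: one constant factor taken near mid-range."""
    return float(np.interp(t_mid, cal_temps, cal_factors))

def multi_point_factor(temps):
    """Multi-point approach: interpolate K(T) across the whole range
    (np.interp clamps to the end values outside the calibrated span)."""
    return np.interp(temps, cal_temps, cal_factors)

sample_temps = np.array([-70.0, 0.0, 115.0])
factors = multi_point_factor(sample_temps)   # per-temperature corrections
```

The multi-point factors would then multiply the raw heat-capacity signal point by point, instead of applying one mid-range constant everywhere.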

  17. Autotune Calibrates Models to Building Use Data

    ScienceCinema

    None

    2018-01-16

    Models of existing buildings are currently unreliable unless calibrated manually by a skilled professional. Autotune, as the name implies, automates this process by calibrating the model of an existing building to measured data, and is now available as open source software. This enables private businesses to incorporate Autotune into their products so that their customers can more effectively estimate cost savings of reduced energy consumption measures in existing buildings.

  18. Development and Characterization of a Low-Pressure Calibration System for Hypersonic Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Green, Del L.; Everhart, Joel L.; Rhode, Matthew N.

    2004-01-01

    Minimization of uncertainty is essential for accurate ESP measurements at very low free-stream static pressures found in hypersonic wind tunnels. Statistical characterization of environmental error sources requires a well defined and controlled calibration method. A calibration system has been constructed and environmental control software developed to control experimentation to eliminate human induced error sources. The initial stability study of the calibration system shows a high degree of measurement accuracy and precision in temperature and pressure control. Control manometer drift and reference pressure instabilities induce uncertainty into the repeatability of voltage responses measured from the PSI System 8400 between calibrations. Methods of improving repeatability are possible through software programming and further experimentation.

  19. Software for simulation of a computed tomography imaging spectrometer using optical design software

    NASA Astrophysics Data System (ADS)

    Spuhler, Peter T.; Willer, Mark R.; Volin, Curtis E.; Descour, Michael R.; Dereniak, Eustace L.

    2000-11-01

    Our imaging spectrometer simulation software, known as Eikon, should improve and speed up the design of a Computed Tomography Imaging Spectrometer (CTIS). Eikon uses existing raytracing software to simulate a virtual instrument, enabling designers to run through the design, calibration, and data acquisition virtually, saving significant cost and time when designing an instrument. We anticipate that Eikon simulations will improve future designs of CTIS by allowing engineers to explore more instrument options.

  20. Adsorption calorimetry during metal vapor deposition on single crystal surfaces: Increased flux, reduced optical radiation, and real-time flux and reflectivity measurements

    NASA Astrophysics Data System (ADS)

    Sellers, Jason R. V.; James, Trevor E.; Hemmingson, Stephanie L.; Farmer, Jason A.; Campbell, Charles T.

    2013-12-01

    Thin films of metals and other materials are often grown by physical vapor deposition. To understand such processes, it is desirable to measure the adsorption energy of the deposited species as the film grows, especially when grown on single crystal substrates where the structure of the adsorbed species, evolving interface, and thin film are more homogeneous and well-defined in structure. Our group previously described in this journal an adsorption calorimeter capable of such measurements on single-crystal surfaces under the clean conditions of ultrahigh vacuum [J. T. Stuckless, N. A. Frei, and C. T. Campbell, Rev. Sci. Instrum. 69, 2427 (1998)]. Here we describe several improvements to that original design that allow for heat measurements with ˜18-fold smaller standard deviation, greater absolute accuracy in energy calibration, and, most importantly, measurements of the adsorption of lower vapor-pressure materials which would have previously been impossible. These improvements are accomplished by: (1) using an electron beam evaporator instead of a Knudsen cell to generate the metal vapor at the source of the pulsed atomic beam, (2) changing the atomic beam design to decrease the relative amount of optical radiation that accompanies evaporation, (3) adding an off-axis quartz crystal microbalance for real-time measurement of the flux of the atomic beam during calorimetry experiments, and (4) adding capabilities for in situ relative diffuse optical reflectivity determinations (necessary for heat signal calibration). These improvements are not limited to adsorption calorimetry during metal deposition, but also could be applied to better study film growth of other elements and even molecular adsorbates.

  1. Adsorption calorimetry during metal vapor deposition on single crystal surfaces: increased flux, reduced optical radiation, and real-time flux and reflectivity measurements.

    PubMed

    Sellers, Jason R V; James, Trevor E; Hemmingson, Stephanie L; Farmer, Jason A; Campbell, Charles T

    2013-12-01

    Thin films of metals and other materials are often grown by physical vapor deposition. To understand such processes, it is desirable to measure the adsorption energy of the deposited species as the film grows, especially when grown on single crystal substrates where the structure of the adsorbed species, evolving interface, and thin film are more homogeneous and well-defined in structure. Our group previously described in this journal an adsorption calorimeter capable of such measurements on single-crystal surfaces under the clean conditions of ultrahigh vacuum [J. T. Stuckless, N. A. Frei, and C. T. Campbell, Rev. Sci. Instrum. 69, 2427 (1998)]. Here we describe several improvements to that original design that allow for heat measurements with ~18-fold smaller standard deviation, greater absolute accuracy in energy calibration, and, most importantly, measurements of the adsorption of lower vapor-pressure materials which would have previously been impossible. These improvements are accomplished by: (1) using an electron beam evaporator instead of a Knudsen cell to generate the metal vapor at the source of the pulsed atomic beam, (2) changing the atomic beam design to decrease the relative amount of optical radiation that accompanies evaporation, (3) adding an off-axis quartz crystal microbalance for real-time measurement of the flux of the atomic beam during calorimetry experiments, and (4) adding capabilities for in situ relative diffuse optical reflectivity determinations (necessary for heat signal calibration). These improvements are not limited to adsorption calorimetry during metal deposition, but also could be applied to better study film growth of other elements and even molecular adsorbates.

  2. Quantitative measurement of indomethacin crystallinity in indomethacin-silica gel binary system using differential scanning calorimetry and X-ray powder diffractometry.

    PubMed

    Pan, Xiaohong; Julian, Thomas; Augsburger, Larry

    2006-02-10

    Differential scanning calorimetry (DSC) and X-ray powder diffractometry (XRPD) methods were developed for the quantitative analysis of the crystallinity of indomethacin (IMC) in an IMC-silica gel (SG) binary system. The DSC calibration curve exhibited better linearity than that of XRPD. No phase transformation occurred in the IMC-SG mixtures during DSC measurement. The major sources of error in DSC measurements were inhomogeneous mixing and sampling. Analyzing the amount of IMC in the mixtures using high-performance liquid chromatography (HPLC) could reduce the sampling error. DSC demonstrated greater sensitivity and had less variation in measurement than XRPD in quantifying crystalline IMC in the IMC-SG binary system.
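A DSC calibration curve of this kind is an ordinary least-squares line relating the measured fusion enthalpy to known crystalline content, inverted for unknowns; a sketch with invented standards data (not values from the study):

```python
import numpy as np

# Hypothetical DSC calibration standards: measured fusion enthalpy
# (J/g of mixture) for samples of known crystalline-IMC content (wt %).
pct_cryst = np.array([10.0, 25.0, 50.0, 75.0, 100.0])
enthalpy  = np.array([11.2, 27.5, 55.1, 82.4, 110.0])

# Fit the calibration line enthalpy = a * pct + b and check linearity (R^2).
a, b = np.polyfit(pct_cryst, enthalpy, 1)
fit = a * pct_cryst + b
r2 = 1.0 - np.sum((enthalpy - fit) ** 2) / np.sum((enthalpy - enthalpy.mean()) ** 2)

def crystallinity_from_enthalpy(h):
    """Invert the calibration line for an unknown sample's enthalpy."""
    return (h - b) / a
```

The R² value quantifies the "better linearity" the abstract refers to when comparing DSC against XRPD calibration curves.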

  3. Calculation of Temperature Rise in Calorimetry.

    ERIC Educational Resources Information Center

    Canagaratna, Sebastian G.; Witt, Jerry

    1988-01-01

    Gives a simple but fuller account of the basis for accurately calculating temperature rise in calorimetry. Points out some misconceptions regarding these calculations. Describes two basic methods, the extrapolation to zero time and the equal area method. Discusses the theoretical basis of each and their underlying assumptions. (CW)
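One common variant of the extrapolation method the abstract mentions fits the pre- and post-reaction baselines and extrapolates both to the firing time, taking their difference as the corrected temperature rise; a sketch with synthetic drift data:

```python
import numpy as np

# Hypothetical calorimeter record: linear drifts before and after a
# reaction fired at t0 (minutes); temperatures in deg C.
t0 = 5.0
pre_t  = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
pre_T  = 25.00 + 0.002 * pre_t                 # slow pre-period drift
post_t = np.array([8.0, 9.0, 10.0, 11.0, 12.0])
post_T = 26.50 + 0.004 * (post_t - 8.0)        # post-period drift

def extrapolated_delta_t(t0, pre_t, pre_T, post_t, post_T):
    """Corrected rise: extrapolate both baselines to t0 and difference."""
    b_pre = np.polyfit(pre_t, pre_T, 1)
    b_post = np.polyfit(post_t, post_T, 1)
    return float(np.polyval(b_post, t0) - np.polyval(b_pre, t0))

dT = extrapolated_delta_t(t0, pre_t, pre_T, post_t, post_T)
```

This corrects for heat exchange with the surroundings during the measurement, which is exactly the misconception the article addresses when a raw before/after difference is used instead.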

  4. Deep space network software cost estimation model

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1981-01-01

    A parametric software cost estimation model prepared for Jet Propulsion Laboratory (JPL) Deep Space Network (DSN) Data System implementation tasks is described. The resource estimation model modifies and combines a number of existing models. The model calibrates the task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software life-cycle statistics.

  5. EOS MLS Level 1B Data Processing Software. Version 3

    NASA Technical Reports Server (NTRS)

    Perun, Vincent S.; Jarnot, Robert F.; Wagner, Paul A.; Cofield, Richard E., IV; Nguyen, Honghanh T.; Vuu, Christina

    2011-01-01

    This software is an improvement on Version 2, which was described in EOS MLS Level 1B Data Processing, Version 2.2, NASA Tech Briefs, Vol. 33, No. 5 (May 2009), p. 34. It accepts the EOS MLS Level 0 science/engineering data, and the EOS Aura spacecraft ephemeris/attitude data, and produces calibrated instrument radiances and associated engineering and diagnostic data. This version makes the code more robust, improves calibration, provides more diagnostics outputs, defines the Galactic core more finely, and fixes the equator crossing. The Level 1 processing software manages several different tasks. It qualifies each data quantity using instrument configuration and checksum data, as well as data transmission quality flags. Statistical tests are applied for data quality and reasonableness. The instrument engineering data (e.g., voltages, currents, temperatures, and encoder angles) is calibrated by the software, and the filter channel space reference measurements are interpolated onto the times of each limb measurement with the interpolates being differenced from the measurements. Filter channel calibration target measurements are interpolated onto the times of each limb measurement, and are used to compute radiometric gain. The total signal power is determined and analyzed by each digital autocorrelator spectrometer (DACS) during each data integration. The software converts each DACS data integration from an autocorrelation measurement in the time domain into a spectral measurement in the frequency domain, and estimates separately the spectrally, smoothly varying and spectrally averaged components of the limb port signal arising from antenna emission and scattering effects. Limb radiances are also calibrated.
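The DACS conversion from a time-domain autocorrelation into a frequency-domain spectrum rests on the Wiener-Khinchin relation (the power spectrum is the Fourier transform of the autocorrelation); a toy illustration with a synthetic lag series, not MLS data:

```python
import numpy as np

# Synthetic autocorrelation lags for a signal dominated by one spectral
# line: r[k] = cos(2*pi*f0*k/N) places all power at frequency bin f0.
n_lags = 64
f0 = 8                                           # hypothetical line position
k = np.arange(n_lags)
autocorr = np.cos(2 * np.pi * f0 * k / n_lags)

# Wiener-Khinchin: transform the lag series to get the power spectrum.
spectrum = np.abs(np.fft.rfft(autocorr))
peak_bin = int(np.argmax(spectrum))
```

A real correlator pipeline would also apply lag windowing and quantization corrections before the transform; only the core time-to-frequency step is sketched here.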

  6. Developing of an automation for therapy dosimetry systems by using labview software

    NASA Astrophysics Data System (ADS)

    Aydin, Selim; Kam, Erol

    2018-06-01

    Traceability, accuracy and consistency of radiation measurements are essential in radiation dosimetry, particularly in radiotherapy, where the outcome of treatments is highly dependent on the radiation dose delivered to patients. It is therefore very important to provide reliable, accurate and fast calibration services for therapy dosimeters, since the radiation dose delivered to a radiotherapy patient is directly related to the accuracy and reliability of these devices. In this study, we report the performance of in-house developed computer controlled data acquisition and monitoring software for commercially available radiation therapy electrometers. The LabVIEW® software suite is used to provide reliable, fast and accurate calibration services. The software also collects environmental data such as temperature, pressure and humidity in order to use them in correction factor calculations. By using this software tool, a better control over the calibration process is achieved and the need for human intervention is reduced. This is the first software that can control dosimeter systems frequently used in the radiation therapy field at hospitals, such as Unidos Webline, Unidos E, Dose-1 and PC Electrometers.

  7. Software cost/resource modeling: Deep space network software cost estimation model

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. J.

    1980-01-01

    A parametric software cost estimation model prepared for JPL deep space network (DSN) data systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models, such as those of the General Research Corporation, Doty Associates, IBM (Walston-Felix), Rome Air Force Development Center, University of Maryland, and Rayleigh-Norden-Putnam. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software lifecycle statistics. The estimation model output scales a standard DSN work breakdown structure skeleton, which is then input to a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.

  8. Spacecraft attitude calibration/verification baseline study

    NASA Technical Reports Server (NTRS)

    Chen, L. C.

    1981-01-01

    A baseline study for a generalized spacecraft attitude calibration/verification system is presented. It can be used to define software specifications for three major functions required by a mission: the pre-launch parameter observability and data collection strategy study; the in-flight sensor calibration; and the post-calibration attitude accuracy verification. Analytical considerations are given for both single-axis and three-axis spacecraft. The three-axis attitudes considered include the inertial-pointing attitudes, the reference-pointing attitudes, and attitudes undergoing specific maneuvers. The attitude sensors and hardware considered include the Earth horizon sensors, the plane-field Sun sensors, the coarse and fine two-axis digital Sun sensors, the three-axis magnetometers, the fixed-head star trackers, and the inertial reference gyros.

  9. Liquid Scintillation Counting - Packard Triple-Label Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torretto, P. A.

    2017-03-23

    The Radiological Measurements Laboratory (RML) maintains and operates nine Packard Liquid Scintillation Counters (LSCs). These counters were obtained through various sources and were generally purchased as 2500, 2700 or 3100 series counters. In 2004/2005 the software and firmware on the counters were upgraded. The counters are now designated as 3100 series counters running the Quantasmart software package. Thus, a single procedure can be used to calibrate and operate the Packard LSCs.

  10. Effect of Body Position on Energy Expenditure of Preterm Infants as Determined by Simultaneous Direct and Indirect Calorimetry.

    PubMed

    Bell, Edward F; Johnson, Karen J; Dove, Edwin L

    2017-04-01

    Background  Indirect calorimetry is the standard method for estimating energy expenditure in clinical research. Few studies have evaluated indirect calorimetry in infants by comparing it with simultaneous direct calorimetry. Our purpose was (1) to compare the energy expenditure of preterm infants determined by these two methods, direct calorimetry and indirect calorimetry; and (2) to examine the effect of body position, supine or prone, on energy expenditure. Study Design  We measured energy expenditure by simultaneous direct (heat loss by gradient-layer calorimeter corrected for heat storage) and indirect calorimetry (whole-body oxygen consumption and carbon dioxide production) in 15 growing preterm infants during two consecutive interfeeding intervals, once in the supine position and once in the prone position. Results  The mean energy expenditure for all measurements in both positions did not differ significantly by the method used: 2.82 (standard deviation [SD] 0.42) kcal/kg/h by direct calorimetry and 2.78 (SD 0.48) kcal/kg/h by indirect calorimetry. The energy expenditure was significantly lower, by 10%, in the prone than in the supine position, whether examined by direct calorimetry (2.67 vs. 2.97 kcal/kg/h, p  < 0.001) or indirect calorimetry (2.64 vs. 2.92 kcal/kg/h, p  = 0.017). Conclusion  Direct calorimetry and indirect calorimetry gave similar estimates of energy expenditure. Energy expenditure was 10% lower in the prone position than in the supine position.
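Respirometric indirect calorimetry typically converts measured gas exchange to energy expenditure with the abbreviated Weir equation; a sketch for orientation (the gas-exchange rates below are hypothetical, and the study does not state that this exact form was used):

```python
# Abbreviated Weir equation commonly used in indirect calorimetry:
# EE (kcal/min) = 3.941 * VO2 + 1.106 * VCO2, with gas rates in L/min.

def weir_energy_expenditure(vo2_l_min, vco2_l_min):
    """Energy expenditure (kcal/min) from oxygen consumption and
    carbon dioxide production rates (L/min)."""
    return 3.941 * vo2_l_min + 1.106 * vco2_l_min

# Hypothetical gas-exchange rates for a small infant.
ee_kcal_min = weir_energy_expenditure(0.0060, 0.0051)
ee_kcal_h = ee_kcal_min * 60.0
```

Dividing the hourly value by body mass would give the kcal/kg/h figures quoted in the abstract.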

  11. Preliminary design of the HARMONI science software

    NASA Astrophysics Data System (ADS)

    Piqueras, Laure; Jarno, Aurelien; Pécontal-Rousset, Arlette; Loupias, Magali; Richard, Johan; Schwartz, Noah; Fusco, Thierry; Sauvage, Jean-François; Neichel, Benoît; Correia, Carlos M.

    2016-08-01

    This paper introduces the science software of HARMONI. The Instrument Numerical Model simulates the instrument from the optical point of view and provides synthetic exposures simulating detector readouts from data-cubes containing astrophysical scenes. The Data Reduction Software converts raw-data frames into a fully calibrated, scientifically usable data cube. We present the functionalities and the preliminary design of this software, describe some of the methods and algorithms used and highlight the challenges that we will have to face.

  12. Differential scanning calorimetry of coal

    NASA Technical Reports Server (NTRS)

    Gold, P. I.

    1978-01-01

    Differential scanning calorimetry studies performed during the first year of this project demonstrated the occurrence of exothermic reactions associated with the production of volatile matter in or near the plastic region. The temperature and magnitude of the exothermic peak were observed to be strongly affected by the heating rate, sample mass and, to a lesser extent, by sample particle size. Thermal properties also were found to be influenced by oxidation of the coal sample due to weathering effects.

  13. Differential Binding Models for Direct and Reverse Isothermal Titration Calorimetry.

    PubMed

    Herrera, Isaac; Winnik, Mitchell A

    2016-03-10

    Isothermal titration calorimetry (ITC) is a technique to measure the stoichiometry and thermodynamics from binding experiments. Identifying an appropriate mathematical model to evaluate titration curves of receptors with multiple sites is challenging, particularly when the stoichiometry or binding mechanism is not available. In a recent theoretical study, we presented a differential binding model (DBM) to study calorimetry titrations independently of the interaction among the binding sites (Herrera, I.; Winnik, M. A. J. Phys. Chem. B 2013, 117, 8659-8672). Here, we build upon our DBM and show its practical application to evaluate calorimetry titrations of receptors with multiple sites independently of the titration direction. Specifically, we present a set of ordinary differential equations (ODEs) with the general form d[S]/dV that can be integrated numerically to calculate the equilibrium concentrations of free and bound species S at every injection step and, subsequently, to evaluate the volume-normalized heat signal (δQ(V) = δq/dV) of direct and reverse calorimetry titrations. Additionally, we identify factors that influence the shape of the titration curve and can be used to optimize the initial concentrations of titrant and analyte. We demonstrate the flexibility of our updated DBM by applying these differentials and a global regression analysis to direct and reverse calorimetric titrations of gadolinium ions with multidentate ligands of increasing denticity, namely, diglycolic acid (DGA), citric acid (CIT), and nitrilotriacetic acid (NTA), and use statistical tests to validate the stoichiometries for the metal-ligand pairs studied.
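The authors' DBM handles multiple interacting sites; as a much simpler point of reference, a forward simulation of a single-site (1:1) direct titration can be written in a few lines (all parameters hypothetical, displaced-volume dilution ignored):

```python
import numpy as np

def ml_conc(m_tot, l_tot, ka):
    """Equilibrium [ML] (M) for 1:1 binding from total concentrations,
    via the standard quadratic solution of the mass-action equations."""
    s = m_tot + l_tot + 1.0 / ka
    return (s - np.sqrt(s * s - 4.0 * m_tot * l_tot)) / 2.0

def simulate_titration(v0, m0, l_syr, dv, n_inj, ka, dh):
    """Heat per injection (J) for titrant of concentration l_syr (M)
    injected in volumes dv (L) into a cell of volume v0 (L) holding
    receptor at m0 (M); dh is the binding enthalpy (J/mol)."""
    heats, ml_prev, l_added = [], 0.0, 0.0
    for _ in range(n_inj):
        l_added += dv * l_syr                   # moles of ligand delivered
        ml = ml_conc(m0, l_added / v0, ka)      # cell concentrations
        heats.append(dh * v0 * (ml - ml_prev))  # incremental heat
        ml_prev = ml
    return np.array(heats)

# Hypothetical exothermic 1:1 system (Kd = 1 uM, dH = -40 kJ/mol).
q = simulate_titration(v0=1.4e-3, m0=1e-5, l_syr=2e-4, dv=1e-5,
                       n_inj=30, ka=1e6, dh=-40e3)
```

The per-injection heats shrink toward zero as the receptor saturates, producing the familiar sigmoidal titration curve; the DBM generalizes this by integrating d[S]/dV differentials for all bound species instead of solving a single quadratic.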

  14. Teaching Camera Calibration by a Constructivist Methodology

    ERIC Educational Resources Information Center

    Samper, D.; Santolaria, J.; Pastor, J. J.; Aguilar, J. J.

    2010-01-01

    This article describes the Metrovisionlab simulation software and practical sessions designed to teach the most important machine vision camera calibration aspects in courses for senior undergraduate students. By following a constructivist methodology, having received introductory theoretical classes, students use the Metrovisionlab application to…

  15. Medical color displays and their calibration

    NASA Astrophysics Data System (ADS)

    Fan, Jiahua; Roehrig, Hans; Dallas, W.; Krupinski, Elizabeth

    2009-08-01

    Color displays are increasingly used for medical imaging, replacing the traditional monochrome displays in radiology for multi-modality applications, 3D representation applications, etc. Color displays are also used increasingly because of the widespread application of Tele-Medicine, Tele-Dermatology and Digital Pathology. At this time, there is no concerted effort toward calibration procedures for this diverse range of color displays in Telemedicine and in other areas of the medical field. Using a colorimeter to measure display luminance and chrominance properties, together with processing software, we developed a first attempt at a color calibration protocol for the medical imaging field.

  16. Scintillating glasses for total absorption dual readout calorimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonvicini, V.; Driutti, A.; Cauz, D.

    2012-01-01

    Scintillating glasses are a potentially cheaper alternative to crystal-based calorimetry, with common problems related to light collection, detection and processing. As such, their use and development are part of more extensive R&D aimed at investigating the potential of total absorption, combined with the dual readout (DR) technique, for hadron calorimetry. A recent series of measurements, using cosmic rays and particle beams from the Fermilab test beam facility and scintillating glass with the characteristics required for application of the DR technique, serves to illustrate the problems addressed and the progress achieved by this R&D. Alternative solutions for light collection (conventional and silicon photomultipliers) and signal processing are compared, the separate contributions of scintillation and Cherenkov processes to the signal are evaluated, and results are compared to simulation.

  17. Self calibrating monocular camera measurement of traffic parameters.

    DOT National Transportation Integrated Search

    2009-12-01

This proposed project will extend the work of previous projects that have developed algorithms and software to measure traffic speed under adverse conditions using uncalibrated cameras. The present implementation uses the WSDOT CCTV cameras moun...

  18. Calibrating LOFAR using the Black Board Selfcal System

    NASA Astrophysics Data System (ADS)

    Pandey, V. N.; van Zwieten, J. E.; de Bruyn, A. G.; Nijboer, R.

    2009-09-01

The Black Board SelfCal (BBS) system is designed as the final processing system to carry out the calibration of LOFAR in an efficient way. In this paper we give a brief description of its architectural and software design, including its distributed computing approach. A confusion-limited deep all-sky image (from 38-62 MHz), made by calibrating LOFAR test data with the BBS suite, is shown as a sample result. The present status and future directions of development of the BBS suite are also touched upon. Although BBS is mainly developed for LOFAR, it may also be used to calibrate other instruments once their specific algorithms are plugged in.

  19. Simbol-X Telescope Scientific Calibrations: Requirements and Plans

    NASA Astrophysics Data System (ADS)

    Malaguti, G.; Angelini, L.; Raimondi, L.; Moretti, A.; Trifoglio, M.

    2009-05-01

The Simbol-X telescope characteristics and the mission's scientific requirements impose a challenging calibration plan with a number of unprecedented issues. The 20 m focal length implies that, even for 100 m-long facilities, the incoming X-ray beam has a divergence comparable to the incidence angle on the mirror surface. Moreover, this is the first time that a direct-focussing X-ray telescope will be calibrated over an energy band covering about three decades, and with a complex focal plane. These problems require careful planning and organization of the measurements, together with an evaluation of the calibration needs in terms of both hardware and software.

  20. The algorithm for automatic detection of the calibration object

    NASA Astrophysics Data System (ADS)

    Artem, Kruglov; Irina, Ugfeld

    2017-06-01

The problem of automatic image calibration is considered in this paper. The most challenging task in automatic calibration is proper detection of the calibration object. Solving this problem required applying digital image processing methods and algorithms such as morphology, filtering, edge detection, and shape approximation. The step-by-step development of the algorithm and its adaptation to the specific conditions of log cuts in the image background is presented. Testing of the automatic calibration module was carried out under the production conditions of a logging enterprise. In these tests the calibration object was isolated automatically in 86.1% of cases on average, with no type 1 errors. The algorithm was implemented in the automatic calibration module within mobile software for log deck volume measurement.
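A minimal version of the detection step can be sketched as a threshold followed by a bounding box of the foreground pixels. This is a toy illustration of the general approach, not the paper's algorithm (which additionally uses morphology, edge detection, and shape approximation); the image below is invented:

```python
def threshold(img, t):
    """Binarize a grayscale image (list of rows) at intensity t."""
    return [[1 if v >= t else 0 for v in row] for row in img]

def bbox(mask):
    """Bounding box (top, left, bottom, right) of the foreground pixels."""
    ys = [y for y, row in enumerate(mask) for v in row if v]
    xs = [x for row in mask for x, v in enumerate(row) if v]
    return min(ys), min(xs), max(ys), max(xs)

# A bright 2x2 "calibration object" on a dark background
img = [[10, 12, 11, 10],
       [11, 200, 210, 12],
       [10, 205, 198, 11],
       [12, 10, 11, 10]]
top, left, bottom, right = bbox(threshold(img, 128))
```

On real log-deck photographs the thresholding step alone would of course pick up bright log cuts as well, which is exactly why the published pipeline adds shape approximation on top.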

  1. Toward Millimagnitude Photometric Calibration (Abstract)

    NASA Astrophysics Data System (ADS)

    Dose, E.

    2014-12-01

(Abstract only) Asteroid rotation, exoplanet transits, and similar measurements will increasingly call for photometric precisions better than about 10 millimagnitudes, often between nights and ideally between distant observers. The present work applies detailed spectral simulations to test popular photometric calibration practices and to test new extensions of these practices. Using 107 synthetic spectra of stars of diverse colors, detailed atmospheric transmission spectra computed by solar-energy software, realistic spectra of popular astronomy gear, and the option of three sources of noise added at realistic millimagnitude levels, we find that certain adjustments to current calibration practices can help remove small systematic errors, especially for imperfect filters, high airmasses, and possibly passing thin cirrus clouds.
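The kind of calibration being tested can be sketched as an ordinary least-squares fit of a zero point, an extinction coefficient, and a color term. The model form and all star data below are illustrative assumptions, not the paper's simulation:

```python
def solve3(a, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination."""
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

# Model: m_cat - m_inst = Z - k * airmass + c * (B - V)
true_Z, true_k, true_c = 21.50, 0.18, 0.05
stars = [(-9.0, 1.05, 0.30), (-8.2, 1.60, 0.90), (-7.5, 1.20, 0.55),
         (-6.9, 2.10, 0.10), (-8.8, 1.35, 0.75), (-7.1, 1.80, 0.45)]
obs = [(mi, X, bv, mi + true_Z - true_k * X + true_c * bv)
       for mi, X, bv in stars]

# Normal equations A^T A p = A^T d with design rows [1, -X, B-V]
ata = [[0.0] * 3 for _ in range(3)]
atd = [0.0] * 3
for m_inst, X, bv, m_cat in obs:
    row, d = [1.0, -X, bv], m_cat - m_inst
    for i in range(3):
        atd[i] += row[i] * d
        for j in range(3):
            ata[i][j] += row[i] * row[j]

Z_fit, k_fit, c_fit = solve3(ata, atd)
```

Because the synthetic observations are noise-free, the fit returns the generating parameters to machine precision; adding millimagnitude-level noise, as the paper does, is what reveals the systematic errors.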

  2. Methodological evaluation of indirect calorimetry data in lean and obese rats.

    PubMed

    Rafecas, I; Esteve, M; Fernández-López, J A; Remesar, X; Alemany, M

    1993-11-01

1. The applicability of current indirect calorimetry formulae to the study of energy and substrate balances in obese rats has been evaluated. The energy consumption of series of 60-day-old rats of Wistar stock and of lean and obese Zucker stock was studied by means of direct and indirect calorimetry, and by establishing their energy balance through measurement of food intake and retention. Calorimetric studies encompassed a 24 h period, with gas and heat output measurements every 2 or 5 min, respectively, for direct and indirect calorimetry. 2. The analysis of fat composition (diet, whole rat, and synthesized and oxidized fat) showed only small variations that had only a limited effect on the overall energy equation parameters. 3. A gap in the nitrogen balance, representing a urinary N excretion lower than the actual protein oxidized, resulted in significant deviations in the estimation of carbohydrate and lipid oxidized when using the equations currently available for indirect calorimetry. 4. Analysis of the amino acid composition of diet and rat protein, as well as of the portion actually oxidized, and correction for the nitrogen gap allowed the establishment of a set of equations that gave better coincidence of the calculated data with the measured substrate balance. 5. The measured heat output of all rats was lower than the values estimated by means of either indirect calorimetry or direct energy balance measurement; the difference corresponded to the energy lost in water evaporation, and was in the range of one-fifth of total energy produced in the three rat stocks. 6. Wistar rats showed a biphasic circadian rhythm of substrate utilization, with alternate lipid synthesis/degradation that reversed that of carbohydrate, concordant with nocturnal feeding habits. Zucker rats did not show this rhythm; obese rats synthesized large amounts of fat during most of the light period, consuming fat at the end of the dark period, which suggests more diurnal feeding habits.
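The usual starting point for such indirect-calorimetry equations is the abbreviated Weir formula, in which urinary nitrogen carries the protein-oxidation correction that this study found to be the weak point. The sketch below uses the textbook human coefficients, not the rat-specific equations derived in the paper:

```python
def weir_energy_kcal(vo2_l, vco2_l, urinary_n_g=0.0):
    """Abbreviated Weir equation: energy expenditure in kcal from
    O2 consumed (L), CO2 produced (L), and urinary nitrogen (g)."""
    return 3.941 * vo2_l + 1.106 * vco2_l - 2.17 * urinary_n_g

def respiratory_quotient(vo2_l, vco2_l):
    """RQ = VCO2 / VO2; ~0.7 for fat, ~1.0 for carbohydrate oxidation."""
    return vco2_l / vo2_l

# One litre of O2 consumed at RQ 0.8, with 0.05 g urinary N excreted
ee = weir_energy_kcal(1.0, 0.8, 0.05)
```

An underestimated urinary nitrogen term shifts the computed non-protein gas exchange, which is precisely the mechanism behind the carbohydrate/lipid misestimation described in point 3 above.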

  3. The development of a dynamic software for the user interaction from the geographic information system environment with the database of the calibration site of the satellite remote electro-optic sensors

    NASA Astrophysics Data System (ADS)

    Zyelyk, Ya. I.; Semeniv, O. V.

    2015-12-01

The state of the problem of post-launch calibration of satellite electro-optic remote sensors, and its solution in Ukraine, is analyzed. The database is improved, and dynamic services for user interaction with the database from the environment of the open geographic information system Quantum GIS are created for the information support of calibration activities. A dynamic application under QGIS is developed that implements these services, enabling data entry, editing, and extraction from the database, using object-oriented programming technology and modern complex program design patterns. The functional and algorithmic support of this dynamic software and its interface are developed.

  4. Assuring Software Cost Estimates: Is it an Oxymoron?

    NASA Technical Reports Server (NTRS)

    Hihn, Jarius; Tregre, Grant

    2013-01-01

The software industry repeatedly observes cost growth of well over 100% even after decades of cost estimation research and well-known best practices, so "What's the problem?" In this paper we provide an overview of the current state of software cost estimation best practice. We then explore whether applying some of the methods used in software assurance might improve the quality of software cost estimates. This paper focuses especially on issues associated with model calibration, estimate review, and the development and documentation of estimates as part of an integrated plan.

  5. Calibration of radio-astronomical data on the cloud. LOFAR, the pathway to SKA

    NASA Astrophysics Data System (ADS)

    Sabater, J.; Sánchez-Expósito, S.; Garrido, J.; Ruiz, J. E.; Best, P. N.; Verdes-Montenegro, L.

    2015-05-01

The radio interferometer LOFAR (LOw Frequency ARray) is now fully operational. This Square Kilometre Array (SKA) pathfinder allows observation of the sky at frequencies between 10 and 240 MHz, a relatively unexplored region of the spectrum. LOFAR is a software-defined telescope: the data are mainly processed using specialized software running on common computing facilities. That means that the capabilities of the telescope are virtually defined by software and mainly limited by the available computing power. However, the quantity of data produced can quickly reach huge volumes (several Petabytes per day). After correlation and pre-processing of the data in a dedicated cluster, the final dataset is handed over to the user (typically several Terabytes). The calibration of these data requires a powerful computing facility in which the specific state-of-the-art software, under heavy continuous development, can be easily installed and updated. That makes this case a perfect candidate for a cloud infrastructure, which adds the advantages of an on-demand, flexible solution. We present our approach to the calibration of LOFAR data using Ibercloud, the cloud infrastructure provided by Ibergrid. With the calibration work-flow adapted to the cloud, we can explore calibration strategies for the SKA and show how private or commercial cloud infrastructures (Ibercloud, Amazon EC2, Google Compute Engine, etc.) can help to solve the problems with big datasets that will be prevalent in the future of astronomy.

  6. Geometric Calibration and Validation of Ultracam Aerial Sensors

    NASA Astrophysics Data System (ADS)

    Gruber, Michael; Schachinger, Bernhard; Muick, Marc; Neuner, Christian; Tschemmernegg, Helfried

    2016-03-01

We present details of the calibration and validation procedure for UltraCam aerial camera systems. Results from the laboratory calibration and from validation flights are presented for both the large-format nadir cameras and the oblique cameras. Thus in this contribution we show results from the UltraCam Eagle and the UltraCam Falcon, both nadir mapping cameras, and from the UltraCam Osprey, our oblique camera system. This sensor offers a mapping-grade nadir component together with four oblique camera heads. The geometric processing after the flight mission is covered by the UltraMap software product, so we present details of the workflow as well. The first part is the initial post-processing, which combines image information with camera parameters derived from the laboratory calibration. The second part, the traditional automated aerial triangulation (AAT), is the step from single images to blocks and enables an additional optimization process. We also present some special features of our software, which are designed to better support the operator in analyzing large blocks of aerial images and judging the quality of the photogrammetric set-up.

  7. Characterization of Novel Operation Modes for Secondary Emission Ionization Calorimetry

    NASA Astrophysics Data System (ADS)

    Tiras, Emrah; Dilsiz, Kamuran; Ogul, Hasan; Snyder, Christina; Bilki, Burak; Onel, Yasar; Winn, David

    2017-01-01

Secondary Emission (SE) Ionization Calorimetry is a novel technique to measure electromagnetic showers in high-radiation environments. We have developed new operation modes by modifying the bias of conventional PMT circuits. Hamamatsu single-anode R7761 and multi-anode R5900-00-M16 Photomultiplier Tubes (PMTs) with modified bases are used as SE detector modules in our SE calorimetry prototype. In this detector module, the first dynode is used as the active medium, as opposed to the photocathode. Here, we report the technical design of the new modes and characterization measurements for both SE and PMT modes.

  8. Isothermal Titration Calorimetry in the Student Laboratory

    ERIC Educational Resources Information Center

    Wadso, Lars; Li, Yujing; Li, Xi

    2011-01-01

    Isothermal titration calorimetry (ITC) is the measurement of the heat produced by the stepwise addition of one substance to another. It is a common experimental technique, for example, in pharmaceutical science, to measure equilibrium constants and reaction enthalpies. We describe a stirring device and an injection pump that can be used with a…

  9. Legato: Personal Computer Software for Analyzing Pressure-Sensitive Paint Data

    NASA Technical Reports Server (NTRS)

    Schairer, Edward T.

    2001-01-01

    'Legato' is personal computer software for analyzing radiometric pressure-sensitive paint (PSP) data. The software is written in the C programming language and executes under Windows 95/98/NT operating systems. It includes all operations normally required to convert pressure-paint image intensities to normalized pressure distributions mapped to physical coordinates of the test article. The program can analyze data from both single- and bi-luminophore paints and provides for both in situ and a priori paint calibration. In addition, there are functions for determining paint calibration coefficients from calibration-chamber data. The software is designed as a self-contained, interactive research tool that requires as input only the bare minimum of information needed to accomplish each function, e.g., images, model geometry, and paint calibration coefficients (for a priori calibration) or pressure-tap data (for in situ calibration). The program includes functions that can be used to generate needed model geometry files for simple model geometries (e.g., airfoils, trapezoidal wings, rotor blades) based on the model planform and airfoil section. All data files except images are in ASCII format and thus are easily created, read, and edited. The program does not use database files. This simplifies setup but makes the program inappropriate for analyzing massive amounts of data from production wind tunnels. Program output consists of Cartesian plots, false-colored real and virtual images, pressure distributions mapped to the surface of the model, assorted ASCII data files, and a text file of tabulated results. Graphical output is displayed on the computer screen and can be saved as publication-quality (PostScript) files.
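In-situ calibration of a radiometric PSP is commonly modeled with a Stern-Volmer-type relation, I_ref/I = A + B·(P/P_ref), with A and B fitted against pressure-tap readings. The sketch below illustrates that general idea with synthetic tap data; it is not Legato's actual implementation:

```python
def fit_stern_volmer(p_ratios, i_ratios):
    """Least-squares fit of I_ref/I = A + B * (P/P_ref)."""
    n = len(p_ratios)
    sx, sy = sum(p_ratios), sum(i_ratios)
    sxx = sum(x * x for x in p_ratios)
    sxy = sum(x * y for x, y in zip(p_ratios, i_ratios))
    B = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    A = (sy - B * sx) / n
    return A, B

def pressure(i_ratio, A, B, p_ref):
    """Invert the calibration: pressure from an intensity ratio."""
    return p_ref * (i_ratio - A) / B

# Synthetic pressure-tap data generated with A = 0.2, B = 0.8
p_ratios = [0.8, 0.9, 1.0, 1.1, 1.2]
i_ratios = [0.2 + 0.8 * x for x in p_ratios]
A, B = fit_stern_volmer(p_ratios, i_ratios)
```

Once A and B are in hand, every pixel's intensity ratio maps to pressure through the same inversion, which is what turns paint images into pressure distributions.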

  10. Calibration of areal surface topography measuring instruments

    NASA Astrophysics Data System (ADS)

    Seewig, J.; Eifler, M.

    2017-06-01

The ISO standards related to the calibration of areal surface topography measuring instruments are the ISO 25178-6xx series, which defines the relevant metrological characteristics for the calibration of different measuring principles, and the ISO 25178-7xx series, which defines the actual calibration procedures. As the field of areal measurement is not yet fully standardized, however, there are still open questions to be addressed which are the subject of current research. On this basis, selected research results of the authors in this area are presented. This includes the design and fabrication of areal material measures, for which two examples are presented: the direct laser writing of a stepless material measure for the calibration of the height axis, based on the Abbott curve, and the manufacturing of a Siemens star for determining the lateral resolution limit. Building on these results, a new definition for the resolution criterion, the small-scale fidelity, which is still under discussion, is also presented. Additionally, a software solution for automated calibration procedures is outlined.
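The Abbott curve (material-ratio curve) underlying the stepless height-axis measure is simply the sorted height distribution of the surface: for each height, the percentage of the surface lying at or above it. A minimal illustrative sketch:

```python
def abbott_curve(heights):
    """Material-ratio (Abbott-Firestone) curve: pairs of
    (material ratio in percent, height), heights sorted descending."""
    s = sorted(heights, reverse=True)
    n = len(s)
    return [((i + 1) / n * 100.0, h) for i, h in enumerate(s)]

curve = abbott_curve([1.0, 3.0, 2.0, 4.0])
```

A material measure built from this curve exercises the instrument's height axis continuously rather than at a few discrete step heights, which is the appeal of the stepless design.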

  11. Integration and global analysis of isothermal titration calorimetry data for studying macromolecular interactions.

    PubMed

    Brautigam, Chad A; Zhao, Huaying; Vargas, Carolyn; Keller, Sandro; Schuck, Peter

    2016-05-01

    Isothermal titration calorimetry (ITC) is a powerful and widely used method to measure the energetics of macromolecular interactions by recording a thermogram of differential heating power during a titration. However, traditional ITC analysis is limited by stochastic thermogram noise and by the limited information content of a single titration experiment. Here we present a protocol for bias-free thermogram integration based on automated shape analysis of the injection peaks, followed by combination of isotherms from different calorimetric titration experiments into a global analysis, statistical analysis of binding parameters and graphical presentation of the results. This is performed using the integrated public-domain software packages NITPIC, SEDPHAT and GUSSI. The recently developed low-noise thermogram integration approach and global analysis allow for more precise parameter estimates and more reliable quantification of multisite and multicomponent cooperative and competitive interactions. Titration experiments typically take 1-2.5 h each, and global analysis usually takes 10-20 min.
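The core of thermogram integration, baseline subtraction followed by integrating each injection peak, can be sketched with a trapezoid rule. This toy version assumes a constant baseline; NITPIC's automated peak-shape analysis is far more sophisticated:

```python
def injection_heat(times, power, t0, t1, baseline):
    """Trapezoidal integral of (power - baseline) over [t0, t1]:
    the heat of one injection."""
    q = 0.0
    for i in range(len(times) - 1):
        if t0 <= times[i] and times[i + 1] <= t1:
            p0 = power[i] - baseline
            p1 = power[i + 1] - baseline
            q += 0.5 * (p0 + p1) * (times[i + 1] - times[i])
    return q

# A flat-topped "peak": 1 uW above a 1 uW baseline for 10 s -> 10 uJ
times = list(range(11))          # s
power = [2.0] * 11               # uW
q = injection_heat(times, power, 0, 10, baseline=1.0)
```

Repeating this per injection yields the isotherm (heat versus molar ratio) that the global analysis in SEDPHAT then fits; the statistical gain of the published approach comes from estimating the baseline and peak shapes rather than assuming them.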

  12. Students' Calibration of Knowledge and Learning Processes: Implications for Designing Powerful Software Learning Environments

    ERIC Educational Resources Information Center

    Winne, Philip H.

    2004-01-01

    Calibration concerns (a) the deviation of a person's judgment from fact, introducing notions of bias and accuracy; and metric issues regarding (b) the validity of cues' contributions to judgments and (c) the grain size of cues. Miscalibration hinders self-regulated learning (SRL). Considering calibration in the context of Winne and Hadwin's…

  13. Development of Resistive Micromegas for Sampling Calorimetry

    NASA Astrophysics Data System (ADS)

    Geralis, T.; Fanourakis, G.; Kalamaris, A.; Nikas, D.; Psallidas, A.; Chefdeville, M.; Karyotakis, I.; Koletsou, I.; Titov, M.

    2018-02-01

Resistive Micromegas is proposed as an active element for sampling calorimetry. Future linear collider experiments and the HL-LHC experiments can profit from these developments for Particle Flow Calorimetry. Micromegas possesses remarkable properties: gain stability, reduced ion feedback, response linearity, adaptable sensitive-element granularity, fast response, and high rate capability. Recent developments on Micromegas with a protective resistive layer show excellent results, resolving the problem of discharges caused by local high charge deposition thanks to RC-slowed charge evacuation. Higher resistivity, though, may cause loss of response linearity at high rates. We have scanned a wide range of resistivities and performed laboratory tests with X-rays that demonstrate excellent response linearity up to rates of a few times 10 MHz/cm², with simultaneous mitigation of discharges. Beam test studies at SPS/CERN with hadrons have also shown remarkable stability of the resistive Micromegas and low currents for rates up to 15 MHz/cm². We present results from the aforementioned studies confronted with MC simulation.

  14. Automatic alignment method for calibration of hydrometers

    NASA Astrophysics Data System (ADS)

    Lee, Y. J.; Chang, K. H.; Chon, J. C.; Oh, C. Y.

    2004-04-01

    This paper presents a new method to automatically align specific scale-marks for the calibration of hydrometers. A hydrometer calibration system adopting the new method consists of a vision system, a stepping motor, and software to control the system. The vision system is composed of a CCD camera and a frame grabber, and is used to acquire images. The stepping motor moves the camera, which is attached to the vessel containing a reference liquid, along the hydrometer. The operating program has two main functions: to process images from the camera to find the position of the horizontal plane and to control the stepping motor for the alignment of the horizontal plane with a particular scale-mark. Any system adopting this automatic alignment method is a convenient and precise means of calibrating a hydrometer. The performance of the proposed method is illustrated by comparing the calibration results using the automatic alignment method with those obtained using the manual method.
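The two software functions described, locating the horizontal plane in the camera image and commanding the stepping motor to align it with a scale-mark, can be sketched as follows. The toy image and the steps-per-pixel constant are invented for illustration:

```python
def find_horizon_row(img):
    """Index of the row with the strongest mean vertical gradient,
    taken as the liquid surface in a grayscale image (list of rows)."""
    width = len(img[0])
    best_row, best_grad = 0, -1.0
    for r in range(len(img) - 1):
        grad = sum(abs(img[r + 1][c] - img[r][c])
                   for c in range(width)) / width
        if grad > best_grad:
            best_row, best_grad = r, grad
    return best_row

def steps_to_align(horizon_row, mark_row, steps_per_pixel):
    """Signed motor steps that bring the scale-mark onto the horizon."""
    return (mark_row - horizon_row) * steps_per_pixel

# Bright region above the liquid, dark below: the edge sits after row 2
img = [[200] * 4] * 3 + [[50] * 4] * 3
row = find_horizon_row(img)
```

In the real system this loop would run on each frame from the frame grabber, driving the motor until the residual offset is below one scale-mark's tolerance.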

  15. LV software support for supersonic flow analysis

    NASA Technical Reports Server (NTRS)

    Bell, William A.

    1991-01-01

    During 1991, the software developed allowed an operator to configure and checkout the TSI, Inc. laser velocimeter (LV) system prior to a run. This setup procedure established the operating conditions for the TSI MI-990 multichannel interface and the RMR-1989 rotating machinery resolver. In addition to initializing the instruments, the software package provides a means of specifying LV calibration constants, controlling the sampling process, and identifying the test parameters.

  16. Isoquinoline alkaloids and their binding with DNA: calorimetry and thermal analysis applications.

    PubMed

    Bhadra, Kakali; Kumar, Gopinatha Suresh

    2010-11-01

    Alkaloids are a group of natural products with unmatched chemical diversity and biological relevance forming potential quality pools in drug screening. The molecular aspects of their interaction with many cellular macromolecules like DNA, RNA and proteins are being currently investigated in order to evolve the structure activity relationship. Isoquinolines constitute an important group of alkaloids. They have extensive utility in cancer therapy and a large volume of data is now emerging in the literature on their mode, mechanism and specificity of binding to DNA. Thermodynamic characterization of the binding of these alkaloids to DNA may offer key insights into the molecular aspects that drive complex formation and these data can provide valuable information about the balance of driving forces. Various thermal techniques have been conveniently used for this purpose and modern calorimetric instrumentation provides direct and quick estimation of thermodynamic parameters. Thermal melting studies and calorimetric techniques like isothermal titration calorimetry and differential scanning calorimetry have further advanced the field by providing authentic, reliable and sensitive data on various aspects of temperature dependent structural analysis of the interaction. In this review we present the application of various thermal techniques, viz. isothermal titration calorimetry, differential scanning calorimetry and optical melting studies in the characterization of drug-DNA interactions with particular emphasis on isoquinoline alkaloid-DNA interaction.

  17. DAQ Software Contributions, Absolute Scale Energy Calibration and Background Evaluation for the NOvA Experiment at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flumerfelt, Eric Lewis

    2015-08-01

The NOvA (NuMI Off-axis νe Appearance) Experiment is a long-baseline accelerator neutrino experiment currently in its second year of operations. NOvA uses the Neutrinos from the Main Injector (NuMI) beam at Fermilab, and there are two main off-axis detectors: a Near Detector at Fermilab and a Far Detector 810 km away at Ash River, MN. The work reported herein supports the NOvA Experiment through contributions to the development of data acquisition software; by providing an accurate, absolute-scale energy calibration for electromagnetic showers in NOvA detector elements, crucial to the primary electron neutrino search; and through an initial evaluation of the cosmic background rate in the NOvA Far Detector, which is situated on the surface without significant overburden. Additional support work for the NOvA Experiment is also detailed, including DAQ server administration duties and a study of NOvA's sensitivity to neutrino oscillations into a "sterile" state.

  18. Method calibration of the model 13145 infrared target projectors

    NASA Astrophysics Data System (ADS)

    Huang, Jianxia; Gao, Yuan; Han, Ying

    2014-11-01

The SBIR Model 13145 Infrared Target Projector (hereafter, the Evaluation Unit) is used for characterizing the performance of infrared imaging systems. Test items include SiTF, MTF, NETD, MRTD, MDTD, and NPS. The infrared target projector comprises two area blackbodies, a 12-position target wheel, and an all-reflective collimator. It provides high-spatial-frequency differential targets; these precision differential targets are imaged by the infrared imaging system and converted photoelectrically into analog or digital signals. Application software (IRWindows 2001) evaluates the performance of the infrared imaging system. To calibrate the unit as a whole, the distributed components are first calibrated separately: the area blackbodies are calibrated according to the area-blackbody calibration specification, the all-reflective collimator is calibrated by correcting its error factors, the radiance of the infrared target projector is calibrated using the SR5000 spectral radiometer, and the systematic errors are analyzed. For the parameters of the infrared imaging system, an integrated evaluation method is needed. Following GJB2340-1995, General specification for military thermal imaging sets, the testing parameters of the infrared imaging system are measured and the results are compared with those from an Optical Calibration Testing Laboratory, with the goal of establishing the true calibration performance of the Evaluation Unit.

  19. Calibration strategies for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Gaug, Markus; Berge, David; Daniel, Michael; Doro, Michele; Förster, Andreas; Hofmann, Werner; Maccarone, Maria C.; Parsons, Dan; de los Reyes Lopez, Raquel; van Eldik, Christopher

    2014-08-01

    The Central Calibration Facilities workpackage of the Cherenkov Telescope Array (CTA) observatory for very high energy gamma ray astronomy defines the overall calibration strategy of the array, develops dedicated hardware and software for the overall array calibration and coordinates the calibration efforts of the different telescopes. The latter include LED-based light pulsers, and various methods and instruments to achieve a calibration of the overall optical throughput. On the array level, methods for the inter-telescope calibration and the absolute calibration of the entire observatory are being developed. Additionally, the atmosphere above the telescopes, used as a calorimeter, will be monitored constantly with state-of-the-art instruments to obtain a full molecular and aerosol profile up to the stratosphere. The aim is to provide a maximal uncertainty of 10% on the reconstructed energy-scale, obtained through various independent methods. Different types of LIDAR in combination with all-sky-cameras will provide the observatory with an online, intelligent scheduling system, which, if the sky is partially covered by clouds, gives preference to sources observable under good atmospheric conditions. Wide-field optical telescopes and Raman Lidars will provide online information about the height-resolved atmospheric extinction, throughout the field-of-view of the cameras, allowing for the correction of the reconstructed energy of each gamma-ray event. The aim is to maximize the duty cycle of the observatory, in terms of usable data, while reducing the dead time introduced by calibration activities to an absolute minimum.
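To first order, the height-resolved extinction correction described above amounts to dividing the reconstructed energy by the atmospheric transmission along the line of sight. The sketch below is a heavy simplification (a single effective optical depth instead of a per-event, height-resolved profile) and is not CTA's actual correction:

```python
import math

def correct_energy(e_reco, optical_depth):
    """First-order energy correction: Cherenkov light is attenuated by
    exp(-tau), so the reconstructed energy is scaled back up by 1/T."""
    transmission = math.exp(-optical_depth)
    return e_reco / transmission

# Clear-ish sky with ~10% effective aerosol optical depth
e = correct_energy(1.0, 0.1)
```

The real-time Raman Lidar and all-sky-camera data feed exactly this kind of correction per gamma-ray event, which is how partially cloudy runs can still contribute usable data toward the 10% energy-scale budget.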

  20. Design and realization of photoelectric instrument binocular optical axis parallelism calibration system

    NASA Astrophysics Data System (ADS)

    Ying, Jia-ju; Chen, Yu-dan; Liu, Jie; Wu, Dong-sheng; Lu, Jun

    2016-10-01

Misalignment of the binocular optical axis parallelism of a photoelectric instrument directly degrades the quality of observation. A digital calibration system for binocular optical axis parallelism is designed. Based on the principle of optical axis calibration for binocular photoelectric instruments, the system scheme is designed and realized; it comprises four modules: a multiband parallel light tube, optical axis translation, an image acquisition system, and a software system. According to the different characteristics of the thermal infrared imager and the low-light-level night viewer, different algorithms are used to localize the center of the cross reticle. Binocular optical axis parallelism calibration is thereby realized for calibrating both low-light-level night viewers and thermal infrared imagers.

  1. Deep space network software cost estimation model

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1981-01-01

    A parametric software cost estimation model prepared for Deep Space Network (DSN) Data Systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit DSN software life cycle statistics. The estimation model output scales a standard DSN Work Breakdown Structure skeleton, which is then input into a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.
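Models of this family multiply a size-driven nominal effort by adjustment factors derived from the questionnaire answers. The sketch below is a generic COCOMO-style stand-in, not Tausworthe's actual DSN model; all coefficients and multipliers are invented:

```python
def parametric_effort(ksloc, multipliers, a=2.8, b=1.2):
    """Effort in person-months: nominal a * KSLOC**b scaled by the
    product of cost-driver multipliers (one per questionnaire answer)."""
    effort = a * ksloc ** b
    for m in multipliers:
        effort *= m
    return effort

# 32 KSLOC task; experienced team (0.9) but volatile requirements (1.15)
effort = parametric_effort(32, [0.9, 1.15])
```

Calibrating such a model to DSN life-cycle statistics means refitting a and b (and the multiplier tables) against completed projects, after which the estimate can be spread over a Work Breakdown Structure for PERT/CPM scheduling.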

  2. Preparation of Solid Derivatives by Differential Scanning Calorimetry.

    ERIC Educational Resources Information Center

    Crandall, E. W.; Pennington, Maxine

    1980-01-01

    Describes the preparation of selected aldehydes and ketones, alcohols, amines, phenols, haloalkanes, and tertiaryamines by differential scanning calorimetry. Technique is advantageous because formation of the reaction product occurs and the melting point of the product is obtained on the same sample in a short time with no additional purification…

  3. Calibration and Validation of the Checkpoint Model to the Air Force Electronic Systems Center Software Database

    DTIC Science & Technology

    1997-09-01

Illinois Institute of Technology Research Institute (IITRI) calibrated seven parametric models including SPQR/20, the forerunner of CHECKPOINT. The...a semicolon); thus, SPQR/20 was calibrated using SLOC sizing data (IITRI, 1989: 3-4). The results showed only slight overall improvements in accuracy...even when validating the calibrated models with the same data sets. The IITRI study demonstrated SPQR/20 to be one of two models that were most

  4. Accuracy of pre-interventional computed tomography angiography post-processing software and extravascularly calibrated devices to determine vessel diameters: comparison with an intravascularly located calibrated catheter.

    PubMed

    Stahlberg, Erik; Planert, Mathis; Anton, Susanne; Panagiotopoulos, Nikolaos; Horn, Marco; Barkhausen, Joerg; Goltz, Jan Peter

    2018-07-01

    Background Accurate vessel sizing might affect treatment outcome of endovascular therapy. Purpose To compare accuracy of peripheral vessel diameter measurements using pre-interventional computed tomography angiography post processing software (CTA-PPS) and extravascularly located calibrated devices used during digital subtraction angiography (DSA) with an intravascular scaled catheter (SC). Material and Methods In 33 patients (28 men, mean age = 72 ± 11 years) a SC was used during DSA of the femoro-popliteal territory. Simultaneously, one scaled radiopaque tape (SRT) was affixed to the lateral thigh, one scaled radiopaque ruler (SRR) was positioned on the angiography table. For each patient, diameters of five anatomic landmarks were measured on DSA images after calibration using different scaled devices and CTA-PPS. Diameters were compared to SC (reference) and between groups of non-obese (NOB) and obese (OB) patients. Results In total, 660 measurements were performed. Compared to the reference, SRT overestimated the diameter by 1.2% (range = -10-12, standard deviation [SD] = 4.1%, intraclass correlation coefficient [ICC] = 0.992, 95% confidence interval [CI] = 0.989-0.992, P = 0.01), the SRR and CTA-PPS underestimated it by 21.3% (range = 1-47, SD = 9.4%, ICC = 0.864, 95% CI = 0.11-0.963, P = 0.08) and 3.2% (range = 17-38, SD = 9.7%, ICC = 0.976, 95% CI = 0.964-0.983, P = 0.01), respectively. Underestimation using the SRR was greatest in the proximal superficial-femoral artery (31%) and lowest at the P2 level of the popliteal artery (15%). In the NOB group, diameter overestimation of the SRT was 0.8% (range = 4-7, SD = 4.2%, B = 0.071, 95% CI = 0.293-0.435, P = 0.08) compared to the OB group of 1.6% (range = -7-4, SD = 2.9%, B = 0.010, 95% CI = 0.474-0.454, P = 0.96). Diameter underestimation of the SRR was 17.3% (range = 13-21, SD = 3.1%, B = 0
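The arithmetic behind such device-based calibration is a scale factor taken from a reference object of known size, followed by a percent-deviation comparison against the intravascular reference. A minimal sketch (all numbers invented):

```python
def calibrate_mm(measured_px, marker_px, marker_mm):
    """Convert a pixel measurement to millimetres using a reference
    device of known size (e.g. a scaled catheter)."""
    return measured_px * (marker_mm / marker_px)

def percent_deviation(measured, reference):
    """Signed deviation from the reference diameter, in percent."""
    return (measured - reference) / reference * 100.0

# Vessel spans 142 px; calibration marker spans 100 px and is 5 mm long
d_mm = calibrate_mm(142.0, 100.0, 5.0)
dev = percent_deviation(5.06, 5.00)
```

An extravascular device magnified differently from the vessel (as with the table-mounted ruler above) biases `marker_px`, which propagates directly into the diameter estimate; this is the geometric reason for the large SRR underestimation.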

  5. EOS MLS Level 2 Data Processing Software Version 3

    NASA Technical Reports Server (NTRS)

Livesey, Nathaniel J.; VanSnyder, Livesey W.; Read, William G.; Schwartz, Michael J.; Lambert, Alyn; Santee, Michelle L.; Nguyen, Honghanh T.; Froidevaux, Lucien; Wang, Shuhui; Manney, Gloria L.

    2011-01-01

This software accepts the EOS MLS calibrated microwave radiance measurements and operational meteorological data, and produces a set of estimates of atmospheric temperature and composition. This version has been designed to be as flexible as possible. The software is controlled by a Level 2 Configuration File that governs all aspects of the software: defining the contents of state and measurement vectors, defining the configurations of the various forward models available, reading appropriate a priori, spectroscopic, and calibration data, performing retrievals, post-processing results, computing diagnostics, and outputting results in appropriate files. In production mode, the software operates in parallel, with one instance of the program acting as a master coordinating the work of multiple slave instances on a cluster of computers, each computing the results for individual chunks of data. In addition to performing conventional retrieval calculations and producing geophysical products, the Level 2 Configuration File can instruct the software to produce files of simulated radiances based on a state vector formed from a set of geophysical product files taken as input. Combining both the retrieval and simulation tasks in a single piece of software makes it far easier to ensure that identical forward model algorithms and parameters are used in both tasks. This also dramatically reduces the complexity of the code maintenance effort.
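The master/worker chunking pattern described above can be sketched with a thread pool standing in for the cluster. Here `retrieve_chunk` is a trivial placeholder reduction, not MLS code, and the "radiances" are made-up numbers.

```python
# Toy master/worker sketch of chunked Level 2 processing; retrieve_chunk is a
# placeholder, not the actual MLS retrieval.
from concurrent.futures import ThreadPoolExecutor

def retrieve_chunk(chunk):
    """Process one chunk of data and return (chunk_id, result)."""
    chunk_id, radiances = chunk
    return chunk_id, sum(radiances) / len(radiances)

# Four chunks of "radiances"; in production these would be MLS measurements.
chunks = [(i, [float(i + j) for j in range(5)]) for i in range(4)]

# The "master" farms chunks out to workers and collects results by chunk id.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = dict(pool.map(retrieve_chunk, chunks))
```

Because chunks are independent, results can be collected in any order and merged by chunk id, which is what makes the master/slave split across a cluster straightforward.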

  6. Calibration Issues and Operating System Requirements for Electron-Probe Microanalysis

    NASA Technical Reports Server (NTRS)

    Carpenter, P.

    2006-01-01

Instrument purchase requirements and dialogue with manufacturers have established hardware parameters for alignment, stability, and reproducibility, which have helped improve the precision and accuracy of electron-probe microanalysis (EPMA). The development of correction algorithms and the accurate solution of quantitative analysis problems require the minimization of systematic errors and rely on internally consistent data sets. Improved hardware and computer systems have resulted in better automation of vacuum systems, stage and wavelength-dispersive spectrometer (WDS) mechanisms, and x-ray detector systems, which has improved instrument stability and precision. Improved software now allows extended automated runs involving diverse setups and better integrates digital imaging and quantitative analysis. However, instrumental performance is not regularly maintained: WDS mechanisms are aligned and calibrated during installation, but few laboratories appear to check and maintain this calibration. In particular, detector deadtime (DT) is typically assumed rather than measured, due primarily to the difficulty and inconvenience of the measurement process. This is a source of fundamental systematic error in many microprobe laboratories and is unknown to the analyst, as the magnitude of the DT correction is not listed in the output of microprobe operating systems. The analyst must remain vigilant for deviations in instrumental alignment and calibration, and microprobe system software must conveniently verify the necessary parameters. Microanalysis of mission-critical materials requires an ongoing demonstration of instrumental calibration. Possible approaches to improvements in instrument calibration, quality control, and accuracy will be discussed. Development of a set of core requirements based on discussions with users, researchers, and manufacturers can yield documents that improve and unify the methods by which instruments can be calibrated. These results can be used to
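As a concrete illustration of why an assumed deadtime matters, the standard non-paralyzable detector model (a textbook formula, not code from any microprobe system) shows how quickly the correction grows with count rate, and how sensitive it is to the assumed deadtime value.

```python
# Non-paralyzable deadtime correction, the textbook counting-loss model.

def deadtime_correct(measured_cps, tau_s):
    """True rate N = n / (1 - n*tau) for measured rate n and deadtime tau."""
    loss = 1.0 - measured_cps * tau_s
    if loss <= 0.0:
        raise ValueError("measured rate at or beyond the deadtime limit")
    return measured_cps / loss

# At 50 kcps, an assumed deadtime of 1.5 us implies a ~7.5% correction;
# if the true deadtime were 2.0 us instead, the correction would be ~10%.
n_assumed = deadtime_correct(50_000.0, 1.5e-6)
n_true = deadtime_correct(50_000.0, 2.0e-6)
```

The gap between the two corrected rates (about 2.7% here) propagates directly into k-ratios, which is why an assumed rather than measured deadtime is a hidden systematic error.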

  7. Glass transition of anhydrous starch by fast scanning calorimetry.

    PubMed

    Monnier, Xavier; Maigret, Jean-Eudes; Lourdin, Denis; Saiter, Allisson

    2017-10-01

By means of fast scanning calorimetry, the glass transition of anhydrous amorphous starch has been measured. With a scanning rate of 2000 K s⁻¹, thermal degradation of starch prior to the glass transition has been inhibited. To certify the glass transition measurement, structural relaxation of the glassy state has been investigated through physical aging as well as the concept of limiting fictive temperature. In both cases, characteristic enthalpy recovery peaks related to the structural relaxation of the glass have been observed. Thermal lag corrections based on the comparison of glass transition temperatures measured by differential and fast scanning calorimetry have been proposed. These complementary investigations give an anhydrous amorphous starch glass transition temperature of 312 ± 7 °C. This estimate correlates with previous extrapolations performed on hydrated starches. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Using LabVIEW to facilitate calibration and verification for respiratory impedance plethysmography.

    PubMed

    Ellis, W S; Jones, R T

    1991-12-01

A system for calibrating the Respitrace impedance plethysmograph was developed with the capacity to quantitatively verify the accuracy of calibration. LabVIEW software was used on a Macintosh II computer to create a user-friendly environment, with the added benefit of reduced development time. The system enabled a research assistant to calibrate the Respitrace within 15 min while achieving an accuracy within the normally accepted 10% deviation when the Respitrace output is compared to a water spirometer standard. The system and methods described were successfully used in a study of 10 subjects smoking cigarettes containing marijuana or cocaine under four conditions, calibrating all subjects to 10% accuracy within 15 min.
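The calibration step itself is commonly a two-degree-of-freedom least-squares fit of the ribcage and abdomen band signals against spirometer volumes. The sketch below uses that standard model form with synthetic, noise-free data; the gains and signal values are invented.

```python
# Least-squares plethysmography calibration sketch (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.6, 0.4                 # "true" band gains (invented)
rc = rng.uniform(0.2, 1.0, size=20)       # ribcage band signal, arbitrary units
ab = rng.uniform(0.2, 1.0, size=20)       # abdomen band signal
spiro = a_true * rc + b_true * ab         # spirometer volumes (noise-free)

# Fit Vt = a*RC + b*AB against the spirometer standard.
X = np.column_stack([rc, ab])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, spiro, rcond=None)

# Verification: predicted volumes vs. spirometer, against the 10% criterion.
max_dev = float(np.max(np.abs(X @ [a_hat, b_hat] - spiro) / spiro))
```

With real breaths the residuals are nonzero, and checking `max_dev` against the 10% criterion is exactly the kind of quantitative verification the system automates.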

  9. Real-time calibration and alignment of the LHCb RICH detectors

    NASA Astrophysics Data System (ADS)

He, Jibo

    2017-12-01

    In 2015, the LHCb experiment established a new and unique software trigger strategy with the purpose of increasing the purity of the signal events by applying the same algorithms online and offline. To achieve this, real-time calibration and alignment of all LHCb sub-systems is needed to provide vertexing, tracking, and particle identification of the best possible quality. The calibration of the refractive index of the RICH radiators, the calibration of the Hybrid Photon Detector image, and the alignment of the RICH mirror system, are reported in this contribution. The stability of the RICH performance and the particle identification performance are also discussed.

  10. A Study of Concept Mapping as an Instructional Intervention in an Undergraduate General Chemistry Calorimetry Laboratory

    NASA Astrophysics Data System (ADS)

    Stroud, Mary W.

This investigation, rooted in both chemistry and education, considers outcomes of a small-scale study in which concept mapping was used as an instructional intervention in an undergraduate calorimetry laboratory. A quasi-experimental, multiple-methods approach was employed, since the research questions posed in this study warranted both qualitative and quantitative perspectives and evaluations. For the intervention group of students, a convenience sample, post-lab concept maps, written discussions, quiz responses, and learning surveys were characterized and evaluated. Archived quiz responses from non-intervention students were also analyzed for comparison. Students constructed unique individual concept maps containing incorrect, conceptually correct, and "scientifically thin" calorimetry characterizations. Students placed greater emphasis on the mathematical relationships and equations used during the calorimetry experiment, demonstrating the meaning of calorimetry concepts to a lesser extent.

  11. Some applications of indirect calorimetry to sports medicine.

    PubMed

    Severi, S; Malavolti, M; Battistini, N; Bedogni, G

    2001-01-01

Some applications of indirect calorimetry to sports medicine are discussed and exemplified by case reports. In particular, it is suggested that oxygen consumption can be employed to assess the effects of physical activity on fat-free tissues and that the respiratory quotient may offer some insight into the food habits of athletes.

  12. Color calibration and color-managed medical displays: does the calibration method matter?

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Rehm, Kelly; Silverstein, Louis D.; Dallas, William J.; Fan, Jiahua; Krupinski, Elizabeth A.

    2010-02-01

Our laboratory has investigated the efficacy of a suite of color calibration and monitor profiling packages which employ a variety of color measurement sensors. Each of the methods computes gamma correction tables for the red, green and blue color channels of a monitor that attempt to: a) match a desired luminance range and tone reproduction curve; and b) maintain a target neutral point across the range of grey values. All of the methods examined here produce International Color Consortium (ICC) profiles that describe the color rendering capabilities of the monitor after calibration. Color profiles incorporate a transfer matrix that establishes the relationship between RGB driving levels and the International Commission on Illumination (CIE) XYZ (tristimulus) values of the resulting on-screen color; the matrix is developed by displaying color patches of known RGB values on the monitor and measuring the tristimulus values with a sensor. The number and chromatic distribution of color patches varies across methods and is usually not under user control. In this work we examine the effect of employing differing calibration and profiling methods on the rendition of color images. A series of color patches encoded in sRGB color space were presented on the monitor using color-management software that utilized the ICC profile produced by each method. The patches were displayed on the calibrated monitor and measured with a Minolta CS200 colorimeter. Differences between intended and achieved luminance and chromaticity were computed using the CIE DE2000 color-difference metric, in which a value of ΔE = 1 is generally considered to be approximately one just noticeable difference (JND) in color. We observed between one and 17 JNDs for individual colors, depending on calibration method and target.
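The transfer-matrix step described above can be emulated directly: display patches of known RGB, measure their XYZ values, and solve for the 3×3 matrix in a least-squares sense. In this sketch the "measurements" are simulated with the published sRGB/D65 matrix, so the solver should recover it; real profiling would use colorimeter readings instead.

```python
# Recovering an RGB->XYZ transfer matrix from simulated patch measurements.
import numpy as np

# sRGB (D65) linear-RGB -> XYZ matrix, used here only to simulate a monitor.
M_true = np.array([[0.4124, 0.3576, 0.1805],
                   [0.2126, 0.7152, 0.0722],
                   [0.0193, 0.1192, 0.9505]])

rng = np.random.default_rng(1)
rgb = rng.uniform(0.0, 1.0, size=(24, 3))   # linear RGB drive levels of patches
xyz = rgb @ M_true.T                        # simulated colorimeter readings

# Least-squares estimate of M from the (rgb, xyz) pairs.
M_hat = np.linalg.lstsq(rgb, xyz, rcond=None)[0].T
```

With noisy measurements, the number and chromatic distribution of patches (which the abstract notes varies between packages) governs how well-conditioned this fit is.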

  13. Calibration of X-Ray diffractometer by the experimental comparison method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dudka, A. P., E-mail: dudka@ns.crys.ras.ru

    2015-07-15

Software for calibrating an X-ray diffractometer with an area detector has been developed. It is proposed to search for detector and goniometer calibration models whose parameters are reproduced in a series of measurements on a reference crystal. Reference (standard) crystals are prepared during the investigation; they should provide agreement of structural models in repeated analyses. The technique developed has been used to calibrate Xcalibur Sapphire and Eos, Gemini Ruby (Agilent) and Apex x8 and Apex Duo (Bruker) diffractometers. The main conclusions are as follows: the calibration maps are stable for several years and can be used to improve structural results; verified CCD detectors exhibit significant inhomogeneity of the efficiency (response) function; and a Bruker goniometer introduces smaller distortions than an Agilent goniometer.

  14. A methodology to develop computational phantoms with adjustable posture for WBC calibration.

    PubMed

    Fonseca, T C Ferreira; Bogaerts, R; Hunt, John; Vanhavere, F

    2014-11-21

A Whole Body Counter (WBC) is a facility for routinely assessing the internal contamination of exposed workers, especially in the case of radiation release accidents. The calibration of the counting device is usually done using anthropomorphic physical phantoms representing the human body. Due to the challenge of constructing representative physical phantoms, virtual calibration has been introduced. The use of computational phantoms and the Monte Carlo method to simulate radiation transport has been demonstrated to be a worthy alternative. In this study we introduce a methodology developed for the creation of realistic computational voxel phantoms with adjustable posture for WBC calibration. The methodology makes use of different software packages to enable the creation and modification of computational voxel phantoms. This allows voxel phantoms to be developed on demand for the calibration of different WBC configurations, which in turn helps to study the major source of uncertainty associated with the in vivo measurement routine: the difference between the calibration phantoms and the real persons being counted. The use of realistic computational phantoms also helps optimize the counting measurement. Open-source codes such as the MakeHuman and Blender software packages have been used for the creation and modelling of 3D humanoid characters based on polygonal mesh surfaces. In addition, home-made software was developed to convert the binary 3D voxel grid into an MCNPX input file. This paper summarizes the development of a library of phantoms of the human body that uses two basic phantoms, called MaMP and FeMP (Male and Female Mesh Phantoms), to create a set of male and female phantoms that vary both in height and in weight. Two sets of MaMP and FeMP phantoms were developed and used for efficiency calibration of two different WBC set-ups: the Doel NPP WBC laboratory and the AGM laboratory of SCK-CEN in Mol, Belgium.
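A core step in any voxel-grid-to-MCNPX converter of this kind is flattening the grid into a lattice fill list. MCNP input supports a repeated-entry shorthand (`nR` repeats the previous entry n more times), and the helper below sketches that run-length encoding for one row of universe numbers; the material assignments are invented for illustration, and this is not the authors' actual converter.

```python
# Run-length encode one voxel row using MCNP-style "nR" repeat notation.

def rle_fill(row):
    """Encode e.g. [0,0,0,1] as "0 2R 1" (nR = repeat previous n more times)."""
    out, i = [], 0
    while i < len(row):
        j = i
        while j + 1 < len(row) and row[j + 1] == row[i]:
            j += 1
        out.append(str(row[i]))
        if j > i:                      # run longer than one entry
            out.append(f"{j - i}R")
        i = j + 1
    return " ".join(out)

# 0 = air, 1 = soft tissue, 2 = bone (universe numbers chosen for illustration)
line = rle_fill([0, 0, 0, 1, 1, 1, 1, 2, 0, 0])
```

Because human phantoms contain long runs of a single tissue, this compression keeps the generated input file to a manageable size.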

  15. A methodology to develop computational phantoms with adjustable posture for WBC calibration

    NASA Astrophysics Data System (ADS)

    Ferreira Fonseca, T. C.; Bogaerts, R.; Hunt, John; Vanhavere, F.

    2014-11-01

A Whole Body Counter (WBC) is a facility for routinely assessing the internal contamination of exposed workers, especially in the case of radiation release accidents. The calibration of the counting device is usually done using anthropomorphic physical phantoms representing the human body. Due to the challenge of constructing representative physical phantoms, virtual calibration has been introduced. The use of computational phantoms and the Monte Carlo method to simulate radiation transport has been demonstrated to be a worthy alternative. In this study we introduce a methodology developed for the creation of realistic computational voxel phantoms with adjustable posture for WBC calibration. The methodology makes use of different software packages to enable the creation and modification of computational voxel phantoms. This allows voxel phantoms to be developed on demand for the calibration of different WBC configurations, which in turn helps to study the major source of uncertainty associated with the in vivo measurement routine: the difference between the calibration phantoms and the real persons being counted. The use of realistic computational phantoms also helps optimize the counting measurement. Open-source codes such as the MakeHuman and Blender software packages have been used for the creation and modelling of 3D humanoid characters based on polygonal mesh surfaces. In addition, home-made software was developed to convert the binary 3D voxel grid into an MCNPX input file. This paper summarizes the development of a library of phantoms of the human body that uses two basic phantoms, called MaMP and FeMP (Male and Female Mesh Phantoms), to create a set of male and female phantoms that vary both in height and in weight. Two sets of MaMP and FeMP phantoms were developed and used for efficiency calibration of two different WBC set-ups: the Doel NPP WBC laboratory and the AGM laboratory of SCK-CEN in Mol, Belgium.

  16. [Analysis of energy expenditure in adults with cystic fibrosis: comparison of indirect calorimetry and prediction equations].

    PubMed

    Fuster, Casilda Olveira; Fuster, Gabriel Olveira; Galindo, Antonio Dorado; Galo, Alicia Padilla; Verdugo, Julio Merino; Lozano, Francisco Miralles

    2007-07-01

Undernutrition, which implies an imbalance between energy intake and energy requirements, is common in patients with cystic fibrosis. The aim of this study was to compare resting energy expenditure determined by indirect calorimetry with that obtained from commonly used predictive equations in adults with cystic fibrosis, and to assess the influence of clinical variables on the values obtained. We studied 21 patients with clinically stable cystic fibrosis, obtaining data on anthropometric variables, hand-grip dynamometry, electrical bioimpedance, and resting energy expenditure by indirect calorimetry. We used the intraclass correlation coefficient (ICC) and the Bland-Altman method to assess agreement between the values for resting energy expenditure measured by indirect calorimetry and those obtained with the World Health Organization (WHO) and Harris-Benedict prediction equations. The prediction equations underestimated resting energy expenditure in more than 90% of cases. The agreement between the value obtained by indirect calorimetry and that calculated with the prediction equations was poor (ICC for comparisons with the WHO and Harris-Benedict equations, 0.47 and 0.41, respectively). Bland-Altman analysis revealed a variable bias between the results of indirect calorimetry and those obtained with the prediction equations, irrespective of the resting energy expenditure. The difference between the values measured by indirect calorimetry and those obtained with the WHO equation was significantly larger in patients homozygous for the ΔF508 mutation and in those with exocrine pancreatic insufficiency. The WHO and Harris-Benedict prediction equations underestimate resting energy expenditure in adults with cystic fibrosis. There is poor agreement between the values for resting energy expenditure determined by indirect calorimetry and those estimated with prediction equations. Underestimation was greater in patients with exocrine pancreatic insufficiency and
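For reference, the two quantities being compared can be computed directly: the Harris-Benedict prediction from anthropometrics, and the Weir-equation value derived from measured gas exchange. Both formulas are standard and published; the example inputs below are invented, not patient data from the study.

```python
# Harris-Benedict prediction vs. Weir-based indirect calorimetry (invented inputs).

def harris_benedict_male(weight_kg, height_cm, age_yr):
    """Original Harris-Benedict equation for men, kcal/day."""
    return 66.473 + 13.7516 * weight_kg + 5.0033 * height_cm - 6.755 * age_yr

def weir_ree(vo2_ml_min, vco2_ml_min):
    """Abbreviated Weir equation: REE in kcal/day from VO2/VCO2 in mL/min."""
    return (3.941 * vo2_ml_min + 1.106 * vco2_ml_min) * 1.44

predicted = harris_benedict_male(55.0, 168.0, 28.0)
measured = weir_ree(250.0, 200.0)
bias_pct = 100.0 * (predicted - measured) / measured  # negative => underestimate
```

A Bland-Altman analysis like the one in the study would plot `predicted - measured` against the pair mean across patients to expose any systematic bias.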

  17. Proceedings of the Eleventh International Conference on Calorimetry in Particle Physics

    NASA Astrophysics Data System (ADS)

    Cecchi, Claudia

The Pamela silicon tungsten calorimeter / G. Zampa -- Design and development of a dense, fine grained silicon tungsten calorimeter with integrated electronics / D. Strom -- High resolution silicon detector for 1.2-3.1 eV (400-1000 nm) photons / D. Groom -- The KLEM high energy cosmic rays collector for the NUCLEON satellite mission / M. Merkin (contribution not received) -- The electromagnetic calorimeter of the Hera-b experiment / I. Matchikhilian -- The status of the ATLAS tile calorimeter / J. Mendes Saraiva -- Design and mass production of Scintillator Pad Detector (SPD) / Preshower (PS) detector for LHC-b experiment / E. Gushchin -- Study of new FNAL-NICADD extruded scintillator as active media of large EMCal of ALICE at LHC / O. Grachov -- The CMS hadron calorimeter / D. Karmgard (contribution not received) -- Test beam study of the KOPIO Shashlyk calorimeter prototype / A. Poblaguev -- The Shashlik electro-magnetic calorimeter for the LHCb experiment / S. Barsuk -- Quality of mass produced lead-tungstate crystals / R. Zhu -- Status of the CMS electromagnetic calorimeter / J. Fay -- Scintillation detectors for radiation-hard electromagnetic calorimeters / H. Loehner -- Energy, timing and two-photon invariant mass resolution of a 256-channel PbWO4 calorimeter / M. Ippolitov -- A high performance hybrid electromagnetic calorimeter at Jefferson Lab / A. Gasparian -- CsI(Tl) calorimetry on BESIII / T. Hu (contribution not received) -- The crystal ball and TAPS detectors at the MAMI electron beam facility / D. Watts -- Front-end electronics of the ATLAS tile calorimeter / R. Teuscher -- The ATLAS tilecal detector control system / A. Gomes -- Performance of the liquid argon final calibration board / C. de la Taille -- Overview of the LHCb calorimeter electronics / F. Machefert -- LHCb preshower photodetector and electronics / S. Monteil -- The CMS ECAL readout architecture and the clock and control system / K. Kloukinas -- Test of the CMS-ECAL trigger

  18. Automatic Camera Calibration for Cultural Heritage Applications Using Unstructured Planar Objects

    NASA Astrophysics Data System (ADS)

    Adam, K.; Kalisperakis, I.; Grammatikopoulos, L.; Karras, G.; Petsa, E.

    2013-07-01

As a rule, image-based documentation of cultural heritage today relies on ordinary digital cameras and commercial software. As such projects often involve researchers not familiar with photogrammetry, the question of camera calibration is important. Freely available, open-source, user-friendly software for automatic camera calibration, often based on simple 2D chess-board patterns, is an answer to the demand for simplicity and automation. However, such tools cannot respond to all requirements met in cultural heritage conservation regarding possible imaging distances and focal lengths. Here we investigate the practical possibility of camera calibration from unknown planar objects, i.e. any planar surface with adequate texture; we have focused on the example of urban walls covered with graffiti. Images are connected pair-wise with inter-image homographies, which are estimated automatically through a RANSAC-based approach after extracting and matching interest points with the SIFT operator. All valid points are identified on all images on which they appear. Provided that the image set includes a "fronto-parallel" view, inter-image homographies with this image are regarded as emulations of image-to-world homographies and allow computing initial estimates for the interior and exterior orientation elements. Following this initialization step, the estimates are introduced into a final self-calibrating bundle adjustment. Measures are taken to discard unsuitable images and verify object planarity. Results from practical experimentation indicate that this method may produce satisfactory results. The authors intend to incorporate the described approach into their freely available, user-friendly software tool, which relies on chess-boards, to assist non-experts in their projects with image-based approaches.
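The pair-wise homography estimation at the heart of this pipeline can be sketched with the classic Direct Linear Transform, shown here on synthetic exact correspondences and without the SIFT-matching and RANSAC stages the paper wraps around it.

```python
# Direct Linear Transform (DLT) homography from >= 4 point correspondences.
import numpy as np

def dlt_homography(src, dst):
    """Estimate the 3x3 H mapping src -> dst (homogeneous, H[2,2] = 1)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)          # null-space vector = flattened H
    return H / H[2, 2]

# Synthetic check: project points through a known H and recover it.
H_true = np.array([[1.1, 0.02, 5.0],
                   [-0.03, 0.95, -2.0],
                   [1.0e-4, 2.0e-4, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 80], [0, 80], [50, 40]], dtype=float)
hom = np.column_stack([src, np.ones(len(src))]) @ H_true.T
dst = hom[:, :2] / hom[:, 2:3]
H_est = dlt_homography(src, dst)
```

In practice the correspondences come from SIFT matches and are contaminated by outliers, which is why the paper estimates each homography inside a RANSAC loop rather than with a single least-squares solve.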

  19. Users manual for an expert system (HSPEXP) for calibration of the hydrological simulation program; Fortran

    USGS Publications Warehouse

    Lumb, A.M.; McCammon, R.B.; Kittle, J.L.

    1994-01-01

Expert system software was developed to assist less experienced modelers with calibration of a watershed model and to facilitate the interaction between the modeler and the modeling process that is not provided by mathematical optimization. A prototype was developed with artificial intelligence software tools, a knowledge engineer, and two domain experts. The manual procedures used by the domain experts were identified and the prototype was then coded by the knowledge engineer. The expert system consists of a set of hierarchical rules designed to guide the calibration of the model through a systematic evaluation of model parameters. When the prototype was completed and tested, it was rewritten for portability and operational use and was named HSPEXP. The watershed model Hydrological Simulation Program--Fortran (HSPF) is used in the expert system. This report is the users manual for HSPEXP and contains a discussion of the concepts, together with detailed steps and examples for using the software. The system has been tested on watersheds in the States of Washington and Maryland, and the system correctly identified the model parameters to be adjusted; the adjustments led to improved calibration.

  20. Automatic Astrometric and Photometric Calibration with SCAMP

    NASA Astrophysics Data System (ADS)

    Bertin, E.

    2006-07-01

Astrometric and photometric calibrations have remained the most tiresome step in the reduction of large imaging surveys. I present a new software package, SCAMP, which has been written to address this problem. SCAMP efficiently computes accurate astrometric and photometric solutions for any arbitrary sequence of FITS images in a completely automatic way. SCAMP is released under the GNU General Public Licence.

  1. Stormwater quality modelling in combined sewers: calibration and uncertainty analysis.

    PubMed

    Kanso, A; Chebbo, G; Tassin, B

    2005-01-01

Estimating the level of uncertainty in urban stormwater quality models is vital for their utilization. This paper presents the results of applying a Markov chain Monte Carlo method based on Bayesian theory to the calibration and uncertainty analysis of a stormwater quality model commonly used in available software. The tested model uses a hydrologic/hydrodynamic scheme to estimate the accumulation, erosion, and transport of pollutants on surfaces and in sewers. It was calibrated for four different initial conditions of in-sewer deposits. Calibration results showed large variability in the model's responses as a function of the initial conditions. They demonstrated that the model's predictive capacity is very low.
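The Bayesian machinery can be illustrated with a minimal random-walk Metropolis sampler on a toy one-parameter model. This is the generic algorithm only; the model, data, prior, and proposal scale below are all invented and have nothing to do with the paper's stormwater model.

```python
# Minimal random-walk Metropolis sketch (toy model and data, stdlib only).
import math
import random

random.seed(42)
data = [2.1, 1.9, 2.3, 2.0, 2.2]   # toy observations
sigma = 0.2                         # assumed measurement error

def log_post(theta):
    """Log-posterior: flat prior on (0, 10) times a Gaussian likelihood."""
    if not 0.0 < theta < 10.0:
        return -math.inf
    return -sum((y - theta) ** 2 for y in data) / (2.0 * sigma ** 2)

theta, samples = 1.0, []
for _ in range(5000):
    proposal = theta + random.gauss(0.0, 0.3)
    # Accept with probability min(1, posterior ratio).
    if math.log(random.random()) < log_post(proposal) - log_post(theta):
        theta = proposal
    samples.append(theta)

# Posterior mean after burn-in should land near the data mean (2.1).
posterior_mean = sum(samples[1000:]) / len(samples[1000:])
```

The spread of `samples` after burn-in is the parameter uncertainty estimate; in the paper's setting the chain additionally reveals how strongly the posterior depends on the assumed initial in-sewer deposits.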

  2. Calibration of the Geosar Dual Frequency Interferometric SAR

    NASA Technical Reports Server (NTRS)

Chapin, Elaine

    1999-01-01

GeoSAR is an airborne, interferometric synthetic aperture radar (InSAR) system for terrain mapping, currently under development by a consortium including NASA's Jet Propulsion Laboratory (JPL), Calgis, Inc., and the California Department of Conservation (CalDOC), with funding provided by the Topographic Engineering Center (TEC) of the U.S. Army Corps of Engineers and the Defense Advanced Research Projects Agency (DARPA). The radar simultaneously maps swaths on both sides of the aircraft at two frequencies, X-band and P-band. For the P-band system, data are collected for two across-track interferometric baselines and at the crossed polarization. The aircraft position and attitude are measured using two Honeywell Embedded GPS Inertial Navigation Units (EGI) and an Ashtech Z12 GPS receiver. The mechanical orientation and position of the antennas are actively measured using a Laser Baseline Metrology System (LBMS). In the GeoSAR motion measurement software, these data are optimally combined with data from a nearby ground station, using Ashtech PNAV software, to produce the position, orientation, and baseline information used to process the dual-frequency radar data. Proper calibration of the GeoSAR system is essential to obtaining digital elevation models (DEMs) with the required sub-meter-level planimetric and vertical accuracies. Calibration begins with the determination of the yaw and pitch biases for the two EGI units. Common range delays are determined for each mode, along with differential time and phase delays between channels. Because the antennas are measured by the LBMS, baseline calibration consists primarily of measuring a constant offset between the mechanical center and the electrical phase center of the antennas. A phase screen, an offset to the interferometric phase difference as a function of absolute phase, is applied to the interferometric data to compensate for multipath and leakage. Calibration parameters are calculated for each of the ten

  3. Liquid scintillator tiles for calorimetry

    DOE PAGES

    Amouzegar, M.; Belloni, A.; Bilki, B.; ...

    2016-11-28

Future experiments in high energy and nuclear physics may require large, inexpensive calorimeters that can continue to operate after receiving doses of 50 Mrad or more. Notably, the light output of liquid scintillators suffers little degradation under irradiation. However, many challenges must be met before liquids can be used in sampling calorimetry, especially the development of a packaging that offers sufficient efficiency and uniformity of light collection, as well as suitable mechanical properties. We present the results of a cosmic-ray and test-beam study of the light collection efficiency and uniformity of a scintillator tile based on the EJ-309 liquid scintillator, along with some preliminary results on radiation hardness.

  4. Liquid scintillator tiles for calorimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amouzegar, M.; Belloni, A.; Bilki, B.

Future experiments in high energy and nuclear physics may require large, inexpensive calorimeters that can continue to operate after receiving doses of 50 Mrad or more. Notably, the light output of liquid scintillators suffers little degradation under irradiation. However, many challenges must be met before liquids can be used in sampling calorimetry, especially the development of a packaging that offers sufficient efficiency and uniformity of light collection, as well as suitable mechanical properties. We present the results of a cosmic-ray and test-beam study of the light collection efficiency and uniformity of a scintillator tile based on the EJ-309 liquid scintillator, along with some preliminary results on radiation hardness.

  5. VS2DI: Model use, calibration, and validation

    USGS Publications Warehouse

    Healy, Richard W.; Essaid, Hedeff I.

    2012-01-01

    VS2DI is a software package for simulating water, solute, and heat transport through soils or other porous media under conditions of variable saturation. The package contains a graphical preprocessor for constructing simulations, a postprocessor for displaying simulation results, and numerical models that solve for flow and solute transport (VS2DT) and flow and heat transport (VS2DH). Flow is described by the Richards equation, and solute and heat transport are described by advection-dispersion equations; the finite-difference method is used to solve these equations. Problems can be simulated in one, two, or three (assuming radial symmetry) dimensions. This article provides an overview of calibration techniques that have been used with VS2DI; included is a detailed description of calibration procedures used in simulating the interaction between groundwater and a stream fed by drainage from agricultural fields in central Indiana. Brief descriptions of VS2DI and the various types of problems that have been addressed with the software package are also presented.

  6. AMBER instrument control software

    NASA Astrophysics Data System (ADS)

    Le Coarer, Etienne P.; Zins, Gerard; Gluck, Laurence; Duvert, Gilles; Driebe, Thomas; Ohnaka, Keiichi; Heininger, Matthias; Connot, Claus; Behrend, Jan; Dugue, Michel; Clausse, Jean Michel; Millour, Florentin

    2004-09-01

AMBER (Astronomical Multiple BEam Recombiner) is a three-aperture interferometric recombiner operating between 1 and 2.5 μm for the Very Large Telescope Interferometer (VLTI). The control software of the instrument, based on the VLT Common Software, has been written to comply with specific features of the AMBER hardware, such as the infrared detector read-out modes or the piezo stage drivers, as well as with the very specific operation modes of an interferometric instrument. In this respect, the AMBER control software was designed to ensure that all operations, from the preparation of the observations to the control/command of the instrument during the observations, would be kept as simple as possible for users and operators, opening the use of an interferometric instrument to the largest community of astronomers. Particular attention was given to internal checks and calibration procedures, both to evaluate data quality in real time and to improve the success of long-term UV-plane coverage observations.

  7. The DFMS sensor of ROSINA onboard Rosetta: A computer-assisted approach to resolve mass calibration, flux calibration, and fragmentation issues

    NASA Astrophysics Data System (ADS)

    Dhooghe, Frederik; De Keyser, Johan; Altwegg, Kathrin; Calmonte, Ursina; Fuselier, Stephen; Hässig, Myrtha; Berthelier, Jean-Jacques; Mall, Urs; Gombosi, Tamas; Fiethe, Björn

    2014-05-01

    the pixel gain and related detector aging. The software automatically corrects for these effects to calibrate the fluxes. The COPS sensor can be used for an a posteriori calibration of the fluxes. Neutral gas number densities: Neutrals are ionized in the ion source before they are transferred to the mass analyser, but during this process fragmentation may occur. Our software allows one to identify which neutrals entered the instrument, given the ion fragments that are detected. First, multiple spectra with a limited mass range are combined to provide an overview of as many ion fragments as possible. We then exploit a fragmentation database to assist in figuring out the relation between entering species and recorded fragments. Finally, using experimentally determined sensitivities, gas number densities are obtained. The instrument characterisation (experimental determination of sensitivities, fragmentation patterns for the most common neutral species, etc.) has been conducted by the consortium using an instrument copy in the University of Bern test facilities during the cruise phase of the mission.
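The fragment-to-parent inference described above can be framed as a small linear inversion: each parent neutral contributes fragments in fixed proportions, so measured fragment intensities are a pattern matrix times the parent densities. The sketch below uses hypothetical electron-impact branching ratios, not the actual ROSINA fragmentation-database values.

```python
import numpy as np

# Hypothetical fragmentation patterns (rows: fragment m/z, columns: parent
# neutrals). Illustrative numbers only, not the ROSINA database.
parents = ["H2O", "CO"]
mz      = [12, 16, 17, 18, 28]
A = np.array([
    [0.00, 0.05],   # m/z 12: C+   from CO
    [0.01, 0.02],   # m/z 16: O+   from either parent
    [0.23, 0.00],   # m/z 17: OH+  from H2O
    [0.76, 0.00],   # m/z 18: H2O+
    [0.00, 0.93],   # m/z 28: CO+
])

true_n = np.array([2.0e7, 5.0e6])   # parent number densities (cm^-3)
measured = A @ true_n               # idealized fragment count rates

# Least-squares inversion: which parent mix best explains the fragments?
est, *_ = np.linalg.lstsq(A, measured, rcond=None)
```

With noisy spectra and overlapping patterns the real problem is harder (non-negativity constraints, isotopologues), but the linear-unmixing core is the same.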

  8. The STARLINK software collection

    NASA Astrophysics Data System (ADS)

    Penny, A. J.; Wallace, P. T.; Sherman, J. C.; Terret, D. L.

    1993-12-01

    A demonstration will be given of some recent Starlink software. STARLINK is: a network of computers used by UK astronomers; a collection of programs for the calibration and analysis of astronomical data; a team of people giving hardware, software and administrative support. The Starlink Project has been in operation since 1980 to provide UK astronomers with interactive image processing and data reduction facilities. There are now Starlink computer systems at 25 UK locations, serving about 1500 registered users. The Starlink software collection now has about 25 major packages covering a wide range of astronomical data reduction and analysis techniques, as well as many smaller programs and utilities. At the core of most of the packages is a common `software environment', which provides many of the functions which applications need and offers standardized methods of structuring and accessing data. The software environment simplifies programming and support, and makes it easy to use different packages for different stages of the data reduction. Users see a consistent style, and can mix applications without hitting problems of differing data formats. The Project group coordinates the writing and distribution of this software collection, which is Unix based. Outside the UK, Starlink is used at a large number of places, which range from installations at major UK telescopes, which are Starlink-compatible and managed like Starlink sites, to individuals who run only small parts of the Starlink software collection.

  9. Calibration of 4π NaI(Tl) detectors with coincidence summing correction using new numerical procedure and ANGLE4 software

    NASA Astrophysics Data System (ADS)

    Badawi, Mohamed S.; Jovanovic, Slobodan I.; Thabet, Abouzeid A.; El-Khatib, Ahmed M.; Dlabac, Aleksandar D.; Salem, Bohaysa A.; Gouda, Mona M.; Mihaljevic, Nikola N.; Almugren, Kholud S.; Abbas, Mahmoud I.

    2017-03-01

    4π NaI(Tl) γ-ray detectors consist of a well cavity with a cylindrical cross section, enclosing the measurement geometry with a large detection angle. This leads to an exceptionally high efficiency and a significant coincidence summing effect, much larger than for a single cylindrical or coaxial detector, especially in very-low-activity measurements. In the present work, the detection effective solid angle, together with both the full-energy peak and total efficiencies of well-type detectors, was calculated by a new numerical simulation method (NSM) and by the ANGLE4 software. To obtain the coincidence summing correction factors through these methods, the coincident emission of photons was modeled mathematically, based on analytical equations and complex integrations over the radioactive volumetric sources, including the self-attenuation factor. The full-energy peak efficiencies and correction factors were measured using 152Eu; an exact adjustment of the detector efficiency curve is required, because neglecting the coincidence summing effect makes the results internally inconsistent. These phenomena arise jointly from the efficiency calibration process and the coincidence summing corrections. The full-energy peak and total efficiencies from the two methods typically agree within a 10% discrepancy. The discrepancy between the simulation, ANGLE4, and the measured full-energy peak efficiencies after correction for the coincidence summing effect did not exceed 14% on average. Therefore, this technique can be easily applied in establishing the efficiency calibration curves of well-type detectors.
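The basic relation behind such an efficiency calibration can be sketched in a few lines: the full-energy-peak efficiency is net peak counts per emitted photon, multiplied by a coincidence-summing correction factor. The numbers below are illustrative, not values from the paper.

```python
# Full-energy-peak efficiency from a 152Eu measurement, with a
# multiplicative coincidence-summing correction. Illustrative values only.
def fep_efficiency(net_counts, live_time_s, activity_bq, gamma_yield, csc):
    """csc > 1 restores counts lost to summing-out in a 4-pi well geometry."""
    emitted = activity_bq * gamma_yield * live_time_s
    return csc * net_counts / emitted

# 121.8 keV line of 152Eu: emission probability ~0.286 per decay
eff = fep_efficiency(net_counts=8.0e4, live_time_s=600.0,
                     activity_bq=1.0e3, gamma_yield=0.286, csc=1.25)
```

In a well geometry the correction factor itself depends on the full decay scheme and the total efficiency at every cascade energy, which is what the NSM and ANGLE4 computations supply.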

  10. On-orbit Performance and Calibration of the HMI Instrument

    NASA Astrophysics Data System (ADS)

    Hoeksema, J. Todd; Bush, Rock; HMI Calibration Team

    2016-10-01

    The Helioseismic and Magnetic Imager (HMI) on the Solar Dynamics Observatory (SDO) has observed the Sun almost continuously since the completion of commissioning in May 2010, returning more than 100,000,000 filtergrams from geosynchronous orbit. Diligent and exhaustive monitoring of the instrument's performance ensures that HMI functions properly and allows proper calibration of the full-disk images and processing of the HMI observables. We constantly monitor trends in temperature, pointing, mechanism behavior, and software errors. Cosmic ray contamination is detected and bad pixels are removed from each image. Routine calibration sequences and occasional special observing programs are used to measure the instrument focus, distortion, scattered light, filter profiles, throughput, and detector characteristics. That information is used to optimize instrument performance and adjust calibration of filtergrams and observables.

  11. APEX calibration facility: status and first commissioning results

    NASA Astrophysics Data System (ADS)

    Suhr, Birgit; Fries, Jochen; Gege, Peter; Schwarzer, Horst

    2006-09-01

    The paper presents the current status of the operational calibration facility that can be used for radiometric, spectral and geometric on-ground characterisation and calibration of imaging spectrometers. The European Space Agency (ESA) co-funded this establishment at DLR Oberpfaffenhofen within the framework of the hyper-spectral imaging spectrometer Airborne Prism Experiment (APEX). It was designed to fulfil the requirements for calibration of APEX, but can also be used for other imaging spectrometers. A description of the hardware set-up of the optical bench will be given. Signals from two sides can alternately be sent to the hyper-spectral sensor under investigation. From one side the spatial calibration will be done by using an off-axis collimator and six slits of different width and orientation to measure the line spread function (LSF) in the flight direction as well as across the flight direction. From the other side the spectral calibration will be performed. A monochromator provides radiation in a range from 380 nm to 13 μm with a bandwidth between 0.1 nm in the visible and 5 nm in the thermal infrared. For the relative radiometric calibration a large integrating sphere of 1.65 m diameter and an exit port size of 55 cm × 40 cm is used. The absolute radiometric calibration will be done using a small integrating sphere with 50 cm diameter that is regularly calibrated according to national standards. This paper describes the hardware components and their accuracy, and it presents the software interface for automation of the measurements.
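The radiometric part of such a calibration ultimately reduces to fitting a per-pixel gain and offset that map recorded digital numbers (DN) to known sphere radiances. The sketch below fits that linear response from a few sphere levels; the radiance and DN values are invented for illustration and are not APEX facility data.

```python
import numpy as np

# Known sphere radiance levels vs. recorded digital numbers for one pixel.
radiance = np.array([0.0, 20.0, 40.0, 60.0, 80.0])         # W m^-2 sr^-1 um^-1
dn       = np.array([50.0, 450.0, 850.0, 1250.0, 1650.0])  # sensor counts

# Fit radiance = gain * DN + offset (offset absorbs the dark signal).
gain, offset = np.polyfit(dn, radiance, 1)

def dn_to_radiance(x):
    return gain * x + offset
```

Repeating the fit for every detector element yields the relative calibration; tying one sphere to national standards, as described above, fixes the absolute scale.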

  12. Ionic liquids. Combination of combustion calorimetry with high-level quantum chemical calculations for deriving vaporization enthalpies.

    PubMed

    Emel'yanenko, Vladimir N; Verevkin, Sergey P; Heintz, Andreas; Schick, Christoph

    2008-07-10

    In this work, the molar enthalpies of formation of the ionic liquids [C2MIM][NO3] and [C4MIM][NO3] were measured by means of combustion calorimetry. The molar enthalpy of fusion of [C2MIM][NO3] was measured using differential scanning calorimetry. Ab initio calculations of the enthalpy of formation in the gaseous phase have been performed for the ionic species using the G3MP2 theory. We have used a combination of traditional combustion calorimetry with modern high-level ab initio calculations in order to obtain the molar enthalpies of vaporization of a series of the ionic liquids under study.
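The thermodynamic combination used here is a simple Hess-law step: the vaporization enthalpy is the gas-phase formation enthalpy (from G3MP2) minus the liquid-phase formation enthalpy (from combustion calorimetry). The sketch below encodes that bookkeeping; the numbers are placeholders, not the paper's measured values.

```python
# Hess-law combination: Delta_vap H = Delta_f H(gas) - Delta_f H(liquid).
def vaporization_enthalpy(dfH_gas_kJmol, dfH_liquid_kJmol):
    return dfH_gas_kJmol - dfH_liquid_kJmol

# Placeholder values in kJ/mol (not the paper's results for [C2MIM][NO3]):
dvapH = vaporization_enthalpy(dfH_gas_kJmol=40.0, dfH_liquid_kJmol=-120.0)
```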

  13. Hand-Eye Calibration of Robonaut

    NASA Technical Reports Server (NTRS)

    Nickels, Kevin; Huber, Eric

    2004-01-01

    NASA's Human Space Flight program depends heavily on Extra-Vehicular Activities (EVA's) performed by human astronauts. EVA is a high risk environment that requires extensive training and ground support. In collaboration with the Defense Advanced Research Projects Agency (DARPA), NASA is conducting a ground development project to produce a robotic astronaut's assistant, called Robonaut, that could help reduce human EVA time and workload. The project described in this paper designed and implemented a hand-eye calibration scheme for Robonaut, Unit A. The intent of this calibration scheme is to improve hand-eye coordination of the robot. The basic approach is to use kinematic and stereo vision measurements, namely the joint angles self-reported by the right arm and 3-D positions of a calibration fixture as measured by vision, to estimate the transformation from Robonaut's base coordinate system to its hand coordinate system and to its vision coordinate system. Two methods of gathering data sets have been developed, along with software to support each. In the first, the system observes the robotic arm and neck angles as the robot is operated under external control, measures the 3-D position of a calibration fixture using Robonaut's stereo cameras, and logs these data. In the second, the system drives the arm and neck through a set of pre-recorded configurations, and data are again logged. Two variants of the calibration scheme have been developed. The full calibration scheme is a batch procedure that estimates all relevant kinematic parameters of the arm and neck of the robot. The daily calibration scheme estimates only joint offsets for each rotational joint on the arm and neck, which are assumed to change from day to day. The schemes have been designed to be automatic and easy to use so that the robot can be fully recalibrated when needed, such as after repair, upgrade, etc., and can be partially recalibrated after each power cycle. 
The scheme has been implemented on
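A standard building block for this kind of estimation, relating points predicted by the kinematics to the same points measured by stereo vision, is the least-squares rigid transform (Kabsch/SVD). The sketch below is that generic algorithm on synthetic data, not the Robonaut calibration code itself.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ~ R @ P + t,
    for 3xN point sets P, Q (Kabsch algorithm via SVD)."""
    cP = P.mean(axis=1, keepdims=True)
    cQ = Q.mean(axis=1, keepdims=True)
    H = (P - cP) @ (Q - cQ).T
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# synthetic "fixture corners": base-frame prediction vs. camera measurement
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 8))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[0.1], [0.2], [0.3]])
Q = R_true @ P + t_true
R, t = rigid_transform(P, Q)
```

The full batch scheme described in the abstract estimates many kinematic parameters jointly, but each iteration bottoms out in point-correspondence fits of this flavor.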

  14. SU-E-T-749: Thorough Calibration of MOSFET Dosimeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plenkovich, D; Thomas, J

    Purpose: To improve the accuracy of the MOSFET calibration procedure by performing the measurement several times and calculating the average value of the calibration factor for various photon and electron energies. Methods: The output of three photon and six electron beams of Varian Trilogy linear accelerator SN 5878 was calibrated. Five reinforced standard-sensitivity MOSFET dosimeters were placed in the calibration jig and connected to the Reader Module. Seven centimeters of Virtual Water was used as the backscatter material. The MOSFET dosimeters were covered with 1.5 cm thick bolus for the regular and SRS 6 MV beams, 3 cm bolus for the 15 MV beam, 1.5 cm bolus for the 6 MeV electron beam, and 2 cm bolus for the electron energies of 9, 12, 15, 18, and 22 MeV. The dosimeters were exposed to 100 MU, and the calibration factor was determined using the mobileMOSFET software. To improve the accuracy of calibration, this procedure was repeated ten times and the calibration factors were averaged. Results: As the number of calibrations increased, the variability of the calibration factors of different dosimeters decreased. After ten calibrations, the calibration factors for all five dosimeters were within 1% of one another for all energies, except 6 MV SRS photons and 6 MeV electrons, for which the variability was 2%. Conclusions: The described process results in calibration factors which are almost independent of modality or energy. Once calibrated, the dosimeters may be used for in-vivo dosimetry or for daily verification of the beam output. Measurement of the radiation dose under bolus and of scatter to the eye are examples of frequent uses of calibrated MOSFET dosimeters. The calibration factor determined for full build-up is used under these circumstances. To the best of our knowledge, such a thorough procedure for calibrating MOSFET dosimeters has not been reported previously. Best Medical Canada provided MOSFET dosimeters for this project.
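The averaging step at the heart of the method is simple: each exposure yields a threshold-voltage shift in mV, and the calibration factor is the mean mV-per-cGy over the repeats. The sketch below uses invented readings, not the abstract's data.

```python
# Mean MOSFET calibration factor over repeated exposures. Each reading is
# the threshold-voltage shift (mV) after a fixed delivered dose; values
# below are illustrative, not measured data.
def calibration_factor(mv_shifts, dose_cGy=100.0):
    return sum(mv_shifts) / (len(mv_shifts) * dose_cGy)

readings = [271.0, 268.5, 270.2, 269.8, 270.6,
            271.3, 269.1, 270.0, 270.9, 268.7]
cf = calibration_factor(readings)   # mV per cGy
```

Averaging N repeats shrinks the random component of the calibration-factor uncertainty roughly as 1/sqrt(N), which is why ten repetitions pulled the five dosimeters to within 1% of one another.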

  15. Java-Library for the Access, Storage and Editing of Calibration Metadata of Optical Sensors

    NASA Astrophysics Data System (ADS)

    Firlej, M.; Kresse, W.

    2016-06-01

    The standardization of the calibration of optical sensors in photogrammetry and remote sensing has been discussed for more than a decade. Projects of the German DGPF and the European EuroSDR led to the abstract International Technical Specification ISO/TS 19159-1:2014 "Calibration and validation of remote sensing imagery sensors and data - Part 1: Optical sensors". This article presents the first software interface for read and write access to all metadata elements standardized in the ISO/TS 19159-1. This interface is based on an XML schema that was automatically derived by ShapeChange from the UML model of the Specification. The software interface serves two cases. First, the more than 300 standardized metadata elements are stored individually according to the XML schema. Secondly, the camera manufacturers use many administrative data that are not part of the ISO/TS 19159-1. The new software interface provides a mechanism for input, storage, editing, and output of both types of data. Finally, an output channel towards a usual calibration protocol is provided. The interface is written in Java. The article also addresses observations made when analysing the ISO/TS 19159-1 and compiles a list of proposals for maturing the document, i.e. for an updated version of the Specification.
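The read/write pattern such an interface provides can be sketched compactly: calibration metadata elements serialized to and parsed from XML according to a schema. The element names below are hypothetical stand-ins, not actual ISO/TS 19159-1 identifiers, and the sketch uses Python's standard library rather than the Java implementation described in the article.

```python
import xml.etree.ElementTree as ET

def to_xml(elements):
    """Serialize a dict of metadata elements to an XML document string."""
    root = ET.Element("calibrationMetadata")
    for name, value in elements.items():
        ET.SubElement(root, name).text = str(value)
    return ET.tostring(root, encoding="unicode")

def from_xml(xml_text):
    """Parse the document back into a dict of element name -> text value."""
    root = ET.fromstring(xml_text)
    return {child.tag: child.text for child in root}

# hypothetical element names, for illustration only
doc = to_xml({"focalLength": "120.7", "principalPointX": "0.012"})
meta = from_xml(doc)
```

A schema-driven interface like the one described additionally validates each element against the types and cardinalities derived from the UML model, which this sketch omits.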

  16. An Integrated-Circuit Temperature Sensor for Calorimetry and Differential Temperature Measurement.

    ERIC Educational Resources Information Center

    Muyskens, Mark A.

    1997-01-01

    Describes the application of an integrated-circuit (IC) chip which provides an easy-to-use, inexpensive, rugged, computer-interfaceable temperature sensor for calorimetry and differential temperature measurement. Discusses its design and advantages. (JRH)

  17. Reference software implementation for GIFTS ground data processing

    NASA Astrophysics Data System (ADS)

    Garcia, R. K.; Howell, H. B.; Knuteson, R. O.; Martin, G. D.; Olson, E. R.; Smuga-Otto, M. J.

    2006-08-01

    Future satellite weather instruments such as high spectral resolution imaging interferometers pose a challenge to the atmospheric science and software development communities due to the immense data volumes they will generate. An open-source, scalable reference software implementation demonstrating the calibration of radiance products from an imaging interferometer, the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), is presented. This paper covers essential design principles laid out in summary system diagrams, lessons learned during implementation and preliminary test results from the GIFTS Information Processing System (GIPS) prototype.

  18. Integrated calibration sphere and calibration step fixture for improved coordinate measurement machine calibration

    DOEpatents

    Clifford, Harry J [Los Alamos, NM

    2011-03-22

    A method and apparatus for mounting a calibration sphere to a calibration fixture for Coordinate Measurement Machine (CMM) calibration and qualification is described, decreasing the time required for such qualification, thus allowing the CMM to be used more productively. A number of embodiments are disclosed that allow for new and retrofit manufacture to perform as integrated calibration sphere and calibration fixture devices. This invention renders unnecessary the removal of a calibration sphere prior to CMM measurement of calibration features on calibration fixtures, thereby greatly reducing the time spent qualifying a CMM.

  19. Automatic calibration and signal switching system for the particle beam fusion research data acquisition facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyer, W.B.

    1979-09-01

    This report describes both the hardware and software components of an automatic calibration and signal switching system (Autocal) for the data acquisition system for the Sandia particle beam fusion research accelerators Hydra, Proto I, and Proto II. The Autocal hardware consists of off-the-shelf commercial equipment. The various hardware components, special modifications and overall system configuration are described. Special software has been developed to support the Autocal hardware. Software operation and maintenance are described.

  20. Estimating Software-Development Costs With Greater Accuracy

    NASA Technical Reports Server (NTRS)

    Baker, Dan; Hihn, Jairus; Lum, Karen

    2008-01-01

    COCOMOST is a computer program for use in estimating software development costs. The goal in the development of COCOMOST was to increase estimation accuracy in three ways: (1) develop a set of sensitivity software tools that return not only estimates of costs but also the estimation error; (2) using the sensitivity software tools, precisely define the quantities of data needed to adequately tune cost estimation models; and (3) build a repository of software-cost-estimation information that NASA managers can retrieve to improve the estimates of costs of developing software for their project. COCOMOST implements a methodology, called '2cee', in which a unique combination of well-known pre-existing data-mining and software-development- effort-estimation techniques are used to increase the accuracy of estimates. COCOMOST utilizes multiple models to analyze historical data pertaining to software-development projects and performs an exhaustive data-mining search over the space of model parameters to improve the performances of effort-estimation models. Thus, it is possible to both calibrate and generate estimates at the same time. COCOMOST is written in the C language for execution in the UNIX operating system.

  1. Calibration of LOFAR data on the cloud

    NASA Astrophysics Data System (ADS)

    Sabater, J.; Sánchez-Expósito, S.; Best, P.; Garrido, J.; Verdes-Montenegro, L.; Lezzi, D.

    2017-04-01

    New scientific instruments are starting to generate an unprecedented amount of data. The Low Frequency Array (LOFAR), one of the Square Kilometre Array (SKA) pathfinders, is already producing data on a petabyte scale. The calibration of these data presents a huge challenge for final users: (a) extensive storage and computing resources are required; (b) the installation and maintenance of the software required for the processing is not trivial; and (c) the requirements of calibration pipelines, which are experimental and under development, are quickly evolving. After encountering some limitations in classical infrastructures like dedicated clusters, we investigated the viability of cloud infrastructures as a solution. We found that the installation and operation of LOFAR data calibration pipelines is not only possible, but can also be efficient in cloud infrastructures. The main advantages were: (1) the ease of software installation and maintenance, and the availability of standard APIs and tools, widely used in the industry; this reduces the requirement for significant manual intervention, which can have a highly negative impact in some infrastructures; (2) the flexibility to adapt the infrastructure to the needs of the problem, especially as those demands change over time; (3) the on-demand consumption of (shared) resources. We found that a critical factor (also in other infrastructures) is the availability of scratch storage areas of an appropriate size. We found no significant impediments associated with the speed of data transfer, the use of virtualization, the use of external block storage, or the memory available (provided a minimum threshold is reached). Finally, we considered the cost-effectiveness of a commercial cloud like Amazon Web Services. While a cloud solution is more expensive than the operation of a large, fully-utilized cluster completely dedicated to LOFAR data reduction, we found that its costs are competitive if the number of datasets to be

  2. Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?

    NASA Technical Reports Server (NTRS)

    Lum, Karen; Hihn, Jairus; Menzies, Tim

    2006-01-01

    While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance problem inherent in cost data and by including far more effort multipliers than the data supports. Building optimal models requires that a wider range of models be considered while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem that is a leading cause of cost model brittleness or instability.
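Local calibration of a regression-based effort model is typically an ordinary least-squares fit in log space of a power law, effort = a * size^b. The sketch below shows that core step on synthetic, noise-free project data (real datasets are small and noisy, which is exactly the large-variance problem the paper documents); the coefficients are invented for illustration.

```python
import numpy as np

# Synthetic project data: effort (person-months) vs. size (KSLOC),
# generated from a known power law so the fit is easy to check.
ksloc  = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
effort = 3.0 * ksloc**1.1

# COCOMO-style local calibration: fit log(effort) = b*log(size) + log(a).
b, log_a = np.polyfit(np.log(ksloc), np.log(effort), 1)
a = np.exp(log_a)
```

With realistic noise and a dozen effort multipliers, the same fit can wander far from the generating coefficients, which is why the paper argues for pruning variables and validating with multiple criteria.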

  3. Offline software for the DAMPE experiment

    NASA Astrophysics Data System (ADS)

    Wang, Chi; Liu, Dong; Wei, Yifeng; Zhang, Zhiyong; Zhang, Yunlong; Wang, Xiaolian; Xu, Zizong; Huang, Guangshun; Tykhonov, Andrii; Wu, Xin; Zang, Jingjing; Liu, Yang; Jiang, Wei; Wen, Sicheng; Wu, Jian; Chang, Jin

    2017-10-01

    A software system has been developed for the DArk Matter Particle Explorer (DAMPE) mission, a satellite-based experiment. The DAMPE software is mainly written in C++ and steered using a Python script. This article presents an overview of the DAMPE offline software, including the major architecture design and specific implementation for simulation, calibration and reconstruction. The whole system has been successfully applied to DAMPE data analysis. Some results obtained using the system, from simulation and beam test experiments, are presented. Supported by Chinese 973 Program (2010CB833002), the Strategic Priority Research Program on Space Science of the Chinese Academy of Science (CAS) (XDA04040202-4), the Joint Research Fund in Astronomy under cooperative agreement between the National Natural Science Foundation of China (NSFC) and CAS (U1531126) and 100 Talents Program of the Chinese Academy of Science

  4. On constraining pilot point calibration with regularization in PEST

    USGS Publications Warehouse

    Fienen, M.N.; Muffels, C.T.; Hunt, R.J.

    2009-01-01

    Ground water model calibration has made great advances in recent years with practical tools such as PEST being instrumental for making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.
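The mathematical core of Tikhonov regularization can be shown in a few lines: minimize the data misfit plus a penalty pulling parameters toward a preferred value, which tames the extra flexibility of pilot points. This is a generic toy system illustrating the idea, not PEST's actual implementation or control variables.

```python
import numpy as np

def tikhonov(A, b, x0, lam):
    """Minimize ||A x - b||^2 + lam^2 * ||x - x0||^2 via normal equations.
    x0 is the preferred (e.g. homogeneous) parameter value."""
    n = A.shape[1]
    lhs = A.T @ A + lam**2 * np.eye(n)
    rhs = A.T @ b + lam**2 * x0
    return np.linalg.solve(lhs, rhs)

# One observation constraining three "pilot point" parameters: hopelessly
# underdetermined without regularization.
A  = np.array([[1.0, 1.0, 0.0]])
b  = np.array([2.0])
x0 = np.zeros(3)                 # preferred value: no deviation from x0
est = tikhonov(A, b, x0, lam=0.1)
```

The uninformed third parameter stays at its preferred value, and the two informed ones share the misfit symmetrically; without the penalty term the solve would be singular, which is the pilot-point pathology the regularization prevents.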

  5. Installation and calibration of Kayzero-assisted NAA in three Central European countries via a Copernicus project.

    PubMed

    De Corte, F; van Sluijs, R; Simonits, A; Kucera, J; Smodis, B; Byrne, A R; De Wispelaere, A; Bossus, D; Frána, J; Horák, Z; Jaćimović, R

    2001-09-01

    An account is given of the installation and calibration of k0-based NAA--assisted by the DSM Kayzero/Solcoi software package--at the KFKI-AEKI, Budapest, the NPI, Rez and the IJS, Ljubljana. Not only the calibration of the Ge detectors and the irradiation facilities is discussed, but also other important topics such as gamma-spectrometric hardware and software, QC/QA of the IRMM-530 Al-Au flux monitor and the upgrade of the Kayzero/Solcoi code. The work was performed in the framework of a European Copernicus JRP, coordinated by the Laboratory of Analytical Chemistry, Gent, with DSM Research, Geleen, as the industrial partner.

  6. Penn State University ground software support for X-ray missions.

    NASA Astrophysics Data System (ADS)

    Townsley, L. K.; Nousek, J. A.; Corbet, R. H. D.

    1995-03-01

    The X-ray group at Penn State is charged with two software development efforts in support of X-ray satellite missions. As part of the ACIS instrument team for AXAF, the authors are developing part of the ground software to support the instrument's calibration. They are also designing a translation program for Ginga data, to change it from the non-standard FRF format, which closely parallels the original telemetry format, to FITS.

  7. Identifying Hydrated Salts Using Simultaneous Thermogravimetric Analysis and Differential Scanning Calorimetry

    ERIC Educational Resources Information Center

    Harris, Jerry D.; Rusch, Aaron W.

    2013-01-01

    simultaneous thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC) to characterize colorless, hydrated salts with anhydrous melting points less than 1100 degrees C. The experiment could be used to supplement the lecture discussing gravimetric techniques. It is…

  8. Pre-Launch Algorithm and Data Format for the Level 1 Calibration Products for the EOS AM-1 Moderate Resolution Imaging Spectroradiometer (MODIS)

    NASA Technical Reports Server (NTRS)

    Guenther, Bruce W.; Godden, Gerald D.; Xiong, Xiao-Xiong; Knight, Edward J.; Qiu, Shi-Yue; Montgomery, Harry; Hopkins, M. M.; Khayat, Mohammad G.; Hao, Zhi-Dong; Smith, David E. (Technical Monitor)

    2000-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) radiometric calibration product is described for the thermal emissive and the reflective solar bands. Specific sensor design characteristics are identified to assist in understanding how the calibration algorithm software product is designed. The reflected solar band software products of radiance and reflectance factor both are described. The product file format is summarized and the MODIS Characterization Support Team (MCST) Homepage location for the current file format is provided.

  9. Medical-grade Sterilizable Target for Fluid-immersed Fetoscope Optical Distortion Calibration.

    PubMed

    Nikitichev, Daniil I; Shakir, Dzhoshkun I; Chadebecq, François; Tella, Marcel; Deprest, Jan; Stoyanov, Danail; Ourselin, Sébastien; Vercauteren, Tom

    2017-02-23

    We have developed a calibration target for use with fluid-immersed endoscopes within the context of the GIFT-Surg (Guided Instrumentation for Fetal Therapy and Surgery) project. One of the aims of this project is to engineer novel, real-time image processing methods for intra-operative use in the treatment of congenital birth defects, such as spina bifida and the twin-to-twin transfusion syndrome. The developed target allows for the sterility-preserving optical distortion calibration of endoscopes within a few minutes. Good optical distortion calibration and compensation are important for mitigating undesirable effects like radial distortions, which not only hamper accurate imaging using existing endoscopic technology during fetal surgery, but also make acquired images less suitable for potentially very useful image computing applications, like real-time mosaicing. This paper proposes a novel fabrication method to create an affordable, sterilizable calibration target suitable for use in a clinical setup. This method involves etching a calibration pattern by laser cutting a sandblasted stainless steel sheet. This target was validated using the camera calibration module provided by OpenCV, a state-of-the-art software library popular in the computer vision community.
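The radial distortion that such a calibration estimates follows the standard polynomial lens model, distorted = undistorted * (1 + k1*r² + k2*r⁴) in normalized image coordinates. The sketch below applies that model and inverts it with a fixed-point iteration; the coefficients are made up for illustration, and this is a NumPy toy of the model rather than the OpenCV calibration pipeline the paper uses.

```python
import numpy as np

def distort(pts, k1, k2):
    """Apply the two-coefficient radial distortion model to Nx2 points."""
    r2 = (pts**2).sum(axis=1, keepdims=True)
    return pts * (1.0 + k1 * r2 + k2 * r2**2)

def undistort(pts, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration (a standard
    approach for mild distortion)."""
    und = pts.copy()
    for _ in range(iters):
        r2 = (und**2).sum(axis=1, keepdims=True)
        und = pts / (1.0 + k1 * r2 + k2 * r2**2)
    return und

grid = np.array([[0.1, 0.2], [-0.3, 0.4], [0.5, -0.5]])   # normalized coords
d = distort(grid, k1=-0.2, k2=0.05)
u = undistort(d, k1=-0.2, k2=0.05)
```

Calibration runs this in reverse: given imaged grid points from a target like the one described, it solves for k1, k2 (and the camera intrinsics) so that the undistorted points land on a regular grid.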

  10. Medical-grade Sterilizable Target for Fluid-immersed Fetoscope Optical Distortion Calibration

    PubMed Central

    Chadebecq, François; Tella, Marcel; Deprest, Jan; Stoyanov, Danail; Ourselin, Sébastien; Vercauteren, Tom

    2017-01-01

    We have developed a calibration target for use with fluid-immersed endoscopes within the context of the GIFT-Surg (Guided Instrumentation for Fetal Therapy and Surgery) project. One of the aims of this project is to engineer novel, real-time image processing methods for intra-operative use in the treatment of congenital birth defects, such as spina bifida and the twin-to-twin transfusion syndrome. The developed target allows for the sterility-preserving optical distortion calibration of endoscopes within a few minutes. Good optical distortion calibration and compensation are important for mitigating undesirable effects like radial distortions, which not only hamper accurate imaging using existing endoscopic technology during fetal surgery, but also make acquired images less suitable for potentially very useful image computing applications, like real-time mosaicing. This paper proposes a novel fabrication method to create an affordable, sterilizable calibration target suitable for use in a clinical setup. This method involves etching a calibration pattern by laser cutting a sandblasted stainless steel sheet. This target was validated using the camera calibration module provided by OpenCV, a state-of-the-art software library popular in the computer vision community. PMID:28287588

  11. Laser Calorimetry Spectroscopy for ppm-level Dissolved Gas Detection and Analysis

    PubMed Central

    K. S., Nagapriya; Sinha, Shashank; R., Prashanth; Poonacha, Samhitha; Chaudhry, Gunaranjan; Bhattacharya, Anandaroop; Choudhury, Niloy; Mahalik, Saroj; Maity, Sandip

    2017-01-01

    In this paper we report a newly developed technique – laser calorimetry spectroscopy (LCS), which is a combination of laser absorption spectroscopy and calorimetry - for the detection of gases dissolved in liquids. The technique involves determination of concentration of a dissolved gas by irradiating the liquid with light of a wavelength where the gas absorbs, and measuring the temperature change caused by the absorbance. Conventionally, detection of dissolved gases with sufficient sensitivity and specificity was done by first extracting the gases from the liquid and then analyzing the gases using techniques such as gas chromatography. Using LCS, we have been able to detect ppm levels of dissolved gases without extracting them from the liquid. In this paper, we show the detection of dissolved acetylene in transformer oil in the mid infrared (MIR) wavelength (3021 nm) region. PMID:28218304
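The physics linking absorbance to the calorimetric signal can be put in two lines: Beer-Lambert gives the fraction of laser power absorbed by the dissolved gas, and the sample's heat capacity converts the absorbed energy into a temperature rise. All numbers below are rough illustrative values, not the paper's experimental parameters.

```python
# Back-of-envelope laser calorimetry spectroscopy signal estimate.
def absorbed_power(p_in_w, eps_l_per_mol_cm, path_cm, conc_mol_l):
    """Beer-Lambert: absorbed power = P_in * (1 - 10^-A), A = eps*l*c."""
    a = eps_l_per_mol_cm * path_cm * conc_mol_l
    return p_in_w * (1.0 - 10.0 ** (-a))

def temp_rise_k(p_abs_w, t_s, mass_g, cp_j_per_gk):
    """Temperature rise from steady heating, ignoring losses."""
    return p_abs_w * t_s / (mass_g * cp_j_per_gk)

# ppm-level analyte: tiny absorbance, hence a milli-kelvin-scale signal
p_abs = absorbed_power(p_in_w=0.1, eps_l_per_mol_cm=50.0,
                       path_cm=5.0, conc_mol_l=1e-5)
dT = temp_rise_k(p_abs, t_s=60.0, mass_g=10.0, cp_j_per_gk=1.8)
```

The milli-kelvin-scale result shows why the technique hinges on sensitive temperature measurement and on choosing a wavelength where the target gas absorbs strongly and the solvent does not.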

  12. Laser Calorimetry Spectroscopy for ppm-level Dissolved Gas Detection and Analysis.

    PubMed

    K S, Nagapriya; Sinha, Shashank; R, Prashanth; Poonacha, Samhitha; Chaudhry, Gunaranjan; Bhattacharya, Anandaroop; Choudhury, Niloy; Mahalik, Saroj; Maity, Sandip

    2017-02-20

    In this paper we report a newly developed technique, laser calorimetry spectroscopy (LCS), a combination of laser absorption spectroscopy and calorimetry, for the detection of gases dissolved in liquids. The technique involves determining the concentration of a dissolved gas by irradiating the liquid with light at a wavelength where the gas absorbs, and measuring the temperature change caused by the absorption. Conventionally, detection of dissolved gases with sufficient sensitivity and specificity was done by first extracting the gases from the liquid and then analyzing them using techniques such as gas chromatography. Using LCS, we have been able to detect ppm levels of dissolved gases without extracting them from the liquid. In this paper, we show the detection of dissolved acetylene in transformer oil in the mid-infrared (MIR) wavelength (3021 nm) region.

  13. Parallelism between gradient temperature Raman spectroscopy and differential scanning calorimetry results

    USDA-ARS?s Scientific Manuscript database

    Temperature dependent Raman spectroscopy (TDR) applies the temperature gradients utilized in differential scanning calorimetry (DSC) to Raman spectroscopy, providing a straightforward technique to identify molecular rearrangements that occur just prior to phase transitions. Herein we apply TDR and D...

  14. CrossTalk: The Journal of Defense Software Engineering. Volume 20, Number 6, June 2007

    DTIC Science & Technology

    2007-06-01

    California. He has co-authored the book Software Cost Estimation With COCOMO II with Barry Boehm and others. Clark helped define the COCOMO II model...Software Engineering at the University of Southern California. She worked with Barry Boehm and Chris Abts to develop and calibrate a cost-estimation...2003/02/ schorsch.html>. 2. See “Software Engineering, A Practitioners Approach” by Roger Pressman for a good description of coupling, cohesion

  15. Laser-Shock Experiments: Calorimetry Measurements to TPa Pressures

    NASA Astrophysics Data System (ADS)

    Jeanloz, R.

    2012-12-01

    Laser-driven shock experiments are more like calorimetry measurements, characterized by determinations of Hugoniot temperature (TH) as a function of shock velocity (US), rather than the equation-of-state measurements afforded by mechanical-impact experiments. This is because particle velocity (up) is often not accessible to direct measurement in laser-shock experiments, so must be inferred with reference to a material having a well-determined, independently calibrated Hugoniot equation of state (up is obtained from the impact velocity in traditional shock experiments, and the combination of US and up yields the pressure-density equation of state for the sample). Application of a Mie-Grüneisen model shows that the isochoric specific heat for a given phase is CV = (US - c0)^2 / {s^2 US (dTH/dUS) + γ0 c0 s (TH/US)}, with US = c0 + s up, where γ0 is the zero-pressure Grüneisen parameter (γ/V = constant is assumed here). This result is a generalization to TH-US variables of the Walsh and Christian (1955) formula for the temperature rise along the Hugoniot of a given phase (identified here with a US - up relation that is locally linear); it can be analytically integrated to give TH(US) in terms of an average value of CV, if no phase transition takes place. Analysis of the TH-US slopes obtained from laser-shock measurements on MgO yields specific-heat values ranging from 1.02 (± 0.05) kJ/kg/K at 320-345 GPa and TH = 7700-9000 K to 1.50 (± 0.05) kJ/kg/K at 350-380 GPa and TH = 8700-9500 K. A fit to the absolute values of TH(US) in this pressure-temperature range gives CV = 1.26 (± 0.10) kJ/kg/K, in good accord with the Dulong-Petit value CV = 1.24 kJ/kg/K.
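
    The specific-heat expression in the abstract can be coded directly. The check below is purely an algebraic self-consistency exercise; the numerical values are arbitrary placeholders, not the MgO Hugoniot fit reported above.

```python
def cv_hugoniot(US, dTH_dUS, TH, c0, s, gamma0):
    # CV = (US - c0)^2 / [ s^2 * US * (dTH/dUS) + gamma0 * c0 * s * (TH/US) ]
    # with the locally linear Hugoniot US = c0 + s * up assumed
    return (US - c0) ** 2 / (s ** 2 * US * dTH_dUS + gamma0 * c0 * s * TH / US)
```

Given a measured TH-US slope at a point on the Hugoniot, this returns the isochoric specific heat for the phase, which is how the MgO values quoted in the abstract were obtained.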

  16. The Philosophy and Feasibility of Dual Readout Calorimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hauptman, John

    2006-10-27

    I will discuss the general physical ideas behind dual-readout calorimetry, their implementation in DREAM (Dual REAdout Module) with exact separation of scintillation and Cerenkov light, implementation with mixed light in DREAM fibers, anticipated implementation in PbWO4 crystals with applications to the 4th Concept detector and to CMS, use in high energy gamma-ray and cosmic ray astrophysics with Cerenkov and N2 fluorescent light, and implementation in the 4th Concept detector for muon identification.

  17. AN EVALUATION OF FIVE COMMERCIAL IMMUNOASSAY DATA ANALYSIS SOFTWARE SYSTEMS

    EPA Science Inventory

    An evaluation of five commercial software systems used for immunoassay data analysis revealed numerous deficiencies. Often, the utility of statistical output was compromised by poor documentation. Several data sets were run through each system using a four-parameter calibration f...

  18. Approaches in highly parameterized inversion: TSPROC, a general time-series processor to assist in model calibration and result summarization

    USGS Publications Warehouse

    Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.

    2012-01-01

    The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.

  19. Obtaining continuous BrAC/BAC estimates in the field: A hybrid system integrating transdermal alcohol biosensor, Intellidrink smartphone app, and BrAC Estimator software tools.

    PubMed

    Luczak, Susan E; Hawkins, Ashley L; Dai, Zheng; Wichmann, Raphael; Wang, Chunming; Rosen, I Gary

    2018-08-01

    Biosensors have been developed to measure transdermal alcohol concentration (TAC), but converting TAC into interpretable indices of blood/breath alcohol concentration (BAC/BrAC) is difficult because of variations that occur in TAC across individuals, drinking episodes, and devices. We have developed mathematical models and the BrAC Estimator software for calibrating and inverting TAC into quantifiable BrAC estimates (eBrAC). The calibration protocol to determine the individualized parameters for a specific individual wearing a specific device requires a drinking session in which BrAC and TAC measurements are obtained simultaneously. This calibration protocol was originally conducted in the laboratory with breath analyzers used to produce the BrAC data. Here we develop and test an alternative calibration protocol using drinking diary data collected in the field with the smartphone app Intellidrink to produce the BrAC calibration data. We compared BrAC Estimator software results for 11 drinking episodes collected by an expert user when using Intellidrink versus breath analyzer measurements as BrAC calibration data. Inversion-phase results indicated that the Intellidrink calibration protocol produced eBrAC curves similar to those from the breath analyzer calibration protocol, matching peak eBrAC to within 0.0003%, time of peak eBrAC to within 18 min, and area under the eBrAC curve to within 0.025% alcohol-hours. This study provides evidence that drinking diary data can be used in place of breath analyzer data in the BrAC Estimator software calibration procedure, which can reduce participant and researcher burden and expand the potential software user pool beyond researchers studying participants who can drink in the laboratory. Copyright © 2017. Published by Elsevier Ltd.

  20. Identification and quantitation of semi-crystalline microplastics using image analysis and differential scanning calorimetry.

    PubMed

    Rodríguez Chialanza, Mauricio; Sierra, Ignacio; Pérez Parada, Andrés; Fornaro, Laura

    2018-06-01

    There are several techniques used to analyze microplastics. These are often based on a combination of visual and spectroscopic techniques. Here we introduce an alternative workflow for identification and mass quantitation through a combination of optical microscopy with image analysis (IA) and differential scanning calorimetry (DSC). We studied four synthetic polymers with environmental concern: low and high density polyethylene (LDPE and HDPE, respectively), polypropylene (PP), and polyethylene terephthalate (PET). Selected experiments were conducted to investigate (i) particle characterization and counting procedures based on image analysis with open-source software, (ii) chemical identification of microplastics based on DSC signal processing, (iii) dependence of particle size on DSC signal, and (iv) quantitation of microplastics mass based on DSC signal. We describe the potential and limitations of these techniques to increase reliability for microplastic analysis. Particle size demonstrated to have particular incidence in the qualitative and quantitative performance of DSC signals. Both, identification (based on characteristic onset temperature) and mass quantitation (based on heat flow) showed to be affected by particle size. As a result, a proper sample treatment which includes sieving of suspended particles is particularly required for this analytical approach.

  1. Design and development of an ultrasound calibration phantom and system

    NASA Astrophysics Data System (ADS)

    Cheng, Alexis; Ackerman, Martin K.; Chirikjian, Gregory S.; Boctor, Emad M.

    2014-03-01

    Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages such as ease of use, real-time image acquisition, and no ionizing radiation, ultrasound is a common medical imaging modality used in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid body transformation between a tracked marker attached to the ultrasound transducer and the ultrasound image. A phantom or model with known geometry is also required. In this work, we design and test an ultrasound calibration phantom and software. The two main considerations in this work are utilizing our knowledge of ultrasound physics to design the phantom and delivering an easy to use calibration process to the user. We explore the use of a three-dimensional printer to create the phantom in its entirety without need for user assembly. We have also developed software to automatically segment the three-dimensional printed rods from the ultrasound image by leveraging knowledge about the shape and scale of the phantom. In this work, we present preliminary results from using this phantom to perform ultrasound calibration. To test the efficacy of our method, we match the projection of the points segmented from the image to the known model and calculate a sum squared difference between each point for several combinations of motion generation and filtering methods. The best performing combination of motion and filtering techniques had an error of 1.56 mm and a standard deviation of 1.02 mm.
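
    The step described above, matching segmented image points to the known phantom model and scoring the residual, can be sketched with a standard least-squares rigid registration (the Kabsch/SVD method). This is a generic sketch under the assumption of known point correspondences, not the authors' actual pipeline.

```python
import numpy as np

def fit_rigid(model_pts, image_pts):
    # Least-squares rigid transform (Kabsch): find R, t minimizing
    # sum || R m_i + t - p_i ||^2 over corresponding 3-D points
    mc, pc = model_pts.mean(axis=0), image_pts.mean(axis=0)
    H = (model_pts - mc).T @ (image_pts - pc)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = pc - R @ mc
    return R, t

def sum_squared_error(model_pts, image_pts, R, t):
    # Residual of the kind reported in the abstract: squared distances
    # between transformed model points and segmented image points
    return float(((model_pts @ R.T + t - image_pts) ** 2).sum())
```

A per-point RMS of this residual corresponds to the millimeter-scale error figures quoted for the phantom.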

  2. Ultra-Fast Hadronic Calorimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denisov, Dmitri; Lukić, Strahinja; Mokhov, Nikolai

    2017-12-18

    Calorimeters for particle physics experiments with integration time of a few ns will substantially improve the capability of the experiment to resolve event pileup and to reject backgrounds. In this paper the time development of hadronic showers induced by 30 and 60 GeV positive pions and 120 GeV protons is studied using Monte Carlo simulation and beam tests with a prototype of a sampling steel-scintillator hadronic calorimeter. In the beam tests, scintillator signals induced by hadronic showers in steel are sampled with a period of 0.2 ns and precisely time-aligned in order to study the average signal waveform at various locations w.r.t. the beam particle impact. Simulations of the same setup are performed using the MARS15 code. Both simulation and test beam results suggest that energy deposition in steel calorimeters develops over a time shorter than 3 ns, providing opportunity for ultra-fast calorimetry. Simulation results for an "ideal" calorimeter consisting exclusively of bulk tungsten or copper are presented to establish the lower limit of the signal integration window.

  3. Estimating BrAC from transdermal alcohol concentration data using the BrAC estimator software program.

    PubMed

    Luczak, Susan E; Rosen, I Gary

    2014-08-01

    Transdermal alcohol sensor (TAS) devices have the potential to allow researchers and clinicians to unobtrusively collect naturalistic drinking data for weeks at a time, but the transdermal alcohol concentration (TAC) data these devices produce do not consistently correspond with breath alcohol concentration (BrAC) data. We present and test the BrAC Estimator software, a program designed to produce individualized estimates of BrAC from TAC data by fitting mathematical models to a specific person wearing a specific TAS device. Two TAS devices were worn simultaneously by 1 participant for 18 days. The trial began with a laboratory alcohol session to calibrate the model and was followed by a field trial with 10 drinking episodes. Model parameter estimates and fit indices were compared across drinking episodes to examine the calibration phase of the software. Software-generated estimates of peak BrAC, time of peak BrAC, and area under the BrAC curve were compared with breath analyzer data to examine the estimation phase of the software. In this single-subject design with breath analyzer peak BrAC scores ranging from 0.013 to 0.057, the software created consistent models for the 2 TAS devices, despite differences in raw TAC data, and was able to compensate for the attenuation of peak BrAC and latency of the time of peak BrAC that are typically observed in TAC data. This software program represents an important initial step for making it possible for non mathematician researchers and clinicians to obtain estimates of BrAC from TAC data in naturalistic drinking environments. Future research with more participants and greater variation in alcohol consumption levels and patterns, as well as examination of gain scheduling calibration procedures and nonlinear models of diffusion, will help to determine how precise these software models can become. Copyright © 2014 by the Research Society on Alcoholism.

  4. An Innovative Software Tool Suite for Power Plant Model Validation and Parameter Calibration using PMU Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yuanyuan; Diao, Ruisheng; Huang, Renke

    Maintaining good quality of power plant stability models is of critical importance to ensure the secure and economic operation and planning of today’s power grid with its increasing stochastic and dynamic behavior. According to North American Electric Reliability Corporation (NERC) standards, all generators in North America with capacities larger than 10 MVA are required to validate their models every five years. Validation is quite costly and can significantly affect the revenue of generator owners, because the traditional staged testing requires generators to be taken offline. Over the past few years, validating and calibrating parameters using online measurements including phasor measurement units (PMUs) and digital fault recorders (DFRs) has been proven to be a cost-effective approach. In this paper, an innovative open-source tool suite is presented for validating power plant models using the PPMV tool, identifying bad parameters with trajectory sensitivity analysis, and finally calibrating parameters using an ensemble Kalman filter (EnKF)-based algorithm. The architectural design and the detailed procedures to run the tool suite are presented, with results of tests on a realistic hydro power plant using PMU measurements for 12 different events. The calibrated parameters of machine, exciter, governor and PSS models demonstrate much better performance than the original models for all the events and show the robustness of the proposed calibration algorithm.
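
    The EnKF calibration step can be illustrated generically. The sketch below is a textbook perturbed-observation ensemble Kalman filter analysis step, not the tool suite's implementation; the observation operator and noise level are assumptions for illustration.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_op, obs_var, rng):
    # One EnKF analysis step: nudge each parameter-ensemble member toward
    # a perturbed copy of the observation using the sample Kalman gain.
    X = ensemble                                 # (n_members, n_params)
    Y = np.array([obs_op(x) for x in X])         # predicted observations
    Xm, Ym = X.mean(axis=0), Y.mean(axis=0)
    n = X.shape[0]
    Pxy = (X - Xm).T @ (Y - Ym) / (n - 1)        # param-obs covariance
    Pyy = (Y - Ym).T @ (Y - Ym) / (n - 1) + obs_var * np.eye(Y.shape[1])
    K = Pxy @ np.linalg.inv(Pyy)                 # sample Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=Y.shape)
    return X + (perturbed - Y) @ K.T             # analysis ensemble
```

In a model-calibration setting, `obs_op` would run the dynamic plant model for a given parameter vector and return the simulated PMU response; here it is left abstract.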

  5. Modular modeling system for building distributed hydrologic models with a user-friendly software package

    NASA Astrophysics Data System (ADS)

    Wi, S.; Ray, P. A.; Brown, C.

    2015-12-01

    A software package developed to facilitate building distributed hydrologic models in a modular modeling system is presented. The software package provides a user-friendly graphical user interface that eases its practical use in water resources-related research and practice. The modular modeling system organizes the options available to users when assembling models according to the stages of the hydrological cycle, such as potential evapotranspiration, soil moisture accounting, and snow/glacier melting processes. The software is intended to be a comprehensive tool that simplifies the task of developing, calibrating, validating, and using hydrologic models through the inclusion of intelligent automation to minimize user effort, and reduce opportunities for error. Processes so far automated include the definition of system boundaries (i.e., watershed delineation), climate and geographical input generation, and parameter calibration. Built-in post-processing toolkits greatly improve the functionality of the software as a decision support tool for water resources system management and planning. Example post-processing toolkits enable streamflow simulation at ungauged sites with predefined model parameters, and perform climate change risk assessment by means of the decision scaling approach. The software is validated through application to watersheds representing a variety of hydrologic regimes.

  6. GP Workbench Manual: Technical Manual, User's Guide, and Software Guide

    USGS Publications Warehouse

    Oden, Charles P.; Moulton, Craig W.

    2006-01-01

    GP Workbench is an open-source general-purpose geophysical data processing software package written primarily for ground penetrating radar (GPR) data. It also includes support for several USGS prototype electromagnetic instruments such as the VETEM and ALLTEM. The two main programs in the package are GP Workbench and GP Wave Utilities. GP Workbench has routines for filtering, gridding, and migrating GPR data, as well as an inversion routine for characterizing UXO (unexploded ordnance) using ALLTEM data. GP Workbench provides two-dimensional (section view) and three-dimensional (plan view or time slice view) processing for GPR data. GP Workbench can produce high-quality graphics for reports when Surfer 8 or higher (Golden Software) is installed. GP Wave Utilities provides a wide range of processing algorithms for single waveforms, such as filtering, correlation, deconvolution, and calculating GPR waveforms. GP Wave Utilities is used primarily for calibrating radar systems and processing individual traces. Both programs also contain research features related to the calibration of GPR systems and calculating subsurface waveforms. The software is written to run on the Windows operating systems. GP Workbench can import GPR data file formats used by major commercial instrument manufacturers including Sensors and Software, GSSI, and Mala. The GP Workbench native file format is SU (Seismic Unix), and subsequently, files generated by GP Workbench can be read by Seismic Unix as well as many other data processing packages.

  7. MOSAIC: Software for creating mosaics from collections of images

    NASA Technical Reports Server (NTRS)

    Varosi, F.; Gezari, D. Y.

    1992-01-01

    We have developed a powerful, versatile image processing and analysis software package called MOSAIC, designed specifically for the manipulation of digital astronomical image data obtained with (but not limited to) two-dimensional array detectors. The software package is implemented using the Interactive Data Language (IDL), and incorporates new methods for processing, calibration, analysis, and visualization of astronomical image data, stressing effective methods for the creation of mosaic images from collections of individual exposures, while at the same time preserving the photometric integrity of the original data. Since IDL is available on many computers, the MOSAIC software runs on most UNIX and VAX workstations with the X-Windows or SunView graphics interface.

  8. Texas flexible pavements and overlays : calibration plans for M-E models and related software.

    DOT National Transportation Integrated Search

    2013-06-01

    This five-year project was initiated to collect materials and pavement performance data on a minimum of 100 highway test sections around the State of Texas, incorporating flexible pavements and overlays. Besides being used to calibrate and validate m...

  9. Subsite binding energies of an exo-polygalacturonase using isothermal titration calorimetry

    USDA-ARS?s Scientific Manuscript database

    Thermodynamic parameters for binding of a series of galacturonic acid oligomers to an exo-polygalacturonase, RPG16 from Rhizopus oryzae, were determined by isothermal titration calorimetry. Binding of oligomers varying in chain length from two to five galacturonic acid residues is an exothermic proc...

  10. A Practical Guide to Calibration of a GSSHA Hydrologic Model Using ERDC Automated Model Calibration Software - Effective and Efficient Stochastic Global Optimization

    DTIC Science & Technology

    2012-02-01

    parameter estimation method, but rather to carefully describe how to use the ERDC software implementation of MLSL that accommodates the PEST model...model independent LM method based parameter estimation software PEST (Doherty, 2004, 2007a, 2007b), which quantifies model-to-measurement misfit...et al. (2011) focused on one drawback associated with LM-based model independent parameter estimation as implemented in PEST; viz., that it requires

  11. Reliably detectable flaw size for NDE methods that use calibration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    Probability of detection (POD) analysis is used in assessing reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, where calibration is used. NDE calibration standards have known-size artificial flaws such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors which are used to set instrument sensitivity for detection of real flaws. Real flaws such as cracks and crack-like flaws are desired to be detected using these NDE methods. A reliably detectable crack size is required for safe life analysis of fracture critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses from artificial flaws used in the calibration process to determine reliably detectable flaw size.
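
    The POD curve fitted in this kind of analysis is commonly log-logistic, and quantities like the 90%-detectable flaw size come from inverting it. A minimal sketch, with hypothetical mu and sigma (in log flaw size) rather than values from any real mh1823 fit:

```python
import math

def pod(a, mu, sigma):
    # Log-logistic POD model commonly used for hit/miss data:
    # POD(a) = 1 / (1 + exp(-(ln a - mu)/sigma))
    return 1.0 / (1.0 + math.exp(-(math.log(a) - mu) / sigma))

def a_for_pod(p, mu, sigma):
    # Invert the model: flaw size at which POD reaches probability p
    # (e.g. p = 0.9 gives the a90 flaw size)
    return math.exp(mu + sigma * math.log(p / (1.0 - p)))
```

A full MIL-HDBK-1823 analysis would also attach a confidence bound (a90/95) via the covariance of the fitted parameters, which this sketch omits.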

  12. Reliably Detectable Flaw Size for NDE Methods that Use Calibration

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used in assessing reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, where calibration is used. NDE calibration standards have known-size artificial flaws such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors which are used to set instrument sensitivity for detection of real flaws. Real flaws such as cracks and crack-like flaws are desired to be detected using these NDE methods. A reliably detectable crack size is required for safe life analysis of fracture critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses from artificial flaws used in the calibration process to determine reliably detectable flaw size.

  13. Calibration of raw accelerometer data to measure physical activity: A systematic review.

    PubMed

    de Almeida Mendes, Márcio; da Silva, Inácio C M; Ramires, Virgílio V; Reichert, Felipe F; Martins, Rafaela C; Tomasi, Elaine

    2018-03-01

    Most calibration studies based on accelerometry have used count-based analyses. In contrast, calibration studies based on raw acceleration signals are relatively recent, and their evidence base is still incipient. The aim of the current study was to systematically review the literature in order to summarize methodological characteristics and results from raw data calibration studies. The review was conducted up to May 2017 using four databases: PubMed, Scopus, SPORTDiscus and Web of Science. Methodological quality of the included studies was evaluated using Landis and Koch's guidelines. Initially, 1669 titles were identified and, after assessing titles, abstracts and full articles, 20 studies were included. All studies were conducted in high-income countries, most of them with relatively small samples and specific population groups. Physical activity protocols were different among studies, and indirect calorimetry was the most commonly used criterion measure. High mean values of sensitivity, specificity and accuracy from the intensity thresholds of cut-point-based studies were observed (93.7%, 91.9% and 95.8%, respectively). The most frequent statistical approach applied was machine learning-based modelling, in which the mean coefficient of determination was 0.70 to predict physical activity energy expenditure. Regarding the recognition of physical activity types, the mean values of accuracy for sedentary, household and locomotive activities were 82.9%, 55.4% and 89.7%, respectively. In conclusion, considering the construct of physical activity that each approach assesses, linear regression, machine-learning and cut-point-based approaches presented promising validity parameters. Copyright © 2017 Elsevier B.V. All rights reserved.
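
    The sensitivity, specificity, and accuracy figures summarized above follow the standard confusion-matrix definitions, which can be stated in a few lines:

```python
def classification_metrics(tp, fp, tn, fn):
    # Standard definitions used to evaluate intensity cut-points:
    # sensitivity = true-positive rate, specificity = true-negative rate
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

In a cut-point study, "positive" would mean, e.g., an epoch classified as moderate-to-vigorous activity, with the criterion measure (here usually indirect calorimetry) supplying the ground truth.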

  14. Ultra-Fast Hadronic Calorimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denisov, Dmitri; Lukić, Strahinja; Mokhov, Nikolai

    2018-08-01

    Calorimeters for particle physics experiments with integration time of a few ns will substantially improve the capability of the experiment to resolve event pileup and to reject backgrounds. In this paper the time development of hadronic showers induced by 30 and 60 GeV positive pions and 120 GeV protons is studied using Monte Carlo simulation and beam tests with a prototype of a sampling steel-scintillator hadronic calorimeter. In the beam tests, scintillator signals induced by hadronic showers in steel are sampled with a period of 0.2 ns and precisely time-aligned in order to study the average signal waveform at various locations with respect to the beam particle impact. Simulations of the same setup are performed using the MARS15 code. Both simulation and test beam results suggest that energy deposition in steel calorimeters develops over a time shorter than 2 ns, providing opportunity for ultra-fast calorimetry. Simulation results for an “ideal” calorimeter consisting exclusively of bulk tungsten or copper are presented to establish the lower limit of the signal integration window.

  15. Ultra-fast hadronic calorimetry

    DOE PAGES

    Denisov, Dmitri; Lukic, Strahinja; Mokhov, Nikolai; ...

    2018-05-08

    Calorimeters for particle physics experiments with integration time of a few ns will substantially improve the capability of the experiment to resolve event pileup and to reject backgrounds. In this paper the time development of hadronic showers induced by 30 and 60 GeV positive pions and 120 GeV protons is studied using Monte Carlo simulation and beam tests with a prototype of a sampling steel-scintillator hadronic calorimeter. In the beam tests, scintillator signals induced by hadronic showers in steel are sampled with a period of 0.2 ns and precisely time-aligned in order to study the average signal waveform at various locations with respect to the beam particle impact. Simulations of the same setup are performed using the MARS15 code. Both simulation and test beam results suggest that energy deposition in steel calorimeters develops over a time shorter than 2 ns, providing opportunity for ultra-fast calorimetry. Finally, simulation results for an “ideal” calorimeter consisting exclusively of bulk tungsten or copper are presented to establish the lower limit of the signal integration window.

  16. New software tools for enhanced precision in robot-assisted laser phonomicrosurgery.

    PubMed

    Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G

    2012-01-01

    This paper describes a new software package created to enhance precision during robot-assisted laser phonomicrosurgery procedures. The new software is composed of three tools for camera calibration, automatic tumor segmentation, and laser tracking. These were designed and developed to improve the outcome of this demanding microsurgical technique, and were tested herein to produce quantitative performance data. The experimental setup was based on the motorized laser micromanipulator created by Istituto Italiano di Tecnologia and the experimental protocols followed are fully described in this paper. The results show the new tools are robust and effective: the camera calibration tool reduced residual errors (RMSE) to 0.009 ± 0.002 mm under 40× microscope magnification; the automatic tumor segmentation tool resulted in deep lesion segmentations comparable to manual segmentations (RMSE = 0.160 ± 0.028 mm under 40× magnification); and the laser tracker tool proved to be reliable even during cutting procedures (RMSE = 0.073 ± 0.023 mm under 40× magnification). These results demonstrate the new software package can provide excellent improvements to the previous microsurgical system, leading to important enhancements in surgical outcome.

  17. In Search of Easy-to-Use Methods for Calibrating ADCP's for Velocity and Discharge Measurements

    USGS Publications Warehouse

    Oberg, K.; ,

    2002-01-01

    A cost-effective procedure for calibrating acoustic Doppler current profilers (ADCPs) in the field was presented. The advantages and disadvantages of various methods used for calibrating ADCPs were discussed. The proposed method requires the use of a differential global positioning system (DGPS) with sub-meter accuracy and standard software for collecting ADCP data. The method involves traversing a long (400-800 meter) course at a constant compass heading and speed, while collecting simultaneous DGPS and ADCP data.

  18. Direct calorimetry of free-moving eels with manipulated thyroid status

    NASA Astrophysics Data System (ADS)

    van Ginneken, Vincent; Ballieux, Bart; Antonissen, Erik; van der Linden, Rob; Gluvers, Ab; van den Thillart, Guido

    2007-02-01

    In birds and mammals, the thyroid gland secretes the iodothyronine hormones, of which tetraiodothyronine (T4) is less active than triiodothyronine (T3). The action of T3 and T4 is calorigenic and is involved in the control of metabolic rate. Across all vertebrates, thyroid hormones also play a major role in differentiation, development and growth. Although the fish thyroidal system has been researched extensively, its role in thermogenesis is unclear. In this study, we measured overall heat production to an accuracy of 0.1 mW by direct calorimetry in free-moving European eels (Anguilla anguilla L.) with different thyroid status. Hyperthyroidism was induced by injection of T3 and T4, and hypothyroidism was induced with phenylthiourea. The results show for the first time at the organismal level, using direct calorimetry, that neither overall heat production nor overall oxygen consumption in eels is affected by hyperthyroidism. We therefore conclude that the thermogenic, metabolism-stimulating effect of thyroid hormones (TH) is not present in a cold-blooded fish species such as the European eel. This supports the concept that TH does not stimulate thermogenesis in poikilothermic species.

  19. Calibration of fluorescence resonance energy transfer in microscopy

    DOEpatents

    Youvan, Douglas C.; Silva, Christopher M.; Bylina, Edward J.; Coleman, William J.; Dilworth, Michael R.; Yang, Mary M.

    2003-12-09

    Imaging hardware, software, calibrants, and methods are provided to visualize and quantitate the amount of Fluorescence Resonance Energy Transfer (FRET) occurring between donor and acceptor molecules in epifluorescence microscopy. The MicroFRET system compensates for overlap among donor, acceptor, and FRET spectra using well characterized fluorescent beads as standards in conjunction with radiometrically calibrated image processing techniques. The MicroFRET system also provides precisely machined epifluorescence cubes to maintain proper image registration as the sample is illuminated at the donor and acceptor excitation wavelengths. Algorithms are described that pseudocolor the image to display pixels exhibiting radiometrically-corrected fluorescence emission from the donor (blue), the acceptor (green) and FRET (red). The method is demonstrated on samples exhibiting FRET between genetically engineered derivatives of the Green Fluorescent Protein (GFP) bound to the surface of Ni chelating beads by histidine-tags.

  20. Calibration of fluorescence resonance energy transfer in microscopy

    DOEpatents

    Youvan, Douglas C.; Silva, Christopher M.; Bylina, Edward J.; Coleman, William J.; Dilworth, Michael R.; Yang, Mary M.

    2002-09-24

    Imaging hardware, software, calibrants, and methods are provided to visualize and quantitate the amount of Fluorescence Resonance Energy Transfer (FRET) occurring between donor and acceptor molecules in epifluorescence microscopy. The MicroFRET system compensates for overlap among donor, acceptor, and FRET spectra using well characterized fluorescent beads as standards in conjunction with radiometrically calibrated image processing techniques. The MicroFRET system also provides precisely machined epifluorescence cubes to maintain proper image registration as the sample is illuminated at the donor and acceptor excitation wavelengths. Algorithms are described that pseudocolor the image to display pixels exhibiting radiometrically-corrected fluorescence emission from the donor (blue), the acceptor (green) and FRET (red). The method is demonstrated on samples exhibiting FRET between genetically engineered derivatives of the Green Fluorescent Protein (GFP) bound to the surface of Ni chelating beads by histidine-tags.

  1. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
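
    The response-curve determination described above reduces to fitting the measured integrated frame signal against the known source brightness. A minimal sketch under the assumption of a linear response (all values illustrative, not from the instrument):

```python
import numpy as np

# Known relative brightness of the artificial star in each frame
# (set by the rotating neutral-density filter) -- illustrative values
brightness = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])

# Integrated signal measured in each video frame (illustrative)
signal = np.array([12.0, 23.5, 47.0, 71.0, 95.5, 119.0])

# Fit a linear response curve: signal = gain * brightness + offset
gain, offset = np.polyfit(brightness, signal, 1)
print(round(gain, 1), round(offset, 1))
```

    A real response curve is generally nonlinear near saturation, so a higher-order fit or lookup table would replace the straight line in practice.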

  2. ACTS (Advanced Communications Technology Satellite) Propagation Experiment: Preprocessing Software User's Manual

    NASA Technical Reports Server (NTRS)

    Crane, Robert K.; Wang, Xuhe; Westenhaver, David

    1996-01-01

    The preprocessing software manual describes the Actspp program, originally developed to observe and diagnose Advanced Communications Technology Satellite (ACTS) propagation terminal/receiver problems. However, it has been quite useful for automating the preprocessing functions needed to convert the terminal output to useful attenuation estimates. Before the data are acceptable for archival, the individual receiver system must be calibrated and the power level shifts caused by ranging tone modulation must be removed. Actspp provides three output files: the daylog, the diurnal coefficient file, and the file that contains calibration information.

  3. gr-MRI: A software package for magnetic resonance imaging using software defined radios

    NASA Astrophysics Data System (ADS)

    Hasselwander, Christopher J.; Cao, Zhipeng; Grissom, William A.

    2016-09-01

    The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events was also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs.
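
    The frequency-swept RF waveform mentioned above can be sketched as a baseband linear chirp; this is a generic illustration of such a pulse, not gr-MRI's actual GNU Radio implementation:

```python
import numpy as np

def linear_chirp(bandwidth_hz, duration_s, sample_rate_hz):
    """Baseband linear frequency sweep from -BW/2 to +BW/2,
    as a complex waveform an SDR could transmit."""
    t = np.arange(int(duration_s * sample_rate_hz)) / sample_rate_hz
    f0 = -bandwidth_hz / 2.0
    k = bandwidth_hz / duration_s          # sweep rate, Hz/s
    phase = 2 * np.pi * (f0 * t + 0.5 * k * t ** 2)
    return np.exp(1j * phase)

# 500 kHz sweep over 2 ms at 2 MS/s (illustrative parameters)
pulse = linear_chirp(500e3, 2e-3, 2e6)
print(len(pulse))  # 4000 samples
```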

  4. gr-MRI: A software package for magnetic resonance imaging using software defined radios.

    PubMed

    Hasselwander, Christopher J; Cao, Zhipeng; Grissom, William A

    2016-09-01

    The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events was also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs.

  5. A frequentist approach to computer model calibration

    DOE PAGES

    Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.

    2016-05-05

    The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. Finally, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.
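
    The two-step idea can be illustrated on a toy problem: first estimate the physical response nonparametrically, then choose the computer-model parameter that brings the model closest to that estimate. This is only a schematic of the general approach with made-up data; it does not reproduce the paper's estimator or its identifiability treatment:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Physical reality": computer model at theta = 2 plus a smooth discrepancy
x = np.linspace(0, 1, 50)
y = 2.0 * x + 0.1 * np.sin(2 * np.pi * x) + rng.normal(0, 0.01, x.size)

def computer_model(x, theta):
    # toy computer model: a straight line through the origin
    return theta * x

# Step 1: nonparametric estimate of the true response (moving average)
kernel = np.ones(5) / 5
y_smooth = np.convolve(y, kernel, mode="same")

# Step 2: pick theta minimizing the distance between the computer model
# and the smoothed response (edges trimmed: distorted by the moving average)
thetas = np.linspace(0, 4, 401)
sse = [np.sum((computer_model(x[2:-2], t) - y_smooth[2:-2]) ** 2)
       for t in thetas]
theta_hat = thetas[int(np.argmin(sse))]
print(theta_hat)  # near 2, offset slightly by the discrepancy term
```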

  6. Space environment simulation and sensor calibration facility

    NASA Astrophysics Data System (ADS)

    Engelhart, Daniel P.; Patton, James; Plis, Elena; Cooper, Russell; Hoffmann, Ryan; Ferguson, Dale; Hilmer, Robert V.; McGarity, John; Holeman, Ernest

    2018-02-01

    The Mumbo space environment simulation chamber discussed here comprises a set of tools to calibrate a variety of low flux, low energy electron and ion detectors used in satellite-mounted particle sensors. The chamber features electron and ion beam sources, a Lyman-alpha ultraviolet lamp, a gimbal table sensor mounting system, cryogenic sample mount and chamber shroud, and beam characterization hardware and software. The design of the electron and ion sources presented here offers a number of unique capabilities for space weather sensor calibration. Both sources create particle beams with narrow, well-characterized energetic and angular distributions with beam diameters that are larger than most space sensor apertures. The electron and ion sources can produce consistently low fluxes that are representative of quiescent space conditions. The particle beams are characterized by 2D beam mapping with several co-located pinhole aperture electron multipliers to capture relative variation in beam intensity and a large aperture Faraday cup to measure absolute current density.

  7. Space environment simulation and sensor calibration facility.

    PubMed

    Engelhart, Daniel P; Patton, James; Plis, Elena; Cooper, Russell; Hoffmann, Ryan; Ferguson, Dale; Hilmer, Robert V; McGarity, John; Holeman, Ernest

    2018-02-01

    The Mumbo space environment simulation chamber discussed here comprises a set of tools to calibrate a variety of low flux, low energy electron and ion detectors used in satellite-mounted particle sensors. The chamber features electron and ion beam sources, a Lyman-alpha ultraviolet lamp, a gimbal table sensor mounting system, cryogenic sample mount and chamber shroud, and beam characterization hardware and software. The design of the electron and ion sources presented here offers a number of unique capabilities for space weather sensor calibration. Both sources create particle beams with narrow, well-characterized energetic and angular distributions with beam diameters that are larger than most space sensor apertures. The electron and ion sources can produce consistently low fluxes that are representative of quiescent space conditions. The particle beams are characterized by 2D beam mapping with several co-located pinhole aperture electron multipliers to capture relative variation in beam intensity and a large aperture Faraday cup to measure absolute current density.

  8. Higher Throughput Calorimetry: Opportunities, Approaches and Challenges

    PubMed Central

    Recht, Michael I.; Coyle, Joseph E.; Bruce, Richard H.

    2010-01-01

    Higher throughput thermodynamic measurements can provide value in structure-based drug discovery during fragment screening, hit validation, and lead optimization. Enthalpy can be used to detect and characterize ligand binding, and changes that affect the interaction of protein and ligand can sometimes be detected more readily from changes in the enthalpy of binding than from the corresponding free-energy changes or from protein-ligand structures. Newer, higher throughput calorimeters are being incorporated into the drug discovery process. Improvements in titration calorimeters come from extensions of a mature technology and face limitations in scaling. Conversely, array calorimetry, an emerging technology, shows promise for substantial improvements in throughput and material utilization, but improved sensitivity is needed. PMID:20888754

  9. Thermodynamic investigations of protein's behaviour with ionic liquids in aqueous medium studied by isothermal titration calorimetry.

    PubMed

    Bharmoria, Pankaj; Kumar, Arvind

    2016-05-01

    While a number of reports have appeared on ionic liquid-protein interactions, their thermodynamic behaviour has not been systematically examined using a suitable technique such as isothermal titration calorimetry. Isothermal titration calorimetry (ITC) is a key technique that can directly measure the thermodynamic contributions of IL binding to a protein, particularly the enthalpy, heat capacity and binding stoichiometry. Ionic liquids (ILs), owing to their unique and tunable physicochemical properties, have been a central area of scientific research (alongside graphene) over the last decade, and interest continues to grow. Their encounter with proteins in biological systems is inevitable given their environmental discharge, even though most are recyclable over a number of cycles. In this article we cover the thermodynamics of proteins upon interaction with ILs acting as osmolytes and surfactants. An up-to-date survey of the literature on IL-protein interactions studied by isothermal titration calorimetry is discussed, and the results are compared in parallel with those obtained by other techniques to demonstrate the accuracy of ITC. The net stability of a protein can be obtained from the difference in free energy (ΔG) between the native (folded) and denatured (unfolded) states using the Gibbs-Helmholtz equation (ΔG = ΔH - TΔS). Isothermal titration calorimetry directly measures the heat changes upon IL-protein interaction. Calculation of other thermodynamic parameters such as entropy, binding constant and free energy depends on proper fitting of the binding isotherms using appropriate binding models.
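
    The Gibbs-Helmholtz relation quoted above is a one-line computation; a sketch with illustrative binding values (not data from the article):

```python
def gibbs_free_energy(delta_h_kj, temp_k, delta_s_kj_per_k):
    """Gibbs-Helmholtz relation: dG = dH - T*dS, in kJ/mol."""
    return delta_h_kj - temp_k * delta_s_kj_per_k

# Illustrative binding values: dH = -40 kJ/mol,
# dS = -0.05 kJ/(mol K), T = 298 K
print(gibbs_free_energy(-40.0, 298.0, -0.05))  # about -25.1 kJ/mol
```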

  10. Do PICU patients meet technical criteria for performing indirect calorimetry?

    PubMed

    Beggs, Megan R; Garcia Guerra, Gonzalo; Larsen, Bodil M K

    2016-10-01

    Indirect calorimetry (IC) is considered the gold standard for assessing energy needs of critically ill children, as predictive equations and clinical status indicators are often unreliable. Accurate assessment of energy requirements in this vulnerable population is essential given the high risk of over- or underfeeding and the consequences thereof. The proportion of patients and patient days in pediatric intensive care (PICU) for which energy expenditure (EE) can be measured using IC is currently unknown. In the current study, we aimed to quantify the daily proportion of consecutive PICU patients who met technical criteria to perform indirect calorimetry and describe the technical contraindications when criteria were not met. This was a prospective, observational, single-centre study conducted in a cardiac and general PICU. All consecutive patients admitted for at least 96 h were included in the study. Variables collected for each patient included age at admission, admission diagnosis, and whether technical criteria for indirect calorimetry were met. Technical criteria variables were collected within the same 2 h each morning and include: provision of supplemental oxygen, ventilator settings, endotracheal tube (ETT) leak, diagnosis of chest tube air leak, provision of external gas support (i.e. nitric oxide), and provision of extracorporeal membrane oxygenation (ECMO). 288 patients were included for a total of 3590 patient days between June 2014 and February 2015. The main reasons for admission were: surgery (cardiac and non-cardiac), respiratory distress, trauma, oncology and medicine/other. The median (interquartile range) patient age was 0.7 (0.3-4.6) years. The median length of PICU stay was 7 (5-14) days. Only 34% (95% CI, 32.4-35.5%) of patient days met technical criteria for IC. For patients less than 6 months of age, technical criteria were met on significantly fewer patient days (29%, p < 0.01). Moreover, 27% of patients did not meet technical criteria for IC on any day.
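
    The 34% (95% CI, 32.4-35.5%) figure is consistent with a normal-approximation confidence interval over the 3590 patient days; a sketch of that calculation (the interval method used in the study is an assumption):

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# 34% of 3590 patient days met the technical criteria
lo, hi = proportion_ci(0.34 * 3590, 3590)
print(round(100 * lo, 1), round(100 * hi, 1))  # roughly 32.5 and 35.5
```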

  11. Status of the calibration and alignment framework at the Belle II experiment

    NASA Astrophysics Data System (ADS)

    Dossett, D.; Sevior, M.; Ritter, M.; Kuhr, T.; Bilka, T.; Yaschenko, S.; Belle Software Group, II

    2017-10-01

    The Belle II detector at the Super KEKB e+e- collider plans to take first collision data in 2018. The monetary and CPU time costs associated with storing and processing the data mean that it is crucial for the detector components at Belle II to be calibrated quickly and accurately. A fast and accurate calibration system would allow the high level trigger to increase the efficiency of event selection, and can give users analysis-quality reconstruction promptly. A flexible framework to automate the fast production of calibration constants is being developed in the Belle II Analysis Software Framework (basf2). Detector experts only need to create two components from C++ base classes in order to use the automation system. The first collects data from Belle II event data files and outputs much smaller files to pass to the second component. This runs the main calibration algorithm to produce calibration constants ready for upload into the conditions database. A Python framework coordinates the input files, order of processing, and submission of jobs. Splitting the operation into collection and algorithm processing stages allows the framework to optionally parallelize the collection stage on a batch system.
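
    The collector/algorithm split described above can be sketched as two cooperating classes; the names and toy logic below are purely illustrative and are not the basf2 API:

```python
# Schematic of the two-stage collect/calibrate pattern; class and
# method names are hypothetical stand-ins, not basf2 classes.
class Collector:
    """Stage 1: reduce event data files to compact summaries."""
    def collect(self, event_files):
        # toy reduction: sum the "hits" in each file
        return [sum(events) for events in event_files]

class Algorithm:
    """Stage 2: turn collected summaries into a calibration constant."""
    def calibrate(self, summaries):
        # toy constant: mean of the per-file summaries
        return sum(summaries) / len(summaries)

event_files = [[1, 2, 3], [4, 5], [6]]       # stand-ins for event data
summaries = Collector().collect(event_files)  # parallelizable stage
constant = Algorithm().calibrate(summaries)   # serial calibration stage
print(summaries, constant)  # [6, 9, 6] 7.0
```

    Because each file's summary is independent, the first stage can run as separate batch jobs, which is exactly what makes the collection stage parallelizable.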

  12. Calibration of Wide-Field Deconvolution Microscopy for Quantitative Fluorescence Imaging

    PubMed Central

    Lee, Ji-Sook; Wee, Tse-Luen (Erika); Brown, Claire M.

    2014-01-01

    Deconvolution enhances contrast in fluorescence microscopy images, especially in low-contrast, high-background wide-field microscope images, improving characterization of features within the sample. Deconvolution can also be combined with other imaging modalities, such as confocal microscopy, and most software programs seek to improve resolution as well as contrast. Quantitative image analyses require instrument calibration and with deconvolution, necessitate that this process itself preserves the relative quantitative relationships between fluorescence intensities. To ensure that the quantitative nature of the data remains unaltered, deconvolution algorithms need to be tested thoroughly. This study investigated whether the deconvolution algorithms in AutoQuant X3 preserve relative quantitative intensity data. InSpeck Green calibration microspheres were prepared for imaging, z-stacks were collected using a wide-field microscope, and the images were deconvolved using the iterative deconvolution algorithms with default settings. Afterwards, the mean intensities and volumes of microspheres in the original and the deconvolved images were measured. Deconvolved data sets showed higher average microsphere intensities and smaller volumes than the original wide-field data sets. In original and deconvolved data sets, intensity means showed linear relationships with the relative microsphere intensities given by the manufacturer. Importantly, upon normalization, the trend lines were found to have similar slopes. In original and deconvolved images, the volumes of the microspheres were quite uniform for all relative microsphere intensities. We were able to show that AutoQuant X3 deconvolution software data are quantitative. In general, the protocol presented can be used to calibrate any fluorescence microscope or image processing and analysis procedure. PMID:24688321
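
    The slope comparison described above (normalized measured intensity versus the manufacturer-supplied relative intensity) can be sketched as follows; all intensity values are illustrative, not the study's measurements:

```python
import numpy as np

# Relative microsphere intensities from the manufacturer (illustrative)
relative = np.array([0.01, 0.1, 0.3, 1.0])

# Measured mean intensities, original vs deconvolved (illustrative)
original = np.array([10.0, 100.0, 300.0, 1000.0])
deconvolved = np.array([25.0, 250.0, 750.0, 2500.0])

def normalized_slope(measured, relative):
    """Slope of measured-vs-relative intensity after scaling to the max."""
    norm = measured / measured.max()
    slope, _ = np.polyfit(relative, norm, 1)
    return slope

s1 = normalized_slope(original, relative)
s2 = normalized_slope(deconvolved, relative)
print(round(s1, 3), round(s2, 3))  # similar slopes -> quantitation preserved
```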

  13. Dual-readout calorimetry: recent results from RD52 and plans for experiments at future e+e- colliders

    NASA Astrophysics Data System (ADS)

    Ferrari, R.

    2018-02-01

    Dual-readout calorimetry, developed to overcome the main limiting factor in hadronic energy measurements, has been thoroughly investigated by the DREAM/RD52 collaboration over the last 15 years. The latest results show that very interesting performance may be obtained for both e.m. and hadronic showers, together with excellent standalone e/pi separation. These results, along with the plans and expected performance for dual-readout calorimetry in the CepC/FCC-ee environment, are presented and discussed.

  14. Polarimetric SAR calibration experiment using active radar calibrators

    NASA Astrophysics Data System (ADS)

    Freeman, Anthony; Shen, Yuhsyen; Werner, Charles L.

    1990-03-01

    Active radar calibrators are used to derive both the amplitude and phase characteristics of a multichannel polarimetric SAR from the complex image data. Results are presented from an experiment carried out using the NASA/JPL DC-8 aircraft SAR over a calibration site at Goldstone, California. As part of the experiment, polarimetric active radar calibrators (PARCs) with adjustable polarization signatures were deployed. Experimental results demonstrate that the PARCs can be used to calibrate polarimetric SAR images successfully. Restrictions on the application of the PARC calibration procedure are discussed.

  15. Polarimetric SAR calibration experiment using active radar calibrators

    NASA Technical Reports Server (NTRS)

    Freeman, Anthony; Shen, Yuhsyen; Werner, Charles L.

    1990-01-01

    Active radar calibrators are used to derive both the amplitude and phase characteristics of a multichannel polarimetric SAR from the complex image data. Results are presented from an experiment carried out using the NASA/JPL DC-8 aircraft SAR over a calibration site at Goldstone, California. As part of the experiment, polarimetric active radar calibrators (PARCs) with adjustable polarization signatures were deployed. Experimental results demonstrate that the PARCs can be used to calibrate polarimetric SAR images successfully. Restrictions on the application of the PARC calibration procedure are discussed.

  16. ROSAS: a robotic station for atmosphere and surface characterization dedicated to on-orbit calibration

    NASA Astrophysics Data System (ADS)

    Meygret, Aimé; Santer, Richard P.; Berthelot, Béatrice

    2011-10-01

    The La Crau test site has been used by CNES since 1987 for vicarious calibration of the SPOT cameras. The early calibration activities were conducted during field campaigns devoted to characterizing the atmosphere and the site reflectances. In 1997, an automatic photometric station (ROSAS) was set up on the site atop a 10 m pole. This station measures, at different wavelengths, the solar extinction and the sky radiances to fully characterize the optical properties of the atmosphere. It also measures the upwelling radiance over the ground to fully characterize the surface reflectance properties. The photometer samples the spectrum from 380 nm to 1600 nm with 9 narrow bands. On every non-cloudy day the photometer automatically and sequentially performs its measurements. Data are transmitted by GSM (Global System for Mobile communications) to CNES and processed. The photometer is calibrated in situ against the sun for irradiance and cross-band calibration, and against Rayleigh scattering for the short-wavelength radiance calibration. The data are processed by operational software which calibrates the photometer, estimates the atmosphere properties, computes the bidirectional reflectance distribution function of the site, then simulates the top-of-atmosphere radiance seen by any sensor over-passing the site and calibrates that sensor. This paper describes the instrument, its measurement protocol and its calibration principle. Calibration results are discussed and compared to laboratory calibration. It details the surface reflectance characterization and presents SPOT4 calibration results deduced from the estimated TOA radiance. The results are compared to the official calibration.

  17. Heat of supersaturation-limited amyloid burst directly monitored by isothermal titration calorimetry.

    PubMed

    Ikenoue, Tatsuya; Lee, Young-Ho; Kardos, József; Yagi, Hisashi; Ikegami, Takahisa; Naiki, Hironobu; Goto, Yuji

    2014-05-06

    Amyloid fibrils form in supersaturated solutions via a nucleation and growth mechanism. Although the structural features of amyloid fibrils have become increasingly clearer, knowledge on the thermodynamics of fibrillation is limited. Furthermore, protein aggregation is not a target of calorimetry, one of the most powerful approaches used to study proteins. Here, with β2-microglobulin, a protein responsible for dialysis-related amyloidosis, we show direct heat measurements of the formation of amyloid fibrils using isothermal titration calorimetry (ITC). The spontaneous fibrillation after a lag phase was accompanied by exothermic heat. The thermodynamic parameters of fibrillation obtained under various protein concentrations and temperatures were consistent with the main-chain dominated structural model of fibrils, in which overall packing was less than that of the native structures. We also characterized the thermodynamics of amorphous aggregation, enabling the comparison of protein folding, amyloid fibrillation, and amorphous aggregation. These results indicate that ITC will become a promising approach for clarifying comprehensively the thermodynamics of protein folding and misfolding.

  18. The 2005 HST Calibration Workshop Hubble After the Transition to Two-Gyro Mode

    NASA Technical Reports Server (NTRS)

    Koekemoer, Anton M. (Editor); Goudfrooij, Paul (Editor); Dressel, Linda L. (Editor)

    2006-01-01

    The 2005 HST Calibration Workshop was held at the Space Telescope Science Institute on October 26, 2005, to bring together members of the observing community, the instrument development teams, and the STScI instrument support teams to share information and techniques. Presentations included the two-gyro performance of HST and FGS, advances in the calibration of a number of instruments, the results of other instruments after their return from space, and the status of still others which are scheduled for installation during the next servicing mission. Cross-calibration between HST and JWST was discussed, as well as the new Guide Star Catalog and advances in data analysis software. This book contains the published record of the workshop, while all the talks and posters are available electronically on the workshop Web site.

  19. Investigation of Phototriangulation Accuracy with Using of Various Techniques Laboratory and Field Calibration

    NASA Astrophysics Data System (ADS)

    Chibunichev, A. G.; Kurkov, V. M.; Smirnov, A. V.; Govorov, A. V.; Mikhalin, V. A.

    2016-10-01

    Aerial survey technology using systems based on unmanned aerial vehicles (UAVs) is becoming increasingly popular. UAVs cannot physically carry professional aerial cameras, so consumer digital cameras are used instead. Such cameras usually have a rolling, lamellar or global shutter. Quite often, manufacturers and users of such aerial systems do not perform camera calibration; self-calibration techniques are used instead. However, this approach is not confirmed by extensive theoretical and practical research. In this paper we compare the results of phototriangulation based on laboratory, test-field and self-calibration. For the investigations we use the Zaoksky test area as an experimental field, which provides a dense network of targeted and natural control points. Racurs PHOTOMOD and Agisoft PhotoScan software were used in the evaluation. The results of the investigations, conclusions and practical recommendations are presented in this article.

  20. 16 CFR 1633.7 - Mattress test procedure.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) Apparatus and test materials—(1) Calorimetry. The rate of heat release must be measured by means of oxygen consumption calorimetry. The calibration should follow generally accepted practices for calibration. The... maintained at a temperature greater than 15 °C (59 °F) and less than 27 °C (80.6 °F) and a relative humidity...

  1. 16 CFR 1633.7 - Mattress test procedure.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Apparatus and test materials—(1) Calorimetry. The rate of heat release must be measured by means of oxygen consumption calorimetry. The calibration should follow generally accepted practices for calibration. The... maintained at a temperature greater than 15 °C (59 °F) and less than 27 °C (80.6 °F) and a relative humidity...

  2. 16 CFR 1633.7 - Mattress test procedure.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) Apparatus and test materials—(1) Calorimetry. The rate of heat release must be measured by means of oxygen consumption calorimetry. The calibration should follow generally accepted practices for calibration. The... maintained at a temperature greater than 15 °C (59 °F) and less than 27 °C (80.6 °F) and a relative humidity...

  3. 16 CFR 1633.7 - Mattress test procedure.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) Apparatus and test materials—(1) Calorimetry. The rate of heat release must be measured by means of oxygen consumption calorimetry. The calibration should follow generally accepted practices for calibration. The... maintained at a temperature greater than 15 °C (59 °F) and less than 27 °C (80.6 °F) and a relative humidity...

  4. Calibration and validation of rockfall models

    NASA Astrophysics Data System (ADS)

    Frattini, Paolo; Valagussa, Andrea; Zenoni, Stefania; Crosta, Giovanni B.

    2013-04-01

    actual blocks, (2) the percentage of trajectories passing through the buffer of the actual rockfall path, (3) the mean distance between the arrest location of each simulated block and the location of the nearest actual block; and (4) the mean distance between the detachment location of each simulated block and the detachment location of the actual block closest to its arrest position. By applying the four measures to the case studies, we observed that all measures are able to represent model performance for validation purposes. However, the third measure is simpler and more reliable than the others, and seems optimal for model calibration, especially when using parameter-estimation and optimization modelling software for automated calibration.
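The third measure can be sketched in a few lines of code; the coordinates below are purely illustrative, not data from the case studies.

```python
import math

def mean_nearest_arrest_distance(simulated, actual):
    """Measure (3): mean distance from each simulated block's arrest
    point to the arrest point of the nearest actual block."""
    total = 0.0
    for sx, sy in simulated:
        total += min(math.hypot(sx - ax, sy - ay) for ax, ay in actual)
    return total / len(simulated)

# Toy 2-D arrest coordinates (metres); values are illustrative only.
sim = [(0.0, 0.0), (10.0, 0.0)]
act = [(3.0, 4.0), (10.0, 1.0)]
print(mean_nearest_arrest_distance(sim, act))  # 3.0 = (5.0 + 1.0) / 2
```

Because the measure is a single scalar that decreases smoothly as simulated arrests approach observed ones, it is well suited as an objective function for automated calibration.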

  5. Nitrous oxide emissions from cropland: a procedure for calibrating the DayCent biogeochemical model using inverse modelling

    USGS Publications Warehouse

    Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.

    2013-01-01

    DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon and nutrients in crop, grassland, forest and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional “trial and error” approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as for model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data were used for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved the model performance by reducing the sum of weighted squared residuals between measured and modelled outputs by up to 67 %. For the calibration period, simulation with the default model parameter values underestimated mean daily N2O flux by 98 %. After parameter estimation, the model underestimated the mean daily fluxes by 35 %. During the validation period, the calibrated model reduced the sum of weighted squared residuals by 20 % relative to the default simulation. The sensitivity analysis performed provides important insights into the model structure, offering guidance for model improvement.
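The objective that such a calibration minimizes is the sum of weighted squared residuals between measured and modelled outputs. A minimal sketch of the idea, with a hypothetical one-parameter flux model standing in for DayCent (PEST itself performs regularized Gauss-Levenberg-Marquardt inversion, not the crude sweep used here):

```python
def weighted_ssr(observed, simulated, weights):
    """PEST-style objective: sum of weighted squared residuals."""
    return sum((w * (o - s)) ** 2 for o, s, w in zip(observed, simulated, weights))

# Hypothetical daily N2O fluxes and a toy one-parameter "model": flux = p * driver
observed = [2.0, 4.0, 6.0]
driver = [1.0, 2.0, 3.0]
weights = [1.0, 1.0, 1.0]

def model(p):
    return [p * d for d in driver]

# Crude 1-D parameter sweep standing in for the regularized inversion
best_p = min((p / 100 for p in range(0, 401)),
             key=lambda p: weighted_ssr(observed, model(p), weights))
print(best_p)  # 2.0: the parameter value minimizing the objective
```

Reporting the percentage reduction of this objective between default and optimized parameter sets is exactly the "up to 67 %" comparison quoted above.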

  6. ALMA software architecture

    NASA Astrophysics Data System (ADS)

    Schwarz, Joseph; Raffi, Gianni

    2002-12-01

    The Atacama Large Millimeter Array (ALMA) is a joint project involving astronomical organizations in Europe and North America. ALMA will consist of at least 64 12-meter antennas operating in the millimeter and sub-millimeter range. It will be located at an altitude of about 5000m in the Chilean Atacama desert. The primary challenge to the development of the software architecture is the fact that both its development and runtime environments will be distributed. Groups at different institutes will develop the key elements such as Proposal Preparation tools, Instrument operation, On-line calibration and reduction, and Archiving. The Proposal Preparation software will be used primarily at scientists' home institutions (or on their laptops), while Instrument Operations will execute on a set of networked computers at the ALMA Operations Support Facility. The ALMA Science Archive, itself to be replicated at several sites, will serve astronomers worldwide. Building upon the existing ALMA Common Software (ACS), the system architects will prepare a robust framework that will use XML-encoded entity objects to provide an effective solution to the persistence needs of this system, while remaining largely independent of any underlying DBMS technology. Independence of distributed subsystems will be facilitated by an XML- and CORBA-based pass-by-value mechanism for exchange of objects. Proof of concept (as well as a guide to subsystem developers) will come from a prototype whose details will be presented.

  7. Imaging hadron calorimetry for future Lepton Colliders

    NASA Astrophysics Data System (ADS)

    Repond, José

    2013-12-01

    To fully exploit the physics potential of a future Lepton Collider requires detectors with unprecedented jet-energy and dijet-mass resolution. To meet these challenges, detectors optimized for the application of Particle Flow Algorithms (PFAs) are being designed and developed. The application of PFAs, in turn, requires calorimeters with very fine segmentation of the readout, so-called imaging calorimeters. This talk reviews progress in imaging hadron calorimetry as it is being developed for implementation in a detector at a future Lepton Collider. Recent results from the large prototypes built by the CALICE Collaboration, such as the Scintillator Analog Hadron Calorimeter (AHCAL) and the Digital Hadron Calorimeters (DHCAL and SDHCAL), are presented. In addition, various R&D efforts beyond the present prototypes are discussed.

  8. STARS 2.0: 2nd-generation open-source archiving and query software

    NASA Astrophysics Data System (ADS)

    Winegar, Tom

    2008-07-01

    The Subaru Telescope is in the process of developing an open-source alternative to the 1st-generation software and databases (STARS 1) used for archiving and query. For STARS 2, we have chosen PHP and Python for scripting and MySQL as the database software. We have collected feedback from staff and observers, and used this feedback to significantly improve the design and functionality of our future archiving and query software. Archiving - We identified two weaknesses in the 1st-generation STARS archiving software: a complex and inflexible table structure, and uncoordinated system administration for our business model of taking pictures from the summit and archiving them in both Hawaii and Japan. We adopted a simplified and normalized table structure with passive keyword collection, and we are designing an archive-to-archive file transfer system that automatically reports real-time status and error conditions and permits error recovery. Query - We identified several weaknesses in the 1st-generation STARS query software: inflexible query tools, poor sharing of calibration data, and no automatic file transfer mechanisms for observers. We are developing improved query tools, better sharing of calibration data, and multi-protocol unassisted file transfer mechanisms for observers. In the process, we have redefined a 'query': from an invisible search result that can be transferred only once, in-house, immediately - with little status and error reporting and no error recovery - to a stored search result that can be monitored, transferred to different locations with multiple protocols, reporting status and error conditions and permitting recovery from errors.

  9. The mechanism of interactions between tea polyphenols and porcine pancreatic alpha‐amylase: Analysis by inhibition kinetics, fluorescence quenching, differential scanning calorimetry and isothermal titration calorimetry

    PubMed Central

    Sun, Lijun; Gidley, Michael J.

    2017-01-01

    Scope: This study aims to use a combination of biochemical and biophysical methods to derive greater mechanistic understanding of the interactions between tea polyphenols and porcine pancreatic α‐amylase (PPA). Methods and results: The interaction mechanism was studied through fluorescence quenching (FQ), differential scanning calorimetry (DSC) and isothermal titration calorimetry (ITC) and compared with inhibition kinetics. The results showed that a higher quenching effect of polyphenols corresponded to a stronger inhibitory activity against PPA. The red‐shift of the maximum emission wavelength of PPA bound with some polyphenols indicated a potential structural unfolding of PPA. This was also suggested by the decreased thermostability of PPA with these polyphenols in DSC thermograms. Through thermodynamic binding analysis of ITC and inhibition kinetics, the equilibrium of competitive inhibition was shown to result from the binding of particularly galloylated polyphenols with specific sites on PPA. There were positive linear correlations between the reciprocal of the competitive inhibition constant (1/Kic), the quenching constant (KFQ) and the binding constant (Kitc). Conclusion: The combination of inhibition kinetics, FQ, DSC and ITC can reasonably characterize the interactions between tea polyphenols and PPA. The galloyl moiety is an important group in catechins and theaflavins in terms of binding with and inhibiting the activity of PPA. PMID:28618113
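The quenching constant referred to above is conventionally obtained from a Stern-Volmer plot, F0/F = 1 + KFQ·[Q]. A minimal sketch with synthetic data; the constant and concentrations are illustrative, not values from the study:

```python
def stern_volmer_constant(concentrations, f0, f_values):
    """Least-squares slope of (F0/F - 1) versus quencher concentration [Q],
    for a line forced through the origin: K = sum(x*y) / sum(x*x)."""
    ys = [f0 / f - 1.0 for f in f_values]
    return (sum(x * y for x, y in zip(concentrations, ys))
            / sum(x * x for x in concentrations))

# Synthetic quenching data generated with K = 5000 M^-1 (illustrative only)
K_true = 5000.0
conc = [1e-5, 2e-5, 4e-5, 8e-5]
f0 = 100.0
f = [f0 / (1.0 + K_true * c) for c in conc]
print(stern_volmer_constant(conc, f0, f))  # recovers ~5000
```

Fitting KFQ this way for each polyphenol is what allows the linear correlation against 1/Kic and Kitc described in the abstract.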

  10. Benchmark testing of DIII-D neutral beam modeling with water flow calorimetry

    DOE PAGES

    Rauch, J. M.; Crowley, B. J.; Scoville, J. T.; ...

    2016-06-02

    Power loading on beamline components in the DIII-D neutral beam system is measured in this paper using water flow calorimetry. The results are used to benchmark beam transport models. Finally, anomalously high heat loads in the magnet region are investigated and a speculative hypothesis as to their origin is presented.
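Water flow calorimetry recovers the absorbed power from the coolant mass flow and temperature rise, P = ṁ·cp·ΔT. A minimal sketch with illustrative numbers (not DIII-D data):

```python
def beam_power_watts(flow_lpm, t_in_c, t_out_c):
    """Steady-state water flow calorimetry: P = m_dot * c_p * dT."""
    rho = 998.0    # kg/m^3, water near room temperature
    c_p = 4186.0   # J/(kg K), specific heat of water
    m_dot = flow_lpm / 1000.0 / 60.0 * rho  # L/min -> kg/s
    return m_dot * c_p * (t_out_c - t_in_c)

# Illustrative: 60 L/min of cooling water with a 10 K rise
print(round(beam_power_watts(60.0, 20.0, 30.0)))  # 41776 W, i.e. ~42 kW
```

Summing such measurements over all cooled beamline components gives the deposited-power map that the beam transport model is benchmarked against.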

  11. Medical color displays and their color calibration: investigations of various calibration methods, tools, and potential improvement in color difference ΔE

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Hashmi, Syed F.; Dallas, William J.; Krupinski, Elizabeth A.; Rehm, Kelly; Fan, Jiahua

    2010-08-01

    Our laboratory has investigated the efficacy of a suite of color calibration and monitor profiling packages which employ a variety of color measurement sensors. Each of the methods computes gamma correction tables for the red, green and blue color channels of a monitor that attempt to: a) match a desired luminance range and tone reproduction curve; and b) maintain a target neutral point across the range of grey values. All of the methods examined here produce International Color Consortium (ICC) profiles that describe the color rendering capabilities of the monitor after calibration. Color profiles incorporate a transfer matrix that establishes the relationship between RGB driving levels and the International Commission on Illumination (CIE) XYZ (tristimulus) values of the resulting on-screen color; the matrix is developed by displaying color patches of known RGB values on the monitor and measuring the tristimulus values with a sensor. The number and chromatic distribution of color patches varies across methods and is usually not under user control. In this work we examine the effect of employing differing calibration and profiling methods on the rendition of color images. A series of color patches encoded in the sRGB color space were presented on the monitor using color-management software that utilized the ICC profile produced by each method. The patches were displayed on the calibrated monitor and measured with a Minolta CS200 colorimeter. Differences in intended and achieved luminance and chromaticity were computed using the CIE DE2000 color-difference metric, in which a value of ΔE = 1 is generally considered to be approximately one just noticeable difference (JND) in color. We observed between one and 17 JNDs for individual colors, depending on calibration method and target. As an extension of this fundamental work [1], we further improved our calibration method by defining concrete calibration parameters for the display, using the NEC wide gamut puck, and making sure
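The transfer-matrix step described above can be sketched as follows, using the nominal sRGB-to-XYZ matrix for a D65 white point; a real profiling package would instead measure a monitor-specific matrix with the sensor:

```python
def srgb_to_xyz(r8, g8, b8):
    """Apply the sRGB transfer curve to 8-bit driving levels, then the
    standard 3x3 matrix mapping linear RGB to CIE XYZ (D65 white)."""
    def linearize(c8):
        c = c8 / 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    r, g, b = (linearize(v) for v in (r8, g8, b8))
    M = [(0.4124, 0.3576, 0.1805),   # X row
         (0.2126, 0.7152, 0.0722),   # Y row (luminance weights)
         (0.0193, 0.1192, 0.9505)]   # Z row
    return tuple(m[0] * r + m[1] * g + m[2] * b for m in M)

x, y, z = srgb_to_xyz(255, 255, 255)
print(y)  # ~1.0: relative luminance of the sRGB white point
```

Comparing the XYZ predicted by the profile's matrix with the colorimeter reading for the same patch is what the ΔE figures above quantify.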

  12. Kinect Fusion improvement using depth camera calibration

    NASA Astrophysics Data System (ADS)

    Pagliari, D.; Menna, F.; Roncella, R.; Remondino, F.; Pinto, L.

    2014-06-01

    Scene 3D modelling, gesture recognition and motion tracking are fields in rapid and continuous development which have driven growing demand for interactivity in the video-game and e-entertainment market. Starting from the idea of creating a sensor that allows users to play without having to hold any remote controller, the Microsoft Kinect device was created. The Kinect has always attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions for the Kinect in order to use it not only as a game device but as a measurement system. The Microsoft Kinect Fusion control libraries (first released in March 2013) allow using the device as a 3D scanner, producing meshed polygonal models of a static scene simply by moving the Kinect around it. A drawback of this sensor is the geometric quality of the delivered data and its low repeatability. For this reason the authors carried out an investigation to evaluate the accuracy and repeatability of the depth measurements delivered by the Kinect. The paper presents a thorough calibration analysis of the Kinect imaging sensor, with the aim of establishing the accuracy and precision of the delivered information: a straightforward calibration of the depth sensor is presented, and the 3D data are then corrected accordingly. By integrating the depth correction algorithm and correcting the IR camera interior and exterior orientation parameters, the Fusion libraries are corrected and new reconstruction software is created to produce more accurate models.

  13. Calibration Procedures on Oblique Camera Setups

    NASA Astrophysics Data System (ADS)

    Kemper, G.; Melykuti, B.; Yu, C.

    2016-06-01

    Besides the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples from the calibration flight together with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single-camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and is equipped with a 50 mm lens, while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms; these had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots of all 5 cameras and the registered GPS/IMU data. This specific mission was designed with two different altitudes and additional cross lines at each flying height. The five images from each exposure position have no overlap, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfying number for the camera calibration.
In a first step with the help of

  14. BASKET on-board software library

    NASA Astrophysics Data System (ADS)

    Luntzer, Armin; Ottensamer, Roland; Kerschbaum, Franz

    2014-07-01

    The University of Vienna is a provider of on-board data processing software with a focus on data compression, as used on board the highly successful Herschel/PACS instrument as well as in the small BRITE-Constellation fleet of cube-sats. Current contributions are made to CHEOPS, SAFARI and PLATO. The effort was taken to review the various functions developed for Herschel and provide a consolidated software library to facilitate the work for future missions. This library is a shopping basket of algorithms. Its contents are separated into four classes: auxiliary functions (e.g. circular buffers), preprocessing functions (e.g. for calibration), lossless data compression (arithmetic or Rice coding) and lossy reduction steps (ramp fitting etc.). The "BASKET" has all the functionality needed to create an on-board data processing chain. All sources are written in C, supplemented by optimized versions in assembly targeting popular CPU architectures for space applications. BASKET is open source and constantly growing.
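As an illustration of the lossless class, Rice coding splits each sample into a unary quotient and a k-bit binary remainder, which is efficient for the small residuals typical of prediction-based compression. BASKET's coder is written in C; this is a minimal Python sketch of the same idea:

```python
def rice_encode(values, k):
    """Rice coding: each non-negative value n is emitted as a unary
    quotient (n >> k) followed by its k-bit binary remainder."""
    bits = []
    for n in values:
        q, r = n >> k, n & ((1 << k) - 1)
        bits.append("1" * q + "0")        # unary quotient, 0-terminated
        bits.append(format(r, f"0{k}b"))  # fixed k-bit remainder
    return "".join(bits)

# 3 -> '0' + '11'; 18 -> '11110' + '10'
print(rice_encode([3, 18], k=2))  # '0111111010'
```

The parameter k trades quotient length against remainder length and is typically chosen per block from the data statistics.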

  15. Determination of Heats of Fusion: Using Differential Scanning Calorimetry for the AP Chemistry Courses.

    ERIC Educational Resources Information Center

    Temme, Susan M.

    1995-01-01

    Describes an exercise designed to be used in an Advanced Placement (AP) chemistry course to accompany the study of thermodynamics. Uses Differential Scanning Calorimetry in teaching the concepts of thermochemistry and thermodynamics. (JRH)

  16. Absolute calibration for complex-geometry biomedical diffuse optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Mastanduno, Michael A.; Jiang, Shudong; El-Ghussein, Fadi; diFlorio-Alexander, Roberta; Pogue, Brian W.; Paulsen, Keith D.

    2013-03-01

    We have presented methodology to calibrate data in NIRS/MRI imaging versus an absolute reference phantom and results in both phantoms and healthy volunteers. This method directly calibrates data to a diffusion-based model, takes advantage of patient specific geometry from MRI prior information, and generates an initial guess without the need for a large data set. This method of calibration allows for more accurate quantification of total hemoglobin, oxygen saturation, water content, scattering, and lipid concentration as compared with other, slope-based methods. We found the main source of error in the method to be derived from incorrect assignment of reference phantom optical properties rather than initial guess in reconstruction. We also present examples of phantom and breast images from a combined frequency domain and continuous wave MRI-coupled NIRS system. We were able to recover phantom data within 10% of expected contrast and within 10% of the actual value using this method and compare these results with slope-based calibration methods. Finally, we were able to use this technique to calibrate and reconstruct images from healthy volunteers. Representative images are shown and discussion is provided for comparison with existing literature. These methods work towards fully combining the synergistic attributes of MRI and NIRS for in-vivo imaging of breast cancer. Complete software and hardware integration in dual modality instruments is especially important due to the complexity of the technology and success will contribute to complex anatomical and molecular prognostic information that can be readily obtained in clinical use.

  17. Calibration of the computer model describing flows in the water supply system; example of the application of a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Orłowska-Szostak, Maria; Orłowski, Ryszard

    2017-11-01

    The paper discusses some relevant aspects of the calibration of a computer model describing flows in a water supply system. The authors describe an exemplary water supply system and use it as a practical illustration of calibration. A range of measures is discussed and applied that improve the convergence and effective use of calculations in the calibration process, and thereby the validity of the results obtained. The estimation of pipe roughness from the performed measurements was carried out using a genetic algorithm, implemented in software developed by the Resan Labs company of Brazil.
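A genetic-algorithm calibration of this kind can be sketched as follows; the one-parameter "hydraulic solver" and all numbers are illustrative stand-ins, not the Resan Labs implementation:

```python
import random

def simulated_heads(roughness, demands):
    """Toy stand-in for a hydraulic solver: nodal head drops with
    roughness and the square of demand (Darcy-Weisbach-like)."""
    return [50.0 - roughness * q * q for q in demands]

def calibrate_roughness(measured, demands, generations=60, pop_size=30):
    """Minimal elitist GA: keep the best half, averaging crossover,
    Gaussian mutation; fitness is the squared head mismatch."""
    random.seed(1)  # deterministic for this sketch
    def fitness(r):
        return sum((m - s) ** 2
                   for m, s in zip(measured, simulated_heads(r, demands)))
    pop = [random.uniform(0.0, 5.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]
        children = [(random.choice(parents) + random.choice(parents)) / 2.0
                    + random.gauss(0.0, 0.05)
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=fitness)

demands = [1.0, 2.0, 3.0]
measured = simulated_heads(1.8, demands)  # synthetic "field" observations
print(round(calibrate_roughness(measured, demands), 2))  # close to 1.8
```

In practice the chromosome would hold one roughness value per pipe group and each fitness evaluation would run a full network solver, which is why convergence-improving measures matter.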

  18. Effect of dissolved oxygen level of water on ultrasonic power measured using calorimetry

    NASA Astrophysics Data System (ADS)

    Uchida, Takeyoshi; Yoshioka, Masahiro; Horiuchi, Ryuzo

    2018-07-01

    Ultrasonic therapeutic equipment, which exposes the human body to high-power ultrasound, is used in clinical practice to treat cancer. However, the safety of high-power ultrasound has been questioned because the equipment affects not only cancer cells but also normal cells. To evaluate the safety of ultrasound, it is necessary to accurately measure the ultrasonic power of the equipment. This is because ultrasonic power is a key quantity related to the thermal hazard of ultrasound. However, precise techniques for measuring ultrasonic power in excess of 15 W are yet to be established. We have been studying calorimetry as a precise measurement technique. In this study, we investigated the effect of the dissolved oxygen (DO) level of water on ultrasonic power by calorimetry. The results show that the measured ultrasonic power differed significantly between water samples of different DO levels. This difference in ultrasonic power arose from acoustic cavitation.

  19. Calibration of a complex activated sludge model for the full-scale wastewater treatment plant.

    PubMed

    Liwarska-Bizukojc, Ewa; Olejnik, Dorota; Biernacki, Rafal; Ledakowicz, Stanislaw

    2011-08-01

    In this study, the results of the calibration of the complex activated sludge model implemented in BioWin software for a full-scale wastewater treatment plant are presented. Within the calibration of the model, sensitivity analysis of its parameters and of the fractions of carbonaceous substrate was performed. In the steady-state and dynamic calibrations, a successful agreement between the measured and simulated values of the output variables was achieved. Sensitivity analysis based on calculation of the normalized sensitivity coefficient (S(i,j)) revealed that 17 (steady state) or 19 (dynamic conditions) kinetic and stoichiometric parameters are sensitive. Most of them are associated with growth and decay of ordinary heterotrophic organisms and phosphorus-accumulating organisms. The rankings of the ten most sensitive parameters, established on the basis of calculations of the mean square sensitivity measure (δ(msqr)j), indicate that irrespective of whether the steady-state or dynamic calibration was performed, there is agreement in the sensitivity of parameters.
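The normalized sensitivity coefficient scales a model derivative by the parameter and output magnitudes, S(i,j) = (∂y_i/∂θ_j)·θ_j/y_i, so that parameters with different units become comparable. A minimal finite-difference sketch with a toy Monod-type output (not the BioWin model):

```python
def normalized_sensitivity(model, theta, j, y_index, rel_step=0.01):
    """Central-difference estimate of S_ij = (dy_i/dtheta_j) * theta_j / y_i."""
    h = theta[j] * rel_step
    up, down = list(theta), list(theta)
    up[j] += h
    down[j] -= h
    dy = (model(up)[y_index] - model(down)[y_index]) / (2.0 * h)
    return dy * theta[j] / model(theta)[y_index]

# Toy Monod-type growth output: y = mu_max * S / (K_s + S), S fixed at 10
def monod(theta):
    mu_max, K_s = theta
    S = 10.0
    return [mu_max * S / (K_s + S)]

print(normalized_sensitivity(monod, [6.0, 2.0], 0, 0))  # ~1.0: y linear in mu_max
```

Ranking parameters by such coefficients (or by a mean square aggregate over outputs) is what selects the 17 or 19 sensitive parameters reported above.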

  20. Characterizing Optical Loss in Orientation Patterned III-V Materials using Laser Calorimetry

    DTIC Science & Technology

    2014-03-27

    nm and solid state fiber lasers. A comparison of the important properties of commonly used frequency conversion materials is shown in Table 1 [9]. ... templates at AFRL. ... Laser Calorimetry Experiment: A THOR Labs ITC 4001 laser diode with a 1625 nm, 50 mW fiber pigtail was used as the source

  1. Waveguide Calibrator for Multi-Element Probe Calibration

    NASA Technical Reports Server (NTRS)

    Sommerfeldt, Scott D.; Blotter, Jonathan D.

    2007-01-01

    A calibrator, referred to as the spider design, can be used to calibrate probes incorporating multiple acoustic sensing elements. The application is an acoustic energy density probe, although the calibrator can be used for other types of acoustic probes. The calibrator relies on acoustic waveguide technology to produce the same acoustic field at each of the sensing elements. As a result, the sensing elements can be separated from each other but still calibrated through use of the acoustic waveguides. Standard calibration techniques involve placement of an individual microphone into a small cavity with a known, uniform pressure to perform the calibration. If a cavity is manufactured with sufficient size to insert the energy density probe, it has been found that a uniform pressure field can be created only at very low frequencies: because of the probe's size, wave effects prevent the pressure from being the same at each microphone in the cavity. The spider design is effective in calibrating multiple microphones separated from each other because it ensures that the same wave effects exist for each microphone, each with an individual sound path. The calibrator's speaker is mounted at one end of a small plane-wave tube 14 cm long and 4.1 cm in diameter. This length was chosen so that the first evanescent cross mode of the plane-wave tube would be attenuated by about 90 dB, thus leaving just the plane wave at the termination plane of the tube. The tube terminates in a small acrylic plate with five holes placed symmetrically about the axis of the speaker. Four ports are included for the four microphones on the probe; the fifth port is for the pre-calibrated reference microphone. The ports in the acrylic plate are in turn connected to the probe sensing elements via flexible PVC tubes. These five tubes are the same length, so the acoustic wave effects are the same in each tube. The
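The 90 dB figure for the first cross mode can be checked from the duct dimensions: below cutoff the mode decays as exp(-z·sqrt(k_c² - k²)), with k_c = 1.8412/a for a circular duct of radius a. A sketch, assuming a usable band up to about 3 kHz (the operating band is an assumption, not stated in the record):

```python
import math

def evanescent_attenuation_db(freq_hz, diameter_m, length_m, c=343.0):
    """dB attenuation of the first higher-order (1,0) mode of a circular
    duct over a given length, below its cutoff k_c = 1.8412 / a."""
    k_c = 1.8412 / (diameter_m / 2.0)
    k = 2.0 * math.pi * freq_hz / c
    if k >= k_c:
        return 0.0  # mode propagates; no evanescent decay
    return 8.686 * math.sqrt(k_c * k_c - k * k) * length_m  # nepers -> dB

# Tube from the text: 14 cm long, 4.1 cm diameter
print(evanescent_attenuation_db(3000.0, 0.041, 0.14))  # ~86 dB at 3 kHz
```

So at audio frequencies well below the roughly 4.9 kHz cutoff of this tube, only the plane wave survives at the termination plane, consistent with the design intent described above.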

  2. Absolute calorimetric calibration of low energy brachytherapy sources

    NASA Astrophysics Data System (ADS)

    Stump, Kurt E.

    In the past decade there has been a dramatic increase in the use of permanent radioactive source implants in the treatment of prostate cancer. A small radioactive source encapsulated in a titanium shell is used in this type of treatment. The radioisotopes used are generally 125I or 103Pd. Both of these isotopes have relatively short half-lives, 59.4 days and 16.99 days, respectively, and have low-energy emissions and a low dose rate. These factors make these sources well suited for this application, but the calibration of these sources poses significant metrological challenges. The current standard calibration technique involves the measurement of ionization in air to determine the source air-kerma strength. While this has proved to be an improvement over previous techniques, the method has been shown to be metrologically impure and may not be the ideal means of calibrating these sources. Calorimetric methods have long been viewed as the most fundamental means of determining source strength for a radiation source, because calorimetry provides a direct measurement of source energy. However, due to the low energy and low power of the sources described above, current calorimetric methods are inadequate. This thesis presents work oriented toward developing novel methods to provide direct and absolute measurements of source power for low-energy, low-dose-rate brachytherapy sources. The method is the first use of an actively temperature-controlled radiation absorber using the electrical substitution method to determine the total contained source power of these sources. The instrument described operates at cryogenic temperatures. The method employed provides a direct measurement of source power. The work presented here is focused upon building a metrological foundation upon which to establish power-based calibrations of clinical-strength sources. To that end, instrument performance has been assessed for these source strengths.
The intent is to establish the limits of

  3. Mathematical model of cycad cones' thermogenic temperature responses: inverse calorimetry to estimate metabolic heating rates.

    PubMed

    Roemer, R B; Booth, D; Bhavsar, A A; Walter, G H; Terry, L I

    2012-12-21

    A mathematical model based on conservation of energy has been developed and used to simulate the temperature responses of cones of the Australian cycads Macrozamia lucida and Macrozamia macleayi during their daily thermogenic cycle. These cones generate diel midday thermogenic temperature increases as large as 12 °C above ambient during their approximately two-week pollination period. The cone temperature response model is shown to accurately predict the cones' temperatures over multiple days, based on simulations of experimental results from 28 thermogenic events from 3 different cones, each simulated for either 9 or 10 sequential days. The verified model is then used as the foundation of a new, parameter-estimation-based technique (termed inverse calorimetry) that estimates the cones' daily metabolic heating rates from temperature measurements alone. The inverse calorimetry technique's predictions of the major features of the cones' thermogenic metabolism compare favorably with the estimates from conventional respirometry (indirect calorimetry). Because the new technique uses only temperature measurements, and does not require measurements of oxygen consumption, it provides a simple, inexpensive and portable complement to conventional respirometry for estimating metabolic heating rates. It thus provides an additional tool to facilitate field and laboratory investigations of the bio-physics of thermogenic plants. Copyright © 2012 Elsevier Ltd. All rights reserved.
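The inverse-calorimetry idea can be sketched with a lumped energy balance, C·dT/dt = Q(t) - hA·(T - T_amb), inverted to recover the heating rate Q(t) from temperature alone. The model, coefficients and synthetic data below are illustrative stand-ins, not the published cone model:

```python
def metabolic_heating(temps, t_amb, dt, heat_capacity, loss_coeff):
    """Invert the lumped balance C*dT/dt = Q - hA*(T - T_amb) to
    recover Q at each interior sample, using central differences."""
    q = []
    for i in range(1, len(temps) - 1):
        dT_dt = (temps[i + 1] - temps[i - 1]) / (2.0 * dt)
        q.append(heat_capacity * dT_dt + loss_coeff * (temps[i] - t_amb[i]))
    return q

# Synthetic forward run: a constant 0.5 W burst, C = 200 J/K, hA = 0.05 W/K
C, hA, dt = 200.0, 0.05, 60.0
T, temps, amb = 25.0, [25.0], [25.0] * 121
for step in range(120):
    Q = 0.5 if 30 <= step < 90 else 0.0          # midday thermogenic burst
    T += dt / C * (Q - hA * (T - 25.0))          # forward Euler step
    temps.append(T)
print(max(metabolic_heating(temps, amb, dt, C, hA)))  # recovers ~0.5 W
```

In the published technique the heat-exchange terms are estimated by fitting the verified forward model to measured cone and ambient temperatures, rather than assumed as here.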

  4. Software to Control and Monitor Gas Streams

    NASA Technical Reports Server (NTRS)

    Arkin, C.; Curley, Charles; Gore, Eric; Floyd, David; Lucas, Damion

    2012-01-01

    This software package interfaces with various gas stream devices such as pressure transducers, flow meters, flow controllers, valves, and analyzers such as a mass spectrometer. The software provides excellent user interfacing with various windows that provide time-domain graphs, valve state buttons, priority-colored messages, and warning icons. The user can configure the software to save as much or as little data as needed to a comma-delimited file. The software also includes an intuitive scripting language for automated processing. The configuration allows for the assignment of measured values or calibration so that raw signals can be viewed as usable pressures, flows, or concentrations in real time. The software is based on those used in two safety systems for shuttle processing and one volcanic gas analysis system. Mass analyzers typically have very unique applications and vary from job to job. As such, software available on the market is usually inadequate or targeted at a specific application (such as EPA methods). The goal was to develop powerful software that could be used with prototype systems. The key problem was to generalize the software to be easily and quickly reconfigurable. At Kennedy Space Center (KSC), the prior art consists of two primary methods. The first method was to utilize LabVIEW and a commercial data acquisition system. This method required rewriting code for each different application and only provided raw data. To obtain data in engineering units, manual calculations were required. The second method was to utilize one of the embedded computer systems developed for another system. This second method had the benefit of providing data in engineering units, but was limited in the number of control parameters.
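The raw-signal-to-engineering-units step described above is a stored linear calibration; a minimal sketch with a hypothetical pressure transducer (the gain and span values are illustrative only):

```python
def to_engineering_units(raw, cal):
    """Linear calibration: map a raw signal (e.g. volts) to
    engineering units using a stored gain and offset."""
    return raw * cal["gain"] + cal["offset"]

# Hypothetical pressure transducer: 0-5 V maps to 0-100 psi
cal = {"gain": 20.0, "offset": 0.0}
print(to_engineering_units(2.5, cal))  # 50.0 psi
```

Keeping the gain/offset pairs in the configuration, rather than in code, is what lets such a package be reconfigured for a new sensor without rewriting the acquisition software.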

  5. Calibrated Blade-Element/Momentum Theory Aerodynamic Model of the MARIN Stock Wind Turbine: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goupee, A.; Kimball, R.; de Ridder, E. J.

    2015-04-02

    In this paper, a calibrated blade-element/momentum theory aerodynamic model of the MARIN stock wind turbine is developed and documented. The model is created using open-source software and calibrated to closely emulate experimental data obtained by the DeepCwind Consortium using a genetic algorithm optimization routine. The provided model will be useful for those interested in validating floating wind turbine numerical simulators that rely on experiments utilizing the MARIN stock wind turbine, for example the International Energy Agency Wind Task 30's Offshore Code Comparison Collaboration Continued, with Correlation project.
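    The calibration approach described above can be illustrated with a toy sketch: a minimal genetic algorithm tunes a single model coefficient until the model output matches "experimental" data. All values and the one-parameter model are hypothetical stand-ins; this is not the paper's blade-element/momentum model or its DeepCwind dataset.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the experimental data and the aerodynamic model
wind = np.linspace(5, 15, 30)            # wind speeds, m/s
experimental = 0.8 * wind ** 2           # "measured" response, arbitrary units

def model(c):
    # one-parameter toy model; c plays the role of a calibration coefficient
    return c * wind ** 2

def fitness(c):
    # negative sum of squared errors: higher is better
    return -np.sum((model(c) - experimental) ** 2)

# Minimal genetic algorithm: elitist selection plus Gaussian mutation
pop = rng.uniform(0.1, 2.0, 40)
for _ in range(60):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the 10 fittest
    children = rng.choice(parents, 30) + rng.normal(0, 0.05, 30)
    pop = np.concatenate([parents, children])

best = pop[np.argmax([fitness(c) for c in pop])]     # converges near 0.8
```

    A real calibration would replace the toy model with a full simulator run and would typically add crossover and multiple parameters, but the selection-mutation loop is the core of the genetic algorithm the abstract mentions.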

  6. On-ground calibration of the BEPICOLOMBO/SIMBIO-SYS at instrument level

    NASA Astrophysics Data System (ADS)

    Rodriguez-Ferreira, J.; Poulet, F.; Eng, P.; Longval, Y.; Dassas, K.; Arondel, A.; Langevin, Y.; Capaccioni, F.; Filacchione, G.; Palumbo, P.; Cremonese, G.; Dami, M.

    2012-04-01

    The Mercury Planetary Orbiter (MPO) of BepiColombo carries an integrated suite of instruments, the Spectrometer and Imagers for MPO BepiColombo-Integrated Observatory SYStem (SIMBIO-SYS). SIMBIO-SYS has 3 channels: a stereo imaging system (STC), a high-resolution imager (HRIC) and a visible-near-infrared imaging spectrometer (VIHI). SIMBIO-SYS will scan the surface of Mercury with these three channels and determine the physical, morphological and compositional properties of the entire planet. Before integration on the S/C, on-ground calibration at the channel and instrument levels will be performed so as to describe the instrumental responses as a function of various parameters that might evolve while the instruments are operating [1]. The Institut d'Astrophysique Spatiale (IAS) is responsible for the on-ground calibration at the instrument level. During the 4-week calibration campaign planned for June 2012, the instrument will be maintained in a mechanical and thermal environment simulating space conditions. Four optical stimuli (QTH lamp, integrating sphere, blackbody with variable temperature from 50 to 1200°C, and monochromator) are placed on an optical bench to illuminate the four channels so as to perform the radiometric calibration and straylight monitoring, as well as spectral proofing based on laboratory mineral samples. The instrument will be mounted on a hexapod placed inside a thermal vacuum chamber during the calibration campaign. The hexapod will move the channels within the well-characterized incoming beam. We will present the key activities of the preparation of this calibration: the derivation of the instrument radiometric model, the implementation of the optical, mechanical and software interfaces of the calibration assembly, the characterization of the optical bench and the definition of the calibration procedures.

  7. Data processing and in-flight calibration systems for OMI-EOS-Aura

    NASA Astrophysics Data System (ADS)

    van den Oord, G. H. J.; Dobber, M.; van de Vegte, J.; van der Neut, I.; Som de Cerff, W.; Rozemeijer, N. C.; Schenkelaars, V.; ter Linden, M.

    2006-08-01

    The OMI instrument that flies on the EOS Aura mission was launched in July 2004. OMI is a UV-VIS imaging spectrometer that measures in the 270 - 500 nm wavelength range. OMI provides daily global coverage with high spatial resolution. Every 100-minute orbit, OMI generates about 0.5 GB of Level 0 data and 1.2 GB of Level 1 data. About half of the Level 1 data consists of in-flight calibration measurements. These data rates make it necessary to automate the process of in-flight calibration. For that purpose two facilities have been developed at KNMI in the Netherlands: the OMI Dutch Processing System (ODPS) and the Trend Monitoring and In-flight Calibration Facility (TMCF). A description of these systems is provided with emphasis on the use for radiometric, spectral and detector calibration and characterization. With the advance of detector technology and the need for higher spatial resolution, data rates will become even higher for future missions. To make effective use of automated systems like the TMCF, it is of paramount importance to integrate the instrument operations concept, the information contained in the Level 1 (meta-)data products and the in-flight calibration software and system databases. In this way a robust but also flexible end-to-end system can be developed that serves the needs of the calibration staff, the scientific data users and the processing staff. The way this has been implemented for OMI may serve as an example of a cost-effective and user-friendly solution for future missions. The basic system requirements for in-flight calibration are discussed and examples are given of how these requirements have been implemented for OMI. Special attention is paid to the aspect of supporting the Level 0 - 1 processing with timely and accurate calibration constants.
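    The per-orbit figures quoted above translate into a substantial daily volume; a quick back-of-the-envelope check using only the numbers in the abstract:

```python
# Daily data volume implied by the abstract's figures:
# a 100-minute orbit producing 1.2 GB of Level 1 data.
orbit_minutes = 100
orbits_per_day = 24 * 60 / orbit_minutes    # 14.4 orbits per day
l1_gb_per_day = orbits_per_day * 1.2        # about 17.3 GB of Level 1 data per day
calib_gb_per_day = l1_gb_per_day / 2        # roughly half is calibration data
print(round(l1_gb_per_day, 2), round(calib_gb_per_day, 2))
```

    Roughly 8-9 GB of calibration measurements per day is what makes manual trend monitoring impractical and motivates automated facilities like the TMCF.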

  8. Calorimetry exchange program amendment to 3rd quarter CY92 report LLNL isotopic data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, T.M.

    1996-08-01

    This report is a series of amendments to the Calorimetry Exchange Quarterly Data Report for the third quarter of CY1992. The amendments are needed due to reporting errors encountered in the Lawrence Livermore National Laboratory isotopic data.

  9. Particle Flow Calorimetry for the ILC

    NASA Astrophysics Data System (ADS)

    Magill, Stephen

    2006-04-01

    The Particle Flow approach to detector design is seen as the best way to achieve dijet mass resolutions suitable for the precision measurements anticipated at a future e^+e^- Linear Collider (LC). Particle Flow Algorithms (PFAs) affect not only the way data is analyzed, but are necessary and crucial elements used even in initial stages of detector design. In particular, the Calorimeter design parameters are almost entirely dependent on the optimized performance of the PFA. Use of PFAs imposes constraints on the granularity and segmentation of the readout cells, the choices of absorber and active media, and overall detector parameters such as the strength of the B-field, magnet bore, hermeticity, etc. PFAs must be flexible and modular in order to evaluate many detector models in simulation. The influence of PFA development on calorimetry is presented here with particular emphasis on results from the use of PFAs on several LC detector models.

  10. The ATLAS Inner Detector commissioning and calibration

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2010-08-20

    The ATLAS Inner Detector is a composite tracking system consisting of silicon pixels, silicon strips and straw tubes in a 2 T magnetic field. Its installation was completed in August 2008 and the detector took part in data-taking with single LHC beams and cosmic rays. The initial detector operation, hardware commissioning and in-situ calibrations are described. Tracking performance has been measured with 7.6 million cosmic-ray events, collected using a tracking trigger and reconstructed with modular pattern-recognition and fitting software. The intrinsic hit efficiency and tracking trigger efficiencies are close to 100%. Lorentz angle measurements for both electrons and holes, specific energy-loss calibration and transition radiation turn-on measurements have been performed. Different alignment techniques have been used to reconstruct the detector geometry. After the initial alignment, a transverse impact parameter resolution of 22.1 ± 0.9 μm and a relative momentum resolution σp/p = (4.83 ± 0.16)×10⁻⁴ GeV⁻¹ × pT have been measured for high-momentum tracks.

  11. Analog VS Digital Hadron Calorimetry at a Future Electron-Positron Linear Collider

    NASA Astrophysics Data System (ADS)

    Magill, Stephen R.

    2005-02-01

    Precision jet measurements at a future e+e- linear collider may only be possible using so-called Particle Flow Algorithms (PFAs). While there are many possible implementations of P-flow techniques, they all have in common separation of induced calorimeter showers from charged and neutral hadrons (as well as photons) within a jet. Shower reconstruction in the calorimeter becomes more important than energy measurement of hadrons. The calorimeter cells must be highly granular both transverse to the particle trajectory and in longitudinal segmentation. It is probable that as the cell size decreases, it will be harder to get an energy measure from each cell (analog calorimetry). Using only the hit information (digital calorimetry) may be the best way to measure the neutral hadron energy contribution to jets. In this paper, comparisons of analog and digital methods of measuring the contributions of neutral hadrons to jets are made in simulation and in the context of a particular PFA, indicating that the digital method is at least equal to the analog case in jet energy resolution.

  12. Analysis-Software for Hyperspectral Algal Reflectance Probes v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timlin, Jerilyn A.; Reichardt, Thomas A.; Jenson, Travis J.

    This software provides onsite analysis of the hyperspectral reflectance data acquired on an outdoor algal pond by a multichannel, fiber-coupled spectroradiometer. The analysis algorithm is based on numerical inversion of a reflectance model, in which the above-water reflectance is expressed as a function of the single backscattering albedo, which is dependent on the backscatter and absorption coefficients of the algal culture, which are in turn related to the algal biomass and pigment optical activity, respectively. Prior to the development of this software, while raw multichannel data were displayed in real time, analysis required a post-processing procedure to extract the relevant parameters. This software provides the capability to track the temporal variation of such culture parameters in real time, as raw data are being acquired, or can be run in a post-processing mode. The software allows the user to select between different algal species, incorporate the appropriate calibration data, and observe the quality of the resulting model inversions.

  13. Thermodynamics of Surfactants, Block Copolymers and Their Mixtures in Water: The Role of the Isothermal Calorimetry

    PubMed Central

    De Lisi, Rosario; Milioto, Stefania; Muratore, Nicola

    2009-01-01

    The thermodynamics of conventional surfactants, block copolymers and their mixtures in water was described in the light of the enthalpy function. The two methodologies used to determine the enthalpy of micellization of pure surfactants and block copolymers, i.e. the van't Hoff approach and isothermal calorimetry, were described. The van't Hoff method was critically discussed. The aqueous copolymer + surfactant mixtures were analyzed by means of isothermal titration calorimetry and the enthalpy of transfer of the copolymer from water to aqueous surfactant solutions. Thermodynamic models were presented to show the procedure for extracting straightforward molecular insights from the bulk properties. PMID:19742173

  14. The polyGeVero® software for fast and easy computation of 3D radiotherapy dosimetry data

    NASA Astrophysics Data System (ADS)

    Kozicki, Marek; Maras, Piotr

    2015-01-01

    The polyGeVero® software package was developed for calculations on 3D dosimetry data such as polymer gel dosimetry. It comprises four workspaces designed for: i) calculating calibrations, ii) storing calibrations in a database, iii) calculating 3D dose distribution cubes, iv) comparing two datasets, e.g. one measured with 3D dosimetry against one calculated with a treatment planning system. To accomplish these calculations the software is equipped with a number of tools, such as a brachytherapy isotope database, brachytherapy dose-versus-distance calculation based on the line approximation approach, automatic spatial alignment of two 3D dose cubes for comparison purposes, 3D gamma index, 3D gamma angle, 3D dose difference, Pearson's coefficient, histogram calculations, isodose superimposition for two datasets, and profile calculations in any desired direction. This communication briefly presents the main functions of the software and reports on the speed of calculations performed by polyGeVero®.
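    Among the comparison tools listed above, the gamma index is the standard dosimetry metric: each evaluated point is scored by the best combined dose-difference / distance-to-agreement match over the reference distribution, and points with gamma ≤ 1 pass. A minimal 1D sketch (polyGeVero® itself works on 3D cubes; the dose profile and tolerances here are hypothetical):

```python
import numpy as np

def gamma_index_1d(positions, ref_dose, eval_dose, dose_tol=0.03, dist_tol=3.0):
    """Global 1D gamma index: dose_tol is a fraction of the reference maximum,
    dist_tol is the distance-to-agreement criterion in the units of positions."""
    dd = dose_tol * ref_dose.max()
    gammas = np.empty(len(eval_dose))
    for i in range(len(eval_dose)):
        dist_term = ((positions - positions[i]) / dist_tol) ** 2
        dose_term = ((ref_dose - eval_dose[i]) / dd) ** 2
        gammas[i] = np.sqrt(np.min(dist_term + dose_term))
    return gammas

# Identical distributions pass trivially (gamma = 0 everywhere)
x = np.linspace(0, 50, 101)                # positions, mm
dose = np.exp(-((x - 25) / 10) ** 2)       # toy Gaussian dose profile
print(gamma_index_1d(x, dose, dose).max())  # 0.0
```

    The 3D version is the same minimization taken over voxels in a search neighborhood, which is why spatial pre-alignment of the two dose cubes (another tool the abstract lists) matters for the result.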

  15. Root zone water quality model (RZWQM2): Model use, calibration and validation

    USGS Publications Warehouse

    Ma, Liwang; Ahuja, Lajpat; Nolan, B.T.; Malone, Robert; Trout, Thomas; Qi, Z.

    2012-01-01

    The Root Zone Water Quality Model (RZWQM2) has been used widely for simulating agricultural management effects on crop production and soil and water quality. Although it is a one-dimensional model, it has many desirable features for the modeling community. This article outlines the principles of calibrating the model component by component with one or more datasets and validating the model with independent datasets. Users should consult the RZWQM2 user manual distributed along with the model and a more detailed protocol on how to calibrate RZWQM2 provided in a book chapter. Two case studies (or examples) are included in this article. One is from an irrigated maize study in Colorado to illustrate the use of field and laboratory measured soil hydraulic properties on simulated soil water and crop production. It also demonstrates the interaction between soil and plant parameters in simulated plant responses to water stresses. The other is from a maize-soybean rotation study in Iowa to show a manual calibration of the model for crop yield, soil water, and N leaching in tile-drained soils. Although the commonly used trial-and-error calibration method works well for experienced users, as shown in the second example, an automated calibration procedure is more objective, as shown in the first example. Furthermore, the incorporation of the Parameter Estimation Software (PEST) into RZWQM2 made the calibration of the model more efficient than a grid (ordered) search of model parameters. In addition, PEST provides sensitivity and uncertainty analyses that should help users in selecting the right parameters to calibrate.
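    The contrast drawn above between an ordered (grid) search and PEST's more efficient estimation can be sketched with a toy one-parameter model. Everything below is hypothetical: this is neither RZWQM2 nor PEST, just the objective-function minimization that any automated calibration performs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observations from a one-parameter toy model (true k = 1.7)
x = np.linspace(0.0, 1.0, 20)
observed = 10.0 * np.exp(-1.7 * x) + rng.normal(0.0, 0.05, 20)

def model(k):
    # stand-in for a full model run with candidate parameter k
    return 10.0 * np.exp(-k * x)

# Ordered (grid) search: evaluate the sum of squared errors at every candidate
grid = np.linspace(0.5, 3.0, 251)                  # step 0.01
sse = np.array([np.sum((model(k) - observed) ** 2) for k in grid])
k_best = grid[int(np.argmin(sse))]                 # lands close to 1.7
```

    A gradient-based estimator such as PEST reaches the same minimum in far fewer model runs, which matters when each "run" is a full simulation; it also yields the sensitivity and uncertainty information the abstract mentions.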

  16. Chip calorimetry for evaluation of biofilm treatment with biocides, antibiotics, and biological agents.

    PubMed

    Morais, Frida Mariana; Buchholz, Friederike; Maskow, Thomas

    2014-01-01

    Any growth or bioconversion in biofilms is accompanied by the release of heat. The heat (in J) is tightly related to the stoichiometry of the respective process via the law of Hess, and the heat production rate (in W or J/s) is additionally related to the process kinetics. This heat and the heat production rate can nowadays be measured by modern calorimetry with extremely high sensitivity. Flow-through calorimetry allows the measurement of bioprocesses in biofilms in real time, without the need for invasive sample preparation or disturbance of biofilm processes. Furthermore, it can be applied for long-term measurements and is even applicable to turbid media. Chip or miniaturized calorimeters have the additional advantages of extremely short thermal equilibration times and the requirement of only very small amounts of media and chemicals. The precision of flow-through chip calorimeters (about 3 mW/L) allows the detection of early stages of biofilm development (about 10⁵ bacteria cm⁻²).

  17. Characterization of photomultiplier tubes in a novel operation mode for Secondary Emission Ionization Calorimetry

    NASA Astrophysics Data System (ADS)

    Tiras, E.; Dilsiz, K.; Ogul, H.; Southwick, D.; Bilki, B.; Wetzel, J.; Nachtman, J.; Onel, Y.; Winn, D.

    2016-10-01

    Hamamatsu single anode R7761 and multi-anode R5900-00-M16 Photomultiplier Tubes have been characterized for use in a Secondary Emission (SE) Ionization Calorimetry study. SE Ionization Calorimetry is a novel technique to measure electromagnetic shower particles in extreme radiation environments. The different operation modes used in these tests were developed by modifying the conventional PMT bias circuit. These modifications were simple changes to the arrangement of the voltage dividers of the baseboard circuits. The PMTs with modified bases, referred to as operating in SE mode, are used as an SE detector module in an SE calorimeter prototype, and placed between absorber materials (Fe, Cu, Pb, W, etc.). Here, the technical design of different operation modes, as well as the characterization measurements of both SE modes and the conventional PMT mode are reported.

  18. Fabrication of 12% ²⁴⁰Pu calorimetry standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, S.M.; Hildner, S.; Gutierrez, D.

    1995-08-01

    Throughout the DOE complex, laboratories are performing calorimetric assays on items containing high-burnup plutonium. These materials have a higher isotopic range and higher wattages than materials previously encountered in vault holdings. Currently, measurement control standards have been limited to 6% ²⁴⁰Pu standards. The lower isotopic and wattage value standards do not complement the measurement of the higher burnup material. Participants of the Calorimetry Exchange (CALEX) Program have identified the need for new calorimetric assay standards with a higher wattage and isotopic range. This paper describes the fabrication and verification measurements of the new CALEX standard containing 12% ²⁴⁰Pu oxide with a wattage of about 6 to 8 watts.

  19. ORBS: A reduction software for SITELLE and SpiOMM data

    NASA Astrophysics Data System (ADS)

    Martin, Thomas

    2014-09-01

    ORBS merges, corrects, transforms and calibrates interferometric data cubes and produces a spectral cube of the observed region for analysis. It is a fully automatic data reduction software package for use with SITELLE (installed at the Canada-France-Hawaii Telescope) and SpIOMM (a prototype attached to the Observatoire du Mont Mégantic); these imaging Fourier transform spectrometers obtain a hyperspectral data cube which samples a 12-arcminute field of view into 4 million visible spectra. ORBS is highly parallelized; its core classes (ORB) have been designed to be used in a suite of software packages for data analysis (ORCS and OACS), data simulation (ORUS) and data acquisition (IRIS).

  20. Development and verification of an innovative photomultiplier calibration system with a 10-fold increase in photometer resolution

    NASA Astrophysics Data System (ADS)

    Jiang, Shyh-Biau; Yeh, Tse-Liang; Chen, Li-Wu; Liu, Jann-Yenq; Yu, Ming-Hsuan; Huang, Yu-Qin; Chiang, Chen-Kiang; Chou, Chung-Jen

    2018-05-01

    In this study, we construct a photomultiplier calibration system. This calibration system helps scientists measure and establish the characteristic curve of photon count versus light intensity. The system uses an innovative 10-fold optical attenuator to enable an optical power meter to calibrate photomultiplier tubes whose resolution is much greater than that of the optical power meter. A simulation is first conducted to validate the feasibility of the system, and then the system construction, including optical design, circuit design, and software algorithms, is carried out. The simulation generally agrees with measurement data from the constructed system, which are further used to establish the characteristic curve of photon count versus light intensity.
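    One way to establish such a characteristic curve is to fit photon counts against optical power across the 10-fold attenuator steps; a slope near 1 on a log-log fit confirms the tube is in its linear regime. The sketch below uses made-up readings, not data from the paper.

```python
import numpy as np

# Hypothetical readings: optical power measured by the power meter at four
# 10-fold attenuator settings, and photon counts registered by the PMT.
power_w = np.array([1e-12, 1e-11, 1e-10, 1e-9])
counts = np.array([4.1e3, 4.0e4, 4.05e5, 3.98e6])

# Fit log10(counts) = a*log10(power) + b; a close to 1 indicates linearity,
# and the fitted line is the characteristic curve of count vs intensity.
a, b = np.polyfit(np.log10(power_w), np.log10(counts), 1)
print(round(a, 3))   # slope near 1
```

    In a real calibration, deviations of the slope from 1 at the high end would flag saturation or dead-time effects, and the fit would be restricted to the linear region.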

  1. New Software for Ensemble Creation in the Spitzer-Space-Telescope Operations Database

    NASA Technical Reports Server (NTRS)

    Laher, Russ; Rector, John

    2004-01-01

    Some of the computer pipelines used to process digital astronomical images from NASA's Spitzer Space Telescope require multiple input images, in order to generate high-level science and calibration products. The images are grouped into ensembles according to well documented ensemble-creation rules by making explicit associations in the operations Informix database at the Spitzer Science Center (SSC). The advantage of this approach is that a simple database query can retrieve the required ensemble of pipeline input images. New and improved software for ensemble creation has been developed. The new software is much faster than the existing software because it uses pre-compiled database stored-procedures written in Informix SPL (SQL programming language). The new software is also more flexible because the ensemble creation rules are now stored in and read from newly defined database tables. This table-driven approach was implemented so that ensemble rules can be inserted, updated, or deleted without modifying software.

  2. Software validation applied to spreadsheets used in laboratories working under ISO/IEC 17025

    NASA Astrophysics Data System (ADS)

    Banegas, J. M.; Orué, M. W.

    2016-07-01

    Several documents deal with software validation. Nevertheless, most are too complex to be applied to validating spreadsheets, surely the most widely used software in laboratories working under ISO/IEC 17025. The method proposed in this work is intended to be applied directly to validate spreadsheets. It includes a systematic way to document requirements, operational aspects regarding validation, and a simple method to keep records of validation results and modification history. This method is currently used in an accredited calibration laboratory, where it has proven practical and efficient.
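    The core of any spreadsheet validation is re-computing the spreadsheet's formulas independently and comparing against documented test vectors. A minimal sketch of that idea (the formula and values are hypothetical, not taken from the paper):

```python
# Independent re-implementation of a (hypothetical) spreadsheet calibration
# formula, checked against documented test vectors with a stated tolerance.
def corrected_reading(raw, slope, offset):
    # mirrors a spreadsheet cell formula like =slope*raw+offset
    return slope * raw + offset

test_vectors = [
    # (raw, slope, offset, expected)
    (0.0, 2.0, 0.5, 0.5),
    (1.0, 2.0, 0.5, 2.5),
    (10.0, 1.5, -1.0, 14.0),
]
for raw, slope, offset, expected in test_vectors:
    assert abs(corrected_reading(raw, slope, offset) - expected) < 1e-9
print("all test vectors passed")
```

    Keeping the test vectors and the pass/fail record alongside the spreadsheet gives exactly the kind of validation evidence and modification history ISO/IEC 17025 auditors look for.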

  3. Temperature stability of proteins: Analysis of irreversible denaturation using isothermal calorimetry.

    PubMed

    Schön, Arne; Clarkson, Benjamin R; Jaime, Maria; Freire, Ernesto

    2017-11-01

    The structural stability of proteins has been traditionally studied under conditions in which the folding/unfolding reaction is reversible, since thermodynamic parameters can only be determined under these conditions. Achieving reversibility conditions in temperature stability experiments has often required performing the experiments at acidic pH or other nonphysiological solvent conditions. With the rapid development of protein drugs, the fastest growing segment in the pharmaceutical industry, the need to evaluate protein stability under formulation conditions has acquired renewed urgency. Under formulation conditions and the required high protein concentration (∼100 mg/mL), protein denaturation is irreversible and frequently coupled to aggregation and precipitation. In this article, we examine the thermal denaturation of hen egg white lysozyme (HEWL) under irreversible conditions and concentrations up to 100 mg/mL using several techniques, especially isothermal calorimetry, which has been used to measure the enthalpy and kinetics of the unfolding and aggregation/precipitation at 12°C below the transition temperature measured by DSC. At those temperatures the rate of irreversible protein denaturation and aggregation of HEWL is measured to be on the order of 1 day⁻¹. Isothermal calorimetry appears to be a suitable technique to identify buffer formulation conditions that maximize the long-term stability of protein drugs. © 2017 Wiley Periodicals, Inc.
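    For a simple first-order process, extracting a rate constant like the 1 day⁻¹ figure above from an isothermal heat-flow trace reduces to a log-linear fit, since the heat flow decays as exp(-kt). The sketch below uses synthetic data of that order of magnitude, not the HEWL measurements.

```python
import numpy as np

# Synthetic isothermal heat-flow trace for a first-order process with
# k = 1 day^-1; the amplitude is arbitrary.
t_days = np.linspace(0.0, 3.0, 50)
heat_flow = 5.0 * np.exp(-1.0 * t_days)

# For first-order kinetics, log(heat flow) is linear in time with slope -k
slope, intercept = np.polyfit(t_days, np.log(heat_flow), 1)
k_fit = -slope
print(round(k_fit, 6))   # 1.0
```

    Real traces are noisier and may mix unfolding with aggregation enthalpies, so in practice the fit would be restricted to the region where a single process dominates.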

  4. Temperature stability of proteins: Analysis of irreversible denaturation using isothermal calorimetry

    PubMed Central

    Schön, Arne; Clarkson, Benjamin R; Jaime, Maria; Freire, Ernesto

    2017-01-01

    The structural stability of proteins has been traditionally studied under conditions in which the folding/unfolding reaction is reversible, since thermodynamic parameters can only be determined under these conditions. Achieving reversibility conditions in temperature stability experiments has often required performing the experiments at acidic pH or other nonphysiological solvent conditions. With the rapid development of protein drugs, the fastest growing segment in the pharmaceutical industry, the need to evaluate protein stability under formulation conditions has acquired renewed urgency. Under formulation conditions and the required high protein concentration (~100 mg/mL), protein denaturation is irreversible and frequently coupled to aggregation and precipitation. In this article, we examine the thermal denaturation of hen egg white lysozyme (HEWL) under irreversible conditions and concentrations up to 100 mg/mL using several techniques, especially isothermal calorimetry, which has been used to measure the enthalpy and kinetics of the unfolding and aggregation/precipitation at 12°C below the transition temperature measured by DSC. At those temperatures the rate of irreversible protein denaturation and aggregation of HEWL is measured to be on the order of 1 day⁻¹. Isothermal calorimetry appears to be a suitable technique to identify buffer formulation conditions that maximize the long-term stability of protein drugs. PMID:28722205

  5. Real-time self-calibration of a tracked augmented reality display

    NASA Astrophysics Data System (ADS)

    Baum, Zachary; Lasso, Andras; Ungi, Tamas; Fichtinger, Gabor

    2016-03-01

    PURPOSE: Augmented reality systems have been proposed for image-guided needle interventions but they have not become widely used in clinical practice due to restrictions such as limited portability, low display refresh rates, and tedious calibration procedures. We propose a handheld tablet-based self-calibrating image overlay system. METHODS: A modular handheld augmented reality viewbox was constructed from a tablet computer and a semi-transparent mirror. A consistent and precise self-calibration method, without the use of any temporary markers, was designed to achieve an accurate calibration of the system. Markers attached to the viewbox and patient are simultaneously tracked using an optical pose tracker to report the position of the patient with respect to a displayed image plane that is visualized in real-time. The software was built using the open-source 3D Slicer application platform's SlicerIGT extension and the PLUS toolkit. RESULTS: The accuracy of the image overlay with image-guided needle interventions yielded a mean absolute position error of 0.99 mm (95th percentile 1.93 mm) in-plane of the overlay and a mean absolute position error of 0.61 mm (95th percentile 1.19 mm) out-of-plane. This accuracy is clinically acceptable for tool guidance during various procedures, such as musculoskeletal injections. CONCLUSION: A self-calibration method was developed and evaluated for a tracked augmented reality display. The results show potential for the use of handheld image overlays in clinical studies with image-guided needle interventions.

  6. Study of interactions between hyaluronan and cationic surfactants by means of calorimetry, turbidimetry, potentiometry and conductometry.

    PubMed

    Krouská, J; Pekař, M; Klučáková, M; Šarac, B; Bešter-Rogač, M

    2017-02-10

    The thermodynamics of the micelle formation of the cationic surfactants tetradecyltrimethylammonium bromide (TTAB) and cetyltrimethylammonium bromide (CTAB) with and without the addition of hyaluronan of two molecular weights was studied in aqueous solution by titration calorimetry. Macroscopic phase separation, which was detected by calorimetry and also by conductometry, occurs when charges on the surfactant and hyaluronan are balanced. In contrast, turbidimetry and potentiometry showed hyaluronan-surfactant interactions at very low surfactant concentrations. The observed differences between systems prepared with CTAB and TTAB indicate that besides the electrostatic interactions, which probably predominate, hydrophobic effects also play a significant role in hyaluronan interactions with cationic surfactants. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Accelerating rate calorimetry: A new technique for safety studies in lithium systems

    NASA Technical Reports Server (NTRS)

    Ebner, W. B.

    1982-01-01

    The role of exothermic reactions in battery test modes is discussed. The exothermic reactions are characterized with respect to their time-temperature and time-pressure behavior. The reactions responsible for each major exotherm were examined. The accelerating rate calorimetry method was developed to study the susceptibility of lithium cells to thermal runaway reactions following certain abuse modes, such as forced discharge into reversal and charging.

  8. MODIS calibration

    NASA Technical Reports Server (NTRS)

    Barker, John L.

    1992-01-01

    The MODIS/MCST (MODIS Characterization Support Team) Status Report contains an outline of the calibration strategy, handbook, and plan. It also contains an outline of the MODIS/MCST action item from the 4th EOS Cal/Val Meeting, the objective of which was to locate potential MODIS calibration targets on the Earth's surface that are radiometrically homogeneous on a scale of 3 by 3 km. As appendices, draft copies of the handbook table of contents, the calibration plan table of contents, and a detailed agenda for the MODIS calibration working group are included.

  9. Testing Software Development Project Productivity Model

    NASA Astrophysics Data System (ADS)

    Lipkin, Ilya

    Software development is an increasingly influential factor in today's business environment, and a major issue affecting software development is how an organization estimates projects. If the organization underestimates cost, schedule, and quality requirements, the end results will not meet customer needs. On the other hand, if the organization overestimates these criteria, resources that could have been used more profitably will be wasted. There is no accurate model or measure available that can guide an organization in a quest for software development, with existing estimation models often underestimating software development efforts by as much as 500 to 600 percent. To address this issue, existing models usually are calibrated using local data with a small sample size, with the resulting estimates not offering improved cost analysis. This study presents a conceptual model for accurately estimating software development, based on an extensive literature review and a theoretical analysis grounded in Sociotechnical Systems (STS) theory. The conceptual model serves as a solution to bridge organizational and technological factors and is validated using an empirical dataset provided by the DoD. Practical implications of this study allow practitioners to concentrate on specific constructs of interest that provide the best value for the least amount of time. This study outlines key contributing constructs that are unique for Software Size E-SLOC, Man-hours Spent, and Quality of the Product, those constructs having the largest contribution to project productivity. This study discusses customer characteristics and provides a framework for a simplified project analysis for source selection evaluation and audit task reviews for the customers and suppliers. Theoretical contributions of this study provide an initial theory-based hypothesized project productivity model that can be used as a generic overall model across several application domains such as IT, Command and Control

  10. Software Tools For Building Decision-support Models For Flood Emergency Situations

    NASA Astrophysics Data System (ADS)

    Garrote, L.; Molina, M.; Ruiz, J. M.; Mosquera, J. C.

    The SAIDA decision-support system was developed by the Spanish Ministry of the Environment to provide assistance to decision-makers during flood situations. SAIDA has been tentatively implemented in two test basins: Jucar and Guadalhorce, and the Ministry is currently planning to have it implemented in all major Spanish basins in a few years' time. During the development cycle of SAIDA, the need for providing assistance to end-users in model definition and calibration was clearly identified. System developers usually emphasise abstraction and generality with the goal of providing a versatile software environment. End users, on the other hand, require concretion and specificity to adapt the general model to their local basins. As decision-support models become more complex, the gap between model developers and users gets wider: who takes care of model definition, calibration and validation? Initially, model developers perform these tasks, but the scope is usually limited to a few small test basins. Before the model enters the operational stage, end users must get involved in model construction and calibration, in order to gain confidence in the model recommendations. However, getting the users involved in these activities is a difficult task. The goal of this research is to develop representation techniques for simulation and management models in order to define, develop and validate a mechanism, supported by a software environment, oriented to provide assistance to the end-user in building decision models for the prediction and management of river floods in real time. The system is based on three main building blocks: a library of simulators of the physical system, an editor to assist the user in building simulation models, and a machine learning method to calibrate decision models based on the simulation models provided by the user.

  11. A General Water Resources Regulation Software System in China

    NASA Astrophysics Data System (ADS)

    LEI, X.

    2017-12-01

    To avoid redundant development of core modules for routine and emergency water resources regulation, and to improve the maintainability and upgradability of regulation models and business logic, a general water resources regulation software framework was developed based on the collection and analysis of common requirements for water resources regulation and emergency management. It provides a customizable, extensible framework, open to secondary development, for the three-level platform "MWR-Basin-Province". Meanwhile, this general software system enables business collaboration and information sharing of water resources regulation schemes among the three-level platforms, so as to improve national decision-making for water resources regulation. The general software system comprises four main modules: 1) a complete set of general water resources regulation modules that allows secondary developers to custom-build water resources regulation decision-making systems; 2) a complete set of model bases and model computing software released in the form of cloud services; 3) a complete set of tools to build the concept map and model system of basin water resources regulation, as well as a model management system to calibrate and configure model parameters; 4) a database that satisfies the business and functional requirements of the general water resources regulation software and can provide technical support for building basin or regional water resources regulation models.

  12. A new polarimetric active radar calibrator and calibration technique

    NASA Astrophysics Data System (ADS)

    Tang, Jianguo; Xu, Xiaojian

    2015-10-01

    The polarimetric active radar calibrator (PARC) is one of the most important calibrators with high radar cross section (RCS) for polarimetry measurement. In this paper, a new double-antenna polarimetric active radar calibrator (DPARC) is proposed, which consists of two rotatable antennas with wideband electromagnetic polarization filters (EMPF) that achieve lower cross-polarization in both transmission and reception. With two antennas that are rotatable around the radar line of sight (LOS), the DPARC provides a variety of standard polarimetric scattering matrices (PSM) through rotation combinations of the receiving and transmitting polarizations, which are useful for polarimetric calibration in different applications. In addition, a technique based on Fourier analysis is proposed for calibration processing. Numerical simulation results are presented to demonstrate the superior performance of the proposed DPARC and processing technique.

  13. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    NASA Astrophysics Data System (ADS)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of calibration algorithms and algorithm coefficients is being able to run science data through exactly the same software used for on-line calibration of those data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat-file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of test data sets are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each static LUT file in such a way that the correct set of LUTs required for each algorithm is automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  14. PREFACE: 16th International Conference on Calorimetry in High Energy Physics (CALOR 2014)

    NASA Astrophysics Data System (ADS)

    Novotny, Rainer W.

    2015-02-01

    The XVIth International Conference on Calorimetry in High Energy Physics - CALOR 2014 - was held in Giessen, Germany from 6-11 April 2014 at the Science Campus of the University. It was hosted by the Justus-Liebig-University and the HIC for FAIR Helmholtz International Center. The series of conferences on calorimetry was started in 1990 at Fermilab and focuses primarily on operating and future calorimeter systems within the hadron and high-energy physics community, without neglecting the impact on other fields such as astrophysics or medical imaging. As confirmed by the impressive list of over 70 oral presentations, 5 posters and over 100 attendees, the field of calorimetry remains alive and attractive. The present volume contains the written contributions of almost all presentations, which can be found at http://calor2014.de. Time slots of 15 or 30 minutes, including discussion, were allocated. The conference was accompanied by a small exhibition of several industrial companies related to the field. The day before the opening of the scientific program, Richard Wigmans gave an excellent and vivid tutorial on basic aspects of calorimetry, meant as an introduction for students and conference attendees new to the field. During the opening ceremony, Klaus Peters from GSI gave an impression of the present and future status and the scientific program of the new FAIR facility at nearby Darmstadt. The conference program of the first day was dedicated to the performance and required future upgrades of the LHC experiments, dominated by ATLAS, CMS and LHCb. The program of the next day covered specific aspects of electronics and readout as well as calorimetry in outer space. Several contributions discussed in detail new concepts for hadron calorimeters within the CALICE collaboration, complemented by a session on sampling calorimeters. The next sections were dedicated to operating and future calorimeters at various laboratories and covering a wide range of

  15. Trend analysis of Terra/ASTER/VNIR radiometric calibration coefficient through onboard and vicarious calibrations as well as cross calibration with MODIS

    NASA Astrophysics Data System (ADS)

    Arai, Kohei

    2012-07-01

    Radiometric Calibration Coefficients (RCC) derived from more than 11 years of onboard and vicarious calibrations are compared, together with a cross comparison against the well-calibrated MODIS RCC. Fault Tree Analysis (FTA) is conducted to clarify possible causes of the RCC degradation, together with a sensitivity analysis for vicarious calibration. One suspected cause of the RCC degradation is identified through the FTA. The test-site dependency of vicarious calibration is quite obvious, because the vicarious-calibration RCC is sensitive to surface reflectance measurement accuracy rather than to atmospheric optical depth. The results of the cross calibration with MODIS confirm the significant sensitivity of vicarious calibration to surface reflectance measurements.

  16. A High Precision $3.50 Open Source 3D Printed Rain Gauge Calibrator

    NASA Astrophysics Data System (ADS)

    Lopez Alcala, J. M.; Udell, C.; Selker, J. S.

    2017-12-01

    Currently available rain gauge calibrators tend to be designed for specific rain gauges, are expensive, employ low-precision water reservoirs, and do not offer the flexibility needed to test the ever more popular small-aperture rain gauges. The objective of this project was to develop and validate a freely downloadable, open-source, 3D printed rain gauge calibrator that can be adjusted for a wide range of gauges. The proposed calibrator applies low-, medium-, and high-intensity flows, and its parametric design allows the user to modify it to conform to unique system specifications and print it using CAD software. To overcome the fact that different 3D printers yield different print qualities, we devised a simple post-printing step that controls critical dimensions to assure robust performance: specifically, the three orifices of the calibrator are drilled out to reach the three target flow rates. Laboratory tests showed that flow rates were consistent between prints, and between trials of each part, while the total applied water was precisely controlled by the use of a volumetric flask as the reservoir.

  17. The 4MOST facility control software

    NASA Astrophysics Data System (ADS)

    Pramskiy, Alexander; Mandel, Holger; Rothmaier, Florian; Stilz, Ingo; Winkler, Roland; Hahn, Thomas

    2016-07-01

    The 4-m Multi-Object Spectrographic Telescope (4MOST) comprises one high-resolution (R ≈ 18,000) and two low-resolution (R ≈ 5,000) spectrographs covering the wavelength range between 390 and 950 nm. The spectrographs will be installed on the ESO VISTA telescope and will be fed by approximately 2400 fibres. The instrument is capable of simultaneously obtaining spectra of about 2400 objects distributed over a hexagonal field of view of four square degrees. This paper aims at giving an overview of the control software design, which is based on the standard ESO VLT software architecture and customised to fit the needs of the 4MOST instrument. In particular, the facility control software is intended to arrange the precise positioning of the fibres, to schedule and observe many surveys in parallel, and to combine the output from the three spectrographs. Moreover, 4MOST's software will include user-friendly graphical user interfaces that enable users to interact with the facility control system and to monitor all data-taking and calibration tasks of the instrument. A secondary guiding system will be implemented to correct for any fibre flexure and thus to improve 4MOST's guiding performance. The large number of fibres requires a custom data-exchange design to avoid performance issues. The observation sequences are designed to use the spectrographs in parallel, with synchronisation points for data exchange between subsystems. In order to control hardware devices, Programmable Logic Controller (PLC) components will be used, the new standard for future instruments at ESO.

  18. Spectral responsivity-based calibration of photometer and colorimeter standards

    NASA Astrophysics Data System (ADS)

    Eppeldauer, George P.

    2013-08-01

    Several new-generation transfer- and working-standard illuminance meters and tristimulus colorimeters have been developed at the National Institute of Standards and Technology (NIST) [1] to measure all kinds of light sources with low uncertainty. The spectral and broad-band (illuminance) responsivities of the photometer (Y) channels of two tristimulus meters were determined at both the Spectral Irradiance and Radiance Responsivity Calibrations using Uniform Sources (SIRCUS) facility and the Spectral Comparator Facility (SCF) [2]. The two illuminance responsivities agreed within 0.1% with an overall uncertainty of 0.2% (k = 2), which is a factor of two improvement over the present NIST photometric scale. The first detector-based tristimulus color scale [3] was realized. All channels of the reference tristimulus colorimeter were calibrated at the SIRCUS. The other tristimulus meters were calibrated at the SCF and also against the reference meter on the photometry bench in broad-band measurement mode. The agreement between detector- and source-based calibrations was within 3 K when a tungsten lamp standard was measured at 2856 K and 3100 K [4]. The color-temperature uncertainty of tungsten lamp measurements was 4 K (k = 2) between 2300 K and 3200 K, which is a factor of two improvement over the presently used NIST source-based color temperature scale. One colorimeter was extended with an additional (fifth) channel to apply software-implemented matrix corrections. With this correction, the color-difference errors caused by spectral mismatch were decreased by a factor of 20 for single-color LEDs.

  19. Full-Field Calibration of Color Camera Chromatic Aberration using Absolute Phase Maps.

    PubMed

    Liu, Xiaohong; Huang, Shujun; Zhang, Zonghua; Gao, Feng; Jiang, Xiangqian

    2017-05-06

    The refractive index of a lens varies with the wavelength of light, so the same incident ray is refracted along different outgoing paths at different wavelengths. This characteristic of lenses causes images captured by a color camera to display chromatic aberration (CA), which seriously reduces image quality. Based on an analysis of the distribution of CA, a full-field calibration method based on absolute phase maps is proposed in this paper. Red, green, and blue closed sinusoidal fringe patterns are generated, consecutively displayed on an LCD (liquid crystal display), and captured by a color camera from the front viewpoint. The phase information of each color fringe is obtained using a four-step phase-shifting algorithm and an optimum fringe number selection method. CA causes the unwrapped phases of the three channels to differ. These pixel deviations can be computed by comparing the unwrapped phase data of the red, blue, and green channels in polar coordinates. CA calibration is accomplished in Cartesian coordinates. The systematic errors introduced by the LCD are analyzed and corrected. Simulated results show the validity of the proposed method, and experimental results demonstrate that the proposed full-field calibration method based on absolute phase maps will be useful for practical software-based CA calibration.
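The four-step phase-shifting step mentioned above can be sketched with the standard textbook formula; this is one common convention (intensities I_k = A + B·cos(φ + kπ/2)), not necessarily the paper's exact implementation, and the fringe images below are synthetic:

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe images shifted by pi/2 each.

    For I_k = A + B*cos(phi + k*pi/2):
        I3 - I1 = 2B*sin(phi),  I0 - I2 = 2B*cos(phi),
    so phi = atan2(I3 - I1, I0 - I2).
    """
    return np.arctan2(np.asarray(i3, float) - i1, np.asarray(i0, float) - i2)

# Synthetic check: recover a known phase ramp from four shifted frames.
phi_true = np.linspace(-np.pi + 0.1, np.pi - 0.1, 50)
frames = [100 + 80 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_est = four_step_phase(*frames)
```

Applying this per channel (R, G, B) and differencing the unwrapped phases is what exposes the per-pixel chromatic deviations the abstract describes.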

  20. Prospects of second generation artificial intelligence tools in calibration of chemical sensors.

    PubMed

    Braibanti, Antonio; Rao, Rupenaguntla Sambasiva; Ramam, Veluri Anantha; Rao, Gollapalli Nageswara; Rao, Vaddadi Venkata Panakala

    2005-05-01

    Multivariate data-driven calibration models with neural networks (NNs) are developed for binary (Cu++ and Ca++) and quaternary (K+, Ca++, NO3- and Cl-) ion-selective electrode (ISE) data. The response profiles of ISEs with concentration are non-linear and sub-Nernstian. The task is one of function approximation for multivariate, multi-response, correlated, non-linear data with unknown noise structure, i.e. multi-component calibration/prediction in chemometric parlance. Radial basis function (RBF) and Fuzzy-ARTMAP NN models, implemented in the software packages TRAJAN and Professional II, are employed for the calibration. The optimum NN models reported are based on residuals in concentration space. Being a data-driven information technology, an NN does not require a model or a prior or posterior distribution of the data or noise structure. Missing information, spikes or newer trends in different concentration ranges can be modeled through novelty detection. Two simulated data sets generated from mathematical functions are modeled as a function of the number of data points and of network parameters such as the number of neurons and nearest neighbors. The success of RBF and Fuzzy-ARTMAP NNs in developing adequate calibration models for the experimental data and function approximation models for the more complex simulated data sets establishes AI2 (artificial intelligence, 2nd generation) as a promising technology in quantitation.
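As a rough illustration of the RBF idea (not the TRAJAN or Professional II implementations, and with an invented response curve standing in for real ISE data), a radial-basis-function calibration that maps electrode potential back to concentration can be sketched as:

```python
import numpy as np

def fit_rbf(X, y, centers, width):
    """Least-squares weights for a Gaussian RBF expansion."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * width ** 2))        # one basis function per center
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbf(X, centers, width, w):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2)) @ w

# Toy "electrode response": a non-linear, sub-Nernstian-like curve (invented).
rng = np.random.default_rng(0)
conc = rng.uniform(0.1, 10.0, 200)[:, None]       # hypothetical concentrations
emf = 25.0 * np.log(conc) + 3.0 * np.sqrt(conc)   # hypothetical potentials, mV

centers = np.linspace(emf.min(), emf.max(), 15)[:, None]
w = fit_rbf(emf, conc.ravel(), centers, width=8.0)
pred = predict_rbf(emf, centers, width=8.0, w=w)
```

The calibration direction here (potential in, concentration out) mirrors the multi-component prediction task described in the abstract.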

  1. Application of Calibrated Peer Review (CPR) Writing Assignments to Enhance Experiments with an Environmental Chemistry Focus

    ERIC Educational Resources Information Center

    Margerum, Lawrence D.; Gulsrud, Maren; Manlapez, Ronald; Rebong, Rachelle; Love, Austin

    2007-01-01

    The browser-based software program, Calibrated Peer Review (CPR) developed by the Molecular Science Project enables instructors to create structured writing assignments in which students learn by writing and reading for content. Though the CPR project covers only one experiment in general chemistry, it might provide lab instructors with a method…

  2. Thermodynamically consistent model calibration in chemical kinetics

    PubMed Central

    2011-01-01

    Background: The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results: We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions: TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new models. 
Furthermore, TCMC can
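The constrained-optimization formulation can be illustrated on a hypothetical three-reaction cycle, where a Wegscheider-type thermodynamic constraint requires the product of the forward rate constants around the cycle to equal the product of the reverse ones. This is a minimal sketch of the idea, not the paper's EGF/ERK model or its MATLAB implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical fitted rate constants [k1f, k2f, k3f, k1r, k2r, k3r] that
# violate the cycle condition k1f*k2f*k3f == k1r*k2r*k3r.
k_fit = np.array([2.0, 0.5, 1.5, 1.0, 0.8, 0.7])
x0 = np.log(k_fit)  # work in log space so the constraint becomes linear

def objective(x):
    # Stay as close as possible to the originally fitted parameters.
    return np.sum((x - x0) ** 2)

def cycle_constraint(x):
    # log(k1f*k2f*k3f) - log(k1r*k2r*k3r) must vanish for feasibility.
    return x[:3].sum() - x[3:].sum()

res = minimize(objective, x0, method="SLSQP",
               constraints=[{"type": "eq", "fun": cycle_constraint}])
k_feasible = np.exp(res.x)
```

The same pattern (distance-to-fit objective plus equality constraints from detailed balance) scales to larger networks with one constraint per independent cycle.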

  3. Development of Rapid, Continuous Calibration Techniques and Implementation as a Prototype System for Civil Engineering Materials Evaluation

    NASA Astrophysics Data System (ADS)

    Scott, M. L.; Gagarin, N.; Mekemson, J. R.; Chintakunta, S. R.

    2011-06-01

    Until recently, civil engineering material calibration data could only be obtained from material sample cores or via time-consuming, stationary calibration measurements in a limited number of locations. Calibration data are used to determine material propagation velocities of electromagnetic waves in test materials for use in layer thickness measurements and subsurface imaging. The limitations these calibration methods impose have been a significant impediment to broader use of nondestructive evaluation methods such as ground-penetrating radar (GPR). In 2006, a new rapid, continuous calibration approach was designed using simulation software to address these measurement limitations during a Federal Highway Administration (FHWA) research and development effort. This continuous calibration method combines a digitally-synthesized step-frequency (SF)-GPR array and a data collection protocol sequence for the common midpoint (CMP) method. Modeling and laboratory test results for various data collection protocols and materials are presented in this paper. The continuous-CMP concept was finally implemented for FHWA in a prototype demonstration system called the Advanced Pavement Evaluation (APE) system in 2009. Data from the continuous-CMP protocol are processed using a semblance/coherency analysis to determine material propagation velocities. Continuously calibrated pavement thicknesses measured with the APE system in 2009 are presented. This method is efficient, accurate, and cost-effective.
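The semblance/coherency analysis used to pick propagation velocities can be illustrated on a synthetic common-midpoint gather. This is a generic textbook sketch (single reflection, hyperbolic moveout, invented geometry), not the APE system's actual processing chain:

```python
import numpy as np

def semblance(gather, offsets, dt, t0, velocities):
    """Semblance coherency of a CMP gather along hyperbolic moveout curves.

    gather: (n_traces, n_samples) array; travel time t(x) = sqrt(t0^2 + (x/v)^2).
    Returns one semblance value in [0, 1] per trial velocity.
    """
    n_traces, n_samples = gather.shape
    out = np.empty(len(velocities))
    for i, v in enumerate(velocities):
        t = np.sqrt(t0 ** 2 + (offsets / v) ** 2)
        idx = np.round(t / dt).astype(int)
        ok = idx < n_samples
        amps = gather[np.arange(n_traces)[ok], idx[ok]]
        out[i] = amps.sum() ** 2 / (len(amps) * (amps ** 2).sum() + 1e-12)
    return out

# Synthetic gather: one reflection with true stacking velocity 2000 m/s.
dt, t0, v_true = 0.001, 0.2, 2000.0
offsets = np.linspace(0.0, 600.0, 13)
gather = np.zeros((len(offsets), 600))
for j, x in enumerate(offsets):
    gather[j, int(np.round(np.sqrt(t0**2 + (x / v_true)**2) / dt))] = 1.0

vels = np.arange(1500.0, 2600.0, 100.0)
s = semblance(gather, offsets, dt, t0, vels)
best_v = vels[np.argmax(s)]
```

The velocity maximizing semblance is the estimate of the material propagation velocity; in practice the sum is taken over a short time window around each trial t0 rather than single samples.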

  4. Development of rapid, continuous calibration techniques and implementation as a prototype system for civil engineering materials evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, M. L.; Gagarin, N.; Mekemson, J. R.

    Until recently, civil engineering material calibration data could only be obtained from material sample cores or via time-consuming, stationary calibration measurements in a limited number of locations. Calibration data are used to determine material propagation velocities of electromagnetic waves in test materials for use in layer thickness measurements and subsurface imaging. The limitations these calibration methods impose have been a significant impediment to broader use of nondestructive evaluation methods such as ground-penetrating radar (GPR). In 2006, a new rapid, continuous calibration approach was designed using simulation software to address these measurement limitations during a Federal Highway Administration (FHWA) research and development effort. This continuous calibration method combines a digitally-synthesized step-frequency (SF)-GPR array and a data collection protocol sequence for the common midpoint (CMP) method. Modeling and laboratory test results for various data collection protocols and materials are presented in this paper. The continuous-CMP concept was finally implemented for FHWA in a prototype demonstration system called the Advanced Pavement Evaluation (APE) system in 2009. Data from the continuous-CMP protocol are processed using a semblance/coherency analysis to determine material propagation velocities. Continuously calibrated pavement thicknesses measured with the APE system in 2009 are presented. This method is efficient, accurate, and cost-effective.

  5. AC calorimetry of H2O at pressures up to 9 GPa in diamond anvil cells

    NASA Astrophysics Data System (ADS)

    Geballe, Zachary M.; Struzhkin, Viktor V.

    2017-06-01

    If successfully developed, calorimetry at tens of GPa of pressure could help characterize phase transitions in materials such as high-pressure minerals, metals, and molecular solids. Here, we extend alternating-current calorimetry to 9 GPa and 300 K in a diamond anvil cell and use it to study phase transitions in H2O. In particular, water is loaded into the sample chambers of diamond-cells, along with thin metal heaters (1 μm-thick platinum or 20 nm-thick gold on a glass substrate) that drive high-frequency temperature oscillations (20 Hz to 600 kHz; 1 to 10 K). The heaters also act as thermometers via the third-harmonic technique, yielding calorimetric data on (1) heat conduction to the diamonds and (2) heat transport into substrate and sample. Using this method during temperature cycles from 300 to 200 K, we document melting, freezing, and proton ordering and disordering transitions of H2O at 0 to 9 GPa, and characterize changes in thermal conductivity and heat capacity across these transitions. The technique and analysis pave the way for calorimetry experiments on any non-metal at pressures up to ˜100 GPa, provided a thin layer (several μm-thick) of thermal insulation supports a metallic thin-film (tens of nm thick) Joule-heater attached to low contact resistance leads inside the sample chamber of a diamond-cell.
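The third-harmonic readout step can be sketched generically: the heater/thermometer voltage is projected onto the 3ω Fourier component to extract its amplitude. The subsequent conversion of the 3ω amplitude to thermal conductivity and heat capacity is not shown, and all numbers below are invented:

```python
import numpy as np

def harmonic_amplitude(signal, t, f0, n):
    """Amplitude of the n-th harmonic of f0 via Fourier projection.

    Assumes uniform sampling over an integer number of periods of f0.
    """
    dt = t[1] - t[0]
    T = len(t) * dt
    c = np.sum(signal * np.cos(2 * np.pi * n * f0 * t)) * dt
    s = np.sum(signal * np.sin(2 * np.pi * n * f0 * t)) * dt
    return 2.0 * np.hypot(c, s) / T

# Synthetic heater voltage: drive at f0 plus a small 3-omega component,
# as produced by Joule heating at 2*f0 modulating the heater resistance.
f0 = 200.0                       # hypothetical drive frequency, Hz
t = np.arange(20000) * 5e-6      # 0.1 s = 20 full periods of f0
v = 1.0 * np.sin(2 * np.pi * f0 * t) \
    + 4e-3 * np.sin(2 * np.pi * 3 * f0 * t + 0.3)

v3 = harmonic_amplitude(v, t, f0, 3)   # recovers the 4 mV-scale component
```

In a real 3ω measurement this projection is done by a lock-in amplifier; the frequency dependence of the 3ω amplitude then separates heat conduction to the diamonds from transport into substrate and sample.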

  6. TRACC: An open source software for processing sap flux data from thermal dissipation probes

    DOE PAGES

    Ward, Eric J.; Domec, Jean-Christophe; King, John; ...

    2017-05-02

    Here, thermal dissipation probes (TDPs) have become a widely used method of monitoring plant water use in recent years. The use of TDPs requires calibration to a theoretical zero-flow value (ΔT0); usually based upon the assumption that at least some nighttime measurements represent zero-flow conditions. Fully automating the processing of data from TDPs is made exceedingly difficult due to errors arising from many sources. However, it is desirable to minimize variation arising from different researchers’ processing data, and thus, a common platform for processing data, including editing raw data and determination of ΔT0, is useful and increases the transparency and replicability of TDP-based research. Here, we present the TDP data processing software TRACC (Thermal dissipation Review Assessment Cleaning and Conversion) to serve this purpose. TRACC is an open-source software written in the language R, using graphical presentation of data and on-screen prompts with yes/no or simple numerical responses. It allows the user to select several important options, such as calibration coefficients and the exclusion of nights when vapor pressure deficit does not approach zero. Although it is designed for users with no coding experience, the outputs of TRACC could be easily incorporated into more complex models or software.
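Although TRACC itself is written in R, the ΔT0-based conversion it automates can be sketched using the widely cited Granier-style calibration u = a·K^b with K = (ΔT0 − ΔT)/ΔT. The default coefficients below are the commonly quoted original ones, not necessarily those a TRACC user would select, and the readings are invented:

```python
import numpy as np

def sap_flux_density(dT, dT0, a=0.0119, b=1.231):
    """Granier-style sap flux density from a TDP temperature difference.

    dT  : measured probe temperature difference (K)
    dT0 : zero-flow baseline (K), e.g. estimated from nighttime minima
    a, b: calibration coefficients (defaults are the classic Granier values,
          giving u in cm3 cm-2 s-1)
    """
    K = (dT0 - dT) / dT
    return a * np.maximum(K, 0.0) ** b   # clamp: dT above baseline means no flow

# Hypothetical daytime readings against a nighttime baseline of 10 K.
dT = np.array([10.0, 9.0, 8.0, 7.5])
u = sap_flux_density(dT, dT0=10.0)
```

The hard part TRACC addresses is not this formula but cleaning the raw ΔT record and choosing defensible nightly ΔT0 values (e.g. excluding nights when vapor pressure deficit stays high).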

  7. TRACC: An open source software for processing sap flux data from thermal dissipation probes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, Eric J.; Domec, Jean-Christophe; King, John

    Here, thermal dissipation probes (TDPs) have become a widely used method of monitoring plant water use in recent years. The use of TDPs requires calibration to a theoretical zero-flow value (ΔT0); usually based upon the assumption that at least some nighttime measurements represent zero-flow conditions. Fully automating the processing of data from TDPs is made exceedingly difficult due to errors arising from many sources. However, it is desirable to minimize variation arising from different researchers’ processing data, and thus, a common platform for processing data, including editing raw data and determination of ΔT0, is useful and increases the transparency and replicability of TDP-based research. Here, we present the TDP data processing software TRACC (Thermal dissipation Review Assessment Cleaning and Conversion) to serve this purpose. TRACC is an open-source software written in the language R, using graphical presentation of data and on-screen prompts with yes/no or simple numerical responses. It allows the user to select several important options, such as calibration coefficients and the exclusion of nights when vapor pressure deficit does not approach zero. Although it is designed for users with no coding experience, the outputs of TRACC could be easily incorporated into more complex models or software.

  8. Model-based software for simulating ultrasonic pulse/echo inspections of metal components

    NASA Astrophysics Data System (ADS)

    Chiou, Chien-Ping; Margetan, Frank J.; Taylor, Jared L.; McKillip, Matthew; Engle, Brady J.; Roberts, Ronald A.; Barnard, Daniel J.

    2017-02-01

    Under the sponsorship of the National Science Foundation's Industry/University Cooperative Research Center at Iowa State University, an effort was initiated in 2015 to repackage existing research-grade software into user-friendly tools for the rapid estimation of signal-to-noise ratio (S/N) for ultrasonic inspections of metals. The software combines: (1) a Python-based graphical user interface for specifying an inspection scenario and displaying results; and (2) a Fortran-based engine for computing defect signals and backscattered grain noise characteristics. The latter makes use of the Thompson-Gray Model for the response from an internal defect and the Independent Scatterer Model for backscattered grain noise. This paper provides an overview of the ongoing modeling effort, with emphasis on recent developments. These include: treatment of angle-beam inspections; implementation of distance-amplitude corrections; changes in the generation of "invented" calibration signals; efforts to simulate ultrasonic C-scans; and experimental testing of model predictions. The simulation software can now treat both normal- and oblique-incidence immersion inspections of curved metal components having equiaxed microstructures in which the grain size varies with depth. Both longitudinal- and shear-wave inspections are treated. The model transducer can be planar, spherically focused, or bi-cylindrically focused. A calibration (or reference) signal is required and is used to deduce the measurement-system efficiency function. This can be "invented" by the software using center-frequency and bandwidth information specified by the user, or, alternatively, a measured calibration signal can be used. Defect types include flat-bottomed-hole reference reflectors, and spherical pores and inclusions. Simulation outputs include estimated defect signal amplitudes, root-mean-squared grain noise amplitudes, and S/N as functions of the depth of the defect within the metal component. 
At any particular

  9. A proposed standard method for polarimetric calibration and calibration verification

    NASA Astrophysics Data System (ADS)

    Persons, Christopher M.; Jones, Michael W.; Farlow, Craig A.; Morell, L. Denise; Gulley, Michael G.; Spradley, Kevin D.

    2007-09-01

    Accurate calibration of polarimetric sensors is critical to reducing and analyzing phenomenology data, producing uniform polarimetric imagery for deployable sensors, and ensuring predictable performance of polarimetric algorithms. It is desirable to develop a standard calibration method, including verification reporting, in order to increase credibility with customers and foster communication and understanding within the polarimetric community. This paper seeks to facilitate discussions within the community on arriving at such standards. Both the calibration and verification methods presented here are performed easily with common polarimetric equipment, and are applicable to visible and infrared systems with either partial Stokes or full Stokes sensitivity. The calibration procedure has been used on infrared and visible polarimetric imagers over a six year period, and resulting imagery has been presented previously at conferences and workshops. The proposed calibration method involves the familiar calculation of the polarimetric data reduction matrix by measuring the polarimeter's response to a set of input Stokes vectors. With this method, however, linear combinations of Stokes vectors are used to generate highly accurate input states. This allows the direct measurement of all system effects, in contrast with fitting modeled calibration parameters to measured data. This direct measurement of the data reduction matrix allows higher order effects that are difficult to model to be discovered and corrected for in calibration. This paper begins with a detailed tutorial on the proposed calibration and verification reporting methods. Example results are then presented for a LWIR rotating half-wave retarder polarimeter.
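The central computation, estimating the polarimetric data reduction matrix from the sensor's responses to known input Stokes vectors, can be sketched with generic linear algebra. The analyzer vectors and input states below are invented for illustration; they are not the paper's LWIR system:

```python
import numpy as np

# Hypothetical 4-channel full-Stokes polarimeter: each row of A is the
# analyzer (measurement) vector of one detector channel.
A_true = 0.5 * np.array([[1.0,  1.0, 0.0, 0.0],
                         [1.0, -1.0, 0.0, 0.0],
                         [1.0,  0.0, 1.0, 0.0],
                         [1.0,  0.0, 0.0, 1.0]])

# Known input Stokes vectors (one per column), e.g. produced by a
# polarization state generator; five states spanning all four components.
S_in = np.array([[1.0,  1.0, 1.0,  1.0, 1.0],
                 [1.0, -1.0, 0.0,  0.0, 0.5],
                 [0.0,  0.0, 1.0, -1.0, 0.2],
                 [0.0,  0.0, 0.0,  0.0, 0.6]])

# "Measured" detector responses, one column per input state.
P = A_true @ S_in

# Least-squares estimate of the system matrix, then the data reduction
# matrix W such that S = W @ p for a vector of channel measurements p.
A_est = P @ np.linalg.pinv(S_in)
W = np.linalg.pinv(A_est)
S_rec = W @ P
```

Measuring A directly from an over-determined set of input states, rather than fitting a parametric instrument model, is what lets unmodeled higher-order effects be absorbed into W, which is the point the abstract makes.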

  10. Optical Mass Displacement Tracking: A simplified field calibration method for the electro-mechanical seismometer.

    NASA Astrophysics Data System (ADS)

    Burk, D. R.; Mackey, K. G.; Hartse, H. E.

    2016-12-01

    We have developed a simplified field calibration method for use in seismic networks that still employ the classical electro-mechanical seismometer. Smaller networks may not always have the financial capability to purchase and operate modern, state-of-the-art equipment. Therefore these networks generally operate a modern, low-cost digitizer that is paired to an existing electro-mechanical seismometer. These systems are typically poorly calibrated, because coil loading, digitizer input impedance, and amplifier gain differences vary by station and digitizer model. Therefore, it is necessary to calibrate the station channel as a complete system, taking into account every component from instrument to amplifier to digitizer. Routine calibrations at the smaller networks are not always consistent, because existing calibration techniques require either specialized equipment or significant technical expertise. To improve station data quality at the small network, we developed a calibration method that utilizes open-source software and a commonly available laser position sensor. Using a signal generator and a small excitation coil, we force the mass of the instrument to oscillate at various frequencies across its operating range. We then compare the channel voltage output to the laser-measured mass displacement to determine the instrument voltage sensitivity at each frequency point. Using the standard equations of forced motion, a representation of the calibration curve as a function of voltage per unit of ground velocity is calculated. A computer algorithm optimizes the curve and then translates the instrument response into a Seismic Analysis Code (SAC) poles & zeros format. Results have been demonstrated to fall within a few percent of a standard laboratory calibration. 
This method is an effective and affordable option for networks that employ electro-mechanical seismometers, and it is currently being deployed in
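The per-frequency sensitivity estimate described in this record can be sketched as follows. This is an illustrative Python fragment, not the authors' code; the function names and the simple damped-oscillator response model are assumptions.

```python
import math

def velocity_sensitivity(v_peak, x_peak, freq):
    """Channel output (V) per unit of mass velocity (m/s) at one drive
    frequency: the mass-velocity amplitude is 2*pi*f times the
    laser-measured displacement amplitude x_peak."""
    return v_peak / (2.0 * math.pi * freq * x_peak)

def ground_velocity_response(freq, f0, damping, s_flat):
    """Standard forced-motion magnitude response of a velocity
    transducer (hypothetical parameters: natural frequency f0 in Hz,
    damping ratio, flat-band sensitivity s_flat in V/(m/s)); a curve of
    this form would be fitted to the per-frequency points above."""
    r = freq / f0
    return s_flat * r ** 2 / math.sqrt((1 - r ** 2) ** 2 + (2 * damping * r) ** 2)
```

Fitting `f0`, `damping`, and `s_flat` to the measured points yields the pole-zero description that SAC expects.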

  11. Isothermal Titration Calorimetry and Macromolecular Visualization for the Interaction of Lysozyme and Its Inhibitors

    ERIC Educational Resources Information Center

    Wei, Chin-Chuan; Jensen, Drake; Boyle, Tiffany; O'Brien, Leah C.; De Meo, Cristina; Shabestary, Nahid; Eder, Douglas J.

    2015-01-01

    To provide a research-like experience to upper-division undergraduate students in a biochemistry teaching laboratory, isothermal titration calorimetry (ITC) is employed to determine the binding constants of lysozyme and its inhibitors, N-acetyl glucosamine trimer (NAG3) and monomer (NAG). The extremely weak binding of lysozyme/NAG is…

  12. Validity of endothelial cell analysis methods and recommendations for calibration in Topcon SP-2000P specular microscopy.

    PubMed

    van Schaick, Willem; van Dooren, Bart T H; Mulder, Paul G H; Völker-Dieben, Hennie J M

    2005-07-01

    To report on the calibration of the Topcon SP-2000P specular microscope and the Endothelial Cell Analysis Module of the IMAGEnet 2000 software, and to establish the validity of the different endothelial cell density (ECD) assessment methods available in these instruments. Using an external microgrid, we calibrated the magnification of the SP-2000P and the IMAGEnet software. In both eyes of 36 volunteers, we validated 4 ECD assessment methods by comparing these methods to the gold standard manual ECD, manual counting of cells on a video print. These methods were: the estimated ECD, estimation of ECD with a reference grid on the camera screen; the SP-2000P ECD, pointing out whole contiguous cells on the camera screen; the uncorrected IMAGEnet ECD, using automatically drawn cell borders, and the corrected IMAGEnet ECD, with manual correction of incorrectly drawn cell borders in the automated analysis. Validity of each method was evaluated by calculating both the mean difference with the manual ECD and the limits of agreement as described by Bland and Altman. Preset factory values of magnification were incorrect, resulting in errors in ECD of up to 9%. All assessments except 1 of the estimated ECDs differed significantly from manual ECDs, with most differences being similar (< or =6.5%), except for uncorrected IMAGEnet ECD (30.2%). Corrected IMAGEnet ECD showed the narrowest limits of agreement (-4.9 to +19.3%). We advise checking the calibration of magnification in any specular microscope or endothelial analysis software as it may be erroneous. Corrected IMAGEnet ECD is the most valid of the investigated methods in the Topcon SP-2000P/IMAGEnet 2000 combination.
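The validity analysis in this record uses the Bland-Altman method: the bias between two measurement methods is the mean of the paired differences, and the 95% limits of agreement are the bias plus or minus 1.96 standard deviations. A minimal sketch (illustrative function name; absolute rather than percentage differences):

```python
import statistics

def bland_altman(method, reference):
    """Bland & Altman agreement statistics for paired measurements:
    returns (bias, lower limit, upper limit), where the limits are
    bias +/- 1.96 * SD of the paired differences."""
    diffs = [m - r for m, r in zip(method, reference)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Narrower limits of agreement indicate better agreement with the gold standard, which is the criterion the record uses to rank the ECD assessment methods.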

  13. Integrating model behavior, optimization, and sensitivity/uncertainty analysis: overview and application of the MOUSE software toolbox

    USDA-ARS?s Scientific Manuscript database

    This paper provides an overview of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) software application, an open-source, Java-based toolbox of visual and numerical analysis components for the evaluation of environmental models. MOUSE is based on the OPTAS model calibration syst...

  14. Instrument control software development process for the multi-star AO system ARGOS

    NASA Astrophysics Data System (ADS)

    Kulas, M.; Barl, L.; Borelli, J. L.; Gässler, W.; Rabien, S.

    2012-09-01

    The ARGOS project (Advanced Rayleigh guided Ground layer adaptive Optics System) will upgrade the Large Binocular Telescope (LBT) with an AO system consisting of six Rayleigh laser guide stars. This adaptive optics system integrates several control loops and many different components, such as lasers, calibration swing arms, and slope computers, that are dispersed throughout the telescope. The purpose of the instrument control software (ICS) is to run this AO system and to provide convenient client interfaces to the instruments and the control loops. The challenges for the ARGOS ICS are the development of a distributed and safety-critical software system with no defects in a short time, the creation of large and complex software programs with a maintainable code base, the delivery of software components with the desired functionality, and the support of geographically distributed project partners. To tackle these difficult tasks, the ARGOS software engineers reuse existing software such as the novel middleware from LINC-NIRVANA, an instrument for the LBT, provide many tests at different functional levels such as unit tests and regression tests, agree on code and architecture style, and deliver software incrementally while closely collaborating with the project partners. Many ARGOS ICS components are already successfully in use in the laboratories for testing ARGOS control loops.

  15. TWSTFT Link Calibration Report

    DTIC Science & Technology

    2015-09-01

    Annex II: TWSTFT link calibration with a GPS calibrator. Calibration reference: CI-888-2015 (final version, 1 September 2015). Abstract: This report includes the calibration results of the Lab(k)-PTB TWSTFT link and closure measurements of the BIPM

  16. Simple transfer calibration method for a Cimel Sun-Moon photometer: calculating lunar calibration coefficients from Sun calibration constants.

    PubMed

    Li, Zhengqiang; Li, Kaitao; Li, Donghui; Yang, Jiuchun; Xu, Hua; Goloub, Philippe; Victori, Stephane

    2016-09-20

    The new Cimel technologies allow both daytime and nighttime aerosol optical depth (AOD) measurements. Although the daytime AOD calibration protocols are well established, accurate and simple nighttime calibration is still a challenging task. Standard lunar-Langley and intercomparison calibration methods both require specific conditions in terms of atmospheric stability and site conditions. Additionally, the lunar irradiance model has some known limits on its uncertainty. This paper presents a simple calibration method that transfers the direct-Sun calibration constant, V0,Sun, to the lunar irradiance calibration coefficient, CMoon. Our approach is a pure calculation method, independent of site limits such as Moon phase. The method is also not affected by the lunar irradiance model limitations, which are the largest error source of traditional calibration methods. In addition, this transfer calibration approach is easy to use in the field, since CMoon can be obtained directly once V0,Sun is known. Error analysis suggests that the average uncertainty of CMoon over the 440-1640 nm bands obtained with the transfer method is 2.4%-2.8%, depending on the V0,Sun approach (Langley or intercomparison), which is theoretically comparable with that of the lunar-Langley approach. In this paper, the Sun-Moon transfer and the Langley methods are compared based on site measurements in Beijing, and the day-night measurement continuity and performance are analyzed.
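The transfer method above starts from the direct-Sun constant V0,Sun, which is classically obtained by Langley regression: ln V is linear in airmass m (ln V = ln V0 - m*tau), so the intercept extrapolated to zero airmass gives V0. A minimal sketch of that underlying step (illustrative code, not the authors' implementation; the Sun-to-Moon conversion itself is not reproduced here):

```python
import math

def langley_v0(airmass, signal):
    """Least-squares Langley regression: fit ln V = ln V0 - m * tau and
    return (V0, tau), the top-of-atmosphere constant and the optical
    depth, from paired airmass / detector-signal measurements."""
    y = [math.log(v) for v in signal]
    n = len(airmass)
    xbar = sum(airmass) / n
    ybar = sum(y) / n
    slope = (sum((x - xbar) * (yi - ybar) for x, yi in zip(airmass, y))
             / sum((x - xbar) ** 2 for x in airmass))
    intercept = ybar - slope * xbar
    return math.exp(intercept), -slope
```

With V0,Sun in hand, the record's contribution is computing CMoon from it analytically instead of performing a separate lunar Langley.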

  17. Spitzer/JWST Cross Calibration: IRAC Observations of Potential Calibrators for JWST

    NASA Astrophysics Data System (ADS)

    Carey, Sean J.; Gordon, Karl D.; Lowrance, Patrick; Ingalls, James G.; Glaccum, William J.; Grillmair, Carl J.; E Krick, Jessica; Laine, Seppo J.; Fazio, Giovanni G.; Hora, Joseph L.; Bohlin, Ralph

    2017-06-01

    We present observations at 3.6 and 4.5 microns using IRAC on the Spitzer Space Telescope of a set of main sequence A stars and white dwarfs that are potential calibrators across the JWST instrument suite. The stars range in brightness from 4.4 to 15 mag in the K band. The calibration observations use a redundancy similar to that of the observing strategy for the IRAC primary calibrators (Reach et al. 2005), and the photometry is obtained using identical methods and instrumental photometric corrections as those applied to the IRAC primary calibrators (Carey et al. 2009). The resulting photometry is then compared to the predictions based on spectra from the CALSPEC Calibration Database (http://www.stsci.edu/hst/observatory/crds/calspec.html) and the IRAC bandpasses. These observations are part of an ongoing collaboration between IPAC and STScI investigating absolute calibration in the infrared.

  18. Simultaneous calibration phantom commission and geometry calibration in cone beam CT

    NASA Astrophysics Data System (ADS)

    Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong

    2017-09-01

    Geometry calibration is a vital step for describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration phantom commission and geometry calibration are divided into two independent tasks. Small errors in ball-bearing (BB) positioning in the phantom-making step will severely degrade the quality of phantom calibration. To solve this problem, we propose an integrated method to simultaneously realize geometry phantom commission and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method considers BB centers in the phantom as an optimized parameter in the workflow. Specifically, an evaluation phantom and the corresponding evaluation contrast index are used to evaluate geometry artifacts for optimizing the BB coordinates in the geometry phantom. After utilizing particle swarm optimization, the CBCT geometry and BB coordinates in the geometry phantom are calibrated accurately and are then directly used for the next geometry calibration task in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and realistic CBCT data. The spatial resolution of reconstructed images using dental CBCT can reach up to 15 line pairs per cm. The proposed method is also superior to the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry phantom commission and geometry calibration.
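The record's joint search over BB coordinates and geometry parameters uses particle swarm optimization. A minimal, generic PSO loop is sketched below (illustrative code; the cost function standing in for the paper's evaluation contrast index, and all parameter values, are assumptions):

```python
import random

def pso_minimize(cost, bounds, n_particles=20, iters=60,
                 w=0.6, c1=1.4, c2=1.4, seed=1):
    """Minimal particle swarm optimizer: each particle tracks its
    personal best, the swarm tracks a global best, and velocities blend
    inertia with attraction toward both bests."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

In the paper's setting the parameter vector would hold the BB coordinates plus the CBCT geometry, and `cost` would evaluate the reconstruction's geometry artifacts.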

  19. Improving mass measurement accuracy in mass spectrometry based proteomics by combining open source tools for chromatographic alignment and internal calibration.

    PubMed

    Palmblad, Magnus; van der Burgt, Yuri E M; Dalebout, Hans; Derks, Rico J E; Schoenmaker, Bart; Deelder, André M

    2009-05-02

    Accurate mass determination enhances peptide identification in mass spectrometry based proteomics. We here describe the combination of two previously published open source software tools to improve mass measurement accuracy in Fourier transform ion cyclotron resonance mass spectrometry (FTICRMS). The first program, msalign, aligns one MS/MS dataset with one FTICRMS dataset. The second software, recal2, uses peptides identified from the MS/MS data for automated internal calibration of the FTICR spectra, resulting in sub-ppm mass measurement errors.
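The internal-calibration idea behind recal2 can be illustrated in miniature: confidently identified peptides supply theoretical masses, and the systematic deviation of the measured values from them defines a correction applied to the whole spectrum. The sketch below uses a single multiplicative correction estimated robustly as a median ratio; recal2's actual calibration model is not reproduced here, and the function name is an assumption.

```python
def internal_recalibrate(measured, theoretical):
    """Single-parameter internal calibration sketch: estimate a global
    multiplicative correction from calibrant peptides (median of the
    theoretical/measured mass ratios), then apply it to every value."""
    ratios = sorted(t / m for m, t in zip(measured, theoretical))
    k = ratios[len(ratios) // 2]  # robust against a few bad calibrants
    return k, [m * k for m in measured]
```

Applied per spectrum, a correction of this kind is what reduces FTICR mass errors to the sub-ppm level the record reports.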

  20. SPRT Calibration Uncertainties and Internal Quality Control at a Commercial SPRT Calibration Facility

    NASA Astrophysics Data System (ADS)

    Wiandt, T. J.

    2008-06-01

    The Hart Scientific Division of the Fluke Corporation operates two accredited standard platinum resistance thermometer (SPRT) calibration facilities, one at the Hart Scientific factory in Utah, USA, and the other at a service facility in Norwich, UK. The US facility is accredited through National Voluntary Laboratory Accreditation Program (NVLAP), and the UK facility is accredited through UKAS. Both provide SPRT calibrations using similar equipment and procedures, and at similar levels of uncertainty. These uncertainties are among the lowest available commercially. To achieve and maintain low uncertainties, it is required that the calibration procedures be thorough and optimized. However, to minimize customer downtime, it is also important that the instruments be calibrated in a timely manner and returned to the customer. Consequently, subjecting the instrument to repeated calibrations or extensive repeated measurements is not a viable approach. Additionally, these laboratories provide SPRT calibration services involving a wide variety of SPRT designs. These designs behave differently, yet predictably, when subjected to calibration measurements. To this end, an evaluation strategy involving both statistical process control and internal consistency measures is utilized to provide confidence in both the instrument calibration and the calibration process. This article describes the calibration facilities, procedure, uncertainty analysis, and internal quality assurance measures employed in the calibration of SPRTs. Data will be reviewed and generalities will be presented. Finally, challenges and considerations for future improvements will be discussed.

  1. Development of a new calibration procedure and its experimental validation applied to a human motion capture system.

    PubMed

    Royo Sánchez, Ana Cristina; Aguilar Martín, Juan José; Santolaria Mazo, Jorge

    2014-12-01

    Motion capture systems are often used for checking and analyzing human motion in biomechanical applications. It is important, in this context, that the systems provide the best possible accuracy. Among existing capture systems, optical systems are those with the highest accuracy. In this paper, the development of a new calibration procedure for optical human motion capture systems is presented. The performance and effectiveness of that new calibration procedure are also checked by experimental validation. The new calibration procedure consists of two stages. In the first stage, initial estimators of intrinsic and extrinsic parameters are sought. The camera calibration method used in this stage is the one proposed by Tsai. These parameters are determined from the camera characteristics, the spatial position of the camera, and the center of the capture volume. In the second stage, a simultaneous nonlinear optimization of all parameters is performed to identify the optimal values, which minimize the objective function. The objective function, in this case, minimizes two errors. The first error is the distance error between two markers placed in a wand. The second error is the error of position and orientation of the retroreflective markers of a static calibration object. The real co-ordinates of the two objects are calibrated in a co-ordinate measuring machine (CMM). The OrthoBio system is used to validate the new calibration procedure. Results are 90% lower than those from the previous calibration software and broadly comparable with results from a similarly configured Vicon system.
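One term of the second-stage objective function described above, the wand-length error, is easy to make concrete: reconstruct the two wand markers in 3D and penalize deviation of their distance from the CMM-calibrated nominal length. A sketch (illustrative code; the RMS form and function name are assumptions, and the static-object pose term is omitted):

```python
import math

def wand_length_error(marker_pairs_3d, nominal_length):
    """RMS deviation of the reconstructed distance between the two wand
    markers from the calibrated nominal length, over all wand poses."""
    errs = [math.dist(p, q) - nominal_length for p, q in marker_pairs_3d]
    return math.sqrt(sum(e * e for e in errs) / len(errs))
```

A nonlinear optimizer would adjust all camera parameters simultaneously to drive this term (together with the static-object term) to a minimum.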

  2. Improvement in QEPAS system utilizing a second harmonic based wavelength calibration technique

    NASA Astrophysics Data System (ADS)

    Zhang, Qinduan; Chang, Jun; Wang, Fupeng; Wang, Zongliang; Xie, Yulei; Gong, Weihua

    2018-05-01

    A simple laser wavelength calibration technique, based on the second harmonic signal, is demonstrated in this paper to improve the performance of a quartz enhanced photoacoustic spectroscopy (QEPAS) gas sensing system, e.g. improving the signal-to-noise ratio (SNR), detection limit, and long-term stability. A constant current corresponding to the gas absorption line, combined with a sinusoidal signal at f/2, is used to drive the laser (constant driving mode), and a software-based real-time wavelength calibration technique is developed to eliminate the wavelength drift due to ambient fluctuations. Compared to conventional wavelength modulation spectroscopy (WMS), this method allows a lower filtering bandwidth and an averaging algorithm to be applied to the QEPAS system, improving the SNR and detection limit. In addition, the real-time wavelength calibration technique guarantees that the laser output is modulated steadily at the gas absorption line. Water vapor was chosen as the target gas to evaluate the performance compared to the constant driving mode and a conventional WMS system. The water vapor sensor was made insensitive to incoherent external acoustic noise by the numerical averaging technique. As a result, the SNR of the wavelength-calibrated system is 12.87 times that of the conventional WMS system. The new system achieved a better linear response (R^2 = 0.9995) in the concentration range from 300 to 2000 ppmv, and achieved a minimum detection limit (MDL) of 630 ppbv.
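A software wavelength lock of the kind described can be reduced to a very small feedback step: the 2f waveform peaks at the absorption-line centre, so any offset of that peak from the middle of the scan is fed back as a DC current correction. The sketch below is an assumption about one plausible implementation, not the authors' algorithm; the gain constant is hypothetical.

```python
def current_correction(second_harmonic, samples_per_scan, gain=0.01):
    """One step of a software wavelength lock: locate the peak of the
    sampled 2f waveform and return a DC laser-current correction
    proportional to its offset from mid-scan (hypothetical gain)."""
    peak = max(range(len(second_harmonic)), key=second_harmonic.__getitem__)
    offset = peak - samples_per_scan // 2
    return -gain * offset
```

Repeating this every scan keeps the laser centred on the absorption line despite ambient drift.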

  3. A novel implementation of homodyne time interval analysis method for primary vibration calibration

    NASA Astrophysics Data System (ADS)

    Sun, Qiao; Zhou, Ling; Cai, Chenguang; Hu, Hongbo

    2011-12-01

    In this paper, the shortcomings of the conventional homodyne time interval analysis (TIA) method, and their causes, are described with respect to its software algorithm and hardware implementation, and on that basis a simplified TIA method is proposed with the help of virtual instrument technology. Equipped with an ordinary Michelson interferometer and a dual-channel synchronous data acquisition card, a primary vibration calibration system using the simplified method can accurately measure the complex sensitivity of accelerometers, meeting the uncertainty requirements laid down in the pertinent ISO standard. The validity and accuracy of the simplified TIA method are verified by simulation and comparison experiments, and its performance is analyzed. The simplified method is recommended for national metrology institutes of developing countries and industrial primary vibration calibration laboratories because of its simplified algorithm and low hardware requirements.

  4. Assessment of uncertainty in ROLO lunar irradiance for on-orbit calibration

    USGS Publications Warehouse

    Stone, T.C.; Kieffer, H.H.; Barnes, W.L.; Butler, J.J.

    2004-01-01

    A system to provide radiometric calibration of remote sensing imaging instruments on-orbit using the Moon has been developed by the US Geological Survey RObotic Lunar Observatory (ROLO) project. ROLO has developed a model for lunar irradiance which treats the primary geometric variables of phase and libration explicitly. The model fits hundreds of data points in each of 23 VNIR and 9 SWIR bands; input data are derived from lunar radiance images acquired by the project's on-site telescopes, calibrated to exoatmospheric radiance and converted to disk-equivalent reflectance. Experimental uncertainties are tracked through all stages of the data processing and modeling. Model fit residuals are approximately 1% in each band over the full range of observed phase and libration angles. Application of ROLO lunar calibration to SeaWiFS has demonstrated the capability for long-term instrument response trending with precision approaching 0.1% per year. Current work involves assessing the error in absolute responsivity and relative spectral response of the ROLO imaging systems, and propagation of error through the data reduction and modeling software systems with the goal of reducing the uncertainty in the absolute scale, now estimated at 5-10%. This level is similar to the scatter seen in ROLO lunar irradiance comparisons of multiple spacecraft instruments that have viewed the Moon. A field calibration campaign involving NASA and NIST has been initiated that ties the ROLO lunar measurements to the NIST (SI) radiometric scale.

  5. Evaluation of the interaction of coumarins with biomembrane models studied by differential scanning calorimetry and Langmuir-Blodgett techniques.

    PubMed

    Sarpietro, Maria Grazia; Giuffrida, Maria Chiara; Ottimo, Sara; Micieli, Dorotea; Castelli, Francesco

    2011-04-25

    Three coumarins, scopoletin (1), esculetin (2), and esculin (3), were investigated by differential scanning calorimetry and Langmuir-Blodgett techniques to gain information about the interaction of these compounds with cellular membranes. Phospholipids assembled as multilamellar vesicles or monolayers (at the air-water interface) were used as biomembrane models. Differential scanning calorimetry was employed to study the interaction of these coumarins with multilamellar vesicles and to evaluate their absorption by multilamellar vesicles. These experiments indicated that 1-3 interact in this manner to different extents. The Langmuir-Blodgett technique was used to study the effect of these coumarins on the organization of phospholipids assembled as a monolayer. The data obtained were in agreement with those obtained in the calorimetric experiments.

  6. Calibration of Action Cameras for Photogrammetric Purposes

    PubMed Central

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to use the sensor of an action camera, a careful and reliable self-calibration must be applied prior to any photogrammetric procedure; this is relatively difficult because of the camera's short focal length and the wide-angle lens used to obtain the maximum possible image resolution. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution. PMID:25237898

  7. Calibration of action cameras for photogrammetric purposes.

    PubMed

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to use the sensor of an action camera, a careful and reliable self-calibration must be applied prior to any photogrammetric procedure; this is relatively difficult because of the camera's short focal length and the wide-angle lens used to obtain the maximum possible image resolution. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution.

  8. Review of MEMS differential scanning calorimetry for biomolecular study

    NASA Astrophysics Data System (ADS)

    Yu, Shifeng; Wang, Shuyu; Lu, Ming; Zuo, Lei

    2017-12-01

    Differential scanning calorimetry (DSC) is one of the few techniques that allow direct determination of enthalpy values for binding reactions and conformational transitions in biomolecules. It provides the thermodynamic information of the biomolecules, consisting of Gibbs free energy, enthalpy, and entropy, in a straightforward manner that enables deep understanding of structure-function relationships in biomolecules, such as the folding/unfolding of proteins and DNA, and ligand binding. This review provides an up-to-date overview of the applications of DSC in biomolecular studies, such as the bovine serum albumin denaturation study and the relationship between the melting point of lysozyme and the scanning rate. We also introduce recent advances in the development of micro-electro-mechanical-systems (MEMS) based DSCs.

  9. Analysis of Performance of Stereoscopic-Vision Software

    NASA Technical Reports Server (NTRS)

    Kim, Won; Ansar, Adnan; Steele, Robert; Steinke, Robert

    2007-01-01

    A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used in computing distances to terrain points to be included in constructing a three-dimensional model of the terrain. The analysis included effects of correlation-window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range error but stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel.
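The link between the disparity error quoted above and down-range error follows from the pinhole stereo relation z = f*B/d: differentiating gives dz = z^2/(f*B) * dd, so range error grows quadratically with distance. A short sketch (illustrative functions; the focal length and baseline values in the test are hypothetical, not the analyzed system's):

```python
def range_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo range: z = f * B / d, with focal length in pixels,
    baseline in metres, disparity in pixels."""
    return f_px * baseline_m / disparity_px

def downrange_error(f_px, baseline_m, z_m, disparity_err_px):
    """First-order propagation of disparity error to down-range error:
    dz = z**2 / (f * B) * dd."""
    return z_m ** 2 / (f_px * baseline_m) * disparity_err_px
```

With the reported 0.32-pixel disparity error, this relation converts directly into a distance-dependent range-error budget.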

  10. SAR calibration technology review

    NASA Technical Reports Server (NTRS)

    Walker, J. L.; Larson, R. W.

    1981-01-01

    Synthetic Aperture Radar (SAR) calibration technology including a general description of the primary calibration techniques and some of the factors which affect the performance of calibrated SAR systems are reviewed. The use of reference reflectors for measurement of the total system transfer function along with an on-board calibration signal generator for monitoring the temporal variations of the receiver to processor output is a practical approach for SAR calibration. However, preliminary error analysis and previous experimental measurements indicate that reflectivity measurement accuracies of better than 3 dB will be difficult to achieve. This is not adequate for many applications and, therefore, improved end-to-end SAR calibration techniques are required.

  11. Liquid Argon Calorimetry for ATLAS

    NASA Astrophysics Data System (ADS)

    Robinson, Alan

    2008-05-01

    This summer, the largest collaborative physics project since the Manhattan Project will go online. One of four experiments for the Large Hadron Collider at CERN in Geneva, ATLAS employs over 2000 people. Canadians have helped design, construct, and calibrate the liquid argon calorimeters for ATLAS to capture the products of the high energy collisions produced by the LHC. From an undergraduate's perspective, explore how these calorimeters are made to handle their harsh requirements. From nearly a billion proton-proton collisions a second, physicists hope to discover the Higgs boson and other new fundamental particles.

  12. Feasibility analysis on integration of luminous environment measuring and design based on exposure curve calibration

    NASA Astrophysics Data System (ADS)

    Zou, Yuan; Shen, Tianxing

    2013-03-01

    Besides illumination calculation during architectural and luminous environment design, to provide more varieties of photometric data, this paper presents the combination of luminous environment design with the SM light environment measuring system, which contains a set of experimental devices, including light-information collection and processing modules, and can offer various types of photometric data. During the research, we introduced a simulation method for calibration, which mainly includes rebuilding experiment scenes in 3ds Max Design, calibrating this computer-aided design software in the simulated environment under various typical light sources, and fitting the exposure curves of rendered images. As the analysis proceeded, the operation sequence and points of attention for the simulated calibration were established, and connections between the Mental Ray renderer and the SM light environment measuring system were made. The paper thus provides a valuable reference for coordinating luminous environment design with the SM light environment measuring system.

  13. Calibration Procedures in Mid Format Camera Setups

    NASA Astrophysics Data System (ADS)

    Pivnicka, F.; Kemper, G.; Geissler, S.

    2012-07-01

    A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow, awareness of the sensitive parts is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform, and the specific characteristics of the mid-format camera make a professional setup with various calibration and misalignment operations necessary. An important part is a proper camera calibration. Aerial images over a well-designed test field with 3D structures and/or different flight altitudes enable the determination of calibration values in the Bingo software, and it will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted to the camera in a solid and reliable way. Besides the mechanical work, especially in mounting the camera beside the IMU, two lever arms have to be measured to mm accuracy: the lever arm from the GPS antenna to the IMU's calibrated centre, and the lever arm from the IMU centre to the camera projection centre. The measurement with a total station is not a difficult task, but the definition of the correct centres and the need for rotation matrices can cause serious accuracy problems. The benefit of small- and medium-format cameras is that smaller aircraft can also be used; in that case, a gyro-based stabilized platform is recommended. This requires the IMU to be mounted beside the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the problematic aspect is that the IMU-to-GPS-antenna lever arm is floating. In effect, we have to deal with an additional data stream, the stabilizer movement values, to correct the floating lever-arm distances. If the post-processing of the GPS/IMU data, taking the floating levers into account, delivers the expected result, the lever arms between IMU and camera can be applied
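The floating lever-arm correction discussed above reduces to rotating a body-frame lever arm by the current platform attitude before adding it to the IMU position. The sketch below shows only the yaw factor of the full roll/pitch/yaw rotation chain (illustrative functions and frames; not the paper's processing software):

```python
import math

def rotate_z(v, yaw_rad):
    """Rotate a body-frame vector about the z-axis; one factor of the
    full roll/pitch/yaw rotation applied each epoch."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    x, y, z = v
    return (c * x - s * y, s * x + c * y, z)

def antenna_position(imu_pos, lever_body, yaw_rad):
    """GPS antenna position = IMU position + rotated lever arm. With a
    stabilized mount the rotation, and hence the effective lever arm,
    changes every epoch and must be recomputed from stabilizer angles."""
    dx, dy, dz = rotate_z(lever_body, yaw_rad)
    x, y, z = imu_pos
    return (x + dx, y + dy, z + dz)
```

Feeding the stabilizer angle stream into this correction is what keeps the GPS/IMU post-processing consistent despite the moving mount.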

  14. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    NASA Astrophysics Data System (ADS)

    Winkler, A. W.; Zagar, B. G.

    2013-08-01

    An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. To this end, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquired image. The employed model allows for strictly separating the intrinsic and the extrinsic parameters, so the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives.

  15. Evolution of the JPSS Ground Project Calibration and Validation System

    NASA Technical Reports Server (NTRS)

    Purcell, Patrick; Chander, Gyanesh; Jain, Peyush

    2016-01-01

    The Joint Polar Satellite System (JPSS) is the National Oceanic and Atmospheric Administration's (NOAA) next-generation operational Earth observation Program that acquires and distributes global environmental data from multiple polar-orbiting satellites. The JPSS Program plays a critical role in NOAA's mission to understand and predict changes in weather, climate, oceans, coasts, and space environments, which supports the Nation's economy and protection of lives and property. The National Aeronautics and Space Administration (NASA) is acquiring and implementing the JPSS, comprised of flight and ground systems, on behalf of NOAA. The JPSS satellites are planned to fly in the afternoon orbit and will provide operational continuity of satellite-based observations and products for the NOAA Polar-orbiting Operational Environmental Satellites (POES) and the Suomi National Polar-orbiting Partnership (SNPP) satellite. To support the JPSS Calibration and Validation (CalVal) node, the Government Resource for Algorithm Verification, Independent Test, and Evaluation (GRAVITE) services facilitate: Algorithm Integration and Checkout, Algorithm and Product Operational Tuning, Instrument Calibration, Product Validation, Algorithm Investigation, and Data Quality Support and Monitoring. GRAVITE is a mature, deployed system that currently supports the SNPP mission and has been in operation since SNPP launch. This paper discusses the major re-architecture for Block 2.0, which incorporates SNPP lessons learned, describes the architecture of the system, and demonstrates how GRAVITE has evolved as a system with increased performance. It is now a robust, stable, reliable, maintainable, scalable, and secure system that supports development, test, and production strings, replaces proprietary and custom software, uses open-source software, and is compliant with NASA and NOAA standards.

  16. Evolution of the JPSS Ground Project Calibration and Validation System

    NASA Technical Reports Server (NTRS)

    Chander, Gyanesh; Jain, Peyush

    2014-01-01

    The Joint Polar Satellite System (JPSS) is the National Oceanic and Atmospheric Administration's (NOAA) next-generation operational Earth observation Program that acquires and distributes global environmental data from multiple polar-orbiting satellites. The JPSS Program plays a critical role in NOAA's mission to understand and predict changes in weather, climate, oceans, coasts, and space environments, which supports the Nation's economy and protection of lives and property. The National Aeronautics and Space Administration (NASA) is acquiring and implementing the JPSS, comprised of flight and ground systems, on behalf of NOAA. The JPSS satellites are planned to fly in the afternoon orbit and will provide operational continuity of satellite-based observations and products for the NOAA Polar-orbiting Operational Environmental Satellites (POES) and the Suomi National Polar-orbiting Partnership (SNPP) satellite. To support the JPSS Calibration and Validation (CalVal) node, the Government Resource for Algorithm Verification, Independent Test, and Evaluation (GRAVITE) services facilitate: Algorithm Integration and Checkout, Algorithm and Product Operational Tuning, Instrument Calibration, Product Validation, Algorithm Investigation, and Data Quality Support and Monitoring. GRAVITE is a mature, deployed system that currently supports the SNPP mission and has been in operation since SNPP launch. This paper discusses the major re-architecture for Block 2.0, which incorporates SNPP lessons learned, describes the architecture of the system, and demonstrates how GRAVITE has evolved as a system with increased performance. It is now a robust, stable, reliable, maintainable, scalable, and secure system that supports development, test, and production strings, replaces proprietary and custom software, uses open-source software, and is compliant with NASA and NOAA standards.

  17. TRACC: an open source software for processing sap flux data from thermal dissipation probes

    Treesearch

    Eric J. Ward; Jean-Christophe Domec; John King; Ge Sun; Steve McNulty; Asko Noormets

    2017-01-01

    Key message: TRACC is open-source software for standardizing the cleaning, conversion, and calibration of sap flux density data from thermal dissipation probes, addressing issues of nighttime transpiration and water storage. Abstract: Thermal dissipation probes (TDPs) have become a widely used method of monitoring plant water use in recent years. The use of TDPs...

  18. Isothermal titration calorimetry for measuring macromolecule-ligand affinity.

    PubMed

    Duff, Michael R; Grubbs, Jordan; Howell, Elizabeth E

    2011-09-07

    Isothermal titration calorimetry (ITC) is a useful tool for understanding the complete thermodynamic picture of a binding reaction. In the biological sciences, macromolecular interactions are essential in understanding the machinery of the cell. Experimental conditions, such as buffer and temperature, can be tailored to the particular binding system being studied. However, careful planning is needed, since certain ligand and macromolecule concentration ranges are necessary to obtain useful data. Concentrations of the macromolecule and ligand need to be accurately determined for reliable results. Care also needs to be taken when preparing the samples, as impurities can significantly affect the experiment. When ITC experiments, along with controls, are performed properly, useful binding information, such as the stoichiometry, affinity, and enthalpy, is obtained. By running additional experiments under different buffer or temperature conditions, more detailed information can be obtained about the system. A protocol for the basic setup of an ITC experiment is given.
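
    The thermodynamic picture mentioned above follows from two standard relations once the ITC fit has produced an association constant Ka and enthalpy ΔH: ΔG = −RT ln Ka and ΔS = (ΔH − ΔG)/T. A minimal sketch, with illustrative numbers not taken from the paper:

    ```python
    import math

    R = 8.314  # gas constant, J/(mol*K)

    def itc_thermodynamics(Ka, dH, T=298.15):
        """From a fitted association constant Ka (1/M) and binding enthalpy
        dH (J/mol), compute free energy dG = -R*T*ln(Ka) and entropy
        dS = (dH - dG)/T, both at temperature T (K)."""
        dG = -R * T * math.log(Ka)
        dS = (dH - dG) / T
        return dG, dS

    # Illustrative values: Ka = 1e6 M^-1, dH = -40 kJ/mol at 25 degrees C
    dG, dS = itc_thermodynamics(1e6, -40e3)  # dG ~ -34 kJ/mol, dS < 0
    ```

    Repeating the experiment at several temperatures additionally yields the heat capacity change ΔCp from the slope of ΔH versus T.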

  19. [Application of AOTF in spectral analysis. 1. Hardware and software designs for the self-constructed visible AOTF spectrophotometer].

    PubMed

    He, Jia-yao; Peng, Rong-fei; Zhang, Zhan-xia

    2002-02-01

    A self-constructed visible spectrophotometer using an acousto-optic tunable filter (AOTF) as the dispersing element is described. Two different AOTFs (one from the Institute for Silicate (Shanghai, China) and the other from Brimrose (USA)) were tested. The software, written in Visual C++ and running on the Windows 98 platform, is an application with a dual database and multiple windows. Four independent windows (scanning, quantitative, calibration, and result) are incorporated. A Fourier self-deconvolution algorithm is also included to improve the spectral resolution. The wavelengths are calibrated using polynomial curve fitting. The spectra and calibration curves of soluble aniline blue and phenol red are presented to demonstrate the feasibility of the constructed spectrophotometer.

  20. Absolute radiometric calibration of Landsat using a pseudo invariant calibration site

    USGS Publications Warehouse

    Helder, D.; Thome, K.J.; Mishra, N.; Chander, G.; Xiong, Xiaoxiong; Angal, A.; Choi, Tae-young

    2013-01-01

    Pseudo invariant calibration sites (PICS) have been used for on-orbit radiometric trending of optical satellite systems for more than 15 years. This approach to vicarious calibration has demonstrated a high degree of reliability and repeatability at the level of 1-3% depending on the site, spectral channel, and imaging geometries. A variety of sensors have used this approach for trending because it is broadly applicable and easy to implement. Models to describe the surface reflectance properties, as well as the intervening atmosphere have also been developed to improve the precision of the method. However, one limiting factor of using PICS is that an absolute calibration capability has not yet been fully developed. Because of this, PICS are primarily limited to providing only long term trending information for individual sensors or cross-calibration opportunities between two sensors. This paper builds an argument that PICS can be used more extensively for absolute calibration. To illustrate this, a simple empirical model is developed for the well-known Libya 4 PICS based on observations by Terra MODIS and EO-1 Hyperion. The model is validated by comparing model predicted top-of-atmosphere reflectance values to actual measurements made by the Landsat ETM+ sensor reflective bands. Following this, an outline is presented to develop a more comprehensive and accurate PICS absolute calibration model that can be Système international d'unités (SI) traceable. These initial concepts suggest that absolute calibration using PICS is possible on a broad scale and can lead to improved on-orbit calibration capabilities for optical satellite sensors.

  1. Multi-species beam hardening calibration device for x-ray microtomography

    NASA Astrophysics Data System (ADS)

    Evershed, Anthony N. Z.; Mills, David; Davis, Graham

    2012-10-01

    Impact-source X-ray microtomography (XMT) is a widely used benchtop alternative to synchrotron radiation microtomography. Since X-rays from a tube are polychromatic, however, greyscale `beam hardening' artefacts are produced by the preferential absorption of low-energy photons in the beam path. A multi-material `carousel' test piece was developed to offer a wider range of X-ray attenuations from well-characterised filters than single-material step wedges can practically produce, and optimization software was developed to derive a beam-hardening correction using the Nelder-Mead optimization method, tuned for specimens composed of other materials (such as hydroxyapatite [HA] or barium for dental applications). The carousel test piece produced calibration polynomials reliably and with a significantly smaller discrepancy between the calculated and measured attenuations than the calibration step wedge previously in use. An immersion tank was constructed and used to simplify multi-material samples, negating the beam-hardening effect of low-atomic-number materials within the specimen when measuring the mineral concentration of higher-Z regions. When scanned in water at an acceleration voltage of 90 kV, a Scanco AG hydroxyapatite/poly(methyl methacrylate) calibration phantom closely approximates a single-material system, producing accurate hydroxyapatite concentration measurements. This system can then be corrected for beam hardening for the material of interest.
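
    The calibration-polynomial idea can be illustrated without the Nelder-Mead machinery. Given known carousel filter thicknesses and the measured polychromatic attenuations (the numbers below are invented for illustration, and the direct polynomial least-squares fit stands in for the paper's optimization), a correction polynomial maps measured attenuation back onto a scale that is linear in thickness, i.e. monochromatic-equivalent:

    ```python
    import numpy as np

    # Known filter thicknesses (mm Al-equivalent) and the measured
    # polychromatic attenuations -ln(I/I0); the sub-linear growth of the
    # measured values with thickness is the beam-hardening signature.
    thickness = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
    measured = np.array([0.0, 0.42, 0.78, 1.40, 2.35, 3.60])

    mu = 0.9  # assumed effective monochromatic attenuation coefficient per mm

    # Fit a cubic correction polynomial: measured attenuation -> mu * thickness
    coeffs = np.polyfit(measured, mu * thickness, deg=3)
    corrected = np.polyval(coeffs, measured)  # approximately linear in thickness
    ```

    In practice the polynomial would be applied to every projection value before reconstruction, which is what removes the cupping artefact.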

  2. SU-F-E-19: A Novel Method for TrueBeam Jaw Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corns, R; Zhao, Y; Huang, V

    2016-06-15

    Purpose: A simple jaw calibration method is proposed for the Varian TrueBeam using an EPID-encoder combination that gives accurate field sizes and a homogeneous junction dose. This benefits clinical applications such as mono-isocentric half-beam-block breast cancer or head and neck cancer treatment with junction/field matching. Methods: We use an EPID imager with pixel size 0.392 mm × 0.392 mm to determine the radiation jaw position as measured from radio-opaque markers aligned with the crosshair. We acquire two images with different symmetric field sizes and record each individual jaw's encoder values. A linear relationship between each jaw's position and its encoder value is established, from which we predict the encoder values that produce the jaw positions required by TrueBeam's calibration procedure. During TrueBeam's jaw calibration procedure, we move the jaw with the pendant to set it into position using the predicted encoder value. The overall accuracy is under 0.1 mm. Results: Our in-house software analyses the images and provides sub-pixel accuracy in determining the field centre and radiation edges (50% dose of the profile). We verified that the TrueBeam encoder provides a reliable linear relationship for each individual jaw position (R² > 0.9999), from which the encoder values necessary to set the jaw calibration points (1 cm and 19 cm) are predicted. Junction-matching dose inhomogeneities were improved from >±20% to <±6% using this new calibration protocol. However, one technical challenge remains for junction matching if the collimator walkout is large. Conclusion: Our new TrueBeam jaw calibration method can systematically calibrate the jaws to the crosshair with sub-pixel accuracy and provides both good junction doses and field sizes. This method does not compensate for a larger collimator walkout, but can be used as the foundation for addressing the walkout issue.
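
    The two-point linear map at the heart of the method is straightforward to sketch. The encoder counts below are invented for illustration; the fit and the prediction of encoder values at the 1 cm and 19 cm calibration points follow the procedure described in the abstract:

    ```python
    import numpy as np

    # Jaw positions measured on the EPID at two symmetric field sizes,
    # and the encoder counts recorded at the same moments (illustrative).
    positions = np.array([5.0, 15.0])      # jaw position in cm
    encoders = np.array([1250.0, 3750.0])  # raw encoder counts

    # Linear relationship position -> encoder (R^2 > 0.9999 in the paper)
    slope, intercept = np.polyfit(positions, encoders, 1)

    def predicted_encoder(target_cm):
        """Encoder value to dial in with the pendant so the jaw sits at
        the requested calibration position."""
        return slope * target_cm + intercept

    targets = [predicted_encoder(1.0), predicted_encoder(19.0)]
    ```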

  3. Self-Calibrating Pressure Transducer

    NASA Technical Reports Server (NTRS)

    Lueck, Dale E. (Inventor)

    2006-01-01

    A self-calibrating pressure transducer is disclosed. The device uses an embedded zirconia membrane which pumps a determined quantity of oxygen into the device. The associated pressure can be determined, and thus the transducer pressure readings can be calibrated. The zirconia membrane obtains oxygen from the surrounding environment when possible; otherwise, an oxygen reservoir or other source is utilized. In another embodiment, a reversible fuel cell assembly is used to pump oxygen and hydrogen into the system. Since a known amount of gas is pumped across the cell, the pressure produced can be determined, and thus the device can be calibrated. An isolation valve system allows the device to be calibrated in situ. Calibration is optionally automated so that it can be continuously monitored. The device is preferably a fully integrated MEMS device. Since the device can be calibrated without removing it from the process, reductions in cost and downtime are realized.

  4. Evaluation of three flame retardant (FR) grey cotton blend nonwoven fabrics using micro-scale combustion calorimetry

    USDA-ARS?s Scientific Manuscript database

    Unbleached (grey or greige) cotton nonwoven (NW) fabrics (with 12.5% polypropylene scrim) were treated with three phosphate-nitrogen based FR formulations and evaluated with micro-scale combustion calorimetry (MCC). Heat release rate (HRR), Peak heat rate (PHRR), temperature at peak heat release ra...

  5. A quantitative reconstruction software suite for SPECT imaging

    NASA Astrophysics Data System (ADS)

    Namías, Mauro; Jeraj, Robert

    2017-11-01

    Quantitative Single Photon Emission Tomography (SPECT) imaging allows for measurement of activity concentrations of a given radiotracer in vivo. Although SPECT has usually been perceived as non-quantitative by the medical community, the introduction of accurate CT-based attenuation correction and scatter correction from hybrid SPECT/CT scanners has enabled SPECT systems to be as quantitative as Positron Emission Tomography (PET) systems. We implemented a software suite to reconstruct quantitative SPECT images from hybrid or dedicated SPECT systems with a separate CT scanner. Attenuation, scatter and collimator response corrections were included in an Ordered Subset Expectation Maximization (OSEM) algorithm. A novel scatter fraction estimation technique was introduced. The SPECT/CT system was calibrated with a cylindrical phantom and quantitative accuracy was assessed with an anthropomorphic phantom and a NEMA/IEC image quality phantom. Accurate activity measurements were achieved at an organ level. This software suite helps increase the quantitative accuracy of SPECT scanners.
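
    The OSEM algorithm named above reduces, with a single subset, to the classic MLEM multiplicative update. A toy sketch on a tiny synthetic system (the system matrix and counts are invented; the real suite folds attenuation, scatter, and collimator response into the forward model A):

    ```python
    import numpy as np

    def mlem(A, counts, n_iter=2000):
        """Plain MLEM update (OSEM with one subset):
        x <- x * A^T(counts / (A x)) / (A^T 1)."""
        x = np.ones(A.shape[1])
        sens = A.T @ np.ones(A.shape[0])  # sensitivity image
        for _ in range(n_iter):
            proj = A @ x
            x *= (A.T @ (counts / np.maximum(proj, 1e-12))) / sens
        return x

    # Tiny toy system: 3 detector bins observing 2 voxels
    A = np.array([[1.0, 0.2],
                  [0.5, 0.5],
                  [0.2, 1.0]])
    truth = np.array([4.0, 2.0])
    x = mlem(A, A @ truth)  # converges toward `truth` on noise-free data
    ```

    Splitting the projections into ordered subsets and cycling the same update over them is what turns this into OSEM and accelerates convergence.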

  6. Energy balance in man measured by direct and indirect calorimetry.

    PubMed

    Webb, P; Annis, J F; Troutman, S J

    1980-06-01

    In six 24-hr measurements of energy balance, direct and indirect calorimetry agreed within ±3%, which is probably the range of experimental error. But in seven other 24-hr periods there was disagreement in the range of 8 to 23%, and these were usually days when the subjects ate much less than they spent metabolically. Our direct calorimeter is an insulated, water-cooled suit. Continuous measurements of O2 consumption and CO2 production provided data on metabolic expenditure (M) by indirect calorimetry. The 24-hr values for M matched the energy losses within ±60 kcal (±3% of M) in four men who rested all day and lay down to sleep at night. Similar agreement was seen in one of the four who worked on a treadmill for 4 hr and stayed busy all day, but in another the energy losses were 342 kcal greater than M (10% of M). When the experiments gave values for M minus the losses greater than ±60 kcal, this is called "unmeasured energy". In further experiments, two subjects stayed awake for 24 hr, and their unmeasured energies were 279 and 393 kcal. The same two men, eating sparingly, also worked for 24 hr so as to double their resting metabolic expenditures; the unmeasured energies were even larger, 380 and 958 kcal. When they repeated the 24 hr of mild work but ate nearly as much as they spent metabolically, one man was near energy balance, while the other showed an unmeasured energy of −363 kcal. Little heat storage was evident in these experiments; therefore, heat balance was present and energy balance should have been present. In the group of 13 experiments, it appeared that the greater the food deficit, the larger the unmeasured energy (the excess of metabolic expenditure over loss of energy).

  7. Mathematical Model for Localised and Surface Heat Flux of the Human Body Obtained from Measurements Performed with a Calorimetry Minisensor

    PubMed Central

    Socorro, Fabiola; Rodríguez de Rivera, Pedro Jesús; Rodríguez de Rivera, Miriam

    2017-01-01

    The accuracy of the direct and local measurements of the heat power dissipated by the surface of the human body, using a calorimetry minisensor, is directly related to the calibration rigor of the sensor and the correct interpretation of the experimental results. For this, it is necessary to know the characteristics of the body’s local heat dissipation. When the sensor is placed on the surface of the human body, the body reacts until a steady state is reached. We propose a mathematical model that represents the rate of heat flow at a given location on the surface of a human body by the sum of a series of exponentials: W(t) = A0 + ∑Aiexp(−t/τi). In this way, transient and steady states of heat dissipation can be interpreted. This hypothesis has been tested by simulating the operation of the sensor. At the steady state, the power detected in the measurement area (4 cm2) varies depending on the sensor’s thermostat temperature, as well as the physical state of the subject. For instance, for a thermostat temperature of 24 °C, this power can vary between 100–250 mW in a healthy adult. In the transient state, two exponentials are sufficient to represent this dissipation, with 3 and 70 s being the mean values of its time constants. PMID:29182567

  8. Mathematical Model for Localised and Surface Heat Flux of the Human Body Obtained from Measurements Performed with a Calorimetry Minisensor.

    PubMed

    Socorro, Fabiola; Rodríguez de Rivera, Pedro Jesús; Rodríguez de Rivera, Miriam; Rodríguez de Rivera, Manuel

    2017-11-28

    The accuracy of the direct and local measurements of the heat power dissipated by the surface of the human body, using a calorimetry minisensor, is directly related to the calibration rigor of the sensor and the correct interpretation of the experimental results. For this, it is necessary to know the characteristics of the body's local heat dissipation. When the sensor is placed on the surface of the human body, the body reacts until a steady state is reached. We propose a mathematical model that represents the rate of heat flow at a given location on the surface of a human body by the sum of a series of exponentials: W(t) = A0 + ∑Aiexp(−t/τi). In this way, transient and steady states of heat dissipation can be interpreted. This hypothesis has been tested by simulating the operation of the sensor. At the steady state, the power detected in the measurement area (4 cm²) varies depending on the sensor's thermostat temperature, as well as the physical state of the subject. For instance, for a thermostat temperature of 24 °C, this power can vary between 100-250 mW in a healthy adult. In the transient state, two exponentials are sufficient to represent this dissipation, with 3 and 70 s being the mean values of its time constants.
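
    The two-exponential model is easy to evaluate directly. The sketch below uses the paper's mean time constants (3 s and 70 s) and a steady-state power in the reported 100-250 mW range, but the amplitudes are illustrative:

    ```python
    import numpy as np

    def heat_flow(t, A0, terms):
        """W(t) = A0 + sum_i Ai * exp(-t / tau_i): transient relaxation of
        the local heat flow toward its steady-state value A0 (mW)."""
        return A0 + sum(Ai * np.exp(-t / tau) for Ai, tau in terms)

    # Fast (3 s) and slow (70 s) components; amplitudes are assumed values.
    t = np.linspace(0.0, 600.0, 601)
    w = heat_flow(t, A0=180.0, terms=[(60.0, 3.0), (-40.0, 70.0)])
    # After roughly five slow time constants (~350 s) the signal sits on
    # the 180 mW steady-state plateau.
    ```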

  9. Photolyses of mammalian carboxy-hemoglobin studied by photoacoustic calorimetry

    NASA Astrophysics Data System (ADS)

    Zhao, JinYu; Li, JiaHuang; Zhang, Zheng; Zhang, ShuYi; Qu, Min; Fang, JianWen; Hua, ZiChun

    2013-07-01

    The enthalpy and conformational volume changes in the photolyses of carboxy-hemoglobin (HbCO) of human, bovine, pig, horse and rabbit are investigated by photoacoustic calorimetry. Considering the time scales of the exciting laser pulse and the receiving ultrasound transducers (PVDF films and PZT ceramics), as well as the reaction lifetimes in the photolysis processes of HbCO, the measured results are related to the geminate recombination and tertiary relaxation in photolyses of HbCO. Moreover, the quantum yields of the five mammals are also measured by laser pump-probe technique. The results show that the dynamic parameters, such as enthalpy and conformational volume changes, differ between the processes of the geminate recombination and tertiary relaxation. Also, the dynamic parameters differ among the five mammals although some of them may be consistent with each other.

  10. SUMS calibration test report

    NASA Technical Reports Server (NTRS)

    Robertson, G.

    1982-01-01

    Calibration was performed on the shuttle upper atmosphere mass spectrometer (SUMS). The results of the calibration and the as-run test procedures are presented. The output data are described, and engineering data conversion factors, tables and curves, and calibration of the instrument gauges are included. Static calibration results are given, including instrument sensitivity versus external pressure for N2 and O2, data from each calibration scan, data plots for N2 and O2, sensitivity of SUMS at the inlet for N2 and O2, and the 14/28 ratio for nitrogen and the 16/32 ratio for oxygen.

  11. Calibration of X-Ray Observatories

    NASA Technical Reports Server (NTRS)

    Weisskopf, Martin C.; O'Dell, Stephen L.

    2011-01-01

    Accurate calibration of x-ray observatories has proved an elusive goal. Inaccuracies and inconsistencies amongst on-ground measurements, differences between on-ground and in-space performance, in-space performance changes, and the absence of cosmic calibration standards whose physics we truly understand have precluded absolute calibration better than several percent and relative spectral calibration better than a few percent. The philosophy "the model is the calibration" relies upon a complete high-fidelity model of performance and an accurate verification and calibration of this model. As high-resolution x-ray spectroscopy begins to play a more important role in astrophysics, additional issues in accurately calibrating at high spectral resolution become more evident. Here we review the challenges of accurately calibrating the absolute and relative response of x-ray observatories. On-ground x-ray testing by itself is unlikely to achieve a high-accuracy calibration of in-space performance, especially when the performance changes with time. Nonetheless, it remains an essential tool in verifying functionality and in characterizing and verifying the performance model. In the absence of verified cosmic calibration sources, we also discuss the notion of an artificial, in-space x-ray calibration standard.

  12. TH-CD-201-09: High Spatial Resolution Absorbed Dose to Water Measurements Using Optical Calorimetry in Megavoltage External Beam Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flores-Martinez, E; DeWerd, L; Radtke, J

    2016-06-15

    Purpose: To develop and implement a high-spatial-resolution calorimetry methodology to measure absorbed dose to water (ADW) using phase shifts (PSs) of light passing through a water phantom, and to compare measurements with theoretical calculations. Methods: Radiation-induced temperature changes were measured using the PSs of a He-Ne laser beam passing through a (10×10×10) cm³ water phantom. PSs were measured using a Michelson interferometer, recording the time-dependent fringe patterns on a CCD camera. The phantom was positioned at the center of the radiation field. A Varian 21EX was used to deliver 500 MU from a 9 MeV beam using a (6×6) cm² cone. A 127 cm SSD was used, and the PSs were measured at depths ranging from 1.90 cm to 2.10 cm in steps of 0.05 cm by taking profiles at the corresponding rows across the image. PSs were computed by taking the difference between pre- and post-irradiation image frames and then measuring the amplitude of the resulting image profiles. An amplitude-to-PS calibration curve was generated using a piezoelectric transducer to mechanically induce PSs between 0.05 and 1.50 radians in steps of 0.05 radians. The temperature dependence of the refractive index of water at 632.8 nm was used to convert PSs to ADW. Measured results were compared with ADW values calculated using the linac output calibration and commissioning data. Results: Milliradian resolution in PS measurement was achieved using the described methodology. Measured radiation-induced PSs ranged from 0.10 ± 0.01 to 0.12 ± 0.01 radians at the investigated depths. After converting PSs to ADW, measured and calculated ADW values agreed within the measurement uncertainty. Conclusion: This work shows that interferometer-based calorimetry measurements are capable of achieving sub-millimeter resolution when measuring 2D temperature/dose distributions, which is particularly useful for characterizing beams from modalities such as SRS, proton therapy, or microbeams.
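
    The phase-to-dose conversion can be sketched from first principles: the phase shift over path length L is Δφ = 2πL·|dn/dT|·ΔT/λ, and in calorimetry the dose is D = c·ΔT if all absorbed energy appears as heat. The values of dn/dT and L below are assumptions chosen for illustration, not taken from the paper, though the resulting dose for a mid-range 0.11 rad shift lands in a plausible range for 500 MU:

    ```python
    import math

    def phase_to_dose(dphi, L=0.10, wavelength=632.8e-9,
                      dn_dT=-1.0e-4, c_water=4186.0):
        """Convert a measured interferometric phase shift (rad) into absorbed
        dose to water (Gy).  L is the optical path length in the phantom (m),
        dn/dT the thermo-optic coefficient of water (1/K, assumed value);
        only its magnitude matters for the temperature rise."""
        dT = dphi * wavelength / (2.0 * math.pi * L * abs(dn_dT))  # kelvin
        return c_water * dT  # Gy, assuming no heat defect

    dose = phase_to_dose(0.11)  # mid-range of the measured 0.10-0.12 rad
    ```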

  13. Evaluation of “Autotune” calibration against manual calibration of building energy models

    DOE PAGES

    Chaudhary, Gaurav; New, Joshua; Sanyal, Jibonananda; ...

    2016-08-26

    Our paper demonstrates the application of Autotune, a methodology aimed at automatically producing calibrated building energy models using measured data, in two case studies. In the first case, a building model is de-tuned by deliberately injecting faults into more than 60 parameters. This model was then calibrated using Autotune, and its accuracy with respect to the original model was evaluated in terms of the industry-standard normalized mean bias error and coefficient of variation of root mean squared error metrics set forth in ASHRAE Guideline 14. In addition to whole-building energy consumption, outputs including lighting, plug load profiles, HVAC energy consumption, zone temperatures, and other variables were analyzed. In the second case, Autotune calibration is compared directly to experts' manual calibration of an emulated-occupancy, full-size residential building, with comparable calibration results in much less time. The paper concludes with a discussion of the key strengths and weaknesses of auto-calibration approaches.
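
    The two Guideline 14 metrics named above are simple to compute. A minimal sketch, using the common n−1 normalization (Guideline 14 writes the denominator as n−p; the monthly data and thresholds below are illustrative):

    ```python
    import numpy as np

    def nmbe(measured, simulated):
        """Normalized mean bias error (%), ASHRAE Guideline 14 style."""
        m, s = np.asarray(measured), np.asarray(simulated)
        return 100.0 * (m - s).sum() / ((len(m) - 1) * m.mean())

    def cv_rmse(measured, simulated):
        """Coefficient of variation of the RMSE (%)."""
        m, s = np.asarray(measured), np.asarray(simulated)
        rmse = np.sqrt(((m - s) ** 2).sum() / (len(m) - 1))
        return 100.0 * rmse / m.mean()

    meas = [120.0, 130.0, 110.0, 125.0]  # e.g. monthly kWh, illustrative
    sim = [118.0, 133.0, 108.0, 126.0]
    # Typical monthly criteria: |NMBE| <= 5% and CV(RMSE) <= 15%
    ok = abs(nmbe(meas, sim)) <= 5.0 and cv_rmse(meas, sim) <= 15.0
    ```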

  14. Approaches to highly parameterized inversion: A guide to using PEST for groundwater-model calibration

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.

    2010-01-01

    Highly parameterized groundwater models can create calibration difficulties. Regularized inversion, the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation, is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to the parameters used to model the system. Though commonly used in other industries, regularized inversion remains imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite, a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with the use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
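
    The stabilizing effect of Tikhonov regularization can be shown on a toy problem. This is a generic sketch of the damped normal equations, not PEST's actual implementation: with two nearly collinear parameter sensitivities, the unregularized solution swings wildly with tiny data changes while the damped update stays stable.

    ```python
    import numpy as np

    def tikhonov_step(J, r, lam):
        """Damped (Tikhonov) least-squares update: solve
        (J^T J + lam^2 I) dx = J^T r.  With lam = 0 this reduces to
        ordinary least squares, which is unstable for ill-posed problems."""
        n = J.shape[1]
        return np.linalg.solve(J.T @ J + lam ** 2 * np.eye(n), J.T @ r)

    # Toy ill-posed problem: two nearly collinear parameter sensitivities
    J = np.array([[1.0, 0.99],
                  [1.0, 1.01]])
    dx_a = np.linalg.solve(J, np.array([1.0, 1.00]))  # -> [1, 0]
    dx_b = np.linalg.solve(J, np.array([1.0, 1.02]))  # -> [0.01, 1]
    # A 2% change in the data flips the solution entirely; the regularized
    # update instead spreads the information across the collinear parameters:
    dx_reg = tikhonov_step(J, np.array([1.0, 1.0]), lam=0.1)
    ```

    PEST's Tikhonov mode generalizes this by damping toward preferred parameter values or preferred spatial smoothness rather than toward zero.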

  15. Applications of isothermal titration calorimetry - the research and technical developments from 2011 to 2015.

    PubMed

    Falconer, Robert J

    2016-10-01

    Isothermal titration calorimetry is a widely used biophysical technique for studying the formation or dissociation of molecular complexes. Over the last 5 years, much work has been published on the interpretation of isothermal titration calorimetry (ITC) data for single binding and multiple binding sites. As over 80% of ITC papers are on macromolecules of biological origin, this interpretation is challenging. Some researchers have attempted to link the thermodynamic constants to events at the molecular level. This review highlights work carried out using binding sites characterized by x-ray crystallography techniques, which allows speculation about individual bond formation and the displacement of individual water molecules during ligand binding, and links these events to the thermodynamic constants for binding. The review also considers research conducted with synthetic binding partners where specific binding events like anion-π and π-π interactions were studied. The revival of assays that enable both thermodynamic and kinetic information to be collected from ITC data is highlighted. Lastly, published criticism of ITC research from a physical chemistry perspective is appraised and practical advice provided for researchers unfamiliar with thermodynamics and its interpretation. Copyright © 2016 John Wiley & Sons, Ltd.

  16. Calibrated intercepts for solar radiometers used in remote sensor calibration

    NASA Technical Reports Server (NTRS)

    Gellman, David I.; Biggar, Stuart F.; Slater, Philip N.; Bruegge, Carol J.

    1991-01-01

    Calibrated solar radiometer intercepts allow spectral optical depths to be determined for days with intermittently clear skies. This is of particular importance on satellite sensor calibration days that are cloudy except at the time of image acquisition. This paper describes the calibration of four solar radiometers using the Langley-Bouguer technique for data collected on days with a clear, stable atmosphere. Intercepts are determined with an uncertainty of less than six percent, corresponding to a maximum uncertainty of 0.06 in optical depth. The spread of voltage intercepts calculated in this process is carried through three methods of radiometric calibration of satellite sensors to yield an uncertainty in radiance at the top of the atmosphere of less than one percent associated with the uncertainty in solar radiometer intercepts for a range of ground reflectances.
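
    The Langley-Bouguer technique described above is a straight-line regression: ln(V) = ln(V0) − τ·m, where V is the radiometer voltage, m the air mass, τ the optical depth, and ln(V0) the calibrated intercept. A minimal sketch with synthetic, noise-free data (the values of V0 and τ are illustrative):

    ```python
    import numpy as np

    # Langley plot: on a clear, stable morning, regress ln(V) on air mass m.
    m = np.linspace(1.5, 5.0, 12)   # air masses as the sun rises
    true_V0, true_tau = 2.5, 0.12   # illustrative "truth"
    V = true_V0 * np.exp(-true_tau * m)

    slope, lnV0 = np.polyfit(m, np.log(V), 1)
    V0 = np.exp(lnV0)   # calibrated intercept (top-of-atmosphere signal)
    tau = -slope        # spectral optical depth
    ```

    Once V0 is known, a single clear-sky reading on an otherwise cloudy calibration day yields the optical depth directly as τ = (ln V0 − ln V)/m, which is exactly why the calibrated intercepts matter for intermittently clear days.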

  17. An Enclosed Laser Calibration Standard

    NASA Astrophysics Data System (ADS)

    Adams, Thomas E.; Fecteau, M. L.

    1985-02-01

    We have designed, evaluated and calibrated an enclosed, safety-interlocked laser calibration standard for use in US Army Secondary Reference Calibration Laboratories. This Laser Test Set Calibrator (LTSC) represents the Army's first-generation field laser calibration standard. Twelve LTSCs are now being fielded world-wide. The main requirement on the LTSC is to provide calibration support for the Test Set (TS3620) which, in turn, is a GO/NO GO tester of the Hand-Held Laser Rangefinder (AN/GVS-5). However, we believe its design is flexible enough to accommodate the calibration of other laser test, measurement and diagnostic equipment (TMDE) provided that single-shot capability is adequate to perform the task. In this paper we describe the salient aspects and calibration requirements of the AN/GVS-5 Rangefinder and the Test Set which drove the basic LTSC design. We also detail our evaluation and calibration of the LTSC, in particular the LTSC system standards. We conclude with a review of our error analysis, from which uncertainties were assigned to the LTSC calibration functions.

  18. Thermal characteristics of second harmonic generation by phase matched calorimetry.

    PubMed

    Lim, Hwan Hong; Kurimura, Sunao; Noguchi, Keisuke; Shoji, Ichiro

    2014-07-28

    We analyze a solution of the heat equation for second harmonic generation (SHG) with a focused Gaussian beam and simulate the temperature rise in SHG materials as a function of the second harmonic power and the focusing conditions. We also propose a quantitative value of the heat removal performance of SHG devices, referred to as the effective heat capacity Cα in phase matched calorimetry. We demonstrate the inverse relation between Cα and the focusing parameter ξ, and propose the universal quantity of the product of Cα and ξ for characterizing the thermal property of SHG devices. Finally, we discuss the strategy to manage thermal dephasing in SHG using the results from simulations.

  19. Calibrating Wide Field Surveys

    NASA Astrophysics Data System (ADS)

    González Fernández, Carlos; Irwin, M.; Lewis, J.; González Solares, E.

    2017-09-01

    "In this talk I will review the strategies in CASU to calibrate wide field surveys, in particular applied to data taken with the VISTA telescope. These include traditional night-by-night calibrations along with the search for a global, coherent calibration of all the data once observations are finished. The difficulties of obtaining photometric accuracy of a few percent and a good absolute calibration will also be discussed."

  20. One-calibrant kinetic calibration for on-site water sampling with solid-phase microextraction.

    PubMed

    Ouyang, Gangfeng; Cui, Shufen; Qin, Zhipei; Pawliszyn, Janusz

    2009-07-15

    The existing solid-phase microextraction (SPME) kinetic calibration technique, which uses the desorption of preloaded standards to calibrate the extraction of the analytes, requires that the physicochemical properties of the standard be similar to those of the analyte, which has limited the application of the technique. In this study, a new method, termed the one-calibrant kinetic calibration technique, which can use the desorption of a single standard to calibrate all extracted analytes, was proposed. The theoretical considerations were validated by passive water sampling in the laboratory and rapid water sampling in the field. To mimic environmental variability, such as temperature, turbulence, and analyte concentration, the flow-through system for the generation of a standard aqueous polycyclic aromatic hydrocarbon (PAH) solution was modified. The experimental results of the passive samplings in the flow-through system illustrated that the effect of the environmental variables was successfully compensated by the kinetic calibration technique, and all extracted analytes could be calibrated through the desorption of a single calibrant. On-site water sampling with rotated SPME fibers also illustrated the feasibility of the new technique for rapid on-site sampling of hydrophobic organic pollutants in water. This technique will accelerate the application of the kinetic calibration method and should also be useful for other microextraction techniques.
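    The isotropy idea behind kinetic calibration — absorption and desorption share one rate constant, so the fraction of calibrant remaining mirrors how far extraction has progressed — can be sketched as follows. All numbers are hypothetical, and K_fs and V_f are an assumed fiber/sample partition coefficient and fiber-coating volume, not values from the paper:

```python
def analyte_concentration(n_extracted, q_ratio, K_fs, V_f):
    """One-calibrant SPME kinetic calibration (sketch).

    q_ratio = Q/q0 is the fraction of the preloaded calibrant remaining
    after sampling. Under the isotropy assumption n/n_e = 1 - Q/q0, the
    equilibrium amount n_e is recovered and converted to a sample
    concentration via the partition coefficient K_fs and fiber volume V_f.
    """
    n_e = n_extracted / (1.0 - q_ratio)  # equilibrium extraction amount
    return n_e / (K_fs * V_f)            # concentration in sample units

# 60% of the calibrant remains on the fiber; 2.0 ng of analyte extracted
c = analyte_concentration(n_extracted=2.0, q_ratio=0.6, K_fs=5000.0, V_f=1e-4)
```

    The one-calibrant refinement in the paper relaxes the requirement that each analyte carry its own matched standard; here that would mean deriving every analyte's effective q_ratio from the single calibrant's desorption.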

  1. Development of a calibration protocol and identification of the most sensitive parameters for the particulate biofilm models used in biological wastewater treatment.

    PubMed

    Eldyasti, Ahmed; Nakhla, George; Zhu, Jesse

    2012-05-01

    Biofilm models are valuable tools for process engineers simulating biological wastewater treatment. In order to enhance the use of the biofilm models implemented in contemporary simulation software, model calibration is both necessary and helpful. The aim of this work was to develop a calibration protocol for the particulate biofilm model, aided by a sensitivity analysis of the most important parameters in the biofilm model implemented in BioWin®, and to verify the predictability of the calibration protocol. A case study of a circulating fluidized bed bioreactor (CFBBR) system used for biological nutrient removal (BNR), together with a fluidized bed respirometric study of the biofilm stoichiometry and kinetics, was used to verify and validate the proposed calibration protocol. Applying the five stages of the biofilm calibration procedure enhanced the applicability of BioWin®, which was capable of predicting most of the performance parameters with an average percentage error (APE) of 0-20%. Copyright © 2012 Elsevier Ltd. All rights reserved.
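    The APE figure quoted above is a simple aggregate; a sketch of how such a score might be computed when comparing simulated and observed performance parameters (numbers invented for illustration):

```python
def average_percentage_error(simulated, observed):
    """Mean of |simulated - observed| / observed, expressed in percent."""
    return 100.0 * sum(abs(s - o) / o
                       for s, o in zip(simulated, observed)) / len(observed)

# Hypothetical effluent COD, TN, TP predictions vs. measurements
ape = average_percentage_error([9.5, 20.0, 31.0], [10.0, 20.0, 30.0])
```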

  2. Initial Radiometric Calibration of the AWiFS using Vicarious Calibration Techniques

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary; Thome, Kurtis; Aaron, David; Leigh, Larry

    2006-01-01

    NASA SSC maintains four ASD FieldSpec FR spectroradiometers, used both as laboratory transfer radiometers and for ground surface reflectance measurements during V&V field collection activities. Radiometric calibration uses a NIST-calibrated integrating sphere, which serves as a source of known spectral radiance. Spectral calibration uses laser and pen-lamp illumination of the integrating sphere. Environmental testing includes temperature stability tests performed in an environmental chamber.

  3. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard

    2014-01-01

    Building Energy Modeling (BEM) is an approach to modeling the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can extend to a few thousand parameters, which must be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, rendering building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
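    The agent idea — answering calibration queries from precomputed simulations instead of re-running EnergyPlus for every candidate — can be caricatured with a lookup over stored parameter/output pairs. All parameter names and numbers below are invented for illustration:

```python
def calibrate(simulations, observed):
    """Return the parameter tuple whose stored simulation output best
    matches observed monthly energy use (sum of squared errors).

    simulations: dict mapping parameter tuples to monthly-kWh lists,
    standing in for a trained surrogate agent.
    """
    def sse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(simulations, key=lambda p: sse(simulations[p], observed))

# Two candidate (infiltration, insulation) settings, three months of kWh
sims = {
    (0.5, 10): [100.0, 120.0, 90.0],
    (0.7, 15): [110.0, 135.0, 95.0],
}
best = calibrate(sims, observed=[108.0, 133.0, 96.0])
```

    A real agent generalizes between sampled parameter sets rather than selecting among them, but the fitting target — minimal mismatch between model output and utility data — is the same.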

  4. Examination of water phase transitions in Loblolly pine and cell wall components by differential scanning calorimetry

    Treesearch

    Samuel L. Zelinka; Michael J. Lambrecht; Samuel V. Glass; Alex C. Wiedenhoeft; Daniel J. Yelle

    2012-01-01

    This paper examines phase transformations of water in wood and isolated wood cell wall components using differential scanning calorimetry with the purpose of better understanding "Type II water" or "freezable bound water" that has been reported for cellulose and other hydrophilic polymers. Solid loblolly pine (Pinus taeda...

  5. Comparison Between One-Point Calibration and Two-Point Calibration Approaches in a Continuous Glucose Monitoring Algorithm

    PubMed Central

    Mahmoudi, Zeinab; Johansen, Mette Dencker; Christiansen, Jens Sandahl

    2014-01-01

    Background: The purpose of this study was to investigate the effect of using a 1-point calibration approach instead of a 2-point calibration approach on the accuracy of a continuous glucose monitoring (CGM) algorithm. Method: A previously published real-time CGM algorithm was compared with its updated version, which used a 1-point calibration instead of a 2-point calibration. In addition, the contribution of the corrective intercept (CI) to the calibration performance was assessed. Finally, the sensor background current was estimated in real time and retrospectively. The study was performed on 132 type 1 diabetes patients. Results: Replacing the 2-point calibration with the 1-point calibration improved the CGM accuracy, with the greatest improvement achieved in hypoglycemia (18.4% median absolute relative difference [MARD] in hypoglycemia for the 2-point calibration, and 12.1% MARD in hypoglycemia for the 1-point calibration). Using 1-point calibration increased the percentage of sensor readings in zone A+B of the Clarke error grid analysis (EGA) in the full glycemic range, and also enhanced hypoglycemia sensitivity. Exclusion of CI from calibration reduced hypoglycemia accuracy, while slightly increasing euglycemia accuracy. Both real-time and retrospective estimation of the sensor background current suggest that the background current can be considered zero in the calibration of the SCGM1 sensor. Conclusions: The sensor readings calibrated with the 1-point calibration approach showed higher accuracy than those calibrated with the 2-point calibration approach. PMID:24876420
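    The two approaches compared in the study can be sketched abstractly with a linear sensor model, glucose = slope × (current − background), fitted from either two reference points or one reference point plus an assumed background, together with the MARD metric. This is a generic sketch, not the published algorithm, and all numbers are illustrative:

```python
def two_point_cal(i1, g1, i2, g2):
    """Slope and intercept from two paired (sensor current, reference glucose) samples."""
    slope = (g2 - g1) / (i2 - i1)
    return slope, g1 - slope * i1

def one_point_cal(i1, g1, background=0.0):
    """One-point calibration assuming a known background current (zero here,
    as the study concludes is reasonable for the SCGM1 sensor)."""
    slope = g1 / (i1 - background)
    return slope, -slope * background

def mard(cgm, ref):
    """Mean absolute relative difference between CGM and reference, in percent."""
    return 100.0 * sum(abs(c - r) / r for c, r in zip(cgm, ref)) / len(ref)

s2, b2 = two_point_cal(10.0, 100.0, 20.0, 200.0)   # slope 10, intercept 0
s1, b1 = one_point_cal(10.0, 100.0)                # same line from one point
err = mard([90.0, 110.0], [100.0, 100.0])          # 10.0 percent
```

    The advantage of the 1-point form is that it needs only one fingerstick per calibration; its cost is the background-current assumption, which is why the study's zero-background finding matters.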

  6. Software Program: Software Management Guidebook

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The purpose of this NASA Software Management Guidebook is twofold. First, this document defines the core products and activities required of NASA software projects. It defines life-cycle models and activity-related methods but acknowledges that no single life-cycle model is appropriate for all NASA software projects. It also acknowledges that the appropriate method for accomplishing a required activity depends on characteristics of the software project. Second, this guidebook provides specific guidance to software project managers and team leaders in selecting appropriate life cycles and methods to develop a tailored plan for a software engineering project.

  7. Isothermal Titration Calorimetry for Measuring Macromolecule-Ligand Affinity

    PubMed Central

    Duff, Michael R.; Grubbs, Jordan; Howell, Elizabeth E.

    2011-01-01

    Isothermal titration calorimetry (ITC) is a useful tool for understanding the complete thermodynamic picture of a binding reaction. In biological sciences, macromolecular interactions are essential in understanding the machinery of the cell. Experimental conditions, such as buffer and temperature, can be tailored to the particular binding system being studied. However, careful planning is needed since certain ligand and macromolecule concentration ranges are necessary to obtain useful data. Concentrations of the macromolecule and ligand need to be accurately determined for reliable results. Care also needs to be taken when preparing the samples as impurities can significantly affect the experiment. When ITC experiments, along with controls, are performed properly, useful binding information, such as the stoichiometry, affinity and enthalpy, are obtained. By running additional experiments under different buffer or temperature conditions, more detailed information can be obtained about the system. A protocol for the basic setup of an ITC experiment is given. PMID:21931288
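    One of the planning constraints mentioned above — choosing workable concentration ranges — is commonly expressed through the Wiseman c parameter, c = n·Ka·[M]. The 1-1000 window below is the usual rule of thumb from the ITC literature, not a value stated in this protocol:

```python
def wiseman_c(n_sites, Ka, macromolecule_conc):
    """Wiseman c parameter, c = n * Ka * [M] (Ka in 1/M, [M] in M).

    A common rule of thumb keeps c between ~1 and ~1000 (ideally ~10-100)
    so the titration isotherm has a fittable sigmoidal shape.
    """
    return n_sites * Ka * macromolecule_conc

# 1:1 binding, Ka = 1e6 /M, 20 uM macromolecule in the cell (illustrative)
c = wiseman_c(1, 1.0e6, 2.0e-5)
ok = 1.0 <= c <= 1000.0
```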

  8. Calibrating Accelerometers Using an Electromagnetic Launcher

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erik Timpson

    A Pulse Forming Network (PFN), Helical Electromagnetic Launcher (HEML), Command Module (CM), and Calibration Table (CT) were built and evaluated for the combined ability to calibrate an accelerometer. The PFN has a maximum stored energy of 19.25 kJ and is fired by a silicon controlled rectifier (SCR), with appropriate safety precautions. The HEML is constructed of G-10 fiberglass and is designed to accelerate 600 grams to 10 meters per second. The CM is microcontroller based, running Arduino software. The CM has a keypad input and 7-segment displays of the bank voltage and desired voltage. After entering a desired bank voltage, the CM controls the charge of the PFN. When the two voltages are equal, it allows the fire button to send a pulse to the SCR to fire the PFN and, in turn, the HEML. The HEML projectile's tip hits a target that is held by the CT. The CT consists of a table to hold the PFN and HEML, a vacuum chuck, an air bearing, a velocity meter, and a catch pot. The target is held with the vacuum chuck awaiting impact. After impact, the air bearing allows the target to fall freely for the velocity meter to get an accurate reading. A known acceleration is determined from the known change in velocity of the target. Thus, if an accelerometer is attached to the target, the measured value can be compared to the known value.
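    The closing step — turning the measured velocity change into a reference acceleration for comparison — is just a = Δv/Δt; a sketch with hypothetical impact numbers:

```python
def reference_acceleration(v_before, v_after, duration_s):
    """Known average acceleration of the target over the impact: a = dv/dt."""
    return (v_after - v_before) / duration_s

def percent_error(measured, reference):
    """Deviation of the accelerometer reading from the known acceleration."""
    return 100.0 * abs(measured - reference) / abs(reference)

# Hypothetical impact: target accelerates from rest to 8 m/s in 2 ms
a_ref = reference_acceleration(0.0, 8.0, 0.002)   # 4000 m/s^2
err = percent_error(measured=4100.0, reference=a_ref)
```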

  9. Calibration method for spectroscopic systems

    DOEpatents

    Sandison, David R.

    1998-01-01

    Calibration spots of optically-characterized material placed in the field of view of a spectroscopic system allow calibration of the spectroscopic system. Response from the calibration spots is measured and used to calibrate for varying spectroscopic system operating parameters. The accurate calibration achieved allows quantitative spectroscopic analysis of responses taken at different times, different excitation conditions, and of different targets.
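    The ratio idea behind the patent — dividing out system drift using the in-view calibration spot — reduces to a single scale factor; a minimal sketch with invented numbers:

```python
def normalize_by_spot(target_response, spot_response, spot_reference):
    """Scale a target measurement by the ratio of the calibration spot's
    known reference value to its currently measured response, cancelling
    drift in excitation power, detector gain, and similar parameters."""
    return target_response * (spot_reference / spot_response)

# Spot known to read 100 units nominally reads only 80 (20% system droop),
# so the raw target response of 200 is corrected upward accordingly.
corrected = normalize_by_spot(200.0, 80.0, 100.0)
```

    Because the same correction applies to measurements taken at different times or under different excitation conditions, responses become quantitatively comparable, which is the claim of the patent.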

  10. Calibration method for spectroscopic systems

    DOEpatents

    Sandison, D.R.

    1998-11-17

    Calibration spots of optically-characterized material placed in the field of view of a spectroscopic system allow calibration of the spectroscopic system. Response from the calibration spots is measured and used to calibrate for varying spectroscopic system operating parameters. The accurate calibration achieved allows quantitative spectroscopic analysis of responses taken at different times, different excitation conditions, and of different targets. 3 figs.

  11. Coda Calibration Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Addair, Travis; Barno, Justin; Dodge, Doug

    CCT is a Java-based application for calibrating 10 shear wave coda measurement models to observed data using a much smaller set of reference moment magnitudes (MWs) calculated by other means (waveform modeling, etc.). These calibrated measurement models can then be used in other tools to generate coda moment magnitude measurements, source spectra, estimated stress drop, and other useful measurements for any additional events and any new data collected in the calibrated region.

  12. Hydrochemical tracers in the middle Rio Grande Basin, USA: 2. Calibration of a groundwater-flow model

    USGS Publications Warehouse

    Sanford, W.E.; Plummer, Niel; McAda, D.P.; Bexfield, L.M.; Anderholm, S.K.

    2004-01-01

    The calibration of a groundwater model with the aid of hydrochemical data has demonstrated that low recharge rates in the Middle Rio Grande Basin may be responsible for a groundwater trough in the center of the basin and for a substantial amount of Rio Grande water in the regional flow system. Earlier models of the basin had difficulty reproducing these features without any hydrochemical data to constrain the rates and distribution of recharge. The objective of this study was to use the large quantity of available hydrochemical data to help calibrate the model parameters, including the recharge rates. The model was constructed using the US Geological Survey's software MODFLOW, MODPATH, and UCODE, and calibrated using 14C activities and the positions of certain flow zones defined by the hydrochemical data. Parameter estimation was performed using a combination of nonlinear regression techniques and a manual search for the minimum difference between field and simulated observations. The calibrated recharge values were substantially smaller than those used in previous models. Results from a 30,000-year transient simulation suggest that recharge was at a maximum about 20,000 years ago and at a minimum about 10,000 years ago. © Springer-Verlag 2004.

  13. Kinetics of enzymatic high-solid hydrolysis of lignocellulosic biomass studied by calorimetry.

    PubMed

    Olsen, Søren N; Lumby, Erik; McFarland, Kc; Borch, Kim; Westh, Peter

    2011-03-01

    Enzymatic hydrolysis of high-solid biomass (>10% w/w dry mass) has become increasingly important as a key step in the production of second-generation bioethanol. To this end, development of quantitative real-time assays is desirable both for empirical optimization and for detailed kinetic analysis. In the current work, we have investigated the application of isothermal calorimetry to study the kinetics of enzymatic hydrolysis of two substrates (pretreated corn stover and Avicel) at high solid contents (up to 29% w/w). It was found that the calorimetric heat flow provided a true measure of the hydrolysis rate with a detection limit of about 500 pmol glucose s⁻¹. Hence, calorimetry is shown to be a highly sensitive real-time method, applicable at high solids and independent of the complexity of the substrate. Dose-response experiments with a typical cellulase cocktail enabled a multidimensional analysis of the interrelationships of enzyme load and the rate, time, and extent of the reaction. The results suggest that the hydrolysis rate of pretreated corn stover is limited initially by available attack points on the substrate surface (<10% conversion) but becomes proportional to enzyme dosage (excess of attack points) at later stages (>10% conversion). This kinetic profile is interpreted as an increase in polymer-end concentration (substrate for CBH) as the hydrolysis progresses, probably due to EG activity in the enzyme cocktail. Finally, irreversible enzyme inactivation did not appear to be the source of the reduced hydrolysis rate over time.
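    Since heat flow is proportional to reaction rate, converting the calorimeter signal to a hydrolysis rate needs only the molar reaction enthalpy. The |ΔH| ≈ 2.4 kJ/mol used below is an assumed order-of-magnitude value for glycosidic bond hydrolysis, not a number given in the abstract:

```python
def hydrolysis_rate_pmol_s(heat_flow_uW, abs_dH_kJ_mol=2.4):
    """Hydrolysis rate from calorimetric heat flow: rate = P / |dH|.

    With P in microwatts (1e-6 J/s) and |dH| in kJ/mol (1e3 J/mol), the
    rate in pmol/s is 1e-6 / 1e3 * 1e12 = 1000 times P / |dH|.
    """
    return 1000.0 * heat_flow_uW / abs_dH_kJ_mol

# Under this assumed enthalpy, a ~1.2 uW signal corresponds to roughly
# 500 pmol of bonds hydrolyzed per second, the detection limit quoted above
rate = hydrolysis_rate_pmol_s(1.2)
```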

  14. NRL Hyperspectral Imagery Trafficability Tool (HITT): Software andSpectral-Geotechnical Look-up Tables for Estimation and Mapping of Soil Bearing Strength from Hyperspectral Imagery

    DTIC Science & Technology

    2012-09-28

    spectral-geotechnical libraries and models developed during remote sensing and calibration/validation campaigns conducted by NRL and collaborating institutions in four...2010; Bachmann, Fry, et al, 2012a). The NRL HITT tool is a model for how we develop and validate software, and the future development of tools by

  15. Software analysis handbook: Software complexity analysis and software reliability estimation and prediction

    NASA Technical Reports Server (NTRS)

    Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron

    1994-01-01

    This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.

  16. GPI Calibrations

    NASA Astrophysics Data System (ADS)

    Rantakyrö, Fredrik T.

    2017-09-01

    "The Gemini Planet Imager requires a large set of Calibrations. These can be split into two major sets, one set associated with each observation and one set related to biweekly calibrations. The observation set is to optimize the correction of miscroshifts in the IFU spectra and the latter set is for correction of detector and instrument cosmetics."

  17. A transition matrix approach to the Davenport gyro calibration scheme

    NASA Technical Reports Server (NTRS)

    Natanson, G. A.

    1998-01-01

    The in-flight gyro calibration scheme commonly used by NASA Goddard Space Flight Center (GSFC) attitude ground support teams closely follows an original version of the Davenport algorithm developed in the late seventies. Its basic idea is to minimize the least-squares differences between attitudes gyro-propagated over the course of a maneuver and those determined using post-maneuver sensor measurements. The paper represents the scheme in a recursive form by combining the necessary partials into a rectangular matrix, which is propagated in exactly the same way as a Kalman filter's square transition matrix. The nontrivial structure of the propagation matrix arises from the fact that attitude errors are not included in the state vector, and therefore their derivatives with respect to estimated parameters do not appear in the transition matrix defined in the conventional way. In cases when the required accuracy can be achieved by a single iteration, representation of the Davenport gyro calibration scheme in a recursive form allows one to discard each gyro measurement immediately after it is used to propagate the attitude and state transition matrix. Another advantage of the new approach is that it utilizes the same expression for the error sensitivity matrix as that used by the Kalman filter. As a result, the suggested modification of the Davenport algorithm makes it possible to reuse software modules implemented in the Kalman filter estimator, where both attitude errors and gyro calibration parameters are included in the state vector. The new approach has been implemented in the ground calibration utilities used to support the Tropical Rainfall Measuring Mission (TRMM). The paper analyzes some preliminary results of gyro calibration performed by the TRMM ground attitude support team. It is demonstrated that the effect of a second iteration on the estimated values of calibration parameters is negligibly small, and therefore there is no need to store processed gyro data.
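    A one-axis caricature of the batch least-squares idea — pick the gyro bias that reconciles the gyro-propagated maneuver angle with the sensor-determined one — illustrates why a single pass over the data suffices. This is a drastic simplification of the multi-parameter Davenport scheme, with invented numbers:

```python
def estimate_gyro_bias(rates, dt, theta_ref):
    """Single-axis sketch: minimize (sum((w_i - b)*dt) - theta_ref)^2 over
    the bias b. The least-squares solution is closed-form, so each rate
    sample can be folded into a running sum and then discarded, mirroring
    the recursive form's ability to drop processed gyro data."""
    n = len(rates)
    return (sum(rates) * dt - theta_ref) / (n * dt)

# Gyro reads 1.05 deg/s for 10 s; star sensors say the maneuver was 10 deg
bias = estimate_gyro_bias([1.05] * 10, dt=1.0, theta_ref=10.0)
```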

  18. Calibration of mass spectrometric peptide mass fingerprint data without specific external or internal calibrants

    PubMed Central

    Wolski, Witold E; Lalowski, Maciej; Jungblut, Peter; Reinert, Knut

    2005-01-01

    Background Peptide Mass Fingerprinting (PMF) is a widely used mass spectrometry (MS) method for the analysis of proteins and peptides. It relies on the comparison between experimentally determined and theoretical mass spectra. The PMF process requires calibration, usually performed with external or internal calibrants of known molecular masses. Results We have introduced two novel MS calibration methods. The first method utilises the local similarity of peptide maps generated after separation of complex protein samples by two-dimensional gel electrophoresis. It computes a multiple peak-list alignment of the data set using a modified Minimum Spanning Tree (MST) algorithm. The second method exploits the idea that hundreds of MS samples are measured in parallel on one sample support. It improves the calibration coefficients by applying a two-dimensional Thin Plate Splines (TPS) smoothing algorithm. We studied the novel calibration methods utilising data generated by three different MALDI-TOF-MS instruments. We demonstrate that a PMF data set can be calibrated without resorting to external calibrants or relying on widely occurring internal calibrants. The methods developed here were implemented in R and are part of the BioConductor package mscalib available from . Conclusion The MST calibration algorithm is well suited to calibrate MS spectra of protein samples resulting from two-dimensional gel electrophoretic separation. The TPS-based calibration algorithm might be used to correct systematic mass measurement errors observed for large MS sample supports. As compared to other methods, our combined MS spectra calibration strategy increases the peptide/protein identification rate by an additional 5–15%. PMID:16102175

  19. MO-AB-BRA-03: Calorimetry-Based Absorbed Dose to Water Measurements Using Interferometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flores-Martinez, E; Malin, M; DeWerd, L

    2015-06-15

    Purpose: Interferometry-based calorimetry is a novel technique to measure radiation-induced temperature changes, allowing the measurement of absorbed dose to water (ADW) with no mechanical components in the radiation field. The technique also offers the possibility of obtaining 2D dose distributions. The goal of this investigation is to calorimetrically measure doses between 2.5 and 5 Gy over a single projection in a photon beam using interferometry and compare the results with doses calculated using the TG-51 linac calibration. Methods: ADW was determined by measuring radiation-induced phase shifts (PSs) of light passing through water irradiated with a 6 MV photon beam. A 9×9×9 cm³ glass phantom filled with water and placed in an arm of a Michelson interferometer was irradiated with 300, 400, 500 and 600 monitor units. The whole system was thermally insulated to achieve sufficient passive temperature control. The depth of measurement was 4.5 cm with a field size of 7×7 cm². The intensity of the fringe pattern was monitored with a photodiode and used to calculate the time-dependent PS curve. Data were acquired 60 s before and after the irradiation. The radiation-induced PS was calculated by taking the difference in the pre- and post-irradiation drifts extrapolated to the midpoint of the irradiation. Results were compared to computed doses. Results: On average, calculated ADW values agreed with interferometry-measured values to within 9.5%. k=1 uncertainties were 4.3% for calculations and 14.7% for measurements. The dominant source of uncertainty for the measurements was a temperature drift of about 30 µK/s caused by heat conduction from the interferometer's surroundings. Conclusion: This work presented the first absolute ADW measurements using interferometry in the dose range of linac-based radiotherapy. Future work to improve measurement reproducibility includes the implementation of active thermal control techniques.
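    The chain from fringe phase to dose can be sketched from textbook relations: in a Michelson arm the light crosses the heated water twice, so Δφ = (4π/λ)(dn/dT)·L·ΔT, and dose = c_w·ΔT. The dn/dT, wavelength, and geometry values below are assumed for illustration, not taken from the abstract:

```python
import math

def dose_from_phase_shift(dphi, L, wavelength=632.8e-9,
                          dn_dT=-1.0e-4, c_w=4186.0):
    """Absorbed dose to water (Gy) from a radiation-induced phase shift (rad).

    dphi = (4*pi/lambda) * (dn/dT) * L * dT  (double pass through length L),
    and dose = c_w * dT with c_w in J/(kg*K). dn/dT of water is negative,
    so heating produces a negative phase shift.
    """
    dT = dphi * wavelength / (4.0 * math.pi * dn_dT * L)
    return c_w * dT

# Round trip with an assumed 1 mK rise over a 9 cm water path (~4.2 Gy)
dT = 1.0e-3
dphi = (4.0 * math.pi / 632.8e-9) * (-1.0e-4) * 0.09 * dT
dose = dose_from_phase_shift(dphi, L=0.09)
```

    The sub-millikelvin temperature changes implied by therapy-level doses are why the 30 µK/s ambient drift reported above dominates the measurement uncertainty.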

  20. A variable acceleration calibration system

    NASA Astrophysics Data System (ADS)

    Johnson, Thomas H.

    2011-12-01

    A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost-effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems are designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three-component calibration experiments with an approximate applied load error on the order of 1% of the full-scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable acceleration based system are shown to be potentially equivalent to current methods. The production-quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long-term research objectives include a demonstration of a six-degree-of-freedom calibration and a large-capacity balance calibration.
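    The centripetal loading principle, and why angular velocity dominates the prediction error, follows from F = mω²r and first-order uncertainty propagation; a sketch with invented numbers:

```python
def centripetal_load(mass_kg, omega_rad_s, radius_m):
    """Load applied to the balance by spinning a known mass: F = m * w^2 * r."""
    return mass_kg * omega_rad_s ** 2 * radius_m

def load_sigma_from_omega(mass_kg, omega_rad_s, radius_m, sigma_omega):
    """First-order propagation: dF/dw = 2*m*w*r, so sigma_F = 2*m*w*r*sigma_w.
    The relative load error is twice the relative omega error, which is why
    angular velocity uncertainty dominates the prediction error."""
    return 2.0 * mass_kg * omega_rad_s * radius_m * sigma_omega

f = centripetal_load(2.0, 10.0, 0.5)              # 100 N
sf = load_sigma_from_omega(2.0, 10.0, 0.5, 0.1)   # 2 N, i.e. 2% of F
```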

  1. Calibration of the LHAASO-KM2A electromagnetic particle detectors using charged particles within the extensive air showers

    NASA Astrophysics Data System (ADS)

    Lv, Hongkui; He, Huihai; Sheng, Xiangdong; Liu, Jia; Chen, Songzhan; Liu, Ye; Hou, Chao; Zhao, Jing; Zhang, Zhongquan; Wu, Sha; Wang, Yaping; Lhaaso Collaboration

    2018-07-01

    In the Large High Altitude Air Shower Observatory (LHAASO), a one square kilometer array (KM2A), with 5242 electromagnetic particle detectors (EDs) and 1171 muon detectors (MDs), is designed to study ultra-high-energy gamma-ray astronomy and cosmic ray physics. The remote site and the large number of detectors demand a robust and automatic calibration procedure. In this paper, a self-calibration method which relies on the measurement of charged particles within extensive air showers is proposed. The method is fully validated by Monte Carlo simulation and successfully applied in a KM2A prototype array experiment. Experimental results show that the self-calibration method can determine the detector time-offset constants at the sub-nanosecond level and the number density of particles collected by each ED with an accuracy of a few percent, which is adequate to meet the physics requirements of the LHAASO experiment. This software calibration also offers an ideal method for real-time monitoring of detector performance for next-generation ground-based EAS experiments covering areas above the square kilometer scale.
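    The time-offset part of such a self-calibration reduces to averaging each detector's arrival-time residuals from the fitted shower front, referenced to the array mean. A sketch with made-up residuals in nanoseconds (detector names are placeholders):

```python
def time_offsets(residuals_by_detector):
    """Per-detector time offset: mean (measured - fitted) arrival-time
    residual over many showers, minus the array-wide mean so the offsets
    sum to zero and only relative timing is corrected."""
    means = {d: sum(r) / len(r) for d, r in residuals_by_detector.items()}
    grand = sum(means.values()) / len(means)
    return {d: m - grand for d, m in means.items()}

residuals = {
    "ED01": [1.2, 0.8, 1.0],
    "ED02": [-0.6, -0.2, -0.4],
    "ED03": [0.1, -0.1, 0.0],
}
offsets = time_offsets(residuals)
```

    With thousands of showers per detector, the statistical error on each mean shrinks well below a nanosecond, which is how sub-nanosecond offset constants become achievable without dedicated calibration hardware.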

  2. An automated calibration laboratory - Requirements and design approach

    NASA Technical Reports Server (NTRS)

    O'Neil-Rood, Nora; Glover, Richard D.

    1990-01-01

    NASA's Dryden Flight Research Facility (Ames-Dryden), operates a diverse fleet of research aircraft which are heavily instrumented to provide both real time data for in-flight monitoring and recorded data for postflight analysis. Ames-Dryden's existing automated calibration (AUTOCAL) laboratory is a computerized facility which tests aircraft sensors to certify accuracy for anticipated harsh flight environments. Recently, a major AUTOCAL lab upgrade was initiated; the goal of this modernization is to enhance productivity and improve configuration management for both software and test data. The new system will have multiple testing stations employing distributed processing linked by a local area network to a centralized database. The baseline requirements for the new AUTOCAL lab and the design approach being taken for its mechanization are described.

  3. Radiometer calibration methods and resulting irradiance differences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Accurate solar radiation measurement by radiometers depends on instrument performance specifications, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methodologies and the resulting differences provided by radiometric calibration service providers such as the National Renewable Energy Laboratory (NREL) and manufacturers of radiometers. Some of these methods calibrate radiometers indoors and some outdoors. To establish and understand the differences in calibration methodologies, we processed and analyzed field-measured data from radiometers deployed for 10 months at NREL's Solar Radiation Research Laboratory. These different methods of calibration resulted in a difference of +/-1% to +/-2% in solar irradiance measurements. Analyzing these differences will ultimately assist in determining the uncertainties of the field radiometer data and will help develop a consensus on a standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainties will help the accurate prediction of the output of planned solar conversion projects and improve the bankability of financing solar projects.

  4. Spelling is Just a Click Away - A User-Centered Brain-Computer Interface Including Auto-Calibration and Predictive Text Entry.

    PubMed

    Kaufmann, Tobias; Völker, Stefan; Gunesch, Laura; Kübler, Andrea

    2012-01-01

    Brain-computer interfaces (BCI) based on event-related potentials (ERP) allow for selection of characters from a visually presented character-matrix and thus provide a communication channel for users with neurodegenerative disease. Although they have been a topic of research for more than 20 years and have repeatedly been proven a reliable communication method, BCIs are almost exclusively used in experimental settings, handled by qualified experts. This study investigates whether ERP-BCIs can be handled independently by laymen without expert support, which is essential for establishing BCIs in end-users' daily lives. Furthermore, we compared the classic character-by-character text entry against a predictive text entry (PTE) that directly incorporates predictive text into the character-matrix. N = 19 BCI novices handled a user-centered ERP-BCI application on their own without expert support. The software individually adjusted classifier weights and control parameters in the background, invisible to the user (auto-calibration). All participants were able to operate the software on their own and to twice correctly spell a sentence with the auto-calibrated classifier (once with PTE, once without). Our PTE increased spelling speed and, importantly, did not reduce accuracy. In sum, this study demonstrates the feasibility of auto-calibrated ERP-BCI use, independently by laymen, and the strong benefit of integrating predictive text directly into the character-matrix.

  5. Technical note on the validation of a semi-automated image analysis software application for estrogen and progesterone receptor detection in breast cancer.

    PubMed

    Krecsák, László; Micsik, Tamás; Kiszler, Gábor; Krenács, Tibor; Szabó, Dániel; Jónás, Viktor; Császár, Gergely; Czuni, László; Gurzó, Péter; Ficsor, Levente; Molnár, Béla

    2011-01-18

    The immunohistochemical detection of estrogen (ER) and progesterone (PR) receptors in breast cancer is routinely used for prognostic and predictive testing. Whole-slide digitization supported by dedicated software tools allows quantization of the image objects (e.g. cell membrane, nuclei) and an unbiased analysis of immunostaining results. Validation studies of image analysis applications for the detection of ER and PR in breast cancer specimens have shown strong concordance between the pathologist's manual assessment of slides and scoring performed using different software applications. The effectiveness of two connected semi-automated image analysis software applications (NuclearQuant v. 1.13 for Pannoramic™ Viewer v. 1.14) for determination of ER and PR status in formalin-fixed, paraffin-embedded breast cancer specimens immunostained with the automated Leica Bond Max system was studied. First, the detection algorithm was calibrated to the scores provided by an independent assessor (pathologist), using selected areas from 38 small digital slides (created from 16 cases) containing a mean number of 195 cells. Each cell was manually marked and scored according to the Allred system, which combines frequency and intensity scores. The performance of the calibrated algorithm was then tested on 16 cases (14 invasive ductal carcinoma, 2 invasive lobular carcinoma) against the pathologist's manual scoring of digital slides. Calibration improved the algorithm from slight or moderate agreement at the start of the study to 87 percent object-detection agreement and almost perfect Total Score agreement (Cohen's kappa 0.859, quadratic weighted kappa 0.986). The performance of the application was tested against the pathologist's manual scoring of digital slides on 53 regions of interest of 16 ER and PR slides covering all positivity ranges, and the quadratic weighted kappa showed almost perfect agreement (κ = 0.981) between the two scoring schemes.

  6. Psychophysical contrast calibration

    PubMed Central

    To, Long; Woods, Russell L; Goldstein, Robert B; Peli, Eli

    2013-01-01

    Electronic displays and computer systems offer numerous advantages for clinical vision testing. Laboratory and clinical measurements of various functions and in particular of (letter) contrast sensitivity require accurately calibrated display contrast. In the laboratory this is achieved using expensive light meters. We developed and evaluated a novel method that uses only psychophysical responses of a person with normal vision to calibrate the luminance contrast of displays for experimental and clinical applications. Our method combines psychophysical techniques (1) for detection (and thus elimination or reduction) of display saturating nonlinearities; (2) for luminance (gamma function) estimation and linearization without use of a photometer; and (3) to measure without a photometer the luminance ratios of the display’s three color channels that are used in a bit-stealing procedure to expand the luminance resolution of the display. Using a photometer we verified that the calibration achieved with this procedure is accurate for both LCD and CRT displays enabling testing of letter contrast sensitivity to 0.5%. Our visual calibration procedure enables clinical, internet and home implementation and calibration verification of electronic contrast testing. PMID:23643843
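
The gamma-linearization step can be illustrated with a short sketch. Assuming, for illustration only, that the display follows a simple power law L ∝ (v/255)^γ with γ already estimated psychophysically, the correction reduces to a lookup table:

```python
import numpy as np

# Assumed power-law display model with a psychophysically estimated gamma.
# (The paper's method estimates the response without a photometer; the
# simple power law here is an illustrative assumption.)
gamma = 2.2

def linearizing_lut(gamma, n=256):
    """Drive level to send so that displayed luminance is linear in index."""
    f = np.linspace(0.0, 1.0, n)            # desired linear luminance fraction
    return np.round(255.0 * f ** (1.0 / gamma)).astype(np.uint8)

lut = linearizing_lut(gamma)
# Check: pushing the LUT through the display model gives ~linear output,
# limited only by 8-bit quantization of the drive levels.
model = (lut / 255.0) ** gamma
err = np.max(np.abs(model - np.linspace(0.0, 1.0, 256)))
print(f"max linearity error after correction: {err:.4f}")
```

The residual quantization error here is what motivates the bit-stealing step in the abstract, which trades small chromatic perturbations for finer luminance resolution.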

  7. Link calibration against receiver calibration: an assessment of GPS time transfer uncertainties

    NASA Astrophysics Data System (ADS)

    Rovera, G. D.; Torre, J.-M.; Sherwood, R.; Abgrall, M.; Courde, C.; Laas-Bourez, M.; Uhrich, P.

    2014-10-01

    We present a direct comparison between two different techniques for the relative calibration of time transfer between remote time scales when using the signals transmitted by the Global Positioning System (GPS). Relative calibration estimates the delay of equipment or the delay of a time transfer link with respect to reference equipment. It is based on the circulation of some travelling GPS equipment between the stations in the network, against which the local equipment is measured. Two techniques can be considered: first a station calibration by the computation of the hardware delays of the local GPS equipment; second the computation of a global hardware delay offset for the time transfer between the reference points of two remote time scales. This last technique is called a ‘link’ calibration, with respect to the other one, which is a ‘receiver’ calibration. The two techniques require different measurements on site, which change the uncertainty budgets, and we discuss this and related issues. We report on one calibration campaign organized during Autumn 2013 between Observatoire de Paris (OP), Paris, France, Observatoire de la Côte d'Azur (OCA), Calern, France, and NERC Space Geodesy Facility (SGF), Herstmonceux, United Kingdom. The travelling equipment comprised two GPS receivers of different types, along with the required signal generator and distribution amplifier, and one time interval counter. We show the different ways to compute uncertainty budgets, leading to improvement factors of 1.2 to 1.5 on the hardware delay uncertainties when comparing the relative link calibration to the relative receiver calibration.
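
Uncertainty budgets of this kind combine independent delay components in quadrature. A sketch with purely hypothetical component values (not the paper's budgets) shows how a link calibration, needing fewer per-station terms, can come out ahead:

```python
import math

# Hypothetical type-B uncertainty components (ns) for a relative link
# calibration between two stations.
link_budget = {"travelling receiver repeatability": 1.0,
               "reference-point measurement": 0.7,
               "environmental sensitivity": 0.5}

# A receiver calibration adds per-station terms (values also hypothetical),
# so its combined budget carries more components.
receiver_budget = dict(link_budget,
                       **{"antenna cable delay": 0.8,
                          "distribution amplifier": 0.6})

# Combined standard uncertainty: root sum of squares of independent terms.
rss = lambda b: math.sqrt(sum(u * u for u in b.values()))
print(f"link: {rss(link_budget):.2f} ns, receiver: {rss(receiver_budget):.2f} ns")
print(f"improvement factor: {rss(receiver_budget) / rss(link_budget):.2f}")
```

With these made-up numbers the link approach improves on the receiver approach by a factor of about 1.25; the paper reports factors of 1.2 to 1.5 from its actual budgets.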

  8. Validation of software for calculating the likelihood ratio for parentage and kinship.

    PubMed

    Drábek, J

    2009-03-01

    Although the likelihood ratio is a well-known statistical technique, commercial off-the-shelf (COTS) software products for its calculation are not sufficiently validated to suit general requirements for the competence of testing and calibration laboratories (EN/ISO/IEC 17025:2005 norm) per se. The software in question can be considered critical, as it directly weighs the forensic evidence allowing judges to decide on guilt or innocence or to identify a person or kin (e.g., in mass fatalities). For these reasons, accredited laboratories shall validate likelihood ratio software in accordance with the above norm. To validate software for calculating the likelihood ratio in parentage/kinship scenarios, I assessed available vendors, chose two programs (Paternity Index and familias) for testing, and finally validated them using tests derived from the available guidelines for the fields of forensics, biomedicine, and software engineering. MS Excel calculations using known likelihood ratio formulas, or peer-reviewed results of difficult paternity cases, were used as a reference. Using seven testing cases, it was found that both programs satisfied the requirements for basic paternity cases. However, only a combination of the two software programs fulfills the criteria needed for our purpose over the whole spectrum of functions under validation, with the exception of providing algebraic formulas in cases of mutation and/or silent alleles.
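
The underlying computation is straightforward for basic cases. Below is a minimal sketch of a combined paternity-index calculation using the standard textbook formulas, deliberately ignoring mutations and silent alleles (the very cases the validation found problematic); the genotypes and allele frequencies are made up:

```python
# Per-locus paternity index for one obligate paternal allele:
#   PI = P(alleged father transmits the allele) / P(random man transmits it)
# Simplified sketch: no mutations, no silent alleles.

def locus_pi(father_genotype, obligate_allele, freq):
    n = father_genotype.count(obligate_allele)
    transmit = n / 2.0          # 1.0 if homozygous, 0.5 if heterozygous
    return transmit / freq

def combined_lr(loci):
    """Likelihood ratio = product of independent per-locus indices."""
    lr = 1.0
    for genotype, allele, freq in loci:
        lr *= locus_pi(genotype, allele, freq)
    return lr

# Hypothetical three-locus case: (father's genotype, obligate allele, frequency)
case = [(("a", "a"), "a", 0.10),    # homozygous:   PI = 1.0 / 0.10 = 10
        (("b", "c"), "b", 0.05),    # heterozygous: PI = 0.5 / 0.05 = 10
        (("d", "e"), "d", 0.25)]    # heterozygous: PI = 0.5 / 0.25 = 2
print(f"combined LR = {combined_lr(case):.1f}")   # 10 * 10 * 2 = 200
```

A validation exercise of the kind described would compare such hand-checkable products against the COTS program's output across increasingly difficult pedigrees.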

  9. On aspects of characterising and calibrating the interferometric gravitational wave detector, GEO 600

    NASA Astrophysics Data System (ADS)

    Hewitson, Martin R.

    Gravitational waves are small disturbances, or strains, in the fabric of space-time. The detection of these waves has been a major goal of modern physics since they were predicted as a consequence of Einstein's General Theory of Relativity. Large-scale astrophysical events, such as colliding neutron stars or supernovae, are predicted to release energy in the form of gravitational waves. However, even with such cataclysmic events, the strain amplitudes of the gravitational waves expected to be seen at the Earth are incredibly small: of the order of 1 part in 10^21 or less at audio frequencies. Because of these extremely small amplitudes, the search for gravitational waves remains one of the most challenging goals of modern physics. This thesis starts by detailing the data recording system of GEO 600: an essential part of producing a calibrated data set. The full data acquisition system, including all hardware and software aspects, is described in detail. Comprehensive tests of the stability and timing accuracy of the system show that it has a typical duty cycle of greater than 99% with an absolute timing accuracy (measured against GPS) of the order of 15 μs. The thesis then goes on to describe the design and implementation of a time-domain calibration method, based on the use of time-domain filters, for the power-recycled configuration of GEO 600. This time-domain method is then extended to deal with the more complicated case of calibrating the dual-recycled configuration of GEO 600. The time-domain calibration method was applied to two long data-taking (science) runs. The method proved successful in recovering (in real time) a calibrated strain time-series suitable for use in astrophysical searches. The accuracy of the calibration process was shown to be good to 10% or less across the detection band of the detector.
In principle, the time-domain method presents no restrictions in the achievable calibration accuracy; most of the uncertainty in the calibration process is

  10. RGB color calibration for quantitative image analysis: the "3D thin-plate spline" warping approach.

    PubMed

    Menesatti, Paolo; Angelini, Claudio; Pallottino, Federico; Antonucci, Francesca; Aguzzi, Jacopo; Costa, Corrado

    2012-01-01

    In recent years the need to numerically define color by its coordinates in n-dimensional space has increased strongly. Colorimetric calibration is fundamental in food processing and other biological disciplines for quantitatively comparing sample colors during workflows involving many devices. Several software programs are available to perform standardized colorimetric procedures, but they are often too imprecise for scientific purposes. In this study, we applied the thin-plate spline interpolation algorithm to calibrate colors in sRGB space (the corresponding Matlab code is reported in the Appendix). This was compared with two other approaches: the first based on a commercial calibration system (ProfileMaker) and the second on a partial least squares analysis. Moreover, to explore device variability and resolution, two different cameras were adopted, and for each sensor three consecutive pictures were acquired under four different light conditions. According to our results, the thin-plate spline approach achieved very high calibration accuracy, opening the way to routine in-field color quantification not only in food sciences but also in other biological disciplines. These results are of great importance for scientific color evaluation when lighting conditions are not controlled. Moreover, the approach allows the use of low-cost instruments while still returning scientifically sound quantitative data.
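
A thin-plate spline color correction of this kind can be sketched in a few lines. The following is an independent illustration (not the paper's Matlab code) using the standard r² log r radial kernel plus an affine term, fitted so that measured patch colors map exactly onto their reference values; the synthetic "camera" distortion is an arbitrary assumption:

```python
import numpy as np

def _phi(r):
    """Thin-plate radial kernel r^2 * log(r), with phi(0) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0.0, r * r * np.log(r), 0.0)

def tps_fit(src, dst):
    """Fit a 3-D thin-plate-spline warp src -> dst (both N x 3) by solving
    the standard TPS interpolation system with an affine polynomial part."""
    n = len(src)
    K = _phi(np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 4, n + 4))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.vstack([dst, np.zeros((4, 3))])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]                     # kernel weights, affine part

def tps_apply(x, src, w, a):
    phi = _phi(np.linalg.norm(x[:, None, :] - src[None, :, :], axis=-1))
    return phi @ w + np.hstack([np.ones((len(x), 1)), x]) @ a

# Synthetic calibration chart: measured camera RGB vs. "true" reference RGB
# related by an arbitrary linear distortion plus offset (illustrative only).
rng = np.random.default_rng(1)
measured = rng.uniform(0, 255, (24, 3))
reference = measured @ np.array([[0.9, 0.05, 0.0],
                                 [0.1, 0.85, 0.1],
                                 [0.0, 0.1, 0.9]]) + 8.0
w, a = tps_fit(measured, reference)
err = np.abs(tps_apply(measured, measured, w, a) - reference).max()
print(f"max residual at calibration patches: {err:.2e}")
```

Because TPS is an exact interpolant, the residual at the calibration patches is essentially numerical noise; the practical value lies in how smoothly it generalizes between patches.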

  11. A method of calibrating wind velocity sensors with a modified gas flow calibrator

    NASA Technical Reports Server (NTRS)

    Stump, H. P.

    1978-01-01

    A procedure was described for calibrating air velocity sensors in the exhaust flow of a gas flow calibrator. The average velocity in the test section located at the calibrator exhaust was verified from the mass flow rate accurately measured by the calibrator's precision sonic nozzles. Air at elevated pressures flowed through a series of screens, diameter changes, and flow straighteners, resulting in a smooth flow through the open test section. The modified system generated air velocities of 2 to 90 meters per second with an uncertainty of about two percent for speeds below 15 meters per second and four percent for the higher speeds. Wind tunnel data correlated well with that taken in the flow calibrator.

  12. Synthesis Polarimetry Calibration

    NASA Astrophysics Data System (ADS)

    Moellenbrock, George

    2017-10-01

    Synthesis instrumental polarization calibration fundamentals for both linear (ALMA) and circular (EVLA) feed bases are reviewed, with special attention to the calibration heuristics supported in CASA. Practical problems affecting modern instruments are also discussed.

  13. The solar vector error within the SNPP Common GEO code, the correction, and the effects on the VIIRS SDR RSB calibration

    NASA Astrophysics Data System (ADS)

    Fulbright, Jon; Anderson, Samuel; Lei, Ning; Efremova, Boryana; Wang, Zhipeng; McIntire, Jeffrey; Chiang, Kwofu; Xiong, Xiaoxiong

    2014-11-01

    Due to a software error, the solar and lunar vectors reported in the on-board calibrator intermediate product (OBC-IP) files for SNPP VIIRS are incorrect. The magnitude of the error is about 0.2 degrees and is increasing by about 0.01 degrees per year. This error, although small, affects the radiometric calibration of the reflective solar bands (RSB) because accurate solar angles are required for calculating the screen transmission functions and the illumination of the Solar Diffuser panel. In this paper, we describe the error in the Common GEO code and how it may be fixed. We present evidence for the error from within the OBC-IP data. We also describe the effects of the solar vector error on the RSB calibration and the Sensor Data Record (SDR). In order to perform this evaluation, we have reanalyzed the yaw-maneuver data to compute the vignetting functions required for the on-orbit SD RSB radiometric calibration. After the reanalysis, we find an effect of up to 0.5% on the shortwave infrared (SWIR) RSB calibration.

  14. Calibration of the ARID robot

    NASA Technical Reports Server (NTRS)

    Doty, Keith L

    1992-01-01

    The author has formulated a new, general model for specifying the kinematic properties of serial manipulators. The new model kinematic parameters do not suffer discontinuities when nominally parallel adjacent axes deviate from exact parallelism. From this new theory the author develops a first-order, lumped-parameter, calibration-model for the ARID manipulator. Next, the author develops a calibration methodology for the ARID based on visual and acoustic sensing. A sensor platform, consisting of a camera and four sonars attached to the ARID end frame, performs calibration measurements. A calibration measurement consists of processing one visual frame of an accurately placed calibration image and recording four acoustic range measurements. A minimum of two measurement protocols determine the kinematics calibration-model of the ARID for a particular region: assuming the joint displacements are accurately measured, the calibration surface is planar, and the kinematic parameters do not vary rapidly in the region. No theoretical or practical limitations appear to contra-indicate the feasibility of the calibration method developed here.

  15. Research on camera on orbit radial calibration based on black body and infrared calibration stars

    NASA Astrophysics Data System (ADS)

    Wang, YuDu; Su, XiaoFeng; Zhang, WanYing; Chen, FanSheng

    2018-05-01

    Affected by the launch process and the space environment, the response of a space camera inevitably degrades, so on-orbit radiometric calibration is necessary. In this paper, we propose a calibration method based on accurate infrared standard stars to increase the precision of infrared radiation measurement. Because stars can be treated as point targets, we use them as the radiometric calibration source and establish a Taylor-expansion method and an energy-extrapolation model based on the WISE and 2MASS catalogs. We then update the calibration results obtained from the black body. Finally, the calibration mechanism is designed and the design is verified by an on-orbit test. The experimental calibration results show that the irradiance extrapolation error is about 3% and the accuracy of the calibration method is about 10%, which satisfies the requirements of on-orbit calibration.

  16. Validation of XMALab software for marker-based XROMM.

    PubMed

    Knörlein, Benjamin J; Baier, David B; Gatesy, Stephen M; Laurence-Chasen, J D; Brainerd, Elizabeth L

    2016-12-01

    Marker-based XROMM requires software tools for: (1) correcting fluoroscope distortion; (2) calibrating X-ray cameras; (3) tracking radio-opaque markers; and (4) calculating rigid body motion. In this paper we describe and validate XMALab, a new open-source software package for marker-based XROMM (C++ source and compiled versions on Bitbucket). Most marker-based XROMM studies to date have used XrayProject in MATLAB. XrayProject can produce results with excellent accuracy and precision, but it is somewhat cumbersome to use and requires a MATLAB license. We have designed XMALab to accelerate the XROMM process and to make it more accessible to new users. Features include the four XROMM steps (listed above) in one cohesive user interface, real-time plot windows for detecting errors, and integration with an online data management system, XMAPortal. Accuracy and precision of XMALab when tracking markers in a machined object are ±0.010 and ±0.043 mm, respectively. Mean precision for nine users tracking markers in a tutorial dataset of minipig feeding was ±0.062 mm in XMALab and ±0.14 mm in XrayProject. Reproducibility of 3D point locations across nine users was 10-fold greater in XMALab than in XrayProject, and six degree-of-freedom bone motions calculated with a joint coordinate system were 3- to 6-fold more reproducible in XMALab. XMALab is also suitable for tracking white or black markers in standard light videos with optional checkerboard calibration. We expect XMALab to increase both the quality and quantity of animal motion data available for comparative biomechanics research. © 2016. Published by The Company of Biologists Ltd.
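
Step (4), calculating rigid body motion from tracked markers, is commonly solved with the SVD-based Kabsch algorithm; the following is a generic sketch of that computation, not XMALab's actual implementation:

```python
import numpy as np

def rigid_body_fit(ref, cur):
    """Least-squares rotation R and translation t with cur ~ ref @ R.T + t.

    Standard SVD (Kabsch) solution; ref/cur are N x 3 coordinates of the
    same bone's markers in the reference and current frames."""
    cr, cc = ref.mean(axis=0), cur.mean(axis=0)
    H = (ref - cr).T @ (cur - cc)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cc - R @ cr
    return R, t

# Synthetic check: rotate/translate four markers, then recover the motion.
rng = np.random.default_rng(2)
markers = rng.uniform(-10, 10, (4, 3))
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
moved = markers @ R_true.T + t_true
R, t = rigid_body_fit(markers, moved)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

With noisy marker tracks (the realistic case), the same fit returns the least-squares motion, and the per-marker residuals give a useful precision diagnostic like the ones reported in the validation.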

  17. Improved dewpoint-probe calibration

    NASA Technical Reports Server (NTRS)

    Stephenson, J. G.; Theodore, E. A.

    1978-01-01

    Relatively-simple pressure-control apparatus calibrates dewpoint probes considerably faster than conventional methods, with no loss of accuracy. Technique requires only pressure measurement at each calibration point and single absolute-humidity measurement at beginning of run. Several probes can be calibrated simultaneously and points can be checked above room temperature.

  18. Calibration of medium-resolution monochrome cathode ray tube displays for the purpose of board examinations.

    PubMed

    Evanoff, M G; Roehrig, H; Giffords, R S; Capp, M P; Rovinelli, R J; Hartmann, W H; Merritt, C

    2001-06-01

    This report discusses calibration and set-up procedures for medium-resolution monochrome cathode ray tubes (CRTs) undertaken in preparation for the oral portion of the board examination of the American Board of Radiology (ABR). The board examinations took place in more than 100 rooms of a hotel. There was one display station (a computer and the associated CRT display) in each of the hotel rooms used for the examinations. The examinations covered the radiologic specialties cardiopulmonary, musculoskeletal, gastrointestinal, vascular, pediatric, and genitourinary. The software used for set-up and calibration was the VeriLUM 4.0 package from Image Smiths in Germantown, MD. The set-up included setting minimum and maximum luminance, as well as positioning the CRT in each examination room with respect to reflections from room lights. The grey-scale calibration was performed to meet the Digital Imaging and Communications in Medicine (DICOM) Part 14 Grayscale Standard Display Function. We describe these procedures and present the calibration data in tables and graphs, listing initial values of minimum luminance, maximum luminance, and grey-scale rendition. Changes of these parameters over the duration of the examination were observed and recorded on 11 monitors in a particular room. These changes strongly suggest that all calibrated CRTs be monitored over the duration of the examination. In addition, other CRT performance data affecting image quality, such as spatial resolution, should be included in set-up and image quality-control procedures.

  19. Efficient multi-objective calibration of a computationally intensive hydrologic model with parallel computing software in Python

    USDA-ARS?s Scientific Manuscript database

    With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...

  20. Kinetic properties of two Rhizopus exo-polygalacturonase enzymes hydrolyzing galacturonic acid oligomers using isothermal titration calorimetry

    USDA-ARS?s Scientific Manuscript database

    The kinetic characteristics of two Rhizopus oryzae exo-polygalacturonases acting on galacturonic acid oligomers (GalpA) were determined using isothermal titration calorimetry (ITC). RPG15 hydrolyzing (GalpA)2 demonstrated a Km of 55 uM and kcat of 10.3 s^-1^ while RPG16 was shown to have greater af...

  1. The site-scale saturated zone flow model for Yucca Mountain: Calibration of different conceptual models and their impact on flow paths

    USGS Publications Warehouse

    Zyvoloski, G.; Kwicklis, E.; Eddebbarh, A.-A.; Arnold, B.; Faunt, C.; Robinson, B.A.

    2003-01-01

    This paper presents several different conceptual models of the Large Hydraulic Gradient (LHG) region north of Yucca Mountain and describes the impact of those models on groundwater flow near the potential high-level repository site. The results are based on a numerical model of the site-scale saturated zone beneath Yucca Mountain. This model is used for performance assessment predictions of radionuclide transport and to guide future data collection and modeling activities. The numerical model is calibrated by matching available water level measurements using parameter estimation techniques, along with more informal comparisons of the model to hydrologic and geochemical information. The model software (the hydrologic simulation code FEHM and the parameter estimation software PEST) and model setup allow for efficient calibration of multiple conceptual models. Until now, the Large Hydraulic Gradient has been simulated using a low-permeability, east-west oriented feature, even though direct evidence for this feature is lacking. In addition to this model, we investigate and calibrate three additional conceptual models of the Large Hydraulic Gradient, all of which are based on a presumed zone of hydrothermal chemical alteration north of Yucca Mountain. After examining the heads and permeabilities obtained from the calibrated models, we present particle pathways from the potential repository that record differences in the predicted groundwater flow regime. The results show that the Large Hydraulic Gradient can be represented with the alternate conceptual models that include the hydrothermally altered zone. The predicted pathways are mildly sensitive to the choice of the conceptual model and more sensitive to the quality of calibration in the vicinity of the repository. These differences are most likely due to different degrees of fit of model to data, and do not represent important differences in hydrologic conditions for the different conceptual models. © 2002 Elsevier Science B.V.

  2. Technique for calibrating angular measurement devices when calibration standards are unavailable

    NASA Technical Reports Server (NTRS)

    Finley, Tom D.

    1991-01-01

    A calibration technique is proposed that will allow the calibration of certain angular measurement devices without requiring the use of an absolute standard. The technique assumes that the device to be calibrated has deterministic bias errors. A comparison device that meets the same requirements must be available. The two devices are compared; one device is then rotated with respect to the other, and a second comparison is performed. If the data are reduced using the described technique, the individual errors of the two devices can be determined.
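
One concrete realization of this idea, under its stated assumptions (deterministic, periodic bias errors and a rotation equal to one sampling step), can be sketched as follows; the error magnitudes and step count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 36                                   # comparison positions, 10 deg apart

def periodic_error():
    e = rng.normal(0.0, 0.01, N)         # deterministic bias curve (deg)
    return e - e.mean()                  # zero-mean convention

eA, eB = periodic_error(), periodic_error()   # unknown "true" device errors

# Comparison 1: devices aligned -> difference of their errors at each index.
d1 = eA - eB
# Comparison 2: device B rotated by one step, so the same physical angle
# now falls on the next point of B's scale.
d2 = eA - np.roll(eB, -1)

# d1 - d2 = eB[i+1] - eB[i]: first differences of B's error curve.
# Integrating them and re-zeroing the mean recovers eB; d1 then gives eA.
diffs = d1 - d2
eB_est = np.concatenate([[0.0], np.cumsum(diffs[:-1])])
eB_est -= eB_est.mean()
eA_est = d1 + eB_est

print(np.allclose(eB_est, eB), np.allclose(eA_est, eA))
```

With purely deterministic errors the separation is exact (up to the zero-mean convention); random measurement noise would instead be beaten down by repeating the comparisons.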

  3. The self-calibration method for multiple systems at the CHARA Array

    NASA Astrophysics Data System (ADS)

    O'Brien, David

    The self-calibration method, a new interferometric technique at the CHARA Array, has been used to derive orbits for several spectroscopic binaries. This method uses the wide component of a hierarchical triple system to calibrate visibility measurements of the triple's close binary system. At certain baselines and separations, the calibrator in one of these systems can be observed quasi-simultaneously with the target. Depending on the orientation of the CHARA observation baseline relative to the orientation of the wide orbit of the triple system, separated fringe packets may be observed. A sophisticated observing scheme must be put in place to ensure the existence of separated fringe packets on nights of observation. Prior to the onset of this project, the reduction of separated fringe packet data had never included the goal of deriving visibilities for both fringe packets, so new data reduction software has been written. Visibilities obtained with separated fringe packet data for the target close binary are run through both Monte Carlo simulations and grid search programs in order to determine the best-fit orbital elements of the close binary. Several targets have been observed in this fashion, and orbits have been derived for seven targets, including three new orbits. Derivation of the orbit of the close pair in a triple system allows for the calculation of the mutual inclination, which is the angle between the planes of the wide and close orbit. Knowledge of this quantity may give insight into the formation processes that create multiple star systems. INDEX WORDS: Long-baseline interferometry, Self calibration, Separated fringe packets, Triple systems, Close binaries, Multiple systems, Orbital parameters, Near-infrared interferometry

  5. PV Calibration Insights | NREL

    Science.gov Websites

    PV Calibration Insights PV Calibration Insights The Photovoltaic (PV) Calibration Insights blog will provide updates on the testing done by the NREL PV Device Performance group. This NREL research group measures the performance of any and all technologies and sizes of PV devices from around the world

  6. Development of a software package for solid-angle calculations using the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Chen, Xiulian; Zhang, Changsheng; Li, Gang; Xu, Jiayun; Sun, Guangai

    2014-02-01

    Solid-angle calculations, which are often complicated, play an important role in the absolute calibration of radioactivity measurement systems and in the determination of the activity of radioactive sources. In the present paper, a software package is developed to provide a convenient tool for solid-angle calculations in nuclear physics. The proposed software calculates solid angles using the Monte Carlo method, into which a new type of variance reduction technique was integrated. The package, developed with the Microsoft Foundation Classes (MFC) in Microsoft Visual C++, has a graphical user interface with a visualization function built on OpenGL. One advantage of the proposed software package is that it can calculate, without difficulty, the solid angle subtended by a detector of various geometric shapes (e.g., cylinder, square prism, regular triangular prism or regular hexagonal prism) at a point, circular or cylindrical source. The results obtained from the proposed software package were compared with those from previous studies and with values calculated using Geant4. The comparison shows that the proposed package produces accurate solid-angle values at a greater computation speed than Geant4.
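    Setting aside the paper's variance-reduction technique and GUI, the core Monte Carlo estimate can be sketched for the simplest case the paper covers, a point source on the axis of a circular disk detector. This is a toy sketch, not the package's implementation; function names and the sample count are illustrative.

```python
import math
import random

def solid_angle_disk_mc(d, R, n=200_000, seed=1):
    """Monte Carlo estimate of the solid angle subtended by a coaxial
    circular disk of radius R at distance d from a point source."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        z = rng.uniform(-1.0, 1.0)            # uniform direction on the sphere
        phi = rng.uniform(0.0, 2.0 * math.pi)
        if z <= 0.0:
            continue                          # ray points away from the disk
        s = math.sqrt(1.0 - z * z)
        t = d / z                             # scale the ray to the disk plane
        x, y = t * s * math.cos(phi), t * s * math.sin(phi)
        if x * x + y * y <= R * R:
            hits += 1
    return 4.0 * math.pi * hits / n           # solid angle = 4*pi * hit fraction

def solid_angle_disk_exact(d, R):
    # closed form for the on-axis point source, used here only as a check
    return 2.0 * math.pi * (1.0 - d / math.hypot(d, R))
```

    The statistical error shrinks as 1/sqrt(n); the variance-reduction technique the paper integrates would accelerate exactly this convergence.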

  7. Procedure for the Selection and Validation of a Calibration Model I-Description and Application.

    PubMed

    Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D

    2017-05-01

    Calibration model selection is required for all quantitative methods in toxicology and, more broadly, in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and the weighting factor that correctly model the data. Mis-selecting the calibration model will degrade quality control (QC) accuracy, with errors of up to 154%. Unfortunately, simple tools to perform this selection, and tests to validate the resulting model, are lacking. We present a stepwise, analyst-independent scheme for the selection and validation of calibration models. The success rate of this scheme is on average 40% higher than that of a traditional "fit and check the QC accuracy" approach to selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of replicate measurements at the upper and lower limits of quantification. When weighting was required, the choice between 1/x and 1/x² was made by calculating which option generated the smallest spread of weighted normalized variances. Finally, model order was selected through a partial F-test. The chosen calibration model was validated through Cramér-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
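    The weighting decision described above (a variance ratio at the two ends of the calibration range, followed by a weighted fit) can be sketched as follows. This is a minimal illustration in Python rather than the paper's R script; function names are assumptions, and the critical-value comparison for the F-test is left to the analyst.

```python
import statistics

def f_ratio(lloq_reps, uloq_reps):
    """Variance ratio used to decide whether weighting is needed:
    replicate variance at the ULOQ over replicate variance at the LLOQ.
    Compare the result against the critical F value for the replicate counts."""
    return statistics.variance(uloq_reps) / statistics.variance(lloq_reps)

def weighted_linear_fit(xs, ys, weights):
    """Closed-form weighted least squares for y = a + b*x
    (weights would be 1/x or 1/x**2 in the scheme described above)."""
    sw = sum(weights)
    mx = sum(w * x for w, x in zip(weights, xs)) / sw
    my = sum(w * y for w, y in zip(weights, ys)) / sw
    b = (sum(w * (x - mx) * (y - my) for w, x, y in zip(weights, xs, ys))
         / sum(w * (x - mx) ** 2 for w, x in zip(weights, xs)))
    a = my - b * mx
    return a, b
```

    A large F ratio indicates heteroscedasticity, i.e. that an unweighted fit would let the high-concentration standards dominate the regression.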

  8. OLI Radiometric Calibration

    NASA Technical Reports Server (NTRS)

    Markham, Brian; Morfitt, Ron; Kvaran, Geir; Biggar, Stuart; Leisso, Nathan; Czapla-Myers, Jeff

    2011-01-01

    Goals: (1) present an overview of the pre-launch radiance, reflectance and uniformity calibration of the Operational Land Imager (OLI), including (a) transfer to orbit/heliostat and (b) linearity; and (2) discuss on-orbit plans for radiance, reflectance and uniformity calibration of the OLI.

  9. NASA Metrology and Calibration, 1980

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The proceedings of the fourth annual NASA Metrology and Calibration Workshop are presented. This workshop covered (1) review and assessment of NASA metrology and calibration activities by NASA Headquarters, (2) results of audits by the Office of Inspector General, (3) review of a proposed NASA Equipment Management System, (4) current and planned field center activities, (5) National Bureau of Standards (NBS) calibration services for NASA, (6) review of NBS's Precision Measurement and Test Equipment Project activities, (7) NASA instrument loan pool operations at two centers, (8) mobile cart calibration systems at two centers, (9) calibration intervals and decals, (10) NASA Calibration Capabilities Catalog, and (11) development of plans and objectives for FY 1981. Several papers in these proceedings are slide presentations only.

  10. Calibrated FMRI.

    PubMed

    Hoge, Richard D

    2012-08-15

    Functional magnetic resonance imaging with blood oxygenation level-dependent (BOLD) contrast has had a tremendous influence on human neuroscience in the last twenty years, providing a non-invasive means of mapping human brain function with often exquisite sensitivity and detail. However, the BOLD method remains a largely qualitative approach. While the same can be said of anatomic MRI techniques, whose clinical and research impact has not been diminished in the slightest by the lack of a quantitative interpretation of their image intensity, the quantitative expression of BOLD responses as a percent of the baseline T2*-weighted signal has been viewed as necessary since the earliest days of fMRI. Calibrated MRI attempts to dissociate changes in oxygen metabolism from changes in blood flow and volume, all three quantities contributing jointly to determine the physiologically ambiguous percent BOLD change. This dissociation is typically performed using a "calibration" procedure in which subjects inhale a gas mixture containing small amounts of carbon dioxide or enriched oxygen to produce changes in blood flow and BOLD signal which can be measured under well-defined hemodynamic conditions. The outcome is a calibration parameter M which can then be substituted into an expression providing the fractional change in oxygen metabolism given changes in blood flow and BOLD signal during a task. The latest generation of calibrated MRI methods goes beyond fractional changes to provide absolute quantification of resting-state oxygen consumption in micromolar units, in addition to absolute measures of evoked metabolic response. This review discusses the history, challenges, and advances in calibrated MRI, from the personal perspective of the author. Copyright © 2012 Elsevier Inc. All rights reserved.
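    For background, one widely used form of the expression the abstract refers to is the Davis model, in which the exponents α and β are empirical constants and the subscript 0 denotes baseline:

```latex
\frac{\Delta S}{S_0} \;=\; M\left[\,1 -
  \left(\frac{\mathrm{CBF}}{\mathrm{CBF}_0}\right)^{\alpha-\beta}
  \left(\frac{\mathrm{CMRO_2}}{\mathrm{CMRO_{2,0}}}\right)^{\beta}\right]
```

    During the hypercapnia calibration, oxygen metabolism is assumed unchanged, so the measured CBF and BOLD responses determine M; during a task, the same expression is inverted to recover the fractional change in CMRO2.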

  11. Cscibox: A Software System for Age-Model Construction and Evaluation

    NASA Astrophysics Data System (ADS)

    Bradley, E.; Anderson, K. A.; Marchitto, T. M., Jr.; de Vesine, L. R.; White, J. W. C.; Anderson, D. M.

    2014-12-01

    CSciBox is an integrated software system for the construction and evaluation of age models of paleo-environmental archives, both directly dated and cross-dated. The time has come to encourage cross-pollination between earth science and computer science in dating paleorecords, and this project addresses that need. The CSciBox code, which is being developed by a team of computer scientists and geoscientists, is open source and freely available on github. The system employs modern database technology to store paleoclimate proxy data and analysis results in an easily accessible and searchable form. This makes it possible to analyze the whole core at once, in an interactive fashion, or to tailor the analysis to a subset of the core without loading the entire data file. CSciBox provides a number of 'components' that perform the common steps in age-model construction and evaluation: calibrations, reservoir-age correction, interpolations, statistics, and so on. The user employs these components via a graphical user interface (GUI) to go from raw data to finished age model in a single tool: e.g., an IntCal09 calibration of 14C data from a marine sediment core, followed by a piecewise-linear interpolation. CSciBox's GUI supports plotting of any measurement in the core against any other measurement, or against any of the variables in the calculation of the age model, with or without explicit error representations. Using the GUI, CSciBox's user can import a new calibration curve or other background data set and define a new module that employs that information. Users can also incorporate other software (e.g., Calib, BACON) as 'plug-ins.' In the case of truly large data or significant computational effort, CSciBox is parallelizable across modern multicore processors, clusters, or even the cloud. The next generation of the CSciBox code, currently in the testing stages, includes an automated reasoning engine that supports a more thorough exploration of plausible age models.

  12. Resting energy expenditure per lean body mass determined by indirect calorimetry and bioelectrical impedance analysis in cats.

    PubMed

    Center, S A; Warner, K L; Randolph, J F; Wakshlag, J J; Sunvold, G D

    2011-01-01

    Resting energy expenditure (REE) approximates ≥60% of daily energy expenditure (DEE). Accurate REE determination could facilitate sequential comparisons among patients and diseases if normalized against lean body mass (LBM). (1) Validate open-flow indirect calorimetry (IC) system and multifrequency bioelectrical impedance analysis (MF-BIA) to determine REE and LBM, respectively, in healthy nonsedated cats of varied body conditions; (2) normalize REE against LBM. Fifty-seven adult neutered domestic short-haired cats with stable BW. Continuous (45-min) IC-measurements determined least observed metabolism REE. Cage gas flow regulated with mass flow controllers was verified using nitrogen dilution; span gases calibrated gas measurements. Respiratory quotient accuracy was verified using alcohol combustion. IC-REE was compared to DEE, determined using doubly labeled water. MF-BIA LBM was validated against criterion references (deuterium, sodium bromide). Intra- and interassay variation was determined for IC and MF-BIA. Mean IC-REE (175 ± 38.7 kcal; 1.5-14% intra- and interassay CV%) represented 61 ± 14.3% of DEE. Best MF-BIA measurements were collected in sternal recumbency and with electrodes in neck-tail configuration. MF-BIA LBM was not significantly different from criterion references and generated LBM interassay CV% of 6.6-10.1%. Over- and underconditioned cats had significantly (P ≤ .05) lower and higher IC-REE (kcal/kg) respectively, compared with normal-conditioned cats. However, differences resolved with REE/LBM (approximating 53 ± 10.3 kcal/LBM [kg]). IC and MF-BIA validated herein reasonably estimate REE and LBM in cats. REE/LBM(kg) may permit comparison of energy utilization in sequential studies or among different cats. Copyright © 2011 by the American College of Veterinary Internal Medicine.

  13. Automatic force balance calibration system

    NASA Technical Reports Server (NTRS)

    Ferris, Alice T. (Inventor)

    1995-01-01

    A system for automatically calibrating force balances is provided. The invention uses a reference balance aligned with the balance being calibrated to provide superior accuracy while minimizing the time required to complete the calibration. The reference balance and the test balance are rigidly attached together with closely aligned moment centers. Loads placed on the system affect each balance equally, and the differences in the readings of the two balances can be used to generate the calibration matrix for the test balance. Since the accuracy of the test calibration is determined by the accuracy of the reference balance, and current technology allows reference balances to be calibrated to within +/-0.05%, the entire system has an accuracy of +/-0.2%. The entire apparatus is relatively small and can be mounted on a movable base for easy transport between test locations. The system can also accept a wide variety of reference balances, thus allowing calibration under diverse load and size requirements.
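    The idea of generating a calibration matrix from paired readings can be sketched with a two-component toy example: given simultaneous reference-balance loads and test-balance responses, solve the least-squares problem loads ≈ responses · C. The real balance problem has six components, but the algebra is identical; the function name and 2x2 restriction are illustrative, not the patented system's implementation.

```python
def calibration_matrix_2x2(responses, loads):
    """Least-squares 2x2 calibration matrix C with loads ~ responses @ C.
    responses, loads: lists of (r1, r2) / (L1, L2) pairs from paired readings."""
    # normal equations: (R^T R) C = R^T L, solved with an explicit 2x2 inverse
    a = b = c = 0.0               # R^T R = [[a, b], [b, c]]
    p11 = p12 = p21 = p22 = 0.0   # R^T L
    for (r1, r2), (l1, l2) in zip(responses, loads):
        a += r1 * r1; b += r1 * r2; c += r2 * r2
        p11 += r1 * l1; p12 += r1 * l2
        p21 += r2 * l1; p22 += r2 * l2
    det = a * c - b * b
    # C = (R^T R)^-1 (R^T L)
    return [[(c * p11 - b * p21) / det, (c * p12 - b * p22) / det],
            [(a * p21 - b * p11) / det, (a * p22 - b * p12) / det]]
```

    With more load cases than matrix entries, the least-squares solution averages out reading noise, which is why loading the rigidly coupled pair through many load cases yields a usable calibration.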

  14. Automatic force balance calibration system

    NASA Technical Reports Server (NTRS)

    Ferris, Alice T. (Inventor)

    1996-01-01

    A system for automatically calibrating force balances is provided. The invention uses a reference balance aligned with the balance being calibrated to provide superior accuracy while minimizing the time required to complete the calibration. The reference balance and the test balance are rigidly attached together with closely aligned moment centers. Loads placed on the system affect each balance equally, and the differences in the readings of the two balances can be used to generate the calibration matrix for the test balance. Since the accuracy of the test calibration is determined by the accuracy of the reference balance, and current technology allows reference balances to be calibrated to within +/-0.05%, the entire system has an accuracy of +/-0.2%. The entire apparatus is relatively small and can be mounted on a movable base for easy transport between test locations. The system can also accept a wide variety of reference balances, thus allowing calibration under diverse load and size requirements.

  15. Some aspects of robotics calibration, design and control

    NASA Technical Reports Server (NTRS)

    Tawfik, Hazem

    1990-01-01

    The main objective is to introduce techniques in the areas of testing and calibration, design, and control of robotic systems. A statistical technique is described that analyzes a robot's performance and provides quantitative three-dimensional evaluation of its repeatability, accuracy, and linearity. Based on this analysis, a corrective action should be taken to compensate for any existing errors and enhance the robot's overall accuracy and performance. A comparison between robotics simulation software packages that were commercially available (SILMA, IGRIP) and that of Kennedy Space Center (ROBSIM) is also included. These computer codes simulate the kinematics and dynamics patterns of various robot arm geometries to help the design engineer in sizing and building the robot manipulator and control system. A brief discussion on an adaptive control algorithm is provided.

  16. Calibration process of highly parameterized semi-distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Vidmar, Andrej; Brilly, Mitja

    2017-04-01

    Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since every river basin has its own natural characteristics and every hydrological event within it is unique, calibration is a complex process that has not been researched enough. Calibration is the procedure of determining those model parameters that are not known well enough: the input and output variables and the mathematical model expressions are known, while some parameters are unknown and must be determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that give the modeller no possibility to manage the process, and the results are not the best. We therefore developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command-line interface, and couple it with PEST, a parameter estimation tool that is widely used in groundwater modelling and can also be applied to surface waters. A calibration process managed directly by the expert, in proportion to the expert's knowledge, affects the outcome of the inversion procedure and achieves better results than if the procedure had been left to the selected optimization algorithm alone. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena, such as karstic, alluvial and forest areas; this step requires the geological, meteorological, hydraulic and hydrological knowledge of the modeller. The second step is to set initial parameter values to their preferred values based on expert knowledge; in this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events. Each sub-catchment in the model has its own observation group.

  17. Results from Source-Based and Detector-Based Calibrations of a CLARREO Calibration Demonstration System

    NASA Technical Reports Server (NTRS)

    Angal, Amit; Mccorkel, Joel; Thome, Kurt

    2016-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is formulated to determine long-term climate trends using SI-traceable measurements. The CLARREO mission will include instruments operating in the reflected solar (RS) wavelength region from 320 nm to 2300 nm. The Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS) is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO and facilitates testing and evaluation of calibration approaches. The basis of CLARREO and SOLARIS calibration is the Goddard Laser for Absolute Measurement of Response (GLAMR), which provides a radiance-based calibration at reflective solar wavelengths using continuously tunable lasers. SI-traceability is achieved via detector-based standards that, in GLAMR's case, are a set of NIST-calibrated transfer radiometers. A portable version of the SOLARIS, Suitcase SOLARIS, is used to evaluate GLAMR's calibration accuracies. The calibration of Suitcase SOLARIS using GLAMR agrees with that obtained from source-based results of the Remote Sensing Group (RSG) at the University of Arizona to better than 5% (k=2) in the 720-860 nm spectral range. The differences are within the uncertainties of the NIST-calibrated FEL lamp-based approach of RSG and give confidence that GLAMR is operating at 5% (k=2) absolute uncertainty. Limitations of the Suitcase SOLARIS instrument are also discussed, and the next edition of the SOLARIS instrument (Suitcase SOLARIS-2) is expected to provide an improved mechanism to further assess GLAMR and CLARREO calibration approaches. (2016) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  18. Online particle detection with Neural Networks based on topological calorimetry information

    NASA Astrophysics Data System (ADS)

    Ciodaro, T.; Deva, D.; de Seixas, J. M.; Damazio, D.

    2012-06-01

    This paper presents the latest results from the Ringer algorithm, which is based on artificial neural networks for electron identification at the online filtering system of the ATLAS particle detector, in the context of the LHC experiment at CERN. The algorithm performs topological feature extraction using the ATLAS calorimetry information (energy measurements). The extracted information is presented to a neural network classifier. Studies showed that the Ringer algorithm achieves high detection efficiency while keeping the false alarm rate low. Optimizations, guided by detailed analysis, reduced the algorithm execution time by 59%. Also, the total memory necessary to store the Ringer algorithm information represents less than 6.2% of the filtering system's total memory.
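    The ATLAS-specific detector geometry aside, the ring-sum idea behind this kind of topological feature extraction can be sketched on a toy 2-D grid of cell energies: sum the energy in concentric rings around the hottest cell and normalize. This is an illustration of the general technique, not the experiment's actual code; the grid layout and ring metric are assumptions.

```python
def ring_features(grid, n_rings=4):
    """Sum cell energies in concentric square rings around the hottest cell,
    then normalize by the total ring energy (toy ring-sum feature vector)."""
    rows, cols = len(grid), len(grid[0])
    # locate the hottest cell, the center of the rings
    hr, hc = max(((r, c) for r in range(rows) for c in range(cols)),
                 key=lambda rc: grid[rc[0]][rc[1]])
    rings = [0.0] * n_rings
    for r in range(rows):
        for c in range(cols):
            k = max(abs(r - hr), abs(c - hc))   # Chebyshev ring index
            if k < n_rings:
                rings[k] += grid[r][c]
    total = sum(rings) or 1.0
    return [e / total for e in rings]           # normalized ring energies
```

    The normalized ring energies form a compact, roughly energy-independent input vector for a classifier, which is what makes the representation attractive for fast online filtering.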

  19. Exploring and validating physicochemical properties of mangiferin through GastroPlus® software

    PubMed Central

    Khurana, Rajneet Kaur; Kaur, Ranjot; Kaur, Manninder; Kaur, Rajpreet; Kaur, Jasleen; Kaur, Harpreet; Singh, Bhupinder

    2017-01-01

    Aim: Mangiferin (Mgf), a promising therapeutic polyphenol, exhibits poor oral bioavailability. Hence, apt delivery systems are required to facilitate its gastrointestinal absorption. The requisite details on its physicochemical properties have not yet been well documented in literature. Accordingly, in order to have explicit insight into its physicochemical characteristics, the present work was undertaken using GastroPlus™ software. Results: Aqueous solubility (0.38 mg/ml), log P (-0.65), Peff (0.16 × 10⁻⁴ cm/s) and ability to act as P-gp substrate were defined. Potency to act as a P-gp substrate was verified through Caco-2 cells, while Peff was estimated through single pass intestinal perfusion studies. Characterization of Mgf through transmission electron microscopy, differential scanning calorimetry, infrared spectroscopy and powder X-ray diffraction has also been reported. Conclusion: The values of physicochemical properties for Mgf reported in the current manuscript would certainly enable the researchers to develop newer delivery systems for Mgf. PMID:28344830

  20. Radiochromic film calibration for the RQT9 quality beam

    NASA Astrophysics Data System (ADS)

    Costa, K. C.; Gomez, A. M. L.; Alonso, T. C.; Mourao, A. P.

    2017-11-01

    When ionizing radiation interacts with matter, it deposits energy. Radiation dosimetry is important for medical applications of ionizing radiation due to the increasing demand for diagnostic radiology and radiotherapy. Different dosimetry methods are used, each with its advantages and disadvantages. Film is a dose measurement method that records the energy deposition through the darkening of its emulsion. Radiochromic films have little sensitivity to visible light and respond well to ionizing radiation exposure. The aim of this study is to obtain a calibration curve by irradiating radiochromic film strips, making it possible to relate the darkening of the film to the absorbed dose, in order to measure doses in computed tomography (CT) experiments with a 120 kV X-ray beam. Film strips of GAFCHROMIC XR-QA2 were exposed according to the RQT9 reference radiation, which defines an X-ray beam generated from a voltage of 120 kV. Strips were irradiated at the "Laboratório de Calibração de Dosímetros do Centro de Desenvolvimento da Tecnologia Nuclear" (LCD/CDTN) over a dose range of 5-30 mGy, corresponding to the values commonly used in CT scans. Digital images of the irradiated films were analyzed using the ImageJ software. The darkening responses of the film strips as a function of dose were observed, yielding a numeric darkening value for each specific dose. From these values, a calibration curve was obtained that correlates the darkening of the film strip with dose values in mGy. The calibration curve equation is a simple method for obtaining absorbed dose values from digital images of irradiated radiochromic films. With the calibration curve, radiochromic films may be applied to dosimetry in CT experiments using a 120 kV X-ray beam, in order to improve CT image acquisition processes.

  1. Blind calibration of radio interferometric arrays using sparsity constraints and its implications for self-calibration

    NASA Astrophysics Data System (ADS)

    Chiarucci, Simone; Wijnholds, Stefan J.

    2018-02-01

    Blind calibration, i.e. calibration without a priori knowledge of the source model, is robust to the presence of unknown sources such as transient phenomena or (low-power) broad-band radio frequency interference that escaped detection. In this paper, we present a novel method for blind calibration of a radio interferometric array assuming that the observed field only contains a small number of discrete point sources. We show the huge computational advantage over previous blind calibration methods and we assess its statistical efficiency and robustness to noise and the quality of the initial estimate. We demonstrate the method on actual data from a Low-Frequency Array low-band antenna station showing that our blind calibration is able to recover the same gain solutions as the regular calibration approach, as expected from theory and simulations. We also discuss the implications of our findings for the robustness of regular self-calibration to poor starting models.

  2. LC-IM-TOF Instrument Control & Data Visualization Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2011-05-12

    Liquid Chromatography-Ion Mobility-Time of Flight Instrument Control and Data Visualization software is designed to control instrument voltages for the ion mobility drift tube. It collects and stores information from the Agilent TOF instrument and analyzes/displays the acquired ion intensity information. The software interface can be split into three categories: Instrument Settings/Controls, Data Acquisition, and Viewer. Instrument Settings/Controls prepares the instrument for data acquisition; the Viewer contains common objects used by both Instrument Settings/Controls and Data Acquisition. Intensity information is collected in 1 ns bins separated by TOF pulses called scans. A collection of scans stored side by side makes up an accumulation. In order for the computer to keep up with the stream of data, 30-50 accumulations are commonly summed into a single frame, and a collection of frames makes up an experiment. The Viewer then presents the experiment in several possible ways: each frame can be viewed in TOF bins or m/z (mass-to-charge ratio), and the data can be viewed frame by frame, by merging several frames, or as a peak chromatogram. The user can zoom into the data, export data, and animate frames. Additional features include calibration of the data and post-processing of multiplexed data.
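    The scan → accumulation → frame hierarchy described above amounts to element-wise summation of fixed-size groups. A toy sketch of the frame-building step (the function name, list representation, and 40-accumulation frame size are illustrative, not the instrument software's data path):

```python
def sum_into_frames(accumulations, accs_per_frame=40):
    """Sum fixed-size groups of accumulations into frames, element-wise.
    Each accumulation is a list of intensity bins of equal length."""
    frames = []
    for i in range(0, len(accumulations) - accs_per_frame + 1, accs_per_frame):
        group = accumulations[i:i + accs_per_frame]
        # add the groups bin-by-bin to form one frame
        frames.append([sum(bins) for bins in zip(*group)])
    return frames
```

    Summing 30-50 accumulations per frame trades time resolution for throughput, which is exactly the compromise the abstract describes for keeping up with the data stream.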

  3. Excimer laser calibration system.

    PubMed

    Gottsch, J D; Rencs, E V; Cambier, J L; Hall, D; Azar, D T; Stark, W J

    1996-01-01

    Excimer laser photoablation for refractive and therapeutic keratectomies has been demonstrated to be feasible and practicable. However, corneal laser ablations are not without problems, including the delivery and maintenance of a homogeneous beam. We have developed an excimer laser calibration system capable of characterizing a laser ablation profile. Beam homogeneity is determined by the analysis of a polymethylmethacrylate (PMMA)-based thin film using video capture and image processing. The ablation profile is presented as a color-coded map. Interpolation of excimer calibration system analysis provides a three-dimensional representation of elevation profiles that correlates with two-dimensional scanning profilometry. Excimer calibration analysis was performed before treating a monkey undergoing phototherapeutic keratectomy and two human subjects undergoing myopic spherocylindrical photorefractive keratectomy. Excimer calibration analysis was performed before and after laser refurbishing. Laser ablation profiles in PMMA are resolved by the excimer calibration system to 0.006 microns/pulse. Correlations with ablative patterns in a monkey cornea were demonstrated with preoperative and postoperative keratometry using corneal topography, and in two human subjects using video-keratography. Excimer calibration analysis predicted a central-steep-island ablative pattern with the VISX Twenty/Twenty laser, which was confirmed by corneal topography immediately postoperatively and at 1 week after reepithelialization in the monkey. Predicted central steep islands in the two human subjects were confirmed by video-keratography at 1 week and at 1 month. Subsequent technical refurbishing of the laser resulted in a beam with an overall increased ablation rate measured as microns/pulse with a donut ablation profile. A patient treated after repair of the laser electrodes demonstrated no central island. This excimer laser calibration system can precisely detect laser-beam ablation

  4. Calibrating the MicroBooNE Photomultiplier Tube (PMT) Array with Michel Electrons from Cosmic Ray Muons

    NASA Astrophysics Data System (ADS)

    Greene, Amy

    2013-04-01

    MicroBooNE is a neutrino experiment at Fermilab designed to investigate the 3σ low-energy electron candidate events measured by the MiniBooNE experiment. Neutrinos from the Booster Neutrino Beam are detected by an 89-ton liquid argon time projection chamber, which is expected to start taking data in 2014. MicroBooNE measures both the ionization electrons and the scintillation light produced by neutrino interactions in the liquid argon. The scintillation light is collected by an array of 30 PMTs located at one side of the detector. This array can be calibrated using Michel electrons from stopping cosmic ray muons, by fitting the measured PMT response with the theoretical expectation. I will report on the progress of the PMT calibration software that has been developed using the MicroBooNE Monte Carlo.

  5. VIIRS thermal emissive bands on-orbit calibration coefficient performance using vicarious calibration results

    NASA Astrophysics Data System (ADS)

    Moyer, D.; Moeller, C.; De Luccia, F.

    2013-09-01

    The Visible Infrared Imaging Radiometer Suite (VIIRS), a primary sensor on board the Suomi National Polar-orbiting Partnership (SNPP) spacecraft, was launched on October 28, 2011. It has 22 bands: 7 thermal emissive bands (TEBs), 14 reflective solar bands (RSBs) and a Day/Night Band (DNB). The TEBs cover spectral wavelengths between 3.7 and 12 μm and comprise two 371 m and five 742 m spatial resolution bands. A VIIRS Key Performance Parameter (KPP) is the sea surface temperature (SST), which uses the calibrated Science Data Records (SDRs) of bands M12 (3.7 μm), M15 (10.8 μm) and M16 (12.0 μm). The TEB SDRs rely on pre-launch calibration coefficients used in a quadratic algorithm to convert the detector's response to calibrated radiance. This paper evaluates the performance of these pre-launch calibration coefficients using vicarious calibration information from the Cross-track Infrared Sounder (CrIS), also on board the SNPP spacecraft, and the Infrared Atmospheric Sounding Interferometer (IASI) on board the Meteorological Operational (MetOp) satellite. Changes to the pre-launch calibration coefficients' offset term c0 to improve SDR performance at cold scene temperatures are also discussed.
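    The quadratic response-to-radiance conversion and the offset adjustment the abstract refers to can be sketched as below. The coefficient names follow the abstract's c0 offset-term naming; the functions themselves, and the sign convention of the bias correction, are illustrative assumptions rather than the operational SDR algorithm.

```python
def dn_to_radiance(dn, c0, c1, c2):
    """Quadratic conversion of detector response (dn) to calibrated radiance:
    L = c0 + c1*dn + c2*dn**2."""
    return c0 + c1 * dn + c2 * dn * dn

def corrected_offset(c0, cold_scene_bias):
    """Illustrative vicarious adjustment: shift the offset term by the
    radiance bias observed against a reference sounder at cold scenes."""
    return c0 - cold_scene_bias
```

    Because the offset term dominates at low signal levels, a vicarious comparison against CrIS or IASI at cold scene temperatures constrains c0 far better than it constrains the gain terms.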

  6. Energy calibration issues in nuclear resonant vibrational spectroscopy: observing small spectral shifts and making fast calibrations.

    PubMed

    Wang, Hongxin; Yoda, Yoshitaka; Dong, Weibing; Huang, Songping D

    2013-09-01

    The conventional energy calibration for nuclear resonant vibrational spectroscopy (NRVS) is usually time-consuming. Meanwhile, taking NRVS samples out of the cryostat increases the chance of sample damage, which makes it impossible to carry out an energy calibration during a single NRVS measurement. In this study, by manipulating the 14.4 keV beam through the main measurement chamber without removing the NRVS sample, two alternative calibration procedures have been proposed and established: (i) an in situ calibration procedure, which measures the main NRVS sample at stage A and the calibration sample at stage B simultaneously, and calibrates the energies for observing extremely small spectral shifts; for example, the 0.3 meV energy shift between the 100%-(57)Fe-enriched [Fe4S4Cl4](=) and the 10%-(57)Fe and 90%-(54)Fe labeled [Fe4S4Cl4](=) has been well resolved; and (ii) a quick-switching energy calibration procedure, which reduces each calibration from 3-4 h to about 30 min. Although the quick-switching calibration is not in situ, it is suitable for normal NRVS measurements.

  7. Automated Heat-Flux-Calibration Facility

    NASA Technical Reports Server (NTRS)

    Liebert, Curt H.; Weikle, Donald H.

    1989-01-01

    Computer control speeds operation of equipment and processing of measurements. A new heat-flux-calibration facility has been developed at Lewis Research Center. It is used for fast-transient heat-transfer testing, durability testing, and calibration of heat-flux gauges. Calibrations are performed at constant or transient heat fluxes ranging from 1 to 6 MW/m2 and at temperatures ranging from 80 K to the melting temperatures of most materials. The facility was developed because of the need to build and calibrate very small heat-flux gauges for the Space Shuttle main engine (SSME). It includes a lamp head attached to the side of a service module, an argon-gas-recirculation module, a reflector, a heat exchanger, and a high-speed positioning system. This type of automated heat-flux calibration facility can be installed in industrial plants for onsite calibration of heat-flux gauges measuring heat fluxes in advanced gas-turbine and rocket engines.

  8. Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Padgett, Curtis W.

    2013-01-01

    This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can automatically derive the camera-to-INS alignment using image data only. The assumption is that the camera fixates on an area while the aircraft flies on orbit. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie point data is required.
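The abstract does not give the estimation details. As a deliberately simplified, hypothetical sketch (1-D yaw only, invented function name and numbers; the real system recovers the full 3-D rotation), a constant mounting misalignment can be recovered as the circular mean of per-frame differences between the camera-derived and INS-reported angles while the camera fixates one area:

```python
import math

def boresight_yaw_offset(ins_yaw, cam_yaw):
    """Circular mean of per-frame (camera - INS) yaw differences; a 1-D
    stand-in for recovering the camera-to-INS rotation."""
    s = sum(math.sin(c - i) for i, c in zip(ins_yaw, cam_yaw))
    co = sum(math.cos(c - i) for i, c in zip(ins_yaw, cam_yaw))
    return math.atan2(s, co)

# Synthetic flight: a 2-degree mounting offset plus alternating small noise.
true_offset = math.radians(2.0)
ins = [0.1 * k for k in range(20)]
cam = [y + true_offset + 0.001 * (-1) ** k for k, y in enumerate(ins)]
est = boresight_yaw_offset(ins, cam)
print(round(math.degrees(est), 3))  # → 2.0
```

The circular mean is used rather than a plain average so the estimate behaves correctly when yaw values wrap at ±180 degrees.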

  9. Larger Optics and Improved Calibration Techniques for Small Satellite Observations with the ERAU OSCOM System

    NASA Astrophysics Data System (ADS)

    Bilardi, S.; Barjatya, A.; Gasdia, F.

    OSCOM, Optical tracking and Spectral characterization of CubeSats for Operational Missions, is a system capable of providing time-resolved satellite photometry using commercial-off-the-shelf (COTS) hardware and custom tracking and analysis software. This system has acquired photometry of objects as small as CubeSats using a Celestron 11” RASA and an inexpensive CMOS machine vision camera. For satellites with known shapes, these light curves can be used to verify a satellite’s attitude and the state of its deployed solar panels or antennae. While the OSCOM system can successfully track satellites and produce light curves, there is ongoing improvement towards increasing its automation while supporting additional mounts and telescopes. A newly acquired Celestron 14” Edge HD can be used with a Starizona Hyperstar to increase the SNR for small objects as well as extend beyond the limiting magnitude of the 11” RASA. OSCOM currently corrects instrumental brightness measurements for satellite range and observatory site average atmospheric extinction, but calibrated absolute brightness is required to determine information about satellites other than their spin rate, such as surface albedo. A calibration method that automatically detects and identifies background stars can use their catalog magnitudes to calibrate the brightness of the satellite in the image. We present a photometric light curve from both the 14” Edge HD and 11” RASA optical systems as well as plans for a calibration method that will perform background star photometry to efficiently determine calibrated satellite brightness in each frame.
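At its core, the planned background-star calibration reduces to a per-frame photometric zero point. A minimal sketch (synthetic magnitudes, hypothetical function names; a median is used so a few misidentified stars do not skew the result):

```python
def zero_point(catalog_mags, instrumental_mags):
    """Photometric zero point: median of (catalog - instrumental) over
    the detected background stars."""
    diffs = sorted(c - m for c, m in zip(catalog_mags, instrumental_mags))
    n = len(diffs)
    mid = n // 2
    return diffs[mid] if n % 2 else 0.5 * (diffs[mid - 1] + diffs[mid])

# Synthetic frame: instrumental magnitudes are -2.5*log10(counts); the
# (unknown) zero point in this example is 22.0.
cat = [9.1, 10.4, 11.0, 12.3, 8.7]
inst = [-12.9, -11.6, -11.0, -9.7, -13.3]
zp = zero_point(cat, inst)
sat_calibrated = -14.5 + zp  # satellite instrumental magnitude -> calibrated
print(round(zp, 3), round(sat_calibrated, 3))  # → 22.0 7.5
```

Applying the per-frame zero point to the satellite's instrumental magnitude yields a calibrated brightness in every frame, as the abstract proposes.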

  10. MODIS airborne simulator visible and near-infrared calibration, 1992 ASTEX field experiment. Calibration version: ASTEX King 1.0

    NASA Technical Reports Server (NTRS)

    Arnold, G. Thomas; Fitzgerald, Michael; Grant, Patrick S.; King, Michael D.

    1994-01-01

    Calibration of the visible and near-infrared (near-IR) channels of the MODIS Airborne Simulator (MAS) is derived from observations of a calibrated light source. For the 1992 Atlantic Stratocumulus Transition Experiment (ASTEX) field deployment, the calibrated light source was the NASA Goddard 48-inch integrating hemisphere. Tests during the ASTEX deployment were conducted to calibrate the hemisphere and then the MAS. This report summarizes the ASTEX hemisphere calibration, and then describes how the MAS was calibrated from the hemisphere data. All MAS calibration measurements are presented and determination of the MAS calibration coefficients (raw counts to radiance conversion) is discussed. In addition, comparisons to an independent MAS calibration by Ames personnel using their 30-inch integrating sphere are discussed.
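As a schematic of the counts-to-radiance step only (the lamp levels and coefficients below are invented, not the actual MAS values), the calibration coefficients amount to a straight-line least-squares fit over the hemisphere's known radiance levels:

```python
def fit_counts_to_radiance(counts, radiance):
    """Least-squares straight line radiance = gain*counts + offset from
    observations of a calibrated source at several lamp levels."""
    n = len(counts)
    sx, sy = sum(counts), sum(radiance)
    sxx = sum(c * c for c in counts)
    sxy = sum(c * r for c, r in zip(counts, radiance))
    gain = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    offset = (sy - gain * sx) / n
    return gain, offset

# Synthetic hemisphere levels following radiance = 0.05*counts - 2.0
counts = [100, 200, 400, 800]
radiance = [0.05 * c - 2.0 for c in counts]
gain, offset = fit_counts_to_radiance(counts, radiance)
print(round(gain, 6), round(offset, 6))  # → 0.05 -2.0
```

Once gain and offset are known, raw instrument counts from a scene convert directly to radiance.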

  11. LabVIEW control software for scanning micro-beam X-ray fluorescence spectrometer.

    PubMed

    Wrobel, Pawel; Czyzycki, Mateusz; Furman, Leszek; Kolasinski, Krzysztof; Lankosz, Marek; Mrenca, Alina; Samek, Lucyna; Wegrzynek, Dariusz

    2012-05-15

    A confocal micro-beam X-ray fluorescence microscope was constructed. The system was assembled from commercially available components - a low-power X-ray tube source, polycapillary X-ray optics and a silicon drift detector - controlled by in-house developed LabVIEW software. A video camera coupled to an optical microscope was utilized to display the area excited by the X-ray beam. The camera image calibration and scan area definition software were also based entirely on LabVIEW code. Presently, the main area of application of the newly constructed spectrometer is 2-dimensional mapping of element distribution in environmental, biological and geological samples with micrometer spatial resolution. The hardware and the developed software can already handle volumetric 3-D confocal scans. In this work, the front panel graphical user interface as well as the communication protocols between hardware components are described. Two applications of the spectrometer, homogeneity testing of titanium layers and imaging of various types of grains in air particulate matter collected on membrane filters, are presented.

  12. Modeling in vivo fluorescence of small animals using TracePro software

    NASA Astrophysics Data System (ADS)

    Leavesley, Silas; Rajwa, Bartek; Freniere, Edward R.; Smith, Linda; Hassler, Richard; Robinson, J. Paul

    2007-02-01

    The theoretical modeling of fluorescence excitation, emission, and propagation within living tissue has been a limiting factor in the development and calibration of in vivo small animal fluorescence imagers. To date, no definitive calibration standard, or phantom, has been developed for use with small animal fluorescence imagers. Our work in the theoretical modeling of fluorescence in small animals using solid modeling software is useful in optimizing the design of small animal imaging systems, and in predicting their response to a theoretical model. In this respect, it is also valuable in the design of a fluorescence phantom for use in in vivo small animal imaging. The use of phantoms is a critical step in the testing and calibration of most diagnostic medical imaging systems. Despite this, a realistic, reproducible, and informative phantom has yet to be produced for use in small animal fluorescence imaging. By modeling the theoretical response of various types of phantoms, it is possible to determine which parameters are necessary for accurately modeling fluorescence within inhomogeneous scattering media such as tissue. Here, we present the model that has been developed, the challenges and limitations associated with developing such a model, and the applicability of this model to experimental results obtained in a commercial small animal fluorescence imager.

  13. CALIBRATED ULTRA FAST IMAGE SIMULATIONS FOR THE DARK ENERGY SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruderer, Claudio; Chang, Chihway; Refregier, Alexandre

    2016-01-20

    Image simulations are becoming increasingly important in understanding the measurement process of the shapes of galaxies for weak lensing and the associated systematic effects. For this purpose we present the first implementation of the Monte Carlo Control Loops (MCCL), a coherent framework for studying systematic effects in weak lensing. It allows us to model and calibrate the shear measurement process using image simulations from the Ultra Fast Image Generator (UFig) and the image analysis software SExtractor. We apply this framework to a subset of the data taken during the Science Verification period (SV) of the Dark Energy Survey (DES). We calibrate the UFig simulations to be statistically consistent with one of the SV images, which covers ∼0.5 square degrees. We then perform tolerance analyses by perturbing six simulation parameters and study their impact on the shear measurement at the one-point level. This allows us to determine the relative importance of different parameters. For spatially constant systematic errors and point-spread function, the calibration of the simulation reaches the weak lensing precision needed for the DES SV survey area. Furthermore, we find a sensitivity of the shear measurement to the intrinsic ellipticity distribution, and an interplay between the magnitude-size and the pixel value diagnostics in constraining the noise model. This work is the first application of the MCCL framework to data and shows how it can be used to methodically study the impact of systematics on the cosmic shear measurement.
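The tolerance-analysis step is simple to sketch: perturb one simulation parameter and difference the recovered one-point shear. Everything below (the toy `mean_shear` response and its 0.3 coupling) is invented for illustration; in MCCL the inner call would render a UFig image and run SExtractor:

```python
def mean_shear(param, truth=0.02):
    """Toy stand-in for 'simulate an image with this parameter value and
    measure the one-point shear': the estimate drifts linearly here."""
    return truth * (1.0 + 0.3 * param)

def sensitivity(param0, delta=1e-3):
    """Central finite-difference sensitivity of the one-point shear
    estimate to a single simulation parameter."""
    return (mean_shear(param0 + delta) - mean_shear(param0 - delta)) / (2 * delta)

# Ranking parameters by |sensitivity| is the heart of a tolerance analysis.
print(round(sensitivity(0.0), 6))  # → 0.006
```

Repeating this for each of the six parameters gives their relative importance, as described in the abstract.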

  14. Imaging Sensor Flight and Test Equipment Software

    NASA Technical Reports Server (NTRS)

    Freestone, Kathleen; Simeone, Louis; Robertson, Bryan; Frankford, Maytha; Trice, David; Wallace, Kevin; Wilkerson, DeLisa

    2007-01-01

    The Lightning Imaging Sensor (LIS) is one of the components onboard the Tropical Rainfall Measuring Mission (TRMM) satellite, and was designed to detect and locate lightning over the tropics. The LIS flight code was developed to run on a single onboard digital signal processor, and has operated the LIS instrument since 1997 when the TRMM satellite was launched. The software provides controller functions to the LIS Real-Time Event Processor (RTEP) and onboard heaters, collects the lightning event data from the RTEP, compresses and formats the data for downlink to the satellite, collects housekeeping data and formats the data for downlink to the satellite, provides command processing and interface to the spacecraft communications and data bus, and provides watchdog functions for error detection. The Special Test Equipment (STE) software was designed to operate specific test equipment used to support the LIS hardware through development, calibration, qualification, and integration with the TRMM spacecraft. The STE software provides the capability to control instrument activation, commanding (including both data formatting and user interfacing), data collection, decompression, and display and image simulation. The LIS STE code was developed for the DOS operating system in the C programming language. Because of the many unique data formats implemented by the flight instrument, the STE software was required to comprehend the same formats, and translate them for the test operator. The hardware interfaces to the LIS instrument using both commercial and custom computer boards, requiring that the STE code integrate this variety into a working system. In addition, the requirement to provide RTEP test capability dictated the need to provide simulations of background image data with short-duration lightning transients superimposed. This led to the development of unique code used to control the location, intensity, and variation above background for simulated lightning strikes.

  15. Photogrammetric camera calibration

    USGS Publications Warehouse

    Tayman, W.P.; Ziemann, H.

    1984-01-01

    Section 2 (Calibration) of the document "Recommended Procedures for Calibrating Photogrammetric Cameras and Related Optical Tests" from the International Archives of Photogrammetry, Vol. XIII, Part 4, is reviewed in the light of recent practical work, and suggestions for changes are made. These suggestions are intended as a basis for a further discussion. © 1984.

  16. Novel, Miniature Multi-Hole Probes and High-Accuracy Calibration Algorithms for their use in Compressible Flowfields

    NASA Technical Reports Server (NTRS)

    Rediniotis, Othon K.

    1999-01-01

    Two new calibration algorithms were developed for the calibration of non-nulling multi-hole probes in compressible, subsonic flowfields. The reduction algorithms are robust and able to reduce data from any multi-hole probe inserted into any subsonic flowfield to generate very accurate predictions of the velocity vector, flow direction, total pressure and static pressure. One of the algorithms, PROBENET, is based on the theory of neural networks, while the other is of a more conventional nature (polynomial approximation technique) and introduces a novel idea of local least-squares fits. Both algorithms have been developed into complete, user-friendly software packages. New technology was developed for the fabrication of miniature multi-hole probes, with probe tip diameters all the way down to 0.035". Several miniature 5- and 7-hole probes, with different probe tip geometries (hemispherical, conical, faceted) and different overall shapes (straight, cobra, elbow probes) were fabricated, calibrated and tested. Emphasis was placed on the development of four stainless-steel conical 7-hole probes, 1/16" in diameter, calibrated at NASA Langley for the entire subsonic regime. The developed calibration algorithms were extensively tested with these probes, demonstrating excellent prediction capabilities. The probes were used in the "trap wing" wind tunnel tests in the 14'x22' wind tunnel at NASA Langley, providing valuable information on the flowfield over the wing. This report is organized in the following fashion. It consists of a "Technical Achievements" section that summarizes the major achievements, followed by an assembly of journal articles that were produced from this project, and ends with two manuals for the two probe calibration algorithms developed.
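As an illustration of the conventional (polynomial) branch of such a reduction, here is a deliberately minimal one-coefficient version: near null, the flow angle is roughly linear in an angular pressure coefficient, so the calibration reduces to a least-squares sensitivity. Real reductions fit local polynomials in several pressure coefficients for all four flow quantities; the points below are synthetic:

```python
def fit_probe_sensitivity(cp, alpha):
    """Least-squares (through the origin) sensitivity k in alpha ~ k*Cp
    for a multi-hole probe near null."""
    return sum(c * a for c, a in zip(cp, alpha)) / sum(c * c for c in cp)

# Synthetic calibration points: flow angle (deg) vs. angular pressure coefficient.
cp = [-0.30, -0.15, 0.0, 0.15, 0.30]
alpha = [-6.0, -3.0, 0.0, 3.0, 6.0]
k = fit_probe_sensitivity(cp, alpha)
print(round(k, 6), round(k * 0.12, 6))  # → 20.0 2.4
```

In use, a measured pressure coefficient of 0.12 would map to a predicted flow angle of about 2.4 degrees under this toy calibration.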

  17. Uncertainty Analysis of Inertial Model Attitude Sensor Calibration and Application with a Recommended New Calibration Method

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.
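The confidence/prediction-interval machinery can be illustrated on a single-axis straight-line calibration; the nonlinear multivariate analysis in the report generalizes this. The data and the fixed t-quantile of 2 below are assumptions for the sketch:

```python
import math

def calibrate_with_interval(x, y, x_new, t_crit=2.0):
    """Straight-line calibration y = a + b*x with an approximate prediction
    interval half-width at x_new (t_crit stands in for the Student-t quantile)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    s2 = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2)
    half = t_crit * math.sqrt(s2 * (1 + 1 / n + (x_new - xbar) ** 2 / sxx))
    return a + b * x_new, half

# Synthetic pitch-angle calibration: output = 1 + 2*angle, plus +/-0.05 "noise".
angles = list(range(10))
outputs = [1 + 2 * t + 0.05 * (-1) ** i for i, t in enumerate(angles)]
pred, half = calibrate_with_interval(angles, outputs, 5.0)
print(round(pred, 2), round(half, 3))  # → 11.0 0.116
```

The interval half-width grows away from the center of the calibrated range, which is why replicated calibrations covering the full operating envelope matter.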

  18. A Comprehensive Plan for the Long-Term Calibration and Validation of Oceanic Biogeochemical Satellite Data

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B.; McClain, Charles R.; Mannino, Antonio

    2007-01-01

    The primary objective of this planning document is to establish a long-term capability for calibrating and validating oceanic biogeochemical satellite data. It is a pragmatic solution to a practical problem based primarily on the lessons learned from prior satellite missions. All of the plan's elements are seen to be interdependent, so a horizontal organizational scheme is anticipated wherein the overall leadership comes from the NASA Ocean Biology and Biogeochemistry (OBB) Program Manager and the entire enterprise is split into two components of equal stature: calibration and validation plus satellite data processing. The detailed elements of the activity are based on the basic tasks of the two main components plus the current objectives of the Carbon Cycle and Ecosystems Roadmap. The former is distinguished by an internal core set of responsibilities and the latter is facilitated through an external connecting-core ring of competed or contracted activities. The core elements for the calibration and validation component include a) publish protocols and performance metrics; b) verify uncertainty budgets; c) manage the development and evaluation of instrumentation; and d) coordinate international partnerships. The core elements for the satellite data processing component are e) process and reprocess multisensor data; f) acquire, distribute, and archive data products; and g) implement new data products. Both components have shared responsibilities for initializing and temporally monitoring satellite calibration. Connecting-core elements include (but are not restricted to) atmospheric correction and characterization, standards and traceability, instrument and analysis round robins, field campaigns and vicarious calibration sites, in situ database, bio-optical algorithm (and product) validation, satellite characterization and vicarious calibration, and image processing software. 
The plan also includes an accountability process, creating a Calibration and Validation Team (to help manage

  19. Office Computer Software: A Comprehensive Review of Software Programs.

    ERIC Educational Resources Information Center

    Secretary, 1992

    1992-01-01

    Describes types of software including system software, application software, spreadsheets, accounting software, graphics packages, desktop publishing software, database, desktop and personal information management software, project and records management software, groupware, and shareware. (JOW)

  20. Dutch X-band SLAR calibration

    NASA Technical Reports Server (NTRS)

    Groot, J. S.

    1990-01-01

    In August 1989 the NASA/JPL airborne P/L/C-band DC-8 SAR participated in several remote sensing campaigns in Europe. Among other test sites, data were obtained over the Flevopolder test site in the Netherlands on 16 August. The Dutch X-band SLAR was flown on the same date and imaged parts of the same area as the SAR. To calibrate the two imaging radars a set of 33 calibration devices was deployed. 16 trihedrals were used to calibrate a part of the SLAR data. This short paper outlines the X-band SLAR characteristics, the experimental set-up and the calibration method used to calibrate the SLAR data. Finally some preliminary results are given.
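For context on sizing such devices: the peak radar cross-section of a triangular trihedral corner reflector follows the standard relation sigma = 4*pi*a^4 / (3*lambda^2). The edge length and wavelength below are assumed round numbers, not values from the campaign:

```python
import math

def trihedral_rcs(edge_m, wavelength_m):
    """Peak radar cross-section (m^2) of a triangular trihedral corner
    reflector: sigma = 4*pi*a**4 / (3*lambda**2), a = inner edge length."""
    return 4 * math.pi * edge_m ** 4 / (3 * wavelength_m ** 2)

wavelength = 0.031  # assumed X-band wavelength (~9.7 GHz)
sigma = trihedral_rcs(1.0, wavelength)       # 1 m inner edge
print(round(10 * math.log10(sigma), 1))      # → 36.4 (peak RCS in dBsm)
```

The strong a^4/lambda^2 dependence is why modest trihedrals give bright, stable reference targets at X-band.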

  1. Calorimetry, activity, and micro-FTIR analysis of CO chemisorption, titration, and oxidation on supported Pt

    NASA Technical Reports Server (NTRS)

    Sermon, Paul A.; Self, Valerie A.; Vong, Mariana S. W.; Wurie, Alpha T.

    1990-01-01

    The value of in situ analysis of CO chemisorption, titration and oxidation over supported Pt catalysts using calorimetric, catalytic and micro-FTIR methods is illustrated using silica- and titania-supported samples. Isothermal CO-O and O2-CO titrations have not been widely used on metal surfaces and may be complicated if some oxide supports are reduced by the CO titrant. However, they can illuminate the kinetics of CO oxidation on metal/oxide catalysts since during such titrations all O and CO coverages are scanned as a function of time. There are clear advantages in following the rates of the catalyzed CO oxidation via calorimetry and gc-ms simultaneously. At lower temperatures the evidence they provide is complementary. Pt-catalyzed CO oxidation has been extensively studied, with hysteresis and oscillations apparent, and the present results suggest the benefits of a combined approach. Silica support porosity may be important in defining activity-temperature hysteresis. FTIR microspectroscopy reveals the chemical heterogeneity of the catalytic surfaces used; it is interesting that the evidence regarding the dominant CO surface species and their reactivities toward surface oxygen for the present oxide-supported Pt differs from that seen on graphite-supported Pt.

  2. Dual-Readout Calorimetry for High-Quality Energy Measurements. Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wigmans, Richard; Akchurin, Nural

    2013-09-01

    This document constitutes the final report on the project Dual-Readout Calorimetry for High-Quality Energy Measurements. The project was carried out by a consortium of US and Italian physicists, led by Dr. Richard Wigmans (Texas Tech University). This consortium built several particle detectors and tested these at the European Center for Nuclear Research (CERN) in Geneva, Switzerland. The idea arose to use scintillating crystals as dual-readout calorimeters. Such crystals were of course already known to provide excellent energy resolution for the detection of particles developing electromagnetic (em) showers. The efforts to separate the signals from scintillating crystals into scintillation and Cerenkov components led to four different methods by which this could be accomplished. These methods are based on a) the directionality, b) spectral differences, c) the time structure, and d) the polarization of the signals.

  3. Limitations and possibilities of AC calorimetry in diamond anvil cells

    NASA Astrophysics Data System (ADS)

    Geballe, Zachary; Collins, Gilbert; Jeanloz, Raymond

    2013-06-01

    Dynamic laser heating or internal resistive heating could allow for the determination of calorimetric properties of samples that are held statically at high pressure. However, the highly non-adiabatic environment of high-pressure cells presents several challenges. Here, we quantify the errors in AC calorimetry measurements using laser heating or internal resistive heating inside diamond anvil cells, summarize the equipment requirements of supplying sufficient power modulated at a high enough frequency to measure specific heats and latent heats of phase transitions, and propose two new experiments in internally heated diamond anvil cells: an absolute measurement of specific heat (with ~10% uncertainty) of non-magnetic metals using resistive heating at ~10 MHz, and a relative measurement to detect changes in either the specific heat of metals or the effusivity (the square root of the product of specific heat, density and thermal conductivity) of an insulator.
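In the quasi-adiabatic limit underlying such measurements, the heat capacity follows from the standard AC-calorimetry relation C = P0 / (2*omega*|dT_ac|), where dT_ac is the temperature-oscillation amplitude at the heating frequency. The power, frequency, and amplitude below are placeholders, not values from the paper:

```python
import math

def heat_capacity(power_amp_w, freq_hz, dT_amp_k):
    """Quasi-adiabatic AC-calorimetry estimate C = P0 / (2*omega*|dT_ac|)
    from the measured temperature-oscillation amplitude."""
    omega = 2 * math.pi * freq_hz
    return power_amp_w / (2 * omega * dT_amp_k)

# Placeholder numbers for a ~10 MHz internally heated diamond-anvil-cell run:
C = heat_capacity(power_amp_w=1.0, freq_hz=10e6, dT_amp_k=1.0)
print(round(C * 1e9, 3))  # heat capacity in nJ/K → 7.958
```

The 1/omega scaling makes clear why MHz-range modulation is needed: at high frequency even a tiny heated volume produces a measurable, quasi-adiabatic oscillation amplitude.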

  4. Software Library: A Reusable Software Issue.

    DTIC Science & Technology

    1984-06-01

    Keywords: Software Library; Program Library; Reusability; Generator. A particular example of the Software Library, the Program Library, is described as a prototype of a reusable library. A hierarchical... programming libraries are described. Finally, non-code products in the Software Library are discussed.

  5. Addressing the impact of environmental uncertainty in plankton model calibration with a dedicated software system: the Marine Model Optimization Testbed (MarMOT 1.1 alpha)

    NASA Astrophysics Data System (ADS)

    Hemmings, J. C. P.; Challenor, P. G.

    2012-04-01

    A wide variety of different plankton system models have been coupled with ocean circulation models, with the aim of understanding and predicting aspects of environmental change. However, an ability to make reliable inferences about real-world processes from the model behaviour demands a quantitative understanding of model error that remains elusive. Assessment of coupled model output is inhibited by relatively limited observing system coverage of biogeochemical components. Any direct assessment of the plankton model is further inhibited by uncertainty in the physical state. Furthermore, comparative evaluation of plankton models on the basis of their design is inhibited by the sensitivity of their dynamics to many adjustable parameters. Parameter uncertainty has been widely addressed by calibrating models at data-rich ocean sites. However, relatively little attention has been given to quantifying uncertainty in the physical fields required by the plankton models at these sites, and tendencies in the biogeochemical properties due to the effects of horizontal processes are often neglected. Here we use model twin experiments, in which synthetic data are assimilated to estimate a system's known "true" parameters, to investigate the impact of error in a plankton model's environmental input data. The experiments are supported by a new software tool, the Marine Model Optimization Testbed, designed for rigorous analysis of plankton models in a multi-site 1-D framework. Simulated errors are derived from statistical characterizations of the mixed layer depth, the horizontal flux divergence tendencies of the biogeochemical tracers and the initial state. Plausible patterns of uncertainty in these data are shown to produce strong temporal and spatial variability in the expected simulation error variance over an annual cycle, indicating variation in the significance attributable to individual model-data differences. An inverse scheme using ensemble-based estimates of the
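The twin-experiment logic is easy to sketch end to end: generate synthetic observations from a known "true" parameter, add error, then recover the parameter by minimizing the model-data misfit. The toy model, grid search, and alternating "error" below are invented stand-ins for MarMOT's 1-D plankton simulations and optimizer:

```python
def model(p, t):
    """Toy seasonal response with one growth parameter p (stand-in for a
    1-D plankton simulation over month index t)."""
    return p * t * (1 - t / 12.0)

def recover(obs, ts, grid):
    """Twin-experiment recovery: choose the grid value minimizing the sum
    of squared model-data misfits."""
    return min(grid, key=lambda p: sum((model(p, t) - o) ** 2
                                       for t, o in zip(ts, obs)))

true_p = 0.8
ts = list(range(1, 12))
obs = [model(true_p, t) + 0.01 * (-1) ** t for t in ts]  # truth + synthetic error
grid = [round(0.05 * k, 2) for k in range(1, 41)]        # candidate parameters
print(recover(obs, ts, grid))  # → 0.8
```

Because the truth is known by construction, any systematic failure to recover it isolates the effect of the imposed error, which is precisely how the paper attributes simulation error variance to environmental input uncertainty.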

  6. Computerized tomography calibrator

    NASA Technical Reports Server (NTRS)

    Engel, Herbert P. (Inventor)

    1991-01-01

    A set of interchangeable pieces comprising a computerized tomography calibrator, and a method of use thereof, permits focusing of a computerized tomographic (CT) system. The interchangeable pieces include a plurality of nestable, generally planar mother rings, adapted for the receipt of planar inserts of predetermined sizes, and of predetermined material densities. The inserts further define openings therein for receipt of plural sub-inserts. All pieces are of known sizes and densities, permitting the assembling of different configurations of materials of known sizes and combinations of densities, for calibration (i.e., focusing) of a computerized tomographic system through variation of operating variables thereof. Rather than serving as a phantom, which is intended to be representative of a particular workpiece to be tested, the set of interchangeable pieces permits simple and easy standardized calibration of a CT system. The calibrator and its related method of use further includes use of air or of particular fluids for filling various openings, as part of a selected configuration of the set of pieces.

  7. Portable open-path optical remote sensing (ORS) Fourier transform infrared (FTIR) instrumentation miniaturization and software for point and click real-time analysis

    NASA Astrophysics Data System (ADS)

    Zemek, Peter G.; Plowman, Steven V.

    2010-04-01

    Advances in hardware have miniaturized the emissions spectrometer and associated optics, rendering them easily deployed in the field. Such systems are also suitable for vehicle mounting, and can provide high-quality data and concentration information in minutes. Advances in software have accompanied this hardware evolution, enabling the development of portable point-and-click OP-FTIR systems that weigh less than 16 lbs. These systems are ideal for first-responders, military, law enforcement, forensics, and screening applications using optical remote sensing (ORS) methodologies. With canned methods and interchangeable detectors, the new generation of OP-FTIR technology is coupled to the latest forward reference-type model software to provide point-and-click technology. These software models have been established for some time. However, refined user-friendly models that use active, passive, and solar occultation methodologies now allow the user to quickly field-screen and quantify plumes, fence-lines, and combustion incident scenarios at high temporal resolution. Synthetic background generation is now redundant, as the models use highly accurate instrument line shape (ILS) convolutions and several other parameters, in conjunction with radiative transfer model databases, to model a single calibration spectrum to collected sample spectra. Data retrievals are performed directly on single-beam spectra using non-linear classical least squares (NLCLS). Typically, the HITRAN line database is used to generate the initial calibration spectrum contained within the software.

  8. Global Space-Based Inter-Calibration System Reflective Solar Calibration Reference: From Aqua MODIS to S-NPP VIIRS

    NASA Technical Reports Server (NTRS)

    Xiong, Xiaoxiong; Angal, Amit; Butler, James; Cao, Changyong; Doelling, David; Wu, Aisheng; Wu, Xiangqian

    2016-01-01

    MODIS has successfully operated on-board NASA's EOS Terra and Aqua spacecraft for more than 16 and 14 years, respectively. The MODIS instrument was designed with stringent calibration requirements and comprehensive on-board calibration capability. In the reflective solar spectral region, Aqua MODIS has performed better than Terra MODIS and, therefore, has been chosen by the Global Space-based Inter-Calibration System (GSICS) operational community as the calibration reference sensor for cross-sensor calibration and calibration inter-comparisons. For the same reason, it has also been used by a number of earth observing sensors as their calibration reference. Considering that Aqua MODIS has already operated for nearly 14 years, it is essential to transfer its calibration to a follow-on reference sensor with a similar calibration capability and stable performance. VIIRS is a follow-on instrument to MODIS and shares many design features with MODIS, including their on-board calibrators (OBC). As a result, VIIRS is an ideal candidate to replace MODIS as the future GSICS reference sensor. Since launch, the S-NPP VIIRS has already operated for more than 4 years and its overall performance has been extensively characterized and demonstrated to meet its design requirements. This paper provides an overview of Aqua MODIS and S-NPP VIIRS reflective solar bands (RSB) calibration methodologies and strategies, traceability, and their on-orbit performance. It describes and illustrates different methods and approaches that can be used to facilitate the calibration reference transfer, including the use of desert and Antarctic sites, deep convective clouds (DCC), and lunar observations.
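Schematically, the ratio-based transfer methods reduce to a scale factor from matched observations of a stable target. The reflectances below are synthetic, and the simple mean ratio stands in for the trend fitting and BRDF corrections used in practice:

```python
def transfer_scale(ref_toa, target_toa):
    """Cross-calibration scale factor: mean ratio of reference-sensor to
    target-sensor top-of-atmosphere reflectance over a stable site."""
    ratios = [r / t for r, t in zip(ref_toa, target_toa)]
    return sum(ratios) / len(ratios)

# Synthetic matched observations (e.g. desert site or DCC composite):
ref = [0.30, 0.31, 0.29]       # reference-sensor band reflectance
tgt = [0.285, 0.2945, 0.2755]  # target-sensor counterpart, ~5% low here
print(round(transfer_scale(ref, tgt), 4))  # → 1.0526
```

Applying the scale factor to the target sensor's calibration ties it to the reference, which is the essence of transferring the GSICS reference from Aqua MODIS to S-NPP VIIRS.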

  9. MODIS Instrument Operation and Calibration Improvements

    NASA Technical Reports Server (NTRS)

    Xiong, X.; Angal, A.; Madhavan, S.; Link, D.; Geng, X.; Wenny, B.; Wu, A.; Chen, H.; Salomonson, V.

    2014-01-01

    Terra and Aqua MODIS have successfully operated for over 14 and 12 years since their respective launches in 1999 and 2002. The MODIS on-orbit calibration is performed using a set of on-board calibrators, which include a solar diffuser for calibrating the reflective solar bands (RSB) and a blackbody for the thermal emissive bands (TEB). On-orbit changes in the sensor responses as well as key performance parameters are monitored using the measurements of these on-board calibrators. This paper provides an overview of MODIS on-orbit operation and calibration activities, and instrument long-term performance. It presents a brief summary of the calibration enhancements made in the latest MODIS data collection 6 (C6). Future improvements in the MODIS calibration and their potential applications to the S-NPP VIIRS are also discussed.

  10. Product-oriented Software Certification Process for Software Synthesis

    NASA Technical Reports Server (NTRS)

    Nelson, Stacy; Fischer, Bernd; Denney, Ewen; Schumann, Johann; Richardson, Julian; Oh, Phil

    2004-01-01

    The purpose of this document is to propose a product-oriented software certification process to facilitate use of software synthesis and formal methods. Why is such a process needed? Currently, software is tested until deemed bug-free rather than proving that certain software properties exist. This approach has worked well in most cases, but unfortunately, deaths still occur due to software failure. Using formal methods (techniques from logic and discrete mathematics like set theory, automata theory and formal logic as opposed to continuous mathematics like calculus) and software synthesis, it is possible to reduce this risk by proving certain software properties. Additionally, software synthesis makes it possible to automate some phases of the traditional software development life cycle resulting in a more streamlined and accurate development process.

  11. A new software tool for computing Earth's atmospheric transmission of near- and far-infrared radiation

    NASA Technical Reports Server (NTRS)

    Lord, Steven D.

    1992-01-01

    This report describes a new software tool, ATRAN, which computes the transmittance of Earth's atmosphere at near- and far-infrared wavelengths. We compare the capabilities of this program with others currently available and demonstrate its utility for observational data calibration and reduction. The program employs current water-vapor and ozone models to produce fast and accurate transmittance spectra for wavelengths ranging from 0.8 microns to 10 mm.
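ATRAN's water-vapor and ozone models are not reproduced here, but the core bookkeeping — combining per-species optical depths into a transmittance spectrum via the Beer-Lambert law — can be sketched as follows (the species values are illustrative, not ATRAN output):

```python
import numpy as np

def transmittance(optical_depths):
    """Beer-Lambert: total transmittance is exp(-sum of per-species optical depths)."""
    tau_total = np.sum(np.asarray(optical_depths), axis=0)
    return np.exp(-tau_total)

# Toy per-species optical depths on a 5-point wavelength grid (illustrative)
tau_h2o = np.array([0.10, 0.50, 2.00, 0.30, 0.05])  # water vapor
tau_o3  = np.array([0.01, 0.02, 0.05, 0.02, 0.01])  # ozone
T = transmittance([tau_h2o, tau_o3])
```

Dividing an observed spectrum by such a transmittance curve gives a first-order telluric correction during data reduction.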

  12. TIME CALIBRATED OSCILLOSCOPE SWEEP CIRCUIT

    DOEpatents

    Smith, V.L.; Carstensen, H.K.

    1959-11-24

    An improved time calibrated sweep circuit is presented, which extends the range of usefulness of conventional oscilloscopes as utilized for time calibrated display applications in accordance with U. S. Patent No. 2,832,002. Principal novelty resides in the provision of a pair of separate signal paths, each of which is phase and amplitude adjustable, to connect a high-frequency calibration oscillator to the output of a sawtooth generator also connected to the respective horizontal deflection plates of an oscilloscope cathode ray tube. The amplitude and phase of the calibration oscillator signals in the two signal paths are adjusted to balance out feedthrough currents capacitively coupled at high frequencies of the calibration oscillator from each horizontal deflection plate to the vertical plates of the cathode ray tube.

  13. Single Vector Calibration System for Multi-Axis Load Cells and Method for Calibrating a Multi-Axis Load Cell

    NASA Technical Reports Server (NTRS)

    Parker, Peter A. (Inventor)

    2003-01-01

    A single vector calibration system is provided which facilitates the calibration of multi-axis load cells, including wind tunnel force balances. The single vector system provides the capability to calibrate a multi-axis load cell using a single directional load, for example loading solely in the gravitational direction. The system manipulates the load cell in three-dimensional space, while keeping the uni-directional calibration load aligned. The use of a single vector calibration load reduces the set-up time for the multi-axis load combinations needed to generate a complete calibration mathematical model. The system also reduces load application inaccuracies caused by the conventional requirement to generate multiple force vectors. The simplicity of the system reduces calibration time and cost, while simultaneously increasing calibration accuracy.
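As a sketch of the underlying mathematics (not the patented mechanism itself), the calibration model can be recovered by least squares from many orientations of a single gravity-aligned load; the sensitivity matrix and load values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-axis load cell: true sensitivity matrix (unknown in practice)
S_true = np.array([[1.02, 0.03, -0.01],
                   [0.02, 0.98,  0.04],
                   [-0.03, 0.01, 1.05]])

# Single-vector loading: a gravity load of magnitude g applied along directions
# set by manipulating the load cell in 3-D space (unit vectors in cell frame)
g = 100.0  # N
directions = rng.normal(size=(30, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
loads = g * directions        # applied load vectors (known from orientation)
outputs = loads @ S_true.T    # raw bridge outputs (measured)

# Least-squares estimate of the calibration matrix mapping outputs -> loads
C, *_ = np.linalg.lstsq(outputs, loads, rcond=None)
recovered = outputs @ C
```

A complete calibration mathematical model would also include interaction and higher-order terms; this linear sketch only shows why a single load direction, reoriented many times, suffices to determine the matrix.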

  14. Approaches on calibration of bolometer and establishment of bolometer calibration device

    NASA Astrophysics Data System (ADS)

    Xia, Ming; Gao, Jianqiang; Ye, Jun'an; Xia, Junwen; Yin, Dejin; Li, Tiecheng; Zhang, Dong

    2015-10-01

A bolometer is mainly used to measure thermal radiation in public places and in the fields of labor hygiene, heating and ventilation, and building energy conservation. Its working principle is as follows: under exposure to thermal radiation, the temperature of the detector's black absorbing layer rises as it absorbs the radiation, producing a thermoelectric electromotive force. The detector's white reflective layer does not absorb thermal radiation, so its thermoelectric electromotive force is almost zero. Comparing the electromotive forces of the black absorbing layer and the white reflective layer eliminates the influence of the potential produced by changes in the substrate's background temperature. After the electromotive force produced by the thermal radiation is processed by the signal-processing unit, the reading is shown on the display unit. Thermal radiation intensity is usually expressed in W/m2 or kW/m2, and accurate, reliable values are important for high-temperature operations, labor safety, and hygiene grading management. The bolometer calibration device consists mainly of an absolute radiometer, a reference light source, and electrical measuring instruments. The absolute radiometer is a self-calibrating radiometer: its working principle is to measure radiant power absolutely by substituting electrical power, which can be measured accurately. The absolute radiometer is the standard apparatus of the laser low-power standard device, so measurement traceability is guaranteed. Using the comparison method of calibration, the absolute radiometer and the bolometer alternately measure the reference light source at the same position, which yields a correction factor for the irradiance indication. This paper mainly describes the design and calibration method of the bolometer calibration device; the uncertainty of the calibration result is also evaluated.
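The comparison method reduces to a simple ratio: the absolute radiometer and the bolometer alternately view the same reference source, and the ratio of their mean readings is the correction factor applied to the bolometer's indication (all readings below are invented):

```python
# Hypothetical alternating readings of the same reference source (W/m^2)
absolute_radiometer = [500.2, 500.5, 499.8]   # standard apparatus
bolometer_under_test = [489.9, 490.4, 489.7]  # instrument indication

E_ref = sum(absolute_radiometer) / len(absolute_radiometer)
E_ind = sum(bolometer_under_test) / len(bolometer_under_test)

# Irradiance-indication correction factor applied to future bolometer readings
correction_factor = E_ref / E_ind
corrected = [correction_factor * x for x in bolometer_under_test]
```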

  15. The influence of the spectral emissivity of flat-plate calibrators on the calibration of IR thermometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cárdenas-García, D.; Méndez-Lango, E.

Flat calibrators (FCs) are an option for calibrating infrared thermometers (ITs) with a fixed large target. FCs are neither blackbodies nor gray-bodies; their spectral emissivity is lower than one and depends on wavelength. Nevertheless, they are used as gray-bodies with a nominal emissivity value. FCs can be calibrated radiometrically using a calibrated IR thermometer (RT) as reference. If an FC will be used to calibrate ITs that work in the same spectral range as the RT, then its calibration is straightforward: the actual FC spectral emissivity is not required. This result is valid for any given fixed emissivity value assigned to the FC. On the other hand, when the RT working spectral range does not match that of the ITs to be calibrated with the FC, the FC spectral emissivity must be known as part of the calibration process. For this purpose, at CENAM, we developed an experimental setup, based on a Fourier-transform infrared spectrometer, to measure spectral emissivity in the infrared range. Not all laboratories have emissivity measurement capability in the appropriate wavelength and temperature ranges to obtain the spectral emissivity. Thus, we present an estimate of the error introduced when the spectral range of the RT used to calibrate an FC does not match the spectral ranges of the ITs to be calibrated with it. Examples are developed for the case where the RT and IT spectral ranges are [8,13] μm and [8,14] μm, respectively.
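To illustrate the size of such a band mismatch (with a purely hypothetical emissivity slope, not CENAM data), one can compare the Planck-weighted effective emissivities of the two bands:

```python
import numpy as np

C2 = 1.4387752e4  # second radiation constant, µm·K

def planck(lam_um, T):
    """Planck spectral radiance (arbitrary scale; constants cancel in ratios)."""
    return 1.0 / (lam_um**5 * (np.exp(C2 / (lam_um * T)) - 1.0))

def band_emissivity(eps_fn, band, T, n=4000):
    """Planck-weighted effective emissivity over a spectral band (µm)."""
    lam = np.linspace(band[0], band[1], n)
    B = planck(lam, T)
    return float(np.sum(eps_fn(lam) * B) / np.sum(B))

# Hypothetical flat-plate emissivity sloping from 0.96 at 8 µm to 0.92 at 14 µm
eps = lambda lam: 0.96 - 0.04 * (lam - 8.0) / 6.0

T = 373.15  # calibrator at 100 °C
eps_rt = band_emissivity(eps, (8.0, 13.0), T)  # reference thermometer band
eps_it = band_emissivity(eps, (8.0, 14.0), T)  # thermometer under calibration
```

The gap between the two effective emissivities is the quantity that propagates into a radiance (and hence temperature) error when the bands do not match.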

  16. Self-Calibration of CMB Polarimeters

    NASA Astrophysics Data System (ADS)

    Keating, Brian

    2013-01-01

Precision measurements of the polarization of the cosmic microwave background (CMB) radiation, especially experiments seeking to detect the odd-parity "B-modes", have far-reaching implications for cosmology. To detect the B-modes generated during inflation the flux response and polarization angle of these experiments must be calibrated to exquisite precision. While suitable flux calibration sources abound, polarization angle calibrators are deficient in many respects. Man-made polarized sources are often not located in the antenna's far-field, have spectral properties that are radically different from the CMB's, are cumbersome to implement and may be inherently unstable over the (long) duration these searches require to detect the faint signature of the inflationary epoch. Astrophysical sources suffer from time, frequency and spatial variability, are not visible from all CMB observatories, and none are understood with sufficient accuracy to calibrate future CMB polarimeters seeking to probe inflationary energy scales of ~1000 TeV. CMB TB and EB modes, expected to identically vanish in the standard cosmological model, can be used to calibrate CMB polarimeters. By requiring the observed EB and TB power spectra to be consistent with zero, CMB polarimeters can be calibrated to levels not possible with man-made or astrophysical sources. All of this can be accomplished without any loss of observing time using a calibration source which is spectrally identical to the CMB B-modes. The calibration procedure outlined here can be used for any CMB polarimeter.
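A toy version of the estimator (with made-up band powers and an assumed sign convention for the rotation) shows how a miscalibration angle ψ is recovered by nulling the observed EB spectrum:

```python
import numpy as np

# Intrinsic spectra in one multipole bin (illustrative values); intrinsic EB
# vanishes in the standard cosmological model.
ee, bb = 40.0, 0.5
psi_true = np.deg2rad(1.0)  # unknown polarization-angle offset

# A rotation of the polarimeter angle by psi mixes E and B (sign convention
# assumed here): the observed EB spectrum picks up a sin(4*psi) term.
eb_obs = 0.5 * (ee - bb) * np.sin(4 * psi_true)
ee_minus_bb_obs = (ee - bb) * np.cos(4 * psi_true)

# Self-calibration: the angle that forces the observed EB spectrum to zero
psi_hat = 0.25 * np.arctan2(2 * eb_obs, ee_minus_bb_obs)
```

In practice the fit is done jointly over many multipole bins, weighted by the noise in each, and TB provides an independent constraint on the same angle.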

  17. Calibration of water-velocity meters

    USGS Publications Warehouse

    Kaehrle, William R.; Bowie, James E.

    1988-01-01

    The U.S. Geological Survey, Department of the Interior, as part of its responsibility to appraise the quantity of water resources in the United States, maintains facilities for the calibration of water-velocity meters at the Gulf Coast Hydroscience Center's Hydraulic Laboratory Facility, NSTL, Mississippi. These meters are used in hydrologic studies by the Geological Survey, U.S. Army Corps of Engineers, U.S. Department of Energy, state agencies, universities, and others in the public and private sector. This paper describes calibration facilities, types of water-velocity meters calibrated, and calibration standards, methods and results.

  18. RGB Color Calibration for Quantitative Image Analysis: The “3D Thin-Plate Spline” Warping Approach

    PubMed Central

    Menesatti, Paolo; Angelini, Claudio; Pallottino, Federico; Antonucci, Francesca; Aguzzi, Jacopo; Costa, Corrado

    2012-01-01

In recent years the need to define color numerically by its coordinates in n-dimensional space has increased strongly. Colorimetric calibration is fundamental in food processing and other biological disciplines for quantitatively comparing sample colors during workflows involving many devices. Several software programmes are available to perform standardized colorimetric procedures, but they are often too imprecise for scientific purposes. In this study, we applied the Thin-Plate Spline interpolation algorithm to calibrate colors in sRGB space (the corresponding Matlab code is reported in the Appendix). This was compared with two other approaches: the first based on a commercial calibration system (ProfileMaker) and the second on Partial Least Squares analysis. Moreover, to explore device variability and resolution, two different cameras were adopted, and for each sensor three consecutive pictures were acquired under four different light conditions. According to our results, the Thin-Plate Spline approach showed very high calibration efficiency, opening the way to in-field color quantification not only in food science but also in other biological disciplines. These results are of great importance for scientific color evaluation when lighting conditions are not controlled. Moreover, the approach allows the use of low-cost instruments while still returning scientifically sound quantitative data. PMID:22969337
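The paper's own Matlab code is referenced in its Appendix; a minimal, self-contained sketch of a 3-D thin-plate-spline colour warp (kernel U(r) = r, the biharmonic Green's function in three dimensions, plus an affine term; the chart values here are synthetic) looks like this:

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 3-D thin-plate-spline warp taking measured RGB (src) to
    reference RGB (dst); both are (n, 3) arrays from a colour chart."""
    n = len(src)
    K = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)  # U(r) = r
    P = np.hstack([np.ones((n, 1)), src])                           # affine part
    A = np.zeros((n + 4, n + 4))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 4, 3))
    b[:n] = dst
    return src.copy(), np.linalg.solve(A, b)

def tps_apply(model, x):
    """Warp new colours x (an (m, 3) array) through the fitted spline."""
    centers, coef = model
    K = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    P = np.hstack([np.ones((len(x), 1)), x])
    return K @ coef[:len(centers)] + P @ coef[len(centers):]

rng = np.random.default_rng(1)
measured = rng.uniform(0, 1, (24, 3))    # e.g. a 24-patch colour chart
reference = measured ** 1.2 + 0.02       # synthetic device distortion
model = tps_fit(measured, reference)
calibrated = tps_apply(model, measured)
```

The spline interpolates the chart patches exactly and warps all other colours smoothly, which is what makes the approach robust to nonlinear device distortions.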

  19. (Mis)use of (133)Ba as a calibration surrogate for (131)I in clinical activity calibrators.

    PubMed

    Zimmerman, B E; Bergeron, D E

    2016-03-01

Using NIST-calibrated solutions of (133)Ba and (131)I in the 5 mL NIST ampoule geometry, measurements were made in three NIST-maintained Capintec activity calibrators and the NIST Vinten 671 ionization chamber to evaluate the suitability of using (133)Ba as a calibration surrogate for (131)I. For the Capintec calibrators, the (133)Ba response was about 300% higher than that of the same amount of (131)I. For the Vinten 671, the (133)Ba response was about 7% higher than that of (131)I. These results demonstrate that (133)Ba is a poor surrogate for (131)I. New calibration factors for these radionuclides in the ampoule geometry for the Vinten 671 and Capintec activity calibrators were also determined. Published by Elsevier Ltd.

  20. High Gain Antenna Calibration on Three Spacecraft

    NASA Technical Reports Server (NTRS)

    Hashmall, Joseph A.

    2011-01-01

This paper describes the alignment calibration of spacecraft High Gain Antennas (HGAs) for three missions. For two of the missions (the Lunar Reconnaissance Orbiter and the Solar Dynamics Observatory) the calibration was performed on orbit. For the third mission (the Global Precipitation Measurement core satellite) ground simulation of the calibration was performed in a calibration feasibility study. These three satellites provide a range of calibration situations: lunar orbit transmitting to a ground antenna for LRO, geosynchronous orbit transmitting to a ground antenna for SDO, and low Earth orbit transmitting to TDRS satellites for GPM. The calibration results depend strongly on the quality and quantity of calibration data. With insufficient data, the calibration function may give erroneous solutions. Manual intervention in the calibration allowed reliable parameters to be generated for all three missions.

  1. Cross-calibration between airborne SAR sensors

    NASA Technical Reports Server (NTRS)

    Zink, Manfred; Olivier, Philippe; Freeman, Anthony

    1993-01-01

    As Synthetic Aperture Radar (SAR) system performance and experience in SAR signature evaluation increase, quantitative analysis becomes more and more important. Such analyses require an absolute radiometric calibration of the complete SAR system. To keep the expenditure on calibration of future multichannel and multisensor remote sensing systems (e.g., X-SAR/SIR-C) within a tolerable level, data from different tracks and different sensors (channels) must be cross calibrated. The 1989 joint E-SAR/DC-8 SAR calibration campaign gave a first opportunity for such an experiment, including cross sensor and cross track calibration. A basic requirement for successful cross calibration is the stability of the SAR systems. The calibration parameters derived from different tracks and the polarimetric properties of the uncalibrated data are used to describe this stability. Quality criteria for a successful cross calibration are the agreement of alpha degree values and the consistency of radar cross sections of equally sized corner reflectors. Channel imbalance and cross talk provide additional quality in case of the polarimetric DC-8 SAR.

  2. Multiple-Objective Stepwise Calibration Using Luca

    USGS Publications Warehouse

    Hay, Lauren E.; Umemoto, Makiko

    2007-01-01

    This report documents Luca (Let us calibrate), a multiple-objective, stepwise, automated procedure for hydrologic model calibration and the associated graphical user interface (GUI). Luca is a wizard-style user-friendly GUI that provides an easy systematic way of building and executing a calibration procedure. The calibration procedure uses the Shuffled Complex Evolution global search algorithm to calibrate any model compiled with the U.S. Geological Survey's Modular Modeling System. This process assures that intermediate and final states of the model are simulated consistently with measured values.
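A toy stepwise calibration in the spirit of Luca (a simple random search standing in for the Shuffled Complex Evolution algorithm; the two-parameter model and its two objective datasets are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

x = np.linspace(0.0, 6.0, 50)
obs_a = 2.0 * x                    # observable sensitive only to p0
obs_b = 2.0 * x + 0.7 * np.sin(x)  # observable sensitive to p0 and p1

def rmse(sim, obs):
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def calibrate_step(params, idx, bounds, objective, n_iter=2000):
    """One Luca-style step: search a single parameter against its own objective."""
    best, best_err = params[:], objective(params)
    for _ in range(n_iter):
        trial = best[:]
        trial[idx] = rng.uniform(*bounds)
        err = objective(trial)
        if err < best_err:
            best, best_err = trial, err
    return best

p = [1.0, 0.0]
# Step 1: calibrate p0 against its objective; step 2: calibrate p1 against the
# next objective with p0 held at its calibrated value.
p = calibrate_step(p, 0, (0.0, 5.0), lambda q: rmse(q[0] * x, obs_a))
p = calibrate_step(p, 1, (0.0, 2.0),
                   lambda q: rmse(q[0] * x + q[1] * np.sin(x), obs_b))
```

Fixing the parameters calibrated in earlier steps is what makes the procedure stepwise: each objective constrains only the parameters it is sensitive to.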

  3. Antenna Calibration and Measurement Equipment

    NASA Technical Reports Server (NTRS)

    Rochblatt, David J.; Cortes, Manuel Vazquez

    2012-01-01

A document describes the Antenna Calibration & Measurement Equipment (ACME) system that will provide the Deep Space Network (DSN) with instrumentation enabling a trained RF engineer at each complex to perform antenna calibration measurements and to generate antenna calibration data. This data includes continuous-scan auto-bore-based data acquisition with all-sky data gathering in support of 4th-order pointing model generation requirements. Other data include antenna subreflector focus, system noise temperature and tipping curves, antenna efficiency, system linearity reports, and instrument calibration. The ACME system design is based on the on-the-fly (OTF) mapping technique and architecture. ACME has contributed to improving the RF performance of the DSN by approximately a factor of two, and has improved the pointing performance of the DSN antennas and the productivity of its personnel and calibration engineers.

  4. Calibrating Historical IR Sensors Using GEO, and AVHRR Infrared Tropical Mean Calibration Models

    NASA Technical Reports Server (NTRS)

    Scarino, Benjamin; Doelling, David R.; Minnis, Patrick; Gopalan, Arun; Haney, Conor; Bhatt, Rajendra

    2014-01-01

Long-term, remote-sensing-based climate data records (CDRs) are highly dependent on having consistent, well-calibrated satellite instrument measurements of the Earth's radiant energy. Therefore, by making historical satellite calibrations consistent with those of today's imagers, the Earth-observing community can benefit from a CDR that spans a minimum of 30 years. Most operational meteorological satellites rely on an onboard blackbody and space looks to provide on-orbit IR calibration, but neither target is traceable to absolute standards. The IR channels can also be affected by ice on the detector window, angle dependency of the scan mirror emissivity, stray light, and detector-to-detector striping. Being able to quantify and correct such degradations would mean IR data from any satellite imager could contribute to a CDR. Recent efforts have focused on utilizing well-calibrated modern hyper-spectral sensors to intercalibrate concurrent operational IR imagers to a single reference. In order to consistently calibrate both historical and current IR imagers to the same reference, however, another strategy is needed. Large, well-characterized tropical-domain Earth targets have the potential of providing an Earth-view reference accuracy of within 0.5 K. To that end, NASA Langley is developing an IR tropical mean calibration model in order to calibrate historical Advanced Very High Resolution Radiometer (AVHRR) instruments. Using Meteosat-9 (Met-9) as a reference, empirical models are built from spatially and temporally binned Met-9 and AVHRR tropical IR brightness temperatures. By demonstrating the stability of the Met-9 tropical models, NOAA-18 AVHRR can be calibrated to Met-9 by matching the AVHRR monthly histogram averages with the Met-9 model. This method is validated with ray-matched AVHRR and Met-9 bias-difference time series. Establishing the validity of this empirical model will allow for the calibration of historical AVHRR sensors to within 0.5 K.
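The histogram-average matching step can be caricatured with synthetic monthly means (the gain, offset, and noise below are invented; real tropical-mean models are built from binned Met-9 brightness temperatures):

```python
import numpy as np

rng = np.random.default_rng(7)

# Met-9-referenced tropical-mean IR brightness temperatures, one per month (K)
met9_model = 285.0 + 2.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 12,
                                              endpoint=False))

# AVHRR monthly histogram averages with a hypothetical gain/offset error
avhrr = 1.02 * met9_model - 4.0 + rng.normal(0.0, 0.05, 12)

# Match the AVHRR monthly averages to the reference model by least squares
A = np.vstack([avhrr, np.ones(12)]).T
(gain, offset), *_ = np.linalg.lstsq(A, met9_model, rcond=None)
avhrr_cal = gain * avhrr + offset
bias = float(np.mean(avhrr_cal - met9_model))
```

The fitted gain and offset play the role of the recalibration coefficients; the residual bias-difference time series is what the validation step examines.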

  5. Software Defined Radio with Parallelized Software Architecture

    NASA Technical Reports Server (NTRS)

    Heckler, Greg

    2013-01-01

This software implements software-defined radio processing over multicore, multi-CPU systems in a way that maximizes the use of CPU resources in the system. The software treats each processing step in either a communications or navigation modulator or demodulator system as an independent, threaded block. Each threaded block is defined with a programmable number of input or output buffers; these buffers are implemented using POSIX pipes. In addition, each threaded block is assigned a unique thread upon block installation. A modulator or demodulator system is built by assembling the threaded blocks into a flow graph that accomplishes the desired signal processing. This software architecture allows the software to scale effortlessly between single-CPU/single-core computers and multi-CPU/multi-core computers without recompilation. NASA spaceflight and ground communications systems currently rely exclusively on ASICs or FPGAs. This software allows low- and medium-bandwidth (100 bps to approximately 50 Mbps) software-defined radios to be designed and implemented solely in C/C++ software, while lowering development costs and facilitating reuse and extensibility.
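A minimal sketch of the architecture (in Python rather than the C/C++ of the actual software, with an invented bit-inverting stage standing in for a real modulator block): each stage runs in its own thread and streams fixed-size records to the next stage through POSIX pipes.

```python
import os
import threading

REC = 64  # bytes per record exchanged between blocks

def source(w, n):
    """Generate n records of raw 'samples' and close the pipe at end of stream."""
    for i in range(n):
        os.write(w, bytes([i % 256]) * REC)
    os.close(w)

def invert(r, w):
    """Stand-in processing block: bitwise-invert every byte, record by record."""
    while chunk := os.read(r, REC):
        os.write(w, bytes(b ^ 0xFF for b in chunk))
    os.close(r); os.close(w)

def sink(r, out):
    """Collect processed records until the upstream block closes its pipe."""
    while chunk := os.read(r, REC):
        out.append(chunk)
    os.close(r)

def run_graph(n):
    """Assemble source -> invert -> sink into a flow graph and run it."""
    r1, w1 = os.pipe()
    r2, w2 = os.pipe()
    out = []
    threads = [threading.Thread(target=source, args=(w1, n)),
               threading.Thread(target=invert, args=(r1, w2)),
               threading.Thread(target=sink, args=(r2, out))]
    for t in threads: t.start()
    for t in threads: t.join()
    return out

records = run_graph(4)
```

Because each block only sees its pipe endpoints, the operating system's scheduler is free to place the threads on any available cores, which is the property that lets such a graph scale across CPUs without recompilation.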

  7. Attitude Sensor and Gyro Calibration for Messenger

    NASA Technical Reports Server (NTRS)

    O'Shaughnessy, Daniel; Pittelkau, Mark E.

    2007-01-01

The Redundant Inertial Measurement Unit Attitude Determination/Calibration (RADICAL(TM)) filter was used to estimate star tracker and gyro calibration parameters using MESSENGER telemetry data from three calibration events. An overview of the MESSENGER attitude sensors and their configuration is given, the calibration maneuvers are described, the results are compared with previous calibrations, and variations and trends in the estimated calibration parameters are examined. The warm restart and covariance bump features of the RADICAL(TM) filter were used to estimate calibration parameters from two disjoint telemetry streams. Results show that the calibration parameters converge faster, with much less transient variation during convergence, than when the filter is cold-started at the start of each telemetry stream.

  8. Daytime sky polarization calibration limitations

    NASA Astrophysics Data System (ADS)

    Harrington, David M.; Kuhn, Jeffrey R.; Ariste, Arturo López

    2017-01-01

    The daytime sky has recently been demonstrated as a useful calibration tool for deriving polarization cross-talk properties of large astronomical telescopes. The Daniel K. Inouye Solar Telescope and other large telescopes under construction can benefit from precise polarimetric calibration of large mirrors. Several atmospheric phenomena and instrumental errors potentially limit the technique's accuracy. At the 3.67-m AEOS telescope on Haleakala, we performed a large observing campaign with the HiVIS spectropolarimeter to identify limitations and develop algorithms for extracting consistent calibrations. Effective sampling of the telescope optical configurations and filtering of data for several derived parameters provide robustness to the derived Mueller matrix calibrations. Second-order scattering models of the sky show that this method is relatively insensitive to multiple-scattering in the sky, provided calibration observations are done in regions of high polarization degree. The technique is also insensitive to assumptions about telescope-induced polarization, provided the mirror coatings are highly reflective. Zemax-derived polarization models show agreement between the functional dependence of polarization predictions and the corresponding on-sky calibrations.

  9. Cross-Calibration of Secondary Electron Multiplier in Noble Gas Analysis

    NASA Astrophysics Data System (ADS)

    Santato, Alessandro; Hamilton, Doug; Deerberg, Michael; Wijbrans, Jan; Kuiper, Klaudia; Bouman, Claudia

    2015-04-01

The latest generation of multi-collector noble gas mass spectrometers has decisively improved the precision in isotopic ratio analysis [1, 2] and helped the scientific community to address new questions [3]. Measuring numerous isotopes simultaneously has two significant advantages: firstly, any fluctuations in signal intensity have no effect on the isotope ratio and secondly, the analysis time is reduced. This particular point becomes very important in static vacuum mass spectrometry where during the analysis, the signal intensity decays and at the same time the background increases. However, when multi-collector analysis is utilized, it is necessary to pay special attention to the cross calibration of the detectors. This is a key point in order to have accurate and reproducible isotopic ratios. In isotope ratio mass spectrometry, with regard to the type of detector (i.e. Faraday or Secondary Electron Multiplier, SEM), analytical technique (TIMS, MC-ICP-MS or IRMS) and isotope system of interest, several techniques are currently applied to cross-calibrate the detectors. Specifically, the gain of the Faraday cups is generally stable and only the associated amplifier must be calibrated. For example, on the Thermo Scientific instrument control systems, the 10^11 Ω and 10^12 Ω amplifiers can easily be calibrated through a fully software controlled procedure by inputting a constant electric signal to each amplifier sequentially [4]. On the other hand, the yield of the SEMs can drift up to 0.2%/hour and other techniques such as peak hopping, standard-sample bracketing and multi-dynamic measurement must be used. Peak hopping allows the detectors to be calibrated by measuring an ion beam of constant intensity across the detectors whereas standard-sample bracketing corrects the drift of the detectors through the analysis of a reference standard of a known isotopic ratio. If at least one isotopic pair of the sample is known, multi-dynamic measurement can be used; in this
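As a schematic example of the peak-hopping approach (detector names and count rates invented), relative yields follow from each detector's reading of the same constant beam, and measured ratios are corrected accordingly:

```python
# Count rates observed when the same stable ion beam is hopped across detectors
readings = {"faraday": 1.000e6, "sem1": 0.948e6, "sem2": 0.982e6}  # counts/s

reference = readings["faraday"]  # Faraday cup taken as the stable reference
yields = {det: cps / reference for det, cps in readings.items()}

# Correct a raw isotope ratio measured as (sem1 signal) / (sem2 signal)
raw_ratio = 0.2930
corrected_ratio = raw_ratio * yields["sem2"] / yields["sem1"]
```

Because SEM yields drift, the hop sequence is repeated during the run so the yield factors track the drift rather than being measured once.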

  10. Calibration of hydrological model with programme PEST

    NASA Astrophysics Data System (ADS)

    Brilly, Mitja; Vidmar, Andrej; Kryžanowski, Andrej; Bezak, Nejc; Šraj, Mojca

    2016-04-01

PEST is a tool based on minimization of an objective function related to the root-mean-square error between model output and measurements. We use the "singular value decomposition" (SVD) section of the PEST control file and the Tikhonov regularization method to estimate model parameters successfully. PEST can fail when the inverse problem is ill-posed, but SVD ensures that PEST maintains numerical stability. The choice of the initial parameter values is an important issue in PEST and requires expert knowledge. The flexible nature of the PEST software and its ability to be applied to whole catchments at once meant that the calibration performed extremely well across a high number of sub-catchments. The parallel-computing version of PEST, called BeoPEST, was used successfully to speed up the calibration process. BeoPEST employs smart slaves and point-to-point communication to transfer data between the master and slave computers. The HBV-light model is a simple multi-tank-type model for simulating precipitation-runoff. It is a conceptual balance model of catchment hydrology that simulates discharge using rainfall, temperature, and estimates of potential evaporation. The HBV-light-CLI version allows the user to run HBV-light from the command line. Input and results files are in XML form, which allows it to be easily connected with other applications such as pre- and post-processing utilities and PEST itself. The procedure was applied to a hydrological model of the Savinja catchment (1852 km2), which consists of twenty-one sub-catchments. Data are processed on an hourly basis.
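The objective being minimized can be sketched as follows (a hypothetical two-parameter linear model; in PEST the regularization weight and preferred parameter values come from the control file):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20)
obs = 3.0 * x  # observations constrain only the slope parameter

def sim(params):
    """Toy model: params[1] has no effect, making the inverse problem ill-posed."""
    return params[0] * x + 0.0 * params[1]

def objective(params, prior, lam):
    """Measurement misfit plus Tikhonov penalty toward preferred values."""
    resid = sim(params) - obs
    phi_meas = float(np.sqrt(np.mean(resid ** 2)))       # RMSE misfit
    phi_reg = lam * float(np.sum((params - prior) ** 2))  # regularization term
    return phi_meas + phi_reg

prior = np.array([0.0, 0.0])
# Both parameter sets fit the data identically, but regularization prefers
# the one consistent with the prior, stabilizing the ill-posed problem.
phi_far  = objective(np.array([3.0, 5.0]), prior, lam=0.1)
phi_near = objective(np.array([3.0, 0.0]), prior, lam=0.1)
```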

  11. Calibration Test Set for a Phase-Comparison Digital Tracker

    NASA Technical Reports Server (NTRS)

    Boas, Amy; Li, Samuel; McMaster, Robert

    2007-01-01

An apparatus that generates four signals at a frequency of 7.1 GHz having precisely controlled relative phases and equal amplitudes has been designed and built. This apparatus is intended mainly for use in computer-controlled automated calibration and testing of a phase-comparison digital tracker (PCDT) that measures the relative phases of replicas of the same X-band signal received by four antenna elements in an array. (The relative direction of incidence of the signal on the array is then computed from the relative phases.) The present apparatus can also be used to generate precisely phased signals for steering a beam transmitted from a phased antenna array. The apparatus (see figure) includes a 7.1-GHz signal generator, the output of which is fed to a four-way splitter. Each of the four splitter outputs is attenuated by 10 dB and fed as input to a vector modulator, wherein DC bias voltages are used to control the in-phase (I) and quadrature (Q) signal components. The bias voltages are generated by digital-to-analog-converter circuits on a control board that receives its digital control input from a computer running a LabVIEW program. The outputs of the vector modulators are further attenuated by 10 dB, then presented at high-grade radio-frequency connectors. The attenuation reduces the effects of changing mismatch and reflections. The apparatus was calibrated in a process in which the bias voltages were first stepped through all possible IQ settings. Then in a reverse interpolation performed by use of MATLAB software, a lookup table containing 3,600 IQ settings, representing equal amplitude and phase increments of 0.1°, was created for each vector modulator. During operation of the apparatus, these lookup tables are used in calibrating the PCDT.
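The lookup-table construction reduces to mapping each 0.1° phase step to an (I, Q) pair of constant amplitude (the amplitude value here is arbitrary, and an ideal modulator is assumed):

```python
import math

def build_iq_table(amplitude=0.5, steps=3600):
    """(I, Q) settings giving equal-amplitude outputs at 0.1-degree increments."""
    table = []
    for k in range(steps):
        phase = math.radians(k * 360.0 / steps)
        table.append((amplitude * math.cos(phase), amplitude * math.sin(phase)))
    return table

table = build_iq_table()
```

In the real apparatus each (I, Q) entry is produced by reverse interpolation of the measured modulator response rather than by an ideal cosine/sine, which is what removes the modulator's nonlinearity from the calibration.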

  12. Wavelength-Filter-Based Spectral Calibrated Wavenumber-Linearization in 1.3 μm Spectral Domain Optical Coherence Tomography.

    PubMed

    Wijesinghe, Ruchire Eranga Henry; Cho, Nam Hyun; Park, Kibeom; Shin, Yongseung; Kim, Jeehyun

    2013-12-01

    In this study, we demonstrate an enhanced spectral calibration method for 1.3 μm spectral-domain optical coherence tomography (SD-OCT). The calibration method using a wavelength filter simplifies the SD-OCT system, and both the axial resolution and the overall speed of the OCT system can be dramatically improved. An externally connected wavelength filter is utilized to obtain the mapping between wavenumber and pixel position. During the calibration process the wavelength filter is placed after a broadband source, connected through an optical circulator. The filtered spectrum, with a narrow line width of 0.5 nm, is detected using a line-scan camera. The method does not require a filter or a software recalibration algorithm during imaging, as it simply resamples the OCT signal from the detector array without employing rescaling or interpolation methods. One of the main drawbacks of SD-OCT, the broadening of the point spread function (PSF) with increasing imaging depth, can be compensated by increasing the wavenumber-linearization order. The sensitivity of our system was measured at 99.8 dB at an imaging depth of 2.1 mm, compared with the uncompensated case.
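The core of any such calibration is resampling the camera spectrum, which is roughly linear in wavelength, onto a grid that is linear in wavenumber k = 2π/λ. A sketch with a hypothetical pixel-to-wavelength mapping (the filter-derived mapping in the paper plays this role) and a single-reflector interferogram:

```python
import numpy as np

# Pixel-to-wavelength mapping as measured with the external wavelength filter
# (a hypothetical 1250-1350 nm spread across a 1024-pixel line-scan camera)
pixels = np.arange(1024)
lam = 1250e-9 + pixels * (100e-9 / 1023.0)
k = 2.0 * np.pi / lam  # wavenumber at each pixel (descending, nonuniform)

k_lin = np.linspace(k.min(), k.max(), 1024)  # target: evenly spaced wavenumbers

def linearize(spectrum):
    """Resample a detector spectrum onto the wavenumber-linear grid.
    np.interp needs ascending abscissae, so the descending k axis is flipped."""
    return np.interp(k_lin, k[::-1], spectrum[::-1])

# Single-reflector interferogram: a pure cosine in k, chirped in pixel space
depth = 0.2e-3  # 0.2 mm optical path difference
raw = np.cos(2.0 * k * depth)
lin = linearize(raw)
```

After linearization the interferogram is a uniform-frequency cosine, so a plain FFT yields a sharp axial PSF; without this step the PSF broadens with depth.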

  13. Airdata Measurement and Calibration

    NASA Technical Reports Server (NTRS)

    Haering, Edward A., Jr.

    1995-01-01

    This memorandum provides a brief introduction to airdata measurement and calibration. Readers will learn about typical test objectives, quantities to measure, and flight maneuvers and operations for calibration. The memorandum informs readers about tower-flyby, trailing cone, pacer, radar-tracking, and dynamic airdata calibration maneuvers. Readers will also begin to understand how some data analysis considerations and special airdata cases, including high-angle-of-attack flight, high-speed flight, and nonobtrusive sensors are handled. This memorandum is not intended to be all inclusive; this paper contains extensive reference and bibliography sections.

  14. A fully automated calibration method for an optical see-through head-mounted operating microscope with variable zoom and focus.

    PubMed

    Figl, Michael; Ede, Christopher; Hummel, Johann; Wanschitz, Felix; Ewers, Rolf; Bergmann, Helmar; Birkfellner, Wolfgang

    2005-11-01

    Ever since the development of the first applications in image-guided therapy (IGT), the use of head-mounted displays (HMDs) was considered an important extension of existing IGT technologies. Several approaches to utilizing HMDs and modified medical devices for augmented reality (AR) visualization were implemented. These approaches include video see-through systems, semitransparent mirrors, modified endoscopes, and modified operating microscopes. Common to all these devices is the fact that a precise calibration between the display and three-dimensional coordinates in the patient's frame of reference is compulsory. In optical see-through devices based on complex optical systems such as operating microscopes or operating binoculars-as in the case of the system presented in this paper-this procedure can become increasingly difficult since precise camera calibration for every focus and zoom position is required. We present a method for fully automatic calibration of the operating binocular Varioscope M5 AR for the full range of zoom and focus settings available. Our method uses a special calibration pattern, a linear guide driven by a stepping motor, and special calibration software. The overlay error in the calibration plane was found to be 0.14-0.91 mm, which is less than 1% of the field of view. Using the motorized calibration rig as presented in the paper, we were also able to assess the dynamic latency when viewing augmentation graphics on a mobile target; the spatial displacement due to latency was found to be at most 1.1-2.8 mm, and the disparity between the true object and its computed overlay corresponded to a latency of 0.1 s. We conclude that the automatic calibration method presented in this paper is sufficient in terms of accuracy and time requirements for standard uses of optical see-through systems in a clinical environment.

  15. On the prospects of cross-calibrating the Cherenkov Telescope Array with an airborne calibration platform

    NASA Astrophysics Data System (ADS)

    Brown, Anthony M.

    2018-01-01

    Recent advances in unmanned aerial vehicle (UAV) technology have made UAVs an attractive possibility as an airborne calibration platform for astronomical facilities. This is especially true for arrays of telescopes spread over a large area such as the Cherenkov Telescope Array (CTA). In this paper, the feasibility of using UAVs to calibrate CTA is investigated. Assuming a UAV at 1 km altitude above CTA, operating on astronomically clear nights with stratified, low atmospheric dust content, appropriate thermal protection for the calibration light source and an onboard photodiode to monitor its absolute light intensity, inter-calibration of CTA's telescopes of the same size class is found to be achievable with a 6 - 8 % uncertainty. For cross-calibration of different telescope size classes, a systematic uncertainty of 8 - 10 % is found to be achievable. Importantly, equipping the UAV with a multi-wavelength calibration light source affords us the ability to monitor the wavelength-dependent degradation of CTA telescopes' optical system, allowing us not only to maintain this 6 - 10 % uncertainty after the first few years of telescope deployment, but also to accurately account for the effect of multi-wavelength degradation on the cross-calibration of CTA by other techniques, namely with images of air showers and local muons. A UAV-based system thus provides CTA with several independent and complementary methods of cross-calibrating the optical throughput of individual telescopes. Furthermore, housing environmental sensors on the UAV system allows us not only to minimise the systematic uncertainty associated with the atmospheric transmission of the calibration signal, but also to map the dust content above CTA and to monitor the temperature, humidity and pressure profiles of the first kilometre of atmosphere above CTA with each UAV flight.

  16. The relationships between software publications and software systems

    NASA Astrophysics Data System (ADS)

    Hogg, David W.

    2017-01-01

    When we build software systems or software tools for astronomy, we sometimes do and sometimes don't also write and publish standard scientific papers about those software systems. I will discuss the pros and cons of writing such publications. There are impacts of writing such papers immediately (they can affect the design and structure of the software project itself), in the short term (they can promote adoption and legitimize the software), in the medium term (they can provide a platform for all the literature's mechanisms for citation, criticism, and reuse), and in the long term (they can preserve ideas that are embodied in the software, possibly on timescales much longer than the lifetime of any software context). I will argue that as important as pure software contributions are to astronomy—and I am both a preacher and a practitioner—software contributions are even more valuable when they are associated with traditional scientific publications. There are exceptions and complexities of course, which I will discuss.

  17. Model Calibration with Censored Data

    DOE PAGES

    Cao, Fang; Ba, Shan; Brenneman, William A.; ...

    2017-06-28

    Here, the purpose of model calibration is to make the model predictions closer to reality. The classical Kennedy-O'Hagan approach is widely used for model calibration, which can account for the inadequacy of the computer model while simultaneously estimating the unknown calibration parameters. In many applications, the phenomenon of censoring occurs when the exact outcome of the physical experiment is not observed, but is only known to fall within a certain region. In such cases, the Kennedy-O'Hagan approach cannot be used directly, and we propose a method to incorporate the censoring information when performing model calibration. The method is applied to study the compression phenomenon of liquid inside a bottle. The results show significant improvement over the traditional calibration methods, especially when the number of censored observations is large.
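The core change censoring forces on a calibration objective is that censored observations contribute a probability mass rather than a density. The sketch below shows this for a simple Gaussian location-scale model; it is an illustration of the censored-likelihood idea only, not the Kennedy-O'Hagan formulation used in the paper:

```python
import numpy as np
from scipy.stats import norm

def censored_loglik(mu, sigma, observed, n_censored, lower):
    """Gaussian log-likelihood with left-censoring.

    Fully observed points contribute the log-density; each censored point,
    known only to fall below `lower`, contributes log P(Y < lower)."""
    ll = norm.logpdf(np.asarray(observed, dtype=float), loc=mu, scale=sigma).sum()
    ll += n_censored * norm.logcdf(lower, loc=mu, scale=sigma)
    return ll
```

Maximizing such a likelihood (or embedding the same terms in a Bayesian posterior) is what lets the censored runs inform the calibration parameters instead of being discarded.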

  18. Calibration of the MCAO Canopus Bench

    NASA Astrophysics Data System (ADS)

    Garcia-Rissmann, Aurea; Rigaut, François; Bec, Matthieu; Boccas, Maxime; Galvez, Ramon; Gausachs, Gaston; Gratadour, Damien; Neichel, Benoit

    The final phase of implementation of all optical components, as well as their integration and tests on the Canopus MCAO bench is currently underway. We present here a detailed description of the LGS and NGS WFS calibration sequences implemented through MYST (MCAO Yorick Smart Tool), a yorick+python+glade software package developed in-house which allows multiple users to control and monitor the bench remotely over the network using EPICS commands. A fine tuning of the optical setup and a better understanding of the flexure/temperature dependencies is being carried out and will allow us to build the many look-up tables to be eventually used by the system (e.g. telescope primary and secondary mirrors). Preliminary work on non-common path aberrations to account for the static aberrations in the central 60 arcsec science field of view (FoV) has been done iteratively using a science focal plane wavefront sensor and has shown good results both in individual directions as well as simultaneously over the entire FoV, the latter using the tomographic approach (presented in another paper in this conference).

  19. Development and calibration of an air-floating six-axis force measurement platform using self-calibration

    NASA Astrophysics Data System (ADS)

    Huang, Bin; Wang, Xiaomeng; Li, Chengwei; Yi, Jiajing; Lu, Rongsheng; Tao, Jiayue

    2016-09-01

    This paper describes the design, working principle, and calibration of an air-floating six-axis force measurement platform, in which the floating plate and nozzles are connected without contact, preventing inter-dimensional coupling and significantly increasing precision. The measurement repeatability error of the force magnitude in the platform is less than 0.2% full scale (FS), which is significantly better than the precision of 1% FS in the six-axis force sensors on the current market. We overcame the difficulties of the weight-loading device in high-precision calibration by proposing a self-calibration method based on the floating plate's gravity, and met the calibration precision requirement of 0.02% FS. This study has general implications for the development and calibration of high-precision multi-axis force sensors. In particular, the air-floating six-axis force measurement platform could be applied to the calibration of some special sensors such as flexible tactile sensors and may be used as a micro-nano mechanical assembly platform for real-time assembly force testing.

  20. Calibration of the TOPEX altimeter using a GPS buoy

    NASA Technical Reports Server (NTRS)

    Born, G. H.; Parke, Michael E.; Axelrad, P.; Gold, K. L.; Johnson, James; Key, K.; Kubitschek, Daniel G.; Christensen, Edward J.

    1994-01-01

    The use of a spar buoy equipped with a Global Positioning System (GPS) antenna to calibrate the height measurement of the TOPEX radar altimeter is described. In order to determine the height of the GPS antenna phase center above the ocean surface, the buoy was also equipped with instrumentation to measure the instantaneous location of the waterline, and tilt of the buoy from vertical. The experiment was conducted off the California coast near the Texaco offshore oil platform, Harvest, during cycle 34 of the TOPEX/POSEIDON observational period. GPS solutions were computed for the buoy position using two different software packages, K&RS and GIPSY-OASIS II. These solutions were combined with estimates of the waterline location on the buoy to yield the height of the ocean surface. The ocean surface height in an absolute coordinate system combined with knowledge of the spacecraft height from tracking data provides a computed altimeter range measurement. By comparing this computed value to the actual altimeter measurement, the altimeter bias can be calibrated. The altimeter height bias obtained with the buoy using K&RS was -14.6 +/- 4 cm, while with GIPSY-OASIS II it was -13.1 +/- 4 cm. These are 0.1 cm and 1.6 cm different from the -14.7 +/- 4 cm result obtained for this overflight with the tide gauge instruments located on Platform Harvest.

  1. Cross-calibration of liquid and solid QCT calibration standards: corrections to the UCSF normative data

    NASA Technical Reports Server (NTRS)

    Faulkner, K. G.; Gluer, C. C.; Grampp, S.; Genant, H. K.

    1993-01-01

    Quantitative computed tomography (QCT) has been shown to be a precise and sensitive method for evaluating spinal bone mineral density (BMD) and skeletal response to aging and therapy. Precise and accurate determination of BMD using QCT requires a calibration standard to compensate for and reduce the effects of beam-hardening artifacts and scanner drift. The first standards were based on dipotassium hydrogen phosphate (K2HPO4) solutions. Recently, several manufacturers have developed stable solid calibration standards based on calcium hydroxyapatite (CHA) in water-equivalent plastic. Due to differences in attenuating properties of the liquid and solid standards, the calibrated BMD values obtained with each system do not agree. In order to compare and interpret the results obtained on both systems, cross-calibration measurements were performed in phantoms and patients using the University of California San Francisco (UCSF) liquid standard and the Image Analysis (IA) solid standard on the UCSF GE 9800 CT scanner. From the phantom measurements, a highly linear relationship was found between the liquid- and solid-calibrated BMD values. No influence on the cross-calibration due to simulated variations in body size or vertebral fat content was seen, though a significant difference in the cross-calibration was observed between scans acquired at 80 and 140 kVp. From the patient measurements, a linear relationship between the liquid (UCSF) and solid (IA) calibrated values was derived for GE 9800 CT scanners at 80 kVp (IA = [1.15 x UCSF] - 7.32).(ABSTRACT TRUNCATED AT 250 WORDS).
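The patient-derived regression quoted above translates directly into a conversion helper. This sketch simply applies the published relation, and is valid only for the stated conditions (GE 9800 CT scanner at 80 kVp, BMD in mg/cm³):

```python
def ucsf_to_ia(ucsf_bmd):
    """Convert a liquid-calibrated (UCSF K2HPO4) spinal BMD value to its
    solid-calibrated (Image Analysis CHA) equivalent, using the GE 9800
    80 kVp patient regression IA = 1.15 * UCSF - 7.32."""
    return 1.15 * ucsf_bmd - 7.32
```

For example, a liquid-calibrated reading of 100 mg/cm³ maps to roughly 107.7 mg/cm³ on the solid standard; scans at 140 kVp would need a different regression, per the abstract.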

  2. Software Engineering Program: Software Process Improvement Guidebook

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The purpose of this document is to provide experience-based guidance in implementing a software process improvement program in any NASA software development or maintenance community. This guidebook details how to define, operate, and implement a working software process improvement program. It describes the concept of the software process improvement program and its basic organizational components. It then describes the structure, organization, and operation of the software process improvement program, illustrating all these concepts with specific NASA examples. The information presented in the document is derived from the experiences of several NASA software organizations, including the SEL, the SEAL, and the SORCE. Their experiences reflect many of the elements of software process improvement within NASA. This guidebook presents lessons learned in a form usable by anyone considering establishing a software process improvement program within his or her own environment. This guidebook attempts to balance general and detailed information. It provides material general enough to be usable by NASA organizations whose characteristics do not directly match those of the sources of the information and models presented herein. It also keeps the ideas sufficiently close to the sources of the practical experiences that have generated the models and information.

  3. Evaluation of factors affecting CGMS calibration.

    PubMed

    Buckingham, Bruce A; Kollman, Craig; Beck, Roy; Kalajian, Andrea; Fiallo-Scharer, Rosanna; Tansey, Michael J; Fox, Larry A; Wilson, Darrell M; Weinzimer, Stuart A; Ruedy, Katrina J; Tamborlane, William V

    2006-06-01

    The optimal number/timing of calibrations entered into the CGMS (Medtronic MiniMed, Northridge, CA) continuous glucose monitoring system have not been previously described. Fifty subjects with Type 1 diabetes mellitus (10-18 years old) were hospitalized in a clinical research center for approximately 24 h on two separate days. CGMS and OneTouch Ultra meter (LifeScan, Milpitas, CA) data were obtained. The CGMS was retrospectively recalibrated using the Ultra data varying the number and timing of calibrations. Resulting CGMS values were compared against laboratory reference values. There was a modest improvement in accuracy with increasing number of calibrations. The median relative absolute deviation (RAD) was 14%, 15%, 13%, and 13% when using three, four, five, and seven calibration values, respectively (P < 0.001). Corresponding percentages of CGMS-reference pairs meeting the International Organisation for Standardisation criteria were 66%, 67%, 71%, and 72% (P < 0.001). Nighttime accuracy improved when daytime calibrations (pre-lunch and pre-dinner) were removed leaving only two calibrations at 9 p.m. and 6 a.m. (median difference, -2 vs. -9 mg/dL, P < 0.001; median RAD, 12% vs. 15%, P = 0.001). Accuracy was better on visits where the average absolute rate of glucose change at the times of calibration was lower. On visits with average absolute rates <0.5, 0.5 to <1.0, 1.0 to <1.5, and >or=1.5 mg/dL/min, median RAD values were 13% versus 14% versus 17% versus 19%, respectively (P = 0.05). Although accuracy is slightly improved with more calibrations, the timing of the calibrations appears more important. Modifying the algorithm to put less weight on daytime calibrations for nighttime values and calibrating during times of relative glucose stability may have greater impact on accuracy.
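The accuracy statistic used throughout this study, the median relative absolute deviation (RAD), can be computed as follows; the glucose values in the usage note are illustrative only:

```python
import numpy as np

def median_rad(sensor, reference):
    """Median relative absolute deviation (RAD), in percent, between
    sensor readings and paired laboratory reference values."""
    sensor = np.asarray(sensor, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.median(np.abs(sensor - reference) / reference)
```

For instance, sensor readings of 110, 90, and 100 mg/dL against reference values of 100 mg/dL each give a median RAD of 10%.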

  4. Evaluation of Factors Affecting CGMS Calibration

    PubMed Central

    2006-01-01

    Background The optimal number/timing of calibrations entered into the Continuous Glucose Monitoring System (“CGMS”; Medtronic MiniMed, Northridge, CA) have not been previously described. Methods Fifty subjects with T1DM (10–18y) were hospitalized in a clinical research center for ~24h on two separate days. CGMS and OneTouch® Ultra® Meter (“Ultra”; LifeScan, Milpitas, CA) data were obtained. The CGMS was retrospectively recalibrated using the Ultra data varying the number and timing of calibrations. Resulting CGMS values were compared against laboratory reference values. Results There was a modest improvement in accuracy with increasing number of calibrations. The median relative absolute deviation (RAD) was 14%, 15%, 13% and 13% when using 3, 4, 5 and 7 calibration values, respectively (p<0.001). Corresponding percentages of CGMS-reference pairs meeting the ISO criteria were 66%, 67%, 71% and 72% (p<0.001). Nighttime accuracy improved when daytime calibrations (pre-lunch and pre-dinner) were removed leaving only two calibrations at 9p.m. and 6a.m. (median difference: −2 vs. −9mg/dL, p<0.001; median RAD: 12% vs. 15%, p=0.001). Accuracy was better on visits where the average absolute rate of glucose change at the times of calibration was lower. On visits with average absolute rates <0.5, 0.5-<1.0, 1.0-<1.5 and ≥1.5mg/dL/min, median RAD values were 13% vs. 14% vs. 17% vs. 19%, respectively (p=0.05). Conclusions Although accuracy is slightly improved with more calibrations, the timing of the calibrations appears more important. Modifying the algorithm to put less weight on daytime calibrations for nighttime values and calibrating during times of relative glucose stability may have greater impact on accuracy. PMID:16800753

  5. The Characterization of a Piston Displacement-Type Flowmeter Calibration Facility and the Calibration and Use of Pulsed Output Type Flowmeters

    PubMed Central

    Mattingly, G. E.

    1992-01-01

    Critical measurement performance of fluid flowmeters requires proper and quantified verification data. These data should be generated using calibration and traceability techniques established for these verification purposes. In these calibration techniques, the calibration facility should be well-characterized and its components and performance properly traced to pertinent higher standards. The use of this calibrator to calibrate flowmeters should be appropriately established and the manner in which the calibrated flowmeter is used should be specified in accord with the conditions of the calibration. These three steps: 1) characterizing the calibration facility itself, 2) using the characterized facility to calibrate a flowmeter, and 3) using the calibrated flowmeter to make a measurement are described and the pertinent equations are given for an encoded-stroke, piston displacement-type calibrator and a pulsed output flowmeter. It is concluded that, given these equations and proper instrumentation of this type of calibrator, very high levels of performance can be attained and, in turn, these can be used to achieve high fluid flow rate measurement accuracy with pulsed output flowmeters. PMID:28053444

  6. Iterative Calibration: A Novel Approach for Calibrating the Molecular Clock Using Complex Geological Events.

    PubMed

    Loeza-Quintana, Tzitziki; Adamowicz, Sarah J

    2018-02-01

    During the past 50 years, the molecular clock has become one of the main tools for providing a time scale for the history of life. In the era of robust molecular evolutionary analysis, clock calibration is still one of the most basic steps needing attention. When fossil records are limited, well-dated geological events are the main resource for calibration. However, biogeographic calibrations have often been used in a simplistic manner, for example assuming simultaneous vicariant divergence of multiple sister lineages. Here, we propose a novel iterative calibration approach to define the most appropriate calibration date by seeking congruence between the dates assigned to multiple allopatric divergences and the geological history. Exploring patterns of molecular divergence in 16 trans-Bering sister clades of echinoderms, we demonstrate that the iterative calibration is predominantly advantageous when using complex geological or climatological events-such as the opening/reclosure of the Bering Strait-providing a powerful tool for clock dating that can be applied to other biogeographic calibration systems and further taxa. Using Bayesian analysis, we observed that evolutionary rate variability in the COI-5P gene is generally distributed in a clock-like fashion for Northern echinoderms. The results reveal a large range of genetic divergences, consistent with multiple pulses of trans-Bering migrations. A resulting rate of 2.8% pairwise Kimura-2-parameter sequence divergence per million years is suggested for the COI-5P gene in Northern echinoderms. Given that molecular rates may vary across latitudes and taxa, this study provides a new context for dating the evolutionary history of Arctic marine life.
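As a back-of-envelope use of the reported rate, a divergence time follows from dividing pairwise sequence divergence by the clock rate. The sketch below assumes the 2.8%/Myr COI-5P rate quoted for Northern echinoderms applies to the pair in question:

```python
def divergence_time_myr(k2p_divergence_percent, rate_percent_per_myr=2.8):
    """Estimate divergence time (millions of years) from pairwise
    Kimura-2-parameter sequence divergence, given a clock rate in
    percent divergence per million years."""
    return k2p_divergence_percent / rate_percent_per_myr
```

For example, a sister pair showing 5.6% K2P divergence would date to roughly 2.0 Myr under this rate; the abstract cautions that rates may vary across latitudes and taxa.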

  7. Solid laboratory calibration of a nonimaging spectroradiometer.

    PubMed

    Schaepman, M E; Dangel, S

    2000-07-20

    Field-based nonimaging spectroradiometers are often used in vicarious calibration experiments for airborne or spaceborne imaging spectrometers. The calibration uncertainties associated with these ground measurements contribute substantially to the overall modeling error in radiance- or reflectance-based vicarious calibration experiments. Because of limitations in the radiometric stability of compact field spectroradiometers, vicarious calibration experiments are based primarily on reflectance measurements rather than on radiance measurements. To characterize the overall uncertainty of radiance-based approaches and assess the sources of uncertainty, we carried out a full laboratory calibration. This laboratory calibration of a nonimaging spectroradiometer is based on a measurement plan targeted at achieving a calibration. The individual calibration steps include characterization of the signal-to-noise ratio, the noise equivalent signal, the dark current, the wavelength calibration, the spectral sampling interval, the nonlinearity, directional and positional effects, the spectral scattering, the field of view, the polarization, the size-of-source effects, and the temperature dependence of a particular instrument. The traceability of the radiance calibration is established to a secondary National Institute of Standards and Technology calibration standard by use of a 95% confidence interval and results in an uncertainty of less than ±7.1% for all spectroradiometer bands.

  8. Muon Energy Calibration of the MINOS Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyagawa, Paul S.

    MINOS is a long-baseline neutrino oscillation experiment designed to search for conclusive evidence of neutrino oscillations and to measure the oscillation parameters precisely. MINOS comprises two iron tracking calorimeters located at Fermilab and Soudan. The Calibration Detector at CERN is a third MINOS detector used as part of the detector response calibration programme. A correct energy calibration between these detectors is crucial for the accurate measurement of oscillation parameters. This thesis presents a calibration developed to produce a uniform response within a detector using cosmic muons. Reconstruction of tracks in cosmic ray data is discussed. This data is utilized to calculate calibration constants for each readout channel of the Calibration Detector. These constants have an average statistical error of 1.8%. The consistency of the constants is demonstrated both within a single run and between runs separated by a few days. Results are presented from applying the calibration to test beam particles measured by the Calibration Detector. The responses are calibrated to within 1.8% systematic error. The potential impact of the calibration on the measurement of oscillation parameters by MINOS is also investigated. Applying the calibration reduces the errors in the measured parameters by ~ 10%, which is equivalent to increasing the amount of data by 20%.

  9. Automated calibration of multistatic arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henderer, Bruce

    A method is disclosed for calibrating a multistatic array having a plurality of transmitter and receiver pairs spaced from one another along a predetermined path and relative to a plurality of bin locations, and further being spaced at a fixed distance from a stationary calibration implement. A clock reference pulse may be generated, and each of the transmitters and receivers of each said transmitter/receiver pair turned on at a monotonically increasing time delay interval relative to the clock reference pulse. The transmitters and receivers may be paired such that a previously calibrated transmitter or receiver of a given one of the transmitter/receiver pairs is paired with a subsequently un-calibrated transmitter or receiver of the immediately following transmitter/receiver pair, thereby calibrating the transmitter or receiver of that following pair.
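The chained pairing the abstract describes can be sketched as a loop that propagates timing offsets down the array. The additive delay model and the `measure_delay` callback below are assumptions for illustration, not details from the patent:

```python
def chain_calibrate(n_pairs, measure_delay):
    """Propagate calibration along a line of transmitter/receiver pairs.

    measure_delay(tx_index, rx_index) returns the measured delay for a
    transmitter/receiver pairing via the fixed calibration implement,
    modeled here as the sum of the two elements' unknown internal delays.
    Pair 0's transmitter is taken as the timing reference (offset 0); each
    measurement then fixes the next uncalibrated element in turn."""
    tx_offset = [0.0] * n_pairs
    rx_offset = [0.0] * n_pairs
    for i in range(n_pairs):
        # calibrated transmitter i paired with its own uncalibrated receiver
        rx_offset[i] = measure_delay(i, i) - tx_offset[i]
        if i + 1 < n_pairs:
            # calibrated receiver i paired with the next uncalibrated transmitter
            tx_offset[i + 1] = measure_delay(i + 1, i) - rx_offset[i]
    return tx_offset, rx_offset
```

Under the additive model, the recovered offsets reproduce every pairwise delay in the array, even for pairings never measured during calibration.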

  10. Neural networks for calibration tomography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur

    1993-01-01

    Artificial neural networks are suitable for performing pattern-to-pattern calibrations. These calibrations are potentially useful for facilities operations in aeronautics, the control of optical alignment, and the like. Computed tomography is compared with neural net calibration tomography for estimating density from its x-ray transform. X-ray transforms are measured, for example, in diffuse-illumination, holographic interferometry of fluids. Computed tomography and neural net calibration tomography are shown to have comparable performance for a 10 degree viewing cone and 29 interferograms within that cone. The system of tomography discussed is proposed as a relevant test of neural networks and other parallel processors intended for use with flow visualization data.

  11. Landsat-7 Enhanced Thematic Mapper plus radiometric calibration

    USGS Publications Warehouse

    Markham, B.L.; Boncyk, Wayne C.; Helder, D.L.; Barker, J.L.

    1997-01-01

    Landsat-7 is currently being built and tested for launch in 1998. The Enhanced Thematic Mapper Plus (ETM+) sensor for Landsat-7, a derivative of the highly successful Thematic Mapper (TM) sensors on Landsats 4 and 5, and the Landsat-7 ground system are being built to provide enhanced radiometric calibration performance. In addition, regular vicarious calibration campaigns are being planned to provide additional information for calibration of the ETM+ instrument. The primary upgrades to the instrument include the addition of two solar calibrators: the full aperture solar calibrator, a deployable diffuser, and the partial aperture solar calibrator, a passive device that allows the ETM+ to image the sun. The ground processing incorporates for the first time an off-line facility, the Image Assessment System (IAS), to perform calibration, evaluation and analysis. Within the IAS, processing capabilities include radiometric artifact characterization and correction, radiometric calibration from the multiple calibrator sources, inclusion of results from vicarious calibration and statistical trending of calibration data to improve calibration estimation. The Landsat Product Generation System, the portion of the ground system responsible for producing calibrated products, will incorporate the radiometric artifact correction algorithms and will use the calibration information generated by the IAS. This calibration information will also be supplied to ground processing systems throughout the world.

  12. In-situ calibration: migrating control system IP module calibration from the bench to the storage ring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weber, Jonah M.; Chin, Michael

    2002-04-30

    The Control System for the Advanced Light Source (ALS) at Lawrence Berkeley National Lab (LBNL) uses in-house designed IndustryPack(registered trademark) (IP) modules contained in compact PCI (cPCI) crates with 16-bit analog I/O to control instrumentation. To make the IP modules interchangeable, each module is calibrated for gain and offset compensation. We initially developed a method of verifying and calibrating the IP modules in a lab bench test environment using a PC with LabVIEW. The subsequent discovery that the ADCs have significant drift characteristics over periods of days of installed operation prompted development of an "in-situ" calibration process--one in which the IP modules can be calibrated without removing them from the cPCI crates in the storage ring. This paper discusses the original LabVIEW PC calibration and the migration to the proposed in-situ EPICS control system calibration.
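Gain/offset compensation of the kind described here typically reduces to a two-point calibration against known references. The raw ADC codes and reference values below are hypothetical, not figures from the ALS system:

```python
def two_point_calibration(raw_low, raw_high, ref_low, ref_high):
    """Derive gain and offset such that corrected = gain * raw + offset
    maps the two measured raw readings onto the known reference values."""
    gain = (ref_high - ref_low) / (raw_high - raw_low)
    offset = ref_low - gain * raw_low
    return gain, offset
```

For example, if raw codes 100 and 900 correspond to reference inputs of 0.0 V and 8.0 V, the derived gain is 0.01 V/code with an offset of -1.0 V; re-running this in place against on-board references is the essence of an in-situ recalibration.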

  13. Dynamic Pressure Calibration Standard

    NASA Technical Reports Server (NTRS)

    Schutte, P. C.; Cate, K. H.; Young, S. D.

    1986-01-01

    Vibrating columns of fluid used to calibrate transducers. Dynamic pressure calibration standard developed for calibrating flush diaphragm-mounted pressure transducers. Pressures up to 20 kPa (3 psi) accurately generated over frequency range of 50 to 1,800 Hz. System includes two conically shaped aluminum columns: one 5 cm (2 in.) high for low pressures and another 11 cm (4.3 in.) high for higher pressures, each filled with viscous fluid. Each column mounted on armature of vibration exciter, which imparts sinusoidally varying acceleration to fluid column. Signal noise low, and waveform highly dependent on quality of drive signal in vibration exciter.
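For a column driven with sinusoidal acceleration, the peak dynamic pressure at its base is approximately p = ρ·h·a. The sketch below illustrates that relation only; the fluid density used in the usage note is an assumed placeholder, since the abstract does not specify the viscous fill fluid:

```python
def peak_pressure_pa(rho_kg_m3, column_height_m, peak_accel_m_s2):
    """Approximate peak dynamic pressure (Pa) at the base of a fluid
    column of height h driven sinusoidally with peak acceleration a,
    via p = rho * h * a (hydrostatic-like response of the column)."""
    return rho_kg_m3 * column_height_m * peak_accel_m_s2
```

Under this model, a water-like fluid (1000 kg/m³) in the 5 cm column would need roughly 400 m/s² of peak acceleration to reach the quoted 20 kPa limit.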

  14. FTIR Calibration Methods and Issues

    NASA Astrophysics Data System (ADS)

    Perron, Gaetan

    Over the past 10 years, several space-borne FTIR missions were launched for atmospheric research, environmental monitoring and meteorology. One can think of the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) launched by the European Space Agency, the Atmospheric Chemistry Experiment (ACE) launched by the Canadian Space Agency, the Tropospheric Emission Spectrometer (TES) launched by NASA and the Infrared Atmospheric Sounding Interferometer (IASI) launched by Eumetsat in Europe. Others are close to launch, namely the Cross-track Infrared Sounder (CrIS) from the Integrated Program Office in the United States and the Thermal And Near infrared Sensor for carbon Observation (TANSO) from the Japan Aerospace Exploration Agency. Moreover, several missions under definition foresee the use of this technology as sensor, e.g. Meteosat Third Generation (MTG), Eumetsat Polar System (EPS) and the Premier mission, one of the six candidates of the next ESA Earth Explorer Core Mission. In order to produce good-quality products, calibration is essential. Calibrated data is the output of three main sub-systems that are tightly coupled: the instrument, the calibration targets and the level 1B processor. Calibration requirements must be carefully defined and propagated to each sub-system. Often these sub-systems are developed by different parties, which adds to the complexity. Under budget and schedule pressure, some aspects are sometimes neglected, jeopardizing final quality. For space-borne FTIR, level 1B outputs are spectra that are radiometrically and spectrally calibrated and geolocated. Radiometric calibration means to assign an intensity value in physical units to the y-axis. Spectral calibration means to assign to the x-axis the proper frequency value in physical units. Finally, geolocation means assigning the target position on the Earth geoid, i.e. longitude, latitude and altitude. This paper will present calibration methods and issues related to space-borne FTIR missions, e.g. two
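The radiometric step described above is conventionally performed against two reference blackbody views. The sketch below shows that generic two-point scheme on real-valued spectra; it is an illustration of the principle, not the level 1B processing of any mission named here (which typically operates on complex spectra):

```python
import numpy as np

def radiometric_calibrate(s_scene, s_hot, s_cold, b_hot, b_cold):
    """Two-point radiometric calibration of a raw spectrum.

    s_scene, s_hot, s_cold: raw instrument spectra of the scene and of
    the hot/cold blackbody views (arrays over spectral channels).
    b_hot, b_cold: known Planck radiances of the two blackbodies.
    Returns the scene radiance on the blackbody-defined scale."""
    gain = (b_hot - b_cold) / (s_hot - s_cold)
    return (s_scene - s_cold) * gain + b_cold
```

Because the gain and offset are solved per spectral channel, the scheme removes both the detector responsivity and the instrument's own thermal emission from the scene measurement.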

  15. Use of Transportable Radiation Detection Instruments to Assess Internal Contamination from Intakes of Radionuclides Part II: Calibration Factors and ICAT Computer Program.

    PubMed

    Anigstein, Robert; Olsher, Richard H; Loomis, Donald A; Ansari, Armin

    2016-12-01

    The detonation of a radiological dispersion device or other radiological incidents could result in widespread releases of radioactive materials and intakes of radionuclides by affected individuals. Transportable radiation monitoring instruments could be used to measure radiation from gamma-emitting radionuclides in the body for triaging individuals and assigning priorities to their bioassay samples for in vitro assessments. The present study derived sets of calibration factors for four instruments: the Ludlum Model 44-2 gamma scintillator, a survey meter containing a 2.54 × 2.54-cm NaI(Tl) crystal; the Captus 3000 thyroid uptake probe, which contains a 5.08 × 5.08-cm NaI(Tl) crystal; the Transportable Portal Monitor Model TPM-903B, which contains two 3.81 × 7.62 × 182.9-cm polyvinyltoluene plastic scintillators; and a generic instrument, such as an ionization chamber, that measures exposure rates. The calibration factors enable these instruments to be used for assessing inhaled or ingested intakes of any of four radionuclides: 60Co, 131I, 137Cs, and 192Ir. The derivations used biokinetic models embodied in the DCAL computer software system developed by the Oak Ridge National Laboratory and Monte Carlo simulations using the MCNPX radiation transport code. The three physical instruments were represented by MCNP models that were developed previously. The affected individuals comprised children of five ages who were represented by the revised Oak Ridge National Laboratory pediatric phantoms, and adult men and adult women represented by the Adult Reference Computational Phantoms described in Publication 110 of the International Commission on Radiological Protection. These calibration factors can be used to calculate intakes; the intakes can be converted to committed doses by the use of tabulated dose coefficients. These calibration factors also constitute input data to the ICAT computer program, an interactive Microsoft Windows-based software package that estimates intakes of
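
    The reading-to-dose chain described here (instrument reading, divided by a calibration factor, gives an intake; a tabulated dose coefficient converts the intake to a committed dose) can be sketched as follows. This is a minimal illustration; the function names and all numeric values are hypothetical, not calibration factors or dose coefficients from the study.

    ```python
    def estimate_intake(net_count_rate_cps, calibration_factor_cps_per_bq):
        """Intake (Bq) implied by a net instrument reading, given a
        calibration factor in counts per second per Bq of intake."""
        return net_count_rate_cps / calibration_factor_cps_per_bq

    def committed_dose(intake_bq, dose_coefficient_sv_per_bq):
        """Committed effective dose (Sv) from a tabulated dose coefficient."""
        return intake_bq * dose_coefficient_sv_per_bq
    ```

    For example, a net reading of 500 cps with a (hypothetical) calibration factor of 1e-3 cps/Bq would imply an intake of 5e5 Bq, which a dose coefficient then converts to sieverts.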

  16. USE OF TRANSPORTABLE RADIATION DETECTION INSTRUMENTS TO ASSESS INTERNAL CONTAMINATION FROM INTAKES OF RADIONUCLIDES PART II: CALIBRATION FACTORS AND ICAT COMPUTER PROGRAM

    PubMed Central

    Anigstein, Robert; Olsher, Richard H.; Loomis, Donald A.; Ansari, Armin

    2017-01-01

    The detonation of a radiological dispersion device or other radiological incidents could result in widespread releases of radioactive materials and intakes of radionuclides by affected individuals. Transportable radiation monitoring instruments could be used to measure radiation from gamma-emitting radionuclides in the body for triaging individuals and assigning priorities to their bioassay samples for in vitro assessments. The present study derived sets of calibration factors for four instruments: the Ludlum Model 44-2 gamma scintillator, a survey meter containing a 2.54 × 2.54-cm NaI(Tl) crystal; the Captus 3000 thyroid uptake probe, which contains a 5.08 × 5.08-cm NaI(Tl) crystal; the Transportable Portal Monitor Model TPM-903B, which contains two 3.81 × 7.62 × 182.9-cm polyvinyltoluene plastic scintillators; and a generic instrument, such as an ionization chamber, that measures exposure rates. The calibration factors enable these instruments to be used for assessing inhaled or ingested intakes of any of four radionuclides: 60Co, 131I, 137Cs, and 192Ir. The derivations used biokinetic models embodied in the DCAL computer software system developed by the Oak Ridge National Laboratory and Monte Carlo simulations using the MCNPX radiation transport code. The three physical instruments were represented by MCNP models that were developed previously. The affected individuals comprised children of five ages who were represented by the revised Oak Ridge National Laboratory pediatric phantoms, and adult men and adult women represented by the Adult Reference Computational Phantoms described in Publication 110 of the International Commission on Radiological Protection. These calibration factors can be used to calculate intakes; the intakes can be converted to committed doses by the use of tabulated dose coefficients. These calibration factors also constitute input data to the ICAT computer program, an interactive Microsoft Windows-based software package that estimates

  17. Recent Infrasound Calibration Activity at Los Alamos

    NASA Astrophysics Data System (ADS)

    Whitaker, R. W.; Marcillo, O. E.

    2014-12-01

    Absolute infrasound sensor calibration is necessary for estimating source sizes from measured waveforms. This can be an important function in treaty monitoring. The Los Alamos infrasound calibration chamber is capable of absolute calibration. Early in 2014 the Los Alamos infrasound calibration chamber resumed operations in its new location after an unplanned move two years earlier. The chamber has two sources of calibration signals. The first is the original mechanical piston, and the second is a CLD Dynamics Model 316 electro-mechanical unit that can be digitally controlled and provide a richer set of calibration options. During 2008-2010 a number of upgrades were incorporated for improved operation and recording. In this poster we give an overview of recent chamber work on sensor calibrations, calibration with the CLD unit, some measurements with different porous hoses and work with impulse sources.

  18. NASA Data Acquisition System Software Development for Rocket Propulsion Test Facilities

    NASA Technical Reports Server (NTRS)

    Herbert, Phillip W., Sr.; Elliot, Alex C.; Graves, Andrew R.

    2015-01-01

    Current NASA propulsion test facilities include Stennis Space Center in Mississippi, Marshall Space Flight Center in Alabama, Plum Brook Station in Ohio, and White Sands Test Facility in New Mexico. Within and across these centers, a diverse set of data acquisition systems exists with different hardware and software platforms. The NASA Data Acquisition System (NDAS) is a software suite designed to operate and control many critical aspects of rocket engine testing. The software suite combines real-time data visualization, data recording to a variety of formats, short-term and long-term acquisition system calibration capabilities, test stand configuration control, and a variety of data post-processing capabilities. Additionally, data stream conversion functions exist to translate test facility data streams to and from downstream systems, including engine customer systems. The primary design goals for NDAS are flexibility, extensibility, and modularity. Providing a common user interface for a variety of hardware platforms helps drive consistency and error reduction during testing. In addition, with an understanding that test facilities have different requirements and setups, the software is designed to be modular. One engine program may require real-time displays and data recording; others may require more complex data stream conversion, measurement filtering, or test stand configuration management. The NDAS suite allows test facilities to choose which components to use based on their specific needs. The NDAS code is primarily written in LabVIEW, a graphical, data-flow driven language. Although LabVIEW is a general-purpose programming language, large-scale software development in it is relatively rare compared to more commonly used languages. The NDAS software suite also makes extensive use of a new, advanced development framework called the Actor Framework.
The Actor Framework provides a level of code reuse and extensibility that has previously been difficult

  19. Radiation calibration for LWIR Hyperspectral Imager Spectrometer

    NASA Astrophysics Data System (ADS)

    Yang, Zhixiong; Yu, Chunchao; Zheng, Wei-jian; Lei, Zhenggang; Yan, Min; Yuan, Xiaochun; Zhang, Peizhong

    2014-11-01

    The radiometric calibration of an LWIR hyperspectral imaging spectrometer is presented. An LWIR interferometric hyperspectral imaging spectrometer prototype (CHIPED-I) was developed to study laboratory radiometric calibration. Two-point linear calibration of the spectrometer is carried out using blackbody sources at two temperatures. First, the measured relative intensity is converted to the absolute radiance of the object; then, the radiance is converted to a brightness-temperature spectrum by the brightness-temperature method. The results indicate that this calibration method performs well.
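
    The two-point linear step can be sketched as follows: views of two blackbodies at known radiances fix a per-channel gain and offset, which then convert raw scene counts to absolute radiance. A minimal sketch; the function names are illustrative and not from the CHIPED-I processing chain.

    ```python
    import numpy as np

    def two_point_calibration(counts_hot, counts_cold, L_hot, L_cold):
        """Derive per-channel gain and offset from two blackbody views.

        counts_* -- raw detector counts viewing the hot/cold blackbody
        L_*      -- known blackbody spectral radiances at those temperatures
        """
        gain = (L_hot - L_cold) / (counts_hot - counts_cold)
        offset = L_cold - gain * counts_cold
        return gain, offset

    def calibrate(counts_scene, gain, offset):
        """Convert raw scene counts to absolute radiance."""
        return gain * counts_scene + offset
    ```

    With per-channel count and radiance arrays, the same expressions apply element-wise across the spectrum.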

  20. Calibration sets selection strategy for the construction of robust PLS models for prediction of biodiesel/diesel blends physico-chemical properties using NIR spectroscopy

    NASA Astrophysics Data System (ADS)

    Palou, Anna; Miró, Aira; Blanco, Marcelo; Larraz, Rafael; Gómez, José Francisco; Martínez, Teresa; González, Josep Maria; Alcalà, Manel

    2017-06-01

    Even though the feasibility of using near infrared (NIR) spectroscopy combined with partial least squares (PLS) regression to predict physico-chemical properties of biodiesel/diesel blends has been widely demonstrated, capturing in the calibration sets the whole variability of diesel samples from diverse production origins remains an important challenge when constructing the models. This work presents a useful strategy for the systematic selection of calibration sets of biodiesel/diesel blend samples from diverse origins, based on a binary code, principal component analysis (PCA) and the Kennard-Stone algorithm. Results show that using this methodology the models keep their robustness over time. PLS calculations were performed both with specialized chemometric software and with the software of the NIR instrument installed in the plant, and both produced RMSEP values below the reproducibility of the reference methods. The models have been proven in on-line simultaneous determination of seven properties: density, cetane index, fatty acid methyl esters (FAME) content, cloud point, boiling point at 95% recovery, flash point and sulphur content.
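
    The Kennard-Stone step selects a representative calibration subset by a max-min distance criterion: start from the two mutually most distant samples, then repeatedly add the sample whose minimum distance to the already selected set is largest. A minimal sketch, applied here directly to a sample matrix (the paper combines the algorithm with a binary code and PCA scores; that pre-processing is omitted as an assumption):

    ```python
    import numpy as np

    def kennard_stone(X, n_select):
        """Return indices of n_select rows of X chosen by Kennard-Stone.

        X would typically hold PCA scores of the NIR spectra; Euclidean
        distances are used here.
        """
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        # seed with the two most distant samples
        i, j = np.unravel_index(np.argmax(d), d.shape)
        selected = [int(i), int(j)]
        while len(selected) < n_select:
            remaining = [k for k in range(len(X)) if k not in selected]
            # for each candidate, distance to its nearest selected sample
            min_d = d[np.ix_(remaining, selected)].min(axis=1)
            selected.append(remaining[int(np.argmax(min_d))])
        return selected
    ```

    The unselected samples form a natural validation set, since by construction they lie inside the region spanned by the calibration set.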

  1. Tool calibration system for micromachining system

    DOEpatents

    Miller, Donald M.

    1979-03-06

    A tool calibration system including a tool calibration fixture and a tool height and offset calibration insert for calibrating the position of a tool bit in a micromachining tool system. The tool calibration fixture comprises a yokelike structure having a triangular head, a cavity in the triangular head, and a port which communicates a side of the triangular head with the cavity. Yoke arms integral with the triangular head extend along each side of a tool bar and a tool head of the micromachining tool system. The yoke arms are secured to the tool bar to place the cavity around a tool bit which may be mounted to the end of the tool head. Three linear variable differential transformers (LVDTs) are adjustably mounted in the triangular head along an X axis, a Y axis, and a Z axis. The calibration insert comprises a main base which can be mounted in the tool head of the micromachining tool system in place of a tool holder and a reference projection extending from a front surface of the main base. Reference surfaces of the calibration insert and a reference surface on a tool bar standard length are used to set the three LVDTs of the calibration fixture to the tool reference position. These positions are transferred permanently to a mastering station. The tool calibration fixture is then used to transfer the tool reference position of the mastering station to the tool bit.

  2. Synthetic aperture imaging in ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Jayaranthe, Uditha L.; Chen, Elvis C. S.; Peters, Terry M.

    2014-03-01

    Ultrasound calibration allows for ultrasound images to be incorporated into a variety of interventional applications. Traditional Z-bar calibration procedures rely on wired phantoms with an a priori known geometry. The line fiducials produce small, localized echoes which are then segmented from an array of ultrasound images from different tracked probe positions. In conventional B-mode ultrasound, the wires at greater depths appear blurred and are difficult to segment accurately, limiting the accuracy of ultrasound calibration. This paper presents a novel ultrasound calibration procedure that takes advantage of synthetic aperture imaging to reconstruct high resolution ultrasound images at arbitrary depths. In these images, line fiducials are much more readily and accurately segmented, leading to decreased calibration error. The proposed calibration technique is compared to one based on B-mode ultrasound. The fiducial localization error was improved from 0.21 mm in conventional B-mode images to 0.15 mm in synthetic aperture images, corresponding to an improvement of 29%. This resulted in an overall reduction of calibration error from a target registration error of 2.00 mm to 1.78 mm, an improvement of 11%. Synthetic aperture images display greatly improved segmentation capabilities due to their improved resolution and interpretability, resulting in improved calibration.

  3. Error-in-variables models in calibration

    NASA Astrophysics Data System (ADS)

    Lira, I.; Grientschnig, D.

    2017-12-01

    In many calibration operations, the stimuli applied to the measuring system or instrument under test are derived from measurement standards whose values may be considered to be perfectly known. In that case, it is assumed that calibration uncertainty arises solely from inexact measurement of the responses, from imperfect control of the calibration process and from the possible inaccuracy of the calibration model. However, the premise that the stimuli are completely known is never strictly fulfilled and in some instances it may be grossly inadequate. Then, error-in-variables (EIV) regression models have to be employed. In metrology, these models have been approached mostly from the frequentist perspective. In contrast, not much guidance is available on their Bayesian analysis. In this paper, we first present a brief summary of the conventional statistical techniques that have been developed to deal with EIV models in calibration. We then proceed to discuss the alternative Bayesian framework under some simplifying assumptions. Through a detailed example about the calibration of an instrument for measuring flow rates, we provide advice on how the user of the calibration function should employ the latter framework for inferring the stimulus acting on the calibrated device when, in use, a certain response is measured.
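
    As a concrete instance of the conventional (frequentist) EIV techniques summarized above, Deming regression fits a straight-line calibration function when both stimulus and response are error-affected. A minimal sketch, assuming a known ratio `delta` of response-error variance to stimulus-error variance; the closed form below is the standard maximum-likelihood solution for that model.

    ```python
    import numpy as np

    def deming_fit(x, y, delta=1.0):
        """Errors-in-variables straight-line fit (Deming regression).

        x, y  -- stimulus and response measurements, both error-affected
        delta -- assumed ratio of response-error variance to stimulus-error
                 variance; delta -> infinity recovers ordinary least squares
        """
        mx, my = x.mean(), y.mean()
        sxx = np.mean((x - mx) ** 2)
        syy = np.mean((y - my) ** 2)
        sxy = np.mean((x - mx) * (y - my))  # assumed nonzero
        slope = (syy - delta * sxx
                 + np.sqrt((syy - delta * sxx) ** 2
                           + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
        intercept = my - slope * mx
        return slope, intercept
    ```

    In use, inverting the fitted line gives the stimulus inferred from a measured response; the Bayesian treatment discussed in the paper addresses the uncertainty of that inversion.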

  4. Commodity-Free Calibration

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Commodity-free calibration is a reaction rate calibration technique that does not require the addition of any commodities. This technique is a specific form of the reaction rate technique, where all of the necessary reactants, other than the sample being analyzed, are either inherent in the analyzing system or specifically added or provided to the system for a reason other than calibration. After introduction, the component of interest is exposed to other reactants or flow paths already present in the system. The instrument detector records one of the following to determine the rate of reaction: the increase in the response of the reaction product, a decrease in the signal of the analyte response, or a decrease in the signal from the inherent reactant. With these data, the initial concentration of the analyte is calculated. This type of system can analyze and calibrate simultaneously, reduce the risk of false positives and exposure to toxic vapors, and improve accuracy. Moreover, having an excess of the reactant already present in the system eliminates the need to add commodities, which further reduces cost, logistic problems, and potential contamination. Also, the calculations involved can be simplified by comparison to those of the reaction rate technique. We conducted tests with hypergols as an initial investigation into the feasibility of the technique.

  5. Taking a look at the calibration of a CCD detector with a fiber-optic taper

    PubMed Central

    Alkire, R. W.; Rotella, F. J.; Duke, N. E. C.; Otwinowski, Zbyszek; Borek, Dominika

    2016-01-01

    At the Structural Biology Center beamline 19BM, located at the Advanced Photon Source, the operational characteristics of the equipment are routinely checked to ensure they are in proper working order. After performing a partial flat-field calibration for the ADSC Quantum 210r CCD detector, it was confirmed that the detector operates within specifications. However, as a secondary check it was decided to scan a single reflection across one-half of a detector module to validate the accuracy of the calibration. The intensities from this single reflection varied by more than 30% from the module center to the corner of the module. Redistribution of light within bent fibers of the fiber-optic taper was identified to be a source of this variation. The degree to which the diffraction intensities are corrected to account for characteristics of the fiber-optic tapers depends primarily upon the experimental strategy of data collection, approximations made by the data processing software during scaling, and crystal symmetry. PMID:27047303

  6. Calibration Errors in Interferometric Radio Polarimetry

    NASA Astrophysics Data System (ADS)

    Hales, Christopher A.

    2017-08-01

    Residual calibration errors are difficult to predict in interferometric radio polarimetry because they depend on the observational calibration strategy employed, encompassing the Stokes vector of the calibrator and parallactic angle coverage. This work presents analytic derivations and simulations that enable examination of residual on-axis instrumental leakage and position-angle errors for a suite of calibration strategies. The focus is on arrays comprising alt-azimuth antennas with common feeds over which parallactic angle is approximately uniform. The results indicate that calibration schemes requiring parallactic angle coverage in the linear feed basis (e.g., the Atacama Large Millimeter/submillimeter Array) need only observe over 30°, beyond which no significant improvements in calibration accuracy are obtained. In the circular feed basis (e.g., the Very Large Array above 1 GHz), 30° is also appropriate when the Stokes vector of the leakage calibrator is known a priori, but this rises to 90° when the Stokes vector is unknown. These findings illustrate and quantify concepts that were previously obscure rules of thumb.

  7. Mexican national pyranometer network calibration

    NASA Astrophysics Data System (ADS)

    Valdes, M.; Villarreal, L.; Estevez, H.; Riveros, D.

    2013-12-01

    In order to take advantage of solar radiation as an alternative energy source it is necessary to evaluate its spatial and temporal availability. The Mexican National Meteorological Service (SMN) has a network of 136 meteorological stations, each coupled with a pyranometer for measuring global solar radiation. Some of these stations had not been calibrated in several years. The Mexican Department of Energy (SENER), in order to obtain a reliable evaluation of the solar resource, funded this project to calibrate the SMN pyranometer network and validate the data. Calibration of the 136 pyranometers by the intercomparison method recommended by the World Meteorological Organization (WMO) requires lengthy observations and specific environmental conditions such as clear skies and a stable atmosphere, circumstances that determine the site and season of the calibration. The Solar Radiation Section of the Instituto de Geofísica of the Universidad Nacional Autónoma de México is a Regional Center of the WMO and is certified to carry out the calibration procedures and issue certificates. We are responsible for the recalibration of the pyranometer network of the SMN. A continuous-emission solar simulator with an exposed area 30 cm in diameter was acquired to reduce the calibration time and remove the dependence on atmospheric conditions. We present the results of the calibration of 10 thermopile pyranometers and one photovoltaic cell by the intercomparison method, with more than 10000 observations each, and those obtained with the solar simulator.
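
    At its core, the intercomparison method ratios simultaneous readings of the test pyranometer against the irradiance reported by a calibrated reference, averaging over many observations. A minimal sketch; the function name and units are illustrative, and real procedures add zenith-angle and stability screening.

    ```python
    import numpy as np

    def responsivity(test_signal_mV, reference_irradiance_Wm2):
        """Pyranometer responsivity (mV per W m^-2) from simultaneous
        readings against a calibrated reference instrument.

        Returns the mean per-observation ratio and its sample standard
        deviation over the intercomparison series.
        """
        r = np.asarray(test_signal_mV) / np.asarray(reference_irradiance_Wm2)
        return r.mean(), r.std(ddof=1)
    ```

    Dividing subsequent field readings (mV) by the mean responsivity then yields irradiance in W m^-2.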

  8. Effects of energy supplementation on energy losses and nitrogen balance of steers fed green-chopped wheat pasture I. Calorimetry

    USDA-ARS?s Scientific Manuscript database

    Providing an energy supplement to cattle grazing high-quality wheat pasture can increase average daily gain; however the effects on greenhouse gas emissions are not known. Therefore we used 10 British cross-bred steers (initial weight: 206 ± 10.7 kg) in a respiration calorimetry study to evaluate t...

  9. Use of scanning calorimetry and microrespiration to determine effects of Bt toxin doses on Pandemis leafroller (Lepidoptera: Tortricidae) metabolism

    USDA-ARS?s Scientific Manuscript database

    Differential scanning calorimetry and microrespiration were used to determine the effects of the biopesticide, Bt toxin, on the metabolism of infected Pandemis leafroller, Pandemis purusana (Kearfott). The metabolic heat rate, CO2 evolution, O2 consumption of 2nd and 3rd instars following a 2 h expo...

  10. Wildlife software: procedures for publication of computer software

    USGS Publications Warehouse

    Samuel, M.D.

    1990-01-01

    Computers and computer software have become an integral part of the practice of wildlife science. Computers now play an important role in teaching, research, and management applications. Because of the specialized nature of wildlife problems, specific computer software is usually required to address a given problem (e.g., home range analysis). This type of software is not usually available from commercial vendors and therefore must be developed by those wildlife professionals with particular skill in computer programming. Current journal publication practices generally prevent a detailed description of computer software associated with new techniques. In addition, peer review of journal articles does not usually include a review of associated computer software. Thus, many wildlife professionals are usually unaware of computer software that would meet their needs or of major improvements in software they commonly use. Indeed most users of wildlife software learn of new programs or important changes only by word of mouth.

  11. Radiometric calibration updates to the Landsat collection

    USGS Publications Warehouse

    Micijevic, Esad; Haque, Md. Obaidul; Mishra, Nischal

    2016-01-01

    The Landsat Project is planning to implement a new collection management strategy for Landsat products generated at the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center. The goal of the initiative is to identify a collection of consistently geolocated and radiometrically calibrated images across the entire Landsat archive that is readily suitable for time-series analyses. In order to perform an accurate land change analysis, the data from all Landsat sensors must be on the same radiometric scale. Landsat 7 Enhanced Thematic Mapper Plus (ETM+) is calibrated to a radiance standard and all previous sensors are cross-calibrated to its radiometric scale. Landsat 8 Operational Land Imager (OLI) is calibrated to both radiance and reflectance standards independently. The Landsat 8 OLI reflectance calibration is considered to be most accurate. To improve radiometric calibration accuracy of historical data, Landsat 1-7 sensors also need to be cross-calibrated to the OLI reflectance scale. Results of that effort, as well as other calibration updates including the absolute and relative radiometric calibration and saturated pixel replacement for Landsat 8 OLI and absolute calibration for Landsat 4 and 5 Thematic Mappers (TM), will be implemented into Landsat products during the archive reprocessing campaign planned within the new collection management strategy. This paper reports on the planned radiometric calibration updates to the solar reflective bands of the new Landsat collection.

  12. SWAT: Model use, calibration, and validation

    USDA-ARS?s Scientific Manuscript database

    SWAT (Soil and Water Assessment Tool) is a comprehensive, semi-distributed river basin model that requires a large number of input parameters which complicates model parameterization and calibration. Several calibration techniques have been developed for SWAT including manual calibration procedures...

  13. Calibration-free optical chemical sensors

    DOEpatents

    DeGrandpre, Michael D.

    2006-04-11

    An apparatus and method for taking absorbance-based chemical measurements are described. In a specific embodiment, an indicator-based pCO2 (partial pressure of CO2) sensor displays sensor-to-sensor reproducibility and measurement stability. These qualities are achieved by: 1) renewing the sensing solution, 2) allowing the sensing solution to reach equilibrium with the analyte, and 3) calculating the response from a ratio of the indicator solution absorbances which are determined relative to a blank solution. Careful solution preparation, wavelength calibration, and stray light rejection also contribute to this calibration-free system. Three pCO2 sensors were calibrated and each had response curves which were essentially identical within the uncertainty of the calibration. Long-term laboratory and field studies showed the response had no drift over extended periods (months). The theoretical response, determined from thermodynamic characterization of the indicator solution, also predicted the observed calibration-free performance.
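
    Step 3 above computes the response from a ratio of indicator absorbances, each measured relative to a blank solution. A minimal sketch of that arithmetic; the function names are illustrative, and the mapping from the ratio to pCO2 via the indicator's thermodynamic characterization is omitted.

    ```python
    import numpy as np

    def absorbance(intensity, blank_intensity):
        """Absorbance relative to a blank: A = -log10(I / I_blank)."""
        return -np.log10(intensity / blank_intensity)

    def absorbance_ratio(i1, i2, blank1, blank2):
        """Ratio of indicator absorbances at two wavelengths, each taken
        relative to the blank, as the calibration-free response quantity."""
        return absorbance(i1, blank1) / absorbance(i2, blank2)
    ```

    Because both absorbances share the same optical path and indicator renewal, instrument-specific factors largely cancel in the ratio, which is what makes the sensor-to-sensor reproducibility possible.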

  14. High-accuracy Aspheric X-ray Mirror Metrology Using Software Configurable Optical Test System/deflectometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Run; Su, Peng; Burge, James H.

    The Software Configurable Optical Test System (SCOTS) uses deflectometry to measure surface slopes of general optical shapes without the need for additional null optics. Careful alignment of the test geometry and calibration of inherent system error improve the accuracy of SCOTS to a level where it competes with interferometry. We report a SCOTS surface measurement of an off-axis superpolished elliptical x-ray mirror that achieves <1 nm root-mean-square accuracy for the surface measurement with low-order terms included.

  15. Internal Water Vapor Photoacoustic Calibration

    NASA Technical Reports Server (NTRS)

    Pilgrim, Jeffrey S.

    2009-01-01

    Water vapor absorption is ubiquitous in the infrared wavelength range where photoacoustic trace-gas detectors operate, so it can serve as an internal calibration reference. The technique allows for discontinuous wavelength tuning by temperature-jumping a laser diode from one range to another within a time span suitable for photoacoustic calibration. The use of an internal calibration eliminates the need for external calibrated reference gases. Commercial applications include improved photoacoustic spectrometers in all fields of use.

  16. Automatic colorimetric calibration of human wounds

    PubMed Central

    2010-01-01

    Background Recently, digital photography in medicine is considered an acceptable tool in many clinical domains, e.g. wound care. Although ever higher resolutions are available, reproducibility is still poor and visual comparison of images remains difficult. This is even more the case for measurements performed on such images (colour, area, etc.). This problem is often neglected and images are freely compared and exchanged without further thought. Methods The first experiment checked whether camera settings or lighting conditions could negatively affect the quality of colorimetric calibration. Digital images plus a calibration chart were exposed to a variety of conditions. Precision and accuracy of colours after calibration were quantitatively assessed with a probability distribution for perceptual colour differences (dE_ab). The second experiment was designed to assess the impact of the automatic calibration procedure (i.e. chart detection) on real-world measurements. 40 different images of real wounds were acquired and a region of interest was selected in each image. 3 rotated versions of each image were automatically calibrated and colour differences were calculated. Results 1st experiment: colour differences between the measurements and real spectrophotometric measurements reveal median dE_ab values of 6.40 for the proper patches of calibrated normal images and 17.75 for uncalibrated images, demonstrating an important improvement in accuracy after calibration. The reproducibility, visualized by the probability distribution of the dE_ab errors between 2 measurements of the patches of the images, has a median of 3.43 dE_ab for all calibrated images and 23.26 dE_ab for all uncalibrated images. If we restrict ourselves to the proper patches of normal calibrated images the median is only 2.58 dE_ab. Wilcoxon rank-sum testing (p < 0.05) between uncalibrated normal images and calibrated normal images with proper squares were equal to 0 demonstrating a highly
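
    The dE_ab figures quoted here are CIE76 colour differences, i.e. Euclidean distances in CIELAB space. A minimal sketch of that metric, together with a simple least-squares chart-based correction; the affine model is an assumed first-order illustration, not necessarily the paper's exact calibration procedure.

    ```python
    import numpy as np

    def delta_e_ab(lab1, lab2):
        """CIE76 colour difference dE_ab: Euclidean distance in CIELAB."""
        return float(np.linalg.norm(np.asarray(lab1, float)
                                    - np.asarray(lab2, float)))

    def fit_colour_correction(measured, reference):
        """Least-squares 4x3 affine correction mapping camera values of the
        chart patches onto their reference values (hypothetical model)."""
        measured = np.asarray(measured, float)
        A = np.hstack([measured, np.ones((len(measured), 1))])
        M, *_ = np.linalg.lstsq(A, np.asarray(reference, float), rcond=None)
        return M  # apply to a pixel p with: np.append(p, 1.0) @ M
    ```

    After correction, comparing wound colours across images reduces to comparing dE_ab values, which is what the reproducibility distributions above quantify.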

  17. MODIS airborne simulator visible and near-infrared calibration, 1991 FIRE-Cirrus field experiment. Calibration version: FIRE King 1.1

    NASA Technical Reports Server (NTRS)

    Arnold, G. Thomas; Fitzgerald, Michael; Grant, Patrick S.; King, Michael D.

    1994-01-01

    Calibration of the visible and near-infrared channels of the MODIS Airborne Simulator (MAS) is derived from observations of a calibrated light source. For the 1991 FIRE-Cirrus field experiment, the calibrated light source was the NASA Goddard 48-inch integrating hemisphere. Laboratory tests during the FIRE-Cirrus field experiment were conducted to calibrate the hemisphere and to transfer the calibration from the hemisphere to the MAS. The purpose of this report is to summarize the FIRE-Cirrus hemisphere calibration, and then describe how the MAS was calibrated from observations of the hemisphere data. All MAS calibration measurements are presented, and determination of the MAS calibration coefficients (raw counts to radiance conversion) is discussed. Thermal sensitivity of the MAS visible and near-infrared calibration is also discussed. Typically, the MAS in-flight is 30 to 60 degrees C colder than during the room temperature laboratory calibration. Results from in-flight temperature measurements and tests of the MAS in a cold chamber are given, and from these, equations are derived to adjust the MAS in-flight data to what the value would be at laboratory conditions. For FIRE-Cirrus data, only channels 3 through 6 were found to be temperature sensitive. The final section of this report describes comparisons to an independent MAS (room temperature) calibration by Ames personnel using their 30-inch integrating sphere.

  18. Space Flight Software Development Software for Intelligent System Health Management

    NASA Technical Reports Server (NTRS)

    Trevino, Luis C.; Crumbley, Tim

    2004-01-01

    The slide presentation examines the Marshall Space Flight Center Flight Software Branch, including software development projects, mission critical space flight software development, software technical insight, advanced software development technologies, and continuous improvement in the software development processes and methods.

  19. Calibration of a polarimetric imaging SAR

    NASA Technical Reports Server (NTRS)

    Sarabandi, K.; Pierce, L. E.; Ulaby, F. T.

    1991-01-01

    Calibration of polarimetric imaging Synthetic Aperture Radars (SARs) using point calibration targets is discussed. The four-port network calibration technique is used to describe the radar error model. The polarimetric ambiguity function of the SAR is then found using a single point target, namely a trihedral corner reflector. Based on this, an estimate for the backscattering coefficient of the terrain is found by a deconvolution process. A radar image taken by the JPL Airborne SAR (AIRSAR) is used for verification of the deconvolution calibration method. The calibrated responses of point targets in the image are compared both with theory and with the POLCAL technique. Also, the responses of a distributed target are compared using the deconvolution and POLCAL techniques.
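    The zeroth-order idea behind point-target polarimetric calibration can be sketched with distortion matrices. This is only the simplest matrix form of the correction; the paper's method generalizes it to a deconvolution with the SAR's polarimetric ambiguity function, which this sketch does not attempt. The matrices in the example are invented.

    ```python
    import numpy as np

    def correct_scattering(measured, receive_distortion, transmit_distortion):
        """If the measured 2x2 scattering matrix is M = R S T, where R and T
        are receive/transmit distortion matrices estimated from a reference
        target such as a trihedral corner reflector, recover the true
        scattering matrix as S = R^-1 M T^-1."""
        return (np.linalg.inv(receive_distortion)
                @ measured
                @ np.linalg.inv(transmit_distortion))

    # Hypothetical distortion matrices and a dihedral-like true response.
    R = np.array([[1.0, 0.1], [0.05, 0.9]])
    T = np.array([[0.95, 0.02], [0.1, 1.05]])
    S_true = np.array([[1.0, 0.0], [0.0, -1.0]])
    M = R @ S_true @ T            # what the uncalibrated radar would report
    S_est = correct_scattering(M, R, T)
    ```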

  20. e-Calibrations: using the Internet to deliver calibration services in real time at lower cost

    NASA Astrophysics Data System (ADS)

    Desrosiers, Marc; Nagy, Vitaly; Puhl, James; Glenn, Robert; Densock, Robert; Stieren, David; Lang, Brian; Kamlowski, Andreas; Maier, Diether; Heiss, Arthur

    2002-03-01

    The National Institute of Standards and Technology (NIST) is expanding into a new frontier in the delivery of measurement services. The Internet will be employed to provide industry with electronic traceability to national standards. This is a radical departure from the traditional modes of traceability and presents many new challenges. The traditional mail-based calibration service relies on sending artifacts to the user, who then mails them back to NIST for evaluation. The new service will deliver calibration results to the industry customer on demand, in real time, and at lower cost. The calibration results can be incorporated rapidly into the production process to ensure the highest-quality manufacturing. The service would provide the US radiation processing industry with a direct link to the NIST calibration facilities and their expertise, and provide an interactive feedback process between industrial processing and the national measurement standard. Moreover, an Internet calibration system should contribute to the removal of measurement-related trade barriers.

  1. DIRBE External Calibrator (DEC)

    NASA Technical Reports Server (NTRS)

    Wyatt, Clair L.; Thurgood, V. Alan; Allred, Glenn D.

    1987-01-01

    Under NASA Contract No. NAS5-28185, the Center for Space Engineering at Utah State University has produced a calibration instrument for the Diffuse Infrared Background Experiment (DIRBE). DIRBE is one of the instruments aboard the Cosmic Background Explorer (COBE). The calibration instrument is referred to as the DEC (DIRBE External Calibrator). DEC produces a steerable infrared beam of controlled spectral content and intensity, with selectable point-source or diffuse-source characteristics, that can be directed into the DIRBE to map fields and determine response characteristics. This report discusses the design of the DEC instrument, its operation and characteristics, and provides an analysis of the system's capabilities and performance.

  2. Simplified stereo-optical ultrasound plane calibration

    NASA Astrophysics Data System (ADS)

    Hoßbach, Martin; Noll, Matthias; Wesarg, Stefan

    2013-03-01

    Image-guided therapy is a natural concept and is commonly used in medicine. In anesthesia, a common task is the injection of an anesthetic close to a nerve under freehand ultrasound guidance. Several guidance systems exist using electromagnetic tracking of the ultrasound probe as well as the needle, providing the physician with a precise projection of the needle into the ultrasound image. This, however, requires additional expensive devices. We suggest using optical tracking with miniature cameras attached to a 2D ultrasound probe to achieve a higher acceptance among physicians. The purpose of this paper is to present an intuitive method to calibrate freehand ultrasound needle guidance systems employing a rigid stereo camera system. State-of-the-art methods are based on a complex series of error-prone coordinate system transformations, which makes them susceptible to error accumulation. By reducing the amount of calibration steps to a single calibration procedure, we provide a calibration method that is equivalent, yet not prone to error accumulation. It requires a linear calibration object and is validated on three datasets utilizing different calibration objects: a 6 mm metal bar and a 1.25 mm biopsy needle were used for the experiments. Compared to existing calibration methods for freehand ultrasound needle guidance systems, we are able to achieve higher-accuracy results while additionally reducing the overall calibration complexity.

  3. Van’t Hoff global analyses of variable temperature isothermal titration calorimetry data

    PubMed Central

    Freiburger, Lee A.; Auclair, Karine; Mittermaier, Anthony K.

    2016-01-01

    Isothermal titration calorimetry (ITC) can provide detailed information on the thermodynamics of biomolecular interactions in the form of equilibrium constants, KA, and enthalpy changes, ΔHA. A powerful application of this technique involves analyzing the temperature dependences of ITC-derived KA and ΔHA values to gain insight into thermodynamic linkage between binding and additional equilibria, such as protein folding. We recently developed a general method for global analysis of variable temperature ITC data that significantly improves the accuracy of extracted thermodynamic parameters and requires no prior knowledge of the coupled equilibria. Here we report detailed validation of this method using Monte Carlo simulations and an application to study coupled folding and binding in an aminoglycoside acetyltransferase enzyme. PMID:28018008
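    The temperature dependence such analyses fit can be illustrated with the integrated van't Hoff relation allowing a constant heat-capacity change, a common starting point for global fits of KA(T) and ΔHA(T); the authors' method generalizes beyond this simple form, and the parameter values below are invented for the check.

    ```python
    import math

    R = 8.314  # gas constant, J mol^-1 K^-1

    def lnK(T, lnK0, dH0, dCp, T0=298.15):
        """Integrated van't Hoff relation with a constant heat-capacity
        change dCp: returns ln K(T) given ln K at the reference
        temperature T0 and the enthalpy change dH0 at T0."""
        return (lnK0
                - (dH0 - dCp * T0) / R * (1.0 / T - 1.0 / T0)
                + dCp / R * math.log(T / T0))
    ```

    At T = T0 the expression collapses to lnK0, which is a convenient sanity check when implementing it; fitting lnK0, dH0, and dCp globally across all temperatures, rather than per-isotherm, is what improves the accuracy of the extracted parameters.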

  4. Software attribute visualization for high integrity software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pollock, G.M.

    1998-03-01

    This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification.

  5. Simulating the behavior of volatiles belonging to the C-O-H-S system in silicate melts under magmatic conditions with the software D-Compress

    NASA Astrophysics Data System (ADS)

    Burgisser, Alain; Alletti, Marina; Scaillet, Bruno

    2015-06-01

    Modeling magmatic degassing, or how the volatile distribution between gas and melt changes as pressure varies, is a complex task that involves a large number of thermodynamical relationships and that requires dedicated software. This article presents the software D-Compress, which computes the gas and melt volatile composition of five element sets in magmatic systems (O-H, S-O-H, C-S-O-H, C-S-O-H-Fe, and C-O-H). It has been calibrated so as to simulate the volatiles coexisting with three common types of silicate melts (basalt, phonolite, and rhyolite). Operational temperatures depend on melt composition and range from 790 to 1400 °C. A specificity of D-Compress is the calculation of volatile composition as pressure varies along a (de)compression path between atmospheric pressure and 3000 bars. This software was prepared so as to maximize versatility by proposing different sets of input parameters. In particular, whenever new solubility laws on specific melt compositions are available, the model parameters can be easily tuned to run the code on that composition. Parameter gaps were minimized by including sets of chemical species for which calibration data were available over a wide range of pressure, temperature, and melt composition. A brief description of the model rationale is followed by the presentation of the software capabilities. Examples of use are then presented with output comparisons between D-Compress and other currently available thermodynamical models. The compiled software and the source code are available as electronic supplementary materials.

  6. MODIS In-flight Calibration Methodologies

    NASA Technical Reports Server (NTRS)

    Xiong, X.; Barnes, W.

    2004-01-01

    MODIS is a key instrument for NASA's Earth Observing System (EOS), currently operating on the Terra spacecraft launched in December 1999 and the Aqua spacecraft launched in May 2002. It is a cross-track scanning radiometer, making measurements over a wide field of view in 36 spectral bands with wavelengths from 0.41 to 14.5 micrometers and providing calibrated data products for science and research communities in their studies of the Earth's system of land, oceans, and atmosphere. A complete suite of on-board calibrators (OBC) has been designed for the instrument's in-flight calibration and characterization, including a solar diffuser (SD) and solar diffuser stability monitor (SDSM) system for the radiometric calibration of the 20 reflective solar bands (RSB), a blackbody (BB) for the radiometric calibration of the 16 thermal emissive bands (TEB), and a spectro-radiometric calibration assembly (SRCA) for the spatial (all bands) and spectral (RSB only) characterization. This paper discusses MODIS in-flight calibration methodologies using its on-board calibrators. Challenging issues and examples of tracking and correcting instrument on-orbit response changes are presented, including SD degradation (20% at 412nm, 12% at 466nm, and 7% at 530nm over four and a half years) and response versus scan angle changes (10%, 4%, and 1% differences between beginning of the scan and end of the scan at 412nm, 466nm, and 530nm) in the VIS spectral region. Current instrument performance and lessons learned are also provided.

  7. Calibration for the SAGE III/EOS instruments

    NASA Technical Reports Server (NTRS)

    Chu, W. P.; Mccormick, M. P.; Zawodny, J. M.; Mcmaster, L. R.

    1991-01-01

    The calibration plan for the SAGE III instruments for maintaining instrument performance during the Earth Observing System (EOS) mission lifetime is described. The SAGE III calibration plan consists of detailed preflight and inflight calibration on the instrument performance together with the correlative measurement program to validate the data products from the inverted satellite measurements. Since the measurement technique is primarily solar/lunar occultation, the instrument will be self-calibrating by using the sun as the calibration source during the routine operation of the instrument in flight. The instrument is designed to perform radiometric calibration of throughput, spectral, and spatial response in flight during routine operation. Spectral calibration can be performed in-flight from observation of the solar Fraunhofer lines within the spectral region from 290 to 1030 nm wavelength.

  8. Comparison of Calibration Methods for Tristimulus Colorimeters.

    PubMed

    Gardner, James L

    2007-01-01

    Uncertainties in source color measurements with a tristimulus colorimeter are estimated for calibration factors determined, based on a known source spectral distribution or on accurate measurements of the spectral responsivities of the colorimeter channels. Application is to the National Institute of Standards and Technology (NIST) colorimeter and an International Commission on Illumination (CIE) Illuminant A calibration. Detector-based calibration factors generally have lower uncertainties than source-based calibration factors. Uncertainties are also estimated for calculations of spectral mismatch factors. Where both spectral responsivities of the colorimeter channels and the spectral power distributions of the calibration and test sources are known, uncertainties are lowest if the colorimeter calibration factors are recalculated for the test source; this process also avoids correlations between the CIE Source A calibration factors and the spectral mismatch factors.
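    The spectral mismatch factor discussed in this record has a standard broadband-detector form, which can be sketched numerically. In the sketch, `resp` is one colorimeter channel's relative spectral responsivity and `v_bar` the CIE weighting function that channel approximates; the discrete sums assume a uniform wavelength grid so the dλ factors cancel. The data in the usage check are synthetic.

    ```python
    import numpy as np

    def mismatch_factor(s_test, s_cal, resp, v_bar):
        """Spectral mismatch factor in its usual form:
        F = (sum(S_test*V) * sum(S_cal*R)) / (sum(S_test*R) * sum(S_cal*V)).
        Multiplying the calibration-source factor by F recalculates the
        channel calibration for the test source's spectral distribution."""
        num = (s_test * v_bar).sum() * (s_cal * resp).sum()
        den = (s_test * resp).sum() * (s_cal * v_bar).sum()
        return num / den

    # Synthetic example: a channel whose responsivity is exactly
    # proportional to the weighting function has F = 1 for any sources.
    wl = np.linspace(400.0, 700.0, 31)
    v = np.exp(-(((wl - 555.0) / 80.0) ** 2))
    f = mismatch_factor(s_test=wl, s_cal=np.ones_like(wl), resp=2.0 * v, v_bar=v)
    ```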

  9. Comparison of Calibration Methods for Tristimulus Colorimeters

    PubMed Central

    Gardner, James L.

    2007-01-01

    Uncertainties in source color measurements with a tristimulus colorimeter are estimated for calibration factors determined, based on a known source spectral distribution or on accurate measurements of the spectral responsivities of the colorimeter channels. Application is to the National Institute of Standards and Technology (NIST) colorimeter and an International Commission on Illumination (CIE) Illuminant A calibration. Detector-based calibration factors generally have lower uncertainties than source-based calibration factors. Uncertainties are also estimated for calculations of spectral mismatch factors. Where both spectral responsivities of the colorimeter channels and the spectral power distributions of the calibration and test sources are known, uncertainties are lowest if the colorimeter calibration factors are recalculated for the test source; this process also avoids correlations between the CIE Source A calibration factors and the spectral mismatch factors. PMID:27110460

  10. Calibration of the COBE FIRAS instrument

    NASA Technical Reports Server (NTRS)

    Fixsen, D. J.; Cheng, E. S.; Cottingham, D. A.; Eplee, R. E., Jr.; Hewagama, T.; Isaacman, R. B.; Jensen, K. A.; Mather, J. C.; Massa, D. L.; Meyer, S. S.

    1994-01-01

    The Far-Infrared Absolute Spectrophotometer (FIRAS) instrument on the Cosmic Background Explorer (COBE) satellite was designed to accurately measure the spectrum of the cosmic microwave background radiation (CMBR) in the frequency range 1-95/cm with an angular resolution of 7 deg. We describe the calibration of this instrument, including the method of obtaining calibration data, reduction of data, the instrument model, fitting the model to the calibration data, and application of the resulting model solution to sky observations. The instrument model fits well for calibration data that resemble sky condition. The method of propagating detector noise through the calibration process to yield a covariance matrix of the calibrated sky data is described. The final uncertainties are variable both in frequency and position, but for a typical calibrated sky 2.6 deg square pixel and 0.7/cm spectral element the random detector noise limit is of order of a few times 10(exp -7) ergs/sq cm/s/sr cm for 2-20/cm, and the difference between the sky and the best-fit cosmic blackbody can be measured with a gain uncertainty of less than 3%.
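    The propagation of detector noise through a calibration to a covariance matrix of the calibrated data is, to first order, the standard linear form C = J Σ Jᵀ. The sketch below shows only that generic bookkeeping step, not the FIRAS pipeline itself; the matrices in the check are invented.

    ```python
    import numpy as np

    def propagate_noise(jacobian, detector_cov):
        """Linear propagation of a detector-noise covariance matrix
        through a calibration transform with Jacobian J:
        C_calibrated = J @ Sigma @ J.T."""
        return jacobian @ detector_cov @ jacobian.T

    # Toy 2-channel example with unit, uncorrelated detector noise.
    J = np.array([[1.0, 0.0],
                  [1.0, 1.0]])
    C = propagate_noise(J, np.eye(2))
    ```

    The off-diagonal terms of C are the correlations that the calibration process introduces between output channels, which is why a full covariance matrix, rather than per-channel variances, is needed for the sky data.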

  11. An investigation of student thinking regarding calorimetry, entropy, and the second law of thermodynamics

    NASA Astrophysics Data System (ADS)

    Christensen, Warren Michael

    This thesis constitutes an investigation into student understanding of concepts in thermal physics in an introductory calculus-based university physics course. Nearly 90% of students enrolled in the course had previous exposure to thermodynamics concepts in chemistry and/or high-school physics courses. The two major thrusts of this work are (1) an exploration of student approaches to solving calorimetry problems involving two substances with differing specific heats, and (2) a careful probing of student ideas regarding certain aspects of entropy and the second law of thermodynamics. We present extensive free-response, interview, and multiple-choice data regarding students' ideas, collected both before and after instruction from a diverse set of course semesters and instructors. For topics in calorimetry, we found via interviews that students frequently get confused by, or tend to overlook, the detailed proportional reasoning or algebraic procedures that could lead to correct solutions. Instead, students often proceed with semi-intuitive reasoning that at times may be productive, but more often leads to inconsistencies and non-uniform conceptual understanding. Our investigation of student thinking regarding entropy suggests that prior to instruction, students have consistent and distinct patterns of incorrect or incomplete responses that often persist despite deliberate and focused efforts by the instructor. With modified instruction based on research-based materials, significant learning gains were observed on certain key concepts, e.g., that the entropy of the universe increases for all non-ideal processes. The methodology for our work is described, the data are discussed and analyzed, and a description is given of goals for future work in this area.

  12. An image-processing software package: UU and Fig for optical metrology applications

    NASA Astrophysics Data System (ADS)

    Chen, Lujie

    2013-06-01

    Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], Fourier transform [2], digital image correlation [3], camera calibration [4], etc., in which image processing is a critical and indispensable component. While it is not too difficult to obtain a wide variety of image-processing programs from the internet, few are catered to the relatively special area of optical metrology. This paper introduces an image-processing software package: UU (data processing) and Fig (data rendering), which incorporates many useful functions to process optical metrological data. The cross-platform programs UU and Fig are developed based on wxWidgets. At the time of writing, they have been tested on Windows, Linux, and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparably stable and accurate to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, fitting of data, 3D image processing, vector image processing, precision device control (rotary stage, PZT stage, etc.), point cloud to surface reconstruction, volume rendering, batch processing, etc. The software package is currently used in a number of universities for teaching and research.

  13. Software cost/resource modeling: Software quality tradeoff measurement

    NASA Technical Reports Server (NTRS)

    Lawler, R. W.

    1980-01-01

    A conceptual framework for treating software quality from a total system perspective is developed. Examples are given to show how system quality objectives may be allocated to hardware and software; to illustrate trades among quality factors, both hardware and software, to achieve system performance objectives; and to illustrate the impact of certain design choices on software functionality.

  14. Cross-Calibration between ASTER and MODIS Visible to Near-Infrared Bands for Improvement of ASTER Radiometric Calibration

    PubMed Central

    Tsuchida, Satoshi; Thome, Kurtis

    2017-01-01

    Radiometric cross-calibration between the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and the Terra-Moderate Resolution Imaging Spectroradiometer (MODIS) has been partially used to derive the ASTER radiometric calibration coefficient (RCC) curve as a function of date on visible to near-infrared bands. However, cross-calibration is not sufficiently accurate, since the effects of the differences in the sensors' spectral and spatial responses are not fully mitigated. The present study attempts to evaluate radiometric consistency across the two sensors using an improved cross-calibration algorithm to address the spectral and spatial effects and to derive cross-calibration-based RCCs, which increases the ASTER calibration accuracy. Overall, radiances measured with ASTER bands 1 and 2 are on average 3.9% and 3.6% greater than the ones measured on the same scene with their MODIS counterparts, and ASTER band 3N (nadir) is 0.6% smaller than its MODIS counterpart in current radiance/reflectance products. The percentage root mean squared errors (%RMSEs) between the radiances of the two sensors are 3.7, 4.2, and 2.3 for ASTER bands 1, 2, and 3N, respectively, which are slightly greater or smaller than the required ASTER radiometric calibration accuracy (4%). The uncertainty of the cross-calibration is analyzed by elaborating the error budget table to evaluate the International System of Units (SI)-traceability of the results. The use of the derived RCCs will allow further reduction of errors in ASTER radiometric calibration and subsequently improve interoperability across sensors for synergistic applications. PMID:28777329
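    A percent RMSE between matched radiance pairs from two sensors, of the kind quoted in this record, can be computed as below. This uses one common convention (RMS of the differences, normalized by the reference-sensor mean, times 100); the paper's exact normalization may differ, and the numbers in the check are synthetic.

    ```python
    import math

    def percent_rmse(ref, test):
        """Percent root-mean-square error between paired radiances from a
        reference sensor and a test sensor, relative to the reference mean."""
        n = len(ref)
        mse = sum((t - r) ** 2 for r, t in zip(ref, test)) / n
        mean_ref = sum(ref) / n
        return 100.0 * math.sqrt(mse) / mean_ref

    # Synthetic pairs: errors of +4 and -4 about a mean of 100 give 4 %RMSE.
    value = percent_rmse([100.0, 100.0], [104.0, 96.0])
    ```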

  15. Prowess - A Software Model for the Ooty Wide Field Array

    NASA Astrophysics Data System (ADS)

    Marthi, Visweshwar Ram

    2017-03-01

    One of the scientific objectives of the Ooty Wide Field Array (OWFA) is to observe the redshifted H i emission from z ∼ 3.35. Although predictions spell out optimistic outcomes in reasonable integration times, these studies were based purely on analytical assumptions, without accounting for limiting systematics. A software model for OWFA has been developed with a view to understanding the instrument-induced systematics, by describing a complete software model for the instrument. This model has been implemented through a suite of programs, together called Prowess, which has been conceived with the dual role of an emulator as well as observatory data analysis software. The programming philosophy followed in building Prowess enables a general user to define their own set of functions and add new functionality. This paper describes a coordinate system suitable for OWFA in which the baselines are defined. The foregrounds are simulated from their angular power spectra. The visibilities are then computed from the foregrounds. These visibilities are then used for further processing, such as calibration and power spectrum estimation. The package allows for rich visualization features in multiple output formats in an interactive fashion, giving the user an intuitive feel for the data. Prowess has been extensively used for numerical predictions of the foregrounds for the OWFA H i experiment.

  16. Software for Generating Strip Maps from SAR Data

    NASA Technical Reports Server (NTRS)

    Hensley, Scott; Michel, Thierry; Madsen, Soren; Chapin, Elaine; Rodriguez, Ernesto

    2004-01-01

    Jurassicprok is a computer program that generates strip-map digital elevation models and other data products from raw data acquired by an airborne synthetic-aperture radar (SAR) system. This software can process data from a variety of airborne SAR systems but is designed especially for the GeoSAR system, which is a dual-frequency (P- and X-band), single-pass interferometric SAR system for measuring elevation both at the bare ground surface and the top of the vegetation canopy. Jurassicprok is a modified version of software developed previously for airborne-interferometric-SAR applications. The modifications were made to accommodate P-band interferometric processing, remove approximations that are not generally valid, and reduce processor-induced mapping errors to the centimeter level. Major additions and other improvements over the prior software include the following: a) A new, highly efficient multi-stage-modified wave-domain processing algorithm for accurately motion compensating ultra-wideband data; b) Adaptive regridding algorithms based on estimated noise and actual measured topography to reduce noise while maintaining spatial resolution; c) Exact expressions for height determination from interferogram data; d) Fully calibrated volumetric correlation data based on rigorous removal of geometric and signal-to-noise decorrelation terms; e) Strip range-Doppler image output in user-specified Doppler coordinates; f) An improved phase-unwrapping and absolute-phase-determination algorithm; g) A more flexible user interface with many additional processing options; h) Increased interferogram filtering options; and i) Ability to use disk space instead of random-access memory for some processing steps.

  17. Advances in model-based software for simulating ultrasonic immersion inspections of metal components

    NASA Astrophysics Data System (ADS)

    Chiou, Chien-Ping; Margetan, Frank J.; Taylor, Jared L.; Engle, Brady J.; Roberts, Ronald A.

    2018-04-01

    Under the sponsorship of the National Science Foundation's Industry/University Cooperative Research Center at ISU, an effort was initiated in 2015 to repackage existing research-grade software into user-friendly tools for the rapid estimation of signal-to-noise ratio (SNR) for ultrasonic inspections of metals. The software combines: (1) a Python-based graphical user interface for specifying an inspection scenario and displaying results; and (2) a Fortran-based engine for computing defect signals and backscattered grain noise characteristics. The latter makes use of the Thompson-Gray measurement model for the response from an internal defect, and the Thompson-Margetan independent scatterer model for backscattered grain noise. This paper, the third in the series [1-2], provides an overview of the ongoing modeling effort with emphasis on recent developments. These include the ability to: (1) treat microstructures where grain size, shape, and tilt relative to the incident sound direction can all vary with depth; and (2) simulate C-scans of defect signals in the presence of backscattered grain noise. The simulation software can now treat both normal and oblique-incidence immersion inspections of curved metal components. Both longitudinal and shear-wave inspections are treated. The model transducer can be planar, spherically focused, or bi-cylindrically focused. A calibration (or reference) signal is required and is used to deduce the measurement system efficiency function. This can be "invented" by the software using center frequency and bandwidth information specified by the user, or, alternatively, a measured calibration signal can be used. Defect types include flat-bottomed-hole reference reflectors, and spherical pores and inclusions. Simulation outputs include estimated defect signal amplitudes, root-mean-square values of grain noise amplitudes, and SNR as functions of the depth of the defect within the metal component. At any particular depth, the user can view
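    The SNR figure of merit that such software reports is, in its simplest form, the defect echo amplitude against the RMS grain-noise level, often expressed in dB. The sketch below shows only that elementary definition; the actual models compute both quantities from physical parameters rather than taking them as inputs.

    ```python
    import math

    def snr_db(defect_amplitude, noise_rms):
        """Signal-to-noise ratio in decibels: the peak defect-signal
        amplitude relative to the root-mean-square backscattered
        grain-noise amplitude at the same depth."""
        return 20.0 * math.log10(defect_amplitude / noise_rms)

    # A defect echo 10x the RMS noise level corresponds to 20 dB.
    ratio = snr_db(10.0, 1.0)
    ```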

  18. Oxygen-Mass-Flow Calibration Cell

    NASA Technical Reports Server (NTRS)

    Martin, Robert E.

    1996-01-01

    Proposed calibration standard for mass flow rate of oxygen based on conduction of oxygen ions through solid electrolyte membrane made of zirconia and heated to temperature of 1,000 degrees C. Flow of oxygen ions proportional to applied electric current. Unaffected by variations in temperature and pressure, and requires no measurement of volume. Calibration cell based on concept used to calibrate variety of medical and scientific instruments required to operate with precise rates of flow of oxygen.
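    The proportionality between ion current and oxygen flow stated in this record follows from Faraday's law: each O2 molecule transported through the electrolyte carries four electrons. The sketch below quantifies that ideal relation, assuming 100% current efficiency; a real cell's calibration could include efficiency or leakage corrections not shown here.

    ```python
    F = 96485.332   # Faraday constant, C/mol
    M_O2 = 31.998   # molar mass of O2, g/mol

    def o2_mass_flow_mg_per_s(current_a):
        """Ideal oxygen mass flow (mg/s) through a zirconia solid
        electrolyte for a given cell current, from Faraday's law with
        4 electrons transferred per O2 molecule."""
        return current_a * M_O2 / (4.0 * F) * 1000.0

    # A 1 A cell current corresponds to roughly 0.083 mg/s of oxygen.
    flow = o2_mass_flow_mg_per_s(1.0)
    ```

    Because the relation depends only on charge and stoichiometry, the flow rate is indeed independent of temperature and pressure, which is what makes the cell attractive as a calibration standard.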

  19. 14 CFR 33.45 - Calibration tests.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Calibration tests. 33.45 Section 33.45... STANDARDS: AIRCRAFT ENGINES Block Tests; Reciprocating Aircraft Engines § 33.45 Calibration tests. (a) Each engine must be subjected to the calibration tests necessary to establish its power characteristics and...

  20. 14 CFR 33.45 - Calibration tests.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Calibration tests. 33.45 Section 33.45... STANDARDS: AIRCRAFT ENGINES Block Tests; Reciprocating Aircraft Engines § 33.45 Calibration tests. (a) Each engine must be subjected to the calibration tests necessary to establish its power characteristics and...