Sample records for fully automatic calibration

  1. A real-time freehand ultrasound calibration system with automatic accuracy feedback and control.

    PubMed

    Chen, Thomas Kuiran; Thurston, Adrian D; Ellis, Randy E; Abolmaesumi, Purang

    2009-01-01

This article describes a fully automatic, real-time, freehand ultrasound calibration system. The system was designed to be simple and sterilizable, intended for operating-room use. The calibration system employed an automatic error-retrieval and accuracy-control mechanism based on a set of ground-truth data. Extensive validation was conducted on a data set of 10,000 images in 50 independent calibration trials to thoroughly investigate the accuracy, robustness, and performance of the calibration system. On average, the calibration accuracy (measured as three-dimensional reconstruction error against a known ground truth) over all 50 trials was 0.66 mm. In addition, the calibration errors converged to the submillimeter level in 98% of all trials, within 12.5 s on average. Overall, the calibration system was able to consistently, efficiently and robustly achieve high calibration accuracy with real-time performance.
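
    The accuracy-control idea described above, acquiring calibration data until the reconstruction error against known ground-truth points drops below a target, can be sketched as follows. The 1 mm threshold, the RMS error metric, the frame/solver interfaces, and the rigid-transform calibration model are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def reconstruction_error(calib, ground_truth_pts, measured_pts):
    """RMS distance (mm) between ground-truth points and their
    reconstruction under the current calibration (here a rigid transform)."""
    R, t = calib
    recon = measured_pts @ R.T + t
    return float(np.sqrt(np.mean(np.sum((recon - ground_truth_pts) ** 2, axis=1))))

def calibrate_until_accurate(frames, ground_truth_pts, fit, threshold_mm=1.0):
    """Feed frames to the solver until the ground-truth error is sub-threshold."""
    calib, err = None, float("inf")
    for frame in frames:
        used = frames[:frames.index(frame) + 1]
        calib = fit(used)                 # re-solve with all frames so far
        err = reconstruction_error(calib, ground_truth_pts, frame["pts"])
        if err < threshold_mm:            # accuracy gate: accept and stop
            break
    return calib, err
```

    The point of the gate is that the system never reports a calibration it cannot verify against the ground-truth set, which is what allows fully automatic operation.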

  2. Recent Research on the Automated Mass Measuring System

    NASA Astrophysics Data System (ADS)

    Yao, Hong; Ren, Xiao-Ping; Wang, Jian; Zhong, Rui-Lin; Ding, Jing-An

This paper reviews the development of robotic mass-measurement systems and introduces representative automatic systems, then discusses a sub-multiple calibration scheme adopted on the fully automatic CCR10 system. The robotic system can perform the dissemination of the mass scale without any manual intervention, as well as the rapid calibration of sample weights against a reference weight. Finally, an evaluation of the expanded uncertainty is presented.

  3. Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Pagdett, Curtis W.

    2013-01-01

    This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can automatically derive the camera-to-INS alignment using image data only. The assumption is that the camera fixates on an area while the aircraft flies on orbit. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie point data is required.

  4. Fully integrated low-noise readout circuit with automatic offset cancellation loop for capacitive microsensors.

    PubMed

    Song, Haryong; Park, Yunjong; Kim, Hyungseup; Cho, Dong-Il Dan; Ko, Hyoungho

    2015-10-14

    Capacitive sensing schemes are widely used for various microsensors; however, such microsensors suffer from severe parasitic capacitance problems. This paper presents a fully integrated low-noise readout circuit with automatic offset cancellation loop (AOCL) for capacitive microsensors. The output offsets of the capacitive sensing chain due to the parasitic capacitances and process variations are automatically removed using AOCL. The AOCL generates electrically equivalent offset capacitance and enables charge-domain fine calibration using a 10-bit R-2R digital-to-analog converter, charge-transfer switches, and a charge-storing capacitor. The AOCL cancels the unwanted offset by binary-search algorithm based on 10-bit successive approximation register (SAR) logic. The chip is implemented using 0.18 μm complementary metal-oxide-semiconductor (CMOS) process with an active area of 1.76 mm². The power consumption is 220 μW with 3.3 V supply. The input parasitic capacitances within the range of -250 fF to 250 fF can be cancelled out automatically, and the required calibration time is lower than 10 ms.
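
    The binary-search cancellation described above can be sketched in software: the SAR logic tests one DAC bit at a time, from MSB to LSB, keeping a bit whenever the resulting compensation does not overshoot the offset. The linear code-to-capacitance mapping below is an assumption for illustration; on the chip this runs in hardware with a comparator:

```python
FS = 250.0    # full-scale compensation range, +/- FS fF (from the abstract)
BITS = 10     # DAC resolution (from the abstract)

def dac_capacitance(code):
    """Assumed 10-bit R-2R DAC mapping: code 0..1023 -> -FS..+FS fF, linearly."""
    return -FS + 2.0 * FS * code / ((1 << BITS) - 1)

def sar_cancel(offset_fF):
    """Binary search (MSB first) for the DAC code whose compensation best
    matches the measured offset; the comparison plays the comparator's role."""
    code = 0
    for b in reversed(range(BITS)):
        trial = code | (1 << b)
        if dac_capacitance(trial) <= offset_fF:   # comparator decision: keep bit
            code = trial
    return code
```

    After 10 comparator decisions the residual offset is below one LSB, about 0.49 fF over the +/-250 fF range, which is consistent with a calibration that completes in well under 10 ms.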

  5. Fully Integrated Low-Noise Readout Circuit with Automatic Offset Cancellation Loop for Capacitive Microsensors

    PubMed Central

    Song, Haryong; Park, Yunjong; Kim, Hyungseup; Cho, Dong-il Dan; Ko, Hyoungho

    2015-01-01

Capacitive sensing schemes are widely used for various microsensors; however, such microsensors suffer from severe parasitic capacitance problems. This paper presents a fully integrated low-noise readout circuit with automatic offset cancellation loop (AOCL) for capacitive microsensors. The output offsets of the capacitive sensing chain due to the parasitic capacitances and process variations are automatically removed using AOCL. The AOCL generates electrically equivalent offset capacitance and enables charge-domain fine calibration using a 10-bit R-2R digital-to-analog converter, charge-transfer switches, and a charge-storing capacitor. The AOCL cancels the unwanted offset by binary-search algorithm based on 10-bit successive approximation register (SAR) logic. The chip is implemented using 0.18 μm complementary metal-oxide-semiconductor (CMOS) process with an active area of 1.76 mm². The power consumption is 220 μW with 3.3 V supply. The input parasitic capacitances within the range of −250 fF to 250 fF can be cancelled out automatically, and the required calibration time is lower than 10 ms. PMID:26473877

  6. ROSAS: a robotic station for atmosphere and surface characterization dedicated to on-orbit calibration

    NASA Astrophysics Data System (ADS)

    Meygret, Aimé; Santer, Richard P.; Berthelot, Béatrice

    2011-10-01

The La Crau test site has been used by CNES since 1987 for vicarious calibration of SPOT cameras. Early calibration activities were conducted during field campaigns devoted to characterizing the atmosphere and the site reflectances. Since 1997, an automatic photometric station (ROSAS) has been installed on the site on a 10 m pole. This station measures, at different wavelengths, the solar extinction and the sky radiances to fully characterize the optical properties of the atmosphere. It also measures the upwelling radiance over the ground to fully characterize the surface reflectance properties. The photometer samples the spectrum from 380 nm to 1600 nm in 9 narrow bands. On every non-cloudy day the photometer automatically and sequentially performs its measurements. Data are transmitted by GSM (Global System for Mobile communications) to CNES and processed. The photometer is calibrated in situ over the sun for irradiance and cross-band calibration, and over Rayleigh scattering for short-wavelength radiance calibration. The data are processed by operational software which calibrates the photometer, estimates the atmospheric properties, computes the bidirectional reflectance distribution function of the site, then simulates the top-of-atmosphere radiance seen by any sensor overpassing the site and calibrates that sensor. This paper describes the instrument, its measurement protocol and its calibration principle. Calibration results are discussed and compared to laboratory calibration. It details the surface reflectance characterization and presents SPOT4 calibration results deduced from the estimated TOA radiance. The results are compared to the official calibration.

  7. Point-and-stare operation and high-speed image acquisition in real-time hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Driver, Richard D.; Bannon, David P.; Ciccone, Domenic; Hill, Sam L.

    2010-04-01

    The design and optical performance of a small-footprint, low-power, turnkey, Point-And-Stare hyperspectral analyzer, capable of fully automated field deployment in remote and harsh environments, is described. The unit is packaged for outdoor operation in an IP56 protected air-conditioned enclosure and includes a mechanically ruggedized fully reflective, aberration-corrected hyperspectral VNIR (400-1000 nm) spectrometer with a board-level detector optimized for point and stare operation, an on-board computer capable of full system data-acquisition and control, and a fully functioning internal hyperspectral calibration system for in-situ system spectral calibration and verification. Performance data on the unit under extremes of real-time survey operation and high spatial and high spectral resolution will be discussed. Hyperspectral acquisition including full parameter tracking is achieved by the addition of a fiber-optic based downwelling spectral channel for solar illumination tracking during hyperspectral acquisition and the use of other sensors for spatial and directional tracking to pinpoint view location. The system is mounted on a Pan-And-Tilt device, automatically controlled from the analyzer's on-board computer, making the HyperspecTM particularly adaptable for base security, border protection and remote deployments. A hyperspectral macro library has been developed to control hyperspectral image acquisition, system calibration and scene location control. The software allows the system to be operated in a fully automatic mode or under direct operator control through a GigE interface.

  8. A fully automated calibration method for an optical see-through head-mounted operating microscope with variable zoom and focus.

    PubMed

    Figl, Michael; Ede, Christopher; Hummel, Johann; Wanschitz, Felix; Ewers, Rolf; Bergmann, Helmar; Birkfellner, Wolfgang

    2005-11-01

Ever since the development of the first applications in image-guided therapy (IGT), the use of head-mounted displays (HMDs) has been considered an important extension of existing IGT technologies. Several approaches to utilizing HMDs and modified medical devices for augmented reality (AR) visualization have been implemented, including video see-through systems, semitransparent mirrors, modified endoscopes, and modified operating microscopes. Common to all these devices is that a precise calibration between the display and three-dimensional coordinates in the patient's frame of reference is compulsory. In optical see-through devices based on complex optical systems such as operating microscopes or operating binoculars (as in the case of the system presented in this paper), this procedure can become increasingly difficult, since precise camera calibration is required for every focus and zoom position. We present a method for fully automatic calibration of the operating binocular Varioscope M5 AR for the full range of zoom and focus settings available. Our method uses a special calibration pattern, a linear guide driven by a stepping motor, and special calibration software. The overlay error in the calibration plane was found to be 0.14-0.91 mm, which is less than 1% of the field of view. Using the motorized calibration rig presented in the paper, we were also able to assess the dynamic latency when viewing augmentation graphics on a mobile target; the spatial displacement due to latency was found to be 1.1-2.8 mm at maximum, with the disparity between the true object and its computed overlay corresponding to a latency of 0.1 s. We conclude that the automatic calibration method presented in this paper is sufficient in terms of accuracy and time requirements for standard uses of optical see-through systems in a clinical environment.
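
    A calibration that varies with zoom and focus is commonly stored as a parameter table indexed by the two motor positions and interpolated at runtime. The grid, the focal-length values, and the bilinear scheme below are illustrative assumptions, not the paper's method:

```python
import numpy as np

# Hypothetical calibration table: effective focal length (mm) measured at a
# coarse grid of normalized zoom and focus motor positions during calibration.
zoom_pos  = np.array([0.0, 0.5, 1.0])
focus_pos = np.array([0.0, 1.0])
focal_mm  = np.array([[300.0, 310.0],      # f at zoom 0.0, focus {0.0, 1.0}
                      [400.0, 415.0],      # f at zoom 0.5
                      [520.0, 540.0]])     # f at zoom 1.0

def focal_at(z, f):
    """Bilinear interpolation of the calibrated focal length at (zoom, focus)."""
    i = int(np.clip(np.searchsorted(zoom_pos, z) - 1, 0, len(zoom_pos) - 2))
    j = int(np.clip(np.searchsorted(focus_pos, f) - 1, 0, len(focus_pos) - 2))
    tz = (z - zoom_pos[i]) / (zoom_pos[i + 1] - zoom_pos[i])
    tf = (f - focus_pos[j]) / (focus_pos[j + 1] - focus_pos[j])
    top = (1 - tz) * focal_mm[i, j]     + tz * focal_mm[i + 1, j]
    bot = (1 - tz) * focal_mm[i, j + 1] + tz * focal_mm[i + 1, j + 1]
    return (1 - tf) * top + tf * bot
```

    The motorized rig in the paper essentially automates the measurement of such a table over the full zoom/focus range, so that any intermediate setting can be served by interpolation.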

  9. Automatic Coregistration and orthorectification (ACRO) and subsequent mosaicing of NASA high-resolution imagery over the Mars MC11 quadrangle, using HRSC as a baseline

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, Panagiotis; Muller, Jan-Peter; Watson, Gillian; Michael, Gregory; Walter, Sebastian

    2018-02-01

This work presents the coregistered, orthorectified and mosaiced high-resolution products of the MC11 quadrangle of Mars, which have been processed using novel, fully automatic techniques. We discuss the development of a pipeline that achieves fully automatic and parameter-independent geometric alignment of high-resolution planetary images, starting from raw input images in NASA PDS format and following all required steps to produce a coregistered geotiff image, a corresponding footprint and useful metadata. Additionally, we describe the development of a radiometric calibration technique that post-processes coregistered images to make them radiometrically consistent. Finally, we present a batch-mode application of the developed techniques over the MC11 quadrangle to validate their potential, as well as to generate end products, which are released to the planetary science community, thus assisting in the analysis of Mars' static and dynamic features. This case study is a step towards the full automation of signal processing tasks that are essential to increase the usability of planetary data but currently require the extensive use of human resources.

  10. Gait analysis--precise, rapid, automatic, 3-D position and orientation kinematics and dynamics.

    PubMed

    Mann, R W; Antonsson, E K

    1983-01-01

    A fully automatic optoelectronic photogrammetric technique is presented for measuring the spatial kinematics of human motion (both position and orientation) and estimating the inertial (net) dynamics. Calibration and verification showed that in a two-meter cube viewing volume, the system achieves one millimeter of accuracy and resolution in translation and 20 milliradians in rotation. Since double differentiation of generalized position data to determine accelerations amplifies noise, the frequency domain characteristics of the system were investigated. It was found that the noise and all other errors in the kinematic data contribute less than five percent error to the resulting dynamics.
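
    The noise amplification from double differentiation that motivated the frequency-domain analysis above can be quantified: for white measurement noise of standard deviation sigma and sample spacing h, the second central difference (x[i+1] - 2x[i] + x[i-1]) / h^2 has noise standard deviation sqrt(6) * sigma / h^2. A quick numerical check on synthetic data (not the paper's system):

```python
import numpy as np

rng = np.random.default_rng(0)
h, sigma, n = 0.01, 0.001, 200_000        # sample spacing (s), noise std (m)
noise = rng.normal(0.0, sigma, n)         # position signal is pure noise here

# second central difference: (x[i+1] - 2 x[i] + x[i-1]) / h^2
accel_noise = (noise[2:] - 2.0 * noise[1:-1] + noise[:-2]) / h**2

# variance of (a - 2b + c) is (1 + 4 + 1) sigma^2 = 6 sigma^2, hence:
predicted = np.sqrt(6.0) * sigma / h**2
measured = accel_noise.std()
ratio = measured / predicted              # should be close to 1
```

    With sigma = 1 mm and h = 10 ms, a millimeter of position noise becomes roughly 24 m/s^2 of acceleration noise, which is why low-pass characteristics of the measurement chain matter so much for dynamics estimation.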

  11. Calibration and application of an automated seepage meter for monitoring water flow across the sediment-water interface.

    PubMed

    Zhu, Tengyi; Fu, Dafang; Jenkinson, Byron; Jafvert, Chad T

    2015-04-01

    The advective flow of sediment pore water is an important parameter for understanding natural geochemical processes within lake, river, wetland, and marine sediments and also for properly designing permeable remedial sediment caps placed over contaminated sediments. Automated heat pulse seepage meters can be used to measure the vertical component of sediment pore water flow (i.e., vertical Darcy velocity); however, little information on meter calibration as a function of ambient water temperature exists in the literature. As a result, a method with associated equations for calibrating a heat pulse seepage meter as a function of ambient water temperature is fully described in this paper. Results of meter calibration over the temperature range 7.5 to 21.2 °C indicate that errors in accuracy are significant if proper temperature-dependence calibration is not performed. The proposed calibration method allows for temperature corrections to be made automatically in the field at any ambient water temperature. The significance of these corrections is discussed.
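
    The abstract does not give the calibration equations, but the correction it describes amounts to evaluating a temperature-dependent calibration factor at the ambient water temperature before converting the raw meter reading to a Darcy velocity. A minimal sketch; the knot temperatures span the paper's 7.5-21.2 degC range, but the factor values are made up for illustration:

```python
import numpy as np

# Hypothetical calibration: factor relating the raw heat-pulse reading to the
# vertical Darcy velocity, measured at a few bath temperatures.
temps_C = np.array([7.5, 12.0, 16.5, 21.2])
factors = np.array([1.18, 1.09, 1.03, 1.00])   # illustrative values only

def darcy_velocity(raw_reading, water_temp_C):
    """Apply the temperature-appropriate calibration factor (linear interp)."""
    k = np.interp(water_temp_C, temps_C, factors)
    return k * raw_reading
```

    Stored on the meter, such a curve lets the temperature correction happen automatically in the field, which is the behaviour the paper's calibration method enables.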

  12. Improved pressure measurement system for calibration of the NASA LeRC 10x10 supersonic wind tunnel

    NASA Technical Reports Server (NTRS)

    Blumenthal, Philip Z.; Helland, Stephen M.

    1994-01-01

    This paper discusses a method used to provide a significant improvement in the accuracy of the Electronically Scanned Pressure (ESP) Measurement System by means of a fully automatic floating pressure generating system for the ESP calibration and reference pressures. This system was used to obtain test section Mach number and flow angularity measurements over the full envelope of test conditions for the 10 x 10 Supersonic Wind Tunnel. The uncertainty analysis and actual test data demonstrated that, for most test conditions, this method could reduce errors to about one-third to one-half that obtained with the standard system.

  13. The algorithm for automatic detection of the calibration object

    NASA Astrophysics Data System (ADS)

    Artem, Kruglov; Irina, Ugfeld

    2017-06-01

The problem of automatic image calibration is considered in this paper. The most challenging task in automatic calibration is proper detection of the calibration object. Solving this problem required digital image processing methods and algorithms such as morphology, filtering, edge detection and shape approximation. The step-by-step development of the algorithm and its adaptation to the specific conditions of log cuts in the image background is presented. Testing of the automatic calibration module was carried out under production conditions at a logging enterprise. In these tests, the calibration object was correctly isolated in 86.1% of cases on average, with no type I errors. The algorithm was implemented in the automatic calibration module within mobile software for log deck volume measurement.
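
    The kind of pipeline the authors describe (filtering, morphology, shape extraction) can be illustrated on a synthetic image. This numpy-only sketch thresholds the image, applies a 3x3 morphological opening to suppress speckle, and takes the centroid of the surviving mask as the calibration-object position; the actual algorithm, structuring elements, and parameters are not given in the abstract:

```python
import numpy as np

def shift_stack(mask):
    """All nine 3x3-neighbourhood shifts of a boolean image (wrap at edges)."""
    return np.stack([np.roll(np.roll(mask, dy, 0), dx, 1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)])

def opening(mask):
    eroded = shift_stack(mask).all(axis=0)   # 3x3 erosion
    return shift_stack(eroded).any(axis=0)   # 3x3 dilation

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 0.3, (100, 100))                  # textured background
yy, xx = np.mgrid[:100, :100]
img[(yy - 60) ** 2 + (xx - 40) ** 2 <= 12 ** 2] = 0.9    # bright calibration disc
img[rng.integers(0, 100, 50), rng.integers(0, 100, 50)] = 1.0  # speckle noise

mask = opening(img > 0.6)                    # threshold, then clean up speckle
cy, cx = yy[mask].mean(), xx[mask].mean()    # centroid of the detected object
```

    The opening removes isolated bright pixels (which would otherwise bias the centroid) while leaving the compact calibration object essentially intact, which is the role morphology typically plays in such detectors.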

  14. Development of a calibration equipment for spectrometer qualification

    NASA Astrophysics Data System (ADS)

    Michel, C.; Borguet, B.; Boueé, A.; Blain, P.; Deep, A.; Moreau, V.; François, M.; Maresi, L.; Myszkowiak, A.; Taccola, M.; Versluys, J.; Stockman, Y.

    2017-09-01

With the development of new spectrometer concepts, calibration facilities must be adapted to correctly characterize their performance. The main spectro-imaging performance figures are modulation transfer function, spectral response, resolution and registration, polarization, straylight, and radiometric calibration. The challenge of this calibration development is to achieve better performance than the item under test using mostly standard components. Because only the spectrometer subsystem needs to be calibrated, the calibration facility must simulate the geometrical behaviour of the imaging system. A trade-off study indicated that no commercial device could fully satisfy all the requirements, so an in-house telecentric achromatic design was necessary. The proposed concept is based on an Offner design, which allows simple spherical mirrors to be used and covers the spectral range. The spectral range is covered with a monochromator. Because of the large number of parameters to record, the calibration facility is fully automated. The performance of the calibration system has been verified by analysis and experimentally. Results achieved recently on a free-form grating Offner spectrometer demonstrate the capabilities of this new calibration facility. In this paper, a full calibration facility is described, developed specifically for a new free-form spectro-imager.

  15. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    PubMed Central

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The objects imaged include a human subject, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum. PMID:23112656

  16. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    PubMed

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The objects imaged include a human subject, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.

  17. Camera calibration: active versus passive targets

    NASA Astrophysics Data System (ADS)

    Schmalz, Christoph; Forster, Frank; Angelopoulou, Elli

    2011-11-01

Traditionally, most camera calibrations rely on a planar target with well-known marks. However, the localization error of the marks in the image is a source of inaccuracy. We propose the use of high-resolution digital displays as active calibration targets to obtain more accurate calibration results for all types of cameras. The display shows a series of coded patterns to generate correspondences between world points and image points. This has several advantages. No special calibration hardware is necessary because suitable displays are practically ubiquitous. The method is fully automatic, and no identification of marks is necessary. For a coding scheme based on phase shifting, the localization accuracy is approximately independent of the camera's focus settings. Most importantly, higher accuracy can be achieved compared to passive targets, such as printed checkerboards. A rigorous evaluation is performed to substantiate this claim. Our active target method is compared to standard calibrations using a checkerboard target. We perform camera calibrations with different combinations of displays, cameras, and lenses, as well as with simulated images, and find markedly lower reprojection errors when using active targets. For example, in a stereo reconstruction task, the accuracy of a system calibrated with an active target is five times better.
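
    Phase-shift coding of the kind mentioned above works as follows: the display shows K sinusoidal patterns shifted by 2*pi/K, and each camera pixel recovers its display phase (hence a dense correspondence) in closed form, independently of local gain and offset. A four-step sketch on synthetic intensities (the pattern parameters are assumptions):

```python
import numpy as np

def decode_four_step(I0, I1, I2, I3):
    """Recover the wrapped phase from four patterns shifted by 90 degrees.
    With I_k = A + B*cos(phi + k*pi/2):  I3 - I1 = 2B sin(phi),
                                         I0 - I2 = 2B cos(phi)."""
    return np.arctan2(I3 - I1, I0 - I2)

# synthetic check: a ramp of display phases seen with unknown gain and offset
phi = np.linspace(-np.pi + 0.01, np.pi - 0.01, 500)
A, B = 0.5, 0.4                                   # ambient level + modulation
frames = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
phi_hat = decode_four_step(*frames)
```

    Because A and B cancel in the two differences, the decoded phase is insensitive to display brightness and, as the abstract notes, largely insensitive to defocus, which blurs the sinusoid but preserves its phase.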

  18. Fully automated spectrometric protocols for determination of antioxidant activity: advantages and disadvantages.

    PubMed

    Sochor, Jiri; Ryvolova, Marketa; Krystofova, Olga; Salas, Petr; Hubalek, Jaromir; Adam, Vojtech; Trnkova, Libuse; Havel, Ladislav; Beklova, Miroslava; Zehnalek, Josef; Provaznik, Ivo; Kizek, Rene

    2010-11-29

The aim of this study was to describe the behaviour, kinetics, time courses and limitations of six different fully automated spectrometric methods: DPPH, TEAC, FRAP, DMPD, Free Radicals and Blue CrO5. Absorption curves were measured and absorbance maxima were found. All methods were calibrated using the standard compounds Trolox® and/or gallic acid. Calibration curves were determined (relative standard deviation was within the range from 1.5 to 2.5%). The obtained characteristics were compared and discussed. Moreover, the data obtained were used to optimize and automate all the mentioned protocols. The automatic analyzer allowed us to analyse a larger set of samples simultaneously, decrease the measurement time, eliminate errors and provide higher-quality data in comparison to manual analysis. The total time of analysis for one sample was decreased to 10 min for all six methods. By contrast, the total time of manual spectrometric determination was approximately 120 min. The obtained data showed good correlations between the studied methods (R=0.97-0.99).

  19. Man vs. Machine: An interactive poll to evaluate hydrological model performance of a manual and an automatic calibration

    NASA Astrophysics Data System (ADS)

    Wesemann, Johannes; Burgholzer, Reinhard; Herrnegger, Mathew; Schulz, Karsten

    2017-04-01

    In recent years, a lot of research in hydrological modelling has been invested to improve the automatic calibration of rainfall-runoff models. This includes for example (1) the implementation of new optimisation methods, (2) the incorporation of new and different objective criteria and signatures in the optimisation and (3) the usage of auxiliary data sets apart from runoff. Nevertheless, in many applications manual calibration is still justifiable and frequently applied. The hydrologist performing the manual calibration, with his expert knowledge, is able to judge the hydrographs simultaneously concerning details but also in a holistic view. This integrated eye-ball verification procedure available to man can be difficult to formulate in objective criteria, even when using a multi-criteria approach. Comparing the results of automatic and manual calibration is not straightforward. Automatic calibration often solely involves objective criteria such as Nash-Sutcliffe Efficiency Coefficient or the Kling-Gupta-Efficiency as a benchmark during the calibration. Consequently, a comparison based on such measures is intrinsically biased towards automatic calibration. Additionally, objective criteria do not cover all aspects of a hydrograph leaving questions concerning the quality of a simulation open. This contribution therefore seeks to examine the quality of manually and automatically calibrated hydrographs by interactively involving expert knowledge in the evaluation. Simulations have been performed for the Mur catchment in Austria with the rainfall-runoff model COSERO using two parameter sets evolved from a manual and an automatic calibration. A subset of resulting hydrographs for observation and simulation, representing the typical flow conditions and events, will be evaluated in this study. 
In an interactive crowdsourcing approach experts attending the session can vote for their preferred simulated hydrograph without having information on the calibration method that produced the respective hydrograph. Therefore, the result of the poll can be seen as an additional quality criterion for the comparison of the two different approaches and help in the evaluation of the automatic calibration method.
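
    The two objective criteria named in this abstract are standard and easy to state. For an observed series `obs` and a simulated series `sim`, the textbook definitions (not the study's code) are:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the simulation
    is no better than predicting the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta Efficiency: 1 minus the Euclidean distance of
    (correlation, variability ratio, bias ratio) from the ideal (1, 1, 1)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()      # variability ratio
    beta = sim.mean() / obs.mean()     # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

    Both scores compress a hydrograph into a single number, which is exactly the limitation the poll in this contribution is designed to probe: two simulations with similar NSE or KGE can look very different to an expert eye.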

  20. Automated feature detection and identification in digital point-ordered signals

    DOEpatents

    Oppenlander, Jane E.; Loomis, Kent C.; Brudnoy, David M.; Levy, Arthur J.

    1998-01-01

A computer-based automated method to detect and identify features in digital point-ordered signals. The method is used for processing of non-destructive test signals, such as eddy current signals obtained from calibration standards. The signals are first automatically processed to remove noise and to determine a baseline. Next, features are detected in the signals using mathematical morphology filters. Finally, the features are verified using an expert system of pattern recognition methods and geometric criteria. The method has the advantage that standard features can be located without prior knowledge of the number or sequence of the features. Further advantages are that standard features can be differentiated from irrelevant signal features such as noise, and detected features are automatically verified by parameters extracted from the signals. The method proceeds fully automatically, without initial operator set-up and without subjective operator feature judgement.
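
    Mathematical-morphology filtering on a point-ordered signal can be done with sliding minima and maxima: an opening with a flat structuring element estimates the baseline, and features stand out in the residual. This is a generic sketch of the technique, not the patent's specific filters or expert-system rules:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def erode(x, w):
    return sliding_window_view(np.pad(x, w // 2, mode="edge"), w).min(axis=1)

def dilate(x, w):
    return sliding_window_view(np.pad(x, w // 2, mode="edge"), w).max(axis=1)

def opening(x, w):
    """Flat structuring element of width w: removes upward features
    narrower than w while following the slowly varying baseline."""
    return dilate(erode(x, w), w)

# synthetic eddy-current-like trace: drifting baseline plus one narrow flaw
n = 200
x = np.linspace(0.0, 1.0, n)      # slow baseline drift
x[98:103] += 1.0                  # 5-sample feature near index 100

baseline = opening(x, 15)
feature = x - baseline            # features stand out above zero
peak = int(np.argmax(feature))
```

    Because the opening needs no model of where or how many features occur, a detector built on it can locate calibration-standard features without prior knowledge of their number or sequence, matching the claim in the record above.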

  1. Calibration of the LHAASO-KM2A electromagnetic particle detectors using charged particles within the extensive air showers

    NASA Astrophysics Data System (ADS)

    Lv, Hongkui; He, Huihai; Sheng, Xiangdong; Liu, Jia; Chen, Songzhan; Liu, Ye; Hou, Chao; Zhao, Jing; Zhang, Zhongquan; Wu, Sha; Wang, Yaping; Lhaaso Collaboration

    2018-07-01

In the Large High Altitude Air Shower Observatory (LHAASO), the one square kilometer array (KM2A), with 5242 electromagnetic particle detectors (EDs) and 1171 muon detectors (MDs), is designed to study ultra-high-energy gamma-ray astronomy and cosmic ray physics. The remoteness of the site and the large number of detectors demand a robust and automatic calibration procedure. In this paper, a self-calibration method that relies on the measurement of charged particles within extensive air showers is proposed. The method is fully validated by Monte Carlo simulation and successfully applied in a KM2A prototype array experiment. Experimental results show that the self-calibration method can determine the detector time-offset constants at the sub-nanosecond level and the number density of particles collected by each ED with an accuracy of a few percent, which is adequate to meet the physical requirements of the LHAASO experiment. This software calibration also offers an ideal way to monitor detector performance in real time for next-generation ground-based EAS experiments covering areas of a square kilometer and above.
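
    The time-offset part of such a self-calibration can be imitated on synthetic data: fit a plane shower front to each event and attribute each detector's mean fit residual to its time offset. This is a toy numpy sketch under simplified assumptions (plane fronts, Gaussian timing noise), not the collaboration's procedure; note that any offset component that itself varies linearly with detector position is absorbed by the plane fits and is not identifiable this way:

```python
import numpy as np

rng = np.random.default_rng(1)
n_det, n_show = 30, 300
pos = rng.uniform(-500.0, 500.0, (n_det, 2))   # detector x, y positions (m)
true_off = rng.normal(0.0, 5.0, n_det)         # per-detector time offsets (ns)
c = 0.2998                                     # speed of light (m/ns)

# simulate plane shower fronts: t = (n . r)/c + t0 + offset + timing noise
times = np.empty((n_show, n_det))
for s in range(n_show):
    theta, phi = rng.uniform(0, 0.5), rng.uniform(0, 2 * np.pi)
    n_vec = np.sin(theta) * np.array([np.cos(phi), np.sin(phi)])
    times[s] = pos @ n_vec / c + rng.uniform(0, 100) + true_off \
               + rng.normal(0.0, 1.0, n_det)

A = np.hstack([pos, np.ones((n_det, 1))])      # plane-fit design matrix [x y 1]
off = np.zeros(n_det)
for _ in range(3):                             # alternate plane fit / offset update
    resid = np.empty_like(times)
    for s in range(n_show):
        t = times[s] - off
        coef, *_ = np.linalg.lstsq(A, t, rcond=None)
        resid[s] = t - A @ coef
    off += resid.mean(axis=0)

# compare only the identifiable (non-planar) component of the offsets
P = A @ np.linalg.pinv(A)
err = float(np.max(np.abs((off - P @ off) - (true_off - P @ true_off))))
```

    Averaging residuals over many showers beats down the per-event timing noise by roughly 1/sqrt(N_showers), which is how a method like this reaches sub-nanosecond offset constants from nanosecond-level single-hit timing.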

  2. New software tools for enhanced precision in robot-assisted laser phonomicrosurgery.

    PubMed

    Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G

    2012-01-01

    This paper describes a new software package created to enhance precision during robot-assisted laser phonomicrosurgery procedures. The new software is composed of three tools for camera calibration, automatic tumor segmentation, and laser tracking. These were designed and developed to improve the outcome of this demanding microsurgical technique, and were tested herein to produce quantitative performance data. The experimental setup was based on the motorized laser micromanipulator created by Istituto Italiano di Tecnologia and the experimental protocols followed are fully described in this paper. The results show the new tools are robust and effective: The camera calibration tool reduced residual errors (RMSE) to 0.009 ± 0.002 mm under 40× microscope magnification; the automatic tumor segmentation tool resulted in deep lesion segmentations comparable to manual segmentations (RMSE= 0.160 ± 0.028 mm under 40× magnification); and the laser tracker tool proved to be reliable even during cutting procedures (RMSE= 0.073 ± 0.023 mm under 40× magnification). These results demonstrate the new software package can provide excellent improvements to the previous microsurgical system, leading to important enhancements in surgical outcome.

  3. 3D image processing architecture for camera phones

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Goma, Sergio R.; Aleksic, Milivoje

    2011-03-01

    Putting high-quality, easy-to-use 3D technology into the hands of regular consumers has become a recent challenge as interest in 3D technology has grown. Making 3D technology appealing to the average user requires that it be fully automatic and foolproof. Designing a fully automatic 3D capture and display system requires: 1) identifying critical 3D technology issues, such as camera positioning, the rationale for disparity control, and screen-geometry dependency, and 2) designing a methodology to control them automatically. Implementing 3D capture on phone cameras requires algorithms that fit within the processing capabilities of the device. Constraints such as sensor-position tolerances, sensor 3A tolerances, post-processing, and 3D video resolution and frame rate should be carefully considered for their influence on the 3D experience. Issues with migrating functions such as zoom and pan from the 2D usage model (both during capture and display) to 3D need to be resolved to ensure the highest level of user experience. It is also very important that the 3D usage scenario (including interactions between the user and the capture/display device) be carefully considered. Finally, both the processing power of the device and the practicality of the scheme need to be taken into account when designing the calibration and processing methodology.

  4. Considerations in Phase Estimation and Event Location Using Small-aperture Regional Seismic Arrays

    NASA Astrophysics Data System (ADS)

    Gibbons, Steven J.; Kværna, Tormod; Ringdal, Frode

    2010-05-01

    The global monitoring of earthquakes and explosions at decreasing magnitudes necessitates the fully automatic detection, location, and classification of an ever-increasing number of seismic events. Many seismic stations of the International Monitoring System are small-aperture arrays designed to optimize the detection and measurement of regional phases. Collaboration with operators of mines within regional distances of the ARCES array, together with waveform correlation techniques, has provided an unparalleled opportunity to assess the ability of a small-aperture array to provide robust and accurate direction and slowness estimates for phase arrivals resulting from well-constrained events at sites of repeating seismicity. A significant reason for the inaccuracy of current fully automatic event-location estimates is the use of f-k slowness estimates measured in variable frequency bands. The variability of slowness and azimuth measurements for a given phase from a given source region is reduced by the application of almost any constant frequency band. However, the frequency band resulting in the most stable estimates varies greatly from site to site. Situations are observed in which regional P-arrivals from two sites, far closer together than the theoretical resolution of the array, result in highly distinct populations in slowness space. This means that the f-k estimates, even at relatively low frequencies, can be sensitive to source- and path-specific characteristics of the wavefield and should be treated with caution when inferring a geographical backazimuth under the assumption of a planar wavefront arriving along the great-circle path. Moreover, different frequency bands are associated with different biases, meaning that slowness and azimuth station corrections (commonly denoted SASCs) cannot be calibrated, and should not be used, without reference to the frequency band employed.
We demonstrate an example in which fully automatic locations based on a source-region-specific fixed-parameter template are more stable than the corresponding analyst-reviewed estimates. The reason is that the analyst selects a frequency band and analysis window that appear optimal for each event. In this case, the frequency band that produces the most consistent direction estimates has neither the best SNR nor the greatest beam gain, and is therefore unlikely to be chosen by an analyst without calibration data.
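
    The core of a direction estimate from a small-aperture array — fitting a plane wave to relative arrival times and converting the slowness vector to backazimuth and apparent velocity — can be sketched as follows. The array coordinates, apparent velocity, and noise level are hypothetical, and a real f-k analysis works on waveforms in a chosen frequency band rather than on picked arrival times.

```python
import numpy as np

# Hypothetical small-aperture array geometry (east, north coordinates in km)
sta = np.array([[0.0, 0.0], [1.2, 0.3], [0.4, 1.1], [-0.8, 0.7], [-0.5, -0.9]])

def fit_slowness(arrival_times, coords):
    """Least-squares plane-wave fit t_i = t0 + s.x_i; returns the slowness vector (s/km)."""
    A = np.column_stack([np.ones(len(coords)), coords])
    coef, *_ = np.linalg.lstsq(A, arrival_times, rcond=None)
    return coef[1:]

# Synthetic P arrival: apparent velocity 8 km/s, backazimuth 60 deg.
# The slowness vector points along the propagation direction, i.e. away from the source.
baz_true = np.radians(60.0)
s_true = -np.array([np.sin(baz_true), np.cos(baz_true)]) / 8.0
t = sta @ s_true + 0.002 * np.random.default_rng(1).normal(size=len(sta))  # 2 ms picking noise

s_est = fit_slowness(t, sta)
baz_est = np.degrees(np.arctan2(-s_est[0], -s_est[1])) % 360.0
v_app = 1.0 / np.linalg.norm(s_est)
print(f"backazimuth ~{baz_est:.1f} deg, apparent velocity ~{v_app:.1f} km/s")
```

Even small timing perturbations shift the recovered backazimuth by a few degrees, which is the kind of band-dependent scatter the abstract describes.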

  5. On the possibility of producing definitive magnetic observatory data within less than one year

    NASA Astrophysics Data System (ADS)

    Mandić, Igor; Korte, Monika

    2017-04-01

    Geomagnetic observatory data are fundamental to geomagnetic field studies and are widely used in other applications, often in combination with satellite and ground-survey data. Unfortunately, definitive observatory data only become available with a time lag ranging from several months to more than a year. The reason for this lag is the annual production of the final calibration values, i.e. the baselines used to correct preliminary data from continuously recording magnetometers. In this paper, we show that the preparation of definitive geomagnetic data is possible within a calendar year and present an original method for prompt and automatic estimation of the observatory baselines. The new baselines, obtained in a mostly automatic manner, are compared with the baselines reported on INTERMAGNET DVDs for the 2009-2011 period. The high quality of the baselines obtained by the proposed method indicates its suitability for data processing in fully automatic observatories once automated absolute instruments are deployed at remote sites.

  6. Automatic alignment method for calibration of hydrometers

    NASA Astrophysics Data System (ADS)

    Lee, Y. J.; Chang, K. H.; Chon, J. C.; Oh, C. Y.

    2004-04-01

    This paper presents a new method to automatically align specific scale-marks for the calibration of hydrometers. A hydrometer calibration system adopting the new method consists of a vision system, a stepping motor, and software to control the system. The vision system is composed of a CCD camera and a frame grabber, and is used to acquire images. The stepping motor moves the camera, which is attached to the vessel containing a reference liquid, along the hydrometer. The operating program has two main functions: to process images from the camera to find the position of the horizontal plane and to control the stepping motor for the alignment of the horizontal plane with a particular scale-mark. Any system adopting this automatic alignment method is a convenient and precise means of calibrating a hydrometer. The performance of the proposed method is illustrated by comparing the calibration results using the automatic alignment method with those obtained using the manual method.
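
    The horizontal-plane detection step described above can be sketched as a row-wise edge search: the liquid surface appears as the strongest horizontal brightness transition in the camera image, and the offset from a target scale-mark row converts to stepping-motor steps. The synthetic image, gradient criterion, and steps-per-pixel factor below are all hypothetical.

```python
import numpy as np

def find_horizontal_plane(img):
    """Row index of the strongest horizontal edge: a stand-in for locating the
    liquid surface in the camera image."""
    row_means = img.mean(axis=1)          # average out column-wise noise
    grad = np.abs(np.diff(row_means))     # brightness change between adjacent rows
    return int(np.argmax(grad)) + 1

# Synthetic 100x60 frame: bright air above row 42, darker liquid below, plus noise
rng = np.random.default_rng(2)
img = np.full((100, 60), 200.0)
img[42:] = 90.0
img += rng.normal(0.0, 5.0, img.shape)

row = find_horizontal_plane(img)
# Hypothetical alignment command: 4 motor steps per pixel, target scale-mark at row 50
steps = (row - 50) * 4
print(row, steps)
```

Averaging each row before differencing makes the detection robust to per-pixel noise, which is why the surface row is recovered exactly in this sketch.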

  7. Applying Hierarchical Model Calibration to Automatically Generated Items.

    ERIC Educational Resources Information Center

    Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I.

    This study explored the application of hierarchical model calibration as a means of reducing, if not eliminating, the need for pretesting of automatically generated items from a common item model prior to operational use. Ultimately the successful development of automatic item generation (AIG) systems capable of producing items with highly similar…

  8. A Fully Integrated Sensor SoC with Digital Calibration Hardware and Wireless Transceiver at 2.4 GHz

    PubMed Central

    Kim, Dong-Sun; Jang, Sung-Joon; Hwang, Tae-Ho

    2013-01-01

    A single-chip sensor system-on-a-chip (SoC) that integrates a 2.4 GHz radio, a complete digital baseband physical layer (PHY), a 10-bit sigma-delta analog-to-digital converter, and dedicated sensor-calibration hardware for industrial sensing systems has been proposed and implemented in a 0.18-μm CMOS technology. The transceiver's building blocks include a low-noise amplifier, mixer, channel filter, receiver signal-strength indicator, frequency synthesizer, voltage-controlled oscillator, and power amplifier. The digital building blocks comprise offset quadrature phase-shift keying (OQPSK) modulation and demodulation, carrier-frequency-offset compensation, auto-gain control, digital MAC functions, sensor-calibration hardware, and an embedded 8-bit microcontroller. The digital MAC functions support cyclic redundancy check (CRC), inter-symbol timing check, MAC frame control, and automatic retransmission. The embedded sensor-signal-processing block consists of a calibration-coefficient calculator, a sensing-data calibration mapper, and a sigma-delta analog-to-digital converter with a digital decimation filter. The sensitivity of the overall receiver and the error vector magnitude (EVM) of the overall transmitter are −99 dBm and 18.14%, respectively. The proposed calibration scheme reduces errors by about 45.4% compared with the improved progressive polynomial calibration (PPC) method, and the maximum current consumption of the SoC is 16 mA. PMID:23698271
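
    The role of a calibration-coefficient calculator and calibration mapper can be illustrated with a generic polynomial sensor calibration (the abstract's PPC comparison suggests a polynomial model; the sensor characteristic and coefficients below are invented for illustration, not taken from the chip):

```python
import numpy as np

# Hypothetical nonlinear sensor: reference stimulus values vs raw 10-bit ADC codes
ref = np.linspace(0.0, 100.0, 9)                     # e.g. reference temperatures
raw = np.round(5.0 + 9.0 * ref + 0.004 * ref ** 2)   # mildly nonlinear, quantised codes

# Calibration-coefficient calculator: fit a low-order polynomial mapping raw -> ref
coef = np.polyfit(raw, ref, deg=2)

def calibrate(code):
    """Calibration mapper: raw ADC code -> calibrated physical value."""
    return np.polyval(coef, code)

max_err = np.max(np.abs(calibrate(raw) - ref))
print(f"worst-case residual after calibration: {max_err:.3f}")
```

A hardware implementation would store `coef` in registers and evaluate the polynomial with fixed-point multiply-accumulate; the fit itself is done once per sensor against reference points.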

  9. Automatic calibration system for analog instruments based on DSP and CCD sensor

    NASA Astrophysics Data System (ADS)

    Lan, Jinhui; Wei, Xiangqin; Bai, Zhenlong

    2008-12-01

    Currently, the calibration of analog measurement instruments is performed mainly by hand, which leaves many problems unsolved. In this paper, an automatic calibration system (ACS) based on a Digital Signal Processor (DSP) and a Charge-Coupled Device (CCD) sensor is developed, and a real-time calibration algorithm is presented. In the ACS, a TI DM643 DSP processes the data received by the CCD sensor and the outcome is displayed on a Liquid Crystal Display (LCD) screen. The algorithm first extracts the pointer region to improve calibration speed, then builds a mathematical model of the pointer to thin it and determine the instrument's reading. Across numerous experiments, a single reading takes no more than 20 milliseconds, compared with several seconds when done manually, and the reading error satisfies the instruments' accuracy requirements. This demonstrates that the automatic calibration system can effectively carry out the calibration of analog measurement instruments.
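
    Once the pointer has been thinned, converting it to a reading reduces to an angle computation and linear interpolation along the scale. A minimal sketch, with a hypothetical gauge geometry (angles measured counter-clockwise from the positive x-axis; not the paper's actual model):

```python
import math

def pointer_reading(pivot, tip, angle_min, angle_max, val_min, val_max):
    """Map a thinned pointer (pivot and tip pixel) to an instrument reading by
    linear interpolation between the scale end-mark angles (degrees, CCW)."""
    ang = math.degrees(math.atan2(tip[1] - pivot[1], tip[0] - pivot[0])) % 360.0
    span = (angle_max - angle_min) % 360.0
    frac = ((ang - angle_min) % 360.0) / span
    return val_min + frac * (val_max - val_min)

# Hypothetical gauge: 0 at 0 deg, full scale 100 at 270 deg; pointer at 135 deg
reading = pointer_reading((0.0, 0.0), (-0.7071, 0.7071), 0.0, 270.0, 0.0, 100.0)
print(round(reading, 2))  # 50.0
```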

  10. Fully automatic three-dimensional visualization of intravascular optical coherence tomography images: methods and feasibility in vivo

    PubMed Central

    Ughi, Giovanni J; Adriaenssens, Tom; Desmet, Walter; D’hooge, Jan

    2012-01-01

    Intravascular optical coherence tomography (IV-OCT) is an imaging modality that can be used for the assessment of intracoronary stents. Recent publications have pointed out that 3D visualizations offer potential advantages over conventional 2D representations. However, 3D imaging still requires a time-consuming manual procedure not suitable for on-line application during coronary interventions. We propose an algorithm for rapid and fully automatic 3D visualization of IV-OCT pullbacks. IV-OCT images are first processed to segment the different structures, which also allows for automatic pullback calibration. Then, according to the segmentation results, the different structures are depicted in different colors to visualize the vessel wall, the stent, and the guide-wire in detail. Final 3D rendering results are obtained with a commercial 3D DICOM viewer. Manual analysis was used as ground truth for the validation of the segmentation algorithms. A correlation value of 0.99 and good limits of agreement (Bland-Altman statistics) were found over 250 images randomly extracted from 25 in vivo pullbacks. Moreover, the 3D renderings were compared to angiography, to pictures of deployed stents made available by the manufacturers, and to conventional 2D imaging, corroborating the visualization results. Computational time for the visualization of an entire data set was ~74 s. The proposed method allows for the on-line use of 3D IV-OCT during percutaneous coronary interventions, potentially enabling treatment optimization. PMID:23243578

  11. Automatic Calibration Method for Driver’s Head Orientation in Natural Driving Environment

    PubMed Central

    Fu, Xianping; Guan, Xiao; Peli, Eli; Liu, Hongbo; Luo, Gang

    2013-01-01

    Gaze tracking is crucial for studying driver attention, detecting fatigue, and improving driver-assistance systems, but it is difficult in natural driving environments because of nonuniform, highly variable illumination and large head movements. Traditional calibrations that require subjects to follow calibrators are cumbersome to implement in daily driving. This paper presents a new automatic calibration method, based on a single camera, for determining head orientation; it uses the side mirrors, the rear-view mirror, the instrument panel, and different zones of the windshield as calibration points. Supported by a self-learning algorithm, the system tracks the head and categorizes the head pose into 12 gaze zones based on facial features. A particle filter is used to estimate the head pose and obtain an accurate gaze zone by updating the calibration parameters. Experimental results show that, after several hours of driving, the automatic calibration method achieves the same accuracy as a manual calibration method without the driver's cooperation. The mean error of the estimated eye gaze was less than 5° in both day and night driving. PMID:24639620

  12. ORBS: A reduction software for SITELLE and SpiOMM data

    NASA Astrophysics Data System (ADS)

    Martin, Thomas

    2014-09-01

    ORBS merges, corrects, transforms, and calibrates interferometric data cubes and produces a spectral cube of the observed region for analysis. It is a fully automatic data-reduction package for use with SITELLE (installed at the Canada-France-Hawaii Telescope) and SpIOMM (a prototype attached to the Observatoire du Mont Mégantic); these imaging Fourier-transform spectrometers obtain a hyperspectral data cube that samples a 12-arc-minute field of view into 4 million visible spectra. ORBS is highly parallelized; its core classes (ORB) have been designed for use in a suite of software packages for data analysis (ORCS and OACS), data simulation (ORUS), and data acquisition (IRIS).

  13. Automatic colorimetric calibration of human wounds

    PubMed Central

    2010-01-01

    Background: Digital photography is now considered an acceptable tool in many clinical domains, e.g. wound care. Although ever higher resolutions are available, reproducibility is still poor and visual comparison of images remains difficult. This is even more the case for measurements performed on such images (colour, area, etc.). The problem is often neglected, and images are freely compared and exchanged without further thought. Methods: The first experiment checked whether camera settings or lighting conditions could negatively affect the quality of colorimetric calibration. Digital images plus a calibration chart were exposed to a variety of conditions. Precision and accuracy of colours after calibration were quantitatively assessed with a probability distribution for perceptual colour differences (dE_ab). The second experiment was designed to assess the impact of the automatic calibration procedure (i.e. chart detection) on real-world measurements. 40 different images of real wounds were acquired and a region of interest was selected in each image. 3 rotated versions of each image were automatically calibrated and colour differences were calculated. Results: In the first experiment, colour differences against spectrophotometric reference measurements gave median dE_ab values of 6.40 for the proper patches of calibrated normal images and 17.75 for uncalibrated images, demonstrating an important improvement in accuracy after calibration. Reproducibility, visualized by the probability distribution of dE_ab errors between two measurements of the patches, had a median of 3.43 dE_ab for all calibrated images and 23.26 dE_ab for all uncalibrated images; restricted to the proper patches of normal calibrated images, the median was only 2.58 dE_ab. Wilcoxon rank-sum tests (p < 0.05) between uncalibrated normal images and calibrated normal images with proper squares yielded p-values of essentially zero, demonstrating a highly significant improvement in reproducibility. In the second experiment, the reproducibility of chart detection during automatic calibration is presented using a probability distribution of dE_ab errors between two measurements of the same ROI. Conclusion: The investigators propose an automatic colour-calibration algorithm that ensures reproducible colour content of digital images. Evidence is provided that images taken with commercially available digital cameras can be calibrated independently of camera settings and illumination features. PMID:20298541
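
    The dE_ab metric used throughout is the CIE76 colour difference: the Euclidean distance between two colours in CIELAB space (a difference of roughly 2.3 is often quoted as a just-noticeable difference). A minimal sketch with invented patch values:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two (L*, a*, b*) triples."""
    return math.dist(lab1, lab2)

# Hypothetical patch: spectrophotometer reference vs calibrated camera measurement
reference = (52.0, 10.0, -6.0)
measured = (50.0, 12.0, -5.0)
de = delta_e_ab(reference, measured)
print(de)  # sqrt(2^2 + 2^2 + 1^2) = 3.0
```

Later colour-difference formulas (CIE94, CIEDE2000) weight the Lab axes differently, but CIE76 is the simple Euclidean form implied by the dE_ab notation here.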

  14. A novel automatic segmentation workflow of axial breast DCE-MRI

    NASA Astrophysics Data System (ADS)

    Besbes, Feten; Gargouri, Norhene; Damak, Alima; Sellami, Dorra

    2018-04-01

    In this paper we propose a novel, fully automatic breast-tissue segmentation process that is independent of expert calibration and contrast. The proposed algorithm is composed of two major steps. The first step detects the breast boundaries, based on image-content analysis and the Moore-neighbour tracing algorithm; Otsu thresholding and a neighbours algorithm are applied as processing steps, and the area external to the breast is then removed to obtain an approximate breast region. The second step delineates the chest wall, treated as the lowest-cost path linking three key points located automatically on the breast: the left and right boundary points, and the middle upper point placed in the sternum region using a statistical method. The minimum-cost-path search problem is solved with Dijkstra's algorithm. Evaluation results reveal the robustness of our process across different breast densities, complex shapes, and challenging cases. The mean overlap between manual segmentation and automatic segmentation with our method is 96.5%. A comparative study shows that our process is competitive with, and faster than, existing methods: the segmentation of 120 slices is achieved in 20.57 ± 5.2 s.
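
    The chest-wall delineation relies on Dijkstra's algorithm for a minimum-cost path. A self-contained sketch on a toy cost grid (in the paper the cost would derive from image features and the endpoints would be the automatically located key points; the grid below is invented):

```python
import heapq

def dijkstra_grid(cost, start, goal):
    """Minimum-cost 8-connected path through a cost grid (cost paid on entering a cell)."""
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue  # stale queue entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < h and 0 <= nc < w:
                    nd = d + cost[nr][nc]
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]

# Toy cost map: a cheap valley along row 1, mimicking a low-cost chest-wall ridge
cost = [[9, 9, 9, 9],
        [1, 1, 1, 1],
        [9, 9, 9, 9]]
path, total = dijkstra_grid(cost, (1, 0), (1, 3))
print(path, total)
```

The path follows the cheap valley, as the chest-wall contour follows low image-cost pixels between the three key points.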

  15. Markerless 3D motion capture for animal locomotion studies

    PubMed Central

    Sellers, William Irvin; Hirasaki, Eishi

    2014-01-01

    Obtaining quantitative data describing the movements of animals is an essential step in understanding their locomotor biology. Outside the laboratory, measuring animal locomotion often relies on video-based approaches, and analysis is hampered by difficulties in calibration and the often limited availability of possible camera positions. It is also usually restricted to two dimensions, an undesirable over-simplification given the essentially three-dimensional nature of many locomotor performances. In this paper we demonstrate a fully three-dimensional approach based on 3D photogrammetric reconstruction using multiple, synchronised video cameras. This approach allows full calibration based on the separation of the individual cameras and works fully automatically with completely unmarked and undisturbed animals. As such it has the potential to revolutionise work carried out on free-ranging animals in sanctuaries and zoological gardens, where ad hoc approaches are essential and access within enclosures is often severely restricted. The paper demonstrates the effectiveness of video-based 3D photogrammetry with examples from primates and birds, discusses the current limitations of the technique, and illustrates the accuracies that can be obtained. All the required software is open source, so this can be a very cost-effective approach, providing a way to obtain data in situations where other approaches would be completely ineffective. PMID:24972869
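
    The core of multi-camera 3D photogrammetric reconstruction is triangulation: given each camera's projection matrix, a point seen in two views is recovered by linear (DLT) triangulation. A minimal two-camera sketch with invented intrinsics and a 1 m baseline (not the paper's actual rig):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two calibrated views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)     # null vector of A is the homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]

# Two hypothetical synchronised cameras, identical intrinsics, 1 m baseline along x
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.3, -0.2, 5.0])   # a point 5 m in front of the cameras
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.round(X_est, 6))
```

With noise-free projections the point is recovered exactly; with real detections, bundle adjustment over many such points refines both the structure and the camera separation used for scale.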

  16. Automatic Calibration of Stereo-Cameras Using Ordinary Chess-Board Patterns

    NASA Astrophysics Data System (ADS)

    Prokos, A.; Kalisperakis, I.; Petsa, E.; Karras, G.

    2012-07-01

    Automation of camera calibration is facilitated by recording coded 2D patterns. Our toolbox for automatic camera calibration using images of simple chess-board patterns is freely available on the Internet, but it is unsuitable for stereo cameras, whose calibration implies recovering the camera geometry and the true-to-scale relative orientation. In contrast to all reported methods, which require additional specific coding to establish an object-space coordinate system, a toolbox for automatic stereo-camera calibration relying on ordinary chess-board patterns is presented here. First, the camera-calibration algorithm is applied to all image pairs of the pattern to extract nodes of known spacing, order them in rows and columns, and estimate two independent camera-parameter sets. The actual node correspondences on stereo pairs remain unknown. Image pairs of a textured 3D scene are exploited to find the fundamental matrix of the stereo camera by applying RANSAC to point matches established with the SIFT algorithm. A node is then selected near the centre of the left image; its match on the right image is assumed to be the node closest to the corresponding epipolar line. This yields matches for all nodes (since these have already been ordered), which should also satisfy the 2D epipolar geometry. Measures are taken to avoid mismatching. With automatically estimated initial orientation values, a bundle adjustment is performed constraining all pairs to a common (scaled) relative orientation. Ambiguities regarding the actual exterior orientations of the stereo camera with respect to the pattern are irrelevant. Results from this automatic method show typical precisions no worse than 1/4 pixel for 640×480 web cameras.
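
    The node-matching step — choosing, for a node in the left image, the right-image node closest to its epipolar line l = F·x — can be sketched as follows. The fundamental matrix here is the ideal one for a rectified (purely translated) pair, and the node coordinates are invented:

```python
import numpy as np

def match_by_epipolar(F, x_left, candidates):
    """Pick the right-image candidate with the smallest point-to-epipolar-line distance."""
    l = F @ np.append(x_left, 1.0)                       # epipolar line (a, b, c)
    d = [abs(l @ np.append(c, 1.0)) / np.hypot(l[0], l[1]) for c in candidates]
    return int(np.argmin(d)), min(d)

# Ideal F for a purely translated (rectified) pair: epipolar lines are horizontal,
# so a left point at row v maps to the right-image line y = v.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
nodes_right = [(120.0, 80.0), (118.0, 57.0), (131.0, 40.0)]
idx, dist = match_by_epipolar(F, (150.0, 57.0), nodes_right)
print(idx, round(dist, 2))
```

Once the central node is matched this way, the already-established row/column ordering propagates the correspondence to every other node.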

  17. Wind Tunnel Force Balance Calibration Study - Interim Results

    NASA Technical Reports Server (NTRS)

    Rhew, Ray D.

    2012-01-01

    Wind tunnel force-balance calibration is performed using a variety of methods and lacks a directly traceable standard of the kind used in most calibration practice (weights and voltmeters). The calibration methods and practices include, but are not limited to, the loading schedule, the load-application hardware, manual and automatic systems, and re-leveling versus non-re-leveling. A study of the balance calibration techniques used by NASA was undertaken to develop metrics for reviewing and comparing results using sample calibrations. The study also includes balances of different designs, both single- and multi-piece. The calibration systems include manual and automatic systems provided by NASA and its vendors. The results to date will be presented along with the techniques for comparing them. In addition, future planned calibrations and investigations based on these results will be described.

  18. Externally Calibrated Parallel Imaging for 3D Multispectral Imaging Near Metallic Implants Using Broadband Ultrashort Echo Time Imaging

    PubMed Central

    Wiens, Curtis N.; Artz, Nathan S.; Jang, Hyungseok; McMillan, Alan B.; Reeder, Scott B.

    2017-01-01

    Purpose: To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. Theory and Methods: A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Results: Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. Conclusion: A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. PMID:27403613

  19. Digital holographic microscopy for detection of Trypanosoma cruzi parasites in fresh blood mounts

    NASA Astrophysics Data System (ADS)

    Romero, G. G.; Monaldi, A. C.; Alanís, E. E.

    2012-03-01

    An off-axis holographic microscope, in a transmission mode, calibrated to automatically detect the presence of Trypanosoma cruzi in blood is developed as an alternative diagnosis tool for Chagas disease. Movements of the microorganisms are detected by measuring the phase shift they produce on the transmitted wave front. A thin layer of blood infected by Trypanosoma cruzi parasites is examined in the holographic microscope, the images of the visual field being registered with a CCD camera. Two consecutive holograms of the same visual field are subtracted point by point and a phase contrast image of the resulting hologram is reconstructed by means of the angular spectrum propagation algorithm. This method enables the measurement of phase distributions corresponding to temporal differences between digital holograms in order to detect whether parasites are present or not. Experimental results obtained using this technique show that it is an efficient alternative that can be incorporated successfully as a part of a fully automatic system for detection and counting of this type of microorganisms.

  20. SCAMP: Automatic Astrometric and Photometric Calibration

    NASA Astrophysics Data System (ADS)

    Bertin, Emmanuel

    2010-10-01

    Astrometric and photometric calibrations have remained the most tiresome step in the reduction of large imaging surveys. SCAMP has been written to address this problem. The program efficiently computes accurate astrometric and photometric solutions for any arbitrary sequence of FITS images in a completely automatic way. SCAMP is released under the GNU General Public License.

  1. Automatic force balance calibration system

    NASA Technical Reports Server (NTRS)

    Ferris, Alice T. (Inventor)

    1995-01-01

    A system for automatically calibrating force balances is provided. The invention uses a reference balance aligned with the balance being calibrated to provide superior accuracy while minimizing the time required to complete the calibration. The reference balance and the test balance are rigidly attached together with closely aligned moment centers. Loads placed on the system affect each balance equally, and the differences in the readings of the two balances can be used to generate the calibration matrix for the test balance. Since the accuracy of the test calibration is determined by the accuracy of the reference balance, and current technology allows reference balances to be calibrated to within ±0.05%, the entire system has an accuracy of ±0.2%. The entire apparatus is relatively small and can be mounted on a movable base for easy transport between test locations. The system can also accept a wide variety of reference balances, allowing calibration under diverse load and size requirements.
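
    Generating a test balance's calibration matrix from paired readings can be sketched as a least-squares fit: the reference balance supplies the "true" applied loads, and the matrix mapping loads to test-balance readings is solved from many load points. The balance model, load schedule, and noise level below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 3-component balance: readings relate to true loads (as reported by
# the reference balance) through an unknown sensitivity/interaction matrix C
C_true = np.array([[1.02, 0.03, -0.01],
                   [0.02, 0.98, 0.04],
                   [-0.01, 0.01, 1.05]])
loads = rng.uniform(-100.0, 100.0, (40, 3))            # 40 applied load points
readings = loads @ C_true.T + rng.normal(0.0, 0.05, (40, 3))

# Solve readings ≈ loads @ C.T for C by least squares over all load points
C_est = np.linalg.lstsq(loads, readings, rcond=None)[0].T
print(np.max(np.abs(C_est - C_true)))
```

Inverting `C_est` then converts raw test-balance readings back to loads in service; real balance calibrations typically add higher-order interaction terms to the same least-squares framework.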

  2. Automatic force balance calibration system

    NASA Technical Reports Server (NTRS)

    Ferris, Alice T. (Inventor)

    1996-01-01

    A system for automatically calibrating force balances is provided. The invention uses a reference balance aligned with the balance being calibrated to provide superior accuracy while minimizing the time required to complete the calibration. The reference balance and the test balance are rigidly attached together with closely aligned moment centers. Loads placed on the system affect each balance equally, and the differences in the readings of the two balances can be used to generate the calibration matrix for the test balance. Since the accuracy of the test calibration is determined by the accuracy of the reference balance, and current technology allows reference balances to be calibrated to within ±0.05%, the entire system has an accuracy of ±0.2%. The entire apparatus is relatively small and can be mounted on a movable base for easy transport between test locations. The system can also accept a wide variety of reference balances, allowing calibration under diverse load and size requirements.

  3. The VIRUS data reduction pipeline

    NASA Astrophysics Data System (ADS)

    Goessl, Claus A.; Drory, Niv; Relke, Helena; Gebhardt, Karl; Grupp, Frank; Hill, Gary; Hopp, Ulrich; Köhler, Ralf; MacQueen, Phillip

    2006-06-01

    The Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) will measure baryonic acoustic oscillations, first discovered in the Cosmic Microwave Background (CMB), to constrain the nature of dark energy by performing a blind search for Ly-α emitting galaxies within a 200 deg^2 field and a redshift bin of 1.8 < z < 3.7. This will be achieved with VIRUS, a wide-field, low-resolution spectrograph with 145 IFUs. The data-reduction pipeline will have to extract ~35,000 spectra per exposure (~5 million per night, i.e. 500 million in total), perform astrometric, photometric, and wavelength calibration, and find and classify objects in the spectra, all fully automatically. We describe our ideas on how to achieve this goal.

  4. COSMOS: Carnegie Observatories System for MultiObject Spectroscopy

    NASA Astrophysics Data System (ADS)

    Oemler, A.; Clardy, K.; Kelson, D.; Walth, G.; Villanueva, E.

    2017-05-01

    COSMOS (Carnegie Observatories System for MultiObject Spectroscopy) reduces multislit spectra obtained with the IMACS and LDSS3 spectrographs on the Magellan Telescopes. It can be used for the quick-look analysis of data at the telescope as well as for pipeline reduction of large data sets. COSMOS is based on a precise optical model of the spectrographs, which allows (after alignment and calibration) an accurate prediction of the location of spectra features. This eliminates the line search procedure which is fundamental to many spectral reduction programs, and allows a robust data pipeline to be run in an almost fully automatic mode, allowing large amounts of data to be reduced with minimal intervention.

  5. The VIRUS Emission Line Detection Recipe

    NASA Astrophysics Data System (ADS)

    Gössl, C. A.; Hopp, U.; Köhler, R.; Grupp, F.; Relke, H.; Drory, N.; Gebhardt, K.; Hill, G.; MacQueen, P.

    2007-10-01

    HETDEX, the Hobby-Eberly Telescope Dark Energy Experiment, will measure the imprint of the baryonic acoustic oscillations on the galaxy population at redshifts of 1.8 < z < 3.7 to constrain the nature of dark energy. The survey will be performed over at least 200 deg^2. The tracer population for this blind search will be Ly-α emitting galaxies, detected through their most prominent emission line. The data-reduction pipeline will extract these emission-line objects from ~35,000 spectra per exposure (5 million per night, i.e. 500 million in total) while performing astrometric, photometric, and wavelength calibration fully automatically. Here we present our ideas on how to find and classify objects even at low signal-to-noise ratios.

  6. Argon Triple-Point Device for Calibration of SPRTs

    NASA Astrophysics Data System (ADS)

    Kołodziej, B.; Manuszkiewicz, H.; Szmyrka-Grzebyk, A.; Lipiński, L.; Kowal, A.; Steur, P. P. M.; Pavese, F.

    2015-03-01

    This paper presents an apparatus for the calibration of long-stem platinum resistance thermometers at the argon triple point, designed at the Institute of Low Temperature and Structural Research, Poland (INTiBS). A hermetically sealed cell, filled at the Istituto Nazionale di Ricerca Metrologica, Italy with high-purity (6N) gas, is the main element of this apparatus. The cell is placed in a cryostat fully immersed in liquid nitrogen. A temperature-controlled shield ensures the quasi-adiabatic conditions needed for proper realization of the phase transition. A system for correcting the temperature distribution along the thermometer well is also implemented. Cell cooling and argon solidification are carried out by filling the thermometer well with liquid nitrogen. A LabVIEW computer program written at INTiBS automatically controls the triple-point realization process. The melting plateau in the apparatus lasts for about 24 h. The melting width for between 20 % and 80 % was mK. The reproducibility of the plateau temperature is better than.

  7. Automatic calibration method for plenoptic camera

    NASA Astrophysics Data System (ADS)

    Luan, Yinsen; He, Xing; Xu, Bing; Yang, Ping; Tang, Guomao

    2016-04-01

    An automatic calibration method is proposed for a microlens-based plenoptic camera. First, all microlens images on the white image are searched and recognized automatically based on digital morphology. Then, the center points of the microlens images are rearranged according to their relative positions. Consequently, the microlens images are located, i.e., the plenoptic camera is calibrated without prior knowledge of the camera parameters. Furthermore, this method is appropriate for all types of microlens-based plenoptic cameras, including the multifocus plenoptic camera, plenoptic cameras with arbitrarily arranged microlenses, and plenoptic cameras with microlenses of different sizes. Finally, we verify our method on raw data from Lytro. The experiments show that our method requires less manual input than previously published methods.

  8. Corner detection and sorting method based on improved Harris algorithm in camera calibration

    NASA Astrophysics Data System (ADS)

    Xiao, Ying; Wang, Yonghong; Dan, Xizuo; Huang, Anqi; Hu, Yue; Yang, Lianxiang

    2016-11-01

    In the traditional Harris corner detection algorithm, the threshold used to eliminate false corners is selected manually. In order to detect corners automatically, an improved algorithm combining Harris with the circular boundary theory of corners is proposed in this paper. After accurate corner coordinates are detected using the Harris and Forstner algorithms, false corners within the chessboard pattern of the calibration plate are eliminated automatically using circular boundary theory. Moreover, a corner sorting method based on an improved calibration plate is proposed to eliminate false background corners and sort the remaining corners in order. Experimental results show that the proposed algorithms eliminate all false corners and sort the remaining corners correctly and automatically.
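    The Harris response underlying the detection step above can be sketched in pure NumPy. This is an illustrative sketch only, with an assumed smoothing window and the commonly used k = 0.04; the automatic threshold selection and circular-boundary elimination of false corners are the paper's contribution and are not reproduced here.

    ```python
    import numpy as np

    def harris_response(img, k=0.04, win=3):
        """Minimal Harris corner response R = det(M) - k*trace(M)^2,
        where M is the structure tensor summed over a win x win window."""
        Iy, Ix = np.gradient(img.astype(float))
        Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

        def box(a):  # windowed sums via 2-D cumulative sums
            p = win // 2
            a = np.pad(a, p, mode="edge")
            c = np.cumsum(np.cumsum(a, axis=0), axis=1)
            c = np.pad(c, ((1, 0), (1, 0)))
            return c[win:, win:] - c[:-win, win:] - c[win:, :-win] + c[:-win, :-win]

        Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
        return Sxx * Syy - Sxy * Sxy - k * (Sxx + Syy) ** 2

    # A single bright square on a dark background: its corners give the
    # strongest positive responses, while flat regions give ~0.
    img = np.zeros((32, 32))
    img[8:24, 8:24] = 1.0
    R = harris_response(img)
    ```

    A corner/edge/flat decision is then made by thresholding R, which is exactly the step the proposed method automates.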

  9. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Paik, Joonki

    2016-01-01

    This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor to degrade the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. PMID:27347978

  10. Externally calibrated parallel imaging for 3D multispectral imaging near metallic implants using broadband ultrashort echo time imaging.

    PubMed

    Wiens, Curtis N; Artz, Nathan S; Jang, Hyungseok; McMillan, Alan B; Reeder, Scott B

    2017-06-01

    To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. Magn Reson Med 77:2303-2309, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  11. Research and Development of Fully Automatic Alien Smoke Stack and Packaging System

    NASA Astrophysics Data System (ADS)

    Yang, Xudong; Ge, Qingkuan; Peng, Tao; Zuo, Ping; Dong, Weifu

    2017-12-01

    To address the low efficiency of manual sorting and packaging at current tobacco distribution centers, a safe, efficient and fully automatic stacking and packaging system for irregularly shaped ("alien") cigarettes was developed. The system adopts PLC control technology, servo control technology, robotics, image recognition and human-computer interaction technology. The characteristics, principles, control process and key technologies of the system are discussed in detail. Installation and commissioning showed that the fully automatic stacking and packaging system performs well and meets the requirements for handling irregularly shaped cigarettes.

  12. Automatic Astrometric and Photometric Calibration with SCAMP

    NASA Astrophysics Data System (ADS)

    Bertin, E.

    2006-07-01

    Astrometric and photometric calibration has remained the most tiresome step in the reduction of large imaging surveys. I present a new software package, SCAMP, which has been written to address this problem. SCAMP efficiently computes accurate astrometric and photometric solutions for any arbitrary sequence of FITS images in a completely automatic way. SCAMP is released under the GNU General Public License.

  13. Automatic Calibration of a Distributed Rainfall-Runoff Model, Using the Degree-Day Formulation for Snow Melting, Within DMIP2 Project

    NASA Astrophysics Data System (ADS)

    Frances, F.; Orozco, I.

    2010-12-01

    This work presents the assessment of the TETIS distributed hydrological model, at an hourly time discretization, in the mountain basins of the American and Carson rivers in the Sierra Nevada (USA), as part of the DMIP2 Project. In TETIS, each cell of the spatial grid conceptualizes the water cycle using six interconnected tanks. The relationships between tanks depend on the case, but in most situations simple linear reservoirs and flow-threshold schemes are used, with excellent results (Vélez et al., 1999; Francés et al., 2002). In particular, within the snow tank, snowmelt is modelled in this work with the simple degree-day method, using spatially constant parameters. The TETIS model includes an automatic calibration module based on the SCE-UA algorithm (Duan et al., 1992; Duan et al., 1994), and the model's effective parameters are organized in a split structure, as presented by Francés and Benito (1995) and Francés et al. (2007). In this way, calibration in TETIS involves up to 9 correction factors (CFs), which globally correct the different parameter maps rather than each parameter's cell values, thus drastically reducing the number of variables to be calibrated. This strategy allows fast and agile modification of the different hydrological processes while preserving the spatial structure of each parameter map. With the snowmelt submodel, automatic model calibration was carried out in three steps, separating the calibration of rainfall-runoff and snowmelt parameters. In the first step, automatic calibration of the CFs over the period 05/20/1990 to 07/31/1990 in the American River (without snow influence) gave a Nash-Sutcliffe Efficiency (NSE) index of 0.92. The three degree-day parameters were then calibrated using all the SNOTEL stations in the American and Carson rivers.
Finally, using the previous calibrations as initial values, the complete calibration in the Carson River for the period 10/01/1992 to 07/31/1993 gave an NSE index of 0.86. The temporal and spatial validation over five periods must be considered excellent for discharges in both rivers (NSEs higher than 0.76) and good for snow distribution (daily spatial coverage errors ranging from -10 to 27%). In conclusion, this work demonstrates: 1. the viability of automatic calibration of distributed models, with the corresponding saving of personal time and maximum exploitation of the available information; 2. the good performance of the degree-day snowmelt formulation, in spite of its simplicity, even at an hourly time discretization.
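The degree-day snowmelt formulation referred to above is simple enough to state directly. A minimal sketch follows, with an illustrative degree-day factor rather than TETIS's calibrated value:

```python
def degree_day_melt(temps_c, ddf=4.0, base_temp=0.0):
    """Classic degree-day snowmelt: melt = DDF * max(T - T_base, 0).

    temps_c  : iterable of air temperatures [deg C], one per time step
    ddf      : degree-day factor [mm / (deg C * day)], illustrative value
    base_temp: threshold temperature [deg C]
    Returns melt depth per step [mm], assuming daily steps.
    """
    return [ddf * max(t - base_temp, 0.0) for t in temps_c]

# Three days at -2, 1 and 3 deg C: melt occurs only above the threshold.
melt = degree_day_melt([-2.0, 1.0, 3.0])
# melt == [0.0, 4.0, 12.0]
```

In a distributed model the same relation is evaluated per cell, with the melt routed into the snow tank's outflow.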

  14. AUTOMATIC CALIBRATING SYSTEM FOR PRESSURE TRANSDUCERS

    DOEpatents

    Amonette, E.L.; Rodgers, G.W.

    1958-01-01

    An automatic system for calibrating a number of pressure transducers is described. The disclosed embodiment of the invention uses a mercurial manometer to measure the air pressure applied to the transducer. A servo system follows the top of the mercury column as the pressure is changed and operates an analog- to-digital converter This converter furnishes electrical pulses, each representing an increment of pressure change, to a reversible counterThe transducer furnishes a signal at each calibration point, causing an electric typewriter and a card-punch machine to record the pressure at the instant as indicated by the counter. Another counter keeps track of the calibration points so that a number identifying each point is recorded with the corresponding pressure. A special relay control system controls the pressure trend and programs the sequential calibration of several transducers.

  15. Calibrating reaction rates for the CREST model

    NASA Astrophysics Data System (ADS)

    Handley, Caroline A.; Christie, Michael A.

    2017-01-01

    The CREST reactive-burn model uses entropy-dependent reaction rates that, until now, have been manually tuned to fit shock-initiation and detonation data in hydrocode simulations. This paper describes the initial development of an automatic method for calibrating CREST reaction-rate coefficients, using particle swarm optimisation. The automatic method is applied to EDC32, to help develop the first CREST model for this conventional high explosive.
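    Particle swarm optimisation itself is straightforward to sketch. The following generic minimal PSO, applied to a toy least-squares "calibration" of two coefficients, is an illustration under assumed hyper-parameters, not the authors' EDC32 setup:

    ```python
    import random

    def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
        """Minimal particle swarm optimiser (illustrative sketch).

        objective: maps a parameter list to a scalar cost
        bounds   : list of (lo, hi) per dimension
        Returns (best_position, best_cost).
        """
        rng = random.Random(seed)
        dim = len(bounds)
        pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pcost = [objective(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pcost[i])
        gbest, gcost = pbest[g][:], pcost[g]

        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    # inertia + cognitive pull + social pull
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
                cost = objective(pos[i])
                if cost < pcost[i]:
                    pbest[i], pcost[i] = pos[i][:], cost
                    if cost < gcost:
                        gbest, gcost = pos[i][:], cost
        return gbest, gcost

    # Toy "calibration": recover coefficients (a, b) of y = a*x + b from data.
    data = [(x, 2.0 * x + 1.0) for x in range(10)]
    sse = lambda p: sum((p[0] * x + p[1] - y) ** 2 for x, y in data)
    best, cost = pso(sse, [(0.0, 5.0), (0.0, 5.0)])
    ```

    In the calibration setting, the objective would instead score the mismatch between hydrocode simulations and shock-initiation data for a given set of reaction-rate coefficients.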

  16. Improving integrity of on-line grammage measurement with traceable basic calibration.

    PubMed

    Kangasrääsiö, Juha

    2010-07-01

    The automatic control of grammage (basis weight) in paper and board production is based upon on-line grammage measurement. Furthermore, the automatic control of other quality variables, such as moisture, ash content and coat weight, may rely on the grammage measurement. The integrity of Kr-85 based on-line grammage measurement systems was studied by performing basic calibrations with traceably calibrated plastic reference standards. The calibrations were performed according to the EN ISO/IEC 17025 standard, which is a requirement for calibration laboratories. The observed relative measurement errors were 3.3% in first-time calibrations at the 95% confidence level. With the traceable basic calibration method, however, these errors can be reduced to under 0.5%, thus improving the integrity of on-line grammage measurements. A standardised algorithm, based on experience from the performed calibrations, is also proposed to ease the adjustment of the different grammage measurement systems. The calibration technique can, in principle, be applied to all beta-radiation based grammage measurements. 2010 ISA. Published by Elsevier Ltd. All rights reserved.
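    For context, Kr-85 beta gauges infer grammage from the exponential attenuation of beta radiation by the sheet. A minimal sketch of the underlying relation follows; the attenuation coefficient and count rates are illustrative assumptions, not values from the paper:

    ```python
    import math

    def grammage_from_transmission(i, i0, mu):
        """Beta-gauge basis-weight estimate from exponential attenuation.

        Model: I = I0 * exp(-mu * w)  =>  w = -ln(I / I0) / mu
        i, i0 : attenuated and unobstructed count rates
        mu    : mass attenuation coefficient [m^2/kg], assumed known
                from calibration against reference standards
        Returns grammage w in kg/m^2 (multiply by 1000 for g/m^2).
        """
        return -math.log(i / i0) / mu

    i0 = 1000.0                        # unobstructed count rate (illustrative)
    i = i0 * math.exp(-2.0 * 0.08)     # counts through 80 g/m^2 of material
    w = grammage_from_transmission(i, i0, mu=2.0)   # ~0.08 kg/m^2
    ```

    Traceable calibration, as in the paper, pins down the effective mu (and any detector non-linearity) with reference standards of known grammage.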

  17. Automatic orientation and 3D modelling from markerless rock art imagery

    NASA Astrophysics Data System (ADS)

    Lerma, J. L.; Navarro, S.; Cabrelles, M.; Seguí, A. E.; Hernández, D.

    2013-02-01

    This paper investigates the use of two detectors and descriptors on image pyramids for automatic image orientation and the generation of 3D models. The detectors and descriptors replace manual measurements and are used to detect, extract and match features across multiple images. The Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) are assessed based on speed, number of features, matched features, and precision in image and object space, depending on the adopted hierarchical matching scheme. The influence of additionally applying Area Based Matching (ABM) with normalised cross-correlation (NCC) and least squares matching (LSM) is also investigated. The pipeline makes use of photogrammetric and computer vision algorithms, aiming at minimal interaction and maximum accuracy from a calibrated camera. Both the exterior orientation parameters and the 3D coordinates in object space are sequentially estimated, combining relative orientation, single space resection and bundle adjustment. The fully automatic image-based pipeline presented herein, which automates the image orientation step for a sequence of terrestrial markerless images, is compared with manual bundle block adjustment and with terrestrial laser scanning (TLS), which serves as ground truth. The benefits of applying ABM after feature-based matching (FBM) are assessed both in image and object space for the 3D modelling of a complex rock art shelter.
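    The NCC measure used in the Area Based Matching step can be sketched in a few lines of NumPy. This is illustrative only; a production matcher works on image pyramids and refines matches to subpixel precision with least squares matching:

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalised cross-correlation of two equally sized patches, in [-1, 1]."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    def best_match(image, tmpl):
        """Exhaustive NCC template search; returns (row, col) of the best score."""
        th, tw = tmpl.shape
        scores = {(r, c): ncc(image[r:r + th, c:c + tw], tmpl)
                  for r in range(image.shape[0] - th + 1)
                  for c in range(image.shape[1] - tw + 1)}
        return max(scores, key=scores.get)

    rng = np.random.default_rng(0)
    tmpl = rng.random((9, 9))
    image = rng.random((20, 20))
    image[5:14, 3:12] = tmpl        # plant the template at row 5, col 3
    loc = best_match(image, tmpl)   # recovers the planted location
    ```

    Because NCC normalises for local mean and contrast, it tolerates the brightness differences typical of multi-view terrestrial imagery.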

  18. Computer-aided endovascular aortic repair using fully automated two- and three-dimensional fusion imaging.

    PubMed

    Panuccio, Giuseppe; Torsello, Giovanni Federico; Pfister, Markus; Bisdas, Theodosios; Bosiers, Michel J; Torsello, Giovanni; Austermann, Martin

    2016-12-01

    To assess the usability of a fully automated fusion imaging engine prototype that matches preinterventional computed tomography with intraoperative fluoroscopic angiography during endovascular aortic repair. From June 2014 to February 2015, all patients treated electively for abdominal and thoracoabdominal aneurysms were enrolled prospectively. Before each procedure, preoperative planning was performed with a fully automated fusion engine prototype based on computed tomography angiography, creating a mesh model of the aorta. In a second step, this three-dimensional dataset was registered with the two-dimensional intraoperative fluoroscopy. The main outcome measure was the applicability of the fully automated fusion engine. Secondary outcomes were freedom from failure of automatic segmentation or of automatic registration, as well as accuracy of the mesh model, measured as deviations from intraoperative angiography in millimeters, where applicable. Twenty-five patients were enrolled in this study. The fusion imaging engine could be used successfully in 92% of the cases (n = 23). Freedom from failure of automatic segmentation was 44% (n = 11). Freedom from failure of automatic registration was 76% (n = 19); the median error of the automatic registration process was 0 mm (interquartile range, 0-5 mm). The fully automated fusion imaging engine was found to be applicable in most cases, although in several cases fully automated data processing was not possible and manual intervention was required. The accuracy of the automatic registration yielded excellent results and promises a useful, simple-to-use technology. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  19. Phantom study and accuracy evaluation of an image-to-world registration approach used with electro-magnetic tracking system for neurosurgery

    NASA Astrophysics Data System (ADS)

    Li, Senhu; Sarment, David

    2015-12-01

    Minimally invasive neurosurgery needs intraoperative imaging updates and a highly efficient image guidance system to facilitate the procedure. An automatic image-guided system used with a compact, mobile intraoperative CT imager is introduced in this work. A tracking frame that can be easily attached to a commercially available skull clamp was designed. With the known geometry of the fiducials and tracking sensor arranged on this rigid frame, fabricated by high-precision 3D printing, an accurate, fully automatic registration method was developed in a simple and low-cost way; the design also helped in estimating the errors from fiducial localization in image space, through image processing, and in patient space, through calibration of the tracking frame. Our phantom study shows a fiducial registration error of 0.348+/-0.028 mm, compared with a manual registration error of 1.976+/-0.778 mm. The system in this study provided robust and accurate image-to-patient registration without interrupting the routine surgical workflow and without requiring any user interaction during the neurosurgery.
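    Point-based (fiducial) registration of the kind evaluated here is classically solved in closed form with the Kabsch algorithm. A minimal sketch follows, with hypothetical coordinates rather than the study's phantom data, together with the root-mean-square fiducial registration error (FRE) metric quoted in the abstract:

    ```python
    import numpy as np

    def rigid_register(src, dst):
        """Least-squares rigid registration (Kabsch algorithm).

        src, dst : (N, 3) arrays of corresponding fiducial coordinates.
        Returns (R, t) such that dst ~= src @ R.T + t.
        """
        sc, dc = src.mean(axis=0), dst.mean(axis=0)
        H = (src - sc).T @ (dst - dc)          # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dc - R @ sc
        return R, t

    def fre(src, dst, R, t):
        """Root-mean-square fiducial registration error after applying (R, t)."""
        resid = dst - (src @ R.T + t)
        return float(np.sqrt((resid ** 2).sum(axis=1).mean()))

    # Synthetic check: a known rotation about z plus a translation.
    theta = 0.5
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
    src = np.random.default_rng(7).random((6, 3))
    dst = src @ Rz.T + np.array([10.0, -4.0, 2.5])
    R, t = rigid_register(src, dst)
    err = fre(src, dst, R, t)   # ~0 for noise-free fiducials
    ```

    With noisy localizations, the FRE becomes nonzero and is the quantity reported (mean +/- std) in phantom studies such as this one.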

  20. System Design, Calibration and Performance Analysis of a Novel 360° Stereo Panoramic Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Blaser, S.; Nebiker, S.; Cavegn, S.

    2017-05-01

    Image-based mobile mapping systems enable the efficient acquisition of georeferenced image sequences, which can later be exploited in cloud-based 3D geoinformation services. In order to provide 360° coverage with accurate 3D measuring capabilities, we present a novel 360° stereo panoramic camera configuration. By using two 360° panorama cameras tilted forward and backward in combination with conventional forward- and backward-looking stereo camera systems, we achieve full 360° multi-stereo coverage. We furthermore developed a fully operational new mobile mapping system based on our proposed approach, which fulfils our high accuracy requirements. We successfully implemented a rigorous sensor and system calibration procedure, which calibrates all stereo systems with superior accuracy compared to previous work. Our study delivered absolute 3D point accuracies in the range of 4 to 6 cm and relative accuracies of 3D distances in the range of 1 to 3 cm. These results were achieved in a challenging urban area. Furthermore, we automatically reconstructed a 3D city model of our study area by employing all captured and georeferenced mobile mapping imagery. The result is a highly detailed and almost complete 3D city model of the street environment.

  1. Automatic multi-camera calibration for deployable positioning systems

    NASA Astrophysics Data System (ADS)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair, using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than that of the manual method, and that the automated calibration can replace the manual calibration.
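    Once the essential matrix between a camera pair has been estimated with the 5-point method, the relative pose follows from the standard SVD factorisation. A minimal sketch is shown below; disambiguating the four candidates by triangulating a point and checking for positive depth (cheirality) is omitted:

    ```python
    import numpy as np

    def decompose_essential(E):
        """Decompose an essential matrix into the four (R, t) pose candidates.

        Standard factorisation: E = U diag(s, s, 0) V^T,
        R in {U W V^T, U W^T V^T}, t = +/- third column of U.
        The physically valid candidate is found by a cheirality check.
        """
        U, _, Vt = np.linalg.svd(E)
        # Enforce proper rotations (det = +1).
        if np.linalg.det(U) < 0:
            U = -U
        if np.linalg.det(Vt) < 0:
            Vt = -Vt
        W = np.array([[0.0, -1.0, 0.0],
                      [1.0,  0.0, 0.0],
                      [0.0,  0.0, 1.0]])
        R1, R2 = U @ W @ Vt, U @ W.T @ Vt
        t = U[:, 2]
        return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
    ```

    Note that t is recovered only up to scale; in a deployable multi-camera system the absolute scale must come from a known baseline or scene measurement.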

  2. Cross modality registration of video and magnetic tracker data for 3D appearance and structure modeling

    NASA Astrophysics Data System (ADS)

    Sargent, Dusty; Chen, Chao-I.; Wang, Yuan-Fang

    2010-02-01

    The paper reports a fully automated, cross-modality sensor data registration scheme between video and magnetic tracker data. This registration scheme is intended for use in computerized imaging systems that model the appearance, structure, and dimensions of human anatomy in three dimensions (3D) from endoscopic videos, particularly colonoscopic videos, for cancer research and clinical practice. The proposed cross-modality calibration procedure operates as follows: before a colonoscopic procedure, the surgeon inserts a magnetic tracker into the working channel of the endoscope or otherwise fixes the tracker's position on the scope. The surgeon then maneuvers the scope-tracker assembly to view a checkerboard calibration pattern from a few different viewpoints for a few seconds. The calibration procedure is then complete, and the relative pose (translation and rotation) between the reference frames of the magnetic tracker and the scope is determined. During the colonoscopic procedure, the readings from the magnetic tracker are used to automatically deduce the pose (both position and orientation) of the scope's reference frame over time, without complicated image analysis. Knowing the scope movement over time then allows us to infer the 3D appearance and structure of the organs and tissues in the scene. While there are other well-established mechanisms for inferring the movement of the camera (scope) from images, they are often sensitive to mistakes in image analysis, error accumulation, and structure deformation. The proposed method, which uses a magnetic tracker to establish the camera motion parameters, thus provides a robust and efficient alternative for 3D model construction. Furthermore, the calibration procedure requires neither special training nor expensive calibration equipment (apart from a camera calibration pattern, a checkerboard, which can be printed on any laser or inkjet printer).

  3. ATLAS fast physics monitoring: TADA

    NASA Astrophysics Data System (ADS)

    Sabato, G.; Elsing, M.; Gumpert, C.; Kamioka, S.; Moyse, E.; Nairz, A.; Eifert, T.; ATLAS Collaboration

    2017-10-01

    The ATLAS experiment at the LHC has been recording data from proton-proton collisions with 13 TeV center-of-mass energy since spring 2015. The collaboration uses a fast physics monitoring framework (TADA) to automatically perform a broad range of fast searches for early signs of new physics and to monitor data quality across the year, with the full analysis-level calibrations applied to the rapidly growing data. TADA is designed to provide fast feedback directly after the collected data have been fully calibrated and processed at the Tier-0. The system can monitor a large range of physics channels, offline data quality and physics performance quantities. TADA output is available on a website accessible to the whole collaboration and is updated twice a day with data from newly processed runs. Hints of potentially interesting physics signals or performance issues identified in this way are reported and followed up by physics or combined performance groups. The note also reports on the technical aspects of TADA: the software structure used to obtain the input TAG files, the framework workflow and structure, and the webpage and its implementation.

  4. A fully automatic processing chain to produce Burn Scar Mapping products, using the full Landsat archive over Greece

    NASA Astrophysics Data System (ADS)

    Kontoes, Charalampos; Papoutsis, Ioannis; Herekakis, Themistoklis; Michail, Dimitrios; Ieronymidi, Emmanuela

    2013-04-01

    Remote sensing tools for the accurate, robust and timely assessment of the damage inflicted by forest wildfires provide information that is of paramount importance to public environmental agencies and related stakeholders before, during and after the crisis. The Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing of the National Observatory of Athens (IAASARS/NOA) has developed a fully automatic single- and/or multi-date processing chain that takes as input archived raw Landsat 4, 5 or 7 images and produces precise diachronic burnt-area polygons and damage assessments over the Greek territory. The methodology consists of three fully automatic stages: 1) the pre-processing stage, where the metadata of the raw images are extracted, followed by the application of the LEDAPS software platform for calibration and mask production and of the Automated Precise Orthorectification Package, developed by NASA, for image geo-registration and orthorectification; 2) the core-BSM (Burn Scar Mapping) processing stage, which incorporates a published classification algorithm based on a series of physical indexes, applies two graph-based filters for noise removal, and groups pixels classified as burnt into clusters before conversion from raster to vector; and 3) the post-processing stage, where the products are thematically refined and enriched using auxiliary GIS layers (underlying land cover/use, administrative boundaries, etc.) and human logic/evidence to suppress false alarms and omission errors. The established processing chain has been successfully applied to the entire archive of Landsat imagery over Greece spanning from 1984 to 2012, which has been collected and managed at IAASARS/NOA. In total, 415 full Landsat frames were processed in the study.
These burn scar mapping products have been generated for the first time at such temporal and spatial extent and are ideal for use in further environmental time-series analyses, in the production of statistical indexes (frequency, geographical distribution and number of fires per prefecture), and in applications including change detection and climate change models, urban planning, and correlation with man-made activities.
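As an example of the kind of physical index such a burnt-area classification can build on, the widely used Normalised Burn Ratio (NBR) is sketched below. This is an illustration only; the chain's actual published index set may differ:

```python
import numpy as np

def nbr(nir, swir):
    """Normalised Burn Ratio: (NIR - SWIR) / (NIR + SWIR), in [-1, 1].

    nir, swir : arrays of surface reflectance (e.g. Landsat TM bands 4 and 7).
    Burnt vegetation depresses NIR and raises SWIR, lowering the NBR.
    """
    nir = np.asarray(nir, dtype=float)
    swir = np.asarray(swir, dtype=float)
    denom = nir + swir
    out = np.zeros_like(denom)
    np.divide(nir - swir, denom, out=out, where=denom != 0)
    return out

def dnbr(nbr_pre, nbr_post):
    """Differenced NBR between pre- and post-fire dates; large positive
    values indicate burn severity."""
    return nbr_pre - nbr_post
```

In a multi-date chain, thresholding the dNBR image (after cloud and water masking) yields candidate burnt pixels that are then filtered and clustered as described above.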

  5. A novel fully automatic scheme for fiducial marker-based alignment in electron tomography.

    PubMed

    Han, Renmin; Wang, Liansan; Liu, Zhiyong; Sun, Fei; Zhang, Fa

    2015-12-01

    Although the topic of fiducial marker-based alignment in electron tomography (ET) has been widely discussed for decades, alignment without human intervention remains a difficult problem. Specifically, the emergence of subtomogram averaging has increased the demand for batch processing during tomographic reconstruction; fully automatic fiducial marker-based alignment is the main technique in this process. However, the lack of an accurate method for detecting and tracking fiducial markers precludes fully automatic alignment. In this paper, we present a novel, fully automatic alignment scheme for ET. Our scheme makes two main contributions. First, we present a series of algorithms to ensure a high recognition rate and precise localization during the detection of fiducial markers. Our proposed solution reduces fiducial marker detection to a sampling and classification problem and further introduces an algorithm to handle the parameter dependence on marker diameter and marker number. Second, we propose a novel algorithm that solves the tracking of fiducial markers by reducing it to an incomplete point set registration problem. Because a global optimization of the point set registration is performed, the result of our tracking is independent of the initial image position in the tilt series, allowing robust tracking of fiducial markers without pre-alignment. The experimental results indicate that our method achieves tracking accuracy almost identical to the best currently obtained with IMOD's semi-automatic scheme. Furthermore, our scheme is fully automatic, depends on fewer parameters (only a rough value of the marker diameter is required) and does not require any manual interaction, opening the possibility of automatic batch processing of electron tomographic reconstruction. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. A hardware-software system for the automation of verification and calibration of oil metering units secondary equipment

    NASA Astrophysics Data System (ADS)

    Boyarnikov, A. V.; Boyarnikova, L. V.; Kozhushko, A. A.; Sekachev, A. F.

    2017-08-01

    This article considers the process of verification (calibration) of the secondary equipment of oil metering units. The purpose of the work is to increase the reliability and reduce the complexity of this process by developing a software and hardware system that provides automated verification and calibration. The hardware part of this complex switches the measuring channels of the verified controller and the reference channels of the calibrator in accordance with the programmed algorithm. The developed software controls the switching of channels, sets values on the calibrator, reads the measured data from the controller, calculates errors and compiles protocols. This system can be used for checking the controllers of the secondary equipment of oil metering units in the automatic verification mode (with an open communication protocol) or in the semi-automatic verification mode (without it). A distinctive feature of the approach is a universal signal switch operating under software control, which can be configured for various verification (calibration) methods, making it possible to cover the entire range of controllers of metering units' secondary equipment. Automatic verification with the hardware and software system shortens the verification time by a factor of 5-10 and increases the reliability of measurements by excluding the influence of the human factor.

  7. The DFMS sensor of ROSINA onboard Rosetta: A computer-assisted approach to resolve mass calibration, flux calibration, and fragmentation issues

    NASA Astrophysics Data System (ADS)

    Dhooghe, Frederik; De Keyser, Johan; Altwegg, Kathrin; Calmonte, Ursina; Fuselier, Stephen; Hässig, Myrtha; Berthelier, Jean-Jacques; Mall, Urs; Gombosi, Tamas; Fiethe, Björn

    2014-05-01

    Rosetta will rendezvous with comet 67P/Churyumov-Gerasimenko in May 2014. The Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) instrument comprises three sensors: the pressure sensor (COPS) and two mass spectrometers (RTOF and DFMS). The double focusing mass spectrometer DFMS is optimized for mass resolution and consists of an ion source, a mass analyser and a detector package operated in analogue mode. The magnetic sector of the analyser provides the mass dispersion needed for use with the position-sensitive microchannel plate (MCP) detector. Ions that hit the MCP release electrons that are recorded digitally using a linear electron detector array with 512 pixels. Raw data for a given commanded mass are obtained as ADC counts as a function of pixel number. We have developed a computer-assisted approach to address the problem of calibrating such raw data.

    Mass calibration: Ions are identified by their mass-over-charge (m/Z) ratio, which requires an accurate correlation of pixel number and m/Z. The m/Z scale depends on the commanded mass and the magnetic field and can be described by an offset of the pixel associated with the commanded mass from the centre of the detector array and a scaling factor. Mass calibration is aided by the built-in gas calibration unit (GCU), which allows one to inject a known gas mixture into the instrument. In a first, fully automatic step of the mass calibration procedure, the calibration uses all GCU spectra and extracts information about the mass peak closest to the centre pixel, since those peaks can be identified unambiguously. This preliminary mass-calibration relation can then be applied to all spectra. Human-assisted identification of additional mass peaks further improves the mass calibration.

    Ion flux calibration: ADC counts per pixel are converted to ion counts per second using the overall gain, the individual pixel gain, and the total data accumulation time. DFMS can perform an internal scan to determine the pixel gain and related detector aging. The software automatically corrects for these effects to calibrate the fluxes. The COPS sensor can be used for an a posteriori calibration of the fluxes.

    Neutral gas number densities: Neutrals are ionized in the ion source before they are transferred to the mass analyser, but during this process fragmentation may occur. Our software allows one to identify which neutrals entered the instrument, given the ion fragments that are detected. First, multiple spectra with a limited mass range are combined to provide an overview of as many ion fragments as possible. We then exploit a fragmentation database to assist in figuring out the relation between entering species and recorded fragments. Finally, using experimentally determined sensitivities, gas number densities are obtained. The instrument characterisation (experimental determination of sensitivities, fragmentation patterns for the most common neutral species, etc.) has been conducted by the consortium using an instrument copy in the University of Bern test facilities during the cruise phase of the mission.
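The two ingredients of the automatic first calibration step can be sketched as follows. The exponential dispersion model, the parameter names, and the 40-pixel search window are illustrative assumptions, not the actual DFMS calibration relation:

```python
import numpy as np

def peak_nearest_center(counts, window=40):
    # The first, fully automatic step uses the mass peak closest to the
    # centre pixel of a GCU spectrum, since it is unambiguous.
    center = len(counts) // 2
    lo, hi = center - window, center + window
    return lo + int(np.argmax(counts[lo:hi]))

def mz_of_pixel(pixel, m_commanded, offset, scale, n_pixels=512):
    # Assumed dispersion model: m/Z equals the commanded mass at pixel
    # (centre + offset) and varies exponentially with the distance from
    # it. The real instrument relation is more involved; this is a sketch.
    center = n_pixels / 2.0
    return m_commanded * np.exp((pixel - (center + offset)) * scale)
```

Fitting `offset` and `scale` against peaks of the known GCU gas mixture would then give the preliminary mass-calibration relation described above.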

  8. Dynamic photogrammetric calibration of industrial robots

    NASA Astrophysics Data System (ADS)

    Maas, Hans-Gerd

    1997-07-01

    Today's developments in industrial robots focus on aims like gain of flexibility, improvement of the interaction between robots and reduction of down-times. Off-line programming techniques are an important means of achieving these goals. In contrast to conventional teach-in robot programming techniques, where sequences of actions are defined step-by-step via remote control on the real object, off-line programming techniques design complete robot (inter-)action programs in a CAD/CAM environment. This places high demands on the geometric accuracy of a robot. While the repeatability of robot poses in the teach-in mode is often better than 0.1 mm, the absolute pose accuracy potential of industrial robots is usually much worse due to tolerances, eccentricities, elasticities, play, wear-out, load, temperature and insufficient knowledge of model parameters for the transformation from poses into robot axis angles. This fact necessitates robot calibration techniques, including the formulation of a robot model describing the kinematics and dynamics of the robot, and a measurement technique to provide reference data. Digital photogrammetry, as an accurate, economical technique with real-time potential, offers itself for this purpose. The paper analyzes the requirements posed to a measurement technique by industrial robot calibration tasks. After an overview of measurement techniques used for robot calibration in the past, a photogrammetric robot calibration system based on off-the-shelf, low-cost hardware components will be shown and results of pilot studies will be discussed. Besides aspects of accuracy, reliability and self-calibration in a fully automatic dynamic photogrammetric system, real-time capabilities are discussed. In the pilot studies, standard deviations of 0.05-0.25 mm in the three coordinate directions could be achieved over a robot work range of 1.7 × 1.5 × 1.0 m³. The real-time capabilities of the technique make it possible to go beyond kinematic robot calibration and perform dynamic robot calibration, as well as photogrammetric on-line control of a robot in action.

  9. Automatically calibrating admittances in KATE's autonomous launch operations model

    NASA Technical Reports Server (NTRS)

    Morgan, Steve

    1992-01-01

    This report documents a 1000-line Symbolics LISP program that automatically calibrates all 15 fluid admittances in KATE's Autonomous Launch Operations (ALO) model. (KATE is Kennedy Space Center's Knowledge-based Autonomous Test Engineer, a diagnosis and repair expert system created for use on the Space Shuttle's various fluid flow systems.) As a new KATE application, the calibrator described here breaks new ground for KSC's Artificial Intelligence Lab by allowing KATE to both control and measure the hardware she supervises. By automating a formerly manual process, the calibrator: (1) saves the ALO model builder untold amounts of labor; (2) enables quick repairs after workmen accidentally adjust ALO's hand valves; and (3) frees the modeler to pursue new KATE applications that previously were too complicated. Also reported are suggestions for enhancing the program: (1) to calibrate ALO's TV cameras, pumps, and sensor tolerances; and (2) to calibrate devices in other KATE models, such as the shuttle's LOX and Environment Control System (ECS).

  10. Least-Squares Camera Calibration Including Lens Distortion and Automatic Editing of Calibration Points

    NASA Technical Reports Server (NTRS)

    Gennery, D. B.

    1998-01-01

    A method is described for calibrating cameras including radial lens distortion, by using known points such as those measured from a calibration fixture. The distortion terms are relative to the optical axis, which is included in the model so that it does not have to be orthogonal to the image sensor plane.

  11. Automatic and robust extrinsic camera calibration for high-accuracy mobile mapping

    NASA Astrophysics Data System (ADS)

    Goeman, Werner; Douterloigne, Koen; Bogaert, Peter; Pires, Rui; Gautama, Sidharta

    2012-10-01

    A mobile mapping system (MMS) is the geoinformation community's answer to the exponentially growing demand for various geospatial data, captured by multiple sensors with increasingly higher accuracies. As mobile mapping technology is pushed to explore its use for various applications on water, rail, or road, the need emerges for an external sensor calibration procedure which is portable, fast and easy to perform. This way, sensors can be mounted and demounted depending on the application requirements without the need for time-consuming calibration procedures. A new methodology is presented to provide a high-quality external calibration of cameras which is automatic, robust and foolproof. The MMS uses an Applanix POSLV420, which is a tightly coupled GPS/INS positioning system. The cameras used are Point Grey color video cameras synchronized with the GPS/INS system. The method uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. Here, a mutual information based image registration technique is studied for automatic alignment of the ranging pole. Finally, a few benchmarking tests done under various lighting conditions prove the methodology's robustness by showing high absolute stereo measurement accuracies of a few centimeters.
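The mutual-information similarity measure underlying such registration can be sketched from the joint intensity histogram of the two images. The bin count and this particular estimator are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # Joint intensity histogram of two equally sized grayscale images,
    # normalized into a joint probability distribution.
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = hist / hist.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal of image a
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal of image b
    nz = p_ab > 0                           # skip log(0) terms
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())
```

Registration then searches the alignment parameters that maximize this measure between the camera image and a template of the ranging pole.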

  12. A Visual Servoing-Based Method for ProCam Systems Calibration

    PubMed Central

    Berry, Francois; Aider, Omar Ait; Mosnier, Jeremie

    2013-01-01

    Projector-camera systems are currently used in a wide field of applications, such as 3D reconstruction and augmented reality, and can provide accurate measurements, depending on the configuration and calibration. Frequently, the calibration task is divided into two steps: camera calibration followed by projector calibration. The latter still poses certain problems that are not easy to solve, such as the difficulty in obtaining a set of 2D–3D points to compute the projection matrix between the projector and the world. Existing methods are either not sufficiently accurate or not flexible. We propose an easy and automatic method to calibrate such systems that consists in projecting a calibration pattern and superimposing it automatically on a known printed pattern. The projected pattern is provided by a virtual camera observing a virtual pattern in an OpenGL model. The projector displays what the virtual camera visualizes. Thus, the projected pattern can be controlled and superimposed on the printed one with the aid of visual servoing. Our experimental results compare favorably with those of other methods considering both usability and accuracy. PMID:24084121

  13. Note: A portable automatic capillary viscometer for transparent and opaque liquids

    NASA Astrophysics Data System (ADS)

    Soltani Ghalehjooghi, A.; Minaei, S.; Gholipour Zanjani, N.; Beheshti, B.

    2017-07-01

    A portable automatic capillary viscometer, equipped with an AVR microcontroller, was designed and developed. The viscometer was calibrated with Certified Reference Material (CRM) s200 and utilized for measurement of kinematic viscosity. A quadratic equation was developed for calibration of the instrument at various temperatures. Also, a model was developed for viscosity determination in terms of the viscometer dimensions. Development of the portable viscometer provides for on-site monitoring of engine oil viscosity.
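A quadratic temperature calibration of the kind described might be fit as below. The CRM readings and the choice of calibrating the instrument constant in the capillary law nu = C(T) * t_efflux are hypothetical illustrations, not the paper's data:

```python
import numpy as np

# Hypothetical calibration data (illustrative values, not the paper's):
# temperature (deg C), measured efflux time (s) for the reference oil,
# and its certified kinematic viscosity (mm^2/s) at each temperature.
T_cal = np.array([20.0, 40.0, 60.0, 80.0])
t_cal = np.array([480.0, 210.0, 110.0, 65.0])
nu_cal = np.array([460.0, 200.0, 104.0, 61.0])

# Capillary law: nu = C(T) * t_efflux. Fit the temperature dependence
# of the instrument constant C with a quadratic, as the abstract describes.
coeffs = np.polyfit(T_cal, nu_cal / t_cal, deg=2)

def kinematic_viscosity(temp_c, efflux_s):
    # Evaluate the quadratic calibration at the measurement temperature.
    return float(np.polyval(coeffs, temp_c) * efflux_s)
```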

  14. Automation of Endmember Pixel Selection in SEBAL/METRIC Model

    NASA Astrophysics Data System (ADS)

    Bhattarai, N.; Quackenbush, L. J.; Im, J.; Shaw, S. B.

    2015-12-01

    The commonly applied surface energy balance for land (SEBAL) and its variant, mapping evapotranspiration (ET) at high resolution with internalized calibration (METRIC) models require manual selection of endmember (i.e. hot and cold) pixels to calibrate sensible heat flux. Current approaches for automating this process are based on statistical methods and do not appear to be robust under varying climate conditions and seasons. In this paper, we introduce a new approach based on simple machine learning tools and search algorithms that provides an automatic and time-efficient way of identifying endmember pixels for use in these models. The fully automated models were applied on over 100 cloud-free Landsat images with each image covering several eddy covariance flux sites in Florida and Oklahoma. Observed land surface temperatures at automatically identified hot and cold pixels were within 0.5% of those from pixels manually identified by an experienced operator (coefficient of determination, R2, ≥ 0.92, Nash-Sutcliffe efficiency, NSE, ≥ 0.92, and root mean squared error, RMSE, ≤ 1.67 K). Daily ET estimates derived from the automated SEBAL and METRIC models were in good agreement with their manual counterparts (e.g., NSE ≥ 0.91 and RMSE ≤ 0.35 mm day-1). Automated and manual pixel selection resulted in similar estimates of observed ET across all sites. The proposed approach should reduce time demands for applying SEBAL/METRIC models and allow for their more widespread and frequent use. This automation can also reduce potential bias that could be introduced by an inexperienced operator and extend the domain of the models to new users.
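A minimal percentile-based endmember search, in the spirit of (but not identical to) the proposed approach, could look like the following. The percentile cutoffs and the NDVI/LST screening rules are illustrative assumptions:

```python
import numpy as np

def find_endmembers(lst, ndvi):
    # Cold candidate: coolest pixel among the most vegetated 5%.
    # Hot candidate: hottest pixel among the least vegetated 5%.
    lst, ndvi = np.asarray(lst).ravel(), np.asarray(ndvi).ravel()
    veg = np.flatnonzero(ndvi >= np.percentile(ndvi, 95))
    bare = np.flatnonzero(ndvi <= np.percentile(ndvi, 5))
    cold = veg[np.argmin(lst[veg])]
    hot = bare[np.argmax(lst[bare])]
    return hot, cold
```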

  15. Twelve automated thresholding methods for segmentation of PET images: a phantom study.

    PubMed

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M

    2012-06-21

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator-dependent and time-consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
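The Ridler algorithm mentioned above (also known as isodata or Ridler-Calvard) iterates the threshold to the midpoint of the two class means until convergence; a minimal sketch:

```python
import numpy as np

def ridler_threshold(img, tol=1e-6, max_iter=256):
    # Start from the global mean, then repeatedly move the threshold
    # to the midpoint of the foreground and background means.
    x = np.asarray(img, dtype=float).ravel()
    t = float(x.mean())
    for _ in range(max_iter):
        fg, bg = x[x > t], x[x <= t]
        if fg.size == 0 or bg.size == 0:
            break
        t_new = 0.5 * (fg.mean() + bg.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t
```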

  16. Twelve automated thresholding methods for segmentation of PET images: a phantom study

    NASA Astrophysics Data System (ADS)

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M.

    2012-06-01

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator-dependent and time-consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.

  17. A data reduction package for multiple object spectroscopy

    NASA Technical Reports Server (NTRS)

    Hill, J. M.; Eisenhamer, J. D.; Silva, D. R.

    1986-01-01

    Experience with fiber-optic spectrometers has demonstrated improvements in observing efficiency for clusters of 30 or more objects, which must in turn be matched by increases in data reduction capability. The Medusa Automatic Reduction System reduces data generated by multiobject spectrometers in the form of two-dimensional images containing 44 to 66 individual spectra, using both software and hardware improvements to efficiently extract the one-dimensional spectra. Attention is given to the ridge-finding algorithm for automatic location of the spectra in the CCD frame. A simultaneous extraction of calibration frames allows an automatic wavelength calibration routine to determine dispersion curves, and both line measurements and cross-correlation techniques are used to determine galaxy redshifts.
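A ridge-finding step of the kind described, locating fiber spectra as local maxima of the cross-dispersion profile, can be sketched as follows. The collapse-and-peak-pick strategy and the minimum-separation parameter are illustrative assumptions, not Medusa's actual algorithm:

```python
import numpy as np

def find_ridges(frame, min_sep=5):
    # Collapse the CCD frame along the dispersion axis to get the
    # spatial profile across fibers, then take local maxima as the
    # centers of the individual spectra.
    profile = np.asarray(frame).sum(axis=1)
    peaks = [i for i in range(1, len(profile) - 1)
             if profile[i] > profile[i - 1] and profile[i] >= profile[i + 1]]
    # Enforce a minimum separation between adjacent ridges.
    out = []
    for p in peaks:
        if not out or p - out[-1] >= min_sep:
            out.append(p)
    return out
```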

  18. Automatic calibration and signal switching system for the particle beam fusion research data acquisition facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyer, W.B.

    1979-09-01

    This report describes both the hardware and software components of an automatic calibration and signal switching system (Autocal) for the data acquisition system of the Sandia particle beam fusion research accelerators Hydra, Proto I, and Proto II. The Autocal hardware consists of off-the-shelf commercial equipment. The various hardware components, special modifications and overall system configuration are described. Special software has been developed to support the Autocal hardware. Software operation and maintenance are described.

  19. 40 CFR Appendix F to Part 60 - Quality Assurance Procedures

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... automatically adjust the data to the corrected calibration values (e.g., microprocessor control) must be... calibration values (e.g., microprocessor control), you must program your PM CEMS to record the unadjusted...

  20. 77 FR 5058 - Agency Information Collection Activities; Submission for OMB Review; Comment Request; Automatic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-01

    ... for OMB Review; Comment Request; Automatic Fire Sensor and Warning Devices Systems; Examination and..., ``Automatic Fire Sensor and Warning Devices Systems,'' to the Office of Management and Budget (OMB) for review... and warning device systems are maintained and calibrated in order to function properly at all times...

  1. Software For Calibration Of Polarimetric SAR Data

    NASA Technical Reports Server (NTRS)

    Van Zyl, Jakob; Zebker, Howard; Freeman, Anthony; Holt, John; Dubois, Pascale; Chapman, Bruce

    1994-01-01

    POLCAL (Polarimetric Radar Calibration) software tool intended to assist in calibration of synthetic-aperture radar (SAR) systems. In particular, calibrates Stokes-matrix-format data produced as standard product by NASA/Jet Propulsion Laboratory (JPL) airborne imaging synthetic aperture radar (AIRSAR). Version 4.0 of POLCAL is upgrade of version 2.0. New options include automatic absolute calibration of 89/90 data, distributed-target analysis, calibration of nearby scenes with corner reflectors, altitude or roll-angle corrections, and calibration of errors introduced by known topography. Reduces crosstalk and corrects phase calibration without use of ground calibration equipment. Written in FORTRAN 77.

  2. Feasibility Study on Fully Automatic High Quality Translation: Volume II. Final Technical Report.

    ERIC Educational Resources Information Center

    Lehmann, Winifred P.; Stachowitz, Rolf

    This second volume of a two-volume report on a fully automatic high quality translation (FAHQT) contains relevant papers contributed by specialists on the topic of machine translation. The papers presented here cover such topics as syntactical analysis in transformational grammar and in machine translation, lexical features in translation and…

  3. Geometry calibration for x-ray equipment in radiation treatment devices and estimation of remaining patient alignment errors

    NASA Astrophysics Data System (ADS)

    Selby, Boris P.; Sakas, Georgios; Walter, Stefan; Stilla, Uwe

    2008-03-01

    Positioning a patient accurately in treatment devices is crucial for radiological treatment, especially if the accuracy advantages of particle beam treatment are to be exploited. To avoid sub-millimeter misalignments, X-ray images acquired from within the device are compared to a CT to compute respective alignment corrections. Unfortunately, deviations of the underlying geometry model for the imaging system degrade the achievable accuracy. We propose an automatic calibration routine, which is based on the geometry of a phantom and its automatic detection in digital radiographs acquired for various geometric device settings during the calibration. The results from the registration of the phantom's X-ray projections and its known geometry are used to update the model of the respective beamlines, which is used to compute the patient alignment correction. The geometric calibration of a beamline takes all nine relevant degrees of freedom into account: detector translations in three directions, detector tilt about three axes, and three possible translations of the X-ray tube. Introducing a stochastic model for the calibration, we are able to predict the patient alignment deviations resulting from inaccuracies inherent to the phantom design and the calibration. Comparisons of the alignment results for a treatment device without calibrated imaging systems and a calibrated device show that an accurate calibration can enhance alignment accuracy.

  4. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of an FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear) E_t largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  5. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE PAGES

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban; ...

    2018-05-01

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of an FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear) E_t largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  6. Automatic energy calibration algorithm for an RBS setup

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva, Tiago F.; Moro, Marcos V.; Added, Nemitala

    2013-05-06

    This work describes a computer algorithm for automatic extraction of the energy calibration parameters from a Rutherford Back-Scattering Spectroscopy (RBS) spectrum. Parameters like the electronic gain, electronic offset and detection resolution (FWHM) of an RBS setup are usually determined using a standard sample. In our case, the standard sample comprises a multi-elemental thin film made of a mixture of Ti-Al-Ta that is analyzed at the beginning of each run at a defined beam energy. A computer program has been developed to extract the calibration parameters automatically from the spectrum of the standard sample. The code evaluates the first derivative of the energy spectrum, locates the trailing edges of the Al, Ti and Ta peaks and fits a first-order polynomial for the energy-channel relation. The detection resolution is determined by fitting the convolution of a pre-calculated theoretical spectrum. To test the code, two years of data have been analyzed and the results compared with the manual calculations done previously, obtaining good agreement.
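The described sequence (first derivative, trailing-edge location, first-order polynomial fit) can be sketched as follows. The edge-picking heuristic here is a deliberate simplification of the actual code, which handles noisy spectra:

```python
import numpy as np

def energy_calibration(spectrum, edge_energies):
    # First derivative of the spectrum; the trailing (high-energy) edges
    # of the standard's peaks appear as strongly negative slopes.
    d = np.diff(np.asarray(spectrum, dtype=float))
    n = len(edge_energies)
    # Simplification: take the n most negative-slope channels as the
    # edge positions (real spectra need smoothing and edge grouping).
    edges = np.sort(np.argsort(d)[:n])
    # Linear energy-channel relation E = gain * channel + offset,
    # pairing edges with the ascending reference edge energies.
    gain, offset = np.polyfit(edges, np.sort(edge_energies), 1)
    return gain, offset
```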

  7. Impact of automatic calibration techniques on HMD life cycle costs and sustainable performance

    NASA Astrophysics Data System (ADS)

    Speck, Richard P.; Herz, Norman E., Jr.

    2000-06-01

    Automatic test and calibration has become a valuable feature in many consumer products--ranging from antilock braking systems to auto-tune TVs. This paper discusses HMDs (Helmet Mounted Displays) and how similar techniques can reduce life cycle costs and increase sustainable performance if they are integrated into a program early enough. Optical ATE (Automatic Test Equipment) is already zeroing distortion in the HMDs and thereby making binocular displays a practical reality. A suitcase sized, field portable optical ATE unit could re-zero these errors in the Ready Room to cancel the effects of aging, minor damage and component replacement. Planning on this would yield large savings through relaxed component specifications and reduced logistic costs. Yet, the sustained performance would far exceed that attained with fixed calibration strategies. Major tactical benefits can come from reducing display errors, particularly in information fusion modules and virtual `beyond visual range' operations. Some versions of the ATE described are in production and examples of high resolution optical test data will be discussed.

  8. 3D Surface Reconstruction and Automatic Camera Calibration

    NASA Technical Reports Server (NTRS)

    Jalobeanu, Andre

    2004-01-01

    Illustrations in this view-graph presentation are presented on a Bayesian approach to 3D surface reconstruction and camera calibration.Existing methods, surface analysis and modeling,preliminary surface reconstruction results, and potential applications are addressed.

  9. SU-C-201-04: Quantification of Perfusion Heterogeneity Based On Texture Analysis for Fully Automatic Detection of Ischemic Deficits From Myocardial Perfusion Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Y; Huang, H; Su, T

    Purpose: Texture-based quantification of image heterogeneity has been a popular topic for imaging studies in recent years. As previous studies mainly focus on oncological applications, we report our recent efforts of applying such techniques to cardiac perfusion imaging. A fully automated procedure has been developed to perform texture analysis for measuring the image heterogeneity. Clinical data were used to evaluate the preliminary performance of such methods.

    Methods: Myocardial perfusion images of Thallium-201 scans were collected from 293 patients with suspected coronary artery disease. Each subject underwent a Tl-201 scan and a percutaneous coronary intervention (PCI) within three months. The PCI result was used as the gold standard of coronary ischemia of more than 70% stenosis. Each Tl-201 scan was spatially normalized to an image template for fully automatic segmentation of the LV. The segmented voxel intensities were then carried into the texture analysis with our open-source software Chang Gung Image Texture Analysis toolbox (CGITA). To evaluate the clinical performance of the image heterogeneity for detecting coronary stenosis, receiver operating characteristic (ROC) analysis was used to compute the overall accuracy, sensitivity and specificity as well as the area under the curve (AUC). These indices were compared to those obtained from the commercially available semi-automatic software QPS.

    Results: With the fully automatic procedure to quantify heterogeneity from Tl-201 scans, we were able to achieve a good discrimination with good accuracy (74%), sensitivity (73%), specificity (77%) and AUC of 0.82. Such performance is similar to that obtained from the semi-automatic QPS software, which gives a sensitivity of 71% and specificity of 77%.

    Conclusion: Based on fully automatic procedures of data processing, our preliminary data indicate that the image heterogeneity of myocardial perfusion imaging can provide useful information for automatic determination of myocardial ischemia.
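The AUC reported by such an ROC analysis can be computed directly from per-patient scores and outcomes via the rank-sum (Mann-Whitney) identity; a generic sketch, not the study's software:

```python
import numpy as np

def roc_auc(scores, labels):
    # Rank all scores (average ranks over ties), then apply the
    # Mann-Whitney identity: AUC = (R_pos - n_pos(n_pos+1)/2)/(n_pos*n_neg).
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    for v in np.unique(scores):          # average ranks for tied scores
        tie = scores == v
        ranks[tie] = ranks[tie].mean()
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return float((ranks[labels].sum() - n_pos * (n_pos + 1) / 2)
                 / (n_pos * n_neg))
```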

  10. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    NASA Astrophysics Data System (ADS)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique to image cancer in vivo. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies were validated in ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice; 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and a non-homogeneous uptake) consisted in the combined use of commercially available standard anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate.
    The proposed segmentation algorithm was feasible in a clinical context and showed good accuracy both in ideal and in realistic conditions.
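A threshold-plus-k-means scheme in the spirit of the described algorithm can be sketched as follows; the fixed fraction and the 1-D two-cluster background estimate are illustrative assumptions, not the paper's calibrated method:

```python
import numpy as np

def segment_mtv(img, frac=0.42, n_iter=50):
    # Estimate the background by 1-D two-means clustering of voxel
    # intensities, then keep voxels above bg + frac * (max - bg).
    x = np.asarray(img, dtype=float).ravel()
    c = np.array([x.min(), x.max()])          # initial cluster centers
    for _ in range(n_iter):
        assign = np.abs(x[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(assign == k):
                c[k] = x[assign == k].mean()
    background = c.min()
    thr = background + frac * (x.max() - background)
    return np.asarray(img) > thr
```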

  11. Feasibility Study on Fully Automatic High Quality Translation: Volume I. Final Technical Report.

    ERIC Educational Resources Information Center

    Lehmann, Winifred P.; Stachowitz, Rolf

    The object of this theoretical inquiry is to examine the controversial issue of a fully automatic high quality translation (FAHQT) in the light of past and projected advances in linguistic theory and hardware/software capability. This first volume of a two-volume report discusses the requirements of translation and aspects of human and machine…

  12. VSHEC—A program for the automatic spectrum calibration

    NASA Astrophysics Data System (ADS)

    Zlokazov, V. B.; Utyonkov, V. K.; Tsyganov, Yu. S.

    2013-02-01

    Calibration is the transformation of the output channels of a measuring device into physical values (energies, times, angles, etc.). If done manually, it is a labor- and time-consuming procedure even when only a few detectors are used. The situation changes appreciably, however, when calibration of multi-detector systems is required, where the number of registering devices extends to hundreds (Tsyganov et al. (2004) [1]). Calibration is further complicated by the fact that the needed pivotal channel numbers must be determined from peak-like distributions. A peak distribution is an informal pattern, so a pattern-recognition procedure must be employed to eliminate operator interference. Automatic calibration is the determination of the calibration curve parameters on the basis of a list of reference quantities and data that are partially characterized by these quantities (energies, angles, etc.). The program allows the physicist to perform the calibration of spectrometric detectors in both cases: that of a single tract and that of many. Program summary: Program title: VSHEC. Catalogue identifier: AENN_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENN_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 6403. No. of bytes in distributed program, including test data, etc.: 325847. Distribution format: tar.gz. Programming language: DELPHI-5 and higher. Computer: Any IBM PC compatible. Operating system: Windows XX. Classification: 2.3, 4.9. Nature of problem: Automatic conversion of detector channels into their energy equivalents. 
Solution method: Automatic decomposition of a spectrum into geometric figures such as peaks and an envelope of peaks from below, estimation of peak centers, and a search for the maximum peak-center subsequence that matches the reference energies in the statistically most plausible way. Running time: On a Celeron (R) (CPU 2.66 GHz) the running time is dominated by the dialog via the visual interface; pure computation takes less than 1 s for the test run.
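    Once peak centers (in channels) have been matched to reference energies, the final step of such a calibration reduces to fitting a calibration curve. A minimal sketch of the linear case, E = a * channel + b, fitted by least squares (an assumed illustration, not the VSHEC source):

```python
import numpy as np

def fit_calibration(peak_channels, reference_energies):
    """Least-squares linear calibration E = a*channel + b from matched
    peak centers and their reference energies."""
    a, b = np.polyfit(peak_channels, reference_energies, deg=1)
    return a, b

def channel_to_energy(channel, a, b):
    """Convert a channel number to its energy equivalent."""
    return a * channel + b
```

Higher-order polynomial calibration curves follow the same pattern by raising `deg`.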

  13. Computer Vision Assisted Virtual Reality Calibration

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.

  14. Calibration of a parsimonious distributed ecohydrological daily model in a data-scarce basin by exclusively using the spatio-temporal variation of NDVI

    NASA Astrophysics Data System (ADS)

    Ruiz-Pérez, Guiomar; Koch, Julian; Manfreda, Salvatore; Caylor, Kelly; Francés, Félix

    2017-12-01

    Ecohydrological modeling studies in developing countries, such as sub-Saharan Africa, often face the problem of extensive parametrical requirements and limited available data. Satellite remote sensing data may be able to fill this gap, but require novel methodologies to exploit their spatio-temporal information that could potentially be incorporated into model calibration and validation frameworks. The present study tackles this problem by suggesting an automatic calibration procedure, based on the empirical orthogonal function, for distributed ecohydrological daily models. The procedure is tested with the support of remote sensing data in a data-scarce environment - the upper Ewaso Ngiro river basin in Kenya. In the present application, the TETIS-VEG model is calibrated using only NDVI (Normalized Difference Vegetation Index) data derived from MODIS. The results demonstrate that (1) satellite data of vegetation dynamics can be used to calibrate and validate ecohydrological models in water-controlled and data-scarce regions, (2) the model calibrated using only satellite data is able to reproduce both the spatio-temporal vegetation dynamics and the observed discharge at the outlet and (3) the proposed automatic calibration methodology works satisfactorily and it allows for a straightforward incorporation of spatio-temporal data into the calibration and validation framework of a model.
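    The empirical orthogonal function (EOF) decomposition that underpins the proposed calibration can be sketched with a singular value decomposition of the space-time NDVI matrix. This is an assumed minimal illustration, not the authors' implementation; rows are time steps and columns are grid cells.

```python
import numpy as np

def eof_decomposition(field, n_modes=3):
    """Return leading spatial EOF patterns, their temporal coefficients,
    and the fraction of variance each mode explains."""
    # remove the temporal mean at each grid cell to work with anomalies
    anomalies = field - field.mean(axis=0, keepdims=True)
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    spatial = vt[:n_modes]                    # EOF spatial patterns
    temporal = u[:, :n_modes] * s[:n_modes]   # principal-component series
    explained = (s ** 2) / np.sum(s ** 2)
    return spatial, temporal, explained[:n_modes]
```

Comparing the leading simulated and observed EOFs gives a spatial-pattern-aware objective for calibration, in the spirit described above.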

  15. Intelligent error correction method applied on an active pixel sensor based star tracker

    NASA Astrophysics Data System (ADS)

    Schmidt, Uwe

    2005-10-01

    Star trackers are opto-electronic sensors used on board satellites for autonomous inertial attitude determination. In recent years star trackers have become more and more important among the attitude and orbit control system (AOCS) sensors. High-performance star trackers are still based on charge coupled device (CCD) optical camera heads. The active pixel sensor (APS) technology, introduced in the early 1990s, now allows the beneficial replacement of CCD detectors by APS detectors with respect to performance, reliability, power, mass and cost. The company's heritage in star tracker design started in the early 1980s with the launch of the world's first fully autonomous star tracker system, ASTRO1, to the Russian MIR space station. Jena-Optronik recently developed an active pixel sensor based autonomous star tracker, "ASTRO APS", as successor of the CCD based star tracker product series ASTRO1, ASTRO5, ASTRO10 and ASTRO15. Key features of the APS detector technology are true xy-address random access, multiple-windowing read-out and on-chip signal processing including analogue-to-digital conversion. These features can be used for robust star tracking at high slew rates and under adverse conditions such as stray light and solar flare induced single event upsets. A special algorithm has been developed to manage the typical APS detector error contributors, namely fixed pattern noise (FPN), dark signal non-uniformity (DSNU) and white spots. The algorithm works fully autonomously and automatically adapts to, e.g., increasing DSNU and newly appearing white spots without ground maintenance or re-calibration. In contrast to conventional correction methods, the described algorithm does not require memory for calibration data such as full-image-sized calibration data sets. 
The presented algorithm for managing the typical APS detector error contributors is a key element in the design of star trackers for long-term satellite applications such as geostationary telecom platforms.

  16. Research on calibration error of carrier phase against antenna arraying

    NASA Astrophysics Data System (ADS)

    Sun, Ke; Hou, Xiaomin

    2016-11-01

    A key technical difficulty of uplink antenna arraying is that the signals from the individual antennas cannot be automatically aligned at a deep-space target. The far-field power-combining gain is directly determined by the accuracy of the carrier phase calibration, so the entire arraying system must be analyzed in order to improve that accuracy. This paper analyzes the factors affecting the carrier phase calibration error of an uplink antenna arraying system, including phase measurement and equipment errors, uplink channel phase-shift errors, position errors of the ground antennas, calibration receiver and target spacecraft, and errors due to atmospheric turbulence disturbances. The spatial and temporal autocorrelation model of atmospheric disturbances is discussed. Because the antennas of the array share no common reference signal for continuous calibration, the system must be calibrated periodically, with calibration referred to communication with one or more spacecraft over a certain period. Since deep-space targets cannot automatically align the multiplexed received signal, the alignment must be performed in advance on the ground. The data show that, with existing technology, the error can be controlled within the range demanded by the required carrier phase calibration accuracy, and the total error can be kept within a reasonable range.

  17. Extrinsic Calibration of a Laser Galvanometric Setup and a Range Camera.

    PubMed

    Sels, Seppe; Bogaerts, Boris; Vanlanduit, Steve; Penne, Rudi

    2018-05-08

    Currently, galvanometric scanning systems (like the one used in a scanning laser Doppler vibrometer) rely on a planar calibration procedure between a two-dimensional (2D) camera and the laser galvanometric scanning system to automatically aim a laser beam at a particular point on an object. In the case of nonplanar or moving objects, this calibration is no longer sufficiently accurate. In this work, a three-dimensional (3D) calibration procedure that uses a 3D range sensor is proposed. The 3D calibration is valid for all types of objects and retains its accuracy when objects are moved between subsequent measurement campaigns. The proposed 3D calibration uses a Non-Perspective-n-Point (NPnP) problem solution. The 3D range sensor is used to calculate the position of the object under test relative to the laser galvanometric system. With this extrinsic calibration, the laser galvanometric scanning system can automatically aim a laser beam at this object. In experiments, the mean accuracy of aiming the laser beam at an object is below 10 mm for 95% of the measurements. This achieved accuracy is mainly determined by the accuracy and resolution of the 3D range sensor. The new calibration method is significantly better than the original 2D calibration method, which in our setup achieves errors below 68 mm for 95% of the measurements.

  18. Fully automatic cervical vertebrae segmentation framework for X-ray images.

    PubMed

    Al Arif, S M Masudur Rahman; Knapp, Karen; Slabaugh, Greg

    2018-04-01

    The cervical spine is a highly flexible anatomy and therefore vulnerable to injuries. Unfortunately, a large number of injuries in lateral cervical X-ray images remain undiagnosed due to human errors. Computer-aided injury detection has the potential to reduce the risk of misdiagnosis. Towards building an automatic injury detection system, in this paper, we propose a deep learning-based fully automatic framework for segmentation of cervical vertebrae in X-ray images. The framework first localizes the spinal region in the image using a deep fully convolutional neural network. Then vertebra centers are localized using a novel deep probabilistic spatial regression network. Finally, a novel shape-aware deep segmentation network is used to segment the vertebrae in the image. The framework can take an X-ray image and produce a vertebrae segmentation result without any manual intervention. Each block of the fully automatic framework has been trained on a set of 124 X-ray images and tested on another 172 images, all collected from real-life hospital emergency rooms. A Dice similarity coefficient of 0.84 and a shape error of 1.69 mm have been achieved. Copyright © 2018 Elsevier B.V. All rights reserved.
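    The Dice similarity coefficient reported above is a standard overlap measure between a predicted and a ground-truth segmentation mask; a minimal reference computation (not the authors' code) looks like this:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), in [0, 1]; 1 means perfect overlap."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # convention: two empty masks count as perfect agreement
    return 2.0 * intersection / denom if denom else 1.0
```

A value of 0.84, as achieved by the framework, means the predicted vertebra masks overlap substantially with the expert annotations.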

  19. Automatic Setting Procedure for Exoskeleton-Assisted Overground Gait: Proof of Concept on Stroke Population

    PubMed Central

    Gandolla, Marta; Guanziroli, Eleonora; D'Angelo, Andrea; Cannaviello, Giovanni; Molteni, Franco; Pedrocchi, Alessandra

    2018-01-01

    Stroke-related locomotor impairments are often associated with abnormal timing and intensity of recruitment of the affected and non-affected lower limb muscles. Restoring proper lower-limb muscle activation is a key factor in facilitating recovery of gait capacity and performance, and in reducing maladaptive plasticity. Ekso is a wearable powered exoskeleton robot able to support over-ground gait training. The user controls the exoskeleton by triggering each single step during the gait cycle. The fine-tuning of the exoskeleton control system is crucial: it is set according to the residual functional abilities of the patient, and it needs to ensure that the powered gait of the lower limbs is as physiological as possible. This work focuses on the definition of an automatic calibration procedure able to detect the best Ekso setting for each patient. EMG activity has been recorded from the Tibialis Anterior, Soleus, Rectus Femoris, and Semitendinosus muscles in a group of 7 healthy controls and 13 neurological patients. EMG signals have been processed so as to obtain muscle activation patterns. The mean muscle activation pattern derived from the controls cohort has been set as the reference. The developed automatic calibration procedure requires the patient to perform overground walking trials supported by the exoskeleton while the parameter settings are changed. The Gait Metric index is calculated for each trial: the closer the performance is to the normative muscle activation pattern, in terms of both relative amplitude and timing, the higher the Gait Metric index. The trial with the best Gait Metric index corresponds to the best parameter set. It has to be noted that the automatic computational calibration procedure is based on the same number of overground walking trials and the same experimental set-up as the current manual calibration procedure. The proposed approach supports the rehabilitation team in the setting procedure. 
It has been demonstrated to be robust, and to be in agreement with the current gold standard (i.e., manual calibration performed by an expert engineer). The use of a graphical user interface is a promising tool for the effective use of an automatic procedure in a clinical context. PMID:29615890

  20. Automatic Setting Procedure for Exoskeleton-Assisted Overground Gait: Proof of Concept on Stroke Population.

    PubMed

    Gandolla, Marta; Guanziroli, Eleonora; D'Angelo, Andrea; Cannaviello, Giovanni; Molteni, Franco; Pedrocchi, Alessandra

    2018-01-01

    Stroke-related locomotor impairments are often associated with abnormal timing and intensity of recruitment of the affected and non-affected lower limb muscles. Restoring proper lower-limb muscle activation is a key factor in facilitating recovery of gait capacity and performance, and in reducing maladaptive plasticity. Ekso is a wearable powered exoskeleton robot able to support over-ground gait training. The user controls the exoskeleton by triggering each single step during the gait cycle. The fine-tuning of the exoskeleton control system is crucial: it is set according to the residual functional abilities of the patient, and it needs to ensure that the powered gait of the lower limbs is as physiological as possible. This work focuses on the definition of an automatic calibration procedure able to detect the best Ekso setting for each patient. EMG activity has been recorded from the Tibialis Anterior, Soleus, Rectus Femoris, and Semitendinosus muscles in a group of 7 healthy controls and 13 neurological patients. EMG signals have been processed so as to obtain muscle activation patterns. The mean muscle activation pattern derived from the controls cohort has been set as the reference. The developed automatic calibration procedure requires the patient to perform overground walking trials supported by the exoskeleton while the parameter settings are changed. The Gait Metric index is calculated for each trial: the closer the performance is to the normative muscle activation pattern, in terms of both relative amplitude and timing, the higher the Gait Metric index. The trial with the best Gait Metric index corresponds to the best parameter set. It has to be noted that the automatic computational calibration procedure is based on the same number of overground walking trials and the same experimental set-up as the current manual calibration procedure. The proposed approach supports the rehabilitation team in the setting procedure. 
It has been demonstrated to be robust, and to be in agreement with the current gold standard (i.e., manual calibration performed by an expert engineer). The use of a graphical user interface is a promising tool for the effective use of an automatic procedure in a clinical context.

  1. Drift-insensitive distributed calibration of probe microscope scanner in nanometer range: Virtual mode

    NASA Astrophysics Data System (ADS)

    Lapshin, Rostislav V.

    2016-08-01

    A method of distributed calibration of a probe microscope scanner is suggested. The main idea consists in a search for a net of local calibration coefficients (LCCs) in the process of automatic measurement of a standard surface, whereby each point of the movement space of the scanner can be characterized by a unique set of scale factors. Feature-oriented scanning (FOS) methodology is used as a basis for implementation of the distributed calibration, making it possible to exclude in situ the negative influence of thermal drift, creep and hysteresis on the obtained results. With the calibration database, all the spatial systematic distortions caused by nonlinearity, nonorthogonality and spurious crosstalk couplings of the microscope scanner piezomanipulators can be corrected in one procedure. To provide high precision of spatial measurements in the nanometer range, the calibration is carried out using natural standards: the constants of the crystal lattice. One of the useful modes of the developed calibration method is a virtual mode. In the virtual mode, instead of measuring a real surface of the standard, the calibration program performs a surface image "measurement" of the standard, obtained earlier using conventional raster scanning. The virtual mode permits simulation of the calibration process and detailed analysis of the raster distortions occurring in both conventional and counter surface scanning. Moreover, the mode allows estimation of the thermal drift and creep velocities acting during surface scanning. Virtual calibration makes possible automatic characterization of a surface by scanning probe microscopy (SPM).

  2. A novel expert system for objective masticatory efficiency assessment

    PubMed Central

    2018-01-01

    Most of the tools and diagnosis models of Masticatory Efficiency (ME) are not well documented or are severely limited to simple image processing approaches. This study presents a novel expert system for ME assessment based on automatic recognition of mixture patterns of masticated two-coloured chewing gums, using a combination of computational intelligence and image processing techniques. The hypotheses tested were that the proposed system could accurately relate specimens to the number of chewing cycles, and that it could identify differences between the mixture patterns of edentulous individuals before and after complete denture treatment. This study enrolled 80 fully dentate adults (41 females and 39 males, 25 ± 5 years of age) as the reference population, and 40 edentulous adults (21 females and 19 males, 72 ± 8.9 years of age) for the testing group. The system was calibrated using the features extracted from 400 samples covering 0, 10, 15, and 20 chewing cycles. The calibrated system was used to automatically analyse and classify a set of 160 specimens retrieved from individuals in the testing group in two appointments. The ME was then computed as the predicted number of chewing strokes that a healthy reference individual would need to achieve a similar degree of mixture, measured against the real number of cycles applied to the specimen. The trained classifier obtained a Matthews Correlation Coefficient score of 0.97. ME measurements showed almost perfect agreement considering pre- and post-treatment appointments separately (κ ≥ 0.95). Wilcoxon signed-rank test showed that complete denture treatment for edentulous patients elicited a statistically significant increase in ME measurements (Z = -2.31, p < 0.01). We conclude that the proposed expert system proved able to reliably and accurately identify mixture patterns and provided useful ME measurements. PMID:29385165

  3. Parameter estimation procedure for complex non-linear systems: calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch.

    PubMed

    Abusam, A; Keesman, K J; van Straten, G; Spanjers, H; Meinema, K

    2001-01-01

    When applied to large simulation models, the process of parameter estimation is also called calibration. Calibration of complex non-linear systems, such as activated sludge plants, is often not an easy task. On the one hand, manual calibration of such complex systems is usually time-consuming, and its results are often not reproducible. On the other hand, conventional automatic calibration methods are not always straightforward and often hampered by local minima problems. In this paper a new straightforward and automatic procedure, which is based on the response surface method (RSM) for selecting the best identifiable parameters, is proposed. In RSM, the process response (output) is related to the levels of the input variables in terms of a first- or second-order regression model. Usually, RSM is used to relate measured process output quantities to process conditions. However, in this paper RSM is used for selecting the dominant parameters, by evaluating parameters sensitivity in a predefined region. Good results obtained in calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch proved that the proposed procedure is successful and reliable.
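    The response-surface idea can be sketched as follows: a second-order regression model relates the model output to the parameter levels, and the fitted linear-term magnitudes indicate which parameters dominate in the predefined region. This is an assumed two-parameter illustration, not the paper's implementation; the parameter names are hypothetical.

```python
import numpy as np

def fit_response_surface(theta, y):
    """Fit y ≈ b0 + b1*t1 + b2*t2 + b12*t1*t2 + b11*t1^2 + b22*t2^2
    by least squares; theta has one parameter set per row."""
    t1, t2 = theta[:, 0], theta[:, 1]
    X = np.column_stack([np.ones_like(t1), t1, t2, t1 * t2, t1**2, t2**2])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs  # [b0, b1, b2, b12, b11, b22]

def rank_parameters(coeffs):
    """Rank the two parameters by the magnitude of their linear effects,
    most influential first."""
    return np.argsort(-np.abs(coeffs[1:3]))
```

The most sensitive parameters by this ranking would then be retained as the best identifiable ones for calibration.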

  4. Automatic calibration and control system for a combined oxygen and combustibles analyzer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woolbert, G.D.; Jewett, S.Y.; Robertson, J.W. Jr.

    1989-08-01

    This patent describes an automatic, periodically calibrating system for continuous output of calibrated signals from a combined oxygen and combustibles analyzer. It comprises: a combined oxygen and combustibles analyzer for sensing a level of oxygen and a level of combustibles in a volatile atmosphere and for producing a first sample signal indicative of the oxygen level and a second sample signal indicative of the combustibles level; means for introducing zero and span calibration test gases into the analyzer; and means for periodically calibrating the analyzer. The latter includes: a data control unit; a timer unit; a mechanical unit; means for calculating zero and span values for oxygen and combustibles; means for comparing the calculated zero and span values for oxygen and combustibles to the preset alarm limits for oxygen and combustibles; means for activating an operator alarm; means for calculating oxygen and combustibles drift adjustments; a memory unit; and means for applying the oxygen and combustibles drift adjustments concurrently to the first and second sample signals, according to a predetermined mathematical relationship, to obtain calibrated output signals indicative of the oxygen and combustibles levels in the volatile atmosphere.

  5. A portable foot-parameter-extracting system

    NASA Astrophysics Data System (ADS)

    Zhang, MingKai; Liang, Jin; Li, Wenpan; Liu, Shifan

    2016-03-01

    In order to solve the problem of automatic foot measurement in garment customization, a new automatic foot-parameter-extracting system based on stereo vision, photogrammetry and heterodyne multiple-frequency phase-shift technology is proposed and implemented. The key technologies applied in the system are studied, including calibration of the projector, alignment of point clouds, and foot measurement. Firstly, a new projector calibration algorithm based on a plane model has been put forward to get the initial calibration parameters, and a feature-point detection scheme for the calibration board image is developed. Then, an almost perfect match of the two point clouds is achieved by performing a first alignment using the Sampled Consensus - Initial Alignment algorithm (SAC-IA) and refining the alignment using the Iterative Closest Point algorithm (ICP). Finally, the approaches used for foot-parameter extraction and the system scheme are presented in detail. Experimental results show that the RMS error of the calibration result is 0.03 pixel, and the foot-parameter-extracting experiment shows the feasibility of the extracting algorithm. Compared with the traditional measurement method, the system is more portable, accurate and robust.
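    The ICP refinement step mentioned above can be sketched in a few lines: repeatedly match each source point to its nearest destination point, then solve for the rigid transform (Kabsch algorithm) that best maps the matches. This is a brute-force minimal sketch, not the SAC-IA + ICP pipeline of the paper.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """SVD-based least-squares rotation R and translation t mapping src
    onto dst, given row-by-row point correspondences (Kabsch algorithm)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=20):
    """Minimal ICP: brute-force nearest-neighbour matching followed by a
    rigid-transform update, repeated for a fixed number of iterations."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[np.argmin(d, axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

Production implementations (e.g. in PCL, which also provides SAC-IA) replace the brute-force matching with k-d trees and add convergence and outlier-rejection criteria.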

  6. Automatic Implementation of Ttethernet-Based Time-Triggered Avionics Applications

    NASA Astrophysics Data System (ADS)

    Gorcitz, Raul Adrian; Carle, Thomas; Lesens, David; Monchaux, David; Potop-Butucaruy, Dumitru; Sorel, Yves

    2015-09-01

    The design of safety-critical embedded systems such as those used in avionics still involves largely manual phases. But in avionics the definition of standard interfaces embodied in standards such as ARINC 653 or TTEthernet should allow the definition of fully automatic code generation flows that reduce the costs while improving the quality of the generated code, much like compilers have done when replacing manual assembly coding. In this paper, we briefly present such a fully automatic implementation tool, called Lopht, for ARINC653-based time-triggered systems, and then explain how it is currently extended to include support for TTEthernet networks.

  7. Automated full-3D digitization system for documentation of paintings

    NASA Astrophysics Data System (ADS)

    Karaszewski, Maciej; Adamczyk, Marcin; Sitnik, Robert; Michoński, Jakub; Załuski, Wojciech; Bunsch, Eryk; Bolewicki, Paweł

    2013-05-01

    In this paper, a fully automated 3D digitization system for the documentation of paintings is presented. It consists of a specially designed frame system for secure fixing of the painting and a custom-designed, structured-light-based, high-resolution measurement head with no IR and UV emission. This device is automatically positioned in two axes (parallel to the surface of the digitized painting), with additional manual positioning in the third, perpendicular axis. Manual change of the observation angle is also possible around two axes to re-measure even partially shadowed areas. The whole system is built in a way that provides full protection of the digitized object (moving elements cannot reach its vicinity) and is driven by computer-controlled, highly precise servomechanisms. It can be used for automatic (without any user attention) and fast measurement of paintings with some limitations on their properties: the maximum size of the picture is 2000 mm x 2000 mm (with deviation from flatness smaller than 20 mm). The measurement head is automatically calibrated by the system, and its possible working volume ranges from 50 mm x 50 mm x 20 mm (10000 points per square mm) to 120 mm x 80 mm x 60 mm (2500 points per square mm). The directional measurements obtained with this system are automatically initially aligned using the measurement head's position coordinates known from the servomechanisms. After the whole painting is digitized, the measurements are fine-aligned with a color-based ICP algorithm to remove any influence of possible inaccuracy of the positioning devices. We present exemplary digitization results along with a discussion of the analysis opportunities offered by such high-resolution 3D computer models of paintings.

  8. Utilization of Expert Knowledge in a Multi-Objective Hydrologic Model Automatic Calibration Process

    NASA Astrophysics Data System (ADS)

    Quebbeman, J.; Park, G. H.; Carney, S.; Day, G. N.; Micheletty, P. D.

    2016-12-01

    Spatially distributed continuous-simulation hydrologic models have a large number of parameters for potential adjustment during the calibration process. Traditional manual calibration of such a modeling system is extremely laborious, which has historically motivated the use of automatic calibration procedures. With a large selection of model parameters, high degrees of objective-space fitness - measured with typical metrics such as Nash-Sutcliffe, Kling-Gupta, RMSE, etc. - can easily be achieved using a range of evolutionary algorithms. A concern with this approach is the high degree of compensatory calibration, with many similarly performing solutions and yet grossly varying parameter-set solutions. To help alleviate this concern, and to mimic manual calibration processes, expert knowledge is proposed for inclusion within the multi-objective functions, which evaluate the parameter decision space. As a result, Pareto solutions are identified with high degrees of fitness, but the parameter sets also maintain and utilize available expert knowledge, resulting in more realistic and consistent solutions. This process was tested using the joint SNOW-17 and Sacramento Soil Moisture Accounting (SAC-SMA) method within the Animas River basin in Colorado. Three different elevation zones, each with a range of parameters, resulted in over 35 model parameters being simultaneously calibrated. As a result, high degrees of fitness were achieved, in addition to the development of more realistic and consistent parameter sets such as those typically achieved during manual calibration procedures.
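    One simple way to fold expert knowledge into such an objective is to penalize fitness when parameters leave expert-defined plausible ranges. The sketch below combines the Nash-Sutcliffe efficiency with such a penalty; the penalty form and weight are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the
    skill of predicting the observed mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def penalized_objective(sim, obs, params, plausible_ranges, weight=0.1):
    """Higher is better: NSE minus a penalty proportional to how far
    each parameter strays outside its expert-defined plausible range."""
    penalty = sum(
        max(0.0, lo - p) + max(0.0, p - hi)
        for p, (lo, hi) in zip(params, plausible_ranges)
    )
    return nash_sutcliffe(sim, obs) - weight * penalty
```

In a multi-objective setting, the expert-knowledge term could instead be kept as a separate objective so that the trade-off surfaces in the Pareto front.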

  9. Experimental investigation of strain errors in stereo-digital image correlation due to camera calibration

    NASA Astrophysics Data System (ADS)

    Shao, Xinxing; Zhu, Feipeng; Su, Zhilong; Dai, Xiangjun; Chen, Zhenning; He, Xiaoyuan

    2018-03-01

    The strain errors in stereo-digital image correlation (DIC) due to camera calibration were investigated using precisely controlled numerical experiments and real experiments. Three-dimensional rigid body motion tests were conducted to examine the effects of camera calibration on the measured results. For a fully accurate calibration, rigid body motion causes negligible strain errors. However, for inaccurately calibrated camera parameters and a short working distance, rigid body motion will lead to more than 50-μɛ strain errors, which significantly affects the measurement. In practical measurements, it is impossible to obtain a fully accurate calibration; therefore, considerable attention should be focused on attempting to avoid these types of errors, especially for high-accuracy strain measurements. It is necessary to avoid large rigid body motions in both two-dimensional DIC and stereo-DIC.

  10. TU-H-CAMPUS-JeP1-02: Fully Automatic Verification of Automatically Contoured Normal Tissues in the Head and Neck

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCarroll, R; UT Health Science Center, Graduate School of Biomedical Sciences, Houston, TX; Beadle, B

    Purpose: To investigate and validate the use of an independent deformable-based contouring algorithm for automatic verification of auto-contoured structures in the head and neck towards fully automated treatment planning. Methods: Two independent automatic contouring algorithms [(1) Eclipse’s Smart Segmentation followed by pixel-wise majority voting, (2) an in-house multi-atlas based method] were used to create contours of 6 normal structures of 10 head-and-neck patients. After rating by a radiation oncologist, the higher performing algorithm was selected as the primary contouring method, the other used for automatic verification of the primary. To determine the ability of the verification algorithm to detect incorrectmore » contours, contours from the primary method were shifted from 0.5 to 2cm. Using a logit model the structure-specific minimum detectable shift was identified. The models were then applied to a set of twenty different patients and the sensitivity and specificity of the models verified. Results: Per physician rating, the multi-atlas method (4.8/5 point scale, with 3 rated as generally acceptable for planning purposes) was selected as primary and the Eclipse-based method (3.5/5) for verification. Mean distance to agreement and true positive rate were selected as covariates in an optimized logit model. These models, when applied to a group of twenty different patients, indicated that shifts could be detected at 0.5cm (brain), 0.75cm (mandible, cord), 1cm (brainstem, cochlea), or 1.25cm (parotid), with sensitivity and specificity greater than 0.95. If sensitivity and specificity constraints are reduced to 0.9, detectable shifts of mandible and brainstem were reduced by 0.25cm. These shifts represent additional safety margins which might be considered if auto-contours are used for automatic treatment planning without physician review. Conclusion: Automatically contoured structures can be automatically verified. 
This fully automated process could be used to flag auto-contours for special review or used with safety margins in a fully automatic treatment planning system.
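The logit-based verification step described above can be sketched as follows; the coefficients and the 0.95 detection threshold below are illustrative stand-ins, not the fitted values from the study.

```python
import numpy as np

# Hypothetical logit coefficients mapping contour shift size (cm) to the
# probability that the verification algorithm flags the contour; the values
# are invented for illustration, not taken from the paper.
B0, B1 = -6.0, 8.0

def detection_probability(shift_cm):
    """Logistic model of P(contour flagged) versus shift size."""
    return 1.0 / (1.0 + np.exp(-(B0 + B1 * np.asarray(shift_cm))))

def minimum_detectable_shift(shifts_cm, threshold=0.95):
    """Smallest tested shift whose detection probability meets the threshold."""
    p = detection_probability(shifts_cm)
    above = np.nonzero(p >= threshold)[0]
    return shifts_cm[above[0]] if above.size else None

shifts = np.arange(0.25, 2.01, 0.25)       # tested shifts, cm
print(minimum_detectable_shift(shifts))    # structure-specific detectable shift
```

Per-structure models would be fitted the same way, each yielding its own minimum detectable shift (the safety margin the abstract refers to).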

  11. ARES v2: new features and improved performance

    NASA Astrophysics Data System (ADS)

    Sousa, S. G.; Santos, N. C.; Adibekyan, V.; Delgado-Mena, E.; Israelian, G.

    2015-05-01

    Aims: We present a new upgraded version of ARES. The new version includes a series of interesting new features such as automatic radial velocity correction, a fully automatic continuum determination, and an estimation of the errors for the equivalent widths. Methods: The automatic correction of the radial velocity is achieved with a simple cross-correlation function, and the automatic continuum determination, as well as the estimation of the errors, relies on a new approach to evaluating the spectral noise at the continuum level. Results: ARES v2 is totally compatible with its predecessor. We show that the fully automatic continuum determination is consistent with the previous methods applied for this task. It also presents a significant improvement on its performance thanks to the implementation of a parallel computation using the OpenMP library. Automatic Routine for line Equivalent widths in stellar Spectra - ARES webpage: http://www.astro.up.pt/~sousasag/ares/. Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 075.D-0800(A).
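The cross-correlation idea behind the radial-velocity correction can be illustrated with a toy spectrum; the Gaussian line profiles, wavelengths, and grids below are invented for the sketch, not taken from ARES.

```python
import numpy as np

# Build a synthetic absorption-line spectrum, apply a known wavelength shift,
# and recover that shift as the maximum of a simple cross-correlation function.
wave = np.linspace(5000.0, 5010.0, 2001)            # wavelength grid (Angstrom)

def spectrum(centers, w):
    """Unit continuum with Gaussian absorption lines at the given centers."""
    flux = np.ones_like(w)
    for c in centers:
        flux -= 0.5 * np.exp(-0.5 * ((w - c) / 0.05) ** 2)
    return flux

lines = np.array([5002.0, 5005.5, 5008.0])
observed = spectrum(lines + 0.12, wave)             # "Doppler" shift of 0.12 A

trial_shifts = np.arange(-0.5, 0.5, 0.005)
ccf = [np.sum((1 - observed) * (1 - spectrum(lines + s, wave)))
       for s in trial_shifts]
best = trial_shifts[int(np.argmax(ccf))]
print(round(float(best), 3))                        # recovered shift, ~0.12 A
```

In practice the template would be a line list or mask rather than a regenerated spectrum, and the shift would be converted to a velocity via the Doppler relation.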

  12. Dynamic calibration of pan-tilt-zoom cameras for traffic monitoring.

    PubMed

    Song, Kai-Tai; Tai, Jen-Chao

    2006-10-01

    Pan-tilt-zoom (PTZ) cameras have been widely used in recent years for monitoring and surveillance applications. These cameras provide flexible view selection as well as a wider observation range. This makes them suitable for vision-based traffic monitoring and enforcement systems. To employ PTZ cameras for image measurement applications, one first needs to calibrate the camera to obtain meaningful results. For instance, the accuracy of estimating vehicle speed depends on the accuracy of camera calibration and that of vehicle tracking results. This paper presents a novel calibration method for a PTZ camera overlooking a traffic scene. The proposed approach requires no manual operation to select the positions of special features. It automatically uses a set of parallel lane markings and the lane width to compute the camera parameters, namely, focal length, tilt angle, and pan angle. Image processing procedures have been developed for automatically finding parallel lane markings. Interesting experimental results are presented to validate the robustness and accuracy of the proposed method.
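One small piece of such a calibration — locating the vanishing point of parallel lane markings, which constrains focal length, tilt, and pan — can be sketched in homogeneous coordinates; the pixel coordinates below are hypothetical, and the paper's full parameter recovery is not reproduced here.

```python
import numpy as np

# Each image line through two points p, q is their homogeneous cross product;
# the vanishing point of two parallel lane markings is the cross product of
# the two lines, dehomogenized.
def line_through(p, q):
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(l1, l2):
    v = np.cross(l1, l2)
    return v[:2] / v[2]

# Two hypothetical lane markings converging toward pixel (320, 80).
lane_a = line_through((100, 480), (320, 80))
lane_b = line_through((540, 480), (320, 80))
print(vanishing_point(lane_a, lane_b))   # -> [320.  80.]
```

Given the vanishing point and the known lane width, closed-form expressions then yield the focal length, tilt, and pan angles used for speed estimation.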

  13. Three years of operational experience from Schauinsland CTBT monitoring station.

    PubMed

    Zähringer, M; Bieringer, J; Schlosser, C

    2008-04-01

    Data from three years of operation of a low-level aerosol sampler and analyzer (RASA) at Schauinsland monitoring station are reported. The system is part of the International Monitoring System (IMS) for verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). The fully automatic system is capable of measuring aerosol-borne gamma emitters with high sensitivity and routinely quantifies 7Be and 212Pb. The system had a high level of data availability of 90% within the reporting period. A daily screening process rendered 66 tentative identifications of verification-relevant radionuclides since the system entered IMS operation in February 2004. Two of these were real events and were associated with a plausible source. The remaining 64 cases can consistently be explained by detector background and statistical phenomena. Inter-comparison with data from a weekly sampler operated at the same station shows instabilities of the calibration during the test phase and a good agreement since certification of the system.

  14. Calibration and Testing of Digital Zenith Camera System Components

    NASA Astrophysics Data System (ADS)

    Ulug, Rasit; Halicioglu, Kerem; Tevfik Ozludemir, M.; Albayrak, Muge; Basoglu, Burak; Deniz, Rasim

    2017-04-01

    Starting from the beginning of the new millennium, thanks to Charge-Coupled Device (CCD) technology, fully or partly automatic zenith camera systems have been designed and used to determine astro-geodetic deflections of the vertical in several countries, including Germany, Switzerland, Serbia, Latvia, Poland, Austria, China and Turkey. The Digital Zenith Camera System (DZCS) of Turkey has performed successful observations, yet it needs to be improved in terms of automation and observation accuracy. In order to optimize the observation time and improve the system, some modifications have been implemented. Through the modification process that started at the beginning of 2016, some DZCS components have been replaced with new ones and some additional components have been installed. In this presentation, the ongoing calibration and testing process of the DZCS is summarized in general. In particular, one of the tested system components, the High Resolution Tiltmeter (HRTM), which enables orthogonal orientation of the DZCS with respect to the direction of the plumb line, is discussed. For the calibration of these components, two tiltmeters with different accuracies (1 nrad and 0.001 mrad) were observed for nearly 30 days. The data recorded under different environmental conditions were divided into hourly, daily, and weekly subsets. In addition to the effects of temperature and humidity, the interoperability of the two tiltmeters was also investigated. Results show that with the integration of the HRTM and the other implementations, the modified DZCS provides higher accuracy for the determination of vertical deflections.

  15. Analysis of regional rainfall-runoff parameters for the Lake Michigan Diversion hydrological modeling

    USGS Publications Warehouse

    Soong, David T.; Over, Thomas M.

    2015-01-01

    Recalibration of the HSPF parameters to the updated inputs and land covers was completed on two representative watershed models selected from the nine by using a manual method (HSPEXP) and an automatic method (PEST). The objective of the recalibration was to develop a regional parameter set that improves the accuracy in runoff volume prediction for the nine study watersheds. Knowledge about flow and watershed characteristics plays a vital role for validating the calibration in both manual and automatic methods. The best performing parameter set was determined by the automatic calibration method on a two-watershed model. Applying this newly determined parameter set to the nine watersheds for runoff volume simulation resulted in “very good” ratings in five watersheds, an improvement as compared to “very good” ratings achieved for three watersheds by the North Branch parameter set.

  16. Radiometer Calibration and Characterization (RCC) User's Manual: Windows Version 4.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andreas, Afshin M.; Wilcox, Stephen M.

    2016-02-29

    The Radiometer Calibration and Characterization (RCC) software is a data acquisition and data archival system for performing Broadband Outdoor Radiometer Calibrations (BORCAL). RCC provides a unique method of calibrating broadband atmospheric longwave and solar shortwave radiometers using techniques that reduce measurement uncertainty and better characterize a radiometer's response profile. The RCC software automatically monitors and controls many of the components that contribute to uncertainty in an instrument's responsivity. This is a user's manual and guide to the RCC software.

  17. Automation of image data processing. (Polish Title: Automatyzacja proces u przetwarzania danych obrazowych)

    NASA Astrophysics Data System (ADS)

    Preuss, R.

    2014-12-01

    This article discusses the current capabilities of automated processing of image data, using the PhotoScan software by Agisoft as an example. At present, image data obtained by various registration systems (metric and non-metric cameras) placed on airplanes, satellites, or more often on UAVs, is used to create photogrammetric products. Multiple registrations of an object or land area (large groups of photos are captured) are usually performed in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. Because of this, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in a local coordinate system or, using initial exterior orientation and measured control points, can provide image georeference in an external reference frame. In the case of non-metric images, it is also possible to carry out a self-calibration process at this stage. Image matching algorithms are also used in the generation of dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, DSM or DTM, and a photorealistic solid model of an object. All aforementioned processing steps are implemented in a single program, in contrast to standard commercial software, which divides the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential execution of the processing steps at predetermined control parameters. The paper presents the practical results of fully automatic generation of orthomosaics both for images obtained by a metric Vexell camera and for a block of images acquired by a non-metric UAV system.

  18. Definition and sensitivity of the conceptual MORDOR rainfall-runoff model parameters using different multi-criteria calibration strategies

    NASA Astrophysics Data System (ADS)

    Garavaglia, F.; Seyve, E.; Gottardi, F.; Le Lay, M.; Gailhard, J.; Garçon, R.

    2014-12-01

    MORDOR is a conceptual hydrological model extensively used in Électricité de France (EDF, French electric utility company) operational applications: (i) hydrological forecasting, (ii) flood risk assessment, (iii) water balance and (iv) climate change studies. MORDOR is a lumped, reservoir, elevation-based model with hourly or daily areal rainfall and air temperature as the driving input data. The principal hydrological processes represented are evapotranspiration, direct and indirect runoff, ground water, snow accumulation and melt, and routing. The model has been intensively used at EDF for more than 20 years, in particular for modeling French mountainous watersheds. For parameter calibration we propose and test alternative multi-criteria techniques based on two specific approaches: automatic calibration using single-objective functions and a priori parameter calibration founded on hydrological watershed features. The automatic calibration approach uses single-objective functions, based on Kling-Gupta efficiency, to quantify the agreement between the simulated and observed runoff, focusing on four different runoff samples: (i) time-series sample, (ii) annual hydrological regime, (iii) monthly cumulative distribution functions and (iv) recession sequences. The primary purpose of this study is to analyze the definition and sensitivity of MORDOR parameters, testing different calibration techniques in order to: (i) simplify the model structure, (ii) increase the calibration-validation performance of the model and (iii) reduce the equifinality problem of the calibration process. We propose an alternative calibration strategy that reaches these goals. The analysis is illustrated by calibrating the MORDOR model to daily data for 50 watersheds located in French mountainous regions.
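The Kling-Gupta efficiency used as the single-objective function above can be written out directly; this is the standard formulation (correlation, variability ratio, bias ratio), shown here with a toy runoff series for illustration.

```python
import numpy as np

def kling_gupta_efficiency(sim, obs):
    """KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2), where r is the
    linear correlation, alpha the ratio of standard deviations, and beta
    the ratio of means between simulated and observed runoff."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # toy observed runoff
print(kling_gupta_efficiency(obs, obs))            # perfect match -> 1.0
print(kling_gupta_efficiency(obs * 0.8, obs))      # 20% low bias -> < 1.0
```

Computing this score separately on the time series, the regime, the flow-duration curve, and recession sequences yields the four single-objective functions the abstract lists.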

  19. A versatile calibration procedure for portable coded aperture gamma cameras and RGB-D sensors

    NASA Astrophysics Data System (ADS)

    Paradiso, V.; Crivellaro, A.; Amgarou, K.; de Lanaute, N. Blanc; Fua, P.; Liénard, E.

    2018-04-01

    The present paper proposes a versatile procedure for the geometrical calibration of coded aperture gamma cameras and RGB-D depth sensors, using only one radioactive point source and a simple experimental set-up. Calibration data is then used for accurately aligning radiation images retrieved by means of the γ-camera with the respective depth images computed with the RGB-D sensor. The system resulting from such a combination is thus able to retrieve, automatically, the distance of radioactive hotspots by means of pixel-wise mapping between gamma and depth images. This procedure is of great interest for a wide number of applications, ranging from precise automatic estimation of the shape and distance of radioactive objects to Augmented Reality systems. Incidentally, the corresponding results validated the choice of a perspective design model for a coded aperture γ-camera.

  20. Automatic classification of seismic events within a regional seismograph network

    NASA Astrophysics Data System (ADS)

    Tiira, Timo; Kortström, Jari; Uski, Marja

    2015-04-01

    A fully automatic method for seismic event classification within a sparse regional seismograph network is presented. The tool is based on a supervised pattern recognition technique, Support Vector Machine (SVM), trained here to distinguish weak local earthquakes from a bulk of human-made or spurious seismic events. The classification rules rely on differences in signal energy distribution between natural and artificial seismic sources. Seismic records are divided into four windows, P, P coda, S, and S coda. For each signal window STA is computed in 20 narrow frequency bands between 1 and 41 Hz. The 80 discrimination parameters are used as a training data for the SVM. The SVM models are calculated for 19 on-line seismic stations in Finland. The event data are compiled mainly from fully automatic event solutions that are manually classified after automatic location process. The station-specific SVM training events include 11-302 positive (earthquake) and 227-1048 negative (non-earthquake) examples. The best voting rules for combining results from different stations are determined during an independent testing period. Finally, the network processing rules are applied to an independent evaluation period comprising 4681 fully automatic event determinations, of which 98 % have been manually identified as explosions or noise and 2 % as earthquakes. The SVM method correctly identifies 94 % of the non-earthquakes and all the earthquakes. The results imply that the SVM tool can identify and filter out blasts and spurious events from fully automatic event solutions with a high level of confidence. The tool helps to reduce work-load in manual seismic analysis by leaving only ~5 % of the automatic event determinations, i.e. the probable earthquakes for more detailed seismological analysis. The approach presented is easy to adjust to requirements of a denser or wider high-frequency network, once enough training examples for building a station-specific data set are available.
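The classification idea above — per-band signal-energy features fed to an SVM — can be sketched with a toy linear SVM trained by stochastic sub-gradient descent on the hinge loss. The synthetic 8-band features below only mimic the energy-distribution contrast the paper exploits (the real tool uses 80 features per station and a full SVM implementation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the discrimination features: relative signal energy
# in narrow frequency bands. "Earthquakes" (+1) get more low-band energy,
# "explosions/noise" (-1) more high-band energy.
def make_events(n, low_heavy):
    e = rng.uniform(0.5, 1.5, (n, 8))
    if low_heavy:
        e[:, :4] *= 3.0
    else:
        e[:, 4:] *= 3.0
    return e / e.sum(axis=1, keepdims=True)

X = np.vstack([make_events(100, True), make_events(100, False)])
y = np.hstack([np.ones(100), -np.ones(100)])

# Minimal linear SVM: sub-gradient descent on the regularized hinge loss.
w, b, lr, lam = np.zeros(X.shape[1]), 0.0, 0.01, 0.01
for epoch in range(200):
    for i in rng.permutation(len(y)):
        w *= (1 - lr * lam)                  # weight decay (regularization)
        if y[i] * (X[i] @ w + b) < 1:        # inside the margin: update
            w += lr * y[i] * X[i]
            b += lr * y[i]

accuracy = np.mean(np.sign(X @ w + b) == y)
print(accuracy)
```

Station-specific models trained this way would then be combined with voting rules across the network, as in the paper.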

  1. MOESHA: A genetic algorithm for automatic calibration and estimation of parameter uncertainty and sensitivity of hydrologic models

    EPA Science Inventory

    Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...

  2. Automatic Camera Calibration for Cultural Heritage Applications Using Unstructured Planar Objects

    NASA Astrophysics Data System (ADS)

    Adam, K.; Kalisperakis, I.; Grammatikopoulos, L.; Karras, G.; Petsa, E.

    2013-07-01

    As a rule, image-based documentation of cultural heritage relies today on ordinary digital cameras and commercial software. As such projects often involve researchers not familiar with photogrammetry, the question of camera calibration is important. Freely available open-source user-friendly software for automatic camera calibration, often based on simple 2D chess-board patterns, are an answer to the demand for simplicity and automation. However, such tools cannot respond to all requirements met in cultural heritage conservation regarding possible imaging distances and focal lengths. Here we investigate the practical possibility of camera calibration from unknown planar objects, i.e. any planar surface with adequate texture; we have focused on the example of urban walls covered with graffiti. Images are connected pair-wise with inter-image homographies, which are estimated automatically through a RANSAC-based approach after extracting and matching interest points with the SIFT operator. All valid points are identified on all images on which they appear. Provided that the image set includes a "fronto-parallel" view, inter-image homographies with this image are regarded as emulations of image-to-world homographies and allow computing initial estimates for the interior and exterior orientation elements. Following this initialization step, the estimates are introduced into a final self-calibrating bundle adjustment. Measures are taken to discard unsuitable images and verify object planarity. Results from practical experimentation indicate that this method may produce satisfactory results. The authors intend to incorporate the described approach into their freely available user-friendly software tool, which relies on chess-boards, to assist non-experts in their projects with image-based approaches.
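The pair-wise homography estimation at the core of this pipeline can be sketched with the basic Direct Linear Transform; the paper couples this with SIFT matching and RANSAC, whereas here exact synthetic correspondences stand in for matched interest points.

```python
import numpy as np

# Estimate a 3x3 homography H from point correspondences (x,y) -> (u,v)
# by stacking two linear constraints per match and taking the SVD null space.
def homography_dlt(src, dst):
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pts):
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]

# Synthetic ground-truth homography and five correspondences.
H_true = np.array([[1.1, 0.02, 5.0], [0.01, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 80], [0, 80], [50, 40.0]])
dst = apply_h(H_true, src)

H_est = homography_dlt(src, dst)
print(np.max(np.abs(apply_h(H_est, src) - dst)))   # reprojection error ~ 0
```

With noisy SIFT matches, the same estimator runs inside a RANSAC loop, and the surviving inlier homographies feed the initialization of the self-calibrating bundle adjustment.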

  3. Accurate and automatic extrinsic calibration method for blade measurement system integrated by different optical sensors

    NASA Astrophysics Data System (ADS)

    He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu

    2014-11-01

    Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors have their own pros and cons, and there hardly exists an omnipotent sensor able to handle a complex inspection task in an accurate and effective way. The prevailing solution is integrating multiple sensors and taking advantage of their strengths. For obtaining a holistic 3D profile, the data from different sensors should be registered into a coherent coordinate system. However, some complex-shaped objects have thin-wall features, such as blades, for which the ICP registration method becomes unstable. Therefore, it is very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic parameter calibration method for a blade measurement system integrating different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, and the sensors can be optimally moved to any desired position at the object's surface. In order to simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation is used to roughly align the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and then the Generalized Gauss-Markoff model is used to estimate the optimal transformation parameters. The experiments show the measurement result of a blade, where several sampled patches are merged into one point cloud, which verifies the performance of the proposed method.

  4. Development of an in situ calibration technique for combustible gas detectors

    NASA Technical Reports Server (NTRS)

    Shumar, J. W.; Wynveen, R. A.; Lance, N., Jr.; Lantz, J. B.

    1977-01-01

    This paper describes the development of an in situ calibration procedure for combustible gas detectors (CGD). The CGD will be a necessary device for future space vehicles as many subsystems in the Environmental Control/Life Support System utilize or produce hydrogen (H2) gas. Existing calibration techniques are time-consuming and require support equipment such as an environmental chamber and calibration gas supply. The in situ calibration procedure involves utilization of a water vapor electrolysis cell for the automatic in situ generation of a H2/air calibration mixture within the flame arrestor of the CGD. The development effort concluded with the successful demonstration of in situ span calibrations of a CGD.

  5. Automation is an Effective Way to Improve Quality of Verification (Calibration) of Measuring Instruments

    NASA Astrophysics Data System (ADS)

    Golobokov, M.; Danilevich, S.

    2018-04-01

    In order to assess calibration reliability and automate such assessment, procedures for data collection and a simulation study of a thermal imager calibration procedure have been elaborated. The existing calibration techniques do not always provide high reliability. A new method for analyzing the existing calibration techniques and developing new, efficient ones has been suggested and tested. A type of software has been studied that allows generating instrument calibration reports automatically, monitoring their proper configuration, processing measurement results and assessing instrument validity. The use of such software reduces the man-hours spent on finalizing calibration data by a factor of 2 to 5 and eliminates a whole set of typical operator errors.

  6. Automated response matching for organic scintillation detector arrays

    NASA Astrophysics Data System (ADS)

    Aspinall, M. D.; Joyce, M. J.; Cave, F. D.; Plenteda, R.; Tomanin, A.

    2017-07-01

    This paper identifies a digitizer technology with unique features that facilitates feedback control for the realization of a software-based technique for automatically calibrating detector responses. Three such auto-calibration techniques have been developed and are described, along with an explanation of the main configuration settings and potential pitfalls. Automating this process increases repeatability, simplifies user operation, and enables remote and periodic system calibration where consistency across detectors' responses is critical.

  7. Final results of the PERSEE experiment

    NASA Astrophysics Data System (ADS)

    Le Duigou, J. M.; Lozi, J.; Cassaing, F.; Houairi, K.; Sorrente, B.; Montri, J.; Jacquinod, S.; Reess, J.-M.; Pham, L.; Lhome, E.; Buey, T.; Hénault, F.; Marcotto, A.; Girard, P.; Mauclert, N.; Barillot, M.; Coudé du Foresto, V.; Ollivier, M.

    2012-07-01

    The PERSEE breadboard, developed by a consortium including CNES, IAS, LESIA, OCA, ONERA and TAS since 2005, is a nulling demonstrator that couples an infrared nulling interferometer with a formation flying simulator able to introduce realistic disturbances in the set-up. The general idea is to prove that an adequate optical design can considerably relax the constraints applying at the spacecraft level of a future interferometric space mission like Darwin/TPF or one of its precursors. The breadboard is now fully operational and the measurement sequences are managed from a remote control room using automatic procedures. A set of excellent results were obtained in 2011. The measured polychromatic nulling depth with non-polarized light is 8.8 10-6, stabilized at 9 10-8, in the 1.65-2.45 μm spectral band (37% bandwidth) during 100 s. This result was extended to a 7 h duration thanks to an automatic calibration process. The various contributors are identified and the nulling budget is now well mastered. We also proved that harmonic disturbances in the 1-100 Hz range, up to several tens of nm rms, can be very efficiently corrected by Linear Quadratic Gaussian (LQG) control if a sufficient flux is available. These results are important contributions to the feasibility of a future space-based nulling interferometer.

  8. Final results of the PERSEE experiment

    NASA Astrophysics Data System (ADS)

    Le Duigou, J.-M.; Lozi, J.; Cassaing, F.; Houairi, K.; Sorrente, B.; Montri, J.; Jacquinod, S.; Réess, J.-M.; Pham, L.; Lhomé, E.; Buey, T.; Hénault, F.; Marcotto, A.; Girard, P.; Mauclert, N.; Barillot, M.; Coudé du Foresto, V.; Ollivier, M.

    2017-11-01

    The PERSEE breadboard, developed by a consortium including CNES, IAS, LESIA, OCA, ONERA and TAS since 2006, is a nulling demonstrator that couples an infrared nulling interferometer with a formation flying simulator able to introduce realistic disturbances in the set-up. The general idea is to prove that an adequate optical design can considerably relax the constraints applied at the spacecraft level of a future interferometric space mission like Darwin/TPF or one of its precursors. The breadboard is now fully operational and the measurement sequences are managed from a remote control room using automatic procedures. A set of excellent results were obtained in 2011: the measured polychromatic nulling depth with non-polarized light is 8.8x10-6, stabilized at 9x10-8, in the [1.65-2.45] μm spectral band (37% bandwidth) during 100 s. This result was extended to a 7 h duration thanks to an automatic calibration process. The various contributors are identified and the nulling budget is now well mastered. We also proved that harmonic disturbances in the 1-100 Hz range, up to several tens of nm rms, can be very efficiently corrected by Linear Quadratic Gaussian (LQG) control if a sufficient flux is available. These results are important contributions to the feasibility of a future space-based nulling interferometer.

  9. Hand-Eye Calibration of Robonaut

    NASA Technical Reports Server (NTRS)

    Nickels, Kevin; Huber, Eric

    2004-01-01

    NASA's Human Space Flight program depends heavily on Extra-Vehicular Activities (EVAs) performed by human astronauts. EVA is a high risk environment that requires extensive training and ground support. In collaboration with the Defense Advanced Research Projects Agency (DARPA), NASA is conducting a ground development project to produce a robotic astronaut's assistant, called Robonaut, that could help reduce human EVA time and workload. The project described in this paper designed and implemented a hand-eye calibration scheme for Robonaut, Unit A. The intent of this calibration scheme is to improve hand-eye coordination of the robot. The basic approach is to use kinematic and stereo vision measurements, namely the joint angles self-reported by the right arm and 3-D positions of a calibration fixture as measured by vision, to estimate the transformation from Robonaut's base coordinate system to its hand coordinate system and to its vision coordinate system. Two methods of gathering data sets have been developed, along with software to support each. In the first, the system observes the robotic arm and neck angles as the robot is operated under external control, measures the 3-D position of a calibration fixture using Robonaut's stereo cameras, and logs these data. In the second, the system drives the arm and neck through a set of pre-recorded configurations, and data are again logged. Two variants of the calibration scheme have been developed. The full calibration scheme is a batch procedure that estimates all relevant kinematic parameters of the arm and neck of the robot. The daily calibration scheme estimates only joint offsets for each rotational joint on the arm and neck, which are assumed to change from day to day. The schemes have been designed to be automatic and easy to use so that the robot can be fully recalibrated when needed, such as after repair or upgrade, and can be partially recalibrated after each power cycle.
The scheme has been implemented on Robonaut Unit A and has been shown to reduce mismatch between kinematically derived positions and visually derived positions from a mean of 13.75cm using the previous calibration to means of 1.85cm using a full calibration and 2.02cm using a suboptimal but faster daily calibration. This improved calibration has already enabled the robot to more accurately reach for and grasp objects that it sees within its workspace. The system has been used to support an autonomous wrench-grasping experiment and significantly improved the workspace positioning of the hand based on visually derived wrench position estimates.
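The core geometric step of such a hand-eye calibration — aligning kinematically derived 3-D points with visually measured ones — can be sketched with the SVD-based least-squares rigid fit (Kabsch algorithm); the points and transform below are synthetic, not Robonaut data.

```python
import numpy as np

def fit_rigid_transform(P, Q):
    """Find rotation R and translation t minimizing sum ||R p_i + t - q_i||^2."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)              # center both point sets
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)                # cross-covariance SVD
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

rng = np.random.default_rng(1)
P = rng.uniform(-1, 1, (20, 3))                        # "kinematic" fixture points
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.05])
Q = P @ R_true.T + t_true                              # "vision" measurements

R, t = fit_rigid_transform(P, Q)
print(np.max(np.abs(P @ R.T + t - Q)))                 # fit residual ~ 0
```

A full scheme like the paper's additionally estimates joint offsets and other kinematic parameters inside a batch optimization; the rigid fit above is only the alignment kernel.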

  10. Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Bekele, E. G.; Nicklow, J. W.

    2005-12-01

    Hydrologic simulation models need to be calibrated and validated before they are used for operational predictions. Spatially-distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a very tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, which includes knowledge of the basic approaches and interactions in the model. In order to alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as the Particle Swarm Optimizer (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm inspired by the social behavior of bird flocking or fish schooling. The newly-developed calibration model is integrated into the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically-based, semi-distributed hydrologic model that was developed to predict the long term impacts of land management practices on water, sediment and agricultural chemical yields in large complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. In order to reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, which is located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
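The particle swarm update rule can be sketched as follows; this is a generic PSO minimizing a toy "calibration error" surface, not the authors' SWAT-coupled code, and the swarm settings are standard textbook values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy objective: squared error with a known optimum, standing in for the
# misfit between simulated and observed streamflow over parameter space.
def objective(p):
    return np.sum((p - np.array([2.0, -1.0])) ** 2, axis=-1)

n, dim = 30, 2                              # particles, parameter dimensions
w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social weights
x = rng.uniform(-10, 10, (n, dim))          # particle positions (parameter sets)
v = np.zeros_like(x)                        # velocities
pbest, pbest_f = x.copy(), objective(x)     # personal bests
gbest = pbest[np.argmin(pbest_f)].copy()    # global best

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = objective(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print(np.round(gbest, 3))                   # converges to ~[2, -1]
```

In a real calibration, `objective` would run the hydrologic model for each candidate parameter set and score it against observations, which is why sensitivity analysis and parameterization are used first to keep `dim` small.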

  11. 40 CFR 85.2233 - Steady state test equipment calibrations, adjustments, and quality control-EPA 91.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... compensated for automatically and statistical process control demonstrates equal or better quality control... calibrations, adjustments, and quality control-EPA 91. 85.2233 Section 85.2233 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM MOBILE...

  12. An automatic calibration procedure for remote eye-gaze tracking systems.

    PubMed

    Model, Dmitri; Guestrin, Elias D; Eizenman, Moshe

    2009-01-01

    Remote gaze estimation systems use calibration procedures to estimate subject-specific parameters that are needed for the calculation of the point-of-gaze. In these procedures, subjects are required to fixate on a specific point or points at specific time instances. Advanced remote gaze estimation systems can estimate the optical axis of the eye without any personal calibration procedure, but use a single calibration point to estimate the angle between the optical axis and the visual axis (line-of-sight). This paper presents a novel automatic calibration procedure that does not require active user participation. To estimate the angles between the optical and visual axes of each eye, this procedure minimizes the distance between the intersections of the visual axes of the left and right eyes with the surface of a display while subjects look naturally at the display (e.g., watching a video clip). Simulation results demonstrate that the performance of the algorithm improves as the range of viewing angles increases. For a subject sitting 75 cm in front of an 80 cm x 60 cm display (40" TV) the standard deviation of the error in the estimation of the angles between the optical and visual axes is 0.5 degrees.
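The minimization idea can be reduced to a 2-D toy problem: recover a single optical-to-visual-axis offset angle by minimizing the distance between the two eyes' visual-axis intersections with the display. The geometry, the 5-degree offset, and the grid search below are invented for the sketch.

```python
import numpy as np

eye_l, eye_r = np.array([-3.0, 0.0]), np.array([3.0, 0.0])   # eye centers (cm)
screen_y = 75.0                                              # display plane
kappa_true = np.deg2rad(5.0)                                 # unknown offset
targets = np.array([-30.0, -10.0, 0.0, 15.0, 35.0])          # gaze x on screen

def optical_angle(eye, gx, kappa):
    """Measured optical-axis angle: visual axis to target, offset by kappa."""
    return np.arctan2(gx - eye[0], screen_y - eye[1]) + kappa

def screen_hit(eye, angle, kappa):
    """Where the kappa-corrected visual axis cuts the display plane."""
    return eye[0] + np.tan(angle - kappa) * (screen_y - eye[1])

# Simulated optical-axis observations while the subject looks at the targets
# (mirrored offsets for the two eyes, as a simplification).
obs_l = optical_angle(eye_l, targets, kappa_true)
obs_r = optical_angle(eye_r, targets, -kappa_true)

def cost(kappa):
    return np.sum((screen_hit(eye_l, obs_l, kappa)
                   - screen_hit(eye_r, obs_r, -kappa)) ** 2)

grid = np.deg2rad(np.linspace(0.0, 10.0, 1001))
kappa_est = grid[np.argmin([cost(k) for k in grid])]
print(np.rad2deg(kappa_est))                                 # recovers ~5 deg
```

The paper solves the full 3-D version with separate angles per eye; the point of the sketch is only that no fixation targets are needed — natural viewing supplies the constraint.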

  13. Fully automatic segmentation of femurs with medullary canal definition in high and in low resolution CT scans.

    PubMed

    Almeida, Diogo F; Ruben, Rui B; Folgado, João; Fernandes, Paulo R; Audenaert, Emmanuel; Verhegghe, Benedict; De Beule, Matthieu

    2016-12-01

    Femur segmentation can be an important tool in orthopedic surgical planning. However, to remove the need for an experienced user with extensive knowledge of the techniques, segmentation should be fully automatic. In this paper a new fully automatic femur segmentation method for CT images is presented. This method also defines the medullary canal automatically and performs well even in low-resolution CT scans. Fully automatic femoral segmentation was performed by adapting a template mesh of the femoral volume to the medical images. To achieve this, an adaptation of the active shape model (ASM) technique, based on the statistical shape model (SSM) and local appearance model (LAM) of the femur with a novel initialization method, was used to drive the template mesh deformation to fit the in-image femoral shape in a time-effective manner. With the proposed method a 98% convergence rate was achieved. For the high-resolution CT image group, the average error is less than 1 mm. For the low-resolution image group, the results are also accurate and the average error is less than 1.5 mm. The proposed segmentation pipeline is accurate, robust and completely user-free. The method is robust to patient orientation, image artifacts and poorly defined edges. The results excelled even in CT images with a significant slice thickness, i.e., above 5 mm. Medullary canal segmentation increases the geometric information that can be used in orthopedic surgical planning or in finite element analysis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  14. Modulus design multiwavelength polarization microscope for transmission Mueller matrix imaging

    NASA Astrophysics Data System (ADS)

    Zhou, Jialing; He, Honghui; Chen, Zhenhua; Wang, Ye; Ma, Hui

    2018-01-01

    We have developed a polarization microscope based on a commercial transmission microscope. We replace the halogen light source by a collimated LED light source module of six different colors. We use achromatic polarized optical elements that can cover the six different wavelength ranges in the polarization state generator (PSG) and polarization state analyzer (PSA) modules. The dual-rotating wave plate method is used to measure the Mueller matrix of samples, which requires the simultaneous rotation of the two quarter-wave plates in both PSG and PSA at certain angular steps. A scientific CCD detector is used as the image receiving module. LabView-based software is developed to control the rotation angles of the wave plates and the exposure time of the detector to allow the system to run fully automatically in preprogrammed schedules. Standard samples, such as air, polarizers, and quarter-wave plates, are used to calibrate the intrinsic Mueller matrix of optical components, such as the objectives, using the eigenvalue calibration method. Errors due to image walk-off in the PSA are studied. Errors in the Mueller matrices are below 0.01 using air and polarizer as standard samples. Data analysis based on Mueller matrix transformation and Mueller matrix polarization decomposition is used to demonstrate the potential application of this microscope in pathological diagnosis.

  15. Automatic systems and the low-level wind hazard

    NASA Technical Reports Server (NTRS)

    Schaeffer, Dwight R.

    1987-01-01

    Automatic flight control systems provide means for significantly enhancing survivability in severe wind hazards. The technology required to produce the necessary control algorithms is available and has been made technically feasible by the advent of digital flight control systems and accurate, low-noise sensors, especially strap-down inertial sensors. The application of this technology and these means has not generally been enabled except for automatic landing systems, and even then the potential has not been fully exploited. To fully exploit the potential of automatic systems for enhancing safety in wind hazards requires providing incentives, creating demand, inspiring competition, education, and eliminating prejudicial disincentives to overcome the economic penalties associated with the extensive and risky development and certification of these systems. If these changes come about at all, it will likely be through changes in the regulations provided by the certifying agencies.

  16. Geometrical pose and structural estimation from a single image for automatic inspection of filter components

    NASA Astrophysics Data System (ADS)

    Liu, Yonghuai; Rodrigues, Marcos A.

    2000-03-01

    This paper describes research on the application of machine vision techniques to a real time automatic inspection task of air filter components in a manufacturing line. A novel calibration algorithm is proposed based on a special camera setup where defective items would show a large calibration error. The algorithm makes full use of rigid constraints derived from the analysis of geometrical properties of reflected correspondence vectors which have been synthesized into a single coordinate frame and provides a closed form solution to the estimation of all parameters. For a comparative study of performance, we also developed another algorithm based on this special camera setup using epipolar geometry. A number of experiments using synthetic data have shown that the proposed algorithm is generally more accurate and robust than the epipolar geometry based algorithm and that the geometric properties of reflected correspondence vectors provide effective constraints to the calibration of rigid body transformations.

  17. Assessment of Space Power Related Measurement Requirements of the Strategic Defense Initiative

    DTIC Science & Technology

    1989-04-01

    calibration techniques are available and estimated uncertainties vary between 5 and 10%. At low rf power levels (~ 10mW ), NIST maintains standard calibration... bands single or dual six-port automatic network analyzers [24] are used as transfer systems with detectors calibrated using the NIST micro calorimeter...Probable designs for the multimegawatt space reactor program indicate the need to measure neutron fluxes up to 1016 neutrons/cm2- s (1019 neutrons

  18. Automatic camera to laser calibration for high accuracy mobile mapping systems using INS

    NASA Astrophysics Data System (ADS)

    Goeman, Werner; Douterloigne, Koen; Gautama, Sidharta

    2013-09-01

    A mobile mapping system (MMS) is a mobile multi-sensor platform developed by the geoinformation community to support the acquisition of huge amounts of geodata in the form of georeferenced high resolution images and dense laser clouds. Since data fusion and data integration techniques are increasingly able to combine the complementary strengths of different sensor types, the external calibration of a camera to a laser scanner is a common pre-requisite on today's mobile platforms. The methods of calibration, nevertheless, are often relatively poorly documented, are almost always time-consuming, demand expert knowledge and often require a carefully constructed calibration environment. A new methodology is studied and explored to provide a high quality external calibration for a pinhole camera to a laser scanner which is automatic, easy to perform, robust and foolproof. The method presented here, uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well studied absolute orientation problem needs to be solved. In many cases, the camera and laser sensor are calibrated in relation to the INS system. Therefore, the transformation from camera to laser contains the cumulated error of each sensor in relation to the INS. Here, the calibration of the camera is performed in relation to the laser frame using the time synchronization between the sensors for data association. In this study, the use of the inertial relative movement will be explored to collect more useful calibration data. This results in a better intersensor calibration allowing better coloring of the clouds and a more accurate depth mask for images, especially on the edges of objects in the scene.

  19. [The mediating role of anger in the relationship between automatic thoughts and physical aggression in adolescents].

    PubMed

    Yavuzer, Yasemin; Karataş, Zeynep

    2013-01-01

    This study aimed to examine the mediating role of anger in the relationship between automatic thoughts and physical aggression in adolescents. The study included 224 adolescents in the 9th grade of 3 different high schools in central Burdur during the 2011-2012 academic year. Participants completed the Aggression Questionnaire and Automatic Thoughts Scale in their classrooms during counseling sessions. Data were analyzed using simple and multiple linear regression analysis. There were positive correlations between the adolescents' automatic thoughts, physical aggression, and anger. According to regression analysis, automatic thoughts effectively predicted the level of physical aggression (b = 0.233, P < 0.001) and anger (b = 0.325, P < 0.001). Analysis of the mediating role of anger showed that anger fully mediated the relationship between automatic thoughts and physical aggression (Sobel z = 5.646, P < 0.001). Anger fully mediated the relationship between automatic thoughts and physical aggression. Providing adolescents with anger management skills training is very important for the prevention of physical aggression. Such training programs should include components related to the development of an awareness of dysfunctional and anger-triggering automatic thoughts, and how to change them. As the study group included adolescents from Burdur, the findings can only be generalized to groups with similar characteristics.

  20. Calibration of Automatically Generated Items Using Bayesian Hierarchical Modeling.

    ERIC Educational Resources Information Center

    Johnson, Matthew S.; Sinharay, Sandip

    For complex educational assessments, there is an increasing use of "item families," which are groups of related items. However, calibration or scoring for such an assessment requires fitting models that take into account the dependence structure inherent among the items that belong to the same item family. C. Glas and W. van der Linden…

  1. Heliostat calibration using attached cameras and artificial targets

    NASA Astrophysics Data System (ADS)

    Burisch, Michael; Sanchez, Marcelino; Olarra, Aitor; Villasante, Cristobal

    2016-05-01

    The efficiency of the solar field greatly depends on the ability of the heliostats to precisely reflect solar radiation onto a central receiver. Controlling the heliostats with such precision requires accurate knowledge of the motion of each of them. The motion of each heliostat can be described by a set of parameters, most notably the position and axis configuration. These parameters have to be determined individually for each heliostat during a calibration process. With the ongoing development of small-sized heliostats, the ability to perform such a calibration automatically becomes more and more crucial, as possibly hundreds of thousands of heliostats are involved. Furthermore, efficiency becomes an important factor, as small-sized heliostats potentially have to be recalibrated far more often due to the limited stability of the components. In the following we present an automatic calibration procedure using cameras attached to each heliostat which observe different targets spread throughout the solar field. Based on a number of observations of these targets under different heliostat orientations, the parameters describing the heliostat motion can be estimated with high precision.
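
As a toy illustration of the estimation step, the sketch below fits the simplest conceivable heliostat motion model, a constant encoder offset on each axis, to simulated observations by least squares. The real parameter set (position, axis configuration) and the camera-based measurement model are considerably richer; all values here are hypothetical:

```python
import numpy as np

def fit_axis_offsets(commanded, observed):
    """Least-squares estimate of a constant offset per axis.

    commanded, observed: (N, 2) arrays of (azimuth, elevation) in radians.
    For the model observed = commanded + offset, the least-squares
    solution is simply the mean residual on each axis.
    """
    return (observed - commanded).mean(axis=0)

# Simulated calibration data: known offsets plus small measurement noise
rng = np.random.default_rng(1)
true_offset = np.array([0.01, -0.005])              # radians, hypothetical
cmd = rng.uniform(0.0, np.pi / 2, size=(100, 2))    # commanded orientations
obs = cmd + true_offset + rng.normal(0.0, 1e-4, size=cmd.shape)
est = fit_axis_offsets(cmd, obs)
```

Averaging over many orientations drives the noise down by roughly the square root of the number of observations, which is why calibration benefits from observing targets under many heliostat orientations.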

  2. A fully automatic end-to-end method for content-based image retrieval of CT scans with similar liver lesion annotations.

    PubMed

    Spanier, A B; Caplan, N; Sosna, J; Acar, B; Joskowicz, L

    2018-01-01

    The goal of medical content-based image retrieval (M-CBIR) is to assist radiologists in the decision-making process by retrieving medical cases similar to a given image. One of the key interests of radiologists is lesions and their annotations, since the patient treatment depends on the lesion diagnosis. Therefore, a key feature of M-CBIR systems is the retrieval of scans with the most similar lesion annotations. To be of value, M-CBIR systems should be fully automatic to handle large case databases. We present a fully automatic end-to-end method for the retrieval of CT scans with similar liver lesion annotations. The input is a database of abdominal CT scans labeled with liver lesions, a query CT scan, and optionally one radiologist-specified lesion annotation of interest. The output is an ordered list of the database CT scans with the most similar liver lesion annotations. The method starts by automatically segmenting the liver in the scan. It then extracts a histogram-based features vector from the segmented region, learns the features' relative importance, and ranks the database scans according to the relative importance measure. The main advantages of our method are that it fully automates the end-to-end querying process, that it uses simple and efficient techniques that are scalable to large datasets, and that it produces quality retrieval results using an unannotated CT scan. Our experimental results on 9 CT queries on a dataset of 41 volumetric CT scans from the 2014 Image CLEF Liver Annotation Task yield an average retrieval accuracy (Normalized Discounted Cumulative Gain index) of 0.77 and 0.84 without/with annotation, respectively. Fully automatic end-to-end retrieval of similar cases based on image information alone, rather than on disease diagnosis, may help radiologists to better diagnose liver lesions.

  3. CubiCal: Suite for fast radio interferometric calibration

    NASA Astrophysics Data System (ADS)

    Kenyon, J. S.; Smirnov, O. M.; Grobler, T. L.; Perkins, S. J.

    2018-05-01

    CubiCal implements several accelerated gain solvers which exploit complex optimization for fast radio interferometric gain calibration. The code can be used for both direction-independent and direction-dependent self-calibration. CubiCal is implemented in Python and Cython, and multiprocessing is fully supported.
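
The kind of gain solver this record describes can be illustrated with a StEFCal-style alternating update for direction-independent, per-antenna complex gains. This is a generic textbook sketch of the technique, not CubiCal's actual code; the simulated data are made up:

```python
import numpy as np

def solve_gains(V, M, iters=100):
    """Alternating least-squares (StEFCal-style) solve for antenna gains g
    under the model V[p, q] ≈ g[p] * conj(g[q]) * M[p, q]."""
    n = V.shape[0]
    g = np.ones(n, dtype=complex)
    for it in range(iters):
        z = g.copy()                      # gains from the previous iteration
        for p in range(n):
            y = np.conj(z) * M[p]         # predicted row given current gains
            # Closed-form 1D least-squares update for g[p]
            g[p] = np.vdot(y, V[p]) / np.vdot(y, y)
        if it % 2:                        # average every other step to stabilize
            g = 0.5 * (g + z)
    return g

# Simulated noiseless visibilities with known gains
rng = np.random.default_rng(0)
n = 6
true_g = rng.normal(1.0, 0.1, n) + 1j * rng.normal(0.0, 0.1, n)
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
V = np.outer(true_g, true_g.conj()) * M
g = solve_gains(V, M)
residual = np.abs(np.outer(g, g.conj()) * M - V).max()
```

The recovered gains carry an arbitrary overall phase, so convergence is checked on the reconstructed visibilities rather than on the gains themselves.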

  4. Fully automatic left ventricular myocardial strain estimation in 2D short-axis tagged magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Morais, Pedro; Queirós, Sandro; Heyde, Brecht; Engvall, Jan; D'hooge, Jan; Vilaça, João L.

    2017-09-01

    Cardiovascular diseases are among the leading causes of death and frequently result in local myocardial dysfunction. Among the numerous imaging modalities available to detect these dysfunctional regions, cardiac deformation imaging through tagged magnetic resonance imaging (t-MRI) has been an attractive approach. Nevertheless, fully automatic analysis of these data sets is still challenging. In this work, we present a fully automatic framework to estimate left ventricular myocardial deformation from t-MRI. This strategy performs automatic myocardial segmentation based on B-spline explicit active surfaces, which are initialized using an annular model. A non-rigid image-registration technique is then used to assess myocardial deformation. Three experiments were set up to validate the proposed framework using a clinical database of 75 patients. First, automatic segmentation accuracy was evaluated by comparing against manual delineations at one specific cardiac phase. The proposed solution showed an average perpendicular distance error of 2.35  ±  1.21 mm and 2.27  ±  1.02 mm for the endo- and epicardium, respectively. Second, starting from either manual or automatic segmentation, myocardial tracking was performed and the resulting strain curves were compared. It is shown that the automatic segmentation adds negligible differences during the strain-estimation stage, corroborating its accuracy. Finally, segmental strain was compared with scar tissue extent determined by delay-enhanced MRI. The results proved that both strain components were able to distinguish between normal and infarct regions. Overall, the proposed framework was shown to be accurate, robust, and attractive for clinical practice, as it overcomes several limitations of a manual analysis.

  5. Intercomparison of hydrological model structures and calibration approaches in climate scenario impact projections

    NASA Astrophysics Data System (ADS)

    Vansteenkiste, Thomas; Tavakoli, Mohsen; Ntegeka, Victor; De Smedt, Florimond; Batelaan, Okke; Pereira, Fernando; Willems, Patrick

    2014-11-01

    The objective of this paper is to investigate the effects of hydrological model structure and calibration on climate change impact results in hydrology. The uncertainty in the hydrological impact results is assessed by the relative change in runoff volumes and peak and low flow extremes from historical and future climate conditions. The effect of the hydrological model structure is examined through the use of five hydrological models with different spatial resolutions and process descriptions. These were applied to a medium sized catchment in Belgium. The models vary from the lumped conceptual NAM, PDM and VHM models over the intermediate detailed and distributed WetSpa model to the fully distributed MIKE SHE model. The latter model accounts for the 3D groundwater processes and interacts bi-directionally with a full hydrodynamic MIKE 11 river model. After careful and manual calibration of these models, accounting for the accuracy of the peak and low flow extremes and runoff subflows, and the changes in these extremes for changing rainfall conditions, the five models respond in a similar way to the climate scenarios over Belgium. Future projections on peak flows are highly uncertain with expected increases as well as decreases depending on the climate scenario. The projections on future low flows are more uniform; low flows decrease (up to 60%) for all models and for all climate scenarios. However, the uncertainties in the impact projections are high, mainly in the dry season. With respect to the model structural uncertainty, the PDM model simulates significantly higher runoff peak flows under future wet scenarios, which is explained by its specific model structure. For the low flow extremes, the MIKE SHE model projects significantly lower low flows in dry scenario conditions in comparison to the other models, probably due to its large difference in process descriptions for the groundwater component, the groundwater-river interactions. 
The effect of the model calibration was tested by comparing the manual calibration approach with automatic calibrations of the VHM model based on different objective functions. The calibration approach did not significantly alter the model results for peak flow, but the low flow projections were again highly influenced. Model choice as well as calibration strategy hence have a critical impact on low flows, more than on peak flows. These results highlight the high uncertainty in low flow modelling, especially in a climate change context.

  6. FlowCal: A user-friendly, open source software tool for automatically converting flow cytometry data from arbitrary to calibrated units

    PubMed Central

    Castillo-Hair, Sebastian M.; Sexton, John T.; Landry, Brian P.; Olson, Evan J.; Igoshin, Oleg A.; Tabor, Jeffrey J.

    2017-01-01

    Flow cytometry is widely used to measure gene expression and other molecular biological processes with single cell resolution via fluorescent probes. Flow cytometers output data in arbitrary units (a.u.) that vary with the probe, instrument, and settings. Arbitrary units can be converted to the calibrated unit molecules of equivalent fluorophore (MEF) using commercially available calibration particles. However, there is no convenient, non-proprietary tool available to perform this calibration. Consequently, most researchers report data in a.u., limiting interpretation. Here, we report a software tool named FlowCal to overcome current limitations. FlowCal can be run using an intuitive Microsoft Excel interface, or customizable Python scripts. The software accepts Flow Cytometry Standard (FCS) files as inputs and is compatible with different calibration particles, fluorescent probes, and cell types. Additionally, FlowCal automatically gates data, calculates common statistics, and produces publication quality plots. We validate FlowCal by calibrating a.u. measurements of E. coli expressing superfolder GFP (sfGFP) collected at 10 different detector sensitivity (gain) settings to a single MEF value. Additionally, we reduce day-to-day variability in replicate E. coli sfGFP expression measurements due to instrument drift by 33%, and calibrate S. cerevisiae mVenus expression data to MEF units. Finally, we demonstrate a simple method for using FlowCal to calibrate fluorescence units across different cytometers. FlowCal should ease the quantitative analysis of flow cytometry data within and across laboratories and facilitate the adoption of standard fluorescence units in synthetic biology and beyond. PMID:27110723
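
The core a.u.-to-MEF conversion described here amounts to fitting a standard curve through the calibration-bead peaks. A minimal sketch of that idea is a least-squares line in log-log space; the bead values below are made up, and FlowCal's actual model is more sophisticated:

```python
import math

def fit_beads_to_mef(bead_au, bead_mef):
    """Fit log10(MEF) = m * log10(a.u.) + b by least squares and return a
    converter from arbitrary units to MEF."""
    xs = [math.log10(a) for a in bead_au]
    ys = [math.log10(v) for v in bead_mef]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Ordinary least-squares slope and intercept in log-log space
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - m * mx
    return lambda au: 10.0 ** (m * math.log10(au) + b)

# Hypothetical bead fluorescence peaks (a.u.) and vendor-assigned MEF values
bead_au = [50.0, 200.0, 800.0, 3200.0]
bead_mef = [100.0, 400.0, 1600.0, 6400.0]
to_mef = fit_beads_to_mef(bead_au, bead_mef)
```

Once the standard curve is fit from beads acquired at the same instrument settings, every sample event's a.u. value can be mapped through `to_mef`, which is what makes measurements comparable across instruments and gain settings.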

  7. Towards SWOT data assimilation for hydrology : automatic calibration of global flow routing model parameters in the Amazon basin

    NASA Astrophysics Data System (ADS)

    Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Biancamaria, S.; Boone, A.; Mognard, N. M.; Rogel, P.

    2011-12-01

    The Surface Water and Ocean Topography (SWOT) mission is a swath-mapping radar interferometer that will provide global measurements of water surface elevation (WSE). The number of observations depends upon latitude and varies from two (low latitudes) to ten (high latitudes) per 22-day orbit repeat period. The high resolution and the global coverage of the SWOT data open the way for new hydrology studies. Here, the aim is to investigate the use of virtually generated SWOT data to improve discharge simulation using data assimilation techniques. In the framework of the SWOT virtual mission (VM), this study presents the first results of the automatic calibration of a global flow routing (GFR) scheme using SWOT VM measurements for the Amazon basin. The Hydrological Modeling and Analysis Platform (HyMAP) is used along with the MOCOM-UA multi-criteria global optimization algorithm. HyMAP has a 0.25-degree spatial resolution and runs at the daily time step to simulate discharge, water levels and floodplains. The surface runoff and baseflow drainage derived from the Interactions Sol-Biosphère-Atmosphère (ISBA) model are used as inputs for HyMAP. Previous works showed that the use of ENVISAT data enables the reduction of the uncertainty in some of the hydrological model parameters, such as river width and depth, Manning roughness coefficient and groundwater time delay. In the framework of the SWOT preparation work, the automatic calibration procedure was applied using SWOT VM measurements. For this Observing System Experiment (OSE), the synthetic data were obtained by applying an instrument simulator (representing realistic SWOT errors) for one hydrological year to HyMAP-simulated WSE using a "true" set of parameters. Only pixels representing rivers larger than 100 meters within the Amazon basin are considered to produce SWOT VM measurements.
    The automatic calibration procedure leads to the estimation of optimal parameters minimizing objective functions that formulate the difference between SWOT observations and modeled WSE using a perturbed set of parameters. Different formulations of the objective function were used, especially to account for SWOT observation errors, as well as various sets of calibration parameters.

  8. Automatic soldering machine

    NASA Technical Reports Server (NTRS)

    Stein, J. A.

    1974-01-01

    Fully-automatic tube-joint soldering machine can be used to make leakproof joints in aluminum tubes of 3/16 to 2 in. in diameter. Machine consists of temperature-control unit, heater transformer and heater head, vibrator, and associated circuitry controls, and indicators.

  9. Real-time piloted simulation of fully automatic guidance and control for rotorcraft nap-of-the-earth (NOE) flight following planned profiles

    NASA Technical Reports Server (NTRS)

    Clement, Warren F.; Gorder, Pater J.; Jewell, Wayne F.; Coppenbarger, Richard

    1990-01-01

    Developing a single-pilot all-weather NOE capability requires fully automatic NOE navigation and flight control. Innovative guidance and control concepts are being investigated to (1) organize the onboard computer-based storage and real-time updating of NOE terrain profiles and obstacles; (2) define a class of automatic anticipative pursuit guidance algorithms to follow the vertical, lateral, and longitudinal guidance commands; (3) automate a decision-making process for unexpected obstacle avoidance; and (4) provide several rapid response maneuvers. Acquired knowledge from the sensed environment is correlated with the recorded environment which is then used to determine an appropriate evasive maneuver if a nonconformity is observed. This research effort has been evaluated in both fixed-base and moving-base real-time piloted simulations thereby evaluating pilot acceptance of the automated concepts, supervisory override, manual operation, and reengagement of the automatic system.

  10. Surface smoothness: cartilage biomarkers for knee OA beyond the radiologist

    NASA Astrophysics Data System (ADS)

    Tummala, Sudhakar; Dam, Erik B.

    2010-03-01

    Fully automatic imaging biomarkers may allow quantification of patho-physiological processes that a radiologist would not be able to assess reliably. This can introduce new insight but is problematic to validate due to lack of meaningful ground truth expert measurements. Rather than quantification accuracy, such novel markers must therefore be validated against clinically meaningful end-goals such as the ability to allow correct diagnosis. We present a method for automatic cartilage surface smoothness quantification in the knee joint. The quantification is based on a curvature flow method used on tibial and femoral cartilage compartments resulting from an automatic segmentation scheme. These smoothness estimates are validated for their ability to diagnose osteoarthritis and compared to smoothness estimates based on manual expert segmentations and to conventional cartilage volume quantification. We demonstrate that the fully automatic markers eliminate the time required for radiologist annotations, and in addition provide a diagnostic marker superior to the evaluated semi-manual markers.
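
The smoothness quantification can be illustrated in one dimension: explicit curvature (heat) flow smooths a noisy profile, and the mean squared second difference serves as a roughness score. This is a schematic analogue of the surface method in this record, with made-up data:

```python
import numpy as np

def smoothness(z):
    # Roughness proxy: mean squared discrete second derivative
    # (lower value = smoother profile)
    return float(np.mean(np.diff(z, 2) ** 2))

def curvature_flow(z, steps=100, dt=0.1):
    """Explicit 1D curvature (heat) flow: each step moves interior points
    toward the average of their neighbors, damping high-frequency roughness.
    Stability of the explicit scheme requires dt <= 0.5."""
    z = z.astype(float).copy()
    for _ in range(steps):
        z[1:-1] += dt * (z[2:] - 2 * z[1:-1] + z[:-2])
    return z

# A smooth underlying shape corrupted by segmentation-like noise
rng = np.random.default_rng(2)
rough = np.sin(np.linspace(0.0, np.pi, 200)) + rng.normal(0.0, 0.05, 200)
smooth = curvature_flow(rough)
```

Comparing the roughness score before and after flow (or between healthy and osteoarthritic surfaces) is the kind of diagnostic contrast the abstract describes.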

  11. An integrated exhaust gas analysis system with self-contained data processing and automatic calibration

    NASA Technical Reports Server (NTRS)

    Anderson, R. C.; Summers, R. L.

    1981-01-01

    An integrated gas analysis system designed to operate in automatic, semiautomatic, and manual modes from a remote control panel is described. The system measures carbon monoxide, oxygen, water vapor, total hydrocarbons, carbon dioxide, and oxides of nitrogen. A pull-through design provides increased reliability and eliminates the need for manual flow rate adjustment and pressure correction. The system contains two microprocessors to range the analyzers, calibrate the system, process the raw data to units of concentration, and provide information to the facility research computer and to the operator through a terminal and the control panel. After initial setup, the system operates for several hours without significant operator attention.

  12. Partitioning of net carbon dioxide flux measured by automatic transparent chamber

    NASA Astrophysics Data System (ADS)

    Dyukarev, EA

    2018-03-01

    A mathematical model was developed for describing carbon dioxide fluxes at an open sedge-sphagnum fen during the growing season. The model was calibrated using the results of observations from an automatic transparent chamber, and it allows us to estimate autotrophic, heterotrophic and ecosystem respiration fluxes, gross and net primary vegetation production, and the net carbon balance.
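
A typical flux-partitioning model of this kind combines a temperature-driven respiration term with a light-response curve for gross uptake. The sketch below uses standard textbook forms (Q10 respiration, rectangular-hyperbola GPP) with hypothetical parameter values, not the paper's calibrated model:

```python
def reco(temp_c, r10=2.0, q10=2.5):
    # Ecosystem respiration: Q10 temperature response,
    # r10 = respiration flux at a reference temperature of 10 °C
    return r10 * q10 ** ((temp_c - 10.0) / 10.0)

def gpp(par, alpha=0.02, pmax=8.0):
    # Gross primary production: rectangular-hyperbola light response,
    # alpha = initial slope, pmax = light-saturated uptake
    return alpha * par * pmax / (alpha * par + pmax) if par > 0 else 0.0

def nee(temp_c, par):
    # Net ecosystem exchange = respiration minus gross uptake
    # (positive = net release of CO2 to the atmosphere)
    return reco(temp_c) - gpp(par)

# With the transparent chamber darkened (PAR = 0) the measured flux is
# respiration alone, which is how chamber campaigns separate the components.
dark_flux = nee(10.0, 0.0)
light_flux = nee(10.0, 500.0)
```

Calibration then amounts to fitting `r10`, `q10`, `alpha` and `pmax` so that modeled NEE matches the chamber measurements across temperature and light conditions.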

  13. Using Automatic Item Generation to Meet the Increasing Item Demands of High-Stakes Educational and Occupational Assessment

    ERIC Educational Resources Information Center

    Arendasy, Martin E.; Sommer, Markus

    2012-01-01

    The use of new test administration technologies such as computerized adaptive testing in high-stakes educational and occupational assessments demands large item pools. Classic item construction processes and previous approaches to automatic item generation faced the problems of a considerable loss of items after the item calibration phase. In this…

  14. A Dual-Range Strain Gage Weighing Transducer Employing Automatic Switching

    Treesearch

    Rodger A. Arola

    1968-01-01

    Describes a dual-range strain gage transducer which has proven to be an excellent weight-sensing device for weighing trees and tree-length logs; discusses basic principles of the design and operation; and shows that a single transducer having two sensitivity ranges with automatic internal switching can sense weight with good repeatability and that one calibration curve...

  15. Heliostat kinematic system calibration using uncalibrated cameras

    NASA Astrophysics Data System (ADS)

    Burisch, Michael; Gomez, Luis; Olasolo, David; Villasante, Cristobal

    2017-06-01

    The efficiency of the solar field greatly depends on the ability of the heliostats to precisely reflect solar radiation onto a central receiver. Controlling the heliostats with such precision requires accurate knowledge of the motion of each of them, modeled as a kinematic system. Determining the parameters of this system for each heliostat through a calibration system is crucial for the efficient operation of the solar field. For small-sized heliostats, being able to perform such a calibration in a fast and automatic manner is imperative, as the solar field may contain tens or even hundreds of thousands of them. A calibration system which can rapidly recalibrate a whole solar field would also reduce costs. Heliostats are generally designed to provide stability over a long period of time. If this requirement can be relaxed, with any resulting errors compensated by adapting the parameters of a model, the cost of the heliostats can be reduced. The presented method describes such an automatic calibration system using uncalibrated cameras rigidly attached to each heliostat. The cameras are used to observe targets spread throughout the solar field; based on this, the kinematic system of the heliostat can be estimated with high precision. A comparison of this approach with similar solutions shows the viability of the proposed solution.

  16. Calibrating Item Families and Summarizing the Results Using Family Expected Response Functions

    ERIC Educational Resources Information Center

    Sinharay, Sandip; Johnson, Matthew S.; Williamson, David M.

    2003-01-01

    Item families, which are groups of related items, are becoming increasingly popular in complex educational assessments. For example, in automatic item generation (AIG) systems, a test may consist of multiple items generated from each of a number of item models. Item calibration or scoring for such an assessment requires fitting models that can…

  17. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-06-24

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  18. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  19. Improving the Traceability of Meteorological Measurements at Automatic Weather Stations in Thailand

    NASA Astrophysics Data System (ADS)

    Keawprasert, T.; Sinhaneti, T.; Phuuntharo, P.; Phanakulwijit, S.; Nimsamer, A.

    2017-08-01

    A joint project between the National Institute of Metrology Thailand (NIMT) and the Thai Meteorological Department (TMD) was established to improve the traceability of meteorological measurements at automatic weather stations (AWSs) in Thailand. The project aimed to improve the traceability of air temperature, relative humidity and atmospheric pressure by implementing on-site calibration facilities and developing new calibration procedures. First, new portable calibration facilities for air temperature, humidity and pressure were set up as working standards of the TMD. A portable humidity calibrator was applied as a uniform and stable source for calibration of thermo-hygrometers. A dew-point hygrometer was employed as the reference hygrometer, and a platinum resistance thermometer (PRT) traceable to NIMT was used as the reference thermometer. The uniformity and stability in both temperature and relative humidity were characterized at NIMT. A transportable pressure calibrator was used for calibration of the air pressure sensor. The estimated overall uncertainty of the calibration setup is 0.2 K for air temperature, 1.0 % for relative humidity and 0.2 hPa for atmospheric pressure, respectively. Second, on-site calibration procedures were developed, and four AWSs in the central and northern parts of Thailand were chosen as pilot stations for on-site calibration using the new setups and procedures. At each station, air temperature was calibrated at the minimum, average and maximum temperatures of the year; relative humidity at 20 %, 55 % and 90 % at the station's average air temperature; and atmospheric pressure at ambient temperature over a pressure range derived from one-year statistics. Additional in-field uncertainty contributions, such as the temperature dependence of the relative humidity measurement, were evaluated and included in the overall uncertainty budget. Preliminary calibration results showed that using a separate PRT probe at these AWSs is recommended to improve the accuracy of air temperature measurement. For relative humidity measurement, the data logger software needs to be upgraded to achieve an accuracy better than 3 %. For atmospheric pressure measurement, a higher-accuracy barometer traceable to NIMT could be used to reduce the calibration uncertainty to below 0.2 hPa.
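    An overall uncertainty of this kind is typically obtained by combining the individual standard uncertainty contributions in quadrature and applying a coverage factor. A minimal sketch with purely illustrative contribution values (not the ones evaluated in the project):

```python
import math

# Root-sum-square (GUM-style) combination of standard uncertainty
# contributions for an air-temperature calibration; values are invented.
contributions_K = {
    "reference PRT": 0.05,
    "calibrator uniformity": 0.08,
    "calibrator stability": 0.06,
    "sensor resolution": 0.03,
}
u_combined = math.sqrt(sum(u**2 for u in contributions_K.values()))
U_expanded = 2.0 * u_combined  # coverage factor k = 2 (~95 % confidence)
print(round(u_combined, 3), round(U_expanded, 3))  # 0.116 0.232
```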

  20. Automatic detection of articulation disorders in children with cleft lip and palate.

    PubMed

    Maier, Andreas; Hönig, Florian; Bocklet, Tobias; Nöth, Elmar; Stelzle, Florian; Nkenke, Emeka; Schuster, Maria

    2009-11-01

    Speech of children with cleft lip and palate (CLP) is sometimes still disordered even after adequate surgical and nonsurgical therapies. Such speech shows complex articulation disorders, which are usually assessed perceptually, consuming time and manpower. Hence, there is a need for an easy-to-apply and reliable automatic method. To create a reference for an automatic system, speech data of 58 children with CLP were assessed perceptually by experienced speech therapists for characteristic phonetic disorders at the phoneme level. The first part of the article aims to detect such characteristics by a semiautomatic procedure, and the second to evaluate a fully automatic, thus simple, procedure. The methods are based on a combination of speech processing algorithms. The semiautomatic method achieves moderate to good agreement (kappa approximately 0.6) for the detection of all phonetic disorders. On a speaker level, significant correlations between the perceptual evaluation and the automatic system of 0.89 are obtained. The fully automatic system yields a correlation of 0.81 with the perceptual evaluation on the speaker level. This correlation is in the range of the inter-rater correlation of the listeners. The automatic speech evaluation is able to detect phonetic disorders at an expert's level without any additional human postprocessing.

  1. Automatic short axis orientation of the left ventricle in 3D ultrasound recordings

    NASA Astrophysics Data System (ADS)

    Pedrosa, João.; Heyde, Brecht; Heeren, Laurens; Engvall, Jan; Zamorano, Jose; Papachristidis, Alexandros; Edvardsen, Thor; Claus, Piet; D'hooge, Jan

    2016-04-01

    The recent advent of three-dimensional echocardiography has led to an increased interest from the scientific community in left ventricle segmentation frameworks for cardiac volume and function assessment. An automatic orientation of the segmented left ventricular mesh is an important step to obtain a point-to-point correspondence between the mesh and the cardiac anatomy. Furthermore, this would allow for an automatic division of the left ventricle into the standard 17 segments and, thus, fully automatic per-segment analysis, e.g. regional strain assessment. In this work, a method for fully automatic short axis orientation of the segmented left ventricle is presented. The proposed framework aims at detecting the inferior right ventricular insertion point. 211 three-dimensional echocardiographic images were used to validate this framework by comparison to manual annotation of the inferior right ventricular insertion point. A mean unsigned error of 8.05° +/- 18.50° was found, whereas the mean signed error was 1.09°. Large deviations between the manual and automatic annotations (> 30°) only occurred in 3.79% of cases. The average computation time was 666 ms in a non-optimized MATLAB environment, which enables real-time application. In conclusion, a successful automatic real-time method for orientation of the segmented left ventricle is proposed.
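    Summary statistics like the mean unsigned and signed errors reported above have to account for angular wraparound. A small sketch of how such errors might be computed (the angle values are made up):

```python
import numpy as np

def angular_errors(auto_deg, manual_deg):
    # Signed difference wrapped to (-180, 180], then the two summary stats.
    diff = (np.asarray(auto_deg) - np.asarray(manual_deg) + 180.0) % 360.0 - 180.0
    return np.mean(np.abs(diff)), np.mean(diff)

# Toy automatic vs. manual annotations, in degrees.
unsigned, signed = angular_errors([10.0, 350.0, 200.0], [5.0, 355.0, 210.0])
print(unsigned, signed)  # mean |error| and mean signed error
```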

  2. PFMCal : Photonic force microscopy calibration extended for its application in high-frequency microrheology

    NASA Astrophysics Data System (ADS)

    Butykai, A.; Domínguez-García, P.; Mor, F. M.; Gaál, R.; Forró, L.; Jeney, S.

    2017-11-01

    The present document is an update of the previously published MatLab code for the calibration of optical tweezers in the high-resolution detection of the Brownian motion of non-spherical probes [1]. In this instance, an alternative version of the original code, based on the same physical theory [2] but focused on automating the calibration of measurements using spherical probes, is outlined. The new code is useful for high-frequency microrheology studies, where the probe radius is known but the viscosity of the surrounding fluid may not be. This extended calibration methodology is automatic, without the need for a user interface. A code for calibration by means of thermal noise analysis [3] is also included; this method can be applied to viscoelastic fluids if the trap stiffness has previously been estimated [4]. The new code can be executed in MatLab and in GNU Octave. Program Files doi: http://dx.doi.org/10.17632/s59f3gz729.1 Licensing provisions: GPLv3 Programming language: MatLab 2016a (MathWorks Inc.) and GNU Octave 4.0 Operating system: Linux and Windows. Supplementary material: A new document README.pdf includes basic running instructions for the new code. Journal reference of previous version: Computer Physics Communications, 196 (2015) 599 Does the new version supersede the previous version?: No. It adds alternative but compatible code while providing similar calibration factors. Nature of problem: The original code uses a MatLab-provided user interface, which is not available in GNU Octave and cannot be used outside proprietary software such as MatLab. Moreover, calibration with spherical probes requires an automatic method when large amounts of data are processed for microrheology. Solution method: The new code can be executed in the latest version of MatLab and in GNU Octave, a free and open-source alternative to MatLab. 
This code implements an automatic calibration process that only requires writing the input data in the main script. Additionally, we include a calibration method based on thermal noise statistics, which can be used with viscoelastic fluids if the trap stiffness has previously been estimated. Reasons for the new version: This version extends the functionality of PFMCal to the particular case of spherical probes and unknown fluid viscosities. The extended code is automatic, works in different operating systems and is compatible with GNU Octave. Summary of revisions: The original MatLab program in the previous version, which is executed by PFMCal.m, is not changed. We have added two additional main files, named PFMCal_auto.m and PFMCal_histo.m, which implement automatic calculation of the calibration process and calibration through Boltzmann statistics, respectively. The calibration process for spherical beads using this code is described in the README.pdf file provided in the new code submission. We obtain different calibration factors, β (given in μm/V), according to [2], related to two statistical quantities: the mean-squared displacement (MSD), βMSD, and the velocity autocorrelation function (VAF), βVAF. Using that methodology, the trap stiffness, k, and the zero-shear viscosity of the fluid, η, can be calculated if the particle's radius, a, is known beforehand. For comparison, we include in the extended code the method of calibration using the corner frequency of the power-spectral density (PSD) [5], providing a calibration factor βPSD. Furthermore, with a prior estimate of the trap stiffness and the known value of the particle's radius, we can use thermal noise statistics to obtain calibration factors β according to the quadratic form of the optical potential, βE, and related to the Gaussian distribution of the bead's positions, βσ2. 
This method has been demonstrated to be applicable to the calibration of optical tweezers with non-Newtonian viscoelastic polymeric liquids [4]. An example of the results of this calibration process is summarized in Table 1. Using the data provided in the new code submission, for water and acetone, we calculate all the calibration factors using the original PFMCal.m and the new non-GUI code PFMCal_auto.m and PFMCal_histo.m. Regarding the new code, PFMCal_auto.m returns η, k, βMSD, βVAF and βPSD, while PFMCal_histo.m provides βσ2 and βE. Table 1 shows that we obtain the expected viscosity of the two fluids at this temperature and that the different methods provide good agreement between trap stiffnesses and calibration factors. Additional comments including restrictions and unusual features: The original code, PFMCal.m, runs under MatLab using the Statistics Toolbox. The extended code, PFMCal_auto.m and PFMCal_histo.m, can be executed without modification in MatLab or GNU Octave. The code has been tested on Linux and Windows operating systems.
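    The Boltzmann-statistics route to a calibration factor can be illustrated with the equipartition relation: for a harmonic trap, k·var(x) = kB·T, and with x = β·V the factor β follows from the standard deviation of the raw detector signal. A toy sketch with synthetic data (this is not the PFMCal code itself; the temperature, stiffness and β values are made up):

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K
T = 298.0          # temperature, K (assumed)

def beta_sigma2(signal_V, k_trap):
    # Equipartition for a harmonic trap: k_trap * var(x) = kB * T.
    # With x = beta * V, beta = sqrt(kB * T / k_trap) / std(V).
    return np.sqrt(kB * T / k_trap) / np.std(signal_V)

# Synthetic detector signal for a bead with known (made-up) beta and stiffness.
rng = np.random.default_rng(1)
k_trap = 1e-5      # N/m, assumed previously estimated
beta_true = 5e-6   # m/V
positions = rng.normal(0.0, np.sqrt(kB * T / k_trap), size=200_000)
signal = positions / beta_true  # detector voltage
beta_est = beta_sigma2(signal, k_trap)
print(beta_est)  # close to beta_true
```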

  3. Automatic real-time control of suspended sediment based upon high frequency in situ measurements of nephelometric turbidity

    Treesearch

    Jack Lewis; Rand Eads

    1998-01-01

    Abstract - For estimating suspended sediment concentration (SSC) in rivers, turbidity is potentially a much better predictor than water discharge. Since about 1990, it has been feasible to automatically collect high frequency turbidity data at remote sites using battery-powered turbidity probes that are properly mounted in the river or stream. With sensors calibrated...
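    Estimating SSC from turbidity typically relies on a rating relation fitted to paired turbidity and laboratory SSC samples, then applied to the continuous turbidity record. A minimal sketch with invented sample values:

```python
import numpy as np

# Hypothetical paired samples: turbidity (NTU) vs. lab-measured SSC (mg/L).
turbidity = np.array([5.0, 20.0, 60.0, 120.0, 300.0])
ssc = np.array([12.0, 45.0, 130.0, 260.0, 640.0])

# Fit SSC = a * turbidity + b, then predict SSC for a new turbidity reading.
a, b = np.polyfit(turbidity, ssc, 1)
predicted = a * 150.0 + b
print(round(predicted, 1))
```

    Real rating relations are often power laws fitted in log space and are re-estimated as new physical samples arrive.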

  4. Automatic Cell Segmentation in Fluorescence Images of Confluent Cell Monolayers Using Multi-object Geometric Deformable Model.

    PubMed

    Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L

    2013-03-13

    With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and principal curvature based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps and gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.
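    The Dice coefficient used in the comparison above is straightforward to compute from two binary masks (toy masks shown):

```python
import numpy as np

def dice(a, b):
    # Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|).
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two overlapping 4x4 squares on an 8x8 grid: 16 px each, 9 px overlap.
auto = np.zeros((8, 8), bool); auto[2:6, 2:6] = True
manual = np.zeros((8, 8), bool); manual[3:7, 3:7] = True
print(dice(auto, manual))  # 2*9/32 = 0.5625
```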

  5. Research on auto-calibration technology of the image plane's center of 360-degree and all round looking camera

    NASA Astrophysics Data System (ADS)

    Zhang, Shaojun; Xu, Xiping

    2015-10-01

    The 360-degree all-round-looking camera, with its suitability for automatic analysis and judgment of the carrier's ambient environment by image recognition algorithms, is usually applied in the opto-electronic radar of robots and smart cars. To ensure the stability and consistency of image processing results in mass production, the centers of the image planes of different cameras must coincide, which requires calibrating the position of the image plane's center. The traditional mechanical calibration method and the electronic adjustment mode of inputting offsets manually both rely on human eyes, are inefficient and exhibit a large error distribution. In this paper, an approach for auto-calibration of the image plane of this camera is presented. The image produced by the 360-degree all-round-looking camera is ring-shaped, consisting of two concentric circles: the center of the image is a smaller circle and the outside is a bigger circle. The technique exploits exactly these characteristics. By recognizing the two circles through a Hough transform and calculating the center position, we obtain the accurate center of the image, that is, the deviation between the optical axis and the center of the image sensor. The program then configures the image sensor chip through the I2C bus automatically, so the center of the image plane can be adjusted automatically and accurately. The technique has been applied in practice; it improves productivity and guarantees consistent product quality.
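    The center-finding step can be illustrated with a minimal Hough vote for a circle of known radius (a simplification of the full circle Hough transform, run here on a synthetic ring of edge points):

```python
import numpy as np

def hough_center(edge_points, radius, shape):
    # Minimal Hough vote for a circle center of known radius: each edge
    # point votes for all candidate centers at distance `radius` from it.
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    for y, x in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), shape)

# Synthetic ring of radius 20 centred at (50, 60).
t = np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False)
pts = [(50 + 20 * np.sin(a), 60 + 20 * np.cos(a)) for a in t]
center = hough_center(pts, 20.0, (100, 120))
print(center)  # (50, 60)
```

    A production implementation would first extract edge points from the ring image and also search over the unknown radii of the two concentric circles.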

  6. Integration of Infrared Thermography and Photogrammetric Surveying of Built Landscape

    NASA Astrophysics Data System (ADS)

    Scaioni, M.; Rosina, E.; L'Erario, A.; Dìaz-Vilariño, L.

    2017-05-01

    The thermal analysis of buildings represents a key step in the reduction of energy consumption, also in the case of Cultural Heritage, where the complexity of the constructions and of the adopted materials may require special analysis and tailored solutions. Infrared Thermography (IRT) is an important non-destructive investigation technique that may aid in the thermal analysis of buildings. The paper reports the application of IRT to a listed building belonging to the Cultural Heritage and to a residential one, as a demonstration that IRT is a suitable and convenient tool for analysing existing buildings. The purposes of the analysis are the assessment of damage and of the energy efficiency of the building envelope. Since in many cases the complex geometry of historic constructions may complicate the thermal analysis, the integration of IRT with accurate 3D models has been developed during the latest years. Here the authors propose a solution based on up-to-date photogrammetric techniques for purely image-based 3D modelling, including automatic image orientation/sensor calibration using Structure-from-Motion and dense matching. Thus, an almost fully automatic pipeline for the generation of accurate 3D models showing the temperatures on a building skin in a realistic manner is described, where the only manual task is the measurement of a few common points for co-registration of the RGB and IR photogrammetric projects.
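    Co-registration from a few common points is commonly solved as a least-squares rigid alignment. The paper does not specify its exact algorithm; a sketch of the SVD-based (Kabsch) estimate on synthetic points, assuming equal scale in both projects:

```python
import numpy as np

def kabsch(src, dst):
    # Least-squares rotation R and translation t with dst ≈ R @ src + t,
    # estimated from corresponding 3D points via SVD (Kabsch algorithm).
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(U @ Vt))  # guard against reflections
    R = (U @ np.diag([1.0, 1.0, d]) @ Vt).T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Synthetic check: rotate/translate points, then recover the transform.
rng = np.random.default_rng(3)
src = rng.normal(size=(5, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta), np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = kabsch(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [1.0, -2.0, 0.5]))  # True True
```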

  7. Applying modern psychometric techniques to melodic discrimination testing: Item response theory, computerised adaptive testing, and automatic item generation.

    PubMed

    Harrison, Peter M C; Collins, Tom; Müllensiefen, Daniel

    2017-06-15

    Modern psychometric theory provides many useful tools for ability testing, such as item response theory, computerised adaptive testing, and automatic item generation. However, these techniques have yet to be integrated into mainstream psychological practice. This is unfortunate, because modern psychometric techniques can bring many benefits, including sophisticated reliability measures, improved construct validity, avoidance of exposure effects, and improved efficiency. In the present research we therefore use these techniques to develop a new test of a well-studied psychological capacity: melodic discrimination, the ability to detect differences between melodies. We calibrate and validate this test in a series of studies. Studies 1 and 2 respectively calibrate and validate an initial test version, while Studies 3 and 4 calibrate and validate an updated test version incorporating additional easy items. The results support the new test's viability, with evidence for strong reliability and construct validity. We discuss how these modern psychometric techniques may also be profitably applied to other areas of music psychology and psychological science in general.

  8. Calibration-free gaze tracking for automatic measurement of visual acuity in human infants.

    PubMed

    Xiong, Chunshui; Huang, Lei; Liu, Changping

    2014-01-01

    Most existing vision-based methods for gaze tracking need a tedious calibration process in which subjects are required to fixate on one or several specific points in space. However, this is hard to accomplish, especially for children and infants. In this paper, a new calibration-free gaze tracking system and method is presented for automatic measurement of visual acuity in human infants. To the best of our knowledge, this is the first application of vision-based gaze tracking to the measurement of visual acuity. First, a polynomial of the pupil center-cornea reflections (PCCR) vector is used as the gaze feature. Then, a Gaussian mixture model (GMM), trained offline using labeled data from subjects with healthy eyes, is employed for gaze behavior classification. Experimental results on several subjects show that the proposed method is accurate, robust and sufficient for the measurement of visual acuity in human infants.
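    The offline-trained classification step can be illustrated with a toy two-class Gaussian model over PCCR-like features (a simplification of a full GMM; the feature distributions are invented):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Toy stand-in for GMM-based gaze-behaviour classification: each class is
# modelled by a single Gaussian over invented 2D features fitted offline
# from labelled data; a new sample goes to the likelier class.
rng = np.random.default_rng(2)
fixating = rng.normal([0.0, 0.0], 0.1, size=(500, 2))
wandering = rng.normal([1.0, 1.0], 0.4, size=(500, 2))

models = {
    "fixating": multivariate_normal(fixating.mean(0), np.cov(fixating.T)),
    "not fixating": multivariate_normal(wandering.mean(0), np.cov(wandering.T)),
}

def classify(feature):
    # Assign the class whose fitted Gaussian gives the higher likelihood.
    return max(models, key=lambda c: models[c].pdf(feature))

print(classify([0.05, -0.02]), classify([0.9, 1.1]))
```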

  9. Fully automatic guidance and control for rotorcraft nap-of-the-Earth flight following planned profiles. Volume 1: Real-time piloted simulation

    NASA Technical Reports Server (NTRS)

    Clement, Warren F.; Gorder, Peter J.; Jewell, Wayne F.

    1991-01-01

    Developing a single-pilot, all-weather nap-of-the-earth (NOE) capability requires fully automatic NOE (ANOE) navigation and flight control. Innovative guidance and control concepts are investigated in a four-fold research effort that: (1) organizes the on-board computer-based storage and real-time updating of NOE terrain profiles and obstacles in course-oriented coordinates indexed to the mission flight plan; (2) defines a class of automatic anticipative pursuit guidance algorithms and necessary data preview requirements to follow the vertical, lateral, and longitudinal guidance commands dictated by the updated flight profiles; (3) automates a decision-making process for unexpected obstacle avoidance; and (4) provides several rapid response maneuvers. Acquired knowledge from the sensed environment is correlated with foreknowledge of the recorded environment (terrain, cultural features, threats, and targets), which is then used to determine an appropriate evasive maneuver if a nonconformity of the sensed and recorded environments is observed. This four-fold research effort was evaluated in both fixed-base and moving-base real-time piloted simulations, thereby providing a practical demonstration for evaluating pilot acceptance of the automated concepts, supervisory override, manual operation, and re-engagement of the automatic system. Volume one describes the major components of the guidance and control laws as well as the results of the piloted simulations. Volume two describes the complete mathematical model of the fully automatic guidance system for rotorcraft NOE flight following planned flight profiles.

  10. Fully Automatic Guidance and Control for Rotorcraft Nap-of-the-earth Flight Following Planned Profiles. Volume 2: Mathematical Model

    NASA Technical Reports Server (NTRS)

    Clement, Warren F.; Gorder, Peter J.; Jewell, Wayne F.

    1991-01-01

    Developing a single-pilot, all-weather nap-of-the-earth (NOE) capability requires fully automatic NOE (ANOE) navigation and flight control. Innovative guidance and control concepts are investigated in a four-fold research effort that: (1) organizes the on-board computer-based storage and real-time updating of NOE terrain profiles and obstacles in course-oriented coordinates indexed to the mission flight plan; (2) defines a class of automatic anticipative pursuit guidance algorithms and necessary data preview requirements to follow the vertical, lateral, and longitudinal guidance commands dictated by the updated flight profiles; (3) automates a decision-making process for unexpected obstacle avoidance; and (4) provides several rapid response maneuvers. Acquired knowledge from the sensed environment is correlated with foreknowledge of the recorded environment (terrain, cultural features, threats, and targets), which is then used to determine an appropriate evasive maneuver if a nonconformity of the sensed and recorded environments is observed. This four-fold research effort was evaluated in both fixed-base and moving-base real-time piloted simulations, thereby providing a practical demonstration for evaluating pilot acceptance of the automated concepts, supervisory override, manual operation, and re-engagement of the automatic system. Volume one describes the major components of the guidance and control laws as well as the results of the piloted simulations. Volume two describes the complete mathematical model of the fully automatic guidance system for rotorcraft NOE flight following planned flight profiles.

  11. Fully automatic detection and segmentation of abdominal aortic thrombus in post-operative CTA images using Deep Convolutional Neural Networks.

    PubMed

    López-Linares, Karen; Aranjuelo, Nerea; Kabongo, Luis; Maclair, Gregory; Lete, Nerea; Ceresa, Mario; García-Familiar, Ainhoa; Macía, Iván; González Ballester, Miguel A

    2018-05-01

    Computerized Tomography Angiography (CTA) based follow-up of Abdominal Aortic Aneurysms (AAA) treated with Endovascular Aneurysm Repair (EVAR) is essential to evaluate the progress of the patient and detect complications. In this context, accurate quantification of post-operative thrombus volume is required. However, a proper evaluation is hindered by the lack of automatic, robust and reproducible thrombus segmentation algorithms. We propose a new fully automatic approach based on Deep Convolutional Neural Networks (DCNN) for robust and reproducible thrombus region of interest detection and subsequent fine thrombus segmentation. The DetecNet detection network is adapted to perform region of interest extraction from a complete CTA, and a new segmentation network architecture, based on Fully Convolutional Networks and a Holistically-Nested Edge Detection Network, is presented. These networks are trained, validated and tested on 13 post-operative CTA volumes of different patients using a 4-fold cross-validation approach to provide more robustness to the results. Our pipeline achieves a Dice score of more than 82% for post-operative thrombus segmentation and provides a mean relative volume difference between ground truth and automatic segmentation that lies within the experienced human observer variance, without the need of human intervention in most common cases. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Automated contour detection in X-ray left ventricular angiograms using multiview active appearance models and dynamic programming.

    PubMed

    Oost, Elco; Koning, Gerhard; Sonka, Milan; Oemrawsingh, Pranobe V; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2006-09-01

    This paper describes a new approach to the automated segmentation of X-ray left ventricular (LV) angiograms, based on active appearance models (AAMs) and dynamic programming. A coupling of shape and texture information between the end-diastolic (ED) and end-systolic (ES) frame was achieved by constructing a multiview AAM. Over-constraining of the model was compensated for by employing dynamic programming, integrating both intensity and motion features in the cost function. Two applications are compared: a semi-automatic method with manual model initialization, and a fully automatic algorithm. The first proved to be highly robust and accurate, demonstrating high clinical relevance. Based on experiments involving 70 patient data sets, the algorithm's success rate was 100% for ED and 99% for ES, with average unsigned border positioning errors of 0.68 mm for ED and 1.45 mm for ES. Calculated volumes were accurate and unbiased. The fully automatic algorithm, with intrinsically less user interaction was less robust, but showed a high potential, mostly due to a controlled gradient descent in updating the model parameters. The success rate of the fully automatic method was 91% for ED and 83% for ES, with average unsigned border positioning errors of 0.79 mm for ED and 1.55 mm for ES.

  13. Monitoring groundwater and river interaction along the Hanford reach of the Columbia River

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, M.D.

    1994-04-01

    As an adjunct to efficient Hanford Site characterization and remediation of groundwater contamination, an automatic monitor network has been used to measure Columbia River and adjacent groundwater levels in several areas of the Hanford Site since 1991. Water levels, temperatures, and electrical conductivity measured by the automatic monitor network provided an initial database with which to calibrate models and from which to infer ground and river water interactions for site characterization and remediation activities. Measurements of the dynamic river/aquifer system have been simultaneous at 1-hr intervals, with a quality suitable for hydrologic modeling and for computer model calibration and testing. This report describes the equipment, procedures, and results from measurements done in 1993.

  14. Simulating soil moisture change in a semiarid rangeland watershed with a process-based water-balance model

    Treesearch

    Howard Evan Canfield; Vicente L. Lopes

    2000-01-01

    A process-based, simulation model for evaporation, soil water and streamflow (BROOK903) was used to estimate soil moisture change on a semiarid rangeland watershed in southeastern Arizona. A sensitivity analysis was performed to select parameters affecting ET and soil moisture for calibration. Automatic parameter calibration was performed using a procedure based on a...

  15. Automatic switching matrix

    DOEpatents

    Schlecht, Martin F.; Kassakian, John G.; Caloggero, Anthony J.; Rhodes, Bruce; Otten, David; Rasmussen, Neil

    1982-01-01

    An automatic switching matrix that includes an apertured matrix board containing a matrix of wires that can be interconnected at each aperture. Each aperture has associated therewith a conductive pin which, when fully inserted into the associated aperture, effects electrical connection between the wires within that particular aperture. Means is provided for automatically inserting the pins in a determined pattern and for removing all the pins to permit other interconnecting patterns.

  16. Laser Calibration of an Impact Disdrometer

    NASA Technical Reports Server (NTRS)

    Lane, John E.; Kasparis, Takis; Metzger, Philip T.; Jones, W. Linwood

    2014-01-01

    A practical approach to developing an operational low-cost disdrometer hinges on implementing an effective in situ adaptive calibration strategy. This calibration strategy lowers the cost of the device and provides a method to guarantee continued automatic calibration. In previous work, a collocated tipping bucket rain gauge was utilized to provide a calibration signal to the disdrometer's digital signal processing software. Rainfall rate is proportional to the 11/3 moment of the drop size distribution (a 7/2 moment can also be assumed, depending on the choice of terminal velocity relationship). In the previous case, the disdrometer calibration was characterized and weighted to the 11/3 moment of the drop size distribution (DSD). Optical extinction by rainfall is proportional to the 2nd moment of the DSD. Using visible laser light as a means to focus and generate an auxiliary calibration signal, the adaptive calibration processing is significantly improved.
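    The moment weighting described above is easy to make concrete. Below is a minimal numpy sketch of computing DSD moments; the exponential DSD and all constants are illustrative assumptions, not values from the paper:

```python
import numpy as np

def dsd_moment(diam_mm, conc_per_mm_m3, order):
    """n-th moment of a discretized drop size distribution:
    M_n = sum over bins of N(D) * D**n * dD."""
    d = np.asarray(diam_mm, dtype=float)
    n = np.asarray(conc_per_mm_m3, dtype=float)
    dD = np.gradient(d)              # bin widths, works for non-uniform grids too
    return float(np.sum(n * d**order * dD))

# Marshall-Palmer-like exponential DSD as a synthetic example
D = np.linspace(0.1, 6.0, 200)       # drop diameter, mm
N = 8000.0 * np.exp(-2.3 * D)        # number concentration, mm^-1 m^-3

m2   = dsd_moment(D, N, 2.0)         # ~ optical-extinction weighting
m113 = dsd_moment(D, N, 11.0 / 3.0)  # ~ rainfall-rate weighting
```

    For an exponential DSD the higher-order moment is dominated by the larger drops, which is why a rain-gauge signal (11/3 moment) and a laser-extinction signal (2nd moment) constrain the calibration differently.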

  17. Automatic Control Of Length Of Welding Arc

    NASA Technical Reports Server (NTRS)

    Iceland, William F.

    1991-01-01

    Nonlinear relationships among current, voltage, and length stored in electronic memory. Conceptual microprocessor-based control subsystem maintains constant length of welding arc in gas/tungsten arc-welding system, even when welding current varied. Uses feedback of current and voltage from welding arc. Directs motor to set position of torch according to previously measured relationships among current, voltage, and length of arc. Signal paths marked "calibration" or "welding" used during those processes only. Other signal paths used during both processes. Control subsystem added to existing manual or automatic welding system equipped with automatic voltage control.
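    The feedback idea above can be sketched as a toy control loop. Everything here is hypothetical: the stored nonlinear relationship is replaced by an invented linear surrogate, and all coefficients and the gain are illustrative only:

```python
# Hypothetical surrogate for the stored voltage/current/length relationship:
# V ~ 10 + 0.05*I + 1.2*L, which we invert to estimate arc length.

def arc_length_mm(voltage_v, current_a):
    """Estimate arc length from measured voltage and current (toy model)."""
    return (voltage_v - 10.0 - 0.05 * current_a) / 1.2

def torch_step(voltage_v, current_a, target_mm, gain=0.5):
    """Proportional correction to torch position (positive = lengthen arc)."""
    return gain * (target_mm - arc_length_mm(voltage_v, current_a))

# Closed-loop toy run: arc starts 2 mm too long; controller shortens it.
length = 5.0
for _ in range(20):
    v = 10.0 + 0.05 * 150.0 + 1.2 * length   # simulated arc voltage at 150 A
    length += torch_step(v, 150.0, target_mm=3.0)
```

    With a gain of 0.5 the length error halves every iteration, so the loop settles on the 3 mm target within a few steps.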

  18. Performance of automatic scanning microscope for nuclear emulsion experiments

    NASA Astrophysics Data System (ADS)

    Güler, A. Murat; Altınok, Özgür

    2015-12-01

    The impressive improvements in scanning technology and methods have allowed nuclear emulsion to be used as a target in recent large experiments. We report the performance of an automatic scanning microscope for nuclear emulsion experiments. After successful calibration and alignment of the system, we have reached 99% tracking efficiency for minimum ionizing tracks penetrating through the emulsion films. The automatic scanning system has been successfully used for the scanning of emulsion films in the OPERA experiment and is planned for use in the next generation of nuclear emulsion experiments.

  19. Automatized alignment control of wing mechanization in aerodynamic contour of aircraft

    NASA Astrophysics Data System (ADS)

    Odnokurtsev, K. A.

    2018-05-01

    The article describes a method for automated control of the accuracy of an aircraft's aerodynamic contour when mounting wing mechanization elements. A control device integrated in the wing assembly stand and equipped with distance sensors is proposed. The inaccuracies of the control points are measured automatically by a dedicated computer program. Two kinds of sensor calibration are performed in advance in order to increase the measurement accuracy. As a result, the time needed to control and adjust the mechanization elements is reduced.

  20. Performance of automatic scanning microscope for nuclear emulsion experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Güler, A. Murat, E-mail: mguler@newton.physics.metu.edu.tr; Altınok, Özgür; Tufts University, Medford, MA 02155

    The impressive improvements in scanning technology and methods have allowed nuclear emulsion to be used as a target in recent large experiments. We report the performance of an automatic scanning microscope for nuclear emulsion experiments. After successful calibration and alignment of the system, we have reached 99% tracking efficiency for minimum ionizing tracks penetrating through the emulsion films. The automatic scanning system has been successfully used for the scanning of emulsion films in the OPERA experiment and is planned for use in the next generation of nuclear emulsion experiments.

  1. Aero-Thermal Calibration of the NASA Glenn Icing Research Tunnel (2012 Tests)

    NASA Technical Reports Server (NTRS)

    Pastor-Barsi, Christine; Allen, Arrington E.

    2013-01-01

    A full aero-thermal calibration of the NASA Glenn Icing Research Tunnel (IRT) was completed in 2012 following the major modifications to the facility that included replacement of the refrigeration plant and heat exchanger. The calibration test provided data used to fully document the aero-thermal flow quality in the IRT test section and to construct calibration curves for the operation of the IRT.

  2. A New Calibration Method for Commercial RGB-D Sensors.

    PubMed

    Darwish, Walid; Tang, Shenjun; Li, Wenbin; Chen, Wu

    2017-05-24

    Commercial RGB-D sensors such as Kinect and Structure Sensors have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high quality 3D is required, i.e., 3D building models of centimeter-level accuracy, accurate and reliable calibrations of these sensors are required. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including internal calibration parameters for all cameras, the baseline between the infrared and RGB cameras, and the depth error model. When compared with traditional calibration methods, this new model shows a significant improvement in depth precision for both near and far ranges.

  3. Modulus design multiwavelength polarization microscope for transmission Mueller matrix imaging.

    PubMed

    Zhou, Jialing; He, Honghui; Chen, Zhenhua; Wang, Ye; Ma, Hui

    2018-01-01

    We have developed a polarization microscope based on a commercial transmission microscope. We replace the halogen light source with a collimated LED light source module of six different colors. We use achromatic polarized optical elements that can cover the six different wavelength ranges in the polarization state generator (PSG) and polarization state analyzer (PSA) modules. The dual-rotating wave plate method is used to measure the Mueller matrix of samples, which requires the simultaneous rotation of the two quarter-wave plates in both PSG and PSA at certain angular steps. A scientific CCD detector is used as the image receiving module. LabView-based software is developed to control the rotation angles of the wave plates and the exposure time of the detector to allow the system to run fully automatically in preprogrammed schedules. Standard samples, such as air, polarizers, and quarter-wave plates, are used to calibrate the intrinsic Mueller matrix of optical components, such as the objectives, using the eigenvalue calibration method. Errors due to image walk-off in the PSA are studied. Errors in the Mueller matrices are below 0.01 using air and polarizer as standard samples. Data analysis based on Mueller matrix transformation and Mueller matrix polarization decomposition is used to demonstrate the potential application of this microscope in pathological diagnosis. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
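    The dual-rotating wave plate method builds on the Mueller matrix of a rotated linear retarder. A small numpy sketch under one common sign convention (retarder sign conventions differ between texts, so the signs here are an assumption):

```python
import numpy as np

def rot(theta):
    """Mueller rotation matrix for a frame rotation by theta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1, 0,  0, 0],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [0, 0,  0, 1]], dtype=float)

def waveplate(theta, delta):
    """Mueller matrix of a linear retarder: retardance delta, fast axis theta."""
    cd, sd = np.cos(delta), np.sin(delta)
    m = np.array([[1, 0,   0,  0],
                  [0, 1,   0,  0],
                  [0, 0,  cd, sd],
                  [0, 0, -sd, cd]], dtype=float)
    return rot(theta) @ m @ rot(-theta)

# A quarter-wave plate at 45 deg turns horizontal linear into circular light.
S_in = np.array([1.0, 1.0, 0.0, 0.0])              # horizontal linear Stokes vector
S_out = waveplate(np.pi / 4, np.pi / 2) @ S_in     # -> [1, 0, 0, 1]
```

    Stepping `theta` in both PSG and PSA and recording intensities yields the linear system from which the sample's Mueller matrix is recovered.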

  4. Fully automatic lesion segmentation in breast MRI using mean-shift and graph-cuts on a region adjacency graph.

    PubMed

    McClymont, Darryl; Mehnert, Andrew; Trakic, Adnan; Kennedy, Dominic; Crozier, Stuart

    2014-04-01

    To present and evaluate a fully automatic method for segmentation (i.e., detection and delineation) of suspicious tissue in breast MRI. The method, based on mean-shift clustering and graph-cuts on a region adjacency graph, was developed and its parameters tuned using multimodal (T1, T2, DCE-MRI) clinical breast MRI data from 35 subjects (training data). It was then tested using two data sets. Test set 1 comprises data for 85 subjects (93 lesions) acquired using the same protocol and scanner system used to acquire the training data. Test set 2 comprises data for eight subjects (nine lesions) acquired using a similar protocol but a different vendor's scanner system. Each lesion was manually delineated in three-dimensions by an experienced breast radiographer to establish segmentation ground truth. The regions of interest identified by the method were compared with the ground truth and the detection and delineation accuracies quantitatively evaluated. One hundred percent of the lesions were detected with a mean of 4.5 ± 1.2 false positives per subject. This false-positive rate is nearly 50% better than previously reported for a fully automatic breast lesion detection system. The median Dice coefficient for Test set 1 was 0.76 (interquartile range, 0.17), and 0.75 (interquartile range, 0.16) for Test set 2. The results demonstrate the efficacy and accuracy of the proposed method as well as its potential for direct application across different MRI systems. It is (to the authors' knowledge) the first fully automatic method for breast lesion detection and delineation in breast MRI.
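    The Dice coefficient used above to score delineation accuracy is straightforward to compute from binary masks; a minimal sketch:

```python
import numpy as np

def dice(seg, gt):
    """Dice similarity coefficient between two binary masks (1.0 = identical)."""
    seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
    denom = seg.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(seg, gt).sum() / denom

# toy 2-D example: two 4-voxel squares overlapping in 2 voxels
a = np.zeros((4, 4), int); a[0:2, 0:2] = 1
b = np.zeros((4, 4), int); b[1:3, 0:2] = 1
d = dice(a, b)                     # 2*2 / (4+4) = 0.5
```

    The same formula applies unchanged to 3-D volumes, as used for the lesion delineations in the study.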

  5. Automatic Generation of Rasch-Calibrated Items: Figural Matrices Test GEOM and Endless-Loops Test EC

    ERIC Educational Resources Information Center

    Arendasy, Martin

    2005-01-01

    The future of test construction for certain psychological ability domains that can be analyzed well in a structured manner may lie--at the very least for reasons of test security--in the field of automatic item generation. In this context, a question that has not been explicitly addressed is whether it is possible to embed an item response theory…

  6. A fully automatic three-step liver segmentation method on LDA-based probability maps for multiple contrast MR images.

    PubMed

    Gloger, Oliver; Kühn, Jens; Stanski, Adam; Völzke, Henry; Puls, Ralf

    2010-07-01

    Automatic 3D liver segmentation in magnetic resonance (MR) data sets has proven to be a very challenging task in the domain of medical image analysis. There exist numerous approaches for automatic 3D liver segmentation on computer tomography data sets that have influenced the segmentation of MR images. In contrast to previous approaches to liver segmentation in MR data sets, we use all available MR channel information of different weightings and formulate liver tissue and position probabilities in a probabilistic framework. We apply multiclass linear discriminant analysis as a fast and efficient dimensionality reduction technique and generate probability maps then used for segmentation. We develop a fully automatic three-step 3D segmentation approach based upon a modified region growing approach and a further threshold technique. Finally, we incorporate characteristic prior knowledge to improve the segmentation results. This novel 3D segmentation approach is modularized and can be applied for normal and fat accumulated liver tissue properties. Copyright 2010 Elsevier Inc. All rights reserved.
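    The LDA-based probability-map idea can be sketched with a two-class, two-channel toy problem (synthetic data and equal class priors are assumed here; the paper's multiclass pipeline is more involved):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "multi-channel MR" features: 2 channels, liver vs. background
liver = rng.normal([2.0, 1.0], 0.5, size=(200, 2))
backg = rng.normal([0.0, 0.0], 0.5, size=(200, 2))
X = np.vstack([liver, backg])
y = np.array([1] * 200 + [0] * 200)

# LDA with a shared (pooled) covariance gives a linear decision function
mu1, mu0 = liver.mean(axis=0), backg.mean(axis=0)
centered = np.vstack([liver - mu1, backg - mu0])
cov = centered.T @ centered / (len(X) - 2)
w = np.linalg.solve(cov, mu1 - mu0)
b = -0.5 * (mu1 + mu0) @ w          # equal priors assumed

def liver_probability(x):
    """Posterior P(liver | x) under the two-Gaussian LDA model."""
    return 1.0 / (1.0 + np.exp(-(np.asarray(x) @ w + b)))

p_map = liver_probability(X)        # probability "map" over all samples
```

    Applied voxel-wise to the MR channels, `liver_probability` produces exactly the kind of probability map the segmentation steps then threshold and grow.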

  7. Comparison between manual and semi-automatic segmentation of nasal cavity and paranasal sinuses from CT images.

    PubMed

    Tingelhoff, K; Moral, A I; Kunkel, M E; Rilk, M; Wagner, I; Eichhorn, K G; Wahl, F M; Bootz, F

    2007-01-01

    Segmentation of medical image data has become increasingly important in recent years. The results are used for diagnosis, surgical planning, or workspace definition of robot-assisted systems. The purpose of this paper is to find out whether manual or semi-automatic segmentation is adequate for the ENT surgical workflow, or whether fully automatic segmentation of the paranasal sinuses and nasal cavity is needed. We present a comparison of manual and semi-automatic segmentation of the paranasal sinuses and the nasal cavity. Manual segmentation is performed by custom software, whereas semi-automatic segmentation is realized by a commercial product (Amira). For this study we used a CT dataset of the paranasal sinuses which consists of 98 transversal slices, each 1.0 mm thick, with a resolution of 512 x 512 pixels. For the analysis of both segmentation procedures we used volume, extension (width, length and height), segmentation time and 3D-reconstruction. The segmentation time was reduced from 960 minutes with manual to 215 minutes with semi-automatic segmentation. We found the highest variances when segmenting the nasal cavity. For the paranasal sinuses, the manual and semi-automatic volume differences are not significant. Depending on the required segmentation accuracy, both approaches deliver useful results and could be used, e.g., for robot-assisted systems. Nevertheless, both procedures are too time-consuming for everyday surgical workflow. Fully automatic and reproducible segmentation algorithms are needed for segmentation of the paranasal sinuses and nasal cavity.

  8. Automatic classification of blank substrate defects

    NASA Astrophysics Data System (ADS)

    Boettiger, Tom; Buck, Peter; Paninjath, Sankaranarayanan; Pereira, Mark; Ronald, Rob; Rost, Dan; Samir, Bhamidipati

    2014-10-01

    Mask preparation stages are crucial in mask manufacturing, since the mask will later act as a template for a considerable number of dies on a wafer. Defects on the initial blank substrate, and subsequent cleaned and coated substrates, can have a profound impact on the usability of the finished mask. This emphasizes the need for early and accurate identification of blank substrate defects and the risk they pose to the patterned reticle. While Automatic Defect Classification (ADC) is a well-developed technology for inspection and analysis of defects on patterned wafers and masks in the semiconductor industry, ADC for mask blanks is still in the early stages of adoption and development. Calibre ADC is a powerful analysis tool for fast, accurate, consistent and automatic classification of defects on mask blanks. Accurate, automated classification of mask blanks leads to better usability of blanks by enabling defect avoidance technologies during mask writing. Detailed information on blank defects can help to select appropriate job-decks to be written on the mask by defect avoidance tools [1][4][5]. Smart algorithms separate critical defects from the potentially large number of non-critical defects or false defects detected at various stages during mask blank preparation. Mechanisms used by Calibre ADC to identify and characterize defects include defect location and size, signal polarity (dark, bright) in both transmitted and reflected review images, and the separation of defect signals from background noise in defect images. The Calibre ADC engine then uses a decision tree to translate this information into a defect classification code. Using this automated process improves classification accuracy, repeatability and speed, while avoiding the subjectivity of human judgment compared to the alternative of manual defect classification by trained personnel [2].
This paper focuses on the results from the evaluation of Automatic Defect Classification (ADC) product at MP Mask Technology Center (MPMask). The Calibre ADC tool was qualified on production mask blanks against the manual classification. The classification accuracy of ADC is greater than 95% for critical defects with an overall accuracy of 90%. The sensitivity to weak defect signals and locating the defect in the images is a challenge we are resolving. The performance of the tool has been demonstrated on multiple mask types and is ready for deployment in full volume mask manufacturing production flow. Implementation of Calibre ADC is estimated to reduce the misclassification of critical defects by 60-80%.
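    The decision-tree step can be illustrated with a toy rule set. The class codes, features, and thresholds below are invented for illustration and are not Calibre ADC's actual rules:

```python
# Hypothetical sketch of a decision-tree defect classifier. Real ADC engines
# combine many more features; everything here is illustrative only.

def classify_defect(size_um, polarity_transmitted, polarity_reflected):
    """Map a few review-image features to an illustrative class code."""
    if polarity_transmitted == "dark" and polarity_reflected == "dark":
        # opaque in both modes: likely a particle; split on size
        return "PARTICLE" if size_um >= 0.5 else "MICRO_PARTICLE"
    if polarity_transmitted == "bright":
        # extra transmission suggests missing material
        return "PINHOLE"
    # weak or inconsistent signal: likely noise rather than a real defect
    return "FALSE_DEFECT"
```

    Each branch corresponds to one decision node; a production tree would also weigh signal strength against background noise before assigning a code.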

  9. Inexpensive portable drug detector

    NASA Technical Reports Server (NTRS)

    Dimeff, J.; Heimbuch, A. H.; Parker, J. A.

    1977-01-01

    Inexpensive, easy-to-use, self-scanning, self-calibrating, portable unit automatically graphs fluorescence spectrum of drug sample. Device also measures rate of movement through chromatographic column for forensic and medical testing.

  10. Immunochemistry for high-throughput screening of human exhaled breath condensate (EBC) media: implementation of automated Quanterix SIMOA instrumentation.

    PubMed

    Pleil, Joachim D; Angrish, Michelle M; Madden, Michael C

    2015-12-11

    Immunochemistry is an important clinical tool for indicating biological pathways leading towards disease. Standard enzyme-linked immunosorbent assays (ELISA) are labor intensive and lack sensitivity at low-level concentrations. Here we report on emerging technology implementing fully-automated ELISA capable of molecular level detection and describe application to exhaled breath condensate (EBC) samples. The Quanterix SIMOA HD-1 analyzer was evaluated for analytical performance for inflammatory cytokines (IL-6, TNF-α, IL-1β and IL-8). The system was challenged with human EBC representing the most dilute and analytically difficult of the biological media. Calibrations from synthetic samples and spiked EBC showed excellent linearity at trace levels (r² > 0.99). Sensitivities varied by analyte, but were robust from ~0.006 (IL-6) to ~0.01 (TNF-α) pg ml⁻¹. All analytes demonstrated response suppression when diluted with deionized water and so assay buffer diluent was found to be a better choice. Analytical runs required ~45 min setup time for loading samples, reagents, calibrants, etc., after which the instrument performs without further intervention for up to 288 separate samples. Currently, available kits are limited to single-plex analyses and so sample volumes require adjustments. Sample dilutions should be made with assay diluent to avoid response suppression. Automation performs seamlessly and data are automatically analyzed and reported in spreadsheet format. The internal five-parameter logistic (5PL) calibration model should be supplemented with a linear regression spline at the very lowest analyte levels (<1.3 pg ml⁻¹). The implementation of the automated Quanterix platform was successfully demonstrated using EBC, which poses the greatest challenge to ELISA due to limited sample volumes and low protein levels.
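    The suggested hybrid of a 5PL calibration model with a linear spline at the lowest levels can be sketched as follows. The 1.3 pg ml⁻¹ cut comes from the abstract; all curve parameters are illustrative placeholders, not fitted values:

```python
import numpy as np

def logistic5(x, a, b, c, d, g):
    """Five-parameter logistic: d + (a - d) / (1 + (x / c)**b)**g."""
    return d + (a - d) / (1.0 + (x / c)**b)**g

# illustrative curve parameters, not from the paper
p = dict(a=0.01, b=1.2, c=50.0, d=3.0, g=0.8)

def calibrated_response(conc_pg_ml, low_cut=1.3):
    """Hybrid curve: 5PL above low_cut, a linear spline through the origin
    below it (as suggested for the very lowest analyte levels)."""
    conc = np.asarray(conc_pg_ml, dtype=float)
    slope = logistic5(low_cut, **p) / low_cut   # makes the two pieces meet
    return np.where(conc < low_cut, slope * conc, logistic5(conc, **p))
```

    Anchoring the low-end spline to the 5PL value at the cut keeps the combined calibration continuous and monotonic.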

  11. Automatic identification of physical activity types and sedentary behaviors from triaxial accelerometer: laboratory-based calibrations are not enough.

    PubMed

    Bastian, Thomas; Maire, Aurélia; Dugas, Julien; Ataya, Abbas; Villars, Clément; Gris, Florence; Perrin, Emilie; Caritu, Yanis; Doron, Maeva; Blanc, Stéphane; Jallon, Pierre; Simon, Chantal

    2015-03-15

    "Objective" methods to monitor physical activity and sedentary patterns in free-living conditions are necessary to further our understanding of their impacts on health. In recent years, many software solutions capable of automatically identifying activity types from portable accelerometry data have been developed, with promising results in controlled conditions, but virtually no reports on field tests. An automatic classification algorithm initially developed using laboratory-acquired data (59 subjects engaging in a set of 24 standardized activities) to discriminate between 8 activity classes (lying, slouching, sitting, standing, walking, running, and cycling) was applied to data collected in the field. Twenty volunteers equipped with a hip-worn triaxial accelerometer performed at their own pace an activity set that included, among others, activities such as walking the streets, running, cycling, and taking the bus. Performances of the laboratory-calibrated classification algorithm were compared with those of an alternative version of the same model including field-collected data in the learning set. Despite good results in laboratory conditions, the performances of the laboratory-calibrated algorithm (assessed by confusion matrices) decreased for several activities when applied to free-living data. Recalibrating the algorithm with data closer to real-life conditions and from an independent group of subjects proved useful, especially for the detection of sedentary behaviors while in transports, thereby improving the detection of overall sitting (sensitivity: laboratory model = 24.9%; recalibrated model = 95.7%). Automatic identification methods should be developed using data acquired in free-living conditions rather than data from standardized laboratory activity sets only, and their limits carefully tested before they are used in field studies. Copyright © 2015 the American Physiological Society.

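    The per-activity sensitivities reported above fall directly out of a confusion matrix; a minimal sketch with an invented three-class example:

```python
import numpy as np

def per_class_sensitivity(confusion):
    """Recall per true class: diagonal over row sums of the confusion matrix."""
    cm = np.asarray(confusion, dtype=float)
    return np.diag(cm) / cm.sum(axis=1)

# toy matrix (rows = truth: sit, stand, walk; cols = predicted);
# counts are invented to mimic sitting being confused in free-living data
cm = np.array([[24, 70,  6],
               [ 5, 90,  5],
               [ 2,  8, 90]])
sens = per_class_sensitivity(cm)    # sitting recall = 24/100 = 0.24
```

    Comparing such per-class sensitivities between laboratory-calibrated and field-recalibrated models is exactly how the drop (and recovery) in sitting detection shows up.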
  12. Method for in-situ calibration of electrophoretic analysis systems

    DOEpatents

    Liu, Changsheng; Zhao, Hequan

    2005-05-08

    An electrophoretic system having a plurality of separation lanes is provided with an automatic calibration feature in which each lane is separately calibrated. For each lane, the calibration coefficients map a spectrum of received channel intensities onto values reflective of the relative likelihood of each of a plurality of dyes being present. Individual peaks, reflective of the influence of a single dye, are isolated from among the various sets of detected light intensity spectra, and these can be used to both detect the number of dye components present, and also to establish exemplary vectors for the calibration coefficients which may then be clustered and further processed to arrive at a calibration matrix for the system. The system of the present invention thus permits one to use different dye sets to tag DNA nucleotides in samples which migrate in separate lanes, and also allows for in-situ calibration with new, previously unused dye sets.
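    The mapping from channel intensities to per-dye values can be sketched as least-squares unmixing against a calibration matrix (the matrix below is invented for illustration):

```python
import numpy as np

# Illustrative calibration matrix: column j is the channel signature of dye j
# (each dye bleeds into neighboring detection channels).
C = np.array([[0.9, 0.1, 0.0, 0.0],
              [0.1, 0.8, 0.2, 0.0],
              [0.0, 0.1, 0.7, 0.2],
              [0.0, 0.0, 0.1, 0.8]])

true_amounts = np.array([1.0, 0.0, 0.5, 0.0])
signal = C @ true_amounts                    # detected channel intensities

# Unmix: relative amount/likelihood of each dye via least squares
amounts, *_ = np.linalg.lstsq(C, signal, rcond=None)
```

    In-situ calibration amounts to estimating `C` itself from isolated single-dye peaks in each lane, after which every detected spectrum can be unmixed this way.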

  13. Fully automatic multi-atlas segmentation of CTA for partial volume correction in cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Liu, Qingyi; Mohy-ud-Din, Hassan; Boutagy, Nabil E.; Jiang, Mingyan; Ren, Silin; Stendahl, John C.; Sinusas, Albert J.; Liu, Chi

    2017-05-01

    Anatomical-based partial volume correction (PVC) has been shown to improve image quality and quantitative accuracy in cardiac SPECT/CT. However, this method requires manual segmentation of various organs from contrast-enhanced computed tomography angiography (CTA) data. In order to achieve fully automatic CTA segmentation for clinical translation, we investigated the most common multi-atlas segmentation methods. We also modified the multi-atlas segmentation method by introducing a novel label fusion algorithm for multiple organ segmentation to eliminate overlap and gap voxels. To evaluate our proposed automatic segmentation, eight canine 99mTc-labeled red blood cell SPECT/CT datasets that incorporated PVC were analyzed, using the leave-one-out approach. The Dice similarity coefficient of each organ was computed. Compared to the conventional label fusion method, our proposed label fusion method effectively eliminated gaps and overlaps and improved the CTA segmentation accuracy. The anatomical-based PVC of cardiac SPECT images with automatic multi-atlas segmentation provided consistent image quality and quantitative estimation of intramyocardial blood volume, as compared to those derived using manual segmentation. In conclusion, our proposed automatic multi-atlas segmentation method of CTAs is feasible, practical, and facilitates anatomical-based PVC of cardiac SPECT/CT images.
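    One simple way to guarantee overlap- and gap-free multi-organ labels, in the spirit of the label fusion step described above (the paper's actual fusion algorithm is more elaborate), is a voxel-wise argmax over per-label maps:

```python
import numpy as np

def fuse_labels(probability_maps):
    """Fuse per-label probability maps by voxel-wise argmax so every voxel
    receives exactly one label: no overlaps, no gaps. Index 0 = background."""
    stacked = np.stack(probability_maps)     # shape (n_labels, *volume_shape)
    return np.argmax(stacked, axis=0)

# toy 1-D "volume": background plus two organ maps that overlap / leave gaps
bg = np.array([0.8, 0.4, 0.1, 0.1, 0.6])
a  = np.array([0.1, 0.5, 0.6, 0.3, 0.2])
b  = np.array([0.1, 0.1, 0.3, 0.6, 0.2])
labels = fuse_labels([bg, a, b])             # one label per voxel
```

    Because each voxel takes exactly one winning label, the fused segmentation is a partition by construction, which is the property needed for consistent partial volume correction.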

  14. A New Calibration Method for Commercial RGB-D Sensors

    PubMed Central

    Darwish, Walid; Tang, Shenjun; Li, Wenbin; Chen, Wu

    2017-01-01

    Commercial RGB-D sensors such as Kinect and Structure Sensors have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high quality 3D is required, i.e., 3D building models of centimeter-level accuracy, accurate and reliable calibrations of these sensors are required. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including internal calibration parameters for all cameras, the baseline between the infrared and RGB cameras, and the depth error model. When compared with traditional calibration methods, this new model shows a significant improvement in depth precision for both near and far ranges. PMID:28538695

  15. Machine-Aided Translation: From Terminology Banks to Interactive Translation Systems.

    ERIC Educational Resources Information Center

    Greenfield, Concetta C.; Serain, Daniel

    The rapid growth of the need for technical translations in recent years has led specialists to utilize computer technology to improve the efficiency and quality of translation. The two approaches considered were automatic translation and terminology banks. Since the results of fully automatic translation were considered unsatisfactory by various…

  16. A Flexible and Configurable Architecture for Automatic Control Remote Laboratories

    ERIC Educational Resources Information Center

    Kalúz, Martin; García-Zubía, Javier; Fikar, Miroslav; Cirka, Luboš

    2015-01-01

    In this paper, we propose a novel approach in hardware and software architecture design for implementation of remote laboratories for automatic control. In our contribution, we show the solution with flexible connectivity at back-end, providing features of multipurpose usage with different types of experimental devices, and fully configurable…

  17. Ultramap v3 - a Revolution in Aerial Photogrammetry

    NASA Astrophysics Data System (ADS)

    Reitinger, B.; Sormann, M.; Zebedin, L.; Schachinger, B.; Hoefler, M.; Tomasi, R.; Lamperter, M.; Gruber, B.; Schiester, G.; Kobald, M.; Unger, M.; Klaus, A.; Bernoegger, S.; Karner, K.; Wiechert, A.; Ponticelli, M.; Gruber, M.

    2012-07-01

    In recent years, Microsoft has driven innovation in the aerial photogrammetry community. Besides the market leading camera technology, UltraMap has grown into an outstanding photogrammetric workflow system which enables users to effectively work with large digital aerial image blocks in a highly automated way. The best example is the project-based color balancing approach which automatically balances images to a homogeneous block. UltraMap V3 continues this innovation and offers a revolution in terms of ortho processing. A fully automated dense matching module strives for high precision digital surface models (DSMs) which are calculated either on CPUs or on GPUs using a distributed processing framework. By applying constrained filtering algorithms, a digital terrain model can be derived which in turn can be used for fully automated traditional ortho texturing. With knowledge of the underlying geometry, seamlines can be generated automatically by applying cost functions in order to minimize visually disturbing artifacts. By exploiting the generated DSM information, a DSMOrtho is created using the balanced input images. Again, seamlines are detected automatically, resulting in an automatically balanced ortho mosaic. Interactive block-based radiometric adjustments lead to a high quality ortho product based on UltraCam imagery. UltraMap v3 is the first fully integrated and interactive solution for making the best use of UltraCam images to deliver DSM and ortho imagery.

  18. Fully automatic detection and visualization of patient specific coronary supply regions

    NASA Astrophysics Data System (ADS)

    Fritz, Dominik; Wiedemann, Alexander; Dillmann, Ruediger; Scheuering, Michael

    2008-03-01

    Coronary territory maps, which associate myocardial regions with the corresponding coronary arteries that supply them, are a common visualization technique to assist the physician in the diagnosis of coronary artery disease. However, the commonly used visualization is based on the AHA-17-segment model, which is an empirical population-based model. Therefore, it does not necessarily reflect the often highly individual coronary anatomy of a specific patient. In this paper we introduce a novel fully automatic approach to compute the patient-individual coronary supply regions in CTA datasets. This approach is divided into three consecutive steps. First, the aorta is fully automatically located in the dataset with a combination of a Hough transform and a cylindrical model matching approach. Given the location of the aorta, a segmentation and skeletonization of the coronary tree is triggered. In the next step, the three main branches (LAD, LCX and RCX) are automatically labeled, based on the knowledge of the pose of the aorta and the left ventricle. In the last step the labeled coronary tree is projected onto the left ventricular surface, which can afterward be subdivided into the coronary supply regions, based on a Voronoi transform. The resulting supply regions can be shown either in 3D on the epicardiac surface of the left ventricle, or as a subdivision of a polarmap.
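    The Voronoi-based assignment of supply regions can be sketched as nearest-branch labeling of surface points (toy coordinates and labels, purely illustrative):

```python
import numpy as np

def supply_regions(surface_points, branch_points, branch_labels):
    """Assign each surface point the label of its nearest coronary-branch
    point: a discrete Voronoi partition of the ventricular surface."""
    sp = np.asarray(surface_points, float)[:, None, :]   # (n_surf, 1, 3)
    bp = np.asarray(branch_points, float)[None, :, :]    # (1, n_branch, 3)
    nearest = np.argmin(np.linalg.norm(sp - bp, axis=2), axis=1)
    return np.asarray(branch_labels)[nearest]

# toy example: two branch points ("LAD", "LCX") and four surface points
branches = [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]]
labels = np.array(["LAD", "LCX"])
pts = [[1, 0, 0], [2, 1, 0], [9, 0, 0], [8, 1, 1]]
regions = supply_regions(pts, branches, labels)
```

    In practice the branch points are the skeletonized centerline samples of each labeled artery, so the partition follows the patient's own coronary anatomy rather than a population template.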

  19. Fully automatic measurements of axial vertebral rotation for assessment of spinal deformity in idiopathic scoliosis

    NASA Astrophysics Data System (ADS)

    Forsberg, Daniel; Lundström, Claes; Andersson, Mats; Vavruch, Ludvig; Tropp, Hans; Knutsson, Hans

    2013-03-01

    Reliable measurements of spinal deformities in idiopathic scoliosis are vital, since they are used for assessing the degree of scoliosis, deciding upon treatment and monitoring the progression of the disease. However, commonly used two-dimensional methods (e.g. the Cobb angle) do not fully capture the three-dimensional deformity at hand in scoliosis, of which axial vertebral rotation (AVR) is considered to be of great importance. There are manual methods for measuring the AVR, but they are often time-consuming and associated with high intra- and inter-observer variability. In this paper, we present a fully automatic method for estimating the AVR in images from computed tomography. The proposed method is evaluated on four scoliotic patients with 17 vertebrae each and compared with manual measurements performed by three observers using the standard method by Aaro-Dahlborn. The comparison shows that the difference in measured AVR between automatic and manual measurements is on the same level as the inter-observer difference. This is further supported by a high intraclass correlation coefficient (0.971-0.979), obtained when comparing the automatic measurements with the manual measurements of each observer. Hence, the provided results and the computational performance, only requiring approximately 10 to 15 s for processing an entire volume, demonstrate the potential clinical value of the proposed method.
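    An intraclass correlation of the kind quoted above can be computed from paired measurements. Below is a sketch of the simple one-way random-effects ICC(1,1); the paper does not state which ICC variant was used, so this form is an assumption:

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1); ratings shape (subjects, raters)."""
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    grand = r.mean()
    # between-subject and within-subject mean squares
    msb = k * ((r.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    msw = ((r - r.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# toy AVR angles (degrees): automatic vs. manual in perfect agreement
angles = np.array([[5.0, 5.0], [12.0, 12.0], [20.0, 20.0], [3.0, 3.0]])
agreement = icc_oneway(angles)   # perfect agreement gives ICC = 1
```

    Values near 1, as in the paper's 0.971-0.979 range, indicate that the automatic and manual measurements are nearly interchangeable.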

  20. HIDE & SEEK: End-to-end packages to simulate and process radio survey data

    NASA Astrophysics Data System (ADS)

    Akeret, J.; Seehars, S.; Chang, C.; Monstein, C.; Amara, A.; Refregier, A.

    2017-01-01

    As several large single-dish radio surveys begin operation within the coming decade, a wealth of radio data will become available and provide a new window to the Universe. In order to fully exploit the potential of these datasets, it is important to understand the systematic effects associated with the instrument and the analysis pipeline. A common approach to tackle this is to forward-model the entire system-from the hardware to the analysis of the data products. For this purpose, we introduce two newly developed, open-source Python packages: the HI Data Emulator (HIDE) and the Signal Extraction and Emission Kartographer (SEEK) for simulating and processing single-dish radio survey data. HIDE forward-models the process of collecting astronomical radio signals in a single-dish radio telescope instrument and outputs pixel-level time-ordered-data. SEEK processes the time-ordered-data, removes artifacts from Radio Frequency Interference (RFI), automatically applies flux calibration, and aims to recover the astronomical radio signal. The two packages can be used separately or together depending on the application. Their modular and flexible nature allows easy adaptation to other instruments and datasets. We describe the basic architecture of the two packages and examine in detail the noise and RFI modeling in HIDE, as well as the implementation of gain calibration and RFI mitigation in SEEK. We then apply HIDE &SEEK to forward-model a Galactic survey in the frequency range 990-1260 MHz based on data taken at the Bleien Observatory. For this survey, we expect to cover 70% of the full sky and achieve a median signal-to-noise ratio of approximately 5-6 in the cleanest channels including systematic uncertainties. However, we also point out the potential challenges of high RFI contamination and baseline removal when examining the early data from the Bleien Observatory. 
The fully documented HIDE & SEEK packages are available at http://hideseek.phys.ethz.ch/ and are published under the GPLv3 license on GitHub.

  1. Nondestructive Vibratory Testing and Evaluation Procedure for Military Roads and Streets.

    DTIC Science & Technology

    1984-07-01

    the addition of an automatic data acquisition system to the instrumentation control panel. This system, presently available, would automatically ...the data used to further develop and define the basic correlations. c. Consideration be given to installing an automatic data acquisition system to...glows red any time the force generator is not fully elevated. Depressing this switch will stop the automatic cycle at any point and clear all system

  2. Colorimetric calibration of wound photography with off-the-shelf devices

    NASA Astrophysics Data System (ADS)

    Bala, Subhankar; Sirazitdinova, Ekaterina; Deserno, Thomas M.

    2017-03-01

    Digital cameras are frequently used for photographic documentation in the medical sciences. However, color reproduction of the same object varies with illumination and lighting conditions. This variation is problematic when the images are used for segmentation and measurements based on color thresholds. In this paper, motivated by photographic follow-up of chronic wounds, we assess the impact of (i) gamma correction, (ii) white balancing, (iii) background unification, and (iv) reference card-based color correction. Automatic gamma correction and white balancing are applied to support the calibration procedure, where gamma correction is a nonlinear color transform. For unevenly illuminated images, non-uniform illumination correction is applied. In the last step, we apply colorimetric calibration using a reference color card of 24 patches with known colors. A lattice detection algorithm is used for locating the card. The least squares algorithm is applied for affine color calibration in the RGB model. We tested the algorithm on images under seven different types of illumination, with and without flash, using three different off-the-shelf cameras including smartphones. We analyzed the spread of the resulting color values of selected color patches before and after applying the calibration, and additionally checked the individual contribution of each step of the calibration process. Using all steps, we achieved up to an 81% reduction in the standard deviation of color patch values in the resulting images compared to the originals. This supports manual as well as automatic quantitative wound assessment with off-the-shelf devices.
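    The reference-card step in this record reduces to a small least-squares problem: map each measured patch color to its known reference color with a single affine transform. The sketch below is illustrative only; the function names and synthetic patch data are assumptions, not the authors' implementation.

```python
import numpy as np

def fit_affine_color_correction(measured, reference):
    """Least-squares affine map in RGB: reference ~ measured @ A.T + b.

    measured, reference: (N, 3) arrays of RGB values for N card patches.
    Returns A (3x3 matrix) and b (length-3 offset).
    """
    n = measured.shape[0]
    # Augment with a constant column so the offset b is estimated jointly.
    X = np.hstack([measured, np.ones((n, 1))])
    M, *_ = np.linalg.lstsq(X, reference, rcond=None)   # M is (4, 3)
    return M[:3].T, M[3]

# Synthetic check: patches darkened per channel and shifted by a known offset.
rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 255.0, size=(24, 3))             # 24 "known" patch colors
A_true = np.diag([0.8, 0.9, 0.7])
b_true = np.array([5.0, -3.0, 10.0])
measured = (ref - b_true) @ np.linalg.inv(A_true).T     # what the camera "saw"
A, b = fit_affine_color_correction(measured, ref)
corrected = measured @ A.T + b                          # apply to any pixels
```

    With 24 patches and only 12 unknowns the system is overdetermined, which is what makes the fit robust to noise in individual patches.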

  3. Aero-Thermal Calibration of the NASA Glenn Icing Research Tunnel (2004 and 2005 Tests)

    NASA Technical Reports Server (NTRS)

    Arrington, E. Allen; Pastor, Christine M.; Gonsalez, Jose C.; Curry, Monroe R., III

    2010-01-01

    A full aero-thermal calibration of the NASA Glenn Icing Research Tunnel (IRT) was completed in 2004 following the replacement of the inlet guide vanes upstream of the tunnel drive system and improvements to the facility's total temperature instrumentation. This calibration test provided data used to fully document the aero-thermal flow quality in the IRT test section and to construct calibration curves for the operation of the IRT. The 2004 test was also the first to use the 2-D RTD array, an improved total temperature calibration measurement platform.

  4. Line fiducial material and thickness considerations for ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Ameri, Golafsoun; McLeod, A. J.; Baxter, John S. H.; Chen, Elvis C. S.; Peters, Terry M.

    2015-03-01

    Ultrasound calibration is a necessary procedure in many image-guided interventions, relating the position of tools and anatomical structures in the ultrasound image to a common coordinate system. This is a necessary component of augmented reality environments in image-guided interventions as it allows for a 3D visualization where other surgical tools outside the imaging plane can be found. Accuracy of ultrasound calibration fundamentally affects the total accuracy of this interventional guidance system. Many ultrasound calibration procedures have been proposed based on a variety of phantom materials and geometries. These differences lead to differences in representation of the phantom on the ultrasound image which subsequently affect the ability to accurately and automatically segment the phantom. For example, taut wires are commonly used as line fiducials in ultrasound calibration. However, at large depths or oblique angles, the fiducials appear blurred and smeared in ultrasound images making it hard to localize their cross-section with the ultrasound image plane. Intuitively, larger diameter phantoms with lower echogenicity are more accurately segmented in ultrasound images in comparison to highly reflective thin phantoms. In this work, an evaluation of a variety of calibration phantoms with different geometrical and material properties for the phantomless calibration procedure was performed. The phantoms used in this study include braided wire, plastic straws, and polyvinyl alcohol cryogel tubes with different diameters. Conventional B-mode and synthetic aperture images of the phantoms at different positions were obtained. The phantoms were automatically segmented from the ultrasound images using an ellipse fitting algorithm, the centroid of which is subsequently used as a fiducial for calibration. Calibration accuracy was evaluated for these procedures based on the leave-one-out target registration error. 
It was shown that larger-diameter phantoms with lower echogenicity are more accurately segmented than highly reflective thin phantoms. This improvement in segmentation accuracy leads to a lower fiducial localization error, which in turn yields a lower target registration error. These results can inform the choice of phantom and the feasibility of different calibration procedures in the context of image-guided interventions.
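    The centroid-of-an-ellipse-fit idea used for fiducial localization can be sketched with a direct conic fit: boundary points are fit to a general conic by SVD, and the fiducial is taken at the conic's center, where the gradient vanishes. This is a generic illustration on assumed synthetic data, not the authors' segmentation pipeline.

```python
import numpy as np

def ellipse_center(x, y):
    """Fit a general conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to the
    boundary points (least squares via SVD) and return the conic's center,
    i.e. the point where the conic's gradient vanishes:
        [2a  b] [cx]   [-d]
        [ b 2c] [cy] = [-e]
    """
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    a, b, c, d, e, f = Vt[-1]          # coefficients of the best-fit conic
    cx, cy = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
    return cx, cy

# Synthetic boundary of a rotated ellipse centered at (3.0, -1.5).
t = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
x = 3.0 + 4.0 * np.cos(t) * np.cos(0.3) - 2.0 * np.sin(t) * np.sin(0.3)
y = -1.5 + 4.0 * np.cos(t) * np.sin(0.3) + 2.0 * np.sin(t) * np.cos(0.3)
cx, cy = ellipse_center(x, y)
```

    The center solve is invariant to the overall scale and sign of the conic coefficients, which is why the raw SVD null vector can be used directly.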

  5. Development of an automated ultrasonic testing system

    NASA Astrophysics Data System (ADS)

    Shuxiang, Jiao; Wong, Brian Stephen

    2005-04-01

    Non-destructive testing is necessary in areas where defects in structures emerge over time due to wear and tear, and structural integrity must be maintained for continued use. However, manual testing has many limitations: high training cost, a long training procedure and, worse, inconsistent test results. A prime objective of this project is to develop an automatic non-destructive testing system for a shaft of the wheel axle of a railway carriage. Various methods, such as neural networks, pattern recognition methods and knowledge-based systems, can be used for this artificial intelligence problem. In this paper, a statistical pattern recognition approach, the classification tree, is applied. Before feature selection, a thorough study of the ultrasonic signals produced was carried out. Based on this analysis, three signal processing methods were developed to enhance the ultrasonic signals: cross-correlation, zero-phase filtering and averaging. The aim of this step is to reduce noise and make the signal characteristics more distinguishable. Four features are selected: (1) the autoregressive model coefficients; (2) standard deviation; (3) Pearson correlation; and (4) dispersion uniformity degree. A classification tree is then created and applied to recognize the peak positions and amplitudes. A local-maximum search is carried out before feature computation, which greatly reduces computation time in real-time testing. Based on this algorithm, a software package called SOFRA was developed to recognize the peaks, calibrate automatically and test a simulated shaft automatically. Both the automatic calibration procedure and the automatic shaft-testing procedure are developed.
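    Of the three enhancement steps named in this record, zero-phase filtering is the one where the implementation detail matters most here: a causal smoother delays echo peaks, which would bias the measured peak positions. A minimal sketch with an assumed moving-average filter; the paper's actual filter design is not specified here.

```python
import numpy as np

def causal_moving_average(x, k):
    """k-tap causal moving average; delays a symmetric peak by (k - 1) / 2."""
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="full")[: len(x)]

def zero_phase_moving_average(x, k):
    """Run the causal filter forward, then again over the reversed result.
    The second pass cancels the first pass's delay, so the signal is smoothed
    without shifting peak positions -- the property that matters when the
    peak location encodes the defect depth.
    """
    fwd = causal_moving_average(x, k)
    return causal_moving_average(fwd[::-1], k)[::-1]

# A synthetic echo: Gaussian peak at sample 100.
sig = np.exp(-0.5 * ((np.arange(200) - 100) / 4.0) ** 2)
delayed = causal_moving_average(sig, 7)        # peak moves to sample 103
smoothed = zero_phase_moving_average(sig, 7)   # peak stays at sample 100
```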

  6. Self-calibrating multiplexer circuit

    DOEpatents

    Wahl, Chris P.

    1997-01-01

    A time domain multiplexer system with automatic determination of acceptable multiplexer output limits, error determination, or correction comprises a time domain multiplexer, a computer, a constant current source capable of at least three distinct current levels, and two series resistances employed for calibration and testing. A two-point linear calibration curve defining acceptable multiplexer voltage limits may be constructed by the computer by measuring the multiplexer's output voltage for very accurately known input signals developed from predetermined current levels across the series resistances. Drift in the multiplexer may be detected by the computer when the output voltage limits expected during normal operation are exceeded, or when the relationship defined by the calibration curve is invalidated.
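    The two-point calibration and drift check described in this patent record can be expressed in a few lines. The sketch below is a generic illustration; the names and numeric values are hypothetical, not taken from the patent.

```python
def two_point_calibration(v1, i1, v2, i2, r):
    """Build a linear calibration curve v_out = gain * v_in + offset from the
    multiplexer's measured outputs v1, v2 at two accurately known source
    currents i1, i2 driven across the series resistance r (so v_in = i * r).
    """
    x1, x2 = i1 * r, i2 * r                 # true input voltages, by Ohm's law
    gain = (v2 - v1) / (x2 - x1)
    offset = v1 - gain * x1
    return gain, offset

def within_limits(v_meas, v_in, gain, offset, tol):
    """Drift check: flag outputs that stray from the calibrated prediction."""
    return abs(v_meas - (gain * v_in + offset)) <= tol

# Hypothetical channel: 1 mA and 3 mA across 1 kOhm give 2.05 V and 4.05 V.
gain, offset = two_point_calibration(2.05, 1e-3, 4.05, 3e-3, 1000.0)
```

    A third current level, as the record mentions, allows a consistency check of the assumed linearity rather than just fitting it.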

  7. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier to the use of video analytics. Automating the calibration allows for a short configuration time and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large-scale surveillance systems. We present an autocalibration method based entirely on pedestrian detections in surveillance video from multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first is intra-camera geometry estimation, which yields estimates of the tilt angle, focal length and camera height, important for the conversion from pixels to meters and vice versa. The second is inter-camera topology inference, which yields an estimate of the distance between cameras, important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
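    The pixel-to-meter conversion that the first component enables can be illustrated with a flat-ground pinhole model. This is a generic geometric sketch under assumed parameter values, not the paper's estimation procedure.

```python
import math

def ground_distance(v, cy, f_px, tilt_rad, cam_height_m):
    """Horizontal distance (m) to the ground point imaged at pixel row v, for a
    pinhole camera at height cam_height_m, tilted tilt_rad below the horizon,
    with focal length f_px (pixels) and principal-point row cy. Larger v
    (lower in the image) views ground closer to the camera.
    """
    depression = tilt_rad + math.atan((v - cy) / f_px)  # ray angle below horizon
    if depression <= 0.0:
        raise ValueError("ray does not intersect the ground plane")
    return cam_height_m / math.tan(depression)

# Assumed example: camera 4 m up, tilted 20 degrees down, f = 1000 px, cy = 540.
d_center = ground_distance(540.0, 540.0, 1000.0, math.radians(20.0), 4.0)
d_lower = ground_distance(840.0, 540.0, 1000.0, math.radians(20.0), 4.0)
```

    Inverting this relation per camera is what makes statistics of detected pedestrians usable for estimating tilt, focal length and height.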

  8. Robustness of near-infrared calibration models for the prediction of milk constituents during the milking process.

    PubMed

    Melfsen, Andreas; Hartung, Eberhard; Haeussermann, Angelika

    2013-02-01

    The robustness of in-line raw milk analysis with near-infrared spectroscopy (NIRS) was tested with respect to the prediction of the raw-milk constituents fat, protein and lactose. Near-infrared (NIR) spectra of raw milk (n = 3119) were acquired on three different farms during the milking process of 354 milkings over a period of six months. Calibration models were calculated for: a random data set of each farm (fully random internal calibration); the first two thirds of the visits per farm (internal calibration); the whole datasets of two of the three farms (external calibration); and combinations of external and internal datasets. Validation was done either on the remaining data set per farm (internal validation) or on data of the remaining farms (external validation). Excellent calibration results were obtained when fully randomised internal calibration sets were used for milk analysis. In this case, RPD values of around ten, five and three were achieved for the prediction of fat, protein and lactose content, respectively. Farm-internal calibrations achieved much poorer prediction results, especially for protein and lactose, with RPD values of around two and one, respectively. Prediction accuracy improved when validation was done on spectra of an external farm, mainly due to the higher sample variation in external calibration sets in terms of feeding diets and individual cow effects. The results showed that further improvements were achieved when additional farm information was added to the calibration set. One of the main requirements for a robust calibration model is the ability to predict milk constituents in unknown future milk samples. The robustness and quality of prediction increase with increasing variation of, e.g., feeding and cow-individual milk composition in the calibration model.
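    The RPD values quoted in this record are the ratio of the reference data's standard deviation to the standard error of prediction. Definitions vary slightly in the literature; the sketch below assumes the plain RMSEP form.

```python
import math

def rpd(reference, predicted):
    """Ratio of performance to deviation: SD of the reference values divided
    by the standard error of prediction (here simply the RMSEP). Values near
    one mean the model predicts no better than the reference mean; values
    around three or more are usually considered usable.
    """
    n = len(reference)
    mean_ref = sum(reference) / n
    sd = math.sqrt(sum((r - mean_ref) ** 2 for r in reference) / (n - 1))
    rmsep = math.sqrt(sum((r - p) ** 2 for r, p in zip(reference, predicted)) / n)
    return sd / rmsep
```

    For example, predictions uniformly off by 0.1 units on references spread over several units give a large RPD, because the error is small relative to the natural spread of the data.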

  9. Evaluation of “Autotune” calibration against manual calibration of building energy models

    DOE PAGES

    Chaudhary, Gaurav; New, Joshua; Sanyal, Jibonananda; ...

    2016-08-26

    Our paper demonstrates the application of Autotune, a methodology aimed at automatically producing calibrated building energy models using measured data, in two case studies. In the first case, a building model is de-tuned by deliberately injecting faults into more than 60 parameters. This model was then calibrated using Autotune and its accuracy with respect to the original model was evaluated in terms of the industry-standard normalized mean bias error and coefficient of variation of root mean squared error metrics set forth in ASHRAE Guideline 14. In addition to whole-building energy consumption, outputs including lighting, plug load profiles, HVAC energy consumption, zone temperatures, and other variables were analyzed. In the second case, Autotune calibration is compared directly to experts' manual calibration of an emulated-occupancy, full-size residential building with comparable calibration results in much less time. Lastly, our paper concludes with a discussion of the key strengths and weaknesses of auto-calibration approaches.
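    The two Guideline 14 metrics named in this record are straightforward to compute. A minimal sketch, using the common n - p normalization with p = 1; Guideline 14 also sets acceptance thresholds for these metrics that are not encoded here.

```python
import math

def nmbe(measured, simulated, p=1):
    """Normalized mean bias error (%), in the form used by ASHRAE Guideline 14:
    the summed bias (measured - simulated) over (n - p) times the measured mean.
    """
    n = len(measured)
    mean_m = sum(measured) / n
    bias = sum(m - s for m, s in zip(measured, simulated))
    return 100.0 * bias / ((n - p) * mean_m)

def cv_rmse(measured, simulated, p=1):
    """Coefficient of variation of the RMSE (%), per ASHRAE Guideline 14."""
    n = len(measured)
    mean_m = sum(measured) / n
    rmse = math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / (n - p))
    return 100.0 * rmse / mean_m
```

    Note that NMBE can be near zero even when the fit is poor, because over- and under-predictions cancel; this is why the two metrics are always reported together.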

  10. Initial clinical trial of a closed loop, fully automatic intra-aortic balloon pump.

    PubMed

    Kantrowitz, A; Freed, P S; Cardona, R R; Gage, K; Marinescu, G N; Westveld, A H; Litch, B; Suzuki, A; Hayakawa, H; Takano, T

    1992-01-01

    A new generation, closed loop, fully automatic intraaortic balloon pump (CL-IABP) system continuously optimizes diastolic augmentation by adjusting balloon pump parameters beat by beat without operator intervention. In dogs in sinus rhythm and with experimentally induced arrhythmias, the new CL-IABP system provided safe, effective augmentation. To investigate the system's suitability for clinical use, 10 patients meeting standard indications for IABP were studied. The patients were pumped by the fully automatic IABP system for an average of 20 hr (range, 1-48 hr). At start-up, the system optimized pumping parameters within 7-20 sec. Evaluation of 186 recordings made at hourly intervals showed that inflation began within 20 msec of the dicrotic notch 99% of the time. In 100% of the recordings, deflation straddled the first half of ventricular ejection. Peak pressure across the balloon membrane averaged 55 mmHg and, in no case, exceeded 100 mmHg. Examination of the data showed that as soon as the system was actuated it provided consistently beneficial diastolic augmentation without any further operator intervention. Eight patients improved and two died (one of irreversible cardiogenic shock and one of ischemic cardiomyopathy). No complications were attributable to the investigational aspects of the system. A fully automated IABP is feasible in the clinical setting, and it may have advantages relative to current generation IABP systems.

  11. Automatic Assessment of 3D Modeling Exams

    ERIC Educational Resources Information Center

    Sanna, A.; Lamberti, F.; Paravati, G.; Demartini, C.

    2012-01-01

    Computer-based assessment of exams provides teachers and students with two main benefits: fairness and effectiveness in the evaluation process. This paper proposes a fully automatic evaluation tool for the Graphic and Virtual Design (GVD) curriculum at the First School of Architecture of the Politecnico di Torino, Italy. In particular, the tool is…

  12. The Use of Opto-Electronics in Viscometry.

    ERIC Educational Resources Information Center

    Mazza, R. J.; Washbourn, D. H.

    1982-01-01

    Describes a semi-automatic viscometer which incorporates a microprocessor system and uses optoelectronics to detect flow of liquid through the capillary, flow time being displayed on a timer with accuracy of 0.01 second. The system could be made fully automatic with an additional microprocessor circuit and inclusion of a pump. (Author/JN)

  13. INFORMATION STORAGE AND RETRIEVAL, REPORTS ON EVALUATION PROCEDURES AND RESULTS 1965-1967.

    ERIC Educational Resources Information Center

    SALTON, GERALD

    A detailed analysis of the retrieval evaluation results obtained with the automatic SMART document retrieval system for document collections in the fields of aerodynamics, computer science, and documentation is given in this report. The various components of fully automatic document retrieval systems are discussed in detail, including the forms of…

  14. Atlas-based fuzzy connectedness segmentation and intensity nonuniformity correction applied to brain MRI.

    PubMed

    Zhou, Yongxin; Bai, Jing

    2007-01-01

    A framework that combines atlas registration, fuzzy connectedness (FC) segmentation, and parametric bias field correction (PABIC) is proposed for the automatic segmentation of brain magnetic resonance imaging (MRI). First, the atlas is registered onto the MRI to initialize the following FC segmentation. Original techniques are proposed to estimate necessary initial parameters of FC segmentation. Further, the result of the FC segmentation is utilized to initialize a following PABIC algorithm. Finally, we re-apply the FC technique on the PABIC corrected MRI to get the final segmentation. Thus, we avoid expert human intervention and provide a fully automatic method for brain MRI segmentation. Experiments on both simulated and real MRI images demonstrate the validity of the method, as well as the limitation of the method. Being a fully automatic method, it is expected to find wide applications, such as three-dimensional visualization, radiation therapy planning, and medical database construction.

  15. Automatic bone detection and soft tissue aware ultrasound-CT registration for computer-aided orthopedic surgery.

    PubMed

    Wein, Wolfgang; Karamalis, Athanasios; Baumgartner, Adrian; Navab, Nassir

    2015-06-01

    The transfer of preoperative CT data into the tracking system coordinates within an operating room is of high interest for computer-aided orthopedic surgery. In this work, we introduce a solution for intra-operative ultrasound-CT registration of bones. We have developed methods for fully automatic real-time bone detection in ultrasound images and global automatic registration to CT. The bone detection algorithm uses a novel bone-specific feature descriptor and was thoroughly evaluated on both in-vivo and ex-vivo data. A global optimization strategy aligns the bone surface, followed by a soft tissue aware intensity-based registration to provide higher local registration accuracy. We evaluated the system on femur, tibia and fibula anatomy in a cadaver study with human legs, where magnetically tracked bone markers were implanted to yield ground truth information. An overall median system error of 3.7 mm was achieved on 11 datasets. Global and fully automatic registration to CT of bones acquired with ultrasound is feasible, with bone detection and tracking operating in real time for immediate feedback to the surgeon.

  16. Robot calibration with a photogrammetric on-line system using reseau scanning cameras

    NASA Astrophysics Data System (ADS)

    Diewald, Bernd; Godding, Robert; Henrich, Andreas

    1994-03-01

    Testing and calibration of industrial robots is becoming increasingly important for manufacturers and users of such systems. Demanding applications involving off-line programming techniques, or the use of robots as measuring machines, are impossible without a preceding robot calibration. At the LPA an efficient calibration technique has been developed. Instead of modeling the kinematic behavior of a robot, the new method describes the pose deviations within a user-defined section of the robot's working space. High-precision determination of 3D coordinates of defined path positions is necessary for calibration and can be done by digital photogrammetric systems. For the calibration of a robot at the LPA, a digital photogrammetric system with three Rollei Reseau Scanning Cameras was used. This system allows automatic measurement of a large number of robot poses with high accuracy.

  17. Onsite Calibration of a Precision IPRT Based on Gallium and Gallium-Based Small-Size Eutectic Points

    NASA Astrophysics Data System (ADS)

    Sun, Jianping; Hao, Xiaopeng; Zeng, Fanchao; Zhang, Lin; Fang, Xinyun

    2017-04-01

    Onsite thermometer calibration with temperature scale transfer technology based on fixed points can effectively improve the level of industrial temperature measurement and calibration. The present work performs an onsite calibration of a precision industrial platinum resistance thermometer near room temperature. The calibration is based on a series of small-size eutectic points, including Ga-In (15.7°C), Ga-Sn (20.5°C), Ga-Zn (25.2°C), and a Ga fixed point (29.7°C), developed in a portable multi-point automatic realization apparatus. The temperature plateaus of the Ga-In, Ga-Sn, and Ga-Zn eutectic points and the Ga fixed point last for longer than 2 h, and their reproducibility was better than 5 mK. The device is suitable for calibrating non-detachable temperature sensors in advanced environmental laboratories and industrial fields.

  18. Application of Artificial Intelligence to Improve Aircraft Survivability.

    DTIC Science & Technology

    1985-12-01

    may be as smooth and effective as possible. 3. Fully Automatic Digital Engine Control (FADEC). Under development at the Naval Weapons Center, a major...goal of the FADEC program is to significantly reduce engine vulnerability by fully automating the regulation of engine controls. Given a thrust

  19. Temporal Analysis and Automatic Calibration of the Velodyne HDL-32E LiDAR System

    NASA Astrophysics Data System (ADS)

    Chan, T. O.; Lichti, D. D.; Belton, D.

    2013-10-01

    At the end of the first quarter of 2012, more than 600 Velodyne LiDAR systems had been sold worldwide for various robotic and high-accuracy survey applications. The ultra-compact Velodyne HDL-32E LiDAR has become a predominant sensor for many applications that require lower sensor size/weight and cost. For high-accuracy applications, cost-effective calibration methods with minimal manual intervention are always desired by users. However, calibration is complicated by the Velodyne LiDAR's narrow vertical field of view and the highly time-variant nature of its measurements. In this paper, the temporal stability of the HDL-32E is first analysed as the motivation for developing a new, automated calibration method. This is followed by a detailed description of the calibration method, which is driven by a novel segmentation method for extracting vertical cylindrical features from the Velodyne point clouds. The proposed segmentation method exploits the slice-like nature of Velodyne point clouds and first decomposes them into 2D layers. The layers are treated as 2D images and processed with the Generalized Hough Transform, which extracts the points distributed in circular patterns within each layer. Subsequently, the vertical cylindrical features can be readily extracted from the whole point cloud based on the previously extracted points. These points are passed to the calibration, which estimates the cylinder parameters and the LiDAR's additional parameters simultaneously by constraining the segmented points to fit the cylindrical geometric model such that the weighted sum of the adjustment residuals is minimized. The proposed calibration is highly automatic, allowing end users to obtain the time-variant additional parameters instantly and frequently whenever vertical cylindrical features are present in the scene.
The methods were verified with two different real datasets, and the results suggest that up to 78.43% accuracy improvement for the HDL-32E can be achieved using the proposed calibration method.
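    The per-layer circle extraction at the heart of the segmentation can be illustrated with a fixed-radius Hough vote, a simplified stand-in for the Generalized Hough Transform; the grid, radius, and data below are assumptions for illustration only.

```python
import numpy as np

def hough_circle_center(points, radius, grid_min, grid_max, cell):
    """Estimate the center of a circle of known radius by Hough-style voting:
    each 2D point votes for every candidate center lying `radius` away from
    it, and the accumulator peak is taken as the center.
    """
    bins = int(round((grid_max - grid_min) / cell))
    acc = np.zeros((bins, bins))
    thetas = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
    for x, y in points:
        cx = x - radius * np.cos(thetas)       # candidate centers for this point
        cy = y - radius * np.sin(thetas)
        ix = np.floor((cx - grid_min) / cell).astype(int)
        iy = np.floor((cy - grid_min) / cell).astype(int)
        ok = (ix >= 0) & (ix < bins) & (iy >= 0) & (iy < bins)
        np.add.at(acc, (ix[ok], iy[ok]), 1)
    px, py = np.unravel_index(np.argmax(acc), acc.shape)
    return grid_min + (px + 0.5) * cell, grid_min + (py + 0.5) * cell

# One horizontal slice through a vertical cylinder of radius 0.3 m centered
# at (1.0, -0.5): the layer's points fall on a circle.
t = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
pts = np.column_stack([1.0 + 0.3 * np.cos(t), -0.5 + 0.3 * np.sin(t)])
cx, cy = hough_circle_center(pts, 0.3, -2.025, 1.975, 0.05)
```

    In the actual method the radius is not known in advance and the Generalized Hough Transform handles arbitrary shapes; fixing the radius keeps the accumulator two-dimensional for this sketch.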

  20. When to Make Mountains out of Molehills: The Pros and Cons of Simple and Complex Model Calibration Procedures

    NASA Astrophysics Data System (ADS)

    Smith, K. A.; Barker, L. J.; Harrigan, S.; Prudhomme, C.; Hannaford, J.; Tanguy, M.; Parry, S.

    2017-12-01

    Earth and environmental models are relied upon to investigate system responses that cannot otherwise be examined. In simulating physical processes, models have adjustable parameters which may, or may not, have a physical meaning. Determining the values to assign to these parameters is an enduring challenge for earth and environmental modellers. Selecting different error metrics by which the model's results are compared to observations will lead to different sets of calibrated model parameters, and thus different model results. Furthermore, models may exhibit `equifinal' behaviour, where multiple combinations of model parameters lead to equally acceptable performance against observations. These decisions in model calibration introduce uncertainty that must be considered when model results are used to inform environmental decision-making. This presentation focusses on the uncertainties that derive from the calibration of a four-parameter lumped catchment hydrological model (GR4J). The GR models contain an inbuilt automatic calibration algorithm that can satisfactorily calibrate against four error metrics in only a few seconds. However, a single, deterministic model result does not provide information on parameter uncertainty. Furthermore, a modeller interested in extreme events, such as droughts, may wish to calibrate against more low-flow-specific error metrics. In a comprehensive assessment, the GR4J model has been run with 500,000 Latin Hypercube sampled parameter sets across 303 catchments in the United Kingdom. These parameter sets have been assessed against six error metrics, including two drought-specific metrics. This presentation compares the two approaches, and demonstrates that the inbuilt automatic calibration can outperform the Latin Hypercube approach on single-metric performance.
However, it is also shown that the more comprehensive assessment has many merits: it allows for probabilistic model results, multi-objective optimisation, and better tailoring of the calibration to specific applications such as drought event characterisation. Modellers and decision-makers may be constrained in their choice of calibration method, so it is important that they recognise the strengths and limitations of their chosen approach.
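    The Latin Hypercube sampling used for the comprehensive assessment stratifies each parameter's range so that even a modest sample covers every margin evenly. A sketch follows; the GR4J bounds shown are placeholders, as the real feasible ranges depend on units and catchment.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    """Latin Hypercube Sample: each parameter's range is cut into n_samples
    equal strata with exactly one draw per stratum; the strata are shuffled
    independently per dimension so the margins are covered evenly.
    bounds: sequence of (low, high) pairs, one per parameter.
    """
    d = len(bounds)
    # One random permutation of stratum indices per dimension.
    strata = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
    u = (strata + rng.random((n_samples, d))) / n_samples   # stratified in [0, 1)
    lows = np.array([b[0] for b in bounds], dtype=float)
    highs = np.array([b[1] for b in bounds], dtype=float)
    return lows + u * (highs - lows)

# Placeholder bounds for the four GR4J parameters (x1..x4).
bounds = [(10.0, 2000.0), (-10.0, 10.0), (1.0, 500.0), (0.5, 10.0)]
params = latin_hypercube(500, bounds, np.random.default_rng(42))
```

    Compared with plain random sampling, the stratification guarantees that no region of any single parameter's range is left unexplored, which matters when only a limited simulation budget is available.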

  1. MatchGUI: A Graphical MATLAB-Based Tool for Automatic Image Co-Registration

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.

    2011-01-01

    MatchGUI software, based on MATLAB, automatically matches two images and displays the match result by superimposing one image on the other. A slider bar allows focus to shift between the two images. There are tools for zoom, auto-crop to the overlap region, and basic image markup. Given a pair of ortho-rectified images (focused primarily on Mars orbital imagery for now), this software automatically co-registers the imagery so that corresponding image pixels are aligned. MatchGUI requires minimal user input, and performs a registration over scale and in-plane rotation fully automatically.

  2. Laboratory Evaluation of Light Obscuration Particle Counter Contamination Limits for Aviation Fuel

    DTIC Science & Technology

    2015-11-01

    diesel product for ground use (1). At a minimum, free water and particulate by color (as specified in the appendix of ASTM D2276) are checked daily...used in the hydraulics/hydraulic fluid industry. In 1999 ISO adopted ISO 11171, Hydraulic fluid power — Calibration of automatic particle counters...for liquids, replacing ISO 4402, as an international standard for the calibration of liquid particle counters giving NIST traceability to particle

  3. Light Obscuration Particle Counter Fuel Contamination Limits

    DTIC Science & Technology

    2015-10-08

    or up to 10 mg/L for product used as a diesel product for ground use (1). At a minimum, free water and particulate by color (as specified in the...contamination is frequently used in the hydraulics/hydraulic fluid industry. In 1999 ISO adopted ISO 11171, Hydraulic fluid power — Calibration of automatic...particle counters for liquids, replacing ISO 4402, as an international standard for the calibration of liquid particle counters giving NIST

  4. Automated Camera Array Fine Calibration

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel; Padgett, Curtis; Ansar, Adnan; Cheng, Yang

    2008-01-01

    Using aerial imagery, the JPL FineCalibration (JPL FineCal) software automatically tunes a set of existing CAHVOR camera models for an array of cameras. The software finds matching features in the overlap region between images from adjacent cameras, and uses these features to refine the camera models. It is not necessary to take special imagery of a known target and no surveying is required. JPL FineCal was developed for use with an aerial, persistent surveillance platform.

  5. Fully automatic region of interest selection in glomerular filtration rate estimation from 99mTc-DTPA renogram.

    PubMed

    Lin, Kun-Ju; Huang, Jia-Yann; Chen, Yung-Sheng

    2011-12-01

    Glomerular filtration rate (GFR) is a commonly accepted standard estimate of renal function. Gamma camera-based methods for estimating renal uptake of (99m)Tc-diethylenetriaminepentaacetic acid (DTPA) without blood or urine sampling have been widely used. Of these, the method introduced by Gates has been the most common. Currently, most gamma cameras are equipped with a commercial program for GFR determination, a semi-quantitative analysis performed by manually drawing a region of interest (ROI) over each kidney. The GFR value can then be computed automatically from the scintigraphic determination of (99m)Tc-DTPA uptake within the kidney. Delineating the kidney area is difficult when applying a fixed threshold value. Moreover, hand-drawn ROIs are tedious, time consuming, and highly dependent on operator skill. Thus, we developed a fully automatic renal ROI estimation system based on the temporal changes in intensity counts, intensity-pair distribution image contrast enhancement, adaptive thresholding, and morphological operations that can locate the kidney area and obtain the GFR value from a (99m)Tc-DTPA renogram. To evaluate the performance of the proposed approach, 30 clinical dynamic renograms were used. The fully automatic approach failed in one patient with very poor renal function. Four patients had a unilateral kidney, and the others had bilateral kidneys. The automatic contours from the remaining 54 kidneys were compared with manually drawn contours, and all 54 kidneys were included for area error and boundary error analyses. There was high correlation between two physicians' manual contours and the contours obtained by our approach. For area error analysis, the mean true positive area overlap is 91%, the mean false negative is 13.4%, and the mean false positive is 9.3%. The boundary error is 1.6 pixels.
The GFR calculated using this automatic computer-aided approach is reproducible and may be applied to help nuclear medicine physicians in clinical practice.
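    The adaptive-thresholding step can be illustrated with Otsu's criterion, which picks the intensity cut that maximizes between-class variance. This is a generic stand-in, since the record does not specify the exact thresholding rule used.

```python
import numpy as np

def otsu_threshold(values):
    """Return the histogram-bin midpoint that maximizes the between-class
    variance w_b * w_f * (m_b - m_f)^2 of a background/foreground split.
    """
    hist, edges = np.histogram(values, bins=256)
    mids = (edges[:-1] + edges[1:]) / 2.0
    total = hist.sum()
    sum_all = float((hist * mids).sum())
    best_t, best_var = float(mids[0]), -1.0
    w_b = 0.0       # running background weight
    sum_b = 0.0     # running background intensity sum
    for i in range(256):
        w_b += hist[i]
        sum_b += hist[i] * mids[i]
        if w_b == 0 or w_b == total:
            continue                      # both classes must be non-empty
        m_b = sum_b / w_b
        m_f = (sum_all - sum_b) / (total - w_b)
        between = w_b * (total - w_b) * (m_b - m_f) ** 2
        if between > best_var:
            best_var, best_t = between, float(mids[i])
    return best_t

# Two well-separated intensity populations: background ~10, kidney ~200.
rng = np.random.default_rng(1)
vals = np.concatenate([rng.normal(10.0, 2.0, 500), rng.normal(200.0, 10.0, 150)])
t = otsu_threshold(vals)
```

    Unlike a fixed threshold, this cut adapts to each renogram's own intensity distribution, which is the property the record relies on when fixed thresholds fail.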

  6. Self-Calibrating and Remote Programmable Signal Conditioning Amplifier System and Method

    NASA Technical Reports Server (NTRS)

    Medelius, Pedro J. (Inventor); Hallberg, Carl G. (Inventor); Simpson, Howard J., III (Inventor); Thayer, Stephen W. (Inventor)

    1998-01-01

    A self-calibrating, remote programmable signal conditioning amplifier system employs information read from a memory attached to a measurement transducer for automatic calibration. The signal conditioning amplifier is self-calibrated on a continuous basis through use of a dual input path arrangement, with each path containing a multiplexer and a programmable amplifier. A digital signal processor controls operation of the system such that a transducer signal is applied to one of the input paths, while one or more calibration signals are applied to the second input path. Once the second path is calibrated, the digital signal processor switches the transducer signal to the second path and then calibrates the first path. This process is continually repeated so that each path is calibrated on an essentially continuous basis. Dual output paths, calibrated in the same manner, are also employed. The digital signal processor also allows the implementation of a variety of digital filters, which are either programmed into the system or downloaded by an operator, and performs up to eighth-order linearization.
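
    The per-path calibration step lends itself to a two-point (zero and full-scale) gain/offset solve. The sketch below is a simplified illustration of that idea, not the patented circuit; the reference voltages and drift values are invented:

```python
def calibrate_path(read, zero_ref=0.0, span_ref=5.0):
    """Derive a correction for one amplifier path from two known
    calibration signals, while the other path carries the transducer."""
    raw_zero = read(zero_ref)    # response to the zero reference
    raw_span = read(span_ref)    # response to the full-scale reference
    gain = (span_ref - zero_ref) / (raw_span - raw_zero)
    offset = zero_ref - gain * raw_zero
    return lambda raw: gain * raw + offset

# A hypothetical drifting path: true gain 2.1, offset 0.3
drifting_path = lambda v: 2.1 * v + 0.3
correct = calibrate_path(drifting_path)

# After calibration the corrected reading matches the input signal
assert abs(correct(drifting_path(1.25)) - 1.25) < 1e-9
```

    In the described system this solve would run continuously, ping-ponging between the two paths so the transducer signal always flows through a freshly calibrated one.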

  7. Parameter Estimation of Computationally Expensive Watershed Models Through Efficient Multi-objective Optimization and Interactive Decision Analytics

    NASA Astrophysics Data System (ADS)

    Akhtar, Taimoor; Shoemaker, Christine

    2016-04-01

    Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, so no single optimal parameterization exists. Hence, many experts prefer a manual approach to calibration, in which the inherent multi-objective nature of the problem is addressed through an interactive, subjective, time-intensive and complex decision-making process. Multi-objective optimization can efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to its use in parameter estimation: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selecting one from the numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address these challenges. HAMS employs a three-stage framework for parameter estimation. Stage 1 uses an efficient surrogate multi-objective algorithm, GOMORS, to identify numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS lies in Stages 2 and 3, where an interactive visual and metric-based analytics framework serves as a decision support tool for choosing a single calibration from the numerous alternatives identified in Stage 1. Stage 2 provides a goodness-of-fit, metric-based interactive framework for identifying a small, diverse subset (typically fewer than 10) of meaningful calibration alternatives from those obtained in Stage 1. Stage 3 uses an interactive visual analytics framework for decision support in selecting one parameter combination from the alternatives identified in Stage 2. HAMS is applied to calibrate the flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville watershed in upstate New York. Results indicate that efficient multi-objective optimization and interactive visual and metric-based analytics can bridge the gap between the effective use of automatic and manual strategies for parameter estimation of computationally expensive watershed models.
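
    Stage 1 hinges on identifying nondominated calibration alternatives. The basic Pareto filter behind that idea (for minimized error criteria) can be sketched as follows; the objective pairs are invented:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only calibration alternatives not dominated by any other."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (flow-error, volume-error) pairs for five parameter sets
candidates = [(0.20, 0.90), (0.35, 0.40), (0.60, 0.15), (0.70, 0.50), (0.25, 0.95)]
front = pareto_front(candidates)
# (0.70, 0.50) is dominated by (0.35, 0.40); (0.25, 0.95) by (0.20, 0.90)
```

    Stages 2 and 3 then operate on such a front, winnowing it down to one chosen parameterization.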

  8. Calibration of a Hall effect displacement measurement system for complex motion analysis using a neural network.

    PubMed

    Northey, G W; Oliver, M L; Rittenhouse, D M

    2006-01-01

    Biomechanics studies often require the analysis of position and orientation. Although a variety of transducer and camera systems can be utilized, a common inexpensive alternative is the Hall effect sensor. Hall effect sensors have been used extensively for one-dimensional position analysis, but their non-linear behavior and cross-talk effects make them difficult to calibrate for effective and accurate two- and three-dimensional position and orientation analysis. The aim of this study was to develop and calibrate a displacement measurement system for a hydraulic-actuation joystick used for repetitive motion analysis of heavy equipment operators. The system utilizes an array of four Hall effect sensors that are all active during any joystick movement. This built-in redundancy allows the calibration to utilize fully connected feed-forward neural networks in conjunction with a Microscribe 3D digitizer. A fully connected feed-forward neural network with one hidden layer containing five neurons was developed. Results indicate that the ability of the neural network to accurately predict the x, y and z coordinates of the joystick handle was good, with r(2) values of 0.98 and higher. The calibration technique was found to be equally accurate when used on data collected 5 days after the initial calibration, indicating the system is robust and stable enough not to require calibration every time the joystick is used. This calibration system allowed arbitrary joystick orientations and positions to be resolved within the range of joystick motion.
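
    The forward pass of the network described (four sensor inputs, one hidden layer of five neurons, three coordinate outputs) is compact. In this sketch the weights are placeholders standing in for values a real calibration would fit against the digitizer data, and tanh is an assumed activation:

```python
import math

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One-hidden-layer feed-forward pass: 4 Hall-sensor inputs ->
    5 tanh hidden units -> 3 outputs (predicted x, y, z of the handle)."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hidden, b_hidden)]
    return [sum(wi * hi for wi, hi in zip(w, hidden)) + b
            for w, b in zip(w_out, b_out)]

# Placeholder weights; training against Microscribe measurements would set these
w_hidden = [[0.1 * (i + j) for j in range(4)] for i in range(5)]
b_hidden = [0.01] * 5
w_out = [[0.2] * 5 for _ in range(3)]
b_out = [0.0, 0.1, -0.1]

xyz = forward([0.5, -0.2, 0.8, 0.1], w_hidden, b_hidden, w_out, b_out)
```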

  9. Automatic segmentation of abdominal organs and adipose tissue compartments in water-fat MRI: Application to weight-loss in obesity.

    PubMed

    Shen, Jun; Baum, Thomas; Cordes, Christian; Ott, Beate; Skurk, Thomas; Kooijman, Hendrik; Rummeny, Ernst J; Hauner, Hans; Menze, Bjoern H; Karampinos, Dimitrios C

    2016-09-01

    To develop a fully automatic algorithm for segmentation of abdominal organs and adipose tissue compartments, and to assess organ and adipose tissue volume changes in longitudinal water-fat magnetic resonance imaging (MRI) data. Axial two-point Dixon images were acquired in 20 obese women (age range 24-65, BMI 34.9 ± 3.8 kg/m(2)) before and after a four-week calorie restriction. Abdominal organs, subcutaneous adipose tissue (SAT) compartments (abdominal, anterior, posterior), SAT regions along the feet-head direction and regional visceral adipose tissue (VAT) were assessed by a fully automatic algorithm using morphological operations and a multi-atlas-based segmentation method. The accuracy of organ segmentation, represented by Dice coefficients, ranged from 0.672 ± 0.155 for the pancreas to 0.943 ± 0.023 for the liver. Abdominal SAT changes were significantly greater in the posterior than the anterior SAT compartment (-11.4% ± 5.1% versus -9.5% ± 6.3%, p < 0.001). The loss of VAT that was not located around any organ (-16.1% ± 8.9%) was significantly greater than the loss of VAT within 5 cm of the liver, left and right kidneys, spleen, and pancreas (p < 0.05). The presented fully automatic algorithm showed good performance in abdominal adipose tissue and organ segmentation, and allowed the detection of SAT and VAT subcompartment changes during weight loss. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
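
    The Dice coefficients used above to report segmentation accuracy are straightforward to compute from binary masks. A minimal sketch with invented flat masks:

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two flat binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

seg = [1, 1, 1, 0, 0, 1]   # automatic segmentation, flattened
ref = [1, 1, 0, 0, 1, 1]   # reference segmentation
score = dice(seg, ref)     # 2*3 / (4 + 4) = 0.75
```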

  10. Optimization of Equation of State and Burn Model Parameters for Explosives

    NASA Astrophysics Data System (ADS)

    Bergh, Magnus; Wedberg, Rasmus; Lundgren, Jonas

    2017-06-01

    A reactive burn model implemented in a multi-dimensional hydrocode can be a powerful tool for predicting non-ideal effects as well as initiation phenomena in explosives. Calibration against experiment is, however, critical and non-trivial. Here, a procedure is presented for calibrating the Ignition and Growth Model utilizing hydrocode simulation in conjunction with the optimization program LS-OPT. The model is applied to the explosive PBXN-109. First, a cylinder expansion test is presented together with a new automatic routine for product equation of state calibration. Secondly, rate stick tests and instrumented gap tests are presented. Data from these experiments are used to calibrate burn model parameters. Finally, we discuss the applicability and development of this optimization routine.

  11. Improving the Calibration of the SN Ia Anchor Datasets with a Bayesian Hierarchal Model

    NASA Astrophysics Data System (ADS)

    Currie, Miles; Rubin, David

    2018-01-01

    Inter-survey calibration remains one of the largest systematic uncertainties in SN Ia cosmology today. Ideally, each survey would measure their system throughputs and observe well characterized spectrophotometric standard stars, but many important surveys have not done so. For these surveys, we calibrate using tertiary survey stars tied to SDSS and Pan-STARRS. We improve on previous efforts by taking the spatially variable response of each telescope/camera into account, and using improved color transformations in the surveys’ natural instrumental photometric system. We use a global hierarchical model of the data, automatically providing a covariance matrix of magnitude offsets and bandpass shifts which reduces the systematic uncertainty in inter-survey calibration, thereby providing better cosmological constraints.

  12. Design and analysis of an automatic method of measuring silicon-controlled-rectifier holding current

    NASA Technical Reports Server (NTRS)

    Maslowski, E. A.

    1971-01-01

    The design of an automated SCR holding-current measurement system is described. The circuits used in the measurement system were designed to meet the major requirements of automatic data acquisition, reliability, and repeatability. Performance data are presented and compared with calibration data. The data verified the accuracy of the measurement system. Data taken over a 48-hr period showed that the measurement system operated satisfactorily and met all the design requirements.

  13. Automatic chemical vapor deposition

    NASA Technical Reports Server (NTRS)

    Kennedy, B. W.

    1981-01-01

    Report reviews chemical vapor deposition (CVD) for processing integrated circuits and describes fully automatic machine for CVD. CVD proceeds at relatively low temperature, allows wide choice of film compositions (including graded or abruptly changing compositions), and deposits uniform films of controllable thickness at fairly high growth rate. Report gives overview of hardware, reactants, and temperature ranges used with CVD machine.

  14. A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Jin; Yu, Yaming; Van Dyk, David A.

    2014-10-20

    Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product, here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.

  15. Progress on the Cluster Mission

    NASA Technical Reports Server (NTRS)

    Kivelson, Margaret; Khurana, Krishan; Acuna, Mario (Technical Monitor)

    2002-01-01

    Prof M. G. Kivelson and Dr. K. K. Khurana (UCLA (University of California, Los Angeles)) are co-investigators on the Cluster Magnetometer Consortium (CMC) that provided the fluxgate magnetometers and associated mission support for the Cluster Mission. The CMC designated UCLA as the site with primary responsibility for the inter-calibration of data from the four spacecraft and the production of fully corrected data critical to achieving the mission objectives. UCLA will also participate in the analysis and interpretation of the data. The UCLA group here reports its excellent progress in developing fully intra-calibrated data for large portions of the mission and an excellent start in developing inter-calibrated data for selected time intervals, especially extended intervals in August, 2001 on which a workshop held at ESTEC in March, 2002 focused. In addition, some scientific investigations were initiated and results were reported at meetings.

  16. Day/night whole sky imagers for 24-h cloud and sky assessment: history and overview.

    PubMed

    Shields, Janet E; Karr, Monette E; Johnson, Richard W; Burden, Art R

    2013-03-10

    A family of fully automated digital whole sky imagers (WSIs) has been developed at the Marine Physical Laboratory over many years, for a variety of research and military applications. The most advanced of these, the day/night whole sky imagers (D/N WSIs), acquire digital imagery of the full sky down to the horizon under all conditions from full sunlight to starlight. Cloud algorithms process the imagery to automatically detect the locations of cloud for both day and night. The instruments can provide absolute radiance distribution over the full radiance range from starlight through daylight. The WSIs were fielded in 1984, followed by the D/N WSIs in 1992. These many years of experience and development have resulted in very capable instruments and algorithms that remain unique. This article discusses the history of the development of the D/N WSIs, system design, algorithms, and data products. The paper cites many reports with more detailed technical documentation. Further details of calibration, day and night algorithms, and cloud free line-of-sight results will be discussed in future articles.

  17. An integrated system for rainfall induced shallow landslides modeling

    NASA Astrophysics Data System (ADS)

    Formetta, Giuseppe; Capparelli, Giovanna; Rigon, Riccardo; Versace, Pasquale

    2014-05-01

    Rainfall induced shallow landslides (RISL) cause significant damage, involving loss of life and property. Predicting susceptible locations for RISL is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Two main approaches are usually used to accomplish this task: statistical and physically based models. In this work an open-source (OS), 3-D, fully distributed hydrological model was integrated into an OS modeling framework (Object Modeling System). The chain is closed by linking the system to a component for safety-factor computation under the infinite slope approximation, able to take into account layered soils and the suction contribution to hillslope stability. The model composition was tested on a case study in Calabria (Italy) to simulate the triggering of a landslide that occurred in Cosenza Province. The integration in OMS allows the use of other components, such as a GIS to manage input-output processes and automatic calibration algorithms to estimate model parameters. Finally, model performance was quantified by comparing simulated and observed trigger times. This research is supported by the Ambito/Settore AMBIENTE E SICUREZZA (PON01_01503) project.
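
    The infinite-slope safety factor that closes the modeling chain can be sketched as follows. The parameter values are illustrative, and for simplicity the suction contribution is folded into an apparent cohesion rather than modeled per layer:

```python
import math

def factor_of_safety(c_eff, phi_deg, gamma, depth, slope_deg, pore_pressure):
    """Infinite-slope factor of safety.

    c_eff: effective cohesion (kPa), which may include a suction contribution
    phi_deg: friction angle (deg); gamma: soil unit weight (kN/m^3)
    depth: vertical soil depth (m); slope_deg: slope angle (deg)
    pore_pressure: pore-water pressure on the slip surface (kPa)
    """
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    normal = gamma * depth * math.cos(beta) ** 2 - pore_pressure   # effective normal stress
    driving = gamma * depth * math.sin(beta) * math.cos(beta)      # driving shear stress
    return (c_eff + normal * math.tan(phi)) / driving

# Illustrative values: 5 kPa cohesion, 30 deg friction, 1 m of soil on a 35 deg slope
fs_dry = factor_of_safety(5.0, 30.0, 18.0, 1.0, 35.0, 0.0)
fs_wet = factor_of_safety(5.0, 30.0, 18.0, 1.0, 35.0, 8.0)
assert fs_wet < fs_dry   # rainfall-raised pore pressure lowers stability
```

    Driving the pore-pressure term with a distributed hydrological model, as done here, is what turns this static formula into a rainfall-triggering predictor.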

  18. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which requires neither prior camera calibration nor any other external knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, especially suitable for rapid response and precise modelling in disaster emergencies.

  19. Development of a drift-correction procedure for a direct-reading spectrometer

    NASA Technical Reports Server (NTRS)

    Chapman, G. B., II; Gordon, W. A.

    1977-01-01

    A procedure which provides automatic correction for drifts in the radiometric sensitivity of each detector channel in a direct-reading emission spectrometer is described. Such drifts are customarily controlled by the regular analysis of standards, which provides corrections for changes in the excitational, optical, and electronic components of the instrument. Processing standards is costly, however; automatically correcting the optical and electronic drifts minimizes the time, effort, and cost of that standardization. This method of radiometric drift correction uses a 1,000-W tungsten-halogen reference lamp to illuminate each detector through the same optical path as that traversed during sample analysis. The responses of the detector channels to this reference light are regularly compared with the channel responses to the same light intensity at the time of analytical calibration in order to determine and correct for drift. Except for placing the lamp in position, the procedure is fully automated and compensates for changes in spectral intensity due to variations in lamp current. A discussion of the implementation of this drift-correction system is included.
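
    The correction itself reduces to scaling each channel reading by the ratio of its reference-lamp response at calibration time to its current response. A sketch with invented channel readings:

```python
def drift_corrected(sample_reading, ref_at_calibration, ref_now):
    """Correct a detector-channel reading for radiometric drift.

    ref_at_calibration: the channel's response to the tungsten-halogen
    reference lamp when the analytical calibration was made;
    ref_now: its response to the same lamp intensity today.
    """
    return sample_reading * ref_at_calibration / ref_now

# A channel whose sensitivity has drifted down 4% since calibration:
# a raw reading of 96.0 is restored to the 100.0 it would originally have given.
corrected = drift_corrected(96.0, 250.0, 240.0)   # -> 100.0
```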

  20. Hazardous Environment Robotics

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Jet Propulsion Laboratory (JPL) developed video overlay calibration and demonstration techniques for ground-based telerobotics. Through a technology sharing agreement with JPL, Deneb Robotics added this as an option to its robotics software, TELEGRIP. The software is used for remotely operating robots in nuclear and hazardous environments in industries including automotive and medical. The option allows the operator to utilize video to calibrate 3-D computer models with the actual environment, and thus plan and optimize robot trajectories before the program is automatically generated.

  1. Development of Automatic Control of Bayer Plant Digestion

    NASA Astrophysics Data System (ADS)

    Riffaud, J. P.

    Supervisory computer control has been achieved in Alcan's Bayer plants at Arvida, Quebec, Canada. The purpose of the automatic control system is to stabilize, and consequently increase, the alumina/caustic ratio within the digester train and in the blow-off liquor. Measurements of the electrical conductivity of the liquor are obtained from electrodeless conductivity meters. These signals, along with several others, are scanned by the computer and converted to engineering units using specific relationships which are updated periodically for calibration purposes. At regular time intervals, values of the ratio are compared to target values and adjustments are made to the bauxite flow entering the digesters. Dead-time compensation included in the control algorithm enables a faster rate of correction. Modification of the production rate is achieved through careful timing of various flow changes. Calibration of the conductivity meters is achieved by sampling at intervals the liquor flowing through them and analysing it with a thermometric titrator. Calibration of the thermometric titrator is done at intervals with a standard solution. Calculations for both calibrations are performed by computer from data entered by the analyst. The computer was used for on-line data collection, modelling of the digester system, calculation of disturbances and simulation of control strategies before implementing the most successful strategy in the plant. Control of the ratio has been improved by the integrated system, resulting in increased plant productivity.

  2. Evaluation of the use of performance reference compounds in an oasis-HLB adsorbent based passive sampler for improving water concentration estimates of polar herbicides in freshwater

    USGS Publications Warehouse

    Mazzella, N.; Lissalde, S.; Moreira, S.; Delmas, F.; Mazellier, P.; Huckins, J.N.

    2010-01-01

    Passive samplers such as the Polar Organic Chemical Integrative Sampler (POCIS) are useful tools for monitoring trace levels of polar organic chemicals in aquatic environments. The use of performance reference compounds (PRC) spiked into the POCIS adsorbent for in situ calibration may improve the semiquantitative nature of water concentration estimates based on this type of sampler. In this work, deuterium-labeled atrazine-desisopropyl (DIA-d5) was chosen as the PRC because of its relatively high fugacity from Oasis HLB (the POCIS adsorbent used) and our earlier evidence of its isotropic exchange. In situ calibration of POCIS spiked with DIA-d5 was performed, and the resulting time-weighted average concentration estimates were compared with similar values from an automatic sampler equipped with Oasis HLB cartridges. Before PRC correction, water concentration estimates based on POCIS data and sampling rates from a laboratory calibration exposure were systematically lower than the reference concentrations obtained with the automatic sampler. Use of the DIA-d5 PRC data to correct POCIS sampling rates narrowed differences between corresponding values derived from the two methods. Application of PRCs for in situ calibration seems promising for improving POCIS-derived concentration estimates of polar pesticides. However, careful attention must be paid to the minimization of matrix effects when the quantification is performed by HPLC-ESI-MS/MS. © 2010 American Chemical Society.
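
    The PRC correction can be sketched as scaling the laboratory sampling rate by the ratio of in situ to laboratory PRC elimination rates before forming the time-weighted average concentration. First-order PRC dissipation and all numbers below are illustrative assumptions, not values from the study:

```python
import math

def elimination_rate(frac_remaining, days):
    """First-order PRC dissipation rate from the fraction remaining."""
    return -math.log(frac_remaining) / days

def twa_concentration(mass_ng, rs_lab, days, ke_insitu, ke_lab):
    """Time-weighted average water concentration (ng/L) from a POCIS.

    rs_lab: laboratory-derived sampling rate (L/day); the PRC correction
    scales it by the ratio of in situ to laboratory elimination rates.
    """
    rs_insitu = rs_lab * ke_insitu / ke_lab
    return mass_ng / (rs_insitu * days)

ke_lab = elimination_rate(0.60, 14)      # 40% of the PRC lost in the lab study
ke_insitu = elimination_rate(0.45, 14)   # faster loss under field conditions
c = twa_concentration(mass_ng=120.0, rs_lab=0.22, days=14,
                      ke_insitu=ke_insitu, ke_lab=ke_lab)
# faster in situ exchange -> larger effective sampling rate -> lower estimate
```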

  3. Motor automaticity in Parkinson’s disease

    PubMed Central

    Wu, Tao; Hallett, Mark; Chan, Piu

    2017-01-01

    Bradykinesia is the most important feature contributing to motor difficulties in Parkinson’s disease (PD). However, the pathophysiology underlying bradykinesia is not fully understood. One important aspect is that PD patients have difficulty in performing learned motor skills automatically, but this problem has been generally overlooked. Here we review motor automaticity associated motor deficits in PD, such as reduced arm swing, decreased stride length, freezing of gait, micrographia and reduced facial expression. Recent neuroimaging studies have revealed some neural mechanisms underlying impaired motor automaticity in PD, including less efficient neural coding of movement, failure to shift automated motor skills to the sensorimotor striatum, instability of the automatic mode within the striatum, and use of attentional control and/or compensatory efforts to execute movements usually performed automatically in healthy people. PD patients lose previously acquired automatic skills due to their impaired sensorimotor striatum, and have difficulty in acquiring new automatic skills or restoring lost motor skills. More investigations on the pathophysiology of motor automaticity, the effect of L-dopa or surgical treatments on automaticity, and the potential role of using measures of automaticity in early diagnosis of PD would be valuable. PMID:26102020

  4. Semi-Automatic Determination of Rockfall Trajectories

    PubMed Central

    Volkwein, Axel; Klette, Johannes

    2014-01-01

    Determining rockfall trajectories in the field is essential for calibrating and validating rockfall simulation software. This contribution presents an in situ device and a complementary Local Positioning System (LPS) that allow the determination of parts of the trajectory. An assembly of sensors (herein called the rockfall sensor) is installed in the falling block, recording the 3D accelerations and rotational velocities. The LPS automatically calculates the position of the block along the slope over time based on Wi-Fi signals emitted from the rockfall sensor. The velocity of the block over time is determined through post-processing. The setup of the rockfall sensor is presented, followed by proposed calibration and validation procedures. The performance of the LPS is evaluated by means of different experiments. The results allow for a quality analysis of both the obtained field data and the usability of the rockfall sensor for further applications in the field. PMID:25268916

  5. MetaQuant: a tool for the automatic quantification of GC/MS-based metabolome data.

    PubMed

    Bunk, Boyke; Kucklick, Martin; Jonas, Rochus; Münch, Richard; Schobert, Max; Jahn, Dieter; Hiller, Karsten

    2006-12-01

    MetaQuant is a Java-based program for the automatic and accurate quantification of GC/MS-based metabolome data. In contrast to other programs, MetaQuant is able to quantify hundreds of substances simultaneously with minimal manual intervention. The integration of a self-acting calibration function allows fast, parallel calibration for several metabolites simultaneously. Finally, MetaQuant is able to import GC/MS data in the common NetCDF format and to export the results of the quantification into Systems Biology Markup Language (SBML), Comma Separated Values (CSV) or Microsoft Excel (XLS) format. MetaQuant is written in Java and is available under an open source license. Precompiled packages for installation on Windows or Linux operating systems are freely available for download. The source code as well as the installation packages are available at http://bioinformatics.org/metaquant
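
    Per-metabolite calibration of this kind amounts to fitting a response-versus-concentration line for the standards and inverting it for unknowns. A least-squares sketch with invented standards, illustrating the idea rather than MetaQuant's actual internals:

```python
def fit_line(conc, resp):
    """Ordinary least-squares line of detector response vs. standard
    concentration (the per-metabolite calibration function)."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(resp) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(conc, resp))
             / sum((x - mx) ** 2 for x in conc))
    return slope, my - slope * mx

def quantify(response, slope, intercept):
    """Invert the calibration line: peak response -> concentration."""
    return (response - intercept) / slope

# Invented standards whose responses follow resp = 3*conc + 1 exactly
conc = [1.0, 2.0, 4.0, 8.0]
resp = [4.0, 7.0, 13.0, 25.0]
slope, intercept = fit_line(conc, resp)
unknown = quantify(16.0, slope, intercept)   # -> 5.0
```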

  6. An Open-Source Automated Peptide Synthesizer Based on Arduino and Python.

    PubMed

    Gali, Hariprasad

    2017-10-01

    The development of the first open-source automated peptide synthesizer, PepSy, using an Arduino UNO and readily available components is reported. PepSy was primarily designed to synthesize small peptides on a relatively small scale (<100 µmol). Scripts to operate PepSy in fully automatic or manual mode were written in Python. The fully automatic script includes functions to carry out resin swelling, resin washing, single coupling, double coupling, Fmoc deprotection, ivDde deprotection, on-resin oxidation, end capping, and amino acid/reagent line cleaning. Several small peptides and peptide conjugates were successfully synthesized on PepSy with reasonably good yields and purity, depending on the complexity of the peptide.

  7. Automatic anatomy partitioning of the torso region on CT images by using multiple organ localizations with a group-wise calibration technique

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Morita, Syoichi; Zhou, Xinxin; Chen, Huayue; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Fujita, Hiroshi

    2015-03-01

    This paper describes an automatic approach for anatomy partitioning on three-dimensional (3D) computed tomography (CT) images that divides the human torso into several volume-of-interest (VOI) images based on anatomical definitions. The proposed approach combines several individual organ-location detections with a group-wise organ-location calibration and correction to achieve an automatic and robust multiple-organ localization task. The essence of the proposed method is to jointly detect the 3D minimum bounding box for each type of organ shown on CT images based on intra-organ image textures and the inter-organ spatial relationships of the anatomy. Machine-learning-based template matching and generalized Hough transform-based point-distribution estimation are used in the detection and calibration processes. We apply this approach to the automatic partitioning of a torso region on CT images, which is divided into 35 VOIs presenting major organ regions and tissues required by routine diagnosis in clinical medicine. A database containing 4,300 patient cases of high-resolution 3D torso CT images is used for training and performance evaluations. We confirmed that the proposed method was successful in target organ localization on more than 95% of CT cases. Only two organs (gallbladder and pancreas) showed a lower success rate: 71% and 78%, respectively. In addition, we applied this approach to another database that included 287 patient cases of whole-body CT images scanned for positron emission tomography (PET) studies and used it for additional performance evaluation. The experimental results showed no significant difference between the anatomy partitioning results from the two databases except for the spleen. All experimental results showed that the proposed approach was efficient and useful in accomplishing localization tasks for major organs and tissues on CT images scanned using different protocols.

  8. Evaluation of Particle Counter Technology for Detection of Fuel Contamination Detection Utilizing Advanced Aviation Forward Area Refueling System

    DTIC Science & Technology

    2014-01-24

    Keywords: automatic particle counter, cleanliness, free water, diesel. …aircraft, or up to 10 mg/L for product used as a diesel product for ground use (1). Free water contamination (droplets) may appear as fine droplets or… published several methods and test procedures for the calibration and use of automatic particle counters. The transition of this technology to the fuel…

  9. Parts-Per-Billion Mass Measurement Accuracy Achieved through the Combination of Multiple Linear Regression and Automatic Gain Control in a Fourier Transform Ion Cyclotron Resonance Mass Spectrometer

    PubMed Central

    Williams, D. Keith; Muddiman, David C.

    2008-01-01

    Fourier transform ion cyclotron resonance mass spectrometry has the ability to achieve unprecedented mass measurement accuracy (MMA); MMA is one of the most significant attributes of mass spectrometric measurements, as it affords extraordinary molecular specificity. However, due to space-charge effects, the achievable MMA significantly depends on the total number of ions trapped in the ICR cell for a particular measurement. Even with the use of automatic gain control (AGC), the total ion population is not constant between spectra. Multiple linear regression calibration in conjunction with AGC is utilized in these experiments to formally account for the differences in total ion population in the ICR cell between the external calibration spectra and experimental spectra. This ability allows for the extension of the dynamic range of the instrument while allowing mean MMA values to remain less than 1 ppm. In addition, multiple linear regression calibration is used to account for both differences in total ion population in the ICR cell and the relative ion abundance of a given species, which also affords mean MMA values at the parts-per-billion level. PMID:17539605
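
    A multiple linear regression of the kind used here, with two predictors (think of a cyclotron-frequency term plus a total-ion-population term), can be solved via the normal equations. The data below are invented and the actual FTICR calibration law is simplified away:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def regress(points, y):
    """Least squares for y = b0 + b1*x1 + b2*x2 via the normal equations."""
    X = [[1.0, x1, x2] for x1, x2 in points]
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
    Xty = [sum(row[i] * yk for row, yk in zip(X, y)) for i in range(3)]
    return solve3(XtX, Xty)

# Invented calibrant data following y = 2 + 0.5*x1 - 0.1*x2 exactly
points = [(1, 10), (2, 20), (3, 5), (4, 40), (5, 15)]
y = [2 + 0.5 * x1 - 0.1 * x2 for x1, x2 in points]
b0, b1, b2 = regress(points, y)
```

    Including the ion-population predictor is what lets the fitted calibration compensate for spectrum-to-spectrum space-charge differences.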

  10. Parameterization and Uncertainty Analysis of SWAT model in Hydrological Simulation of Chaohe River Basin

    NASA Astrophysics Data System (ADS)

    Jie, M.; Zhang, J.; Guo, B. B.

    2017-12-01

    As a typical distributed hydrological model, the SWAT model poses a challenge in calibrating its parameters and analyzing their uncertainty. This paper takes the Chaohe River Basin, China, as the study area. After the SWAT model is established and the DEM data of the basin are loaded, the watershed is automatically divided into several sub-basins. Land use, soil and slope are analyzed on the basis of the sub-basins, and the hydrological response units (HRUs) of the study area are calculated; after running the SWAT model, simulated runoff values for the watershed are obtained. On this basis, weather data and known daily runoff from three hydrological stations, combined with the SWAT-CUP automatic program and manual adjustment, are used for multi-site calibration of the model parameters. Furthermore, the GLUE algorithm is used to analyze the parameter uncertainty of the SWAT model. The sensitivity analysis, calibration and uncertainty study indicate that the parameterization of the hydrological characteristics of the Chaohe river is successful and feasible, and the model can be used to simulate the Chaohe river basin.
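
    The GLUE step can be sketched on a toy model. Everything here (the stand-in model, the likelihood threshold of 0.7, the sampling ranges) is an illustrative assumption, not taken from the paper's SWAT setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_model(p, t):
    # stand-in for a SWAT run: a single linear reservoir, purely illustrative
    k, s0 = p
    return s0 * np.exp(-k * t)

t = np.linspace(0, 10, 50)
obs = toy_model((0.3, 5.0), t) + rng.normal(0, 0.05, t.size)  # "measured" runoff

# 1) Monte Carlo sampling of the parameter space
samples = np.column_stack([rng.uniform(0.05, 1.0, 2000),   # k
                           rng.uniform(1.0, 10.0, 2000)])  # s0

# 2) likelihood measure: Nash-Sutcliffe efficiency of each simulation
sims = np.array([toy_model(p, t) for p in samples])
nse = 1 - ((sims - obs) ** 2).sum(1) / ((obs - obs.mean()) ** 2).sum()

# 3) keep the "behavioral" parameter sets above a likelihood threshold
behavioral = nse > 0.7

# 4) 5%/95% prediction bounds over the behavioral runs (unweighted quantiles,
#    a common simplification of the likelihood-weighted version)
lo, hi = np.percentile(sims[behavioral], [5, 95], axis=0)
```

    The spread between `lo` and `hi` is the GLUE uncertainty band around the simulated hydrograph.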

  11. Automatic Camera Calibration Using Multiple Sets of Pairwise Correspondences.

    PubMed

    Vasconcelos, Francisco; Barreto, Joao P; Boyer, Edmond

    2018-04-01

    We propose a new method to add an uncalibrated node into a network of calibrated cameras using only pairwise point correspondences. While previous methods perform this task using triple correspondences, these are often difficult to establish when there is limited overlap between different views. In such challenging cases we must rely on pairwise correspondences, and our solution becomes more advantageous. Our method includes an 11-point minimal solution for the intrinsic and extrinsic calibration of a camera from pairwise correspondences with two other calibrated cameras, and a new inlier selection framework that extends the traditional RANSAC family of algorithms to sampling across multiple datasets. Our method is validated on different application scenarios where a lack of triple correspondences might occur: addition of a new node to a camera network; calibration and motion estimation of a moving camera inside a camera network; and addition of views with limited overlap to a Structure-from-Motion model.
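
    For background, the classical single-dataset RANSAC loop that the paper's inlier-selection framework generalizes can be sketched as follows (a line-fitting toy with invented data, not the paper's 11-point solver).

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic 2D points: a line y = 2x + 1 plus ~30% gross outliers
n = 200
x = rng.uniform(-5, 5, n)
y = 2 * x + 1 + rng.normal(0, 0.05, n)
out = rng.random(n) < 0.3
y[out] += rng.uniform(-20, 20, out.sum())
pts = np.column_stack([x, y])

def ransac_line(pts, iters=500, tol=0.2):
    best_inliers = np.zeros(len(pts), bool)
    for _ in range(iters):
        i, j = rng.choice(len(pts), 2, replace=False)   # minimal sample
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)        # model from the minimal sample
        b = y1 - a * x1
        resid = np.abs(pts[:, 1] - (a * pts[:, 0] + b))
        inliers = resid < tol            # consensus set
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the largest consensus set
    A = np.column_stack([pts[best_inliers, 0], np.ones(best_inliers.sum())])
    a, b = np.linalg.lstsq(A, pts[best_inliers, 1], rcond=None)[0]
    return a, b, best_inliers

a, b, inl = ransac_line(pts)
```

    The paper's contribution is, roughly, how to draw such minimal samples when the correspondences are spread over multiple pairwise datasets rather than one.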

  12. Fully automated tumor segmentation based on improved fuzzy connectedness algorithm in brain MR images.

    PubMed

    Harati, Vida; Khayati, Rasoul; Farzan, Abdolreza

    2011-07-01

    Uncontrollable and unlimited cell growth leads to tumorigenesis in the brain. If brain tumors are not diagnosed early and cured properly, they can cause permanent brain damage or even death. As in all methods of treatment, any information about tumor position and size is important for successful treatment; hence, an accurate and fully automated method for providing this information to physicians is necessary. A fully automatic and accurate method for tumor region detection and segmentation in brain magnetic resonance (MR) images is suggested. The presented approach is an improved fuzzy connectedness (FC) algorithm based on a scale, in which the seed point is selected automatically. This algorithm is independent of the tumor type in terms of its pixel intensity. Tumor segmentation evaluation results based on similarity criteria (similarity index (SI), overlap fraction (OF), and extra fraction (EF) of 92.89%, 91.75%, and 3.95%, respectively) indicate a higher performance of the proposed approach compared to conventional methods, especially in MR images of tumor regions with low contrast. Thus, the suggested method is useful for increasing the ability of automatic estimation of tumor size and position in brain tissues, which provides more accurate investigation of the required surgery, chemotherapy, and radiotherapy procedures. Copyright © 2011 Elsevier Ltd. All rights reserved.
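
    The three evaluation criteria are easy to state precisely. A minimal sketch for binary masks (the toy masks below are illustrative):

```python
import numpy as np

def overlap_metrics(auto, manual):
    """Similarity index (Dice), overlap fraction and extra fraction
    of an automatic mask against a manual reference mask."""
    auto, manual = auto.astype(bool), manual.astype(bool)
    inter = np.logical_and(auto, manual).sum()
    si = 2 * inter / (auto.sum() + manual.sum())             # similarity index
    of = inter / manual.sum()                                # overlap fraction
    ef = np.logical_and(auto, ~manual).sum() / manual.sum()  # extra fraction
    return si, of, ef

manual = np.zeros((10, 10), bool); manual[2:8, 2:8] = True  # 36-px reference
auto = np.zeros((10, 10), bool);   auto[3:9, 2:8] = True    # shifted estimate
si, of, ef = overlap_metrics(auto, manual)
```

    A perfect segmentation gives SI = OF = 1 and EF = 0; the shifted toy mask loses one row of overlap and adds one spurious row.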

  13. On-line calibration of high-response pressure transducers during jet-engine testing

    NASA Technical Reports Server (NTRS)

    Armentrout, E. C.

    1974-01-01

    Jet-engine testing concerned with the effect of inlet pressure and temperature distortions on engine performance involves the use of numerous miniature pressure transducers. Despite recent improvements in the manufacture of miniature pressure transducers, they still exhibit sensitivity change and zero-shift with temperature and time. To obtain meaningful data, a calibration system is needed to determine these changes. A system has been developed which provides for computer selection of appropriate reference pressures from nine different sources to provide a two- or three-point calibration. Calibrations are made on command, before and sometimes after each data point. A unique no-leak matrix-valve design is used in the reference pressure system. Zero-shift corrections are measured and the values are automatically inserted into the data reduction program.
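
    A two-point calibration of this kind reduces to recovering a sensitivity and a zero offset from two known reference pressures. A minimal sketch (all numbers invented for illustration):

```python
def two_point_calibration(raw_lo, raw_hi, ref_lo, ref_hi):
    """Return (sensitivity, zero_offset) so that p = sensitivity*raw + zero_offset.

    raw_lo/raw_hi: transducer output (e.g. volts) at the two reference pressures
    ref_lo/ref_hi: the known reference pressures applied on command
    """
    sensitivity = (ref_hi - ref_lo) / (raw_hi - raw_lo)
    zero_offset = ref_lo - sensitivity * raw_lo
    return sensitivity, zero_offset

# a drifted transducer whose true response happens to be p = 50*v + 3
sens, zero = two_point_calibration(raw_lo=0.1, raw_hi=1.9,
                                   ref_lo=8.0, ref_hi=98.0)
corrected = sens * 0.7 + zero   # convert a subsequent raw reading
```

    Re-running this before each data point, as the reported system does, tracks both sensitivity change and zero-shift over time; a third reference point would additionally expose nonlinearity.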

  14. Challenges in the Development of a Self-Calibrating Network of Ceilometers.

    NASA Astrophysics Data System (ADS)

    Hervo, Maxime; Wagner, Frank; Mattis, Ina; Baars, Holger; Haefele, Alexander

    2015-04-01

    There are more than 700 Automatic Lidars and Ceilometers (ALCs) currently operating in Europe. Modern ceilometers can do more than simply measure the cloud base height: they can also measure aerosol layers such as volcanic ash, Saharan dust or aerosols within the planetary boundary layer. In the frame of E-PROFILE, which is part of EUMETNET, a European network of automatic lidars and ceilometers will be set up to exploit this new capability. To monitor the evolution of aerosol layers over a large spatial scale, the measurements need to be consistent from one site to another. Currently, most of the instruments provide only relative, uncalibrated measurements. Thus, it is necessary to calibrate the instruments to develop a consistent product across instruments from various networks and to combine them in a European network like E-PROFILE. As it is not possible to use an external reference (such as a sun photometer or a Raman lidar) to calibrate all the ALCs in the E-PROFILE network, a self-calibration algorithm is required. Two calibration methods have been identified which are suited for automated use in a network: the Rayleigh and the liquid-cloud calibration methods. In the Rayleigh method, backscatter signals from molecules (the Rayleigh signal) are measured and used to calculate the lidar constant (Wiegner et al. 2012). At the wavelengths used by most ceilometers this signal is weak and can easily be measured only during cloud-free nights. However, with the new algorithm implemented in the frame of the TOPROF COST Action, the Rayleigh calibration was successfully performed on a CHM15k for more than 50% of the nights from October 2013 to September 2014. This method was validated against two reference instruments, the collocated EARLINET PollyXT lidar and the CALIPSO space-borne lidar. The lidar constant was on average within 5.5% of the lidar constant determined by the EARLINET lidar, confirming the validity of the self-calibration method. For 3 CALIPSO overpasses the agreement was on average 20.0%; this is less accurate due to the large uncertainties of CALIPSO data close to the surface. In contrast to the Rayleigh method, the cloud calibration method uses the complete attenuation of the transmitted beam by a liquid water cloud to calculate the lidar constant (O'Connor 2004). The main challenge is the selection of accurately measured water clouds: these clouds should not contain any ice crystals, and the detector should not saturate. The first problem is especially important during winter and the second for low clouds. Furthermore, the overlap function should be known accurately, especially when the water cloud is located at a distance where the overlap between laser beam and telescope field-of-view is still incomplete. In the E-PROFILE pilot network, the Rayleigh calibration is already performed automatically. This demonstration network makes available, in real time, calibrated ALC measurements from 8 instruments of 4 different types in 6 countries. In collaboration with TOPROF and 20 national weather services, E-PROFILE will provide, in 2017, near-real-time ALC measurements in most of Europe.
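
    The Rayleigh method amounts to solving the single-scattering lidar equation for the constant C in an aerosol-free region, P(z) = C β_mol(z) T_mol²(z)/z². A synthetic-profile sketch (toy atmosphere; the backscatter model and the constant are invented for illustration):

```python
import numpy as np

C_true = 4.2e11                                # illustrative lidar constant
z = np.linspace(2000.0, 10000.0, 400)          # aerosol-free altitudes (m)
dz = z[1] - z[0]
beta_mol = 1.5e-6 * np.exp(-z / 8000.0)        # molecular backscatter, toy profile
alpha_mol = beta_mol * 8.0 * np.pi / 3.0       # Rayleigh extinction from backscatter
T2 = np.exp(-2.0 * np.cumsum(alpha_mol) * dz)  # two-way molecular transmission

# simulated cloud-free night signal
P = C_true * beta_mol * T2 / z**2

# Rayleigh calibration: C = P z^2 / (beta_mol T^2), taken as a robust median
C_est = np.median(P * z**2 / (beta_mol * T2))
```

    On real data the molecular profile comes from a standard atmosphere or radiosonde, and the weak night-time signal makes the averaging and altitude-range selection the hard part.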

  15. Quantification of regional fat volume in rat MRI

    NASA Astrophysics Data System (ADS)

    Sacha, Jaroslaw P.; Cockman, Michael D.; Dufresne, Thomas E.; Trokhan, Darren

    2003-05-01

    Multiple initiatives in the pharmaceutical and beauty care industries are directed at identifying therapies for weight management. Body composition measurements are critical for such initiatives. Imaging technologies that can be used to measure body composition noninvasively include DXA (dual energy x-ray absorptiometry) and MRI (magnetic resonance imaging). Unlike other approaches, MRI provides the ability to perform localized measurements of fat distribution. Several factors complicate the automatic delineation of fat regions and quantification of fat volumes. These include motion artifacts, field non-uniformity, brightness and contrast variations, chemical shift misregistration, and ambiguity in delineating anatomical structures. We have developed an approach to deal practically with those challenges. The approach is implemented in a package, the Fat Volume Tool, for automatic detection of fat tissue in MR images of the rat abdomen, including automatic discrimination between abdominal and subcutaneous regions. We suppress motion artifacts using masking based on detection of implicit landmarks in the images. Adaptive object extraction is used to compensate for intensity variations. This approach enables us to perform fat tissue detection and quantification in a fully automated manner. The package can also operate in manual mode, which can be used for verification of the automatic analysis or for performing supervised segmentation. In supervised segmentation, the operator has the ability to interact with the automatic segmentation procedures to touch-up or completely overwrite intermediate segmentation steps. The operator's interventions steer the automatic segmentation steps that follow. This improves the efficiency and quality of the final segmentation. Semi-automatic segmentation tools (interactive region growing, live-wire, etc.) improve both the accuracy and throughput of the operator when working in manual mode. 
The quality of automatic segmentation has been evaluated by comparing the results of fully automated analysis to manual analysis of the same images. The comparison shows a high degree of correlation that validates the quality of the automatic segmentation approach.

  16. Standing on the shoulders of giants: improving medical image segmentation via bias correction.

    PubMed

    Wang, Hongzhi; Das, Sandhitsu; Pluta, John; Craige, Caryne; Altinay, Murat; Avants, Brian; Weiner, Michael; Mueller, Susanne; Yushkevich, Paul

    2010-01-01

    We propose a simple strategy to improve automatic medical image segmentation. The key idea is that without deep understanding of a segmentation method, we can still improve its performance by directly calibrating its results with respect to manual segmentation. We formulate the calibration process as a bias correction problem, which is addressed by machine learning using training data. We apply this methodology on three segmentation problems/methods and show significant improvements for all of them.
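
    A minimal version of such output calibration can be sketched as a learned lookup table mapping (automatic label, image feature) to the most frequent manual label on training data. This histogram scheme is our illustrative stand-in for the machine-learning step; the paper's actual learner is not specified here.

```python
import numpy as np

rng = np.random.default_rng(3)

# training data: automatic labels, an image feature, and the manual "truth";
# the automatic method is systematically biased (its threshold fires too early)
n = 5000
intensity = rng.random(n)
manual = (intensity > 0.6).astype(int)
auto = (intensity > 0.5).astype(int)

# learn a correction: majority manual label per (auto label, intensity bin)
bins = np.clip((intensity * 10).astype(int), 0, 9)
table = np.zeros((2, 10), int)
for a in (0, 1):
    for b in range(10):
        sel = (auto == a) & (bins == b)
        if sel.any():
            table[a, b] = np.round(manual[sel].mean())

# apply the learned bias correction to new automatic output
test_I = rng.random(1000)
test_auto = (test_I > 0.5).astype(int)
test_bins = np.clip((test_I * 10).astype(int), 0, 9)
corrected = table[test_auto, test_bins]
```

    Without any knowledge of how the automatic segmenter works internally, the correction table repairs its systematic bias, which is the paper's core point.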

  17. Microwave Interferometry (90 GHz) for Hall Thruster Plume Density Characterization

    DTIC Science & Technology

    2005-06-01

    Hall thruster. The interferometer has been modified to overcome initial difficulties encountered during the preliminary testing. The modifications include the ability to perform remote and automated calibrations as well as an aluminum enclosure to shield the interferometer from the Hall thruster plume. With these modifications, it will be possible to make unambiguous electron density measurements of the thruster plume as well as to rapidly and automatically calibrate the interferometer to eliminate the effects of signal drift. Due to the versatility

  18. Elixir - how to handle 2 trillion pixels

    NASA Astrophysics Data System (ADS)

    Magnier, Eugene A.; Cuillandre, Jean-Charles

    2002-12-01

    The Elixir system at CFHT provides automatic data quality assurance and calibration for the wide-field mosaic imager camera CFH12K. Elixir consists of a variety of tools, including: a real-time analysis suite which runs at the telescope to provide quick feedback to the observers; a detailed analysis of the calibration data; and an automated pipeline for processing data to be distributed to observers. To date, 2.4 × 10^12 night-time sky pixels from CFH12K have been processed by the Elixir system.

  19. Automatic zebrafish heartbeat detection and analysis for zebrafish embryos.

    PubMed

    Pylatiuk, Christian; Sanchez, Daniela; Mikut, Ralf; Alshut, Rüdiger; Reischl, Markus; Hirth, Sofia; Rottbauer, Wolfgang; Just, Steffen

    2014-08-01

    A fully automatic detection and analysis method of heartbeats in videos of nonfixed and nonanesthetized zebrafish embryos is presented. This method reduces the manual workload and time needed for preparation and imaging of the zebrafish embryos, as well as for evaluating heartbeat parameters such as frequency, beat-to-beat intervals, and arrhythmicity. The method is validated by a comparison of the results from automatic and manual detection of the heart rates of wild-type zebrafish embryos 36-120 h postfertilization and of embryonic hearts with bradycardia and pauses in the cardiac contraction.
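
    The frequency part of such an analysis can be sketched by locating the dominant spectral peak of a mean-intensity trace over the heart region. The signal here is synthetic, and the frame rate and physiological band limits are assumptions for illustration.

```python
import numpy as np

fps = 30.0                            # assumed video frame rate
t = np.arange(0, 20, 1 / fps)         # 20 s recording
true_hr = 2.5                         # beats per second (150 bpm), toy value
rng = np.random.default_rng(4)
# mean pixel intensity over the heart region: periodic beat + noise (synthetic)
signal = np.sin(2 * np.pi * true_hr * t) + 0.3 * rng.normal(size=t.size)

# dominant frequency within a plausible embryonic heart-rate band
sig = signal - signal.mean()
freqs = np.fft.rfftfreq(sig.size, d=1 / fps)
power = np.abs(np.fft.rfft(sig)) ** 2
band = (freqs > 0.5) & (freqs < 5.0)
hr_est = freqs[band][np.argmax(power[band])]
```

    Beat-to-beat intervals and arrhythmicity would instead come from detecting individual peaks in the time domain, where irregular spacing is preserved.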

  20. Fully automatic segmentation of the femur from 3D-CT images using primitive shape recognition and statistical shape models.

    PubMed

    Ben Younes, Lassad; Nakajima, Yoshikazu; Saito, Toki

    2014-03-01

    Femur segmentation is well established and widely used in computer-assisted orthopedic surgery. However, most robust segmentation methods, such as statistical shape models (SSM), require human intervention to provide an initial position for the SSM. In this paper, we propose to overcome this problem and provide a fully automatic femur segmentation method for CT images based on primitive shape recognition and SSM. Femur segmentation in CT scans was performed using primitive shape recognition based on robust algorithms such as the Hough transform and RANdom SAmple Consensus. The proposed method is divided into 3 steps: (1) detection of the femoral head as a sphere and the femoral shaft as a cylinder in the SSM and the CT images, (2) rigid registration between primitives of the SSM and the CT image to initialize the SSM in the CT image, and (3) fitting of the SSM to the CT image edge using an affine transformation followed by a nonlinear fitting. The automated method provided good results even with a high number of outliers. The difference in segmentation error between the proposed automatic initialization method and a manual initialization method is less than 1 mm. The proposed method detects primitive shape positions to initialize the SSM in the target image. Based on primitive shapes, this method overcomes the problem of inter-patient variability. Moreover, the results demonstrate that our method of primitive shape recognition can be used for 3D SSM initialization to achieve fully automatic segmentation of the femur.
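
    Step (1) relies on detecting the femoral head as a sphere. The sphere-fitting ingredient can be sketched as an algebraic least-squares fit on clean synthetic surface points (the RANSAC/Hough outlier machinery is omitted, and all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic surface points on a "femoral head" sphere (center c0, radius r0)
c0, r0 = np.array([10.0, -4.0, 25.0]), 23.0
v = rng.normal(size=(500, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)          # random unit directions
pts = c0 + r0 * v + rng.normal(0, 0.1, (500, 3))       # slight measurement noise

# algebraic least squares: |p|^2 = 2 c.p + (r^2 - |c|^2) is linear in (c, d)
A = np.column_stack([2 * pts, np.ones(len(pts))])
b = (pts ** 2).sum(1)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
center = sol[:3]
radius = np.sqrt(sol[3] + center @ center)
```

    In the full pipeline such a fit would run inside a RANSAC loop over candidate point subsets, which is what gives the method its robustness to outliers.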

  1. Fully automatic characterization and data collection from crystals of biological macromolecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svensson, Olof; Malbet-Monaco, Stéphanie; Popov, Alexander

    A fully automatic system has been developed that performs X-ray centring and characterization of, and data collection from, large numbers of cryocooled crystals without human intervention. Considerable effort is dedicated to evaluating macromolecular crystals at synchrotron sources, even for well established and robust systems. Much of this work is repetitive, and the time spent could be better invested in the interpretation of the results. In order to decrease the need for manual intervention in the most repetitive steps of structural biology projects, initial screening and data collection, a fully automatic system has been developed to mount, locate, centre to the optimal diffraction volume, characterize and, if possible, collect data from multiple cryocooled crystals. Using the capabilities of pixel-array detectors, the system is as fast as a human operator, taking an average of 6 min per sample depending on the sample size and the level of characterization required. Using a fast X-ray-based routine, samples are located and centred systematically at the position of highest diffraction signal and important parameters for sample characterization, such as flux, beam size and crystal volume, are automatically taken into account, ensuring the calculation of optimal data-collection strategies. The system is now in operation at the new ESRF beamline MASSIF-1 and has been used by both industrial and academic users for many different sample types, including crystals of less than 20 µm in the smallest dimension. To date, over 8000 samples have been evaluated on MASSIF-1 without any human intervention.

  2. Automatic segmentation of vessels in in-vivo ultrasound scans

    NASA Astrophysics Data System (ADS)

    Tamimi-Sarnikowski, Philip; Brink-Kjær, Andreas; Moshavegh, Ramin; Arendt Jensen, Jørgen

    2017-03-01

    Ultrasound has become highly popular for monitoring atherosclerosis by scanning the carotid artery. The screening involves measuring the thickness of the vessel wall and the diameter of the lumen. Automatic segmentation of the vessel lumen enables determination of the lumen diameter. This paper presents a fully automatic algorithm for robustly segmenting the vessel lumen in longitudinal B-mode ultrasound images. The automatic segmentation is performed using a combination of B-mode and power Doppler images. The proposed algorithm includes a series of preprocessing steps and performs vessel segmentation by use of the marker-controlled watershed transform. The ultrasound images used in the study were acquired using the bk3000 ultrasound scanner (BK Ultrasound, Herlev, Denmark) with two transducers, "8L2 Linear" and "10L2w Wide Linear" (BK Ultrasound, Herlev, Denmark). The algorithm was evaluated empirically on a dataset of 1770 in-vivo images recorded from 8 healthy subjects. The segmentation results were compared to manual delineation performed by two experienced users. The results showed a sensitivity and specificity of 90.41 ± 11.2% and 97.93 ± 5.7% (mean ± standard deviation), respectively. The overlap between automatic and manual segmentation, measured by the Dice similarity coefficient, was 91.25 ± 11.6%. These results demonstrate the feasibility of segmenting the vessel lumen in ultrasound scans with a fully automatic algorithm.

  3. Automatic testing and assessment of neuroanatomy using a digital brain atlas: method and development of computer- and mobile-based applications.

    PubMed

    Nowinski, Wieslaw L; Thirunavuukarasuu, Arumugam; Ananthasubramaniam, Anand; Chua, Beng Choon; Qian, Guoyu; Nowinska, Natalia G; Marchenko, Yevgen; Volkau, Ihar

    2009-10-01

    Preparation of tests and assessment of students by the instructor are time-consuming. We address these two tasks in neuroanatomy education by employing a digital media application with a three-dimensional (3D), interactive, fully segmented, and labeled brain atlas. The anatomical and vascular models in the atlas are linked to Terminologia Anatomica. Because the cerebral models are fully segmented and labeled, our approach enables automatic and random atlas-derived generation of questions to test location and naming of cerebral structures. This is done in four steps: test individualization by the instructor, test taking by the students at their convenience, automatic student assessment by the application, and communication of the individual assessment to the instructor. A computer-based application with an interactive 3D atlas and a preliminary mobile-based application were developed to realize this approach. The application works in two test modes: instructor and student. In the instructor mode, the instructor customizes the test by setting the scope of testing and student performance criteria, which takes a few seconds. In the student mode, the student is tested and automatically assessed. Self-testing is also feasible at any time and pace. Our approach is automatic both with respect to test generation and student assessment. It is also objective, rapid, and customizable. We believe that this approach is novel from computer-based, mobile-based, and atlas-assisted standpoints.
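
    The atlas-derived question generation and automatic grading can be sketched with a toy label set (the structure names are ordinary anatomical terms used for illustration; the real application draws questions from the fully labeled 3D atlas):

```python
import random

# miniature stand-in for the atlas label set
structures = ["hippocampus", "putamen", "caudate nucleus",
              "thalamus", "cerebellum", "amygdala"]

def make_question(rng, n_choices=3):
    """Pick a structure and distractor names; the real app would
    additionally highlight the structure in the 3D atlas."""
    name = rng.choice(structures)
    distractors = rng.sample([s for s in structures if s != name],
                             n_choices - 1)
    choices = distractors + [name]
    rng.shuffle(choices)
    return {"answer": name, "choices": choices}

def grade(questions, responses):
    """Automatic assessment: percentage of correct answers."""
    correct = sum(q["answer"] == r for q, r in zip(questions, responses))
    return 100.0 * correct / len(questions)

rng = random.Random(7)
quiz = [make_question(rng) for _ in range(5)]
score = grade(quiz, [q["answer"] for q in quiz])   # a perfect student
```

    Randomized generation makes every test instance different while keeping grading fully automatic, which is the labor-saving point of the approach.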

  4. Development of a microcontroller-based automatic control system for the electrohydraulic total artificial heart.

    PubMed

    Kim, H C; Khanwilkar, P S; Bearnson, G B; Olsen, D B

    1997-01-01

    An automatic physiological control system for the actively filled, alternately pumped ventricles of the volumetrically coupled, electrohydraulic total artificial heart (EHTAH) was developed for long-term use. The automatic control system must ensure that the device: 1) maintains a physiological response of cardiac output, 2) compensates for a nonphysiological condition, and 3) is stable, reliable, and operates at a high power efficiency. The developed automatic control system met these requirements both in vitro, in week-long continuous mock circulation tests, and in vivo, in acute open-chested animals (calves). Satisfactory results were also obtained in a series of chronic animal experiments, including 21 days of continuous operation of the fully automatic control mode, and 138 days of operation in a manual mode, in a 159-day calf implant.

  5. AUTOMATIC CALIBRATION OF A STOCHASTIC-LAGRANGIAN TRANSPORT MODEL (SLAM)

    EPA Science Inventory

    Numerical models are a useful tool in evaluating and designing NAPL remediation systems. Traditional constitutive finite difference and finite element models are complex and expensive to apply. For this reason, this paper presents the application of a simplified stochastic-Lagran...

  6. Automatic Classification of Extensive Aftershock Sequences Using Empirical Matched Field Processing

    NASA Astrophysics Data System (ADS)

    Gibbons, Steven J.; Harris, David B.; Kværna, Tormod; Dodge, Douglas A.

    2013-04-01

    The aftershock sequences that follow large earthquakes create considerable problems for data centers attempting to produce comprehensive event bulletins in near real-time. The greatly increased number of events that require processing can overwhelm analyst resources and reduce the capacity for analyzing events of monitoring interest. This exacerbates a potentially reduced detection capability at key stations, due to the noise generated by the sequence, and a deterioration in the quality of the fully automatic preliminary event bulletins caused by the difficulty of associating the vast numbers of closely spaced arrivals over the network. Considerable success has been enjoyed by waveform correlation methods for the automatic identification of groups of events belonging to the same geographical source region, facilitating the more time-efficient analysis of event ensembles as opposed to individual events. There are, however, formidable challenges associated with the automation of correlation procedures. The signal generated by a very large earthquake seldom correlates well enough with the signals generated by far smaller aftershocks for a correlation detector to produce statistically significant triggers at the correct times. Correlation between events within clusters of aftershocks is significantly better, although the issues of when and how to initiate new pattern detectors are still being investigated. Empirical Matched Field Processing (EMFP) is a highly promising method for detecting event waveforms suitable as templates for correlation detectors. EMFP is a quasi-frequency-domain technique that calibrates the spatial structure of a wavefront crossing a seismic array in a collection of narrow frequency bands. The amplitude and phase weights that result are applied in a frequency-domain beamforming operation that compensates for scattering and refraction effects not properly modeled by plane-wave beams.
It has been demonstrated to outperform waveform correlation as a classifier of ripple-fired mining blasts since the narrowband procedure is insensitive to differences in the source-time functions. For sequences in which the spectral content and time-histories of the signals from the main shock and aftershocks vary greatly, the spatial structure calibrated by EMFP is an invariant that permits reliable detection of events in the specific source region. Examples from the 2005 Kashmir and 2011 Van earthquakes demonstrate how EMFP templates from the main events detect arrivals from the aftershock sequences with high sensitivity and exceptionally low false alarm rates. Classical waveform correlation detectors are demonstrated to fail for these examples. Even arrivals with SNR below unity can produce significant EMFP triggers as the spatial pattern of the incoming wavefront is identified, leading to robust detections at a greater number of stations and potentially more reliable automatic bulletins. False EMFP triggers are readily screened by scanning a space of phase shifts relative to the imposed template. EMFP has the potential to produce a rapid and robust overview of the evolving aftershock sequence such that correlation and subspace detectors can be applied semi-autonomously, with well-chosen parameter specifications, to identify and classify clusters of very closely spaced aftershocks.
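
    The core of EMFP, narrowband spatial templates that are insensitive to the source-time function, can be sketched as follows. The array geometry, band count and detection statistic are simplified assumptions for illustration, not the operational implementation.

```python
import numpy as np

rng = np.random.default_rng(8)
n_sens, n_bands = 9, 12
freqs = np.linspace(1.0, 4.0, n_bands)    # Hz, narrow analysis bands

# "true" wavefront structure across the array: per-sensor amplitudes and
# effective delays that a plane-wave beam would not capture
delays = rng.uniform(0, 0.5, n_sens)
amps = rng.uniform(0.5, 1.5, n_sens)
steering = amps * np.exp(-2j * np.pi * freqs[:, None] * delays)  # (band, sensor)

# 1) calibration: empirical per-band weights from the main-shock observation
w = steering / np.linalg.norm(steering, axis=1, keepdims=True)

def emfp_stat(obs):
    """Mean narrowband matched-field power, normalized per band, so the
    statistic depends on spatial structure, not on the source spectrum."""
    obs_n = obs / np.linalg.norm(obs, axis=1, keepdims=True)
    return np.mean(np.abs(np.sum(w.conj() * obs_n, axis=1)) ** 2)

# 2) an aftershock from the same region shares the spatial structure even
#    though its band-by-band source amplitudes differ; noise does not
aftershock = steering * rng.uniform(0.1, 2.0, (n_bands, 1))
noise = rng.normal(size=(n_bands, n_sens)) + 1j * rng.normal(size=(n_bands, n_sens))

stat_event, stat_noise = emfp_stat(aftershock), emfp_stat(noise)
```

    The per-band normalization is what makes the statistic indifferent to the source spectrum, mirroring why EMFP tolerates main shock and aftershock waveforms that do not correlate in the time domain.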

  7. Description and calibration of the Langley unitary plan wind tunnel

    NASA Technical Reports Server (NTRS)

    Jackson, C. M., Jr.; Corlett, W. A.; Monta, W. J.

    1981-01-01

    The two test sections of the Langley Unitary Plan Wind Tunnel were calibrated over the operating Mach number range from 1.47 to 4.63. The results of the calibration are presented along with a description of the facility and its operational capability. The calibrations include Mach number and flow angularity distributions in both test sections at selected Mach numbers and tunnel stagnation pressures. Calibration data are also presented on turbulence, test-section boundary-layer characteristics, moisture effects, blockage, and stagnation-temperature distributions. The facility is described in detail, including dimensions and capacities where appropriate, and examples of special test capabilities are presented. The operating parameters are fully defined and the power consumption characteristics are discussed.

  8. A 3D THz image processing methodology for a fully integrated, semi-automatic and near real-time operational system

    NASA Astrophysics Data System (ADS)

    Brook, A.; Cristofani, E.; Vandewal, M.; Matheis, C.; Jonuscheit, J.; Beigang, R.

    2012-05-01

    The present study proposes a fully integrated, semi-automatic and near real-time image processing methodology developed for Frequency-Modulated Continuous-Wave (FMCW) THz images with center frequencies around 100 GHz and 300 GHz. The quality control of aeronautics composite multi-layered materials and structures using Non-Destructive Testing is the main focus of this work. Image processing is applied to the 3-D images to extract useful information. The data is processed by extracting areas of interest. The detected areas are subjected to image analysis for more detailed investigation managed by a spatial model. Finally, the post-processing stage examines and evaluates the spatial accuracy of the extracted information.

  9. Wavelength calibration of arc spectra using intensity modelling

    NASA Astrophysics Data System (ADS)

    Balona, L. A.

    2010-12-01

    Wavelength calibration for astronomical spectra usually involves the use of different arc lamps for different resolving powers to reduce the problem of line blending. We present a technique which eliminates the necessity of different lamps. A lamp producing a very rich spectrum, normally used only at high resolving powers, can be used at the lowest resolving power as well. This is accomplished by modelling the observed arc spectrum and solving for the wavelength calibration as part of the modelling procedure. Line blending is automatically incorporated as part of the model. The method has been implemented and successfully tested on spectra taken with the Robert Stobie spectrograph of the Southern African Large Telescope.
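
    The idea of solving for the wavelength calibration by modelling the arc spectrum itself can be sketched with a brute-force fit of a linear dispersion. The line list, line profiles and search grids below are invented for illustration; the real method fits a richer model including the instrumental profile.

```python
import numpy as np

# known arc-line wavelengths (nm) and relative intensities (illustrative)
lines_wl = np.array([540.0, 546.1, 556.3, 567.8, 579.0])
lines_I = np.array([0.4, 1.0, 0.3, 0.7, 0.5])
pix = np.arange(512)

def model_spectrum(wl0, disp, fwhm_pix=4.0):
    """Modelled arc spectrum for a linear dispersion wl = wl0 + disp*pix.
    Line blending is handled automatically because profiles simply add."""
    centers = (lines_wl - wl0) / disp          # line centers in pixels
    sig = fwhm_pix / 2.355
    return (lines_I[:, None] *
            np.exp(-0.5 * ((pix - centers[:, None]) / sig) ** 2)).sum(0)

# "observed" spectrum generated with the (unknown) true solution plus noise
rng = np.random.default_rng(9)
obs = model_spectrum(538.0, 0.085) + rng.normal(0, 0.01, pix.size)

# fit the dispersion solution by matching the modelled spectrum to the data
wl0_grid = np.linspace(537.0, 539.0, 81)
disp_grid = np.linspace(0.08, 0.09, 41)
best = max(((np.dot(obs, model_spectrum(a, d)), a, d)
            for a in wl0_grid for d in disp_grid))
_, wl0_fit, disp_fit = best
```

    Because the whole spectrum is modelled, heavily blended lines still contribute to the fit, which is why one line-rich lamp can serve even at the lowest resolving power.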

  10. Development of a Portable Torque Wrench Tester

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Zhang, Q.; Gou, C.; Su, D.

    2018-03-01

    A portable torque wrench tester (PTWT) with a calibration range from 0.5 Nm to 60 Nm has been developed and evaluated for periodic or on-site calibration of setting-type torque wrenches, indicating-type torque wrenches and hand torque screwdrivers. The PTWT is easy to carry, weighing about 10 kg, and offers simple, efficient and energy-saving operation through an automatic loading and calibrating system. The relative expanded uncertainty of torque realized by the PTWT was estimated to be 0.8%, with the coverage factor k=2. A comparison experiment has been carried out between the PTWT and a reference torque standard at our laboratory; the consistency between the two devices under the claimed uncertainties was verified.

  11. Self-Calibrating Pressure Transducer

    NASA Technical Reports Server (NTRS)

    Lueck, Dale E. (Inventor)

    2006-01-01

    A self-calibrating pressure transducer is disclosed. The device uses an embedded zirconia membrane which pumps a determined quantity of oxygen into the device. The associated pressure can be determined, and thus the transducer pressure readings can be calibrated. The zirconia membrane obtains oxygen from the surrounding environment when possible. Otherwise, an oxygen reservoir or other source is utilized. In another embodiment, a reversible fuel cell assembly is used to pump oxygen and hydrogen into the system. Since a known amount of gas is pumped across the cell, the pressure produced can be determined, and thus the device can be calibrated. An isolation valve system is used to allow the device to be calibrated in situ. Calibration is optionally automated so that calibration can be continuously monitored. The device is preferably a fully integrated MEMS device. Since the device can be calibrated without removing it from the process, reductions in costs and down time are realized.
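
    The calibration principle, a known pumped quantity of gas producing a computable pressure, is just the ideal-gas law combined with Faraday's law for the four-electron oxygen transfer across zirconia. A sketch with invented cell dimensions:

```python
R = 8.314     # J/(mol K), gas constant
F = 96485.0   # C/mol, Faraday constant

def moles_from_charge(coulombs):
    """O2 pumped electrochemically across zirconia: O2 + 4e- -> 2 O^2-,
    so the charge fixes the quantity, n = Q / (4F)."""
    return coulombs / (4 * F)

def reference_pressure(n_moles, volume_m3, temp_k):
    """Ideal-gas pressure generated in a sealed reference volume: p = nRT/V."""
    return n_moles * R * temp_k / volume_m3

# illustrative numbers: 3.86 C of pumped charge into a 1 cm^3 cavity at 300 K
n = moles_from_charge(3.86)                 # ~1e-5 mol of O2
p = reference_pressure(n, 1.0e-6, 300.0)    # reference pressure in Pa
```

    Comparing the transducer reading against this electrochemically defined pressure is what allows in-situ calibration without removing the device from the process.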

  12. Soil Moisture Active/Passive (SMAP) L-band microwave radiometer post-launch calibration

    USDA-ARS?s Scientific Manuscript database

    The SMAP microwave radiometer is a fully-polarimetric L-band radiometer flown on the SMAP satellite in a 6 AM / 6 PM sun-synchronous orbit at 685-km altitude. Since April 2015, the radiometer has been under calibration and validation to assess the quality of the radiometer L1B data product. Calibrat...

  13. Structural deformation measurement via efficient tensor polynomial calibrated electro-active glass targets

    NASA Astrophysics Data System (ADS)

    Gugg, Christoph; Harker, Matthew; O'Leary, Paul

    2013-03-01

    This paper describes the physical setup and mathematical modelling of a device for the measurement of structural deformations over large scales, e.g., a mining shaft. Image processing techniques are used to determine the deformation by measuring the position of a target relative to a reference laser beam. A particular novelty is the incorporation of electro-active glass; the polymer dispersion liquid crystal shutters enable the simultaneous calibration of any number of consecutive measurement units without manual intervention, i.e., the process is fully automatic. It is necessary to compensate for optical distortion if high accuracy is to be achieved in a compact hardware design where lenses with short focal lengths are used. Wide-angle lenses exhibit significant distortion, which is typically characterized using Zernike polynomials. Radial distortion models assume that the lens is rotationally symmetric; such models are insufficient in the application at hand. This paper presents a new coordinate mapping procedure based on a tensor product of discrete orthogonal polynomials. Both lens distortion and the projection are compensated by a single linear transformation. Once calibrated, to acquire the measurement data, it is necessary to localize a single laser spot in the image. For this purpose, complete interpolation and rectification of the image is not required; hence, we have developed a new hierarchical approach based on a quad-tree subdivision. Cross-validation tests verify the validity, demonstrating that the proposed method accurately models both the optical distortion and the projection. The achievable accuracy is e ≤ ±0.01 mm over a field of view of 150 mm × 150 mm at a distance of 120 m from the laser source. Finally, a Kolmogorov-Smirnov test shows that the error distribution in localizing a laser spot is Gaussian. Consequently, due to the linearity of the proposed method, this also applies to the algorithm's output. Therefore, first-order covariance propagation provides an accurate estimate of the measurement uncertainty, which is essential for any measurement device.
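
    A tensor-product polynomial mapping of the kind described reduces to a linear least-squares fit over products of 1-D orthogonal polynomials. The sketch below uses a Legendre basis and degree (3, 3) as illustrative assumptions, not the authors' exact choice, and a made-up cubic distortion for the check.

```python
import numpy as np
from numpy.polynomial.legendre import legvander2d

def fit_tensor_poly(px, py, wx, wy, deg=(3, 3)):
    """Fit a tensor-product Legendre polynomial mapping from image
    coordinates (px, py) to target coordinates (wx, wy) by linear least
    squares; one linear transform covers both distortion and projection."""
    A = legvander2d(px, py, deg)          # design matrix of basis products
    cx, *_ = np.linalg.lstsq(A, wx, rcond=None)
    cy, *_ = np.linalg.lstsq(A, wy, rcond=None)
    return cx, cy, deg

def apply_tensor_poly(cx, cy, deg, px, py):
    A = legvander2d(px, py, deg)
    return A @ cx, A @ cy

# Synthetic check: a mildly distorted grid (hypothetical cubic term).
rng = np.random.default_rng(0)
px = rng.uniform(-1, 1, 200); py = rng.uniform(-1, 1, 200)
wx = px + 0.05 * px * (px ** 2 + py ** 2)
wy = py + 0.05 * py * (px ** 2 + py ** 2)
cx, cy, deg = fit_tensor_poly(px, py, wx, wy)
ux, uy = apply_tensor_poly(cx, cy, deg, px, py)
print(np.max(np.abs(ux - wx)) < 1e-9)  # cubic model is captured exactly
```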

  14. Self-calibration method for rotating laser positioning system using interscanning technology and ultrasonic ranging.

    PubMed

    Wu, Jun; Yu, Zhijing; Zhuge, Jingchang

    2016-04-01

    A rotating laser positioning system (RLPS) is an efficient measurement method for large-scale metrology. Because the multiple transmitter stations form a measurement network, their positional relationship must first be calibrated. However, traditional calibration methods, which require auxiliary devices such as a laser tracker or scale bar and a complex calibration process, greatly reduce the measurement efficiency. This paper proposes a self-calibration method for the RLPS, which can obtain the position relationship automatically. The method is implemented through interscanning technology by using a calibration bar mounted on each transmitter station. Each bar is composed of three RLPS receivers and one ultrasonic sensor whose coordinates are known in advance. The calibration algorithm is mainly based on multiplane and distance constraints and is introduced in detail through a two-station mathematical model. Repeated experiments demonstrate that the coordinate measurement uncertainty of spatial points by this method is about 0.1 mm, and accuracy experiments show that the average coordinate measurement deviation is about 0.3 mm compared with a laser tracker. The accuracy can meet the requirements of most applications, while the calibration efficiency is significantly improved.

  15. Auto-calibrated scanning-angle prism-type total internal reflection microscopy for nanometer-precision axial position determination and optional variable-illumination-depth pseudo total internal reflection microscopy

    DOEpatents

    Fang, Ning; Sun, Wei

    2015-04-21

    A method, apparatus, and system for improved VA-TIRFM microscopy. The method comprises automatically controlled calibration of one or more laser sources by precise control of the presentation of each laser relative to a sample, in small incremental changes of incident angle over a range of critical TIR angles. The calibration then allows precise scanning of the sample at any of those calibrated angles for higher and more accurate resolution, and better reconstruction of the scans for super-resolution reconstruction of the sample. Optionally, the system can be controlled for incident angles of the excitation laser at sub-critical angles for pseudo-TIRFM. Optionally, both above-critical-angle and sub-critical-angle measurements can be accomplished with the same system.

  16. Automatic Phase Calibration for RF Cavities using Beam-Loading Signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edelen, J. P.; Chase, B. E.

    Precise calibration of the cavity phase signals is necessary for the operation of any particle accelerator. For many systems this requires human-in-the-loop adjustments based on measurements of the beam parameters downstream. Some recent work has developed a scheme for the calibration of the cavity phase using beam measurements and beam-loading; however, this scheme is still a multi-step process that requires heavy automation or a human in the loop. In this paper we analyze a new scheme that uses only RF signals reacting to beam-loading to calculate the phase of the beam relative to the cavity. This technique could be used in slow control loops to provide real-time adjustment of the cavity phase calibration without human intervention, thereby increasing the stability and reliability of the accelerator.

  17. Use of «MLCM3» software for flash flood forecasting

    NASA Astrophysics Data System (ADS)

    Sokolova, Daria; Kuzmin, Vadim

    2017-04-01

    Accurate and timely flash flood forecasting, especially in ungauged and poorly gauged basins, is one of the most important and challenging problems to be solved by the international hydrological community. Under a changing climate and variable anthropogenic impact on river basins, as well as the low density of the surface hydrometeorological network, flash flood forecasting based on "traditional" physically based, conceptual or statistical hydrological models often becomes inefficient. Unfortunately, most river basins in Russia are poorly gauged or ungauged; besides, a lack of hydrogeological data is quite typical, especially in remote regions of Siberia. However, the developing economy and population safety require warnings based on reliable forecasts. For this purpose, a new hydrological model, MLCM3 (Multi-Layer Conceptual Model, 3rd generation), has been developed at the Russian State Hydrometeorological University. MLCM3 is a "rainfall-runoff" model with a flexible structure and a high level of "conceptualization". Model forcing includes precipitation and evaporation data, basically coming from NWP model output. Water comes to the outlet through several layers; their number, two parameters (thickness and infiltration rate) for each of them, and the surface flow velocity (when the top layer is full of water) are optimized. The main advantage of MLCM3, in comparison with the Sacramento Soil Moisture Accounting Model (SAC-SMA), the Australian Water Balance Model (AWBM), the Soil Moisture Accounting and Routing (SMAR) model and similar models, is that its automatic calibration is very fast and efficient with less input information. For instance, in comparison with SAC-SMA, which is calibrated using either the Shuffled Complex Evolution algorithm (SCE-UA) or Stepwise Line Search (SLS), automatically calibrated MLCM3 gives better or comparable results without using any a priori data or substantial processor resources. This advantage allows using MLCM3 for very fast streamflow prediction in many basins. When assimilated NWP model output data are used to force the model, the forecast accuracy is acceptable and sufficient for automatic warning. In comparison with the 2nd generation of the model, a very useful new option has been added: it is now possible to set up a variable infiltration rate of the top layer, which is quite promising for spring flood modeling. (At the moment it is necessary to perform more numerical experiments with snow melting; the results will be reported later.) New software for MLCM3 was recently developed, with familiar and understandable options. Formation of the model input can be done in manual or automatic mode. Manual or automatic calibration of the model can be performed using either an optimization algorithm developed purposely for this model, the Nelder-Mead algorithm, or SLS. For model calibration, the multi-scale objective function (MSOF) proposed by Koren is used; it has shown very high efficiency when model forcing data have a high level of uncertainty. Other objective functions can also be used, such as the mean square error and the Nash-Sutcliffe criterion. The model showed good results in more than 50 tested basins.
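
    The multi-scale objective function mentioned above can be sketched as follows. Koren's formulation weights each aggregation scale differently; the inverse-variance weighting and the scale choices below are illustrative assumptions, not the exact published form.

```python
import numpy as np

def msof(sim, obs, scales=(1, 4, 24)):
    """Multi-scale objective function sketch: aggregate simulated and
    observed flows over several averaging windows (in time steps) and
    sum the mean-squared errors, weighting each scale by the inverse of
    the observed variance at that scale. Scales are illustrative
    (e.g. hourly, 4-hourly, daily for hourly data)."""
    total = 0.0
    for k in scales:
        n = len(obs) // k
        s = sim[:n * k].reshape(n, k).mean(axis=1)  # aggregated simulation
        o = obs[:n * k].reshape(n, k).mean(axis=1)  # aggregated observation
        w = 1.0 / max(np.var(o), 1e-12)
        total += w * np.mean((s - o) ** 2)
    return total

rng = np.random.default_rng(1)
obs = 5 + np.abs(rng.normal(0, 1, 240))   # synthetic hourly "flows"
print(msof(obs, obs) == 0.0)              # perfect simulation scores zero
print(msof(obs + 0.5, obs) > 0.0)         # biased simulation is penalized
```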

  18. Equipment and New Products

    ERIC Educational Resources Information Center

    Poitras, Adrian W., Ed.

    1973-01-01

    The following items are discussed: Digital Counters and Readout Devices, Automatic Burette Outfits, Noise Exposure System, Helium-Cadmium Laser, New pH Buffers and Flip-Top Dispenser, Voltage Calibrator Transfer Standard, Photomicrographic Stereo Zoom Microscope, Portable pH Meter, Micromanipulators, The Snuffer, Electronic Top-Loading Balances,…

  19. New bioreactor for in situ simultaneous measurement of bioluminescence and cell density

    NASA Astrophysics Data System (ADS)

    Picart, Pascal; Bendriaa, Loubna; Daniel, Philippe; Horry, Habib; Durand, Marie-José; Jouvanneau, Laurent; Thouand, Gérald

    2004-03-01

    This article presents a new device devoted to the simultaneous measurement of bioluminescence and optical density of a bioluminescent bacterial culture. It features an optoelectronic bioreactor with a fully autoclavable module, in which the bioluminescent bacteria are cultivated, a modulated laser diode dedicated to optical density measurement, and a detection head for the acquisition of both bioluminescence and optical density signals. Light is detected through a bifurcated fiber bundle. This setup allows the simultaneous estimation of the bioluminescence and the cell density of the culture medium without any sampling. The bioluminescence is measured through a highly sensitive photomultiplier unit which has been photometrically calibrated to allow light flux measurements. This was achieved by considering the bioluminescence spectrum and the full optical transmission of the device. The instrument makes it possible to measure a very weak light flux of only a few pW. The optical density is determined through the laser diode and a photodiode using numerical synchronous detection which is based on the power spectrum density of the recorded signal. The detection was calibrated to measure optical density up to 2.5. The device was validated using the Vibrio fischeri bacterium which was cultivated under continuous culture conditions. A very good correlation between manual and automatic measurements processed with this instrument has been demonstrated. Furthermore, the optoelectronic bioreactor enables determination of the luminance of the bioluminescent bacteria which is estimated to be 6×10⁻⁵ W sr⁻¹ m⁻² for optical density = 0.3. Experimental results are presented and discussed.
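
    The numerical synchronous detection described, recovering the modulated laser-diode signal from its power spectrum against broadband background light, can be sketched as picking the spectral amplitude at the modulation frequency; the sampling rate, modulation frequency, and signal values are illustrative assumptions.

```python
import numpy as np

def detect_modulated_amplitude(signal, fs, f_mod):
    """Estimate the amplitude of the component at modulation frequency
    f_mod from the windowed DFT bin nearest f_mod, rejecting the
    unmodulated (e.g. bioluminescence) background."""
    n = len(signal)
    spectrum = np.fft.rfft(signal * np.hanning(n))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    k = np.argmin(np.abs(freqs - f_mod))
    # Hann window has coherent gain 0.5, so 2/(n*0.5) recovers amplitude
    return 2.0 * np.abs(spectrum[k]) / (n * 0.5)

fs, f_mod, n = 10_000.0, 1_000.0, 10_000
t = np.arange(n) / fs
sig = 0.8 * np.sin(2 * np.pi * f_mod * t) + 0.3  # modulated + DC background
print(round(detect_modulated_amplitude(sig, fs, f_mod), 2))  # ~0.8
```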

  20. A Common Calibration Source Framework for Fully-Polarimetric and Interferometric Radiometers

    NASA Technical Reports Server (NTRS)

    Kim, Edward J.; Davis, Brynmor; Piepmeier, Jeff; Zukor, Dorothy J. (Technical Monitor)

    2000-01-01

    Two types of microwave radiometry--synthetic thinned array radiometry (STAR) and fully-polarimetric (FP) radiometry--have received increasing attention during the last several years. STAR radiometers offer a technological solution to achieving high spatial resolution imaging from orbit without requiring a filled aperture or a moving antenna, and FP radiometers measure extra polarization state information upon which entirely new or more robust geophysical retrieval algorithms can be based. Radiometer configurations used for both STAR and FP instruments share one fundamental feature that distinguishes them from more 'standard' radiometers, namely, they measure correlations between pairs of microwave signals. The calibration requirements for correlation radiometers are broader than those for standard radiometers. Quantities of interest include total powers, complex correlation coefficients, various offsets, and possible nonlinearities. A candidate for an ideal calibration source would be one that injects test signals with precisely controllable correlation coefficients and absolute powers simultaneously into a pair of receivers, permitting all of these calibration quantities to be measured. The complex nature of correlation radiometer calibration, coupled with certain inherent similarities between STAR and FP instruments, suggests significant leverage in addressing both problems together. Recognizing this, a project was recently begun at NASA Goddard Space Flight Center to develop a compact low-power subsystem for spaceflight STAR or FP receiver calibration. We present a common theoretical framework for the design of signals for a controlled correlation calibration source. A statistical model is described, along with temporal and spectral constraints on such signals. Finally, a method for realizing these signals is demonstrated using a Matlab-based implementation.
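
    One way to synthesize a pair of test signals with a precisely controllable complex correlation coefficient, in the spirit of the calibration source described above, is Cholesky coloring of independent complex Gaussian noise. This is an illustrative sketch of the principle, not the flight design or the paper's Matlab implementation.

```python
import numpy as np

def correlated_noise_pair(rho, n, rng):
    """Generate two unit-power complex Gaussian noise signals whose
    complex correlation coefficient E[a* b] equals rho (|rho| <= 1),
    via a Cholesky factor of the 2x2 covariance matrix."""
    z = (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))) / np.sqrt(2)
    L = np.array([[1.0, 0.0],
                  [rho, np.sqrt(1.0 - abs(rho) ** 2)]], dtype=complex)
    return L @ z

rng = np.random.default_rng(42)
a, b = correlated_noise_pair(0.6 * np.exp(1j * np.pi / 4), 200_000, rng)
rho_hat = np.mean(np.conj(a) * b) / np.sqrt(
    np.mean(np.abs(a) ** 2) * np.mean(np.abs(b) ** 2))
print(round(abs(rho_hat), 3), round(np.angle(rho_hat), 3))
```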

  1. Assessing the impact of land use change on hydrology by ensemble modeling (LUCHEM). I: Model intercomparison with current land use

    USGS Publications Warehouse

    Breuer, L.; Huisman, J.A.; Willems, P.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.

    2009-01-01

    This paper introduces the project on 'Assessing the impact of land use change on hydrology by ensemble modeling (LUCHEM)' that aims at investigating the envelope of predictions on changes in hydrological fluxes due to land use change. As part of a series of four papers, this paper outlines the motivation and setup of LUCHEM, and presents a model intercomparison for the present-day simulation results. Such an intercomparison provides a valuable basis to investigate the effects of different model structures on model predictions and paves the way for the analysis of the performance of multi-model ensembles and the reliability of the scenario predictions in companion papers. In this study, we applied a set of 10 lumped, semi-lumped and fully distributed hydrological models that have been previously used in land use change studies to the low mountainous Dill catchment, Germany. Substantial differences in model performance were observed, with Nash-Sutcliffe efficiencies ranging from 0.53 to 0.92. Differences in model performance were attributed to (1) model input data, (2) model calibration and (3) the physical basis of the models. The models were applied with two sets of input data: an original and a homogenized data set. This homogenization of precipitation, temperature and leaf area index was performed to reduce the variation between the models. Homogenization improved the comparability of model simulations and resulted in a reduced average bias, although some variation in model data input remained. The effect of the physical differences between models on the long-term water balance was mainly attributed to differences in how models represent evapotranspiration. Semi-lumped and lumped conceptual models slightly outperformed the fully distributed and physically based models. This was attributed to the automatic model calibration typically used for this type of model. Overall, however, we conclude that there was no superior model if several measures of model performance are considered and that all models are suitable to participate in further multi-model ensemble set-ups and land use change scenario investigations. © 2008 Elsevier Ltd. All rights reserved.
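
    The Nash-Sutcliffe efficiency used above to rank model performance is straightforward to compute; a minimal sketch on made-up flow values:

```python
import numpy as np

def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance about the observed
    mean. 1.0 is a perfect fit; 0.0 means no better than predicting the
    observed mean; negative values are worse than the mean."""
    sim = np.asarray(sim, float)
    obs = np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([3.0, 5.0, 9.0, 6.0, 2.0])        # hypothetical observations
print(nash_sutcliffe(obs, obs))                   # 1.0
print(nash_sutcliffe(np.full(5, obs.mean()), obs))  # 0.0
```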

  2. Algorithm for automatic analysis of electro-oculographic data

    PubMed Central

    2013-01-01

    Background Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. Methods The auto-calibration is based on automatic threshold value estimation. Amplitude threshold values for saccades and blinks are determined based on features in the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and manually scored blinks. Results The algorithm achieved 93% detection sensitivity for blinks with 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was larger than 89%, and for vertical saccades larger than 82%. The duration and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. Conclusion The developed algorithm enables reliable analysis of EOG data recorded both during EEG and as a separate measurement. PMID:24160372
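
    The idea of data-driven amplitude thresholds can be sketched with robust statistics. The median + MAD rule and all signal parameters below are illustrative stand-ins for the feature-based estimation the authors describe.

```python
import numpy as np

def auto_threshold(velocity, k=6.0):
    """Data-driven amplitude threshold: median plus k times the median
    absolute deviation of the (rectified) velocity signal. A robust
    stand-in for the paper's feature-based threshold estimation."""
    med = np.median(velocity)
    mad = np.median(np.abs(velocity - med))
    return med + k * mad

def detect_events(signal, fs, k=6.0):
    """Flag samples whose absolute derivative exceeds the auto-estimated
    threshold (candidate saccades/blinks in an EOG channel)."""
    vel = np.abs(np.diff(signal)) * fs
    return vel > auto_threshold(vel, k)

fs = 500.0
t = np.arange(0, 2, 1 / fs)
eog = 0.01 * np.sin(2 * np.pi * 0.3 * t)     # slow baseline drift
eog[400:420] += np.linspace(0, 1.0, 20)       # fast saccade-like jump
events = detect_events(eog, fs)
print(events[400:419].any(), events[:300].any())
```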

  3. Algorithm for automatic analysis of electro-oculographic data.

    PubMed

    Pettersson, Kati; Jagadeesan, Sharman; Lukander, Kristian; Henelius, Andreas; Haeggström, Edward; Müller, Kiti

    2013-10-25

    Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. The auto-calibration is based on automatic threshold value estimation. Amplitude threshold values for saccades and blinks are determined based on features in the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and manually scored blinks. The algorithm achieved 93% detection sensitivity for blinks with 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was larger than 89%, and for vertical saccades larger than 82%. The duration and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. The developed algorithm enables reliable analysis of EOG data recorded both during EEG and as a separate measurement.

  4. A Laser-Based Measuring System for Online Quality Control of Car Engine Block.

    PubMed

    Li, Xing-Qiang; Wang, Zhong; Fu, Lu-Hua

    2016-11-08

    For online quality control of car engine production, the pneumatic measurement instrument plays an indispensable role in measuring diameters inside the engine block because of its portability and high accuracy. Owing to the limitation of its measuring principle, however, the working space between the pneumatic device and the measured surface is so small that manual operation is required. This lowers the measuring efficiency and is an obstacle to automatic measurement. In this article, a high-speed, automatic measuring system is proposed to take the place of pneumatic devices by using a laser-based measuring unit. The measuring unit is considered as a set of several measuring modules, each of which acts like a single bore gauge and is made of four laser triangulation sensors (LTSs), installed at different positions and in opposite directions. The spatial relationship among these LTSs was calibrated before measurement. Sampling points from the measured shaft holes can be collected by the measuring unit. A unified mathematical model was established for both calibration and measurement. Based on the established model, the relative pose between the measuring unit and the measured workpiece does not impact the measuring accuracy. This frees the measuring unit from accurate positioning or adjustment, and makes it possible to realize fast and automatic measurement. The proposed system and method were validated by experiments.

  5. All-automatic swimmer tracking system based on an optimized scaled composite JTC technique

    NASA Astrophysics Data System (ADS)

    Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.

    2016-04-01

    In this paper, an all-automatic optimized JTC-based swimmer tracking system is proposed and evaluated on a real video database from national and international swimming competitions (French National Championship, Limoges 2015; FINA World Championships, Barcelona 2013 and Kazan 2015). First, we propose to calibrate the swimming pool using the DLT algorithm (Direct Linear Transformation). DLT calculates the homography matrix given a sufficient set of correspondence points between pixel and metric coordinates, i.e., it takes into account the dimensions of the swimming pool and the type of the swim. Once the swimming pool is calibrated, we extract the lane. Then we apply a motion detection approach to detect the swimmer globally in this lane. Next, we apply our optimized scaled composite JTC, which consists of creating an adapted input plane that contains the predicted region and the head reference image. The latter is generated using a composite filter of fin images chosen from the database. The dimension of this reference is scaled according to the ratio between the head's dimension and the width of the swimming lane. Finally, the proposed approach improves the performance of our previous tracking method by adding a detection module, achieving an all-automatic swimmer tracking system.
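
    The DLT calibration step computes a homography from pixel-to-metric point correspondences; a minimal sketch, where the pool dimensions and pixel coordinates are invented for the check:

```python
import numpy as np

def dlt_homography(pixel_pts, world_pts):
    """Estimate the 3x3 homography mapping pixel -> world coordinates
    from >= 4 correspondences via the standard DLT: stack two linear
    equations per point and take the right singular vector associated
    with the smallest singular value."""
    A = []
    for (u, v), (x, y) in zip(pixel_pts, world_pts):
        A.append([-u, -v, -1, 0, 0, 0, x * u, x * v, x])
        A.append([0, 0, 0, -u, -v, -1, y * u, y * v, y])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]  # dehomogenize

# Four corner correspondences for a hypothetical 50 m x 25 m pool:
pixels = [(100, 80), (620, 90), (600, 400), (120, 410)]
meters = [(0.0, 0.0), (50.0, 0.0), (50.0, 25.0), (0.0, 25.0)]
H = dlt_homography(pixels, meters)
print(np.allclose(apply_h(H, (620, 90)), (50.0, 0.0), atol=1e-6))
```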

  6. Astrometrica: Astrometric data reduction of CCD images

    NASA Astrophysics Data System (ADS)

    Raab, Herbert

    2012-03-01

    Astrometrica is an interactive software tool for scientific grade astrometric data reduction of CCD images. The current version of the software is for the Windows 32bit operating system family. Astrometrica reads FITS (8, 16 and 32 bit integer files) and SBIG image files. The size of the images is limited only by available memory. It also offers automatic image calibration (Dark Frame and Flat Field correction), automatic reference star identification, automatic moving object detection and identification, and access to new-generation star catalogs (PPMXL, UCAC 3 and CMC-14), in addition to online help and other features. Astrometrica is shareware, available for use for a limited period of time (100 days) for free; special arrangements can be made for educational projects.

  7. Automatic exposure control calibration and optimisation for abdomen, pelvis and lumbar spine imaging with an Agfa computed radiography system.

    PubMed

    Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Joshi, H; Saunderson, J R; Beavis, A W

    2016-11-07

    The use of three physical image quality metrics, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), has recently been examined by our group for their appropriateness in the calibration of an automatic exposure control (AEC) device for chest radiography with an Agfa computed radiography (CR) imaging system. This study uses the same methodology but investigates AEC calibration for abdomen, pelvis and spine CR imaging. AEC calibration curves were derived using a simple uniform phantom (equivalent to 20 cm water) to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated abdomen, pelvis and spine images (created from real patient CT datasets) with appropriate detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated images contained clinically realistic projected anatomy and were scored by experienced image evaluators. Constant DDI and CNR curves did not provide optimized performance but constant eNEQm and SNR did, with the latter being the preferred calibration metric given that it is easier to measure in practice. This result was consistent with the previous investigation for chest imaging with AEC devices. Medical physicists may therefore use a simple and easily accessible uniform water equivalent phantom to measure the SNR image quality metric described here when calibrating AEC devices for abdomen, pelvis and spine imaging with Agfa CR systems, in the confidence that clinical image quality will be sufficient for the required clinical task. However, to ensure appropriate levels of detector air kerma the advice of expert image evaluators must be sought.
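
    The SNR metric the authors recommend measuring from a uniform phantom reduces to simple region-of-interest statistics; a sketch on synthetic pixel data (the values and the CNR formulation are illustrative, not the paper's exact definitions):

```python
import numpy as np

def roi_snr(roi):
    """Signal-to-noise ratio of a uniform-phantom region of interest:
    mean pixel value over its standard deviation."""
    roi = np.asarray(roi, float)
    return roi.mean() / roi.std()

def roi_cnr(roi_a, roi_b):
    """Contrast-to-noise ratio between two regions (e.g. behind and
    beside a contrast insert), using region b's noise as reference."""
    a, b = np.asarray(roi_a, float), np.asarray(roi_b, float)
    return abs(a.mean() - b.mean()) / b.std()

rng = np.random.default_rng(7)
background = rng.normal(1000.0, 20.0, (64, 64))  # simulated uniform phantom
insert_roi = rng.normal(1100.0, 20.0, (64, 64))  # simulated contrast insert
print(round(roi_snr(background), 1), round(roi_cnr(insert_roi, background), 1))
```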

  8. Automatic exposure control calibration and optimisation for abdomen, pelvis and lumbar spine imaging with an Agfa computed radiography system

    NASA Astrophysics Data System (ADS)

    Moore, C. S.; Wood, T. J.; Avery, G.; Balcam, S.; Needler, L.; Joshi, H.; Saunderson, J. R.; Beavis, A. W.

    2016-11-01

    The use of three physical image quality metrics, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm) have recently been examined by our group for their appropriateness in the calibration of an automatic exposure control (AEC) device for chest radiography with an Agfa computed radiography (CR) imaging system. This study uses the same methodology but investigates AEC calibration for abdomen, pelvis and spine CR imaging. AEC calibration curves were derived using a simple uniform phantom (equivalent to 20 cm water) to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated abdomen, pelvis and spine images (created from real patient CT datasets) with appropriate detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated images contained clinically realistic projected anatomy and were scored by experienced image evaluators. Constant DDI and CNR curves did not provide optimized performance but constant eNEQm and SNR did, with the latter being the preferred calibration metric given that it is easier to measure in practice. This result was consistent with the previous investigation for chest imaging with AEC devices. Medical physicists may therefore use a simple and easily accessible uniform water equivalent phantom to measure the SNR image quality metric described here when calibrating AEC devices for abdomen, pelvis and spine imaging with Agfa CR systems, in the confidence that clinical image quality will be sufficient for the required clinical task. However, to ensure appropriate levels of detector air kerma the advice of expert image evaluators must be sought.

  9. Development of a generic auto-calibration package for regional ecological modeling and application in the Central Plains of the United States

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shuguang; Li, Zhengpeng; Dahal, Devendra; Young, Claudia J.; Schmidt, Gail L.; Liu, Jinxun; Davis, Brian; Sohl, Terry L.; Werner, Jeremy M.; Oeding, Jennifer

    2014-01-01

    Process-oriented ecological models are frequently used for predicting the potential impacts of global changes such as climate and land-cover change, which can be useful for policy making. It is critical but challenging to automatically derive optimal parameter values at different scales, especially the regional scale, and to validate model performance. In this study, we developed an automatic calibration (auto-calibration) function for a well-established biogeochemical model, the General Ensemble Biogeochemical Modeling System (GEMS)-Erosion Deposition Carbon Model (EDCM), using the Shuffled Complex Evolution algorithm and a model-inversion R package, the Flexible Modeling Environment (FME). The new functionality supports multi-parameter and multi-objective auto-calibration of EDCM at both the pixel and regional levels. We also developed a post-processing procedure for GEMS that provides options to save the pixel-based or aggregated county-land cover specific parameter values for subsequent simulations. In our case study, we successfully applied the updated model (EDCM-Auto) to a single crop pixel with a corn–wheat rotation and to a large ecological region (Level II), the Central USA Plains. The evaluation results indicate that EDCM-Auto is applicable at multiple scales and is capable of handling land cover changes (e.g., crop rotations). The model also performs well in capturing the spatial pattern of grain yield production for crops and net primary production (NPP) for other ecosystems across the region, a good example of implementing calibration and validation of ecological models with readily available survey data (grain yield) and remote sensing data (NPP) at regional and national levels. The developed platform for auto-calibration can be readily expanded to incorporate other model-inversion algorithms and R packages, and can also be applied to other ecological models.

  10. Automatic high throughput empty ISO container verification

    NASA Astrophysics Data System (ADS)

    Chalmers, Alex

    2007-04-01

    Encouraging results are presented for the automatic analysis of radiographic images of a continuous stream of ISO containers to confirm they are truly empty. A series of image processing algorithms are described that process real-time data acquired during the actual inspection of each container and assigns each to one of the classes "empty", "not empty" or "suspect threat". This research is one step towards achieving fully automated analysis of cargo containers.

  11. Flexible Manufacturing System Handbook. Volume IV. Appendices

    DTIC Science & Technology

    1983-02-01

    and Acceptance Test(s)" on page 26 of this Proposal Request. 1.1.10 Options 1. Centralized Automatic Chip/Coolant Recovery System a. Scope The...viable, from manually moving the pallet/fixture/part combinations from machine to machine to fully automatic, unmanned material handling systems, such...English. Where dimensions are shown in metric units, the English system (inch) equivalent will also be shown. Hydraulic, pneumatic, and electrical

  12. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets.

    PubMed

    Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing

    2017-03-01

    Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and the average symmetric surface distance is less than 1.3 mm for all the segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well to state-of-the-art methods with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential in clinical usage with high effectiveness, robustness and efficiency.
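
    The Dice overlap ratio used for evaluation is twice the intersection of the two masks divided by the sum of their sizes; a minimal sketch on toy masks:

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation
    masks: 2|A intersect B| / (|A| + |B|), in [0, 1]."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

pred = np.zeros((10, 10), bool); pred[2:8, 2:8] = True    # 36 voxels
truth = np.zeros((10, 10), bool); truth[3:9, 3:9] = True  # 36 voxels
print(round(dice_overlap(pred, truth), 3))  # 2*25/72 ≈ 0.694
```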

  13. Fully automatic oil spill detection from COSMO-SkyMed imagery using a neural network approach

    NASA Astrophysics Data System (ADS)

    Avezzano, Ruggero G.; Del Frate, Fabio; Latini, Daniele

    2012-09-01

    The increased amount of available Synthetic Aperture Radar (SAR) images acquired over the ocean represents an extraordinary potential for improving oil spill detection activities. On the other hand, this involves a growing workload for the operators at analysis centers. In addition, even if the operators go through extensive training to learn manual oil spill detection, they can provide different and subjective responses. Hence, upgrades and improvements to algorithms for automatic detection that can help in screening the images and prioritizing the alarms are of great benefit. In the framework of an ASI Announcement of Opportunity for the exploitation of COSMO-SkyMed data, a research activity (ASI contract L/020/09/0) aimed at studying the possibility of using neural network architectures to set up fully automatic processing chains using COSMO-SkyMed imagery has been carried out, and results are presented in this paper. The automatic identification of an oil spill is seen as a three-step process based on segmentation, feature extraction and classification. We observed that a PCNN (Pulse Coupled Neural Network) was capable of providing satisfactory performance in extracting the different dark spots, close to what would be produced by manual editing. For the classification task, a Multi-Layer Perceptron (MLP) Neural Network was employed.

  14. Fully Automated Quantification of the Striatal Uptake Ratio of [99mTc]-TRODAT with SPECT Imaging: Evaluation of the Diagnostic Performance in Parkinson's Disease and the Temporal Regression of Striatal Tracer Uptake

    PubMed Central

    Fang, Yu-Hua Dean; Chiu, Shao-Chieh; Lu, Chin-Song; Weng, Yi-Hsin

    2015-01-01

    Purpose. We aimed at improving the existing methods for the fully automatic quantification of striatal uptake of [99mTc]-TRODAT with SPECT imaging. Procedures. A normal [99mTc]-TRODAT template was first formed based on 28 healthy controls. Images from PD patients (n = 365) and nPD subjects (28 healthy controls and 33 essential tremor patients) were spatially normalized to the normal template. We performed an inverse transform on the predefined striatal and reference volumes of interest (VOIs) and applied the transformed VOIs to the original image data to calculate the striatal-to-reference ratio (SRR). The diagnostic performance of the SRR was determined through receiver operating characteristic (ROC) analysis. Results. The SRR measured with our new and automatic method demonstrated excellent diagnostic performance with 92% sensitivity, 90% specificity, 92% accuracy, and an area under the curve (AUC) of 0.94. For the evaluation of the mean SRR and the clinical duration, a quadratic function fit the data with R² = 0.84. Conclusions. We developed and validated a fully automatic method for the quantification of the SRR in a large study sample. This method has an excellent diagnostic performance and exhibits a strong correlation between the mean SRR and the clinical duration in PD patients. PMID:26366413

  15. Fully Automated Quantification of the Striatal Uptake Ratio of [(99m)Tc]-TRODAT with SPECT Imaging: Evaluation of the Diagnostic Performance in Parkinson's Disease and the Temporal Regression of Striatal Tracer Uptake.

    PubMed

    Fang, Yu-Hua Dean; Chiu, Shao-Chieh; Lu, Chin-Song; Yen, Tzu-Chen; Weng, Yi-Hsin

    2015-01-01

    We aimed at improving the existing methods for the fully automatic quantification of striatal uptake of [(99m)Tc]-TRODAT with SPECT imaging. A normal [(99m)Tc]-TRODAT template was first formed based on 28 healthy controls. Images from PD patients (n = 365) and nPD subjects (28 healthy controls and 33 essential tremor patients) were spatially normalized to the normal template. We performed an inverse transform on the predefined striatal and reference volumes of interest (VOIs) and applied the transformed VOIs to the original image data to calculate the striatal-to-reference ratio (SRR). The diagnostic performance of the SRR was determined through receiver operating characteristic (ROC) analysis. The SRR measured with our new and automatic method demonstrated excellent diagnostic performance with 92% sensitivity, 90% specificity, 92% accuracy, and an area under the curve (AUC) of 0.94. For the evaluation of the mean SRR and the clinical duration, a quadratic function fit the data with R² = 0.84. We developed and validated a fully automatic method for the quantification of the SRR in a large study sample. This method has an excellent diagnostic performance and exhibits a strong correlation between the mean SRR and the clinical duration in PD patients.
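
    The striatal-to-reference ratio can be illustrated with a small sketch. The exact VOI definitions and ratio formula belong to the paper; the simple mean-intensity ratio below is only an assumed stand-in:

    ```python
    def striatal_reference_ratio(striatal_voxels, reference_voxels):
        """Assumed SRR definition: ratio of mean VOI intensities (striatal / reference)."""
        s = sum(striatal_voxels) / len(striatal_voxels)
        r = sum(reference_voxels) / len(reference_voxels)
        return s / r
    ```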

  16. Improved diffusing wave spectroscopy based on the automatized determination of the optical transport and absorption mean free path

    NASA Astrophysics Data System (ADS)

    Zhang, Chi; Reufer, Mathias; Gaudino, Danila; Scheffold, Frank

    2017-11-01

    Diffusing wave spectroscopy (DWS) can be employed as an optical rheology tool with numerous applications for studying the structure, dynamics and linear viscoelastic properties of complex fluids, foams, glasses and gels. To carry out DWS measurements, one first needs to quantify the static optical properties of the sample under investigation, i.e. the transport mean free path l* and the absorption length l_a. In the absence of absorption this can be done by comparing the diffuse optical transmission to a calibration sample whose l* is known. Performing this comparison, however, is cumbersome, time-consuming, and prone to mistakes by the operator. Moreover, even weak absorption can lead to significant errors. In this paper, we demonstrate the implementation of an automated approach with which the DWS measurement procedure can be simplified significantly. By comparison with a comprehensive set of calibration measurements, we cover the entire parameter space relating the measured count rates (CR_t, CR_b) to (l*, l_a). Based on this approach we can determine l* and l_a of an unknown sample accurately, thus making the additional measurement of a calibration sample obsolete. We illustrate the use of this approach by monitoring the coarsening of a commercially available shaving foam with DWS.
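
    The mapping from measured count rates (CR_t, CR_b) back to (l*, l_a) via a precomputed calibration set can be sketched as a nearest-neighbour lookup; the table layout and the lookup scheme here are assumptions, not the authors' method:

    ```python
    def lookup_optical_params(cr_t, cr_b, table):
        """table: iterable of (cr_t, cr_b, l_star, l_a) calibration rows (hypothetical format).
        Returns the (l_star, l_a) of the nearest calibration row in count-rate space."""
        best = min(table, key=lambda row: (row[0] - cr_t) ** 2 + (row[1] - cr_b) ** 2)
        return best[2], best[3]
    ```

    A real implementation would interpolate between calibration points rather than snap to the nearest one.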

  17. Pipeline Reduction of Binary Light Curves from Large-Scale Surveys

    NASA Astrophysics Data System (ADS)

    Prša, Andrej; Zwitter, Tomaž

    2007-08-01

    One of the most important changes in observational astronomy of the 21st Century is a rapid shift from classical object-by-object observations to extensive automatic surveys. As CCD detectors are getting better and their prices are getting lower, more and more small and medium-size observatories are refocusing their attention to detection of stellar variability through systematic sky-scanning missions. This trend is additionally powered by the success of pioneering surveys such as ASAS, DENIS, OGLE, TASS, their space counterpart Hipparcos and others. Such surveys produce massive amounts of data and it is not at all clear how these data are to be reduced and analysed. This is especially striking in the eclipsing binary (EB) field, where most frequently used tools are optimized for object-by-object analysis. A clear need for thorough, reliable and fully automated approaches to modeling and analysis of EB data is thus obvious. This task is very difficult because of limited data quality, non-uniform phase coverage and parameter degeneracy. The talk will review recent advancements in putting together semi-automatic and fully automatic pipelines for EB data processing. Automatic procedures have already been used to process the Hipparcos data, LMC/SMC observations, OGLE and ASAS catalogs etc. We shall discuss the advantages and shortcomings of these procedures and overview the current status of automatic EB modeling pipelines for the upcoming missions such as CoRoT, Kepler, Gaia and others.

  18. A system for extracting 3-dimensional measurements from a stereo pair of TV cameras

    NASA Technical Reports Server (NTRS)

    Yakimovsky, Y.; Cunningham, R.

    1976-01-01

    Obtaining accurate three-dimensional (3-D) measurements from a stereo pair of TV cameras is a task requiring camera modeling, calibration, and the matching of the two images of a real 3-D point in the two TV pictures. A system which models and calibrates the cameras and pairs the two images of a real-world point in the two pictures, either manually or automatically, was implemented. This system is operational and provides a three-dimensional measurement resolution of ± mm at distances of about 2 m.
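
    For a calibrated, rectified stereo pair, the depth of a matched point follows directly from its disparity; a minimal sketch of that relation (the focal length and baseline values in the example are illustrative, not the system's):

    ```python
    def stereo_depth(focal_px, baseline_m, disparity_px):
        """Depth from a rectified stereo pair: Z = f * B / d."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px
    ```

    For example, a 1000 px focal length, 0.1 m baseline and 50 px disparity give a depth of 2 m, the working distance quoted above.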

  19. 21 CFR 211.68 - Automatic, mechanical, and electronic equipment.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... SERVICES (CONTINUED) DRUGS: GENERAL CURRENT GOOD MANUFACTURING PRACTICE FOR FINISHED PHARMACEUTICALS... satisfactorily, may be used in the manufacture, processing, packing, and holding of a drug product. If such... designed to assure proper performance. Written records of those calibration checks and inspections shall be...

  20. Estimating Mutual Information for High-to-Low Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michaud, Isaac James; Williams, Brian J.; Weaver, Brian Phillip

    Presentation shows that KSG 2 is superior to KSG 1 because it scales locally automatically; KSG estimators are limited to a maximum MI due to sample size; LNC extends the capability of KSG without onerous assumptions; iLNC allows LNC to estimate information gain.

  1. A stepwise, multi-objective, multi-variable parameter optimization method for the APEX model

    USDA-ARS?s Scientific Manuscript database

    Proper parameterization enables hydrological models to make reliable estimates of non-point source pollution for effective control measures. The automatic calibration of hydrologic models requires significant computational power limiting its application. The study objective was to develop and eval...

  2. a Novel Approach to Camera Calibration Method for Smart Phones Under Road Environment

    NASA Astrophysics Data System (ADS)

    Lee, Bijun; Zhou, Jian; Ye, Maosheng; Guo, Yuan

    2016-06-01

    Monocular vision-based lane departure warning systems have been increasingly used in advanced driver assistance systems (ADAS). Using lane mark detection and identification, we propose an automatic and efficient camera calibration method for smart phones. First, we detect the lane marker features in perspective space and calculate the edges of the lane markers in image sequences. Second, because the widths of the lane markers and the road lane are fixed under a standard structural road environment, we can automatically build a transformation matrix between perspective space and 3D space and obtain a local map in the vehicle coordinate system. In order to verify the validity of this method, we installed a smart phone in the `Tuzhi' self-driving car of Wuhan University and recorded more than 100 km of image data on the road in Wuhan. According to the results, we can calculate the positions of lane markers accurately enough for the self-driving car to run smoothly on the road.
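
    The transformation between perspective space and the ground plane is a 3x3 homography; applying one to an image point can be sketched as follows. The matrix itself would come from the lane-width constraint described above; the matrix in the test is illustrative only:

    ```python
    def apply_homography(H, x, y):
        """Map an image point (x, y) through a 3x3 homography H (row-major nested lists)."""
        # Homogeneous transform: [x', y', w]^T = H [x, y, 1]^T, then divide by w.
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
                (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
    ```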

  3. Formal Specification and Automatic Analysis of Business Processes under Authorization Constraints: An Action-Based Approach

    NASA Astrophysics Data System (ADS)

    Armando, Alessandro; Giunchiglia, Enrico; Ponta, Serena Elisa

    We present an approach to the formal specification and automatic analysis of business processes under authorization constraints based on the action language \\cal{C}. The use of \\cal{C} allows for a natural and concise modeling of the business process and the associated security policy and for the automatic analysis of the resulting specification by using the Causal Calculator (CCALC). Our approach improves upon previous work by greatly simplifying the specification step while retaining the ability to perform a fully automatic analysis. To illustrate the effectiveness of the approach we describe its application to a version of a business process taken from the banking domain and use CCALC to determine resource allocation plans complying with the security policy.

  4. Peristaltic pump-based low range pressure sensor calibration system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vinayakumar, K. B.; Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore 5600012; Naveen Kumar, G.

    2015-11-15

    Peristaltic pumps are normally used to pump liquids in several chemical and biological applications. In the present study, a peristaltic pump was used to pressurize a chamber (to positive as well as negative pressures) using atmospheric air. In the present paper, we discuss the development and performance study of an automatic pressurization system to calibrate low range (millibar) pressure sensors. The system includes a peristaltic pump, a calibrated pressure sensor (master sensor), a pressure chamber, and the control electronics. An in-house developed peristaltic pump was used to pressurize the chamber. A closed loop control system has been developed to detect and adjust the pressure leaks in the chamber. The complete system has been integrated into a portable product. The system performance has been studied for step response and steady state errors. The system is portable, free from oil contaminants, and consumes less power compared to existing pressure calibration systems. The veracity of the system was verified by calibrating an unknown diaphragm based pressure sensor, and the results obtained were satisfactory.

  5. Peristaltic pump-based low range pressure sensor calibration system

    NASA Astrophysics Data System (ADS)

    Vinayakumar, K. B.; Naveen Kumar, G.; Nayak, M. M.; Dinesh, N. S.; Rajanna, K.

    2015-11-01

    Peristaltic pumps are normally used to pump liquids in several chemical and biological applications. In the present study, a peristaltic pump was used to pressurize a chamber (to positive as well as negative pressures) using atmospheric air. In the present paper, we discuss the development and performance study of an automatic pressurization system to calibrate low range (millibar) pressure sensors. The system includes a peristaltic pump, a calibrated pressure sensor (master sensor), a pressure chamber, and the control electronics. An in-house developed peristaltic pump was used to pressurize the chamber. A closed loop control system has been developed to detect and adjust the pressure leaks in the chamber. The complete system has been integrated into a portable product. The system performance has been studied for step response and steady state errors. The system is portable, free from oil contaminants, and consumes less power compared to existing pressure calibration systems. The veracity of the system was verified by calibrating an unknown diaphragm based pressure sensor, and the results obtained were satisfactory.
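
    Closed-loop regulation against a master sensor, as described above, can be illustrated with a simple feedback loop; the callback names and stepping logic below are hypothetical stand-ins, not the authors' control electronics:

    ```python
    def regulate_pressure(read_sensor, pump_step, setpoint, tol, max_iters=10000):
        """Step the pump until the master-sensor reading is within tol of setpoint.

        read_sensor() returns the current chamber pressure; pump_step(+1/-1)
        pumps air in or bleeds it out (both hypothetical hardware callbacks)."""
        for _ in range(max_iters):
            p = read_sensor()
            err = setpoint - p
            if abs(err) <= tol:
                return p
            pump_step(+1 if err > 0 else -1)
        raise RuntimeError("failed to reach setpoint")
    ```

    The same loop, run continuously, also compensates for slow leaks: any drift away from the setpoint re-triggers pump steps.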

  6. A Novel Protocol for Model Calibration in Biological Wastewater Treatment

    PubMed Central

    Zhu, Ao; Guo, Jianhua; Ni, Bing-Jie; Wang, Shuying; Yang, Qing; Peng, Yongzhen

    2015-01-01

    Activated sludge models (ASMs) have been widely used for process design, operation and optimization in wastewater treatment plants. However, it is still a challenge to achieve an efficient calibration for reliable application by using the conventional approaches. Hereby, we propose a novel calibration protocol, i.e. the Numerical Optimal Approaching Procedure (NOAP), for the systematic calibration of ASMs. The NOAP consists of three key steps in an iterative scheme flow: i) global factor sensitivity analysis for factor fixing; ii) pseudo-global parameter correlation analysis for non-identifiable factor detection; and iii) formation of a parameter subset through estimation by using a genetic algorithm. The validity and applicability are confirmed using experimental data obtained from two independent wastewater treatment systems, including a sequencing batch reactor and a continuous stirred-tank reactor. The results indicate that the NOAP can effectively determine the optimal parameter subset and successfully perform model calibration and validation for these two different systems. The proposed NOAP is expected to be used for the automatic calibration of ASMs and can potentially be applied to other ordinary differential equation models. PMID:25682959

  7. 3D model assisted fully automated scanning laser Doppler vibrometer measurements

    NASA Astrophysics Data System (ADS)

    Sels, Seppe; Ribbens, Bart; Bogaerts, Boris; Peeters, Jeroen; Vanlanduit, Steve

    2017-12-01

    In this paper, a new fully automated scanning laser Doppler vibrometer (LDV) measurement technique is presented. In contrast to existing scanning LDV techniques, which use a 2D camera for the manual selection of sample points, we use a 3D Time-of-Flight camera in combination with a CAD file of the test object to automatically obtain measurements at pre-defined locations. The proposed procedure allows users to test prototypes in a shorter time because physical measurement locations are determined without user interaction. Another benefit of this methodology is that it incorporates automatic mapping between a CAD model and the vibration measurements. This mapping can be used to visualize measurements directly on a 3D CAD model. The proposed method is illustrated with vibration measurements of an unmanned aerial vehicle.

  8. Multi-objective vs. single-objective calibration of a hydrologic model using single- and multi-objective screening

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Cuntz, Matthias; Shafii, Mahyar; Zink, Matthias; Schäfer, David; Thober, Stephan; Samaniego, Luis; Tolson, Bryan

    2016-04-01

    Hydrologic models are traditionally calibrated against observed streamflow. Recent studies have shown, however, that only a few global model parameters are constrained by this kind of integral signal. They can be identified using prior screening techniques. Since different objectives might constrain different parameters, it is advisable to use multiple sources of information to calibrate those models. One common approach is to combine these multiple objectives (MO) into one single objective (SO) function, allowing the use of a SO optimization algorithm. Another strategy is to consider the different objectives separately and apply a MO Pareto optimization algorithm. In this study, two major research questions are addressed: 1) How do multi-objective calibrations compare with corresponding single-objective calibrations? 2) How much do calibration results deteriorate when the number of calibrated parameters is reduced by a prior screening technique? The hydrologic model employed in this study is a distributed hydrologic model (mHM) with 52 model parameters, i.e. transfer coefficients. The model uses grid cells as a primary hydrologic unit, and accounts for processes like snow accumulation and melting, soil moisture dynamics, infiltration, surface runoff, evapotranspiration, subsurface storage and discharge generation. The model is applied in three distinct catchments over Europe. The SO calibrations are performed using the Dynamically Dimensioned Search (DDS) algorithm with a fixed budget, while the MO calibrations are achieved using the Pareto Dynamically Dimensioned Search (PA-DDS) algorithm with the same budget. The two objectives used here are the Nash-Sutcliffe Efficiency (NSE) of the simulated streamflow and the NSE of its logarithmic transformation. It is shown that the SO DDS results are located close to the edges of the Pareto fronts of the PA-DDS.
The MO calibrations are hence preferable because they supply multiple equivalent solutions from which the user can choose according to specific needs. The sequential single-objective parameter screening was employed prior to the calibrations, reducing the number of parameters by at least 50% in the different catchments and for the different single objectives. The single-objective calibrations led to a faster convergence of the objectives and are hence beneficial when using DDS on single objectives. The above-mentioned parameter screening technique is generalized for multiple objectives and applied before calibration using the PA-DDS algorithm. Two different alternatives of this MO-screening are tested. The comparison of the calibration results using all parameters and using only screened parameters shows, for both alternatives, that the PA-DDS algorithm does not profit in terms of trade-off size and function evaluations required to achieve converged Pareto fronts. This is because the PA-DDS algorithm automatically reduces the search space as the calibration run progresses. This automatic reduction may differ for other search algorithms. It is therefore hypothesized that prior screening can, but need not, be beneficial for parameter estimation, depending on the chosen optimization algorithm.
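
    The two objectives used above, the NSE of streamflow and the NSE of its logarithmic transformation, are straightforward to compute; a minimal sketch (the small eps guarding log(0) is an assumption, not part of the study):

    ```python
    import math

    def nse(sim, obs):
        """Nash-Sutcliffe efficiency: 1 - sum((o - s)^2) / sum((o - mean(o))^2)."""
        mean_obs = sum(obs) / len(obs)
        num = sum((o - s) ** 2 for s, o in zip(sim, obs))
        den = sum((o - mean_obs) ** 2 for o in obs)
        return 1.0 - num / den

    def log_nse(sim, obs, eps=1e-6):
        """NSE of log-transformed flows, which emphasizes low-flow periods."""
        return nse([math.log(s + eps) for s in sim],
                   [math.log(o + eps) for o in obs])
    ```

    NSE equals 1 for a perfect simulation and 0 for a simulation no better than the observed mean, which is why the two objectives constrain different flow regimes.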

  9. Brain tumor segmentation in MR slices using improved GrowCut algorithm

    NASA Astrophysics Data System (ADS)

    Ji, Chunhong; Yu, Jinhua; Wang, Yuanyuan; Chen, Liang; Shi, Zhifeng; Mao, Ying

    2015-12-01

    The detection of brain tumors from MR images is very significant for medical diagnosis and treatment. However, the existing methods are mostly based on manual or semiautomatic segmentation, which are awkward when dealing with a large number of MR slices. In this paper, a new fully automatic method for the segmentation of brain tumors in MR slices is presented. Based on the hypothesis of a symmetric brain structure, the method improves the interactive GrowCut algorithm by further using the bounding box algorithm in the pre-processing step. More importantly, local reflectional symmetry is used to make up for the deficiency of the bounding box method. After segmentation, a 3D tumor image is reconstructed. We evaluate the accuracy of the proposed method on MR slices with synthetic tumors and actual clinical MR images. The result of the proposed method is compared with the actual position of the simulated 3D tumor qualitatively and quantitatively. In addition, our automatic method produces performance equivalent to manual segmentation and to the interactive GrowCut with manual interaction, while providing fully automatic segmentation.

  10. Determinants of wood dust exposure in the Danish furniture industry.

    PubMed

    Mikkelsen, Anders B; Schlunssen, Vivi; Sigsgaard, Torben; Schaumburg, Inger

    2002-11-01

    This paper investigates the relation between wood dust exposure in the furniture industry and occupational hygiene variables. During the winter 1997-98 54 factories were visited and 2362 personal, passive inhalable dust samples were obtained; the geometric mean was 0.95 mg/m(3) and the geometric standard deviation was 2.08. In a first measuring round 1685 dust concentrations were obtained. For some of the workers repeated measurements were carried out 1 (351) and 2 weeks (326) after the first measurement. Hygiene variables like job, exhaust ventilation, cleaning procedures, etc., were documented. A multivariate analysis based on mixed effects models was used with hygiene variables being fixed effects and worker, machine, department and factory being random effects. A modified stepwise strategy of model making was adopted taking into account the hierarchically structured variables and making possible the exclusion of non-influential random as well as fixed effects. For woodworking, the following determinants of exposure increase the dust concentration: manual and automatic sanding and use of compressed air with fully automatic and semi-automatic machines and for cleaning of work pieces. Decreased dust exposure resulted from the use of compressed air with manual machines, working at fully automatic or semi-automatic machines, functioning exhaust ventilation, work on the night shift, daily cleaning of rooms, cleaning of work pieces with a brush, vacuum cleaning of machines, supplementary fresh air intake and safety representative elected within the last 2 yr. For handling and assembling, increased exposure results from work at automatic machines and presence of wood dust on the workpieces. Work on the evening shift, supplementary fresh air intake, work in a chair factory and special cleaning staff produced decreased exposure to wood dust. The implications of the results for the prevention of wood dust exposure are discussed.

  11. Automatic segmentation and quantification of the cardiac structures from non-contrast-enhanced cardiac CT scans

    NASA Astrophysics Data System (ADS)

    Shahzad, Rahil; Bos, Daniel; Budde, Ricardo P. J.; Pellikaan, Karlijn; Niessen, Wiro J.; van der Lugt, Aad; van Walsum, Theo

    2017-05-01

    Early structural changes to the heart, including the chambers and the coronary arteries, provide important information on pre-clinical heart disease like cardiac failure. Currently, contrast-enhanced cardiac computed tomography angiography (CCTA) is the preferred modality for the visualization of the cardiac chambers and the coronaries. In clinical practice not every patient undergoes a CCTA scan; many patients receive only a non-contrast-enhanced calcium scoring CT scan (CTCS), which has a lower radiation dose and does not require the administration of contrast agent. Quantifying cardiac structures in such images is challenging, as they lack the contrast present in CCTA scans. Such quantification would however be relevant, as it enables population based studies with only a CTCS scan. The purpose of this work is therefore to investigate the feasibility of automatic segmentation and quantification of cardiac structures, viz. the whole heart, left atrium, left ventricle, right atrium, right ventricle and aortic root, from CTCS scans. A fully automatic multi-atlas-based segmentation approach is used to segment the cardiac structures. Results show that the segmentation overlap between the automatic method and the reference standard has a Dice similarity coefficient of 0.91 on average for the cardiac chambers. The mean surface-to-surface distance error over all the cardiac structures is 1.4 ± 1.7 mm. The automatically obtained cardiac chamber volumes using the CTCS scans have an excellent correlation when compared to the volumes in corresponding CCTA scans; a Pearson correlation coefficient (R) of 0.95 is obtained. Our fully automatic method enables large-scale assessment of cardiac structures on non-contrast-enhanced CT scans.
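
    The Pearson correlation coefficient used above to compare CTCS-derived and CCTA-derived chamber volumes can be computed directly; a minimal sketch:

    ```python
    def pearson_r(xs, ys):
        """Pearson correlation coefficient of two equal-length sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5
    ```

    R close to 1, as in the 0.95 reported above, indicates that the volumes from the two scan types agree up to a linear relationship.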

  12. Automated Mounting Bias Calibration for Airborne LIDAR System

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Jiang, W.; Jiang, S.

    2012-07-01

    Mounting bias is the major error source of an airborne LIDAR system. In this paper, an automated calibration method for estimating LIDAR system mounting parameters is introduced. The LIDAR direct geo-referencing model is used to calculate systematic errors. Because LIDAR footprints are discretely sampled, truly corresponding laser points hardly exist across different strips, so the traditional corresponding-point methodology does not apply to LIDAR strip registration. We propose a Virtual Corresponding Point Model (VCPM) to resolve the correspondence problem among discrete laser points. Each VCPM contains a corresponding point and three real laser footprints. Two rules are defined to calculate tie point coordinates from real laser footprints. The Scale Invariant Feature Transform (SIFT) is used to extract corresponding points in LIDAR strips, and the automatic flow of LIDAR system calibration based on the VCPM is described in detail. Practical examples illustrate the feasibility and effectiveness of the proposed calibration method.

  13. Objective Measurement of Erythema in Psoriasis using Digital Color Photography with Color Calibration

    PubMed Central

    Raina, Abhay; Hennessy, Ricky; Rains, Michael; Allred, James; Hirshburg, Jason M; Diven, Dayna; Markey, Mia K.

    2016-01-01

    Background Traditional metrics for evaluating the severity of psoriasis are subjective, which complicates efforts to measure effective treatments in clinical trials. Methods We collected images of psoriasis plaques and calibrated the coloration of the images according to an included color card. Features were extracted from the images and used to train a linear discriminant analysis classifier with cross-validation to automatically classify the degree of erythema. The results were tested against numerical scores obtained by a panel of dermatologists using a standard rating system. Results Quantitative measures of erythema based on the digital color images showed good agreement with subjective assessment of erythema severity (κ = 0.4203). The color calibration process improved the agreement from κ = 0.2364 to κ = 0.4203. Conclusions We propose a method for the objective measurement of the psoriasis severity parameter of erythema and show that the calibration process improved the results. PMID:26517973
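
    The κ values quoted above are Cohen's kappa, which corrects raw rater agreement for the agreement expected by chance; a minimal sketch for two label sequences:

    ```python
    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two equal-length label sequences."""
        n = len(rater_a)
        labels = set(rater_a) | set(rater_b)
        # Observed agreement and chance-expected agreement.
        p_obs = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n
        p_exp = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
        return (p_obs - p_exp) / (1 - p_exp)
    ```

    This is why κ = 0.4203 signals only moderate agreement even though the raw agreement rate is considerably higher.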

  14. In-flight photogrammetric camera calibration and validation via complementary lidar

    NASA Astrophysics Data System (ADS)

    Gneeniss, A. S.; Mills, J. P.; Miller, P. E.

    2015-02-01

    This research assumes lidar as a reference dataset against which in-flight camera system calibration and validation can be performed. The methodology utilises a robust least squares surface matching algorithm to align a dense network of photogrammetric points to the lidar reference surface, allowing for the automatic extraction of so-called lidar control points (LCPs). Adjustment of the photogrammetric data is then repeated using the extracted LCPs in a self-calibrating bundle adjustment with additional parameters. This methodology was tested using two different photogrammetric datasets, a Microsoft UltraCamX large format camera and an Applanix DSS322 medium format camera. Systematic sensitivity testing explored the influence of the number and weighting of LCPs. For both camera blocks it was found that when the number of control points increase, the accuracy improves regardless of point weighting. The calibration results were compared with those obtained using ground control points, with good agreement found between the two.

  15. Multi-projector auto-calibration and placement optimization for non-planar surfaces

    NASA Astrophysics Data System (ADS)

    Li, Dong; Xie, Jinghui; Zhao, Lu; Zhou, Lijing; Weng, Dongdong

    2015-10-01

    Non-planar projection has been widely applied in virtual reality and digital entertainment and exhibitions because of its flexible layout and immersive display effects. Compared with planar projection, a non-planar projection is more difficult to achieve because projector calibration and image distortion correction are difficult processes. This paper uses a cylindrical screen as an example to present a new method for automatically calibrating a multi-projector system in a non-planar environment without using 3D reconstruction. This method corrects the geometric calibration error caused by the screen's manufactured imperfections, such as an undulating surface or a slant in the vertical plane. In addition, based on actual projection demand, this paper presents the overall performance evaluation criteria for the multi-projector system. According to these criteria, we determined the optimal placement for the projectors. This method also extends to surfaces that can be parameterized, such as spheres, ellipsoids, and paraboloids, and demonstrates a broad applicability.

  16. Automatic Calibration of Global Flow Routing Model Parameters in the Amazon Basin Using Virtual SWOT Data

    NASA Astrophysics Data System (ADS)

    Mouffe, Melodie; Getirana, Augusto; Ricci, Sophie; Lion, Christine; Biancamaria, Sylvian; Boone, Aaron; Mognard, Nelly; Rogel, Philippe

    2013-09-01

    The Surface Water and Ocean Topography (SWOT) wide-swath altimetry mission will provide measurements of water surface elevation (WSE) at a global scale. The aim of this study is to investigate the potential of these satellite data for the calibration of the hydrological model HyMAP over the Amazon river basin. Since SWOT has not yet been launched, synthetic observations are used to calibrate the river bed depth and width, the Manning coefficient, and the baseflow concentration time. The calibration process consists in minimizing a cost function that describes the difference between the simulated and the observed WSE, using an evolutionary, global, multi-objective algorithm. We found that the calibration procedure is able to retrieve an optimal set of parameters that brings the simulated WSE closer to the observations. Still, with a global calibration procedure in which a uniform correction is applied, the improvement is limited to a mean correction over the catchment and the simulation period. We conclude that, in order to benefit from the high resolution and complete coverage of the SWOT mission, the calibration should be carried out sequentially in time over sub-domains as observations become available.
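
The calibration loop can be sketched with a toy stand-in for HyMAP (the forward model, parameter bounds, and evolution strategy below are illustrative assumptions, not the study's actual configuration): an elitist evolution strategy minimizes the RMSE between simulated and observed WSE.

```python
import math
import random

def rmse(sim, obs):
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

def evolve(cost, bounds, pop_size=30, gens=80, sigma0=0.2, seed=1):
    """Minimal (mu + lambda) evolution strategy with elitism and decaying mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    sigma = sigma0
    for _ in range(gens):
        pop.sort(key=cost)
        parents = pop[: pop_size // 3]          # elitism: keep the best third
        children = [
            [min(hi, max(lo, x + rng.gauss(0.0, sigma * (hi - lo))))
             for x, (lo, hi) in zip(rng.choice(parents), bounds)]
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
        sigma *= 0.95                           # shrink the mutation step
    return min(pop, key=cost)

# Toy forward model standing in for HyMAP: WSE = bed_offset + coef * sqrt(discharge)
discharges = [10.0, 40.0, 90.0, 160.0, 250.0]
true_params = (2.0, 0.35)
observed = [true_params[0] + true_params[1] * math.sqrt(q) for q in discharges]

def simulate(params):
    return [params[0] + params[1] * math.sqrt(q) for q in discharges]

best = evolve(lambda p: rmse(simulate(p), observed), bounds=[(0.0, 5.0), (0.0, 1.0)])
```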

  17. Integrated calibration between digital camera and laser scanner from mobile mapping system for land vehicles

    NASA Astrophysics Data System (ADS)

    Zhao, Guihua; Chen, Hong; Li, Xingquan; Zou, Xiaoliang

    The paper presents the concepts of lever arm and boresight angle, the design requirements for calibration sites, and an integrated method for calibrating the boresight angles of a digital camera and a laser scanner. Taking test data collected by Applanix's LandMark system as an example, the camera is calibrated both by tying together three consecutive stereo images and by an OTF (on-the-fly) calibration method using ground control points. The boresight angles of the laser scanner are calibrated with combined manual and automatic methods using ground control points. Integrated calibration between the digital camera and the laser scanner is introduced to improve the systematic precision of the two sensors. Analysis of the measurements between ground control points and their corresponding image points in sequential images shows that object positions agree to within about 15 cm in relative error and 20 cm in absolute error; comparison of ground control points with their corresponding laser point clouds shows errors of less than 20 cm. These experimental results indicate that the mobile mapping system is an efficient and reliable means of rapidly generating high-accuracy, high-density road spatial data.
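
Conceptually, the lever arm and boresight angles define a rigid transform from the sensor frame into the IMU/body frame; a minimal sketch (the Z-Y-X angle convention here is an assumption and may differ from the LandMark system's actual convention):

```python
import math

def rot_zyx(roll, pitch, yaw):
    """Rotation matrix from boresight angles (radians), Z-Y-X convention (an assumption)."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def sensor_to_body(point, boresight, lever_arm):
    """Map a sensor-frame point into the body frame: x_body = R * x_sensor + lever_arm."""
    R = rot_zyx(*boresight)
    return [sum(R[i][j] * point[j] for j in range(3)) + lever_arm[i] for i in range(3)]
```

Boresight calibration estimates the three angles (and the lever arm) so that points from both sensors land consistently in the shared body frame.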

  18. An efficient multistage algorithm for full calibration of the hemodynamic model from BOLD signal responses.

    PubMed

    Zambri, Brian; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2017-11-01

    We propose a computational strategy that falls into the category of prediction/correction iterative-type approaches, for calibrating the hemodynamic model. The proposed method is used to estimate consecutively the values of the two sets of model parameters. Numerical results corresponding to both synthetic and real functional magnetic resonance imaging measurements for a single stimulus as well as for multiple stimuli are reported to highlight the capability of this computational methodology to fully calibrate the considered hemodynamic model. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Construction and calibration of a low cost and fully automated vibrating sample magnetometer

    NASA Astrophysics Data System (ADS)

    El-Alaily, T. M.; El-Nimr, M. K.; Saafan, S. A.; Kamel, M. M.; Meaz, T. M.; Assar, S. T.

    2015-07-01

    A low cost vibrating sample magnetometer (VSM) has been constructed using an electromagnet and an audio loudspeaker, both controlled by a data acquisition device. The constructed VSM records the magnetic hysteresis loop up to 8.3 kG at room temperature. The apparatus has been calibrated and tested using magnetic hysteresis data of ferrite samples measured by two scientifically calibrated magnetometers: a Lake Shore model 7410 and an LDJ Electronics (Troy, MI) instrument. Our new lab-built VSM design proved successful and reliable.

  20. Self-calibrating models for dynamic monitoring and diagnosis

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin

    1996-01-01

    A method for automatically building qualitative and semi-quantitative models of dynamic systems, and using them for monitoring and fault diagnosis, is developed and demonstrated. The qualitative approach and semi-quantitative method are applied to monitoring observation streams, and to design of non-linear control systems.

  1. Evaluation of a stepwise, multi-objective, multi-variable parameter optimization method for the APEX model

    USDA-ARS?s Scientific Manuscript database

    Hydrologic models are essential tools for environmental assessment of agricultural non-point source pollution. The automatic calibration of hydrologic models, though efficient, demands significant computational power, which can limit its application. The study objective was to investigate a cost e...

  2. Recalculated Areas for Maximum Ice Extents of the Baltic Sea During Winters 1971-2008

    NASA Astrophysics Data System (ADS)

    Niskanen, T.; Vainio, J.; Eriksson, P.; Heiler, I.

    2009-04-01

    Publication of operational ice charts of the Baltic Sea in Finland started in the year 1915. Until 1993 all ice charts were hand-drawn paper copies; in 1993 the ice-charting software IceMap was introduced, and since then all ice charts have been produced digitally. Since 1996, IceMap has had an option that lets the user calculate the areas of single ice-area polygons in the chart, so the area of the maximum ice extent can easily be solved fully automatically. Before this option was introduced (and in full operation), all maximum-extent areas were calculated manually with a planimeter. During recent years it has become clear that some areas calculated before 1996 do not give the same result as IceMap. Differences can arise from, for example, inaccuracy of old coastlines, map projections, the calibration of the planimeter, or interpretation of old ice-area symbols. Old ice charts since winter 1970-71 have now been scanned, rectified, and re-drawn, and new maximum ice extent areas for the Baltic Sea have been re-calculated. With these new technological tools it can be concluded that clear differences can be found in some cases.

  3. A fully automated temperature-dependent resistance measurement setup using van der Pauw method

    NASA Astrophysics Data System (ADS)

    Pandey, Shivendra Kumar; Manivannan, Anbarasu

    2018-03-01

    The van der Pauw (VDP) method is widely used to identify the resistance of planar homogeneous samples with four contacts placed on the periphery. We have developed a fully automated thin film resistance measurement setup using the VDP method with the capability of precisely measuring a wide range of thin film resistances, from a few mΩ up to 10 GΩ, under controlled temperatures from room temperature up to 600 °C. The setup utilizes a robust, custom-designed switching network board (SNB) for measuring current-voltage characteristics automatically at four different source-measure configurations based on the VDP method. Moreover, the SNB is connected with low noise shielded coaxial cables that reduce the effect of leakage current as well as the capacitance in the circuit, thereby enhancing the accuracy of measurement. In order to enable precise and accurate resistance measurement of the sample, a wide range of sourcing currents/voltages are pre-determined, with the capability of auto-tuning over ~12 orders of magnitude of variation in resistance. Furthermore, the setup has been calibrated with standard samples and employed to investigate temperature-dependent resistance (a few Ω-10 GΩ) for various chalcogenide based phase change thin films (Ge2Sb2Te5, Ag5In5Sb60Te30, and In3SbTe2). This setup would be highly helpful for measuring the temperature-dependent resistance of a wide range of materials, i.e., metals, semiconductors, and insulators, revealing information about temperature-induced structural changes as reflected in changes in resistance, which is useful for numerous applications.
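
The VDP sheet resistance itself follows from the van der Pauw equation exp(-πR_A/R_s) + exp(-πR_B/R_s) = 1, where R_A and R_B are the two measured four-terminal resistances. A minimal sketch of solving it numerically (bisection works because the left-hand side is monotone in R_s):

```python
import math

def sheet_resistance(r_a, r_b):
    """Solve exp(-pi*Ra/Rs) + exp(-pi*Rb/Rs) = 1 for the sheet resistance Rs."""
    f = lambda rs: math.exp(-math.pi * r_a / rs) + math.exp(-math.pi * r_b / rs) - 1.0
    # f increases monotonically with Rs: bracket the root and bisect.
    lo = 1e-9 * max(r_a, r_b)
    hi = 1e9 * max(r_a, r_b)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

In the symmetric case R_A = R_B = R, the equation reduces to the well-known closed form R_s = πR/ln 2, which serves as a check.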

  4. A Preliminary Design of a Calibration Chamber for Evaluating the Stability of Unsaturated Soil Slope

    NASA Astrophysics Data System (ADS)

    Hsu, H.-H.

    2012-04-01

    Unsaturated soil slopes, which contain groundwater tables and fail easily under heavy rainfall, are widely distributed in arid and semi-arid areas. In situ tests are direct methods for obtaining site characteristics for slope stability analysis, and the cone penetration test (CPT) is a popular in situ test method. Some CPT empirical equations were established from calibration chamber tests, which in the past commonly used clean quartz sand as the testing material. However, silty sand is observed in many actual slopes. Because silty sand is more compressible than quartz sand, it is not suitable to apply correlations between soil properties and CPT results built from quartz sand to silty sand, and experience with CPT calibration in silty sand has been limited. Moreover, CPT calibration tests have mostly been performed in dry or saturated soils, where the condition around the cone tip during penetration is assumed to be fully drained or fully undrained, yet it has been observed to be partially drained for unsaturated soils. Because matric suction has a great effect on the characteristics of unsaturated soils, they are much more sensitive to water content than saturated soils. The design of an unsaturated calibration chamber is in progress. Air pressure is supplied from the top plate, and pore water pressure is provided through high-air-entry-value ceramic disks located at the bottom plate of the chamber cell. To enhance and uniformly distribute the unsaturation effect, four perforated burettes are installed onto the ceramic disks and stretch upwards to the mid-height of the specimen. This paper describes the design concepts, illustrates this unsaturated calibration chamber, and presents preliminary test results.

  5. A flow-batch analyzer with piston propulsion applied to automatic preparation of calibration solutions for Mn determination in mineral waters by ET AAS.

    PubMed

    Almeida, Luciano F; Vale, Maria G R; Dessuy, Morgana B; Silva, Márcia M; Lima, Renato S; Santos, Vagner B; Diniz, Paulo H D; Araújo, Mário C U

    2007-10-31

    The increasing development of miniaturized flow systems and the continuous monitoring of chemical processes require dramatically simplified and cheap flow schemes and instrumentation with large potential for miniaturization and consequent portability. For these purposes, the development of systems based on flow and batch technologies may be a good alternative. Flow-batch analyzers (FBA) have been successfully applied to implement analytical procedures such as titrations, sample pre-treatment, analyte addition, and screening analysis. In spite of its favourable characteristics, the previously proposed FBA uses peristaltic pumps to propel the fluids, and this kind of propulsion presents high cost and large dimensions, making miniaturization and portability unfeasible. To overcome these drawbacks, a low cost, robust, and compact FBA that does not rely on a peristaltic pump is proposed. It makes use of a lab-made piston coupled to a mixing chamber and a step motor controlled by a microcomputer. The piston-propelled FBA (PFBA) was applied to the automatic preparation of calibration solutions for manganese determination in mineral waters by electrothermal atomic-absorption spectrometry (ET AAS). Comparing the results obtained with two sets of calibration curves (five by manual and five by PFBA preparation), no significant statistical differences at a 95% confidence level were observed by applying the paired t-test. The standard deviations of the manual and PFBA procedures were always smaller than 0.2 and 0.1 µg L(-1), respectively. Using the PFBA it was possible to prepare about 80 calibration solutions per hour.

  6. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system.

    PubMed

    Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Beavis, A W; Saunderson, J R

    2014-05-07

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric given that it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that resulting clinical image quality will be adequate for the required clinical task. However, this must be done with close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma.
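
For reference, the two simpler metrics can be computed directly from regions of interest in a uniform phantom image; a minimal sketch (the pooled-noise CNR denominator is one common convention and may differ in detail from the study's definitions):

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a (nominally uniform) region of interest."""
    return roi.mean() / roi.std(ddof=1)

def cnr(roi_a, roi_b):
    """Contrast-to-noise ratio with a pooled-noise denominator (one common convention)."""
    pooled = np.sqrt(0.5 * (roi_a.var(ddof=1) + roi_b.var(ddof=1)))
    return abs(roi_a.mean() - roi_b.mean()) / pooled

# Synthetic check: mean 100 with noise sigma 5 gives SNR ~ 20; a 20-unit
# contrast step over the same noise gives CNR ~ 4.
rng = np.random.default_rng(42)
background = rng.normal(100.0, 5.0, 100_000)
insert_roi = rng.normal(120.0, 5.0, 100_000)
```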

  7. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system

    NASA Astrophysics Data System (ADS)

    Moore, C. S.; Wood, T. J.; Avery, G.; Balcam, S.; Needler, L.; Beavis, A. W.; Saunderson, J. R.

    2014-05-01

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric given that it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that resulting clinical image quality will be adequate for the required clinical task. However, this must be done with close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma.

  8. Ice Sheet Temperature Records - Satellite and In Situ Data from Antarctica and Greenland

    NASA Astrophysics Data System (ADS)

    Shuman, C. A.; Comiso, J. C.

    2001-12-01

    Recently completed decadal-length surface temperature records from Antarctica and Greenland are providing insights into the challenge of detecting climate change. Ice and snow cover at high latitudes influence the global climate system by reflecting much of the incoming solar energy back to space. An expected consequence of global warming is a decrease in the area covered by snow and ice and an increase in Earth's absorption of solar radiation. Models have predicted that the effects of climate warming may be amplified at high latitudes; thinning of the Greenland ice sheet margins and the breakup of Antarctic Peninsula ice shelves suggest this process may have begun. Satellite data provide an excellent means of observing climate parameters across both long temporal and remote spatial domains, but calibration and validation of these data remain a challenge. Infrared sensors can provide excellent temperature information, but cloud cover and calibration remain problems. Passive-microwave sensors can obtain data during the long polar night and through clouds but have calibration issues and a much lower spatial resolution. Automatic weather stations are generally spatially and temporally restricted and may have long gaps due to equipment failure. Stable isotopes of oxygen and hydrogen from ice sheet locations provide another means of determining temperature variations with time, but are challenging to calibrate against observed temperatures and also represent restricted areas. This presentation will discuss these issues and elaborate on the development and limitations of composite satellite, automatic weather station, and proxy temperature data from selected sites in Antarctica and Greenland.

  9. User-friendly freehand ultrasound calibration using Lego bricks and automatic registration.

    PubMed

    Xiao, Yiming; Yan, Charles Xiao Bo; Drouin, Simon; De Nigris, Dante; Kochanowska, Anna; Collins, D Louis

    2016-09-01

    As an inexpensive, noninvasive, and portable clinical imaging modality, ultrasound (US) has been widely employed in many interventional procedures for monitoring potential tissue deformation, surgical tool placement, and locating surgical targets. These applications require a spatial mapping between 2D US images and the 3D coordinates of the patient. Although the positions of the devices (i.e., the ultrasound transducer) and the patient can be easily recorded by a motion tracking system, the spatial relationship between the US image and the tracker attached to the US transducer needs to be estimated through a US calibration procedure. Previously, various calibration techniques have been proposed, in which a spatial transformation is computed to match the coordinates of corresponding features in a physical phantom with those seen in the US scans. However, most of these methods are difficult for novice users. We propose an ultrasound calibration method that constructs a phantom from simple Lego bricks and applies an automated multi-slice 2D-3D registration scheme without volumetric reconstruction. The method was validated for its calibration accuracy and reproducibility. Our method yields a calibration accuracy of [Formula: see text] mm and a calibration reproducibility of 1.29 mm. We have proposed a robust, inexpensive, and easy-to-use ultrasound calibration method.

  10. AISLE: an automatic volumetric segmentation method for the study of lung allometry.

    PubMed

    Ren, Hongliang; Kazanzides, Peter

    2011-01-01

    We developed a fully automatic segmentation method for volumetric CT (computed tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of the volumetric overlap metric, compared with the ground-truth segmentation performed by a radiologist.
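
A volumetric overlap comparison against a ground-truth mask can be sketched as follows (Jaccard and Dice are the two usual conventions; which one the paper reports is not specified here, and voxel masks are represented as index sets for brevity):

```python
def volumetric_overlap(seg, ref):
    """Jaccard and Dice coefficients for binary masks given as sets of voxel indices."""
    inter = len(seg & ref)
    union = len(seg | ref)
    jaccard = inter / union if union else 1.0
    dice = 2 * inter / (len(seg) + len(ref)) if (seg or ref) else 1.0
    return jaccard, dice
```

For two masks of four voxels each sharing two voxels, the Jaccard index is 1/3 and the Dice coefficient is 0.5; identical masks score 1.0 on both.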

  11. Calibration of the NASA GRC 16 In. Mass-Flow Plug

    NASA Technical Reports Server (NTRS)

    Davis, David O.; Friedlander, David J.; Saunders, J. David; Frate, Franco C.; Foster, Lancert E.

    2012-01-01

    The results of an experimental calibration of the NASA Glenn Research Center 16 in. Mass-Flow Plug (MFP) are presented and compared to a previously obtained calibration of a 15 in. Mass-Flow Plug. An ASME low-beta, long-radius nozzle was used as the calibration reference. The discharge coefficient for the ASME nozzle was obtained by numerically simulating the flow through the nozzle with the WIND-US code. The results showed agreement between the 15 in. and 16 in. MFPs for area ratios (MFP to pipe area ratio) greater than 0.6, but deviations at area ratios below this value for reasons that are not fully understood. A general uncertainty analysis was also performed and indicates that large uncertainties in the calibration are present at low MFP area ratios.
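
The discharge coefficient being calibrated is the ratio of measured to ideal mass flow; a minimal sketch for an isentropic, choked nozzle with air (this textbook closed form is a stand-in for the WIND-US simulation used in the paper):

```python
import math

def ideal_choked_mdot(area, p0, t0, gamma=1.4, r_gas=287.05):
    """Ideal (isentropic, choked) nozzle mass flow for air in SI units:
    mdot = A * p0 * sqrt(gamma / (R * T0)) * (2 / (gamma + 1))^((gamma + 1) / (2 (gamma - 1)))."""
    term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return area * p0 * math.sqrt(gamma / (r_gas * t0)) * term

def discharge_coefficient(mdot_measured, area, p0, t0):
    """Cd = measured mass flow / ideal mass flow."""
    return mdot_measured / ideal_choked_mdot(area, p0, t0)
```

For example, a 0.01 m^2 throat at 200 kPa and 300 K passes about 4.67 kg/s ideally, so a measured 4.0 kg/s corresponds to Cd of roughly 0.86.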

  12. Calibration of the NASA Glenn Research Center 16 in. Mass-Flow Plug

    NASA Technical Reports Server (NTRS)

    Davis, David O.; Friedlander, David J.; Saunders, J. David; Frate, Franco C.; Foster, Lancert E.

    2014-01-01

    The results of an experimental calibration of the NASA Glenn Research Center 16 in. Mass-Flow Plug (MFP) are presented and compared to a previously obtained calibration of a 15 in. Mass-Flow Plug. An ASME low-beta, long-radius nozzle was used as the calibration reference. The discharge coefficient for the ASME nozzle was obtained by numerically simulating the flow through the nozzle with the WIND-US code. The results showed agreement between the 15 and 16 in. MFPs for area ratios (MFP to pipe area ratio) greater than 0.6, but deviations at area ratios below this value for reasons that are not fully understood. A general uncertainty analysis was also performed and indicates that large uncertainties in the calibration are present at low MFP area ratios.

  13. Soil Moisture Active Passive (SMAP) L-Band Microwave Radiometer Post-Launch Calibration

    NASA Technical Reports Server (NTRS)

    Peng, Jinzheng; Piepmeier, Jeffrey R.; Misra, Sidharth; Dinnat, Emmanuel P.; Hudson, Derek; Le Vine, David M.; De Amici, Giovanni; Mohammed, Priscilla N.; Yueh, Simon H.; Meissner, Thomas

    2016-01-01

    The SMAP microwave radiometer is a fully-polarimetric L-band radiometer flown on the SMAP satellite in a 6 AM/6 PM sun-synchronous orbit at 685 km altitude. Since April 2015, the radiometer has been under calibration and validation to assess the quality of the radiometer L1B data product. Calibration methods, including the SMAP L1B TA2TB (from Antenna Temperature (TA) to the Earth's surface Brightness Temperature (TB)) algorithm and TA forward models, are outlined, and approaches to validating calibration stability and quality are described in this paper, including future work. Results show that the current radiometer L1B data satisfies its requirements.

  14. Soil Moisture ActivePassive (SMAP) L-Band Microwave Radiometer Post-Launch Calibration

    NASA Technical Reports Server (NTRS)

    Peng, Jinzheng; Piepmeier, Jeffrey R.; Misra, Sidharth; Dinnat, Emmanuel P.; Hudson, Derek; Le Vine, David M.; De Amici, Giovanni; Mohammed, Priscilla N.; Yueh, Simon H.; Meissner, Thomas

    2016-01-01

    The SMAP microwave radiometer is a fully-polarimetric L-band radiometer flown on the SMAP satellite in a 6 AM/6 PM sun-synchronous orbit at 685 km altitude. Since April 2015, the radiometer has been under calibration and validation to assess the quality of the radiometer L1B data product. Calibration methods, including the SMAP L1B TA2TB (from Antenna Temperature (TA) to the Earth's surface Brightness Temperature (TB)) algorithm and TA forward models, are outlined, and approaches to validating calibration stability and quality are described in this paper, including future work. Results show that the current radiometer L1B data satisfies its requirements.

  15. Enhanced Image-Aided Navigation Algorithm with Automatic Calibration and Affine Distortion Prediction

    DTIC Science & Technology

    2012-03-01


  16. Automatic Web-based Calibration of Network-Capable Shipboard Sensors

    DTIC Science & Technology

    2007-09-01


  17. Testing methods and techniques: Environmental testing: A compilation

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Various devices and techniques are described for testing hardware and components in four special environments: low temperature, high temperature, high pressure, and vibration. Items ranging from an automatic calibrator for pressure transducers to a fixture for testing the susceptibility of materials to ignition by electric spark are included.

  18. 7 CFR 802.1 - Qualified laboratories.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... certification program having auditing capability is automatically approved by the Service. (2) Any county or... calibration is approved by the Service. The State approval may be documented by a certificate or letter. The.... (Approved by the Office of Management and Budget under control number 0580-0011) [51 FR 7052, Feb. 28, 1986...

  19. One-day offset in daily hydrologic modeling: An exploration of the issue in automatic model calibration

    USDA-ARS?s Scientific Manuscript database

    The literature of daily hydrologic modelling illustrates that daily simulation models are incapable of accurately representing hydrograph timing due to relationships between precipitation and watershed hydrologic response. For watersheds with a time of concentration less than 24 hrs and a late day p...

  20. Calibration of automatic performance measures - speed and volume data : volume 1, evaluation of the accuracy of traffic volume counts collected by microwave sensors.

    DOT National Transportation Integrated Search

    2015-09-01

    Over the past few years, the Utah Department of Transportation (UDOT) has developed a system called the : Signal Performance Metrics System (SPMS) to evaluate the performance of signalized intersections. This system : currently provides data summarie...

  1. Fully automated, real-time 3D ultrasound segmentation to estimate first trimester placental volume using deep learning.

    PubMed

    Looney, Pádraig; Stevenson, Gordon N; Nicolaides, Kypros H; Plasencia, Walter; Molloholli, Malid; Natsis, Stavros; Collins, Sally L

    2018-06-07

    We present a new technique to fully automate the segmentation of an organ from 3D ultrasound (3D-US) volumes, using the placenta as the target organ. Image analysis tools to estimate organ volume do exist but are too time consuming and operator dependent. Fully automating the segmentation process would potentially allow the use of placental volume to screen for increased risk of pregnancy complications. The placenta was segmented from 2,393 first trimester 3D-US volumes using a semiautomated technique. This was quality controlled by three operators to produce the "ground-truth" data set. A fully convolutional neural network (OxNNet) was trained using this ground-truth data set to automatically segment the placenta. OxNNet delivered state-of-the-art automatic segmentation. The effect of training set size on the performance of OxNNet demonstrated the need for large data sets. The clinical utility of placental volume was tested by looking at predictions of small-for-gestational-age (SGA) babies at term. The receiver operating characteristic curves demonstrated almost identical results between OxNNet and the ground truth. Our results demonstrated good similarity to the ground truth and almost identical clinical results for the prediction of SGA.

  2. Evaluating Statistical Process Control (SPC) techniques and computing the uncertainty of force calibrations

    NASA Technical Reports Server (NTRS)

    Navard, Sharon E.

    1989-01-01

    In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to the calibrating standard's uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is a machine that performs automatic force calibrations. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified where possible. Suggestions for improvement are made.
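
The four-to-one requirement and the combination of quantified error sources can be sketched as follows (the RSS combination assumes independent components; function names are illustrative, not from the report):

```python
import math

def combined_uncertainty(components):
    """Root-sum-square combination of independent standard uncertainty components."""
    return math.sqrt(sum(c * c for c in components))

def meets_four_to_one(instrument_tolerance, standard_uncertainty):
    """Check the four-to-one ratio of instrument specification to calibrating-standard uncertainty."""
    return instrument_tolerance / standard_uncertainty >= 4.0
```

For instance, independent components of 3 and 4 units combine to 5; a 1.0-unit instrument tolerance then satisfies the four-to-one rule only if the standard's combined uncertainty is at most 0.25 units.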

  3. Automatic Molar Extraction from Dental Panoramic Radiographs for Forensic Personal Identification

    NASA Astrophysics Data System (ADS)

    Samopa, Febriliyan; Asano, Akira; Taguchi, Akira

    Measurement of an individual molar provides rich information for forensic personal identification. We propose a computer-based system for extracting an individual molar from dental panoramic radiographs. A molar is obtained by extracting the region of interest, separating the maxilla and mandible, and extracting the boundaries between teeth. The proposed system is almost fully automatic; all the user has to do is click three points on the boundary between the maxilla and the mandible.

  4. Tracking Retreat of the North Seasonal Ice Cap on Mars: Results from the THEMIS Investigation

    NASA Technical Reports Server (NTRS)

    Ivanov, A. B.; Wagstaff, K. L.; Titus, T. N.

    2005-01-01

    The CO2 ice caps on Mars advance and retreat with the seasons. This phenomenon was first observed by Cassini and then confirmed by numerous ground-based observations in the 19th and 20th centuries. With the advent of the space age, observations of the seasonal ice cap were made by all orbiting spacecraft, starting with Mariner 7. The Viking Orbiters and, more recently, the Mars Global Surveyor (particularly the Mars Orbiter Camera (MOC) and Thermal Emission Spectrometer (TES) instruments) have accumulated significant data on the retreat of the CO2 seasonal cap. During Mars year 2 of THEMIS operations at Mars, we planned an observational campaign in which the THEMIS instrument (onboard the Mars Odyssey spacecraft) repeatedly observed the north seasonal polar cap from midwinter to late spring. THEMIS allows simultaneous observations in both thermal IR (12.57 µm) and visible (0.65 µm) wavelengths. One of the goals of this work is to initiate an interannual program of observations of the seasonal ice caps using the THEMIS instrument. The most efficient way to detect the edge between frost and bare ground is directly onboard the spacecraft. Prior to the onboard software design effort, we developed two ground-based algorithms for automatically finding the edge of the seasonal polar cap in THEMIS IR data. The first algorithm relies on fully calibrated data and can be used for highly reliable ground-based analyses. The second method was specifically developed for processing raw, uncalibrated data in a highly efficient way. It has the potential to enable automatic, onboard detection of the seasonal cap retreat. We have experimentally confirmed that both methods produce similar results, and we have validated both methods against a model constructed from MGS TES data from the same season.

  5. Automatic detection of spiculation of pulmonary nodules in computed tomography images

    NASA Astrophysics Data System (ADS)

    Ciompi, F.; Jacobs, C.; Scholten, E. T.; van Riel, S. J.; W. Wille, M. M.; Prokop, M.; van Ginneken, B.

    2015-03-01

    We present a fully automatic method for the assessment of spiculation of pulmonary nodules in low-dose computed tomography (CT) images. Spiculation is considered one of the indicators of nodule malignancy and an important feature to assess in order to decide on a patient-tailored follow-up procedure. For this reason, a lung cancer screening scenario would benefit from a fully automatic system for the assessment of spiculation. The presented framework relies on the fact that spiculated nodules differ from non-spiculated ones mainly in their morphology. In order to discriminate the two categories, information on morphology is captured by sampling intensity profiles along circular patterns on spherical surfaces centered on the nodule, in a multi-scale fashion. Each intensity profile is interpreted as a periodic signal to which the Fourier transform is applied, yielding a spectrum. A library of spectra is created by clustering the data via unsupervised learning. The centroids of the clusters are then used to label each spectrum in the sampling pattern. A compact descriptor encoding the nodule morphology is obtained as the histogram of labels over all the spherical surfaces and used to classify spiculated nodules via supervised learning. We tested our approach on a set of nodules from the Danish Lung Cancer Screening Trial (DLCST) dataset. Our results show that the proposed method outperforms other 3-D morphology descriptors in the automatic assessment of spiculation.
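
    The descriptor construction can be sketched as follows (a simplified, assumption-laden rendition: plain DFT magnitudes, nearest-centroid labeling, and a label histogram; the authors' actual features and clustering are richer):

```python
# Bag-of-spectra sketch: each circular intensity profile becomes a
# Fourier magnitude spectrum, each spectrum is labeled by its nearest
# cluster centroid, and the nodule descriptor is the label histogram.
import cmath

def spectrum(profile):
    """Magnitude spectrum of a periodic intensity profile (plain DFT)."""
    n = len(profile)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(profile))) / n
            for k in range(n // 2 + 1)]

def nearest_centroid(spec, centroids):
    """Index of the closest centroid (squared Euclidean distance)."""
    return min(range(len(centroids)),
               key=lambda c: sum((a - b) ** 2
                                 for a, b in zip(spec, centroids[c])))

def descriptor(profiles, centroids):
    """Histogram of centroid labels over all sampled profiles."""
    hist = [0] * len(centroids)
    for p in profiles:
        hist[nearest_centroid(spectrum(p), centroids)] += 1
    return hist

# Two toy profiles; use their own spectra as the "library" centroids.
profiles = [[1, 0, 1, 0], [1, 1, 1, 1]]
centroids = [spectrum(p) for p in profiles]
hist = descriptor(profiles, centroids)
```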

  6. A Laser-Based Measuring System for Online Quality Control of Car Engine Block

    PubMed Central

    Li, Xing-Qiang; Wang, Zhong; Fu, Lu-Hua

    2016-01-01

    For online quality control of car engine production, the pneumatic measurement instrument plays an irreplaceable role in measuring diameters inside the engine block because of its portability and high accuracy. Owing to the limitation of its measuring principle, however, the working space between the pneumatic device and the measured surface is so small that manual operation is required. This lowers the measuring efficiency and becomes an obstacle to automatic measurement. In this article, a high-speed, automatic measuring system is proposed to take the place of pneumatic devices by using a laser-based measuring unit. The measuring unit is considered as a set of several measuring modules, where each of them acts like a single bore gauge and is made of four laser triangulation sensors (LTSs), installed at different positions and in opposite directions. The spatial relationship among these LTSs was calibrated before measurement. Sampling points from the measured shaft holes can be collected by the measuring unit. A unified mathematical model was established for both calibration and measurement. Based on the established model, the relative pose between the measuring unit and the measured workpiece does not affect the measuring accuracy. This frees the measuring unit from accurate positioning or adjustment, and makes it possible to realize fast and automatic measurement. The proposed system and method were validated by experiments. PMID:27834839
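
    One plausible reading of the opposed-LTS geometry (an assumption for illustration, not the paper's calibrated model) is that a bore diameter follows from two opposed range readings plus the known baseline between the sensors:

```python
# Hypothetical geometry sketch: two laser triangulation sensors face
# outward in opposite directions along one diameter of the bore. If
# their emit points are a known baseline apart, the hole diameter is
# the baseline plus the two range readings.

def bore_diameter(baseline_mm, range_a_mm, range_b_mm):
    """Diameter of a shaft hole from two opposed range readings (mm)."""
    return baseline_mm + range_a_mm + range_b_mm

d = bore_diameter(70.0, 5.12, 4.88)  # -> 80.0 mm
```

In the actual system the calibrated spatial relationship among the four LTSs plays the role of this baseline, which is why the relative pose of the unit drops out of the measurement.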

  7. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used).
To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
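
    A toy version of applying the resulting calibration, assuming the response curve has been tabulated from the artificial-star frames (the curve values below are invented), inverts a measured signal back to source brightness by interpolation:

```python
# Sketch of using a tabulated end-to-end response curve. The real
# system fits the measured response of the whole video chain; here we
# just interpolate linearly between tabulated (brightness, signal)
# points of a saturating, nonlinear response.

def invert_response(curve, signal):
    """curve: list of (brightness, measured_signal) pairs, both
    ascending. Returns the brightness producing `signal` by linear
    interpolation between tabulated points."""
    for (b0, s0), (b1, s1) in zip(curve, curve[1:]):
        if s0 <= signal <= s1:
            frac = (signal - s0) / (s1 - s0)
            return b0 + frac * (b1 - b0)
    raise ValueError("signal outside calibrated range")

# Invented saturating response measured with the artificial star:
curve = [(0.0, 0.0), (1.0, 80.0), (2.0, 120.0), (4.0, 140.0)]
brightness = invert_response(curve, 100.0)  # -> 1.5
```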

  8. POLCAL - POLARIMETRIC RADAR CALIBRATION

    NASA Technical Reports Server (NTRS)

    Vanzyl, J.

    1994-01-01

    Calibration of polarimetric radar systems is a field of research in which great progress has been made over the last few years. POLCAL (Polarimetric Radar Calibration) is a software tool intended to assist in the calibration of Synthetic Aperture Radar (SAR) systems. In particular, POLCAL calibrates Stokes matrix format data produced as the standard product by the NASA/Jet Propulsion Laboratory (JPL) airborne imaging synthetic aperture radar (AIRSAR). POLCAL was designed to be used in conjunction with data collected by the NASA/JPL AIRSAR system. AIRSAR is a multifrequency (6 cm, 24 cm, and 68 cm wavelength), fully polarimetric SAR system which produces 12 x 12 km imagery at 10 m resolution. AIRSAR was designed as a testbed for NASA's Spaceborne Imaging Radar program. While the images produced after 1991 are thought to be calibrated (phase calibrated, cross-talk removed, channel imbalance removed, and absolutely calibrated), POLCAL can and should still be used to check the accuracy of the calibration and to correct it if necessary. Version 4.0 of POLCAL is an upgrade of POLCAL version 2.0, released to AIRSAR investigators in June 1990. New options in version 4.0 include automatic absolute calibration of 89/90 data, distributed target analysis, calibration of nearby scenes with calibration parameters from a scene with corner reflectors, altitude or roll angle corrections, and calibration of errors introduced by known topography. Many sources of error can lead to false conclusions about the nature of scatterers on the surface. Errors in the phase relationship between polarization channels result in incorrect synthesis of polarization states. Cross-talk, caused by imperfections in the radar antenna itself, can also lead to error. POLCAL reduces cross-talk and corrects phase calibration without the use of ground calibration equipment. Removing the antenna patterns during SAR processing also forms a very important part of the calibration of SAR data.
Errors in the processing altitude or in the aircraft roll angle are possible causes of error in computing the antenna patterns inside the processor. POLCAL uses an altitude error correction algorithm to correctly remove the antenna pattern from the SAR images. POLCAL also uses a topographic calibration algorithm to reduce calibration errors resulting from ground topography. By utilizing the backscatter measurements from either the corner reflectors or a well-known distributed target, POLCAL can correct the residual amplitude offsets in the various polarization channels and correct for the absolute gain of the radar system. POLCAL also gives the user the option of calibrating a scene using the calibration data from a nearby site. This allows precise calibration of all the scenes acquired on a flight line where corner reflectors were present. Construction and positioning of corner reflectors is covered extensively in the program documentation. In an effort to keep the POLCAL code as transportable as possible, the authors eliminated all interactions with a graphics display system. For this reason, it is assumed that users will have their own software for doing the following: (1) synthesize an image using HH or VV polarization, (2) display the synthesized image on any display device, and (3) read the pixel locations of the corner reflectors from the image. The only inputs used by the software (in addition to the input Stokes matrix data file) is a small data file with the corner reflector information. POLCAL is written in FORTRAN 77 for use on Sun series computers running SunOS and DEC VAX computers running VMS. It requires 4Mb of RAM under SunOS and 3.7Mb of RAM under VMS for execution. The standard distribution medium for POLCAL is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format or on a TK50 tape cartridge in DEC VAX FILES-11 format. 
Other distribution media may be available upon request. Documentation is included in the price of the program. POLCAL 4.0 was released in 1992 and is a copyrighted work with all copyright vested in NASA.
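
    The polarization-state synthesis that this calibration protects can be sketched with the standard Stokes-matrix relation P = 0.5 * s_r^T M s_t (matrix and vector values below are illustrative, not AIRSAR data):

```python
# Polarization synthesis from Stokes-matrix format data: the received
# power for chosen transmit/receive polarization states follows from
# the 4x4 Stokes (Kennaugh) matrix of each pixel. Miscalibrated phase
# or cross-talk corrupts M and hence every synthesized state.

def synthesize_power(M, s_t, s_r):
    """Received power P = 0.5 * s_r^T M s_t for transmit/receive
    Stokes vectors s_t, s_r and a 4x4 Stokes matrix M."""
    Ms_t = [sum(M[i][j] * s_t[j] for j in range(4)) for i in range(4)]
    return 0.5 * sum(s_r[i] * Ms_t[i] for i in range(4))

# With an identity Stokes matrix and matching horizontal states:
p_hh = synthesize_power([[float(i == j) for j in range(4)] for i in range(4)],
                        [1, 1, 0, 0], [1, 1, 0, 0])
```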

  9. A calibration hierarchy for risk models was defined: from utopia to empirical data.

    PubMed

    Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W

    2016-06-01

    Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equal the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects would have to be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive, as it stimulates the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
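
    The weaker calibration levels can be checked with simple statistics; a minimal sketch (a crude grouped check rather than the smoothed, flexible assessment the paper discusses):

```python
# Mean calibration compares the overall observed event rate with the
# average predicted risk; a grouped check approximates moderate
# calibration ("event rate of R% among patients with predicted risk R%").

def mean_calibration(y, p):
    """Observed event rate minus mean predicted risk (0 = calibrated
    in the large)."""
    return sum(y) / len(y) - sum(p) / len(p)

def grouped_calibration(y, p, edges):
    """(mean predicted risk, observed event rate) per risk band,
    for band edges given in ascending order."""
    out = []
    for lo, hi in zip(edges, edges[1:]):
        idx = [i for i, pi in enumerate(p) if lo <= pi < hi]
        if idx:
            out.append((sum(p[i] for i in idx) / len(idx),
                        sum(y[i] for i in idx) / len(idx)))
    return out
```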

  10. Integrated surface-subsurface model to investigate the role of groundwater in headwater catchment runoff generation: A minimalist approach to parameterisation

    NASA Astrophysics Data System (ADS)

    Ala-aho, Pertti; Soulsby, Chris; Wang, Hailong; Tetzlaff, Doerthe

    2017-04-01

    Understanding the role of groundwater for runoff generation in headwater catchments is a challenge in hydrology, particularly so in data-scarce areas. Fully-integrated surface-subsurface modelling has shown potential in increasing process understanding for runoff generation, but high data requirements and difficulties in model calibration are typically assumed to preclude their use in catchment-scale studies. We used a fully integrated surface-subsurface hydrological simulator to enhance groundwater-related process understanding in a headwater catchment with a rich background in empirical data. To set up the model we used minimal data that could be reasonably expected to exist for any experimental catchment. A novel aspect of our approach was in using simplified model parameterisation and including parameters from all model domains (surface, subsurface, evapotranspiration) in automated model calibration. Calibration aimed not only to improve model fit, but also to test the information content of the observations (streamflow, remotely sensed evapotranspiration, median groundwater level) used in calibration objective functions. We identified sensitive parameters in all model domains (subsurface, surface, evapotranspiration), demonstrating that model calibration should be inclusive of parameters from these different model domains. Incorporating groundwater data in calibration objectives improved the model fit for groundwater levels, but simulations did not reproduce well the remotely sensed evapotranspiration time series even after calibration. Spatially explicit model output improved our understanding of how groundwater functions in maintaining streamflow generation primarily via saturation excess overland flow. Steady groundwater inputs created saturated conditions in the valley bottom riparian peatlands, leading to overland flow even during dry periods. 
Groundwater on the hillslopes was more dynamic in its response to rainfall, acting to expand the saturated area extent and thereby promoting saturation excess overland flow during rainstorms. Our work shows the potential of using integrated surface-subsurface modelling alongside rigorous model calibration to better understand and visualise the role of groundwater in runoff generation even with limited datasets.
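
    A calibration objective that is inclusive of the three data sources can be sketched as a weighted multi-source score; the weights and the choice of Nash-Sutcliffe efficiency here are our assumptions for illustration, not the study's exact objective functions:

```python
# Weighted multi-objective calibration score over the three observation
# types used in the study: streamflow, remotely sensed ET, and
# groundwater level. Each item is an (observed, simulated) pair of
# equal-length series.

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def combined_objective(flow, et, gw, weights=(0.5, 0.25, 0.25)):
    """Weighted sum of per-domain NSE scores (weights are assumed)."""
    scores = [nse(*flow), nse(*et), nse(*gw)]
    return sum(w * s for w, s in zip(weights, scores))
```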

  11. Estimation of Tree Position and STEM Diameter Using Simultaneous Localization and Mapping with Data from a Backpack-Mounted Laser Scanner

    NASA Astrophysics Data System (ADS)

    Holmgren, J.; Tulldahl, H. M.; Nordlöf, J.; Nyström, M.; Olofsson, K.; Rydell, J.; Willén, E.

    2017-10-01

    A system was developed for automatic estimation of tree positions and stem diameters. The sensor trajectory was first estimated using a positioning system that consists of a low-precision inertial measurement unit supported by image matching with data from a stereo camera. The initial estimate of the sensor trajectory was then calibrated by adjusting the sensor pose using the laser scanner data. Special features suitable for forest environments were used to solve the correspondence and matching problems. Tree stem diameters were estimated for stem sections using laser data from individual scanner rotations and were then used for calibration of the sensor pose. A segmentation algorithm was used to associate stem sections with individual tree stems. The stem diameter estimates of all stem sections associated with the same tree stem were then combined for estimation of stem diameter at breast height (DBH). The system was validated on four 20 m radius circular plots, and manually measured trees were automatically linked to trees detected in the laser data. The DBH could be estimated with an RMSE of 19 mm (6 %) and a bias of 8 mm (3 %). The calibrated sensor trajectory and the combined use of circle fits from individual scanner rotations made it possible to obtain reliable DBH estimates even with a low-precision positioning system.
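
    The per-rotation circle fits can be sketched with an algebraic (Kasa) least-squares fit; the choice of estimator here is ours for illustration, not necessarily the one used in the system:

```python
# Kasa circle fit: points (x, y) on a stem cross-section satisfy
# x^2 + y^2 = 2ax + 2by + c, which is linear in (a, b, c). Solving the
# normal equations gives center (a, b) and radius sqrt(c + a^2 + b^2).
import math

def fit_circle(points):
    """Least-squares circle through 2-D points; returns (cx, cy, r)."""
    A = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y in points:
        row = (2 * x, 2 * y, 1.0)
        z = x * x + y * y
        for i in range(3):
            for j in range(3):
                A[i][j] += row[i] * row[j]
            rhs[i] += row[i] * z
    # Solve the 3x3 normal equations by Gauss-Jordan elimination.
    for col in range(3):
        piv = max(range(col, 3), key=lambda k: abs(A[k][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for k in range(3):
            if k != col:
                f = A[k][col] / A[col][col]
                A[k] = [ak - f * ac for ak, ac in zip(A[k], A[col])]
                rhs[k] -= f * rhs[col]
    a, b, c = (rhs[i] / A[i][i] for i in range(3))
    return a, b, math.sqrt(c + a * a + b * b)

# Laser returns on a stem of radius 0.1 m (DBH 0.2 m) centered at (1, 2):
pts = [(1 + 0.1 * math.cos(t), 2 + 0.1 * math.sin(t))
       for t in [0.3 * k for k in range(12)]]
cx, cy, r = fit_circle(pts)
```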

  12. Portable precision dc voltage-current transfer standard for electrometer calibration

    USGS Publications Warehouse

    Landis, G.; Godwin, M.

    1982-01-01

    A circuit design is presented for an instrument providing a highly stable and fully adjustable voltage and current in the ranges 0-1.999 V or 0-199.9 mV and 10⁻¹¹ to 10⁻¹⁵ A. This instrument is used to verify the calibration and performance of dc and vibrating-reed electrometers and chart recorders on mass spectrometers of the USGS Isotope Laboratories in Denver.

  13. Fully automated MR liver volumetry using watershed segmentation coupled with active contouring.

    PubMed

    Huynh, Hieu Trung; Le-Trong, Ngoc; Bao, Pham The; Oto, Aytek; Suzuki, Kenji

    2017-02-01

    Our purpose is to develop a fully automated scheme for liver volume measurement in abdominal MR images, without requiring any user input or interaction. The proposed scheme is fully automatic for liver volumetry from 3D abdominal MR images, and it consists of three main stages: preprocessing, rough liver shape generation, and liver extraction. The preprocessing stage reduced noise and enhanced the liver boundaries in 3D abdominal MR images. The rough liver shape was revealed fully automatically by using the watershed segmentation, thresholding transform, morphological operations, and statistical properties of the liver. An active contour model was applied to refine the rough liver shape to precisely obtain the liver boundaries. The liver volumes calculated by the proposed scheme were compared to the "gold standard" references which were estimated by an expert abdominal radiologist. The liver volumes computed by using our developed scheme excellently agreed (Intra-class correlation coefficient was 0.94) with the "gold standard" manual volumes by the radiologist in the evaluation with 27 cases from multiple medical centers. The running time was 8.4 min per case on average. We developed a fully automated liver volumetry scheme in MR, which does not require any interaction by users. It was evaluated with cases from multiple medical centers. The liver volumetry performance of our developed system was comparable to that of the gold standard manual volumetry, and it saved radiologists' time for manual liver volumetry of 24.7 min per case.

  14. An image registration based ultrasound probe calibration

    NASA Astrophysics Data System (ADS)

    Li, Xin; Kumar, Dinesh; Sarkar, Saradwata; Narayanan, Ram

    2012-02-01

    Reconstructed 3D ultrasound of the prostate gland finds application in several medical areas such as image-guided biopsy, therapy planning, and dose delivery. In our application, we use an end-fire probe rotated about its axis to acquire a sequence of rotational slices to reconstruct a 3D TRUS (transrectal ultrasound) image. The image acquisition system consists of an ultrasound transducer situated on a cradle directly attached to a rotational sensor. However, due to system tolerances, the axis of the probe does not align exactly with the designed axis of rotation, resulting in artifacts in the 3D reconstructed ultrasound volume. We present a rigid-registration-based automatic probe calibration approach. The method uses a sequence of phantom images, each pair acquired at an angular separation of 180 degrees, and registers corresponding image pairs to compute the deviation from the designed axis. A modified shadow removal algorithm is applied for preprocessing. An attribute vector is constructed from image intensity and a speckle-insensitive information-theoretic feature. We compare the corrections obtained by the presented method with expert-corrected images in 16 prostate phantom scans. Images were acquired at multiple resolutions and different misalignment settings from two ultrasound machines. Screenshots from the 3D reconstruction are shown before and after misalignment correction. Registration parameters from automatic and manual correction were found to be in good agreement. Average absolute differences of translation and rotation between the automatic and manual methods were 0.27 mm and 0.65 degree, respectively. The registration parameters also showed lower variability for automatic registration (pooled standard deviation σtranslation = 0.50 mm, σrotation = 0.52 degree) compared to the manual approach (pooled standard deviation σtranslation = 0.62 mm, σrotation = 0.78 degree).
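
    The 180-degree-pair idea rests on a simple geometric fact: if the true rotation axis is offset from the designed axis by (dx, dy), an image and its 180-degree counterpart, once one is rotated into the frame of the other, disagree by twice that offset. A sketch of that final step (an assumption-level simplification, not the paper's full model):

```python
# Rigid registration of a 180-degree-apart image pair measures the
# disagreement translation; halving it recovers the axis deviation.
# (A point p seen about true center c maps to 2c - p in the second
# image; rotating back about the designed center d leaves p - 2(c - d),
# i.e. a translation of twice the axis offset.)

def axis_deviation(reg_translation):
    """Axis offset (same units as the translation) from the translation
    found by registering a 180-degree-apart image pair."""
    tx, ty = reg_translation
    return (tx / 2.0, ty / 2.0)

dev = axis_deviation((0.54, -0.20))  # e.g. in mm
```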

  15. Laser Balancing

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Mechanical Technology, Incorporated developed a fully automatic laser machining process that allows more precise balancing, removes metal faster, eliminates excess metal removal and other operator-induced inaccuracies, and provides a significant reduction in balancing time. Manufacturing costs are reduced as a result.

  16. A new spectroscopic calibration to determine Teff and [Fe/H] of FGK dwarfs and giants

    NASA Astrophysics Data System (ADS)

    Teixeira, G. D. C.; Sousa, S. G.; Tsantaki, M.; Monteiro, M. J. P. F. G.; Santos, N. C.; Israelian, G.

    2017-10-01

    We present a new spectroscopic calibration for a fast estimate of Teff and [Fe/H] for FGK dwarfs and GK giant stars. We used spectra from a joint sample of 708 stars, composed of 451 FGK dwarfs and 257 GK giants with homogeneously determined spectroscopic stellar parameters. We derived 322 EW line-ratios and 100 FeI lines that can be used to compute Teff and [Fe/H], respectively. We show that these calibrations are effective for FGK dwarfs and GK giants in the following ranges: 4500 K < Teff < 6500 K, 2.5 < log g < 4.9 dex, and -0.8 < [Fe/H] < 0.5 dex. The new calibration has a standard deviation of 74 K for Teff and 0.07 dex for [Fe/H]. We use four independent samples of stars to test and verify the new calibration: a sample of giant stars, a sample composed of Gaia FGK benchmark stars, a sample of GK giants from the DR1 of the Gaia-ESO survey, and a sample of FGK dwarfs. We present a new computer code, GeTCal, for automatically producing new calibration files based on any new sample of stars.

  17. Data analysis and calibration for a bulk-refractive-index-compensated surface plasmon resonance affinity sensor

    NASA Astrophysics Data System (ADS)

    Chinowsky, Timothy M.; Yee, Sinclair S.

    2002-02-01

    Surface plasmon resonance (SPR) affinity sensing, the problem of bulk refractive index (RI) interference in SPR sensing, and a sensor developed to overcome this problem are briefly reviewed. The sensor uses a design based on Texas Instruments' Spreeta SPR sensor to simultaneously measure both bulk and surface RI. The bulk RI measurement is then used to compensate the surface measurement and remove the effects of bulk RI interference. To achieve accurate compensation, robust data analysis and calibration techniques are necessary. Simple linear data analysis techniques derived from measurements of the sensor response were found to provide a versatile, low noise method for extracting measurements of bulk and surface refractive index from the raw sensor data. Automatic calibration using RI gradients was used to correct the linear estimates, enabling the sensor to produce accurate data even when the sensor has a complicated nonlinear response which varies with time. The calibration procedure is described, and the factors influencing calibration accuracy are discussed. Data analysis and calibration principles are illustrated with an experiment in which sucrose and detergent solutions are used to produce changes in bulk and surface RI, respectively.

  18. Cloud Computing with Context Cameras

    NASA Astrophysics Data System (ADS)

    Pickles, A. J.; Rosing, W. E.

    2016-05-01

    We summarize methods and plans to monitor and calibrate photometric observations with our autonomous, robotic network of 2m, 1m and 40cm telescopes. These are sited globally to optimize our ability to observe time-variable sources. Wide-field "context" cameras are aligned with our network telescopes and cycle every ~2 minutes through BVr'i'z' filters, spanning our optical range. We measure instantaneous zero-point offsets and transparency (throughput) against calibrators in the 5-12 mag range from the all-sky Tycho2 catalog, and periodically against primary standards. Similar measurements are made for all our science images, with typical fields of view of ~0.5 degrees. These are matched against Landolt, Stetson and Sloan standards, and against calibrators in the 10-17 mag range from the all-sky APASS catalog. Such measurements provide good instantaneous flux calibration, often to better than 5%, even in cloudy conditions. Zero-point and transparency measurements can be used to characterize, monitor and inter-compare sites and equipment. When accurate calibrations of target against standard fields are required, the monitoring measurements can be used to select truly photometric periods during which accurate calibrations can be automatically scheduled and performed.
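
    The instantaneous zero-point measurement against catalog stars can be sketched generically as follows (a standard formulation; the function and its names are ours, not the project's):

```python
# Photometric zero point: for each matched star, the offset between its
# catalog magnitude and its instrumental magnitude -2.5*log10(flux/t).
# The median over stars is robust to mismatches and variables.
import math

def zero_point(fluxes_adu, catalog_mags, exptime_s=1.0):
    """Median of catalog_mag - instrumental_mag over matched stars."""
    offsets = sorted(m + 2.5 * math.log10(f / exptime_s)
                     for f, m in zip(fluxes_adu, catalog_mags))
    n = len(offsets)
    mid = n // 2
    return offsets[mid] if n % 2 else 0.5 * (offsets[mid - 1] + offsets[mid])
```

A drop in the fitted zero point relative to its clear-sky value directly measures the transparency loss due to cloud.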

  19. Status of calibration and data evaluation of AMSR on board ADEOS-II

    NASA Astrophysics Data System (ADS)

    Imaoka, Keiji; Fujimoto, Yasuhiro; Kachi, Misako; Takeshima, Toshiaki; Igarashi, Tamotsu; Kawanishi, Toneo; Shibata, Akira

    2004-02-01

    The Advanced Microwave Scanning Radiometer (AMSR) is the multi-frequency, passive microwave radiometer on board the Advanced Earth Observing Satellite-II (ADEOS-II), currently called Midori-II. The instrument has eight frequency channels with dual polarization (except the 50-GHz band), covering frequencies between 6.925 and 89.0 GHz. Measurement at the 50-GHz channels is a first for this kind of conically scanning microwave radiometer. The basic concept of the instrument, including hardware configuration and calibration method, is almost the same as that of AMSR for EOS (AMSR-E), the modified version of AMSR. Its swath width of 1,600 km is wider than that of AMSR-E. In parallel with the calibration and data evaluation of the AMSR-E instrument, almost identical calibration activities have been carried out for the AMSR instrument. Since the initial checkout phase finished, the instrument has been continuously obtaining data on a global basis. Time series of radiometer sensitivities and automatic gain control telemetry indicate stable instrument performance. For the radiometric calibration, we are now applying the same procedure that is being used for AMSR-E. This paper provides an overview of the instrument characteristics, instrument status, and preliminary results of the calibration and data evaluation activities.

  20. Sentinel-2: State of the Image Quality Calibration at the End of the Commissioning

    NASA Astrophysics Data System (ADS)

    Tremas, Thierry; Lonjou, Vincent; Lacherade, Sophie; Gaudel-Vacaresse, Angelique; Languille, Florie

    2016-08-01

    This article summarizes the activity of CNES during the in-orbit calibration phase of Sentinel-2A, as well as the transfer of production of GIPPs (Ground Image Processing Parameters) from CNES to ESRIN. The states of the main calibration parameters and performances, a few months before the PDGS is declared fully operational, are listed and explained. In radiometry, special attention is paid to the absolute calibration using the on-board diffuser, and to vicarious calibration methods using instrumented or statistically well-characterized sites and inter-comparisons with other sensors. Regarding geometry, the presentation focuses on the performance of absolute location with and without reference points. The requirements for multi-band and multi-temporal registration are discussed. Finally, the construction and the future role of the GRI (Ground Reference Images) are explained.

  1. The Breakthrough Listen Search for Intelligent Life: Data Calibration using Pulsars

    NASA Astrophysics Data System (ADS)

    Brinkman-Traverse, Casey Lynn; Gajjar, Vishal; BSRC

    2018-01-01

    The ability to distinguish ET signals requires a deep understanding of the radio telescopes with which we search; therefore, before we observe stars of interest, the Breakthrough Listen scientists at Berkeley SETI Research Center first observe a pulsar with well-documented flux and polarization properties. Calibrating the flux and polarization by hand is a lengthy process, so we produced a pipeline code that automatically calibrates the pulsar in under an hour. Using PSRCHIVE, the code coherently dedisperses the pulsed radio signals, and then calibrates the flux using observation files with a noise diode turning on and off. The code was developed using PSR B1937+21 and is primarily used on PSR B0329+54. This will expedite the process of assessing the quality of data collected from the Green Bank Telescope in West Virginia and will allow us to more efficiently search for life beyond Earth. Additionally, the stability of the B0329+54 calibration data will allow us to analyze data taken on FRBs with confidence in their cosmic origin.
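
    The noise-diode flux step can be sketched generically (illustrative numbers and names, not the pipeline's actual interface): the diode's known equivalent flux density converts the on-off step in raw counts into a counts-per-jansky scale.

```python
# Noise-diode flux calibration sketch: the diode injects a signal of
# known equivalent flux density; the counts it adds give the scale
# factor that converts raw source counts to janskys.

def counts_to_jy(source_counts, diode_on_counts, diode_off_counts,
                 diode_flux_jy):
    """Scale raw counts by the counts/Jy factor measured from the
    noise diode's on-off step."""
    counts_per_jy = (diode_on_counts - diode_off_counts) / diode_flux_jy
    return source_counts / counts_per_jy

s = counts_to_jy(450.0, 1200.0, 900.0, 1.5)  # -> 2.25 Jy
```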

  2. SCALA: In situ calibration for integral field spectrographs

    NASA Astrophysics Data System (ADS)

    Lombardo, S.; Küsters, D.; Kowalski, M.; Aldering, G.; Antilogus, P.; Bailey, S.; Baltay, C.; Barbary, K.; Baugh, D.; Bongard, S.; Boone, K.; Buton, C.; Chen, J.; Chotard, N.; Copin, Y.; Dixon, S.; Fagrelius, P.; Feindt, U.; Fouchez, D.; Gangler, E.; Hayden, B.; Hillebrandt, W.; Hoffmann, A.; Kim, A. G.; Leget, P.-F.; McKay, L.; Nordin, J.; Pain, R.; Pécontal, E.; Pereira, R.; Perlmutter, S.; Rabinowitz, D.; Reif, K.; Rigault, M.; Rubin, D.; Runge, K.; Saunders, C.; Smadja, G.; Suzuki, N.; Taubenberger, S.; Tao, C.; Thomas, R. C.; Nearby Supernova Factory

    2017-11-01

    Aims: The scientific yield of current and future optical surveys is increasingly limited by systematic uncertainties in the flux calibration. This is the case for type Ia supernova (SN Ia) cosmology programs, where an improved calibration directly translates into improved cosmological constraints. Current methodology rests on models of stars. Here we aim to obtain a flux calibration that is traceable to state-of-the-art detector-based calibration. Methods: We present the SNIFS Calibration Apparatus (SCALA), a color (relative) flux calibration system developed for the SuperNova Integral Field Spectrograph (SNIFS), operating at the University of Hawaii 2.2 m (UH 88) telescope. Results: By comparing the color trend of the illumination generated by SCALA during two commissioning runs with previous laboratory measurements, we show that we can determine the light emitted by SCALA with a long-term repeatability better than 1%. We describe the calibration procedure necessary to control for system aging. We present measurements of the SNIFS throughput as estimated from SCALA observations. Conclusions: The SCALA calibration unit is now fully deployed at the UH 88 telescope, and with it color calibration between 4000 Å and 9000 Å is stable at the percent level over a one-year baseline.

  3. Scientific assessment of the quality of OSIRIS images

    NASA Astrophysics Data System (ADS)

    Tubiana, C.; Güttler, C.; Kovacs, G.; Bertini, I.; Bodewits, D.; Fornasier, S.; Lara, L.; La Forgia, F.; Magrin, S.; Pajola, M.; Sierks, H.; Barbieri, C.; Lamy, P. L.; Rodrigo, R.; Koschny, D.; Rickman, H.; Keller, H. U.; Agarwal, J.; A'Hearn, M. F.; Barucci, M. A.; Bertaux, J.-L.; Besse, S.; Boudreault, S.; Cremonese, G.; Da Deppo, V.; Davidsson, B.; Debei, S.; De Cecco, M.; El-Maarry, M. R.; Fulle, M.; Groussin, O.; Gutiérrez-Marques, P.; Gutiérrez, P. J.; Hoekzema, N.; Hofmann, M.; Hviid, S. F.; Ip, W.-H.; Jorda, L.; Knollenberg, J.; Kramm, J.-R.; Kührt, E.; Küppers, M.; Lazzarin, M.; Lopez Moreno, J. J.; Marzari, F.; Massironi, M.; Michalik, H.; Moissl, R.; Naletto, G.; Oklay, N.; Scholten, F.; Shi, X.; Thomas, N.; Vincent, J.-B.

    2015-11-01

    Context. OSIRIS, the scientific imaging system onboard the ESA Rosetta spacecraft, has been imaging the nucleus of comet 67P/Churyumov-Gerasimenko and its dust and gas environment since March 2014. The images serve different scientific goals, from morphology and composition studies of the nucleus surface, to the motion and trajectories of dust grains, the general structure of the dust coma, the morphology and intensity of jets, gas distribution, mass loss, and dust and gas production rates. Aims: We present the calibration of the raw images taken by OSIRIS and address the accuracy that we can expect in our scientific results based on the accuracy of the calibration steps that we have performed. Methods: We describe the pipeline that has been developed to automatically calibrate the OSIRIS images. Through a series of steps, radiometrically calibrated and distortion corrected images are produced and can be used for scientific studies. Calibration campaigns were run on the ground before launch and throughout the years in flight to determine the parameters that are used to calibrate the images and to verify their evolution with time. We describe how these parameters were determined and we address their accuracy. Results: We provide a guideline to the level of trust that can be put into the various studies performed with OSIRIS images, based on the accuracy of the image calibration.

  4. Light-Field Correction for Spatial Calibration of Optical See-Through Head-Mounted Displays.

    PubMed

    Itoh, Yuta; Klinker, Gudrun

    2015-04-01

    A critical requirement for AR applications with Optical See-Through Head-Mounted Displays (OST-HMD) is to project 3D information correctly into the current viewpoint of the user - more particularly, according to the user's eye position. Recently-proposed interaction-free calibration methods [16], [17] automatically estimate this projection by tracking the user's eye position, thereby freeing users from tedious manual calibrations. However, these methods are still prone to systematic calibration errors. Such errors stem from eye-/HMD-related factors and are not represented in the conventional eye-HMD model used for HMD calibration. This paper investigates one of these factors - the fact that optical elements of OST-HMDs distort incoming world-light rays before they reach the eye, just as corrective glasses do. Any OST-HMD requires an optical element to display a virtual screen. Each such optical element has different distortions. Since users see a distorted world through the element, ignoring this distortion degrades the projection quality. We propose a light-field correction method, based on a machine learning technique, which compensates for the world-scene distortion caused by OST-HMD optics. We demonstrate that our method reduces the systematic error and significantly increases the calibration accuracy of the interaction-free calibration.

  5. Transmittance Measurement of a Heliostat Facility used in the Preflight Radiometric Calibration of Earth-Observing Sensors

    NASA Technical Reports Server (NTRS)

    Czapla-Myers, J.; Thome, K.; Anderson, N.; McCorkel, J.; Leisso, N.; Good, W.; Collins, S.

    2009-01-01

    Ball Aerospace and Technologies Corporation in Boulder, Colorado, has developed a heliostat facility that will be used to determine the preflight radiometric calibration of Earth-observing sensors that operate in the solar-reflective regime. While automatically tracking the Sun, the heliostat directs the solar beam inside a thermal vacuum chamber, where the sensor under test resides. The main advantage of using the Sun as the illumination source for preflight radiometric calibration is that it will also be the source of illumination when the sensor is in flight. This minimizes errors in the pre- and post-launch calibration due to spectral mismatches. It also allows the instrument under test to operate at irradiance values similar to those on orbit. The Remote Sensing Group at the University of Arizona measured the transmittance of the heliostat facility using three methods, the first of which is a relative measurement made using a hyperspectral portable spectroradiometer and a well-calibrated reference panel. The second method is also a relative measurement, and uses a 12-channel automated solar radiometer. The final method is an absolute measurement using a hyperspectral spectroradiometer and reference panel combination, where the spectroradiometer is calibrated on site using a solar-radiation-based calibration.

  6. An accurate on-site calibration system for electronic voltage transformers using a standard capacitor

    NASA Astrophysics Data System (ADS)

    Hu, Chen; Chen, Mian-zhou; Li, Hong-bin; Zhang, Zhu; Jiao, Yang; Shao, Haiming

    2018-05-01

    Ordinarily, electronic voltage transformers (EVTs) are calibrated off-line, and the calibration procedure requires complex switching operations, which will influence the reliability of the power grid and induce large economic losses. To overcome this problem, this paper investigates a 110 kV on-site calibration system for EVTs, including a standard channel, a calibrated channel and a PC equipped with the LabView environment. The standard channel employs a standard capacitor and an analogue integrating circuit to reconstruct the primary voltage signal. Moreover, an adaptive full-phase discrete Fourier transform (DFT) algorithm is proposed to extract electrical parameters. The algorithm involves extracting the frequency of the grid, adjusting the operation points, and calculating the results using the DFT. In addition, an insulated automatic lifting device, driven by a wireless remote controller, is designed to realize live connection of the standard capacitor. A performance test of the capacitor verifies the accuracy of the standard capacitor. A system calibration test shows that the system ratio error is less than 0.04% and the phase error is below 2′, which meets the requirement of the 0.2 accuracy class. Finally, the developed calibration system was used in a substation, and the field test data validate the effectiveness of the system.
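
    The full-phase DFT step described in this record can be sketched as follows (an illustrative reconstruction, not the authors' implementation; all signal parameters are hypothetical): sample both channels over an integer number of grid cycles, then evaluate a single-bin DFT at the fundamental to obtain the ratio and phase errors.

```python
import numpy as np

def fundamental_phasor(signal, fs, f0):
    """Single-bin DFT: correlate the sampled signal with a complex exponential at f0."""
    n = np.arange(len(signal))
    return 2.0 / len(signal) * np.sum(signal * np.exp(-2j * np.pi * f0 * n / fs))

# Hypothetical 50 Hz signals: standard channel vs. channel under calibration,
# sampled over an integer number of cycles to avoid spectral leakage.
fs, f0, cycles = 10_000.0, 50.0, 10
t = np.arange(int(fs * cycles / f0)) / fs
standard = 1.0000 * np.sin(2 * np.pi * f0 * t)
device   = 0.9996 * np.sin(2 * np.pi * f0 * t + 3e-4)   # small ratio and phase error

phasor_std = fundamental_phasor(standard, fs, f0)
phasor_dev = fundamental_phasor(device, fs, f0)
ratio_error_pct = (abs(phasor_dev) / abs(phasor_std) - 1.0) * 100.0  # percent
phase_error_rad = np.angle(phasor_dev) - np.angle(phasor_std)        # radians
```

In practice the grid frequency is first estimated and the sampling window adjusted to it (the "adaptive" part of the algorithm), which this sketch sidesteps by sampling exact cycles.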

  7. On the dependence of information display quality requirements upon human characteristics and pilot/automatics relations

    NASA Technical Reports Server (NTRS)

    Wilckens, V.

    1972-01-01

    Present information display concepts for pilot landing guidance are outlined considering manual control as well as substitution of man by fully competent automatics. Display improvements are achieved by compressing the distributed indicators into an accumulative display and thus reducing information scanning. Complete integration of quantitative indications, outer loop information, and real world display in a pictorial information channel geometry constitutes an interface with human ability to differentiate and integrate for optimal manual control of the aircraft.

  8. SWAT Check: A screening tool to assist users in the identification of potential model application problems

    USDA-ARS?s Scientific Manuscript database

    The Soil and Water Assessment Tool (SWAT) is a basin scale hydrologic model developed by the US Department of Agriculture-Agricultural Research Service. SWAT's broad applicability, user friendly model interfaces, and automatic calibration software have led to a rapid increase in the number of new u...

  9. 40 CFR 85.2232 - Calibrations, adjustments-EPA 81.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... checks. Within one hour prior to a test, the analyzers shall be zeroed and spanned. Ambient air is acceptable as a zero gas; an electrical span check is acceptable. Zero and span checks shall be made on the lowest range capable of reading the short test standard. Analyzers that perform an automatic zero/span...

  10. Calibration of automatic performance measures - speed and volume data: volume 2, evaluation of the accuracy of approach volume counts and speeds collected by microwave sensors.

    DOT National Transportation Integrated Search

    2016-05-01

    This study evaluated the accuracy of approach volumes and free flow approach speeds collected by the Wavetronix SmartSensor Advance sensor for the Signal Performance Metrics system of the Utah Department of Transportation (UDOT), using the field ...

  11. Fully automatic assignment of small molecules' NMR spectra without relying on chemical shift predictions.

    PubMed

    Castillo, Andrés M; Bernal, Andrés; Patiny, Luc; Wist, Julien

    2015-08-01

    We present a method for the automatic assignment of small molecules' NMR spectra. The method includes an automatic and novel self-consistent peak-picking routine that validates NMR peaks in each spectrum against peaks in the same or other spectra that are due to the same resonances. The auto-assignment routine used is based on branch-and-bound optimization and relies predominantly on integration and correlation data; chemical shift information may be included when available to speed up the search and shorten the list of viable assignments, but in most cases tested, it is not required in order to find the correct assignment. This automatic assignment method is implemented as a web-based tool that runs without any user input other than the acquired spectra. Copyright © 2015 John Wiley & Sons, Ltd.

  12. Automatic Detection and Reproduction of Natural Head Position in Stereo-Photogrammetry.

    PubMed

    Hsung, Tai-Chiu; Lo, John; Li, Tik-Shun; Cheung, Lim-Kwong

    2015-01-01

    The aim of this study was to develop an automatic orientation calibration and reproduction method for recording the natural head position (NHP) in stereo-photogrammetry (SP). A board was used as the physical reference carrier for true verticals and NHP alignment mirror orientation. Orientation axes were detected and saved from the digital mesh model of the board. They were used for correcting the pitch, roll and yaw angles of the subsequent captures of patients' facial surfaces, which were obtained without any markings or sensors attached to the patient. We tested the proposed method on two commercial active (3dMD) and passive (DI3D) SP devices. The reliability of the pitch, roll and yaw for the board placement was within ±0.039904°, ±0.081623°, and ±0.062320°, where the standard deviations were 0.020234°, 0.045645° and 0.027211°, respectively. Orientation-calibrated stereo-photogrammetry is the most accurate method (angulation deviation within ±0.1°) reported for complete NHP recording with insignificant clinical error.
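
    The pitch/roll/yaw correction described above amounts to rotating each captured surface by the inverse of the rotation detected from the reference board. A minimal numpy sketch, assuming the board detection yields an orthonormal rotation matrix (all numeric values hypothetical):

```python
import numpy as np

def euler_to_matrix(pitch, roll, yaw):
    """Rotation matrix from pitch (about x), roll (about y), yaw (about z), in radians."""
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(roll),  np.sin(roll)
    cz, sz = np.cos(yaw),   np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# Hypothetical board orientation detected from the reference-board mesh.
board_rotation = euler_to_matrix(np.radians(0.03), np.radians(-0.05), np.radians(0.02))

# Correction: rotate every captured vertex by the inverse (transpose) of the
# detected board rotation so the capture is expressed in true NHP axes.
correction = board_rotation.T
vertices = np.array([[10.0, 2.0, 5.0], [0.0, 1.0, -3.0]])   # toy face-mesh points, mm
captured = vertices @ board_rotation.T                       # what the scanner records
restored = captured @ correction.T                           # NHP-aligned result
```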

  13. Automatic Detection and Reproduction of Natural Head Position in Stereo-Photogrammetry

    PubMed Central

    Hsung, Tai-Chiu; Lo, John; Li, Tik-Shun; Cheung, Lim-Kwong

    2015-01-01

    The aim of this study was to develop an automatic orientation calibration and reproduction method for recording the natural head position (NHP) in stereo-photogrammetry (SP). A board was used as the physical reference carrier for true verticals and NHP alignment mirror orientation. Orientation axes were detected and saved from the digital mesh model of the board. They were used for correcting the pitch, roll and yaw angles of the subsequent captures of patients' facial surfaces, which were obtained without any markings or sensors attached to the patient. We tested the proposed method on two commercial active (3dMD) and passive (DI3D) SP devices. The reliability of the pitch, roll and yaw for the board placement was within ±0.039904°, ±0.081623°, and ±0.062320°, where the standard deviations were 0.020234°, 0.045645° and 0.027211°, respectively. Conclusion: Orientation-calibrated stereo-photogrammetry is the most accurate method (angulation deviation within ±0.1°) reported for complete NHP recording with insignificant clinical error. PMID:26125616

  14. A Fully Sensorized Cooperative Robotic System for Surgical Interventions

    PubMed Central

    Tovar-Arriaga, Saúl; Vargas, José Emilio; Ramos, Juan M.; Aceves, Marco A.; Gorrostieta, Efren; Kalender, Willi A.

    2012-01-01

    In this research a fully sensorized cooperative robot system for manipulation of needles is presented. The setup consists of a DLR/KUKA Light Weight Robot III especially designed for safe human/robot interaction, a FD-CT robot-driven angiographic C-arm system, and a navigation camera. Also, new control strategies for robot manipulation in the clinical environment are introduced. A method for fast calibration of the involved components and the preliminary accuracy tests of the whole possible errors chain are presented. Calibration of the robot with the navigation system has a residual error of 0.81 mm (rms) with a standard deviation of ±0.41 mm. The accuracy of the robotic system while targeting fixed points at different positions within the workspace is of 1.2 mm (rms) with a standard deviation of ±0.4 mm. After calibration, and due to close loop control, the absolute positioning accuracy was reduced to the navigation camera accuracy which is of 0.35 mm (rms). The implemented control allows the robot to compensate for small patient movements. PMID:23012551

  15. Polarimetric Calibration and Assessment of GF-3 Images in Steppe

    NASA Astrophysics Data System (ADS)

    Chang, Y.; Yang, J.; Li, P.; Shi, L.; Zhao, L.

    2018-04-01

    The GaoFen-3 (GF-3) satellite is the first fully polarimetric synthetic aperture radar (PolSAR) satellite in China. It has three fully polarimetric imaging modes and is available for many applications. The system has undergone several calibration experiments in Inner Mongolia since launch, conducted by the Institute of Electronics, Chinese Academy of Sciences (IECAS), and the polarimetric calibration (PolCAL) strategy of GF-3 has also been improved. Therefore, it is necessary to assess image quality before any further applications. In this paper, we evaluated the polarimetric residual errors of GF-3 images acquired in July 2017 at a steppe site. The results show that the crosstalk of these images varies from -36 dB to -46 dB, and the channel imbalance varies from -0.43 dB to 0.55 dB with the angle varying from -1.6 to 3.6 degrees. We also performed a PolCAL experiment afterwards to restrain the polarimetric distortion, and the polarimetric quality of the images improved after PolCAL processing.

  16. An accurate system for onsite calibration of electronic transformers with digital output.

    PubMed

    Zhi, Zhang; Li, Hong-Bin

    2012-06-01

    Calibration systems with digital output are used to replace conventional calibration systems because of the principle diversity and digital-output characteristics of electronic transformers. However, limited precision and unpredictable stability hinder their onsite application and further development. Fully considering the factors that influence the accuracy of a calibration system, and employing a simple but reliable structure, an all-digital calibration system with digital output is proposed in this paper. In complicated calibration environments, precision and dynamic range are guaranteed by an A/D converter with 24-bit resolution, and the synchronization error is kept at the nanosecond level by a novel synchronization method. In addition, an error correction algorithm based on the differential method, using a second-order Hanning convolution window, effectively suppresses frequency fluctuation and inter-harmonic interference. To verify the effectiveness, error calibration was carried out at the State Grid Electric Power Research Institute of China, and the results show that the proposed system can reach a precision class of up to 0.05. Actual onsite calibration shows that the system has high accuracy and is easy to operate, with satisfactory stability.

  17. An accurate system for onsite calibration of electronic transformers with digital output

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhi Zhang; Li Hongbin; State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Wuhan 430074

    Calibration systems with digital output are used to replace conventional calibration systems because of the principle diversity and digital-output characteristics of electronic transformers. However, limited precision and unpredictable stability hinder their onsite application and further development. Fully considering the factors that influence the accuracy of a calibration system, and employing a simple but reliable structure, an all-digital calibration system with digital output is proposed in this paper. In complicated calibration environments, precision and dynamic range are guaranteed by an A/D converter with 24-bit resolution, and the synchronization error is kept at the nanosecond level by a novel synchronization method. In addition, an error correction algorithm based on the differential method, using a second-order Hanning convolution window, effectively suppresses frequency fluctuation and inter-harmonic interference. To verify the effectiveness, error calibration was carried out at the State Grid Electric Power Research Institute of China, and the results show that the proposed system can reach a precision class of up to 0.05. Actual onsite calibration shows that the system has high accuracy and is easy to operate, with satisfactory stability.

  18. An accurate system for onsite calibration of electronic transformers with digital output

    NASA Astrophysics Data System (ADS)

    Zhi, Zhang; Li, Hong-Bin

    2012-06-01

    Calibration systems with digital output are used to replace conventional calibration systems because of the principle diversity and digital-output characteristics of electronic transformers. However, limited precision and unpredictable stability hinder their onsite application and further development. Fully considering the factors that influence the accuracy of a calibration system, and employing a simple but reliable structure, an all-digital calibration system with digital output is proposed in this paper. In complicated calibration environments, precision and dynamic range are guaranteed by an A/D converter with 24-bit resolution, and the synchronization error is kept at the nanosecond level by a novel synchronization method. In addition, an error correction algorithm based on the differential method, using a second-order Hanning convolution window, effectively suppresses frequency fluctuation and inter-harmonic interference. To verify the effectiveness, error calibration was carried out at the State Grid Electric Power Research Institute of China, and the results show that the proposed system can reach a precision class of up to 0.05. Actual onsite calibration shows that the system has high accuracy and is easy to operate, with satisfactory stability.

  19. 24 CFR 1710.506 - State/Federal filing requirements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... fully explaining the purpose and significance of the amendment and referring to that section and page of... automatically suspended as a result of the state action. No action need be taken by the Secretary to effect the...

  20. 24 CFR 1710.506 - State/Federal filing requirements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... fully explaining the purpose and significance of the amendment and referring to that section and page of... automatically suspended as a result of the state action. No action need be taken by the Secretary to effect the...

  1. Xenon International Automated Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-08-05

    The Xenon International Automated Control software monitors, displays status, and allows for manual operator control as well as fully automatic control of multiple commercial and PNNL designed hardware components to generate and transmit atmospheric radioxenon concentration measurements every six hours.

  2. A fully-automatic fast segmentation of the sub-basal layer nerves in corneal images.

    PubMed

    Guimarães, Pedro; Wigdahl, Jeff; Poletti, Enea; Ruggeri, Alfredo

    2014-01-01

    Corneal nerve changes have been linked to damage caused by surgical interventions or prolonged contact lens wear. Furthermore, nerve tortuosity has been shown to correlate with the severity of diabetic neuropathy. For these reasons there has been increasing interest in the analysis of these structures. In this work we propose a novel, robust, and fast fully automatic algorithm capable of tracing the sub-basal plexus nerves in human corneal confocal images. We use log-Gabor filters and support vector machines to trace the corneal nerves. The proposed algorithm traced most of the corneal nerves correctly (sensitivity of 0.88 ± 0.06 and false discovery rate of 0.08 ± 0.06). Its performance is comparable to that of a human grader. We believe that the achieved processing time (0.661 ± 0.07 s) and tracing quality are major advantages for daily clinical practice.
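
    The log-Gabor filtering stage mentioned in this record can be illustrated with a small sketch (an assumed radial-filter construction with hypothetical parameters; the published method combines such filter responses with a support vector machine, which is omitted here):

```python
import numpy as np

def log_gabor_radial(shape, f0, sigma_ratio=0.55):
    """Radial log-Gabor transfer function: G(f) = exp(-ln(f/f0)^2 / (2 ln(sigma)^2))."""
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                  # avoid log(0) at DC; DC gain is zeroed below
    g = np.exp(-np.log(radius / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    g[0, 0] = 0.0                       # log-Gabor filters have no DC component
    return g

# Filter a toy image in the frequency domain (hypothetical center frequency).
image = np.random.default_rng(0).random((64, 64))
response = np.fft.ifft2(np.fft.fft2(image) * log_gabor_radial(image.shape, f0=0.1))
```

Oriented versions multiply this radial term by an angular Gaussian; line-like structures such as nerves show strong responses along their direction.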

  3. VSOP: the variable star one-shot project. I. Project presentation and first data release

    NASA Astrophysics Data System (ADS)

    Dall, T. H.; Foellmi, C.; Pritchard, J.; Lo Curto, G.; Allende Prieto, C.; Bruntt, H.; Amado, P. J.; Arentoft, T.; Baes, M.; Depagne, E.; Fernandez, M.; Ivanov, V.; Koesterke, L.; Monaco, L.; O'Brien, K.; Sarro, L. M.; Saviane, I.; Scharwächter, J.; Schmidtobreick, L.; Schütz, O.; Seifahrt, A.; Selman, F.; Stefanon, M.; Sterzik, M.

    2007-08-01

    Context: About 500 new variable stars enter the General Catalogue of Variable Stars (GCVS) every year. Most of them, however, lack spectroscopic observations, which remain critical for a correct assignment of the variability type and for the understanding of the object. Aims: The Variable Star One-shot Project (VSOP) is aimed at (1) providing the variability type and spectral type of all unstudied variable stars, (2) processing, publishing, and making the data available as automatically as possible, and (3) generating serendipitous discoveries. This first paper describes the project itself, the acquisition of the data, the dataflow, the spectroscopic analysis and the on-line availability of the fully calibrated and reduced data. We also present the results on the 221 stars observed during the first semester of the project. Methods: We used the high-resolution echelle spectrographs HARPS and FEROS at the ESO La Silla Observatory (Chile) to survey known variable stars. Once reduced by the dedicated pipelines, the radial velocities are determined from cross correlation with synthetic template spectra, and the spectral types are determined by an automatic minimum distance matching to synthetic spectra, with traditional manual spectral typing cross-checks. The variability types are determined by manually evaluating the available light curves and the spectroscopy. In the future, a new automatic classifier, currently being developed by members of the VSOP team, based on these spectroscopic data and on the photometric classifier developed for the COROT and Gaia space missions, will be used. Results: We confirm or revise spectral types of 221 variable stars from the GCVS. We identify 26 previously unknown multiple systems, among them several visual binaries with spectroscopic binary individual components.
We present new individual results for the multiple systems V349 Vel and BC Gru, for the composite spectrum star V4385 Sgr, for the T Tauri star V1045 Sco, and for DM Boo which we re-classify as a BY Draconis variable. The complete data release can be accessed via the VSOP web site. Based on data obtained at the La Silla Observatory, European Southern Observatory, under program ID 077.D-0085.

  4. Evaluation of Earthquake Detection Performance in Terms of Quality and Speed in SEISCOMP3 Using New Modules Qceval, Npeval and Sceval

    NASA Astrophysics Data System (ADS)

    Roessler, D.; Weber, B.; Ellguth, E.; Spazier, J.

    2017-12-01

    The geometry of seismic monitoring networks, site conditions and data availability as well as monitoring targets and strategies typically impose trade-offs between data quality, earthquake detection sensitivity, false detections and alert times. Network detection capabilities typically change with alteration of the seismic noise level by human activity or by varying weather and sea conditions. To give helpful information to operators and maintenance coordinators, gempa developed a range of tools to evaluate earthquake detection and network performance, including qceval, npeval and sceval. qceval is a module which analyzes waveform quality parameters in real-time and deactivates and reactivates data streams based on waveform quality thresholds for automatic processing. For example, thresholds can be defined for latency, delay, timing quality, spike and gap counts, and rms. As changes in the automatic processing have a direct influence on detection quality and speed, another tool called "npeval" was designed to calculate in real-time the expected time needed to detect and locate earthquakes by evaluating the effective network geometry. The effective network geometry is derived from the configuration of stations participating in the detection. The detection times are shown as an additional layer on the map and updated in real-time as soon as the effective network geometry changes. Yet another new tool, "sceval", is an automatic module which classifies located seismic events (Origins) in real-time. sceval evaluates the spatial distribution of the stations contributing to an Origin. It confirms or rejects the status of Origins, adds comments or leaves the Origin unclassified. The comments are passed to an additional sceval plug-in where the end user can customize event types. This unique identification of real and fake events in earthquake catalogues makes it possible to lower network detection thresholds.
In real-time monitoring situations operators can limit the processing to events with unclassified Origins, reducing their workload. Classified Origins can be treated specifically by other procedures. These modules have been calibrated and fully tested by several complex seismic monitoring networks in the region of Indonesia and Northern Chile.

  5. Automated magnification calibration in transmission electron microscopy using Fourier analysis of replica images.

    PubMed

    van der Laak, Jeroen A W M; Dijkman, Henry B P M; Pahlplatz, Martin M M

    2006-03-01

    The magnification factor in transmission electron microscopy is not very precise, hampering, for instance, quantitative analysis of specimens. Calibration of the magnification is usually performed interactively using replica specimens, containing line or grating patterns with known spacing. In the present study, a procedure is described for automated magnification calibration using digital images of a line replica. This procedure is based on analysis of the power spectrum of Fourier-transformed replica images, and is compared to interactive measurement in the same images. Images were used with magnification ranging from 1,000 x to 200,000 x. The automated procedure deviated from interactive measurements by 0.10% on average. Especially for catalase replicas, the coefficient of variation of automated measurement was considerably smaller (average 0.28%) than that of interactive measurement (average 3.5%). In conclusion, calibration of the magnification in digital images from transmission electron microscopy may be performed automatically, using the procedure presented here, with high precision and accuracy.
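
    The power-spectrum approach can be sketched in a few lines of numpy (a toy 1-D reconstruction using a synthetic line profile; the spacing and the 463 nm grating value are illustrative assumptions, not taken from this record):

```python
import numpy as np

# Synthetic line-replica profile with a known spacing in pixels, standing in
# for one row of a digitized grating/replica image.
true_spacing_px = 37.0
x = np.arange(4096)
profile = np.sin(2 * np.pi * x / true_spacing_px)

# The dominant peak of the power spectrum (ignoring the DC bin) gives the
# spatial frequency of the line pattern.
power = np.abs(np.fft.rfft(profile)) ** 2
freqs = np.fft.rfftfreq(profile.size)          # cycles per pixel
peak = np.argmax(power[1:]) + 1
measured_spacing_px = 1.0 / freqs[peak]

# With the replica's physical line spacing known (e.g. 463 nm for a common
# 2160 lines/mm grating), the pixel size -- and hence the magnification --
# follows directly.
pixel_size_nm = 463.0 / measured_spacing_px    # nm per pixel
```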

  6. Model Calibration in Watershed Hydrology

    NASA Technical Reports Server (NTRS)

    Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh

    2009-01-01

    Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.
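
    As a toy illustration of automatic single-objective calibration (not taken from the chapter; the model and data are synthetic), one can fit the recession constant of a simple linear-reservoir model by minimizing RMSE against observed discharge over a parameter grid:

```python
import numpy as np

def linear_reservoir(rain, k):
    """Toy rainfall-runoff model: storage update s += rain - k*s, discharge q = k*s."""
    s, q = 0.0, []
    for r in rain:
        s += r - k * s
        q.append(k * s)
    return np.array(q)

rng = np.random.default_rng(1)
rain = rng.exponential(2.0, 200)
# Synthetic "observations": model output at a known k plus measurement noise.
observed = linear_reservoir(rain, k=0.35) + rng.normal(0.0, 0.01, 200)

# Automatic calibration: choose the parameter minimizing RMSE on the record.
candidates = np.linspace(0.01, 0.99, 99)
rmse = [np.sqrt(np.mean((linear_reservoir(rain, k) - observed) ** 2))
        for k in candidates]
best_k = candidates[int(np.argmin(rmse))]
```

Real calibration frameworks replace the grid search with global optimizers and multi-objective criteria, but the loop structure (run model, score against observations, update parameters) is the same.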

  7. Objective measurement of erythema in psoriasis using digital color photography with color calibration.

    PubMed

    Raina, A; Hennessy, R; Rains, M; Allred, J; Hirshburg, J M; Diven, D G; Markey, M K

    2016-08-01

    Traditional metrics for evaluating the severity of psoriasis are subjective, which complicates efforts to measure effective treatments in clinical trials. We collected images of psoriasis plaques and calibrated the coloration of the images according to an included color card. Features were extracted from the images and used to train a linear discriminant analysis classifier with cross-validation to automatically classify the degree of erythema. The results were tested against numerical scores obtained by a panel of dermatologists using a standard rating system. Quantitative measures of erythema based on the digital color images showed good agreement with subjective assessment of erythema severity (κ = 0.4203). The color calibration process improved the agreement from κ = 0.2364 to κ = 0.4203. We propose a method for the objective measurement of the psoriasis severity parameter of erythema and show that the calibration process improved the results. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
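
    The color-calibration step described in this record can be sketched as a least-squares linear correction estimated from the color-card patches (reference and color-cast values here are hypothetical; the published procedure may differ):

```python
import numpy as np

# Hypothetical color card: reference RGB values and the same patches as they
# appear in a photo with a simulated color cast.
reference = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255],
                      [255, 255, 255], [128, 128, 128], [0, 0, 0]], dtype=float)
cast = np.array([[1.10, 0.02, 0.00],
                 [0.00, 0.95, 0.03],
                 [0.01, 0.00, 0.88]])
observed = reference @ cast.T          # patch colors as captured by the camera

# Least-squares 3x3 correction matrix M such that observed @ M ~= reference;
# applying M to every pixel calibrates the image before feature extraction.
M, *_ = np.linalg.lstsq(observed, reference, rcond=None)
calibrated = observed @ M
```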

  8. APEX - the Hyperspectral ESA Airborne Prism Experiment

    PubMed Central

    Itten, Klaus I.; Dell'Endice, Francesco; Hueni, Andreas; Kneubühler, Mathias; Schläpfer, Daniel; Odermatt, Daniel; Seidel, Felix; Huber, Silvia; Schopfer, Jürg; Kellenberger, Tobias; Bühler, Yves; D'Odorico, Petra; Nieke, Jens; Alberti, Edoardo; Meuleman, Koen

    2008-01-01

    The airborne ESA-APEX (Airborne Prism Experiment) hyperspectral mission simulator is described with its distinct specifications to provide high quality remote sensing data. The concept of an automatic calibration, performed in the Calibration Home Base (CHB) by using the Control Test Master (CTM), the In-Flight Calibration facility (IFC), quality flagging (QF) and specific processing in a dedicated Processing and Archiving Facility (PAF), and vicarious calibration experiments are presented. A preview of major applications and the corresponding development efforts to provide scientific data products up to level 2/3 to the user is presented for limnology, vegetation, aerosols, general classification routines and rapid mapping tasks. BRDF (Bidirectional Reflectance Distribution Function) issues are discussed and the spectral database SPECCHIO (Spectral Input/Output) introduced. The optical performance as well as the dedicated software utilities make APEX a state-of-the-art hyperspectral sensor, capable of (a) satisfying the needs of several research communities and (b) helping the understanding of the Earth's complex mechanisms. PMID:27873868

  9. Shaping electromagnetic waves using software-automatically-designed metasurfaces.

    PubMed

    Zhang, Qian; Wan, Xiang; Liu, Shuo; Yuan Yin, Jia; Zhang, Lei; Jun Cui, Tie

    2017-06-15

    We present a fully digital procedure of designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic, controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g. two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of macro coding unit is formed by a discretely random arrangement of micro coding units. By combining an optimization algorithm and commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess constant phase difference for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units with a certain coding sequence. To experimentally verify the performance, a coding metasurface is fabricated by automatically designing two digital 1-bit unit cells, which are arranged in an array to constitute a periodic coding metasurface that generates the required four-beam radiation with specific directions. Two complicated functional metasurfaces with circularly- and elliptically-shaped radiation beams are realized by automatically designing 4-bit macro coding units, showing the excellent performance of automatic design by software. The proposed method provides a smart tool to realize various functional devices and systems automatically.

  10. Infrared Sky Imager (IRSI) Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, Victor R.

    2016-04-01

    The Infrared Sky Imager (IRSI) deployed at the Atmospheric Radiation Measurement (ARM) Climate Research Facility is a Solmirus Corp. All Sky Infrared Visible Analyzer. The IRSI is an automatic, continuously operating, digital imaging and software system designed to capture hemispheric sky images and provide time series retrievals of fractional sky cover during both the day and night. The instrument provides diurnal, radiometrically calibrated sky imagery in the mid-infrared atmospheric window and imagery in the visible wavelengths for cloud retrievals during daylight hours. The software automatically identifies cloudy and clear regions at user-defined intervals and calculates fractional sky cover, providing a real-time display of sky conditions.

  11. Virtual Instrument for Determining Rate Constant of Second-Order Reaction by pX Based on LabVIEW 8.0.

    PubMed

    Meng, Hu; Li, Jiang-Yuan; Tang, Yong-Huai

    2009-01-01

    A virtual instrument system for an ion analyzer, based on LabVIEW 8.0, which can measure and analyze ion concentrations in solution, was developed; it comprises a homemade conditioning circuit, a data acquisition board, and a computer. It can calibrate slope, temperature, and positioning automatically. When applied to determining a reaction rate constant by pX, it achieved live acquisition, real-time display, automatic processing of the test data, generation of the results report, and other functions. This method greatly simplifies the experimental operation, avoids the complicated procedures and personal error of manual data processing, and improves the accuracy and repeatability of the experimental results.
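
    The data-processing step the instrument automates follows standard second-order kinetics, 1/[A] = 1/[A]0 + kt; a minimal sketch with synthetic data (not the LabVIEW code) is:

```python
import numpy as np

# For a second-order reaction, 1/[A] = 1/[A]0 + k*t, so the rate constant k
# is the slope of 1/[A] against time. The electrode reports pX = -log10([X]),
# from which the concentration is recovered as 10**(-pX).

def rate_constant_from_px(t, px):
    conc = 10.0 ** (-np.asarray(px))
    k, intercept = np.polyfit(t, 1.0 / conc, 1)   # slope of 1/[A] vs t
    return k

# Synthetic run (illustrative values): [A]0 = 0.01 M, k = 2.0 L mol^-1 s^-1
t = np.linspace(0, 100, 11)
conc = 1.0 / (1.0 / 0.01 + 2.0 * t)
px = -np.log10(conc)
print(rate_constant_from_px(t, px))   # ≈ 2.0
```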

  12. Automatic spatiotemporal matching of detected pleural thickenings

    NASA Astrophysics Data System (ADS)

    Chaisaowong, Kraisorn; Keller, Simon Kai; Kraus, Thomas

    2014-01-01

    Pleural thickenings can be found in the lungs of asbestos-exposed patients. Non-invasive diagnosis, including CT imaging, can detect aggressive malignant pleural mesothelioma in its early stage. In order to create a quantitative documentation of automatically detected pleural thickenings over time, the differences in volume and thickness of the detected thickenings have to be calculated. Physicians usually estimate the change of each thickening via visual comparison, which provides neither quantitative nor qualitative measures. In this work, automatic spatiotemporal matching techniques for the detected pleural thickenings at two points in time, based on semi-automatic registration, have been developed, implemented, and tested so that the same thickening can be compared fully automatically. As a result, the mapping technique using principal component analysis turns out to be more advantageous than the feature-based mapping using the centroid and mean Hounsfield units of each thickening, since the sensitivity improved to 98.46% from 42.19%, while the accuracy of the feature-based mapping is only slightly higher (84.38% versus 76.19%).
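
    A minimal sketch of the feature-based matching variant mentioned above (the feature choice and all values are illustrative, not the authors' implementation):

```python
import numpy as np

# Each detected thickening is reduced to a feature vector of its centroid
# (x, y, z) and mean Hounsfield value, and thickenings from the two time
# points are paired by nearest feature distance.

def match_thickenings(feat_t1, feat_t2):
    """For each thickening at time 1, return the index of its match at time 2."""
    f1 = np.asarray(feat_t1, dtype=float)
    f2 = np.asarray(feat_t2, dtype=float)
    d = np.linalg.norm(f1[:, None, :] - f2[None, :, :], axis=2)
    return d.argmin(axis=1)

# Two thickenings, slightly shifted between scans (hypothetical features).
t1 = [[10, 20, 5, -40], [60, 25, 8, -10]]
t2 = [[61, 24, 8, -12], [11, 21, 5, -42]]
print(match_thickenings(t1, t2))   # [1 0]
```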

  13. A repeated-measures analysis of the effects of soft tissues on wrist range of motion in the extant phylogenetic bracket of dinosaurs: Implications for the functional origins of an automatic wrist folding mechanism in Crocodilia.

    PubMed

    Hutson, Joel David; Hutson, Kelda Nadine

    2014-07-01

    A recent study hypothesized that avian-like wrist folding in quadrupedal dinosaurs could have aided their distinctive style of locomotion with semi-pronated and therefore medially facing palms. However, soft tissues that automatically guide avian wrist folding rarely fossilize, and automatic wrist folding of unknown function in extant crocodilians has not been used to test this hypothesis. Therefore, an investigation of the relative contributions of soft tissues to wrist range of motion (ROM) in the extant phylogenetic bracket of dinosaurs, and the quadrupedal function of crocodilian wrist folding, could inform these questions. Here, we repeatedly measured wrist ROM in degrees through fully fleshed, skinned, minus muscles/tendons, minus ligaments, and skeletonized stages in the American alligator Alligator mississippiensis and the ostrich Struthio camelus. The effects of dissection treatment and observer were statistically significant for alligator wrist folding and ostrich wrist flexion, but not ostrich wrist folding. Final skeletonized wrist folding ROM was higher than (ostrich) or equivalent to (alligator) initial fully fleshed ROM, while final ROM was lower than initial ROM for ostrich wrist flexion. These findings suggest that, unlike the hinge/ball and socket-type elbow and shoulder joints in these archosaurs, ROM within gliding/planar diarthrotic joints is more restricted to the extent of articular surfaces. The alligator data indicate that the crocodilian wrist mechanism functions to automatically lock their semi-pronated palms into a rigid column, which supports the hypothesis that this palmar orientation necessitated soft tissue stiffening mechanisms in certain dinosaurs, although ROM-restricted articulations argue against the presence of an extensive automatic mechanism. Anat Rec, 297:1228-1249, 2014. © 2014 Wiley Periodicals, Inc.

  14. The Elixir System: Data Characterization and Calibration at the Canada-France-Hawaii Telescope

    NASA Astrophysics Data System (ADS)

    Magnier, E. A.; Cuillandre, J.-C.

    2004-05-01

    The Elixir System at the Canada-France-Hawaii Telescope performs data characterization and calibration for all data from the wide-field mosaic imagers CFH12K and MegaPrime. The project has several related goals, including monitoring data quality, providing high-quality master detrend images, determining the photometric and astrometric calibrations, and automatic preprocessing of images for queued service observing (QSO). The Elixir system has been used for all data obtained with CFH12K since the QSO project began in 2001 January. In addition, it has been used to process archival data from the CFH12K and all MegaPrime observations beginning in 2002 December. The Elixir system has been extremely successful in providing well-characterized data to the end observers, who may otherwise be overwhelmed by data-processing concerns.

  15. Fully automated urban traffic system

    NASA Technical Reports Server (NTRS)

    Dobrotin, B. M.; Hansen, G. R.; Peng, T. K. C.; Rennels, D. A.

    1977-01-01

    The replacement of the driver with an automatic system that could perform the functions of guiding and routing a vehicle with a human's capability of responding to changing traffic demands is discussed. The problem was divided into four technological areas: guidance, routing, computing, and communications. It was determined that the latter three areas were being developed independently of any need for fully automated urban traffic. A guidance system that would meet system requirements was not being developed but was technically feasible.

  16. Building Extraction from Remote Sensing Data Using Fully Convolutional Networks

    NASA Astrophysics Data System (ADS)

    Bittner, K.; Cui, S.; Reinartz, P.

    2017-05-01

    Building detection and footprint extraction are highly demanded for many remote sensing applications. Though most previous works have shown promising results, the automatic extraction of building footprints still remains a nontrivial topic, especially in complex urban areas. Recently developed extensions of the CNN framework made it possible to perform dense pixel-wise classification of input images. Based on these abilities we propose a methodology which automatically generates a full-resolution binary building mask out of a Digital Surface Model (DSM) using a Fully Convolutional Network (FCN) architecture. The advantage of using the depth information is that it provides geometrical silhouettes and allows a better separation of buildings from the background, as well as invariance to illumination and color variations. The proposed framework has mainly two steps. Firstly, the FCN is trained on a large set of patches consisting of normalized DSMs (nDSM) as inputs and available ground-truth building masks as target outputs. Secondly, the generated predictions from the FCN are viewed as unary terms for a Fully connected Conditional Random Field (FCRF), which enables us to create a final binary building mask. A series of experiments demonstrate that our methodology is able to extract accurate building footprints which are close to the buildings' original shapes to a high degree. The quantitative and qualitative analyses show significant improvements of the results in contrast to the multi-layer fully connected network from our previous work.

  17. A fast and fully automatic registration approach based on point features for multi-source remote-sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Le; Zhang, Dengrong; Holden, Eun-Jung

    2008-07-01

    Automatic registration of multi-source remote-sensing images is a difficult task, as it must deal with the varying illuminations and resolutions of the images, different perspectives and the local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting their matching points using the scale invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points that are detected by the Harris corner detector. The registration process first finds, in succession, tie point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for fast search speed. Tie point pairs with large errors are pruned by an error-checking step. The input image is then rectified by using triangulated irregular networks (TINs) to deal with irregular local deformations caused by the fluctuation of the terrain. For each triangular facet of the TIN, affine transformations are estimated and applied for rectification. Experiments with QuickBird, SPOT5, SPOT4 and TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and the accuracy of the proposed technique for multi-source remote-sensing image registration.
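
    The coarse-alignment step can be illustrated as follows; this is a generic least-squares affine fit on already-matched points (the SIFT matching itself is assumed done), not the paper's code:

```python
import numpy as np

# Given matched point pairs (src_i, dst_i), the 6-parameter 2D affine
# transform dst ~ A @ src + t is estimated by linear least squares.

def fit_affine(src, dst):
    """Solve dst ~ A @ src + t for a 2x2 matrix A and translation t."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    X = np.hstack([src, np.ones((len(src), 1))])   # n x 3 design matrix
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return params[:2].T, params[2]                 # A (2x2), t (2,)

# Recover a known transform from noiseless synthetic correspondences.
A_true = np.array([[1.1, 0.2], [-0.1, 0.9]])
t_true = np.array([5.0, -3.0])
src = np.array([[0, 0], [10, 0], [0, 10], [7, 4]], float)
dst = src @ A_true.T + t_true
A, t = fit_affine(src, dst)
```

    In practice the fit is run inside a robust loop (e.g. RANSAC) so that SIFT mismatches do not corrupt the estimate.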

  18. Readiness of the ATLAS Tile Calorimeter for LHC collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aad, G.; Abbott, B.; Abdallah, J.

    The Tile hadronic calorimeter of the ATLAS detector has undergone extensive testing in the experimental hall since its installation in late 2005. The readout, control and calibration systems have been fully operational since 2007, and the detector has successfully collected data from the LHC single beams in 2008 and first collisions in 2009. This paper gives an overview of the Tile Calorimeter performance as measured using random triggers, calibration data, data from cosmic ray muons and single beam data. The detector operation status, noise characteristics and performance of the calibration systems are presented, as well as the validation of the timing and energy calibration carried out with minimum-ionising cosmic ray muon data. The calibration systems' precision is well below the design value of 1%. The determination of the global energy scale was performed with an uncertainty of 4%. © 2010 CERN for the benefit of the ATLAS collaboration.

  19. Readiness of the ATLAS Tile Calorimeter for LHC collisions

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2010-12-08

    The Tile hadronic calorimeter of the ATLAS detector has undergone extensive testing in the experimental hall since its installation in late 2005. The readout, control and calibration systems have been fully operational since 2007, and the detector has successfully collected data from the LHC single beams in 2008 and first collisions in 2009. This paper gives an overview of the Tile Calorimeter performance as measured using random triggers, calibration data, data from cosmic ray muons and single beam data. The detector operation status, noise characteristics and performance of the calibration systems are presented, as well as the validation of the timing and energy calibration carried out with minimum-ionising cosmic ray muon data. The calibration systems' precision is well below the design value of 1%. The determination of the global energy scale was performed with an uncertainty of 4%. © 2010 CERN for the benefit of the ATLAS collaboration.

  20. New York State Thruway Authority automatic vehicle classification (AVC) : research report.

    DOT National Transportation Integrated Search

    2008-03-31

    In December 2007, the N.Y.S. Thruway Authority (Thruway) concluded a Federally funded research effort to study technology and develop a design for retrofitting devices required in implementing a fully automated vehicle classification system i...

  1. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaics in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then the camera pose is estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.
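
    The homography step can be sketched with a generic direct linear transform (DLT); the correspondences below are synthetic, and this is not the authors' implementation:

```python
import numpy as np

# The 3x3 homography H mapping (x, y) to (x', y') is estimated from >= 4
# point correspondences: each pair contributes two linear equations in the
# 9 entries of H, and the null vector of the stacked system is H.

def dlt_homography(src, dst):
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                  # fix the scale so H[2,2] = 1

def apply_h(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Recover a known projective warp from noiseless correspondences.
H_true = np.array([[1.2, 0.1, 5.0], [0.0, 0.9, -2.0], [1e-3, 0.0, 1.0]])
src = [(0, 0), (100, 0), (0, 100), (100, 100), (40, 60)]
dst = [tuple(apply_h(H_true, p)) for p in src]
H = dlt_homography(src, dst)
```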

  2. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2011-12-01

    This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaics in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then the camera pose is estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.

  3. Past and present cosmic structure in the SDSS DR7 main sample

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasche, J.; Leclercq, F.; Wandelt, B.D., E-mail: jasche@iap.fr, E-mail: florent.leclercq@polytechnique.org, E-mail: wandelt@iap.fr

    2015-01-01

    We present a chrono-cosmography project, aiming at the inference of the four-dimensional formation history of the observed large scale structure from its origin to the present epoch. To do so, we perform a full-scale Bayesian analysis of the northern galactic cap of the Sloan Digital Sky Survey (SDSS) Data Release 7 main galaxy sample, relying on a fully probabilistic, physical model of the non-linearly evolved density field. Besides inferring initial conditions from observations, our methodology naturally and accurately reconstructs non-linear features at the present epoch, such as walls and filaments, corresponding to high-order correlation functions generated by late-time structure formation. Our inference framework self-consistently accounts for typical observational systematic and statistical uncertainties such as noise, survey geometry and selection effects. We further account for luminosity-dependent galaxy biases and automatic noise calibration within a fully Bayesian approach. As a result, this analysis provides highly detailed and accurate reconstructions of the present density field on scales larger than ∼ 3 Mpc/h, constrained by SDSS observations. This approach also leads to the first quantitative inference of plausible formation histories of the dynamic large scale structure underlying the observed galaxy distribution. The results described in this work constitute the first full Bayesian non-linear analysis of the cosmic large scale structure with the demonstrated capability of uncertainty quantification. Some of these results will be made publicly available along with this work. The level of detail of inferred results and the high degree of control on observational uncertainties pave the path towards high precision chrono-cosmography, the subject of simultaneously studying the dynamics and the morphology of the inhomogeneous Universe.

  4. Submerged flow bridge scour under clear water conditions

    DOT National Transportation Integrated Search

    2012-09-01

    Prediction of pressure flow (vertical contraction) scour underneath a partially or fully submerged bridge superstructure in an extreme flood event is crucial for bridge safety. An experimentally and numerically calibrated formulation is developed...

  5. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    NASA Astrophysics Data System (ADS)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce a satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application in our augmented reality visualization system for laparoscopic surgery.
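
    How a TRE figure like those above is typically computed can be sketched as follows: a generic Kabsch-based rigid registration on synthetic points, not the rdCalib or OpenCV code:

```python
import numpy as np

# Fiducial points are rigidly registered with the Kabsch algorithm, the
# resulting transform is applied to the target points, and TRE is the mean
# residual distance at the targets.

def kabsch(P, Q):
    """Rigid (R, t) minimizing ||(P @ R.T + t) - Q|| for 3D point sets."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

def tre(fid_src, fid_dst, targ_src, targ_dst):
    R, t = kabsch(fid_src, fid_dst)
    mapped = targ_src @ R.T + t
    return np.linalg.norm(mapped - targ_dst, axis=1).mean()

# Synthetic check: points related by a pure rotation + translation give TRE ~ 0.
rng = np.random.default_rng(0)
ang = 0.3
Rz = np.array([[np.cos(ang), -np.sin(ang), 0],
               [np.sin(ang),  np.cos(ang), 0],
               [0, 0, 1]])
fid = rng.normal(size=(4, 3))
targ = rng.normal(size=(3, 3))
print(tre(fid, fid @ Rz.T + 2.0, targ, targ @ Rz.T + 2.0))   # ~0
```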

  6. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    PubMed Central

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2017-01-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce a satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application in our augmented reality visualization system for laparoscopic surgery. PMID:28943703

  7. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization.

    PubMed

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce a satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application in our augmented reality visualization system for laparoscopic surgery.

  8. Real-time automatic registration in optical surgical navigation

    NASA Astrophysics Data System (ADS)

    Lin, Qinyong; Yang, Rongqian; Cai, Ken; Si, Xuan; Chen, Xiuwen; Wu, Xiaoming

    2016-05-01

    An image-guided surgical navigation system requires the improvement of the patient-to-image registration time to enhance the convenience of the registration procedure. A critical step in achieving this aim is performing a fully automatic patient-to-image registration. This study reports on a design of custom fiducial markers and the performance of a real-time automatic patient-to-image registration method using these markers on the basis of an optical tracking system for rigid anatomy. The custom fiducial markers are designed to be automatically localized in both patient and image spaces. An automatic localization method is performed by registering a point cloud sampled from the three dimensional (3D) pedestal model surface of a fiducial marker to each pedestal of fiducial markers searched in image space. A head phantom is constructed to estimate the performance of the real-time automatic registration method under four fiducial configurations. The head phantom experimental results demonstrate that the real-time automatic registration method is more convenient, rapid, and accurate than the manual method. The time required for each registration is approximately 0.1 s. The automatic localization method precisely localizes the fiducial markers in image space. The averaged target registration error for the four configurations is approximately 0.7 mm. The automatic registration performance is independent of the positions relative to the tracking system and the movement of the patient during the operation.

  9. Automatic weld torch guidance control system

    NASA Technical Reports Server (NTRS)

    Smaith, H. E.; Wall, W. A.; Burns, M. R., Jr.

    1982-01-01

    A highly reliable, fully digital, closed-circuit television optical type automatic weld seam tracking control system was developed. This automatic tracking equipment is used to reduce weld tooling costs and increase overall automatic welding reliability. The system utilizes a charge injection device digital camera which has 60,512 individual pixels as the light sensing elements. Through conventional scanning means, each pixel in the focal plane is sequentially scanned, the light level signal digitized, and an 8-bit word transmitted to scratch pad memory. From memory, the microprocessor performs an analysis of the digital signal and computes the tracking error. Lastly, the corrective signal is transmitted to a cross-seam actuator digital drive motor controller to complete the closed-loop feedback tracking system. This weld seam tracking control system is capable of a tracking accuracy of ±0.2 mm or better. As configured, the system is applicable to square butt, V-groove, and lap joint weldments.
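
    A simplified sketch of the per-scanline tracking-error computation (the camera, memory and motor interfaces are abstracted away, and the pixel scale is a hypothetical value):

```python
import numpy as np

# On one digitized scanline the weld seam appears as an intensity valley;
# its position is taken as the intensity-weighted centroid of the valley,
# and the tracking error is the offset from the torch reference column.

def seam_error(scanline, ref_col, mm_per_pixel=0.05):
    line = np.asarray(scanline, float)
    weight = line.max() - line                 # dark seam -> large weight
    seam_col = (weight * np.arange(line.size)).sum() / weight.sum()
    return (seam_col - ref_col) * mm_per_pixel

# Synthetic 8-bit scanline with a dark seam centered at column 12.
scan = np.full(21, 200.0)
scan[11:14] = [120.0, 40.0, 120.0]
print(seam_error(scan, ref_col=10))   # 0.1 (mm): seam lies right of the torch
```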

  10. Automatic and semi-automatic approaches for arteriolar-to-venular computation in retinal photographs

    NASA Astrophysics Data System (ADS)

    Mendonça, Ana Maria; Remeseiro, Beatriz; Dashtbozorg, Behdad; Campilho, Aurélio

    2017-03-01

    The Arteriolar-to-Venular Ratio (AVR) is a popular dimensionless measure which allows the assessment of patients' condition for the early diagnosis of different diseases, including hypertension and diabetic retinopathy. This paper presents two new approaches for AVR computation in retinal photographs which include a sequence of automated processing steps: vessel segmentation, caliber measurement, optic disc segmentation, artery/vein classification, region of interest delineation, and AVR calculation. Both approaches have been tested on the INSPIRE-AVR dataset, and compared with a ground truth provided by two medical specialists. The obtained results demonstrate the reliability of the fully automatic approach, which provides AVR ratios very similar to those of at least one of the observers. Furthermore, the semi-automatic approach, which includes manual modification of the artery/vein classification if needed, reduces the error significantly, to a level below the human error.
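
    AVR computation is commonly based on summarizing the measured vessel calibers into central retinal artery/vein equivalents; the sketch below assumes the revised Knudtson branching formulas (combination constants 0.88 for arterioles and 0.95 for venules, quoted here as an assumption) and may differ from the paper's exact procedure:

```python
import numpy as np

# Calibers are iteratively combined pairwise (widest remaining vessel with
# the narrowest) until a single equivalent caliber remains; the AVR is the
# ratio of the arteriolar to the venular equivalent.

def central_caliber(widths, c):
    w = sorted(widths)
    while len(w) > 1:
        nxt = [c * np.hypot(w[i], w[-(i + 1)]) for i in range(len(w) // 2)]
        if len(w) % 2:
            nxt.append(w[len(w) // 2])      # odd count: middle vessel carried over
        w = sorted(nxt)
    return w[0]

def avr(artery_widths, vein_widths):
    return central_caliber(artery_widths, 0.88) / central_caliber(vein_widths, 0.95)

# Equal-width toy case: four arterioles and four venules of caliber 100 px,
# where the ratio reduces to (0.88 / 0.95)**2.
print(round(avr([100] * 4, [100] * 4), 3))   # 0.858
```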

  11. Sun Tracker Operates a Year Between Calibrations

    NASA Technical Reports Server (NTRS)

    Berdahl, C. M.

    1984-01-01

    A low-cost modification of a Sun tracker automatically compensates for the equation of time and seasonal variations in the declination of the Sun. The output of the Scotch-yoke drive mechanism is adjusted, through proper sizing of the crank, yoke and other components and through choice of gear ratios, to approximate the seasonal north-and-south motion of the Sun. It is used for industrial solar-energy monitoring and in remote meteorological stations.
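
    The seasonal declination that the mechanism approximates can be written with the common Cooper approximation (a sketch of the underlying astronomy, not the tracker's actual cam profile):

```python
import numpy as np

# Cooper's approximation: delta = 23.45 deg * sin(360/365 * (284 + n)),
# with n the day of the year (Jan 1 = 1).

def declination_deg(day_of_year):
    return 23.45 * np.sin(np.radians(360.0 / 365.0 * (284 + day_of_year)))

print(round(declination_deg(172), 2))   # 23.45: near the summer solstice
```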

  12. Automatic Segmentation of the Eye in 3D Magnetic Resonance Imaging: A Novel Statistical Shape Model for Treatment Planning of Retinoblastoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciller, Carlos, E-mail: carlos.cillerruiz@unil.ch; Ophthalmic Technology Group, ARTORG Center of the University of Bern, Bern; Centre d’Imagerie BioMédicale, University of Lausanne, Lausanne

    Purpose: Proper delineation of ocular anatomy in 3-dimensional (3D) imaging is a big challenge, particularly when developing treatment plans for ocular diseases. Magnetic resonance imaging (MRI) is presently used in clinical practice for diagnosis confirmation and treatment planning for treatment of retinoblastoma in infants, where it serves as a source of information, complementary to the fundus or ultrasonographic imaging. Here we present a framework to fully automatically segment the eye anatomy for MRI based on 3D active shape models (ASM), and we validate the results and present a proof of concept to automatically segment pathological eyes. Methods and Materials: Manual and automatic segmentation were performed in 24 images of healthy children's eyes (3.29 ± 2.15 years of age). Imaging was performed using a 3-T MRI scanner. The ASM consists of the lens, the vitreous humor, the sclera, and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens, and the optic nerve, and then aligning the model and fitting it to the patient. We validated our segmentation method by using a leave-one-out cross-validation. The segmentation results were evaluated by measuring the overlap, using the Dice similarity coefficient (DSC) and the mean distance error. Results: We obtained a DSC of 94.90 ± 2.12% for the sclera and the cornea, 94.72 ± 1.89% for the vitreous humor, and 85.16 ± 4.91% for the lens. The mean distance error was 0.26 ± 0.09 mm. The entire process took 14 seconds on average per eye. Conclusion: We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor, and the lens, using MRI. We additionally present a proof of concept for fully automatically segmenting eye pathology. This tool reduces the time needed for eye shape delineation and thus can help clinicians when planning eye treatment and confirming the extent of the tumor.
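
    The overlap metric used for evaluation above is the Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|); for binary masks:

```python
import numpy as np

# Dice similarity coefficient between two binary segmentation masks.
def dice(a, b):
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy automatic vs. manual masks (synthetic): 3 overlapping pixels, 4 + 3 total.
auto   = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])
manual = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
print(dice(auto, manual))   # 2*3 / (4+3) ≈ 0.857
```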

  13. Automatic Segmentation of the Eye in 3D Magnetic Resonance Imaging: A Novel Statistical Shape Model for Treatment Planning of Retinoblastoma.

    PubMed

    Ciller, Carlos; De Zanet, Sandro I; Rüegsegger, Michael B; Pica, Alessia; Sznitman, Raphael; Thiran, Jean-Philippe; Maeder, Philippe; Munier, Francis L; Kowal, Jens H; Cuadra, Meritxell Bach

    2015-07-15

    Proper delineation of ocular anatomy in 3-dimensional (3D) imaging is a big challenge, particularly when developing treatment plans for ocular diseases. Magnetic resonance imaging (MRI) is presently used in clinical practice for diagnosis confirmation and treatment planning for treatment of retinoblastoma in infants, where it serves as a source of information, complementary to the fundus or ultrasonographic imaging. Here we present a framework to fully automatically segment the eye anatomy for MRI based on 3D active shape models (ASM), and we validate the results and present a proof of concept to automatically segment pathological eyes. Manual and automatic segmentation were performed in 24 images of healthy children's eyes (3.29 ± 2.15 years of age). Imaging was performed using a 3-T MRI scanner. The ASM consists of the lens, the vitreous humor, the sclera, and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens, and the optic nerve, and then aligning the model and fitting it to the patient. We validated our segmentation method by using a leave-one-out cross-validation. The segmentation results were evaluated by measuring the overlap, using the Dice similarity coefficient (DSC) and the mean distance error. We obtained a DSC of 94.90 ± 2.12% for the sclera and the cornea, 94.72 ± 1.89% for the vitreous humor, and 85.16 ± 4.91% for the lens. The mean distance error was 0.26 ± 0.09 mm. The entire process took 14 seconds on average per eye. We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor, and the lens, using MRI. We additionally present a proof of concept for fully automatically segmenting eye pathology. This tool reduces the time needed for eye shape delineation and thus can help clinicians when planning eye treatment and confirming the extent of the tumor. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Reflector automatic acquisition and pointing based on auto-collimation theodolite.

    PubMed

    Luo, Jun; Wang, Zhiqian; Wen, Zhuoman; Li, Mingzhu; Liu, Shaojin; Shen, Chengwu

    2018-01-01

    An auto-collimation theodolite (ACT) for reflector automatic acquisition and pointing is designed based on the principle of autocollimators and theodolites. First, the principle of auto-collimation and theodolites is reviewed, and then the coaxial ACT structure is developed. Subsequently, the acquisition and pointing strategies for reflector measurements are presented, which first quickly acquires the target over a wide range and then points the laser spot to the charge coupled device zero position. Finally, experiments are conducted to verify the acquisition and pointing performance, including the calibration of the ACT, the comparison of the acquisition mode and pointing mode, and the accuracy measurement in horizontal and vertical directions. In both directions, a measurement accuracy of ±3″ is achieved. The presented ACT is suitable for automatic pointing and monitoring the reflector over a small scanning area and can be used in a wide range of applications such as bridge structure monitoring and cooperative target aiming.

  15. Automatización de la adquisición de campos planos de cielo durante el atardecer

    NASA Astrophysics Data System (ADS)

    Areal, M. B.; Acosta, J. A.; Buccino, A. P.; Perna, P.; Areso, O.; Mauas, P.

    2016-08-01

    Since 2009, the Instituto de Astronomia y Fisica del Espacio has been developing an optical observatory aimed mainly at the detection of extrasolar planets and the monitoring of stellar activity. In this framework, the telescopes Meade LX200 16 Horacio Ghielmetti at the Complejo Astronomico El Leoncito, and MATE (Magnetic Activity and Transiting Exoplanets) at the Estación de Altura of the Observatorio Astronomico Felix Aguilar, were assembled. Both telescopes can operate automatically throughout the night, which generates a massive volume of data. It therefore becomes essential to automate the acquisition and analysis of the regular observations as well as of the calibration images, in particular the flat fields. In this work a method to simplify and automate the acquisition of these images was developed. The method uses sky luminosity values registered by a weather station located next to the observation site.

  16. Reflector automatic acquisition and pointing based on auto-collimation theodolite

    NASA Astrophysics Data System (ADS)

    Luo, Jun; Wang, Zhiqian; Wen, Zhuoman; Li, Mingzhu; Liu, Shaojin; Shen, Chengwu

    2018-01-01

    An auto-collimation theodolite (ACT) for reflector automatic acquisition and pointing is designed based on the principle of autocollimators and theodolites. First, the principle of auto-collimation and theodolites is reviewed, and then the coaxial ACT structure is developed. Subsequently, the acquisition and pointing strategies for reflector measurements are presented, which first quickly acquires the target over a wide range and then points the laser spot to the charge coupled device zero position. Finally, experiments are conducted to verify the acquisition and pointing performance, including the calibration of the ACT, the comparison of the acquisition mode and pointing mode, and the accuracy measurement in horizontal and vertical directions. In both directions, a measurement accuracy of ±3″ is achieved. The presented ACT is suitable for automatic pointing and monitoring the reflector over a small scanning area and can be used in a wide range of applications such as bridge structure monitoring and cooperative target aiming.

  17. Measuring Transmission Efficiencies Of Mass Spectrometers

    NASA Technical Reports Server (NTRS)

    Srivastava, Santosh K.

    1989-01-01

    Coincidence counts yield absolute efficiencies. The system measures mass-dependent transmission efficiencies of mass spectrometers using coincidence-counting techniques reminiscent of those long used to calibrate detectors for subatomic particles. Coincidences between detected ions and the electrons that produced them are counted during operation of the mass spectrometer. Under certain assumptions regarding inelastic scattering of electrons, the electron/ion-coincidence count is a direct measure of the transmission efficiency of the spectrometer. When fully developed, the system will be compact, portable, and suitable for routine calibration of mass spectrometers.

  18. Calibration model maintenance in melamine resin production: Integrating drift detection, smart sample selection and model adaptation.

    PubMed

    Nikzad-Langerodi, Ramin; Lughofer, Edwin; Cernuda, Carlos; Reischer, Thomas; Kantner, Wolfgang; Pawliczek, Marcin; Brandstetter, Markus

    2018-07-12

    The physico-chemical properties of Melamine Formaldehyde (MF) based thermosets are largely influenced by the degree of polymerization (DP) in the underlying resin. On-line supervision of the turbidity point by means of vibrational spectroscopy has recently emerged as a promising technique to monitor the DP of MF resins. However, spectroscopic determination of the DP relies on chemometric models, which are usually sensitive to drifts caused by instrumental and/or sample-associated changes occurring over time. In order to detect the time point when drifts start causing prediction bias, we here explore a universal drift detector based on a faded version of the Page-Hinkley (PH) statistic, which we test in three data streams from an industrial MF resin production process. We employ committee disagreement (CD), computed as the variance of model predictions from an ensemble of partial least squares (PLS) models, as a measure for sample-wise prediction uncertainty and use the PH statistic to detect changes in this quantity. We further explore supervised and unsupervised strategies for (semi-)automatic model adaptation upon detection of a drift. For the former, manual reference measurements are requested whenever statistical thresholds on Hotelling's T² and/or Q-residuals are violated. Models are subsequently re-calibrated using weighted partial least squares in order to increase the influence of newer samples, which increases the flexibility when adapting to new (drifted) states. Unsupervised model adaptation is carried out exploiting the dual antecedent-consequent structure of a recently developed fuzzy systems variant of PLS termed FLEXFIS-PLS. In particular, antecedent parts are updated while maintaining the internal structure of the local linear predictors (i.e. the consequents). We found improved drift detection capability of the CD compared to Hotelling's T² and Q-residuals when used in combination with the proposed PH test. Furthermore, we found that active selection of samples by active learning (AL) used for subsequent model adaptation is advantageous compared to passive (random) selection when a drift leads to persistent prediction bias, allowing more rapid adaptation at lower reference measurement rates. Fully unsupervised adaptation using FLEXFIS-PLS could improve predictive accuracy significantly for light drifts but was not able to fully compensate for prediction bias in case of significant lack of fit w.r.t. the latent variable space. Copyright © 2018 Elsevier B.V. All rights reserved.
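    The Page-Hinkley statistic at the heart of the drift detector admits a compact sketch. The variant below is a generic faded PH test on a stream of uncertainty values (e.g. the committee disagreement); the fading factor, threshold, and bias `delta` are illustrative defaults, not the authors' settings:

    ```python
    def page_hinkley(stream, delta=0.005, threshold=1.0, fading=0.99):
        """Faded Page-Hinkley drift test (a sketch, not the paper's exact variant).

        Accumulates the (faded) deviation of each sample from the running mean;
        an upward drift is flagged when the accumulated deviation rises more
        than `threshold` above its historical minimum. Returns the index of the
        first detected drift, or None if no drift is found.
        """
        mean, cum, cum_min, n = 0.0, 0.0, 0.0, 0
        for i, x in enumerate(stream):
            n += 1
            mean += (x - mean) / n                   # incremental running mean
            cum = fading * cum + (x - mean - delta)  # faded cumulative deviation
            cum_min = min(cum_min, cum)
            if cum - cum_min > threshold:            # upward drift detected
                return i
        return None
    ```

    On a stationary stream the statistic hovers near its minimum; after a level shift it climbs steadily until the threshold is crossed.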

  19. Multisite Evaluation of APEX for Water Quality: I. Best Professional Judgment Parameterization.

    PubMed

    Baffaut, Claire; Nelson, Nathan O; Lory, John A; Senaviratne, G M M M Anomaa; Bhandari, Ammar B; Udawatta, Ranjith P; Sweeney, Daniel W; Helmers, Matt J; Van Liew, Mike W; Mallarino, Antonio P; Wortmann, Charles S

    2017-11-01

    The Agricultural Policy Environmental eXtender (APEX) model is capable of estimating edge-of-field water, nutrient, and sediment transport and is used to assess the environmental impacts of management practices. The current practice is to fully calibrate the model for each site simulation, a task that requires resources and data not always available. The objective of this study was to compare model performance for flow, sediment, and phosphorus transport under two parameterization schemes: a best professional judgment (BPJ) parameterization based on readily available data and a fully calibrated parameterization based on site-specific soil, weather, event flow, and water quality data. The analysis was conducted using 12 datasets at four locations representing poorly drained soils and row-crop production under different tillage systems. Model performance was based on the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (R²), and the regression slope between simulated and measured annualized loads across all site years. Although the BPJ model performance for flow was acceptable (NSE = 0.7) at the annual time step, calibration improved it (NSE = 0.9). Acceptable simulation of sediment and total phosphorus transport (NSE = 0.5 and 0.9, respectively) was obtained only after full calibration at each site. Given the unacceptable performance of the BPJ approach, uncalibrated use of APEX for planning or management purposes may be misleading. Model calibration with water quality data prior to using APEX for simulating sediment and total phosphorus loss is essential. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
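    The Nash-Sutcliffe efficiency used to score the simulations compares the model's squared error against the variance of the observations; a minimal sketch:

    ```python
    import numpy as np

    def nash_sutcliffe(observed, simulated):
        """Nash-Sutcliffe efficiency: 1 - SSE / sum of squared deviations
        of the observations from their mean. NSE = 1 for a perfect fit;
        NSE <= 0 means the model predicts no better than the observed mean."""
        obs = np.asarray(observed, dtype=float)
        sim = np.asarray(simulated, dtype=float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    ```

    By this yardstick, the jump from NSE = 0.7 (BPJ) to 0.9 (calibrated) for flow reflects a roughly threefold reduction in unexplained error variance.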

  20. The Gaia FGK benchmark stars. High resolution spectral library

    NASA Astrophysics Data System (ADS)

    Blanco-Cuaresma, S.; Soubiran, C.; Jofré, P.; Heiter, U.

    2014-06-01

    Context. An increasing number of high-resolution stellar spectra is available today thanks to many past and ongoing spectroscopic surveys. Consequently, numerous methods have been developed to perform an automatic spectral analysis on a massive amount of data. When reviewing published results, biases arise and they need to be addressed and minimized. Aims: We are providing a homogeneous library with a common set of calibration stars (known as the Gaia FGK benchmark stars) that will allow us to assess stellar analysis methods and calibrate spectroscopic surveys. Methods: High-resolution, high signal-to-noise spectra were compiled from different instruments. We developed an automatic process to homogenize the observed data and assess the quality of the resulting library. Results: We built a high-quality library that will facilitate the assessment of spectral analyses and the calibration of present and future spectroscopic surveys. The automation of the process minimizes human subjectivity and ensures reproducibility. Additionally, it allows us to quickly adapt the library to specific needs that can arise from future spectroscopic analyses. Based on NARVAL and HARPS data obtained within the Gaia Data Processing and Analysis Consortium (DPAC) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group, and on data retrieved from the ESO-ADP database. The library of spectra is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/566/A98

  1. Impact of model complexity and multi-scale data integration on the estimation of hydrogeological parameters in a dual-porosity aquifer

    NASA Astrophysics Data System (ADS)

    Tamayo-Mas, Elena; Bianchi, Marco; Mansour, Majdi

    2018-03-01

    This study investigates the impact of model complexity and multi-scale prior hydrogeological data on the interpretation of pumping test data in a dual-porosity aquifer (the Chalk aquifer in England, UK). In order to characterize the hydrogeological properties, different approaches ranging from a traditional analytical solution (Theis approach) to more sophisticated numerical models with automatically calibrated input parameters are applied. Comparisons of results from the different approaches show that neither traditional analytical solutions nor a numerical model assuming a homogenous and isotropic aquifer can adequately explain the observed drawdowns. A better reproduction of the observed drawdowns in all seven monitoring locations is instead achieved when medium- and local-scale prior information about the vertical hydraulic conductivity (K) distribution is used to constrain the model calibration process. In particular, the integration of medium-scale vertical K variations based on flowmeter measurements led to an improvement in the goodness-of-fit of the simulated drawdowns of about 30%. Further improvements (up to 70%) were observed when a simple upscaling approach was used to integrate small-scale K data to constrain the automatic calibration process of the numerical model. Although the analysis focuses on a specific case study, these results provide insights about the representativeness of the estimates of hydrogeological properties based on different interpretations of pumping test data, and promote the integration of multi-scale data for the characterization of heterogeneous aquifers in complex hydrogeological settings.

  2. The measurement and evaluation of bidirectional reflectance characteristics of Dunhuang radiometric calibration test site

    NASA Astrophysics Data System (ADS)

    Zhao, Chun-yan; Li, Xin; Wei, Wei; Zheng, Xiao-bing

    2016-10-01

    With the progress of quantitative remote sensing, the acquisition of surface BRDF becomes more and more important. In order to improve the accuracy of surface BRDF measurements, we introduce a VNIR-SWIR bidirectional reflectance automatic measurement system, developed by the Hefei Institutes of Physical Science (HIPS), that allows in situ measurement of hyperspectral bidirectional reflectance data. Hyperspectral bidirectional reflectance distribution function data sets taken with the BRDF automatic measurement system nominally cover the spectral range between 390 and 2390 nm in 971 bands. In July 2007, September 2008, and June 2011, we acquired a series of BRDF data sets covering the Dunhuang radiometric calibration test site with this measurement system. Such comprehensive and accurate data had not been obtained since the site was established in the 1990s. These data are applied to the calibration of FY-2 and other satellite sensors. Field BRDF data of the Dunhuang site surface reveal strong spectral variability. An anisotropy factor (ANIF), defined as the ratio between the directional reflectance and nadir reflectance over the hemisphere, is introduced as a surrogate measurement for the extent of spectral BRDF effects. The ANIF data show a very high correlation with the solar zenith angle due to multiple scattering effects over a desert site. Since surface geometry, multiple scattering, and BRDF effects are related, these findings may help to derive BRDF model parameters from in-situ BRDF measurements and remotely sensed hyperspectral data sets.
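    As defined above, the ANIF is simply the ratio of each directional reflectance to the nadir reflectance, computed per spectral band; a minimal sketch:

    ```python
    import numpy as np

    def anisotropy_factor(directional_reflectance, nadir_reflectance):
        """Anisotropy factor (ANIF): ratio of the reflectance measured at a
        given viewing geometry to the nadir reflectance, per band. ANIF = 1
        means the surface reflects like at nadir; departures from 1 quantify
        the strength of BRDF effects."""
        r = np.asarray(directional_reflectance, dtype=float)
        nadir = np.asarray(nadir_reflectance, dtype=float)
        return r / nadir
    ```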

  3. Automatic brain caudate nuclei segmentation and classification in diagnostic of Attention-Deficit/Hyperactivity Disorder.

    PubMed

    Igual, Laura; Soliva, Joan Carles; Escalera, Sergio; Gimeno, Roger; Vilarroya, Oscar; Radeva, Petia

    2012-12-01

    We present a fully automatic diagnostic imaging test for Attention-Deficit/Hyperactivity Disorder diagnosis assistance based on previously reported evidence of caudate nucleus volumetric abnormalities. The proposed method consists of the following steps: a new automatic method for external and internal segmentation of the caudate based on Machine Learning methodologies; and the definition of a set of new volume relation features, 3D Dissociated Dipoles, used for caudate representation and classification. We separately validate the contributions using real data from a pediatric population and show precise internal caudate segmentation and discrimination power of the diagnostic test, showing significant performance improvements in comparison to other state-of-the-art methods. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Automatic three-dimensional measurement of large-scale structure based on vision metrology.

    PubMed

    Zhu, Zhaokun; Guan, Banglei; Zhang, Xiaohu; Li, Daokui; Yu, Qifeng

    2014-01-01

    All relevant key techniques involved in photogrammetric vision metrology for fully automatic 3D measurement of large-scale structure are studied. A new kind of coded target consisting of circular retroreflective discs is designed, and corresponding detection and recognition algorithms based on blob detection and clustering are presented. Then a three-stage strategy starting with view clustering is proposed to achieve automatic network orientation. As for matching of noncoded targets, the concept of matching path is proposed, and matches for each noncoded target are found by determination of the optimal matching path, based on a novel voting strategy, among all possible ones. Experiments on a fixed keel of airship have been conducted to verify the effectiveness and measuring accuracy of the proposed methods.

  5. Machine for Automatic Bacteriological Pour Plate Preparation

    PubMed Central

    Sharpe, A. N.; Biggs, D. R.; Oliver, R. J.

    1972-01-01

    A fully automatic system for preparing poured plates for bacteriological analyses has been constructed and tested. The machine can make decimal dilutions of bacterial suspensions, dispense measured amounts into petri dishes, add molten agar, mix the dish contents, and label the dishes with sample and dilution numbers at the rate of 2,000 dishes per 8-hr day. In addition, the machine can be programmed to select different media so that plates for different types of bacteriological analysis may be made automatically from the same sample. The machine uses only the components of the media and sterile polystyrene petri dishes; requirements for all other materials, such as sterile pipettes and capped bottles of diluents and agar, are eliminated. PMID:4560475

  6. Automatic extraction of road features in urban environments using dense ALS data

    NASA Astrophysics Data System (ADS)

    Soilán, Mario; Truong-Hong, Linh; Riveiro, Belén; Laefer, Debra

    2018-02-01

    This paper describes a methodology that automatically extracts semantic information from urban ALS data for urban parameterization and road network definition. First, building façades are segmented from the ground surface by combining knowledge-based information with both voxel and raster data. Next, heuristic rules and unsupervised learning are applied to the ground surface data to distinguish sidewalk and pavement points as a means for curb detection. Then radiometric information is employed for road marking extraction. Using high-density ALS data from Dublin, Ireland, this fully automatic workflow was able to generate an F-score close to 95% for pavement and sidewalk identification with a resolution of 20 cm and better than 80% for road marking detection.

  7. Automatic detection of sleep macrostructure based on a sensorized T-shirt.

    PubMed

    Bianchi, Anna M; Mendez, Martin O

    2010-01-01

    In the present work we apply a fully automatic procedure to the analysis of signals coming from a sensorized T-shirt, worn during the night, for sleep evaluation. The quality and reliability of the signals recorded through the T-shirt were previously tested, while the employed algorithms for feature extraction and sleep classification were previously developed on standard ECG recordings, and the obtained classification was compared to the standard clinical practice based on polysomnography (PSG). In the present work we combined T-shirt recordings and automatic classification and could obtain reliable sleep profiles, i.e. the sleep classification into WAKE, REM (rapid eye movement) and NREM stages, based on heart rate variability (HRV), respiration and movement signals.

  8. First full dynamic range calibration of the JUNGFRAU photon detector

    NASA Astrophysics Data System (ADS)

    Redford, S.; Andrä, M.; Barten, R.; Bergamaschi, A.; Brückner, M.; Dinapoli, R.; Fröjdh, E.; Greiffenberg, D.; Lopez-Cuenca, C.; Mezza, D.; Mozzanica, A.; Ramilli, M.; Ruat, M.; Ruder, C.; Schmitt, B.; Shi, X.; Thattil, D.; Tinti, G.; Vetter, S.; Zhang, J.

    2018-01-01

    The JUNGFRAU detector is a charge integrating hybrid silicon pixel detector developed at the Paul Scherrer Institut for photon science applications, in particular for the upcoming free electron laser SwissFEL. With a high dynamic range, analogue readout, low noise and three automatically switching gains, JUNGFRAU promises excellent performance not only at XFELs but also at synchrotrons in areas such as protein crystallography, ptychography, pump-probe and time resolved measurements. To achieve its full potential, the detector must be calibrated on a pixel-by-pixel basis. This contribution presents the current status of the JUNGFRAU calibration project, in which a variety of input charge sources are used to parametrise the energy response of the detector across four orders of magnitude of dynamic range. Building on preliminary studies, the first full calibration procedure of a JUNGFRAU 0.5 Mpixel module is described. The calibration is validated using alternative sources of charge deposition, including laboratory experiments and measurements at ESRF and LCLS. The findings from these measurements are presented. Calibrated modules have already been used in proof-of-principle style protein crystallography experiments at the SLS. A first look at selected results is shown. Aspects such as the conversion of charge to number of photons, treatment of multi-size pixels and the origin of non-linear response are also discussed.

  9. Influence of local calibration on the quality of online wet weather discharge monitoring: feedback from five international case studies.

    PubMed

    Caradot, Nicolas; Sonnenberg, Hauke; Rouault, Pascale; Gruber, Günter; Hofer, Thomas; Torres, Andres; Pesci, Maria; Bertrand-Krajewski, Jean-Luc

    2015-01-01

    This paper reports on experiences gathered from five online monitoring campaigns in the sewer systems of Berlin (Germany), Graz (Austria), Lyon (France) and Bogota (Colombia) using ultraviolet-visible (UV-VIS) spectrometers and turbidimeters. Online probes are useful for the measurement of highly dynamic processes, e.g. combined sewer overflows (CSO), storm events, and river impacts. The influence of local calibration on the quality of online chemical oxygen demand (COD) measurements of wet weather discharges has been assessed. Results underline the need to establish local calibration functions for both UV-VIS spectrometers and turbidimeters. It is suggested that practitioners locally calibrate their probes using at least 15-20 samples. However, these samples should be collected over several events and cover most of the natural variability of the measured concentration. For this reason, the use of automatic peristaltic samplers in parallel to online monitoring is recommended with short representative sampling campaigns during wet weather discharges. Using reliable calibration functions, COD loads of CSO and storm events can be estimated with a relative uncertainty of approximately 20%. If no local calibration is established, concentrations and loads are estimated with a high error rate, questioning the reliability and meaning of the online measurement. Similar results have been obtained for total suspended solids measurements.
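    A local calibration function maps raw probe readings to lab-analyzed concentrations. A minimal sketch, assuming a simple linear calibration fitted by least squares to grab samples (the function name and all numbers are illustrative, not from the study):

    ```python
    import numpy as np

    def local_calibration(sensor_readings, lab_values):
        """Fit a local linear calibration f(sensor) = a * sensor + b by least
        squares against co-collected lab analyses (e.g. the 15-20 grab samples
        recommended above), and return the calibration function."""
        a, b = np.polyfit(np.asarray(sensor_readings, dtype=float),
                          np.asarray(lab_values, dtype=float), deg=1)
        return lambda x: a * np.asarray(x, dtype=float) + b

    # hypothetical probe/lab pairs for illustration only
    cal = local_calibration([10, 20, 30, 40], [25, 45, 65, 85])
    ```

    In practice the calibration samples should span several events so that the fitted line covers the concentration range seen during wet weather discharges.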

  10. Geometrical calibration of an AOTF hyper-spectral imaging system

    NASA Astrophysics Data System (ADS)

    Špiclin, Žiga; Katrašnik, Jaka; Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan

    2010-02-01

    Optical aberrations present an important problem in optical measurements. Geometrical calibration of an imaging system is therefore of the utmost importance for achieving accurate optical measurements. In hyper-spectral imaging systems, the problem of optical aberrations is even more pronounced because optical aberrations are wavelength dependent. Geometrical calibration must therefore be performed over the entire spectral range of the hyper-spectral imaging system, which is usually far greater than that of the visible light spectrum. This problem is especially severe in AOTF (Acousto-Optic Tunable Filter) hyper-spectral imaging systems, as the diffraction of light in AOTF filters is dependent on both wavelength and angle of incidence. Geometrical calibration of the hyper-spectral imaging system was performed using a stable caliber of known dimensions, which was imaged at different wavelengths over the entire spectral range. The acquired images were then automatically registered to the caliber model by both parametric and nonparametric transformations based on B-splines and by optimizing the normalized correlation coefficient. The calibration method was tested on an AOTF hyper-spectral imaging system in the near infrared spectral range. The results indicated substantial wavelength-dependent optical aberration that is especially pronounced in the spectral range closer to the infrared part of the spectrum. The calibration method was able to accurately characterize the aberrations and produce transformations for efficient sub-pixel geometrical calibration over the entire spectral range, finally yielding better spatial resolution of the hyper-spectral imaging system.

  11. Landsat 8 on-orbit characterization and calibration system

    USGS Publications Warehouse

    Micijevic, Esad; Morfitt, Ron; Choate, Michael J.

    2011-01-01

    The Landsat Data Continuity Mission (LDCM) is planning to launch the Landsat 8 satellite in December 2012, which continues an uninterrupted record of consistently calibrated globally acquired multispectral images of the Earth started in 1972. The satellite will carry two imaging sensors: the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS). The OLI will provide visible, near-infrared and short-wave infrared data in nine spectral bands while the TIRS will acquire thermal infrared data in two bands. Both sensors have a pushbroom design and consequently, each has a large number of detectors to be characterized. Image and calibration data downlinked from the satellite will be processed by the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center using the Landsat 8 Image Assessment System (IAS), a component of the Ground System. In addition to extracting statistics from all Earth images acquired, the IAS will process and trend results from analysis of special calibration acquisitions, such as solar diffuser, lunar, shutter, night, lamp and blackbody data, and preselected calibration sites. The trended data will be systematically processed and analyzed, and calibration and characterization parameters will be updated using both automatic and customized manual tools. This paper describes the analysis tools and the system developed to monitor and characterize on-orbit performance and calibrate the Landsat 8 sensors and image data products.

  12. Demonstration of a vectorial optical field generator with adaptive close loop control.

    PubMed

    Chen, Jian; Kong, Lingjiang; Zhan, Qiwen

    2017-12-01

    We experimentally demonstrate a vectorial optical field generator (VOF-Gen) with an adaptive close loop control. The close loop control capability is illustrated with the calibration of polarization modulation of the system. To calibrate the polarization ratio modulation, we generate 45° linearly polarized beam and make it propagate through a linear analyzer whose transmission axis is orthogonal to the incident beam. For the retardation calibration, circularly polarized beam is employed and a circular polarization analyzer with the opposite chirality is placed in front of the CCD as the detector. In both cases, the close loop control automatically changes the value of the corresponding calibration parameters in the pre-set ranges to generate the phase patterns applied to the spatial light modulators and records the intensity distribution of the output beam by the CCD camera. The optimized calibration parameters are determined corresponding to the minimum total intensity in each case. Several typical kinds of vectorial optical beams are created with and without the obtained calibration parameters, and the full Stokes parameter measurements are carried out to quantitatively analyze the polarization distribution of the generated beams. The comparisons among these results clearly show that the obtained calibration parameters could remarkably improve the accuracy of the polarization modulation of the VOF-Gen, especially for generating elliptically polarized beam with large ellipticity, indicating the significance of the presented close loop in enhancing the performance of the VOF-Gen.

  13. The calculation of aircraft collision probabilities

    DOT National Transportation Integrated Search

    1971-10-01

    The basic limitation of, air traffic compression, from the safety point of view, is the increased risk of collision due to reduced separations. In order to evolve new procedures, and eventually a fully, automatic system, it is desirable to have a mea...

  14. Radio and Optical Telescopes for School Students and Professional Astronomers

    NASA Astrophysics Data System (ADS)

    Hosmer, Laura; Langston, G.; Heatherly, S.; Towner, A. P.; Ford, J.; Simon, R. S.; White, S.; O'Neil, K. L.; Haipslip, J.; Reichart, D.

    2013-01-01

    The NRAO 20m telescope is now on-line as a part of UNC's Skynet worldwide telescope network. The NRAO is completing integration of radio astronomy tools with the Skynet web interface. We present the web interface and astronomy projects that allow students and astronomers from all over the country to become radio astronomers. The 20 meter radio telescope at NRAO in Green Bank, WV is dedicated to public education and is also part of an experiment in public funding for astronomy. The telescope has a fantastic new web-based interface with priority queuing, accommodating priority for paying customers and enabling free use of otherwise unused time. This revival included many software and hardware improvements, including automatic calibration and improved time integration for better data processing, and a new ultra-high-resolution spectrometer. This new spectrometer is optimized for very narrow spectral lines, which will allow astronomers to study complex molecules and very cold regions of space in remarkable detail. In accordance with focusing on broader impacts, many public outreach and high school education activities have been completed, with many more confirmed for the future. The 20 meter is now a fully automated, powerful tool capable of professional grade results available to anyone in the world. Drop by our poster and try out real-time telescope control!

  15. Tuned grid generation with ICEM CFD

    NASA Technical Reports Server (NTRS)

    Wulf, Armin; Akdag, Vedat

    1995-01-01

    ICEM CFD is a CAD-based grid generation package that supports multiblock structured, unstructured tetrahedral and unstructured hexahedral grids. Major development efforts have been spent to extend ICEM CFD's multiblock structured and hexahedral unstructured grid generation capabilities. The modules added are a parametric grid generation module and a semi-automatic hexahedral grid generation module. A fully automatic version of the hexahedral grid generation module, for grids around a set of predefined objects in rectilinear enclosures, has been developed. These modules will be presented, the procedures used will be described, and examples will be discussed.

  16. Automatic laser beam alignment using blob detection for an environment monitoring spectroscopy

    NASA Astrophysics Data System (ADS)

    Khidir, Jarjees; Chen, Youhua; Anderson, Gary

    2013-05-01

    This paper describes a fully automated system to align an infra-red laser beam with a small retro-reflector over a wide range of distances. The components were developed and tested specifically for an open-path spectrometer gas detection system. Using blob detection from the OpenCV library, an automatic alignment algorithm was designed to achieve fast and accurate target detection against a complex background environment. Test results are presented to show that the proposed algorithm has been successfully applied at various target distances and under various environmental conditions.
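    The paper's target detection relies on OpenCV blob detection; the core idea can be illustrated with a dependency-free stand-in (threshold the image, then locate the bright return by an intensity-weighted centroid). This sketch is not the authors' algorithm, only the underlying principle:

    ```python
    import numpy as np

    def find_spot_centroid(image, threshold=128):
        """Locate a bright spot (e.g. a retro-reflector return) in a
        gray-scale image: keep pixels above `threshold` and compute their
        intensity-weighted centroid, giving a sub-pixel spot position.
        Returns (cx, cy), or None if no pixel exceeds the threshold."""
        img = np.asarray(image, dtype=float)
        mask = img > threshold
        if not mask.any():
            return None                      # no target in the field of view
        ys, xs = np.nonzero(mask)
        weights = img[ys, xs]
        cy = (ys * weights).sum() / weights.sum()
        cx = (xs * weights).sum() / weights.sum()
        return cx, cy
    ```

    An alignment loop would then steer the beam to drive this centroid toward the image position corresponding to the retro-reflector.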

  17. Fermentation process tracking through enhanced spectral calibration modeling.

    PubMed

    Triadaphillou, Sophia; Martin, Elaine; Montague, Gary; Norden, Alison; Jeffkins, Paul; Stimpson, Sarah

    2007-06-15

    The FDA process analytical technology (PAT) initiative will materialize in a significant increase in the number of installations of spectroscopic instrumentation. However, to attain the greatest benefit from the data generated, there is a need for calibration procedures that extract the maximum information content. For example, in fermentation processes, the interpretation of the resulting spectra is challenging as a consequence of the large number of wavelengths recorded, the underlying correlation structure that is evident between the wavelengths, and the impact of the measurement environment. Approaches to the development of calibration models have been based on the application of partial least squares (PLS) either to the full spectral signature or to a subset of wavelengths. This paper presents a new approach to calibration modeling that combines a wavelength selection procedure, spectral window selection (SWS), in which windows of wavelengths are automatically selected and subsequently used as the basis of the calibration model. However, due to the non-uniqueness of the windows selected when the algorithm is executed repeatedly, multiple models are constructed and these are then combined using stacking, thereby increasing the robustness of the final calibration model. The methodology is applied to data generated during the monitoring of broth concentrations in an industrial fermentation process from on-line near-infrared (NIR) and mid-infrared (MIR) spectrometers. It is shown that the proposed calibration modeling procedure outperforms traditional calibration procedures, as well as enabling the identification of the critical regions of the spectra with regard to the fermentation process.
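    The window-plus-stacking idea can be sketched in a much simplified form. The original work fits PLS models on automatically selected windows and combines them by stacking; the toy version below (our own simplification, with hypothetical function names) fits an ordinary least-squares model per fixed window and averages the predictions as a stand-in for the stacked combination.

    ```python
    import numpy as np

    def window_models(X, y, window, step):
        """Fit one least-squares model per spectral window.
        Returns a list of (slice, coefficients) pairs."""
        models = []
        for start in range(0, X.shape[1] - window + 1, step):
            sl = slice(start, start + window)
            coef, *_ = np.linalg.lstsq(X[:, sl], y, rcond=None)
            models.append((sl, coef))
        return models

    def stacked_predict(models, X):
        """Average the window models' predictions (unweighted stand-in
        for the stacking step of the paper)."""
        preds = np.stack([X[:, sl] @ coef for sl, coef in models])
        return preds.mean(axis=0)
    ```

    A real SWS implementation would select the windows from the data and learn stacking weights on held-out samples rather than averaging uniformly.
    
    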

  18. Restoration and Future Analysis of the Apollo Lunar Dust Detector Data

    NASA Astrophysics Data System (ADS)

    McBride, M.; Williams, D. R.; Hills, H. K.

    2012-12-01

    The Dust, Thermal and Radiation Engineering Measurement (DTREM) packages mounted on the central stations of the Apollo 11, 12, 14, and 15 ALSEPs (Apollo Lunar Surface Experiments Packages) measured the outputs of exposed solar cells and thermistors over time. The goal of the experiment, also commonly known as the dust detector, was to study the long-term effects of dust, radiation, and temperature at the lunar surface on solar cells. The original data were never archived with NASA, with the exception of 38 reels of microfilm archived at the National Space Science Data Center. These reels contained images of computer printouts of times and raw and calibrated DTREM data for Apollo 14 and 15. The high volume of data is not readily accessible in this form. The raw telemetry for the DTREM also exists as part of the ALSEP housekeeping (Word 33) telemetry. As part of the lunar data restoration effort we are converting the telemetry to digital tables containing the fully calibrated dust detector data. These restored data sets will be archived through the Lunar Data Node of the Planetary Data System (PDS) for general use by the lunar community. In this form, these data will finally be amenable to study by modern techniques not available during the Apollo era. Over the past year, analysis of the correlation between the NSSDC microfilm record and the raw telemetry was used to determine the translations and calibrations necessary to convert the digital telemetry into a fully calibrated data set giving temperatures and solar cell outputs over time. The final data set consists of a reading every 54 seconds over periods of 5 years for Apollo 14 and 15. The sheer quantity of data shows why a fully digital form is necessary for proper analysis. The Apollo 11 DTREM was designed for a short lifetime and returned less than two lunations of data. 
We do not currently have the translation and calibration information necessary to convert the raw telemetry to a calibrated data set for Apollo 11, but we have found some preliminary information which we believe will lead to full restoration of this data set. The dust detector on Apollo 12 was configured differently from the other DTREMs. While the Apollo 11, 14, and 15 instruments had three upward-facing solar cells, one glass-covered, one uncovered, and one pre-irradiated and glass-covered, the Apollo 12 dust detector had three identical cells with only one facing upwards. The other two faced to the east and west, respectively. For Apollo 12 we have the raw telemetry but not the necessary calibration information to fully restore these data sets. As with Apollo 11, we are attempting to obtain the required information to translate the raw telemetry counts into voltages and temperatures and apply the correct calibrations. We are also currently analyzing the restored and raw data and will present results of our analysis, including revisiting the earlier published Apollo results. The scientific community has shown great interest in the outcome of these restorations. The microfilm data have been scanned and converted to PDS data sets which have undergone review and will be archived. The digital data sets will soon be available to the full lunar community after restoration has been completed and they have undergone PDS review and validation.

  19. Advances of FishNet towards a fully automatic monitoring system for fish migration

    NASA Astrophysics Data System (ADS)

    Kratzert, Frederik; Mader, Helmut

    2017-04-01

    Restoring the continuum of river networks, affected by anthropogenic constructions, is one of the main objectives of the Water Framework Directive. Regarding fish migration, fish passes are a widely used measure. Often the functionality of these fish passes needs to be assessed by monitoring. Over the last years, we developed a new semi-automatic monitoring system (FishCam) which allows the contact free observation of fish migration in fish passes through videos. The system consists of a detection tunnel, equipped with a camera, a motion sensor and artificial light sources, as well as a software (FishNet), which helps to analyze the video data. In its latest version, the software is capable of detecting and tracking objects in the videos as well as classifying them into "fish" and "no-fish" objects. This allows filtering out the videos containing at least one fish (approx. 5 % of all grabbed videos) and reduces the manual labor to the analysis of these videos. In this state the entire system has already been used in over 20 different fish passes across Austria for a total of over 140 months of monitoring resulting in more than 1.4 million analyzed videos. As a next step towards a fully automatic monitoring system, a key feature is the automatized classification of the detected fish into their species, which is still an unsolved task in a fully automatic monitoring environment. Recent advances in the field of machine learning, especially image classification with deep convolutional neural networks, sound promising in order to solve this problem. In this study, different approaches for the fish species classification are tested. Besides an image-only based classification approach using deep convolutional neural networks, various methods that combine the power of convolutional neural networks as image descriptors with additional features, such as the fish length and the time of appearance, are explored. 
To facilitate the development and testing phase of this approach, a subset of six fish species of Austrian rivers and streams is considered in this study. All scripts and the data to reproduce the results of this study will be made publicly available on GitHub* at the beginning of the EGU2017 General Assembly. * https://github.com/kratzert/EGU2017_public/
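    One of the approaches described above combines a CNN image descriptor with additional features such as the fish length and the time of appearance. A minimal sketch of that feature fusion step is below; the function name and the cyclical sin/cos encoding of the time of day are our own assumptions, not details from the abstract.

    ```python
    import numpy as np

    def combine_features(cnn_features, length_cm, hour_of_day):
        """Concatenate a CNN image descriptor with scalar side information
        into a single input vector for a downstream classifier. The hour is
        encoded cyclically so 23:00 and 01:00 end up close together."""
        angle = 2.0 * np.pi * hour_of_day / 24.0
        extras = np.array([length_cm, np.sin(angle), np.cos(angle)])
        return np.concatenate([np.asarray(cnn_features, dtype=float), extras])
    ```

    The combined vector would then be fed to any classifier (e.g. a fully connected layer or gradient-boosted trees) to predict the species.
    
    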

  20. Calibration Method for IATS and Application in Multi-Target Monitoring Using Coded Targets

    NASA Astrophysics Data System (ADS)

    Zhou, Yueyin; Wagner, Andreas; Wunderlich, Thomas; Wasmeier, Peter

    2017-06-01

    The technique of Image Assisted Total Stations (IATS) has been studied for over ten years and comprises two major parts: the calibration procedure, which establishes the relationship between the camera system and the theodolite system, and the automatic detection of targets in the image by various methods from photogrammetry or computer vision. Several calibration methods have been developed, mostly using prototypes with an add-on camera rigidly mounted on the total station; however, these prototypes are not commercially available. This paper proposes a calibration method based on the Leica MS50, which has two built-in cameras, each with a resolution of 2560 × 1920 px: an overview camera and a telescope (on-axis) camera. Our work in this paper is based on the on-axis camera, which uses the 30× magnification of the telescope. The calibration involves 7 parameters to estimate. We use coded targets, which are common tools for orientation in photogrammetry, to detect different targets in IATS images instead of prisms and traditional ATR functions. We test and verify the efficiency and stability of this monitoring method with multiple targets.

  1. A combined deep-learning and deformable-model approach to fully automatic segmentation of the left ventricle in cardiac MRI.

    PubMed

    Avendi, M R; Kheradvar, Arash; Jafarkhani, Hamid

    2016-05-01

    Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from ground truth data. Convolutional networks are employed to automatically detect the LV chamber in the MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the-art methods. Excellent agreement with the ground truth was achieved. The validation metrics (percentage of good contours, Dice metric, average perpendicular distance, and conformity) were computed as 96.69%, 0.94, 1.81 mm, and 0.86, versus ranges of 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm, and 0.67-0.78 obtained by other methods. Copyright © 2016 Elsevier B.V. All rights reserved.
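    One of the validation metrics quoted above, the Dice metric, is straightforward to compute from a pair of binary masks; a minimal sketch:

    ```python
    import numpy as np

    def dice(a, b):
        """Dice similarity coefficient between two binary masks:
        2 * |A ∩ B| / (|A| + |B|), equal to 1 for a perfect match."""
        a = np.asarray(a).astype(bool)
        b = np.asarray(b).astype(bool)
        intersection = np.logical_and(a, b).sum()
        return 2.0 * intersection / (a.sum() + b.sum())
    ```

    The other metrics (average perpendicular distance, conformity) require contour extraction and are correspondingly more involved.
    
    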

  2. Automatic segmentation method of pelvic floor levator hiatus in ultrasound using a self-normalizing neural network

    PubMed Central

    Dietz, Hans Peter; D’hooge, Jan; Barratt, Dean; Deprest, Jan

    2018-01-01

    Abstract. Segmentation of the levator hiatus in ultrasound allows the extraction of biometrics, which are of importance for pelvic floor disorder assessment. We present a fully automatic method using a convolutional neural network (CNN) to outline the levator hiatus in a two-dimensional image extracted from a three-dimensional ultrasound volume. In particular, our method uses a recently developed scaled exponential linear unit (SELU) as a nonlinear self-normalizing activation function, which for the first time has been applied in medical imaging with CNN. SELU has important advantages such as being parameter-free and mini-batch independent, which may help to overcome memory constraints during training. A dataset with 91 images from 35 patients during Valsalva, contraction, and rest, all labeled by three operators, is used for training and evaluation in a leave-one-patient-out cross validation. Results show a median Dice similarity coefficient of 0.90 with an interquartile range of 0.08, with equivalent performance to the three operators (with a Williams’ index of 1.03), and outperforming a U-Net architecture without the need for batch normalization. We conclude that the proposed fully automatic method achieved equivalent accuracy in segmenting the pelvic floor levator hiatus compared to a previous semiautomatic approach. PMID:29340289
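    The SELU activation referred to above is parameter-free, with fixed constants derived in Klambauer et al. (2017); a minimal NumPy version might look like this (the function name is ours):

    ```python
    import numpy as np

    # Fixed SELU constants from Klambauer et al. (2017); they are chosen so
    # that activations converge toward zero mean and unit variance.
    ALPHA = 1.6732632423543772
    SCALE = 1.0507009873554805

    def selu(x):
        """Scaled exponential linear unit: scale * x for x > 0,
        scale * alpha * (exp(x) - 1) otherwise."""
        x = np.asarray(x, dtype=float)
        return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))
    ```

    Because the constants are fixed, no batch normalization layers (and hence no mini-batch statistics) are needed, which is the memory advantage the abstract mentions.
    
    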

  3. Automatic segmentation method of pelvic floor levator hiatus in ultrasound using a self-normalizing neural network.

    PubMed

    Bonmati, Ester; Hu, Yipeng; Sindhwani, Nikhil; Dietz, Hans Peter; D'hooge, Jan; Barratt, Dean; Deprest, Jan; Vercauteren, Tom

    2018-04-01

    Segmentation of the levator hiatus in ultrasound allows the extraction of biometrics, which are of importance for pelvic floor disorder assessment. We present a fully automatic method using a convolutional neural network (CNN) to outline the levator hiatus in a two-dimensional image extracted from a three-dimensional ultrasound volume. In particular, our method uses a recently developed scaled exponential linear unit (SELU) as a nonlinear self-normalizing activation function, which for the first time has been applied in medical imaging with CNN. SELU has important advantages such as being parameter-free and mini-batch independent, which may help to overcome memory constraints during training. A dataset with 91 images from 35 patients during Valsalva, contraction, and rest, all labeled by three operators, is used for training and evaluation in a leave-one-patient-out cross validation. Results show a median Dice similarity coefficient of 0.90 with an interquartile range of 0.08, with equivalent performance to the three operators (with a Williams' index of 1.03), and outperforming a U-Net architecture without the need for batch normalization. We conclude that the proposed fully automatic method achieved equivalent accuracy in segmenting the pelvic floor levator hiatus compared to a previous semiautomatic approach.

  4. Automatic analysis of quantitative NMR data of pharmaceutical compound libraries.

    PubMed

    Liu, Xuejun; Kolpak, Michael X; Wu, Jiejun; Leo, Gregory C

    2012-08-07

    In drug discovery, chemical library compounds are usually dissolved in DMSO at a certain concentration and then distributed to biologists for target screening. Quantitative (1)H NMR (qNMR) is the preferred method for determining the actual concentrations of compounds because the relative single-proton peak areas of two chemical species represent the relative molar concentrations of the two compounds, that is, the compound of interest and a calibrant. Thus, an analyte concentration can be determined using a calibration compound at a known concentration. One particularly time-consuming step in the qNMR analysis of compound libraries is the manual integration of peaks. This report presents an automated method for performing this task without prior knowledge of compound structures, using an external calibration spectrum. The script for automated integration is fast and adaptable to large-scale data sets, eliminating the need for manual integration in ~80% of the cases.
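    The concentration relation described above (single-proton-normalized peak area of analyte versus calibrant) reduces to one line of arithmetic. The helper below is a hypothetical illustration of that relation, not the authors' script:

    ```python
    def qnmr_concentration(area_analyte, n_h_analyte, area_cal, n_h_cal, conc_cal):
        """Analyte concentration from qNMR peak areas.
        Each area is divided by the number of protons giving rise to the
        peak, so the ratio of per-proton areas equals the molar ratio."""
        per_proton_analyte = area_analyte / n_h_analyte
        per_proton_cal = area_cal / n_h_cal
        return conc_cal * per_proton_analyte / per_proton_cal
    ```

    For example, a 3-proton analyte peak with the same per-proton area as a 2-proton calibrant peak implies equal molar concentrations.
    
    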

  5. Automated image quality assessment for chest CT scans.

    PubMed

    Reeves, Anthony P; Xie, Yiting; Liu, Shuang

    2018-02-01

    Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. The method has been evaluated on both phantom and real low-dose chest CT scans and results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles show relevant differences between different scanners and protocols. Automated image quality assessment may be useful for quality control for lung cancer screening and may enable performance improvements to automated computer analysis methods. © 2017 American Association of Physicists in Medicine.

  6. SMAP L-Band Microwave Radiometer: Instrument Design and First Year on Orbit

    NASA Technical Reports Server (NTRS)

    Piepmeier, Jeffrey R.; Focardi, Paolo; Horgan, Kevin; Knuble, Joseph; Ehsan, Negar; Lucey, Jared; Brambora, Clifford; Brown, Paula R.; Hoffman, Pamela J.; French, Richard T.; hide

    2017-01-01

    The Soil Moisture Active Passive (SMAP) L-band microwave radiometer is a conical scanning instrument designed to measure soil moisture with 4 percent volumetric accuracy at 40-kilometer spatial resolution. SMAP is NASA's first Earth Systematic Mission developed in response to its first Earth science decadal survey. Here, the design is reviewed and the results of its first year on orbit are presented. Unique features of the radiometer include a large 6-meter rotating reflector, a fully polarimetric radiometer receiver with internal calibration, and radio-frequency interference detection and filtering hardware. The radiometer electronics are thermally controlled to achieve good radiometric stability. Analyses of on-orbit results indicate the electrical and thermal characteristics of the electronics and internal calibration sources are very stable and promote excellent gain stability. The radiometer NEdT (noise-equivalent differential temperature) is less than 1 kelvin for 17-millisecond samples. The gain spectrum exhibits low noise at frequencies greater than 1 megahertz, with 1/f (pink) noise rising at longer time scales that is fully captured by the internal calibration scheme. Results from sky observations and global swath imagery of all four Stokes antenna temperatures indicate the instrument is operating as expected.

  7. Virtual Instrument for Determining Rate Constant of Second-Order Reaction by pX Based on LabVIEW 8.0

    PubMed Central

    Meng, Hu; Li, Jiang-Yuan; Tang, Yong-Huai

    2009-01-01

    A virtual instrument system based on LabVIEW 8.0 for an ion analyzer, which can measure and analyze ion concentrations in solution, has been developed; it comprises a homemade conditioning circuit, a data acquisition board, and a computer. It can automatically calibrate slope, temperature, and positioning. When applied to determining the reaction rate constant of a second-order reaction by pX, it achieved live acquisition, real-time display, and automatic processing of test data, as well as generation of result reports and other functions. This method greatly simplifies the experimental operation, avoids the complicated procedures of manual data processing and the associated personal error, and improves the accuracy and repeatability of the experimental results. PMID:19730752
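    For a second-order reaction the integrated rate law is 1/[A] = 1/[A]_0 + kt, so the rate constant is the slope of a linear fit of 1/[A] against time. The sketch below is our illustration of that fit, assuming concentrations have already been derived from the pX electrode readings:

    ```python
    import numpy as np

    def second_order_rate_constant(t, conc):
        """Estimate k for a second-order reaction by fitting the line
        1/[A] = 1/[A]_0 + k*t; returns the slope k."""
        inv_conc = 1.0 / np.asarray(conc, dtype=float)
        k, _intercept = np.polyfit(np.asarray(t, dtype=float), inv_conc, 1)
        return k
    ```

    In the instrument, the same fit would run on the live data stream as each pX sample arrives.
    
    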

  8. A robust and hierarchical approach for the automatic co-registration of intensity and visible images

    NASA Astrophysics Data System (ADS)

    González-Aguilera, Diego; Rodríguez-Gonzálvez, Pablo; Hernández-López, David; Luis Lerma, José

    2012-09-01

    This paper presents a new robust approach to integrate intensity and visible images which have been acquired with a terrestrial laser scanner and a calibrated digital camera, respectively. In particular, an automatic and hierarchical method for the co-registration of both sensors is developed. The approach integrates several existing solutions to improve the performance of the co-registration between range-based and visible images: the Affine Scale-Invariant Feature Transform (A-SIFT), the epipolar geometry, the collinearity equations, the Groebner basis solution and the RANdom SAmple Consensus (RANSAC), integrating a voting scheme. The approach presented herein improves the existing co-registration approaches in automation, robustness, reliability and accuracy.
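    RANSAC, one ingredient of the voting scheme above, is easiest to illustrate on the simplest possible model, a 2-D line. This sketch (not the authors' implementation, which estimates camera pose) samples minimal point pairs and keeps the hypothesis with the most inliers:

    ```python
    import numpy as np

    def ransac_line(x, y, n_iter=200, tol=0.1, rng=None):
        """RANSAC fit of y = a*x + b: repeatedly fit a line to a random
        minimal sample (2 points), count inliers within `tol`, and keep
        the hypothesis supported by the most inliers."""
        rng = np.random.default_rng(rng)
        best, best_count = None, -1
        for _ in range(n_iter):
            i, j = rng.choice(len(x), size=2, replace=False)
            if x[i] == x[j]:
                continue  # degenerate sample, cannot define a slope
            a = (y[j] - y[i]) / (x[j] - x[i])
            b = y[i] - a * x[i]
            inliers = np.abs(y - (a * x + b)) < tol
            if inliers.sum() > best_count:
                best_count, best = inliers.sum(), (a, b)
        return best
    ```

    In the co-registration context the "model" is a camera pose hypothesized from a minimal set of 2D-3D correspondences, and the inliers are correspondences consistent with that pose.
    
    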

  9. Development report: Automatic System Test and Calibration (ASTAC) equipment

    NASA Technical Reports Server (NTRS)

    Thoren, R. J.

    1981-01-01

    A microcomputer-based automatic test system was developed for daily performance monitoring of the wind energy system time domain (WEST) analyzer. The test system consists of a microprocessor-based controller and a hybrid interface unit, which are used for inputting prescribed test signals into all WEST subsystems and for monitoring WEST responses to these signals. Performance is compared to theoretically correct performance levels calculated off-line on a large general-purpose digital computer. Results are displayed on a cathode ray tube or are available from a line printer. Excessive drift and/or lack of repeatability in the high-speed analog sections within WEST is easily detected and the malfunctioning hardware identified using this system.

  10. Self-calibrating models for dynamic monitoring and diagnosis

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin

    1994-01-01

    The present goal in qualitative reasoning is to develop methods for automatically building qualitative and semiquantitative models of dynamic systems and to use them for monitoring and fault diagnosis. The qualitative approach to modeling provides a guarantee of coverage while our semiquantitative methods support convergence toward a numerical model as observations are accumulated. We have developed and applied methods for automatic creation of qualitative models, developed two methods for obtaining tractable results on problems that were previously intractable for qualitative simulation, and developed more powerful methods for learning semiquantitative models from observations and deriving semiquantitative predictions from them. With these advances, qualitative reasoning comes significantly closer to realizing its aims as a practical engineering method.

  11. SU-E-T-497: Semi-Automated in Vivo Radiochromic Film Dosimetry Using a Novel Image Processing Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reyhan, M; Yue, N

    Purpose: To validate an automated image processing algorithm designed to detect the center of radiochromic film used for in vivo film dosimetry against the current gold standard of manual selection. Methods: An image processing algorithm was developed to automatically select the region of interest (ROI) in *.tiff images that contain multiple pieces of radiochromic film (0.5 × 1.3 cm²). After a user has linked a calibration file to the processing algorithm and selected a *.tiff file for processing, an ROI is automatically detected for all films by a combination of thresholding and erosion, which removes edges and any additional markings for orientation. Calibration is applied to the mean pixel values from the ROIs, and a *.tiff image is output displaying the original image with an overlay of the ROIs and the measured doses. The algorithm was validated by comparing in vivo dose determined using the current gold standard (manually drawn ROIs) versus automated ROIs for n=420 scanned films. Bland-Altman analysis, a paired t-test, and linear regression were performed to demonstrate agreement between the processes. Results: The measured doses ranged from 0.2-886.6 cGy. Bland-Altman analysis of the two techniques (automatic minus manual) revealed a bias of -0.28 cGy and a 95% confidence interval of (5.5 cGy, -6.1 cGy). These values demonstrate excellent agreement between the two techniques. Paired t-test results showed no statistical differences between the two techniques, p=0.98. Linear regression with a forced zero intercept demonstrated that Automatic = 0.997*Manual, with a Pearson correlation coefficient of 0.999. The minimal differences between the two techniques may be explained by the fact that the hand-drawn ROIs were not identical to the automatically selected ones. The average processing time was 6.7 seconds in MATLAB on an Intel Core 2 Duo processor.
Conclusion: An automated image processing algorithm has been developed and validated, which will help minimize user interaction and processing time for radiochromic film used for in vivo dosimetry.
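    The Bland-Altman quantities reported above (bias and 95% limits of agreement between automatic and manual doses) are simple to compute from paired measurements; a minimal sketch, with names of our own choosing:

    ```python
    import numpy as np

    def bland_altman(a, b):
        """Bias and 95% limits of agreement between two paired measurement
        sets: mean difference +/- 1.96 standard deviations of the differences."""
        diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        bias = diff.mean()
        sd = diff.std(ddof=1)  # sample standard deviation
        return bias, bias - 1.96 * sd, bias + 1.96 * sd
    ```

    The conventional Bland-Altman plot then shows each pair's difference against its mean, with these three values drawn as horizontal lines.
    
    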

  12. Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection.

    PubMed

    Nguyen, Thanh; Bui, Vy; Lam, Van; Raub, Christopher B; Chang, Lin-Ching; Nehmetallah, George

    2017-06-26

    We propose a fully automatic technique to obtain aberration-free quantitative phase imaging in digital holographic microscopy (DHM) based on deep learning. Traditional DHM solves the phase aberration compensation problem by manually detecting the background for quantitative measurement. This is a drawback for real-time implementation and for dynamic processes such as cell migration phenomena. A recent automatic aberration compensation approach using principal component analysis (PCA) in DHM avoids human intervention regardless of the cells' motion. However, it corrects spherical/elliptical aberration only and disregards the higher-order aberrations. Traditional image segmentation techniques can be employed to spatially detect cell locations. Ideally, automatic image segmentation techniques make real-time measurement possible. However, existing automatic unsupervised segmentation techniques perform poorly when applied to DHM phase images because of aberrations and speckle noise. In this paper, we propose a novel method that combines a supervised deep learning technique using a convolutional neural network (CNN) with Zernike polynomial fitting (ZPF). The deep learning CNN is implemented to perform automatic background region detection, which allows ZPF to compute the self-conjugated phase to compensate for most aberrations.
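    The final step described above, fitting polynomials to the detected background region to obtain the phase to subtract, can be sketched as a least-squares surface fit. For brevity this illustration uses plain monomials rather than the Zernike basis of the paper; the function name, the `order` parameter, and the monomial basis are our assumptions:

    ```python
    import numpy as np

    def fit_background_phase(phase, mask, order=2):
        """Fit a 2-D polynomial surface to phase values where `mask` is True
        (the detected background) and evaluate it over the whole field,
        giving an aberration estimate to subtract from the phase image."""
        rows, cols = np.indices(phase.shape)
        # monomial basis x^i * y^j with total degree <= order
        terms = [rows**i * cols**j
                 for i in range(order + 1) for j in range(order + 1 - i)]
        A = np.stack([t[mask].ravel() for t in terms], axis=1).astype(float)
        coef, *_ = np.linalg.lstsq(A, phase[mask].ravel(), rcond=None)
        full = np.stack([t.ravel() for t in terms], axis=1).astype(float)
        return (full @ coef).reshape(phase.shape)
    ```

    In the paper's pipeline, `mask` would come from the CNN background detector, and the basis would be Zernike polynomials evaluated on the pupil.
    
    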

  13. Automatic Exposure Control Device for Digital Mammography

    DTIC Science & Technology

    2001-08-01

    developing innovative approaches for controlling DM exposures. These approaches entail using the digital detector and an artificial neural network to...of interest that determine the exposure parameters for the fully exposed image; and (2) to use an artificial neural network to select exposure

  14. Automatic Exposure Control Device for Digital Mammography

    DTIC Science & Technology

    2004-08-01

    developing innovative approaches for controlling DM exposures. These approaches entail using the digital detector and an artificial neural network to...of interest that determine the exposure parameters for the fully exposed image; and (2) to use an artificial neural network to select exposure

  15. A Comprehensive Model of the Meteoroids Environment Around Mercury

    NASA Astrophysics Data System (ADS)

    Pokorny, P.; Sarantos, M.; Janches, D.

    2018-05-01

    We present a comprehensive dynamical model for the meteoroid environment around Mercury, comprising meteoroids originating from asteroids and from short- and long-period comets. Our model is fully calibrated and provides predictions for different values of Mercury's true anomaly angle (TAA).

  16. Visible-regime polarimetric imager: a fully polarimetric, real-time imaging system.

    PubMed

    Barter, James D; Thompson, Harold R; Richardson, Christine L

    2003-03-20

    A fully polarimetric optical camera system has been constructed to obtain polarimetric information simultaneously from four synchronized charge-coupled device imagers at video frame rates of 60 Hz and a resolution of 640 × 480 pixels. The imagers view the same scene along the same optical axis by means of a four-way beam-splitting prism similar to ones used for multiple-imager, common-aperture color TV cameras. Appropriate polarizing filters in front of each imager provide the polarimetric information. Mueller matrix analysis of the polarimetric response of the prism, analyzing filters, and imagers is applied to the detected intensities in each imager as a function of the applied state of polarization over a wide range of linear and circular polarization combinations to obtain an average polarimetric calibration consistent to approximately 2%. Higher accuracies can be obtained by improvement of the polarimetric modeling of the splitting prism and by implementation of a pixel-by-pixel calibration.

  17. Numerical Differentiation Methods for Computing Error Covariance Matrices in Item Response Theory Modeling: An Evaluation and a New Proposal

    ERIC Educational Resources Information Center

    Tian, Wei; Cai, Li; Thissen, David; Xin, Tao

    2013-01-01

    In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of Supplemented EM algorithm for…

  18. Automation of film densitometry for application in personal monitoring.

    PubMed

    Taheri, M; Movafeghi, A; Rastkhah, N

    2011-03-01

    In this research work, a semi-automatic densitometry system has been developed for large-scale monitoring services using film badge dosemeters. The system consists of a charge-coupled device (CCD)-based scanner that can scan optical densities (ODs) up to 4.2, a computer vision algorithm to improve the quality of the digitised films, and an analyser program to calculate the necessary information, e.g. the mean OD of a region of interest and radiation doses. Two reference films were used for calibration of the system: the Microtek scanner International Color Consortium (ICC) profiler is applied to determine the colour attributes of the scanner accurately, and a reference density step tablet from the Bundesanstalt für Materialforschung und -prüfung (BAM) is used to calibrate the automatic conversion of gray-level values to OD values in the range of 0.2-4.0 OD. The system helps to achieve more objective and reliable results: a set of 20 films can be digitised at once and their relative doses calculated in less than about 4 min, avoiding the disadvantages of manual processing and enhancing the accuracy of dosimetry.
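    The gray-level-to-OD conversion calibrated against the BAM step tablet follows the definition OD = -log10(I/I0). A minimal sketch assuming a linear scanner response is below; the real system uses the ICC profile and reference films for this step, so the function and its linearity assumption are illustrative only:

    ```python
    import numpy as np

    def gray_to_od(gray, gray_max=255.0):
        """Optical density from a scanner gray level, assuming the gray
        level is proportional to transmitted intensity I and that the
        unexposed film transmits I0 (gray_max). Gray levels are clipped
        to at least 1 to avoid log of zero."""
        gray = np.clip(np.asarray(gray, dtype=float), 1.0, gray_max)
        return -np.log10(gray / gray_max)
    ```

    In practice a step-tablet calibration replaces the linear assumption with a measured lookup curve from gray level to OD.
    
    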

  19. Recent advances with quiescent power supply current (IDDQ) testing at Sandia using the HP82000

    NASA Astrophysics Data System (ADS)

    Righter, A. W.; Leong, D. J.; Cox, L. B.

    Last year at the HP82000 Users Group Meeting, Sandia National Laboratories gave a presentation on IDDQ testing. This year, some advances in this testing are presented, including DUT board fixturing, external DC PMU measurement, and automatic IDD-All circuit calibration. Implementation is examined more than theory, with results presented from Sandia tests. After a brief summary of IDDQ theory and testing concepts, we describe how the break (hold state) vector and data formatting present a test vector generation concern for the HP82000. Fixturing of the DUT board for both types of IDDQ measurement is then discussed, along with how the continuity test and test vector generation must be taken into account. Results of a test including continuity, IDD-All, and IDDQ value measurements are shown. Next, measurement of low current using an external PMU is discussed, including noise considerations, implementation, and some test results showing nA-range measurements. A method is presented for automatic calibration of the IDD-All analog comparator circuit using RM BASIC on the HP82000, with implementation and measurement results. Finally, future directions for research in this area are explored.

  20. A Modular Hierarchical Approach to 3D Electron Microscopy Image Segmentation

    PubMed Central

    Liu, Ting; Jones, Cory; Seyedhosseini, Mojtaba; Tasdizen, Tolga

    2014-01-01

    The study of neural circuit reconstruction, i.e., connectomics, is a challenging problem in neuroscience. Automated and semi-automated electron microscopy (EM) image analysis can be tremendously helpful for connectomics research. In this paper, we propose a fully automatic approach for intra-section segmentation and inter-section reconstruction of neurons using EM images. A hierarchical merge tree structure is built to represent multiple region hypotheses and supervised classification techniques are used to evaluate their potentials, based on which we resolve the merge tree with consistency constraints to acquire final intra-section segmentation. Then, we use a supervised learning based linking procedure for the inter-section neuron reconstruction. Also, we develop a semi-automatic method that utilizes the intermediate outputs of our automatic algorithm and achieves intra-segmentation with minimal user intervention. The experimental results show that our automatic method can achieve close-to-human intra-segmentation accuracy and state-of-the-art inter-section reconstruction accuracy. We also show that our semi-automatic method can further improve the intra-segmentation accuracy. PMID:24491638

  1. Automatic intraaortic balloon pump timing using an intrabeat dicrotic notch prediction algorithm.

    PubMed

    Schreuder, Jan J; Castiglioni, Alessandro; Donelli, Andrea; Maisano, Francesco; Jansen, Jos R C; Hanania, Ramzi; Hanlon, Pat; Bovelander, Jan; Alfieri, Ottavio

    2005-03-01

    The efficacy of intraaortic balloon counterpulsation (IABP) during arrhythmic episodes is questionable. A novel algorithm for intrabeat prediction of the dicrotic notch was used for real-time IABP inflation timing control. A windkessel model algorithm was used to calculate real-time aortic flow from aortic pressure, and the dicrotic notch was predicted using a percentage of the calculated peak flow. Automatic inflation timing was set at the intrabeat-predicted dicrotic notch and was combined with automatic IAB deflation. Prophylactic IABP was applied in 27 patients with low ejection fraction (< 35%) undergoing cardiac surgery. Analysis of IABP at a 1:4 ratio revealed that IAB inflation occurred at a mean of 0.6 +/- 5 ms from the dicrotic notch. Accurate automatic timing at a 1:1 assist ratio was achieved in all patients. Seventeen patients had episodes of severe arrhythmia; the novel IABP inflation algorithm accurately assisted 318 of 320 arrhythmic beats at a 1:1 ratio. The novel real-time intrabeat IABP inflation timing algorithm performed accurately in all patients during both regular rhythms and severe arrhythmia, allowing fully automatic intrabeat IABP timing.
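The timing idea summarized above, estimating aortic flow from pressure with a windkessel model and triggering at a percentage of peak flow, can be sketched as follows. This is an illustrative reconstruction, not the authors' published algorithm: the two-element windkessel form (Q = C·dP/dt + P/R), the parameter values, the 30% threshold, and all names are assumptions.

```python
def predict_notch(pressure, dt, C=1.2, R=1.0, threshold=0.3):
    """Return the sample index where windkessel-estimated flow first
    falls below `threshold` * peak flow after the intrabeat peak.
    Illustrative only; parameters C, R, threshold are assumed values."""
    # Two-element windkessel flow estimate from the pressure waveform
    flow = []
    for i in range(1, len(pressure)):
        dpdt = (pressure[i] - pressure[i - 1]) / dt
        flow.append(C * dpdt + pressure[i] / R)
    peak = max(flow)
    peak_i = flow.index(peak)
    # Scan forward from the peak for the threshold crossing
    for i in range(peak_i, len(flow)):
        if flow[i] < threshold * peak:
            return i + 1   # +1 because flow[] starts at pressure[1]
    return None
```

Because the notch is predicted from the rising part of the beat, inflation can be commanded within the same beat, which is what makes the method robust during arrhythmia.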

  2. Fully automatic hp-adaptivity for acoustic and electromagnetic scattering in three dimensions

    NASA Astrophysics Data System (ADS)

    Kurtz, Jason Patrick

    We present an algorithm for fully automatic hp-adaptivity for finite element approximations of elliptic and Maxwell boundary value problems in three dimensions. The algorithm automatically generates a sequence of coarse grids, and a corresponding sequence of fine grids, such that the energy norm of the error decreases exponentially with respect to the number of degrees of freedom in either sequence. At each step, we employ a discrete optimization algorithm to determine the refinements for the current coarse grid such that the projection-based interpolation error for the current fine grid solution decreases with an optimal rate with respect to the number of degrees of freedom added by the refinement. The refinements are restricted only by the requirement that the resulting mesh is at most 1-irregular, but they may be anisotropic in both element size h and order of approximation p. While we cannot prove that our method converges at all, we present numerical evidence of exponential convergence for a diverse suite of model problems from acoustic and electromagnetic scattering. In particular we show that our method is well suited to the automatic resolution of exterior problems truncated by the introduction of a perfectly matched layer. To enable and accelerate the solution of these problems on commodity hardware, we include a detailed account of three critical aspects of our implementation, namely an efficient implementation of sum factorization, several efficient interfaces to the direct multi-frontal solver MUMPS, and some fast direct solvers for the computation of a sequence of nested projections.

  3. Method and apparatus for reading meters from a video image

    DOEpatents

    Lewis, Trevor J.; Ferguson, Jeffrey J.

    1997-01-01

    A method and system to enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower.
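The calibration step described in this abstract amounts to mapping each meter indicator's position in the digitized image to a physical value. A minimal sketch for a linear analog dial, where the needle angle is assumed to have been extracted from the image already; the angle range, units, and function names are hypothetical:

```python
def calibrate(angle_min, angle_max, value_min, value_max):
    """Return a reading function for a linear dial, built from two
    calibration points (needle angle in degrees -> meter value)."""
    span = (value_max - value_min) / (angle_max - angle_min)
    return lambda angle: value_min + (angle - angle_min) * span

# Hypothetical gauge: needle sweeps -45 deg..225 deg over 0..100 psi
read_psi = calibrate(-45.0, 225.0, 0.0, 100.0)
```

Each calibrated image region gets its own such mapping, so several meters can be read from one video frame.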

  4. An airborne sunphotometer for use with helicopters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walthall, C.L.; Halthore, R.N.; Elman, G.C.

    1996-04-01

    One solution for atmospheric correction and calibration of remotely sensed data from airborne platforms is the use of radiometrically calibrated instruments, sunphotometers and an atmospheric radiative transfer model. Sunphotometers are used to measure the direct solar irradiance at the level at which they are operating and the data are used in the computation of atmospheric optical depth. Atmospheric optical depth is an input to atmospheric correction algorithms that convert at-sensor radiance to required surface properties such as reflectance and temperature. Airborne sun photometry has thus far seen limited use and has not been used with a helicopter platform. The hardware, software, calibration and deployment of an automatic sun-tracking sunphotometer specifically designed for use on a helicopter are described. Sample data sets taken with the system during the 1994 Boreal Ecosystem and Atmosphere Study (BOREAS) are presented. The addition of the sun photometer to the helicopter system adds another tool for monitoring the environment and makes the helicopter remote sensing system capable of collecting calibrated, atmospherically corrected data independent of the need for measurements from other systems.

  5. Results of the 1973 NASA/JPL balloon flight solar cell calibration program

    NASA Technical Reports Server (NTRS)

    Yasui, R. K.; Greenwood, R. F.

    1975-01-01

    High altitude balloon flights carried 37 standard solar cells for calibration above 99.5 percent of the earth's atmosphere. The cells were assembled into standard modules with appropriate resistors to load each cell at short circuit current. Each standardized module was mounted at the apex of the balloon on a sun tracker which automatically maintained normal incidence to the sun within 1.0 deg. The balloons were launched to reach a float altitude of approximately 36.6 km two hours before solar noon and remain at float altitude for two hours beyond solar noon. Telemetered calibration data on each standard solar cell was collected and recorded on magnetic tape. At the end of each float period the solar cell payload was separated from the balloon by radio command and descended via parachute to a ground recovery crew. Standard solar cells calibrated and recovered in this manner are used as primary intensity reference standards in solar simulators and in terrestrial sunlight for evaluating the performance of other solar cells and solar arrays with similar spectral response characteristics.

  6. Concerning the Video Drift Method to Measure Double Stars

    NASA Astrophysics Data System (ADS)

    Nugent, Richard L.; Iverson, Ernest W.

    2015-05-01

    Classical methods to measure position angles and separations of double stars rely on just a few measurements, either from visual observations or photographic means. Visual and photographic CCD observations are subject to errors from the following sources: misalignments of the eyepiece/camera/Barlow lens/micrometer/focal reducers, systematic errors from uncorrected optical distortions, aberrations of the telescope system, camera tilt, and magnitude and color effects. Conventional video methods rely on calibration doubles and graphically calculating the east-west direction, plus careful choice of select video frames stacked for measurement. Atmospheric motion, on the order of 0.5-1.5 arc seconds, is one of the larger sources of error in any exposure/measurement method. Ideally, if a data set from a short video can be used to derive position angle and separation, with each data set self-calibrating independently of any calibration doubles or star catalogues, this would provide measurements of high systematic accuracy. These aims are achieved by the video drift method first proposed by the authors in 2011. This self-calibrating video method automatically analyzes thousands of measurements from a short video clip.
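The self-calibration in the drift method comes from a known physical rate: with the telescope drive stopped, a star at declination δ trails across the sensor at 15.0411·cos δ arcseconds of right ascension per second of time, and the trail direction fixes the east-west axis on the chip. A sketch of how that yields the plate scale (function and variable names are illustrative, not the authors' software):

```python
import math

# Sidereal drift rate at the celestial equator, in arcsec of RA
# per second of time (360 * 3600 / 86164.1 s sidereal day)
SIDEREAL_RATE = 15.0411

def plate_scale(dec_deg, drift_px_per_s):
    """Arcsec per pixel, from the measured drift speed (pixels/s)
    of a star trailing across the sensor with the drive off."""
    return SIDEREAL_RATE * math.cos(math.radians(dec_deg)) / drift_px_per_s
```

With the scale and orientation known from the drift itself, each video frame's pixel separations convert directly to arcseconds without reference to calibration doubles.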

  7. Automatic Abstraction in Planning

    NASA Technical Reports Server (NTRS)

    Christensen, J.

    1991-01-01

    Traditionally, abstraction in planning has been accomplished by either state abstraction or operator abstraction, neither of which has been fully automatic. We present a new method, predicate relaxation, for automatically performing state abstraction. PABLO, a nonlinear hierarchical planner, implements predicate relaxation. Theoretical, as well as empirical results are presented which demonstrate the potential advantages of using predicate relaxation in planning. We also present a new definition of hierarchical operators that allows us to guarantee a limited form of completeness. This new definition is shown to be, in some ways, more flexible than previous definitions of hierarchical operators. Finally, a Classical Truth Criterion is presented that is proven to be sound and complete for a planning formalism that is general enough to include most classical planning formalisms that are based on the STRIPS assumption.

  8. A fast and automatic mosaic method for high-resolution satellite images

    NASA Astrophysics Data System (ADS)

    Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing

    2015-12-01

    We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapped rectangle is computed from the geographical locations of the reference and mosaic images, and feature points are extracted from the overlapped region of both images by the scale-invariant feature transform (SIFT) algorithm. Then, the RANSAC method is used to match the feature points of both images. Finally, the two images are fused into a seamless panoramic image by simple linear weighted fusion or another method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested on WorldView-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method detects feature points efficiently and mosaics images automatically.
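The RANSAC step in the pipeline above can be illustrated with a pure-translation model on hypothetical point matches; the SIFT extraction and the OpenCV/GDAL machinery the paper actually uses are omitted, and a full mosaic would fit a homography rather than a shift:

```python
import random

def ransac_translation(matches, tol=2.0, iters=200, seed=0):
    """Estimate a 2-D translation from noisy point matches
    [((x1, y1), (x2, y2)), ...], keeping the hypothesis that
    gathers the most inliers within `tol` pixels."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        # Hypothesize a shift from one randomly chosen match
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        # Count matches consistent with that shift
        inliers = sum(
            1 for (a, b), (c, d) in matches
            if abs(c - a - dx) <= tol and abs(d - b - dy) <= tol
        )
        if inliers > best_inliers:
            best, best_inliers = (dx, dy), inliers
    return best, best_inliers
```

The consensus count is what makes the matching robust: a few wrong SIFT correspondences cannot outvote the correct geometric model.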

  9. Solar-Powered Water Distillation

    NASA Technical Reports Server (NTRS)

    Menninger, F. J.; Elder, R. J.

    1985-01-01

    Solar-powered still produces pure water at rate of 6,000 gallons per year. Still fully automatic and gravity-fed. Only outside electric power is timer clock and solenoid-operated valve. Still saves $5,000 yearly in energy costs and pays for itself in 3 1/2 years.

  10. AutoBayes Program Synthesis System Users Manual

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Jafari, Hamed; Pressburger, Tom; Denney, Ewen; Buntine, Wray; Fischer, Bernd

    2008-01-01

    Program synthesis is the systematic, automatic construction of efficient executable code from high-level declarative specifications. AutoBayes is a fully automatic program synthesis system for the statistical data analysis domain; in particular, it solves parameter estimation problems. It has seen many successful applications at NASA and is currently being used, for example, to analyze simulation results for Orion. The input to AutoBayes is a concise description of a data analysis problem composed of a parameterized statistical model and a goal that is a probability term involving parameters and input data. The output is optimized and fully documented C/C++ code computing the values for those parameters that maximize the probability term. AutoBayes can solve many subproblems symbolically rather than having to rely on numeric approximation algorithms, thus yielding effective, efficient, and compact code. Statistical analysis is faster and more reliable, because effort can be focused on model development and validation rather than manual development of solution algorithms and code.

  11. Calibration of two passive air samplers for monitoring phthalates and brominated flame-retardants in indoor air.

    PubMed

    Saini, Amandeep; Okeme, Joseph O; Goosey, Emma; Diamond, Miriam L

    2015-10-01

    Two passive air samplers (PAS), polyurethane foam (PUF) disks and Sorbent Impregnated PUF (SIP) disks, were characterized for uptake of phthalates and brominated flame-retardants (BFRs) indoors using fully and partially sheltered housings. Based on calibration against an active low-volume air sampler for gas- and particle-phase compounds, we recommend generic sampling rates of 3.5±0.9 and 1.0±0.4 m(3)/day for partially and fully sheltered housing, respectively, which apply to gas-phase phthalates and BFRs as well as particle-phase DEHP (the latter for the partially sheltered PAS). For phthalates, partially sheltered SIPs are recommended. Further, we recommend the use of partially sheltered PAS indoors and a deployment period of one month. The sampling rate for the partially sheltered PUF and SIP of 3.5±0.9 m(3)/day is indistinguishable from that reported for fully sheltered PAS deployed outdoors, indicating the role of the housing outdoors in minimizing the effect of variable wind velocities on chemical uptake, versus the partially sheltered PAS deployed indoors in maximizing chemical uptake where air flow rates are low. Copyright © 2015. Published by Elsevier Ltd.
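Using a generic sampling rate like the recommended 3.5 m3/day is simple arithmetic: the time-averaged air concentration is the chemical mass accumulated on the disk divided by the product of sampling rate and deployment time. A sketch with illustrative numbers (the function name and example masses are not from the paper):

```python
def air_concentration(mass_ng, rate_m3_per_day, days):
    """Time-weighted average air concentration (ng/m3) from the
    mass accumulated on a passive sampler, its generic sampling
    rate, and the deployment time in days."""
    return mass_ng / (rate_m3_per_day * days)

# e.g. 105 ng of a phthalate on a partially sheltered disk
# deployed for the recommended one-month period
c = air_concentration(105.0, 3.5, 30)
```

The ±0.9 uncertainty on the sampling rate propagates directly into the concentration, which is why the paper stresses matching the rate to the housing configuration actually deployed.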

  12. Improved method to fully compensate the spatial phase nonuniformity of LCoS devices with a Fizeau interferometer.

    PubMed

    Lu, Qiang; Sheng, Lei; Zeng, Fei; Gao, Shijie; Qiao, Yanfeng

    2016-10-01

    Liquid crystal on silicon (LCoS) devices usually show spatial phase nonuniformity (SPNU) in phase-modulation applications, comprising phase retardance nonuniformity (PRNU) as a function of the applied voltage and inherent wavefront distortion (WFD) introduced by the device itself. We propose a multipoint calibration method utilizing a Fizeau interferometer to compensate the SPNU of the device. Calibration of the PRNU is realized by defining a grid of 3×6 cells over the aperture and then calculating the phase retardance of each cell versus a gradient gray pattern. By applying an adjusted gray pattern calculated from the calibrated multipoint phase-retardance function, the inherent WFD is compensated. The peak-to-valley (PV) value of the residual WFD compensated by the multipoint calibration method is significantly reduced from 2.5λ to 0.140λ, while the PV value of the residual WFD after global calibration is reduced only to 0.364λ. Experimental results of generated finite-energy 2D Airy beams in Fourier space demonstrate the effectiveness of this multipoint calibration method.
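The multipoint compensation reduces to a per-cell inverse lookup: for each of the 3×6 cells, choose the gray level whose calibrated phase retardance best cancels that cell's measured wavefront error. A simplified sketch; the data structures and names are assumptions, not the authors' code, and a real device would also interpolate between cells:

```python
def build_compensation(cell_curves, wfd):
    """Pick, for each cell, the gray level whose calibrated phase
    retardance best cancels that cell's wavefront error.
    cell_curves[i]: dict mapping gray level -> phase (waves);
    wfd[i]: measured wavefront distortion of cell i (waves)."""
    pattern = []
    for curve, err in zip(cell_curves, wfd):
        # Gray level minimizing the residual |phase(g) + error|
        gray = min(curve, key=lambda g: abs(curve[g] + err))
        pattern.append(gray)
    return pattern
```

Displaying this adjusted gray pattern on the panel pre-distorts the phase so the inherent WFD largely cancels, which is what drives the residual PV from 2.5λ down toward 0.140λ in the paper.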

  13. Calibration of areal surface topography measuring instruments

    NASA Astrophysics Data System (ADS)

    Seewig, J.; Eifler, M.

    2017-06-01

    The ISO standards related to the calibration of areal surface topography measuring instruments are the ISO 25178-6xx series, which defines the relevant metrological characteristics for the calibration of different measuring principles, and the ISO 25178-7xx series, which defines the actual calibration procedures. As the field of areal measurement is not yet fully standardized, there are still open questions to be addressed which are subject to current research. Selected research results of the authors in this area are presented. This includes the design and fabrication of areal material measures, with two examples: the direct laser writing of a stepless material measure for the calibration of the height axis, which is based on the Abbott curve, and the manufacturing of a Siemens star for the determination of the lateral resolution limit. Building on these results, a new definition of the resolution criterion, the small-scale fidelity, which is still under discussion, is also presented. Additionally, a software solution for automated calibration procedures is outlined.

  14. Absolute calibration of optical streak cameras on picosecond time scales using supercontinuum generation

    DOE PAGES

    Patankar, S.; Gumbrell, E. T.; Robinson, T. S.; ...

    2017-08-17

    Here we report a new method using high-stability, laser-driven supercontinuum generation in a liquid cell to calibrate the absolute photon response of fast optical streak cameras as a function of wavelength when operating at the fastest sweep speeds. A stable, pulsed white-light source based on self-phase modulation in a salt solution was developed to provide the required brightness on picosecond timescales, enabling streak camera calibration in fully dynamic operation. The measured spectral brightness allowed absolute photon response calibration over a broad spectral range (425-650 nm). Calibrations performed with two Axis Photonique streak cameras using the Photonis P820PSU streak tube demonstrated responses which qualitatively follow the photocathode response. Peak sensitivities were 1 photon/count above background. The absolute dynamic sensitivity is less than the static by up to an order of magnitude; we attribute this to the dynamic response of the phosphor being lower.

  15. Calibration of the γ-Equation Transition Model for High Reynolds Flows at Low Mach

    NASA Astrophysics Data System (ADS)

    Colonia, S.; Leble, V.; Steijl, R.; Barakos, G.

    2016-09-01

    The numerical simulation of flows over large-scale wind turbine blades without considering the transition from laminar to fully turbulent flow may result in incorrect estimates of the blade loads and performance. Thanks to its relative simplicity and promising results, the Local-Correlation based Transition Modelling concept represents a valid way to include transitional effects into practical CFD simulations. However, the model involves coefficients that need tuning. In this paper, the γ-equation transition model is assessed and calibrated, for a wide range of Reynolds numbers at low Mach, as needed for wind turbine applications. An aerofoil is used to evaluate the original model and calibrate it; while a large scale wind turbine blade is employed to show that the calibrated model can lead to reliable solutions for complex three-dimensional flows. The calibrated model shows promising results for both two-dimensional and three-dimensional flows, even if cross-flow instabilities are neglected.

  16. Corsica: A Multi-Mission Absolute Calibration Site

    NASA Astrophysics Data System (ADS)

    Bonnefond, P.; Exertier, P.; Laurain, O.; Guinle, T.; Femenias, P.

    2013-09-01

    In collaboration with the CNES and NASA oceanographic projects (TOPEX/Poseidon and Jason), the OCA (Observatoire de la Côte d'Azur) has developed a verification site in Corsica since 1996, operational since 1998. CALibration/VALidation embraces a wide variety of activities, ranging from the interpretation of information from internal-calibration modes of the sensors to validation of the fully corrected estimates of the reflector heights using in situ data. Corsica is now, like the Harvest platform (NASA side) [14], an operating calibration site able to support continuous monitoring with a high level of accuracy: a 'point calibration' which yields instantaneous bias estimates with a 10-day repeatability of 30 mm (standard deviation) and mean errors of 4 mm (standard error). For a 35-day repeatability (ERS, Envisat), due to a smaller time series, the standard error is about double (~7 mm). In this paper, we present updated results of the absolute Sea Surface Height (SSH) biases for TOPEX/Poseidon (T/P), Jason-1, Jason-2, ERS-2 and Envisat.

  17. Calibration of polyurethane foam (PUF) disk passive air samplers for quantitative measurement of polychlorinated biphenyls (PCBs) and polybrominated diphenyl ethers (PBDEs): factors influencing sampling rates.

    PubMed

    Hazrati, Sadegh; Harrad, Stuart

    2007-03-01

    PUF disk passive air samplers are increasingly employed for monitoring of POPs in ambient air. In order to utilize them as quantitative sampling devices, a calibration experiment was conducted. Time integrated indoor air concentrations of PCBs and PBDEs were obtained from a low volume air sampler operated over a 50 d period alongside the PUF disk samplers in the same office microenvironment. Passive sampling rates for the fully-sheltered sampler design employed in our research were determined for the 51 PCB and 7 PBDE congeners detected in all calibration samples. These values varied from 0.57 to 1.55 m3 d(-1) for individual PCBs and from 1.1 to 1.9 m3 d(-1) for PBDEs. These values are appreciably lower than those reported elsewhere for different PUF disk sampler designs (e.g. partially sheltered) employed under different conditions (e.g. in outdoor air), and derived using different calibration experiment configurations. This suggests that sampling rates derived for a specific sampler configuration deployed under specific environmental conditions, should not be extrapolated to different sampler configurations. Furthermore, our observation of variable congener-specific sampling rates (consistent with other studies), implies that more research is required in order to understand fully the factors that influence sampling rates. Analysis of wipe samples taken from the inside of the sampler housing, revealed evidence that the housing surface scavenges particle bound PBDEs.

  18. Visual vs Fully Automatic Histogram-Based Assessment of Idiopathic Pulmonary Fibrosis (IPF) Progression Using Sequential Multidetector Computed Tomography (MDCT)

    PubMed Central

    Colombi, Davide; Dinkel, Julien; Weinheimer, Oliver; Obermayer, Berenike; Buzan, Teodora; Nabers, Diana; Bauer, Claudia; Oltmanns, Ute; Palmowski, Karin; Herth, Felix; Kauczor, Hans Ulrich; Sverzellati, Nicola

    2015-01-01

    Objectives To describe changes over time in the extent of idiopathic pulmonary fibrosis (IPF) at multidetector computed tomography (MDCT) assessed by semi-quantitative visual scores (VSs) and fully automatic histogram-based quantitative evaluation, and to test the relationship between these two methods of quantification. Methods Forty IPF patients (median age: 70 y, interquartile: 62-75 years; M:F, 33:7) who underwent 2 MDCT at different time points with a median interval of 13 months (interquartile: 10-17 months) were retrospectively evaluated. In-house software YACTA automatically quantified the lung density histogram (10th-90th percentile in 5th percentile steps). Longitudinal changes in VSs and in the percentiles of the attenuation histogram were obtained in 20 untreated patients and 20 patients treated with pirfenidone. Pearson correlation analysis was used to test the relationship between VSs and selected percentiles. Results In follow-up MDCT, the visual overall extent of parenchymal abnormalities (OE) increased by a median of 5%/year (interquartile: 0%/year to +11%/year). A substantial difference was found between treated and untreated patients in the HU changes of the 40th and 80th percentiles of the density histogram. Correlation analysis between VSs and selected percentiles showed higher correlation between the changes (Δ) in OE and Δ 40th percentile (r=0.69; p<0.001) as compared to Δ 80th percentile (r=0.58; p<0.001); closer correlation was found between Δ ground-glass extent and Δ 40th percentile (r=0.66, p<0.001) as compared to Δ 80th percentile (r=0.47, p=0.002), while Δ reticulations correlated better with Δ 80th percentile (r=0.56, p<0.001) than with Δ 40th percentile (r=0.43, p=0.003). 
Conclusions There is a relevant and fully automatically measurable difference at MDCT in VSs and in histogram analysis at one year follow-up of IPF patients, whether treated or untreated: Δ 40th percentile might reflect the change in overall extent of lung abnormalities, notably of ground-glass pattern; furthermore Δ 80th percentile might reveal the course of reticular opacities. PMID:26110421
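The quantitative side of this comparison needs only two ingredients: percentiles of the lung-density histogram and the Pearson correlation between their longitudinal changes and the visual scores. Both fit in a few lines; this is a generic sketch, not the YACTA implementation:

```python
def percentile(values, p):
    """p-th percentile (0-100) by linear interpolation of the
    sorted values, as used for histogram landmarks like the 40th
    and 80th percentiles."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def pearson(xs, ys):
    """Pearson correlation coefficient between two paired series,
    e.g. per-patient changes in a visual score and in a percentile."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)
```

Computed per scan and differenced between time points, these give exactly the Δ-percentile versus Δ-score pairs whose r values the abstract reports.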

  19. Empirical transfer functions for stations in the Central California seismological network

    USGS Publications Warehouse

    Bakun, W.H.; Dratler, Jay

    1976-01-01

    A sequence of calibration signals composed of a station identification code, a transient from the release of the seismometer mass at rest from a known displacement from the equilibrium position, and a transient from a known step in voltage to the amplifier input is generated by the automatic daily calibration system (ADCS) now operational in the U.S. Geological Survey central California seismographic network. Documentation of a sequence of interactive programs to compute, from the calibration data, the complex transfer functions for the seismographic system (ground motion through digitizer), the electronics (amplifier through digitizer), and the seismometer alone is presented. The analysis utilizes the Fourier transform technique originally suggested by Espinosa et al. (1962). Section I is a general description of seismographic calibration. Section II contrasts the 'Fourier transform' and 'least-squares' techniques for analyzing transient calibration signals. Theoretical considerations for the Fourier transform technique used here are described in Section III. Section IV is a detailed description of the sequence of calibration signals generated by the ADCS. Section V is a brief 'cookbook' description of the calibration programs; Section VI contains a detailed sample program execution. Section VII suggests uses for the resultant empirical transfer functions; supplemental interactive programs that compute smooth response functions, suitable for reducing seismic data to ground motion, are also documented there. Appendices A and B contain complete listings of the Fortran source codes, while Appendix C is an update containing preliminary results obtained from an analysis of some of the calibration signals from stations in the seismographic network near Oroville, California.
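The core of the Fourier transform technique is forming the empirical transfer function as a bin-by-bin ratio of the output spectrum to the input spectrum of a calibration transient. A minimal sketch with a direct DFT; this is illustrative only (the USGS programs are Fortran, and a practical version would window the transients and guard against small denominators more carefully):

```python
import cmath

def dft(x):
    """Direct discrete Fourier transform of a real sequence."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                for t in range(n))
            for f in range(n)]

def transfer_function(input_sig, output_sig):
    """Empirical H(f) = DFT(output) / DFT(input), bin by bin,
    skipping bins where the input spectrum is essentially zero."""
    X, Y = dft(input_sig), dft(output_sig)
    return [y / x if abs(x) > 1e-12 else 0j for x, y in zip(X, Y)]
```

Applied to the known voltage step and the recorded response, the ratio isolates the electronics; applied to the mass-release transient, it yields the full seismometer-through-digitizer response.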

  20. Network operability of ground-based microwave radiometers: Calibration and standardization efforts

    NASA Astrophysics Data System (ADS)

    Pospichal, Bernhard; Löhnert, Ulrich; Küchler, Nils; Czekala, Harald

    2017-04-01

    Ground-based microwave radiometers (MWR) are already widely used by national weather services and research institutions around the world. Most of the instruments operate continuously, and their data are beginning to be assimilated into atmospheric models. In particular, their potential for continuously observing boundary-layer temperature profiles as well as integrated water vapor and cloud liquid water path makes them valuable for improving short-term weather forecasts. Until now, however, most MWR have been operated as stand-alone instruments. In order to benefit from a network of these instruments, standardization of calibration, operation, and data format is necessary. Within the framework of TOPROF (COST Action ES1303), several efforts have been undertaken, such as uncertainty and bias assessment and calibration intercomparison campaigns. The goal was to establish protocols for providing quality-controlled (QC) MWR data and their uncertainties. To this end, standardized calibration procedures for MWR have been developed and recommendations for radiometer users compiled. Based on the results of the TOPROF campaigns, a new, high-accuracy liquid-nitrogen calibration load has been introduced for MWR manufactured by Radiometer Physics GmbH (RPG). The new load improves the accuracy of the measurements considerably and will lead to even more reliable atmospheric observations. In addition to the recommendations for set-up, calibration, and operation of ground-based MWR within a future network, we will present homogenized methods to determine the accuracy of a running calibration, as well as means for automatic data quality control. This sets the stage for the planned microwave calibration center at JOYCE (Jülich Observatory for Cloud Evolution), which will be briefly introduced.

  1. Development of a Machine-Vision System for Recording of Force Calibration Data

    NASA Astrophysics Data System (ADS)

    Heamawatanachai, Sumet; Chaemthet, Kittipong; Changpan, Tawat

    This paper presents the development of a new system for recording force calibration data using machine vision technology. A real-time camera and computer system were used to capture images of the readings from the instruments during calibration. The measurement images were then transformed and translated into numerical data using optical character recognition (OCR). These numerical data, along with the raw images, were automatically saved as the calibration database files. With this new system, human recording errors are eliminated. Verification experiments were performed by using the system to record the measurement results from an amplifier (DMP 40) with a load cell (HBM-Z30-10kN). The NIMT's 100-kN deadweight force standard machine (DWM-100kN) was used to generate the test forces. The experiments were done in three categories: 1) dynamic condition (recording during load changes), 2) static condition (recording at a fixed load), and 3) full calibration in accordance with ISO 376:2011. The images captured in the dynamic-condition experiment gave >94% of frames without overlapping digits; in the static-condition experiment, >98% of images were free of overlap. All measurement images without overlapping digits were translated into numbers by the developed program with 100% accuracy, and the full calibration experiments also gave 100% accurate results. Moreover, if any result is translated incorrectly, it is possible to trace back to the raw calibration image to check and correct it. This machine-vision-based system and program should therefore be appropriate for recording force calibration data.

  2. Calibration of the NASA Glenn 8- by 6-Foot Supersonic Wind Tunnel (1996 and 1997 Tests)

    NASA Technical Reports Server (NTRS)

    Arrington, E. Allen

    2012-01-01

    There were several physical and operational changes made to the NASA Glenn Research Center 8- by 6-Foot Supersonic Wind Tunnel during the period of 1992 through 1996. Following each of these changes, a facility calibration was conducted to provide the required information to support the research test programs. Due to several factors (facility research test schedule, facility downtime and continued facility upgrades), a full test section calibration was not conducted until 1996. This calibration test incorporated all test section configurations and covered the existing operating range of the facility. However, near the end of that test entry, two of the vortex generators mounted on the compressor exit tailcone failed causing minor damage to the honeycomb flow straightener. The vortex generators were removed from the facility and calibration testing was terminated. A follow-up test entry was conducted in 1997 in order to fully calibrate the facility without the effects of the vortex generators and to provide a complete calibration of the newly expanded low speed operating range. During the 1997 tunnel entry, all planned test points required for a complete test section calibration were obtained. This data set included detailed in-plane and axial flow field distributions for use in quantifying the test section flow quality.

  3. Aerial applications dispersal systems control requirements study. [agriculture

    NASA Technical Reports Server (NTRS)

    Bauchspies, J. S.; Cleary, W. L.; Rogers, W. F.; Simpson, W.; Sanders, G. S.

    1980-01-01

    Performance deficiencies in aerial liquid and dry dispersal systems are identified. Five control system concepts are explored: (1) end of field on/off control; (2) manual control of particle size and application rate from the aircraft; (3) manual control of deposit rate on the field; (4) automatic alarm and shut-off control; and (5) fully automatic control. Operational aspects of the concepts and specifications for improved control configurations are discussed in detail. A research plan to provide the technology needed to develop the proposed improvements is presented along with a flight program to verify the benefits achieved.

  4. Automatic inference of multicellular regulatory networks using informative priors.

    PubMed

    Sun, Xiaoyun; Hong, Pengyu

    2009-01-01

    To fully understand the mechanisms governing animal development, computational models and algorithms are needed to enable quantitative studies of the underlying regulatory networks. We developed a mathematical model based on dynamic Bayesian networks to model multicellular regulatory networks that govern cell differentiation processes. A machine-learning method was developed to automatically infer such a model from heterogeneous data. We show that the model inference procedure can be greatly improved by incorporating interaction data across species. The proposed approach was applied to C. elegans vulval induction to reconstruct a model capable of simulating C. elegans vulval induction under 73 different genetic conditions.
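
    The dynamic-Bayesian-network idea above can be sketched as a discrete one-step update in which each cell's genes respond to weighted intra-cell parents and to a signal from a neighbouring cell. This is a toy illustration, not the paper's inference model: the gene names (lin-3, let-23, let-60, from C. elegans vulval induction), the weights, and the ring topology are all assumptions made for the example.

```python
# Toy sketch of one time-step in a discrete multicellular regulatory network:
# each cell's next gene state depends on its own genes ("intra" parents) and
# on a signal from a neighbouring cell ("inter" parents).

def dbn_step(cells, weights, threshold=0.5):
    """Advance all cells one time step; `cells` is a list of dicts gene -> 0/1."""
    nxt = []
    for i, cell in enumerate(cells):
        neighbour = cells[(i + 1) % len(cells)]  # toy 1-D ring of cells
        new_state = {}
        for gene, parents in weights.items():
            # Weighted sum over intra-cell parents plus inter-cell signals.
            s = sum(w * cell.get(p, 0) for p, w in parents.get("intra", {}).items())
            s += sum(w * neighbour.get(p, 0) for p, w in parents.get("inter", {}).items())
            new_state[gene] = 1 if s >= threshold else 0
        nxt.append(new_state)
    return nxt

# Illustrative wiring: a self-sustaining ligand (lin-3) activates the
# neighbour's receptor (let-23), which activates that cell's let-60.
weights = {
    "let-60": {"intra": {"let-23": 1.0}},
    "let-23": {"inter": {"lin-3": 1.0}},
    "lin-3":  {"intra": {"lin-3": 1.0}},
}
cells = [{"lin-3": 1, "let-23": 0, "let-60": 0},
         {"lin-3": 0, "let-23": 0, "let-60": 0}]
```

    Running two steps propagates the signal from the first cell into activation of the second cell's downstream gene, the kind of cascade such models are built to simulate.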

  5. The design of digital-adaptive controllers for VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Stengel, R. F.; Broussard, J. R.; Berry, P. W.

    1976-01-01

    Design procedures for VTOL automatic control systems have been developed and are presented. Using linear-optimal estimation and control techniques as a starting point, digital-adaptive control laws have been designed for the VALT Research Aircraft, a tandem-rotor helicopter which is equipped for fully automatic flight in terminal area operations. These control laws are designed to interface with velocity-command and attitude-command guidance logic, which could be used in short-haul VTOL operations. Developments reported here include new algorithms for designing non-zero-set-point digital regulators, design procedures for rate-limited systems, and algorithms for dynamic control trim setting.
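
    A non-zero-set-point digital regulator of the kind mentioned above can be illustrated for a scalar plant: the control is a trim input that holds the set point, plus state feedback on the deviation from it. This is a textbook-style sketch, not the VALT design; the plant coefficients and gain below are invented for the example.

```python
# Minimal non-zero-set-point digital regulator for a scalar discrete plant
# x[k+1] = a*x[k] + b*u[k], with control law u = u_star - K*(x - x_star).

def regulate(a, b, k_gain, x0, x_star, steps):
    """Drive x toward the non-zero set point x_star."""
    u_star = (1.0 - a) * x_star / b      # trim input that holds x at x_star
    x = x0
    for _ in range(steps):
        u = u_star - k_gain * (x - x_star)
        x = a * x + b * u                # closed-loop pole is (a - b*k_gain)
    return x
```

    With a = 0.9, b = 0.5 and K = 1.0 the closed-loop pole is 0.4, so the deviation from the set point decays geometrically.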

  6. The CCD/Transit Instrument (CTI) data-analysis system

    NASA Technical Reports Server (NTRS)

    Cawson, M. G. M.; Mcgraw, J. T.; Keane, M. J.

    1995-01-01

    The automated software system for archiving, analyzing, and interrogating data from the CCD/Transit Instrument (CTI) is described. The CTI collects up to 450 Mbytes of image data each clear night in the form of a narrow strip of sky observed in two colors. The large data volumes and the scientific aims of the project make it imperative that the data are analyzed within the 24-hour period following the observations. To this end a fully automatic and self-evaluating software system has been developed. The data are collected from the telescope in real time and then transported to Tucson for analysis. Verification is performed by visual inspection of random subsets of the data, and obvious cosmic rays are detected and removed before permanent archival to optical disc. The analysis phase is performed by a pair of linked algorithms, one operating on the absolute pixel values and the other on the spatial derivative of the data. In this way both isolated and merged images are reliably detected in a single pass. To isolate the latter algorithm from the effects of noise spikes, a 3x3 Hanning filter is applied to the raw data before the analysis is run. The algorithms reduce the input pixel data to a database of measured parameters for each image that has been found. A contrast filter is applied to assign a detection probability to each image, and then x-y calibration and intensity calibration are performed using known reference stars in the strip. These are supplemented as necessary by secondary standards boot-strapped from the CTI data itself. The final stages involve merging the new data into the CTI Master-list and History-list and the automatic comparison of each new detection with a set of pre-defined templates in parameter space to find interesting objects such as supernovae, quasars and variable stars. Each stage of the processing, from verification to interesting-image selection, is performed under a data-logging system which both controls the pipe-lining of data through the system and records key performance-monitor parameters built into the software. Furthermore, the data from each stage are stored in databases to facilitate evaluation, and all stages offer the facility to enter keyword-indexed free-format text into the data-logging system. In this way a large measure of certification is built into the system to provide the necessary confidence in the end results.
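
    The 3x3 Hanning pre-filter that protects the derivative-based algorithm from noise spikes can be sketched as a raised-cosine-weighted local average; the kernel weights are assumed to be the standard separable form [[1,2,1],[2,4,2],[1,2,1]]/16.

```python
def hanning3x3(img):
    """Smooth a 2-D list of pixel values with the 3x3 Hanning (raised-cosine)
    kernel [[1,2,1],[2,4,2],[1,2,1]]/16; border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    k = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(k[dy + 1][dx + 1] * img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = s / 16.0
    return out
```

    A uniform region is left unchanged, while an isolated one-pixel spike is attenuated to a quarter of its amplitude, which is exactly why the filter suppresses noise spikes before the spatial derivative is taken.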

  7. Job expansion : an additional benefit of a computer aided dispatch/automatic vehicle locator (CAD/AVL) system

    DOT National Transportation Integrated Search

    2000-03-01

    The Denver Regional Transportation District (RTD) acquired a CAD/AVL system that became fully operational in 1996. The CAD/AVL system added radio channels and covert alarms in buses, located vehicles in real time, and monitored schedule adherence. Th...

  8. Photogrammetric discharge monitoring of small tropical mountain rivers - A case study at Rivière des Pluies, Réunion island

    NASA Astrophysics Data System (ADS)

    Stumpf, André; Augereau, Emmanuel; Delacourt, Christophe; Bonnier, Julien

    2016-04-01

    Reliable discharge measurements are indispensable for effective management of natural water resources and floods. Limitations of classical current-meter profiling and stage-discharge ratings have stimulated the development of more accurate and efficient gauging techniques. While new discharge measurement technologies such as acoustic Doppler current profilers and large-scale particle image velocimetry (LSPIV) have been developed and tested in numerous studies, the continuous monitoring of small mountain rivers and of discharge dynamics during strong meteorological events remains challenging. More specifically, LSPIV studies often focus on short-term measurements during flood events, and there are still very few studies that address its use for long-term monitoring of small mountain rivers. To fill this gap, this study targets the development and testing of a largely autonomous photogrammetric discharge measurement system, with a special focus on application to a small mountain river in the tropics with high discharge variability and a mobile riverbed. It proposes several enhancements over previous LSPIV methods regarding camera calibration, more efficient processing in image geometry, automatic detection of the water level, and statistical calibration and estimation of the discharge from multiple profiles. To account for changes in the bed topography, the riverbed is surveyed repeatedly during the dry seasons using multi-view photogrammetry or terrestrial laser scanning. The presented case study comprises the analysis of several thousand videos spanning two and a half years (2013-2015) to test the robustness and accuracy of the different processing steps. An analysis of the obtained results suggests that the camera calibration reaches sub-pixel accuracy. The median accuracy of the water-mask detections is F1 = 0.82, whereas the precision is systematically higher than the recall. The resulting underestimation of the water surface area and level leads to a systematic underestimation of the discharge, with error rates of up to 25%. However, the bias can be effectively removed using a least-squares cross-calibration, which reduces the error to an MAE of 6.39% and a maximum error of 16.18%. These error rates are significantly lower than the uncertainties among multiple profiles (30%) and illustrate the importance of spatial averaging over multiple measurements. The study suggests that LSPIV can already be considered a valuable tool for the monitoring of torrential flows, whereas further research is still needed to fully integrate night-time observation and stereo-photogrammetric capabilities.
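
    A least-squares cross-calibration that removes a systematic discharge bias can be sketched as a one-parameter fit of reference discharges against the biased estimates. The closed-form scale fit below is a generic least-squares solution, assumed for illustration rather than taken from the paper.

```python
def cross_calibrate(q_est, q_ref):
    """Least-squares scale factor c minimising sum((q_ref - c*q_est)**2):
    c = sum(ref*est) / sum(est**2)."""
    return sum(r * q for r, q in zip(q_ref, q_est)) / sum(q * q for q in q_est)

def mae_percent(est, ref):
    """Mean absolute error of `est` relative to `ref`, in percent."""
    return 100.0 * sum(abs(e - r) / r for e, r in zip(est, ref)) / len(ref)
```

    For a synthetic 25% underestimate the fitted factor is 4/3 and the corrected series matches the reference, mirroring how the bias in the abstract is removed.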

  9. Semi-automatic brain tumor segmentation by constrained MRFs using structural trajectories.

    PubMed

    Zhao, Liang; Wu, Wei; Corso, Jason J

    2013-01-01

    Quantifying the volume and growth of a brain tumor is a primary prognostic measure and hence has received much attention in the medical imaging community. Most methods have sought a fully automatic segmentation, but the variability in shape and appearance of brain tumors has limited their success and further adoption in the clinic. In response, we present a semi-automatic brain tumor segmentation framework for multi-channel magnetic resonance (MR) images. This framework requires no prior model construction and only manual labels on one automatically selected slice. All other slices are labeled by an iterative multi-label Markov random field optimization with hard constraints. Structural trajectories (the medical image analog of optical flow) and 3D image over-segmentation are used to capture pixel correspondences between consecutive slices for pixel labeling. We show robustness and effectiveness through an evaluation on the 2012 MICCAI BRATS Challenge dataset; our results indicate superior performance to baselines and demonstrate the utility of the constrained MRF formulation.
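
    A hard-constrained multi-label MRF optimization can be sketched with one sweep of iterated conditional modes: each free pixel takes the label minimising a unary cost plus a Potts smoothness penalty against its neighbours, while constrained pixels never change. This is a generic MRF sketch, not the paper's optimiser; the costs and graph below are invented for the example.

```python
def icm_sweep(labels, unary, neighbors, fixed, beta=1.0):
    """One sweep of iterated conditional modes over a labelled graph.
    `labels`: dict node -> current label; `unary`: dict node -> cost per label;
    `neighbors`: dict node -> adjacent nodes; `fixed`: hard-constrained nodes."""
    num_labels = len(next(iter(unary.values())))
    for p in labels:
        if p in fixed:
            continue  # hard constraint: label is never revised
        best, best_cost = labels[p], float("inf")
        for lab in range(num_labels):
            cost = unary[p][lab]
            # Potts penalty: beta per disagreeing neighbour.
            cost += beta * sum(1 for q in neighbors[p] if labels[q] != lab)
            if cost < best_cost:
                best, best_cost = lab, cost
        labels[p] = best
    return labels
```

    In the tiny chain below, a free pixel with a weak preference for label 0 is pulled to label 1 by its two hard-constrained neighbours, which is the effect the hard constraints are meant to have.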

  10. A new method for the automatic interpretation of Schlumberger and Wenner sounding curves

    USGS Publications Warehouse

    Zohdy, A.A.R.

    1989-01-01

    A fast iterative method for the automatic interpretation of Schlumberger and Wenner sounding curves is based on obtaining interpreted depths and resistivities from shifted electrode spacings and adjusted apparent resistivities, respectively. The method is fully automatic. It does not require an initial guess of the number of layers, their thicknesses, or their resistivities; and it does not require extrapolation of incomplete sounding curves. The number of layers in the interpreted model equals the number of digitized points on the sounding curve. The resulting multilayer model is always well-behaved with no thin layers of unusually high or unusually low resistivities. For noisy data, interpretation is done in two sets of iterations (two passes). Anomalous layers, created because of noise in the first pass, are eliminated in the second pass. Such layers are eliminated by considering the best-fitting curve from the first pass to be a smoothed version of the observed curve and automatically reinterpreting it (second pass). The application of the method is illustrated by several examples. -Author
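
    The iterative adjustment scheme can be sketched as follows: each layer resistivity is multiplied by the ratio of observed to calculated apparent resistivity at the corresponding spacing, with one layer per digitized point. The forward model below is a deliberately crude stand-in (a running mean), not the real Schlumberger/Wenner kernel, so the sketch only illustrates the control flow of the iteration.

```python
def zohdy_iterate(obs, forward, rho0, iters=200):
    """Iteratively adjust layer resistivities so the forward-modelled apparent
    resistivities match the observed curve: rho_i *= obs_i / calc_i."""
    rho = list(rho0)
    for _ in range(iters):
        calc = forward(rho)
        rho = [r * (o / c) for r, o, c in zip(rho, obs, calc)]
    return rho

def toy_forward(rho):
    """Stand-in forward model (NOT a real geoelectric kernel): each apparent
    resistivity is the running mean of the layer resistivities above it."""
    return [sum(rho[:i + 1]) / (i + 1) for i in range(len(rho))]
```

    Starting from the observed curve itself, as the paper does (no initial layer guess), the iteration converges to a model whose forward response reproduces the observations.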

  11. A hybrid 3D region growing and 4D curvature analysis-based automatic abdominal blood vessel segmentation through contrast enhanced CT

    NASA Astrophysics Data System (ADS)

    Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Shimada, Mitsuo; Iinuma, Gen

    2017-03-01

    In abdominal disease diagnosis and the planning of various abdominal surgeries, segmentation of abdominal blood vessels (ABVs) is an essential task. Automatic segmentation enables fast and accurate processing of ABVs. We propose a fully automatic approach for segmenting ABVs in contrast-enhanced CT images by a hybrid of 3D region growing and 4D curvature analysis. The proposed method comprises three stages. First, candidates for bone, kidneys, ABVs and heart are segmented by an auto-adapted threshold. Second, bone is automatically segmented and classified into spine, ribs and pelvis. Third, ABVs are automatically segmented in two sub-steps: (1) the kidneys and the abdominal part of the heart are segmented, (2) ABVs are segmented by a hybrid approach that integrates 3D region growing and 4D curvature analysis. Results are compared with two conventional methods and show that the proposed method is very promising in segmenting and classifying bone and in segmenting whole ABVs, and that it may have utility in clinical use.
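
    The 3D-region-growing half of the hybrid method can be sketched as a 6-connected flood fill within an intensity window; the 4D curvature analysis and the auto-adapted threshold are omitted, and the intensity window here is an assumption for illustration.

```python
from collections import deque

def region_grow_3d(vol, seed, lo, hi):
    """Grow a 6-connected 3-D region from `seed`, accepting voxels whose
    intensity lies in [lo, hi]; `vol` is a nested list indexed [z][y][x]."""
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    grown, queue = set(), deque([seed])
    while queue:
        z, y, x = queue.popleft()
        if (z, y, x) in grown or not (0 <= z < nz and 0 <= y < ny and 0 <= x < nx):
            continue
        if not (lo <= vol[z][y][x] <= hi):
            continue
        grown.add((z, y, x))
        # 6-connectivity: face neighbours only.
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            queue.append((z + dz, y + dy, x + dx))
    return grown
```

    On a toy volume with a bright vessel-like column in a dark background, seeding inside the column recovers exactly the column's voxels.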

  12. An automatic multi-atlas prostate segmentation in MRI using a multiscale representation and a label fusion strategy

    NASA Astrophysics Data System (ADS)

    Álvarez, Charlens; Martínez, Fabio; Romero, Eduardo

    2015-01-01

    Pelvic magnetic resonance images (MRI) are used in prostate cancer radiotherapy (RT) as part of radiation planning. Modern protocols require a manual delineation, a tedious and variable activity that may take about 20 minutes per patient, even for trained experts. That considerable time is an important workflow burden in most radiological services. Automatic or semi-automatic methods might improve efficiency by decreasing measurement times while conserving the required accuracy. This work presents a fully automatic atlas-based segmentation strategy that selects the most similar templates for a new MRI using a robust multi-scale SURF analysis. A new segmentation is then achieved by a linear combination of the selected templates, which are previously non-rigidly registered towards the new image. The proposed method shows reliable segmentations, obtaining an average Dice coefficient of 79% when compared with expert manual segmentation, under a leave-one-out scheme with the training database.
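
    The label-fusion step can be sketched as a weighted linear combination of registered template masks followed by a 0.5 threshold, with the result scored by the Dice coefficient. The weights and the threshold value are illustrative assumptions, not the paper's exact scheme.

```python
def fuse_labels(masks, weights):
    """Threshold a weighted linear combination of same-sized binary 2-D masks
    (lists of lists of 0/1) at 0.5 to produce a fused segmentation."""
    h, w = len(masks[0]), len(masks[0][0])
    total = sum(weights)
    fused = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            score = sum(wt * m[y][x] for wt, m in zip(weights, masks)) / total
            fused[y][x] = 1 if score >= 0.5 else 0
    return fused

def dice(a, b):
    """Dice coefficient 2|A.B| / (|A| + |B|) between two binary masks."""
    inter = sum(av * bv for ra, rb in zip(a, b) for av, bv in zip(ra, rb))
    na = sum(v for row in a for v in row)
    nb = sum(v for row in b for v in row)
    return 2.0 * inter / (na + nb)
```

    With three equally weighted templates, a pixel marked by two of the three survives the 0.5 threshold, i.e. the fusion behaves as a majority vote.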

  13. Automatic digital image analysis for identification of mitotic cells in synchronous mammalian cell cultures.

    PubMed

    Eccles, B A; Klevecz, R R

    1986-06-01

    Mitotic frequency in a synchronous culture of mammalian cells was determined fully automatically and in real time using low-intensity phase-contrast microscopy and a Newvicon video camera connected to an EyeCom III image processor. Image samples, at a frequency of one per minute for 50 hours, were analyzed by first extracting the high-frequency picture components, then thresholding and probing for annular objects indicative of putative mitotic cells. Both the extraction of high-frequency components and the recognition of rings of varying radii and discontinuities employed novel algorithms. Spatial and temporal relationships between annuli were examined to discern the occurrences of mitoses, and such events were recorded in a computer data file. At present, the automatic analysis is suited for random cell proliferation rate measurements or cell cycle studies. The automatic identification of mitotic cells as described here provides a measure of the average proliferative activity of the cell population as a whole and eliminates more than eight hours of manual review per time-lapse video recording.
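
    The two stages, high-frequency extraction followed by probing for annular objects, can be sketched with a local-mean high-pass filter and a crude four-point ring probe. Both are simplifications of the paper's novel algorithms, which tolerate rings of varying radii and discontinuities.

```python
def high_pass(img):
    """High-frequency component as pixel minus its 3x3 local mean;
    borders are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            local = sum(img[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
            out[y][x] = img[y][x] - local
    return out

def is_annulus(mask, cy, cx, r):
    """Crude probe for a ring of radius r: the four compass pixels at
    distance r must be on and the centre must be off."""
    ring_on = all(mask[cy + dy][cx + dx]
                  for dy, dx in ((r, 0), (-r, 0), (0, r), (0, -r)))
    return ring_on and not mask[cy][cx]
```

    A bright ring around a dark centre, the phase-contrast appearance of a rounded mitotic cell, survives the high-pass filter and triggers the probe, while an off-centre position does not.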

  14. Automatic estimation of extent of resection and residual tumor volume of patients with glioblastoma.

    PubMed

    Meier, Raphael; Porz, Nicole; Knecht, Urspeter; Loosli, Tina; Schucht, Philippe; Beck, Jürgen; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio

    2017-10-01

    OBJECTIVE In the treatment of glioblastoma, residual tumor burden is the only prognostic factor that can be actively influenced by therapy. Therefore, an accurate, reproducible, and objective measurement of residual tumor burden is necessary. This study aimed to evaluate the use of a fully automatic segmentation method, brain tumor image analysis (BraTumIA), for estimating the extent of resection (EOR) and residual tumor volume (RTV) of contrast-enhancing tumor after surgery. METHODS The imaging data of 19 patients who underwent primary resection of histologically confirmed supratentorial glioblastoma were retrospectively reviewed. Contrast-enhancing tumors apparent on structural preoperative and immediate postoperative MR imaging in this patient cohort were segmented by 4 different raters and by the automatic BraTumIA software. The manual and automatic results were quantitatively compared. RESULTS First, the interrater variabilities in the estimates of EOR and RTV were assessed for all human raters. Interrater agreement in terms of the coefficient of concordance (W) was higher for RTV (W = 0.812; p < 0.001) than for EOR (W = 0.775; p < 0.001). Second, the volumetric estimates of BraTumIA for all 19 patients were compared with the estimates of the human raters, which showed that for both EOR (W = 0.713; p < 0.001) and RTV (W = 0.693; p < 0.001) the estimates of BraTumIA were generally located close to or between the estimates of the human raters. No statistically significant differences were detected between the manual and automatic estimates. BraTumIA showed a tendency to overestimate contrast-enhancing tumors, leading to moderate agreement with expert raters with respect to the literature-based, survival-relevant threshold values for EOR. CONCLUSIONS BraTumIA can generate volumetric estimates of EOR and RTV, in a fully automatic fashion, that are comparable to the estimates of human experts. However, the automated analysis showed a tendency to overestimate the volume of contrast-enhancing tumor, whereas manual analysis is prone to subjectivity, causing considerable interrater variability.
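
    The quantities being estimated follow standard definitions: EOR is the resected fraction of the preoperative contrast-enhancing volume and RTV is simply the postoperative volume, with volumes obtained from segmentation voxel counts and voxel spacing. A minimal sketch of that arithmetic (not BraTumIA code):

```python
def volume_cm3(num_voxels, spacing_mm=(1.0, 1.0, 1.0)):
    """Volume of a labelled region from its voxel count and voxel spacing."""
    vx_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return num_voxels * vx_mm3 / 1000.0  # mm^3 -> cm^3

def eor_rtv(preop_vol_cm3, postop_vol_cm3):
    """Extent of resection (fraction removed) and residual tumor volume:
    EOR = (pre - post) / pre, RTV = post."""
    eor = (preop_vol_cm3 - postop_vol_cm3) / preop_vol_cm3
    return eor, postop_vol_cm3
```

    Because EOR divides by the preoperative volume, a systematic overestimation of the postoperative contrast-enhancing volume (as reported for the automatic method) directly lowers the EOR estimate relative to the survival-relevant thresholds.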

  15. New Teff and [Fe/H] spectroscopic calibration for FGK dwarfs and GK giants

    NASA Astrophysics Data System (ADS)

    Teixeira, G. D. C.; Sousa, S. G.; Tsantaki, M.; Monteiro, M. J. P. F. G.; Santos, N. C.; Israelian, G.

    2016-10-01

    Context. The ever-growing number of large spectroscopic survey programs has increased the importance of fast and reliable methods with which to determine precise stellar parameters. Some of these methods are highly dependent on correct spectroscopic calibrations. Aims: The goal of this work is to obtain a new spectroscopic calibration for a fast estimate of Teff and [Fe/H] for a wide range of stellar spectral types. Methods: We used spectra from a joint sample of 708 stars, compiled from 451 FGK dwarfs and 257 GK-giant stars. We used homogeneously determined spectroscopic stellar parameters to derive temperature calibrations using a set of selected EW line-ratios, and [Fe/H] calibrations using a set of selected Fe I lines. Results: We have derived 322 EW line-ratios and 100 Fe I lines that can be used to compute Teff and [Fe/H], respectively. We show that these calibrations are effective for FGK dwarfs and GK-giant stars in the following ranges: 4500 K

  16. Autonomous calibration of single spin qubit operations

    NASA Astrophysics Data System (ADS)

    Frank, Florian; Unden, Thomas; Zoller, Jonathan; Said, Ressa S.; Calarco, Tommaso; Montangero, Simone; Naydenov, Boris; Jelezko, Fedor

    2017-12-01

    Fully autonomous, precise control of qubits is crucial for quantum information processing, quantum communication, and quantum sensing applications. It requires minimal human intervention and relies on the ability to model, predict, and anticipate the quantum dynamics, as well as to precisely control and calibrate single-qubit operations. Here, we demonstrate autonomous calibration of single-qubit operations via closed-loop optimisation of electron spin quantum operations in diamond. The operations are examined by quantum state and process tomographic measurements at room temperature, and their performance against systematic errors is iteratively rectified by an optimal pulse engineering algorithm. We achieve an autonomously calibrated fidelity of up to 1.00 on a time scale of minutes for a spin population inversion and up to 0.98 on a time scale of hours for a single-qubit π/2 rotation, within the experimental error of 2%. These results manifest the full potential of versatile quantum technologies.
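
    The closed-loop idea, measure a fidelity, adjust a pulse parameter, repeat, can be sketched with a gradient-free hill climb on a single pulse amplitude. This is far simpler than the optimal-pulse-engineering algorithm used in the paper; the toy fidelity model standing in for the experiment is an assumption.

```python
import math

def closed_loop_calibrate(measure_fidelity, amp0, step=0.1, iters=60):
    """Nudge the pulse amplitude uphill in measured fidelity, halving the
    step whenever neither neighbour improves on the current best."""
    amp, best = amp0, measure_fidelity(amp0)
    for _ in range(iters):
        improved = False
        for cand in (amp + step, amp - step):
            f = measure_fidelity(cand)
            if f > best:
                amp, best, improved = cand, f, True
        if not improved:
            step *= 0.5
    return amp, best

def toy_inversion_fidelity(amp):
    """Toy stand-in for the experiment: inversion fidelity peaks at amp = 1.0."""
    return math.sin(math.pi * amp / 2.0) ** 2
```

    Starting from a miscalibrated amplitude, the loop homes in on the peak without any model of the dynamics, which is the essential appeal of closed-loop calibration.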

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patankar, S.; Gumbrell, E. T.; Robinson, T. S.

    Here we report a new method using high-stability, laser-driven supercontinuum generation in a liquid cell to calibrate the absolute photon response of fast optical streak cameras as a function of wavelength when operating at their fastest sweep speeds. A stable, pulsed white-light source based on self-phase modulation in a salt solution was developed to provide the required brightness on picosecond timescales, enabling streak camera calibration in fully dynamic operation. The measured spectral brightness allowed for absolute photon response calibration over a broad spectral range (425-650 nm). Calibrations performed with two Axis Photonique streak cameras using the Photonis P820PSU streak tube demonstrated responses which qualitatively follow the photocathode response. Peak sensitivities were 1 photon/count above background. The absolute dynamic sensitivity is less than the static by up to an order of magnitude. We attribute this to the dynamic response of the phosphor being lower.

  18. Sensitivity analysis, calibration, and testing of a distributed hydrological model using error‐based weighting and one objective function

    USGS Publications Warehouse

    Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.

    2009-01-01

    We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.
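
    The composite scaled sensitivity named in the statistics list combines dimensionless scaled sensitivities into a root-mean-square over the observations. The sketch below follows the standard groundwater-calibration definition; the Jacobian column, parameter value, and weights are illustrative numbers, not values from the TOPKAPI study.

```python
import math

def composite_scaled_sensitivity(jac_col, param_value, weights):
    """CSS for one parameter b_j over ND observations:
    ss_ij = (dy_i/db_j) * b_j * sqrt(w_i), and CSS = RMS of the ss_ij.
    `jac_col` holds dy_i/db_j, `weights` the observation weights w_i."""
    nd = len(jac_col)
    ss = [d * param_value * math.sqrt(w) for d, w in zip(jac_col, weights)]
    return math.sqrt(sum(s * s for s in ss) / nd)
```

    A large CSS relative to other parameters indicates that the data, taken together, are informative about that parameter, which is what singles out the data types most important to the calibration.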

  19. Comparison of TLD calibration methods for  192Ir dosimetry

    PubMed Central

    Butler, Duncan J.; Wilfert, Lisa; Ebert, Martin A.; Todd, Stephen P.; Hayton, Anna J.M.; Kron, Tomas

    2013-01-01

    For the purpose of dose measurement using a high-dose-rate 192Ir source, four methods of thermoluminescent dosimeter (TLD) calibration were investigated. Three of the four calibration methods used the 192Ir source. Dwell times were calculated to deliver 1 Gy to the TLDs irradiated either in air or in water. Dwell time calculations were confirmed by direct measurement using an ionization chamber. The fourth method of calibration used 6 MV photons from a medical linear accelerator, and an energy correction factor was applied to account for the difference in sensitivity of the TLDs in 192Ir and 6 MV beams. The results of the four TLD calibration methods are presented in terms of the results of a brachytherapy audit in which seven Australian centers irradiated three sets of TLDs in a water phantom. The results were in agreement within estimated uncertainties when the TLDs were calibrated with the 192Ir source. Calibrating TLDs in a phantom similar to that used for the audit proved to be the most practical method and provided the greatest confidence in the measured dose. When calibrated using 6 MV photons, the TLD results were consistently higher than the 192Ir-calibrated TLDs, suggesting this method does not fully correct for the response of the TLDs when irradiated in the audit phantom. PACS number: 87 PMID:23318392
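
    The calibration logic can be sketched as a sensitivity (counts per gray) derived from an irradiation of known dose, with an energy correction factor applied when the calibration beam (e.g. 6 MV) differs from the 192Ir measurement beam. All numerical values below are illustrative, not the audit's.

```python
def dose_from_tld(reading, cal_reading, cal_dose_gy, energy_correction=1.0):
    """Convert a TLD reading to dose in Gy.
    `cal_reading` is the reading from a calibration irradiation of
    `cal_dose_gy`; `energy_correction` rescales the result when the
    calibration and measurement beam qualities differ."""
    sensitivity = cal_reading / cal_dose_gy   # counts per Gy in the cal beam
    return reading / sensitivity * energy_correction
```

    An energy correction below 1 lowers the reported dose, which is the direction needed to reconcile the consistently high 6 MV-calibrated results with the 192Ir-calibrated ones.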

  20. Measurement Marker Recognition In A Time Sequence Of Infrared Images For Biomedical Applications

    NASA Astrophysics Data System (ADS)

    Fiorini, A. R.; Fumero, R.; Marchesi, R.

    1986-03-01

    In thermographic measurements, quantitative surface temperature evaluation is often uncertain. The main reason is the lack of available reference points in transient conditions. Reflective markers were used for automatic marker recognition and pixel coordinate computation. An algorithm selects marker icons to match marker references where particular luminance conditions are satisfied. Automatic marker recognition allows luminance compensation and temperature calibration of recorded infrared images. A biomedical application is presented: the dynamic behaviour of the surface temperature distributions is investigated in order to study the performance of two different pumping systems for extracorporeal circulation. Sequences of images are compared and the results are discussed. Finally, the algorithm makes it possible to monitor the experimental environment and to alert for the presence of unusual experimental conditions.
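
    The temperature calibration enabled by the reference markers can be sketched as a linear luminance-to-temperature map fixed by two markers of known temperature. The two-point linear form and the numbers are assumptions for illustration, not the paper's compensation scheme.

```python
def temp_calibration(lum1, t1, lum2, t2):
    """Return a function mapping luminance to temperature, fitted through
    two reference markers (lum1, t1) and (lum2, t2)."""
    gain = (t2 - t1) / (lum2 - lum1)
    offset = t1 - gain * lum1
    return lambda lum: gain * lum + offset
```

    Re-fitting the map on every frame from the markers visible in that frame compensates for luminance drift across the recorded sequence.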
