Sample records for scanner inversion algorithms

  1. Top-of-atmosphere radiative fluxes - Validation of ERBE scanner inversion algorithm using Nimbus-7 ERB data

    NASA Technical Reports Server (NTRS)

    Suttles, John T.; Wielicki, Bruce A.; Vemury, Sastri

    1992-01-01

    The ERBE algorithm is applied to the Nimbus-7 earth radiation budget (ERB) scanner data for June 1979 to analyze the performance of an inversion method in deriving top-of-atmosphere albedos and longwave radiative fluxes. The performance is assessed by comparing ERBE algorithm results with appropriate results derived using the sorting-by-angular-bins (SAB) method, the ERB MATRIX algorithm, and the 'new-cloud ERB' (NCLE) algorithm. Comparisons are made for top-of-atmosphere albedos, longwave fluxes, viewing zenith-angle dependence of derived albedos and longwave fluxes, and cloud fractional coverage. Using the SAB method as a reference, the rms accuracy of monthly average ERBE-derived results is estimated to be 0.0165 (5.6 W/sq m) for albedos (shortwave fluxes) and 3.0 W/sq m for longwave fluxes. The ERBE-derived results were found to depend systematically on the viewing zenith angle, varying from near nadir to near the limb by about 10 percent for albedos and by 6-7 percent for longwave fluxes. Analyses indicated that the ERBE angular models are the most likely source of the systematic angular dependences. Comparison of the ERBE-derived cloud fractions, based on a maximum-likelihood estimation method, with results from the NCLE showed agreement within about 10 percent.

  2. Compensation for the signal processing characteristics of ultrasound B-mode scanners in adaptive speckle reduction.

    PubMed

    Crawford, D C; Bell, D S; Bamber, J C

    1993-01-01

    A systematic method to compensate for nonlinear amplification in individual ultrasound B-mode scanners has been investigated in order to optimise performance of an adaptive speckle reduction (ASR) filter for a wide range of clinical ultrasonic imaging equipment. Three potential methods have been investigated: (1) a method involving an appropriate selection of the speckle recognition feature was successful when the scanner signal processing executed a simple logarithmic compression; (2) an inverse transform (decompression) of the B-mode image was effective in correcting for the measured characteristics of image data compression when the algorithm was implemented in full floating-point arithmetic; (3) characterising the behaviour of the statistical speckle recognition feature under conditions of speckle noise was found to be the method of choice for implementation of the adaptive speckle reduction algorithm in limited-precision integer arithmetic. In this example, the statistical features of variance and mean were investigated. The third method may be implemented on commercially available fast image processing hardware and is also better suited for transfer into dedicated hardware to facilitate real-time adaptive speckle reduction. A systematic method is described for obtaining ASR calibration data from B-mode images of a speckle-producing phantom.
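
    The third approach's core idea can be sketched as a Lee-style adaptive filter in which the local std/mean ratio serves as the statistical speckle-recognition feature. This is an illustrative sketch, not the paper's implementation: the function name, window size, and speckle constant (here set to the theoretical ratio for Rayleigh-distributed speckle) are assumptions.

```python
import numpy as np

def adaptive_speckle_filter(img, win=5, c_speckle=0.52):
    """Lee-style adaptive smoothing driven by the local std/mean ratio.

    Where the local ratio matches fully developed speckle the pixel is
    replaced by the local mean; where it is larger (resolved structure)
    more of the original value is kept.
    """
    pad = win // 2
    p = np.pad(np.asarray(img, dtype=float), pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = p[i:i + win, j:j + win]        # local window around (i, j)
            m, v = w.mean(), w.var()
            c_local2 = v / (m * m + 1e-12)     # squared std/mean ratio
            k = max(0.0, 1.0 - c_speckle ** 2 / (c_local2 + 1e-12))
            out[i, j] = m + k * (img[i, j] - m)
    return out
```

    Calibrating `c_speckle` against a speckle-producing phantom, as the abstract describes, would replace the fixed constant above with a measured value that accounts for the scanner's compression characteristics.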

  3. Spatiotemporal matrix image formation for programmable ultrasound scanners

    NASA Astrophysics Data System (ADS)

    Berthon, Beatrice; Morichau-Beauchant, Pierre; Porée, Jonathan; Garofalakis, Anikitos; Tavitian, Bertrand; Tanter, Mickael; Provost, Jean

    2018-02-01

    As programmable ultrasound scanners become more common in research laboratories, it is increasingly important to develop robust software-based image formation algorithms that can be implemented in a straightforward fashion for different types of probes and sequences with a small risk of error. In this work, we argue that as computational power keeps increasing, it is becoming practical to directly implement an approximation to the matrix operator linking reflector point targets to the corresponding radiofrequency signals via thoroughly validated and widely available simulation software. Once such a spatiotemporal forward-problem matrix is constructed, standard and thus highly optimized inversion procedures can be leveraged to achieve very high quality images in real time. Specifically, we show that spatiotemporal matrix image formation produces images of similar or enhanced quality when compared against standard delay-and-sum approaches in phantoms and in vivo, and that this approach can be used to form images even when using non-conventional probe designs for which adapted image formation algorithms are not readily available.
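
    The core idea — populate a forward matrix linking point reflectors to RF samples, then invert it with a standard solver — can be sketched in a toy 1-D setting. The geometry, sampling rate, and idealized binary impulse response below are illustrative assumptions; the paper populates the matrix with validated simulation software rather than a delta response.

```python
import numpy as np

c, fs = 1540.0, 20e6                    # sound speed (m/s), sampling rate (Hz)
depths = np.linspace(5e-3, 20e-3, 40)   # candidate reflector depths (m)
elems = np.linspace(-5e-3, 5e-3, 8)     # receive element x-positions (m)
T = 900                                 # RF samples kept per channel

# Forward matrix: column k holds the (idealized) RF response of a point
# reflector at depths[k] for a single plane-wave transmit.
A = np.zeros((len(elems) * T, len(depths)))
for k, z in enumerate(depths):
    for m, x in enumerate(elems):
        tau = (z + np.hypot(z, x)) / c  # plane wave down, echo back to element m
        s = int(round(tau * fs))
        if s < T:
            A[m * T + s, k] = 1.0

# Simulate RF data from two reflectors, then invert with least squares.
truth = np.zeros(len(depths))
truth[[10, 30]] = 1.0
rf = A @ truth
img, *_ = np.linalg.lstsq(A, rf, rcond=None)
```

    In practice the matrix is large and sparse, so the dense `lstsq` call above would be replaced by an optimized sparse or iterative solver; the point is that image formation reduces to a standard linear-inverse problem once the matrix exists.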

  4. Experimental validation of an OSEM-type iterative reconstruction algorithm for inverse geometry computed tomography

    NASA Astrophysics Data System (ADS)

    David, Sabrina; Burion, Steve; Tepe, Alan; Wilfley, Brian; Menig, Daniel; Funk, Tobias

    2012-03-01

    Iterative reconstruction methods have emerged as a promising avenue to reduce dose in CT imaging. Another, perhaps less well-known, advance has been the development of inverse geometry CT (IGCT) imaging systems, which can significantly reduce the radiation dose delivered to a patient during a CT scan compared to conventional CT systems. Here we show that IGCT data can be reconstructed using iterative methods, thereby combining two novel methods for CT dose reduction. A prototype IGCT scanner was developed using a scanning beam digital X-ray system - an inverse geometry fluoroscopy system with a 9,000-focal-spot x-ray source and a small photon-counting detector. Ninety fluoroscopic projections, or "superviews", spanning an angle of 360 degrees were acquired of an anthropomorphic phantom mimicking a 1-year-old boy. The superviews were reconstructed with a custom iterative reconstruction algorithm, based on the maximum-likelihood algorithm for transmission tomography (ML-TR). The normalization term was calculated based on flat-field data acquired without a phantom. 15 subsets were used, and a total of 10 complete iterations were performed. Initial reconstructed images showed faithful reconstruction of anatomical details. Good edge resolution and good contrast-to-noise properties were observed. Overall, ML-TR reconstruction of IGCT data collected by a bench-top prototype was shown to be viable, which may be an important milestone in the further development of inverse geometry CT.
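
    The ML-TR iteration with ordered subsets and a flat-field normalization term can be sketched generically as follows. This is a standard textbook form of the ML-TR update on a toy system matrix, not the authors' production code; sizes and variable names are illustrative.

```python
import numpy as np

def ml_tr_os(A, y, b, n_iter=10, n_subsets=5):
    """Ordered-subsets ML-TR for transmission data y ~ b * exp(-A @ mu).

    A : (n_rays, n_pix) system matrix, y : measured counts,
    b : blank-scan (flat-field) counts, i.e. the normalization term.
    """
    n_rays, n_pix = A.shape
    mu = np.zeros(n_pix)
    subsets = [np.arange(s, n_rays, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As = A[idx]
            yhat = b[idx] * np.exp(-As @ mu)              # expected counts
            num = As.T @ (yhat - y[idx])                  # likelihood gradient
            den = As.T @ (yhat * As.sum(axis=1)) + 1e-12  # curvature surrogate
            mu = np.maximum(mu + num / den, 0.0)          # enforce nonnegativity
    return mu
```

    Each subset update touches only a fraction of the rays, which is why ordered subsets (15 in the paper) accelerate convergence relative to sweeping the full data at every iteration.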

  5. GRAPE: a graphical pipeline environment for image analysis in adaptive magnetic resonance imaging.

    PubMed

    Gabr, Refaat E; Tefera, Getaneh B; Allen, William J; Pednekar, Amol S; Narayana, Ponnada A

    2017-03-01

    We present a platform, GRAphical Pipeline Environment (GRAPE), to facilitate the development of patient-adaptive magnetic resonance imaging (MRI) protocols. GRAPE is an open-source project implemented in the Qt C++ framework to enable graphical creation, execution, and debugging of real-time image analysis algorithms integrated with the MRI scanner. The platform provides the tools and infrastructure to design new algorithms, build and execute an array of image analysis routines, and include existing analysis libraries, all within a graphical environment. The application of GRAPE is demonstrated in multiple MRI applications, and the software is described in detail for both the user and the developer. GRAPE was successfully used to implement and execute three applications in MRI of the brain, performed on a 3.0-T MRI scanner: (i) a multi-parametric pipeline for segmenting the brain tissue and detecting lesions in multiple sclerosis (MS), (ii) patient-specific optimization of the 3D fluid-attenuated inversion recovery MRI scan parameters to enhance the contrast of brain lesions in MS, and (iii) an algebraic image method for combining two MR images for improved lesion contrast. GRAPE allows graphical development and execution of image analysis algorithms for inline, real-time, and adaptive MRI applications.

  6. Decomposed direct matrix inversion for fast non-cartesian SENSE reconstructions.

    PubMed

    Qian, Yongxian; Zhang, Zhenghui; Wang, Yi; Boada, Fernando E

    2006-08-01

    A new k-space direct matrix inversion (DMI) method is proposed here to accelerate non-Cartesian SENSE reconstructions. In this method a global k-space matrix equation is established on basic MRI principles, and the inverse of the global encoding matrix is found from a set of local matrix equations by taking advantage of the small extension of k-space coil maps. The DMI algorithm's efficiency is achieved by reloading the precalculated global inverse when the coil maps and trajectories remain unchanged, such as in dynamic studies. Phantom and human subject experiments were performed on a 1.5T scanner with a standard four-channel phased-array cardiac coil. Interleaved spiral trajectories were used to collect fully sampled and undersampled 3D raw data. The equivalence of the global k-space matrix equation to its image-space version was verified via conjugate gradient (CG) iterative algorithms on a 2x undersampled phantom and numerical-model data sets. When applied to the 2x undersampled phantom and human-subject raw data, the decomposed DMI method produced images with small errors (< or = 3.9%) relative to the reference images obtained from the fully sampled data, at a rate of 2 s per slice (excluding 4 min for precalculating the global inverse at an image size of 256 x 256). The DMI method may be useful for noise evaluations in parallel coil designs, dynamic MRI, and 3D sodium MRI with fixed coils and trajectories. Copyright 2006 Wiley-Liss, Inc.
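
    The speed argument — spend minutes computing the inverse once, then reconstruct each dynamic frame with a single matrix multiply — can be illustrated with a toy dense encoding matrix. The matrix here is random and dense, a stand-in for the paper's structured k-space encoding matrix; the sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_k, n_img = 120, 64        # undersampled k-space samples, image unknowns
# Toy complex "encoding matrix"; fixed coil maps and trajectories mean
# E does not change between dynamic frames.
E = rng.normal(size=(n_k, n_img)) + 1j * rng.normal(size=(n_k, n_img))

E_pinv = np.linalg.pinv(E)  # expensive precalculation, done once

frames = []
for _ in range(5):          # dynamic frames: one matrix multiply each
    x_true = rng.normal(size=n_img)
    d = E @ x_true          # simulated acquired k-space data
    frames.append((x_true, E_pinv @ d))
```

    The paper's contribution is making the "done once" step tractable at realistic sizes by decomposing the global inversion into local matrix equations rather than forming a full pseudoinverse as above.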

  7. Multisource inverse-geometry CT. Part I. System concept and development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Man, Bruno, E-mail: deman@ge.com; Harrison, Dan

    Purpose: This paper presents an overview of multisource inverse-geometry computed tomography (IGCT) as well as the development of a gantry-based research prototype system. The development of the distributed x-ray source is covered in a companion paper [V. B. Neculaes et al., “Multisource inverse-geometry CT. Part II. X-ray source design and prototype,” Med. Phys. 43, 4617–4627 (2016)]. While progress updates of this development have been presented at conferences and in journal papers, this paper is the first comprehensive overview of the multisource inverse-geometry CT concept and prototype. The authors also provide a review of all previous IGCT related publications. Methods: The authors designed and implemented a gantry-based 32-source IGCT scanner with 22 cm field-of-view, 16 cm z-coverage, 1 s rotation time, 1.09 × 1.024 mm detector cell size, as low as 0.4 × 0.8 mm focal spot size and 80–140 kVp x-ray source voltage. The system is built using commercially available CT components and a custom-made distributed x-ray source. The authors developed dedicated controls, calibrations, and reconstruction algorithms and evaluated the system performance using phantoms and small animals. Results: The authors performed IGCT system experiments and demonstrated tube current up to 125 mA with up to 32 focal spots. The authors measured a spatial resolution of 13 lp/cm at 5% cutoff. The scatter-to-primary ratio is estimated at 62% for a 32 cm water phantom at 140 kVp. The authors scanned several phantoms and small animals. The initial images have relatively high noise due to the low x-ray flux levels but minimal artifacts. Conclusions: IGCT has unique benefits in terms of dose-efficiency and cone-beam artifacts, but comes with challenges in terms of scattered radiation and x-ray flux limits. To the authors’ knowledge, their prototype is the first gantry-based IGCT scanner.
The authors summarized the design and implementation of the scanner and presented results with phantoms and small animals.

  8. Multisource inverse-geometry CT. Part I. System concept and development

    PubMed Central

    De Man, Bruno; Uribe, Jorge; Baek, Jongduk; Harrison, Dan; Yin, Zhye; Longtin, Randy; Roy, Jaydeep; Waters, Bill; Wilson, Colin; Short, Jonathan; Inzinna, Lou; Reynolds, Joseph; Neculaes, V. Bogdan; Frutschy, Kristopher; Senzig, Bob; Pelc, Norbert

    2016-01-01

    Purpose: This paper presents an overview of multisource inverse-geometry computed tomography (IGCT) as well as the development of a gantry-based research prototype system. The development of the distributed x-ray source is covered in a companion paper [V. B. Neculaes et al., “Multisource inverse-geometry CT. Part II. X-ray source design and prototype,” Med. Phys. 43, 4617–4627 (2016)]. While progress updates of this development have been presented at conferences and in journal papers, this paper is the first comprehensive overview of the multisource inverse-geometry CT concept and prototype. The authors also provide a review of all previous IGCT related publications. Methods: The authors designed and implemented a gantry-based 32-source IGCT scanner with 22 cm field-of-view, 16 cm z-coverage, 1 s rotation time, 1.09 × 1.024 mm detector cell size, as low as 0.4 × 0.8 mm focal spot size and 80–140 kVp x-ray source voltage. The system is built using commercially available CT components and a custom-made distributed x-ray source. The authors developed dedicated controls, calibrations, and reconstruction algorithms and evaluated the system performance using phantoms and small animals. Results: The authors performed IGCT system experiments and demonstrated tube current up to 125 mA with up to 32 focal spots. The authors measured a spatial resolution of 13 lp/cm at 5% cutoff. The scatter-to-primary ratio is estimated at 62% for a 32 cm water phantom at 140 kVp. The authors scanned several phantoms and small animals. The initial images have relatively high noise due to the low x-ray flux levels but minimal artifacts. Conclusions: IGCT has unique benefits in terms of dose-efficiency and cone-beam artifacts, but comes with challenges in terms of scattered radiation and x-ray flux limits. To the authors’ knowledge, their prototype is the first gantry-based IGCT scanner.
The authors summarized the design and implementation of the scanner and presented results with phantoms and small animals. PMID:27487877

  9. Out of lab calibration of a rotating 2D scanner for 3D mapping

    NASA Astrophysics Data System (ADS)

    Koch, Rainer; Böttcher, Lena; Jahrsdörfer, Maximilian; Maier, Johannes; Trommer, Malte; May, Stefan; Nüchter, Andreas

    2017-06-01

    Mapping is an essential task in mobile robotics. To fulfil advanced navigation and manipulation tasks, a 3D representation of the environment is required. Applying stereo cameras or time-of-flight (TOF) cameras is one way to achieve this, but they suffer from drawbacks which make it difficult to map properly. Therefore, costly 3D laser scanners are applied. An inexpensive way to build a 3D representation is to use a 2D laser scanner and rotate the scan plane around an additional axis. A 3D point cloud acquired with such a custom device consists of multiple 2D line scans. Therefore, the scanner pose of each line scan needs to be determined, as well as parameters resulting from a calibration, to generate a 3D point cloud. Using external sensor systems is a common method to determine these calibration parameters, but this is costly and difficult when the robot needs to be calibrated outside the lab. Thus, this work presents a calibration method for a rotating 2D laser scanner. It uses a hardware setup to identify the required calibration parameters. This hardware setup is light, small, and easy to transport; hence, an out-of-lab calibration is possible. Additionally, a theoretical model was created to test the algorithm and analyse the impact of the scanner accuracy. The hardware components of the 3D scanner system are a HOKUYO UTM-30LX-EW 2D laser scanner, a Dynamixel servo-motor, and a control unit. The calibration system consists of a hemisphere; inside the hemisphere, a circular plate is mounted. The algorithm needs to be provided with a dataset of a single rotation from the laser scanner. To achieve a proper calibration result, the scanner needs to be located in the middle of the hemisphere. By means of geometric formulas, the algorithm determines the individual deviations of the placed laser scanner. In order to minimize errors, the algorithm solves the formulas in an iterative process.
First, the calibration algorithm was tested with an ideal hemisphere model created in Matlab. Second, the laser scanner was mounted differently: the scanner position and the rotation axis were modified. In doing so, every deviation was compared with the algorithm results. Several measurement settings were tested repeatedly with the 3D scanner system and the calibration system. The results show that the length accuracy of the laser scanner is most critical; it influences the required size of the hemisphere and the calibration accuracy.

  10. Galileo spacecraft autonomous attitude determination using a V-slit star scanner

    NASA Technical Reports Server (NTRS)

    Mobasser, Sohrab; Lin, Shuh-Ren

    1991-01-01

    The autonomous attitude determination system of the Galileo spacecraft, consisting of a radiation-hardened star scanner and a processing algorithm, is presented. The algorithms applied in this system are sequential star identification and attitude estimation. The star scanner model is reviewed in detail, and the flight software parameters that must be updated frequently during flight, due to degradation of the scanner response and changes in the star background, are identified.

  11. Joint Calibration of 3D Laser Scanner and Digital Camera Based on DLT Algorithm

    NASA Astrophysics Data System (ADS)

    Gao, X.; Li, M.; Xing, L.; Liu, Y.

    2018-04-01

    A calibration target was designed that can be scanned by a 3D laser scanner while being photographed by a digital camera, yielding a point cloud and photographs of the same target. A method to jointly calibrate a 3D laser scanner and a digital camera based on the Direct Linear Transformation (DLT) algorithm is proposed. This method adds a digital-camera distortion model to the traditional DLT algorithm; after repeated iteration, it solves for the interior and exterior orientation elements of the camera as well as the joint calibration of the 3D laser scanner and digital camera. The results show that this method is reliable.
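
    The DLT step at the heart of this method — solving for a camera's 3x4 projection matrix from 3D–2D correspondences via a homogeneous linear system — can be sketched as follows. This is the noiseless textbook DLT without the distortion model the paper adds; function names are illustrative.

```python
import numpy as np

def dlt(X, uv):
    """Estimate the 3x4 projection matrix from 3-D points X (n, 3) and
    their image observations uv (n, 2) via the Direct Linear Transform."""
    rows = []
    for (x, y, z), (u, v) in zip(X, uv):
        rows.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z, -u])
        rows.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 4)     # null-space vector, defined up to scale

def project(P, X):
    """Apply a 3x4 projection matrix to 3-D points, returning pixel coords."""
    Xh = np.c_[X, np.ones(len(X))] @ P.T
    return Xh[:, :2] / Xh[:, 2:3]
```

    At least six non-coplanar 3D–2D correspondences are needed; the paper's iterative scheme alternates between this linear solve and refining the lens-distortion parameters.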

  12. A prototype table-top inverse-geometry volumetric CT system.

    PubMed

    Schmidt, Taly Gilat; Star-Lack, Josh; Bennett, N Robert; Mazin, Samuel R; Solomon, Edward G; Fahrig, Rebecca; Pelc, Norbert J

    2006-06-01

    A table-top volumetric CT system has been implemented that is able to image a 5-cm-thick volume in one circular scan with no cone-beam artifacts. The prototype inverse-geometry CT (IGCT) scanner consists of a large-area, scanned x-ray source and a detector array that is smaller in the transverse direction. The IGCT geometry provides sufficient volumetric sampling because the source and detector have the same axial, or slice direction, extent. This paper describes the implementation of the table-top IGCT scanner, which is based on the NexRay Scanning-Beam Digital X-ray system (NexRay, Inc., Los Gatos, CA) and an investigation of the system performance. The alignment and flat-field calibration procedures are described, along with a summary of the reconstruction algorithm. The resolution and noise performance of the prototype IGCT system are studied through experiments and further supported by analytical predictions and simulations. To study the presence of cone-beam artifacts, a "Defrise" phantom was scanned on both the prototype IGCT scanner and a micro CT system with a +/-5° cone angle for a 4.5-cm volume thickness. Images of inner ear specimens are presented and compared to those from clinical CT systems. Results showed that the prototype IGCT system has a 0.25-mm isotropic resolution and that noise comparable to that from a clinical scanner with equivalent spatial resolution is achievable. The measured MTF and noise values agreed reasonably well with theoretical predictions and computer simulations. The IGCT system was able to faithfully reconstruct the laminated pattern of the Defrise phantom while the micro CT system suffered severe cone-beam artifacts for the same object. The inner ear acquisition verified that the IGCT system can image a complex anatomical object, and the resulting images exhibited more high-resolution details than the clinical CT acquisition.
Overall, the successful implementation of the prototype system supports the IGCT concept for single-rotation volumetric scanning free from cone-beam artifacts.

  13. Description of algorithms for processing Coastal Zone Color Scanner (CZCS) data

    NASA Technical Reports Server (NTRS)

    Zion, P. M.

    1983-01-01

    The algorithms for processing coastal zone color scanner (CZCS) data to geophysical units (pigment concentration) are described. Current public domain information for processing these data is summarized. Calibration, atmospheric correction, and bio-optical algorithms are presented. Three CZCS data processing implementations are compared.

  14. Performance comparison of two resolution modeling PET reconstruction algorithms in terms of physical figures of merit used in quantitative imaging.

    PubMed

    Matheoud, R; Ferrando, O; Valzano, S; Lizio, D; Sacchetti, G; Ciarmiello, A; Foppiano, F; Brambilla, M

    2015-07-01

    Resolution modeling (RM) of PET systems has been introduced in iterative reconstruction algorithms for oncologic PET. The RM recovers the loss of resolution and reduces the associated partial volume effect. While these methods improved the observer performance, particularly in the detection of small and faint lesions, their impact on quantification accuracy still requires thorough investigation. The aim of this study was to characterize the performances of the RM algorithms under controlled conditions simulating a typical (18)F-FDG oncologic study, using an anthropomorphic phantom and selected physical figures of merit, used for image quantification. Measurements were performed on Biograph HiREZ (B_HiREZ) and Discovery 710 (D_710) PET/CT scanners and reconstructions were performed using the standard iterative reconstructions and the RM algorithms associated to each scanner: TrueX and SharpIR, respectively. RM yielded a significant improvement in contrast recovery for small targets (≤17 mm diameter) only for the D_710 scanner. The maximum standardized uptake value (SUVmax) increased when RM was applied using both scanners. The SUVmax of small targets was on average lower with the B_HiREZ than with the D_710. SharpIR improved the accuracy of SUVmax determination, whilst TrueX showed an overestimation of SUVmax for sphere dimensions greater than 22 mm. The goodness of fit of adaptive threshold algorithms worsened significantly when RM algorithms were employed for both scanners. Differences in general quantitative performance were observed for the PET scanners analyzed. Segmentation of PET images using adaptive threshold algorithms should not be undertaken in conjunction with RM reconstructions. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  15. Algorithms for Coastal-Zone Color-Scanner Data

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Software for Nimbus-7 Coastal-Zone Color-Scanner (CZCS) derived products consists of a set of scientific algorithms for extracting information from CZCS-gathered data. The software uses the CZCS-generated Calibrated Radiance Temperature (CRT) tape as input and outputs a computer-compatible tape and film product.

  16. Fast automatic segmentation of anatomical structures in x-ray computed tomography images to improve fluorescence molecular tomography reconstruction.

    PubMed

    Freyer, Marcus; Ale, Angelique; Schulz, Ralf B; Zientkowska, Marta; Ntziachristos, Vasilis; Englmeier, Karl-Hans

    2010-01-01

    The recent development of hybrid imaging scanners that integrate fluorescence molecular tomography (FMT) and x-ray computed tomography (XCT) allows the utilization of x-ray information as image priors for improving optical tomography reconstruction. To fully capitalize on this capacity, we consider a framework for the automatic and fast detection of different anatomic structures in murine XCT images. To accurately differentiate between different structures such as bone, lung, and heart, a combination of image processing steps including thresholding, seed growing, and signal detection are found to offer optimal segmentation performance. The algorithm and its utilization in an inverse FMT scheme that uses priors is demonstrated on mouse images.
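
    The seed-growing step mentioned above can be sketched as a simple 4-connected flood fill with an intensity tolerance. This is a generic illustration of the technique, not the authors' exact implementation; the function name and tolerance rule are assumptions.

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol):
    """4-connected seed growing: starting from `seed`, accept neighbouring
    pixels whose intensity is within `tol` of the seed pixel's value."""
    h, w = img.shape
    ref = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        i, j = q.popleft()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if (0 <= ni < h and 0 <= nj < w and not mask[ni, nj]
                    and abs(float(img[ni, nj]) - ref) <= tol):
                mask[ni, nj] = True
                q.append((ni, nj))
    return mask
```

    In the combined pipeline the abstract describes, a thresholding step would supply candidate seeds (e.g. high-attenuation bone voxels), and the grown regions then serve as anatomical priors for the FMT inversion.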

  17. Timestamp Offset Determination Between AN Actuated Laser Scanner and its Corresponding Motor

    NASA Astrophysics Data System (ADS)

    Voges, R.; Wieghardt, C. S.; Wagner, B.

    2017-05-01

    Motor actuated 2D laser scanners are key sensors for many robotics applications that need wide ranging but low cost 3D data. There exist many approaches to building a 3D laser scanner with this technique, but they often lack proper synchronization for the timestamps of the actuator and the laser scanner. However, to transform the measurement points into three-dimensional space, an appropriate synchronization is mandatory. Thus, we propose two different approaches to accomplish the goal of calculating timestamp offsets between laser scanner and motor prior to and after data acquisition. Both approaches use parts of a SLAM algorithm but apply different criteria to find an appropriate solution. While the approach for offset calculation prior to data acquisition exploits the fact that the SLAM algorithm should not register motion for a stationary system, the approach for offset calculation after data acquisition evaluates the perceived clarity of a point cloud created by the SLAM algorithm. Our experiments show that both approaches yield the same results although operating independently on different data, which demonstrates that the results reflect reality with a high probability. Furthermore, our experiments exhibit the significance of a proper synchronization between laser scanner and actuator.

  18. Galileo Attitude Determination: Experiences with a Rotating Star Scanner

    NASA Technical Reports Server (NTRS)

    Merken, L.; Singh, G.

    1991-01-01

    The Galileo experience with a rotating star scanner is discussed in terms of problems encountered in flight, solutions implemented, and lessons learned. An overview of the Galileo project and the attitude and articulation control subsystem is given and the star scanner hardware and relevant software algorithms are detailed. The star scanner is the sole source of inertial attitude reference for this spacecraft. Problem symptoms observed in flight are discussed in terms of effects on spacecraft performance and safety. Sources of these problems include contributions from flight software idiosyncrasies and inadequate validation of the ground procedures used to identify target stars for use by the autonomous on-board star identification algorithm. Problem fixes (some already implemented and some only proposed) are discussed. A general conclusion is drawn regarding the inherent difficulty of performing simulation tests to validate algorithms which are highly sensitive to external inputs of statistically 'rare' events.

  19. A fast rebinning algorithm for 3D positron emission tomography using John's equation

    NASA Astrophysics Data System (ADS)

    Defrise, Michel; Liu, Xuan

    1999-08-01

    Volume imaging in positron emission tomography (PET) requires the inversion of the three-dimensional (3D) x-ray transform. The usual solution to this problem is based on 3D filtered-backprojection (FBP), but is slow. Alternative methods have been proposed which factor the 3D data into independent 2D data sets corresponding to the 2D Radon transforms of a stack of parallel slices. Each slice is then reconstructed using 2D FBP. These so-called rebinning methods are numerically efficient but are approximate. In this paper a new exact rebinning method is derived by exploiting the fact that the 3D x-ray transform of a function is the solution to the second-order partial differential equation first studied by John. The method is proposed for two sampling schemes, one corresponding to a pair of infinite plane detectors and another one corresponding to a cylindrical multi-ring PET scanner. The new FORE-J algorithm has been implemented for this latter geometry and was compared with the approximate Fourier rebinning algorithm FORE and with another exact rebinning algorithm, FOREX. Results with simulated data demonstrate a significant improvement in accuracy compared to FORE, while the reconstruction time is doubled. Compared to FOREX, the FORE-J algorithm is slightly less accurate but more than three times faster.

  20. Optimizing convergence rates of alternating minimization reconstruction algorithms for real-time explosive detection applications

    NASA Astrophysics Data System (ADS)

    Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.

    2016-05-01

    X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and more recently to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to correction limits of convergent algorithms extends the number of iterations and ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an alternating minimization (AM) reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15, when comparing images from accelerated and strictly convergent algorithms.

  1. Improvement in detection of small wildfires

    NASA Astrophysics Data System (ADS)

    Sleigh, William J.

    1991-12-01

    Detecting and imaging small wildfires with an Airborne Scanner is done against generally high background levels. The Airborne Scanner System used is a two-channel thermal IR scanner, with one channel selected for imaging the terrain and the other channel sensitive to hotter targets. If a relationship can be determined between the two channels that quantifies the background signal for hotter targets, then an algorithm can be determined that removes the background signal in that channel leaving only the fire signal. The relationship can be determined anywhere between various points in the signal processing of the radiometric data from the radiometric input to the quantized output of the system. As long as only linear operations are performed on the signal, the relationship will only depend on the system gain and offsets within the range of interest. The algorithm can be implemented either by using a look-up table or performing the calculation in the system computer. The current presentation will describe the algorithm, its derivation, and its implementation in the Firefly Wildfire Detection System by means of an off-the-shelf commercial scanner. Improvement over the previous algorithm used and the margin gained for improving the imaging of the terrain will be demonstrated.
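
    Since only linear operations act on the signal, the relationship between the two channels is an affine (gain/offset) map, and background removal reduces to fitting and subtracting it. A minimal sketch of that idea follows; the robust selection of non-fire pixels for the fit, and the look-up-table implementation mentioned above, are omitted.

```python
import numpy as np

def remove_background(hot, terrain):
    """Fit hot ~ gain * terrain + offset across the scene and subtract the
    prediction, leaving only the fire signal in the hot channel.
    (A robust fit restricted to known non-fire pixels is omitted here.)"""
    gain, offset = np.polyfit(terrain.ravel(), hot.ravel(), 1)
    return hot - (gain * terrain + offset)
```

    Because the affine relationship depends only on system gains and offsets within the range of interest, the fitted `gain` and `offset` could equally be tabulated once and applied through a look-up table, as the abstract suggests.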

  2. Improvement in detection of small wildfires

    NASA Technical Reports Server (NTRS)

    Sleigh, William J.

    1991-01-01

    Detecting and imaging small wildfires with an Airborne Scanner is done against generally high background levels. The Airborne Scanner System used is a two-channel thermal IR scanner, with one channel selected for imaging the terrain and the other channel sensitive to hotter targets. If a relationship can be determined between the two channels that quantifies the background signal for hotter targets, then an algorithm can be derived that removes the background signal in that channel, leaving only the fire signal. The relationship can be established between various points in the signal-processing chain, from the radiometric input to the quantized output of the system. As long as only linear operations are performed on the signal, the relationship will depend only on the system gains and offsets within the range of interest. The algorithm can be implemented either by using a look-up table or by performing the calculation in the system computer. The presentation describes the algorithm, its derivation, and its implementation in the Firefly Wildfire Detection System by means of an off-the-shelf commercial scanner. The improvement over the previous algorithm and the margin gained for improving the imaging of the terrain are demonstrated.

  3. Objective performance assessment of five computed tomography iterative reconstruction algorithms.

    PubMed

    Omotayo, Azeez; Elbakri, Idris

    2016-11-22

    Iterative algorithms are gaining clinical acceptance in CT. We performed an objective phantom-based image quality evaluation of five commercial iterative reconstruction algorithms available on four different multi-detector CT (MDCT) scanners at different dose levels, as well as the conventional filtered back-projection (FBP) reconstruction. Using the Catphan500 phantom, we evaluated image noise, contrast-to-noise ratio (CNR), modulation transfer function (MTF) and noise-power spectrum (NPS). The algorithms were evaluated over a CTDIvol range of 0.75-18.7 mGy on four major MDCT scanners: GE DiscoveryCT750HD (algorithms: ASIR™ and VEO™); Siemens Somatom Definition AS+ (algorithm: SAFIRE™); Toshiba Aquilion64 (algorithm: AIDR3D™); and Philips Ingenuity iCT256 (algorithm: iDose4™). Images were reconstructed using FBP and the respective iterative algorithms on the four scanners. Use of iterative algorithms decreased image noise and increased CNR, relative to FBP. In the dose range of 1.3-1.5 mGy, noise reduction using iterative algorithms was in the range of 11%-51% on the GE DiscoveryCT750HD, 10%-52% on the Siemens Somatom Definition AS+, 49%-62% on the Toshiba Aquilion64, and 13%-44% on the Philips Ingenuity iCT256. The corresponding CNR increase was in the range of 11%-105% on GE, 11%-106% on Siemens, 85%-145% on Toshiba, and 13%-77% on Philips, respectively. Most algorithms did not affect the MTF, except for VEO™, which produced an increase in the limiting resolution of up to 30%. A shift in the peak of the NPS curve towards lower frequencies and a decrease in NPS amplitude were obtained with all iterative algorithms. VEO™ required long reconstruction times, while all other algorithms produced reconstructions in real time. Compared to FBP, iterative algorithms reduced image noise and increased CNR. The iterative algorithms available on different scanners achieved different levels of noise reduction and CNR increase, while spatial resolution improvements were obtained only with VEO™. This study is useful in that it provides a performance assessment of the iterative algorithms available from several mainstream CT manufacturers.
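
    The noise and CNR figures above come from ROI statistics of roughly this form. The exact ROI placement and CNR definition vary between studies; the numbers below are synthetic stand-ins, with the iterative background given half the noise of FBP to mimic a ~50% reduction.

```python
import numpy as np

# Illustrative ROI metrics of the kind used in such phantom evaluations:
#   noise = standard deviation in a uniform background ROI
#   CNR   = |mean_contrast - mean_background| / noise_background
def roi_metrics(contrast_roi, background_roi):
    noise = background_roi.std()
    cnr = abs(contrast_roi.mean() - background_roi.mean()) / noise
    return noise, cnr

rng = np.random.default_rng(1)
bg_fbp  = rng.normal(0.0, 20.0, 10_000)   # noisy FBP background ROI (made up)
bg_iter = rng.normal(0.0, 10.0, 10_000)   # iterative recon: ~50% less noise
rod     = rng.normal(50.0, 10.0, 10_000)  # contrast rod ROI

n_fbp, cnr_fbp = roi_metrics(rod, bg_fbp)
n_it,  cnr_it  = roi_metrics(rod, bg_iter)
print(f"noise reduction: {100 * (1 - n_it / n_fbp):.0f}%")
```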

  4. Comparison of atmospheric correction algorithms for the Coastal Zone Color Scanner

    NASA Technical Reports Server (NTRS)

    Tanis, F. J.; Jain, S. C.

    1984-01-01

    Before Nimbus-7 Coastal Zone Color Scanner (CZCS) data can be used to distinguish between coastal water types, methods must be developed for the removal of spatial variations in aerosol path radiance, which can dominate radiance measurements made by the satellite. An assessment is presently made of the ability of four different algorithms to quantitatively remove haze effects; each was adapted for the extraction of the required scene-dependent parameters during an initial pass through the data set. The CZCS correction algorithms considered are (1) the Gordon (1981, 1983) algorithm; (2) the Smith and Wilson (1981) iterative algorithm; (3) the pseudooptical depth method; and (4) the residual component algorithm.

  5. Explosive Detection in Aviation Applications Using CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martz, H E; Crawford, C R

    2011-02-15

    CT scanners are deployed world-wide to detect explosives in checked and carry-on baggage. Though very similar to the single- and dual-energy multi-slice CT scanners used today in medical imaging, some recently developed explosives detection scanners employ multiple sources and detector arrays to eliminate mechanical rotation of a gantry, photon-counting detectors for spectral imaging, and a limited number of views to reduce cost. For each bag scanned, the resulting reconstructed images are first processed by automated threat recognition algorithms to screen for explosives and other threats. Human operators review the images only when these automated algorithms report the presence of possible threats. The US Department of Homeland Security (DHS) has requirements for future scanners that include dealing with a larger number of threats, higher probability of detection, lower false alarm rates and lower operating costs. One tactic that DHS is pursuing to achieve these requirements is to augment the capabilities of the established security vendors with third-party algorithm developers. A third party in this context refers to academics and companies other than the established vendors. DHS is particularly interested in exploring the model that has been used very successfully by the medical imaging industry, in which university researchers develop algorithms that are eventually deployed in commercial medical imaging equipment. The purpose of this paper is to discuss opportunities for third parties to develop advanced reconstruction and threat detection algorithms.

  6. Multimodal Registration of White Matter Brain Data via Optimal Mass Transport.

    PubMed

    Rehman, Tauseefur; Haber, Eldad; Pohl, Kilian M; Haker, Steven; Halle, Mike; Talos, Florin; Wald, Lawrence L; Kikinis, Ron; Tannenbaum, Allen

    2008-09-01

    The elastic registration of medical scans from different acquisition sequences is becoming an important topic for many research labs that would like to continue the post-processing of medical scans acquired via the new generation of high-field-strength scanners. In this note, we present a parameter-free registration algorithm that is well suited for this scenario as it requires no tuning to specific acquisition sequences. The algorithm encompasses a new numerical scheme for computing elastic registration maps based on the minimizing flow approach to optimal mass transport. The approach utilizes all of the gray-scale data in both images, and the optimal mapping from image A to image B is the inverse of the optimal mapping from B to A. Further, no landmarks need to be specified, and the minimizer of the distance functional involved is unique. We apply the algorithm to register the white matter folds of two different scans and use the results to parcellate the cortex of the target image. To the best of our knowledge, this is the first time that the optimal mass transport function has been applied to register large 3D multimodal data sets.

  7. Multimodal Registration of White Matter Brain Data via Optimal Mass Transport

    PubMed Central

    Rehman, Tauseefur; Haber, Eldad; Pohl, Kilian M.; Haker, Steven; Halle, Mike; Talos, Florin; Wald, Lawrence L.; Kikinis, Ron; Tannenbaum, Allen

    2017-01-01

    The elastic registration of medical scans from different acquisition sequences is becoming an important topic for many research labs that would like to continue the post-processing of medical scans acquired via the new generation of high-field-strength scanners. In this note, we present a parameter-free registration algorithm that is well suited for this scenario as it requires no tuning to specific acquisition sequences. The algorithm encompasses a new numerical scheme for computing elastic registration maps based on the minimizing flow approach to optimal mass transport. The approach utilizes all of the gray-scale data in both images, and the optimal mapping from image A to image B is the inverse of the optimal mapping from B to A. Further, no landmarks need to be specified, and the minimizer of the distance functional involved is unique. We apply the algorithm to register the white matter folds of two different scans and use the results to parcellate the cortex of the target image. To the best of our knowledge, this is the first time that the optimal mass transport function has been applied to register large 3D multimodal data sets. PMID:28626844
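
    The inverse-symmetry property noted above (the optimal map from A to B is the inverse of the map from B to A) is easiest to see in one dimension, where the optimal mass transport map has the closed form T = F_B⁻¹ ∘ F_A for CDFs F_A and F_B. The paper solves the full 3-D problem via a minimizing flow; the sketch below is only this 1-D special case, with arbitrary Gaussian samples.

```python
import numpy as np

# In 1-D the optimal mass transport map between two distributions is the
# monotone map T = F_B^{-1} o F_A; the map from B back to A is its inverse.
def transport_map(samples_a, samples_b, x):
    qa = np.searchsorted(np.sort(samples_a), x) / len(samples_a)  # F_A(x)
    return np.quantile(samples_b, np.clip(qa, 0.0, 1.0))          # F_B^{-1}

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, 100_000)      # distribution A
b = rng.normal(5.0, 2.0, 100_000)      # distribution B

x = 0.0                                # median of A ...
y = transport_map(a, b, x)             # ... maps near the median of B (~5.0)
x_back = transport_map(b, a, y)        # inverse map returns near x
print(round(float(y), 2), round(float(x_back), 2))
```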

  8. Optimization of contrast-to-tissue ratio by adaptation of transmitted ternary signal in ultrasound pulse inversion imaging.

    PubMed

    Ménigot, Sébastien; Girault, Jean-Marc

    2013-01-01

    Ultrasound contrast imaging has provided more accurate medical diagnoses thanks to the development of innovative modalities such as pulse inversion imaging. However, this modality, which improves the contrast-to-tissue ratio (CTR), is not optimal, since the transmit frequency is chosen manually along with the probe. An optimal choice of this command is possible, but it requires precise information about the transducer and the medium, which can be difficult or even impossible to obtain experimentally. The optimization becomes more complex when the kind of generator is taken into account, since the generators of electrical signals in a conventional ultrasound scanner can be unipolar, bipolar, or tripolar. Our aim was to seek the ternary command which maximized the CTR. By combining a genetic algorithm and a closed loop, the system automatically proposed the optimal ternary command. In simulation, the gain compared with the usual ternary signal could reach about 3.9 dB. Another interesting finding was that, in contrast to what is generally accepted, the optimal command was not a fixed-frequency signal but had harmonic components.
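
    A genetic search over ternary excitation sequences can be sketched as below. In the closed-loop system described above, each candidate is scored by the measured CTR; here a made-up surrogate fitness (agreement with a hypothetical target code) stands in, and the population size, mutation rate, and code length are all illustrative choices.

```python
import random

# Minimal genetic algorithm over ternary (-1/0/+1) excitation sequences.
# The real system scores candidates by measured CTR in a closed loop; this
# surrogate fitness (matches to a hypothetical optimum) is a stand-in.
TARGET = [1, 0, -1, 1, -1, 0, 1, -1]
def fitness(seq):
    return sum(a == b for a, b in zip(seq, TARGET))

def evolve(pop_size=40, generations=60, p_mut=0.1, seed=3):
    rng = random.Random(seed)
    n = len(TARGET)
    pop = [[rng.choice((-1, 0, 1)) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, n)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [rng.choice((-1, 0, 1)) if rng.random() < p_mut else g
                     for g in child]             # per-gene mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # should reach or approach the maximum of 8
```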

  9. Detector Position Estimation for PET Scanners.

    PubMed

    Pierce, Larry; Miyaoka, Robert; Lewellen, Tom; Alessio, Adam; Kinahan, Paul

    2012-06-11

    Physical positioning of scintillation crystal detector blocks in Positron Emission Tomography (PET) scanners is not always exact. We test a proof of concept methodology for the determination of the six degrees of freedom for detector block positioning errors by utilizing a rotating point source over stepped axial intervals. To test our method, we created computer simulations of seven Micro Crystal Element Scanner (MiCES) PET systems with randomized positioning errors. The computer simulations show that our positioning algorithm can estimate the positions of the block detectors to an average of one-seventh of the crystal pitch tangentially, and one-third of the crystal pitch axially. Virtual acquisitions of a point source grid and a distributed phantom show that our algorithm improves both the quantitative and qualitative accuracy of the reconstructed objects. We believe this estimation algorithm is a practical and accurate method for determining the spatial positions of scintillation detector blocks.
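
    A standard way to recover a rigid six-degree-of-freedom transform from matched point pairs, of the kind the block-position estimate above requires, is the SVD-based Kabsch method. The sketch below is this generic method on synthetic noise-free points, not the authors' rotating-point-source estimator.

```python
import numpy as np

# Kabsch method: recover rotation R and translation t minimizing
# ||(R p_i + t) - q_i|| over matched point pairs, via SVD of the
# cross-covariance. A reflection guard keeps det(R) = +1.
def rigid_fit(P, Q):
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

rng = np.random.default_rng(4)
P = rng.normal(size=(20, 3))                  # nominal detector-block points
theta = 0.05                                  # small rotational misplacement
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.3, -0.1, 0.05])
Q = P @ R_true.T + t_true                     # "measured" displaced points

R, t = rigid_fit(P, Q)
print(np.allclose(R, R_true, atol=1e-8), np.allclose(t, t_true, atol=1e-8))
```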

  10. Development of Great Lakes algorithms for the Nimbus-G coastal zone color scanner

    NASA Technical Reports Server (NTRS)

    Tanis, F. J.; Lyzenga, D. R.

    1981-01-01

    A series of experiments designed to evaluate the application of the Nimbus-G satellite Coastal Zone Color Scanner (CZCS) was conducted in the Great Lakes. Absorption and scattering measurement data were reduced to obtain a preliminary optical model for the Great Lakes. Available optical models were used in turn to calculate subsurface reflectances for expected concentrations of chlorophyll-a pigment and suspended minerals. Multiple nonlinear regression techniques were used to derive CZCS water quality prediction equations from Great Lakes simulation data. An existing atmospheric model was combined with a water model to provide the necessary simulation data for evaluation of the preliminary CZCS algorithms. A CZCS scanner model was developed which accounts for image-distorting scanner and satellite motions. This model was used in turn to generate mapping polynomials that define the transformation from the original image to one configured in a polyconic projection. Four computer programs (FORTRAN IV) for image transformation are presented.

  11. Comparative Performance Analysis of Different Fingerprint Biometric Scanners for Patient Matching.

    PubMed

    Kasiiti, Noah; Wawira, Judy; Purkayastha, Saptarshi; Were, Martin C

    2017-01-01

    Unique patient identification within health services is an operational challenge in healthcare settings. Use of key identifiers, such as patient names, hospital identification numbers, national ID, and birth date, is often inadequate for ensuring unique patient identification. In addition, approximate string comparator algorithms, such as distance-based algorithms, have proven suboptimal for improving patient matching, especially in low-resource settings. Biometric approaches may improve unique patient identification. However, before implementing the technology in a given setting, such as health care, the candidate scanners should be rigorously tested to identify an optimal package for the implementation. This study aimed to investigate the effects of factors such as resolution, template size, and scan capture area on the matching performance of different fingerprint scanners for use within health care settings. The performance of eight different scanners was analyzed using the demo application distributed as part of the Neurotech Verifinger SDK 6.0.
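
    The distance-based approximate string comparators mentioned above are typified by the Levenshtein edit distance, sketched here as a plain dynamic program (one of several comparators used in record linkage, not necessarily the one evaluated in any particular study).

```python
# Levenshtein edit distance: the minimum number of single-character
# insertions, deletions, and substitutions turning one string into
# another -- a classic distance-based comparator for patient matching.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))              # distances from "" to b[:j]
    for i, ca in enumerate(a, 1):
        curr = [i]                              # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

print(levenshtein("jonathan", "johnathan"))  # → 1 (a common name variant)
```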

  12. Fluorescence laminar optical tomography for brain imaging: system implementation and performance evaluation.

    PubMed

    Azimipour, Mehdi; Sheikhzadeh, Mahya; Baumgartner, Ryan; Cullen, Patrick K; Helmstetter, Fred J; Chang, Woo-Jin; Pashaie, Ramin

    2017-01-01

    We present our effort in implementing a fluorescence laminar optical tomography scanner which is specifically designed for noninvasive three-dimensional imaging of fluorescence proteins in the brains of small rodents. A laser beam, after passing through a cylindrical lens, scans the brain tissue from the surface while the emission signal is captured by the epi-fluorescence optics and is recorded using an electron multiplication CCD sensor. Image reconstruction algorithms are developed based on Monte Carlo simulation to model light–tissue interaction and generate the sensitivity matrices. To solve the inverse problem, we used the iterative simultaneous algebraic reconstruction technique. The performance of the developed system was evaluated by imaging microfabricated silicon microchannels embedded inside a substrate with optical properties close to the brain as a tissue phantom and ultimately by scanning brain tissue in vivo. Details of the hardware design and reconstruction algorithms are discussed and several experimental results are presented. The developed system can specifically facilitate neuroscience experiments where fluorescence imaging and molecular genetic methods are used to study the dynamics of the brain circuitries.
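
    The iterative simultaneous algebraic reconstruction technique (SART) used above can be sketched in its generic dense-matrix form. This is only the textbook update on a toy system; the scanner's actual solver works with Monte Carlo-derived sensitivity matrices and sparse storage.

```python
import numpy as np

# Minimal SART update on a dense toy system:
#   x <- x + lam * (A^T ((b - A x) / row_sums)) / col_sums
# where row_sums and col_sums normalize by ray and pixel weights.
def sart(A, b, n_iter=500, lam=1.0):
    row = A.sum(axis=1)                # per-ray normalization
    col = A.sum(axis=0)                # per-pixel normalization
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += lam * (A.T @ ((b - A @ x) / row)) / col
    return x

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])        # toy sensitivity matrix
x_true = np.array([0.2, 0.7, 0.4])    # fluorophore distribution (made up)
b = A @ x_true                         # noise-free measurements
x = sart(A, b)
print(np.round(x, 3))
```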

  13. Computed Tomography Image Origin Identification Based on Original Sensor Pattern Noise and 3-D Image Reconstruction Algorithm Footprints.

    PubMed

    Duan, Yuping; Bouslimi, Dalel; Yang, Guanyu; Shu, Huazhong; Coatrieux, Gouenou

    2017-07-01

    In this paper, we focus on the "blind" identification of the computed tomography (CT) scanner that has produced a CT image. To do so, we propose a set of noise features derived from the image acquisition chain which can be used as a CT-scanner footprint. We propose two approaches. The first aims at identifying a CT scanner based on an original sensor pattern noise (OSPN) that is intrinsic to its X-ray detectors. The second identifies an acquisition system based on the way this noise is modified by its three-dimensional (3-D) image reconstruction algorithm. As these reconstruction algorithms are manufacturer dependent and kept secret, our features are used as input to train a support vector machine (SVM) based classifier to discriminate acquisition systems. Experiments conducted on images acquired with 15 different CT-scanner models from 4 distinct manufacturers demonstrate that our system identifies the origin of a CT image with a detection rate of at least 94% and that it achieves better performance than the sensor pattern noise (SPN) based strategy proposed for general public camera devices.

  14. A new scanning device in CT with dose reduction potential

    NASA Astrophysics Data System (ADS)

    Tischenko, Oleg; Xu, Yuan; Hoeschen, Christoph

    2006-03-01

    The amount of x-ray radiation currently applied in CT practice is not utilized optimally. A portion of the radiation traversing the patient is either not detected at all or is used ineffectively. The reason lies partly in the reconstruction algorithms and partly in the geometry of the CT scanners designed specifically for these algorithms. In fact, the reconstruction methods widely used in CT are intended to invert data that correspond to ideal straight lines. However, the collection of such data is often not accurate due to the movement of the source/detector system of the scanner in the time interval during which all the detectors are read. In this paper, a new design of the scanner geometry is proposed that is immune to the movement of the CT system and will collect all radiation traversing the patient. The proposed scanning design has the potential to reduce the patient dose by a factor of two. Furthermore, it can be used with existing reconstruction algorithms and is particularly suitable for OPED, a new robust reconstruction algorithm.

  15. Exact rebinning methods for three-dimensional PET.

    PubMed

    Liu, X; Defrise, M; Michel, C; Sibomana, M; Comtat, C; Kinahan, P; Townsend, D

    1999-08-01

    The high computational cost of data processing in volume PET imaging is still hindering the routine application of this successful technique, especially in the case of dynamic studies. This paper describes two new algorithms based on an exact rebinning equation, which can be applied to accelerate the processing of three-dimensional (3-D) PET data. The first algorithm, FOREPROJ, is a fast forward-projection algorithm that allows calculation of the 3-D attenuation correction factors (ACFs) directly from a two-dimensional (2-D) transmission scan, without first reconstructing the attenuation map and then performing a 3-D forward projection. The use of FOREPROJ speeds up the estimation of the 3-D ACFs by more than a factor of five. The second algorithm, FOREX, is a rebinning algorithm that is also more than five times faster than the standard reprojection algorithm (3DRP), and does not suffer from the image distortions generated by the even faster approximate Fourier rebinning (FORE) method at large axial apertures. However, FOREX is probably not required by most existing scanners, as their axial apertures are not large enough to show improvements over FORE with clinical data. Both algorithms have been implemented and applied to data simulated for a scanner with a large axial aperture (30 degrees), and also to data acquired with the ECAT HR and the ECAT HR+ scanners. Results demonstrate the excellent accuracy achieved by these algorithms and the important speedup when the sinogram sizes are powers of two.

  16. MFP scanner motion characterization using self-printed target

    NASA Astrophysics Data System (ADS)

    Kim, Minwoong; Bauer, Peter; Wagner, Jerry K.; Allebach, Jan P.

    2015-01-01

    Multifunctional printers (MFPs) are products that combine the functions of a printer, scanner, and copier. Our goal is to help customers easily diagnose scanner or print quality issues with their products by developing an automated diagnostic system embedded in the product. We specifically focus on the characterization of scanner motions, which may be defective due to irregular movements of the scan-head. The novel design of our test page and our two-stage diagnostic algorithm are described in this paper. The most challenging issue is to evaluate the scanner performance properly when both the printer and scanner units contribute to the motion errors. In the first stage, called the uncorrected-print-error stage, aperiodic and periodic motion behaviors are characterized in both the spatial and frequency domains. Since it is not clear how much of the error is contributed by each unit, the scanned input is statistically analyzed in the second stage, called the corrected-print-error stage. Finally, the described diagnostic algorithms output the estimated scan error and print error separately, as RMS values of the displacement of the scan and print lines, respectively, from their nominal positions in the scanner or printer motion direction. We validate our test page design and approaches against ground truth obtained from a high-precision, chrome-on-glass reticle manufactured using semiconductor chip fabrication technologies.

  17. NOAA-NASA Coastal Zone Color Scanner reanalysis effort.

    PubMed

    Gregg, Watson W; Conkright, Margarita E; O'Reilly, John E; Patt, Frederick S; Wang, Menghua H; Yoder, James A; Casey, Nancy W

    2002-03-20

    Satellite observations of global ocean chlorophyll span more than two decades. However, incompatibilities between processing algorithms prevent us from quantifying natural variability. We applied a comprehensive reanalysis to the Coastal Zone Color Scanner (CZCS) archive, called the National Oceanic and Atmospheric Administration and National Aeronautics and Space Administration (NOAA-NASA) CZCS reanalysis (NCR) effort. NCR consisted of (1) algorithm improvement (AI), where CZCS processing algorithms were improved with modernized atmospheric correction and bio-optical algorithms and (2) blending where in situ data were incorporated into the CZCS AI to minimize residual errors. Global spatial and seasonal patterns of NCR chlorophyll indicated remarkable correspondence with modern sensors, suggesting compatibility. The NCR permits quantitative analyses of interannual and interdecadal trends in global ocean chlorophyll.

  18. What Scanner products are available?

    Atmospheric Science Data Center

    2014-12-08

    ... not provide the full diurnal coverage, which can affect the quality of the shortwave and longwave estimate. ERBS covers all 24-hour local ... algorithm. Because of these differences, it is best to work with these two data sets separately. ERBE/ERBS scanner operated ...

  19. Phase 2 development of Great Lakes algorithms for Nimbus-7 coastal zone color scanner

    NASA Technical Reports Server (NTRS)

    Tanis, Fred J.

    1984-01-01

    A series of experiments designed to evaluate the application of the NIMBUS-7 Coastal Zone Color Scanner (CZCS) has been conducted in the Great Lakes. Atmospheric and water optical models were used to relate surface and subsurface measurements to satellite-measured radiances. Absorption and scattering measurements were reduced to obtain a preliminary optical model for the Great Lakes. Algorithms were developed for geometric correction, correction for Rayleigh and aerosol path radiance, and prediction of chlorophyll-a pigment and suspended mineral concentrations. The atmospheric algorithm developed compared favorably with existing algorithms and was the only algorithm found to adequately predict the radiance variations in the 670 nm band. The atmospheric correction algorithm was designed to extract the needed algorithm parameters from the CZCS radiance values. The Gordon/NOAA ocean algorithms could not be demonstrated to work for Great Lakes waters. Predicted values of chlorophyll-a concentration compared favorably with expected and measured data for several areas of the Great Lakes.

  20. Magnetotelluric inversion via reverse time migration algorithm of seismic data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ha, Taeyoung; Shin, Changsoo

    2007-07-01

    We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration, introduced in the mid-1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of the numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equation, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity, as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.

  1. Sorting signed permutations by inversions in O(nlogn) time.

    PubMed

    Swenson, Krister M; Rajan, Vaibhav; Lin, Yu; Moret, Bernard M E

    2010-03-01

    The study of genomic inversions (or reversals) has been a mainstay of computational genomics for nearly 20 years. After the initial breakthrough of Hannenhalli and Pevzner, who gave the first polynomial-time algorithm for sorting signed permutations by inversions, improved algorithms have been designed, culminating with an optimal linear-time algorithm for computing the inversion distance and a subquadratic algorithm for providing a shortest sequence of inversions--also known as sorting by inversions. Remaining open was the question of whether sorting by inversions could be done in O(nlogn) time. In this article, we present a qualified answer to this question, by providing two new sorting algorithms, a simple and fast randomized algorithm and a deterministic refinement. The deterministic algorithm runs in time O(nlogn + kn), where k is a data-dependent parameter. We provide the results of extensive experiments showing that both the average and the standard deviation for k are small constants, independent of the size of the permutation. We conclude (but do not prove) that almost all signed permutations can be sorted by inversions in O(nlogn) time.
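
    The reversal operation and the inversion distance discussed above can be made concrete with a brute-force search. The breadth-first sketch below explores the exponential state space and is only feasible for tiny permutations; the algorithms in the article compute the same distance in linear or O(n log n) time.

```python
from collections import deque
from itertools import combinations

# Brute-force BFS for the signed inversion (reversal) distance: a
# reversal flips a contiguous segment and negates its signs. Feasible
# only for tiny n; shown purely to make the operation concrete.
def reversal_distance(perm):
    target = tuple(range(1, len(perm) + 1))   # sorted, all-positive identity
    seen = {tuple(perm)}
    queue = deque([(tuple(perm), 0)])
    while queue:
        p, d = queue.popleft()
        if p == target:
            return d
        for i, j in combinations(range(len(p) + 1), 2):
            q = p[:i] + tuple(-x for x in reversed(p[i:j])) + p[j:]
            if q not in seen:
                seen.add(q)
                queue.append((q, d + 1))
    return None

print(reversal_distance([1, 2, 3]),    # already sorted: 0
      reversal_distance([1, -2, 3]),   # one single-element reversal: 1
      reversal_distance([3, 1, 2]))
```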

  2. Evaluation of scattered radiation emitted from X-ray security scanners on occupational dose to airport personnel

    NASA Astrophysics Data System (ADS)

    Dalah, Entesar; Fakhry, Angham; Mukhtar, Asma; Al Salti, Farah; Bader, May; Khouri, Sara; Al-Zahmi, Reem

    2017-06-01

    Based on security issues and regulations, airports are provided with luggage and cargo scanners. These scanners utilize ionizing radiation that, in principle, presents health risks to humans. The study aims to investigate the amount of backscatter produced by passenger luggage and cargo toward airport personnel who are located at different distances from the scanners. To approach our investigation, a Thermo Electron Radeye-G probe was used to quantify the backscattered radiation, measured in terms of dose rate emitted from airport scanners. Measurements were taken at the entrance and exit positions of the X-ray tunnel at three different distances (0, 50, and 100 cm) for two different scanners; both scanners include shielding curtains that reduce scattered radiation. Correlation was demonstrated using the Pearson coefficient test. Measurements confirmed an inverse relationship between dose rate and distance. Estimated occupational accumulative doses of 0.88 mSv/y and 2.04 mSv/y were obtained for personnel working in inspection of carry-on baggage and cargo, respectively. Findings confirm that the projected doses to security and engineering staff are well within dose limits.

  3. Reprint of 'Evaluation of Scattered Radiation Emitted From X-ray Security Scanners on Occupational Dose to Airport Personnel'

    NASA Astrophysics Data System (ADS)

    Dalah, Entesar; Fakhry, Angham; Mukhtar, Asma; Al Salti, Farah; Bader, May; Khouri, Sara; Al-Zahmi, Reem

    2017-11-01

    Based on security issues and regulations, airports are provided with luggage and cargo scanners. These scanners utilize ionizing radiation that, in principle, presents health risks to humans. The study aims to investigate the amount of backscatter produced by passenger luggage and cargo toward airport personnel who are located at different distances from the scanners. To approach our investigation, a Thermo Electron Radeye-G probe was used to quantify the backscattered radiation, measured in terms of dose rate emitted from airport scanners. Measurements were taken at the entrance and exit positions of the X-ray tunnel at three different distances (0, 50, and 100 cm) for two different scanners; both scanners include shielding curtains that reduce scattered radiation. Correlation was demonstrated using the Pearson coefficient test. Measurements confirmed an inverse relationship between dose rate and distance. Estimated occupational accumulative doses of 0.88 mSv/y and 2.04 mSv/y were obtained for personnel working in inspection of carry-on baggage and cargo, respectively. Findings confirm that the projected doses to security and engineering staff are well within dose limits.
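
    An annual occupational estimate of this kind is the product of a time-averaged dose rate and the hours of occupancy near the scanner. The rate and the 2000 h/y occupancy below are illustrative assumptions, not the study's actual measurement conditions; they merely show the scale of the arithmetic.

```python
# Converting a measured dose rate to an annual occupational dose estimate.
# Both numbers are assumptions for illustration only.
dose_rate_uSv_h = 0.44      # hypothetical time-averaged rate at the operator position
hours_per_year = 2000       # ~full-time occupancy near the scanner
annual_mSv = dose_rate_uSv_h * hours_per_year / 1000.0
print(f"{annual_mSv:.2f} mSv/y")  # same order as the carry-on figure above
```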

  4. An algorithm for automated ROI definition in water or epoxy-filled NEMA NU-2 image quality phantoms.

    PubMed

    Pierce, Larry A; Byrd, Darrin W; Elston, Brian F; Karp, Joel S; Sunderland, John J; Kinahan, Paul E

    2016-01-08

    Drawing regions of interest (ROIs) in positron emission tomography/computed tomography (PET/CT) scans of the National Electrical Manufacturers Association (NEMA) NU-2 Image Quality (IQ) phantom is a time-consuming process that allows for interuser variability in the measurements. In order to reduce operator effort and allow batch processing of IQ phantom images, we propose a fast, robust, automated algorithm for performing IQ phantom sphere localization and analysis. The algorithm is easily altered to accommodate different configurations of the IQ phantom. The proposed algorithm uses information from both the PET and CT image volumes in order to overcome the challenges of detecting the smallest spheres in the PET volume. This algorithm has been released as an open-source plug-in to the Osirix medical image viewing software package. We test the algorithm under various noise conditions, positions within the scanner, air bubbles in the phantom spheres, and scanner misalignment conditions. The proposed algorithm shows run-times between 3 and 4 min and has proven to be robust under all tested conditions, with expected sphere localization deviations of less than 0.2 mm and variations of PET ROI mean and maximum values on the order of 0.5% and 2%, respectively, over multiple PET acquisitions. We conclude that the proposed algorithm is stable when challenged with a variety of physical and imaging anomalies, and that the algorithm can be a valuable tool for those who use the NEMA NU-2 IQ phantom for PET/CT scanner acceptance testing and QA/QC.

  5. MSS D Multispectral Scanner System

    NASA Technical Reports Server (NTRS)

    Lauletta, A. M.; Johnson, R. L.; Brinkman, K. L. (Principal Investigator)

    1982-01-01

    The development and acceptance testing of the 4-band Multispectral Scanners to be flown on the LANDSAT D and LANDSAT D' Earth resources satellites are summarized. Emphasis is placed on the acceptance test phase of the program. Test history and acceptance test algorithms are discussed. Trend data for all the key performance parameters are included and discussed separately for each of the two multispectral scanner instruments. Anomalies encountered and their resolutions are included.

  6. Impact of event positioning algorithm on performance of a whole-body PET scanner using one-to-one coupled detectors

    NASA Astrophysics Data System (ADS)

    Surti, S.; Karp, J. S.

    2018-03-01

    The advent of silicon photomultipliers (SiPMs) has introduced the possibility of increased detector performance in commercial whole-body PET scanners. The primary advantage of these photodetectors is the ability to couple a single SiPM channel directly to a single pixel of PET scintillator that is typically 4 mm wide (one-to-one coupled detector design). We performed simulation studies to evaluate the impact of three different event positioning algorithms in such detectors: (i) weighted energy centroid positioning (Anger logic), (ii) identifying the crystal with the maximum energy deposition (1st max crystal), and (iii) identifying the crystal with the second highest energy deposition (2nd max crystal). Detector simulations performed with LSO crystals indicate reduced positioning errors when using the 2nd max crystal positioning algorithm. These studies are performed over a range of crystal cross-sections varying from 1  ×  1 mm2 to 4  ×  4 mm2 as well as crystal thicknesses from 1 cm to 3 cm. System simulations were performed for a whole-body PET scanner (85 cm ring diameter) with a long axial FOV (70 cm long) and show an improvement in reconstructed spatial resolution for a point source when using the 2nd max crystal positioning algorithm. Finally, we observe a 30-40% gain in contrast recovery coefficient values for 1 and 0.5 cm diameter spheres when using the 2nd max crystal positioning algorithm compared to the 1st max crystal positioning algorithm. These results show that there is an advantage to implementing the 2nd max crystal positioning algorithm in a new generation of PET scanners using a one-to-one coupled detector design with lutetium-based crystals, including LSO, LYSO, or scintillators of similar density and effective atomic number.
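The three positioning strategies compared above can be illustrated on a toy event; the crystal indices and energy deposits below are hypothetical. In this example the photon's first interaction (a Compton scatter) deposits less energy than the subsequent absorption, so the 2nd max crystal recovers the entry position:

```python
def anger_centroid(deposits):
    """Energy-weighted centroid of crystal indices (Anger logic)."""
    total = sum(e for _, e in deposits)
    return sum(i * e for i, e in deposits) / total

def max_crystal(deposits, rank=1):
    """Crystal with the rank-th highest deposit (rank=1: 1st max, 2: 2nd max)."""
    ordered = sorted(deposits, key=lambda d: d[1], reverse=True)
    return ordered[rank - 1][0]

# (crystal index, deposited energy in keV): a 511 keV photon Compton-scatters
# in crystal 7 (small deposit), then is photoabsorbed in crystal 3.
event = [(7, 140.0), (3, 371.0)]

print(max_crystal(event, rank=1))  # 3: the absorption site
print(max_crystal(event, rank=2))  # 7: the first-interaction site
print(anger_centroid(event))       # centroid pulled between the two crystals
```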

  7. Filtered-backprojection reconstruction for a cone-beam computed tomography scanner with independent source and detector rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rit, Simon, E-mail: simon.rit@creatis.insa-lyon.fr; Clackdoyle, Rolf; Keuschnigg, Peter

    Purpose: A new cone-beam CT scanner for image-guided radiotherapy (IGRT) can independently rotate the source and the detector along circular trajectories. Existing reconstruction algorithms are not suitable for this scanning geometry. The authors propose and evaluate a three-dimensional (3D) filtered-backprojection reconstruction for this situation. Methods: The source and the detector trajectories are tuned to image a field-of-view (FOV) that is offset with respect to the center-of-rotation. The new reconstruction formula is derived from the Feldkamp algorithm and results in a similar three-step algorithm: projection weighting, ramp filtering, and weighted backprojection. Simulations of a Shepp-Logan digital phantom were used to evaluate the new algorithm with a 10 cm-offset FOV. A real cone-beam CT image with an 8.5 cm-offset FOV was also obtained from projections of an anthropomorphic head phantom. Results: The quality of the cone-beam CT images reconstructed using the new algorithm was similar to those using the Feldkamp algorithm which is used in conventional cone-beam CT. The real image of the head phantom exhibited comparable image quality to that of existing systems. Conclusions: The authors have proposed a 3D filtered-backprojection reconstruction for scanners with independent source and detector rotations that is practical and effective. This algorithm forms the basis for exploiting the scanner’s unique capabilities in IGRT protocols.
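The three-step structure named above (projection weighting, ramp filtering, weighted backprojection) is shared by all Feldkamp-type algorithms; the sketch below shows only the ramp-filtering step, implemented in frequency space with NumPy. This is an illustrative kernel, not the authors' offset-FOV formula:

```python
import numpy as np

def ramp_filter(row, spacing=1.0):
    """Apply the |f| ramp filter to one (pre-weighted) detector row."""
    n = len(row)
    freqs = np.fft.fftfreq(n, d=spacing)   # frequency axis of the row
    return np.real(np.fft.ifft(np.fft.fft(row) * np.abs(freqs)))

# A flat row carries only the zero-frequency component, which the ramp
# filter suppresses entirely; this is how FBP sharpens edges before the
# weighted backprojection step.
flat = np.ones(64)
print(bool(np.allclose(ramp_filter(flat), 0.0)))  # True
```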

  8. Parana Basin Structure from Multi-Objective Inversion of Surface Wave and Receiver Function by Competent Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    An, M.; Assumpcao, M.

    2003-12-01

    The joint inversion of receiver functions and surface waves is an effective way to diminish the influence of the strong tradeoffs among parameters and of the different model-parameter sensitivities of the two datasets in their respective inversions, but the inversion problem becomes more complex. Multi-objective problems can be much more complicated than single-objective inversions in model selection and optimization. If the objectives are conflicting, models can be ordered only partially; in this case, Pareto-optimal preference should be used to select solutions. On the other hand, an inversion that retrieves only a few optimal solutions cannot deal properly with the strong tradeoffs between parameters, the uncertainties in the observations, the geophysical complexities, and even the incompetency of the inversion technique. The effective way is to retrieve the geophysical information statistically from many acceptable solutions, which requires more competent global algorithms. Recently proposed competent genetic algorithms are far superior to the conventional genetic algorithm and can solve hard problems quickly, reliably, and accurately. In this work we used one such competent genetic algorithm, the Bayesian Optimization Algorithm, as the main inverse procedure. This algorithm uses Bayesian networks to draw out inherited information and can use Pareto-optimal preference in the inversion. With this algorithm, the lithospheric structure of the Paraná basin is inverted to fit both the observed inter-station surface wave dispersion and the receiver functions.

  9. Incorporation of a two metre long PET scanner in STIR

    NASA Astrophysics Data System (ADS)

    Tsoumpas, C.; Brain, C.; Dyke, T.; Gold, D.

    2015-09-01

    The Explorer project aims to investigate the potential benefits of a total-body, 2-metre-long PET scanner. The following investigation incorporates this scanner into the STIR library and demonstrates the capabilities and weaknesses of the existing reconstruction (FBP and OSEM) and single scatter simulation algorithms. It was found that sensible images are reconstructed, but at the expense of high memory and processing time demands: FBP requires 4 hours on a single core, and OSEM requires 2 hours per iteration when run in parallel on 15 cores of a high-performance computer. The single scatter simulation algorithm shows that on a short scale, up to a fifth of the scanner length, the assumption that the scatter between direct rings is similar to the scatter between the oblique rings is approximately valid. However, for more extreme cases this assumption is no longer valid, which illustrates that consideration of the oblique rings within the single scatter simulation will be necessary if this scatter correction is the method of choice.

  10. Magnetic Resonance Elastography: Measurement of Hepatic Stiffness Using Different Direct Inverse Problem Reconstruction Methods in Healthy Volunteers and Patients with Liver Disease.

    PubMed

    Saito, Shigeyoshi; Tanaka, Keiko; Hashido, Takashi

    2016-02-01

    The purpose of this study was to compare the mean hepatic stiffness values obtained by applying two different direct inverse problem reconstruction methods to magnetic resonance elastography (MRE). Thirteen healthy men (23.2±2.1 years) and 16 patients with liver disease (78.9±4.3 years; 12 men and 4 women) were examined for this study using a 3.0-T MRI scanner. The healthy volunteers underwent three consecutive scans: two with a 70-Hz waveform and one with a 50-Hz waveform. The patients with liver disease, on the other hand, were scanned using the 70-Hz waveform only. The MRE data for each subject were processed twice for calculation of the mean hepatic stiffness (Pa), once using multiscale direct inversion (MSDI) and once using multimodel direct inversion (MMDI). There were no significant differences in the mean stiffness values between the two 70-Hz scans or between the different waveforms. However, the mean stiffness values obtained with the MSDI technique (with mask: 2895.3±255.8 Pa; without mask: 2940.6±265.4 Pa) were larger than those obtained with the MMDI technique (with mask: 2614.0±242.1 Pa; without mask: 2699.2±273.5 Pa). The reproducibility of measurements obtained using the two techniques was high for both the healthy volunteers [intraclass correlation coefficients (ICCs): 0.840-0.953] and the patients (ICCs: 0.830-0.995). These results suggest that knowledge of the characteristics of different direct inversion algorithms is important for longitudinal liver stiffness assessments, such as comparisons across scanners and evaluation of the response to fibrosis therapy.

  11. Parallelized Bayesian inversion for three-dimensional dental X-ray imaging.

    PubMed

    Kolehmainen, Ville; Vanne, Antti; Siltanen, Samuli; Järvenpää, Seppo; Kaipio, Jari P; Lassas, Matti; Kalke, Martti

    2006-02-01

    Diagnostic and operational tasks based on dental radiology often require three-dimensional (3-D) information that is not available in a single X-ray projection image. Comprehensive 3-D information about tissues can be obtained by computerized tomography (CT) imaging. However, in dental imaging a conventional CT scan may not be available or practical because of the high radiation dose, low resolution, or the cost of the CT scanner equipment. In this paper, we consider a novel type of 3-D imaging modality for dental radiology. We consider situations in which projection images of the teeth are taken from a few sparsely distributed projection directions using the dentist's regular (digital) X-ray equipment, and the 3-D X-ray attenuation function is reconstructed. A complication in these experiments is that the reconstruction of the 3-D structure based on a few projection images is an ill-posed inverse problem. Bayesian inversion is a well-suited framework for reconstruction from such incomplete data. In Bayesian inversion, the ill-posed reconstruction problem is formulated in a well-posed probabilistic form in which a priori information is used to compensate for the incompleteness of the projection data. In this paper we propose a Bayesian method for 3-D reconstruction in dental radiology, based in part on Kolehmainen et al. 2003. The prior model for dental structures consists of a weighted l1 prior and a total variation (TV) prior, together with a positivity prior. The inverse problem is stated as finding the maximum a posteriori (MAP) estimate. To make the 3-D reconstruction computationally feasible, a parallelized version of an optimization algorithm is implemented for a Beowulf cluster computer. The method is tested with projection data from dental specimens and patient data. Tomosynthetic reconstructions are given as a reference for the proposed method.
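The MAP formulation described above (a least-squares data term plus l1, TV, and positivity priors) can be sketched on a toy 1D problem. The forward matrix, data, regularization weights, and the simple projected-gradient solver below are all hypothetical stand-ins for illustration, not the paper's parallelized algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
truth = np.zeros(n)
truth[10:20] = 1.0                        # piecewise-constant "attenuation"
A = rng.normal(size=(12, n)) / n ** 0.5   # few projections: underdetermined
y = A @ truth                             # noiseless projection data

alpha, beta, eps = 0.01, 0.01, 1e-3       # l1 weight, TV weight, TV smoothing

def grad(x):
    """Gradient of 0.5*||Ax - y||^2 + alpha*||x||_1 + beta*TV_eps(x)."""
    data = A.T @ (A @ x - y)
    l1 = alpha * np.sign(x)
    d = np.diff(x)
    edge = d / np.sqrt(d ** 2 + eps)      # derivative of smoothed |d|
    tv = beta * (np.concatenate(([0.0], edge)) - np.concatenate((edge, [0.0])))
    return data + l1 + tv

x = np.zeros(n)
for _ in range(2000):                     # projected gradient descent
    x = np.maximum(x - 0.05 * grad(x), 0.0)   # positivity by projection

print(bool(x.min() >= 0.0))               # True: positivity prior enforced
```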

  12. Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Castano, Diego J.

    1987-01-01

    Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.

  13. Fourier rebinning and consistency equations for time-of-flight PET planograms

    PubMed Central

    Li, Yusheng; Defrise, Michel; Matej, Samuel; Metzler, Scott D

    2016-01-01

    Due to their unique geometry, dual-panel PET scanners have many advantages in dedicated breast imaging and on-board imaging applications, since the compact scanners can be combined with other imaging and treatment modalities. The major challenges of dual-panel PET imaging are the limited-angle problem and data truncation, which can cause artifacts due to incomplete data sampling. The time-of-flight (TOF) information can be a promising solution to reduce these artifacts. The TOF planogram is the native data format for dual-panel TOF PET scanners, and the non-TOF planogram is the 3D extension of the linogram. The TOF planogram is five-dimensional while the objects are three-dimensional, so there are two degrees of redundancy. In this paper, we derive consistency equations and Fourier-based rebinning algorithms to provide a complete understanding of the rich structure of the fully 3D TOF planograms. We first derive two consistency equations and John's equation for 3D TOF planograms. By taking the Fourier transforms, we obtain two Fourier consistency equations and the Fourier-John equation, which are the duals of the consistency equations and John's equation, respectively. We then solve the Fourier consistency equations and Fourier-John equation using the method of characteristics. The two degrees of entangled redundancy of the 3D TOF data can be explicitly elicited and exploited by the solutions along the characteristic curves. As special cases of the general solutions, we obtain Fourier rebinning and consistency equations (FORCEs), and thus a complete scheme to convert among different types of PET planograms: 3D TOF, 3D non-TOF, 2D TOF and 2D non-TOF planograms. The FORCEs can be used as Fourier-based rebinning algorithms for TOF-PET data reduction, inverse rebinnings for designing fast projectors, or consistency conditions for estimating missing data. As a byproduct, we show the two consistency equations are necessary and sufficient for 3D TOF planograms. Finally, we give numerical examples of a fast 2D TOF planogram projector and Fourier-based rebinning for 2D TOF planograms using the FORCEs to show the efficacy of the Fourier-based solutions. PMID:28255191

  15. FPGA-Based Front-End Electronics for Positron Emission Tomography

    PubMed Central

    Haselman, Michael; DeWitt, Don; McDougald, Wendy; Lewellen, Thomas K.; Miyaoka, Robert; Hauck, Scott

    2010-01-01

    Modern Field Programmable Gate Arrays (FPGAs) are capable of performing complex discrete signal processing algorithms with clock rates above 100 MHz. This, combined with FPGAs' low expense, ease of use, and selected dedicated hardware, makes them an ideal technology for a data acquisition system for positron emission tomography (PET) scanners. Our laboratory is producing a high-resolution, small-animal PET scanner that utilizes FPGAs as the core of the front-end electronics. For this next generation scanner, functions that are typically performed in dedicated circuits, or offline, are being migrated to the FPGA. This will not only simplify the electronics, but the features of modern FPGAs can be utilized to add significant signal processing power to produce higher resolution images. In this paper two such processes, sub-clock rate pulse timing and event localization, are discussed in detail. We show that timing performed in the FPGA can achieve a resolution that is suitable for small-animal scanners, and will outperform the analog version given a low enough sampling period for the ADC. We also show that the position of events in the scanner can be determined in real time using a statistical positioning based algorithm. PMID:21961085

  16. Geometric analysis and restitution of digital multispectral scanner data arrays

    NASA Technical Reports Server (NTRS)

    Baker, J. R.; Mikhail, E. M.

    1975-01-01

    An investigation was conducted to define causes of geometric defects within digital multispectral scanner (MSS) data arrays, to analyze the resulting geometric errors, and to investigate restitution methods to correct or reduce these errors. Geometric transformation relationships for scanned data, from which collinearity equations may be derived, served as the basis of parametric methods of analysis and restitution of MSS digital data arrays. The linearization of these collinearity equations is presented. Algorithms considered for use in analysis and restitution included the MSS collinearity equations, piecewise polynomials based on linearized collinearity equations, and nonparametric algorithms. A proposed system for geometric analysis and restitution of MSS digital data arrays was used to evaluate these algorithms, utilizing actual MSS data arrays. It was shown that collinearity equations and nonparametric algorithms both yield acceptable results, but nonparametric algorithms possess definite advantages in computational efficiency. Piecewise polynomials were found to yield inferior results.

  17. A Hybrid Soft-computing Method for Image Analysis of Digital Plantar Scanners.

    PubMed

    Razjouyan, Javad; Khayat, Omid; Siahi, Mehdi; Mansouri, Ali Alizadeh

    2013-01-01

    Digital foot scanners have been developed in recent years to provide anthropometrists with digital images of the insole, together with pressure distribution and anthropometric information. In this paper, a hybrid algorithm combining a gray level spatial correlation (GLSC) histogram and Shanbag entropy is presented for the analysis of scanned foot images. An evolutionary algorithm is also employed to find the optimum parameters of the GLSC and the transform function of the membership values. The resulting binary (thresholded) images are then subjected to anthropometric measurements, taking into account the scale factor from pixel size to metric scale. The proposed method is finally applied to plantar images obtained by scanning the feet of randomly selected subjects with a foot scanner system, the experimental setup described in the paper. Running computation time and the effects of the GLSC parameters are investigated in the simulation results.
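Entropy-based histogram thresholding of the kind used above can be illustrated with the classic Kapur maximum-entropy criterion; this is a simplified stand-in for the paper's GLSC/Shanbag formulation (which additionally uses spatial correlation), and the toy bimodal histogram is hypothetical:

```python
import math

def kapur_threshold(hist):
    """Return the grey level t maximizing H(background) + H(foreground)."""
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = None, -math.inf
    for t in range(1, len(hist)):
        w0 = sum(p[:t])          # background probability mass
        w1 = 1.0 - w0            # foreground probability mass
        if w0 == 0 or w1 == 0:
            continue
        h0 = -sum(q / w0 * math.log(q / w0) for q in p[:t] if q > 0)
        h1 = -sum(q / w1 * math.log(q / w1) for q in p[t:] if q > 0)
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

# Bimodal toy histogram: dark background (levels 0-3), bright foot (12-15).
hist = [80] * 4 + [0] * 8 + [60] * 4
print(kapur_threshold(hist))     # 4: threshold lands in the gap between modes
```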

  18. CERES ERBE-like Instantaneous TOA Estimates (ES-8) in HDF (CER_ES8_Terra-FM1_Edition2)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The ES-8 archival data product contains a 24-hour, single-satellite, instantaneous view of scanner fluxes at the top-of-atmosphere (TOA) reduced from spacecraft altitude unfiltered radiances using Earth Radiation Budget Experiment (ERBE) scanner Inversion algorithms and the ERBE shortwave (SW) and longwave (LW) Angular Distribution Models (ADMs). The ES-8 also includes the total (TOT), SW, LW, and window (WN) channel radiometric data; SW, LW, and WN unfiltered radiance values; and the ERBE scene identification for each measurement. These data are organized according to the CERES 3.3-second scan into 6.6-second records. As long as there is one valid scanner measurement within a record, the ES-8 record will be generated. The following CERES ES8 data sets are currently available: CER_ES8_TRMM-PFM_Edition1 CER_ES8_TRMM-PFM_Edition2 CER_ES8_TRMM-PFM_Transient-Ops2 CER_ES8_Terra-FM1_Edition1 CER_ES8_Terra-FM2_Edition1 CER_ES8_Terra-FM1_Edition2 CER_ES8_Terra-FM2_Edition2 CER_ES8_Aqua-FM3_Edition1 CER_ES8_Aqua-FM4_Edition1 CER_ES8_Aqua-FM3_Edition2 CER_ES8_Aqua-FM4_Edition2 CER_ES8_Aqua-FM3_Edition1-CV CER_ES8_Aqua-FM4_Edition1-CV CER_ES8_Terra-FM1_Edition1-CV CER_ES8_Terra-FM1_Edition1-CV. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1997-12-27; Stop_Date=2006-01-01] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Temporal_Resolution=1 day; Temporal_Resolution_Range=Daily - < Weekly].

  19. CERES ERBE-like Instantaneous TOA Estimates (ES-8) in HDF (CER_ES8_Terra-FM1_Edition1-CV)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The ES-8 archival data product contains a 24-hour, single-satellite, instantaneous view of scanner fluxes at the top-of-atmosphere (TOA) reduced from spacecraft altitude unfiltered radiances using Earth Radiation Budget Experiment (ERBE) scanner Inversion algorithms and the ERBE shortwave (SW) and longwave (LW) Angular Distribution Models (ADMs). The ES-8 also includes the total (TOT), SW, LW, and window (WN) channel radiometric data; SW, LW, and WN unfiltered radiance values; and the ERBE scene identification for each measurement. These data are organized according to the CERES 3.3-second scan into 6.6-second records. As long as there is one valid scanner measurement within a record, the ES-8 record will be generated. The following CERES ES8 data sets are currently available: CER_ES8_TRMM-PFM_Edition1 CER_ES8_TRMM-PFM_Edition2 CER_ES8_TRMM-PFM_Transient-Ops2 CER_ES8_Terra-FM1_Edition1 CER_ES8_Terra-FM2_Edition1 CER_ES8_Terra-FM1_Edition2 CER_ES8_Terra-FM2_Edition2 CER_ES8_Aqua-FM3_Edition1 CER_ES8_Aqua-FM4_Edition1 CER_ES8_Aqua-FM3_Edition2 CER_ES8_Aqua-FM4_Edition2 CER_ES8_Aqua-FM3_Edition1-CV CER_ES8_Aqua-FM4_Edition1-CV CER_ES8_Terra-FM1_Edition1-CV CER_ES8_Terra-FM1_Edition1-CV. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1997-12-27; Stop_Date=2006-09-30] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Temporal_Resolution=1 day; Temporal_Resolution_Range=Daily - < Weekly].

  20. CERES ERBE-like Instantaneous TOA Estimates (ES-8) in HDF (CER_ES8_Aqua-FM4_Edition1-CV)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The ES-8 archival data product contains a 24-hour, single-satellite, instantaneous view of scanner fluxes at the top-of-atmosphere (TOA) reduced from spacecraft altitude unfiltered radiances using Earth Radiation Budget Experiment (ERBE) scanner Inversion algorithms and the ERBE shortwave (SW) and longwave (LW) Angular Distribution Models (ADMs). The ES-8 also includes the total (TOT), SW, LW, and window (WN) channel radiometric data; SW, LW, and WN unfiltered radiance values; and the ERBE scene identification for each measurement. These data are organized according to the CERES 3.3-second scan into 6.6-second records. As long as there is one valid scanner measurement within a record, the ES-8 record will be generated. The following CERES ES8 data sets are currently available: CER_ES8_TRMM-PFM_Edition1 CER_ES8_TRMM-PFM_Edition2 CER_ES8_TRMM-PFM_Transient-Ops2 CER_ES8_Terra-FM1_Edition1 CER_ES8_Terra-FM2_Edition1 CER_ES8_Terra-FM1_Edition2 CER_ES8_Terra-FM2_Edition2 CER_ES8_Aqua-FM3_Edition1 CER_ES8_Aqua-FM4_Edition1 CER_ES8_Aqua-FM3_Edition2 CER_ES8_Aqua-FM4_Edition2 CER_ES8_Aqua-FM3_Edition1-CV CER_ES8_Aqua-FM4_Edition1-CV CER_ES8_Terra-FM1_Edition1-CV CER_ES8_Terra-FM1_Edition1-CV. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1997-12-27; Stop_Date=2005-03-29] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Temporal_Resolution=1 day; Temporal_Resolution_Range=Daily - < Weekly].

  1. CERES ERBE-like Instantaneous TOA Estimates (ES-8) in HDF (CER_ES8_Terra-FM2_Edition1)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The ES-8 archival data product contains a 24-hour, single-satellite, instantaneous view of scanner fluxes at the top-of-atmosphere (TOA) reduced from spacecraft altitude unfiltered radiances using Earth Radiation Budget Experiment (ERBE) scanner Inversion algorithms and the ERBE shortwave (SW) and longwave (LW) Angular Distribution Models (ADMs). The ES-8 also includes the total (TOT), SW, LW, and window (WN) channel radiometric data; SW, LW, and WN unfiltered radiance values; and the ERBE scene identification for each measurement. These data are organized according to the CERES 3.3-second scan into 6.6-second records. As long as there is one valid scanner measurement within a record, the ES-8 record will be generated. The following CERES ES8 data sets are currently available: CER_ES8_TRMM-PFM_Edition1 CER_ES8_TRMM-PFM_Edition2 CER_ES8_TRMM-PFM_Transient-Ops2 CER_ES8_Terra-FM1_Edition1 CER_ES8_Terra-FM2_Edition1 CER_ES8_Terra-FM1_Edition2 CER_ES8_Terra-FM2_Edition2 CER_ES8_Aqua-FM3_Edition1 CER_ES8_Aqua-FM4_Edition1 CER_ES8_Aqua-FM3_Edition2 CER_ES8_Aqua-FM4_Edition2 CER_ES8_Aqua-FM3_Edition1-CV CER_ES8_Aqua-FM4_Edition1-CV CER_ES8_Terra-FM1_Edition1-CV CER_ES8_Terra-FM1_Edition1-CV. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1997-12-27; Stop_Date=2005-11-01] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Temporal_Resolution=1 day; Temporal_Resolution_Range=Daily - < Weekly].

  2. CERES ERBE-like Instantaneous TOA Estimates (ES-8) in HDF (CER_ES8_TRMM-PFM_Edition2)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The ES-8 archival data product contains a 24-hour, single-satellite, instantaneous view of scanner fluxes at the top-of-atmosphere (TOA) reduced from spacecraft altitude unfiltered radiances using Earth Radiation Budget Experiment (ERBE) scanner Inversion algorithms and the ERBE shortwave (SW) and longwave (LW) Angular Distribution Models (ADMs). The ES-8 also includes the total (TOT), SW, LW, and window (WN) channel radiometric data; SW, LW, and WN unfiltered radiance values; and the ERBE scene identification for each measurement. These data are organized according to the CERES 3.3-second scan into 6.6-second records. As long as there is one valid scanner measurement within a record, the ES-8 record will be generated. The following CERES ES8 data sets are currently available: CER_ES8_TRMM-PFM_Edition1 CER_ES8_TRMM-PFM_Edition2 CER_ES8_TRMM-PFM_Transient-Ops2 CER_ES8_Terra-FM1_Edition1 CER_ES8_Terra-FM2_Edition1 CER_ES8_Terra-FM1_Edition2 CER_ES8_Terra-FM2_Edition2 CER_ES8_Aqua-FM3_Edition1 CER_ES8_Aqua-FM4_Edition1 CER_ES8_Aqua-FM3_Edition2 CER_ES8_Aqua-FM4_Edition2 CER_ES8_Aqua-FM3_Edition1-CV CER_ES8_Aqua-FM4_Edition1-CV CER_ES8_Terra-FM1_Edition1-CV CER_ES8_Terra-FM1_Edition1-CV. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1997-12-27; Stop_Date=2000-03-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Temporal_Resolution=1 day; Temporal_Resolution_Range=Daily - < Weekly].

  3. CERES ERBE-like Instantaneous TOA Estimates (ES-8) in HDF (CER_ES8_Aqua-FM3_Edition2)

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator)

    The ES-8 archival data product contains a 24-hour, single-satellite, instantaneous view of scanner fluxes at the top-of-atmosphere (TOA) reduced from spacecraft altitude unfiltered radiances using Earth Radiation Budget Experiment (ERBE) scanner Inversion algorithms and the ERBE shortwave (SW) and longwave (LW) Angular Distribution Models (ADMs). The ES-8 also includes the total (TOT), SW, LW, and window (WN) channel radiometric data; SW, LW, and WN unfiltered radiance values; and the ERBE scene identification for each measurement. These data are organized according to the CERES 3.3-second scan into 6.6-second records. As long as there is one valid scanner measurement within a record, the ES-8 record will be generated. The following CERES ES8 data sets are currently available: CER_ES8_TRMM-PFM_Edition1 CER_ES8_TRMM-PFM_Edition2 CER_ES8_TRMM-PFM_Transient-Ops2 CER_ES8_Terra-FM1_Edition1 CER_ES8_Terra-FM2_Edition1 CER_ES8_Terra-FM1_Edition2 CER_ES8_Terra-FM2_Edition2 CER_ES8_Aqua-FM3_Edition1 CER_ES8_Aqua-FM4_Edition1 CER_ES8_Aqua-FM3_Edition2 CER_ES8_Aqua-FM4_Edition2 CER_ES8_Aqua-FM3_Edition1-CV CER_ES8_Aqua-FM4_Edition1-CV CER_ES8_Terra-FM1_Edition1-CV CER_ES8_Terra-FM1_Edition1-CV. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1997-12-27; Stop_Date=2005-12-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Temporal_Resolution=1 day; Temporal_Resolution_Range=Daily - < Weekly].

  4. Validation of the Thematic Mapper radiometric and geometric correction algorithms

    NASA Technical Reports Server (NTRS)

    Fischel, D.

    1984-01-01

    The radiometric and geometric correction algorithms for Thematic Mapper are critical to subsequent successful information extraction. Earlier Landsat scanners, known as Multispectral Scanners, produce imagery which exhibits striping due to mismatching of detector gains and biases. Thematic Mapper exhibits the same phenomenon at three levels: detector-to-detector, scan-to-scan, and multiscan striping. The cause of these variations has been traced to variations in the dark current of the detectors. An alternative formulation has been tested and shown to be very satisfactory. Unfortunately, the Thematic Mapper detectors exhibit saturation effects when viewing extensive cloud areas, which are not easily correctable. The geometric correction algorithm has been shown to be remarkably reliable; only minor and modest improvements are indicated, and these are shown to be effective.

  5. Qualitative and quantitative evaluation of six algorithms for correcting intensity nonuniformity effects.

    PubMed

    Arnold, J B; Liow, J S; Schaper, K A; Stern, J J; Sled, J G; Shattuck, D W; Worth, A J; Cohen, M S; Leahy, R M; Mazziotta, J C; Rottenberg, D A

    2001-05-01

    The desire to correct intensity nonuniformity in magnetic resonance images has led to the proliferation of nonuniformity-correction (NUC) algorithms with different theoretical underpinnings. In order to provide end users with a rational basis for selecting a given algorithm for a specific neuroscientific application, we evaluated the performance of six NUC algorithms. We used simulated and real MRI data volumes, including six repeat scans of the same subject, in order to rank the accuracy, precision, and stability of the nonuniformity corrections. We also compared algorithms using data volumes from different subjects and different (1.5T and 3.0T) MRI scanners in order to relate differences in algorithmic performance to intersubject variability and/or differences in scanner performance. In phantom studies, the correlation of the extracted with the applied nonuniformity was highest in the transaxial (left-to-right) direction and lowest in the axial (top-to-bottom) direction. Two of the six algorithms demonstrated a high degree of stability, as measured by the iterative application of the algorithm to its corrected output. While none of the algorithms performed ideally under all circumstances, locally adaptive methods generally outperformed nonadaptive methods. Copyright 2001 Academic Press.

  6. FPGA-Based Pulse Pile-Up Correction With Energy and Timing Recovery.

    PubMed

    Haselman, M D; Pasko, J; Hauck, S; Lewellen, T K; Miyaoka, R S

    2012-10-01

    Modern field programmable gate arrays (FPGAs) are capable of performing complex discrete signal processing algorithms with clock rates well above 100 MHz. This capability, combined with FPGAs' low cost, ease of use, and selected dedicated hardware, makes them an ideal technology for the data acquisition system of a positron emission tomography (PET) scanner. The University of Washington is producing a high-resolution, small-animal PET scanner that utilizes FPGAs as the core of the front-end electronics. For this scanner, functions that are typically performed in dedicated circuits, or offline, are being migrated to the FPGA. This will not only simplify the electronics, but the features of modern FPGAs can be utilized to add significant signal processing power to produce higher quality images. In this paper we report on an all-digital pulse pile-up correction algorithm that has been developed for the FPGA. The pile-up mitigation algorithm will allow the scanner to run at higher count rates without incurring large data losses due to the overlapping of scintillation signals. This correction technique utilizes a reference pulse to extract timing and energy information for most pile-up events. Using pulses acquired from a Zecotech Photonics MAPD-N with an LFS-3 scintillator, we show that good timing and energy information can be achieved in the presence of pile-up while utilizing a moderate amount of FPGA resources.
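
    The reference-pulse idea above can be illustrated with a small least-squares sketch: model the digitized waveform as a sum of shifted, scaled copies of a reference pulse and solve for the per-event amplitudes (energies). This is only a conceptual floating-point sketch, not the authors' fixed-point FPGA implementation; the pulse shape, time constants, and arrival times are invented for illustration.

```python
import numpy as np

def reference_pulse(n, tau_rise=2.0, tau_fall=10.0):
    """Unit-amplitude scintillation-like pulse: fast rise, exponential decay."""
    t = np.arange(n, dtype=float)
    p = (1 - np.exp(-t / tau_rise)) * np.exp(-t / tau_fall)
    return p / p.max()

def correct_pileup(waveform, ref, t1, t2):
    """Recover the amplitudes of two overlapping pulses arriving at samples
    t1 and t2 by least-squares fit of shifted copies of the reference pulse."""
    n = len(waveform)
    basis = np.zeros((n, 2))
    basis[t1:, 0] = ref[: n - t1]
    basis[t2:, 1] = ref[: n - t2]
    amps, *_ = np.linalg.lstsq(basis, waveform, rcond=None)
    return amps

ref = reference_pulse(200)
# Simulate two piled-up pulses: amplitudes 1.0 and 0.6, arriving 15 samples apart.
wave = np.zeros(200)
wave[10:] += 1.0 * ref[:190]
wave[25:] += 0.6 * ref[:175]
a = correct_pileup(wave, ref, 10, 25)
print(np.round(a, 3))  # recovered amplitudes, close to 1.0 and 0.6
```

    In hardware the arrival times t1 and t2 would themselves be estimated from the leading edges; here they are assumed known to keep the sketch short.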

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menegotti, L.; Delana, A.; Martignano, A.

    Film dosimetry is an attractive tool for dose distribution verification in intensity modulated radiotherapy (IMRT). A critical aspect of radiochromic film dosimetry is the scanner used for the readout of the film: the output needs to be calibrated in dose response and corrected for pixel-value and spatially dependent nonuniformity caused by light scattering; these procedures can take a long time. A method for a fast and accurate calibration and uniformity correction for radiochromic film dosimetry is presented: a single film exposure is used to do both calibration and correction. Gafchromic EBT films were read with two flatbed charge-coupled-device scanners (Epson V750 and 1680Pro). The accuracy of the method is investigated with specific dose patterns and an IMRT beam. The comparisons with a two-dimensional array of ionization chambers using an 18×18 cm² open field and an inverse pyramid dose pattern show an increment in the percentage of points which pass the gamma analysis (tolerance parameters of 3% and 3 mm), passing from 55% and 64% for the 1680Pro and V750 scanners, respectively, to 94% for both scanners for the 18×18 open field, and from 76% and 75% to 91% for the inverse pyramid pattern. Application to an IMRT beam also shows better gamma index results, passing from 88% and 86% for the two scanners, respectively, to 94% for both. The number of points and dose range considered for correction and calibration appears to be appropriate for use in IMRT verification. The method proved to be fast, corrects the nonuniformity properly, and has been adopted for routine clinical IMRT dose verification.
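
    The gamma analysis used for the comparisons above combines a dose-difference tolerance with a distance-to-agreement (DTA) tolerance. Below is a minimal 1-D sketch of a global gamma evaluation with 3%/3 mm criteria; the profile shapes and shifts are invented, and clinical gamma tools operate in 2-D or 3-D with interpolation of the evaluated distribution.

```python
import numpy as np

def gamma_1d(x, dose_eval, dose_ref, dose_tol=0.03, dta_mm=3.0):
    """1-D global gamma index: for each reference point, the minimum combined
    dose-difference / distance-to-agreement metric over all evaluated points.
    dose_tol is a fraction of the reference maximum (global normalization)."""
    d_max = dose_ref.max()
    gam = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dd = (dose_eval - di) / (dose_tol * d_max)   # normalized dose difference
        dx = (x - xi) / dta_mm                       # normalized distance
        gam[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return gam

x = np.linspace(0.0, 100.0, 201)                 # positions in mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)          # reference profile
ev = 1.02 * np.exp(-((x - 50.5) / 20.0) ** 2)    # measured: +2% dose, 0.5 mm shift
g = gamma_1d(x, ev, ref)
pass_rate = 100.0 * (g <= 1.0).mean()
print(f"gamma pass rate: {pass_rate:.1f}%")
```

    A point passes when gamma ≤ 1; the pass rate is the percentage of passing points, the figure of merit quoted in the abstract.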

  8. Optical Breast Shape Capture and Finite Element Mesh Generation for Electrical Impedance Tomography

    PubMed Central

    Forsyth, J.; Borsic, A.; Halter, R.J.; Hartov, A.; Paulsen, K.D.

    2011-01-01

    X-Ray mammography is the standard for breast cancer screening. The development of alternative imaging modalities is desirable because mammograms expose patients to ionizing radiation. Electrical Impedance Tomography (EIT) may be used to determine tissue conductivity, a property which is an indicator of cancer presence. EIT is also a low-cost imaging solution and does not involve ionizing radiation. In breast EIT, impedance measurements are made using electrodes placed on the surface of the patient’s breast. The complex conductivity of the volume of the breast is estimated by a reconstruction algorithm. EIT reconstruction is a severely ill-posed inverse problem. As a result, noisy instrumentation and incorrect modelling of the electrodes and domain shape produce significant image artefacts. In this paper, we propose a method that has the potential to reduce these errors by accurately modelling the patient breast shape. A 3D hand-held optical scanner is used to acquire the breast geometry and electrode positions. We develop methods for processing the data from the scanner and producing volume meshes accurately matching the breast surface and electrode locations, which can be used for image reconstruction. We demonstrate this method for a plaster breast phantom and a human subject. Using this approach will allow patient-specific finite element meshes to be generated, which has the potential to improve the clinical value of EIT for breast cancer diagnosis. PMID:21646711
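
    To see why ill-posedness matters for a reconstruction like EIT, consider a toy linear inverse problem with a smoothing forward operator: naive inversion amplifies measurement noise enormously, while Tikhonov regularization trades a small bias for stability. This is a generic illustration of the ill-posed-inverse-problem point made above, not the reconstruction algorithm used in the paper; the operator, inclusion, and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy severely ill-posed linear problem: a smooth Gaussian blurring operator J
# maps an internal conductivity perturbation x to boundary measurements b.
# The singular values of J decay extremely fast, mimicking EIT's ill-posedness.
n = 50
i = np.arange(n)
J = np.exp(-((i[:, None] - i[None, :]) / 5.0) ** 2)
x_true = np.zeros(n)
x_true[20:30] = 1.0                               # a conductive inclusion
b = J @ x_true + 1e-3 * rng.standard_normal(n)    # noisy measurements

# Naive inversion vs. Tikhonov regularization:
#   x = argmin ||J x - b||^2 + lam * ||x||^2
lam = 1e-2
x_naive = np.linalg.solve(J, b)
x_tik = np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ b)

err_naive = np.linalg.norm(x_naive - x_true)
err_tik = np.linalg.norm(x_tik - x_true)
print(f"naive error {err_naive:.2e}, Tikhonov error {err_tik:.2e}")
```

    Accurate geometric modelling, as proposed in the paper, attacks a complementary error source: it reduces the systematic model mismatch that regularization alone cannot fix.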

  9. Intelligent inversion method for pre-stack seismic big data based on MapReduce

    NASA Astrophysics Data System (ADS)

    Yan, Xuesong; Zhu, Zhixin; Wu, Qinghua

    2018-01-01

    Seismic exploration infers subsurface structure from seismic information: by inverting seismic data, useful reservoir parameters can be obtained to guide exploration effectively. Pre-stack data are characterised by a large volume and abundant information, and their inversion yields rich estimates of the reservoir's elastic parameters. Owing to the sheer amount of pre-stack seismic data, existing single-machine environments can no longer meet the computational demands, so a fast and efficient method for the pre-stack inversion problem is urgently needed. Moreover, optimising the elastic parameters with a standard genetic algorithm easily falls into local optima, which weakens the inversion result, especially for the density. Therefore, an intelligent optimisation algorithm is proposed in this paper and used for the elastic parameter inversion of pre-stack seismic data. The algorithm improves the population initialisation strategy by using the Gardner formula and also improves the genetic operators; in model tests with logging data, the improved algorithm obtains better inversion results. All of the inverted elastic parameters fit the logging curves of the theoretical model well, which effectively improves the inversion precision of the density. The algorithm was implemented with a MapReduce model to solve the seismic big-data inversion problem, and the experimental results show that the parallel model effectively reduces the running time of the algorithm.
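
    The Gardner-based initialisation mentioned above can be sketched as follows. Gardner's empirical relation ties density to P-wave velocity (commonly rho ≈ 0.31·Vp^0.25, with Vp in m/s and rho in g/cm³), so seeding the genetic-algorithm population from a Vp trend keeps the initial densities geologically plausible rather than random. The velocity trend, population size, and jitter below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def gardner_density(vp, a=0.31, b=0.25):
    """Gardner's empirical relation: rho [g/cm^3] = a * Vp^b, with Vp in m/s."""
    return a * vp ** b

def init_population(vp_trend, pop_size=20, jitter=0.05):
    """Seed a GA population of (Vp, rho) layer models.  Each individual
    perturbs the Vp trend, and density is tied to Vp via Gardner, so the
    initial search space stays physically plausible."""
    pop = []
    for _ in range(pop_size):
        vp = vp_trend * (1 + jitter * rng.standard_normal(len(vp_trend)))
        rho = gardner_density(vp)
        pop.append(np.column_stack([vp, rho]))
    return pop

vp_trend = np.linspace(2000.0, 4000.0, 10)   # m/s, a simple depth trend
pop = init_population(vp_trend)
print(len(pop), pop[0].shape)                # 20 individuals, 10 layers x 2 params
print(round(gardner_density(3000.0), 3))     # ~2.294 g/cm^3 at 3000 m/s
```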

  10. A new correction method serving to eliminate the parabola effect of flatbed scanners used in radiochromic film dosimetry.

    PubMed

    Poppinga, D; Schoenfeld, A A; Doerner, K J; Blanck, O; Harder, D; Poppe, B

    2014-02-01

    The purpose of this study is the correction of the lateral scanner artifact, i.e., the effect that, on a large homogeneously exposed EBT3 film, a flatbed scanner measures different optical densities at different positions along the x axis, the axis parallel to the elongated light source. At constant dose, the measured optical density profiles along this axis have a parabolic shape with a significant dose-dependent curvature; the effect is therefore called the parabola effect for short. The objective of the algorithm developed in this study is to correct for the parabola effect: any optical density measured at a given position x is transformed into the equivalent optical density c at the apex of the parabola and then converted into the corresponding dose via the calibration of c versus dose. For the present study, EBT3 films and an Epson 10000XL scanner with its transparency unit were used for the analysis of the parabola effect. The films were irradiated with 6 MV photons from an Elekta Synergy accelerator in a RW3 slab phantom. In order to quantify the effect, ten film pieces with doses graded from 0 to 20.9 Gy were sequentially scanned at eight positions along the x axis and at six positions along the z axis (the movement direction of the light source), both in portrait and landscape film orientations. In order to test the effectiveness of the new correction algorithm, the dose profiles of an open square field and an IMRT plan were measured with EBT3 films and compared with ionization chamber and ionization chamber array measurements. The parabola effect has been numerically studied over the whole measuring field of the Epson 10000XL scanner for doses up to 20.9 Gy and for both film orientations. The presented algorithm transforms any optical density at position x into the equivalent optical density that would be measured at the same dose at the apex of the parabola.
This correction method has been validated up to doses of 5.2 Gy all over the scanner bed with 2D dose distributions of an open square photon field and an IMRT distribution. The algorithm presented in this study quantifies and corrects the parabola effect of EBT3 films scanned in commonly used commercial flatbed scanners at doses up to 5.2 Gy. It is easy to implement, and no additional work steps are necessary in daily routine film dosimetry.
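
    The apex-equivalent transformation can be illustrated with a simple model in which the parabola's curvature grows linearly with optical density; the inversion is then a closed-form expression. The curvature coefficients below are invented for illustration and would in practice be fitted from graded-dose film scans like those described above.

```python
def parabola_od(c, x, x0=0.0, k0=1e-6, k1=5e-6):
    """Forward model: optical density measured at lateral position x (mm)
    for a film whose apex-equivalent optical density is c.  The curvature
    k0 + k1*c increases with density (the dose-dependent parabola effect)."""
    d2 = (x - x0) ** 2
    return c + (k0 + k1 * c) * d2

def apex_equivalent(od_measured, x, x0=0.0, k0=1e-6, k1=5e-6):
    """Invert the forward model analytically: map a measured OD back to the
    equivalent OD at the apex, where the dose calibration applies."""
    d2 = (x - x0) ** 2
    return (od_measured - k0 * d2) / (1 + k1 * d2)

# Round trip: a film region of apex OD 0.8, read 120 mm off-axis.
m = parabola_od(0.8, 120.0)
c = apex_equivalent(m, 120.0)
print(round(m, 4), round(c, 4))  # measured OD is inflated; c recovers 0.8
```

    With a linear k(c), the inversion stays exact; a fitted nonlinear k(c) would require a numerical root find instead.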

  11. Hybrid-dual-Fourier tomographic algorithm for a fast three-dimensional optical image reconstruction in turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor)

    2007-01-01

    A reconstruction technique for reducing computation burden in the 3D image processes, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to thereby provide high-speed inverse computations. The inverse algorithm uses a hybrid transfer to provide fast Fourier inversion for data of multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.

  12. Physics for clinicians: Fluid-attenuated inversion recovery (FLAIR) and double inversion recovery (DIR) Imaging.

    PubMed

    Saranathan, Manojkumar; Worters, Pauline W; Rettmann, Dan W; Winegar, Blair; Becker, Jennifer

    2017-12-01

    A pedagogical review of fluid-attenuated inversion recovery (FLAIR) and double inversion recovery (DIR) imaging is conducted in this article. The basics of the two pulse sequences are first described, including the details of the inversion preparation and imaging sequences with accompanying mathematical formulae for choosing the inversion time in a variety of scenarios for use on clinical MRI scanners. Magnetization preparation (or T2prep), a strategy for improving image signal-to-noise ratio and contrast and reducing T1 weighting at high field strengths, is also described. Lastly, image artifacts commonly associated with FLAIR and DIR are described with clinical examples, to help avoid misdiagnosis. Level of Evidence: 5. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2017;46:1590-1600. © 2017 International Society for Magnetic Resonance in Medicine.
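
    A standard textbook result behind choosing the inversion time (not necessarily the exact formulae given in the article): for an ideal 180° inversion repeated every TR, ignoring the perturbation of the readout, tissue with longitudinal relaxation time T1 is nulled at TI = T1·ln[2/(1 + exp(−TR/T1))], which reduces to the familiar TI = T1·ln 2 when TR ≫ T1. A sketch with an assumed CSF T1 of 4000 ms:

```python
import math

def null_ti(t1, tr):
    """Inversion time (ms) that nulls tissue with relaxation time t1 (ms)
    for an ideal 180-degree inversion repeated every tr (ms), ignoring the
    readout:  TI = T1 * ln(2 / (1 + exp(-TR/T1))).
    For TR >> T1 this reduces to TI = T1 * ln 2."""
    return t1 * math.log(2.0 / (1.0 + math.exp(-tr / t1)))

t1_csf = 4000.0  # ms, an assumed CSF T1 at 3 T
print(round(null_ti(t1_csf, tr=10000.0)))  # finite-TR FLAIR TI, ~2457 ms
print(round(t1_csf * math.log(2)))         # long-TR limit, ~2773 ms
```

    DIR applies the same reasoning twice, choosing two inversion times so that two tissues (e.g. CSF and white matter) are nulled simultaneously.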

  13. Pattern recognition: A basis for remote sensing data analysis

    NASA Technical Reports Server (NTRS)

    Swain, P. H.

    1973-01-01

    The theoretical basis for the pattern-recognition-oriented algorithms used in the multispectral data analysis software system is discussed. A model of a general pattern recognition system is presented. The receptor or sensor is usually a multispectral scanner. For each ground resolution element the receptor produces n numbers or measurements corresponding to the n channels of the scanner.

  14. Rayleigh wave nonlinear inversion based on the Firefly algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Teng-Fei; Peng, Geng-Xin; Hu, Tian-Yue; Duan, Wen-Sheng; Yao, Feng-Chang; Liu, Yi-Mou

    2014-06-01

    Rayleigh waves have high amplitude, low frequency, and low velocity, and are treated as strong noise to be attenuated in reflection seismic surveys. This study addresses how to extract useful shear-wave velocity profiles and stratigraphic information from Rayleigh waves. We chose the firefly algorithm for the inversion of surface waves. The firefly algorithm, a new type of particle swarm optimization, is robust, highly effective, and capable of global searching. Tests with both synthetic models and field data show that the algorithm is feasible and advantageous for Rayleigh wave inversion: it is a robust and practical method that achieves nonlinear inversion of surface waves with high resolution.
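
    The firefly algorithm can be sketched in a few lines: each candidate model moves toward every "brighter" (lower-misfit) one with attractiveness beta0·exp(−gamma·r²), plus a damped random walk. The toy misfit below (distance to a known three-layer shear-velocity profile) stands in for a real dispersion-curve misfit, and all parameter values are illustrative, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def firefly_minimize(f, lo, hi, n_fireflies=25, n_iter=200,
                     beta0=1.0, gamma=None, alpha=0.1):
    """Minimal firefly algorithm: every firefly moves toward each brighter
    (lower-misfit) one with attractiveness beta0*exp(-gamma*r^2), plus a
    random walk whose amplitude shrinks over the iterations."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = lo.size
    if gamma is None:
        gamma = 1.0 / np.mean(hi - lo) ** 2   # scale attractiveness to the domain
    x = rng.uniform(lo, hi, size=(n_fireflies, dim))
    cost = np.array([f(xi) for xi in x])
    for it in range(n_iter):
        step = alpha * (hi - lo) * 0.97 ** it  # damped random-walk amplitude
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if cost[j] < cost[i]:          # j is brighter than i
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] = np.clip(x[i] + beta * (x[j] - x[i])
                                   + step * rng.standard_normal(dim), lo, hi)
                    cost[i] = f(x[i])
    best = np.argmin(cost)
    return x[best], cost[best]

# Toy "misfit": distance to a hypothetical three-layer shear-velocity profile.
target = np.array([200.0, 350.0, 500.0])   # m/s
misfit = lambda v: float(np.sum((v - target) ** 2))
v_best, m_best = firefly_minimize(misfit, lo=[100.0] * 3, hi=[800.0] * 3)
print(np.round(v_best, 1), round(m_best, 3))
```

    Because the brightest firefly never moves, the best model is retained while the rest of the swarm explores, which is what gives the method its global-search character.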

  15. Large Airborne Full Tensor Gradient Data Inversion Based on a Non-Monotone Gradient Method

    NASA Astrophysics Data System (ADS)

    Sun, Yong; Meng, Zhaohai; Li, Fengting

    2018-03-01

    Following the development of gravity gradiometer instrument technology, full tensor gravity (FTG) data can be acquired on airborne and marine platforms. Large-scale geophysical data can be obtained with these methods, placing such data sets in the "big data" category. Therefore, a fast and effective inversion method is developed to solve the large-scale FTG data inversion problem. Many algorithms are available to accelerate FTG data inversion, such as the conjugate gradient method; however, the conventional conjugate gradient method takes a long time to complete the data processing. Thus, a fast and effective iterative algorithm is necessary to improve the utilization of FTG data. The inversion is formulated by incorporating regularizing constraints, and a non-monotone gradient-descent method is then introduced to accelerate the convergence rate of the FTG data inversion. Compared with the conventional gradient method, the steepest-descent algorithm, and the conjugate gradient algorithm, the non-monotone iterative gradient-descent algorithm shows clear advantages. Simulated and field FTG data were used to demonstrate the practical value of this new fast inversion method.
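
    A non-monotone gradient-descent scheme of the kind described can be sketched with Barzilai–Borwein step lengths and a Grippo-style acceptance rule that compares each trial point against the maximum objective over a short memory window, allowing occasional increases that often speed convergence. This is a generic illustration on a toy regularized least-squares inversion, not the authors' FTG algorithm; the operator and data are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def nonmonotone_bb(grad, f, x0, n_iter=100, memory=5):
    """Gradient descent with Barzilai-Borwein step lengths and a non-monotone
    acceptance rule: a step is accepted if it improves on the *maximum*
    objective over the last `memory` iterates."""
    x = x0.copy()
    g = grad(x)
    step = 1e-4
    hist = [f(x)]
    for _ in range(n_iter):
        while True:
            x_new = x - step * g
            if f(x_new) <= max(hist[-memory:]) - 1e-12 or step < 1e-12:
                break
            step *= 0.5                      # safeguard: backtrack if needed
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        denom = s @ y
        step = (s @ s) / denom if denom > 0 else 1e-4   # BB1 step length
        x, g = x_new, g_new
        hist.append(f(x))
    return x, hist

# Toy regularized least-squares inversion:  min ||A m - d||^2 + lam ||m||^2
A = rng.standard_normal((80, 40))
m_true = rng.standard_normal(40)
d = A @ m_true
lam = 1e-3
f = lambda m: float(np.sum((A @ m - d) ** 2) + lam * np.sum(m ** 2))
grad = lambda m: 2 * A.T @ (A @ m - d) + 2 * lam * m
m_est, hist = nonmonotone_bb(grad, f, np.zeros(40))
print(round(hist[0], 1), "->", round(hist[-1], 8))
```

    The BB step adapts to the local curvature without a line search at every iterate, which is the source of the speedup over plain steepest descent.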

  16. Electrical conductivity imaging using gradient Bz decomposition algorithm in magnetic resonance electrical impedance tomography (MREIT).

    PubMed

    Park, Chunjae; Kwon, Ohin; Woo, Eung Je; Seo, Jin Keun

    2004-03-01

    In magnetic resonance electrical impedance tomography (MREIT), we try to visualize cross-sectional conductivity (or resistivity) images of a subject. We inject electrical currents into the subject through surface electrodes and measure the z component Bz of the induced internal magnetic flux density using an MRI scanner. Here, z is the direction of the main magnetic field of the MRI scanner. We formulate the conductivity image reconstruction problem in MREIT from a careful analysis of the relationship between the injection current and the induced magnetic flux density Bz. Based on the novel mathematical formulation, we propose the gradient Bz decomposition algorithm to reconstruct conductivity images. This new algorithm needs to differentiate Bz only once, in contrast to the previously developed harmonic Bz algorithm where the numerical computation of ∇²Bz is required. The new algorithm, therefore, has the important advantage of much improved noise tolerance. Numerical simulations with added random noise of realistic amounts show the feasibility of the algorithm in practical applications and also its robustness against measurement noise.

  17. A 3D ultrasound scanner: real time filtering and rendering algorithms.

    PubMed

    Cifarelli, D; Ruggiero, C; Brusacà, M; Mazzarella, M

    1997-01-01

    The work described here has been carried out within a collaborative project between DIST and ESAOTE BIOMEDICA aiming to set up a new ultrasonic scanner performing 3D reconstruction. A system is being set up to process and display 3D ultrasonic data in a fast, economical, and user-friendly way to help the physician during diagnosis. Several algorithms for digital filtering, data segmentation, and rendering for real-time, PC-based, three-dimensional reconstruction from B-mode ultrasonic biomedical images are compared with respect to processing time and final image quality. Three-dimensional data segmentation and rendering have been carried out with special reference to user-friendly features for foreseeable applications and to reconstruction speed.

  18. A study on characterization of stratospheric aerosol and gas parameters with the spacecraft solar occultation experiment

    NASA Technical Reports Server (NTRS)

    Chu, W. P.

    1977-01-01

    Spacecraft remote sensing of stratospheric aerosol and ozone vertical profiles using the solar occultation experiment has been analyzed. A computer algorithm has been developed in which a two-step inversion of the simulated data is performed: the radiometric data are first inverted into a vertical extinction profile using a linear inversion algorithm, and the multiwavelength extinction profiles are then solved with a nonlinear least-squares algorithm to produce aerosol and ozone vertical profiles. Examples of inversion results illustrate the resolution and noise sensitivity of the inversion algorithms.

  19. MR-assisted PET Motion Correction for Neurological Studies in an Integrated MR-PET Scanner

    PubMed Central

    Catana, Ciprian; Benner, Thomas; van der Kouwe, Andre; Byars, Larry; Hamm, Michael; Chonde, Daniel B.; Michel, Christian J.; El Fakhri, Georges; Schmand, Matthias; Sorensen, A. Gregory

    2011-01-01

    Head motion is difficult to avoid in long PET studies; it degrades the image quality and offsets the benefit of using a high-resolution scanner. As a potential solution in an integrated MR-PET scanner, the simultaneously acquired MR data can be used for motion tracking. In this work, a novel data processing and rigid-body motion correction (MC) algorithm for the MR-compatible BrainPET prototype scanner is described and proof-of-principle phantom and human studies are presented. Methods: To account for motion, the PET prompts and randoms coincidences as well as the sensitivity data are processed in line-of-response (LOR) space according to the MR-derived motion estimates. After sinogram-space rebinning, the corrected data are summed and the motion-corrected PET volume is reconstructed from these sinograms and the attenuation and scatter sinograms in the reference position. The accuracy of the MC algorithm was first tested using a Hoffman phantom. Next, human volunteer studies were performed and motion estimates were obtained using two high-temporal-resolution MR-based motion tracking techniques. Results: After accounting for the physical mismatch between the two scanners, perfectly co-registered MR and PET volumes are reproducibly obtained. The MR output gates inserted into the PET list-mode data allow the temporal correlation of the two data sets within 0.2 s. The Hoffman phantom volume reconstructed by processing the PET data in LOR space was similar to the one obtained by processing the data using the standard methods and applying the MC in image space, demonstrating the quantitative accuracy of the novel MC algorithm. In human volunteer studies, motion estimates were obtained from echo planar imaging and cloverleaf navigator sequences every 3 seconds and 20 ms, respectively. Substantially improved PET images with excellent delineation of specific brain structures were obtained after applying the MC using these MR-based estimates.
Conclusion: A novel MR-based MC algorithm was developed for the integrated MR-PET scanner. High-temporal-resolution MR-derived motion estimates (obtained while simultaneously acquiring anatomical or functional MR data) can be used for PET MC. An MR-based MC has the potential to improve PET as a quantitative method, increasing its reliability and reproducibility, which could benefit a large number of neurological applications. PMID:21189415

  20. TH-C-18A-06: Combined CT Image Quality and Radiation Dose Monitoring Program Based On Patient Data to Assess Consistency of Clinical Imaging Across Scanner Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christianson, O; Winslow, J; Samei, E

    2014-06-15

    Purpose: One of the principal challenges of clinical imaging is to achieve an ideal balance between image quality and radiation dose across multiple CT models. The number of scanners and protocols at large medical centers necessitates an automated quality assurance program to facilitate this objective. Therefore, the goal of this work was to implement an automated CT image quality and radiation dose monitoring program based on actual patient data and to use this program to assess the consistency of protocols across CT scanner models. Methods: Patient CT scans are routed to a HIPAA-compliant quality assurance server. CTDI, extracted using optical character recognition, and patient size, measured from the localizers, are used to calculate SSDE. A previously validated noise measurement algorithm determines the noise in uniform areas of the image across the scanned anatomy to generate a global noise level (GNL). Using this program, 2358 abdominopelvic scans acquired on three commercial CT scanners were analyzed. Median SSDE and GNL were compared across scanner models, and trends in SSDE and GNL with patient size were used to determine the impact of differing automatic exposure control (AEC) algorithms. Results: There was a significant difference in both SSDE and GNL across scanner models (9–33% and 15–35% for SSDE and GNL, respectively). Adjusting all protocols to achieve the same image noise would reduce patient dose by 27–45% depending on scanner model. Additionally, differences in AEC methodologies across vendors resulted in disparate relationships of SSDE and GNL with patient size. Conclusion: The difference in noise across scanner models indicates that protocols are not optimally matched to achieve consistent image quality. Our results indicate substantial potential for dose reduction while achieving a more consistent image appearance. Finally, the difference in AEC methodologies suggests the need for size-specific CT protocols to minimize variability in image quality across CT vendors.

  1. The whole space three-dimensional magnetotelluric inversion algorithm with static shift correction

    NASA Astrophysics Data System (ADS)

    Zhang, K.

    2016-12-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that operates within the inversion. The static shift correction method is based on 3D theory and real data: the static shift is detected by quantitative analysis of the apparent MT parameters (apparent resistivity and impedance phase) in the high-frequency range, and the correction is completed during inversion. The method is a fully automatic computer processing technique that adds no cost and avoids additional field work and manual processing, with good results. The 3D inversion algorithm (Zhang et al., 2013) is improved, based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001): we added a parallel structure, improved the computational efficiency, reduced the memory requirements, and added topographic and marine factors, so that the 3D inversion runs on a general PC with high efficiency and accuracy. All MT data from surface, seabed, and underground stations can be used in the inversion algorithm. A verification and application example of the 3D inversion algorithm is shown in Figure 1: the inversion model reflects all the anomalous bodies and the terrain clearly regardless of the data type (impedance, tipper, or both), and the resolution of the bodies' boundaries is improved by using tipper data. The algorithm is very effective for terrain inversion, making it useful for studies of the continental shelf with continuous exploration across land, marine, and underground settings. The three-dimensional electrical model of the ore zone reflects basic information on strata, rock, and structure; although it cannot indicate the ore body positions directly, it provides important clues for prospecting through delineation of the diorite pluton uplift range. 
The test results show that high-quality data processing and an efficient inversion method are an important guarantee for electromagnetic exploration of porphyry ore.

  2. Primary Beam Air Kerma Dependence on Distance from Cargo and People Scanners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strom, Daniel J.; Cerra, Frank

    The distance dependence of air kerma or dose rate of the primary radiation beam is not obvious for security scanners of cargo and people in which there is relative motion between a collimated source and the person or object being imaged. To study this problem, one fixed line source and three moving-source scan-geometry cases are considered, each characterized by radiation emanating perpendicular to an axis. The cases are 1) a stationary line source of radioactive material, e.g., contaminated solution in a pipe; 2) a moving, uncollimated point source of radiation that is shuttered or off when it is stationary; 3) a moving, collimated point source of radiation that is shuttered or off when it is stationary; and 4) a translating, narrow “pencil” beam emanating in a flying-spot, raster pattern. Each case is considered for short and long distances compared to the line source length or path traversed by a moving source. The short distance model pertains mostly to dose to objects being scanned and personnel associated with the screening operation. The long distance model pertains mostly to potential dose to bystanders. For radionuclide sources, the number of nuclear transitions that occur a) per unit length of a line source, or b) during the traversal of a point source, is a unifying concept. The “universal source strength” of air kerma rate at a meter from the source can be used to describe x-ray machine or radionuclide sources. For many cargo and people scanners with highly collimated fan or pencil beams, dose varies as the inverse of the distance from the source in the near field and with the inverse square of the distance beyond a critical radius. Ignoring the inverse square dependence and using inverse distance dependence is conservative in the sense of tending to overestimate dose.

  3. Primary Beam Air Kerma Dependence on Distance from Cargo and People Scanners.

    PubMed

    Strom, Daniel J; Cerra, Frank

    2016-06-01

    The distance dependence of air kerma or dose rate of the primary radiation beam is not obvious for security scanners of cargo and people in which there is relative motion between a collimated source and the person or object being imaged. To study this problem, one fixed line source and three moving-source scan-geometry cases are considered, each characterized by radiation emanating perpendicular to an axis. The cases are 1) a stationary line source of radioactive material, e.g., contaminated solution in a pipe; 2) a moving, uncollimated point source of radiation that is shuttered or off when it is stationary; 3) a moving, collimated point source of radiation that is shuttered or off when it is stationary; and 4) a translating, narrow "pencil" beam emanating in a flying-spot, raster pattern. Each case is considered for short and long distances compared to the line source length or path traversed by a moving source. The short distance model pertains mostly to dose to objects being scanned and personnel associated with the screening operation. The long distance model pertains mostly to potential dose to bystanders. For radionuclide sources, the number of nuclear transitions that occur a) per unit length of a line source or b) during the traversal of a point source is a unifying concept. The "universal source strength" of air kerma rate at 1 m from the source can be used to describe x-ray machine or radionuclide sources. For many cargo and people scanners with highly collimated fan or pencil beams, dose varies as the inverse of the distance from the source in the near field and with the inverse square of the distance beyond a critical radius. Ignoring the inverse square dependence and using inverse distance dependence is conservative in the sense of tending to overestimate dose.
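
    The near-field/far-field behaviour described above can be captured by a simple piecewise model that is continuous at the critical radius: inverse distance close in, inverse square beyond. The source strength and critical radius below are illustrative values only, not figures from the paper.

```python
def air_kerma(d, k1=10.0, rc=5.0):
    """Primary-beam air kerma (arbitrary units) at distance d (m) from a
    scanner with a highly collimated fan or pencil beam: inverse-distance
    in the near field, inverse-square beyond the critical radius rc.
    k1 plays the role of the 'universal source strength' at 1 m.
    The specific k1 and rc values here are hypothetical."""
    if d <= rc:
        return k1 / d
    return k1 * rc / d ** 2

print(air_kerma(5.0))    # at the critical radius: 10/5 = 2.0
print(air_kerma(10.0))   # far field, inverse square: 10*5/100 = 0.5
print(10.0 / 10.0)       # conservative 1/d estimate at 10 m: 1.0 (overestimate)
```

    The last two lines show why assuming 1/d everywhere is conservative: it overestimates the far-field kerma relative to the inverse-square branch.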

  4. Phase-shift focus monitoring techniques

    NASA Astrophysics Data System (ADS)

    McQuillan, Matthew; Roberts, Bill

    2006-03-01

    Depth of focus (DOF) has become a victim of its mathematical relationship with numerical aperture (NA). While NA is being pushed towards one to maximize scanner resolution, DOF shrinks because of its inverse relationship with NA. As Moore's law continues to drive the semiconductor industry towards smaller and smaller devices, the need for high NA to resolve these shrinking devices will continue to consume the usable depth of focus (UDOF). The shrinking UDOF has created demand for a feature or technology that gives engineers the capability to monitor scanner focus, and various focus monitoring techniques have been developed and implemented to prevent undetected tool focus excursions. Two overlay techniques for monitoring ArF scanner focus have been evaluated; the evaluation results are presented here.
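
    The trade-off described above follows from the usual lithographic scaling laws: resolution (minimum half-pitch) scales as k1·lambda/NA while DOF scales as k2·lambda/NA², so raising NA improves resolution linearly but costs DOF quadratically. A sketch for an ArF (193 nm) scanner; the process constants k1 and k2 are typical but assumed values.

```python
def resolution_nm(wavelength_nm, na, k1=0.25):
    """Rayleigh-type scaling: minimum half-pitch ~ k1 * lambda / NA."""
    return k1 * wavelength_nm / na

def dof_nm(wavelength_nm, na, k2=0.5):
    """Depth of focus ~ k2 * lambda / NA^2 -- the inverse relationship
    with NA that consumes the usable depth of focus as NA rises."""
    return k2 * wavelength_nm / na ** 2

for na in (0.75, 0.93, 1.2):   # dry and immersion ArF scanner NAs
    print(na, round(resolution_nm(193.0, na), 1), round(dof_nm(193.0, na), 1))
```

    The printout makes the squeeze explicit: each step up in NA buys a modest resolution gain while cutting DOF much faster.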

  5. Experimental validation of a Monte-Carlo-based inversion scheme for 3D quantitative photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Buchmann, Jens; Kaplan, Bernhard A.; Prohaska, Steffen; Laufer, Jan

    2017-03-01

    Quantitative photoacoustic tomography (qPAT) aims to extract physiological parameters, such as blood oxygen saturation (sO2), from measured multi-wavelength image data sets. The challenge of this approach lies in the inherently nonlinear fluence distribution in the tissue, which has to be accounted for by using an appropriate model, and the large scale of the inverse problem. In addition, the accuracy of experimental and scanner-specific parameters, such as the wavelength dependence of the incident fluence, the acoustic detector response, the beam profile and divergence, needs to be considered. This study aims at quantitative imaging of blood sO2, as it has been shown to be a more robust parameter compared to absolute concentrations. We propose a Monte-Carlo-based inversion scheme in conjunction with a reduction in the number of variables achieved using image segmentation. The inversion scheme is experimentally validated in tissue-mimicking phantoms consisting of polymer tubes suspended in a scattering liquid. The tubes were filled with chromophore solutions at different concentration ratios. 3-D multi-spectral image data sets were acquired using a Fabry-Perot based PA scanner. A quantitative comparison of the measured data with the output of the forward model is presented. Parameter estimates of chromophore concentration ratios were found to be within 5 % of the true values.

  6. Towards System Calibration of Panoramic Laser Scanners from a Single Station

    PubMed Central

    Medić, Tomislav; Holst, Christoph; Kuhlmann, Heiner

    2017-01-01

    Terrestrial laser scanner measurements suffer from systematic errors due to internal misalignments. The magnitude of the resulting errors in the point cloud in many cases exceeds the magnitude of random errors. Hence, the task of calibrating a laser scanner is important for applications with high accuracy demands. This paper primarily addresses the case of panoramic terrestrial laser scanners. Herein, it is proven that most of the calibration parameters can be estimated from a single scanner station without a need for any reference information. This hypothesis is confirmed through an empirical experiment, which was conducted in a large machine hall using a Leica Scan Station P20 panoramic laser scanner. The calibration approach is based on the widely used target-based self-calibration approach, with small modifications. A new angular parameterization is used in order to implicitly introduce measurements in two faces of the instrument and for the implementation of calibration parameters describing genuine mechanical misalignments. Additionally, a computationally preferable calibration algorithm based on the two-face measurements is introduced. In the end, the calibration results are discussed, highlighting all necessary prerequisites for the scanner calibration from a single scanner station. PMID:28513548

  7. Fusion of Terrestrial and Airborne Laser Data for 3D modeling Applications

    NASA Astrophysics Data System (ADS)

    Mohammed, Hani Mahmoud

This thesis deals with the 3D modeling phase of as-built large BIM projects. Among the several means of BIM data capture, such as photogrammetric or range tools, laser scanners have long been one of the most efficient and practical tools. They can generate point clouds with resolution high enough for 3D models that meet today's market demands. Current 3D modeling projects of as-built BIMs mainly use one type of laser scanner data, such as airborne or terrestrial. According to the literature, few significant efforts have been made towards the fusion of heterogeneous laser scanner data despite its importance. The importance of fusing heterogeneous data arises from the fact that no single type of laser data can provide all the information about a BIM, especially for large BIM projects that extend over a large area, such as university buildings or heritage sites. Terrestrial laser scanners are able to map facades of buildings and other terrestrial objects, but they lack the ability to map roofs or higher parts of the BIM project. Airborne laser scanners, on the other hand, can map roofs of buildings efficiently but only small parts of the facades. Short range laser scanners can map the interiors of BIM projects, while long range scanners are used for mapping wide exterior areas. In this thesis the long range laser scanner data obtained in the stop-and-go mapping mode, the short range laser scanner data obtained in a fully static mapping mode, and the airborne laser data are all fused together to provide a complete, effective solution for a large BIM project. Working towards the 3D modeling of BIM projects, the thesis framework starts with the registration of the data, for which a new fast automatic registration algorithm was developed. The next step is to recognize the different objects in the BIM project (classification) and obtain 3D models of the buildings. The last step is the development of an occlusion removal algorithm to efficiently recover parts of the buildings occluded by surrounding objects such as trees, vehicles, or street poles.

  8. New Hybrid Algorithms for Estimating Tree Stem Diameters at Breast Height Using a Two Dimensional Terrestrial Laser Scanner

    PubMed Central

    Kong, Jianlei; Ding, Xiaokang; Liu, Jinhao; Yan, Lei; Wang, Jianli

    2015-01-01

In this paper, a new algorithm to improve the accuracy of estimating diameter at breast height (DBH) for tree trunks in forest areas is proposed. First, the information is collected by a two-dimensional terrestrial laser scanner (2DTLS), which emits laser pulses to generate a point cloud. After extraction and filtration, the laser point clusters of the trunks are obtained and optimized by an arithmetic means method. Then, an algebraic circle fitting algorithm in polar form is non-linearly optimized by the Levenberg-Marquardt method to form a new hybrid algorithm, which is used to acquire the diameters and positions of the trees. Compared with previous works, this proposed method improves the accuracy of tree diameter estimation significantly and effectively reduces the calculation time. Moreover, the experimental results indicate that this method is stable and suitable for the most challenging conditions, which has practical significance in improving the operating efficiency of forest harvesters and reducing the risk of accidents. PMID:26147726
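The hybrid fit described above can be sketched as an algebraic circle fit used to initialize a Levenberg-Marquardt refinement of the geometric residuals. This is a minimal illustration in Cartesian coordinates (a Kasa-style algebraic step) rather than the authors' polar formulation:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_trunk_circle(x, y):
    """Hybrid circle fit sketch: algebraic linear least-squares initial
    estimate, then Levenberg-Marquardt refinement of geometric residuals.
    Returns (center_x, center_y, diameter)."""
    # Algebraic step: x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r0 = np.sqrt(c + cx**2 + cy**2)

    def residuals(p):
        px, py, r = p
        return np.hypot(x - px, y - py) - r   # distance to circle

    sol = least_squares(residuals, [cx, cy, r0], method="lm")
    px, py, r = sol.x
    return px, py, 2 * r   # diameter corresponds to DBH

# Synthetic arc of a 0.30 m-diameter trunk seen from one side
theta = np.linspace(-0.8, 0.8, 50)
x = 0.15 * np.cos(theta) + 2.0
y = 0.15 * np.sin(theta) + 0.5
cx, cy, dbh = fit_trunk_circle(x, y)
```

Because a 2D scan sees only one side of the trunk, the points cover an arc rather than a full circle, which is exactly where the nonlinear refinement of the algebraic estimate pays off.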

  9. Phytoplankton pigment concentrations in the Middle Atlantic Bight - Comparison of ship determinations and CZCS estimates. [Coastal Zone Color Scanner

    NASA Technical Reports Server (NTRS)

    Gordon, H. R.; Brown, J. W.; Clark, D. K.; Brown, O. B.; Evans, R. H.; Broenkow, W. W.

    1983-01-01

    The processing algorithms used for relating the apparent color of the ocean observed with the Coastal-Zone Color Scanner on Nimbus-7 to the concentration of phytoplankton pigments (principally the pigment responsible for photosynthesis, chlorophyll-a) are developed and discussed in detail. These algorithms are applied to the shelf and slope waters of the Middle Atlantic Bight and also to Sargasso Sea waters. In all, four images are examined, and the resulting pigment concentrations are compared to continuous measurements made along ship tracks. The results suggest that over the 0.08-1.5 mg/cu m range, the error in the retrieved pigment concentration is of the order of 30-40% for a variety of atmospheric turbidities. In three direct comparisons between ship-measured and satellite-retrieved values of the water-leaving radiance, the atmospheric correction algorithm retrieved the water-leaving radiance with an average error of about 10%. This atmospheric correction algorithm does not require any surface measurements for its application.

  10. Laser Scanner Technology, Ground-Penetrating Radar and Augmented Reality for the Survey and Recovery of Artistic, Archaeological and Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Barrile, V.; Bilotta, G.; Meduri, G. M.; De Carlo, D.; Nunnari, A.

    2017-11-01

This study explores the potential of technologies such as laser scanners and ground-penetrating radar (GPR) for cultural heritage applications. For the data processing, we compared the results obtained with various commercial software packages against algorithms developed and implemented in Matlab. Moreover, Virtual Reality and Augmented Reality allow the real world to be integrated with historical-artistic information, laser scanner and georadar (GPR) data, and virtual objects, virtually enriching it with multimedia elements and with graphic and textual information accessible through smartphones and tablets.

  11. Localization of a mobile laser scanner via dimensional reduction

    NASA Astrophysics Data System (ADS)

    Lehtola, Ville V.; Virtanen, Juho-Pekka; Vaaja, Matti T.; Hyyppä, Hannu; Nüchter, Andreas

    2016-11-01

We extend the concept of intrinsic localization from a theoretical one-dimensional (1D) solution onto a 2D manifold that is embedded in a 3D space, and then recover the full six degrees of freedom for a mobile laser scanner with a simultaneous localization and mapping (SLAM) algorithm. By intrinsic localization, we mean that no reference coordinate system, such as a global navigation satellite system (GNSS), nor any inertial measurement unit (IMU), is used. Experiments are conducted with a 2D laser scanner mounted on a rolling prototype platform, VILMA. The concept offers potential in being extendable to other wheeled platforms.

  12. A review of ocean chlorophyll algorithms and primary production models

    NASA Astrophysics Data System (ADS)

    Li, Jingwen; Zhou, Song; Lv, Nan

    2015-12-01

This paper introduces five ocean chlorophyll concentration inversion algorithms and three main models for computing ocean primary production from ocean chlorophyll concentration. Through a comparison of the five inversion algorithms, it summarizes their advantages and disadvantages, and briefly analyzes trends in ocean primary production modeling.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahmood, U; Dauer, L; Erdi, Y

Purpose: Our goal was to evaluate low contrast detectability (LCD) for abdominal CT protocols across two CT scanner manufacturers, while producing a similar noise texture and CTDIvol for acquired images. Methods: A CIRS tissue equivalent LCD phantom containing three columns of 7 spherical targets, ranging from 10 mm to 2.4 mm, that are 5, 10, and 20 HU below the background matrix (HUBB) was scanned using a GE HD750 64-slice scanner and a Siemens Somatom Definition AS 64-slice scanner. Protocols were designed to deliver a CTDIvol of 12.26 mGy and images were reconstructed with FBP, ASIR and Sapphire. Comparisons were made between those algorithms that had matching noise power spectrum (NPS) peaks. NPS information was extracted from a previously published article that matched NPS peak frequencies across manufacturers by calculating the NPS from uniform phantom images reconstructed with several IR algorithms. Results: For the GE HD750 scanner, the minimum detectable lesion size was 6.3 mm in the 20 HUBB and 10 HUBB columns, and 10 mm in the 5 HUBB column. For the Siemens Somatom Definition AS, the minimum detectable lesion size was 4.8 mm in the 20 HUBB column, 9.5 mm in the 10 HUBB column, and 10 mm in the 5 HUBB column. Conclusion: Reducing radiation dose while improving or maintaining LCD is possible with application of IR. However, there are several different IR algorithms, each generating a different resolution and noise texture. In multi-manufacturer settings, matching only the CTDIvol between manufacturers may result in a loss of clinically relevant information.

  14. Automatic planning of needle placement for robot-assisted percutaneous procedures.

    PubMed

    Belbachir, Esia; Golkar, Ehsan; Bayle, Bernard; Essert, Caroline

    2018-04-18

Percutaneous procedures allow interventional radiologists to perform diagnoses or treatments guided by an imaging device, typically a computed tomography (CT) scanner with a high spatial resolution. To reduce exposure to radiation and improve accuracy, robotic assistance to needle insertion is considered in the case of X-ray guided procedures. We introduce a planning algorithm that computes a needle placement compatible with both the patient's anatomy and the accessibility of the robot within the scanner gantry. Our preoperative planning approach is based on inverse kinematics, fast collision detection, and bidirectional rapidly exploring random trees coupled with an efficient strategy of node addition. The algorithm computes the allowed needle entry zones over the patient's skin (accessibility map) from 3D models of the patient's anatomy, the environment (CT, bed), and the robot. The result includes the admissible robot joint path to target the prescribed internal point through the entry point. A retrospective study was performed on 16 patient datasets in different conditions: without robot (WR) and with the robot on the left or the right side of the bed (RL/RR). We provide an accessibility map ensuring a collision-free path of the robot and allowing for a needle placement compatible with the patient's anatomy. The result is obtained in an average time of about 1 min, even in difficult cases. The accessibility maps of RL and RR covered on average about half of the surface of the WR map, which offers a variety of options to insert the needle with the robot. We also measured the average distance between the needle and major obstacles such as the vessels and found that RL and RR produced needle placements almost as safe as WR. The introduced planning method helped us prove that it is possible to use such a "general purpose" redundant manipulator equipped with a dedicated tool to perform percutaneous interventions in cluttered spaces like a CT gantry.

  15. Modified look-locker inversion recovery T1 mapping indices: assessment of accuracy and reproducibility between magnetic resonance scanners

    PubMed Central

    2013-01-01

    Background Cardiovascular magnetic resonance (CMR) T1 mapping indices, such as T1 time and partition coefficient (λ), have shown potential to assess diffuse myocardial fibrosis. The purpose of this study was to investigate how scanner and field strength variation affect the accuracy and precision/reproducibility of T1 mapping indices. Methods CMR studies were performed on two 1.5T and three 3T scanners. Eight phantoms were made to mimic the T1/T2 of pre- and post-contrast myocardium and blood at 1.5T and 3T. T1 mapping using MOLLI was performed with simulated heart rate of 40-100 bpm. Inversion recovery spin echo (IR-SE) was the reference standard for T1 determination. Accuracy was defined as the percent error between MOLLI and IR-SE, and scan/re-scan reproducibility was defined as the relative percent mean difference between repeat MOLLI scans. Partition coefficient was estimated by ΔR1myocardium phantom/ΔR1blood phantom. Generalized linear mixed model was used to compare the accuracy and precision/reproducibility of T1 and λ across field strength, scanners, and protocols. Results Field strength significantly affected MOLLI T1 accuracy (6.3% error for 1.5T vs. 10.8% error for 3T, p<0.001) but not λ accuracy (8.8% error for 1.5T vs. 8.0% error for 3T, p=0.11). Partition coefficients of MOLLI were not different between two 1.5T scanners (47.2% vs. 47.9%, p=0.13), and showed only slight variation across three 3T scanners (49.2% vs. 49.8% vs. 49.9%, p=0.016). Partition coefficient also had significantly lower percent error for precision (better scan/re-scan reproducibility) than measurement of individual T1 values (3.6% for λ vs. 4.3%-4.8% for T1 values, approximately, for pre/post blood and myocardium values). Conclusion Based on phantom studies, T1 errors using MOLLI ranged from 6-14% across various MR scanners while errors for partition coefficient were less (6-10%). 
Compared with absolute T1 times, partition coefficient showed less variability across platforms and field strengths as well as higher precision. PMID:23890156
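The partition coefficient compared above follows directly from pre- and post-contrast T1 values of myocardium and blood. A minimal sketch (the T1 values in the example are illustrative placeholders, not measurements from the study):

```python
def partition_coefficient(t1_myo_pre, t1_myo_post, t1_blood_pre, t1_blood_post):
    """lambda = delta-R1(myocardium) / delta-R1(blood), with R1 = 1/T1.
    T1 values in consistent units (e.g., ms)."""
    d_r1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_pre
    d_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre
    return d_r1_myo / d_r1_blood

# Hypothetical pre/post-contrast T1 values (ms) for illustration only
lam = partition_coefficient(950.0, 450.0, 1550.0, 350.0)
```

Because the myocardial and blood T1 errors of a given scanner tend to move together, their ratio cancels part of the scanner-specific bias, which is consistent with the lower cross-platform variability reported for λ.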

  16. A 3D inversion for all-space magnetotelluric data with static shift correction

    NASA Astrophysics Data System (ADS)

    Zhang, Kun

    2017-04-01

Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. Static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high-frequency range, and the correction is completed during inversion. The method is an automatic computer processing technique with no added cost, and it avoids additional field work and indoor processing while giving good results. The 3D inversion algorithm improves on Zhang et al. (2013), which was based on the NLCG methods of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure, improved the computational efficiency, reduced the memory requirements, and added topographic and marine factors, so the 3D inversion can run on a general PC with high efficiency and accuracy. All MT data from surface stations, seabed stations and underground stations can be used in the inversion algorithm.

  17. Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?

    PubMed Central

    Pan, Xiaochuan; Sidky, Emil Y; Vannier, Michael

    2010-01-01

Despite major advances in x-ray sources, detector arrays, gantry mechanical design and especially computer performance, one component of computed tomography (CT) scanners has remained virtually constant for the past 25 years—the reconstruction algorithm. Fundamental advances have been made in the solution of inverse problems, especially tomographic reconstruction, but these works have not been translated into clinical and related practice. The reasons are not obvious and seldom discussed. This review seeks to examine the reasons for this discrepancy and provides recommendations on how it can be resolved. We take the example of the field of compressive sensing (CS), summarizing this new area of research from the eyes of practical medical physicists and explaining the disconnection between theoretical and application-oriented research. Using a few issues specific to CT, which engineers have addressed in very specific ways, we try to distill the mathematical problem underlying each of these issues with the hope of demonstrating that there are interesting mathematical problems of general importance that can result from in-depth analysis of specific issues. We then sketch some unconventional CT-imaging designs that have the potential to impact on CT applications, if the link between applied mathematicians and engineers/physicists were stronger. Finally, we close with some observations on how the link could be strengthened. There is, we believe, an important opportunity to rapidly improve the performance of CT and related tomographic imaging techniques by addressing these issues. PMID:20376330

  18. TOPICAL REVIEW: Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?

    NASA Astrophysics Data System (ADS)

    Pan, Xiaochuan; Sidky, Emil Y.; Vannier, Michael

    2009-12-01

Despite major advances in x-ray sources, detector arrays, gantry mechanical design and especially computer performance, one component of computed tomography (CT) scanners has remained virtually constant for the past 25 years—the reconstruction algorithm. Fundamental advances have been made in the solution of inverse problems, especially tomographic reconstruction, but these works have not been translated into clinical and related practice. The reasons are not obvious and seldom discussed. This review seeks to examine the reasons for this discrepancy and provides recommendations on how it can be resolved. We take the example of the field of compressive sensing (CS), summarizing this new area of research from the eyes of practical medical physicists and explaining the disconnection between theoretical and application-oriented research. Using a few issues specific to CT, which engineers have addressed in very specific ways, we try to distill the mathematical problem underlying each of these issues with the hope of demonstrating that there are interesting mathematical problems of general importance that can result from in-depth analysis of specific issues. We then sketch some unconventional CT-imaging designs that have the potential to impact on CT applications, if the link between applied mathematicians and engineers/physicists were stronger. Finally, we close with some observations on how the link could be strengthened. There is, we believe, an important opportunity to rapidly improve the performance of CT and related tomographic imaging techniques by addressing these issues.

  19. Recursive partitioned inversion of large (1500 x 1500) symmetric matrices

    NASA Technical Reports Server (NTRS)

    Putney, B. H.; Brownd, J. E.; Gomez, R. A.

    1976-01-01

    A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
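The partitioned-inversion idea can be illustrated with a recursive Schur-complement scheme: each level inverts only half-size blocks, which is what allows a large symmetric positive definite system to be processed with a fraction of the full-matrix storage. This sketch shows the recursion itself, not the out-of-core storage management that motivated SOLVE:

```python
import numpy as np

def spd_inverse(M):
    """Recursive partitioned inverse of a symmetric positive definite
    matrix M = [[A, B], [B.T, D]] via the Schur complement
    S = D - B.T A^{-1} B (itself SPD). A sketch of the core idea."""
    n = M.shape[0]
    if n == 1:
        return np.array([[1.0 / M[0, 0]]])
    k = n // 2
    A, B, D = M[:k, :k], M[:k, k:], M[k:, k:]
    Ainv = spd_inverse(A)
    S = D - B.T @ Ainv @ B
    Sinv = spd_inverse(S)
    AB = Ainv @ B
    top_left = Ainv + AB @ Sinv @ AB.T
    top_right = -AB @ Sinv
    return np.block([[top_left, top_right], [top_right.T, Sinv]])

# Check on a small SPD system resembling a normal-equations matrix
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 6))
N = X.T @ X + np.eye(6)
assert np.allclose(spd_inverse(N) @ N, np.eye(6))
```

In a production setting a Cholesky-based step inversion (also discussed in the abstract) is preferable numerically; the recursion above is the clearest way to see why only partitions of the matrix need to be core-resident at once.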

  20. An iterative sinogram gap-filling method with object- and scanner-dedicated discrete cosine transform (DCT)-domain filters for high resolution PET scanners.

    PubMed

    Kim, Kwangdon; Lee, Kisung; Lee, Hakjae; Joo, Sungkwan; Kang, Jungwon

    2018-01-01

    We aimed to develop a gap-filling algorithm, in particular the filter mask design method of the algorithm, which optimizes the filter to the imaging object by an adaptive and iterative process, rather than by manual means. Two numerical phantoms (Shepp-Logan and Jaszczak) were used for sinogram generation. The algorithm works iteratively, not only on the gap-filling iteration but also on the mask generation, to identify the object-dedicated low frequency area in the DCT-domain that is to be preserved. We redefine the low frequency preserving region of the filter mask at every gap-filling iteration, and the region verges on the property of the original image in the DCT domain. The previous DCT2 mask for each phantom case had been manually well optimized, and the results show little difference from the reference image and sinogram. We observed little or no difference between the results of the manually optimized DCT2 algorithm and those of the proposed algorithm. The proposed algorithm works well for various types of scanning object and shows results that compare to those of the manually optimized DCT2 algorithm without perfect or full information of the imaging object.
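The iterative gap-filling loop can be sketched as follows. For simplicity the low-frequency mask is fixed here rather than re-derived from the object at each iteration as the paper proposes, and `keep_frac` and `n_iter` are illustrative values:

```python
import numpy as np
from scipy.fft import dctn, idctn

def fill_gaps(sino, gap_mask, keep_frac=0.15, n_iter=50):
    """Iterative DCT-domain gap filling sketch: each iteration low-pass
    filters the estimate in the DCT domain, then re-imposes the measured
    (non-gap) sinogram bins."""
    est = sino.copy()
    est[gap_mask] = sino[~gap_mask].mean()       # crude initial fill
    ny, nx = sino.shape
    u, v = np.meshgrid(np.arange(nx), np.arange(ny))
    low = (u / nx + v / ny) < keep_frac          # triangular low-freq region
    for _ in range(n_iter):
        coef = dctn(est, norm="ortho")
        est = idctn(coef * low, norm="ortho")    # keep low frequencies only
        est[~gap_mask] = sino[~gap_mask]         # restore measured bins
    return est

# Example: two dead detector columns in a smooth synthetic sinogram
t = np.linspace(0, np.pi, 32)
sino = np.outer(np.sin(t), np.cos(t / 2)) + 1.0
gap = np.zeros_like(sino, dtype=bool)
gap[:, 10:12] = True
filled = fill_gaps(sino, gap)
```

The adaptive element of the proposed method corresponds to replacing the fixed `low` mask with one re-estimated from the DCT coefficients of the current estimate at every iteration.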

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poppinga, D., E-mail: daniela.poppinga@uni-oldenburg.de; Schoenfeld, A. A.; Poppe, B.

Purpose: The purpose of this study is the correction of the lateral scanner artifact, i.e., the effect that, on a large homogeneously exposed EBT3 film, a flatbed scanner measures different optical densities at different positions along the x axis, the axis parallel to the elongated light source. At constant dose, the measured optical density profiles along this axis have a parabolic shape with significant dose-dependent curvature; the effect is therefore called the parabola effect for short. The objective of the algorithm developed in this study is to correct for the parabola effect: any optical density measured at a given position x is transformed into the equivalent optical density c at the apex of the parabola and then converted into the corresponding dose via the calibration of c versus dose. Methods: For the present study EBT3 films and an Epson 10000XL scanner including transparency unit were used for the analysis of the parabola effect. The films were irradiated with 6 MV photons from an Elekta Synergy accelerator in a RW3 slab phantom. In order to quantify the effect, ten film pieces with doses graded from 0 to 20.9 Gy were sequentially scanned at eight positions along the x axis and at six positions along the z axis (the movement direction of the light source), both for the portrait and landscape film orientations. In order to test the effectiveness of the new correction algorithm, the dose profiles of an open square field and an IMRT plan were measured by EBT3 films and compared with ionization chamber and ionization chamber array measurements. Results: The parabola effect has been numerically studied over the whole measuring field of the Epson 10000XL scanner for doses up to 20.9 Gy and for both film orientations. The presented algorithm transforms any optical density at position x into the equivalent optical density that would be measured at the same dose at the apex of the parabola. This correction method has been validated up to doses of 5.2 Gy all over the scanner bed with 2D dose distributions of an open square photon field and an IMRT distribution. Conclusions: The algorithm presented in this study quantifies and corrects the parabola effect of EBT3 films scanned in commonly used commercial flatbed scanners at doses up to 5.2 Gy. It is easy to implement, and no additional work steps are necessary in daily routine film dosimetry.
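The apex-equivalent transformation can be illustrated under the simplifying assumption that the parabola curvature depends linearly on the apex optical density, od(x) = c + (a0 + a1*c)(x - x0)^2, which admits a closed-form inverse. The parameters `a0`, `a1`, and `x0` are hypothetical placeholders for the calibration described above, not the study's fitted values:

```python
def apex_equivalent_od(od, x, x0, a0, a1):
    """Sketch of the parabola-effect correction: given an optical density
    od measured at lateral position x, return the equivalent optical
    density c at the parabola apex x0, assuming curvature a(c) = a0 + a1*c.
    Solving od = c + (a0 + a1*c)*(x - x0)**2 for c gives a closed form."""
    d2 = (x - x0) ** 2
    return (od - a0 * d2) / (1.0 + a1 * d2)

# Round trip: synthesize an off-apex reading, then correct it back
c_true = 0.5                                   # apex OD (illustrative)
od_meas = c_true + (2e-4 + 1e-3 * c_true) * (8.0 - 0.0) ** 2
c_corr = apex_equivalent_od(od_meas, 8.0, 0.0, 2e-4, 1e-3)
```

With `c` in hand, the dose follows from the ordinary calibration of apex optical density versus dose, exactly as the abstract describes.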

  2. Limitations of Airway Dimension Measurement on Images Obtained Using Multi-Detector Row Computed Tomography

    PubMed Central

    Oguma, Tsuyoshi; Hirai, Toyohiro; Niimi, Akio; Matsumoto, Hisako; Muro, Shigeo; Shigematsu, Michio; Nishimura, Takashi; Kubo, Yoshiro; Mishima, Michiaki

    2013-01-01

    Objectives (a) To assess the effects of computed tomography (CT) scanners, scanning conditions, airway size, and phantom composition on airway dimension measurement and (b) to investigate the limitations of accurate quantitative assessment of small airways using CT images. Methods An airway phantom, which was constructed using various types of material and with various tube sizes, was scanned using four CT scanner types under different conditions to calculate airway dimensions, luminal area (Ai), and the wall area percentage (WA%). To investigate the limitations of accurate airway dimension measurement, we then developed a second airway phantom with a thinner tube wall, and compared the clinical CT images of healthy subjects with the phantom images scanned using the same CT scanner. The study using clinical CT images was approved by the local ethics committee, and written informed consent was obtained from all subjects. Data were statistically analyzed using one-way ANOVA. Results Errors noted in airway dimension measurement were greater in the tube of small inner radius made of material with a high CT density and on images reconstructed by body algorithm (p<0.001), and there was some variation in error among CT scanners under different fields of view. Airway wall thickness had the maximum effect on the accuracy of measurements with all CT scanners under all scanning conditions, and the magnitude of errors for WA% and Ai varied depending on wall thickness when airways of <1.0-mm wall thickness were measured. Conclusions The parameters of airway dimensions measured were affected by airway size, reconstruction algorithm, composition of the airway phantom, and CT scanner types. In dimension measurement of small airways with wall thickness of <1.0 mm, the accuracy of measurement according to quantitative CT parameters can decrease as the walls become thinner. PMID:24116105

  3. [High resolution reconstruction of PET images using the iterative OSEM algorithm].

    PubMed

    Doll, J; Henze, M; Bublitz, O; Werling, A; Adam, L E; Haberkorn, U; Semmler, W; Brix, G

    2004-06-01

Improvement of the spatial resolution in positron emission tomography (PET) by incorporation of the image-forming characteristics of the scanner into the process of iterative image reconstruction. All measurements were performed at the whole-body PET system ECAT EXACT HR(+) in 3D mode. The acquired 3D sinograms were sorted into 2D sinograms by means of the Fourier rebinning (FORE) algorithm, which allows the usage of 2D algorithms for image reconstruction. The scanner characteristics were described by a spatially variant line-spread function (LSF), which was determined from activated copper-64 line sources. This information was used to model the physical degradation processes in PET measurements during the course of 2D image reconstruction with the iterative OSEM algorithm. To assess the performance of the high-resolution OSEM algorithm, phantom measurements performed at a cylinder phantom, the hotspot Jaszczack phantom, and the 3D Hoffmann brain phantom as well as different patient examinations were analyzed. Scanner characteristics could be described by a Gaussian-shaped LSF with a full-width at half-maximum increasing from 4.8 mm at the center to 5.5 mm at a radial distance of 10.5 cm. Incorporation of the LSF into the iteration formula resulted in a markedly improved resolution of 3.0 and 3.5 mm, respectively. The evaluation of phantom and patient studies showed that the high-resolution OSEM algorithm not only led to a better contrast resolution in the reconstructed activity distributions but also to an improved accuracy in the quantification of activity concentrations in small structures, without leading to an amplification of image noise or the occurrence of image artifacts. The spatial and contrast resolution of PET scans can be markedly improved by the presented image restoration algorithm, which is of special interest for the examination of both patients with brain disorders and small animals.
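Resolution modeling of this kind can be sketched with a minimal MLEM loop (OSEM with a single subset) in which a Gaussian line-spread function is folded into the system matrix, so blur is recovered during reconstruction rather than by post-filtering. This 1D toy is illustrative, not the ECAT EXACT HR(+) model:

```python
import numpy as np

def mlem(y, A, n_iter=200):
    """Minimal MLEM sketch: x_{k+1} = x_k * A^T(y / A x_k) / A^T 1.
    With the LSF built into A, resolution modeling happens inside the loop."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                                 # sensitivity
    for _ in range(n_iter):
        proj = A @ x                                     # forward project
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x *= (A.T @ ratio) / sens                        # multiplicative update
    return x

# Toy 1D system: Gaussian LSF with ~5-bin FWHM folded into A
n = 64
idx = np.arange(n)
sigma = 5.0 / 2.355                                      # FWHM -> sigma
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / sigma) ** 2)
A /= A.sum(axis=0)                                       # normalize columns
truth = np.zeros(n)
truth[20] = 1.0
truth[40] = 2.0
y = A @ truth                                            # blurred "sinogram"
rec = mlem(y, A)
```

The multiplicative update preserves total counts at every iteration, which is one reason this family of algorithms improves resolution without the quantification bias of simple deconvolution filters.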

  4. Performance analysis of the Microsoft Kinect sensor for 2D Simultaneous Localization and Mapping (SLAM) techniques.

    PubMed

    Kamarudin, Kamarulzaman; Mamduh, Syed Muhammad; Shakaff, Ali Yeon Md; Zakaria, Ammar

    2014-12-05

    This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open source SLAM algorithms. The experiments were conducted in two different environments; a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of Windows-based SLAM algorithm with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters are able to improve the map accuracy. However, the limited field of view and range of Kinect's depth sensor often causes the map to be inaccurate, especially in featureless areas, therefore the Kinect sensor is not a direct replacement for a laser scanner, but rather offers a feasible alternative for 2D SLAM tasks.

  5. Performance Analysis of the Microsoft Kinect Sensor for 2D Simultaneous Localization and Mapping (SLAM) Techniques

    PubMed Central

    Kamarudin, Kamarulzaman; Mamduh, Syed Muhammad; Shakaff, Ali Yeon Md; Zakaria, Ammar

    2014-01-01

    This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open-source SLAM algorithms. The experiments were conducted in two different environments: a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real-time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of a Windows-based SLAM algorithm with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters are able to improve the map accuracy. However, the limited field of view and range of the Kinect's depth sensor often cause the map to be inaccurate, especially in featureless areas; therefore, the Kinect sensor is not a direct replacement for a laser scanner, but rather offers a feasible alternative for 2D SLAM tasks. PMID:25490595

  6. Black hole algorithm for determining model parameter in self-potential data

    NASA Astrophysics Data System (ADS)

    Sungkono; Warnana, Dwa Desa

    2018-01-01

    Analysis of self-potential (SP) data is an increasingly popular geophysical method due to its relevance in many cases. However, the inversion of SP data is often highly nonlinear. Consequently, local search algorithms, commonly based on gradient approaches, have often failed to find the global optimum solution in nonlinear problems. The black hole algorithm (BHA) was proposed as a solution to such problems. As the name suggests, the algorithm is modeled on the black hole phenomenon. This paper investigates the application of BHA to the inversion of field and synthetic self-potential (SP) data. The inversion results show that BHA accurately determines model parameters and model uncertainty, indicating that BHA has high potential as an innovative approach to SP data inversion.
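
    For orientation, the black hole algorithm itself can be sketched generically: the best candidate acts as the black hole, all other "stars" are drawn toward it, and stars that cross the event horizon are swallowed and re-emitted at random positions. This is a minimal sketch for generic function minimization, not the authors' SP forward model; all parameter values are illustrative assumptions.

```python
import numpy as np

def black_hole_optimize(f, bounds, n_stars=30, n_iter=200, seed=0):
    """Minimize f over box bounds with the black hole algorithm (BHA)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    stars = rng.uniform(lo, hi, (n_stars, lo.size))
    fit = np.array([f(s) for s in stars])
    for _ in range(n_iter):
        b = int(np.argmin(fit))
        bh, f_bh = stars[b].copy(), fit[b]        # best star acts as the black hole
        stars += rng.random((n_stars, 1)) * (bh - stars)   # attraction toward the black hole
        # event horizon: stars that come too close are swallowed and re-emitted at random
        radius = f_bh / (np.abs(fit).sum() + 1e-12)
        swallowed = np.linalg.norm(stars - bh, axis=1) < radius
        stars[swallowed] = rng.uniform(lo, hi, (int(swallowed.sum()), lo.size))
        stars[0] = bh                              # keep the black hole itself (elitism)
        np.clip(stars, lo, hi, out=stars)
        fit = np.array([f(s) for s in stars])
    b = int(np.argmin(fit))
    return stars[b], fit[b]
```

    In an SP application, f would be the data misfit between observed and modeled self-potential anomalies.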

  7. 3-D CSEM data inversion algorithm based on simultaneously active multiple transmitters concept

    NASA Astrophysics Data System (ADS)

    Dehiya, Rahul; Singh, Arun; Gupta, Pravin Kumar; Israil, Mohammad

    2017-05-01

    We present an algorithm for efficient 3-D inversion of marine controlled-source electromagnetic data. The efficiency is achieved by exploiting the redundancy in the data. The data redundancy is reduced by compressing the data through stacking of the responses of transmitters which are in close proximity. This stacking is equivalent to synthesizing the data as if the multiple transmitters were simultaneously active. The redundancy in data, arising due to close transmitter spacing, has been studied through singular value analysis of the Jacobian formed in 1-D inversion. This study reveals that the transmitter spacing of 100 m, typically used in marine data acquisition, does result in redundancy in the data. In the proposed algorithm, the data are compressed through stacking, which leads to both computational advantage and reduction in noise. The performance of the algorithm for noisy data is demonstrated through studies on two types of noise, viz., uncorrelated additive noise and correlated non-additive noise. It is observed that in the case of uncorrelated additive noise, up to a moderately high (10 percent) noise level the algorithm addresses the noise as effectively as the traditional full data inversion. However, when the noise level in the data is high (20 percent), the algorithm outperforms the traditional full data inversion in terms of data misfit. Similar results are obtained in the case of correlated non-additive noise, where the algorithm again performs better when the noise level is high. The inversion results of a real field data set are also presented to demonstrate the robustness of the algorithm. The significant computational advantage in all cases presented makes this algorithm a better choice.
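
    The noise-suppression argument for stacking can be illustrated numerically: averaging the responses of N closely spaced transmitters, assumed to share (nearly) the same earth response with uncorrelated additive noise, reduces the RMS error by roughly the square root of N while shrinking the data volume N-fold. A toy sketch (the response curve and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
n_tx, n_rx = 8, 50
# closely spaced transmitters are assumed to share (nearly) the same earth response
clean = np.sin(np.linspace(0.0, 3.0, n_rx))
data = clean + 0.1 * rng.standard_normal((n_tx, n_rx))   # uncorrelated additive noise

# stacking synthesizes one "simultaneously active" transmitter:
# n_tx-fold fewer data rows to invert, and roughly sqrt(n_tx) noise suppression
stacked = data.mean(axis=0)

rms_single = np.sqrt(np.mean((data[0] - clean) ** 2))
rms_stacked = np.sqrt(np.mean((stacked - clean) ** 2))
```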

  8. Overlay improvements using a real time machine learning algorithm

    NASA Astrophysics Data System (ADS)

    Schmitt-Weaver, Emil; Kubis, Michael; Henke, Wolfgang; Slotboom, Daan; Hoogenboom, Tom; Mulkens, Jan; Coogans, Martyn; ten Berge, Peter; Verkleij, Dick; van de Mast, Frank

    2014-04-01

    While semiconductor manufacturing is moving towards the 14nm node using immersion lithography, the overlay requirements are tightened to below 5nm. Next to improvements in the immersion scanner platform, enhancements in overlay optimization and process control are needed to enable these low overlay numbers. Whereas conventional overlay control methods address wafer and lot variation autonomously with wafer pre-exposure alignment metrology and post-exposure overlay metrology, we see a need to reduce these variations by correlating more of the TWINSCAN system's sensor data directly to the post-exposure YieldStar metrology in time. In this paper we will present the results of a study on applying a real-time control algorithm based on machine learning technology. Machine learning methods use context and TWINSCAN system sensor data paired with post-exposure YieldStar metrology to recognize generic behavior and train the control system to anticipate this generic behavior. Specific to this study, the data concern immersion scanner context, sensor data, and on-wafer measured overlay data. By making the link between the scanner data and the wafer data we are able to establish a real-time relationship. The result is an inline controller that accounts for small changes in scanner hardware performance over time while picking up subtle lot-to-lot and wafer-to-wafer deviations introduced by wafer processing.

  9. Evaluation of GMI and PMI diffeomorphic‐based demons algorithms for aligning PET and CT Images

    PubMed Central

    Yang, Juan; Zhang, You; Yin, Yong

    2015-01-01

    Fusion of anatomic information in computed tomography (CT) and functional information in F18‐FDG positron emission tomography (PET) is crucial for accurate differentiation of tumor from benign masses, designing radiotherapy treatment plans, and staging of cancer. Although current PET and CT images can be acquired from a combined F18‐FDG PET/CT scanner, the two acquisitions are scanned separately and take a long time, which may induce global and local positional errors caused by respiratory motion or organ peristalsis. Registration (alignment) of whole‐body PET and CT images is therefore a prerequisite for their meaningful fusion. The purpose of this study was to assess the performance of two multimodal registration algorithms for aligning PET and CT images. The proposed gradient of mutual information (GMI)‐based demons algorithm, which incorporated the GMI between two images as an external force to facilitate the alignment, was compared with the point‐wise mutual information (PMI) diffeomorphic‐based demons algorithm, whose external force was modified by replacing the image intensity difference in the diffeomorphic demons algorithm with the PMI to make it appropriate for multimodal image registration. Eight patients with esophageal cancer(s) were enrolled in this IRB‐approved study. Whole‐body PET and CT images were acquired from a combined F18‐FDG PET/CT scanner for each patient. The modified Hausdorff distance (dMH) was used to evaluate the registration accuracy of the two algorithms. Over all patients, the mean values and standard deviations (SDs) of dMH were 6.65 (± 1.90) voxels and 6.01 (± 1.90) voxels after the GMI‐based demons and the PMI diffeomorphic‐based demons registration algorithms, respectively. Preliminary results on oncological patients showed that the respiratory motion and organ peristalsis in PET/CT esophageal images could not be neglected, although a combined F18‐FDG PET/CT scanner was used for image acquisition. The PMI diffeomorphic‐based demons algorithm was more accurate than the GMI‐based demons algorithm in registering PET/CT esophageal images. PACS numbers: 87.57.nj, 87.57.Q‐, 87.57.uk PMID:26218993

  10. Evaluation of GMI and PMI diffeomorphic-based demons algorithms for aligning PET and CT Images.

    PubMed

    Yang, Juan; Wang, Hongjun; Zhang, You; Yin, Yong

    2015-07-08

    Fusion of anatomic information in computed tomography (CT) and functional information in 18F-FDG positron emission tomography (PET) is crucial for accurate differentiation of tumor from benign masses, designing radiotherapy treatment plans, and staging of cancer. Although current PET and CT images can be acquired from a combined 18F-FDG PET/CT scanner, the two acquisitions are scanned separately and take a long time, which may induce global and local positional errors caused by respiratory motion or organ peristalsis. Registration (alignment) of whole-body PET and CT images is therefore a prerequisite for their meaningful fusion. The purpose of this study was to assess the performance of two multimodal registration algorithms for aligning PET and CT images. The proposed gradient of mutual information (GMI)-based demons algorithm, which incorporated the GMI between two images as an external force to facilitate the alignment, was compared with the point-wise mutual information (PMI) diffeomorphic-based demons algorithm, whose external force was modified by replacing the image intensity difference in the diffeomorphic demons algorithm with the PMI to make it appropriate for multimodal image registration. Eight patients with esophageal cancer(s) were enrolled in this IRB-approved study. Whole-body PET and CT images were acquired from a combined 18F-FDG PET/CT scanner for each patient. The modified Hausdorff distance (d(MH)) was used to evaluate the registration accuracy of the two algorithms. Over all patients, the mean values and standard deviations (SDs) of d(MH) were 6.65 (± 1.90) voxels and 6.01 (± 1.90) voxels after the GMI-based demons and the PMI diffeomorphic-based demons registration algorithms, respectively. Preliminary results on oncological patients showed that the respiratory motion and organ peristalsis in PET/CT esophageal images could not be neglected, although a combined 18F-FDG PET/CT scanner was used for image acquisition. The PMI diffeomorphic-based demons algorithm was more accurate than the GMI-based demons algorithm in registering PET/CT esophageal images.
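
    The modified Hausdorff distance used as the evaluation metric above is straightforward to compute: it is the larger of the two directed mean nearest-neighbour distances between the point sets. A small sketch (generic point sets, not the authors' segmented anatomy):

```python
import numpy as np

def modified_hausdorff(A, B):
    """Modified Hausdorff distance between two point sets (rows are points)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    d_ab = D.min(axis=1).mean()   # mean distance from each point of A to its nearest in B
    d_ba = D.min(axis=0).mean()   # and vice versa
    return max(d_ab, d_ba)
```

    Unlike the classical Hausdorff distance, which takes the worst-case point, the mean makes the metric far less sensitive to single outlier points.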

  11. Vibration compensation for high speed scanning tunneling microscopy

    NASA Astrophysics Data System (ADS)

    Croft, D.; Devasia, S.

    1999-12-01

    Low scanning speed is a fundamental limitation of scanning tunneling microscopes (STMs), making real-time imaging of surface processes and nanofabrication impractical. The effective scanning bandwidth is currently limited by the smallest resonant vibrational frequency of the piezo-based positioning system (i.e., scanner) used in the STM. Due to this limitation, the acquired images are distorted during high-speed operation. In practice, the achievable scan rates are much less than 1/10th of the resonant vibrational frequency of the STM scanner. To alleviate the scanning-speed limitation, this article describes an inversion-based approach that compensates for the structural vibrations in the scanner and thus allows STM imaging at high scanning speeds (relative to the smallest resonant vibrational frequency). Experimental results are presented to show the increase in scanning speeds achievable by applying the vibration compensation methods.
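
    Inversion-based feedforward of this kind can be sketched in the frequency domain: the desired trajectory is divided by the scanner's transfer function to obtain the compensating input, so the scanner output reproduces the trajectory despite its resonance. The second-order resonance model below (500 Hz, damping 0.05) is a hypothetical stand-in for a real STM scanner, not the authors' identified dynamics.

```python
import numpy as np

# Hypothetical scanner model: G(s) = wn^2 / (s^2 + 2*z*wn*s + wn^2)
wn = 2 * np.pi * 500.0     # assumed smallest resonant frequency (500 Hz)
z = 0.05                   # assumed light damping
n, dt = 4096, 1e-5
t = np.arange(n) * dt
y_des = np.sin(2 * np.pi * 100.0 * t)          # desired scan trajectory (illustrative)

w = 2 * np.pi * np.fft.rfftfreq(n, dt)
s = 1j * w
G = wn**2 / (s**2 + 2 * z * wn * s + wn**2)    # scanner frequency response

u = np.fft.irfft(np.fft.rfft(y_des) / G, n)    # inversion-based feedforward input
y = np.fft.irfft(np.fft.rfft(u) * G, n)        # simulated scanner output: tracks y_des
y_naive = np.fft.irfft(np.fft.rfft(y_des) * G, n)   # uncompensated drive, for comparison
```

    The uncompensated output is distorted in amplitude and phase even well below resonance, while the inverted input cancels the dynamics exactly in this noiseless linear model; in practice the inverse must be regularized near resonances and beyond the model's valid bandwidth.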

  12. Optimization of the Inverse Algorithm for Estimating the Optical Properties of Biological Materials Using Spatially-resolved Diffuse Reflectance Technique

    USDA-ARS?s Scientific Manuscript database

    Determination of the optical properties from intact biological materials based on diffusion approximation theory is a complicated inverse problem, and it requires proper implementation of inverse algorithm, instrumentation, and experiment. This work was aimed at optimizing the procedure of estimatin...

  13. Comparison result of inversion of gravity data of a fault by particle swarm optimization and Levenberg-Marquardt methods.

    PubMed

    Toushmalani, Reza

    2013-01-01

    The purpose of this study was to compare the performance of two methods for gravity inversion of a fault. The first method, particle swarm optimization (PSO), is a heuristic global optimization algorithm based on swarm intelligence, inspired by research on the flocking behavior of birds and fish. The second method, the Levenberg-Marquardt (LM) algorithm, is an approximation to Newton's method, also used for training artificial neural networks (ANNs). In this paper we first discuss the gravity field of a fault, then describe the PSO and LM algorithms and present their application to solving the inverse problem of a fault. Importantly, the parameters for the algorithms are given for the individual tests. The inverse solution reveals that the fault model parameters agree quite well with the known results. Better agreement between the predicted model anomaly and the observed gravity anomaly was found with the PSO method than with the LM method.
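
    For reference, the PSO update itself is compact: each particle's velocity blends inertia with attraction toward its personal best and the global best. A minimal sketch minimizing a generic objective (the inertia and acceleration constants are common textbook defaults, not the values used in the paper):

```python
import numpy as np

def pso(f, bounds, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over box bounds with a basic global-best PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, (n, lo.size))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n, lo.size))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # inertia + attraction
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)]
    return g, pval.min()
```

    In the gravity application, f would be the misfit between the observed anomaly and the anomaly predicted by the fault model parameters.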

  14. 2D joint inversion of CSAMT and magnetic data based on cross-gradient theory

    NASA Astrophysics Data System (ADS)

    Wang, Kun-Peng; Tan, Han-Dong; Wang, Tao

    2017-06-01

    A two-dimensional forward modeling and inversion algorithm for the controlled-source audio-frequency magnetotelluric (CSAMT) method is developed to invert data in the entire region (near, transition, and far) and deal with the effects of artificial sources. First, a regularization factor is introduced in the 2D magnetic inversion, and the magnetic susceptibility is updated in logarithmic form so that the inverted magnetic susceptibility is always positive. Second, the joint inversion of the CSAMT and magnetic methods is completed with the introduction of the cross gradient. By searching for the weight of the cross-gradient term in the objective function, the mutual influence between two different physical properties at different locations is avoided. Model tests show that the joint inversion based on cross-gradient theory offers better results than the single-method inversion. The 2D forward and inverse algorithm for CSAMT with a source can effectively deal with artificial sources and ensures the reliability of the final joint inversion algorithm.
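
    The cross-gradient function that couples the two models is t = ∇m₁ × ∇m₂, which vanishes wherever the two property fields are structurally similar (parallel gradients), so penalizing it enforces structural consistency without forcing a petrophysical relationship. A small sketch on a 2D grid (a generic finite-difference approximation, not the authors' discretization):

```python
import numpy as np

def cross_gradient(m1, m2, dx=1.0, dz=1.0):
    """2D cross-gradient t = dm1/dx * dm2/dz - dm1/dz * dm2/dx on a regular grid."""
    g1z, g1x = np.gradient(m1, dz, dx)   # axis 0 assumed to be depth z
    g2z, g2x = np.gradient(m2, dz, dx)
    return g1x * g2z - g1z * g2x
```

    For two structurally identical models (e.g. m2 an affine function of m1) the gradients are parallel and t is zero everywhere; for structurally unrelated models it is not.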

  15. Influence of Co-57 and CT Transmission Measurements on the Quantification Accuracy and Partial Volume Effect of a Small Animal PET Scanner.

    PubMed

    Mannheim, Julia G; Schmid, Andreas M; Pichler, Bernd J

    2017-12-01

    Non-invasive in vivo positron emission tomography (PET) provides high detection sensitivity in the nano- to picomolar range and, in addition to other advantages, the possibility to absolutely quantify the acquired data. The present study focuses on the comparison of transmission data acquired with an X-ray computed tomography (CT) scanner or a Co-57 source for the Inveon small animal PET scanner (Siemens Healthcare, Knoxville, TN, USA), and determines their influence on the quantification accuracy and partial volume effect (PVE). A special focus was the impact of the performed calibration on the quantification accuracy. Phantom measurements were carried out to determine the quantification accuracy, the influence of the object size on the quantification, and the PVE for different sphere sizes, along the field of view, and for different contrast ratios. An influence of the emission activity on the Co-57 transmission measurements was discovered (deviations of measured to true activity of up to 24.06 %), whereas no influence of the emission activity on the CT attenuation correction was identified (deviations <3 % for measured to true activity). The quantification accuracy was substantially influenced by the applied calibration factor and by the object size. The PVE depended on the sphere size, the position within the field of view, the reconstruction and correction algorithms, and the count statistics. Depending on the reconstruction algorithm, only ∼30-40 % of the true activity within a small sphere could be resolved. The iterative 3D reconstruction algorithms yielded substantially higher recovery values than the analytical and 2D iterative reconstruction algorithms (up to 70.46 % and 80.82 % recovery for the smallest and largest sphere, respectively, using iterative 3D reconstruction algorithms). The transmission measurement (CT or Co-57 source) used to correct for attenuation did not severely influence the PVE. The analysis of the quantification accuracy and the PVE revealed an influence of the object size, the reconstruction algorithm, and the applied corrections. In particular, the influence of the emission activity during a transmission measurement performed with a Co-57 source must be considered. To obtain comparable results, including across different scanner configurations, standardization of the acquisition (imaging parameters as well as applied reconstruction and correction protocols) is necessary.

  16. Analysis of the Performance of a Laser Scanner for Predictive Automotive Applications

    NASA Astrophysics Data System (ADS)

    Zeisler, J.; Maas, H.-G.

    2015-08-01

    In this paper we evaluate the use of a laser scanner for future advanced driver assistance systems. We focus on the important task of predicting the target vehicle for longitudinal ego-vehicle control. Our motivation is to decrease the reaction time of existing systems during cut-in maneuvers of other traffic participants. A state-of-the-art laser scanner, the Ibeo Scala B2 R, is presented, along with its sensing characteristics and the subsequent high-level object data output. We evaluate the performance of the scanner for object tracking with the help of a GPS real-time kinematics system on a test track. Two designed scenarios cover phases with constant distance and velocity as well as dynamic motion of the vehicles. We provide the results for the scanner's position and velocity errors and, furthermore, review our algorithm for target vehicle prediction. Finally, we show the potential of the laser scanner given the estimated error, which leads to a decrease of up to 40% in reaction time under ideal conditions.

  17. Large area high-speed metrology SPM system.

    PubMed

    Klapetek, P; Valtr, M; Picco, L; Payton, O D; Martinek, J; Yacoot, A; Miles, M

    2015-02-13

    We present a large area high-speed measuring system capable of rapidly generating nanometre resolution scanning probe microscopy data over mm(2) regions. The system combines a slow moving but accurate large area XYZ scanner with a very fast but less accurate small area XY scanner. This arrangement enables very large areas to be scanned by stitching together the small, rapidly acquired images from the fast XY scanner while simultaneously moving the slow XYZ scanner across the region of interest. In order to successfully merge the image sequences together, two software approaches for calibrating the data from the fast scanner are described. The first utilizes the low uncertainty interferometric sensors of the XYZ scanner while the second implements a genetic algorithm with multiple parameter fitting during the data merging step of the image stitching process. The basic uncertainty components related to these high-speed measurements are also discussed. Both techniques are shown to successfully enable high-resolution, large area images to be generated at least an order of magnitude faster than with a conventional atomic force microscope.

  18. Large area high-speed metrology SPM system

    NASA Astrophysics Data System (ADS)

    Klapetek, P.; Valtr, M.; Picco, L.; Payton, O. D.; Martinek, J.; Yacoot, A.; Miles, M.

    2015-02-01

    We present a large area high-speed measuring system capable of rapidly generating nanometre resolution scanning probe microscopy data over mm2 regions. The system combines a slow moving but accurate large area XYZ scanner with a very fast but less accurate small area XY scanner. This arrangement enables very large areas to be scanned by stitching together the small, rapidly acquired images from the fast XY scanner while simultaneously moving the slow XYZ scanner across the region of interest. In order to successfully merge the image sequences together, two software approaches for calibrating the data from the fast scanner are described. The first utilizes the low uncertainty interferometric sensors of the XYZ scanner while the second implements a genetic algorithm with multiple parameter fitting during the data merging step of the image stitching process. The basic uncertainty components related to these high-speed measurements are also discussed. Both techniques are shown to successfully enable high-resolution, large area images to be generated at least an order of magnitude faster than with a conventional atomic force microscope.
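
    One common way to register overlapping scanner tiles during stitching is phase correlation, which recovers the integer pixel offset between two images from the normalized cross-power spectrum. The sketch below is a generic illustration of that idea, not the interferometric or genetic-algorithm calibration described above:

```python
import numpy as np

def stitch_offset(ref, img):
    """Estimate the integer (row, col) shift of img relative to ref by phase correlation."""
    X = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(X / (np.abs(X) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map the correlation peak to a signed shift
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

    Because the cross-power spectrum is normalized, the correlation surface has a sharp delta-like peak at the true offset, which makes the estimate robust to uniform intensity differences between tiles.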

  19. A model reduction approach to numerical inversion for a parabolic partial differential equation

    NASA Astrophysics Data System (ADS)

    Borcea, Liliana; Druskin, Vladimir; Mamonov, Alexander V.; Zaslavsky, Mikhail

    2014-12-01

    We propose a novel numerical inversion algorithm for the coefficients of parabolic partial differential equations, based on model reduction. The study is motivated by the application of controlled source electromagnetic exploration, where the unknown is the subsurface electrical resistivity and the data are time resolved surface measurements of the magnetic field. The algorithm presented in this paper considers inversion in one and two dimensions. The reduced model is obtained with rational interpolation in the frequency (Laplace) domain and a rational Krylov subspace projection method. It amounts to a nonlinear mapping from the function space of the unknown resistivity to the small dimensional space of the parameters of the reduced model. We use this mapping as a nonlinear preconditioner for the Gauss-Newton iterative solution of the inverse problem. The advantage of the inversion algorithm is twofold. First, the nonlinear preconditioner resolves most of the nonlinearity of the problem. Thus the iterations are less likely to get stuck in local minima and the convergence is fast. Second, the inversion is computationally efficient because it avoids repeated accurate simulations of the time-domain response. We study the stability of the inversion algorithm for various rational Krylov subspaces, and assess its performance with numerical experiments.

  20. Rotational magneto-acousto-electric tomography (MAET): theory and experimental validation

    PubMed Central

    Kunyansky, L; Ingram, C P; Witte, R S

    2017-01-01

    We present a novel two-dimensional (2D) MAET scanner, with a rotating object of interest and two fixed pairs of electrodes. Such an acquisition scheme, with our novel reconstruction techniques, recovers the boundaries of regions of constant conductivity uniformly well, regardless of their orientation. We also present a general image reconstruction algorithm for 2D MAET in a circular chamber with point-like electrodes immersed in the saline surrounding the object. An alternative linearized reconstruction procedure is developed, suitable for recovering material interfaces (boundaries) when a non-ideal piezoelectric transducer is used for acoustic excitation. The operation of the scanner and the linearized reconstruction algorithm is demonstrated using several phantoms made of high-contrast materials and a biological sample. PMID:28323633

  1. Sliding-mode control combined with improved adaptive feedforward for wafer scanner

    NASA Astrophysics Data System (ADS)

    Li, Xiaojie; Wang, Yiguang

    2018-03-01

    In this paper, a sliding-mode control method combined with improved adaptive feedforward is proposed for wafer scanner to improve the tracking performance of the closed-loop system. Particularly, In addition to the inverse model, the nonlinear force ripple effect which may degrade the tracking accuracy of permanent magnet linear motor (PMLM) is considered in the proposed method. The dominant position periodicity of force ripple is determined by using the Fast Fourier Transform (FFT) analysis for experimental data and the improved feedforward control is achieved by the online recursive least-squares (RLS) estimation of the inverse model and the force ripple. The improved adaptive feedforward is given in a general form of nth-order model with force ripple effect. This proposed method is motivated by the motion controller design of the long-stroke PMLM and short-stroke voice coil motor for wafer scanner. The stability of the closed-loop control system and the convergence of the motion tracking are guaranteed by the proposed sliding-mode feedback and adaptive feedforward methods theoretically. Comparative experiments on a precision linear motion platform can verify the correctness and effectiveness of the proposed method. The experimental results show that comparing to traditional method the proposed one has better performance of rapidity and robustness, especially for high speed motion trajectory. And, the improvements on both tracking accuracy and settling time can be achieved.
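
    The online RLS estimation at the heart of such adaptive feedforward can be sketched generically: given regressor vectors (which, for this kind of application, might contain inverse-model terms and sine/cosine force-ripple harmonics of position — an assumption on our part, not the paper's exact parameterization), RLS tracks the parameter vector recursively with a forgetting factor:

```python
import numpy as np

def rls(Phi, y, lam=0.99, delta=1000.0):
    """Recursive least squares with forgetting factor lam for y_k = phi_k . theta."""
    n = Phi.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)                        # large initial covariance (weak prior)
    for phi, yk in zip(Phi, y):
        k = P @ phi / (lam + phi @ P @ phi)      # gain vector
        theta = theta + k * (yk - phi @ theta)   # prediction-error correction
        P = (P - np.outer(k, phi @ P)) / lam     # covariance update
    return theta
```

    The forgetting factor below 1 lets the estimate track slow drifts in the plant, which is why the scheme suits online feedforward adaptation.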

  2. Assessment of alveolar bone marrow fat content using 15 T MRI.

    PubMed

    Cortes, Arthur Rodriguez Gonzalez; Cohen, Ouri; Zhao, Ming; Aoki, Eduardo Massaharu; Ribeiro, Rodrigo Alves; Abu Nada, Lina; Costa, Claudio; Arita, Emiko Saito; Tamimi, Faleh; Ackerman, Jerome L

    2018-03-01

    Bone marrow fat is inversely correlated with bone mineral density. The aim of this study is to present a method to quantify alveolar bone marrow fat content using a 15 T magnetic resonance imaging (MRI) scanner. A 15 T MRI scanner with a 13-mm inner diameter loop-gap radiofrequency coil was used to scan seven 3-mm diameter alveolar bone biopsy specimens. A 3-D gradient-echo relaxation time (T1)-weighted pulse sequence was chosen to obtain images. All images were obtained with a voxel size of (58 µm)³, sufficient to resolve trabecular spaces. Automated measurements of bone marrow fat volume and the derived bone volume fraction (BV/TV) were calculated. Results were compared with actual BV/TV obtained from micro-computed tomography (CT) scans. Mean fat tissue volume was 20.1 ± 11%. There was a significant, strong inverse correlation between fat tissue volume and BV/TV (r = -0.68; P = .045). Furthermore, there was strong agreement between BV/TV derived from MRI and that obtained with micro-CT (intraclass correlation coefficient = 0.92; P = .001). Bone marrow fat of small alveolar bone biopsy specimens can be quantified with sufficient spatial resolution using an ultra-high-field MRI scanner and a T1-weighted pulse sequence. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. NLSE: Parameter-Based Inversion Algorithm

    NASA Astrophysics Data System (ADS)

    Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.; Knopp, Jeremy S.

    Chapter 11 introduced us to the notion of an inverse problem and gave us some examples of the value of this idea to the solution of realistic industrial problems. The basic inversion algorithm described in Chap. 11 was based upon the Gauss-Newton theory of nonlinear least-squares estimation and is called NLSE in this book. In this chapter we will develop the mathematical background of this theory more fully, because this algorithm will be the foundation of inverse methods and their applications during the remainder of this book. We hope, thereby, to introduce the reader to the application of sophisticated mathematical concepts to engineering practice without introducing excessive mathematical sophistication.
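
    The Gauss-Newton iteration underlying nonlinear least-squares estimation can be stated in a few lines: linearize the residual, solve the normal equations, and repeat. The sketch below is a generic illustration of that iteration, not the book's NLSE eddy-current implementation:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, n_iter=20):
    """Gauss-Newton: x_{k+1} = x_k - (J^T J)^{-1} J^T r(x_k)."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        r = residual(x)                            # current misfit vector
        J = jacobian(x)                            # sensitivity of r w.r.t. x
        x = x - np.linalg.solve(J.T @ J, J.T @ r)  # normal-equations step
    return x
```

    For a linear residual the method converges in a single step; for mildly nonlinear problems it converges rapidly near the solution, which is what makes it a practical foundation for model-based inversion. In practice a Levenberg-Marquardt damping term is often added to J^T J to stabilize the step far from the solution.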

  4. Scanner OPC signatures: automatic vendor-to-vendor OPE matching

    NASA Astrophysics Data System (ADS)

    Renwick, Stephen P.

    2009-03-01

    As 193nm lithography continues to be stretched and the k1 factor decreases, optical proximity correction (OPC) has become a vital part of the lithographer's tool kit. Unfortunately, as is now well known, the design variations of lithographic scanners from different vendors cause them to have slightly different optical-proximity effect (OPE) behavior, meaning that they print features through pitch in distinct ways. This in turn means that their response to OPC is not the same, and that an OPC solution designed for a scanner from Company 1 may or may not work properly on a scanner from Company 2. Since OPC is not inexpensive, this causes trouble for chipmakers using more than one brand of scanner. Clearly a scanner-matching procedure is needed to meet this challenge. Previously, automatic matching has only been reported for scanners of different tool generations from the same manufacturer. In contrast, scanners from different companies have been matched using expert tuning and adjustment techniques, frequently requiring laborious test exposures. Automatic matching between scanners from Company 1 and Company 2 has remained an unsettled problem. We have recently solved this problem and introduce a novel method to perform the automatic matching. The success in meeting this challenge required three enabling factors. First, we recognized the strongest drivers of OPE mismatch and are thereby able to reduce the information needed about a tool from another supplier to that information readily available from all modern scanners. Second, we developed a means of reliably identifying the scanners' optical signatures, minimizing dependence on process parameters that can cloud the issue. Third, we carefully employed standard statistical techniques, checking for robustness of the algorithms used and maximizing efficiency.
The result is an automatic software system that can predict an OPC matching solution for scanners from different suppliers without requiring expert intervention.

  5. Comparison of statistical sampling methods with ScannerBit, the GAMBIT scanning module

    NASA Astrophysics Data System (ADS)

    Martinez, Gregory D.; McKay, James; Farmer, Ben; Scott, Pat; Roebber, Elinore; Putze, Antje; Conrad, Jan

    2017-11-01

    We introduce ScannerBit, the statistics and sampling module of the public, open-source global fitting framework GAMBIT. ScannerBit provides a standardised interface to different sampling algorithms, enabling the use and comparison of multiple computational methods for inferring profile likelihoods, Bayesian posteriors, and other statistical quantities. The current version offers random, grid, raster, nested sampling, differential evolution, Markov Chain Monte Carlo (MCMC) and ensemble Monte Carlo samplers. We also announce the release of a new standalone differential evolution sampler, Diver, and describe its design, usage and interface to ScannerBit. We subject Diver and three other samplers (the nested sampler MultiNest, the MCMC GreAT, and the native ScannerBit implementation of the ensemble Monte Carlo algorithm T-Walk) to a battery of statistical tests. For this we use a realistic physical likelihood function, based on the scalar singlet model of dark matter. We examine the performance of each sampler as a function of its adjustable settings, and the dimensionality of the sampling problem. We evaluate performance on four metrics: optimality of the best fit found, completeness in exploring the best-fit region, number of likelihood evaluations, and total runtime. For Bayesian posterior estimation at high resolution, T-Walk provides the most accurate and timely mapping of the full parameter space. For profile likelihood analysis in less than about ten dimensions, we find that Diver and MultiNest score similarly in terms of best fit and speed, outperforming GreAT and T-Walk; in ten or more dimensions, Diver substantially outperforms the other three samplers on all metrics.
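
    For orientation, the classic rand/1/bin differential evolution strategy that samplers of this family implement can be sketched generically; this is a textbook illustration of the algorithm class, not Diver's actual code or interface:

```python
import numpy as np

def differential_evolution(f, bounds, n=30, iters=200, F=0.7, CR=0.9, seed=0):
    """Minimize f over box bounds with DE/rand/1/bin."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    pop = rng.uniform(lo, hi, (n, lo.size))
    fit = np.array([f(p) for p in pop])
    for _ in range(iters):
        for i in range(n):
            others = np.array([j for j in range(n) if j != i])
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)       # differential mutation
            cross = rng.random(lo.size) < CR                # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft < fit[i]:                                 # greedy selection
                pop[i], fit[i] = trial, ft
    b = int(np.argmin(fit))
    return pop[b], fit[b]
```

    In a global-fit setting, f would be the negative log-likelihood over the model's parameter space; the population-based update is what makes DE effective at mapping multimodal best-fit regions.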

  6. Angular dependence of multiangle dynamic light scattering for particle size distribution inversion using a self-adapting regularization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Lei; Yu, Long; Yang, Kecheng; Li, Wei; Li, Kai; Xia, Min

    2018-04-01

    The multiangle dynamic light scattering (MDLS) technique can estimate particle size distributions (PSDs) better than single-angle dynamic light scattering. However, determining the inversion range, angular weighting coefficients, and scattering angle combination is difficult but fundamental to the reconstruction of both unimodal and multimodal distributions. In this paper, we propose a self-adapting regularization method called the wavelet iterative recursion nonnegative Tikhonov-Phillips-Twomey (WIRNNT-PT) algorithm. This algorithm combines a wavelet multiscale strategy with an appropriate inversion method and can self-adaptively resolve several key issues, including the choices of the weighting coefficients, the inversion range, and the optimal inversion method from two regularization algorithms, for estimating the PSD from MDLS measurements. In addition, the angular dependence of MDLS for estimating the PSDs of polymeric latexes is thoroughly analyzed. The dependence of the results on the number and range of measurement angles was analyzed in depth to identify the optimal scattering angle combination. Numerical simulations and experimental results for unimodal and multimodal distributions are presented to demonstrate both the validity of the WIRNNT-PT algorithm and the angular dependence of MDLS, and show that the proposed algorithm with a six-angle analysis in the 30-130° range can be satisfactorily applied to retrieve PSDs from MDLS measurements.
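The non-negative Tikhonov step at the core of such regularized PSD inversions can be sketched as follows; the kernel, size grid, noise level, and regularization parameter below are illustrative stand-ins, not the true dynamic-light-scattering forward model or the WIRNNT-PT wavelet machinery:

```python
import numpy as np
from scipy.optimize import nnls

# Minimal non-negative Tikhonov inversion, the kind of building block
# that self-adapting regularization schemes iterate on. All quantities
# here are made up for illustration.
rng = np.random.default_rng(0)
n = 50
s = np.linspace(0.0, 1.0, n)                          # normalized size grid
A = np.exp(-30.0 * (s[None, :] - s[:, None]) ** 2)    # generic smoothing kernel
true_psd = np.exp(-0.5 * ((s - 0.5) / 0.08) ** 2)     # unimodal PSD
b = A @ true_psd + 0.01 * rng.standard_normal(n)      # noisy data

lam = 0.1   # regularization parameter (chosen adaptively in practice)
A_aug = np.vstack([A, lam * np.eye(n)])
b_aug = np.concatenate([b, np.zeros(n)])
# Solves min ||A x - b||^2 + lam^2 ||x||^2 subject to x >= 0.
x_hat, _ = nnls(A_aug, b_aug)
```

Appending `lam * I` rows to the design matrix is the standard trick for turning a Tikhonov-regularized non-negative problem into a single `nnls` call.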

  7. Fully Automated Detection of Cloud and Aerosol Layers in the CALIPSO Lidar Measurements

    NASA Technical Reports Server (NTRS)

    Vaughan, Mark A.; Powell, Kathleen A.; Kuehn, Ralph E.; Young, Stuart A.; Winker, David M.; Hostetler, Chris A.; Hunt, William H.; Liu, Zhaoyan; McGill, Matthew J.; Getzewich, Brian J.

    2009-01-01

    Accurate knowledge of the vertical and horizontal extent of clouds and aerosols in the earth's atmosphere is critical in assessing the planet's radiation budget and for advancing human understanding of climate change issues. To retrieve this fundamental information from the elastic backscatter lidar data acquired during the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) mission, a selective, iterated boundary location (SIBYL) algorithm has been developed and deployed. SIBYL accomplishes its goals by integrating an adaptive context-sensitive profile scanner into an iterated multiresolution spatial averaging scheme. This paper provides an in-depth overview of the architecture and performance of the SIBYL algorithm. It begins with a brief review of the theory of target detection in noise-contaminated signals, and an enumeration of the practical constraints levied on the retrieval scheme by the design of the lidar hardware, the geometry of a space-based remote sensing platform, and the spatial variability of the measurement targets. Detailed descriptions are then provided for both the adaptive threshold algorithm used to detect features of interest within individual lidar profiles and the fully automated multiresolution averaging engine within which this profile scanner functions. The resulting fusion of profile scanner and averaging engine is specifically designed to optimize the trade-offs between the widely varying signal-to-noise ratio of the measurements and the disparate spatial resolutions of the detection targets. Throughout the paper, specific algorithm performance details are illustrated using examples drawn from the existing CALIPSO dataset. Overall performance is established by comparisons to existing layer height distributions obtained by other airborne and space-based lidars.

  8. MRI-assisted PET motion correction for neurologic studies in an integrated MR-PET scanner.

    PubMed

    Catana, Ciprian; Benner, Thomas; van der Kouwe, Andre; Byars, Larry; Hamm, Michael; Chonde, Daniel B; Michel, Christian J; El Fakhri, Georges; Schmand, Matthias; Sorensen, A Gregory

    2011-01-01

    Head motion is difficult to avoid in long PET studies, degrading the image quality and offsetting the benefit of using a high-resolution scanner. As a potential solution in an integrated MR-PET scanner, the simultaneously acquired MRI data can be used for motion tracking. In this work, a novel algorithm for data processing and rigid-body motion correction (MC) for the MRI-compatible BrainPET prototype scanner is described, and proof-of-principle phantom and human studies are presented. To account for motion, the PET prompt and random coincidences and sensitivity data for postnormalization were processed in the line-of-response (LOR) space according to the MRI-derived motion estimates. The processing time on the standard BrainPET workstation is approximately 16 s for each motion estimate. After rebinning in the sinogram space, the motion corrected data were summed, and the PET volume was reconstructed using the attenuation and scatter sinograms in the reference position. The accuracy of the MC algorithm was first tested using a Hoffman phantom. Next, human volunteer studies were performed, and motion estimates were obtained using 2 high-temporal-resolution MRI-based motion-tracking techniques. After accounting for the misalignment between the 2 scanners, perfectly coregistered MRI and PET volumes were reproducibly obtained. The MRI output gates inserted into the PET list-mode allow the temporal correlation of the 2 datasets within 0.2 ms. The Hoffman phantom volume reconstructed by processing the PET data in the LOR space was similar to the one obtained by processing the data using the standard methods and applying the MC in the image space, demonstrating the quantitative accuracy of the procedure. In human volunteer studies, motion estimates were obtained from echo planar imaging and cloverleaf navigator sequences every 3 s and 20 ms, respectively. 
Motion-deblurred PET images, with excellent delineation of specific brain structures, were obtained using these 2 MRI-based estimates. An MRI-based MC algorithm was implemented for an integrated MR-PET scanner. High-temporal-resolution MRI-derived motion estimates (obtained while simultaneously acquiring anatomic or functional MRI data) can be used for PET MC. An MRI-based MC method has the potential to improve PET image quality, increasing its reliability, reproducibility, and quantitative accuracy, and to benefit many neurologic applications.

  9. [Study of inversion and classification of particle size distribution under dependent model algorithm].

    PubMed

    Sun, Xiao-Gang; Tang, Hong; Yuan, Gui-Bin

    2008-05-01

    For the total light scattering particle sizing technique, an inversion and classification method based on the dependent model algorithm was proposed. The measured particle system was inverted simultaneously with different particle distribution functions whose mathematical models were known in advance, and then classified according to the inversion errors. Simulation experiments illustrated that it is feasible to use the inversion errors to determine the particle size distribution. The particle size distribution function was obtained accurately at only three wavelengths in the visible light range with the genetic algorithm, and the inversion results were steady and reliable, which reduced the required number of wavelengths to a minimum and increased the flexibility in the choice of light source. The single-peak distribution inversion error was less than 5% and the bimodal distribution inversion error was less than 10% when 5% stochastic noise was added to the transmission extinction measurement values at two wavelengths. The running time of this method was less than 2 s. The method has the advantages of simplicity, rapidity, and suitability for on-line particle size measurement.

  10. Seismic waveform tomography with shot-encoding using a restarted L-BFGS algorithm.

    PubMed

    Rao, Ying; Wang, Yanghua

    2017-08-17

    In seismic waveform tomography, or full-waveform inversion (FWI), one effective strategy for reducing the computational cost is shot-encoding, which encodes all shots randomly and sums them into one super shot, significantly reducing the number of wavefield simulations in the inversion. However, this process induces instability in the iterative inversion, even when a robust limited-memory BFGS (L-BFGS) algorithm is used. The restarted L-BFGS algorithm proposed here is both stable and efficient. This breakthrough ensures, for the first time, the applicability of advanced FWI methods to three-dimensional seismic field data. In a standard L-BFGS algorithm, if the shot-encoding remains unchanged, it will generate a crosstalk effect between different shots. This crosstalk effect can only be suppressed by employing sufficient randomness in the shot-encoding. Therefore, the implementation of the L-BFGS algorithm is restarted at every segment. Each segment consists of a number of iterations; the first few iterations use an invariant encoding, while the remainder use random re-encoding. This restarted L-BFGS algorithm balances the computational efficiency of shot-encoding, the convergence stability of the L-BFGS algorithm, and the inversion quality characteristic of random encoding in FWI.
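The restart strategy can be sketched on a toy linear "FWI" problem: fresh random shot-encoding weights are drawn at the start of each L-BFGS segment. The sizes, weights, and misfit below are illustrative; the real method wraps a wavefield simulator rather than a matrix product:

```python
import numpy as np
from scipy.optimize import minimize

# Toy illustration of restarted L-BFGS with shot re-encoding. Each
# "shot" is a row of a made-up linear operator G, not a simulated
# wavefield.
rng = np.random.default_rng(0)
n_shots, n_params = 32, 10
G = rng.standard_normal((n_shots, n_params))   # one row per shot
m_true = rng.standard_normal(n_params)
d = G @ m_true                                 # noise-free per-shot data

def encoded_misfit(m, w):
    # Sum all shots into one "super shot" with encoding weights w.
    r = float(w @ (G @ m - d))
    return 0.5 * r * r, (w @ G) * r            # misfit value and gradient

m = np.zeros(n_params)
for segment in range(200):                     # restart L-BFGS every segment
    w = rng.choice([-1.0, 1.0], size=n_shots)  # fresh random encoding
    m = minimize(encoded_misfit, m, args=(w,), jac=True,
                 method="L-BFGS-B", options={"maxiter": 5}).x
```

Each re-encoding changes which combination of residuals the segment minimizes, so the crosstalk of any single encoding averages out over segments and the model drifts toward `m_true`.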

  11. Performance Evaluation of Glottal Inverse Filtering Algorithms Using a Physiologically Based Articulatory Speech Synthesizer

    DTIC Science & Technology

    2017-01-05

    Performance Evaluation of Glottal Inverse Filtering Algorithms Using a Physiologically Based Articulatory Speech Synthesizer Yu-Ren Chien, Daryush...D. Mehta, Member, IEEE, Jón Guðnason, Matías Zañartu, Member, IEEE, and Thomas F. Quatieri, Fellow, IEEE Abstract—Glottal inverse filtering aims to...of inverse filtering performance has been challenging due to the practical difficulty in measuring the true glottal signals while speech signals are

  12. Validation of a three-dimensional body scanner for body composition measures.

    PubMed

    Harbin, Michelle M; Kasak, Alexander; Ostrem, Joseph D; Dengel, Donald R

    2017-12-29

    The accuracy of an infrared three-dimensional (3D) body scanner in determining body composition was compared against hydrostatic weighing (HW), bioelectrical impedance analysis (BIA), and anthropometry. A total of 265 adults (119 males; age = 22.1 ± 2.5 years; body mass index = 24.5 ± 3.9 kg/m²) had their body fat percentage (BF%) estimated from 3D scanning, HW, BIA, skinfolds, and girths. A repeated-measures analysis of variance (ANOVA) indicated significant differences among methods (p < 0.001). Multivariate ANOVA indicated a significant main effect of sex and method (p < 0.001), with a non-significant interaction (p = 0.101). Bonferroni post-hoc comparisons identified that BF% from 3D scanning (18.1 ± 7.8%) was significantly lower than from HW (22.8 ± 8.5%, p < 0.001), BIA (20.1 ± 9.1%, p < 0.001), skinfolds (19.7 ± 9.7%, p < 0.001), and girths (21.2 ± 10.4%, p < 0.001). The 3D scanner decreased in precision with increasing adiposity, potentially resulting from inconsistencies in the 3D scanner's analysis algorithm. A correction factor within the algorithm is required before infrared 3D scanning can be considered valid for measuring BF%.

  13. Data-driven CT protocol review and management—experience from a large academic hospital.

    PubMed

    Zhang, Da; Savage, Cristy A; Li, Xinhua; Liu, Bob

    2015-03-01

    Protocol review plays a critical role in CT quality assurance, but large numbers of protocols and inconsistent protocol names on scanners and in exam records make thorough protocol review formidable. In this investigation, we report on a data-driven cataloging process that can be used to assist in the reviewing and management of CT protocols. We collected lists of scanner protocols, as well as 18 months of recent exam records, for 10 clinical scanners. We developed computer algorithms to automatically deconstruct the protocol names on the scanner and in the exam records into core names and descriptive components. Based on the core names, we were able to group the scanner protocols into a much smaller set of "core protocols," and to easily link exam records with the scanner protocols. We calculated the percentage of usage for each core protocol, from which the most heavily used protocols were identified. From the percentage-of-usage data, we found that, on average, 18, 33, and 49 core protocols per scanner covered 80%, 90%, and 95%, respectively, of all exams. These numbers are one order of magnitude smaller than the typical numbers of protocols that are loaded on a scanner (200-300, as reported in the literature). Duplicated, outdated, and rarely used protocols on the scanners were easily pinpointed in the cataloging process. The data-driven cataloging process can facilitate the task of protocol review. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.
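A minimal sketch of such protocol-name deconstruction might look like the following; the modifier vocabulary and example protocol names are made up for illustration, not the rules or data from the paper:

```python
import re
from collections import Counter

# Hypothetical deconstruction of CT protocol names into a "core name"
# plus descriptive components. MODIFIERS and exam_records are invented.
MODIFIERS = {"w", "wo", "contrast", "routine", "repeat", "peds"}

def core_name(protocol: str) -> str:
    tokens = re.split(r"[\s_/-]+", protocol.lower())
    return " ".join(t for t in tokens if t and t not in MODIFIERS)

exam_records = [
    "CHEST_W-CONTRAST", "Chest wo contrast", "chest routine",
    "HEAD ROUTINE", "Head_Repeat", "Abdomen peds",
]
usage = Counter(core_name(p) for p in exam_records)
total = sum(usage.values())
shares = {name: n / total for name, n in usage.items()}  # percentage of usage
```

Grouping by core name is what lets thousands of inconsistently named exam records collapse onto a short list of core protocols whose usage shares can then be ranked.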

  14. A quantitative comparison of soil moisture inversion algorithms

    NASA Technical Reports Server (NTRS)

    Zyl, J. J. van; Kim, Y.

    2001-01-01

    This paper compares the performance of four bare surface radar soil moisture inversion algorithms in the presence of measurement errors. The particular errors considered include calibration errors, system thermal noise, local topography and vegetation cover.

  15. Three-Dimensional Accuracy of Facial Scan for Facial Deformities in Clinics: A New Evaluation Method for Facial Scanner Accuracy.

    PubMed

    Zhao, Yi-Jiao; Xiong, Yu-Xue; Wang, Yong

    2017-01-01

    In this study, the practical accuracy (PA) of optical facial scanners for facial deformity patients in the oral clinic was evaluated. Ten patients with a variety of facial deformities from the oral clinic were included in the study. For each patient, a three-dimensional (3D) face model was acquired via a high-accuracy industrial "line-laser" scanner (Faro) as the reference model, and two test models were obtained via a "stereophotography" (3dMD) and a "structured light" facial scanner (FaceScan), respectively. Registration based on the iterative closest point (ICP) algorithm was executed to overlap the test models onto the reference models, and "3D error", a new measurement indicator calculated by reverse engineering software (Geomagic Studio), was used to evaluate the 3D global and partial (upper, middle, and lower parts of the face) PA of each facial scanner. The respective 3D accuracy of the stereophotography and structured light facial scanners for facial deformities was 0.58±0.11 mm and 0.57±0.07 mm. The 3D accuracy of the different facial partitions was inconsistent; the middle face had the best performance. Although the PA of the two facial scanners was lower than their nominal accuracy (NA), both met the requirements for oral clinical use.
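The rigid-alignment step that ICP iterates can be sketched with the standard SVD (Kabsch) solution, followed by a mean point-to-point "3D error". The point clouds below are synthetic, and the study itself used commercial software (Geomagic Studio) rather than this code:

```python
import numpy as np

# One rigid-alignment step of the kind ICP iterates: given corresponding
# point pairs, find the best rotation and translation (Kabsch/SVD).
def rigid_align(P, Q):
    """Return R, t minimizing sum ||R p_i + t - q_i||^2."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

rng = np.random.default_rng(0)
P = rng.standard_normal((100, 3))                 # synthetic "test scan" points
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])     # synthetic "reference" points

R, t = rigid_align(P, Q)
aligned = P @ R.T + t
error_3d = float(np.mean(np.linalg.norm(aligned - Q, axis=1)))
```

Full ICP re-estimates the correspondences (nearest neighbours) after each such alignment and repeats until the 3D error stops decreasing.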

  16. Comparative calibration of IP scanning equipment

    NASA Astrophysics Data System (ADS)

    Ingenito, F.; Andreoli, P.; Batani, D.; Boutoux, G.; Cipriani, M.; Consoli, F.; Cristofari, G.; Curcio, A.; De Angelis, R.; Di Giorgio, G.; Ducret, J.; Forestier-Colleoni, P.; Hulin, S.; Jakubowska, K.; Rabhi, N.

    2016-05-01

    Imaging plates (IPs) are diagnostic devices containing a photostimulable phosphor layer that stores the incident radiation dose as a latent image. The image is read with a scanner that stimulates the decay of electrons, previously excited by the incident radiation, by exposure to a laser beam. This results in emitted light, which is detected by photomultiplier tubes, and the latent image is thus reconstructed. IPs have the useful feature that they can be reused many times after the stored information is erased. Algorithms to convert the signals stored in the detector to photostimulated luminescence (PSL) counts depend on the scanner and are not available for every model. A comparative cross-calibration of the Dürr CR35 BIO IP scanner, used in the ABC laboratory, was performed using the Fujifilm FLA 7000 scanner as a reference, to find the equivalence between the grey-scale values given by the Dürr scanner and PSL counts. Using an IP and a 55Fe β-source, we produced pairs of samples with the same exposure times, which were analysed by both scanners, paying particular attention to the fading times of the image stored on the IPs. Data analysis led us to determine a conversion formula which can be used to compare data from experiments performed in different laboratories and to use the IP calibrations available, until now, only for Fujifilm scanners.

  17. Inversion of particle-size distribution from angular light-scattering data with genetic algorithms.

    PubMed

    Ye, M; Wang, S; Lu, Y; Hu, T; Zhu, Z; Xu, Y

    1999-04-20

    A stochastic inverse technique based on a genetic algorithm (GA) to invert particle-size distribution from angular light-scattering data is developed. This inverse technique is independent of any given a priori information of particle-size distribution. Numerical tests show that this technique can be successfully applied to inverse problems with high stability in the presence of random noise and low susceptibility to the shape of distributions. It has also been shown that the GA-based inverse technique is more efficient in use of computing time than the inverse Monte Carlo method recently developed by Ligon et al. [Appl. Opt. 35, 4297 (1996)].

  18. Magnetic resonance transverse relaxation time T2 of knee cartilage in osteoarthritis at 3-T: a cross-sectional multicentre, multivendor reproducibility study.

    PubMed

    Balamoody, Sharon; Williams, Tomos G; Wolstenholme, Chris; Waterton, John C; Bowes, Michael; Hodgson, Richard; Zhao, Sha; Scott, Marietta; Taylor, Chris J; Hutchinson, Charles E

    2013-04-01

    The transverse relaxation time (T2) in MR imaging has been identified as a potential biomarker of hyaline cartilage pathology. This study investigates whether MR assessments of T2 are comparable between 3-T scanners from three different vendors. Twelve subjects with symptoms of knee osteoarthritis and one or more risk factors had their knee scanned on each of the three vendors' scanners located in three sites in the U.K. MR data acquisition was based on the United States National Institutes of Health Osteoarthritis Initiative protocol. Measures of cartilage T2 and R2 (inverse of T2) were computed for precision error assessment. Intrascanner reproducibility was also assessed with a phantom (all three scanners) and a cohort of 5 subjects (one scanner only). Whole-organ magnetic resonance (WORM) semiquantitative cartilage scores ranged from minimal to advanced degradation. Intrascanner R2 root-mean-square coefficients of variation (RMSCOV) were low, within the range 2.6 to 6.3% for femoral and tibial regions. For one scanner pair, mean T2 differences ranged from -1.2 to 2.8 ms, with no significant difference observed for the medial tibia and patella regions (p < 0.05). T2 values from the third scanner were systematically lower, producing interscanner mean T2 differences within the range 5.4 to 10.0 ms. Significant interscanner cartilage T2 differences were found and should be accounted for before data from scanners of different vendors are compared.
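The RMS coefficient of variation used as the precision metric can be computed as sketched below; the T2 values are invented for illustration, and the exact regional averaging in the study may differ:

```python
import numpy as np

# RMS coefficient of variation (RMSCOV) across repeated scans,
# computed per cartilage region and then pooled. Data are made up.
scans = np.array([[38.1, 37.5, 38.4],   # T2 (ms) per region, scan 1
                  [37.8, 38.0, 38.9]])  # same regions, repeat scan
cv = scans.std(axis=0, ddof=1) / scans.mean(axis=0)   # per-region CV
rmscov = 100.0 * np.sqrt(np.mean(cv ** 2))            # pooled, in percent
```

Pooling the squared per-region CVs and taking the square root is the usual way a single repeatability figure (such as the 2.6 to 6.3% range quoted above) is derived from several regions.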

  19. Amplitude inversion of the 2D analytic signal of magnetic anomalies through the differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Ekinci, Yunus Levent; Özyalın, Şenol; Sındırgı, Petek; Balkaya, Çağlayan; Göktürkler, Gökhan

    2017-12-01

    In this work, analytic signal amplitude (ASA) inversion of total field magnetic anomalies has been achieved by differential evolution (DE), which is a population-based evolutionary metaheuristic algorithm. Using an elitist strategy, the applicability and effectiveness of the proposed inversion algorithm have been evaluated through the anomalies due to both hypothetical model bodies and real isolated geological structures. Some parameter tuning studies, relying mainly on choosing the optimum control parameters of the algorithm, have also been performed to enhance the performance of the proposed metaheuristic. Since ASAs of magnetic anomalies are independent of both the ambient field direction and the direction of magnetization of the causative sources in a two-dimensional (2D) case, inversions of synthetic noise-free and noisy single model anomalies have produced satisfactory solutions, showing the practical applicability of the algorithm. Moreover, hypothetical studies using multiple model bodies have clearly shown that the DE algorithm is able to cope with complicated anomalies and some interference from neighbouring sources. The proposed algorithm has then been used to invert small-scale (120 m) and large-scale (40 km) magnetic profile anomalies of an iron deposit (Kesikköprü-Bala, Turkey) and a deep-seated magnetized structure (Sea of Marmara, Turkey), respectively, to determine the depths, geometries and exact origins of the source bodies. Inversion studies have yielded geologically reasonable solutions which are also in good accordance with the results of normalized full gradient and Euler deconvolution techniques. Thus, we propose the use of DE not only for the amplitude inversion of 2D analytic signals of magnetic profile anomalies having induced or remanent magnetization effects but also for low-dimensional data inversions in geophysics. 
A part of this paper was presented as an abstract at the 2nd International Conference on Civil and Environmental Engineering, 8-10 May 2017, Cappadocia-Nevşehir (Turkey).

  20. Micro-seismic waveform matching inversion based on gravitational search algorithm and parallel computation

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Xing, H. L.

    2016-12-01

    Micro-seismic events induced by water injection, mining activity or oil/gas extraction are quite informative; their interpretation can be applied to the reconstruction of the underground stress field and the monitoring of hydraulic fracturing progress in oil/gas reservoirs. The source characteristics and locations are the crucial parameters required for these purposes, and they can be obtained through the waveform matching inversion (WMI) method. It is therefore imperative to develop a WMI algorithm with high accuracy and convergence speed. Heuristic algorithms, a category of nonlinear methods, possess very high convergence speed and a good capacity to escape local minima, and have been applied successfully in many areas (e.g. image processing, artificial intelligence). However, their effectiveness for micro-seismic WMI is still poorly investigated; very little literature exists addressing this subject. In this research an advanced heuristic algorithm, the gravitational search algorithm (GSA), is proposed to estimate the focal mechanism (strike, dip and rake angles) and source locations in three dimensions. Unlike traditional inversion methods, the heuristic inversion does not require approximating the Green's function. The method interacts directly with a CPU-parallelized finite-difference forward modelling engine and updates the model parameters according to GSA criteria. The effectiveness of this method is tested with synthetic data from a multi-layered elastic model; the results indicate that GSA can be applied successfully to WMI and has unique advantages. Keywords: micro-seismicity, waveform matching inversion, gravitational search algorithm, parallel computation
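A minimal GSA, applied here to a toy misfit rather than a waveform-matching objective coupled to a forward solver, might look like this (all parameter values are illustrative):

```python
import numpy as np

# Minimal gravitational search algorithm (GSA) sketch. Agents with
# better (lower) misfit get larger "mass" and attract the others;
# the gravitational constant decays so exploration turns into refinement.
def gsa(misfit, bounds, n_agents=30, n_iter=200, g0=100.0, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, size=(n_agents, len(lo)))
    V = np.zeros_like(X)
    best_x, best_f = None, np.inf
    for it in range(n_iter):
        f = np.array([misfit(x) for x in X])
        i0 = int(f.argmin())
        if f[i0] < best_f:
            best_f, best_x = f[i0], X[i0].copy()
        m = (f.max() - f) / (f.max() - f.min() + 1e-12)  # fitness -> mass
        M = m / (m.sum() + 1e-12)
        g = g0 * np.exp(-20.0 * it / n_iter)             # decaying gravity
        A = np.zeros_like(X)
        for i in range(n_agents):
            diff = X - X[i]
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            w = rng.random(n_agents) * M / dist          # random-weighted pull
            A[i] = g * (w[:, None] * diff).sum(axis=0)
        V = rng.random(X.shape) * V + A
        X = np.clip(X + V, lo, hi)
    return best_x, best_f

bounds = np.array([[-5.0, 5.0]] * 3)
x_best, f_best = gsa(lambda x: float(np.sum((x - 1.0) ** 2)), bounds)
```

In the paper's setting, `misfit` would be the waveform mismatch returned by the parallel finite-difference engine for a candidate focal mechanism and location.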

  1. Using a derivative-free optimization method for multiple solutions of inverse transport problems

    DOE PAGES

    Armstrong, Jerawan C.; Favorite, Jeffrey A.

    2016-01-14

    Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method where a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
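The multistart idea underlying MLSL can be sketched as repeated local searches from random starting points, keeping the distinct minima found; this simplified version omits the linkage (clustering) rules and uses Nelder-Mead instead of MADS, on a made-up 1-D objective:

```python
import numpy as np
from scipy.optimize import minimize

# Simplified multistart search for multiple local minima, in the spirit
# of MLSL. The objective is an invented 1-D function with several minima.
def f(x):
    return np.sin(3.0 * x[0]) + 0.1 * x[0] ** 2

rng = np.random.default_rng(0)
minima = []
for _ in range(40):                             # random restarts
    x0 = rng.uniform(-4.0, 4.0, size=1)
    res = minimize(f, x0, method="Nelder-Mead")
    # Keep only minima not already found (simple distance test).
    if res.success and all(abs(res.x[0]) - 0 >= 0 and abs(res.x[0] - m) > 1e-2 for m in minima):
        minima.append(float(res.x[0]))
minima.sort()
```

The full MLSL method replaces the naive distance test with linkage rules that decide whether a start point already lies in the basin of a known minimum, saving local searches.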

  2. Iterative algorithms for a non-linear inverse problem in atmospheric lidar

    NASA Astrophysics Data System (ADS)

    Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto

    2017-08-01

    We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative, and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take the noise statistics into account. In this study we show that proper modelling of the noise distribution can substantially improve the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with a non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.

  3. Satellite Imagery Analysis for Nighttime Temperature Inversion Clouds

    NASA Technical Reports Server (NTRS)

    Kawamoto, K.; Minnis, P.; Arduini, R.; Smith, W., Jr.

    2001-01-01

    Clouds play important roles in the climate system. Their optical and microphysical properties, which largely determine their radiative properties, need to be investigated. Among several measurement means, satellite remote sensing seems the most promising. Since most cloud algorithms proposed so far are intended for daytime use and rely on solar radiation, Minnis et al. (1998) developed a nighttime algorithm using the 3.7-, 11-, and 12-micron channels. Their algorithm, however, has the drawback that it cannot treat temperature inversion cases. We update their algorithm by incorporating a new parameterization by Arduini et al. (1999) that is valid for temperature inversion cases. The updated algorithm has been applied to GOES satellite data, and reasonable retrieval results were obtained.

  4. Overview of CERES Cloud Properties Derived From VIRS AND MODIS DATA

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Geier, Erika; Wielicki, Bruce A.; Sun-Mack, Sunny; Chen, Yan; Trepte, Qing Z.; Dong, Xiquan; Doelling, David R.; Ayers, J. Kirk; Khaiyer, Mandana M.

    2006-01-01

    Simultaneous measurement of radiation and cloud fields on a global basis is recognized as a key component in understanding and modeling the interaction between clouds and radiation at the top of the atmosphere, at the surface, and within the atmosphere. The NASA Clouds and Earth's Radiant Energy System (CERES) Project (Wielicki et al., 1998) began addressing this issue in 1998 with its first broadband shortwave and longwave scanner on the Tropical Rainfall Measuring Mission (TRMM). This was followed by the launch of two CERES scanners each on Terra and Aqua during late 1999 and early 2002, respectively. When combined, these satellites should provide the most comprehensive global characterization of clouds and radiation to date. Unfortunately, the TRMM scanner failed during late 1998. The Terra and Aqua scanners continue to operate, however, providing measurements at a minimum of 4 local times each day. CERES was designed to scan in tandem with high-resolution imagers so that the cloud conditions could be evaluated for every CERES measurement. The cloud properties are essential for converting CERES radiances into the shortwave albedo and longwave fluxes needed to define the earth radiation budget (ERB). They are also needed to unravel the impact of clouds on the ERB. The 5-channel, 2-km Visible Infrared Scanner (VIRS) on the TRMM and the 36-channel, 1-km Moderate Resolution Imaging Spectroradiometer (MODIS) on Terra and Aqua are analyzed to define the cloud properties for each CERES footprint. To minimize inter-satellite differences and aid the development of useful climate-scale measurements, it was necessary to ensure that each satellite imager is calibrated in a fashion consistent with its counterpart on the other CERES satellites (Minnis et al., 2006) and that the algorithms are as similar as possible for all of the imagers. 
Thus, a set of cloud detection and retrieval algorithms was developed that could be applied to all three imagers, utilizing as few channels as possible while producing stable and accurate cloud properties. This paper discusses the algorithms and the results of applying those techniques to more than 5 years of Terra MODIS, 3 years of Aqua MODIS, and 4 years of TRMM VIRS data.

  5. Objective evaluation of slanted edge charts

    NASA Astrophysics Data System (ADS)

    Hornung, Harvey

    2015-01-01

    Camera objective characterization methodologies are widely used in the digital camera industry. Most objective characterization systems rely on a chart with specific patterns; a software algorithm measures a degradation or difference between the captured image and the chart itself. The spatial frequency response (SFR) method, part of the ISO 12233 standard, is now very commonly used in the imaging industry, as it is a very convenient way to measure a camera's modulation transfer function (MTF). The SFR algorithm can measure frequencies beyond the Nyquist frequency thanks to super-resolution, so it provides useful information on aliasing and can provide modulation for frequencies between half-Nyquist and Nyquist on all color channels of a color sensor with a Bayer pattern. The measurement process relies on a chart that is simple to manufacture: a straight transition from a bright reflectance to a dark one (black and white, for instance), whereas a sine chart requires precisely controlled shades of gray, which can also create all sorts of issues with printers that rely on half-toning. However, no technology can create a perfect edge, so it is important to assess the quality of the chart and understand how it affects the accuracy of the measurement. In this article, I describe a protocol to characterize the MTF of a slanted-edge chart using a high-resolution flatbed scanner. The main idea is to use the RAW output of the scanner as a high-resolution micro-densitometer: since the signal is linear, it is suitable for measuring the chart MTF with the SFR algorithm. The scanner must be calibrated for sharpness: the scanner MTF is measured with a calibrated sine chart and inverted to compensate for the modulation loss from the scanner. Then the true chart MTF is computed. 
This article compares measured MTFs from commercial charts and charts printed on printers, compares how the contrast of the edge (using different shades of gray) affects the chart MTF, and concludes on the distance range and camera resolution over which the chart can reliably measure the camera MTF.
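A stripped-down version of the edge-based MTF computation can be sketched as below; it uses a vertical (not slanted) synthetic edge and omits the ISO 12233 sub-pixel projection step, so it only illustrates the ESF-to-LSF-to-FFT chain:

```python
import numpy as np

# Schematic edge-based MTF estimate: differentiate the edge spread
# function (ESF) to get the line spread function (LSF), then take its
# Fourier magnitude. The edge here is synthetic.
x = np.arange(64)
step = (x >= 32).astype(float)                 # ideal edge
xk = np.arange(-8, 9)
kernel = np.exp(-0.5 * (xk / 1.2) ** 2)        # Gaussian blur, sigma = 1.2 px
kernel /= kernel.sum()
esf = np.convolve(step, kernel, mode="valid")  # blurred edge, no border effects

lsf = np.gradient(esf)                         # line spread function
lsf /= lsf.sum()                               # normalize so MTF(0) = 1
mtf = np.abs(np.fft.rfft(lsf))
freqs = np.fft.rfftfreq(len(lsf))              # cycles per pixel
```

The real SFR algorithm adds the slanted-edge projection to super-sample the ESF, which is what allows the MTF to be read out beyond the Nyquist frequency.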

  6. Fast polar decomposition of an arbitrary matrix

    NASA Technical Reports Server (NTRS)

    Higham, Nicholas J.; Schreiber, Robert S.

    1988-01-01

    The polar decomposition of an m x n matrix A of full rank, where m is greater than or equal to n, can be computed using a quadratically convergent algorithm. The algorithm is based on a Newton iteration involving a matrix inverse. With the use of a preliminary complete orthogonal decomposition, the algorithm can be extended to arbitrary A. How to use the algorithm to compute the positive semi-definite square root of a Hermitian positive semi-definite matrix is described. A hybrid algorithm that adaptively switches from the matrix-inversion-based iteration to a matrix-multiplication-based iteration, due to Kovarik and to Bjorck and Bowie, is formulated. The decision when to switch is made using a condition estimator. This matrix-multiplication-rich algorithm is shown to be more efficient on machines for which matrix multiplication can be executed 1.5 times faster than matrix inversion.
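The Newton iteration the abstract refers to, X_{k+1} = (X_k + X_k^{-T})/2, can be sketched for the square nonsingular case; this bare version has none of the scaling, acceleration, or hybrid multiplication-rich switching the paper develops:

```python
import numpy as np

# Newton iteration for the polar decomposition A = U H of a square
# nonsingular A: X <- (X + inv(X).T) / 2 converges quadratically to the
# orthogonal polar factor U.
def polar_newton(A, n_iter=20):
    X = np.array(A, dtype=float)
    for _ in range(n_iter):
        X = 0.5 * (X + np.linalg.inv(X).T)
    U = X
    H = U.T @ A                      # symmetric positive-definite factor
    return U, 0.5 * (H + H.T)        # symmetrize to remove round-off

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
U, H = polar_newton(A)
```

Each iteration averages the singular values of X with their reciprocals, driving them all to 1, which is why the limit is the orthogonal factor.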

  7. Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm

    USGS Publications Warehouse

    Chen, C.; Xia, J.; Liu, J.; Feng, G.

    2006-01-01

    Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it does not require computing gradients of models or "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As in other optimization approaches, the search efficiency of a genetic algorithm is vital to finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect the search process during evolution. It is clear that improved operators and a proper population size promote convergence. Nevertheless, not all genetic operations perform perfectly when searching under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes of larger scope than those in the binary encoding. This paper discusses approaches to exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method based on the routine in which the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with higher probability through its mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems.
Another significant result is that the final solution is determined by the average model derived from multiple trials rather than from a single computation, owing to the randomness of the genetic algorithm procedure. These advantages were demonstrated by synthetic and real-world examples of inversion of potential-field data. © 2005 Elsevier Ltd. All rights reserved.
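A toy sketch of the hybrid-encoding idea, with crossover performed on the binary code and mutation on the decimal code, applied to a three-unknown misfit standing in for a geophysical inversion. Everything here, from the bit/digit widths and population settings to the target model, is an illustrative assumption rather than the HEGA configuration; dynamic population size and averaging over multiple trials are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
LO, HI, DIGITS = -5.0, 5.0, 4            # each gene held as a 4-digit integer
SCALE = 10**DIGITS - 1                    # 0..9999 maps linearly onto [LO, HI]
N_GENES, POP, BITS = 3, 60, 14            # 14 bits cover 0..16383 >= SCALE

def decode(ints):
    return LO + (HI - LO) * ints / SCALE

def misfit(ints):                         # toy stand-in for the data misfit
    return np.sum((decode(ints) - np.array([1.5, -2.0, 0.5])) ** 2)

def to_bits(ints):
    return ((ints[:, None] >> np.arange(BITS)) & 1).ravel()

def from_bits(bits):
    vals = (bits.reshape(N_GENES, BITS) * (1 << np.arange(BITS))).sum(axis=1)
    return np.minimum(vals, SCALE)        # clip codes above the decimal range

def crossover(p1, p2, n_points=3):        # multi-point crossover, binary code
    b = [to_bits(p1), to_bits(p2)]
    cuts = np.sort(rng.choice(np.arange(1, b[0].size), n_points, replace=False))
    segs, prev = [], 0
    for i, c in enumerate(np.append(cuts, b[0].size)):
        segs.append(b[i % 2][prev:c])     # alternate segments between parents
        prev = c
    return from_bits(np.concatenate(segs))

def mutate(ints, rate=0.2):               # mutation in decimal code
    ints = ints.copy()
    for g in range(N_GENES):
        if rng.random() < rate:
            p = 10 ** rng.integers(DIGITS)                # pick a decimal digit
            ints[g] += (rng.integers(10) - ints[g] // p % 10) * p
    return ints

pop = rng.integers(0, SCALE + 1, size=(POP, N_GENES))
history = []
for _ in range(150):
    fit = np.array([misfit(ind) for ind in pop])
    pop = pop[np.argsort(fit)]
    history.append(misfit(pop[0]))
    children = [pop[0].copy()]                            # elitism
    while len(children) < POP:
        i, j = rng.integers(POP // 2, size=2)             # parents, better half
        children.append(mutate(crossover(pop[i], pop[j])))
    pop = np.array(children)

best = decode(pop[np.argmin([misfit(ind) for ind in pop])])
```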

  8. Validation Studies of the Accuracy of Various SO2 Gas Retrievals in the Thermal InfraRed (8-14 μm)

    NASA Astrophysics Data System (ADS)

    Gabrieli, A.; Wright, R.; Lucey, P. G.; Porter, J. N.; Honniball, C.; Garbeil, H.; Wood, M.

    2016-12-01

    Quantifying hazardous SO2 in the atmosphere and in volcanic plumes is important for public health and volcanic eruption prediction. Remote sensing measurements of spectral radiance of plumes contain information on the abundance of SO2. However, in order to convert such measurements into SO2 path-concentrations, reliable inversion algorithms are needed. Various techniques can be employed to derive SO2 path-concentrations. The first approach employs a Partial Least Squares Regression model trained using MODTRAN5 simulations for a variety of plume and atmospheric conditions. Radiances at many spectral wavelengths (8-14 μm) were used in the algorithm. The second algorithm uses measurements inside and outside the SO2 plume. Measurements in the plume-free region (background sky) make it possible to remove background atmospheric conditions and any instrumental effects. After atmospheric and instrumental effects are removed, MODTRAN5 is used to fit the SO2 spectral feature and obtain SO2 path-concentrations. The two inversion algorithms described above can be compared with the inversion algorithm for SO2 retrievals developed by Prata and Bernardo (2014). Their approach employs three wavelengths to characterize the plume temperature, the atmospheric background, and the SO2 path-concentration. The accuracy of these various techniques requires further investigation in terms of the effects of different atmospheric background conditions. Validating these inversion algorithms is challenging because ground-truth measurements are very difficult to obtain. However, if the three separate inversion algorithms provide similar SO2 path-concentrations for actual measurements with various background conditions, then this increases confidence in the results. Measurements of sky radiance when looking through SO2 filled gas cells were collected with a Thermal Hyperspectral Imager (THI) under various atmospheric background conditions. 
These data were processed using the three inversion approaches, which were tested for convergence on the known SO2 gas cell path-concentrations. For this study, the inversion algorithms were modified to account for the gas cell configuration. Results from these studies will be presented, as well as results from SO2 gas plume measurements at Kīlauea volcano, Hawai'i.

  9. Inversion group (IG) fitting: A new T1 mapping method for modified look-locker inversion recovery (MOLLI) that allows arbitrary inversion groupings and rest periods (including no rest period).

    PubMed

    Sussman, Marshall S; Yang, Issac Y; Fok, Kai-Ho; Wintersperger, Bernd J

    2016-06-01

    The Modified Look-Locker Inversion Recovery (MOLLI) technique is used for T1 mapping in the heart. However, a drawback of this technique is that it requires lengthy rest periods in between inversion groupings to allow for complete magnetization recovery. In this work, a new MOLLI fitting algorithm (inversion group [IG] fitting) is presented that allows for arbitrary combinations of inversion groupings and rest periods (including no rest period). Conventional MOLLI algorithms use a three parameter fitting model. In IG fitting, the number of parameters is two plus the number of inversion groupings. This increased number of parameters permits any inversion grouping/rest period combination. Validation was performed through simulation, phantom, and in vivo experiments. IG fitting provided T1 values with less than 1% discrepancy across a range of inversion grouping/rest period combinations. By comparison, conventional three parameter fits exhibited up to 30% discrepancy for some combinations. The one drawback with IG fitting was a loss of precision: approximately 30% worse than the three parameter fits. IG fitting permits arbitrary inversion grouping/rest period combinations (including no rest period). The cost of the algorithm is a loss of precision relative to conventional three parameter fits. Magn Reson Med 75:2332-2340, 2016. © 2015 Wiley Periodicals, Inc.
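The conventional three-parameter model the abstract refers to is S(TI) = A - B·exp(-TI/T1*), with the Look-Locker correction T1 = T1*·(B/A - 1). A small sketch of such a fit follows; the grid-plus-linear-substitution solver and all numbers are illustrative assumptions, not the paper's fitting routine.

```python
import numpy as np

def molli_three_param_fit(ti, s, t1_star_grid=None):
    """Fit S(TI) = A - B*exp(-TI/T1*). For each candidate T1* the model is
    linear in (A, B), so each grid point costs one 2-unknown least squares."""
    if t1_star_grid is None:
        t1_star_grid = np.linspace(200.0, 3000.0, 561)   # ms
    best = (np.inf, None)
    for t1s in t1_star_grid:
        M = np.column_stack([np.ones_like(ti), -np.exp(-ti / t1s)])
        coef, *_ = np.linalg.lstsq(M, s, rcond=None)
        r = np.sum((M @ coef - s) ** 2)
        if r < best[0]:
            best = (r, (coef[0], coef[1], t1s))
    a, b, t1s = best[1]
    return a, b, t1s, t1s * (b / a - 1.0)    # Look-Locker corrected T1

# Synthetic noiseless signal: T1 = 1000 ms with an ideal inversion (B = 2A),
# in which case the apparent T1* happens to equal T1.
ti = np.array([100., 180., 260., 1100., 1180., 2100., 2180., 3100.])
s = 1.0 - 2.0 * np.exp(-ti / 1000.0)
a, b, t1_star, t1 = molli_three_param_fit(ti, s)
```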

  10. Riemann–Hilbert problem approach for two-dimensional flow inverse scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agaltsov, A. D., E-mail: agalets@gmail.com; Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr; IEPT RAS, 117997 Moscow

    2014-10-15

    We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.

  11. Fast Nonlinear Generalized Inversion of Gravity Data with Application to the Three-Dimensional Crustal Density Structure of Sichuan Basin, Southwest China

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Meng, Xiaohong; Li, Fang

    2017-11-01

    Generalized inversion is one of the important steps in the quantitative interpretation of gravity data. With an appropriate algorithm and parameters, it gives a view of the subsurface that characterizes different geological bodies. However, generalized inversion of gravity data is time consuming because of the large number of data points and model cells involved. Incorporating various kinds of prior information as constraints worsens the situation further. In the work discussed in this paper, a method for fast nonlinear generalized inversion of gravity data is proposed. The fast multipole method is employed for forward modelling. The inversion objective function combines a weighted data-misfit function with a model objective function. The total objective function is solved by a data-space algorithm. Moreover, a depth-weighting factor is used to improve the depth resolution of the result, and a bound constraint is incorporated through a transfer function to keep the model parameters within a reliable range. The matrix inversion is accomplished by a preconditioned conjugate gradient method. With the above algorithm, equivalent density vectors can be obtained, and interpolation is performed to get the final density model on the fine mesh in the model domain. Testing on synthetic gravity data demonstrated that the proposed method is faster than the conventional generalized inversion algorithm at producing an acceptable solution to the gravity inversion problem. The newly developed inversion method was also applied to inversion of the gravity data collected over Sichuan basin, southwest China. The established density structure helps in understanding the crustal structure of Sichuan basin and provides a reference for further oil and gas exploration in this area.
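A bound constraint via a transfer function, as mentioned above, is commonly implemented with a logistic map: the optimizer works with unconstrained parameters while the physical model values stay inside [lo, hi]. A minimal sketch (the logistic choice and the bound values are assumptions for illustration, not necessarily the authors' transfer function):

```python
import numpy as np

def to_bounded(p, lo, hi):
    """Map unconstrained parameters p to model values inside (lo, hi)."""
    return lo + (hi - lo) / (1.0 + np.exp(-p))

def to_unbounded(m, lo, hi):
    """Inverse map, e.g. to initialise p from a bounded starting model."""
    return -np.log((hi - lo) / (m - lo) - 1.0)

lo, hi = -0.2, 0.8                     # hypothetical density-contrast bounds
p = np.linspace(-6.0, 6.0, 5)          # unconstrained optimisation variables
m = to_bounded(p, lo, hi)
```

Whatever value the optimizer proposes for p, the corresponding density contrast m never leaves the prescribed interval, so no explicit projection step is needed.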

  12. A direct method for nonlinear ill-posed problems

    NASA Astrophysics Data System (ADS)

    Lakhal, A.

    2018-02-01

    We propose a direct method for solving nonlinear ill-posed problems in Banach spaces. The method is based on a stable inversion formula we explicitly compute by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.

  13. A gradient based algorithm to solve inverse plane bimodular problems of identification

    NASA Astrophysics Data System (ADS)

    Ran, Chunjiang; Yang, Haitian; Zhang, Guoqing

    2018-02-01

    This paper presents a gradient based algorithm to solve inverse plane bimodular problems of identifying constitutive parameters, including tensile/compressive moduli and tensile/compressive Poisson's ratios. For the forward bimodular problem, an FE tangent stiffness matrix is derived, facilitating the implementation of gradient based algorithms; for the inverse bimodular problem of identification, a two-level sensitivity-analysis-based strategy is proposed. Numerical verification in terms of accuracy and efficiency is provided, and the impacts of the initial guess, the number of measurement points, regional inhomogeneity, and noisy data on the identification are taken into account.

  14. LOR-interleaving image reconstruction for PET imaging with fractional-crystal collimation

    NASA Astrophysics Data System (ADS)

    Li, Yusheng; Matej, Samuel; Karp, Joel S.; Metzler, Scott D.

    2015-01-01

    Positron emission tomography (PET) has become an important modality in medical and molecular imaging. However, in most PET applications, the resolution is still mainly limited by the physical crystal sizes or the detector’s intrinsic spatial resolution. To achieve images with better spatial resolution in a central region of interest (ROI), we have previously proposed using collimation in PET scanners. The collimator is designed to partially mask detector crystals to detect lines of response (LORs) within fractional crystals. A sequence of collimator-encoded LORs is measured with different collimation configurations. This novel collimated scanner geometry makes the reconstruction problem challenging, as both detector and collimator effects need to be modeled to reconstruct high-resolution images from collimated LORs. In this paper, we present a LOR-interleaving (LORI) algorithm, which incorporates these effects and has the advantage of reusing existing reconstruction software, to reconstruct high-resolution images for PET with fractional-crystal collimation. We also develop a 3D ray-tracing model incorporating both the collimator and crystal penetration for simulations and reconstructions of the collimated PET. By registering the collimator-encoded LORs with the collimator configurations, high-resolution LORs are restored based on the modeled transfer matrices using the non-negative least-squares method and EM algorithm. The resolution-enhanced images are then reconstructed from the high-resolution LORs using the MLEM or OSEM algorithm. For validation, we applied the LORI method to a small-animal PET scanner, A-PET, with a specially designed collimator. We demonstrate through simulated reconstructions with a hot-rod phantom and MOBY phantom that the LORI reconstructions can substantially improve spatial resolution and quantification compared to the uncollimated reconstructions. 
The LORI algorithm is crucial to improve overall image quality of collimated PET, which can have significant implications in preclinical and clinical ROI imaging applications.
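The EM step in the reconstruction above can be sketched generically. This is the standard MLEM update for emission data y ~ Poisson(Ax), not the paper's LORI pipeline, and the toy system matrix is random rather than a modeled PET geometry with collimator transfer matrices.

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """MLEM for emission data y ~ Poisson(A x):
    x <- x / (A^T 1) * A^T (y / (A x)), which preserves non-negativity."""
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image (A^T 1)
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x = x / sens * (A.T @ ratio)          # multiplicative update
    return x

rng = np.random.default_rng(0)
A = rng.random((40, 10))                      # toy system matrix (LORs x voxels)
x_true = np.zeros(10)
x_true[3], x_true[7] = 5.0, 2.0
y = A @ x_true                                # noiseless, consistent data
x_hat = mlem(A, y)
```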

  15. Multi-GPU parallel algorithm design and analysis for improved inversion of probability tomography with gravity gradiometry data

    NASA Astrophysics Data System (ADS)

    Hou, Zhenlong; Huang, Danian

    2017-09-01

    In this paper, we first study the inversion of probability tomography (IPT) with gravity gradiometry data. The spatial resolution of the results is improved by multi-tensor joint inversion, a depth-weighting matrix, and other methods. To address the problems posed by big data in exploration, we present a parallel algorithm and its performance analysis, combining Compute Unified Device Architecture (CUDA) with Open Multi-Processing (OpenMP) for Graphics Processing Unit (GPU) acceleration. In tests on a synthetic model and real data from Vinton Dome, we obtain improved results, demonstrating that the improved inversion algorithm is effective and feasible. The performance of the parallel algorithm we designed is better than that of other CUDA implementations; the maximum speedup can exceed 200. In the performance analysis, multi-GPU speedup and multi-GPU efficiency are applied to analyze the scalability of the multi-GPU programs. The designed parallel algorithm is demonstrated to be able to process larger-scale data, and the new analysis method is practical.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rickey, Daniel; Sasaki, David; Dubey, Arbind

    Purpose: Three-dimensional printing has been implemented at our institution to create customized treatment accessories including shielding and bolus. In order to effectively use 3D printing, the topography of the patient must first be acquired. To this end, we have evaluated a low-cost structured-light 3D scanner in order to assess the clinical viability of this technology. Methods: For ease of use, the scanner (3D Systems, Sense 3D Scanner) was mounted in a simple gantry that guided its motion and maintained an optimum distance between the scanner and the object. To characterise the spatial accuracy of the scanner, we used a geometric phantom and an anthropomorphic head phantom. The geometric phantom was machined from plastic and had overall dimensions of 24 cm by 15 cm and included a hemispherical and a tetrahedron protrusion roughly the dimensions of an average forehead and nose respectively. Meshes acquired by the optical scanner were compared to meshes generated from high-resolution CT images. Results: Scans were acquired in under one minute. Most of the optical scans contained noticeable artefacts although in most instances these were considered minor. Using an algorithm that calculated distances between the two meshes, we found most of the optical scanner measurements agreed with those from CT to within about 1 mm for the geometric phantom and to within about 2 mm for the head phantom. Conclusion: In summary, we deemed this scanner to be clinically acceptable and it has been used to design treatment accessories for several skin cancer patients.

  17. Comparison of trend analyses for Umkehr data using new and previous inversion algorithms

    NASA Technical Reports Server (NTRS)

    Reinsel, Gregory C.; Tam, Wing-Kuen; Ying, Lisa H.

    1994-01-01

    Ozone vertical profile Umkehr data for layers 3-9 obtained from 12 stations, using both previous and new inversion algorithms, were analyzed for trends. The trends estimated for the Umkehr data from the two algorithms were compared using two data periods, 1968-1991 and 1977-1991. Both nonseasonal and seasonal trend models were fitted. The overall annual trends are found to be significantly negative, of the order of -5% per decade, for layers 7 and 8 using both inversion algorithms. The largest negative trends occur in these layers under the new algorithm, whereas in the previous algorithm the most negative trend occurs in layer 9. The trend estimates, both annual and seasonal, are substantially different between the two algorithms mainly for layers 3, 4, and 9, where trends from the new algorithm data are about 2% per decade less negative, with less appreciable differences in layers 7 and 8. The trend results from the two data periods are similar, except for layer 3 where trends become more negative, by about -2% per decade, for 1977-1991.

  18. VES/TEM 1D joint inversion by using Controlled Random Search (CRS) algorithm

    NASA Astrophysics Data System (ADS)

    Bortolozo, Cassiano Antonio; Porsani, Jorge Luís; Santos, Fernando Acácio Monteiro dos; Almeida, Emerson Rodrigo

    2015-01-01

    Electrical (DC) and Transient Electromagnetic (TEM) soundings are used in a great number of environmental, hydrological, and mining exploration studies. Usually, data interpretation is accomplished with individual 1D models, often resulting in ambiguous models. This can be explained by the way the two methodologies sample the medium beneath the surface. Vertical Electrical Sounding (VES) is good at marking resistive structures, while Transient Electromagnetic sounding (TEM) is very sensitive to conductive structures. Another difference is that VES is better at detecting shallow structures, while TEM soundings can reach deeper layers. A Matlab program for 1D joint inversion of VES and TEM soundings was developed aiming to exploit the best of both methods. The program uses the CRS (Controlled Random Search) algorithm for both single and 1D joint inversions. Inversion programs usually use Marquardt-type algorithms, but for electrical and electromagnetic methods these algorithms may find a local minimum or fail to converge. Initially, the algorithm was tested with synthetic data, and then it was used to invert experimental data from two places in the Paraná sedimentary basin (the cities of Bebedouro and Pirassununga), both located in São Paulo State, Brazil. The geoelectric model obtained from 1D joint inversion of the VES and TEM data is similar to the real geological conditions, and ambiguities were minimized. Results with synthetic and real data show that 1D VES/TEM joint inversion better recovers the simulated models and shows great potential in geological studies, especially hydrogeological studies.
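A Price-type Controlled Random Search step builds a trial point by reflecting a random population member through the centroid of a random simplex, and replaces the current worst point on improvement. The sketch below applies a generic CRS to a toy two-parameter misfit; the actual VES/TEM forward modelling is not reproduced, and all numbers are assumptions.

```python
import numpy as np

def crs_minimise(f, bounds, n_pop=50, n_iter=4000, seed=0):
    """Controlled Random Search (Price-type): maintain a population,
    reflect a random point through the centroid of a random simplex,
    and replace the current worst point whenever the trial improves."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = lo.size
    pop = lo + (hi - lo) * rng.random((n_pop, dim))
    fit = np.array([f(p) for p in pop])
    for _ in range(n_iter):
        idx = rng.choice(n_pop, dim + 1, replace=False)
        centroid = pop[idx[:-1]].mean(axis=0)
        trial = 2.0 * centroid - pop[idx[-1]]     # reflection
        if np.any(trial < lo) or np.any(trial > hi):
            continue                              # discard out-of-bounds trials
        ft = f(trial)
        worst = np.argmax(fit)
        if ft < fit[worst]:
            pop[worst], fit[worst] = trial, ft
    best = np.argmin(fit)
    return pop[best], fit[best]

# Toy two-layer "misfit": recover resistivity-like parameters (hypothetical).
target = np.array([120.0, 15.0])
f = lambda m: np.sum(((m - target) / target) ** 2)
bounds = np.array([[1.0, 500.0], [1.0, 100.0]])
m_best, f_best = crs_minimise(f, bounds)
```

Because only acceptance of better points shrinks the population, CRS needs no gradients, matching the motivation given in the abstract for avoiding Marquardt-type local searches.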

  19. Accommodating Chromosome Inversions in Linkage Analysis

    PubMed Central

    Chen, Gary K.; Slaten, Erin; Ophoff, Roel A.; Lange, Kenneth

    2006-01-01

    This work develops a population-genetics model for polymorphic chromosome inversions. The model precisely describes how an inversion changes the nature of and approach to linkage equilibrium. The work also describes algorithms and software for allele-frequency estimation and linkage analysis in the presence of an inversion. The linkage algorithms implemented in the software package Mendel estimate recombination parameters and calculate the posterior probability that each pedigree member carries the inversion. Application of Mendel to eight Centre d'Étude du Polymorphisme Humain pedigrees in a region containing a common inversion on 8p23 illustrates its potential for providing more-precise estimates of the location of an unmapped marker or trait gene. Our expanded cytogenetic analysis of these families further identifies inversion carriers and increases the evidence of linkage. PMID:16826515

  20. Influence of Iterative Reconstruction Algorithms on PET Image Resolution

    NASA Astrophysics Data System (ADS)

    Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.

    2015-09-01

    The aim of the present study was to assess the image quality of PET scanners using a thin-layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed with the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated as a layer of silica gel on an aluminum (Al) foil substrate, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation (MLE)-OSMAPOSL, the ordered subsets separable paraboloidal surrogate (OSSPS), the median root prior (MRP), and the OSMAPOSL-with-quadratic-prior algorithms. OSMAPOSL reconstruction was assessed using fixed subsets and various iterations, as well as various beta (hyper)parameter values. MTF values were found to increase with increasing iterations. MTF also improves with lower beta values. The simulated PET evaluation method, based on the TLC plane source, can be useful in the resolution assessment of PET scanners.

  1. Improved preconditioned conjugate gradient algorithm and application in 3D inversion of gravity-gradiometry data

    NASA Astrophysics Data System (ADS)

    Wang, Tai-Han; Huang, Da-Nian; Ma, Guo-Qing; Meng, Zhao-Hai; Li, Ye

    2017-06-01

    With the continuous development of full tensor gradiometer (FTG) measurement techniques, three-dimensional (3D) inversion of FTG data is becoming increasingly used in oil and gas exploration. In the fast processing and interpretation of large-scale high-precision data, the use of the graphics processing unit (GPU) and of preconditioning methods is very important in the data inversion. In this paper, an improved preconditioned conjugate gradient algorithm is proposed by combining the symmetric successive over-relaxation (SSOR) technique with the incomplete Cholesky decomposition conjugate gradient algorithm (ICCG). Since preparing the preconditioner requires extra time, a parallel implementation based on the GPU is proposed. The improved method is then applied to the inversion of noise-contaminated synthetic data to prove its suitability for the inversion of 3D FTG data. Results show that the parallel SSOR-ICCG algorithm based on an NVIDIA Tesla C2050 GPU achieves a speedup of approximately 25 times over a serial program on a 2.0 GHz Central Processing Unit (CPU). Real airborne gravity-gradiometry data from the Vinton salt dome (southwest Louisiana, USA) are also considered. Good results are obtained, which verifies the efficiency and feasibility of the proposed parallel method for fast inversion of 3D FTG data.
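The preconditioned conjugate gradient skeleton underlying SSOR-ICCG looks as follows. For brevity this sketch uses a simple Jacobi (diagonal) preconditioner on a small SPD system; the SSOR/incomplete-Cholesky combination and the GPU parallelism of the paper are not reproduced.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients for SPD A.
    M_inv applies the inverse of the preconditioner to a vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# SPD test system; a Jacobi preconditioner stands in for SSOR-IC here.
rng = np.random.default_rng(0)
G = rng.standard_normal((80, 80))
A = G.T @ G + 80.0 * np.eye(80)       # well-conditioned SPD matrix
b = rng.standard_normal(80)
d = np.diag(A)
x, iters = pcg(A, b, lambda r: r / d)
```

Only the `M_inv` callback changes between preconditioners, which is why the SSOR and incomplete-Cholesky variants slot into the same iteration.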

  2. Quantitative estimation of granitoid composition from thermal infrared multispectral scanner (TIMS) data, Desolation Wilderness, northern Sierra Nevada, California

    NASA Technical Reports Server (NTRS)

    Sabine, Charles; Realmuto, Vincent J.; Taranik, James V.

    1994-01-01

    We have produced images that quantitatively depict modal and chemical parameters of granitoids using an image processing algorithm called MINMAP that fits Gaussian curves to normalized emittance spectra recovered from thermal infrared multispectral scanner (TIMS) radiance data. We applied the algorithm to TIMS data from the Desolation Wilderness, an extensively glaciated area near the northern end of the Sierra Nevada batholith that is underlain by Jurassic and Cretaceous plutons that range from diorite and anorthosite to leucogranite. The wavelength corresponding to the calculated emittance minimum lambda(sub min) varies linearly with quartz content, SiO2, and other modal and chemical parameters. Thematic maps of quartz and silica content derived from lambda(sub min) values distinguish bodies of diorite from surrounding granite, identify outcrops of anorthosite, and separate felsic, intermediate, and mafic rocks.

  3. Comparing implementations of penalized weighted least-squares sinogram restoration.

    PubMed

    Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick

    2010-11-01

    A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. 
For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors' previous penalized-likelihood implementation. Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes.
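The closed-form PWLS solution has the same shape even in the simplest setting. As a sketch, consider one detector row with an identity blur model and a first-difference roughness penalty; these simplifications and all numbers are assumptions, not the authors' implementation, which additionally subdivides and exploits the sparsity of the full problem.

```python
import numpy as np

def pwls_restore(y, w, beta=5.0):
    """Closed-form PWLS denoising of one detector row:
    minimise (y - l)' W (y - l) + beta * ||D l||^2 with D first differences.
    Solution: l = (W + beta * D'D)^{-1} W y."""
    n = y.size
    D = np.diff(np.eye(n), axis=0)            # first-difference operator
    W = np.diag(w)
    return np.linalg.solve(W + beta * (D.T @ D), w * y)

rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(20), np.ones(30), np.zeros(20)])
noise_sd = 0.3
y = truth + noise_sd * rng.standard_normal(truth.size)
w = np.full(y.size, 1.0 / noise_sd**2)        # statistical weights ~ 1/variance
l_hat = pwls_restore(y, w)
```

The matrix being solved is banded, which is the sparsity the authors exploit; an iterative conjugate-gradient solver would apply the same operator without ever forming the inverse.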

  4. A New Inversion-Based Algorithm for Retrieval of Over-Water Rain Rate from SSM/I Multichannel Imagery

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.; Stettner, David R.

    1994-01-01

    This paper discusses certain aspects of a new inversion-based algorithm for the retrieval of rain rate over the open ocean from Special Sensor Microwave/Imager (SSM/I) multichannel imagery. This algorithm takes a more detailed physical approach to the retrieval problem than previously discussed algorithms, performing explicit forward radiative transfer calculations based on detailed model hydrometeor profiles and attempting to match the observations to the predicted brightness temperatures.

  5. VLSI architectures for computing multiplications and inverses in GF(2m)

    NASA Technical Reports Server (NTRS)

    Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.

    1985-01-01

    Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
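The field operations behind such architectures are easy to state in software. Below is a sketch in GF(2^8) with the AES reduction polynomial; this is a polynomial-basis software analogue, not the normal-basis Massey-Omura hardware scheme, but it carries over the key point from the abstract: inversion reduces to repeated squarings and multiplies, since a^(2^m - 2) = a^(-1).

```python
# Arithmetic in GF(2^8) using the AES polynomial x^8 + x^4 + x^3 + x + 1.
M, POLY = 8, 0x11B

def gf_mul(a, b):
    """Carry-less multiplication modulo the field polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << M):      # degree reached M: reduce
            a ^= POLY
        b >>= 1
    return r

def gf_inv(a):
    """a^(2^m - 2) via m-1 squarings interleaved with multiplies,
    since 2^m - 2 = 2 + 4 + ... + 2^(m-1)."""
    result, power = 1, a
    for _ in range(1, M):
        power = gf_mul(power, power)   # successive squarings: a^(2^i)
        result = gf_mul(result, power)
    return result

x = 0x53
x_inv = gf_inv(x)
```

In a normal basis, the squarings in `gf_inv` become mere cyclic shifts, which is exactly why the abstract's pipeline for inversion builds on the squaring property of that representation.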

  6. VLSI architectures for computing multiplications and inverses in GF(2m)

    NASA Technical Reports Server (NTRS)

    Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.; Reed, I. S.

    1983-01-01

    Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.

  7. VLSI architectures for computing multiplications and inverses in GF(2m).

    PubMed

    Wang, C C; Truong, T K; Shao, H M; Deutsch, L J; Omura, J K; Reed, I S

    1985-08-01

    Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that can be easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. In this paper, a pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2m). With the simple squaring property of the normal basis representation used together with this multiplier, a pipeline architecture is developed for computing inverse elements in GF(2m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable, and therefore, naturally suitable for VLSI implementation.

  8. Acoustic Inversion in Optoacoustic Tomography: A Review

    PubMed Central

    Rosenthal, Amir; Ntziachristos, Vasilis; Razansky, Daniel

    2013-01-01

    Optoacoustic tomography enables volumetric imaging with optical contrast in biological tissue at depths beyond the optical mean free path by the use of optical excitation and acoustic detection. The hybrid nature of optoacoustic tomography gives rise to two distinct inverse problems: the optical inverse problem, related to the propagation of the excitation light in tissue, and the acoustic inverse problem, which deals with the propagation and detection of the generated acoustic waves. Since the two inverse problems have different physical underpinnings and are governed by different types of equations, they are often treated independently as unrelated problems. From an imaging standpoint, the acoustic inverse problem relates to forming an image from the measured acoustic data, whereas the optical inverse problem relates to quantifying the formed image. This review focuses on the acoustic aspects of optoacoustic tomography, specifically acoustic reconstruction algorithms and imaging-system practicalities. As these two aspects are intimately linked, and no silver bullet exists in the path towards high-performance imaging, we adopt a holistic approach in our review and discuss the many links between the two aspects. Four classes of reconstruction algorithms are reviewed: time-domain (so-called back-projection) formulae, frequency-domain formulae, time-reversal algorithms, and model-based algorithms. These algorithms are discussed in the context of the various acoustic detectors and detection surfaces which are commonly used in experimental studies. We further discuss the effects of non-ideal imaging scenarios on the quality of reconstruction and review methods that can mitigate these effects. Namely, we consider the cases of finite detector aperture, limited-view tomography, spatial under-sampling of the acoustic signals, and acoustic heterogeneities and losses. PMID:24772060

  9. Calculation method of water injection forward modeling and inversion process in oilfield water injection network

    NASA Astrophysics Data System (ADS)

    Liu, Long; Liu, Wei

    2018-04-01

    A forward modeling and inversion algorithm is adopted in order to determine the water injection plan in an oilfield water injection network. The main idea of the algorithm is as follows: first, the water injection network is inverted to calculate the demand flow of the pumping station. Forward modeling is then carried out to judge whether all water injection wells meet the injection-allocation requirements. If all wells meet the requirements, the calculation stops; otherwise, the demanded injection-allocation flow rate is reduced by a fixed step size for the wells that do not meet the requirements, and the next iteration begins. The algorithm does not need to be embedded in a full network-system solver and is easy to realize. Because it is iterative, it is well suited to computer programming. Experimental results show that the algorithm is fast and accurate.
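
    The iterative reduce-and-retry loop described above can be sketched as follows. This is a hedged toy under strong assumptions: the real algorithm evaluates a hydraulic network model in the forward step, whereas here the network is reduced to a single pumping-station capacity, and all demands are scaled down together by a fixed step each iteration.

    ```python
    # Toy forward/inverse allocation loop (names and the single-capacity
    # network model are illustrative assumptions, not the paper's model).
    def allocate(demands, capacity, step=0.05, max_iter=1000):
        demands = list(demands)
        for _ in range(max_iter):
            required = sum(demands)      # inverse step: station demand flow
            if required <= capacity:     # forward step: allocation check
                return demands, required
            # Reduce unmet demands by a fixed step size and iterate again.
            demands = [d * (1.0 - step) for d in demands]
        raise RuntimeError("injection allocation did not converge")

    wells, station_flow = allocate([10.0, 20.0, 30.0], capacity=45.0)
    ```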

  10. APPLICATION OF COMPUTER-AIDED TOMOGRAPHY (CAT) AS A POTENTIAL INDICATOR OF MARINE MACROBENTHIC ACTIVITY ALONG POLLUTION GRADIENTS

    EPA Science Inventory

    Sediment cores were imaged using a local hospital CAT scanner. These image data were transferred to a personal computer at our laboratory using specially developed software. Previously, we reported an inverse correlation (r2 = 0.98, P<0.01) between the average sediment x-ray atte...

  11. A genetic meta-algorithm-assisted inversion approach: hydrogeological study for the determination of volumetric rock properties and matrix and fluid parameters in unsaturated formations

    NASA Astrophysics Data System (ADS)

    Szabó, Norbert Péter

    2018-03-01

    An evolutionary inversion approach is suggested for the interpretation of nuclear and resistivity logs measured by direct-push tools in shallow unsaturated sediments. The efficiency of formation evaluation is improved by estimating simultaneously (1) the petrophysical properties that vary rapidly along a drill hole with depth and (2) the zone parameters that can be treated as constant, in one inversion procedure. In the workflow, the fractional volumes of water, air, matrix and clay are estimated in adjacent depths by linearized inversion, whereas the clay and matrix properties are updated using a float-encoded genetic meta-algorithm. The proposed inversion method provides an objective estimate of the zone parameters that appear in the tool response equations applied to solve the forward problem, which can significantly increase the reliability of the petrophysical model as opposed to setting these parameters arbitrarily. The global optimization meta-algorithm not only assures the best fit between the measured and calculated data but also gives a reliable solution, practically independent of the initial model, as laboratory data are unnecessary in the inversion procedure. The feasibility test uses engineering geophysical sounding logs observed in an unsaturated loessy-sandy formation in Hungary. The multi-borehole extension of the inversion technique is developed to determine the petrophysical properties and their estimation errors along a profile of drill holes. The genetic meta-algorithmic inversion method is recommended for hydrogeophysical logging applications of various kinds to automatically extract the volumetric ratios of rock and fluid constituents as well as the most important zone parameters in a reliable inversion procedure.

  12. Adaptive Inverse Control for Rotorcraft Vibration Reduction

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    1985-01-01

    This thesis extends the Least Mean Square (LMS) algorithm to solve the multiple-input, multiple-output problem of alleviating N/Rev (number of blades times rotor revolution frequency) helicopter fuselage vibration by means of adaptive inverse control. A frequency-domain, locally linear model is used to represent the transfer matrix relating the higher harmonic pitch control inputs to the harmonic vibration outputs to be controlled. By using the inverse matrix as the controller gain matrix, an adaptive inverse regulator is formed to alleviate the N/Rev vibration. The stability and rate of convergence properties of the extended LMS algorithm are discussed. It is shown that the stability ranges for the elements of the stability gain matrix are directly related to the eigenvalues of the vibration signal information matrix for the learning phase, but not for the control phase. The overall conclusion is that the LMS adaptive inverse control method can form a robust vibration control system, but will require some tuning of the input sensor gains, the stability gain matrix, and the amount of control relaxation to be used. The learning curve of the controller during the learning phase is shown to be quantitatively close to that predicted by averaging the learning curves of the normal modes. For higher order transfer matrices, a rough estimate of the inverse is needed to start the algorithm efficiently. The simulation results indicate that the factor which most influences LMS adaptive inverse control is the product of the control relaxation and the stability gain matrix. A small stability gain matrix makes the controller less sensitive to relaxation selection, and permits faster and more stable vibration reduction, than choosing the stability gain matrix large and the control relaxation term small.
It is shown that the best selection of the stability gain matrix elements and the amount of control relaxation is basically a compromise between slow, stable convergence and fast convergence with an increased possibility of unstable identification. In the simulation studies, the LMS adaptive inverse control algorithm is shown to be capable of adapting the inverse (controller) matrix to track changes in the flight conditions. The algorithm converges quickly for moderate disturbances, while taking longer for larger disturbances. Perfect knowledge of the inverse matrix is not required for good control of the N/Rev vibration. However, it is shown that measurement noise will prevent the LMS adaptive inverse control technique from controlling the vibration unless the signal averaging method presented is incorporated into the algorithm.
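
    The core LMS identification of an inverse matrix can be sketched in a few lines. This is a hedged toy, not the thesis's controller: it uses a static 2x2 plant and a scalar adaptation gain mu in place of the harmonic transfer matrices and the stability gain matrix discussed above.

    ```python
    import numpy as np

    # LMS learning of an inverse transfer matrix W that approximates
    # T^(-1) from excitation/response pairs (toy setup; T, mu assumed).
    rng = np.random.default_rng(0)
    T = np.array([[2.0, 0.5],
                  [0.3, 1.5]])        # "unknown" plant transfer matrix
    W = np.zeros((2, 2))              # adaptive estimate of T^(-1)
    mu = 0.05                         # scalar adaptation gain

    for _ in range(5000):
        u = rng.standard_normal(2)    # excitation input
        y = T @ u                     # measured response
        e = u - W @ y                 # inverse-model output error
        W += mu * np.outer(e, y)      # LMS weight update

    # After adaptation, W @ T is close to the identity matrix.
    ```

    Adding measurement noise to `y` degrades the estimate, consistent with the abstract's observation that signal averaging is needed in noisy conditions.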

  13. Time-reversal and Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2017-04-01

    The probabilistic inversion technique is superior to the classical optimization-based approach in all but one respect: it requires exhaustive computations, which prohibits its use in very large inverse problems such as global seismic tomography or waveform inversion, to name a few. The advantages of the approach are, however, so appealing that there is a continuous effort to make such large inverse tasks manageable with the probabilistic approach. One promising possibility for achieving this goal relies on exploiting the internal symmetries of the seismological modeling problem at hand, namely time-reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into the probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time-reversal symmetry property and its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.

  14. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. For practical problems, however, the number of measurements is often large and the model parameters numerous, so conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed with the first damping parameter and recycle it for the subsequent damping parameters. These computational techniques significantly improve the efficiency of the inverse modeling algorithm. We apply the new method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. The algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. It is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD, our method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Our new inverse modeling method is therefore a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
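
    The basic Levenberg-Marquardt iteration that the paper accelerates can be sketched as follows. This is a hedged toy: the paper's contributions (a Krylov-subspace projection of the damped solve, with subspace recycling across damping parameters) are omitted, and the damped normal equations are solved directly on a small exponential-fit problem.

    ```python
    import numpy as np

    # Bare-bones Levenberg-Marquardt for min ||f(x) - d||^2, solving the
    # damped normal equations (J^T J + lam I) dx = -J^T r at each step.
    def levenberg_marquardt(f, jac, x0, data, lam=1e-2, iters=50):
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            r = f(x) - data
            J = jac(x)
            dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
            if np.linalg.norm(f(x + dx) - data) < np.linalg.norm(r):
                x, lam = x + dx, lam * 0.5   # accept step, relax damping
            else:
                lam *= 10.0                  # reject step, damp more heavily
        return x

    # Toy inversion: recover (a, b) from noiseless data d_i = a e^(-b t_i).
    t = np.linspace(0.0, 1.0, 20)
    d = 2.0 * np.exp(-3.0 * t)
    f = lambda x: x[0] * np.exp(-x[1] * t)
    jac = lambda x: np.column_stack([np.exp(-x[1] * t),
                                     -x[0] * t * np.exp(-x[1] * t)])
    est = levenberg_marquardt(f, jac, [1.0, 1.0], d)
    ```

    The `np.linalg.solve` call is exactly the step the paper replaces with a reduced Krylov-subspace solve when the parameter space is large.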

  15. An Improved 3D Joint Inversion Method of Potential Field Data Using Cross-Gradient Constraint and LSQR Method

    NASA Astrophysics Data System (ADS)

    Joulidehsar, Farshad; Moradzadeh, Ali; Doulati Ardejani, Faramarz

    2018-06-01

    The joint interpretation of two sets of geophysical data related to the same source is an appropriate way to decrease the non-uniqueness of the resulting models during the inversion process. Among the available methods, combining the two datasets through a cross-gradient constraint is an efficient approach. This method, however, is time-consuming for 3D inversion and cannot provide an exact assessment of the location and extent of the anomaly of interest. In this paper, the first improvement is to speed up the required calculation by replacing singular value decomposition with the least-squares QR (LSQR) method to solve the large-scale kernel matrix of the 3D inversion more rapidly. Furthermore, to improve the accuracy of the resulting models, a combination of a depth-weighting matrix and a compactness constraint, with automatic selection of the covariance of the initial parameters, is used in the proposed inversion algorithm. The algorithm was developed in the Matlab environment and first implemented on synthetic data. The 3D joint inversion of synthetic gravity and magnetic data shows a noticeable improvement in the results and increases the efficiency of the algorithm for large-scale problems. Additionally, a real gravity and magnetic dataset from the Jalalabad mine in southeastern Iran was tested. The results obtained by the improved joint 3D inversion with the cross-gradient and compactness constraints showed a mineralised zone in the depth interval of about 110-300 m, which is in good agreement with the available drilling data. This further confirms the accuracy and improvement of the proposed inversion algorithm.
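
    The cross-gradient function that couples the two models can be sketched directly. This is a hedged 2-D illustration (z-component only, regular grid, unit spacing assumed): joint inversion penalizes |t| = |∇m1 × ∇m2|, forcing the gradients of the two property models to be parallel, i.e. structurally similar.

    ```python
    import numpy as np

    # Cross-gradient t = d(m1)/dx * d(m2)/dy - d(m1)/dy * d(m2)/dx on a
    # 2-D grid; t vanishes wherever the two model gradients align.
    def cross_gradient(m1, m2):
        g1y, g1x = np.gradient(m1)   # np.gradient returns axis-0, axis-1 parts
        g2y, g2x = np.gradient(m2)
        return g1x * g2y - g1y * g2x

    # Two models sharing the same structure have zero cross-gradient.
    x = np.linspace(0.0, 1.0, 32)
    base = np.add.outer(x, x)                        # shared structural pattern
    t = cross_gradient(2.0 * base + 1.0, -3.0 * base)
    ```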

  16. Object recognition and pose estimation of planar objects from range data

    NASA Technical Reports Server (NTRS)

    Pendleton, Thomas W.; Chien, Chiun Hong; Littlefield, Mark L.; Magee, Michael

    1994-01-01

    The Extravehicular Activity Helper/Retriever (EVAHR) is a robotic device currently under development at the NASA Johnson Space Center that is designed to fetch objects or to assist in retrieving an astronaut who may have become inadvertently de-tethered. The EVAHR will be required to exhibit a high degree of intelligent autonomous operation and will base much of its reasoning upon information obtained from one or more three-dimensional sensors that it will carry and control. At the highest level of visual cognition and reasoning, the EVAHR will be required to detect objects, recognize them, and estimate their spatial orientation and location. The recognition phase and estimation of spatial pose will depend on the ability of the vision system to reliably extract geometric features of the objects such as whether the surface topologies observed are planar or curved and the spatial relationships between the component surfaces. In order to achieve these tasks, three-dimensional sensing of the operational environment and objects in the environment will therefore be essential. One of the sensors being considered to provide image data for object recognition and pose estimation is a phase-shift laser scanner. The characteristics of the data provided by this scanner have been studied and algorithms have been developed for segmenting range images into planar surfaces, extracting basic features such as surface area, and recognizing the object based on the characteristics of extracted features. Also, an approach has been developed for estimating the spatial orientation and location of the recognized object based on orientations of extracted planes and their intersection points. 
This paper presents some of the algorithms that have been developed for the purpose of recognizing and estimating the pose of objects as viewed by the laser scanner, and characterizes the desirability and utility of these algorithms within the context of the scanner itself, considering data quality and noise.

  17. Point-of-care mobile digital microscopy and deep learning for the detection of soil-transmitted helminths and Schistosoma haematobium.

    PubMed

    Holmström, Oscar; Linder, Nina; Ngasala, Billy; Mårtensson, Andreas; Linder, Ewert; Lundin, Mikael; Moilanen, Hannu; Suutala, Antti; Diwan, Vinod; Lundin, Johan

    2017-06-01

    Microscopy remains the gold standard in the diagnosis of neglected tropical diseases. As resource-limited rural areas often lack laboratory equipment and trained personnel, new diagnostic techniques are needed. Low-cost, point-of-care imaging devices show potential in the diagnosis of these diseases. Novel, digital image analysis algorithms can be utilized to automate sample analysis. We evaluated the imaging performance of a miniature digital microscopy scanner for the diagnosis of soil-transmitted helminths and Schistosoma haematobium, and trained a deep learning-based image analysis algorithm for automated detection of soil-transmitted helminths in the captured images. A total of 13 iodine-stained stool samples containing Ascaris lumbricoides, Trichuris trichiura and hookworm eggs and 4 urine samples containing Schistosoma haematobium were digitized using a reference whole slide-scanner and the mobile microscopy scanner. Parasites in the images were identified by visual examination and by analysis with a deep learning-based image analysis algorithm in the stool samples. Results were compared between the digital and visual analysis of the images showing helminth eggs. Parasite identification by visual analysis of digital slides captured with the mobile microscope was feasible for all analyzed parasites. Although the spatial resolution of the reference slide-scanner is higher, the resolution of the mobile microscope is sufficient for reliable identification and classification of all parasites studied. Digital image analysis of stool sample images captured with the mobile microscope showed high sensitivity for detection of all helminths studied (range of sensitivity = 83.3-100%) in the test set (n = 217) of manually labeled helminth eggs. In this proof-of-concept study, the imaging performance of a mobile, digital microscope was sufficient for visual detection of soil-transmitted helminths and Schistosoma haematobium.
Furthermore, we show that deep learning-based image analysis can be utilized for the automated detection and classification of helminths in the captured images.

  18. Point-of-care mobile digital microscopy and deep learning for the detection of soil-transmitted helminths and Schistosoma haematobium

    PubMed Central

    Holmström, Oscar; Linder, Nina; Ngasala, Billy; Mårtensson, Andreas; Linder, Ewert; Lundin, Mikael; Moilanen, Hannu; Suutala, Antti; Diwan, Vinod; Lundin, Johan

    2017-01-01

    ABSTRACT Background: Microscopy remains the gold standard in the diagnosis of neglected tropical diseases. As resource-limited rural areas often lack laboratory equipment and trained personnel, new diagnostic techniques are needed. Low-cost, point-of-care imaging devices show potential in the diagnosis of these diseases. Novel, digital image analysis algorithms can be utilized to automate sample analysis. Objective: Evaluation of the imaging performance of a miniature digital microscopy scanner for the diagnosis of soil-transmitted helminths and Schistosoma haematobium, and training of a deep learning-based image analysis algorithm for automated detection of soil-transmitted helminths in the captured images. Methods: A total of 13 iodine-stained stool samples containing Ascaris lumbricoides, Trichuris trichiura and hookworm eggs and 4 urine samples containing Schistosoma haematobium were digitized using a reference whole slide-scanner and the mobile microscopy scanner. Parasites in the images were identified by visual examination and by analysis with a deep learning-based image analysis algorithm in the stool samples. Results were compared between the digital and visual analysis of the images showing helminth eggs. Results: Parasite identification by visual analysis of digital slides captured with the mobile microscope was feasible for all analyzed parasites. Although the spatial resolution of the reference slide-scanner is higher, the resolution of the mobile microscope is sufficient for reliable identification and classification of all parasites studied. Digital image analysis of stool sample images captured with the mobile microscope showed high sensitivity for detection of all helminths studied (range of sensitivity = 83.3–100%) in the test set (n = 217) of manually labeled helminth eggs. 
Conclusions: In this proof-of-concept study, the imaging performance of a mobile, digital microscope was sufficient for visual detection of soil-transmitted helminths and Schistosoma haematobium. Furthermore, we show that deep learning-based image analysis can be utilized for the automated detection and classification of helminths in the captured images. PMID:28838305

  19. The inverse electroencephalography pipeline

    NASA Astrophysics Data System (ADS)

    Weinstein, David Michael

    The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.

  20. Arikan and Alamouti matrices based on fast block-wise inverse Jacket transform

    NASA Astrophysics Data System (ADS)

    Lee, Moon Ho; Khan, Md Hashem Ali; Kim, Kyeong Jin

    2013-12-01

    Recently, Lee and Hou (IEEE Signal Process Lett 13: 461-464, 2006) proposed one-dimensional and two-dimensional fast algorithms for block-wise inverse Jacket transforms (BIJTs). Their BIJTs are not true inverse Jacket transforms from a mathematical point of view, because the product of a matrix with its claimed inverse is not equal to the identity matrix. Therefore, we mathematically propose a fast block-wise inverse Jacket transform of orders N = 2^k, 3^k, 5^k, and 6^k, where k is a positive integer. Based on the Kronecker product of the successive lower-order Jacket matrices and the basis matrix, fast algorithms for realizing these transforms are obtained. Due to the simple inverses and fast algorithms of the Arikan polar binary and Alamouti multiple-input multiple-output (MIMO) non-binary matrices obtained from BIJTs, they can be applied in areas such as 3GPP physical-layer ultra mobile broadband permutation matrix design, first-order q-ary Reed-Muller code design, diagonal channel design, diagonal subchannel decomposition for interference alignment, and 4G MIMO long-term evolution Alamouti precoding design.

  1. Reconstruction of the temperature field for inverse ultrasound hyperthermia calculations at a muscle/bone interface.

    PubMed

    Liauh, Chihng-Tsung; Shih, Tzu-Ching; Huang, Huang-Wen; Lin, Win-Li

    2004-02-01

    An inverse algorithm with Tikhonov regularization of order zero has been used to estimate the intensity ratio of the reflected longitudinal wave to the incident longitudinal wave and that of the refracted shear wave to the total transmitted wave into bone when calculating the absorbed power field, and then to reconstruct the temperature distribution in muscle and bone regions from a limited number of temperature measurements during simulated ultrasound hyperthermia. The effects of the number of temperature sensors, the amount of noise superimposed on the temperature measurements, and the sensor locations on the performance of the inverse algorithm are investigated. Results show that noisy input data degrade the performance of the inverse algorithm, especially when the number of temperature sensors is small. Results also demonstrate an improvement in the accuracy of the temperature estimates when an optimal value of the regularization parameter is employed. Based on singular-value decomposition analysis, the optimal sensor position in a case utilizing only one temperature sensor can be determined so that the inverse algorithm converges to the true solution.
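
    Zeroth-order Tikhonov regularization has a simple closed form that is worth writing out. This is a generic hedged sketch (the matrix and data below are illustrative, not the hyperthermia model): for a linear(ized) problem A x = b, one minimizes ||A x - b||^2 + alpha^2 ||x||^2, giving x = (A^T A + alpha^2 I)^(-1) A^T b.

    ```python
    import numpy as np

    # Order-zero Tikhonov (ridge) solution of an ill-conditioned system.
    def tikhonov0(A, b, alpha):
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T @ b)

    # Nearly collinear columns make the unregularized normal equations
    # ill-conditioned; a small alpha keeps the estimate bounded.
    A = np.array([[1.0, 1.0],
                  [1.0, 1.0001],
                  [1.0, 0.9999]])
    b = A @ np.array([1.0, 2.0])
    x_reg = tikhonov0(A, b, alpha=1e-3)
    ```

    Choosing alpha trades data fit against solution norm, which is exactly the regularization-parameter tuning the abstract reports as improving the temperature estimates.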

  2. Joint Inversion of 1-D Magnetotelluric and Surface-Wave Dispersion Data with an Improved Multi-Objective Genetic Algorithm and Application to the Data of the Longmenshan Fault Zone

    NASA Astrophysics Data System (ADS)

    Wu, Pingping; Tan, Handong; Peng, Miao; Ma, Huan; Wang, Mao

    2018-05-01

    Magnetotellurics and seismic surface waves are two prominent geophysical methods for deep underground exploration. Joint inversion of these two datasets can help enhance the accuracy of inversion. In this paper, we describe a method for developing an improved multi-objective genetic algorithm (NSGA-SBX) and apply it to two numerical tests to verify the advantages of the algorithm. Our findings show that joint inversion with the NSGA-SBX method can improve the inversion results by strengthening structural coupling when the discontinuities of the electrical and velocity models are consistent, and in the case of inconsistent discontinuities between these models, joint inversion retains the advantages of the individual inversions. By applying the algorithm to four detection points along the Longmenshan fault zone, we observe several features. The Sichuan Basin demonstrates low S-wave velocity and high conductivity in the shallow crust, probably due to thick sedimentary layers. The eastern margin of the Tibetan Plateau shows high velocity and high resistivity in the shallow crust, while two low-velocity layers and a high-conductivity layer are observed in the middle-lower crust, probably indicating mid-crustal channel flow. Along the Longmenshan fault zone, a high-conductivity layer from 8 to 20 km is observed beneath the northern segment and deepens beneath the middle segment, which might be caused by the elevated fluid content of the fault zone.

  3. Analysis of iterative region-of-interest image reconstruction for x-ray computed tomography

    PubMed Central

    Sidky, Emil Y.; Kraemer, David N.; Roth, Erin G.; Ullberg, Christer; Reiser, Ingrid S.; Pan, Xiaochuan

    2014-01-01

    Abstract. One of the challenges for iterative image reconstruction (IIR) is that such algorithms solve an imaging model implicitly, requiring a complete representation of the scanned subject within the viewing domain of the scanner. This requirement can place a prohibitively high computational burden for IIR applied to x-ray computed tomography (CT), especially when high-resolution tomographic volumes are required. In this work, we aim to develop an IIR algorithm for direct region-of-interest (ROI) image reconstruction. The proposed class of IIR algorithms is based on an optimization problem that incorporates a data fidelity term, which compares a derivative of the estimated data with the available projection data. In order to characterize this optimization problem, we apply it to computer-simulated two-dimensional fan-beam CT data, using both ideal noiseless data and realistic data containing a level of noise comparable to that of the breast CT application. The proposed method is demonstrated for both complete field-of-view and ROI imaging. To demonstrate the potential utility of the proposed ROI imaging method, it is applied to actual CT scanner data. PMID:25685824

  4. Analysis of iterative region-of-interest image reconstruction for x-ray computed tomography.

    PubMed

    Sidky, Emil Y; Kraemer, David N; Roth, Erin G; Ullberg, Christer; Reiser, Ingrid S; Pan, Xiaochuan

    2014-10-03

    One of the challenges for iterative image reconstruction (IIR) is that such algorithms solve an imaging model implicitly, requiring a complete representation of the scanned subject within the viewing domain of the scanner. This requirement can place a prohibitively high computational burden for IIR applied to x-ray computed tomography (CT), especially when high-resolution tomographic volumes are required. In this work, we aim to develop an IIR algorithm for direct region-of-interest (ROI) image reconstruction. The proposed class of IIR algorithms is based on an optimization problem that incorporates a data fidelity term, which compares a derivative of the estimated data with the available projection data. In order to characterize this optimization problem, we apply it to computer-simulated two-dimensional fan-beam CT data, using both ideal noiseless data and realistic data containing a level of noise comparable to that of the breast CT application. The proposed method is demonstrated for both complete field-of-view and ROI imaging. To demonstrate the potential utility of the proposed ROI imaging method, it is applied to actual CT scanner data.

  5. Remote sensing of wetland parameters related to carbon cycling

    NASA Technical Reports Server (NTRS)

    Bartlett, David S.; Johnson, Robert W.

    1985-01-01

    Measurement of the rates of important biogeochemical fluxes on regional or global scales is vital to understanding the geochemical and climatic consequences of natural biospheric processes and of human intervention in those processes. Remote data gathering and interpretation techniques were used to examine important cycling processes taking place in wetlands over large geographic expanses. Large area estimation of vegetative biomass and productivity depends upon accurate, consistent measurements of canopy spectral reflectance and upon wide applicability of algorithms relating reflectance to biometric parameters. Results of the use of airborne multispectral scanner data to map above-ground biomass in a Delaware salt marsh are shown. The mapping uses an effective algorithm linking biomass to measured spectral reflectance and a means to correct the scanner data for large variations in the angle of observation of the canopy. The consistency of radiometric biomass algorithms for marsh grass when they are applied over large latitudinal and tidal range gradients were also examined. Results of a 1 year study of methane emissions from tidal wetlands along a salinity gradient show marked effects of temperature, season, and pore-water chemistry in mediating flux to the atmosphere.

  6. A general rough-surface inversion algorithm: Theory and application to SAR data

    NASA Technical Reports Server (NTRS)

    Moghaddam, M.

    1993-01-01

    Rough-surface inversion has significant applications in the interpretation of SAR data obtained over bare soil surfaces and agricultural lands. Due to the sparsity of data and the large pixel size in SAR applications, it is not feasible to carry out inversions based on numerical scattering models. The alternative is to use parameter estimation techniques based on approximate analytical or empirical models. Hence, there are two issues to be addressed, namely, what model to choose and what estimation algorithm to apply. Here, a small perturbation model (SPM) is used to express the backscattering coefficients of the rough surface in terms of three surface parameters. The algorithm used to estimate these parameters is based on a nonlinear least-squares criterion. Least-squares optimization methods are widely used in estimation theory, but the distinguishing factor for SAR applications is incorporating the stochastic nature of both the unknown parameters and the data into the formulation, which will be discussed in detail. The algorithm is tested with synthetic data, and several Newton-type least-squares minimization methods are discussed to compare their convergence characteristics. Finally, the algorithm is applied to multifrequency polarimetric SAR data obtained over some bare soil and agricultural fields. Results will be shown and compared to ground-truth measurements obtained from these areas. The strength of this general approach to inversion of SAR data is that it can be easily modified for use with any scattering model without changing any of the inversion steps. Note also that, for the same reason, it is not limited to rough surfaces and can be applied to any parameterized scattering process.

  7. Acoustic Impedance Inversion of Seismic Data Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Eladj, Said; Djarfour, Noureddine; Ferahtia, Djalal; Ouadfeul, Sid-Ali

    2013-04-01

    The inversion of seismic data can be used to constrain estimates of the Earth's acoustic impedance structure. This kind of problem is usually known to be non-linear and high-dimensional, with a complex search space which may be riddled with many local minima, resulting in irregular objective functions. We investigate here the performance and the application of a genetic algorithm in the inversion of seismic data. The proposed algorithm has the advantages of being easy to implement and of not getting stuck in local minima. The effects of population size, an elitism strategy, uniform cross-over and a low mutation rate are examined. The optimum solution parameters and performance were determined from the convergence of the testing error with respect to the generation number. For the fitness function, we used the L2 norm of the sample-to-sample difference between the reference and the inverted trace. The cross-over probability was 0.9-0.95, and mutation was tested at a probability of 0.01. The application of this genetic algorithm to synthetic data shows that the inverted acoustic impedance section is recovered efficiently. Keywords: seismic, inversion, acoustic impedance, genetic algorithm, fitness function, cross-over, mutation.
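    A minimal genetic-algorithm sketch with elitism, uniform cross-over at 0.9 probability, and mutation at 0.01, minimizing the L2 misfit to a reference trace; the "trace" here is random stand-in data, not a seismic impedance section:

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.uniform(-1, 1, 16)            # stand-in for a reference trace

def misfit(pop):
    # L2 norm of the sample-to-sample difference to the reference trace
    return np.sqrt(((pop - target) ** 2).sum(axis=1))

def evolve(pop_size=60, n_gen=200, p_cross=0.9, p_mut=0.01, n_elite=2):
    pop = rng.uniform(-1, 1, (pop_size, target.size))
    for _ in range(n_gen):
        pop = pop[np.argsort(misfit(pop))]            # best first
        children = [pop[:n_elite].copy()]             # elitism: keep the best
        n = n_elite
        while n < pop_size:
            i, j = rng.integers(0, pop_size // 2, 2)  # parents from top half
            c1, c2 = pop[i].copy(), pop[j].copy()
            if rng.random() < p_cross:                # uniform cross-over
                mask = rng.random(target.size) < 0.5
                c1[mask], c2[mask] = c2[mask], c1[mask]
            for c in (c1, c2):
                mut = rng.random(target.size) < p_mut # low mutation rate
                c[mut] += rng.normal(0.0, 0.1, mut.sum())
            children.append(np.stack([c1, c2]))
            n += 2
        pop = np.vstack(children)[:pop_size]
    return pop[np.argmin(misfit(pop))]

best = evolve()
```

    The mutation scale (0.1) and population size are illustrative choices, not values from the paper.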

  8. Performance impact of mutation operators of a subpopulation-based genetic algorithm for multi-robot task allocation problems.

    PubMed

    Liu, Chun; Kroll, Andreas

    2016-01-01

    Multi-robot task allocation determines the task sequence and distribution for a group of robots in multi-robot systems. It is a constrained combinatorial optimization problem, and it becomes more complex in the case of cooperative tasks because they introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm, a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation, is developed in this paper. Moreover, the impact of mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed when solving several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm can obtain better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; (2) inversion mutation performs better than other tested mutation operators when solving problems without cooperative tasks, and the swap-inversion combination performs better than other tested mutation operators/combinations when solving problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems.
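    The four mutation operators compared above can be illustrated on a small permutation (the positions chosen in the demo calls are arbitrary):

```python
def swap(tour, i, j):
    # exchange the genes at positions i and j
    t = tour[:]
    t[i], t[j] = t[j], t[i]
    return t

def insertion(tour, i, j):
    # remove the gene at position i and reinsert it at position j
    t = tour[:]
    g = t.pop(i)
    t.insert(j, g)
    return t

def inversion(tour, i, j):
    # reverse the segment between positions i and j (inclusive)
    t = tour[:]
    t[i:j + 1] = reversed(t[i:j + 1])
    return t

def displacement(tour, i, j, k):
    # cut the segment [i, j] out and reinsert it at position k
    t = tour[:]
    seg = t[i:j + 1]
    del t[i:j + 1]
    return t[:k] + seg + t[k:]

tour = [0, 1, 2, 3, 4, 5]
print(swap(tour, 1, 4))            # [0, 4, 2, 3, 1, 5]
print(insertion(tour, 1, 4))       # [0, 2, 3, 4, 1, 5]
print(inversion(tour, 1, 4))       # [0, 4, 3, 2, 1, 5]
print(displacement(tour, 1, 2, 3)) # [0, 3, 4, 1, 2, 5]
```

    Each operator returns a new permutation, so combinations (e.g. swap-inversion) can be applied by composing calls.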

  9. Genetic algorithms and their use in Geophysical Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, Paul B.

    1999-04-01

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution, are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Optimal efficiency is usually achieved with smaller (< 50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used, such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (> 2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca Mountain using gravity data, the second an inversion for velocity structure in the crust of the South Island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California.
The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.
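    The selection methods contrasted above can be sketched as follows; the fitness values are toy numbers, and the power-law exponent is one common form of the fitness scaling the abstract refers to:

```python
import random

random.seed(1)
fitnesses = [0.9, 0.5, 2.0, 1.2, 0.1]   # toy fitness values; higher is fitter

def tournament_select(fit, k=2):
    # draw k candidates at random and keep the fittest; no fitness scaling
    # is needed ("autoscaling") because only rank within the tournament matters
    contenders = random.sample(range(len(fit)), k)
    return max(contenders, key=lambda i: fit[i])

def roulette_select(fit, scale=2.0):
    # proportional (roulette wheel) selection; raising fitness to a power
    # sharpens the distribution, illustrating why scaling is needed
    weights = [f ** scale for f in fit]
    return random.choices(range(len(fit)), weights=weights)[0]

picks = [tournament_select(fitnesses) for _ in range(1000)]
```

    Over many draws, the fittest model (index 2) is selected most often by the tournament, without any explicit scaling of the raw fitness values.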

  10. Genetic algorithms and their use in geophysical problems

    NASA Astrophysics Data System (ADS)

    Parker, Paul Bradley

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution, are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Also, optimal efficiency is usually achieved with smaller (<50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used, such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (>2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca Mountain using gravity data, the second an inversion for velocity structure in the crust of the South Island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California.
The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.

  11. SU-F-I-24: Feasibility of Magnetic Susceptibility to Relative Electron Density Conversion Method for Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ito, K; Kadoya, N; Chiba, M

    2016-06-15

    Purpose: The aim of this study is to develop radiation treatment planning using magnetic susceptibility obtained from quantitative susceptibility mapping (QSM) via MR imaging. This study demonstrates the feasibility of a method for generating a substitute for a CT image from an MRI. Methods: The head of a healthy volunteer was scanned using a CT scanner and a 3.0 T MRI scanner. The CT imaging was performed with a slice thickness of 2.5 mm at 80 and 120 kV (dual-energy scan). These CT images were converted to relative electron density (rED) using the CT-rED conversion table generated by a previous dual-energy CT scan. The CT-rED conversion table was generated by converting the energy-subtracted CT number to rED via a single linear relationship. One T2 star-weighted 3D gradient echo-based sequence with four different echo times was acquired using the MRI scanner. These T2 star-weighted images were used to estimate the phase data. To estimate the local field map, Laplacian unwrapping of the phase and a background field removal algorithm were implemented to process the phase data. To generate a magnetic susceptibility map from the local field map, we used the morphology-enabled dipole inversion method. The rED map was resampled to the same resolution as the magnetic susceptibility map, and the magnetic susceptibility-rED conversion table was obtained via voxel-by-voxel mapping between the magnetic susceptibility and rED maps. Results: A correlation between magnetic susceptibility and rED was not observed with our method. Conclusion: Our results show that no correlation between magnetic susceptibility and rED is observed. As the next step, we will assume that each voxel of the magnetic susceptibility map comprises two materials, such as water (0 ppm) and bone (-2.2 ppm) or water and marrow (0.81 ppm), and estimate the composition of each voxel from the ratio of the two materials.

  12. What NonScanner products are available?

    Atmospheric Science Data Center

    2014-12-08

    ... product. More information is available in the Edition3 Data Quality Summary, including a special website to obtain user-applied corrected ... algorithm. Because of these differences, it is best to work with these two data sets separately. ERBE/ERBS ...

  13. Key Generation for Fast Inversion of the Paillier Encryption Function

    NASA Astrophysics Data System (ADS)

    Hirano, Takato; Tanaka, Keisuke

    We study fast inversion of the Paillier encryption function. In particular, we focus only on key generation and do not modify the Paillier encryption function itself. We propose three key generation algorithms based on speeding-up techniques for the RSA encryption function. By using our algorithms, the size of the private CRT exponent is half that of Paillier-CRT. The first algorithm employs the extended Euclidean algorithm. The second algorithm employs factoring algorithms and can construct a private CRT exponent with low Hamming weight. The third algorithm is a variant of the second and has some advantages, such as compression of the private CRT exponent and no requirement for factoring algorithms. We also propose parameter settings for these algorithms and analyze the security of the Paillier encryption function under these algorithms against known attacks. Finally, we give experimental results for our algorithms.
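    For context, here is a toy sketch of the unmodified (textbook) Paillier function that the key-generation work builds on, using g = n + 1 and deliberately small primes; the paper's CRT-based speed-ups and low-Hamming-weight exponents are not reproduced here:

```python
from math import gcd
import random

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p, q):
    # textbook Paillier with g = n + 1; p and q are primes supplied by the caller
    n = p * q
    lam = lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because g = n + 1
    return (n,), (n, lam, mu)

def encrypt(pk, m):
    (n,) = pk
    n2 = n * n
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    # c = (1 + n)^m * r^n mod n^2
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    n, lam, mu = sk
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n     # the "L function": L(u) = (u - 1) / n
    return (L * mu) % n

pk, sk = keygen(293, 433)              # toy primes; real moduli are far larger
c1, c2 = encrypt(pk, 42), encrypt(pk, 58)
# additive homomorphism: Dec(c1 * c2 mod n^2) = m1 + m2 mod n
total = decrypt(sk, (c1 * c2) % (pk[0] ** 2))
```

    Inversion (decryption) cost is dominated by the exponentiation to lam, which is what the paper's shortened private CRT exponents accelerate.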

  14. Modeling and characterization of the Earth Radiation Budget Experiment (ERBE) nonscanner and scanner sensors

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Pandey, Dhirendra K.; Taylor, Deborah B.

    1989-01-01

    The Earth Radiation Budget Experiment (ERBE) is making high-absolute-accuracy measurements of the reflected solar and Earth-emitted radiation as well as the incoming solar radiation from three satellites: ERBS, NOAA-9, and NOAA-10. Each satellite has four Earth-looking nonscanning radiometers and three scanning radiometers. A fifth nonscanner, the solar monitor, measures the incoming solar radiation. The development of the ERBE sensor characterization procedures are described using the calibration data for each of the Earth-looking nonscanners and scanners. Sensor models for the ERBE radiometers are developed including the radiative exchange, conductive heat flow, and electronics processing for transient and steady state conditions. The steady state models are used to interpret the sensor outputs, resulting in the data reduction algorithms for the ERBE instruments. Both ground calibration and flight calibration procedures are treated and analyzed. The ground and flight calibration coefficients for the data reduction algorithms are presented.

  15. Cloud screening Coastal Zone Color Scanner images using channel 5

    NASA Technical Reports Server (NTRS)

    Eckstein, B. A.; Simpson, J. J.

    1991-01-01

    Clouds are removed from Coastal Zone Color Scanner (CZCS) data using channel 5. Instrumentation problems require pre-processing of channel 5 before an intelligent cloud-screening algorithm can be used. For example, at intervals of about 16 lines, the sensor records anomalously low radiances. Moreover, the calibration equation yields negative radiances when the sensor records zero counts, and pixels corrupted by electronic overshoot must also be excluded. The remaining pixels may then be used in conjunction with the procedure of Simpson and Humphrey to determine the CZCS cloud mask. These results plus in situ observations of phytoplankton pigment concentration show that pre-processing and proper cloud-screening of CZCS data are necessary for accurate satellite-derived pigment concentrations. This is especially true in the coastal margins, where pigment content is high and image distortion associated with electronic overshoot is also present. The pre-processing algorithm is critical to obtaining accurate global estimates of pigment from spacecraft data.
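    The pre-processing steps can be sketched as a mask computation; the thresholds and the overshoot test below are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def preprocess_channel5(radiance, overshoot_jump=20.0):
    """Return a mask of pixels usable for cloud screening (illustrative
    thresholds; not the calibrated values from the paper)."""
    valid = radiance > 0.0                      # exclude negative radiances (zero counts)
    # anomalously low scan lines: flag lines whose mean falls far below
    # the scene's median line mean
    line_means = radiance.mean(axis=1)
    low_lines = line_means < 0.5 * np.median(line_means)
    valid &= ~low_lines[:, None]
    # electronic overshoot: a large negative jump immediately after a bright pixel
    jump = np.diff(radiance, axis=1, prepend=radiance[:, :1])
    valid &= ~(jump < -overshoot_jump)
    return valid

scene = np.full((32, 32), 10.0)
scene[16, :] = 0.1     # an anomalously low scan line
scene[4, 10] = -1.0    # a negative calibrated radiance
mask = preprocess_channel5(scene)
```

    Only pixels surviving this mask would then be passed to the cloud-screening procedure itself.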

  16. Investigation of correlation classification techniques

    NASA Technical Reports Server (NTRS)

    Haskell, R. E.

    1975-01-01

    A two-step classification algorithm for processing multispectral scanner data was developed and tested. The first step is a single pass clustering algorithm that assigns each pixel, based on its spectral signature, to a particular cluster. The output of that step is a cluster tape in which a single integer is associated with each pixel. The cluster tape is used as the input to the second step, where ground truth information is used to classify each cluster using an iterative method of potentials. Once the clusters have been assigned to classes the cluster tape is read pixel-by-pixel and an output tape is produced in which each pixel is assigned to its proper class. In addition to the digital classification programs, a method of using correlation clustering to process multispectral scanner data in real time by means of an interactive color video display is also described.
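    The first step, single-pass clustering of spectral signatures, can be sketched as follows; the distance threshold is an assumed parameter, and the cluster-to-class assignment of the second step is not shown:

```python
import numpy as np

def single_pass_cluster(pixels, radius=1.0):
    """Assign each pixel's spectral signature to the nearest existing cluster
    if its centre lies within `radius`; otherwise start a new cluster."""
    centres, counts, labels = [], [], []
    for x in pixels:
        if centres:
            d = np.linalg.norm(np.array(centres) - x, axis=1)
            k = int(np.argmin(d))
            if d[k] < radius:
                # fold the pixel into the running mean of cluster k
                counts[k] += 1
                centres[k] = centres[k] + (x - centres[k]) / counts[k]
                labels.append(k)
                continue
        centres.append(x.astype(float))
        counts.append(1)
        labels.append(len(centres) - 1)
    return np.array(labels), np.array(centres)

rng = np.random.default_rng(2)
a = rng.normal([0, 0, 0, 0], 0.1, (50, 4))   # 4-band signatures, group A
b = rng.normal([5, 5, 5, 5], 0.1, (50, 4))   # 4-band signatures, group B
labels, centres = single_pass_cluster(np.vstack([a, b]))
```

    In the paper's pipeline, the resulting integer labels form the cluster tape, and ground truth then maps each cluster to a class.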

  17. Generic simulation of multi-element ladar scanner kinematics in USU LadarSIM

    NASA Astrophysics Data System (ADS)

    Omer, David; Call, Benjamin; Pack, Robert; Fullmer, Rees

    2006-05-01

    This paper presents a generic simulation model for a ladar scanner with up to three scan elements, each having a steering, stabilization and/or pattern-scanning role. Of interest is the development of algorithms that automatically generate commands to the scan elements given beam-steering objectives out of the ladar aperture and the base motion of the sensor platform. First, a straightforward single-element body-fixed beam-steering methodology is presented. Then a unique multi-element redirective and reflective space-fixed beam-steering methodology is explained. It is shown that standard direction cosine matrix decomposition methods fail when using two orthogonal, space-fixed rotations, thus demanding the development of a new algorithm for beam steering. Finally, a related steering control methodology is presented that uses two separate optical elements mathematically combined to determine the necessary scan element commands. Limits, restrictions, and results of this methodology are presented.

  18. Inverse problem of radiofrequency sounding of ionosphere

    NASA Astrophysics Data System (ADS)

    Velichko, E. N.; Grishentsev, A. Yu.; Korobeynikov, A. G.

    2016-01-01

    An algorithm for the solution of the inverse problem of vertical ionosphere sounding and a mathematical model of noise filtering are presented. An automated system for processing and analysis of spectrograms of vertical ionosphere sounding based on our algorithm is described. It is shown that the algorithm we suggest has a rather high efficiency. This is supported by the data obtained at the ionospheric stations of the so-called “AIS-M” type.

  19. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and model parameters are often numerous, conventional methods for solving the inverse modeling problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by using these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. By comparing with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment.
Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
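    The core idea of projecting the damped normal equations onto a Krylov subspace once and recycling it across damping parameters can be sketched on a small dense stand-in problem (not the MADS implementation); note that the Krylov space of H + lam*I is the same as that of H, which is what makes the recycling exact:

```python
import numpy as np

rng = np.random.default_rng(3)
J = rng.normal(size=(200, 40))        # Jacobian: many observations, fewer parameters
r = rng.normal(size=200)              # residual vector

def krylov_basis(H, g, m):
    # orthonormal basis of span{g, Hg, ..., H^(m-1) g} (Lanczos/Arnoldi)
    Q = np.zeros((g.size, m))
    Q[:, 0] = g / np.linalg.norm(g)
    for k in range(1, m):
        w = H @ Q[:, k - 1]
        w -= Q[:, :k] @ (Q[:, :k].T @ w)   # orthogonalise against earlier vectors
        w -= Q[:, :k] @ (Q[:, :k].T @ w)   # repeated for numerical stability
        Q[:, k] = w / np.linalg.norm(w)
    return Q

H, g = J.T @ J, J.T @ r
Q = krylov_basis(H, g, m=25)          # built once ...
HQ, gQ = Q.T @ H @ Q, Q.T @ g

errs = []
for lam in (1e-2, 1e-1, 1.0):         # ... and recycled for every damping parameter
    # projected damped normal equations: (Q^T H Q + lam I) y = Q^T g
    y = np.linalg.solve(HQ + lam * np.eye(Q.shape[1]), gQ)
    step = Q @ y                      # lift back to the full parameter space
    direct = np.linalg.solve(H + lam * np.eye(H.shape[0]), g)
    errs.append(np.linalg.norm(step - direct) / np.linalg.norm(direct))
```

    Each additional damping parameter costs only a small (m x m) solve instead of a full dense solve.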

  20. DenInv3D: a geophysical software for three-dimensional density inversion of gravity field data

    NASA Astrophysics Data System (ADS)

    Tian, Yu; Ke, Xiaoping; Wang, Yong

    2018-04-01

    This paper presents a three-dimensional density inversion software called DenInv3D that operates on gravity and gravity gradient data. The software performs inversion modelling, kernel function calculation, and inversion calculations using the improved preconditioned conjugate gradient (PCG) algorithm. In the PCG algorithm, due to the uncertainty of empirical parameters, such as the Lagrange multiplier, we use the inflection point of the L-curve as the regularisation parameter. The software can construct unequally spaced grids and perform inversions using such grids, which enables changing the resolution of the inversion results at different depths. Through inversion of airborne gradiometry data on the Australian Kauring test site, we discovered that anomalous blocks of different sizes are present within the study area in addition to the central anomalies. The software of DenInv3D can be downloaded from http://159.226.162.30.
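    The role of the regularisation parameter can be illustrated by tracing the L-curve for a small Tikhonov problem; the finite-difference curvature estimate below is a crude stand-in for the software's corner detection, and the problem data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(60, 30))
x_true = rng.normal(size=30)
b = A @ x_true + 0.05 * rng.normal(size=60)     # noisy data

lams = np.logspace(-4, 2, 25)
res_norms, sol_norms = [], []
for lam in lams:
    # Tikhonov-regularised solution of min ||Ax - b||^2 + lam^2 ||x||^2
    x = np.linalg.solve(A.T @ A + lam ** 2 * np.eye(30), A.T @ b)
    res_norms.append(np.linalg.norm(A @ x - b))
    sol_norms.append(np.linalg.norm(x))

# the L-curve is log||x|| against log||Ax - b||; its corner (the point of
# maximum curvature) balances data fit against model norm
lr, ls = np.log(res_norms), np.log(sol_norms)
curvature = np.gradient(lr) * np.gradient(np.gradient(ls)) \
          - np.gradient(ls) * np.gradient(np.gradient(lr))
lam_corner = lams[np.argmax(np.abs(curvature))]
```

    Increasing the parameter trades a larger data misfit for a smaller (smoother) model, which is what makes the corner a natural compromise point.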

  1. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  2. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  3. Embedded real-time image processing hardware for feature extraction and clustering

    NASA Astrophysics Data System (ADS)

    Chiu, Lihu; Chang, Grant

    2003-08-01

    Printronix, Inc. uses scanner-based image systems to perform print quality measurements for line-matrix printers. The size of the image samples and the image definition required make commercial scanners convenient to use. The image processing is relatively well defined, and we are able to simplify many of the calculations into hardware equations and "c" code. Rapidly prototyping the system using DSP-based "c" code gets the algorithms well defined early in the development cycle. Once a working system is defined, the rest of the process involves splitting the task up between the FPGA and the DSP implementation. Deciding which of the two to use, the DSP or the FPGA, is a simple matter of trial benchmarking. There are two kinds of benchmarking: one for speed and the other for memory. The more memory-intensive algorithms should run in the DSP, and the simple real-time tasks can use the FPGA most effectively. Once the task is split, we can decide on which platform each algorithm should be executed. This involves prototyping all the code on the DSP, then timing various blocks of the algorithm. Slow routines can be optimized using the compiler tools; if further reduction in time is needed, they can be moved into tasks that the FPGA performs.

  4. Assessment of calcium scoring performance in cardiac computed tomography.

    PubMed

    Ulzheimer, Stefan; Kalender, Willi A

    2003-03-01

    Electron beam tomography (EBT) has been used for cardiac diagnosis and the quantitative assessment of coronary calcium since the late 1980s. The introduction of mechanical multi-slice spiral CT (MSCT) scanners with shorter rotation times opened new possibilities of cardiac imaging with conventional CT scanners. The purpose of this work was to qualitatively and quantitatively evaluate the performance of EBT and MSCT for the task of coronary artery calcium imaging as a function of acquisition protocol, heart rate, spiral reconstruction algorithm (where applicable) and calcium scoring method. A cardiac CT semi-anthropomorphic phantom was designed and manufactured for the investigation of all relevant image quality parameters in cardiac CT. This phantom includes various test objects, some of which can be moved within the anthropomorphic phantom in a manner that mimics realistic heart motion. These tools were used to qualitatively and quantitatively demonstrate the accuracy of coronary calcium imaging using typical protocols for an electron beam (Evolution C-150XP, Imatron, South San Francisco, Calif.) and a 0.5-s four-slice spiral CT scanner (Sensation 4, Siemens, Erlangen, Germany). A special focus was put on the method of quantifying coronary calcium, and three scoring systems were evaluated (Agatston, volume, and mass scoring). Good reproducibility in coronary calcium scoring is always the result of a combination of high temporal and spatial resolution; consequently, thin-slice protocols in combination with retrospective gating on MSCT scanners yielded the best results. The Agatston score was found to be the least reproducible scoring method. The hydroxyapatite mass, being more reproducible, comparable across different scanners, and a physical quantitative measure, appears to be the method of choice for future clinical studies. The hydroxyapatite mass is highly correlated with the Agatston score.
The introduced phantoms can be used to quantitatively assess the performance characteristics of, for example, different scanners, reconstruction algorithms, and quantification methods in cardiac CT. This is especially important for quantitative tasks, such as the determination of the amount of calcium in the coronary arteries, to achieve high and constant quality in this field.
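    The scoring methods compared above can be sketched for a single lesion in one slice; the mass calibration factor below is a placeholder, and real scoring sums contributions over all lesions and slices:

```python
import numpy as np

def agatston_weight(peak_hu):
    # standard Agatston density weighting of the lesion's peak attenuation:
    # 130-199 HU -> 1, 200-299 -> 2, 300-399 -> 3, >= 400 -> 4
    if peak_hu < 130:
        return 0
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def score_lesion(hu, pixel_area_mm2, voxel_volume_mm3, calib=0.001):
    """Score a single-lesion HU array from one slice (sketch; `calib`, the
    mass calibration factor in mg per HU*mm^3, is a placeholder value)."""
    mask = hu >= 130                      # conventional calcium threshold
    area = mask.sum() * pixel_area_mm2
    agatston = area * agatston_weight(hu[mask].max())
    volume = mask.sum() * voxel_volume_mm3
    mass = volume * hu[mask].mean() * calib
    return agatston, volume, mass

lesion = np.array([[  0, 150, 220],
                   [140, 410, 180],
                   [  0, 135,   0]], dtype=float)
agatston, volume, mass = score_lesion(lesion, pixel_area_mm2=0.25,
                                      voxel_volume_mm3=0.75)
```

    The step-wise peak-HU weighting is what makes the Agatston score sensitive to noise and motion, while volume and mass depend only on the thresholded voxels and mean attenuation.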

  5. A hybrid artificial bee colony algorithm and pattern search method for inversion of particle size distribution from spectral extinction data

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Xing, Jian

    2017-10-01

    In this paper, a hybrid artificial bee colony (ABC) algorithm and pattern search (PS) method is proposed and applied to the recovery of particle size distribution (PSD) from spectral extinction data. To be more useful and practical, the size distribution function is modelled as the general Johnson's S_B function, which overcomes the difficulty, encountered in many real circumstances, of not knowing the exact distribution type beforehand. The proposed hybrid algorithm is evaluated through simulated examples involving unimodal, bimodal and trimodal PSDs with different widths and mean particle diameters. For comparison, all examples are additionally validated by the single ABC algorithm. In addition, the performance of the proposed algorithm is further tested with actual extinction measurements of real standard polystyrene samples immersed in water. Simulation and experimental results illustrate that the hybrid algorithm can be used as an effective technique to retrieve PSDs with high reliability and accuracy. Compared with the single ABC algorithm, our proposed algorithm produces more accurate and robust inversion results while taking nearly the same CPU time as the ABC algorithm alone. The superiority of the ABC and PS hybridization strategy in reaching a better balance of estimation accuracy and computational effort increases its potential as an effective inversion technique for reliable and efficient actual measurement of PSDs.

  6. Panoramic 3D Reconstruction by Fusing Color Intensity and Laser Range Data

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Lu, Jian

    Technologies for capturing panoramic (360-degree) three-dimensional information in a real environment have many applications in fields such as virtual and augmented reality, security, and robot navigation. In this study, we examine an acquisition device constructed from a regular CCD camera and a 2D laser range scanner, along with a technique for panoramic 3D reconstruction using a data fusion algorithm based on an energy minimization framework. The acquisition device can capture two types of data of a panoramic scene without occlusion between the two sensors: a dense spatio-temporal volume from the camera and distance information from the laser scanner. We resample the dense spatio-temporal volume to generate a dense multi-perspective panorama that has spatial resolution equal to that of the original images acquired using the regular camera, and we also estimate a dense panoramic depth-map corresponding to the generated reference panorama by extracting trajectories from the dense spatio-temporal volume with a selecting camera. Moreover, to determine distance information robustly, we propose a data fusion algorithm embedded in an energy minimization framework that incorporates active depth measurements from the 2D laser range scanner and passive geometry reconstruction from the image sequence obtained using the CCD camera. Thereby, measurement precision and robustness can be improved beyond those available from conventional methods using either passive geometry reconstruction (stereo vision) or a laser range scanner alone. Experimental results using both synthetic and actual images show that our approach can produce high-quality panoramas and perform accurate 3D reconstruction in a panoramic environment.

  7. A compressed sensing based 3D resistivity inversion algorithm for hydrogeological applications

    NASA Astrophysics Data System (ADS)

    Ranjan, Shashi; Kambhammettu, B. V. N. P.; Peddinti, Srinivasa Rao; Adinarayana, J.

    2018-04-01

    Image reconstruction from discrete electrical responses poses a number of computational and mathematical challenges. Smoothness-constrained regularized inversion from limited measurements may fail to detect resistivity anomalies and sharp interfaces separating hydrostratigraphic units. Under favourable conditions, compressed sensing (CS) can be thought of as an alternative that reconstructs image features by finding sparse solutions to highly underdetermined linear systems. This paper deals with the development of a CS-assisted, 3-D resistivity inversion algorithm for use by hydrogeologists and groundwater scientists. A CS-based l1-regularized least-squares algorithm was applied to solve the resistivity inversion problem. Sparseness in the model update vector is introduced through a block-oriented discrete cosine transformation, with recovery of the signal achieved through convex optimization. The equivalent quadratic program was solved using a primal-dual interior point method. Applicability of the proposed algorithm was demonstrated using synthetic and field examples drawn from hydrogeology. The proposed algorithm outperformed the conventional (smoothness-constrained) least-squares method in recovering the model parameters with much less data, while preserving the sharp resistivity fronts separating geologic layers. Resistivity anomalies represented by discrete homogeneous blocks embedded in contrasting geologic layers were better imaged by the proposed algorithm. In comparison to the conventional algorithm, CS resulted in an efficient (an increase in R2 from 0.62 to 0.78; a decrease in RMSE from 125.14 Ω-m to 72.46 Ω-m), reliable, and fast-converging (run time decreased by about 25%) solution.
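    The core computation described here, l1-regularized least squares over DCT coefficients, can be sketched with an iterative soft-thresholding (ISTA) loop standing in for the paper's primal-dual interior point solver; the operator and dimensions below are illustrative toys, not the ERT forward model:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix; rows are cosine basis vectors.
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    C[0, :] /= np.sqrt(2.0)
    return C

def l1_inversion(A, b, lam, n_iter=1500):
    """ISTA for min_c 0.5||A C^T c - b||^2 + lam ||c||_1, where c are DCT
    coefficients of the model, so sparsity is imposed in the transform domain."""
    C = dct_matrix(A.shape[1])
    B = A @ C.T                               # forward operator on coefficients
    L = np.linalg.norm(B, 2) ** 2             # Lipschitz constant of the gradient
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = B.T @ (B @ c - b)                 # gradient of the data-misfit term
        z = c - g / L
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return C.T @ c                            # model back in physical space

rng = np.random.default_rng(0)
n = 64
C = dct_matrix(n)
c_true = np.zeros(n)
c_true[[2, 7, 15, 30, 41]] = [3.0, -2.0, 1.5, 1.0, -1.0]   # sparse in DCT domain
m_true = C.T @ c_true                                       # toy "resistivity model"
A = rng.standard_normal((40, n)) / np.sqrt(40)              # underdetermined operator
b = A @ m_true
m_rec = l1_inversion(A, b, lam=0.01)
rel_err = np.linalg.norm(m_rec - m_true) / np.linalg.norm(m_true)
```

    With 40 measurements and 64 unknowns the plain least-squares problem is underdetermined, yet the l1 penalty in the DCT domain recovers the model closely, which is the mechanism the abstract credits for preserving sharp resistivity fronts.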

  8. Comparison of Compressed Sensing Algorithms for Inversion of 3-D Electrical Resistivity Tomography.

    NASA Astrophysics Data System (ADS)

    Peddinti, S. R.; Ranjan, S.; Kbvn, D. P.

    2016-12-01

    Image reconstruction problems arising in electrical resistivity tomography (ERT) are highly non-linear, sparse, and ill-posed. The inverse problem is much more severe when dealing with 3-D datasets, which result in large matrices. Conventional gradient-based techniques using L2-norm minimization with some form of regularization can impose a smoothness constraint on the solution. Compressed sensing (CS) is a relatively new technique that takes advantage of the inherent sparsity of the parameter space in one form or another. If favorable conditions are met, CS has been proven to be an efficient image reconstruction technique that uses limited observations without losing edge sharpness. This paper deals with the development of an open-source 3-D resistivity inversion tool in a CS framework. The forward model was adopted from RESINVM3D (Pidlisecky et al., 2007), with CS as the inverse code. A discrete cosine transformation (DCT) was used to induce model sparsity in orthogonal form. Two CS-based algorithms, the interior point method and two-step IST, were evaluated on a synthetic layered model with surface electrode observations. The algorithms were tested (in terms of quality and convergence) under varying degrees of parameter heterogeneity, model refinement, and reduced observation data space. In comparison to conventional gradient algorithms, CS was shown to effectively reconstruct the sub-surface image at less computational cost. This was observed as a general increase in NRMSE from 0.5 in 10 iterations using the gradient algorithm to 0.8 in 5 iterations using the CS algorithms.

  9. Sparsity constrained split feasibility for dose-volume constraints in inverse planning of intensity-modulated photon or proton therapy

    NASA Astrophysics Data System (ADS)

    Penfold, Scott; Zalas, Rafał; Casiraghi, Margherita; Brooke, Mark; Censor, Yair; Schulte, Reinhard

    2017-05-01

    A split feasibility formulation for the inverse problem of intensity-modulated radiation therapy treatment planning with dose-volume constraints included in the planning algorithm is presented. It involves a new type of sparsity constraint that enables the inclusion of a percentage-violation constraint in the model problem and its handling by continuous (as opposed to integer) methods. We propose an iterative algorithmic framework for solving such a problem by applying the feasibility-seeking CQ-algorithm of Byrne combined with the automatic relaxation method that uses cyclic projections. Detailed implementation instructions are furnished. Functionality of the algorithm was demonstrated through the creation of an intensity-modulated proton therapy plan for a simple 2D C-shaped geometry and also for a realistic base-of-skull chordoma treatment site. Monte Carlo simulations of proton pencil beams of varying energy were conducted to obtain dose distributions for the 2D test case. A research release of the Pinnacle 3 proton treatment planning system was used to extract pencil beam doses for a clinical base-of-skull chordoma case. In both cases the beamlet doses were calculated to satisfy dose-volume constraints according to our new algorithm. Examination of the dose-volume histograms following inverse planning with our algorithm demonstrated that it performed as intended. The application of our proposed algorithm to dose-volume constraint inverse planning was successfully demonstrated. Comparison with optimized dose distributions from the research release of the Pinnacle 3 treatment planning system showed the algorithm could achieve equivalent or superior results.
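    The feasibility-seeking CQ iteration at the heart of this formulation is simple to state: alternate a projection onto the dose constraint set in dose space with a projected step in beamlet space, x ← P_C(x + γ Aᵀ(P_Q(Ax) − Ax)). A toy sketch with box dose constraints and nonnegative beamlet weights (the matrix and bounds are made up, and the percentage-violation sparsity constraint and cyclic relaxation of the paper are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(30, 20))      # toy dose-influence matrix
d_min, d_max = 1.0, 3.0                        # per-voxel dose window (set Q)

proj_C = lambda x: np.maximum(x, 0.0)          # beamlet weights must be >= 0
proj_Q = lambda d: np.clip(d, d_min, d_max)    # dose must lie in the window

gamma = 1.0 / np.linalg.norm(A, 2) ** 2        # step size in (0, 2/||A||^2)
x = np.zeros(A.shape[1])
for _ in range(5000):
    d = A @ x
    # CQ iteration: move the beamlet weights toward dose feasibility,
    # then project back onto the nonnegativity constraint set C.
    x = proj_C(x + gamma * A.T @ (proj_Q(d) - d))

# Largest componentwise violation of the dose window at the final iterate.
violation = np.max(np.abs(A @ x - proj_Q(A @ x)))
```

    Because both projections are cheap and the iteration uses only matrix-vector products, the scheme scales to clinical beamlet counts, which is the practical appeal of feasibility-seeking over full optimization.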

  10. Some practical aspects of prestack waveform inversion using a genetic algorithm: An example from the east Texas Woodbine gas sand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mallick, S.

    1999-03-01

    In this paper, a prestack inversion method using a genetic algorithm (GA) is presented, and issues relating to the implementation of prestack GA inversion in practice are discussed. GA is a Monte-Carlo type inversion, using a natural analogy to the biological evolution process. When GA is cast into a Bayesian framework, a priori information of the model parameters and the physics of the forward problem are used to compute synthetic data. These synthetic data can then be matched with observations to obtain approximate estimates of the marginal a posteriori probability density (PPD) functions in the model space. Plots of these PPD functions allow an interpreter to choose models which best describe the specific geologic setting and lead to an accurate prediction of seismic lithology. Poststack inversion and prestack GA inversion were applied to a Woodbine gas sand data set from East Texas. A comparison of prestack inversion with poststack inversion demonstrates that prestack inversion shows detailed stratigraphic features of the subsurface which are not visible on the poststack inversion.
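    The Monte-Carlo character of GA inversion, a population of candidate models evolved by selection, crossover, and mutation against a data misfit, can be sketched on a toy forward model (a single damped arrival; the model, parameters, and operators are illustrative, not the prestack waveform physics):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forward model: an arrival at time v with amplitude r.
t = np.linspace(0.0, 1.0, 100)
def forward(m):
    v, r = m
    return r * np.exp(-((t - v) ** 2) / 0.01)

m_true = np.array([0.42, 0.8])
obs = forward(m_true)
def misfit(m):
    return np.sum((forward(m) - obs) ** 2)

# Real-coded GA: tournament selection, arithmetic crossover, Gaussian mutation.
pop = np.column_stack([rng.uniform(0.0, 1.0, 60),    # candidate arrival times
                       rng.uniform(0.0, 2.0, 60)])   # candidate amplitudes
for _ in range(80):
    fit = np.array([misfit(m) for m in pop])
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])  # tournaments
    alpha = rng.uniform(size=(len(pop), 1))
    children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
    children += rng.normal(0.0, 0.01, children.shape)               # mutation
    children[0] = pop[np.argmin(fit)]                               # elitism
    pop = children

best = pop[np.argmin([misfit(m) for m in pop])]
```

    In the Bayesian casting described above, the fitness values across the sampled population would additionally be histogrammed to approximate marginal PPD functions, rather than only reporting the single best model.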

  11. RNA inverse folding using Monte Carlo tree search.

    PubMed

    Yang, Xiufeng; Yoshizoe, Kazuki; Taneda, Akito; Tsuda, Koji

    2017-11-06

    Artificially synthesized RNA molecules provide important ways of creating a variety of novel functional molecules. State-of-the-art RNA inverse folding algorithms can design simple and short RNA sequences of specified GC content that fold into the target RNA structure. However, their performance is not satisfactory in complicated cases. We present a new inverse folding algorithm called MCTS-RNA, which uses Monte Carlo tree search (MCTS), a technique that has recently shown exceptional performance in computer Go, to represent and discover the essential part of the sequence space. To obtain high accuracy, initial sequences generated by MCTS are further improved by a series of local updates. Our algorithm can control the GC content precisely and can deal with pseudoknot structures. Using common benchmark datasets for evaluation, MCTS-RNA showed a lot of promise as a standard method of RNA inverse folding. MCTS-RNA is available at https://github.com/tsudalab/MCTS-RNA.

  12. A sequential coalescent algorithm for chromosomal inversions

    PubMed Central

    Peischl, S; Koch, E; Guerrero, R F; Kirkpatrick, M

    2013-01-01

    Chromosomal inversions are common in natural populations and are believed to be involved in many important evolutionary phenomena, including speciation, the evolution of sex chromosomes and local adaptation. While recent advances in sequencing and genotyping methods are leading to rapidly increasing amounts of genome-wide sequence data that reveal interesting patterns of genetic variation within inverted regions, efficient simulation methods to study these patterns are largely missing. In this work, we extend the sequential Markovian coalescent, an approximation to the coalescent with recombination, to include the effects of polymorphic inversions on patterns of recombination. Results show that our algorithm is fast, memory-efficient and accurate, making it feasible to simulate large inversions in large populations for the first time. The SMC algorithm enables studies of patterns of genetic variation (for example, linkage disequilibria) and tests of hypotheses (using simulation-based approaches) that were previously intractable. PMID:23632894

  13. A Scalable O(N) Algorithm for Large-Scale Parallel First-Principles Molecular Dynamics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osei-Kuffuor, Daniel; Fattebert, Jean-Luc

    2014-01-01

    Traditional algorithms for first-principles molecular dynamics (FPMD) simulations only gain a modest capability increase from current petascale computers, due to their O(N^3) complexity and their heavy use of global communications. To address this issue, we are developing a truly scalable O(N) complexity FPMD algorithm, based on density functional theory (DFT), which avoids global communications. The computational model uses a general nonorthogonal orbital formulation for the DFT energy functional, which requires knowledge of selected elements of the inverse of the associated overlap matrix. We present a scalable algorithm for approximately computing selected entries of the inverse of the overlap matrix, based on an approximate inverse technique, by inverting local blocks corresponding to principal submatrices of the global overlap matrix. The new FPMD algorithm exploits sparsity and uses nearest neighbor communication to provide a computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic orbitals are confined, and a cutoff beyond which the entries of the overlap matrix can be omitted when computing selected entries of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to O(100K) atoms on O(100K) processors, with a wall-clock time of O(1) minute per molecular dynamics time step.
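    The idea of recovering selected entries of an inverse from local principal submatrices relies on the decay of the inverse away from the diagonal. A serial numpy sketch (the exponentially decaying test matrix stands in for a localized-orbital overlap matrix, and the window size is an illustrative tuning knob):

```python
import numpy as np

def selected_inverse_local(S, half_width):
    """Approximate the diagonal of S^{-1} by inverting, for each index i, the
    principal submatrix of S on a window of indices around i. Each window only
    needs data that would be local to one processor in the parallel scheme."""
    n = S.shape[0]
    diag = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        block_inv = np.linalg.inv(S[lo:hi, lo:hi])   # local block inversion
        diag[i] = block_inv[i - lo, i - lo]
    return diag

# A "well-localized" overlap matrix: entries decay exponentially with distance.
n = 80
idx = np.arange(n)
S = np.exp(-0.8 * np.abs(idx[:, None] - idx[None, :]))
approx = selected_inverse_local(S, half_width=8)
exact = np.diag(np.linalg.inv(S))
err = np.max(np.abs(approx - exact) / np.abs(exact))
```

    Because the inverse of this strongly decaying matrix is itself concentrated near the diagonal, the local windows reproduce the exact diagonal almost perfectly; in the full algorithm the same locality is what removes the need for global communication.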

  14. Clouds and the Earth's Radiant Energy System (CERES) Algorithm Theoretical Basis Document. Volume 3; Cloud Analyses and Determination of Improved Top of Atmosphere Fluxes (Subsystem 4)

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 3 details the advanced CERES methods for performing scene identification and inverting each CERES scanner radiance to a top-of-the-atmosphere (TOA) flux. CERES determines cloud fraction, height, phase, effective particle size, layering, and thickness from high-resolution, multispectral imager data. CERES derives cloud properties for each pixel of the Tropical Rainfall Measuring Mission (TRMM) visible and infrared scanner and the Earth Observing System (EOS) moderate-resolution imaging spectroradiometer. Cloud properties for each imager pixel are convolved with the CERES footprint point spread function to produce average cloud properties for each CERES scanner radiance. The mean cloud properties are used to determine an angular distribution model (ADM) to convert each CERES radiance to a TOA flux. The TOA fluxes are used in a simple parameterization to derive surface radiative fluxes. This state-of-the-art cloud-radiation product will be used to substantially improve our understanding of the complex relationship between clouds and the radiation budget of the Earth-atmosphere system.

  15. In vivo ultrasound imaging of the bone cortex

    NASA Astrophysics Data System (ADS)

    Renaud, Guillaume; Kruizinga, Pieter; Cassereau, Didier; Laugier, Pascal

    2018-06-01

    Current clinical ultrasound scanners cannot be used to image the interior morphology of bones because these scanners fail to address the complicated physics involved in exact image reconstruction. Here, we show that if the physics is properly addressed, the bone cortex can be imaged using a conventional transducer array and a programmable ultrasound scanner. We provide in vivo proof for this technique by scanning the radius and tibia of two healthy volunteers and comparing the thickness of the radius bone with high-resolution peripheral x-ray computed tomography. Our method assumes a medium composed of different homogeneous layers with unique elastic anisotropy and ultrasonic wave-speed values. The applicable values of these layers are found by optimizing image sharpness and intensity over a range of relevant values. In the image reconstruction algorithm we take wave refraction between the layers into account using a ray-tracing technique. The estimated values of the ultrasonic wave-speed and anisotropy in cortical bone are in agreement with ex vivo studies reported in the literature. These parameters are of interest since they have been proposed as biomarkers for cortical bone quality. In this paper we discuss the physics involved in ultrasound imaging of bone and provide an algorithm to successfully image the first segment of cortical bone.

  16. Handheld laser scanner automatic registration based on random coding

    NASA Astrophysics Data System (ADS)

    He, Lei; Yu, Chun-ping; Wang, Li

    2011-06-01

    Current research on laser scanners focuses mainly on static measurement; little use has been made of dynamic measurement, which is appropriate for a wider range of problems and situations. In particular, a traditional laser scanner must be kept stable while scanning and while measuring the coordinate transformation parameters between stations. To make scanning measurement intelligent and rapid, in this paper we develop a new registration algorithm for a handheld laser scanner based on the positions of targets, which realizes dynamic measurement with a handheld laser scanner without additional complex work. The double camera on the laser scanner photographs the artificial target points, which are designed with random coding, to obtain their three-dimensional coordinates. A set of matched points is then found among the control points, and the scanner is oriented by a least-squares common-points transformation. After that, the double camera can directly measure the laser point cloud on the surface of the object and obtain point cloud data in a unified coordinate system. The paper makes three major contributions. First, a laser scanner based on binocular vision is designed with a double camera and one laser head; with these, real-time orientation of the laser scanner is realized and efficiency is improved. Second, coded markers are introduced to solve the data matching problem, and a random coding method is proposed; compared with other coding methods, markers coded in this way are simple to match and avoid shading the object. Finally, a recognition method for the coded markers is proposed; using distance recognition, it is more efficient. The method presented here can be used widely in measurements of objects from small to huge, such as vehicles and airplanes, strengthening intelligence and efficiency. Theoretical analysis and experiments demonstrate that the proposed method realizes dynamic measurement with a handheld laser scanner and is reasonable and efficient.
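    The least-squares common-points transformation used for orientation has a closed-form SVD solution (the Kabsch/Horn construction). A sketch assuming noiseless matched coded-target coordinates; all data below are synthetic:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid motion (R, t) aligning point set P onto Q,
    via the SVD (Kabsch) solution of the common-points problem."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

rng = np.random.default_rng(5)
P = rng.uniform(-1.0, 1.0, (6, 3))             # coded targets, scanner frame
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.2, -0.1, 0.5])
Q = P @ R_true.T + t_true                      # same targets, global frame
R, t = rigid_transform(P, Q)
```

    With three or more non-collinear matched targets the solution is unique, which is why the random coding only has to guarantee unambiguous point correspondences.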

  17. Neural network explanation using inversion.

    PubMed

    Saad, Emad W; Wunsch, Donald C

    2007-01-01

    An important drawback of many artificial neural networks (ANN) is their lack of explanation capability [Andrews, R., Diederich, J., & Tickle, A. B. (1996). A survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8, 373-389]. This paper starts with a survey of algorithms which attempt to explain the ANN output. We then present HYPINV, a new explanation algorithm which relies on network inversion, i.e. calculating the ANN input which produces a desired output. HYPINV is a pedagogical algorithm that extracts rules in the form of hyperplanes. It is able to generate rules with arbitrarily desired fidelity, maintaining a fidelity-complexity tradeoff. To our knowledge, HYPINV is the only pedagogical rule extraction method which extracts hyperplane rules from continuous or binary attribute neural networks. Different network inversion techniques, involving gradient descent as well as an evolutionary algorithm, are presented. An information theoretic treatment of rule extraction is presented. HYPINV is applied to example synthetic problems and to a real aerospace problem, and compared with similar algorithms using benchmark problems.
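    Network inversion, the step HYPINV builds on, means holding the trained weights fixed and gradient-descending on the input until the network emits a desired output. A self-contained sketch with a tiny randomly weighted network standing in for a trained one (the architecture and learning rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# A small fixed network: 2 inputs -> 8 hidden (tanh) -> 1 sigmoid output.
W1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((1, 8)), rng.standard_normal(1)

def net(x):
    h = np.tanh(W1 @ x + b1)
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))

# Pick a target output that is attainable by construction.
target = net(np.array([1.0, -0.5]))[0]

# Inversion: gradient descent on the *input*; the weights stay fixed.
x = np.zeros(2)
lr = 0.1
for _ in range(3000):
    h = np.tanh(W1 @ x + b1)
    y = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))
    dy = (y - target) * y * (1.0 - y)   # squared loss through sigmoid derivative
    dh = (W2.T @ dy) * (1.0 - h ** 2)   # back through the tanh layer
    x -= lr * (W1.T @ dh)               # step the input, not the weights

err = abs(net(x)[0] - target)
```

    Repeating this from many starting points traces out the decision surface; HYPINV fits hyperplane rules to such inverted points, and the evolutionary variant mentioned above replaces the gradient loop with a population search.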

  18. Inverse algorithms for 2D shallow water equations in presence of wet dry fronts: Application to flood plain dynamics

    NASA Astrophysics Data System (ADS)

    Monnier, J.; Couderc, F.; Dartus, D.; Larnier, K.; Madec, R.; Vila, J.-P.

    2016-11-01

    The 2D shallow water equations adequately model some geophysical flows with wet-dry fronts (e.g. flood plain or tidal flows); nevertheless, deriving accurate, robust and conservative numerical schemes for dynamic wet-dry fronts over complex topographies remains a challenge. Furthermore, for these flows, data are generally complex, multi-scale and uncertain. Robust variational inverse algorithms, providing sensitivity maps and data assimilation processes, may contribute to a breakthrough in modelling shallow wet-dry front dynamics. The present study aims at deriving an accurate, positive and stable finite volume scheme in the presence of dynamic wet-dry fronts, together with corresponding inverse computational algorithms (variational approach). The schemes and algorithms are assessed on classical and original benchmarks plus a real flood plain test case (Lèze river, France). Original sensitivity maps with respect to the (friction, topography) pair are computed and discussed. The identification of inflow discharges (time series) and friction coefficients (spatially distributed parameters) demonstrates the algorithms' efficiency.

  19. Flatbed scanners as a source of imaging. Brightness assessment and additives determination in a nickel electroplating bath.

    PubMed

    Vidal, M; Amigo, J M; Bro, R; Ostra, M; Ubide, C; Zuriarrain, J

    2011-05-23

    Desktop flatbed scanners are well-known devices that provide digitized information from flat surfaces. They are present in most laboratories as part of the computer support. Several quality levels can be found on the market, but all of them can be considered tools with high performance and low cost. The present paper shows how the information obtained with a scanner from a flat surface can be used, with good results, for exploratory and quantitative purposes through image analysis. It provides cheap analytical measurements for the assessment of quality parameters of coated metallic surfaces and for monitoring the lives of electrochemical coating baths. The samples used were steel sheets nickel-plated in an electrodeposition bath. The quality of the final deposit depends on the bath conditions and, especially, on the concentration of the additives in the bath. Some additives degrade over the bath life, and so does the quality of the plate finish. Analysis of the scanner images can be used to follow the evolution of the metal deposit and the concentration of additives in the bath. Principal component analysis (PCA) is applied to find significant differences in the coating of sheets, to find directions of maximum variability, and to identify odd samples. The results are favorably compared with those obtained by means of specular reflectance (SR), which is used here as a reference technique. The concentrations of the additives SPB and SA-1 over a nickel bath life can also be followed using image data handled with algorithms such as partial least squares (PLS) regression and support vector regression (SVR). The quantitative results obtained with these and other algorithms are compared. All this opens new qualitative and quantitative possibilities for flatbed scanners.

  20. Recovering Long-wavelength Velocity Models using Spectrogram Inversion with Single- and Multi-frequency Components

    NASA Astrophysics Data System (ADS)

    Ha, J.; Chung, W.; Shin, S.

    2015-12-01

    Many waveform inversion algorithms have been proposed for constructing subsurface velocity structures from seismic data sets. These algorithms have suffered from computational burden, local minima problems, and the lack of low-frequency components. Computational efficiency can be improved by the application of back-propagation techniques and by advances in computing hardware. In addition, waveform inversion algorithms for obtaining long-wavelength velocity models can avoid both the local minima problem and the effect of the lack of low-frequency components in seismic data. In this study, we proposed spectrogram inversion as a technique for recovering long-wavelength velocity models. In spectrogram inversion, frequency components decomposed from the spectrograms of traces, in the observed and calculated data, are utilized to generate traces with reproduced low-frequency components. Moreover, since each decomposed component can reveal different characteristics of a subsurface structure, several frequency components were utilized to analyze the velocity features of the subsurface. We performed the spectrogram inversion using a modified SEG/EAGE salt A-A' line. Numerical results demonstrate that spectrogram inversion can recover long-wavelength velocity features. However, inversion results varied according to the frequency components utilized. Based on the results of inversion using a decomposed single-frequency component, we noticed that robust inversion results are obtained when a dominant frequency component of the spectrogram is utilized. In addition, detailed information on the recovered long-wavelength velocity models was obtained using a multi-frequency component combined with single-frequency components. Numerical examples indicate that various detailed analyses of long-wavelength velocity models can be carried out utilizing several frequency components.
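    The decomposition that drives the method, splitting each trace's spectrogram into single-frequency components, is a short-time Fourier transform read row by row. A toy sketch (the trace, window length, and frequencies are illustrative, not seismic field data):

```python
import numpy as np

def spectrogram(trace, dt, win_len, step):
    """Short-time Fourier magnitudes of a trace: each column is the spectrum
    of one Hann-windowed segment; each row is one frequency component."""
    win = np.hanning(win_len)
    starts = range(0, len(trace) - win_len + 1, step)
    frames = np.array([trace[s:s + win_len] * win for s in starts])
    spec = np.abs(np.fft.rfft(frames, axis=1)).T     # (freq, time) layout
    freqs = np.fft.rfftfreq(win_len, dt)
    return spec, freqs

# A toy trace: a low-frequency arrival followed by a higher-frequency one.
dt = 0.004                                # 4 ms sampling
t = np.arange(0.0, 2.0, dt)
trace = np.sin(2*np.pi*5*t) * np.exp(-((t - 0.5) / 0.1) ** 2) \
      + np.sin(2*np.pi*30*t) * np.exp(-((t - 1.2) / 0.1) ** 2)
spec, freqs = spectrogram(trace, dt, win_len=64, step=8)

# Single-frequency components: rows nearest 5 Hz and 30 Hz, and the times
# (window centers) at which each component peaks.
row5 = spec[np.argmin(np.abs(freqs - 5.0))]
row30 = spec[np.argmin(np.abs(freqs - 30.0))]
peak5 = (np.argmax(row5) * 8 + 32) * dt
peak30 = (np.argmax(row30) * 8 + 32) * dt
```

    Each row isolates where in time a given frequency carries energy, which is the raw material the inversion uses to rebuild traces with reproduced low-frequency content.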

  1. Parallel three-dimensional magnetotelluric inversion using adaptive finite-element method. Part I: theory and synthetic study

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.

    2015-07-01

    This paper presents a distributed magnetotelluric inversion scheme based on an adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, meshes for the forward and inverse problems were decoupled. For the calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gains, EM fields for each frequency were calculated using independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems, based on the linearized model resolution matrix, was developed. To make this algorithm suitable for large-scale problems, we proposed using a low-rank approximation of the linearized model resolution matrix. In order to fill the gap between initial and true model complexities and to better resolve emerging 3-D structures, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighbourhoods of the points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool for deriving initial meshes that account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit a dependency on the initial model guess. Additionally, it is demonstrated that adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemispherical object with sufficient resolution, starting from a coarse discretization and refining the mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.

  2. Cluster analysis based on dimensional information with applications to feature selection and classification

    NASA Technical Reports Server (NTRS)

    Eigen, D. J.; Fromm, F. R.; Northouse, R. A.

    1974-01-01

    A new clustering algorithm is presented that is based on dimensional information. The algorithm includes an inherent feature selection criterion, which is discussed. Further, a heuristic method for choosing the proper number of intervals for a frequency distribution histogram, a feature necessary for the algorithm, is presented. The algorithm, although usable as a stand-alone clustering technique, is then utilized as a global approximator. Local clustering techniques and configuration of a global-local scheme are discussed, and finally the complete global-local and feature selector configuration is shown in application to a real-time adaptive classification scheme for the analysis of remote sensed multispectral scanner data.

  3. An adaptive inverse kinematics algorithm for robot manipulators

    NASA Technical Reports Server (NTRS)

    Colbaugh, R.; Glass, K.; Seraji, H.

    1990-01-01

    An adaptive algorithm for solving the inverse kinematics problem for robot manipulators is presented. The algorithm is derived using model reference adaptive control (MRAC) theory and is computationally efficient for online applications. The scheme requires no a priori knowledge of the kinematics of the robot if Cartesian end-effector sensing is available, and it requires knowledge of only the forward kinematics if joint position sensing is used. Computer simulation results are given for the redundant seven-DOF robotics research arm, demonstrating that the proposed algorithm yields accurate joint angle trajectories for a given end-effector position/orientation trajectory.
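    For contrast with the model-free MRAC scheme described above, the classical model-based alternative it competes with is a Jacobian iteration. A damped least-squares sketch for a planar two-link arm follows; the link lengths, damping, and target are illustrative, and this is not the paper's adaptive algorithm:

```python
import numpy as np

# Planar two-link arm: forward kinematics of the end-effector position.
l1, l2 = 1.0, 0.8
def fk(q):
    return np.array([l1*np.cos(q[0]) + l2*np.cos(q[0] + q[1]),
                     l1*np.sin(q[0]) + l2*np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1*s1 - l2*s12, -l2*s12],
                     [ l1*c1 + l2*c12,  l2*c12]])

def ik(target, q0, lam=0.1, n_iter=200):
    """Damped least-squares inverse kinematics: iterate joint updates that
    reduce the Cartesian error; damping keeps steps finite near singularities."""
    q = np.array(q0, dtype=float)
    for _ in range(n_iter):
        e = target - fk(q)                      # Cartesian position error
        J = jacobian(q)
        q += np.linalg.solve(J.T @ J + lam**2 * np.eye(2), J.T @ e)
    return q

target = np.array([1.2, 0.6])
q = ik(target, q0=[0.3, 0.3])
```

    The point of the adaptive scheme in the record is precisely that it avoids needing `jacobian` above: with end-effector sensing, no kinematic model has to be written down at all.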

  4. SMI adaptive antenna arrays for weak interfering signals. [Sample Matrix Inversion

    NASA Technical Reports Server (NTRS)

    Gupta, Inder J.

    1986-01-01

    The performance of adaptive antenna arrays in the presence of weak interfering signals (below thermal noise) is studied. It is shown that a conventional adaptive antenna array sample matrix inversion (SMI) algorithm is unable to suppress such interfering signals. To overcome this problem, the SMI algorithm is modified. In the modified algorithm, the covariance matrix is redefined such that the effect of thermal noise on the weights of adaptive arrays is reduced. Thus, the weights are dictated by relatively weak signals. It is shown that the modified algorithm provides the desired interference protection.
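    The modification described, redefining the covariance so thermal noise no longer dominates the weights, can be imitated in a few lines: estimate the noise floor from the smallest eigenvalue of the sample covariance and subtract most of it before inverting. A toy narrowband simulation (array size, directions, and the 0.95 shrinkage factor are illustrative choices, not the paper's exact redefinition):

```python
import numpy as np

rng = np.random.default_rng(4)
n_el, n_snap = 8, 2000
# Uniform linear array, half-wavelength spacing; argument is sin(angle).
steer = lambda s: np.exp(1j * np.pi * np.arange(n_el) * s)

s_d = steer(0.0)     # desired-signal direction (broadside)
s_i = steer(0.16)    # weak interferer inside the first sidelobes

# Complex Gaussian snapshots: interferer 10 dB *below* the thermal noise.
cn = lambda *shape: (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2.0)
X = np.sqrt(0.1) * np.outer(s_i, cn(n_snap)) + cn(n_el, n_snap)

R = X @ X.conj().T / n_snap                    # sample covariance (SMI)
w_smi = np.linalg.solve(R, s_d)                # conventional SMI weights

# Modified SMI: remove most of the noise floor so the weights are dictated
# by the weak interference rather than by thermal noise.
sigma2 = np.min(np.linalg.eigvalsh(R))         # noise-floor estimate
w_mod = np.linalg.solve(R - 0.95 * sigma2 * np.eye(n_el), s_d)

def rel_gain_db(w):
    # Interferer gain relative to the desired-signal gain, in dB.
    return 20.0 * np.log10(abs(w.conj() @ s_i) / abs(w.conj() @ s_d))
```

    Because the interference eigenvalue sits barely above the noise eigenvalues, plain SMI forms only a shallow null; after the subtraction the interference subspace dominates the inverse and the null deepens by an order of magnitude in amplitude.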

  5. Fast inversion of gravity data using the symmetric successive over-relaxation (SSOR) preconditioned conjugate gradient algorithm

    NASA Astrophysics Data System (ADS)

    Meng, Zhaohai; Li, Fengting; Xu, Xuechun; Huang, Danian; Zhang, Dailei

    2017-02-01

    The subsurface three-dimensional (3D) model of density distribution is obtained by solving an under-determined linear equation established from gravity data. Here, we describe a new fast gravity inversion method to recover a 3D density model from gravity data. The subsurface is divided into a large number of rectangular blocks, each with an unknown constant density. The gravity inversion method introduces a stabilizing model norm with a depth weighting function to produce smooth models. The depth weighting function is combined with the model norm to counteract the skin effect of the gravity potential field. Because the number of density model parameters is NZ (the number of layers in the vertical subsurface domain) times the number of observed gravity data, the inverse problem is strongly under-determined. Solving the full set of gravity inversion equations is very time-consuming, and applying a new algorithm to the gravity inversion can significantly reduce the number of iterations and the computational time. In this paper, a new symmetric successive over-relaxation (SSOR) iterative conjugate gradient (CG) method is shown to be an appropriate algorithm for solving this Tikhonov cost function (the gravity inversion equation). The new, faster method is applied to Gaussian noise-contaminated synthetic data to demonstrate its suitability for 3D gravity inversion. To demonstrate the performance of the new algorithm on actual gravity data, we provide a case study that includes ground-based measurements of residual Bouguer gravity anomalies over the Humble salt dome near Houston, Gulf Coast Basin. A 3D distribution of salt rock concentration is used to evaluate the inversion results recovered by the new SSOR iterative method. In the test model, the density values in the constructed model coincide with the known location and depth of the salt dome.
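    An SSOR-preconditioned CG iteration of the kind described can be sketched compactly: the preconditioner is assembled from the diagonal and strictly lower triangle of the system matrix and applied with one forward and one backward solve per iteration. The test matrix below is a generic SPD stand-in, not the gravity normal equations:

```python
import numpy as np

def ssor_pcg(A, b, omega=1.2, tol=1e-10, max_iter=500):
    """Conjugate gradients on SPD A x = b with the SSOR preconditioner
    M = (omega/(2-omega)) (D/omega + L) D^{-1} (D/omega + L)^T, where
    A = L + D + L^T. M^{-1} is applied by a forward and a backward solve
    (np.linalg.solve used for brevity; triangular solvers in practice)."""
    d = np.diag(A)
    lower = np.tril(A, -1) + np.diag(d) / omega   # D/omega + L
    scale = (2.0 - omega) / omega
    def apply_Minv(r):
        y = np.linalg.solve(lower, r)                     # forward substitution
        return np.linalg.solve(lower.T, scale * d * y)    # backward substitution
    x = np.zeros_like(b)
    r = b.copy()
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = apply_Minv(r)
        rz, rz_old = r @ z, rz
        p = z + (rz / rz_old) * p
    return x, it

# 1-D Laplacian as a simple SPD test system.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = ssor_pcg(A, b)
residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

    The preconditioner costs only two triangular sweeps per iteration, which is what makes the SSOR-CG combination attractive when the system, as in the gravity problem, is far too large for direct factorization.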

  6. Coastal Zone Color Scanner atmospheric correction - Influence of El Chichon

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Castano, Diego J.

    1988-01-01

    The addition of an El Chichon-like aerosol layer in the stratosphere is shown to have very little effect on the basic CZCS atmospheric correction algorithm. The additional stratospheric aerosol is found to increase the total radiance exiting the atmosphere, thereby increasing the probability that the sensor will saturate. It is suggested that in the absence of saturation the correction algorithm should perform as well as in the absence of the stratospheric layer.

  7. Thermographic Imaging of Defects in Anisotropic Composites

    NASA Technical Reports Server (NTRS)

    Plotnikov, Y. A.; Winfree, W. P.

    2000-01-01

Composite materials are of increasing interest to the aerospace industry as a result of their weight versus performance characteristics. One of the disadvantages of composites is the high cost of fabrication and post-fabrication inspection with conventional ultrasonic scanning systems. The high cost of inspection is driven by the need for scanning systems that can follow large curved surfaces. Additionally, either large water tanks or water squirters are required to couple the ultrasound into the part. Thermographic techniques offer significant advantages over conventional ultrasonics by not requiring physical coupling between the part and the sensor. A thermographic system can easily inspect large curved surfaces without requiring a surface-following scanner. However, implementation of Thermal Nondestructive Evaluation (TNDE) for flaw detection in composite materials and structures requires determining its limits. Advanced algorithms have been developed to enable locating and sizing defects in carbon fiber reinforced plastic (CFRP). Thermal tomography is a very promising method for visualizing the size and location of defects in materials such as CFRP. However, further investigations are required to determine its capabilities for inspection of thick composites. In the present work, we have studied the influence of anisotropy on the reconstructed image of a defect generated by an inversion technique. The composite material is treated as homogeneous with macroscopic properties: thermal conductivity K, specific heat c, and density rho. The simulation process involves two sequential steps: solving the three-dimensional transient heat diffusion equation for a sample with a defect, then estimating the defect location and size from the surface spatial and temporal thermal distributions (the inverse problem) calculated from the simulations.
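The forward step of such a simulation (here reduced from 3D to 2D for brevity) can be sketched with an explicit finite-difference scheme for anisotropic heat diffusion; the material constants, grid, and periodic boundaries are illustrative assumptions, not the paper's CFRP values or its solver.

```python
import numpy as np

# Illustrative material constants (not the paper's CFRP values)
kx, ky = 4.0, 0.8        # anisotropic thermal conductivities, W/(m K)
rho_m, c = 1600.0, 900.0 # density (kg/m^3) and specific heat (J/(kg K))
dx = dy = 1e-3           # grid spacing, m
alpha_x, alpha_y = kx / (rho_m * c), ky / (rho_m * c)

# Explicit (FTCS) time step, safely inside the 2D stability limit
dt = 0.4 / (alpha_x / dx**2 + alpha_y / dy**2)

# Initial condition: a hot spot on an otherwise uniform plate (a crude flash)
T = np.zeros((64, 64))
T[28:36, 28:36] = 100.0
total0 = T.sum()

for _ in range(200):
    # periodic boundaries keep the sketch short; the paper solves the full 3D problem
    d2x = np.roll(T, 1, axis=1) - 2 * T + np.roll(T, -1, axis=1)
    d2y = np.roll(T, 1, axis=0) - 2 * T + np.roll(T, -1, axis=0)
    T = T + dt * (alpha_x * d2x / dx**2 + alpha_y * d2y / dy**2)
```

Because kx > ky, the hot spot spreads faster along x than along y; it is exactly this anisotropic smearing that the inversion step must account for when sizing a defect from surface temperature maps.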

  8. Image guided constitutive modeling of the silicone brain phantom

    NASA Astrophysics Data System (ADS)

    Puzrin, Alexander; Skrinjar, Oskar; Ozan, Cem; Kim, Sihyun; Mukundan, Srinivasan

    2005-04-01

The goal of this work is to develop reliable constitutive models of the mechanical behavior of in-vivo human brain tissue for applications in neurosurgery. We propose to define the mechanical properties of brain tissue in-vivo by taking global MR or CT images of the brain's response to ventriculostomy - the relief of elevated intracranial pressure. 3D image analysis translates these images into displacement fields, which, via inverse analysis, allow constitutive models of the brain tissue to be developed. We term this approach Image Guided Constitutive Modeling (IGCM). The present paper demonstrates the performance of the IGCM in a controlled environment: on silicone brain phantoms closely simulating the in-vivo brain geometry, mechanical properties, and boundary conditions. A phantom of the left hemisphere of the human brain was cast using silicone gel. An inflatable rubber membrane was placed inside the phantom to model the lateral ventricle. The experiments were carried out in a specially designed setup in a CT scanner with submillimeter isotropic voxels. Non-communicating hydrocephalus and ventriculostomy were simulated by successively inflating and deflating the internal rubber membrane. The obtained images were analyzed to derive displacement fields, meshed, and incorporated into ABAQUS. The subsequent inverse finite element analysis (based on the Levenberg-Marquardt algorithm) allowed optimization of the parameters of the Mooney-Rivlin non-linear elastic model for the phantom material. The calculated mechanical properties were consistent with those obtained from the element tests, providing justification for the future application of the IGCM to in-vivo brain tissue.
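The Levenberg-Marquardt parameter fit can be illustrated on synthetic uniaxial data using the standard incompressible Mooney-Rivlin stress-stretch relation; this toy one-equation forward model and all parameter values are assumptions standing in for the paper's full inverse finite-element analysis.

```python
import numpy as np
from scipy.optimize import least_squares

# Standard incompressible Mooney-Rivlin Cauchy stress in uniaxial tension;
# a one-equation stand-in for the paper's full inverse finite-element analysis.
def mr_stress(params, lam):
    c1, c2 = params
    return 2.0 * (lam**2 - 1.0 / lam) * (c1 + c2 / lam)

# Synthetic "measured" stress-stretch data from assumed ground-truth parameters
c_true = np.array([1.2, 0.4])                    # kPa, illustrative values
lam = np.linspace(1.05, 1.5, 25)                 # applied stretches
rng = np.random.default_rng(1)
sigma_obs = mr_stress(c_true, lam) + rng.normal(0.0, 1e-3, lam.size)

# Levenberg-Marquardt optimisation of the material parameters
fit = least_squares(lambda c: mr_stress(c, lam) - sigma_obs,
                    x0=[1.0, 1.0], method='lm')
c_est = fit.x
```

In the paper the residuals are image-derived displacement fields from an ABAQUS model rather than a closed-form stress curve, but the optimisation loop has the same shape.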

  9. A unified Fourier theory for time-of-flight PET data

    PubMed Central

    Li, Yusheng; Matej, Samuel; Metzler, Scott D

    2016-01-01

    Fully 3D time-of-flight (TOF) PET scanners offer the potential of previously unachievable image quality in clinical PET imaging. TOF measurements add another degree of redundancy for cylindrical PET scanners and make photon-limited TOF-PET imaging more robust than non-TOF PET imaging. The data space for 3D TOF-PET data is five-dimensional with two degrees of redundancy. Previously, consistency equations were used to characterize the redundancy of TOF-PET data. In this paper, we first derive two Fourier consistency equations and Fourier-John equation for 3D TOF PET based on the generalized projection-slice theorem; the three partial differential equations (PDEs) are the dual of the sinogram consistency equations and John's equation. We then solve the three PDEs using the method of characteristics. The two degrees of entangled redundancy of the TOF-PET data can be explicitly elicited and exploited by the solutions of the PDEs along the characteristic curves, which gives a complete understanding of the rich structure of the 3D X-ray transform with TOF measurement. Fourier rebinning equations and other mapping equations among different types of PET data are special cases of the general solutions. We also obtain new Fourier rebinning and consistency equations (FORCEs) from other special cases of the general solutions, and thus we obtain a complete scheme to convert among different types of PET data: 3D TOF, 3D non-TOF, 2D TOF and 2D non-TOF data. The new FORCEs can be used as new Fourier-based rebinning algorithms for TOF-PET data reduction, inverse rebinnings for designing fast projectors, or consistency conditions for estimating missing data. Further, we give a geometric interpretation of the general solutions—the two families of characteristic curves can be obtained by respectively changing the azimuthal and co-polar angles of the biorthogonal coordinates in Fourier space. 
We conclude the unified Fourier theory by showing that the Fourier consistency equations are necessary and sufficient for 3D X-ray transform with TOF measurement. Finally, we give numerical examples of inverse rebinning for a 3D TOF PET and Fourier-based rebinning for a 2D TOF PET using the FORCEs to show the efficacy of the unified Fourier solutions. PMID:26689836

  11. Approximation Of Multi-Valued Inverse Functions Using Clustering And Sugeno Fuzzy Inference

    NASA Technical Reports Server (NTRS)

    Walden, Maria A.; Bikdash, Marwan; Homaifar, Abdollah

    1998-01-01

    Finding the inverse of a continuous function can be challenging and computationally expensive when the inverse function is multi-valued. Difficulties may be compounded when the function itself is difficult to evaluate. We show that we can use fuzzy-logic approximators such as Sugeno inference systems to compute the inverse on-line. To do so, a fuzzy clustering algorithm can be used in conjunction with a discriminating function to split the function data into branches for the different values of the forward function. These data sets are then fed into a recursive least-squares learning algorithm that finds the proper coefficients of the Sugeno approximators; each Sugeno approximator finds one value of the inverse function. Discussions about the accuracy of the approximation will be included.
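The branch-splitting idea can be sketched for the toy function y = x^2, using a bare-bones k-means in place of the fuzzy clustering step and a per-branch least-squares fit in place of the Sugeno approximators; all numerical choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy multi-valued inverse: y = x^2 has two inverse branches x = +/- sqrt(y).
# (Samples avoid a band around x = 0 so the branches are well separated.)
x = np.concatenate([rng.uniform(0.3, 2.0, 200), rng.uniform(-2.0, -0.3, 200)])
y = x**2

# Step 1: split the data into branches with a tiny Lloyd's k-means on the
# discriminating feature x (a stand-in for the paper's fuzzy clustering step).
centers = np.array([-1.0, 1.0])
for _ in range(20):
    labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    centers = np.array([x[labels == k].mean() for k in (0, 1)])

# Step 2: per-branch least-squares fit of x as a function of y on the basis
# [1, sqrt(y)] (a simple consequent standing in for a Sugeno approximator).
coeffs = []
for k in (0, 1):
    B = np.column_stack([np.ones((labels == k).sum()), np.sqrt(y[labels == k])])
    coeffs.append(np.linalg.lstsq(B, x[labels == k], rcond=None)[0])

def inverse_branches(y_query):
    """Return one inverse value per branch for a query y."""
    basis = np.array([1.0, np.sqrt(y_query)])
    return sorted(float(c @ basis) for c in coeffs)
```

Each fitted branch plays the role of one single-valued approximator of the inverse; querying all branches returns every preimage of y.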

  12. SWIM: A Semi-Analytical Ocean Color Inversion Algorithm for Optically Shallow Waters

    NASA Technical Reports Server (NTRS)

    McKinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C. S.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Shea, Donald M.; Feldman, Gene C.

    2014-01-01

Ocean color remote sensing provides synoptic-scale, near-daily observations of marine inherent optical properties (IOPs). Whilst contemporary ocean color algorithms are known to perform well in deep oceanic waters, they have difficulty operating in optically clear, shallow marine environments where light reflected from the seafloor contributes to the water-leaving radiance. Benthic reflectance in optically shallow waters is known to adversely affect algorithms developed for optically deep waters [1, 2]. Whilst adapted versions of optically deep ocean color algorithms have been applied to optically shallow regions with reasonable success [3], there is presently no approach that directly corrects for bottom reflectance using existing knowledge of bathymetry and benthic albedo. To address the issue of optically shallow waters, we have developed a semi-analytical ocean color inversion algorithm: the Shallow Water Inversion Model (SWIM). SWIM uses existing bathymetry and a derived benthic albedo map to correct for bottom reflectance using the semi-analytical model of Lee et al. [4]. The algorithm was incorporated into the NASA Ocean Biology Processing Group's L2GEN program and tested in optically shallow waters of the Great Barrier Reef, Australia. In lieu of readily available in situ matchup data, we present a comparison between SWIM and two contemporary ocean color algorithms, the Generalized Inherent Optical Property Algorithm (GIOP) and the Quasi-Analytical Algorithm (QAA).
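A heavily simplified two-flow version of a shallow-water reflectance model (not the exact parameterisation used in SWIM or by Lee et al.) shows why bottom reflectance matters: the water-column term saturates with depth H while the bottom term, attenuated over the round-trip path, decays away. All values below are illustrative assumptions.

```python
import numpy as np

def rrs_shallow(rrs_deep, albedo_b, K, H):
    """Simplified two-flow shallow-water remote-sensing reflectance:
    water-column term rrs_deep*(1 - exp(-2KH)) plus a bottom term
    (albedo_b/pi)*exp(-2KH) attenuated over the round-trip path 2*K*H."""
    att = np.exp(-2.0 * K * H)
    return rrs_deep * (1.0 - att) + (albedo_b / np.pi) * att

# Illustrative values: a bright sand bottom under moderately clear water
rrs_deep, albedo_b, K = 0.004, 0.3, 0.12   # sr^-1, dimensionless, m^-1
shallow = rrs_shallow(rrs_deep, albedo_b, K, H=2.0)    # bottom-affected signal
deep = rrs_shallow(rrs_deep, albedo_b, K, H=60.0)      # effectively optically deep
```

Over a bright bottom at 2 m the reflectance is dominated by the benthic term, which is exactly the contamination an optically deep algorithm misreads as water-column IOPs; given known H and albedo, a model of this form can be inverted to remove it.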

  13. 2D Seismic Imaging of Elastic Parameters by Frequency Domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Brossier, R.; Virieux, J.; Operto, S.

    2008-12-01

Thanks to recent advances in parallel computing, full waveform inversion is today a tractable seismic imaging method for reconstructing physical parameters of the Earth's interior at different scales, ranging from the near-surface to the deep crust. We present a massively parallel 2D frequency-domain full-waveform algorithm for imaging visco-elastic media from multi-component seismic data. The forward problem (i.e. the solution of the frequency-domain 2D P-SV elastodynamic equations) is based on a low-order Discontinuous Galerkin (DG) method (P0 and/or P1 interpolations). Thanks to triangular unstructured meshes, the DG method allows accurate modeling of both body waves and surface waves in the case of complex topography, for a discretization of 10 to 15 cells per shear wavelength. The frequency-domain DG system is solved efficiently for multiple sources with the parallel direct solver MUMPS. The local inversion procedure (i.e. minimization of residuals between observed and computed data) is based on the adjoint-state method, which allows efficient computation of the gradient of the objective function. Applying the inversion hierarchically from the low frequencies to the higher ones defines a multiresolution imaging strategy that helps convergence towards the global minimum. In place of an expensive Newton algorithm, the combined use of the diagonal terms of the approximate Hessian matrix and optimization algorithms based on quasi-Newton methods (Conjugate Gradient, L-BFGS, ...) improves the convergence of the iterative inversion. The distribution of forward-problem solutions over processors, driven by a mesh partitioning performed with METIS, allows most of the inversion to be performed in parallel. We shall present the main features of the parallel modeling/inversion algorithm, assess its scalability, and illustrate its performance with realistic synthetic case studies.
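The adjoint-gradient plus quasi-Newton step of such a workflow can be illustrated on a toy linear forward problem, with SciPy's L-BFGS-B standing in for the paper's optimizers; the operator F below is a random stand-in, not a wave-equation solver, and for a linear operator the adjoint-state gradient reduces to F^T applied to the residuals.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Stand-in linear forward operator ("wavefield modelling") and observed data
n_data, n_model = 80, 30
F = rng.standard_normal((n_data, n_model))
m_true = rng.standard_normal(n_model)
d_obs = F @ m_true

def misfit_and_gradient(m):
    r = F @ m - d_obs                 # data residuals
    J = 0.5 * float(r @ r)            # least-squares objective
    g = F.T @ r                       # adjoint-state gradient (here simply F^T r)
    return J, g

# Quasi-Newton (L-BFGS) minimisation starting from a zero model
res = minimize(misfit_and_gradient, np.zeros(n_model), jac=True, method='L-BFGS-B')
m_est = res.x
```

In real FWI the residual back-propagation replaces the explicit F^T, and the multiresolution loop over frequencies wraps around this optimisation.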

  14. An overview of a highly versatile forward and stable inverse algorithm for airborne, ground-based and borehole electromagnetic and electric data

    NASA Astrophysics Data System (ADS)

    Auken, Esben; Christiansen, Anders Vest; Kirkegaard, Casper; Fiandaca, Gianluca; Schamper, Cyril; Behroozmand, Ahmad Ali; Binley, Andrew; Nielsen, Emil; Effersø, Flemming; Christensen, Niels Bøie; Sørensen, Kurt; Foged, Nikolaj; Vignoli, Giulio

    2015-07-01

    We present an overview of a mature, robust and general algorithm providing a single framework for the inversion of most electromagnetic and electrical data types and instrument geometries. The implementation mainly uses a 1D earth formulation for electromagnetics and magnetic resonance sounding (MRS) responses, while the geoelectric responses are both 1D and 2D and the sheet's response models a 3D conductive sheet in a conductive host with an overburden of varying thickness and resistivity. In all cases, the focus is placed on delivering full system forward modelling across all supported types of data. Our implementation is modular, meaning that the bulk of the algorithm is independent of data type, making it easy to add support for new types. Having implemented forward response routines and file I/O for a given data type provides access to a robust and general inversion engine. This engine includes support for mixed data types, arbitrary model parameter constraints, integration of prior information and calculation of both model parameter sensitivity analysis and depth of investigation. We present a review of our implementation and methodology and show four different examples illustrating the versatility of the algorithm. The first example is a laterally constrained joint inversion (LCI) of surface time domain induced polarisation (TDIP) data and borehole TDIP data. The second example shows a spatially constrained inversion (SCI) of airborne transient electromagnetic (AEM) data. The third example is an inversion and sensitivity analysis of MRS data, where the electrical structure is constrained with AEM data. The fourth example is an inversion of AEM data, where the model is described by a 3D sheet in a layered conductive host.

  15. Improvement of Forest Height Retrieval By Integration of Dual-Baseline PolInSAR Data And External DEM Data

    NASA Astrophysics Data System (ADS)

    Xie, Q.; Wang, C.; Zhu, J.; Fu, H.; Wang, C.

    2015-06-01

In recent years, many studies have shown that polarimetric synthetic aperture radar interferometry (PolInSAR) is a powerful technique for forest height mapping and monitoring. However, few studies address the problem of terrain slope effects, one of the major limitations for forest height inversion in mountainous forest areas. In this paper, we present a novel forest height retrieval algorithm that integrates dual-baseline PolInSAR data and external DEM data. For the first time, we successfully extend the S-RVoG (Sloped Random Volume over Ground) model for forest parameter inversion to the case of a dual-baseline PolInSAR configuration. The proposed method not only corrects the terrain slope variation effect efficiently, but also involves more observations to improve the accuracy of parameter inversion. In order to demonstrate the performance of the inversion algorithm, a set of quad-pol images acquired at P-band in interferometric repeat-pass mode by the German Aerospace Center (DLR) with the Experimental SAR (E-SAR) system, in the frame of the BioSAR 2008 campaign, has been used for the retrieval of forest height over the Krycklan boreal forest in northern Sweden. At the same time, a high-accuracy external DEM of the experimental area has been collected for computing terrain slope information, which is subsequently used as an input parameter in the S-RVoG model. Finally, in-situ ground-truth heights at stand level have been collected to validate the inversion result. The preliminary results show that the proposed inversion algorithm promises to provide much more accurate estimation of forest height than traditional dual-baseline inversion algorithms.

  16. Development and investigation of a magnetic resonance imaging-compatible microlens-based optical detector

    NASA Astrophysics Data System (ADS)

    Paar, Steffen; Umathum, Reiner; Jiang, Xiaoming; Majer, Charles L.; Peter, Jörg

    2015-09-01

A noncontact optical detector for in vivo imaging has been developed that is compatible with magnetic resonance imaging (MRI). The optical detector employs microlens arrays and might be classified as a plenoptic camera. As a result of its design, the detector has a slim profile and is self-shielding against radio frequency (RF) pulses. For experimental investigation, a total of six optical detectors were arranged in a cylindrical fashion, with the imaged object positioned in the center of this assembly. A purpose-designed RF volume resonator coil has been developed and is incorporated within the optical imaging system. The whole assembly was placed into the bore of a 1.5 T patient-sized MRI scanner. Simple-geometry phantom studies were performed to assess compatibility and performance characteristics of both the optical and MR imaging systems. A bimodal ex vivo nude mouse measurement was conducted. From the MRI data, the subject surface was extracted. Optical images were projected onto this surface by means of an inverse mapping algorithm. Simultaneous measurements did not reveal any influence of the magnetic field and RF pulses on optical detector performance (spatial resolution, sensitivity). No significant influence of the optical imaging system on MRI performance was detectable.

  18. Relationship between noise, dose, and pitch in cardiac multi-detector row CT.

    PubMed

    Primak, Andrew N; McCollough, Cynthia H; Bruesewitz, Michael R; Zhang, Jie; Fletcher, Joel G

    2006-01-01

    In spiral computed tomography (CT), dose is always inversely proportional to pitch. However, the relationship between noise and pitch (and hence noise and dose) depends on the scanner type (single vs multi-detector row) and reconstruction mode (cardiac vs noncardiac). In single detector row spiral CT, noise is independent of pitch. Conversely, in noncardiac multi-detector row CT, noise depends on pitch because the spiral interpolation algorithm makes use of redundant data from different detector rows to decrease noise for pitch values less than 1 (and increase noise for pitch values > 1). However, in cardiac spiral CT, redundant data cannot be used because such data averaging would degrade the temporal resolution. Therefore, the behavior of noise versus pitch returns to the single detector row paradigm, with noise being independent of pitch. Consequently, since faster rotation times require lower pitch values in cardiac multi-detector row CT, dose is increased without a commensurate decrease in noise. Thus, the use of faster rotation times will improve temporal resolution, not alter noise, and increase dose. For a particular application, the higher dose resulting from faster rotation speeds should be justified by the clinical benefits of the improved temporal resolution. RSNA, 2006
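The qualitative relationships stated above can be written down as first-order scalings (an assumption of this sketch, not formulas from the article): dose scales as 1/pitch in both modes, while noise scales as sqrt(pitch) only when redundant-row averaging is available.

```python
def ctdi_vol(ctdi_w, pitch):
    """Spiral CT dose is inversely proportional to pitch."""
    return ctdi_w / pitch

def relative_noise(pitch, cardiac):
    """First-order scaling assumed here: noncardiac multi-detector row
    reconstruction averages redundant rows, so noise ~ sqrt(pitch);
    cardiac reconstruction cannot average, so noise is pitch-independent."""
    return 1.0 if cardiac else pitch ** 0.5

# Halving pitch doubles dose in both modes...
dose_ratio = ctdi_vol(10.0, 0.25) / ctdi_vol(10.0, 0.5)
# ...but only the noncardiac mode pays the extra dose back as lower noise.
noncardiac_ratio = relative_noise(0.25, cardiac=False) / relative_noise(0.5, cardiac=False)
cardiac_ratio = relative_noise(0.25, cardiac=True) / relative_noise(0.5, cardiac=True)
```

This is the abstract's point in miniature: lowering pitch for faster cardiac rotation raises dose without the noise benefit seen in noncardiac scanning.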

  19. Recursive inverse factorization.

    PubMed

    Rubensson, Emanuel H; Bock, Nicolas; Holmström, Erik; Niklasson, Anders M N

    2008-03-14

A recursive algorithm for the inverse factorization S^(-1) = ZZ^* of Hermitian positive definite matrices S is proposed. The inverse factorization is based on iterative refinement [A. M. N. Niklasson, Phys. Rev. B 70, 193102 (2004)] combined with a recursive decomposition of S. As the computational kernel is matrix-matrix multiplication, the algorithm can be parallelized, and the computational effort increases linearly with system size for systems with sufficiently sparse matrices. Recent advances in network theory are used to find appropriate recursive decompositions. We show that optimization of the so-called network modularity results in an improved partitioning compared to other approaches, in particular when the recursive inverse factorization is applied to overlap matrices of irregularly structured three-dimensional molecules.
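The iterative-refinement kernel can be sketched in dense NumPy form; the second-order update below is one common variant of the cited refinement scheme, shown here without the sparse recursive decomposition that gives the algorithm its linear scaling.

```python
import numpy as np

def refine_inverse_factor(S, Z0, n_iter=8):
    """Iteratively refine Z so that Z^* S Z -> I, i.e. S^(-1) = Z Z^*.
    Uses the second-order update Z <- Z (I - delta/2) with
    delta = Z^* S Z - I, which converges when ||delta|| < 1 and
    involves only matrix-matrix multiplications."""
    I = np.eye(S.shape[0])
    Z = Z0.copy()
    for _ in range(n_iter):
        delta = Z.conj().T @ S @ Z - I
        Z = Z @ (I - 0.5 * delta)
    return Z

# Hermitian positive definite "overlap" matrix with spectrum inside (0, 2),
# so the trivial starting guess Z0 = I already satisfies ||delta|| < 1.
S = np.array([[1.0, 0.2, 0.0],
              [0.2, 1.0, 0.1],
              [0.0, 0.1, 1.0]])
Z = refine_inverse_factor(S, np.eye(3))
```

Because the error contracts quadratically (the new delta is of order delta squared), a handful of iterations reaches machine precision; the recursive decomposition in the paper supplies a good Z0 for each subproblem so this condition holds.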

  20. CT head-scan dosimetry in an anthropomorphic phantom and associated measurement of ACR accreditation-phantom imaging metrics under clinically representative scan conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunner, Claudia C.; Stern, Stanley H.; Chakrabarti, Kish

    2013-08-15

Purpose: To measure radiation absorbed dose and its distribution in an anthropomorphic head phantom under clinically representative scan conditions in three widely used computed tomography (CT) scanners, and to relate those dose values to metrics such as high-contrast resolution, noise, and contrast-to-noise ratio (CNR) in the American College of Radiology (ACR) CT accreditation phantom. Methods: By inserting optically stimulated luminescence dosimeters (OSLDs) in the head of an anthropomorphic phantom specially developed for CT dosimetry (University of Florida, Gainesville), we measured dose with three commonly used scanners (GE Discovery CT750 HD, Siemens Definition, Philips Brilliance 64) at two different clinical sites (Walter Reed National Military Medical Center, National Institutes of Health). The scanners were set to operate with the same data-acquisition and image-reconstruction protocols as used clinically for typical head scans, respective of the practices of each facility for each scanner. We also analyzed images of the ACR CT accreditation phantom with the corresponding protocols. While the Siemens Definition and the Philips Brilliance protocols utilized only conventional, filtered back-projection (FBP) image-reconstruction methods, the GE Discovery also employed its particular version of an adaptive statistical iterative reconstruction (ASIR) algorithm that can be blended in desired proportions with the FBP algorithm. We did an objective image-metrics analysis evaluating the modulation transfer function (MTF), noise power spectrum (NPS), and CNR for images reconstructed with FBP. 
For images reconstructed with ASIR, we only analyzed the CNR, since MTF and NPS results are expected to depend on the object for iterative reconstruction algorithms. Results: The OSLD measurements showed that the Siemens Definition and the Philips Brilliance scanners (located at two different clinical facilities) yield average absorbed doses in tissue of 42.6 and 43.1 mGy, respectively. The GE Discovery delivers about the same amount of dose (43.7 mGy) when run under similar operating and image-reconstruction conditions, i.e., without tube current modulation and ASIR. The image-metrics analysis likewise showed that the MTF, NPS, and CNR associated with the reconstructed images are mutually comparable when the three scanners are run with similar settings, and differences can be attributed to different edge-enhancement properties of the applied reconstruction filters. Moreover, when the GE scanner was operated with the facility's scanner settings for routine head exams, which apply 50% ASIR and use only approximately half of the 100%-FBP dose, the CNR of the images showed no significant change. Even though the CNR alone is not sufficient to characterize the image quality and justify any dose reduction claims, it can be useful as a constancy test metric. Conclusions: This work presents a straightforward method to connect direct measurements of CT dose with objective image metrics such as high-contrast resolution, noise, and CNR. It demonstrates that OSLD measurements in an anthropomorphic head phantom allow a realistic and locally precise estimation of the magnitude and spatial distribution of dose in tissue delivered during a typical CT head scan. Additional objective analysis of the images of the ACR accreditation phantom can be used to relate the measured doses to high-contrast resolution, noise, and CNR.

  1. ANNIT - An Efficient Inversion Algorithm based on Prediction Principles

    NASA Astrophysics Data System (ADS)

    Růžek, B.; Kolář, P.

    2009-04-01

The solution of inverse problems is an important task in geophysics. The amount of data is continuously increasing, methods of modeling are being improved, and computing facilities continue to advance. The development of new and efficient algorithms and computer codes for both forward and inverse modeling therefore remains relevant. ANNIT contributes to this effort as a tool for the efficient solution of sets of non-linear equations. Typical geophysical problems are based on a parametric approach. The system is characterized by a vector of parameters p; the response of the system is characterized by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex: the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally, and in general it may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D, M, respectively, are searched for, within which the forward mapping F is sufficiently smooth that the inverse mapping G exists; (ii) a numerical approximation of G in the subspaces {pD, pM} is found; (iii) a candidate solution is predicted using this numerical approximation. ANNIT works iteratively in cycles. The subspaces {pD, pM} are searched for by generating suitable populations of individuals (models) covering the data and model spaces. The approximation of the inverse mapping is made using three methods: (a) linear regression, (b) the Radial Basis Function Network technique, and (c) linear prediction (also known as "kriging"). The ANNIT algorithm also has a built-in archive of already evaluated models. Archive models are re-used in a suitable way, and thus the number of forward evaluations is minimized. ANNIT is implemented in both MATLAB and SCILAB. 
Numerical tests show good performance of the algorithm. Both versions and the documentation are available on the Internet for anybody to download. The goal of this presentation is to offer the algorithm and computer codes to anybody interested in the solution of inverse problems.
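The predict-evaluate-archive cycle can be sketched on a toy monotonic forward mapping, using local linear regression (method (a) above) on the archived models nearest the target datum; the forward function and all numerical choices are illustrative assumptions, not ANNIT's actual code.

```python
import numpy as np

def forward(p):
    """Toy forward mapping F(p) = p^3 + p (monotonic, so G = F^-1 exists)."""
    return p**3 + p

d_target = forward(0.7)   # datum whose generating model we want to recover

# Initial population of models covering the model space; the archive stores
# every (p, F(p)) pair so no forward evaluation is wasted.
archive_p = list(np.linspace(-2.0, 2.0, 5))
archive_d = [forward(p) for p in archive_p]

for _ in range(40):
    # Approximate the inverse mapping by linear regression on the three
    # archived models whose data are closest to the target datum.
    idx = np.argsort(np.abs(np.array(archive_d) - d_target))[:3]
    B = np.column_stack([np.ones(3), np.array(archive_d)[idx]])
    a, b = np.linalg.lstsq(B, np.array(archive_p)[idx], rcond=None)[0]
    p_cand = a + b * d_target          # predicted candidate solution
    archive_p.append(p_cand)           # one forward evaluation per cycle,
    archive_d.append(forward(p_cand))  # then the pair joins the archive
    if abs(archive_d[-1] - d_target) < 1e-9:
        break

p_est = archive_p[-1]
```

As the archive fills in around the target datum, the local regression becomes an increasingly accurate approximation of G, so convergence accelerates while the forward-evaluation count stays at one per cycle.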

  2. Implementation of fast macromolecular proton fraction mapping on 1.5 and 3 Tesla clinical MRI scanners: preliminary experience

    NASA Astrophysics Data System (ADS)

    Yarnykh, V.; Korostyshevskaya, A.

    2017-08-01

Macromolecular proton fraction (MPF) is a biophysical parameter describing the amount of macromolecular protons involved in magnetization exchange with water protons in tissues. MPF is of significant interest as a magnetic resonance imaging (MRI) biomarker of myelin for clinical applications. A recent fast MPF mapping method enabled clinical translation of MPF measurements due to time-efficient acquisition based on the single-point constrained-fit algorithm. However, previous MPF mapping applications utilized only 3 Tesla MRI scanners and modified pulse sequences, which are not commonly available. This study aimed to test the feasibility of implementing MPF mapping on a 1.5 Tesla clinical scanner using standard manufacturer’s sequences and to compare the performance of the method between 1.5 and 3 Tesla scanners. MPF mapping was implemented on 1.5 and 3 Tesla MRI units of one manufacturer with either optimized custom-written or standard product pulse sequences. Whole-brain three-dimensional MPF maps obtained from a single volunteer were compared between field strengths and implementation options. MPF maps demonstrated similar quality at both field strengths. MPF values in segmented brain tissues and specific anatomic regions appeared in close agreement. This experiment demonstrates the feasibility of fast MPF mapping using standard sequences on 1.5 T and 3 T clinical scanners.

  3. Iterative inversion of deformation vector fields with feedback control.

    PubMed

    Dubey, Abhishek; Iliopoulos, Alexandros-Stavros; Sun, Xiaobai; Yin, Fang-Fang; Ren, Lei

    2018-05-14

Often, the inverse deformation vector field (DVF) is needed together with the corresponding forward DVF in four-dimensional (4D) reconstruction and dose calculation, adaptive radiation therapy, and simultaneous deformable registration. This study aims at improving both the accuracy and efficiency of iterative algorithms for DVF inversion, and at advancing our understanding of divergence and latency conditions. We introduce a framework of fixed-point iteration algorithms with active feedback control for DVF inversion. Based on rigorous convergence analysis, we design control mechanisms for modulating the inverse consistency (IC) residual of the current iterate, to be used as feedback into the next iterate. The control is designed adaptively to the input DVF with the objective of enlarging the convergence area and expediting convergence. Three particular settings of feedback control are introduced: a constant value over the domain throughout the iteration; alternating values between iteration steps; and spatially variant values. We also introduce three spectral measures of the displacement Jacobian for characterizing a DVF. These measures reveal the critical role of what we term the nontranslational displacement component (NTDC) of the DVF. We carry out inversion experiments with an analytical DVF pair, and with DVFs associated with thoracic CT images of six patients at end of expiration and end of inspiration. The NTDC-adaptive iterations are shown to attain a larger convergence region at a faster pace compared to previous nonadaptive DVF inversion iteration algorithms. By our numerical experiments, alternating control yields smaller IC residuals and inversion errors than constant control. Spatially variant control renders smaller residuals and errors by at least an order of magnitude, compared to other schemes, in no more than 10 steps. Inversion results also show remarkable quantitative agreement with analysis-based predictions. 
Our analysis captures properties of DVF data associated with clinical CT images, and provides new understanding of iterative DVF inversion algorithms with a simple residual feedback control. Adaptive control is necessary and highly effective in the presence of nonsmall NTDCs. The adaptive iterations or the spectral measures, or both, may potentially be incorporated into deformable image registration methods. © 2018 American Association of Physicists in Medicine.
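
    The constant-control setting of the fixed-point iteration described above can be sketched in one dimension; this is an illustrative toy (analytic sinusoidal DVF, not the paper's clinical data), iterating v ← v − ε·(v + u(x + v)) so that the inverse-consistency residual is fed back into the next iterate:

```python
import numpy as np

def invert_dvf_1d(u, x, n_iter=50, eps=1.0):
    """Fixed-point DVF inversion with a constant feedback-control
    parameter eps (eps=1 recovers the classic undamped iteration).
    u : callable, forward displacement u(x)
    Returns the inverse displacement v sampled on the grid x."""
    v = np.zeros_like(x)
    for _ in range(n_iter):
        # inverse-consistency residual r(x) = v(x) + u(x + v(x))
        r = v + u(x + v)
        v = v - eps * r          # feedback-controlled update
    return v

# analytic forward DVF: a smooth sinusoidal displacement
u = lambda x: 0.2 * np.sin(x)
x = np.linspace(0.0, 2 * np.pi, 200)
v = invert_dvf_1d(u, x)

# check inverse consistency: mapping forward from x + v(x) should return to x
ic = np.max(np.abs(v + u(x + v)))
```

    With ε = 1 and a contractive displacement (|u'| < 1) the residual shrinks geometrically; smaller ε enlarges the region of convergence at the cost of speed, which is the tradeoff the adaptive controls above are designed to manage.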

  4. Incorporation of a laser range scanner into image-guided liver surgery: surface acquisition, registration, and tracking.

    PubMed

    Cash, David M; Sinha, Tuhin K; Chapman, William C; Terawaki, Hiromi; Dawant, Benoit M; Galloway, Robert L; Miga, Michael I

    2003-07-01

    As image-guided surgical procedures become increasingly diverse, there will be more scenarios where point-based fiducials cannot be accurately localized for registration and rigid body assumptions no longer hold. As a result, procedures will rely more frequently on anatomical surfaces as the basis of image alignment and will require intraoperative geometric data to measure and compensate for tissue deformation in the organ. In this paper we outline methods by which a laser range scanner may be used to accomplish these tasks intraoperatively. A laser range scanner based on the optical principle of triangulation acquires a dense set of three-dimensional point data in a very rapid, noncontact fashion. Phantom studies were performed to test the ability to link range scan data with traditional modes of image-guided surgery data through localization, registration, and tracking in physical space. The experiments demonstrate that the scanner is capable of localizing point-based fiducials to within 0.2 mm and of achieving point- and surface-based registrations with a target registration error of less than 2.0 mm. Tracking points in physical space with the range scanning system yields an error of 1.4 ± 0.8 mm. Surface deformation studies were performed with the range scanner in order to determine whether this device is capable of acquiring enough information for compensation algorithms. In the surface deformation studies, the range scanner was able to detect changes in surface shape due to deformation comparable to those detected by tomographic image studies. Use of the range scanner has been approved for clinical trials, and an initial intraoperative range scan experiment is presented. In all of these studies, the primary source of error in range scan data is deterministically related to the position and orientation of the surface within the scanner's field of view. 
However, this systematic error can be corrected, allowing the range scanner to provide a rapid, robust method of acquiring anatomical surfaces intraoperatively.
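
    The point-based registration underlying the sub-2.0 mm target registration errors above is commonly computed with the least-squares rigid (Kabsch) solution; a minimal sketch with synthetic fiducials (the coordinates and noise-free setup are illustrative, not the phantom data) is:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid registration (Kabsch algorithm):
    find R, t minimizing ||R @ src_i + t - dst_i|| over all points.
    src, dst : (N, 3) arrays of corresponding fiducial points."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                       # proper rotation (det = +1)
    t = c_dst - R @ c_src
    return R, t

rng = np.random.default_rng(0)
fiducials = rng.uniform(-50, 50, size=(6, 3))      # mm, hypothetical markers
theta = np.deg2rad(20)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([5.0, -3.0, 10.0])
measured = fiducials @ R_true.T + t_true
R, t = rigid_register(fiducials, measured)
# fiducial registration error after alignment (noise-free, so near zero)
fre = np.linalg.norm(fiducials @ R.T + t - measured, axis=1).max()
```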

  5. Radiative Transfer Modeling and Retrievals for Advanced Hyperspectral Sensors

    NASA Technical Reports Server (NTRS)

    Liu, Xu; Zhou, Daniel K.; Larar, Allen M.; Smith, William L., Sr.; Mango, Stephen A.

    2009-01-01

    A novel radiative transfer model and a physical inversion algorithm based on principal component analysis will be presented. Instead of dealing with channel radiances, the new approach fits the principal component scores of these quantities. Compared to channel-based radiative transfer models, the new approach compresses the radiances into a much smaller dimension, making both the forward model and the inversion algorithm more efficient.
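
    The compression step can be sketched with a plain SVD-based principal component projection; the synthetic "radiances" below merely stand in for correlated channel spectra and are not the sensor's data:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic training set: 500 spectra of 2000 channels lying near a
# low-dimensional subspace (a stand-in for correlated channel radiances)
basis = rng.normal(size=(8, 2000))
radiances = rng.normal(size=(500, 8)) @ basis + 0.01 * rng.normal(size=(500, 2000))

mean = radiances.mean(axis=0)
# principal components from the SVD of the centered training set
_, _, Vt = np.linalg.svd(radiances - mean, full_matrices=False)
pcs = Vt[:20]                         # keep 20 components (2000 -> 20)

scores = (radiances - mean) @ pcs.T   # compressed representation
recon = scores @ pcs + mean           # back to channel space
rel_err = np.linalg.norm(recon - radiances) / np.linalg.norm(radiances)
```

    The inversion then fits the 20 scores per spectrum rather than 2000 channel radiances, which is the dimensionality saving the abstract refers to.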

  6. Algorithms and Architectures for Elastic-Wave Inversion Final Report CRADA No. TC02144.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, S.; Lindtjorn, O.

    2017-08-15

    This was a collaborative effort between Lawrence Livermore National Security, LLC as manager and operator of Lawrence Livermore National Laboratory (LLNL) and Schlumberger Technology Corporation (STC), to perform a computational feasibility study that investigates hardware platforms and software algorithms applicable to STC for Reverse Time Migration (RTM) / Reverse Time Inversion (RTI) of 3-D seismic data.

  7. Particle Swarm Optimization algorithms for geophysical inversion, practical hints

    NASA Astrophysics Data System (ADS)

    Garcia Gonzalo, E.; Fernandez Martinez, J.; Fernandez Alvarez, J.; Kuzma, H.; Menendez Perez, C.

    2008-12-01

    PSO is a stochastic optimization technique that has been successfully used in many different engineering fields. The PSO algorithm can be physically interpreted as a stochastic damped mass-spring system (Fernandez Martinez and Garcia Gonzalo 2008). Based on this analogy, we present a whole family of PSO algorithms and their respective first-order and second-order stability regions. Their performance is also checked using synthetic functions (Rosenbrock and Griewank) showing a degree of ill-posedness similar to that found in many geophysical inverse problems. Finally, we present the application of these algorithms to the analysis of a Vertical Electrical Sounding inverse problem associated with a seawater intrusion in a coastal aquifer in southern Spain. We analyze the role of the PSO parameters (inertia, local and global accelerations, and discretization step), both in the convergence curves and in the a posteriori sampling of the depth of the intrusion. Comparison is made with binary genetic algorithms and simulated annealing. As a result of this analysis, practical hints are given for selecting the correct algorithm and tuning the corresponding PSO parameters. Fernandez Martinez, J.L., Garcia Gonzalo, E., 2008a. The generalized PSO: a new door to PSO evolution. Journal of Artificial Evolution and Applications. DOI:10.1155/2008/861275.
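
    A minimal PSO of the kind analyzed above, exposing the inertia and acceleration parameters discussed (the parameter values and the Rosenbrock test function are illustrative, not the authors' tuned settings), might look like:

```python
import numpy as np

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

def pso(f, dim=2, n_particles=40, n_iter=400, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer with inertia w and local/global
    accelerations c1, c2 (the parameters whose roles the abstract studies)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2, 2, (n_particles, dim))      # positions
    v = np.zeros_like(x)                            # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()              # global best
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

best_x, best_f = pso(rosenbrock)
```

    The (w, c1, c2) triple determines whether the damped mass-spring analogy predicts stable trajectories, which is what the first- and second-order stability regions characterize.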

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grzetic, S; Weldon, M; Noa, K

    Purpose: This study compares the newly released MaxFOV Revision 1 EFOV reconstruction algorithm for the GE RT590 to the older WideView EFOV algorithm. Two radiotherapy overlays, from Q-fix and Diacor, are included in our analysis. Hounsfield Units (HU) generated with the WideView algorithm varied in the extended field (beyond 50 cm) and the scanned object’s border varied from slice to slice. A validation of HU consistency between the two reconstruction algorithms is performed. Methods: A CatPhan 504 and a CIRS 062 Electron Density Phantom were scanned on a GE RT590 CT-Simulator. The phantoms were positioned in multiple locations within the scan field of view so that some of the density plugs were outside the 50 cm reconstruction circle. Images were reconstructed using both the WideView and MaxFOV algorithms. The HU for each scan were characterized both as averages over a volume and as profiles. Results: HU values are consistent between the two algorithms. Low-density material shows a slight increase in HU value and high-density material a slight decrease as the distance from the sweet spot increases. Border inconsistencies and shading artifacts are still present with the MaxFOV reconstruction on the Q-fix overlay but not the Diacor overlay (it should be noted that the Q-fix overlay is not currently GE-certified). HU values for water outside the 50 cm FOV are within 40 HU of reconstructions at the sweet spot of the scanner. CatPhan HU profiles show improvement with the MaxFOV algorithm as the phantom approaches the scanner edge. Conclusion: The new MaxFOV algorithm improves the contour border for objects outside of the standard FOV when using a GE-approved tabletop. Air cavities outside of the standard FOV create inconsistent object borders. HU consistency is within GE specifications and the accuracy of the phantom edge improves. Further adjustments to the algorithm are being investigated by GE.

  9. A Comparison of Four-Image Reconstruction Algorithms for 3-D PET Imaging of MDAPET Camera Using Phantom Data

    NASA Astrophysics Data System (ADS)

    Baghaei, H.; Wong, Wai-Hoi; Uribe, J.; Li, Hongdi; Wang, Yu; Liu, Yaqiang; Xing, Tao; Ramirez, R.; Xie, Shuping; Kim, Soonseok

    2004-10-01

    We compared two fully three-dimensional (3-D) image reconstruction algorithms and two 3-D rebinning algorithms followed by reconstruction with a two-dimensional (2-D) filtered-backprojection algorithm for 3-D positron emission tomography (PET) imaging. The two 3-D image reconstruction algorithms were the ordered-subsets expectation-maximization (3D-OSEM) and 3-D reprojection (3DRP) algorithms. The two rebinning algorithms were Fourier rebinning (FORE) and single-slice rebinning (SSRB). The 3-D projection data used for this work were acquired with a high-resolution PET scanner (MDAPET) with an intrinsic transaxial resolution of 2.8 mm. The scanner has 14 detector rings covering an axial field-of-view of 38.5 mm. We scanned three phantoms: 1) a uniform cylindrical phantom with an inner diameter of 21.5 cm; 2) a uniform 11.5-cm cylindrical phantom with four embedded small hot lesions with diameters of 3, 4, 5, and 6 mm; and 3) the 3-D Hoffman brain phantom with three embedded small hot lesion phantoms with diameters of 3, 5, and 8.6 mm in a warm background. Lesions were placed at different radial and axial distances. We evaluated the different reconstruction methods for the MDAPET camera by comparing the noise level of images, contrast recovery, and hot lesion detection, and by visually comparing images. We found that overall the 3D-OSEM algorithm, especially when images were postfiltered with the Metz filter, produced the best results in terms of contrast-noise tradeoff, detection of hot spots, and reproduction of brain phantom structures. Even though the MDAPET camera has a relatively small maximum axial acceptance angle (±5°), images produced with the 3DRP algorithm had slightly better contrast recovery and reproduced the structures of the brain phantom slightly better than the faster 2-D rebinning methods.
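
    With a single subset, the OSEM algorithm compared above reduces to the classic MLEM multiplicative update; a sketch on a toy, well-conditioned system matrix (not the MDAPET geometry) is:

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Maximum-likelihood EM reconstruction (OSEM with a single subset).
    A : (n_bins, n_voxels) nonnegative system matrix, y : measured data."""
    x = np.ones(A.shape[1])              # flat initial image
    sens = A.sum(axis=0)                 # sensitivity (backprojection of ones)
    for _ in range(n_iter):
        proj = A @ x                     # forward projection
        x = x * (A.T @ (y / np.maximum(proj, 1e-12))) / sens
    return x

rng = np.random.default_rng(2)
A = np.eye(16) * 2.0 + 0.05 * rng.random((16, 16))   # toy system matrix
x_true = rng.random(16) * 10 + 1.0
y = A @ x_true                            # noise-free projections
x_rec = mlem(A, y)
err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

    OSEM accelerates this by cycling the same update over disjoint subsets of the projection bins within each pass.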

  10. 4D inversion of time-lapse magnetotelluric data sets for monitoring geothermal reservoir

    NASA Astrophysics Data System (ADS)

    Nam, Myung Jin; Song, Yoonho; Jang, Hannuree; Kim, Bitnarae

    2017-06-01

    The productivity of a geothermal reservoir, which is a function of the pore space and fluid-flow paths of the reservoir, varies as the properties of the reservoir change during production. Because the variation in the reservoir properties causes changes in electrical resistivity, time-lapse (TL) three-dimensional (3D) magnetotelluric (MT) methods can be applied to monitor the productivity variation of a geothermal reservoir, thanks not only to their sensitivity to electrical resistivity but also to their deep depth of penetration. For an accurate interpretation of TL MT data sets, a four-dimensional (4D) MT inversion algorithm has been developed to simultaneously invert all vintage data, considering the time-coupling between vintages. However, the changes in electrical resistivity of deep geothermal reservoirs are usually small, generating only minimal variation in TL MT responses. Maximizing the sensitivity of the inversion to the changes in resistivity is critical to the success of 4D MT inversion. Thus, we further developed a focused 4D MT inversion method by considering not only the location of a reservoir but also the distribution of newly generated fractures during production. For the evaluation of the 4D MT algorithm, we tested our 4D inversion algorithms using synthetic TL MT data sets.

  11. Performance evaluation of the whole-body PET scanner ECAT EXACT HR+ following the IEC standard

    NASA Astrophysics Data System (ADS)

    Adam, L.-E.; Zaers, J.; Ostertag, H.; Trojan, H.; Bellemann, M. E.; Brix, G.

    1997-06-01

    The performance parameters of the whole-body PET scanner ECAT EXACT HR+ (CTI/Siemens, Knoxville, TN) were determined following the standard proposed by the International Electrotechnical Commission (IEC). The tests were expanded by some measurements concerning the accuracy of the correction algorithms and the geometric fidelity of the reconstructed images. The scanner consists of 32 rings, each with 576 BGO detectors (4.05 × 4.39 × 30 mm³), covering an axial field-of-view of 15.5 cm and a patient port of 56.2 cm. The transaxial FWHM determined by a Gaussian fit in the 2D (3D) mode is 4.5 (4.3) mm at the center. It increases to 8.9 (8.3) mm radially and to 5.8 (5.2) mm tangentially at a radial distance of r=20 cm. The average axial resolution varies between 4.9 (4.1) mm FWHM at the center and 8.8 (8.1) mm at r=20 cm. The system sensitivity for unscattered true events is 5.85 (26.4) cps/Bq/ml (measured with a 20 cm cylinder). The 50% dead-time losses were reached for a true event count rate (including scatter) of 286 (500) kcps at an activity concentration of 74 (25) kBq/ml. The system scatter fraction is 0.24 (0.35). With the exception of the 3D attenuation correction algorithm, all correction algorithms work reliably. The results reveal that the ECAT EXACT HR+ has a good and nearly isotropic spatial resolution. Due to the small detector elements, however, it has a low slice sensitivity, which is a limiting factor for image quality.

  12. 3D noise power spectrum applied on clinical MDCT scanners: effects of reconstruction algorithms and reconstruction filters

    NASA Astrophysics Data System (ADS)

    Miéville, Frédéric A.; Bolard, Gregory; Benkreira, Mohamed; Ayestaran, Paul; Gudinchet, François; Bochud, François; Verdun, Francis R.

    2011-03-01

    The noise power spectrum (NPS) is the reference metric for understanding the noise content in computed tomography (CT) images. To evaluate the noise properties of clinical multidetector (MDCT) scanners, local 2D and 3D NPSs were computed for different acquisition reconstruction parameters. A 64- and a 128-MDCT scanners were employed. Measurements were performed on a water phantom in axial and helical acquisition modes. CT dose index was identical for both installations. Influence of parameters such as the pitch, the reconstruction filter (soft, standard and bone) and the reconstruction algorithm (filtered-back projection (FBP), adaptive statistical iterative reconstruction (ASIR)) were investigated. Images were also reconstructed in the coronal plane using a reformat process. Then 2D and 3D NPS methods were computed. In axial acquisition mode, the 2D axial NPS showed an important magnitude variation as a function of the z-direction when measured at the phantom center. In helical mode, a directional dependency with lobular shape was observed while the magnitude of the NPS was kept constant. Important effects of the reconstruction filter, pitch and reconstruction algorithm were observed on 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak to the low frequency range were visible. 2D coronal NPS obtained from the reformat images was impacted by the interpolation when compared to 2D coronal NPS obtained from 3D measurements. The noise properties of volume measured in last generation MDCTs was studied using local 3D NPS metric. However, impact of the non-stationarity noise effect may need further investigations.
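
    A local 2D NPS of the kind used above can be estimated from an ensemble of noise-only ROIs by averaging squared DFT magnitudes; this sketch uses synthetic white noise (pixel size, ROI size and noise level are illustrative, not the phantom measurements):

```python
import numpy as np

def nps_2d(rois, px=0.5, py=0.5):
    """Local 2D noise power spectrum from an ensemble of noise-only
    ROIs (e.g. water-phantom subtraction images).
    rois : (n, Ny, Nx) stack; px, py : pixel size in mm."""
    n, Ny, Nx = rois.shape
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)  # remove DC per ROI
    dft2 = np.abs(np.fft.fft2(rois)) ** 2
    return (px * py / (Nx * Ny)) * dft2.mean(axis=0)     # units: HU^2 * mm^2

rng = np.random.default_rng(3)
sigma = 10.0                                     # HU, hypothetical noise level
rois = rng.normal(0.0, sigma, size=(64, 32, 32))
nps = nps_2d(rois)

# Parseval check: integrating the NPS over spatial frequency
# should recover the pixel variance (sigma^2 = 100 HU^2)
px = py = 0.5
var_from_nps = nps.sum() / (px * py * 32 * 32)
```

    Real CT noise is correlated by the reconstruction filter, so the NPS is shaped rather than flat; the white-noise input here only verifies the normalization.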

  13. Comparing implementations of penalized weighted least-squares sinogram restoration

    PubMed Central

    Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick

    2010-01-01

    Purpose: A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. Methods: The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. 
For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. Results: All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors’ previous penalized-likelihood implementation. Conclusions: Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes. PMID:21158306
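
    The iterative strategy can be sketched as a conjugate-gradient solve of the PWLS normal equations (AᵀWA + βR)x = AᵀWy; the small dense matrices below stand in for the large sparse sinogram-restoration operators:

```python
import numpy as np

def pwls_cg(A, W, R, y, beta, n_iter=100, tol=1e-10):
    """Solve the PWLS normal equations (A^T W A + beta*R) x = A^T W y
    with the conjugate-gradient method, forming only matrix-vector
    products (the structure that the sparse implementation exploits)."""
    H = lambda v: A.T @ (W @ (A @ v)) + beta * (R @ v)
    b = A.T @ (W @ y)
    x = np.zeros(A.shape[1])
    r = b - H(x)
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Hp = H(p)
        alpha = rs / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(4)
n = 30
A = rng.normal(size=(50, n))
W = np.diag(rng.uniform(0.5, 2.0, 50))   # statistical weights
R = np.eye(n)                            # quadratic penalty (identity for brevity)
y = rng.normal(size=50)
beta = 0.1
x_cg = pwls_cg(A, W, R, y, beta)
x_direct = np.linalg.solve(A.T @ W @ A + beta * R, A.T @ W @ y)  # closed form
```

    Preconditioning, as the authors describe, replaces H with a spectrally flattened operator to cut the iteration count further.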

  14. A quantitative reconstruction software suite for SPECT imaging

    NASA Astrophysics Data System (ADS)

    Namías, Mauro; Jeraj, Robert

    2017-11-01

    Quantitative Single Photon Emission Tomography (SPECT) imaging allows for measurement of activity concentrations of a given radiotracer in vivo. Although SPECT has usually been perceived as non-quantitative by the medical community, the introduction of accurate CT-based attenuation correction and scatter correction from hybrid SPECT/CT scanners has enabled SPECT systems to be as quantitative as Positron Emission Tomography (PET) systems. We implemented a software suite to reconstruct quantitative SPECT images from hybrid or dedicated SPECT systems with a separate CT scanner. Attenuation, scatter and collimator response corrections were included in an Ordered Subset Expectation Maximization (OSEM) algorithm. A novel scatter fraction estimation technique was introduced. The SPECT/CT system was calibrated with a cylindrical phantom and quantitative accuracy was assessed with an anthropomorphic phantom and a NEMA/IEC image quality phantom. Accurate activity measurements were achieved at an organ level. This software suite helps increase the quantitative accuracy of SPECT scanners.

  15. CT brush and CancerZap!: two video games for computed tomography dose minimization.

    PubMed

    Alvare, Graham; Gordon, Richard

    2015-05-12

    X-ray dose from computed tomography (CT) scanners has become a significant public health concern. All CT scanners spray x-ray photons across a patient, including those using compressive sensing algorithms. New technologies make it possible to aim x-ray beams where they are most needed to form a diagnostic or screening image. We have designed a computer game, CT Brush, that takes advantage of this new flexibility. It uses a standard MART algorithm (Multiplicative Algebraic Reconstruction Technique), but with a user defined dynamically selected subset of the rays. The image appears as the player moves the CT brush over an initially blank scene, with dose accumulating with every "mouse down" move. The goal is to find the "tumor" with as few moves (least dose) as possible. We have successfully implemented CT Brush in Java and made it available publicly, requesting crowdsourced feedback on improving the open source code. With this experience, we also outline a "shoot 'em up game" CancerZap! for photon limited CT. We anticipate that human computing games like these, analyzed by methods similar to those used to understand eye tracking, will lead to new object dependent CT algorithms that will require significantly less dose than object independent nonlinear and compressive sensing algorithms that depend on sprayed photons. Preliminary results suggest substantial dose reduction is achievable.
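
    The MART update at the core of CT Brush applies, for each selected ray, a multiplicative correction by the ratio of measured to reprojected data; a toy sketch (the 2×2 image and hand-built ray set are illustrative, not the game's geometry) is:

```python
import numpy as np

def mart(A, y, n_iter=200, relax=1.0):
    """Multiplicative Algebraic Reconstruction Technique: for each ray i,
    multiply the image by the (measured / reprojected) ratio raised to a
    relaxed, weight-dependent power. A : (n_rays, n_pixels) >= 0, y > 0."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            proj = A[i] @ x
            if proj > 0:
                x = x * (y[i] / proj) ** (relax * A[i] / A[i].max())
    return x

# tiny 2x2 "image" probed by row, column and diagonal rays
img = np.array([1.0, 2.0, 3.0, 4.0])
A = np.array([[1, 1, 0, 0],     # top row
              [0, 0, 1, 1],     # bottom row
              [1, 0, 1, 0],     # left column
              [0, 1, 0, 1],     # right column
              [1, 0, 0, 1]],    # main diagonal
             float)
y = A @ img                      # simulated noise-free ray sums
rec = mart(A, y)
```

    In the game, the row subset of A passed to the update is whatever rays the player's brush strokes have "bought" with dose so far, rather than a fixed scan pattern.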

  16. Investigation of the reconstruction accuracy of guided wave tomography using full waveform inversion

    NASA Astrophysics Data System (ADS)

    Rao, Jing; Ratassepp, Madis; Fan, Zheng

    2017-07-01

    Guided wave tomography is a promising tool to accurately determine the remaining wall thickness at corrosion defects, which are among the major concerns for many industries. The Full Waveform Inversion (FWI) algorithm is an attractive guided wave tomography method, which uses a numerical forward model to predict the waveform of guided waves propagating through corrosion defects, and an inverse model to reconstruct the thickness map from the ultrasonic signals captured by transducers around the defect. This paper discusses the reconstruction accuracy of the FWI algorithm on plate-like structures by using simulations as well as experiments. It was shown that this algorithm can obtain a resolution of around 0.7 wavelengths for defects with smooth depth variations from the acoustic modeling data, and about 1.5-2 wavelengths from the elastic modeling data. Further analysis showed that the reconstruction accuracy is also dependent on the shape of the defect. It was demonstrated that the algorithm maintains its accuracy in the case of multiple defects, compared to conventional algorithms based on the Born approximation.

  17. Using Poisson-regularized inversion of Bremsstrahlung emission to extract full electron energy distribution functions from x-ray pulse-height detector data

    NASA Astrophysics Data System (ADS)

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    2018-02-01

    We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV, spanning five orders of magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. The algorithm is found to outperform current leading x-ray inversion algorithms when the error due to counting statistics is high.

  18. A space efficient flexible pivot selection approach to evaluate determinant and inverse of a matrix.

    PubMed

    Jafree, Hafsa Athar; Imtiaz, Muhammad; Inayatullah, Syed; Khan, Fozia Hanif; Nizami, Tajuddin

    2014-01-01

    This paper presents new, simple approaches for evaluating the determinant and inverse of a matrix. The choice of pivot has been kept arbitrary, which reduces the error when solving an ill-conditioned system. Computation of the determinant has been made more efficient by avoiding unnecessary data storage and by reducing the order of the matrix at each iteration, while dictionary notation [1] has been incorporated for computing the matrix inverse, thereby saving unnecessary calculations. These algorithms are highly classroom-oriented and easy for students to use and implement. By taking advantage of the flexibility in pivot selection, one may easily avoid the development of fractions. Unlike the matrix inversion methods of [2] and [3], the presented algorithms obviate the use of permutations and inverse permutations.
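
    The order-reduction scheme described (a freely chosen pivot, with the matrix shrunk by one at each step) can be sketched as repeated Schur-complement condensation; the largest-magnitude pivot used here is just one admissible choice, not the authors' prescription:

```python
import numpy as np

def det_flexible_pivot(M):
    """Determinant by elimination with an arbitrary pivot choice,
    reducing the matrix order by one at each step: with pivot p at
    (i, j), det(A) = (-1)**(i+j) * p * det(Schur complement of p)."""
    A = np.array(M, dtype=float)
    det = 1.0
    while A.shape[0] > 1:
        # flexible pivot selection: any nonzero entry works; we take
        # the largest in magnitude for numerical stability
        i, j = np.unravel_index(np.abs(A).argmax(), A.shape)
        p = A[i, j]
        if p == 0.0:
            return 0.0            # entire matrix is zero -> singular
        det *= p * (-1) ** (i + j)
        # remove row i and column j, condense to an (n-1)x(n-1) matrix
        rest_r = [r for r in range(A.shape[0]) if r != i]
        rest_c = [c for c in range(A.shape[1]) if c != j]
        A = A[np.ix_(rest_r, rest_c)] - np.outer(A[rest_r, j], A[i, rest_c]) / p
    return det * A[0, 0]

rng = np.random.default_rng(5)
M = rng.normal(size=(5, 5))
d = det_flexible_pivot(M)
```

    Choosing an integer pivot that divides its row exactly is how the flexibility can be used to avoid fractions in hand computation.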

  19. 2.5D complex resistivity modeling and inversion using unstructured grids

    NASA Astrophysics Data System (ADS)

    Xu, Kaijun; Sun, Jie

    2016-04-01

    The complex resistivity characteristics of rocks and ores have long been recognized. Generally, the Cole-Cole model (CCM) is used to describe complex resistivity. It has been proved that the electrical anomaly of a geologic body can be quantitatively estimated by the CCM parameters, such as direct resistivity (ρ0), chargeability (m), time constant (τ) and frequency dependence (c). Thus it is very important to obtain the complex parameters of a geologic body. It is difficult to approximate complex structures and terrain using traditional rectangular grids. In order to enhance the numerical accuracy and rationality of modeling and inversion, we use an adaptive finite-element algorithm for forward modeling of the frequency-domain 2.5D complex resistivity and implement the conjugate gradient algorithm in the inversion of 2.5D complex resistivity. An adaptive finite element method is applied for solving the 2.5D complex resistivity forward modeling of a horizontal electric dipole source. First of all, the CCM is introduced into Maxwell's equations to calculate the complex resistivity electromagnetic fields. Next, the pseudo-delta function is used to distribute the electric dipole source. Then the electromagnetic fields can be expressed in terms of the primary fields caused by the layered structure and the secondary fields caused by inhomogeneities with anomalous conductivity. At last, we calculated the electromagnetic field response of complex geoelectric structures such as anticlines, synclines and faults. The modeling results show that adaptive finite-element methods can automatically improve mesh generation and simulate complex geoelectric models using unstructured grids. The 2.5D complex resistivity inversion is implemented based on the conjugate gradient algorithm, which does not require computing the sensitivity matrix explicitly but only products of the sensitivity matrix, or its transpose, with a vector. 
In addition, the inversion target zones are meshed with fine grids and the background zones with coarse grids; this reduces the number of grid cells in the inversion and is very helpful for improving computational efficiency. The inversion results verify the validity and stability of the conjugate gradient inversion algorithm. The results of theoretical calculation indicate that modeling and inversion of 2.5D complex resistivity using unstructured grids are feasible. Using unstructured grids can improve the accuracy of modeling, but inversion with a large number of grid cells is extremely time-consuming, so parallel computation of the inversion is necessary. Acknowledgments: We thank the National Natural Science Foundation of China (41304094) for its support.
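
    The CCM dispersion referred to above is a closed-form expression in the four parameters (ρ0, m, τ, c); a sketch of its evaluation and limiting behavior (the parameter values are illustrative) is:

```python
import numpy as np

def cole_cole(omega, rho0, m, tau, c):
    """Complex resistivity of the Cole-Cole model:
    rho(omega) = rho0 * (1 - m * (1 - 1 / (1 + (i*omega*tau)**c)))"""
    return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

rho0, m, tau, c = 100.0, 0.3, 0.01, 0.5   # illustrative values
# limiting behavior: DC resistivity rho0 as omega -> 0,
# high-frequency asymptote rho0 * (1 - m)
rho_dc = cole_cole(1e-9, rho0, m, tau, c)
rho_hf = cole_cole(1e12, rho0, m, tau, c)
```

    In the forward modeling this frequency-dependent ρ(ω) replaces the real resistivity in Maxwell's equations at each sounding frequency.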

  20. The NOAA-NASA CZCS Reanalysis Effort

    NASA Technical Reports Server (NTRS)

    Gregg, Watson W.; Conkright, Margarita E.; OReilly, John E.; Patt, Frederick S.; Wang, Meng-Hua; Yoder, James; Casey-McCabe, Nancy; Koblinsky, Chester J. (Technical Monitor)

    2001-01-01

    Satellite observations of global ocean chlorophyll span over two decades. However, incompatibilities between processing algorithms prevent us from quantifying natural variability. We applied a comprehensive reanalysis to the Coastal Zone Color Scanner (CZCS) archive, called the NOAA-NASA CZCS Reanalysis (NCR) Effort. NCR consisted of 1) algorithm improvement (AI), where CZCS processing algorithms were improved using modernized atmospheric correction and bio-optical algorithms, and 2) blending, where in situ data were incorporated into the CZCS AI to minimize residual errors. The results indicated major improvement over the previously available CZCS archive. Global spatial and seasonal patterns of NCR chlorophyll indicated remarkable correspondence with modern sensors, suggesting compatibility. The NCR permits quantitative analyses of interannual and interdecadal trends in global ocean chlorophyll.

  1. Probing numerical Laplace inversion methods for two and three-site molecular exchange between interconnected pore structures.

    PubMed

    Silletta, Emilia V; Franzoni, María B; Monti, Gustavo A; Acosta, Rodolfo H

    2018-01-01

    Two-dimensional (2D) Nuclear Magnetic Resonance relaxometry experiments are a powerful tool extensively used to probe the interaction among different pore structures, mostly in inorganic systems. The analysis of the collected experimental data generally consists of a 2D numerical inversion of time-domain data, from which T2-T2 maps are generated. Through the years, different algorithms for the numerical inversion have been proposed. In this paper, two different algorithms for numerical inversion are tested and compared under different conditions of exchange dynamics: the method based on the Butler-Reeds-Dawson (BRD) algorithm and the fast iterative shrinkage-thresholding algorithm (FISTA). By constructing a theoretical model, the algorithms were tested for two- and three-site porous media, varying the exchange-rate parameters, the pore sizes and the signal-to-noise ratio (SNR). In order to test the methods under realistic experimental conditions, a challenging organic system was chosen. The molecular exchange rates of water confined in hierarchical porous polymeric networks were obtained for two- and three-site porous media. Data processed with the BRD method were found to be accurate only under certain conditions of the exchange parameters, while data processed with the FISTA method are precise for all the studied parameters, except when SNR conditions are extreme. Copyright © 2017 Elsevier Inc. All rights reserved.
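
    The FISTA method compared above is an accelerated proximal-gradient scheme; a sketch on a 1D Laplace-type (T2) inversion with a nonnegativity constraint (the kernel discretization and two-site spectrum are illustrative, not the paper's 2D data) is:

```python
import numpy as np

def fista_nn(K, b, n_iter=2000, lam=0.0):
    """FISTA for min ||K f - b||^2 + lam*||f||_1  subject to f >= 0:
    gradient step, shrink-and-project prox, then a momentum step."""
    L = np.linalg.norm(K, 2) ** 2            # Lipschitz constant of the gradient
    f = np.zeros(K.shape[1])
    z, t = f.copy(), 1.0
    for _ in range(n_iter):
        grad = K.T @ (K @ z - b)
        f_new = np.maximum(z - (grad + lam) / L, 0.0)   # prox: shrink + project
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        z = f_new + ((t - 1) / t_new) * (f_new - f)     # momentum extrapolation
        f, t = f_new, t_new
    return f

# Laplace-type forward model: decay curves from a discrete T2 spectrum
t_axis = np.linspace(0.001, 1.0, 200)
T2 = np.logspace(-3, 0, 50)
K = np.exp(-t_axis[:, None] / T2[None, :])
f_true = np.zeros(50)
f_true[[15, 35]] = 1.0                  # hypothetical two-site spectrum
b = K @ f_true
f_rec = fista_nn(K, b)
res = np.linalg.norm(K @ f_rec - b) / np.linalg.norm(b)
```

    The 2D T2-T2 case replaces K with a Kronecker product of two such kernels; the severe ill-conditioning of the exponential kernel is why regularization and the constraint matter.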

  2. Monte Carlo Approach for Estimating Density and Atomic Number From Dual-Energy Computed Tomography Images of Carbonate Rocks

    NASA Astrophysics Data System (ADS)

    Victor, Rodolfo A.; Prodanović, Maša.; Torres-Verdín, Carlos

    2017-12-01

    We develop a new Monte Carlo-based inversion method for estimating electron density and effective atomic number from 3-D dual-energy computed tomography (CT) core scans. The method accounts for uncertainties in X-ray attenuation coefficients resulting from the polychromatic nature of X-ray beam sources of medical and industrial scanners, in addition to delivering uncertainty estimates of inversion products. Estimation of electron density and effective atomic number from CT core scans enables direct deterministic or statistical correlations with salient rock properties for improved petrophysical evaluation; this condition is specifically important in media such as vuggy carbonates where CT resolution better captures core heterogeneity that dominates fluid flow properties. Verification tests of the inversion method performed on a set of highly heterogeneous carbonate cores yield very good agreement with in situ borehole measurements of density and photoelectric factor.

  3. Forward and inverse solutions for three-element Risley prism beam scanners.

    PubMed

    Li, Anhu; Liu, Xingsheng; Sun, Wansong

    2017-04-03

    Scan blind zone and control singularity are two adverse issues for the beam scanning performance in double-prism Risley systems. In this paper, a theoretical model which introduces a third prism is developed. The critical condition for a fully eliminated scan blind zone is determined through a geometric derivation, providing several useful formulae for three-Risley-prism system design. Moreover, inverse solutions for a three-prism system are established, based on the damped least-squares iterative refinement by a forward ray tracing method. It is shown that the efficiency of this iterative calculation of the inverse solutions can be greatly enhanced by a numerical differentiation method. In order to overcome the control singularity problem, the motion law of any one prism in a three-prism system needs to be conditioned, resulting in continuous and steady motion profiles for the other two prisms.

  4. Interpolation bias for the inverse compositional Gauss-Newton algorithm in digital image correlation

    NASA Astrophysics Data System (ADS)

    Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan

    2018-01-01

    It is widely believed that the classic forward additive Newton-Raphson (FA-NR) algorithm and the more recently introduced inverse compositional Gauss-Newton (IC-GN) algorithm give rise to roughly equal interpolation bias. Questioning this statement, this paper presents a thorough analysis of the interpolation bias of the IC-GN algorithm. A theoretical model is built to analytically characterize the dependence of interpolation bias on the speckle image, the target image interpolation, and the reference image gradient estimation. The interpolation biases of the FA-NR and IC-GN algorithms can differ significantly, with relative differences exceeding 80%. For the IC-GN algorithm, the gradient estimator can strongly affect the interpolation bias; the relative difference can reach 178%. Since the mean bias errors are insensitive to image noise, the proposed theoretical model remains valid in the presence of noise. To provide more implementation details, source codes are uploaded as a supplement.

  5. Sensitivity estimation in time-of-flight list-mode positron emission tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herraiz, J. L.; Sitek, A., E-mail: sarkadiu@gmail.com

    Purpose: An accurate quantification of the images in positron emission tomography (PET) requires knowing the actual sensitivity at each voxel, which represents the probability that a positron emitted in that voxel is finally detected as a coincidence of two gamma rays in a pair of detectors in the PET scanner. This sensitivity depends on the characteristics of the acquisition, as it is affected by the attenuation of the annihilation gamma rays in the body and by possible variations of the sensitivity of the scanner detectors. In this work, the authors propose a new approach to handle time-of-flight (TOF) list-mode PET data, which allows performing either or both a self-attenuation correction and a self-normalization correction based on emission data only. Methods: The authors derive the theory using a fully Bayesian statistical model of complete data. The authors perform an initial evaluation of algorithms derived from that theory and proposed in this work using numerical 2D list-mode simulations with different TOF resolutions and total numbers of detected coincidences. Effects of randoms and scatter are not simulated. Results: The authors found that the proposed algorithms successfully correct for unknown attenuation and scanner normalization for simulated 2D list-mode TOF-PET data. Conclusions: A new method is presented that can be used for corrections for attenuation and normalization (sensitivity) using TOF list-mode data.

  6. Sensitivity estimation in time-of-flight list-mode positron emission tomography.

    PubMed

    Herraiz, J L; Sitek, A

    2015-11-01

    An accurate quantification of the images in positron emission tomography (PET) requires knowing the actual sensitivity at each voxel, which represents the probability that a positron emitted in that voxel is finally detected as a coincidence of two gamma rays in a pair of detectors in the PET scanner. This sensitivity depends on the characteristics of the acquisition, as it is affected by the attenuation of the annihilation gamma rays in the body, and possible variations of the sensitivity of the scanner detectors. In this work, the authors propose a new approach to handle time-of-flight (TOF) list-mode PET data, which allows performing either or both, a self-attenuation correction, and self-normalization correction based on emission data only. The authors derive the theory using a fully Bayesian statistical model of complete data. The authors perform an initial evaluation of algorithms derived from that theory and proposed in this work using numerical 2D list-mode simulations with different TOF resolutions and total number of detected coincidences. Effects of randoms and scatter are not simulated. The authors found that proposed algorithms successfully correct for unknown attenuation and scanner normalization for simulated 2D list-mode TOF-PET data. A new method is presented that can be used for corrections for attenuation and normalization (sensitivity) using TOF list-mode data.

  7. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    NASA Astrophysics Data System (ADS)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as linear inverse problems. The associated problems, however, are large-scale and ill-conditioned, so regularization techniques must be employed to generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the resulting problem. Two main advantages of this algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated in two different experiments: 1) velocity inversion from (synthetic) seismic data based on the Born approximation, and 2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
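    The Bregman-plus-proximal-splitting structure can be sketched compactly. For brevity the sketch below substitutes an ℓ1 penalty for TV, so the proximal step is a plain soft-threshold; a TV version would replace `soft` with a TV-proximal step on the model. As in the paper, only matrix-vector products are needed and a discrepancy criterion stops the outer loop:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def bos(A, f, mu=1.0, n_outer=60, n_inner=40, tol=1e-6):
    """Bregmanized operator splitting for: min ||x||_1  s.t.  A x = f.
    Only matrix-vector products are used (no matrix inversion), and the
    outer loop stops on a discrepancy criterion."""
    delta = 0.9 / (mu * np.linalg.norm(A, 2) ** 2)  # forward-backward step
    x = np.zeros(A.shape[1])
    fk = np.zeros_like(f)
    for _ in range(n_outer):
        fk = fk + (f - A @ x)                 # Bregman update of the data
        for _ in range(n_inner):              # proximal forward-backward
            x = soft(x - delta * mu * (A.T @ (A @ x - fk)), delta)
        if np.linalg.norm(A @ x - f) < tol:   # discrepancy stopping rule
            break
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 3.0]
f = A @ x_true
x_rec = bos(A, f)
```

    The Bregman "adding back" of the residual removes the shrinkage bias that a single regularized solve would leave in the recovered model.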

  8. Real-time inverse kinematics for the upper limb: a model-based algorithm using segment orientations.

    PubMed

    Borbély, Bence J; Szolgay, Péter

    2017-01-17

    Model based analysis of human upper limb movements has key importance in understanding the motor control processes of our nervous system. Various simulation software packages have been developed over the years to perform model based analysis. These packages provide computationally intensive, and therefore off-line, solutions to calculate the anatomical joint angles from motion captured raw measurement data (a process also referred to as inverse kinematics). In addition, recent developments in inertial motion sensing technology show that it may replace large, immobile and expensive optical systems with small, mobile and cheaper solutions in cases when a laboratory-free measurement setup is needed. The objective of the presented work is to extend the workflow of measurement and analysis of human arm movements with an algorithm that allows accurate and real-time estimation of anatomical joint angles for a widely used OpenSim upper limb kinematic model when inertial sensors are used for movement recording. The internal structure of the selected upper limb model is analyzed and used as the underlying platform for the development of the proposed algorithm. Based on this structure, a prototype marker set is constructed that facilitates the reconstruction of model-based joint angles using orientation data directly available from inertial measurement systems. The mathematical formulation of the reconstruction algorithm is presented along with the validation of the algorithm on various platforms, including embedded environments. Execution performance tables of the proposed algorithm show significant improvement on all tested platforms. Compared to OpenSim's Inverse Kinematics tool, a 50-15,000x speedup is achieved while maintaining numerical accuracy.
The proposed algorithm is capable of real-time reconstruction of standardized anatomical joint angles even in embedded environments, establishing a new way for complex applications to take advantage of accurate and fast model-based inverse kinematics calculations.
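    The key step of orientation-based inverse kinematics, forming a relative rotation between parent and child segments and reading the anatomical angle from it, can be sketched for a single-axis (hinge) joint; the segment conventions here are illustrative, not OpenSim's:

```python
import numpy as np

def rot_x(a):
    """Rotation about the x-axis (taken here as the elbow flexion axis)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def flexion_angle(R_upper, R_forearm):
    """Joint angle from two absolute segment orientations (as IMUs report
    them): form the relative rotation parent->child and read the hinge angle
    directly off its matrix entries."""
    R_rel = R_upper.T @ R_forearm
    return np.arctan2(R_rel[2, 1], R_rel[2, 2])

# Simulated sensor outputs: upper arm in an arbitrary global pose, forearm
# flexed by 0.6 rad about the hinge axis.
R_global = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
R_upper = rot_x(0.25) @ R_global
R_forearm = R_upper @ rot_x(0.6)
angle = flexion_angle(R_upper, R_forearm)
```

    Because only products and arctangents of orientation data are involved, this style of computation maps naturally onto the embedded platforms mentioned above.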

  9. Calculating tissue shear modulus and pressure by 2D log-elastographic methods

    NASA Astrophysics Data System (ADS)

    McLaughlin, Joyce R.; Zhang, Ning; Manduca, Armando

    2010-08-01

    Shear modulus imaging, often called elastography, enables detection and characterization of tissue abnormalities. In this paper the data are two displacement components obtained from successive MR or ultrasound data sets acquired while the tissue is excited mechanically. A 2D plane strain elastic model is assumed to govern the 2D displacement, u. The shear modulus, μ, is unknown and, whether or not the first Lamé parameter, λ, is known, the pressure p = λ∇·u that appears in the plane strain model cannot be measured; it is unreliably computed from measured data and can be shown to be an order-one quantity in units of kPa. So here we present a 2D log-elastographic inverse algorithm that (1) simultaneously reconstructs the shear modulus, μ, and p, which together satisfy a first-order partial differential equation system, with the goal of imaging μ, (2) controls potential exponential growth in the numerical error, and (3) reliably reconstructs the quantity p in the inverse algorithm as compared to the same quantity computed with a forward algorithm. This work generalizes the log-elastographic algorithm in Lin et al (2009, Inverse Problems 25), which uses one displacement component, is derived assuming that the component satisfies the wave equation, and is tested on synthetic data computed with the wave equation model. The 2D log-elastographic algorithm is tested on 2D synthetic data and 2D in vivo data from the Mayo Clinic. We also exhibit examples showing that the 2D log-elastographic algorithm improves the quality of the recovered images as compared to the log-elastographic and direct inversion algorithms.

  10. A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem

    PubMed Central

    Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.

    2013-01-01

    Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. 
These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
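    The subspace pursuit recursion at the heart of SPIGH (in its generic compressed-sensing form, without the paper's hierarchical MEG-specific extensions) is short enough to sketch:

```python
import numpy as np

def subspace_pursuit(A, y, K, iters=15):
    """Subspace Pursuit (Dai & Milenkovic) for K-sparse recovery: merge the
    current support with the K strongest correlations of the residual, solve
    least squares on the merged set, then keep the K largest coefficients."""
    n = A.shape[1]
    support = np.argsort(np.abs(A.T @ y))[-K:]
    for _ in range(iters):
        r = y - A[:, support] @ np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        merged = np.union1d(support, np.argsort(np.abs(A.T @ r))[-K:])
        coef = np.linalg.lstsq(A[:, merged], y, rcond=None)[0]
        support = merged[np.argsort(np.abs(coef))[-K:]]
    x = np.zeros(n)
    x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x

# Small synthetic sparse-recovery problem (not MEG data).
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 120)) / np.sqrt(60)
x_true = np.zeros(120)
x_true[[7, 44, 99]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_rec = subspace_pursuit(A, y, K=3)
```

    The MEG setting requires additional machinery (the hierarchical subspace decomposition and handling of correlated lead fields) that this generic sketch omits.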

  11. Digital Oblique Remote Ionospheric Sensing (DORIS) Program Development

    DTIC Science & Technology

    1992-04-01

    ... waveforms. A new autoscaling technique for oblique ionograms, compatible with the ARTIST software (Reinisch and Huang, 1983; Gamache et al., 1985), is ... The development and performance of a complete oblique ionogram autoscaling and inversion algorithm is presented. The inversion algorithm uses a three ... OTH radar. Subject terms: oblique propagation; oblique ionogram autoscaling; electron density profile inversion; simulated ...

  12. SWIM: A Semi-Analytical Ocean Color Inversion Algorithm for Optically Shallow Waters

    NASA Technical Reports Server (NTRS)

    McKinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C. S.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Bailey, Sean W.; Shea, Donald M.; Feldman, Gene C.

    2014-01-01

    In clear shallow waters, light that is transmitted downward through the water column can reflect off the sea floor and thereby influence the water-leaving radiance signal. This effect can confound contemporary ocean color algorithms designed for deep waters where the seafloor has little or no effect on the water-leaving radiance. Thus, inappropriate use of deep water ocean color algorithms in optically shallow regions can lead to inaccurate retrievals of inherent optical properties (IOPs) and therefore have a detrimental impact on IOP-based estimates of marine parameters, including chlorophyll-a and the diffuse attenuation coefficient. In order to improve IOP retrievals in optically shallow regions, a semi-analytical inversion algorithm, the Shallow Water Inversion Model (SWIM), has been developed. Unlike established ocean color algorithms, SWIM considers both the water column depth and the benthic albedo. A radiative transfer study was conducted that demonstrated how SWIM and two contemporary ocean color algorithms, the Generalized Inherent Optical Properties algorithm (GIOP) and Quasi-Analytical Algorithm (QAA), performed in optically deep and shallow scenarios. The results showed that SWIM performed well, whilst both GIOP and QAA showed distinct positive bias in IOP retrievals in optically shallow waters. The SWIM algorithm was also applied to a test region: the Great Barrier Reef, Australia. Using a single test scene and time series data collected by NASA's MODIS-Aqua sensor (2002-2013), a comparison of IOPs retrieved by SWIM, GIOP and QAA was conducted.

  13. Periradicular Infiltration of the Cervical Spine: How New CT Scanner Techniques and Protocol Modifications Contribute to the Achievement of Low-Dose Interventions.

    PubMed

    Elsholtz, Fabian Henry Jürgen; Kamp, Julia Evi-Katrin; Vahldiek, Janis Lucas; Hamm, Bernd; Niehues, Stefan Markus

    2018-06-18

    CT-guided periradicular infiltration of the cervical spine is an effective symptomatic treatment in patients with radiculopathy-associated pain syndromes. This study evaluates the robustness and safety of a low-dose protocol on a CT scanner with iterative reconstruction software. A total of 183 patients who underwent periradicular infiltration therapy of the cervical spine were included in this study. 82 interventions were performed on a new CT scanner with a new intervention protocol using an iterative reconstruction algorithm. Spot scanning was implemented for planning and a basic low-dose setup of 80 kVp and 5 mAs was established during intermittent fluoroscopy. The comparison group included 101 prior interventions on a scanner without iterative reconstruction. The dose-length product (DLP), number of acquisitions, pain reduction on a numeric analog scale, and protocol changes needed to achieve a safe intervention were recorded. The median DLP for the whole intervention was 24.3 mGy*cm in the comparison group and 1.8 mGy*cm in the study group. The median pain reduction was -3 in the study group and -2 in the comparison group. A 5 mAs increase in the tube current-time product was required in 5 patients of the study group. Implementation of the new scanner and intervention protocol resulted in a 92.6 % dose reduction without a compromise in safety or pain relief. The dose needed here is more than 75 % lower than the doses used for similar interventions in published studies. An increase of the tube current-time product was needed in only 6 % of interventions. Key points: · The presented ultra-low-dose protocol allows for a significant dose reduction without compromising outcome. · The protocol includes spot scanning for planning purposes and a basic setup of 80 kVp and 5 mAs. · The iterative reconstruction algorithm is activated during fluoroscopy. Citation: Elsholtz FH, Kamp JE, Vahldiek JL et al. Periradicular Infiltration of the Cervical Spine: How New CT Scanner Techniques and Protocol Modifications Contribute to the Achievement of Low-Dose Interventions. Fortschr Röntgenstr 2018; DOI: 10.1055/a-0632-3930.

  14. Gravity inversion of a fault by Particle swarm optimization (PSO).

    PubMed

    Toushmalani, Reza

    2013-01-01

    Particle swarm optimization (PSO) is a heuristic global optimization algorithm based on swarm intelligence, originating from research on the movement behavior of bird flocks and fish schools. In this paper we introduce and use this method for the gravity inverse problem. We discuss the solution of the inverse problem of determining the shape of a fault whose gravity anomaly is known. Application of the proposed algorithm to this problem has proven its capability to deal with difficult optimization problems, and the technique proved to work efficiently when tested on a number of models.
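    A minimal PSO loop of the kind used here is easy to sketch. The two-parameter "fault" forward model below is purely illustrative (the paper's gravity anomaly expression is not reproduced):

```python
import numpy as np

def pso(misfit, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each velocity blends inertia, a pull
    toward the particle's own best position, and a pull toward the swarm best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([misfit(p) for p in x])
    g = pbest[np.argmin(pcost)]
    for _ in range(iters):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([misfit(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[np.argmin(pcost)]
    return g, pcost.min()

# Hypothetical two-parameter model (a depth and an amplitude-like shape
# factor) producing a toy gravity-like anomaly profile.
XS = np.linspace(-5.0, 5.0, 50)

def anomaly(p):
    depth, shape = p
    return shape * depth / (XS**2 + depth**2)

obs = anomaly(np.array([2.0, 1.3]))              # synthetic "observed" data
best, best_cost = pso(lambda p: np.sum((anomaly(p) - obs)**2),
                      (np.array([0.5, 0.1]), np.array([5.0, 3.0])))
```

    Because PSO needs only misfit evaluations (no derivatives), swapping in a realistic fault forward model changes nothing in the optimizer itself.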

  15. A combined direct/inverse three-dimensional transonic wing design method for vector computers

    NASA Technical Reports Server (NTRS)

    Weed, R. A.; Carlson, L. A.; Anderson, W. K.

    1984-01-01

    A three-dimensional transonic-wing design algorithm for vector computers is developed, and the results of sample computations are presented graphically. The method incorporates the direct/inverse scheme of Carlson (1975), a Cartesian grid system with boundary conditions applied at a mean plane, and a potential-flow solver based on the conservative form of the full potential equation and using the ZEBRA II vectorizable solution algorithm of South et al. (1980). The accuracy and consistency of the method with regard to direct and inverse analysis and trailing-edge closure are verified in the test computations.

  16. A new stochastic algorithm for inversion of dust aerosol size distribution

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Yang, Ma-ying

    2015-08-01

    Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm for inverting the dust aerosol size distribution from light extinction measurements. The direct problems for the size distributions of water drops and dust particles, the main constituents of atmospheric aerosols, are solved by Mie theory and the Lambert-Beer law in the multispectral region. Then the parameters of three widely used functions, i.e. the log-normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which provide the most useful representations of aerosol size distributions, are inverted by the ABC algorithm in the dependent model. Numerical results show that the ABC algorithm can be successfully applied to recover the aerosol size distribution with high feasibility and reliability, even in the presence of random noise.
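    The forward problem, Lambert-Beer spectral extinction of a log-normal size distribution, and a brute-force stand-in for the ABC search can be sketched as follows. The extinction efficiency uses the anomalous-diffraction approximation rather than full Mie theory, and all numbers are illustrative:

```python
import numpy as np

WAVELENGTHS = np.array([0.45, 0.55, 0.67, 0.87, 1.02])  # micrometres
RADII = np.geomspace(0.05, 5.0, 200)                    # micrometres

def q_ext(r, lam, m_minus_1=0.33):
    """Anomalous-diffraction approximation to the extinction efficiency
    (a placeholder for full Mie theory)."""
    rho = 4.0 * np.pi * r * m_minus_1 / lam   # phase-shift parameter
    return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho**2) * (1.0 - np.cos(rho))

def extinction(mu, sigma):
    """Lambert-Beer spectral extinction of a log-normal size distribution:
    tau(lam) ~ integral of Q_ext(r, lam) * pi * r^2 * n(r) dr."""
    n = np.exp(-0.5 * ((np.log(RADII) - np.log(mu)) / sigma) ** 2) / RADII
    dr = np.gradient(RADII)
    return np.array([np.sum(q_ext(RADII, lam) * np.pi * RADII**2 * n * dr)
                     for lam in WAVELENGTHS])

def invert_lognormal(tau_obs):
    """Brute-force misfit search over (mu, sigma) -- standing in for the
    paper's artificial bee colony search over the same kind of misfit."""
    best, best_err = None, np.inf
    for mu in np.linspace(0.1, 1.0, 46):
        for sigma in np.linspace(0.2, 0.9, 36):
            err = np.sum((np.log(extinction(mu, sigma)) - np.log(tau_obs))**2)
            if err < best_err:
                best, best_err = (mu, sigma), err
    return best

tau_obs = extinction(0.4, 0.5)          # synthetic "measurement"
mu_hat, sigma_hat = invert_lognormal(tau_obs)
```

    The ABC algorithm replaces the exhaustive grid with a population of candidate (mu, sigma) "food sources" refined stochastically, which scales far better as more distribution parameters are added.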

  17. A Generic 1D Forward Modeling and Inversion Algorithm for TEM Sounding with an Arbitrary Horizontal Loop

    NASA Astrophysics Data System (ADS)

    Li, Zhanhui; Huang, Qinghua; Xie, Xingbing; Tang, Xingong; Chang, Liao

    2016-08-01

    We present a generic 1D forward modeling and inversion algorithm for transient electromagnetic (TEM) data with an arbitrary horizontal transmitting loop and receivers at any depth in a layered earth. Both the Hankel and sine transforms required in the forward algorithm are calculated using the filter method. The adjoint-equation method is used to derive the formulation of data sensitivity at any depth in non-permeable media. The inversion algorithm based on this forward modeling algorithm and sensitivity formulation is developed using the Gauss-Newton iteration method combined with Tikhonov regularization. We propose a new data-weighting method that minimizes the initial-model dependence and enhances convergence stability. On a laptop with an i7-5700HQ CPU at 3.5 GHz, one inversion iteration for a 200-layer input model with a single receiver takes only 0.34 s, increasing to only 0.53 s for data from four receivers at the same depth. For four receivers at different depths, the inversion iteration runtime increases to 1.3 s. Modeling the data with an irregular loop and an equal-area square loop indicates that the effect of the loop geometry is significant at early times and vanishes gradually as the TEM field diffuses. For a stratified earth, inverting data from more than one receiver is useful for noise reduction, yielding a more credible layered-earth model. However, for a resistive layer shielded below a conductive layer, increasing the number of receivers on the ground does not significantly improve recovery of the resistive layer. Even with down-hole TEM sounding, the shielded resistive layer cannot be recovered if all receivers are above it; our modeling, however, demonstrates remarkable improvement in detecting the resistive layer with receivers in or under this layer.

  18. A simulation based method to assess inversion algorithms for transverse relaxation data

    NASA Astrophysics Data System (ADS)

    Ghosh, Supriyo; Keener, Kevin M.; Pan, Yong

    2008-04-01

    NMR relaxometry is a very useful tool for understanding various chemical and physical phenomena in complex multiphase systems. A Carr-Purcell-Meiboom-Gill (CPMG) [P.T. Callaghan, Principles of Nuclear Magnetic Resonance Microscopy, Clarendon Press, Oxford, 1991] experiment is an easy and quick way to obtain the transverse relaxation constant (T2) in low field. Most samples have a distribution of T2 values, and extraction of this distribution from the noisy decay data is essentially an ill-posed inverse problem. Various inversion approaches have been used to solve this problem to date. A major issue in using an inversion algorithm is determining how accurate the computed distribution is. A systematic analysis of one inversion algorithm, UPEN [G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data, Journal of Magnetic Resonance 132 (1998) 65-77; G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data II. Data spacing, T2 data, systematic data errors, and diagnostics, Journal of Magnetic Resonance 147 (2000) 273-285], was performed by means of simulated CPMG data generation. Through this simulation technique and statistical analyses, the effects of various experimental parameters on the computed distribution were evaluated, and the computed distributions were converged toward the true distribution by matching the inversion results of noise-free and noisy simulated decay data. In addition to the simulation studies, the same approach was applied to real experimental data to support the simulation results.
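    The simulation-based assessment can be sketched end to end: generate a CPMG decay from a known T2 distribution, then invert it with a non-negative fit. The projected-gradient solver below is a bare-bones stand-in for UPEN:

```python
import numpy as np

def simulate_cpmg(t, t2s, weights):
    """Noise-free CPMG echo decay for a discrete T2 distribution."""
    return sum(w * np.exp(-t / t2) for w, t2 in zip(weights, t2s))

def invert_t2(t, y, t2_grid, iters=20000):
    """Non-negative fit of a T2 distribution by projected gradient descent
    (UPEN adds a uniform-penalty smoothness term on top of this)."""
    K = np.exp(-np.outer(t, 1.0 / t2_grid))     # multiexponential kernel
    step = 1.0 / np.linalg.norm(K, 2) ** 2
    f = np.zeros(t2_grid.size)
    for _ in range(iters):
        f = np.maximum(0.0, f - step * (K.T @ (K @ f - y)))
    return f

t = np.linspace(2e-3, 1.0, 250)          # echo times (s)
t2_grid = np.geomspace(1e-3, 2.0, 40)    # candidate T2 values (s)
y = simulate_cpmg(t, t2s=[0.03, 0.3], weights=[0.6, 0.4])
f = invert_t2(t, y, t2_grid)
```

    Because the true distribution is known, the quality of the recovered one can be scored directly, which is exactly the kind of assessment the paper performs while varying noise and acquisition parameters.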

  19. Monte Carlo uncertainty analysis of dose estimates in radiochromic film dosimetry with single-channel and multichannel algorithms.

    PubMed

    Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen; González-López, Antonio

    2018-03-01

    The aim is to provide a multi-stage model for calculating uncertainty in radiochromic film dosimetry with Monte Carlo techniques; this new approach is applied to single-channel and multichannel algorithms. Two lots of Gafchromic EBT3 film are exposed on two different Varian linacs and read with an EPSON V800 flatbed scanner. The Monte Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes, from which traditional parameters of uncertainty analysis, such as standard deviations and bias, are calculated. Moreover, these numerical representations are used to investigate the shape of the probability density functions of the output magnitudes. Another calibration film is also read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis is carried out with the four images. The dose estimates of single-channel and multichannel algorithms show a Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as the reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than four Gy. In summary, a multi-stage model has been presented; with the aid of this model and Monte Carlo techniques, the uncertainties of dose estimates for single-channel and multichannel algorithms are estimated, leading to a complete characterization of the uncertainties in radiochromic film dosimetry.
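    The Monte Carlo propagation itself is straightforward to sketch for a single-channel algorithm: sample noisy pixel values, push them through a (here invented) netOD calibration curve, and summarize the resulting dose distribution:

```python
import numpy as np

def dose_from_netod(net_od, a=10.0, b=35.0, n=2.5):
    """Invented calibration curve D = a*netOD + b*netOD**n (a functional form
    commonly used for radiochromic film; the coefficients are illustrative)."""
    return a * net_od + b * net_od**n

def mc_dose(pv, pv0, sigma_pv, n_samples=200_000, seed=0):
    """Propagate scanner pixel-value noise through netOD and the calibration
    curve, returning the sampled distribution of the dose estimate."""
    rng = np.random.default_rng(seed)
    net_od = np.log10(rng.normal(pv0, sigma_pv, n_samples)
                      / rng.normal(pv, sigma_pv, n_samples))
    return dose_from_netod(net_od)

doses = mc_dose(pv=30_000, pv0=52_000, sigma_pv=150.0)   # 16-bit pixel values
nominal = dose_from_netod(np.log10(52_000 / 30_000))
bias, spread = doses.mean() - nominal, doses.std()
```

    The numerical distribution in `doses` is exactly the kind of object the paper inspects for Gaussianity; the multichannel case would sample all three color channels jointly before combining them.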

  20. MFP scanner diagnostics using a self-printed target to measure the modulation transfer function

    NASA Astrophysics Data System (ADS)

    Wang, Weibao; Bauer, Peter; Wagner, Jerry; Allebach, Jan P.

    2014-01-01

    In the current market, reduction of warranty costs is an important avenue for improving profitability for manufacturers of printer products. Our goal is to develop an autonomous capability for diagnosing printer- and scanner-caused defects in mid-range laser multifunction printers (MFPs), so as to reduce warranty costs. If the scanner unit of the MFP is not performing according to specification, this issue needs to be diagnosed. A print quality issue can be diagnosed by printing a special test page that is resident in the firmware of the MFP unit, and then scanning it; however, the reliability of this process is compromised if the scanner unit is defective. Thus, for both scanner and printer image quality issues, it is important to be able to properly evaluate the scanner performance. In this paper, we consider evaluation of the scanner performance by measuring its modulation transfer function (MTF). The MTF is a fundamental tool for assessing the performance of imaging systems. Several ways have been proposed to measure the MTF, all of which require a special target, for example a slanted-edge target. It is unacceptably expensive to ship every MFP with such a standard target, and to expect that the customer can keep track of it. To eliminate this cost, we develop a new approach based on a self-printed slanted-edge target, propose algorithms that improve the results obtained with it, and present experimental results comparing MTF measurements from self-printed targets to those obtained with standard targets.
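    The MTF computation from an edge target reduces to differentiating the edge-spread function and Fourier-transforming. The sketch below uses a synthetic Gaussian-blurred edge, for which the analytic MTF is exp(-2(πσf)²), rather than a scanned slanted-edge image:

```python
import numpy as np
from math import erf

def mtf_from_esf(esf, dx):
    """MTF from an edge-spread function: differentiate to get the line-spread
    function, Fourier transform, normalise to unity at zero frequency."""
    lsf = np.gradient(esf, dx)
    mtf = np.abs(np.fft.rfft(lsf))
    return np.fft.rfftfreq(esf.size, dx), mtf / mtf[0]

# Synthetic edge: an ideal step blurred by a Gaussian PSF of width sigma (mm).
sigma, dx = 0.08, 0.005          # PSF width and sample spacing, mm
x = np.arange(-2, 2, dx)
esf = 0.5 * (1 + np.array([erf(v / (sigma * np.sqrt(2))) for v in x]))
freqs, mtf = mtf_from_esf(esf, dx)
```

    On a real slanted-edge scan, the edge profile would first be super-resolved by projecting pixels along the known edge angle; the differentiate-and-transform step afterwards is identical.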

  1. Validation of calculation algorithms for organ doses in CT by measurements on a 5 year old paediatric phantom

    NASA Astrophysics Data System (ADS)

    Dabin, Jérémie; Mencarelli, Alessandra; McMillan, Dayton; Romanyukha, Anna; Struelens, Lara; Lee, Choonsik

    2016-06-01

    Many organ dose calculation tools for computed tomography (CT) scans rely on the assumptions: (1) organ doses estimated for one CT scanner can be converted into organ doses for another CT scanner using the ratio of the Computed Tomography Dose Index (CTDI) between two CT scanners; and (2) helical scans can be approximated as the summation of axial slices covering the same scan range. The current study aims to validate experimentally these two assumptions. We performed organ dose measurements in a 5 year-old physical anthropomorphic phantom for five different CT scanners from four manufacturers. Absorbed doses to 22 organs were measured using thermoluminescent dosimeters for head-to-torso scans. We then compared the measured organ doses with the values calculated from the National Cancer Institute dosimetry system for CT (NCICT) computer program, developed at the National Cancer Institute. Whereas the measured organ doses showed significant variability (coefficient of variation (CoV) up to 53% at 80 kV) across different scanner models, the CoV of organ doses normalised to CTDIvol substantially decreased (12% CoV on average at 80 kV). For most organs, the difference between measured and simulated organ doses was within ±20% except for the bone marrow, breasts and ovaries. The discrepancies were further explained by additional Monte Carlo calculations of organ doses using a voxel phantom developed from CT images of the physical phantom. The results demonstrate that organ doses calculated for one CT scanner can be used to assess organ doses from other CT scanners with 20% uncertainty (k = 1), for the scan settings considered in the study.
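    Assumption (1) amounts to a simple CTDIvol scaling, sketched with illustrative numbers:

```python
# Assumption (1): organ dose scales between scanners with the ratio of their
# CTDIvol values for the same protocol.  All numbers below are illustrative.
dose_a = 4.2    # mGy, organ dose computed for scanner A
ctdi_a = 10.5   # mGy, CTDIvol of scanner A
ctdi_b = 8.4    # mGy, CTDIvol of scanner B
dose_b = dose_a * ctdi_b / ctdi_a    # estimated organ dose for scanner B
```

    The study's contribution is quantifying how accurate this scaling is in practice (about 20% at k = 1 for the settings tested).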

  2. An Automatic Procedure for Combining Digital Images and Laser Scanner Data

    NASA Astrophysics Data System (ADS)

    Moussa, W.; Abdel-Wahab, M.; Fritsch, D.

    2012-07-01

    Besides improving both the geometry and the visual quality of the model, the integration of close-range photogrammetry and terrestrial laser scanning aims at filling gaps in laser scanner point clouds to avoid modeling errors, reconstructing more details in higher resolution, and recovering simple structures with less geometric detail. Thus, within this paper a flexible approach for the automatic combination of digital images and laser scanner data is presented. Our approach comprises two methods for data fusion. The first method starts with a marker-free registration of digital images based on a point-based environment model (PEM) of a scene, which stores the 3D laser scanner point clouds associated with intensity and RGB values. The PEM allows the extraction of accurate control information for the direct computation of absolute camera orientations with redundant information by means of accurate space resection methods. In order to use the computed relations between the digital images and the laser scanner data, an extended Helmert (seven-parameter) transformation is introduced and its parameters are estimated. Prior to that, in the second method, the local relative orientation parameters of the camera images are calculated by means of an optimized Structure and Motion (SaM) reconstruction method. Using the determined transformation parameters then yields absolutely oriented images in relation to the laser scanner data. With the resulting absolute orientations we employ robust dense image reconstruction algorithms to create oriented dense image point clouds, which are automatically combined with the laser scanner data to form a complete, detailed representation of a scene. Examples of different data sets are shown, and experimental results demonstrate the effectiveness of the presented procedures.
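    The seven-parameter Helmert (similarity) transformation between the image-derived and laser-scanner frames can be estimated in closed form from point correspondences. The sketch below uses the SVD-based Umeyama/Horn solution on synthetic noise-free points:

```python
import numpy as np

def helmert(src, dst):
    """Closed-form 7-parameter similarity transform dst ≈ s * R @ src + t
    from 3-D point correspondences (SVD-based Umeyama/Horn solution)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(D.T @ S / len(src))
    E = np.eye(3)
    E[2, 2] = np.sign(np.linalg.det(U @ Vt))   # guard against reflections
    R = U @ E @ Vt
    s = np.trace(np.diag(sig) @ E) * len(src) / (S**2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Synthetic correspondences: image-frame points vs. laser-scanner frame.
rng = np.random.default_rng(2)
src = rng.uniform(-10.0, 10.0, (20, 3))
a = 0.4
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
dst = 1.25 * src @ Rz.T + np.array([3.0, -1.0, 5.0])
s, R, t = helmert(src, dst)
```

    With noisy real correspondences the same closed form gives the least-squares estimate, which is why it is a common core of registration pipelines like the one described above.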

  3. Combined chirp coded tissue harmonic and fundamental ultrasound imaging for intravascular ultrasound: 20–60 MHz phantom and ex vivo results

    PubMed Central

    Park, Jinhyoung; Li, Xiang; Zhou, Qifa; Shung, K. Kirk

    2013-01-01

    The application of chirp coded excitation to pulse inversion tissue harmonic imaging can increase signal to noise ratio. On the other hand, the elevation of range side lobe level, caused by leakage of the fundamental signal, has been problematic in mechanical scanners, which are still the most prevalent in high frequency intravascular ultrasound imaging. Fundamental chirp coded excitation imaging can achieve range side lobe levels lower than –60 dB with a Hanning window, but it yields higher side lobe levels than pulse inversion chirp coded tissue harmonic imaging (PI-CTHI). Therefore, in this paper a combined pulse inversion chirp coded tissue harmonic and fundamental imaging mode (CPI-CTHI) is proposed to retain the advantages of both chirp coded harmonic and fundamental imaging modes, demonstrated with 20–60 MHz phantom and ex vivo results. A simulation study shows that the range side lobe level of CPI-CTHI is 16 dB lower than that of PI-CTHI, assuming that the transducer translates incident positions by 50 μm when the two beamlines of a pulse inversion pair are acquired. CPI-CTHI is implemented on a prototype intravascular ultrasound scanner capable of combined data acquisition in real-time. A wire phantom study shows that CPI-CTHI has a 12 dB lower range side lobe level and a 7 dB higher echo signal to noise ratio than PI-CTHI, while the lateral resolution and side lobe level are 50 μm finer and –3 dB lower, respectively, than in fundamental chirp coded excitation imaging. Ex vivo scanning of a rabbit trachea demonstrates that CPI-CTHI is capable of visualizing blood vessels as small as 200 μm in diameter with 6 dB better tissue contrast than either PI-CTHI or fundamental chirp coded excitation imaging. These results clearly indicate that CPI-CTHI may enhance tissue contrast with lower range side lobe levels than PI-CTHI. PMID:22871273
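
    The pulse-inversion principle underlying PI-CTHI can be illustrated numerically: summing the echoes from a chirp and its polarity-inverted copy cancels the (odd-order) fundamental and doubles the (even-order) second harmonic. The sketch below is a toy model only; the sweep range, sampling rate and quadratic "tissue" nonlinearity are assumptions, not the paper's simulation:

```python
import numpy as np

fs = 500e6                               # sampling rate (illustrative)
t = np.arange(0, 2e-6, 1 / fs)
f0, f1 = 20e6, 40e6                      # assumed chirp sweep
k = (f1 - f0) / t[-1]
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))

def tissue(x):
    # Toy quadratic nonlinearity standing in for harmonic generation
    return x + 0.1 * x**2

echo_pos = tissue(chirp)                 # echo of the chirp
echo_neg = tissue(-chirp)                # echo of the inverted chirp
pi_sum = echo_pos + echo_neg             # fundamental cancels; 2nd harmonic doubles
```

    In the real system the summed signal is then pulse-compressed with a matched filter; leakage of imperfectly cancelled fundamental energy into the compressed output is the source of the range side lobes discussed in the abstract.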

  4. A Geophysical Inversion Model Enhancement Technique Based on the Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Zuo, B.; Hu, X.; Li, H.

    2011-12-01

    A model-enhancement technique is proposed to enhance the geophysical inversion model edges and details without introducing any additional information. Firstly, the theoretical correctness of the proposed geophysical inversion model-enhancement technique is discussed. An inversion MRM (model resolution matrix) convolution approximating the PSF (point spread function) is designed to demonstrate the correctness of the deconvolution model-enhancement method. Then, a total-variation regularization blind deconvolution geophysical inversion model-enhancement algorithm is proposed. In previous research, Oldenburg et al. demonstrated the connection between the PSF and the geophysical inverse solution. Alumbaugh et al. proposed that more information could be provided by the PSF if we return to the idea of it behaving as an averaging or low-pass filter. We consider the PSF as a low-pass filter to enhance the inversion model based on the theory of the PSF convolution approximation. Both a 1D linear and a 2D magnetotelluric inversion example are used to analyze the validity of the theory and the algorithm. To prove the proposed PSF convolution approximation theory, the 1D linear inversion problem is considered; it shows that the convolution approximation error ratio is only 0.15%. A 2D synthetic model enhancement experiment is also presented. After the deconvolution enhancement, the edges of the conductive prism and the resistive host become sharper, and according to the numerical statistical analysis the enhanced result is closer to the actual model than the original inversion model. Moreover, the artifacts in the inversion model are suppressed. The overall precision of the model increases by 75%. All of the experiments show that the structural details and the numerical precision of the inversion model are significantly improved, especially in the anomalous region. The correlation coefficient between the enhanced inversion model and the actual model is shown in Fig. 1. The figure illustrates that more of the actual model's information and structural detail is recovered by the proposed enhancement algorithm. Using the proposed enhancement method can help us gain a clearer insight into the results of the inversions and help make better informed decisions.
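
    The core idea, treating the inversion model as the true model blurred by the PSF and deconvolving it, can be illustrated with a simple frequency-domain scheme. The sketch below uses a Wiener filter as a deliberately simplified stand-in for the paper's total-variation regularized blind deconvolution; all model and PSF values are invented:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_power=1e-3):
    """Frequency-domain Wiener deconvolution (1-D sketch)."""
    n = len(blurred)
    H = np.fft.fft(psf, n)
    B = np.fft.fft(blurred)
    W = np.conj(H) / (np.abs(H)**2 + noise_power)   # regularized inverse filter
    return np.real(np.fft.ifft(W * B))

# Toy example: a blocky "model" smeared by a Gaussian PSF
x = np.zeros(128); x[40:60] = 1.0; x[80:90] = -0.5
g = np.exp(-0.5 * (np.arange(-10, 11) / 3.0)**2); psf = g / g.sum()
blurred = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf, 128)))
sharp = wiener_deconvolve(blurred, psf)              # edges are restored
```

    The `noise_power` term plays the role of the regularization in the paper's algorithm: without it, dividing by near-zero PSF spectrum values would amplify noise instead of restoring edges.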

  5. Inverse consistent non-rigid image registration based on robust point set matching

    PubMed Central

    2014-01-01

    Background Robust point matching (RPM) has been extensively used in non-rigid registration of images to robustly register two sets of image points. However, except at the control points, RPM cannot estimate a consistent correspondence between two images because RPM is a unidirectional image matching approach. Improving image registration based on RPM is therefore an important issue. Methods In our work, a consistent image registration approach based on point set matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of only estimating the forward transformation between the source point set and the target point set, as in state-of-the-art RPM algorithms, the forward and backward transformations between two point sets are estimated concurrently in our algorithm. Inverse consistency constraints are introduced into the cost function of RPM and the fuzzy correspondences between two point sets are estimated based on both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into point matching in order to improve image matching. Results Synthetic data sets and medical images are employed to demonstrate and validate the performance of our approach. The inverse consistent errors of our algorithm are smaller than those of RPM. In particular, the topology of the transformations is well preserved by our algorithm even for large deformations between point sets. Moreover, the distance errors of our algorithm are similar to those of RPM, and they maintain a downward trend as a whole, which demonstrates the convergence of our algorithm. The registration errors for image registrations are also evaluated. Again, our algorithm achieves lower registration errors for the same number of iterations. The determinant of the Jacobian matrix of the deformation field is used to analyse the smoothness of the forward and backward transformations. The forward and backward transformations estimated by our algorithm are smooth for small deformations. For registration of lung slices and individual brain slices, both large and small determinants of the Jacobian matrix of the deformation fields are observed. Conclusions Results indicate the improvement of the proposed algorithm in bi-directional image registration and the decrease of the inverse consistent errors of the forward and the reverse transformations between two images. PMID:25559889
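
    The inverse consistency property being measured here can be quantified by composing the forward and backward transformations and measuring how far the composition deviates from the identity. A minimal sketch (the affine example pair is hypothetical, not the paper's thin-plate spline model):

```python
import numpy as np

def inverse_consistency_error(forward, backward, points):
    """Mean distance between p and backward(forward(p)).

    For a perfectly inverse-consistent transformation pair the
    composition is the identity map and the error is zero.
    """
    roundtrip = backward(forward(points))
    return np.mean(np.linalg.norm(roundtrip - points, axis=1))

# Toy affine pair: the backward map exactly inverts the forward map
fwd = lambda p: 1.2 * p + np.array([1.0, -2.0])
bwd = lambda p: (p - np.array([1.0, -2.0])) / 1.2
pts = np.random.default_rng(0).uniform(-5, 5, size=(100, 2))
err = inverse_consistency_error(fwd, bwd, pts)   # ~0 for a consistent pair
```

    In the paper's setting the two transformations are estimated independently, so this error is generally nonzero; adding it as a penalty to the RPM cost function is what drives the two estimates toward a consistent pair.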

  6. Joint inversion of multiple geophysical and petrophysical data using generalized fuzzy clustering algorithms

    NASA Astrophysics Data System (ADS)

    Sun, Jiajia; Li, Yaoguo

    2017-02-01

    Joint inversion that simultaneously inverts multiple geophysical data sets to recover a common Earth model is increasingly being applied to exploration problems. Petrophysical data can serve as an effective constraint to link different physical property models in such inversions. There are two challenges, among others, associated with the petrophysical approach to joint inversion. One is related to the multimodality of petrophysical data, because there often exists more than one relationship between different physical properties in a region of study. The other challenge arises from the fact that petrophysical relationships have different characteristics and can exhibit point, linear, quadratic, or exponential forms in a crossplot. The fuzzy c-means (FCM) clustering technique is effective in tackling the first challenge and has been applied successfully. We focus on the second challenge in this paper and develop a joint inversion method based on variations of the FCM clustering technique. To account for the specific shapes of petrophysical relationships, we introduce several different fuzzy clustering algorithms that are capable of handling different shapes of petrophysical relationships. We present two synthetic examples and one field data example and demonstrate that, by choosing appropriate distance measures for the clustering component in the joint inversion algorithm, the proposed joint inversion method provides an effective means of handling common petrophysical situations encountered in practice. The jointly inverted models have both enhanced structural similarity and increased petrophysical correlation, and better represent the subsurface in the spatial domain and in the parameter domain of physical properties.
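
    The clustering component of such a joint inversion can be sketched with standard Euclidean-distance fuzzy c-means; the paper's variants replace the distance measure to fit linear, quadratic or exponential petrophysical trends, which the sketch below does not attempt:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Basic fuzzy c-means with Euclidean distances (a simplification;
    the paper swaps in shape-aware distance measures)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X))); U /= U.sum(axis=0)   # random fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        # Standard FCM membership update: u_ik ∝ d_ik^(-2/(m-1))
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=0))
    return centers, U

# Usage: two well-separated physical-property clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(10.0, 0.1, (20, 2))])
centers, U = fcm(X, 2)
```

    In the joint inversion, a term measuring the distance of each model cell's property pair to its nearest cluster is added to the misfit, pulling the recovered models toward the petrophysical relationships.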

  7. Computerized tomography platform using beta rays

    NASA Astrophysics Data System (ADS)

    Paetkau, Owen; Parsons, Zachary; Paetkau, Mark

    2017-12-01

    A computerized tomography (CT) system using a 0.1 μCi Sr-90 beta source, Geiger counter, and low density foam samples was developed. A simple algorithm was used to construct images from the data collected with the beta CT scanner. The beta CT system is analogous to X-ray CT as both types of radiation are sensitive to density variations. This system offers a platform for learning opportunities in an undergraduate laboratory, covering topics such as image reconstruction algorithms, radiation exposure, and the energy dependence of absorption.
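
    The "simple algorithm" for a transmission geometry can be sketched in two steps: convert detected counts to line integrals of density, then backproject (smear) each view across the grid. The two-view toy below, including the attenuation coefficient and count level, is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

# Toy 2-D "foam sample": a dense block inside a uniform background
sample = np.zeros((32, 32)); sample[10:20, 12:22] = 1.0

# Transmission sketch: counts fall off exponentially with the line
# integral of density along each beam (two orthogonal views only)
I0, mu = 1000.0, 0.1
proj_rows = I0 * np.exp(-sample.sum(axis=1) * mu)   # beams along x
proj_cols = I0 * np.exp(-sample.sum(axis=0) * mu)   # beams along y

# Reconstruction: invert Beer-Lambert to recover line integrals,
# then backproject each view across the grid
line_rows = -np.log(proj_rows / I0) / mu
line_cols = -np.log(proj_cols / I0) / mu
recon = (line_rows[:, None] + line_cols[None, :]) / (2 * 32)
```

    With only two views the reconstruction shows the familiar star/cross artifact of unfiltered backprojection; adding more angles (and a ramp filter) is the natural classroom extension the platform is designed for.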

  8. Tomographic inversion of satellite photometry

    NASA Technical Reports Server (NTRS)

    Solomon, S. C.; Hays, P. B.; Abreu, V. J.

    1984-01-01

    An inversion algorithm capable of reconstructing the volume emission rate of thermospheric airglow features from satellite photometry has been developed. The accuracy and resolution of this technique are investigated using simulated data, and the inversions of several sets of observations taken by the Visible Airglow Experiment are presented.

  9. A comparative study of controlled random search algorithms with application to inverse aerofoil design

    NASA Astrophysics Data System (ADS)

    Manzanares-Filho, N.; Albuquerque, R. B. F.; Sousa, B. S.; Santos, L. G. C.

    2018-06-01

    This article presents a comparative study of some versions of the controlled random search algorithm (CRSA) in global optimization problems. The basic CRSA, originally proposed by Price in 1977 and improved by Ali et al. in 1997, is taken as a starting point. Then, some new modifications are proposed to improve the efficiency and reliability of this global optimization technique. The performance of the algorithms is assessed using traditional benchmark test problems commonly invoked in the literature. This comparative study points out the key features of the modified algorithm. Finally, a comparison is also made in a practical engineering application, namely the inverse aerofoil shape design.
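
    The basic CRSA move described by Price reflects a randomly chosen simplex point through the centroid of the remaining points and keeps the trial only if it beats the current worst population member. A minimal sketch of that baseline (population size, iteration count and the test function are illustrative, and none of the article's proposed modifications are included):

```python
import numpy as np

def crs(f, bounds, pop=50, iters=2000, seed=0):
    """Basic controlled random search: random-simplex reflection,
    replace the worst point on improvement."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    n = len(lo)
    P = rng.uniform(lo, hi, size=(pop, n))          # initial population
    fP = np.array([f(x) for x in P])
    for _ in range(iters):
        idx = rng.choice(pop, n + 1, replace=False) # random simplex
        centroid = P[idx[:-1]].mean(axis=0)
        trial = 2 * centroid - P[idx[-1]]           # reflection step
        if np.all(trial >= lo) and np.all(trial <= hi):
            ft = f(trial)
            worst = np.argmax(fP)
            if ft < fP[worst]:                      # controlled replacement
                P[worst], fP[worst] = trial, ft
    best = np.argmin(fP)
    return P[best], fP[best]

# Usage: minimize a shifted sphere function
x_best, f_best = crs(lambda x: np.sum((x - 1.5)**2), [(-5, 5)] * 2)
```

    In the inverse aerofoil application, `f` would be the misfit between the target pressure distribution and the one computed for a candidate aerofoil shape.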

  10. Assessing performance of flaw characterization methods through uncertainty propagation

    NASA Astrophysics Data System (ADS)

    Miorelli, R.; Le Bourdais, F.; Artusi, X.

    2018-04-01

    In this work, we assess the inversion performance in terms of crack characterization and localization based on synthetic signals associated with ultrasonic and eddy current physics. More precisely, two different standard iterative inversion algorithms are used to minimize the discrepancy between measurements (i.e., the tested data) and simulations. Furthermore, in order to speed up the computation and avoid the computational burden often associated with iterative inversion algorithms, we replace the standard forward solver by a suitable metamodel fitted on a database built offline. In a second step, we assess the inversion performance by adding uncertainties on a subset of the database parameters and then, through the metamodel, we propagate these uncertainties within the inversion procedure. The fast propagation of uncertainties enables efficient evaluation of the impact of imperfect knowledge of some parameters used to describe the inspection scenarios, a situation commonly encountered in the industrial NDE context.
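
    The metamodel idea, replacing the expensive forward solver with a cheap approximation fitted on an offline database, can be sketched with a one-parameter toy problem. Plain linear interpolation stands in here for whatever surrogate the authors actually use, and the forward model is invented:

```python
import numpy as np

# Offline database: "expensive" forward model sampled on a parameter grid
def forward(p):                      # stand-in for the physics solver
    return np.array([np.sin(p), p**2])

p_train = np.linspace(0.0, 2.0, 41)
y_train = np.array([forward(p) for p in p_train])

def surrogate(p):
    """Cheap metamodel: linear interpolation of the stored database."""
    return np.array([np.interp(p, p_train, y_train[:, k]) for k in range(2)])

# Inversion: minimize the data misfit over a fine grid using only the
# surrogate, never calling the expensive solver again
y_obs = forward(1.23)                # the "measurement"
p_grid = np.linspace(0.0, 2.0, 2001)
misfit = [np.sum((surrogate(p) - y_obs)**2) for p in p_grid]
p_hat = p_grid[int(np.argmin(misfit))]
```

    Because each surrogate evaluation is nearly free, uncertainty propagation (rerunning the inversion for many perturbed parameter sets, as in the abstract) becomes computationally affordable.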

  11. Using Poisson-regularized inversion of Bremsstrahlung emission to extract full electron energy distribution functions from x-ray pulse-height detector data

    DOE PAGES

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    2018-02-27

    We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV and spanning five orders of magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. Finally, the algorithm is found to outperform current leading x-ray inversion algorithms when the error due to counting statistics is high.
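
    A classic baseline for Poisson maximum-likelihood spectrum inversion is the Richardson-Lucy iteration, shown below as a stand-in for the paper's regularized algorithm (the triangular detector-response kernel and the falling source spectrum are invented for illustration):

```python
import numpy as np

def richardson_lucy(counts, K, iters=200):
    """Poisson maximum-likelihood deconvolution (Richardson-Lucy).

    counts ~ Poisson(K @ x), where K maps a source distribution x to
    the measured pulse-height spectrum.
    """
    x = np.full(K.shape[1], counts.sum() / K.shape[1])   # flat start
    for _ in range(iters):
        pred = K @ x + 1e-12
        # Multiplicative update: x *= K^T(d/Kx) / K^T(1)
        x *= (K.T @ (counts / pred)) / K.sum(axis=0)
    return x

# Toy response: each source bin spreads into a triangular kernel
n = 40
K = np.maximum(0, 1 - np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 3.0)
K /= K.sum(axis=0)
x_true = np.exp(-np.arange(n) / 8.0)          # falling "EEDF"
counts = np.random.default_rng(1).poisson(500 * K @ x_true)
x_hat = richardson_lucy(counts, K)
```

    Richardson-Lucy is the unregularized Poisson MLE; the paper's contribution is precisely the regularization that keeps such an inversion stable when counting statistics are poor, which this sketch does not include.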

  12. Using Poisson-regularized inversion of Bremsstrahlung emission to extract full electron energy distribution functions from x-ray pulse-height detector data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV and spanning five orders of magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. Finally, the algorithm is found to outperform current leading x-ray inversion algorithms when the error due to counting statistics is high.

  13. Fast estimation of diffusion tensors under Rician noise by the EM algorithm.

    PubMed

    Liu, Jia; Gasbarra, Dario; Railavo, Juha

    2016-01-15

    Diffusion tensor imaging (DTI) is widely used to characterize, in vivo, the white matter of the central nervous system (CNS). This biological tissue contains much anatomical, structural and orientational information about fibers in the human brain. Spectral data from the displacement distribution of water molecules located in the brain tissue are collected by a magnetic resonance scanner and acquired in the Fourier domain. After the Fourier inversion, the noise distribution is Gaussian in both real and imaginary parts and, as a consequence, the recorded magnitude data are corrupted by Rician noise. Statistical estimation of diffusion leads to a non-linear regression problem. In this paper, we present a fast computational method for maximum likelihood estimation (MLE) of diffusivities under the Rician noise model based on the expectation maximization (EM) algorithm. By using data augmentation, we are able to transform a non-linear regression problem into the generalized linear modeling framework, reducing dramatically the computational cost. The Fisher-scoring method is used for achieving fast convergence of the tensor parameter. The new method is implemented and applied using both synthetic and real data over a wide range of b-amplitudes up to 14,000 s/mm². Higher accuracy and precision of the Rician estimates are achieved compared with other log-normal based methods. In addition, we extend the maximum likelihood (ML) framework to maximum a posteriori (MAP) estimation in DTI under the aforementioned scheme by specifying the priors. We also describe how numerically close the estimators of the model parameters obtained through MLE and MAP estimation are. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. 3-D Magnetotelluric Forward Modeling And Inversion Incorporating Topography By Using Vector Finite-Element Method Combined With Divergence Corrections Based On The Magnetic Field (VFEH++)

    NASA Astrophysics Data System (ADS)

    Shi, X.; Utada, H.; Jiaying, W.

    2009-12-01

    The vector finite-element method combined with divergence corrections based on the magnetic field H, referred to as the VFEH++ method, is developed to simulate the magnetotelluric (MT) responses of 3-D conductivity models. The advantages of the new VFEH++ method are the use of edge elements to eliminate vector parasites and the divergence corrections to explicitly guarantee the divergence-free conditions in the whole modeling domain. 3-D MT topographic responses are modeled using the new VFEH++ method and compared with those calculated by other numerical methods. The results show that MT responses can be modeled with high accuracy using the VFEH++ method. The VFEH++ algorithm is also employed for 3-D MT data inversion incorporating topography. The 3-D MT inverse problem is formulated as a minimization problem of the regularized misfit function. In order to avoid the huge memory requirement and very long computation time of the Jacobian sensitivity matrix in the Gauss-Newton method, we employ the conjugate gradient (CG) approach to solve the inversion equation. In each iteration of the CG algorithm, the costly computation is the product of the Jacobian sensitivity matrix with a model vector x, or of its transpose with a data vector y, each of which can be transformed into a pseudo-forward modeling run. This avoids explicit calculation and storage of the full Jacobian matrix, which leads to considerable savings in the memory required by the inversion program on a PC. The performance of the CG algorithm is illustrated by several typical 3-D models with horizontal and topographic earth surfaces. The results show that the VFEH++ and CG algorithms can be effectively employed for 3-D MT field data inversion.
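
    The key computational point, that each CG iteration needs only products with the Jacobian and its transpose rather than the matrix itself, can be sketched generically. Below, `Jv` and `JTw` stand in for the two pseudo-forward modeling runs; the explicit matrix appears only in the toy usage to have something to check against:

```python
import numpy as np

def cg_normal_equations(Jv, JTw, r, n, lam=1e-3, iters=50):
    """Solve (J^T J + lam I) x = J^T r by conjugate gradients using only
    matrix-vector products, so J is never formed or stored."""
    A = lambda x: JTw(Jv(x)) + lam * x     # implicit normal-equations operator
    b = JTw(r)
    x = np.zeros(n)
    res = b - A(x); p = res.copy(); rs = res @ res
    for _ in range(iters):
        Ap = A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        res -= alpha * Ap
        rs_new = res @ res
        if rs_new < 1e-20:
            break
        p = res + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy check: wrap an explicit matrix in the two matvec callbacks
rng = np.random.default_rng(0)
J = rng.standard_normal((30, 10))
r = rng.standard_normal(30)
x = cg_normal_equations(lambda v: J @ v, lambda w: J.T @ w, r, 10)
```

    In the MT setting, each call to `Jv` or `JTw` is replaced by one forward-like finite-element solve, which is where the memory and storage savings over forming the full Jacobian come from.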

  15. Nonlinear inversion of borehole-radar tomography data to reconstruct velocity and attenuation distribution in earth materials

    USGS Publications Warehouse

    Zhou, C.; Liu, L.; Lane, J.W.

    2001-01-01

    A nonlinear tomographic inversion method that uses first-arrival travel-time and amplitude-spectra information from cross-hole radar measurements was developed to simultaneously reconstruct electromagnetic velocity and attenuation distributions in earth materials. Inversion methods were developed to analyze single cross-hole tomography surveys and differential tomography surveys. Assuming the earth behaves as a linear system, the inversion methods do not require estimation of source radiation pattern, receiver coupling, or geometrical spreading. The data analysis and tomographic inversion algorithm were applied to synthetic test data and to cross-hole radar field data provided by the US Geological Survey (USGS). The cross-hole radar field data were acquired at the USGS fractured-rock field research site at Mirror Lake near Thornton, New Hampshire, before and after injection of a saline tracer, to monitor the transport of electrically conductive fluids in the image plane. Results from the synthetic data test demonstrate the algorithm's computational efficiency and indicate that the method can robustly reconstruct electromagnetic (EM) wave velocity and attenuation distributions in earth materials. The field test results outline zones of velocity and attenuation anomalies consistent with the findings of previous investigators; however, the tomograms appear to be quite smooth. Further work is needed to find the optimal smoothness criterion in applying the Tikhonov regularization in the nonlinear inversion algorithms for cross-hole radar tomography. © 2001 Elsevier Science B.V. All rights reserved.

  16. Nonlinear Rayleigh wave inversion based on the shuffled frog-leaping algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Cheng-Yu; Wang, Yan-Yan; Wu, Dun-Shi; Qin, Xiao-Jun

    2017-12-01

    At present, near-surface shear wave velocities are mainly calculated through Rayleigh wave dispersion-curve inversions in engineering surface investigations, but the required calculations pose a highly nonlinear global optimization problem. In order to alleviate the risk of falling into a local optimal solution, this paper introduces a new global optimization method, the shuffled frog-leaping algorithm (SFLA), into the Rayleigh wave dispersion-curve inversion process. SFLA is a swarm-intelligence-based algorithm that simulates a group of frogs searching for food. It uses few parameters, achieves rapid convergence, and is capable of effective global search. In order to test the reliability and calculation performance of SFLA, noise-free and noisy synthetic datasets were inverted. We conducted a comparative analysis with other established algorithms using the noise-free dataset, and then tested the ability of SFLA to cope with data noise. Finally, we inverted a real-world example to examine the applicability of SFLA. Results from both synthetic and field data demonstrated the effectiveness of SFLA in the interpretation of Rayleigh wave dispersion curves. We found that SFLA is superior to the established methods in terms of both reliability and computational efficiency, so it offers great potential to improve our ability to solve geophysical inversion problems.
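
    A minimal SFLA sketch: frogs are sorted by fitness and dealt round-robin into memeplexes; within each memeplex the worst frog leaps toward the local best, with a random restart if the leap fails. This simplification omits the full algorithm's fallback leap toward the global best, and all parameter values are illustrative:

```python
import numpy as np

def sfla(f, bounds, frogs=30, memeplexes=3, local_iters=10, shuffles=50, seed=0):
    """Simplified shuffled frog-leaping minimization sketch."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    n = len(lo)
    P = rng.uniform(lo, hi, size=(frogs, n))
    F = np.array([f(x) for x in P])
    gsize = frogs // memeplexes
    for _ in range(shuffles):
        order = np.argsort(F)                     # shuffle: sort by fitness...
        P, F = P[order], F[order]
        for m in range(memeplexes):
            idx = np.arange(m, frogs, memeplexes)[:gsize]   # ...and deal out
            for _ in range(local_iters):
                sub = idx[np.argsort(F[idx])]
                best, worst = sub[0], sub[-1]
                step = rng.random() * (P[best] - P[worst])  # leap toward local best
                trial = np.clip(P[worst] + step, lo, hi)
                if f(trial) < F[worst]:
                    P[worst], F[worst] = trial, f(trial)
                else:                              # failed leap: random restart
                    P[worst] = rng.uniform(lo, hi)
                    F[worst] = f(P[worst])
    i = np.argmin(F)
    return P[i], F[i]

# Usage on a sphere function stand-in for a dispersion-curve misfit
x_best, f_best = sfla(lambda x: np.sum(x**2), [(-5, 5)] * 3)
```

    In the inversion application, each frog encodes a layered shear-velocity model and `f` is the misfit between the observed and computed dispersion curves.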

  17. WE-G-18C-05: Characterization of Cross-Vendor, Cross-Field Strength MR Image Intensity Variations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulson, E; Prah, D

    2014-06-15

    Purpose: Variations in MR image intensity and image intensity nonuniformity (IINU) can challenge the accuracy of intensity-based image segmentation and registration algorithms commonly applied in radiotherapy. The goal of this work was to characterize MR image intensity variations across scanner vendors and field strengths commonly used in radiotherapy. Methods: ACR-MRI phantom images were acquired at 1.5T and 3.0T on GE (450w and 750, 23.1), Siemens (Espree and Verio, VB17B), and Philips (Ingenia, 4.1.3) scanners using commercial spin-echo sequences with matched parameters (TE/TR: 20/500 ms, rBW: 62.5 kHz, TH/skip: 5/5mm). Two radiofrequency (RF) coil combinations were used for each scanner: body coil alone, and combined body and phased-array head coils. Vendor-specific B1- corrections (PURE/Pre-Scan Normalize/CLEAR) were applied in all head coil cases. Images were transferred offline, corrected for IINU using the MNI N3 algorithm, and normalized. Coefficients of variation (CV=σ/μ) and peak image uniformity (PIU = 1−(Smax−Smin)/(Smax+Smin)) estimates were calculated for one homogeneous phantom slice. Kruskal-Wallis and Wilcoxon matched-pairs tests compared mean MR signal intensities and differences between original and N3 image CV and PIU. Results: Wide variations in both MR image intensity and IINU were observed across scanner vendors, field strengths, and RF coil configurations. Applying the MNI N3 correction for IINU resulted in significant improvements in both CV and PIU (p=0.0115, p=0.0235). However, wide variations in overall image intensity persisted, requiring image normalization to improve consistency across vendors, field strengths, and RF coils. These results indicate that B1- correction routines alone may be insufficient in compensating for IINU and image scaling, warranting additional corrections prior to use of MR images in radiotherapy. Conclusions: MR image intensities and IINU vary as a function of scanner vendor, field strength, and RF coil configuration. A two-step strategy consisting of MNI N3 correction followed by normalization was required to improve MR image consistency. Funding provided by Advancing a Healthier Wisconsin.

  18. Recursive flexible multibody system dynamics using spatial operators

    NASA Technical Reports Server (NTRS)

    Jain, A.; Rodriguez, G.

    1992-01-01

    This paper uses spatial operators to develop new spatially recursive dynamics algorithms for flexible multibody systems. The operator description of the dynamics is identical to that for rigid multibody systems. Assumed-mode models are used for the deformation of each individual body. The algorithms are based on two spatial operator factorizations of the system mass matrix. The first (Newton-Euler) factorization of the mass matrix leads to recursive algorithms for the inverse dynamics, mass matrix evaluation, and composite-body forward dynamics for the systems. The second (innovations) factorization of the mass matrix leads to an operator expression for the mass matrix inverse and to a recursive articulated-body forward dynamics algorithm. The primary focus is on serial chains, but extensions to general topologies are also described. A comparison of computational costs shows that the articulated-body forward dynamics algorithm is much more efficient than the composite-body algorithm for most flexible multibody systems.

  19. Skull removal in MR images using a modified artificial bee colony optimization algorithm.

    PubMed

    Taherdangkoo, Mohammad

    2014-01-01

    Removal of the skull from brain Magnetic Resonance (MR) images is an important preprocessing step required for other image analysis techniques such as brain tissue segmentation. In this paper, we propose a new algorithm based on the Artificial Bee Colony (ABC) optimization algorithm to remove the skull region from brain MR images. We modify the ABC algorithm using a different strategy for initializing the coordinates of scout bees and their direction of search. Moreover, we impose an additional constraint on the ABC algorithm to avoid the creation of discontinuous regions. We found that our algorithm successfully removed all bony skull regions from a sample of de-identified MR brain images acquired from different scanner models. Compared with previously introduced, well-known optimization algorithms such as Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO), the proposed algorithm demonstrates superior results and computational performance, suggesting its potential for clinical applications.
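
    The underlying ABC search, before the paper's modifications to scout initialization and the contiguity constraint, can be sketched on a generic minimization problem. Everything below (parameters, test function) is illustrative:

```python
import numpy as np

def abc_minimize(f, bounds, bees=20, iters=200, limit=20, seed=0):
    """Bare-bones artificial bee colony: each bee perturbs its food source
    along one random dimension relative to a random neighbour; sources
    that stop improving are abandoned by scout bees."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    n = len(lo)
    X = rng.uniform(lo, hi, size=(bees, n))
    F = np.array([f(x) for x in X])
    stale = np.zeros(bees, dtype=int)
    b = int(np.argmin(F)); best_x, best_f = X[b].copy(), F[b]
    for _ in range(iters):
        for i in range(bees):
            j = int(rng.integers(bees - 1)); j += j >= i   # neighbour != i
            d = int(rng.integers(n))
            trial = X[i].copy()
            trial[d] += rng.uniform(-1, 1) * (X[i, d] - X[j, d])
            trial = np.clip(trial, lo, hi)
            ft = f(trial)
            if ft < F[i]:
                X[i], F[i], stale[i] = trial, ft, 0
                if ft < best_f:
                    best_x, best_f = trial.copy(), ft
            else:
                stale[i] += 1
                if stale[i] > limit:               # scout: restart this source
                    X[i] = rng.uniform(lo, hi)
                    F[i] = f(X[i]); stale[i] = 0
    return best_x, best_f

x_best, f_best = abc_minimize(lambda x: np.sum(x**2), [(-5, 5)] * 2)
```

    In the skull-stripping application, the objective would score a candidate brain/skull boundary, with the paper's extra constraint penalizing discontinuous regions.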

  20. ASTEP user's guide and software documentation

    NASA Technical Reports Server (NTRS)

    Gliniewicz, A. S.; Lachowski, H. M.; Pace, W. H., Jr.; Salvato, P., Jr.

    1974-01-01

    The Algorithm Simulation Test and Evaluation Program (ASTEP) is a modular computer program developed for the purpose of testing and evaluating methods of processing remotely sensed multispectral scanner earth resources data. ASTEP is written in FORTRAN V on the UNIVAC 1110 under the EXEC 8 operating system and may be operated in either a batch or interactive mode. The program currently contains over one hundred subroutines consisting of data classification and display algorithms, statistical analysis algorithms, utility support routines, and feature selection capability. The current program can accept data in LARSC1, LARSC2, ERTS, and Universal formats, and can output processed image or data tapes in Universal format.

  1. Image reconstruction and system modeling techniques for virtual-pinhole PET insert systems

    PubMed Central

    Keesing, Daniel B; Mathews, Aswin; Komarov, Sergey; Wu, Heyu; Song, Tae Yong; O'Sullivan, Joseph A; Tai, Yuan-Chuan

    2012-01-01

    Virtual-pinhole PET (VP-PET) imaging is a new technology in which one or more high-resolution detector modules are integrated into a conventional PET scanner with lower-resolution detectors. It can locally enhance the spatial resolution and contrast recovery near the add-on detectors, and depending on the configuration, may also increase the sensitivity of the system. This novel scanner geometry makes the reconstruction problem more challenging compared to the reconstruction of data from a standalone PET scanner, as new techniques are needed to model and account for the non-standard acquisition. In this paper, we present a general framework for fully 3D modeling of an arbitrary VP-PET insert system. The model components are incorporated into a statistical reconstruction algorithm to estimate an image from the multi-resolution data. For validation, we apply the proposed model and reconstruction approach to one of our custom-built VP-PET systems – a half-ring insert device integrated into a clinical PET/CT scanner. Details regarding the most important implementation issues are provided. We show that the proposed data model is consistent with the measured data, and that our approach can lead to reconstructions with improved spatial resolution and lesion detectability. PMID:22490983

  2. Using Laser Scanners to Augment the Systematic Error Pointing Model

    NASA Astrophysics Data System (ADS)

    Wernicke, D. R.

    2016-08-01

    The antennas of the Deep Space Network (DSN) rely on precise pointing algorithms to communicate with spacecraft that are billions of miles away. Although the existing systematic error pointing model is effective at reducing blind pointing errors due to static misalignments, several of its terms have a strong dependence on seasonal and even daily thermal variation and are thus not easily modeled. Changes in the thermal state of the structure create a separation from the model and introduce a varying pointing offset. Compensating for this varying offset is possible by augmenting the pointing model with laser scanners. In this approach, laser scanners mounted to the alidade measure structural displacements while a series of transformations generate correction angles. Two sets of experiments were conducted in August 2015 using commercially available laser scanners. When compared with historical monopulse corrections under similar conditions, the computed corrections are within 3 mdeg of the mean. However, although the results show promise, several key challenges relating to the sensitivity of the optical equipment to sunlight render an implementation of this approach impractical. Other measurement devices such as inclinometers may be implementable at a significantly lower cost.

  3. Stochastic inversion of ocean color data using the cross-entropy method.

    PubMed

    Salama, Mhd Suhyb; Shen, Fang

    2010-01-18

    Improving the inversion of ocean color data is an ever-continuing effort to increase the accuracy of derived inherent optical properties. In this paper we present a stochastic inversion algorithm to derive inherent optical properties from ocean color, ship and spaceborne data. The inversion algorithm is based on the cross-entropy method, where sets of inherent optical properties are generated and converged to the optimal set using an iterative process. The algorithm is validated against four data sets: simulated, noisy simulated, in-situ measured and satellite match-up data sets. Statistical analysis of validation results is based on model-II regression using five goodness-of-fit indicators; only R2 and root mean square error (RMSE) are mentioned hereafter. Accurate values of the total absorption coefficient are derived with R2 > 0.91 and RMSE, of log-transformed data, less than 0.55. Reliable values of the total backscattering coefficient are also obtained with R2 > 0.7 (after removing outliers) and RMSE < 0.37. The developed algorithm has the ability to derive reliable results from noisy data, with R2 above 0.96 for the total absorption and above 0.84 for the backscattering coefficients. The algorithm is self-contained and easy to implement and modify to derive the variability of chlorophyll-a absorption that may correspond to different phytoplankton species. It gives consistently accurate results and is therefore worth considering for ocean color global products.
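
    The cross-entropy machinery described here, generating candidate sets and converging them toward the optimal set, can be sketched generically: sample candidates from a parametric distribution, keep an elite fraction, refit the distribution, and repeat until it concentrates. The quadratic misfit below is a placeholder, not an ocean-color forward model:

```python
import numpy as np

def cross_entropy_min(f, mu, sigma, n=100, elite=10, iters=60, seed=0):
    """Cross-entropy minimization: sample, select the elite, refit."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.array(mu, float), np.array(sigma, float)
    for _ in range(iters):
        X = rng.normal(mu, sigma, size=(n, len(mu)))   # candidate sets
        scores = np.array([f(x) for x in X])
        E = X[np.argsort(scores)[:elite]]              # elite candidates
        mu, sigma = E.mean(axis=0), E.std(axis=0) + 1e-12
    return mu

# Usage: recover the minimizer of a toy quadratic "misfit"
x_hat = cross_entropy_min(lambda x: np.sum((x - np.array([0.3, 2.0]))**2),
                          mu=[0.0, 0.0], sigma=[2.0, 2.0])
```

    In the ocean-color application, each candidate is a set of inherent optical properties and `f` measures the discrepancy between modeled and observed reflectances.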

  4. Genetic Algorithms Evolve Optimized Transforms for Signal Processing Applications

    DTIC Science & Technology

    2005-04-01

    coefficient sets describing inverse transforms and matched forward/ inverse transform pairs that consistently outperform wavelets for image compression and reconstruction applications under conditions subject to quantization error.

  5. Development and Verification of a Novel Robot-Integrated Fringe Projection 3D Scanning System for Large-Scale Metrology.

    PubMed

    Du, Hui; Chen, Xiaobo; Xi, Juntong; Yu, Chengyi; Zhao, Bao

    2017-12-12

    Large-scale surfaces are prevalent in advanced manufacturing industries, and 3D profilometry of these surfaces plays a pivotal role in quality control. This paper proposes a novel and flexible large-scale 3D scanning system assembled by combining a robot, a binocular structured light scanner and a laser tracker. The measurement principle and system construction of the integrated system are introduced. A mathematical model is established for the global data fusion. Subsequently, a robust method is introduced for the establishment of the end coordinate system. For hand-eye calibration, the calibration ball is observed by the scanner and the laser tracker simultaneously. With these data, the hand-eye relationship is solved, and an algorithm is built to obtain the transformation matrix between the end coordinate system and the world coordinate system. A validation experiment is designed to verify the proposed algorithms. First, a hand-eye calibration experiment is implemented and the transformation matrix is computed. Then a car body rear is measured 22 times in order to verify the global data fusion algorithm. The 3D shape of the rear is reconstructed successfully. To evaluate the precision of the proposed method, a metric tool is built and the results are presented.

  6. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    NASA Astrophysics Data System (ADS)

    Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image using maximum-likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion-blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion-blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.
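    The restoration step can be illustrated with a 1-D Richardson-Lucy-style MLEM sketch, assuming the per-frame motion is known so a blur operator H (each row averaging one voxel's motion path) can be built; the phantom and shifts below are illustrative, not the Hoffman-phantom setup:

```python
import numpy as np

def mlem_deconvolve(blurred, H, n_iter=500):
    """MLEM restoration: the blurred frame average is modelled as H @ u,
    and u is refined by the multiplicative update
    u <- u * (H' (d / Hu)) / (H' 1)."""
    u = np.full(H.shape[1], blurred.mean() + 1e-9)   # flat positive start
    sens = H.T @ np.ones(H.shape[0])                 # sensitivity (column sums)
    for _ in range(n_iter):
        est = H @ u
        u *= (H.T @ (blurred / np.maximum(est, 1e-12))) / sens
    return u

# Known motion: average of the true 1-D profile and two shifted copies.
true = np.zeros(32)
true[10:14] = 4.0
true[20] = 6.0
shifts = [0, 1, 2]
H = sum(np.roll(np.eye(32), s, axis=1) for s in shifts) / len(shifts)
blurred = H @ true
restored = mlem_deconvolve(blurred, H)
```

    The multiplicative update keeps the estimate non-negative, one of the properties that makes MLEM attractive for emission data.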

  7. The Use of Computer Vision Algorithms for Automatic Orientation of Terrestrial Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Markiewicz, Jakub Stefan

    2016-06-01

    The paper presents an analysis of the orientation of terrestrial laser scanning (TLS) data. In the proposed data processing methodology, point clouds are considered as panoramic images enriched by the depth map. Computer vision (CV) algorithms are used for orientation and are evaluated for correctness of tie-point detection, computation time, and difficulty of implementation. The BRISK, FASRT, MSER, SIFT, SURF, ASIFT and CenSurE algorithms are used to search for key-points. The source data are point clouds acquired using a Z+F 5006h terrestrial laser scanner on the ruins of Iłża Castle, Poland. Algorithms allowing combination of the photogrammetric and CV approaches are also presented.

  8. A simulation study of a dual-plate in-room PET system for dose verification in carbon ion therapy

    NASA Astrophysics Data System (ADS)

    Chen, Ze; Hu, Zheng-Guo; Chen, Jin-Da; Zhang, Xiu-Ling; Guo, Zhong-Yan; Xiao, Guo-Qing; Sun, Zhi-Yu; Huang, Wen-Xue; Wang, Jian-Song

    2014-08-01

    During carbon ion therapy, many positron emitters such as 11C, 15O and 10C are generated in irradiated tissues by nuclear reactions, and can be used to track the carbon beam in the tissue with a positron emission tomography (PET) scanner. In this study, a dual-plate in-room PET scanner has been designed and evaluated based on the GATE simulation platform to monitor patient dose in carbon ion therapy. The dual-plate PET is designed to avoid interference with the carbon beamline and with patient positioning. Its performance was compared with that of four-head and full-ring PET scanners. The dual-plate, four-head and full-ring PET scanners consisted of 30, 60 and 60 detector modules, respectively, with a 36 cm distance between directly opposite detector modules for dose deposition measurements. Each detector module consisted of a 24×24 array of 2 mm×2 mm×18 mm LYSO pixels coupled to a Hamamatsu H8500 PMT. To estimate the production yield of positron emitters, a 10 cm×15 cm×15 cm cuboid PMMA phantom was irradiated with 172, 200, 250 MeV/u 12C beams. 3D images of the activity distribution measured by the three types of scanner are produced by an iterative reconstruction algorithm. Comparison of the longitudinal profiles of positron emitters along the carbon beam path indicates that the dual-plate PET scanner is feasible for monitoring the dose distribution in carbon ion therapy.

  9. Comparing the accuracy of terrestrial laser scanner in measuring forest inventory variables to enhance better decision making for potential fire hazards

    NASA Astrophysics Data System (ADS)

    Ghimire, Suman; Xystrakis, Fotios; Koutsias, Nikos

    2017-04-01

    Forest inventory variables are essential for assessing wildfire hazard potential and for estimating above-ground biomass and carbon sequestration, which helps in developing strategies for sustainable management of forests. Effective management of forest resources relies on the accuracy of such inventory variables. This study aims to compare the accuracy of forest inventory variables such as diameter at breast height (DBH) and tree height obtained from a terrestrial laser scanner (Faro Focus 3D X 330) with those from traditional forest inventory techniques in the Mediterranean forests of Greece. The data acquisition was carried out on an area of 9,539.8 m2 with six plots, each of radius 6 m. The Computree algorithm was applied for automatic detection of DBH from terrestrial laser scanner data. Similarly, tree height was estimated manually using CloudCompare software for the terrestrial laser scanner data. The field estimates of DBH and tree height were obtained using calipers and a Nikon Forestry 550 Laser Rangefinder. The comparison of DBH between field estimates and the Terrestrial Laser Scanner (TLS) resulted in R squared values ranging from 0.75 to 0.96 at the plot level. Average R2 and RMSE values of 0.80 and 1.07 m, respectively, were obtained when comparing tree height between TLS and field data. Our results confirm that a terrestrial laser scanner can provide nondestructive, high-resolution, and precise determination of forest inventory for better decision making in sustainable forest management and assessment of potential forest fire hazards.
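    The goodness-of-fit statistics quoted above can be reproduced for any paired field/TLS sample; a small sketch with made-up tree heights (ordinary R2 and RMSE, not any study-specific regression):

```python
import math

def r_squared(obs, pred):
    """Coefficient of determination between field (obs) and TLS (pred) values."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def rmse(obs, pred):
    """Root mean square error between the two sets of measurements."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

# Illustrative tree heights (m): field measurements vs. TLS-derived values.
field = [12.1, 15.4, 9.8, 18.2, 14.0]
tls = [11.5, 16.0, 10.4, 17.1, 13.2]
```
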

  10. Embedding Term Similarity and Inverse Document Frequency into a Logical Model of Information Retrieval.

    ERIC Educational Resources Information Center

    Losada, David E.; Barreiro, Alvaro

    2003-01-01

    Proposes an approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. Highlights include document representation and matching; incorporating term similarity into the measure of distance; new algorithms for implementation; inverse document frequency; and logical versus classical models of…

  11. Efficient mapping algorithms for scheduling robot inverse dynamics computation on a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chen, C. L.

    1989-01-01

    Two efficient mapping algorithms are presented for scheduling the robot inverse dynamics computation, consisting of m computational modules with precedence relationships, on a multiprocessor system of p identical homogeneous processors with processor and communication costs, so as to achieve minimum computation time. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. The minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and the scheduling problems, both of which are known to be NP-complete. Thus, to speed up the search for a solution, two heuristic algorithms are proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules, and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
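    A simplified sketch of the first algorithm's idea, assuming zero communication costs and a greedy earliest-free-processor assignment in place of the paper's weighted bipartite matching:

```python
def list_schedule(duration, preds, n_proc):
    """Priority-list scheduling heuristic: order modules by 'level' (longest
    path to any exit module), then give the highest-level ready module to the
    processor that frees up first."""
    succs = {m: [] for m in duration}
    for m in duration:
        for p in preds.get(m, []):
            succs[p].append(m)
    level = {}
    def lvl(m):                      # memoized critical-path level
        if m not in level:
            level[m] = duration[m] + max((lvl(s) for s in succs[m]), default=0)
        return level[m]
    waiting = {m: len(preds.get(m, [])) for m in duration}
    ready = [m for m in duration if waiting[m] == 0]
    proc_free = [0.0] * n_proc
    finish = {}
    while ready:
        ready.sort(key=lambda m: -lvl(m))
        m = ready.pop(0)
        i = min(range(n_proc), key=lambda j: proc_free[j])
        start = max([proc_free[i]] + [finish[p] for p in preds.get(m, [])])
        finish[m] = start + duration[m]
        proc_free[i] = finish[m]
        for s in succs[m]:           # release successors whose preds are done
            waiting[s] -= 1
            if waiting[s] == 0:
                ready.append(s)
    return finish, max(finish.values())

# Toy task graph: A precedes B and C, which both precede D.
dur = {"A": 2, "B": 3, "C": 3, "D": 1}
pre = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
finish, makespan = list_schedule(dur, pre, n_proc=2)
```

    With two processors the heuristic reaches the critical-path length (2 + 3 + 1 = 6) for this graph; with one processor it serializes all modules.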

  12. Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.

    PubMed

    Ebert, M

    1997-12-01

    This is the final article in a three-part examination of optimisation in radiotherapy. Previous articles have established the bases and form of the radiotherapy optimisation problem, and examined certain types of optimisation algorithm, namely, those which perform some form of ordered search of the solution space (mathematical programming), and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.
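    A generic Metropolis-style simulated annealing sketch of the class of stochastic search methods discussed, with a hypothetical beam-weight cost function (not drawn from the article); note how the perturbation scale and acceptance probability constrict as the temperature falls:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, n_iter=4000, seed=1):
    """Stochastic search: random perturbations are accepted if they improve
    the cost, or with Boltzmann probability exp(-dC/T) otherwise."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(n_iter):
        xn = neighbor(x, rng, t)
        cn = cost(xn)
        if cn < c or rng.random() < math.exp(-(cn - c) / max(t, 1e-12)):
            x, c = xn, cn
            if c < best_c:
                best, best_c = x, c
        t *= cooling                 # geometric cooling schedule
    return best, best_c

# Toy beam-weight problem: summed "dose" should hit a prescription of 10,
# with a small smoothness penalty on the individual weights.
target = 10.0
cost = lambda w: (sum(w) - target) ** 2 + 0.01 * sum(wi ** 2 for wi in w)
neighbor = lambda w, rng, t: [max(0.0, wi + rng.gauss(0, 0.5) * (0.1 + t)) for wi in w]
best_w, best_c = simulated_annealing(cost, neighbor, [1.0, 1.0, 1.0])
```
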

  13. 1D-VAR Retrieval Using Superchannels

    NASA Technical Reports Server (NTRS)

    Liu, Xu; Zhou, Daniel; Larar, Allen; Smith, William L.; Schluessel, Peter; Mango, Stephen; SaintGermain, Karen

    2008-01-01

    Since modern ultra-spectral remote sensors have thousands of channels, it is difficult to include all of them in a 1D-var retrieval system. We will describe a physical inversion algorithm, which includes all available channels for the atmospheric temperature, moisture, cloud, and surface parameter retrievals. Both the forward model and the inversion algorithm compress the channel radiances into superchannels. These superchannels are obtained by projecting the radiance spectra onto a set of pre-calculated eigenvectors. The forward model provides both superchannel properties and the Jacobian in EOF space directly. For ultra-spectral sensors such as the Infrared Atmospheric Sounding Interferometer (IASI) and the NPOESS Airborne Sounder Testbed Interferometer (NAST), a compression ratio of more than 80 can be achieved, leading to a significant reduction in the computations involved in an inversion process. Results of applying the algorithm to real IASI and NAST data will be shown.
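    The superchannel compression amounts to projecting each spectrum onto pre-calculated eigenvectors; a sketch with synthetic spectra, where channel counts and mode structure are assumptions chosen only to mimic the >80 compression ratio mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)
n_chan, n_modes = 5000, 6
modes = rng.normal(size=(n_chan, n_modes))           # spectral variability modes
training = modes @ rng.normal(size=(n_modes, 300))   # 300 training spectra

# Pre-calculated eigenvectors (EOFs) of the training radiances via SVD.
u, _, _ = np.linalg.svd(training, full_matrices=False)
k = 60                         # number of superchannels kept: 5000/60 > 80
eofs = u[:, :k]

spectrum = modes @ rng.normal(size=n_modes)   # a new radiance spectrum
super_channels = eofs.T @ spectrum            # 5000 channels -> 60 numbers
restored = eofs @ super_channels              # back-projection for checking
```

    Because the synthetic spectra live in a low-dimensional subspace spanned by the leading EOFs, the 60 superchannels reproduce the full spectrum essentially exactly.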

  14. Inverse transport calculations in optical imaging with subspace optimization algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Tian, E-mail: tding@math.utexas.edu; Ren, Kui, E-mail: ren@math.utexas.edu

    2014-09-15

    Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of the low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
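    The low/high-frequency splitting can be illustrated on a linear toy problem, with a random ill-conditioned matrix standing in for the transport operator: the low-frequency part is recovered analytically from the leading singular triplets, and the remainder by minimization restricted to the complementary subspace:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 12, 6                                   # problem size, low-frequency cutoff
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = 2.0 ** -np.arange(n)                       # rapidly decaying spectrum
K = (U * s) @ V.T                              # forward operator K = U diag(s) V'
x_true = V[:, :8] @ rng.normal(size=8)         # unknown dominated by low modes
y = K @ x_true

# Step 1: analytic low-frequency recovery from the leading singular triplets.
x_low = V[:, :k] @ ((U[:, :k].T @ y) / s[:k])

# Step 2: gradient minimization of the residual over the complementary
# (high-frequency) subspace, step sized by the largest remaining singular value.
Vh = V[:, k:]
c = np.zeros(n - k)
for _ in range(300):
    c -= (Vh.T @ (K.T @ (K @ (x_low + Vh @ c) - y))) / s[k] ** 2
x_rec = x_low + Vh @ c
```

    The deepest high-frequency modes converge extremely slowly, which mirrors the ill-posedness that motivates recovering only "part of" each component.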

  15. ERBE Geographic Scene and Monthly Snow Data

    NASA Technical Reports Server (NTRS)

    Coleman, Lisa H.; Flug, Beth T.; Gupta, Shalini; Kizer, Edward A.; Robbins, John L.

    1997-01-01

    The Earth Radiation Budget Experiment (ERBE) is a multisatellite system designed to measure the Earth's radiation budget. The ERBE data processing system consists of several software packages or sub-systems, each designed to perform a particular task. The primary task of the Inversion Subsystem is to reduce satellite altitude radiances to fluxes at the top of the Earth's atmosphere. To accomplish this, angular distribution models (ADMs) are required. These ADMs are a function of viewing and solar geometry and of the scene type as determined by the ERBE scene identification algorithm which is a part of the Inversion Subsystem. The Inversion Subsystem utilizes 12 scene types which are determined by the ERBE scene identification algorithm. The scene type is found by combining the most probable cloud cover, which is determined statistically by the scene identification algorithm, with the underlying geographic scene type. This Contractor Report describes how the geographic scene type is determined on a monthly basis.

  16. Transitionless driving on adiabatic search algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oh, Sangchul, E-mail: soh@qf.org.qa; Kais, Sabre, E-mail: kais@purdue.edu; Department of Chemistry, Department of Physics and Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907

    We study quantum dynamics of the adiabatic search algorithm with the equivalent two-level system. Its adiabatic and non-adiabatic evolution is studied and visualized as trajectories of Bloch vectors on a Bloch sphere. We find the change in the non-adiabatic transition probability from exponential decay for short running times to inverse-square decay for asymptotic running times. The scaling of the critical running time is expressed in terms of the Lambert W function. We derive the transitionless driving Hamiltonian for the adiabatic search algorithm, which makes a quantum state follow the adiabatic path. We demonstrate that a uniform transitionless driving Hamiltonian, approximating the exact time-dependent driving Hamiltonian, can alter the non-adiabatic transition probability from inverse-square decay to inverse-fourth-power decay with the running time. This may open up a new but simple way of speeding up adiabatic quantum dynamics.

  17. SU-E-I-22: A Comprehensive Investigation of Noise Variations Between the GE Discovery CT750 HD and GE LightSpeed VCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bache, S; Loyer, E; Stauduhar, P

    2015-06-15

    Purpose: To quantify and compare the noise properties between two GE CT models, the Discovery CT750 HD (aka HD750) and the LightSpeed VCT, with the overall goal of assessing the impact in clinical diagnostic practice. Methods: Daily QC data from a fleet of 9 CT scanners currently in clinical use were investigated – 5 HD750 and 4 VCT (over 600 total acquisitions for each scanner). A standard GE QC phantom was scanned daily using two sets of scan parameters with each scanner over 1 year. Water CT number and standard deviation were recorded from the image of the water section of the QC phantom. The standard GE QC scan parameters (Pitch = 0.516, 120kVp, 0.4s, 335mA, Small Body SFOV, 5mm thickness) and an in-house developed protocol (Axial, 120kVp, 1.0s, 240mA, Head SFOV, 5mm thickness) were used, with the Standard reconstruction algorithm. Noise was measured as the standard deviation in the center of the water phantom image. Inter-model noise distributions and tube output in mR/mAs were compared to assess any relative differences in noise properties. Results: With the in-house protocols, average noise for the five HD750 scanners was ∼9% higher than for the VCT scanners (5.8 vs 5.3). For the GE QC protocol, average noise with the HD750 scanners was ∼11% higher than with the VCT scanners (4.8 vs 4.3). This discrepancy in noise between the two models was found despite comparable tube output in mR/mAs, the HD750 scanners having only ∼4% lower output (8.0 vs 8.3 mR/mAs). Conclusion: Using identical scan protocols, average noise in images from the HD750 group was higher than that from the VCT group. This confirms an institutional radiologist's feedback regarding grainier patient images from HD750 scanners. Further investigation is warranted to assess the noise texture and distribution, as well as the clinical impact.

  18. An improved grey wolf optimizer algorithm for the inversion of geoelectrical data

    NASA Astrophysics Data System (ADS)

    Li, Si-Yu; Wang, Shu-Ming; Wang, Peng-Fei; Su, Xiao-Lu; Zhang, Xin-Song; Dong, Zhi-Hui

    2018-05-01

    The grey wolf optimizer (GWO) is a novel bionics algorithm inspired by the social rank and prey-seeking behaviors of grey wolves. The GWO algorithm is easy to implement because of its basic concept, simple formula, and small number of parameters. This paper develops a GWO algorithm with a nonlinear convergence factor and an adaptive location-updating strategy and applies this improved grey wolf optimizer (IGWO) algorithm to geophysical inversion problems using magnetotelluric (MT), DC resistivity and induced polarization (IP) methods. Numerical tests in MATLAB 2010b for the forward modeling data and the observed data show that the IGWO algorithm can find the global minimum and rarely sinks into local minima. For further study, inverted results using the IGWO are contrasted with those of particle swarm optimization (PSO) and the simulated annealing (SA) algorithm. The comparison reveals that the IGWO and PSO perform similarly, and both balance exploration and exploitation better than the SA within a given number of iterations.
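    A compact GWO sketch with one possible nonlinear convergence factor; the paper's exact factor and adaptive location-updating strategy are not reproduced, and the sphere-function misfit merely stands in for a geoelectrical objective:

```python
import numpy as np

def igwo_minimize(f, dim, lo, hi, n_wolves=30, n_iter=300, seed=0):
    """Grey wolf optimizer sketch. Wolves move relative to the three best
    solutions (alpha, beta, delta); the convergence factor decays nonlinearly,
    a = 2*(1 - (t/T)**2), shifting the swarm from exploration to exploitation."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iter):
        fitness = np.array([f(x) for x in X])
        leaders = X[np.argsort(fitness)[:3]]          # alpha, beta, delta (copy)
        a = 2.0 * (1.0 - (t / n_iter) ** 2)
        for i in range(n_wolves):
            moves = []
            for leader in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2.0 * a * r1 - a, 2.0 * r2
                moves.append(leader - A * np.abs(C * leader - X[i]))
            X[i] = np.clip(np.mean(moves, axis=0), lo, hi)
    fitness = np.array([f(x) for x in X])
    return X[np.argmin(fitness)]

best = igwo_minimize(lambda x: np.sum((x - 1.5) ** 2), dim=4, lo=-10.0, hi=10.0)
```
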

  19. Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework

    PubMed Central

    Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.

    2016-01-01

    Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of TOF scanners, with less sensitivity to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we have developed a novel iterative reconstruction approach (DIRECT: Direct Image Reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias vs. variance performance to iterative TOF reconstruction with a matched resolution model. PMID:27032968

  20. Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework

    NASA Astrophysics Data System (ADS)

    Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.

    2016-05-01

    Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of time-of-flight (TOF) scanners, with less sensitivity to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we have developed a novel iterative reconstruction approach (DIRECT: direct image reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias versus variance performance to iterative TOF reconstruction with a matched resolution model.

  1. An effective medium inversion algorithm for gas hydrate quantification and its application to laboratory and borehole measurements of gas hydrate-bearing sediments

    NASA Astrophysics Data System (ADS)

    Chand, Shyam; Minshull, Tim A.; Priest, Jeff A.; Best, Angus I.; Clayton, Christopher R. I.; Waite, William F.

    2006-08-01

    The presence of gas hydrate in marine sediments alters their physical properties. In some circumstances, gas hydrate may cement sediment grains together and dramatically increase the seismic P- and S-wave velocities of the composite medium. Hydrate may also form a load-bearing structure within the sediment microstructure, but with different seismic wave attenuation characteristics, changing the attenuation behaviour of the composite. Here we introduce an inversion algorithm based on effective medium modelling to infer hydrate saturations from velocity and attenuation measurements on hydrate-bearing sediments. The velocity increase is modelled as extra binding developed by gas hydrate that strengthens the sediment microstructure. The attenuation increase is modelled through a difference in fluid flow properties caused by different permeabilities in the sediment and hydrate microstructures. We relate velocity and attenuation increases in hydrate-bearing sediments to their hydrate content, using an effective medium inversion algorithm based on the self-consistent approximation (SCA), differential effective medium (DEM) theory, and Biot and squirt flow mechanisms of fluid flow. The inversion algorithm is able to convert observations in compressional and shear wave velocities and attenuations to hydrate saturation in the sediment pore space. We applied our algorithm to a data set from the Mallik 2L-38 well, Mackenzie delta, Canada, and to data from laboratory measurements on gas-rich and water-saturated sand samples. Predictions using our algorithm match the borehole data and water-saturated laboratory data if the proportion of hydrate contributing to the load-bearing structure increases with hydrate saturation. The predictions match the gas-rich laboratory data if that proportion decreases with hydrate saturation. We attribute this difference to differences in hydrate formation mechanisms between the two environments.

  2. An effective medium inversion algorithm for gas hydrate quantification and its application to laboratory and borehole measurements of gas hydrate-bearing sediments

    USGS Publications Warehouse

    Chand, S.; Minshull, T.A.; Priest, J.A.; Best, A.I.; Clayton, C.R.I.; Waite, W.F.

    2006-01-01

    The presence of gas hydrate in marine sediments alters their physical properties. In some circumstances, gas hydrate may cement sediment grains together and dramatically increase the seismic P- and S-wave velocities of the composite medium. Hydrate may also form a load-bearing structure within the sediment microstructure, but with different seismic wave attenuation characteristics, changing the attenuation behaviour of the composite. Here we introduce an inversion algorithm based on effective medium modelling to infer hydrate saturations from velocity and attenuation measurements on hydrate-bearing sediments. The velocity increase is modelled as extra binding developed by gas hydrate that strengthens the sediment microstructure. The attenuation increase is modelled through a difference in fluid flow properties caused by different permeabilities in the sediment and hydrate microstructures. We relate velocity and attenuation increases in hydrate-bearing sediments to their hydrate content, using an effective medium inversion algorithm based on the self-consistent approximation (SCA), differential effective medium (DEM) theory, and Biot and squirt flow mechanisms of fluid flow. The inversion algorithm is able to convert observations in compressional and shear wave velocities and attenuations to hydrate saturation in the sediment pore space. We applied our algorithm to a data set from the Mallik 2L–38 well, Mackenzie delta, Canada, and to data from laboratory measurements on gas-rich and water-saturated sand samples. Predictions using our algorithm match the borehole data and water-saturated laboratory data if the proportion of hydrate contributing to the load-bearing structure increases with hydrate saturation. The predictions match the gas-rich laboratory data if that proportion decreases with hydrate saturation. We attribute this difference to differences in hydrate formation mechanisms between the two environments.

  3. FWT2D: A massively parallel program for frequency-domain full-waveform tomography of wide-aperture seismic data—Part 1: Algorithm

    NASA Astrophysics Data System (ADS)

    Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves

    2009-03-01

    This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the resolution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the solution of a large sparse system of linear equations, which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, the latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the following companion paper. The parallel efficiency and the scalability of the code will also be quantified.

  4. Parsimony and goodness-of-fit in multi-dimensional NMR inversion

    NASA Astrophysics Data System (ADS)

    Babak, Petro; Kryuchkov, Sergey; Kantzas, Apostolos

    2017-01-01

    Multi-dimensional nuclear magnetic resonance (NMR) experiments are often used for the study of molecular structure and dynamics of matter in core analysis and reservoir evaluation. Industrial applications of multi-dimensional NMR involve a high-dimensional measurement dataset with complicated correlation structure and require rapid and stable inversion algorithms from the time domain to the relaxation rate and/or diffusion domains. In practice, applying existing inversion algorithms with a large number of parameter values leads to an infinite number of solutions with a reasonable fit to the NMR data. The interpretation of such variability of multiple solutions and selection of the most appropriate solution could be a very complex problem. In most cases the characteristics of materials have sparse signatures, and investigators would like to distinguish the most significant relaxation and diffusion values of the materials. To produce an easy-to-interpret and unique NMR distribution with a finite number of principal parameter values, we introduce a new method for NMR inversion. The method is constructed based on the trade-off between the conventional goodness-of-fit approach to multivariate data and the principle of parsimony guaranteeing inversion with the least number of parameter values. We suggest performing the inversion of NMR data using the forward stepwise regression selection algorithm. To account for the trade-off between goodness-of-fit and parsimony, the objective function is selected based on the Akaike Information Criterion (AIC). The performance of the developed multi-dimensional NMR inversion method and its comparison with conventional methods are illustrated using real data for samples with bitumen, water and clay.
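    The selection loop can be sketched in one relaxation dimension, assuming a dictionary of candidate rates and the common AIC form n·log(RSS/n) + 2k; the paper's multi-dimensional formulation is not reproduced here:

```python
import numpy as np

def stepwise_aic(t, y, candidate_rates, max_terms=10):
    """Forward stepwise NMR-style inversion: greedily add the candidate
    relaxation rate whose exponential component best reduces the misfit,
    and stop as soon as AIC stops decreasing."""
    n = len(y)
    chosen, best_aic, coeffs = [], np.inf, None
    while len(chosen) < max_terms:
        best = None
        for r in candidate_rates:
            if r in chosen:
                continue
            basis = np.exp(-np.outer(t, chosen + [r]))
            c, *_ = np.linalg.lstsq(basis, y, rcond=None)
            rss = np.sum((y - basis @ c) ** 2)
            aic = n * np.log(max(rss / n, 1e-20)) + 2 * (len(chosen) + 1)
            if best is None or aic < best[0]:
                best = (aic, r, c)
        if best[0] >= best_aic:      # parsimony: adding a term no longer pays
            break
        best_aic, r, coeffs = best
        chosen.append(r)
    return chosen, coeffs

# Sparse two-component decay observed on a dictionary of candidate rates.
t = np.linspace(0.0, 5.0, 200)
y = 1.0 * np.exp(-0.5 * t) + 2.0 * np.exp(-3.0 * t)
rates = [0.1, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0, 10.0]
chosen, coeffs = stepwise_aic(t, y, rates)
residual = y - np.exp(-np.outer(t, chosen)) @ coeffs
```

    The 2k penalty is what trades goodness-of-fit against the number of retained parameter values.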

  5. Fully 3D refraction correction dosimetry system.

    PubMed

    Manjappa, Rakesh; Makki, S Sharath; Kumar, Rajesh; Vasu, Ram Mohan; Kanhirodan, Rajan

    2016-02-21

    The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependant on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc) that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken for various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, the filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using conventional algebraic reconstruction (ART) and refractive index corrected ART (ART-rc) algorithms. The reconstructions based on FDK algorithm for cone-beam tomography has also been carried out for comparison. Line scanners and point detectors, are used to obtain reconstructions plane by plane. The rays passing through dose region with a RI mismatch does not reach the detector in the same plane depending on the angle of incidence and RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. 
It is found that, for the central region of the dosimeter, the usable radius with the ART-rc algorithm and water as the RI-matched medium is 71.8%, an increase of 6.4% over that achieved with the conventional ART algorithm. Smaller-diameter dosimeters are scanned in dry air using a wide-angle lens that collects refracted light. The images reconstructed using cone-beam geometry are seen to deteriorate in some planes, as those regions are not scanned. Refraction correction is important and needs to be taken into consideration to achieve quantitatively accurate dose reconstructions. Refraction modeling is crucial in array-based scanners as it is not possible to identify refracted rays in sinogram space.
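    The conventional ART referenced above is, at heart, a Kaczmarz-style row-by-row update. The sketch below is a minimal illustration (the function name and the tiny two-voxel system are our own, and no refraction correction is modeled): each ray's measured projection is compared with the current estimate and the residual is distributed back along the ray.

    ```python
    import numpy as np

    def art_reconstruct(A, b, n_iters=200, relax=1.0):
        """Kaczmarz-style ART: x is the flattened dose image, A[i] holds the
        path-length contribution of each voxel to ray i, b[i] is the measured
        projection along ray i."""
        x = np.zeros(A.shape[1])
        row_norms = (A * A).sum(axis=1)
        for _ in range(n_iters):
            for i in range(A.shape[0]):
                if row_norms[i] == 0.0:
                    continue
                residual = b[i] - A[i] @ x          # projection mismatch
                x += relax * residual / row_norms[i] * A[i]
        return x

    # Tiny consistent system: ray 1 sees voxel 0 only, ray 2 sees both voxels.
    A = np.array([[1.0, 0.0], [1.0, 1.0]])
    b = np.array([2.0, 5.0])                         # true image is [2, 3]
    x = art_reconstruct(A, b)
    ```

    ART-rc additionally re-traces each refracted ray through the dose regions before the system matrix is built, which is the part the paper adds.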

  6. Improved L-BFGS diagonal preconditioners for a large-scale 4D-Var inversion system: application to CO2 flux constraints and analysis error calculation

    NASA Astrophysics Data System (ADS)

    Bousserez, Nicolas; Henze, Daven; Bowman, Kevin; Liu, Junjie; Jones, Dylan; Keller, Martin; Deng, Feng

    2013-04-01

    This work presents improved analysis error estimates for 4D-Var systems. From operational NWP models to top-down constraints on trace gas emissions, many of today's data assimilation and inversion systems in atmospheric science rely on variational approaches. This success is due to both the mathematical clarity of these formulations and the availability of computationally efficient minimization algorithms. However, unlike Kalman Filter-based algorithms, these methods do not provide an estimate of the analysis or forecast error covariance matrices, these error statistics being propagated only implicitly by the system. From both a practical (cycling assimilation) and scientific perspective, assessing uncertainties in the solution of the variational problem is critical. For large-scale linear systems, deterministic or randomization approaches can be considered based on the equivalence between the inverse Hessian of the cost function and the covariance matrix of analysis error. For perfectly quadratic systems, like incremental 4D-Var, Lanczos/Conjugate-Gradient algorithms have proven to be most efficient in generating low-rank approximations of the Hessian matrix during the minimization. For weakly non-linear systems though, the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS), a quasi-Newton descent algorithm, is usually considered the best method for the minimization. Suitable for large-scale optimization, this method allows one to generate an approximation to the inverse Hessian using the latest m vector/gradient pairs generated during the minimization, m depending upon the available core memory. At each iteration, an initial low-rank approximation to the inverse Hessian has to be provided, which is called preconditioning. The ability of the preconditioner to retain useful information from previous iterations largely determines the efficiency of the algorithm. 
Here we assess the performance of different preconditioners to estimate the inverse Hessian of a large-scale 4D-Var system. The impact of using the diagonal preconditioners proposed by Gilbert and Lemaréchal (1989) instead of the usual Oren-Spedicato scalar will first be presented. We will also introduce new hybrid methods that combine randomization estimates of the analysis error variance with L-BFGS diagonal updates to improve the inverse Hessian approximation. Results from these new algorithms will be evaluated against standard large-ensemble Monte Carlo simulations. The methods explored here are applied to the problem of inferring global atmospheric CO2 fluxes using remote sensing observations, and are intended to be integrated with the future NASA Carbon Monitoring System.
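    The role of the diagonal preconditioner is easiest to see in the L-BFGS two-loop recursion, where it enters as the initial inverse-Hessian guess H0. The sketch below is a generic illustration (the function name and the toy quadratic are ours, not from this work), with the Oren-Spedicato scalar shown as one possible choice of H0:

    ```python
    import numpy as np

    def lbfgs_direction(grad, s_list, y_list, diag_h0):
        """L-BFGS two-loop recursion. The initial inverse-Hessian guess H0 is
        a diagonal preconditioner; the usual Oren-Spedicato choice is a scalar
        gamma * I, while richer diagonal updates retain more curvature."""
        q = grad.copy()
        alphas = []
        rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
        for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
            a = rho * (s @ q)
            alphas.append(a)
            q -= a * y
        r = diag_h0 * q                     # apply diagonal H0
        for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
            b = rho * (y @ r)
            r += s * (a - b)
        return -r                           # descent direction
    ```

    On a quadratic with a full set of exact curvature pairs this returns the Newton direction regardless of H0; for weakly non-linear problems with few stored pairs, a better diagonal H0 (e.g. Gilbert-Lemaréchal-style updates or the randomization-based hybrids above) is what carries the extra curvature information.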

  7. Unlocking the spatial inversion of large scanning magnetic microscopy datasets

    NASA Astrophysics Data System (ADS)

    Myre, J. M.; Lascu, I.; Andrade Lima, E.; Feinberg, J. M.; Saar, M. O.; Weiss, B. P.

    2013-12-01

Modern scanning magnetic microscopy provides the ability to perform high-resolution, ultra-high-sensitivity moment magnetometry, with spatial resolutions better than 10^-4 m and magnetic moments as weak as 10^-16 Am^2. These microscopy capabilities have enhanced numerous magnetic studies, including investigations of the paleointensity of the Earth's magnetic field, shock magnetization and demagnetization of impacts, magnetostratigraphy, the magnetic record in speleothems, and the records of ancient core dynamos of planetary bodies. A common component among many studies utilizing scanning magnetic microscopy is solving an inverse problem to determine the non-negative magnitude of the magnetic moments that produce the measured component of the magnetic field. The two most frequently used methods to solve this inverse problem are classic fast Fourier techniques in the frequency domain and non-negative least squares (NNLS) methods in the spatial domain. Although Fourier techniques are extremely fast, they typically violate non-negativity, and it is difficult to implement constraints associated with the space domain. NNLS methods do not violate non-negativity, but have typically been computationally prohibitive for samples of practical size or resolution. Existing NNLS methods use multiple techniques to attain tractable computation. In the past, reducing computation time typically required reducing the sample size or scan resolution. Similarly, multiple inversions of smaller sample subdivisions can be performed, although this frequently results in undesirable artifacts at subdivision boundaries. Dipole interactions can also be filtered so that only interactions above a threshold are computed, which enables the use of sparse methods through artificial sparsity. 
To improve upon existing spatial domain techniques, we present the application of the TNT algorithm, named TNT as it is a "dynamite" non-negative least squares algorithm which enhances the performance and accuracy of spatial domain inversions. We show that the TNT algorithm reduces the execution time of spatial domain inversions from months to hours and that inverse solution accuracy is improved as the TNT algorithm naturally produces solutions with small norms. Using sIRM and NRM measures of multiple synthetic and natural samples we show that the capabilities of the TNT algorithm allow very large samples to be inverted without the need for alternative techniques to make the problems tractable. Ultimately, the TNT algorithm enables accurate spatial domain analysis of scanning magnetic microscopy data on an accelerated time scale that renders spatial domain analyses tractable for numerous studies, including searches for the best fit of unidirectional magnetization direction and high-resolution step-wise magnetization and demagnetization.
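    The NNLS problem at the core of these inversions can be illustrated with a simple projected-gradient solver. This is a sketch of the problem, not of TNT itself (which is a much faster active-set scheme), and all names are ours:

    ```python
    import numpy as np

    def nnls_pg(A, b, n_iters=5000):
        """Projected-gradient non-negative least squares: minimize
        ||A m - b||^2 subject to m >= 0. A plays the role of the dipole
        field kernel, m the non-negative moment magnitudes."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = lipschitz const of grad
        m = np.zeros(A.shape[1])
        for _ in range(n_iters):
            m = np.maximum(0.0, m - step * (A.T @ (A @ m - b)))
        return m

    # Unconstrained least squares would give m = [2, -1]; NNLS clamps the
    # negative moment and refits, yielding m = [1.5, 0].
    A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    b = np.array([2.0, -1.0, 1.0])
    m = nnls_pg(A, b)
    ```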

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naseri, M; Rajabi, H; Wang, J

Purpose: Respiration causes lesion smearing, image blurring and quality degradation, affecting lesion contrast and the ability to define correct lesion size. The spatial resolution of current multi-pinhole SPECT (MPHS) scanners is sub-millimeter; therefore, the effect of motion is more noticeable than in conventional SPECT scanners. Gated imaging aims to reduce motion artifacts. A major issue in gating is the lack of statistics, and individual reconstructed frames are noisy. The increased noise in each frame deteriorates the quantitative accuracy of the MPHS images. The objective of this work is to enhance the image quality in 4D-MPHS imaging by 4D image reconstruction. Methods: The new algorithm requires deformation vector fields (DVFs) that are calculated by non-rigid Demons registration. The algorithm is based on the motion-incorporated version of the ordered subset expectation maximization (OSEM) algorithm. This iterative algorithm is capable of making full use of all projections to reconstruct each individual frame. To evaluate the performance of the proposed algorithm, a simulation study was conducted. A fast ray tracing method was used to generate MPHS projections of a 4D digital mouse phantom with a small tumor in the liver in eight different respiratory phases. To evaluate the potential of the 4D-OSEM algorithm, the tumor-to-liver activity ratio was compared with other image reconstruction methods, including 3D-MPHS and post-reconstruction registration with Demons-derived DVFs. Results: Image quality of 4D-MPHS is greatly improved by the 4D-OSEM algorithm. When all projections are used to reconstruct a 3D-MPHS, motion blurring artifacts are present, leading to overestimation of the tumor size and 24% tumor contrast underestimation. This error was reduced to 16% and 10% for the post-reconstruction registration method and 4D-OSEM, respectively. Conclusion: The 4D-OSEM method can be used for motion correction in 4D-MPHS. 
The statistics and quantification are improved since all projection data are combined together to update the image.

  9. Polar research from satellites

    NASA Technical Reports Server (NTRS)

    Thomas, Robert H.

    1991-01-01

In the polar regions and climate change section, the topics of ocean/atmosphere heat transfer, trace gases, surface albedo, and response to climate warming are discussed. The satellite instruments section is divided into three parts. Part one is about basic principles and covers choice of frequencies, algorithms, orbits, and remote sensing techniques. Part two is about passive sensors and covers microwave radiometers, medium-resolution visible and infrared sensors, advanced very high resolution radiometers, optical line scanners, the earth radiation budget experiment, the coastal zone color scanner, high-resolution imagers, and atmospheric sounding. Part three is about active sensors and covers synthetic aperture radar, radar altimeters, scatterometers, and lidar. There is also a next-decade section, followed by a summary and recommendations section.

  10. An enhanced inertial navigation system based on a low-cost IMU and laser scanner

    NASA Astrophysics Data System (ADS)

    Kim, Hyung-Soon; Baeg, Seung-Ho; Yang, Kwang-Woong; Cho, Kuk; Park, Sangdeok

    2012-06-01

This paper describes an enhanced fusion method for an Inertial Navigation System (INS) based on a 3-axis accelerometer, a 3-axis gyroscope and a laser scanner. In GPS-denied environments, such as indoors or in dense forests, pure INS odometry is available for estimating the trajectory of a human or robot. However, it has a critical implementation problem: drift errors in velocity, position and heading angles. Commonly the problem is solved by fusing visual landmarks, a magnetometer or radio beacons. These methods are not robust in diverse environments: darkness, fog or sunlight, an unstable magnetic field, and environmental obstacles. We propose to overcome the drift problem using an Iterative Closest Point (ICP) scan-matching algorithm with a laser scanner. This system consists of three parts. The first is the INS, which estimates attitude, velocity and position based on a 6-axis Inertial Measurement Unit (IMU) with both 'Heuristic Reduction of Gyro Drift' (HRGD) and 'Heuristic Reduction of Velocity Drift' (HRVD) methods. The second is a frame-to-frame ICP matching algorithm for estimating position and attitude from laser scan data. The third is an extended Kalman filter for fusing the multi-sensor data: INS and Laser Range Finder (LRF). The proposed method is simple and robust in diverse environments, so the drift error is reduced efficiently. We confirm the result by comparing the odometry of the experimental result with the ICP- and LRF-aided INS in a long corridor.
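    One point-to-point ICP alignment step can be sketched with the Kabsch/Umeyama closed form, assuming correspondences are already known; in the scan-matching loop above they would come from nearest-neighbour search between consecutive scans (function name and the toy data are ours):

    ```python
    import numpy as np

    def icp_step(P, Q):
        """One point-to-point ICP alignment: find R, t minimizing
        ||R p_i + t - q_i|| over corresponding point sets P, Q (n x 2)."""
        mp, mq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - mp).T @ (Q - mq)               # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mq - R @ mp
        return R, t
    ```

    In a full ICP loop, this solve alternates with re-matching correspondences until the transform converges.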

  11. Spatial Distortion in MRI-Guided Stereotactic Procedures: Evaluation in 1.5-, 3- and 7-Tesla MRI Scanners.

    PubMed

    Neumann, Jan-Oliver; Giese, Henrik; Biller, Armin; Nagel, Armin M; Kiening, Karl

    2015-01-01

Magnetic resonance imaging (MRI) is replacing computed tomography (CT) as the main imaging modality for stereotactic transformations. MRI is prone to spatial distortion artifacts, which can lead to inaccuracy in stereotactic procedures. Modern MRI systems provide distortion correction algorithms that may ameliorate this problem. This study investigates the different options for distortion correction using standard 1.5-, 3- and 7-tesla MRI scanners. A phantom was mounted on a stereotactic frame. One CT scan and three MRI scans were performed. At all three field strengths, two 3-dimensional sequences, volumetric interpolated breath-hold examination (VIBE) and magnetization-prepared rapid acquisition with gradient echo, were acquired, and automatic distortion correction was performed. Global stereotactic transformation of all 13 datasets was performed, and two stereotactic planning workflows (MRI only vs. CT/MR image fusion) were subsequently analysed. Distortion correction on the 1.5- and 3-tesla scanners caused a considerable reduction in positional error. The effect was more pronounced when using the VIBE sequences. By using co-registration (CT/MR image fusion), an even lower positional error could be obtained. In ultra-high-field (7 T) MR imaging, distortion correction introduced even higher errors. However, the accuracy of non-corrected 7-tesla sequences was comparable to CT/MR image fusion 3-tesla imaging. MRI distortion correction algorithms can reduce positional errors by up to 60%. For stereotactic applications of utmost precision, we recommend co-registration to an additional CT dataset. © 2015 S. Karger AG, Basel.

  12. Enabling vendor independent photoacoustic imaging systems with asynchronous laser source

    NASA Astrophysics Data System (ADS)

    Wu, Yixuan; Zhang, Haichong K.; Boctor, Emad M.

    2018-02-01

Channel data acquisition, and synchronization between laser excitation and PA signal acquisition, are two fundamental hardware requirements for photoacoustic (PA) imaging. Unfortunately, neither is provided by most clinical ultrasound scanners. Therefore, specialized and less economical research platforms are generally used, which hinders a smooth clinical transition of PA imaging. In previous studies, we proposed an algorithm to achieve PA imaging using ultrasound post-beamformed (USPB) RF data instead of channel data. This work focuses on enabling clinical ultrasound scanners to implement PA imaging without requiring synchronization between the laser excitation and PA signal acquisition. Laser synchronization inherently consists of two aspects: frequency and phase information. We synchronize the laser and the ultrasound scanner without communication between them by investigating USPB images of a point-target phantom in two steps. First, the frequency is estimated by solving a nonlinear optimization problem, under the assumption that the segmented wavefront can only be beamformed into a single spot when synchronization is achieved. Second, after making the frequencies of the two systems identical, the phase delay is estimated by optimizing image quality while varying the phase value. The proposed method is validated through simulation by manually adding both frequency and phase errors, then applying the proposed algorithm to correct the errors and reconstruct PA images. Compared with the ground truth, simulation results indicate that the remaining errors in frequency correction and phase correction are 0.28% and 2.34%, respectively, which affirms the potential of overcoming hardware barriers to PA imaging through a software solution.

  13. Label-free observation of tissues by high-speed stimulated Raman spectral microscopy and independent component analysis

    NASA Astrophysics Data System (ADS)

    Ozeki, Yasuyuki; Otsuka, Yoichi; Sato, Shuya; Hashimoto, Hiroyuki; Umemura, Wataru; Sumimura, Kazuhiko; Nishizawa, Norihiko; Fukui, Kiichi; Itoh, Kazuyoshi

    2013-02-01

We have developed a video-rate stimulated Raman scattering (SRS) microscope with frame-by-frame wavenumber tunability. The system uses a 76-MHz picosecond Ti:sapphire laser and a subharmonically synchronized, 38-MHz Yb fiber laser. The Yb fiber laser pulses are spectrally sliced by a fast wavelength-tunable filter, which consists of a galvanometer scanner, a 4-f optical system and a reflective grating. The spectral resolution of the filter is ~3 cm-1. The wavenumber was scanned from 2800 to 3100 cm-1 with an arbitrary waveform synchronized to the frame trigger. For imaging, we introduced an 8-kHz resonant scanner and a galvanometer scanner. We were able to acquire SRS images of 500 x 480 pixels at a frame rate of 30.8 frames/s. These images were then processed by principal component analysis followed by a modified independent component analysis algorithm, which allows blind separation of constituents with overlapping Raman bands from SRS spectral images. The independent component (IC) spectra give spectroscopic information, and IC images can be used to produce pseudo-color images. We demonstrate various label-free imaging modalities such as 2D spectral imaging of the rat liver, two-color 3D imaging of a vessel in the rat liver, and spectral imaging of several sections of intestinal villi in the mouse. Various structures in the tissues such as lipid droplets, cytoplasm, fibrous texture, nuclei, and water-rich regions were successfully visualized.
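    The unmixing step can be illustrated with a minimal two-component FastICA: whitening followed by a deflation fixed-point iteration with a tanh nonlinearity. This is a generic textbook sketch, not the modified ICA algorithm of the paper, and all names are ours:

    ```python
    import numpy as np

    def fastica_2(X, n_iters=200):
        """Minimal 2-component FastICA on data X (channels x samples):
        whiten, then extract one unit vector at a time by the tanh
        fixed-point update, deflating against earlier components."""
        X = X - X.mean(axis=1, keepdims=True)
        cov = X @ X.T / X.shape[1]
        d, E = np.linalg.eigh(cov)
        Z = E @ np.diag(d ** -0.5) @ E.T @ X          # whitened data
        W = np.zeros((2, 2))
        rng = np.random.default_rng(0)
        for i in range(2):
            w = rng.normal(size=2)
            for _ in range(n_iters):
                wx = w @ Z
                g, gp = np.tanh(wx), 1 - np.tanh(wx) ** 2
                w_new = (Z * g).mean(axis=1) - gp.mean() * w
                w_new -= W[:i].T @ (W[:i] @ w_new)    # deflate (Gram-Schmidt)
                w = w_new / np.linalg.norm(w_new)
            W[i] = w
        return W @ Z                                   # estimated sources
    ```

    Recovered components come back up to permutation and sign, which is why IC spectra are inspected before assigning pseudo-colors.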

  14. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
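    The iteratively reweighted least-squares treatment of the TV term can be sketched in one dimension, as a toy denoising analogue of the 3-D gravity problem (direct normal-equation solves stand in for the randomized GSVD; all names are ours):

    ```python
    import numpy as np

    def tv_irls(G, d, alpha=0.5, n_iters=50, eps=1e-6):
        """IRLS (lagged diffusivity) for min ||G m - d||^2 + alpha * TV(m).
        Each pass solves the normal equations with the TV term replaced by a
        reweighted quadratic, w = 1/|D m| from the previous iterate."""
        n = G.shape[1]
        D = np.diff(np.eye(n), axis=0)              # first-difference operator
        m = np.zeros(n)
        for _ in range(n_iters):
            w = 1.0 / np.sqrt((D @ m) ** 2 + eps)   # smoothed TV weights
            A = G.T @ G + 0.5 * alpha * D.T @ (w[:, None] * D)
            m = np.linalg.solve(A, G.T @ d)
        return m

    # Noisy piecewise-constant profile: TV flattens the segments, keeps the jump.
    d = np.array([0.0, 0.1, -0.1, 2.1, 1.9, 2.0])
    m = tv_irls(np.eye(6), d)
    ```

    This sharp-jump-preserving behaviour is exactly what the minimum-structure (quadratic) penalty lacks.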

  15. Tomography and the Herglotz-Wiechert inverse formulation

    NASA Astrophysics Data System (ADS)

    Nowack, Robert L.

    1990-04-01

In this paper, linearized tomography and the Herglotz-Wiechert inverse formulation are compared. Tomographic inversions for 2-D or 3-D velocity structure use line integrals along rays and can be written in terms of Radon transforms. For radially concentric structures, Radon transforms are shown to reduce to Abel transforms. Therefore, for straight ray paths, the Abel transform of travel-time is a tomographic algorithm specialized to a one-dimensional radially concentric medium. The Herglotz-Wiechert formulation uses seismic travel-time data to invert for one-dimensional earth structure and is derived using exact ray trajectories by applying an Abel transform. This is of historical interest since it implies that a specialized tomography-like algorithm has been used in seismology since the early part of the century (see Herglotz, 1907; Wiechert, 1910). Numerical examples are performed comparing the Herglotz-Wiechert algorithm and linearized tomography along straight rays. Since the Herglotz-Wiechert algorithm is applicable, under specific conditions (the absence of low-velocity zones), to non-straight ray paths, the association with tomography may prove useful in assessing the uniqueness of tomographic results generalized to curved-ray geometries.

  16. Non-laser-based scanner for three-dimensional digitization of historical artifacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hahn, Daniel V.; Baldwin, Kevin C.; Duncan, Donald D

    2007-05-20

A 3D scanner, based on incoherent illumination techniques, and associated data-processing algorithms are presented that can be used to scan objects at lateral resolutions ranging from 5 to 100 µm (or more) and depth resolutions of approximately 2 µm. The scanner was designed with the specific intent to scan cuneiform tablets but can be utilized for other applications. Photometric stereo techniques are used to obtain both a surface normal map and a parameterized model of the object's bidirectional reflectance distribution function. The normal map is combined with height information, gathered by structured light techniques, to form a consistent 3D surface. Data from Lambertian and specularly diffuse spherical objects are presented and used to quantify the accuracy of the techniques. Scans of a cuneiform tablet are also presented. All presented data are at a lateral resolution of 26.8 µm as this is approximately the minimum resolution deemed necessary to accurately represent cuneiform.
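    The photometric-stereo ingredient can be sketched for the purely Lambertian case: with at least three known light directions, per-pixel albedo and normal follow from a linear least-squares solve. This is a textbook sketch under a front-lit assumption; shadowing, the BRDF parameterization, and the structured-light fusion of the paper are omitted, and all names are ours:

    ```python
    import numpy as np

    def photometric_stereo(L, I):
        """Lambertian photometric stereo: L is (k x 3) light directions,
        I is (k x npix) stacked intensities. Solves I = L @ (albedo * n)
        per pixel, then splits the result into albedo and unit normal."""
        G, *_ = np.linalg.lstsq(L, I, rcond=None)   # G = albedo * normal
        albedo = np.linalg.norm(G, axis=0)
        N = G / albedo
        return albedo, N
    ```

    With more than three lights the solve becomes an overdetermined least-squares fit, which is what makes the recovered normal map robust to noise.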

  17. Split gradient coils for simultaneous PET-MRI

    PubMed Central

    Poole, Michael; Bowtell, Richard; Green, Dan; Pittard, Simon; Lucas, Alun; Hawkes, Rob; Carpenter, Adrian

    2015-01-01

Combining positron emission tomography (PET) and MRI necessarily involves an engineering tradeoff, as the equipment needed for the two modalities vies for the space closest to the region where the signals originate. In one recently described scanner configuration for simultaneous PET-MRI, the PET detection scintillating crystals reside in an 80-mm gap between the two halves of a 1-T split-magnet cryostat. A novel set of gradient and shim coils has been specially designed for this split MRI scanner to include a 110-mm gap from which wires are excluded so as not to interfere with positron detection. An inverse boundary element method was employed to design the three orthogonal, shielded gradient coils and a shielded Z0 shim coil. The coils have been constructed and tested in the hybrid PET-MRI system and successfully used in simultaneous PET-MRI experiments. PMID:19780167

  18. Quantitative analysis of SMEX'02 AIRSAR data for soil moisture inversion

    NASA Technical Reports Server (NTRS)

    Zyl, J. J. van; Njoku, E.; Jackson, T.

    2003-01-01

    This paper discusses in detail the characteristics of the AIRSAR data acquired, and provides an initial quantitative assessment of the accuracy of the radar inversion algorithms under these vegetated conditions.

  19. Concurrency control for transactions with priorities

    NASA Technical Reports Server (NTRS)

    Marzullo, Keith

    1989-01-01

    Priority inversion occurs when a process is delayed by the actions of another process with less priority. With atomic transactions, the concurrency control mechanism can cause delays, and without taking priorities into account can be a source of priority inversion. Three traditional concurrency control algorithms are extended so that they are free from unbounded priority inversion.
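    The standard remedy such extended algorithms must emulate is priority inheritance: the holder of a lock temporarily inherits the priority of its highest-priority waiter, which bounds the inversion. The toy model below only shows the bookkeeping (no real scheduling or atomicity; all class and method names are ours):

    ```python
    class PIThread:
        """Toy thread record carrying a base and an effective priority."""
        def __init__(self, name, priority):
            self.name, self.base, self.effective = name, priority, priority

    class PILock:
        """Priority-inheritance lock model: blocking waiters boost the
        holder's effective priority; release restores it and hands the
        lock to the highest-priority waiter."""
        def __init__(self):
            self.holder = None
            self.waiters = []

        def acquire(self, t):
            if self.holder is None:
                self.holder = t
                return True                 # got the lock
            self.waiters.append(t)
            self.holder.effective = max(self.holder.effective, t.effective)
            return False                    # blocked

        def release(self):
            self.holder.effective = self.holder.base
            nxt = max(self.waiters, key=lambda w: w.effective) if self.waiters else None
            if nxt is not None:
                self.waiters.remove(nxt)
            self.holder = nxt
            return nxt
    ```

    With inheritance, a medium-priority process can no longer preempt the boosted low-priority holder, so the high-priority waiter's delay is bounded by one critical section.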

  20. Inverse scattering approach to improving pattern recognition

    NASA Astrophysics Data System (ADS)

    Chapline, George; Fu, Chi-Yung

    2005-05-01

    The Helmholtz machine provides what may be the best existing model for how the mammalian brain recognizes patterns. Based on the observation that the "wake-sleep" algorithm for training a Helmholtz machine is similar to the problem of finding the potential for a multi-channel Schrodinger equation, we propose that the construction of a Schrodinger potential using inverse scattering methods can serve as a model for how the mammalian brain learns to extract essential information from sensory data. In particular, inverse scattering theory provides a conceptual framework for imagining how one might use EEG and MEG observations of brain-waves together with sensory feedback to improve human learning and pattern recognition. Longer term, implementation of inverse scattering algorithms on a digital or optical computer could be a step towards mimicking the seamless information fusion of the mammalian brain.

  1. A Fine-Grained Pipelined Implementation for Large-Scale Matrix Inversion on FPGA

    NASA Astrophysics Data System (ADS)

    Zhou, Jie; Dou, Yong; Zhao, Jianxun; Xia, Fei; Lei, Yuanwu; Tang, Yuxing

Large-scale matrix inversion plays an important role in many applications; however, to the best of our knowledge, there is no FPGA-based implementation. In this paper, we explore the possibility of accelerating large-scale matrix inversion on FPGA. To exploit the computational potential of the FPGA, we introduce a fine-grained parallel algorithm for matrix inversion. A scalable linear array of processing elements (PEs), which is the core component of the FPGA accelerator, is proposed to implement this algorithm. A total of 12 PEs can be integrated into an Altera StratixII EP2S130F1020C5 FPGA on our self-designed board. Experimental results show that a speedup factor of 2.6 and a maximum power-performance ratio of 41 can be achieved compared to a Pentium dual-core CPU with two SSE threads.
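    As a software reference for the kind of row-oriented elimination a linear PE array parallelizes (each PE owning a block of rows), a Gauss-Jordan inversion with partial pivoting looks like this; it is a sketch of the numerical pattern only, and the paper's fine-grained FPGA data-flow is not modeled:

    ```python
    import numpy as np

    def gauss_jordan_inverse(A):
        """Gauss-Jordan inversion with partial pivoting on the augmented
        matrix [A | I]; when A is reduced to I, the right half is A^-1.
        The per-column pivot/normalize/eliminate steps are the units a
        PE array would pipeline."""
        n = A.shape[0]
        M = np.hstack([A.astype(float), np.eye(n)])
        for col in range(n):
            p = col + np.argmax(np.abs(M[col:, col]))
            M[[col, p]] = M[[p, col]]          # partial pivot
            M[col] /= M[col, col]              # normalize pivot row
            for r in range(n):
                if r != col:
                    M[r] -= M[r, col] * M[col]  # eliminate column entry
        return M[:, n:]
    ```

    The inner elimination loop is embarrassingly parallel across rows, which is what makes the linear-array mapping natural.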

  2. Inverse Scattering Approach to Improving Pattern Recognition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapline, G; Fu, C

    2005-02-15

The Helmholtz machine provides what may be the best existing model for how the mammalian brain recognizes patterns. Based on the observation that the "wake-sleep" algorithm for training a Helmholtz machine is similar to the problem of finding the potential for a multi-channel Schrodinger equation, we propose that the construction of a Schrodinger potential using inverse scattering methods can serve as a model for how the mammalian brain learns to extract essential information from sensory data. In particular, inverse scattering theory provides a conceptual framework for imagining how one might use EEG and MEG observations of brain-waves together with sensory feedback to improve human learning and pattern recognition. Longer term, implementation of inverse scattering algorithms on a digital or optical computer could be a step towards mimicking the seamless information fusion of the mammalian brain.

  3. Regularized inversion of controlled source audio-frequency magnetotelluric data in horizontally layered transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Zhou, Jianmei; Wang, Jianxun; Shang, Qinglong; Wang, Hongnian; Yin, Changchun

    2014-04-01

We present an algorithm for inverting controlled source audio-frequency magnetotelluric (CSAMT) data in horizontally layered transversely isotropic (TI) media. Popular inversion methods (e.g. Occam's inversion) parameterize the media into a large number of layers of fixed thickness and reconstruct only the conductivities, which prevents recovery of sharp interfaces between layers. In this paper, we simultaneously reconstruct all the model parameters, including both the horizontal and vertical conductivities and the layer depths. Applying the perturbation principle and the dyadic Green's function in TI media, we derive the analytic expression of the Fréchet derivatives of the CSAMT responses with respect to all the model parameters in the form of Sommerfeld integrals. A regularized iterative inversion method is established to simultaneously reconstruct all the model parameters. Numerical results show that including the depths of the layer interfaces in the inversion significantly improves the results: it not only reconstructs the sharp interfaces between layers but also recovers conductivities close to their true values.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitanidis, Peter

As large-scale, commercial storage projects become operational, the problem of utilizing information from diverse sources becomes more critically important. In this project, we developed, tested, and applied an advanced joint data inversion system for CO2 storage modeling with large data sets for use in site characterization and real-time monitoring. Emphasis was on the development of advanced and efficient computational algorithms for joint inversion of hydro-geophysical data, coupled with state-of-the-art forward process simulations. The developed system consists of (1) inversion tools using characterization data, such as 3D seismic survey (amplitude images), borehole log and core data, as well as hydraulic, tracer and thermal tests before CO2 injection, (2) joint inversion tools for updating the geologic model with the distribution of rock properties, thus reducing uncertainty, using hydro-geophysical monitoring data, and (3) highly efficient algorithms for directly solving the dense or sparse linear algebra systems derived from the joint inversion. The system combines methods from stochastic analysis, fast linear algebra, and high performance computing. The developed joint inversion tools have been tested through synthetic CO2 storage examples.

  5. Modeling and inversion Matlab algorithms for resistivity, induced polarization and seismic data

    NASA Astrophysics Data System (ADS)

    Karaoulis, M.; Revil, A.; Minsley, B. J.; Werkema, D. D.

    2011-12-01

We propose a 2D and 3D forward modeling and inversion package for DC resistivity, time-domain induced polarization (IP), frequency-domain IP, and seismic refraction data. For the resistivity and IP case, discretization is based on rectangular cells, where each cell has an unknown resistivity in DC modelling, a resistivity and chargeability in time-domain IP modelling, and a complex resistivity in spectral IP modelling. The governing partial-differential equations are solved with the finite element method, which can be applied to both the real and complex variables that are solved for. For the seismic case, forward modeling is based on solving the eikonal equation using a second-order fast marching method. The wavepaths are materialized by Fresnel volumes rather than by conventional rays. This approach accounts for complicated velocity models and is advantageous because it considers frequency effects on the velocity resolution. The inversion can accommodate data at a single time step, or as a time-lapse dataset if the geophysical data are gathered for monitoring purposes. The aim of time-lapse inversion is to find the change in the velocities or resistivities of each model cell as a function of time. Different time-lapse algorithms can be applied, such as independent inversion, difference inversion, 4D inversion, and 4D active time constraint inversion. The forward algorithms are benchmarked against analytical solutions and inversion results are compared with existing ones. The algorithms are packaged as Matlab codes with a simple Graphical User Interface. 
Although the code is parallelized for multi-core CPUs, it is not as fast as machine code; for large datasets, one should consider transferring parts of the code to C or Fortran through MEX files. The code is available through EPA's website at the following link: http://www.epa.gov/esd/cmb/GeophysicsWebsite/index.html. Although this work was reviewed by EPA and approved for publication, it may not necessarily reflect official Agency policy.

  6. Airway and tissue loading in postinterrupter response of the respiratory system - an identification algorithm construction.

    PubMed

    Jablonski, Ireneusz; Mroczka, Janusz

    2010-01-01

    The paper offers an enhancement of the classical interrupter technique algorithm dedicated to respiratory mechanics measurements. The idea consists in exploiting the information contained in post-occlusional transient states during indirect measurement of parameter characteristics by model identification. This requires an inverse analogue adequate to the general behavior of the real system and a reliable parameter estimation algorithm. The latter was the subject of the work reported here, which ultimately showed the potential of the approach for separating airway and tissue responses in the case of short-term excitation by interrupter valve operation. Investigations were conducted in the regime of a forward-inverse computer experiment.

  7. The attitude inversion method of geostationary satellites based on unscented particle filter

    NASA Astrophysics Data System (ADS)

    Du, Xiaoping; Wang, Yang; Hu, Heng; Gou, Ruixin; Liu, Hao

    2018-04-01

    The attitude information of geostationary satellites is difficult to obtain, since in space object surveillance they appear only as unresolved images on ground-based observation equipment. In this paper, an attitude inversion method for geostationary satellites based on the Unscented Particle Filter (UPF) and ground photometric data is presented. The UPF-based inversion algorithm is proposed to handle the strongly nonlinear character of photometric-data inversion for satellite attitude, and combines the advantages of the Unscented Kalman Filter (UKF) and the Particle Filter (PF). The update method improves particle selection by using the UKF to redesign the importance density function. Moreover, it uses the RMS-UKF to partially correct the prediction covariance matrix, which improves on the limited applicability of UKF-based attitude inversion and on the particle degradation and dilution of PF-based attitude inversion. The paper describes the main principles and steps of the algorithm in detail; the correctness, accuracy, stability and applicability of the method are verified by simulation and scaling experiments. The results show that the proposed method effectively solves the particle degradation and depletion problem of PF-based attitude inversion and the unsuitability of the UKF for strongly nonlinear attitude inversion. The inversion accuracy is clearly superior to that of the UKF and PF; in addition, in cases with large attitude errors it can invert the attitude with few particles and high precision.
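    The degeneracy and dilution problems mentioned above are easiest to see in the plain bootstrap particle filter, which the UPF improves by drawing particles from a UKF-designed importance density. The following is a generic scalar sketch; the toy model and parameter values are assumptions, not the authors' setup:

```python
import numpy as np

def bootstrap_pf(y, f, h, q_std, r_var, particles, rng):
    """Bootstrap particle filter (sketch): propagate with the process model,
    weight by the measurement likelihood, then resample. The UPF instead
    draws particles from a UKF-designed importance density."""
    x = particles.copy()
    means = []
    for yk in y:
        x = f(x) + rng.normal(0.0, q_std, size=x.shape)   # prior proposal
        w = np.exp(-0.5 * (yk - h(x)) ** 2 / r_var)       # Gaussian likelihood
        w /= w.sum()
        # systematic resampling combats particle degeneracy
        n = len(x)
        pos = (rng.random() + np.arange(n)) / n
        x = x[np.minimum(np.searchsorted(np.cumsum(w), pos), n - 1)]
        means.append(x.mean())
    return np.array(means)

# toy scalar system: constant state 1.0 observed directly with noise
rng = np.random.default_rng(0)
est = bootstrap_pf(np.ones(30), lambda x: x, lambda x: x,
                   q_std=0.1, r_var=0.01,
                   particles=rng.normal(0.0, 1.0, 500), rng=rng)
```

    With the prior as proposal, particles drawn far from the measurement get negligible weight, which is exactly the degeneracy a better importance density mitigates.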

  8. Rayleigh wave dispersion curve inversion by using particle swarm optimization and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Buyuk, Ersin; Zor, Ekrem; Karaman, Abdullah

    2017-04-01

    Inversion of surface wave dispersion curves, with its highly nonlinear nature, presents difficulties for traditional linearized inverse methods: a strong dependence on the initial model, the possibility of becoming trapped in local minima, and the need to evaluate partial derivatives. Modern global optimization methods such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) can overcome these difficulties in surface wave analysis. GA is based on biological evolution, consisting of reproduction, crossover and mutation operations, while the PSO algorithm, developed after GA, is inspired by the social behaviour of bird flocks and fish schools. The utility of these methods requires a plausible convergence rate, acceptable relative error and optimal computation cost, all of which are important for modelling studies. Even though the PSO and GA processes appear similar, PSO has no crossover operation, and whereas mutation in GA is a stochastic process that changes genes within chromosomes, the particles in PSO change their positions with velocities updated according to each particle's own experience and the swarm's experience. In this study, we applied the PSO algorithm to estimate the S-wave velocities and thicknesses of a layered earth model from a Rayleigh wave dispersion curve, compared the results with GA, and emphasize the advantage of using the PSO algorithm for geophysical modelling studies given its rapid convergence, low misfit error and low computation cost.
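    The PSO position/velocity update described above can be sketched generically. The two-parameter quadratic misfit below stands in for the Rayleigh-wave forward problem and is purely illustrative:

```python
import numpy as np

def pso_minimize(misfit, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser (sketch): each particle's velocity is
    pulled toward its personal best and the swarm's global best position."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([misfit(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                 # keep particles in bounds
        val = np.array([misfit(p) for p in x])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# toy two-parameter "dispersion misfit" with a known minimum
target = np.array([3.0, 0.5])
g, fmin = pso_minimize(lambda m: np.sum((m - target) ** 2),
                       bounds=[(0.0, 10.0), (0.0, 2.0)])
```

    No derivatives of the misfit are needed, which is the practical appeal over linearized inversion.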

  9. Performance comparisons on spatial lattice algorithm and direct matrix inverse method with application to adaptive arrays processing

    NASA Technical Reports Server (NTRS)

    An, S. H.; Yao, K.

    1986-01-01

    The lattice algorithm has been employed in numerous adaptive filtering applications, such as speech analysis/synthesis, noise canceling, spectral analysis, and channel equalization. In this paper its application to adaptive-array processing is discussed. The advantages are a fast convergence rate as well as computational accuracy independent of the noise and interference conditions. The results produced by this technique are compared to those obtained by the direct matrix inverse method.

  10. MUSIC algorithm for imaging of a sound-hard arc in limited-view inverse scattering problem

    NASA Astrophysics Data System (ADS)

    Park, Won-Kwang

    2017-07-01

    The MUltiple SIgnal Classification (MUSIC) algorithm for non-iterative imaging of a sound-hard arc in the limited-view inverse scattering problem is considered. In order to discover the mathematical structure of MUSIC, we derive a relationship between MUSIC and an infinite series of Bessel functions of integer order. This structure enables us to examine some properties of MUSIC in the limited-view problem. Numerical simulations are performed to support the identified structure of MUSIC.
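    MUSIC's projection onto the noise subspace can be illustrated in the classical array-processing setting; the uniform linear array and source angles below are illustrative assumptions, not the paper's sound-hard-arc configuration:

```python
import numpy as np

def music_spectrum(R, steering, n_sources):
    """MUSIC pseudospectrum (sketch): large where the steering vector is
    (nearly) orthogonal to the noise subspace of the covariance matrix R."""
    _, _, Vh = np.linalg.svd(R)
    En = Vh[n_sources:]                    # rows span the noise subspace
    proj = En @ steering                   # noise-subspace components
    return 1.0 / np.sum(np.abs(proj) ** 2, axis=0)

# half-wavelength uniform linear array, two uncorrelated sources
M, snapshots = 8, 200
m = np.arange(M)[:, None]
angles_true = np.deg2rad([-20.0, 35.0])
A = np.exp(1j * np.pi * m * np.sin(angles_true))     # array manifold
rng = np.random.default_rng(1)
S = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
noise = 0.01 * (rng.normal(size=(M, snapshots))
                + 1j * rng.normal(size=(M, snapshots)))
X = A @ S + noise
R = X @ X.conj().T / snapshots
grid = np.deg2rad(np.linspace(-90.0, 90.0, 721))
P = music_spectrum(R, np.exp(1j * np.pi * m * np.sin(grid)), n_sources=2)
```

    The pseudospectrum P peaks sharply at the true source angles of -20 and 35 degrees.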

  11. Direct integration of the inverse Radon equation for X-ray computed tomography.

    PubMed

    Libin, E E; Chakhlov, S V; Trinca, D

    2016-11-22

    A new mathematical approach using the inverse Radon equation for the restoration of images in problems of linear two-dimensional X-ray tomography is formulated. This approach does not use the Fourier transformation, which makes it possible to create practical computing algorithms with a more reliable mathematical foundation. Results of a software implementation show that, especially for low numbers of projections, the described approach performs better than standard X-ray tomographic reconstruction algorithms.

  12. [Orthogonal Vector Projection Algorithm for Spectral Unmixing].

    PubMed

    Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li

    2015-12-01

    Spectral unmixing is an important part of hyperspectral technology and is essential for material quantity analysis in hyperspectral imagery. Most linear unmixing algorithms require computations of matrix multiplication and matrix inversion or matrix determinants. These are difficult to program and especially hard to realize in hardware. At the same time, the computational cost of these algorithms increases significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes, via the Gram-Schmidt process, the final orthogonal vector for each endmember spectrum. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance can be obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared to the Orthogonal Subspace Projection and Least Squares Error algorithms, this method needs no matrix inversion, which is computationally costly and hard to implement in hardware. It completes the orthogonalization process by repeated vector operations, making it easy to apply in both parallel computation and hardware. The soundness of the algorithm is proved by its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms, and its computational complexity, the lowest of the three, is compared with that of the other two algorithms. Finally, experimental results on synthetic and real images are provided, giving further evidence of the effectiveness of the method.
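    The ratio-of-projections step described above can be sketched directly; the synthetic endmember spectra are assumptions, and this unconstrained version follows the abstract's description rather than the authors' implementation:

```python
import numpy as np

def ovp_abundances(pixel, E):
    """Orthogonal Vector Projection (sketch). For each endmember e_k, build
    the vector u_k orthogonal to all other endmembers via Gram-Schmidt; the
    unconstrained abundance is then (pixel . u_k) / (e_k . u_k)."""
    E = np.asarray(E, dtype=float)
    abund = np.zeros(E.shape[0])
    for k in range(E.shape[0]):
        basis = []                        # orthonormal basis of the others
        for o in np.delete(E, k, axis=0):
            w = o - sum((o @ b) * b for b in basis)
            n = np.linalg.norm(w)
            if n > 1e-12:
                basis.append(w / n)
        u = E[k] - sum((E[k] @ b) * b for b in basis)
        abund[k] = (pixel @ u) / (E[k] @ u)
    return abund

# noiseless linear mixture of three synthetic endmember spectra
rng = np.random.default_rng(2)
E = rng.uniform(0.0, 1.0, size=(3, 12))
pixel = 0.2 * E[0] + 0.3 * E[1] + 0.5 * E[2]
abund = ovp_abundances(pixel, E)
```

    Because u_k is orthogonal to every other endmember, a noiseless linear mixture yields the exact abundances using only dot products, with no matrix inversion.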

  13. A Semianalytical Ocean Color Inversion Algorithm with Explicit Water Column Depth and Substrate Reflectance Parameterization

    NASA Technical Reports Server (NTRS)

    Mckinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Shea, Donald M.; Feldman, Gene C.

    2015-01-01

    A semianalytical ocean color inversion algorithm was developed for improving retrievals of inherent optical properties (IOPs) in optically shallow waters. In clear, geometrically shallow waters, light reflected off the seafloor can contribute to the water-leaving radiance signal. This can have a confounding effect on ocean color algorithms developed for optically deep waters, leading to an overestimation of IOPs. The algorithm described here, the Shallow Water Inversion Model (SWIM), uses pre-existing knowledge of bathymetry and benthic substrate brightness to account for optically shallow effects. SWIM was incorporated into the NASA Ocean Biology Processing Group's L2GEN code and tested in waters of the Great Barrier Reef, Australia, using the Moderate Resolution Imaging Spectroradiometer (MODIS) Aqua time series (2002-2013). SWIM-derived values of the total non-water absorption coefficient at 443 nm, at(443), the particulate backscattering coefficient at 443 nm, bbp(443), and the diffuse attenuation coefficient at 488 nm, Kd(488), were compared with values derived using the Generalized Inherent Optical Properties algorithm (GIOP) and the Quasi-Analytical Algorithm (QAA). The results indicated that in clear, optically shallow waters SWIM-derived values of at(443), bbp(443), and Kd(488) were realistically lower than values derived using GIOP and QAA, in agreement with radiative transfer modeling. This signified that the benthic reflectance correction was performing as expected. However, in more optically complex waters, SWIM had difficulty converging to a solution, a likely consequence of internal IOP parameterizations. Whilst a comprehensive study of the SWIM algorithm's behavior was conducted, further work is needed to validate the algorithm using in situ data.

  14. Criteria for the use of regression analysis for remote sensing of sediment and pollutants

    NASA Technical Reports Server (NTRS)

    Whitlock, C. H.; Kuo, C. Y.; Lecroy, S. R. (Principal Investigator)

    1982-01-01

    Data analysis procedures for the quantification of water quality parameters that are already identified and known to exist within the water body are considered. The linear multiple-regression technique was examined as a procedure for defining and calibrating data analysis algorithms for such instruments as spectrometers and multispectral scanners.

  15. Adaptive Filtering in the Wavelet Transform Domain Via Genetic Algorithms

    DTIC Science & Technology

    2004-08-01

    inverse transform process. 2. BACKGROUND The image processing research conducted at the AFRL/IFTA Reconfigurable Computing Laboratory has been...coefficients from the wavelet domain back into the original signal domain. In other words, the inverse transform produces the original signal x(t) from the...coefficients for an inverse wavelet transform, such that the MSE of images reconstructed by this inverse transform is significantly less than the mean squared

  16. Novel artefact removal algorithms for co-registered EEG/fMRI based on selective averaging and subtraction.

    PubMed

    de Munck, Jan C; van Houdt, Petra J; Gonçalves, Sónia I; van Wegen, Erwin; Ossenblok, Pauly P W

    2013-01-01

    Co-registered EEG and functional MRI (EEG/fMRI) is a potential clinical tool for planning invasive EEG in patients with epilepsy. In addition, the analysis of EEG/fMRI data provides a fundamental insight into the precise physiological meaning of both fMRI and EEG data. Routine application of EEG/fMRI for localization of epileptic sources is hampered by large artefacts in the EEG, caused by switching of scanner gradients and heartbeat effects. Residuals of the ballistocardiogram (BCG) artefacts are shaped similarly to epileptic spikes, and may therefore cause false identification of spikes. In this study, new ideas and methods are presented to remove gradient artefacts and to reduce BCG artefacts of different shapes that mutually overlap in time. Gradient artefacts can be removed efficiently by subtracting an average artefact template when the EEG sampling frequency and EEG low-pass filtering are sufficient in relation to MR gradient switching (Gonçalves et al., 2007). When this is not the case, the gradient artefacts repeat themselves at time intervals that depend on the remainder between the fMRI repetition time and the closest multiple of the EEG acquisition time. These repetitions are deterministic, but difficult to predict due to the limited precision with which these timings are known. Therefore, we propose to estimate gradient artefact repetitions using a clustering algorithm, combined with selective averaging. Clustering of the gradient artefacts yields cleaner EEG for data recorded during scanning on a 3 T scanner with a sampling frequency of 2048 Hz, and even gives clean EEG when the EEG is sampled at only 256 Hz. Current BCG artefact-reduction algorithms based on average template subtraction have the intrinsic limitation that they fail to deal properly with artefacts that overlap in time. To eliminate this constraint, the precise timings of artefact overlaps were modelled and represented in a sparse matrix. Next, the artefacts were disentangled with a least squares procedure. The relevance of this approach is illustrated by determining the BCG artefacts in a data set consisting of 29 healthy subjects recorded in a 1.5 T scanner and 15 patients with epilepsy recorded in a 3 T scanner. Analysis of the relationship between artefact amplitude, duration and heartbeat interval shows that in 22% (1.5 T data) to 30% (3 T data) of the cases BCG artefacts overlap. The BCG artefacts of the EEG/fMRI data recorded on the 1.5 T scanner show a small negative correlation between heartbeat interval and BCG amplitude. In conclusion, the proposed methodology provides a substantial improvement in the quality of the EEG signal without excessive computing power or hardware beyond standard EEG-compatible equipment.
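    The overlap modelling can be sketched with a toy example: each occurrence of the unknown artefact template adds a shifted copy to the recording, so a least squares solve over a design matrix of shifts disentangles what plain averaging would smear. The onsets and template below are synthetic:

```python
import numpy as np

def disentangle_template(signal, onsets, length):
    """Estimate one artefact template whose occurrences overlap in time
    (sketch): least squares on a design matrix of shifted unit copies."""
    n = len(signal)
    A = np.zeros((n, length))
    for t0 in onsets:
        for j in range(length):
            if t0 + j < n:
                A[t0 + j, j] += 1.0       # occurrence starting at sample t0
    template, *_ = np.linalg.lstsq(A, signal, rcond=None)
    return template

# synthetic recording: three occurrences, the last two overlapping
true_template = np.hanning(50)
onsets, n = [0, 30, 55], 120
signal = np.zeros(n)
for t0 in onsets:
    signal[t0:t0 + 50] += true_template
est = disentangle_template(signal, onsets, 50)
```

    Average template subtraction would mix the overlapping copies into the estimate; the least squares formulation separates them exactly in the noiseless case.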

  17. Nonlinear inversion of electrical resistivity imaging using pruning Bayesian neural networks

    NASA Astrophysics Data System (ADS)

    Jiang, Fei-Bo; Dai, Qian-Wei; Dong, Li

    2016-06-01

    Conventional artificial neural networks used to solve the electrical resistivity imaging (ERI) inversion problem suffer from overfitting and local minima. To address these problems, we propose a pruning Bayesian neural network (PBNN) nonlinear inversion method and a sample design method based on the K-medoids clustering algorithm. In the sample design method, the training samples of the neural network are designed according to the prior information provided by the K-medoids clustering results; thus, the training process of the neural network is well guided. The proposed PBNN, based on Bayesian regularization, is used to select the hidden layer structure by assessing the effect of each hidden neuron on the inversion results. Then, the hyperparameter αk, which is based on the generalized mean, is chosen to guide the pruning process according to the prior distribution of the training samples under the small-sample condition. The proposed algorithm is more efficient than other common adaptive regularization methods in geophysics. The inversion of synthetic and field data suggests that the proposed method suppresses noise in the neural network training stage and enhances generalization. The inversion results with the proposed method are better than those of the BPNN, RBFNN, and RRBFNN inversion methods as well as conventional least squares inversion.

  18. Full-Physics Inverse Learning Machine for Satellite Remote Sensing Retrievals

    NASA Astrophysics Data System (ADS)

    Loyola, D. G.

    2017-12-01

    Satellite remote sensing retrievals are usually ill-posed inverse problems, typically solved by finding a state vector that minimizes the residual between simulated data and real measurements. Classical inversion methods are very time-consuming, as they require iterative calls to complex radiative-transfer forward models to simulate radiances and Jacobians, and the subsequent inversion of relatively large matrices. In this work we present a novel and extremely fast algorithm for solving inverse problems, called the full-physics inverse learning machine (FP-ILM). The FP-ILM algorithm consists of a training phase, in which machine learning techniques are used to derive an inversion operator based on synthetic data generated using a radiative transfer model (which expresses the "full-physics" component) and the smart sampling technique, and an operational phase, in which the inversion operator is applied to real measurements. FP-ILM has been successfully applied to the retrieval of SO2 plume height during volcanic eruptions and to the retrieval of ozone profile shapes from UV/VIS satellite sensors. Furthermore, FP-ILM will be used for the near-real-time processing of the upcoming generation of European Sentinel sensors, with their unprecedented spectral and spatial resolution and the associated large increases in data volume.

  19. Lidar-based door and stair detection from a mobile robot

    NASA Astrophysics Data System (ADS)

    Bansal, Mayank; Southall, Ben; Matei, Bogdan; Eledath, Jayan; Sawhney, Harpreet

    2010-04-01

    We present an on-the-move LIDAR-based object detection system for autonomous and semi-autonomous unmanned vehicle systems. In this paper we make several contributions: (i) we describe an algorithm for real-time detection of objects such as doors and stairs in indoor environments; (ii) we describe efficient data structures and algorithms for processing 3D point clouds acquired by laser scanners in a streaming manner, which minimize the memory copying and access. We show qualitative results demonstrating the effectiveness of our approach on runs in an indoor office environment.

  20. The adaptive parallel UKF inversion method for the shape of space objects based on the ground-based photometric data

    NASA Astrophysics Data System (ADS)

    Du, Xiaoping; Wang, Yang; Liu, Hao

    2018-04-01

    A space object in highly elliptical orbit always appears as a point image on ground-based imaging equipment, so it is difficult to resolve and identify its shape and attitude directly. In this paper a novel algorithm is presented for the estimation of spacecraft shape. An apparent magnitude model suitable for the inversion of object information such as shape and attitude is established based on an analysis of photometric characteristics. A parallel adaptive shape inversion algorithm based on the UKF was designed after deriving the dynamic equation of the nonlinear Gaussian system, including the influence of various drag forces. The results of a simulation study demonstrate the viability and robustness of the new filter and its fast convergence rate. It realizes the inversion of composite shapes with high accuracy, especially for cube and cylinder bus shapes. Even with sparse photometric data, it can still maintain a high success rate of inversion.

  1. A multiresolution inversion for imaging the ionosphere

    NASA Astrophysics Data System (ADS)

    Yin, Ping; Zheng, Ya-Nan; Mitchell, Cathryn N.; Li, Bo

    2017-06-01

    Ionospheric tomography has been widely employed in imaging large-scale ionospheric structures at both quiet and storm times. However, tomographic algorithms to date have not been very effective in imaging medium- and small-scale ionospheric structures, owing to the limitations of uneven ground-based data distributions and of the algorithms themselves. Further, the effect of the density and quantity of Global Navigation Satellite Systems data on the tomographic results of a given algorithm remains unclear in much of the literature. In this paper, a new multipass tomographic algorithm is proposed to conduct the inversion using dense ground GPS observation data and is demonstrated over the U.S. West Coast during the period of 16-18 March 2015, which includes an ionospheric storm. The characteristics of the multipass inversion algorithm are analyzed by comparing tomographic results with independent ionosonde data and Center for Orbit Determination in Europe total electron content estimates. Then, several ground data sets with different distributions are grouped from the same data source in order to investigate the impact of ground station density on ionospheric tomography results. It is concluded that the multipass inversion approach offers an improvement. Ground data density can affect tomographic results, but only offers improvements up to a density of around one receiver every 150 to 200 km. When only GPS satellites are tracked there is no clear advantage in increasing the density of receivers beyond this level, although this may change if multiple constellations are monitored from each receiving station in the future.

  2. Weak unique continuation property and a related inverse source problem for time-fractional diffusion-advection equations

    NASA Astrophysics Data System (ADS)

    Jiang, Daijun; Li, Zhiyuan; Liu, Yikan; Yamamoto, Masahiro

    2017-05-01

    In this paper, we first establish a weak unique continuation property for time-fractional diffusion-advection equations. The proof is mainly based on the Laplace transform and the unique continuation properties for elliptic and parabolic equations. The result is weaker than its parabolic counterpart in the sense that we additionally impose the homogeneous boundary condition. As a direct application, we prove the uniqueness for an inverse problem on determining the spatial component in the source term by interior measurements. Numerically, we reformulate our inverse source problem as an optimization problem, and propose an iterative thresholding algorithm. Finally, several numerical experiments are presented to show the accuracy and efficiency of the algorithm.
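    The abstract does not spell out the iterative thresholding algorithm; a generic ISTA sketch for the common formulation min ½‖Af − y‖² + λ‖f‖₁, applied to a synthetic sparse source, conveys the idea:

```python
import numpy as np

def ista(A, y, lam, n_iter):
    """Iterative soft-thresholding (sketch) for 0.5*||A f - y||^2 + lam*||f||_1:
    a gradient step on the data term followed by the shrinkage operator."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    f = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = f - A.T @ (A @ f - y) / L        # gradient step
        f = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return f

# synthetic sparse source observed through a random operator
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 120)) / np.sqrt(60)
f0 = np.zeros(120)
f0[[7, 40, 90]] = [2.0, -1.5, 1.0]
f_rec = ista(A, A @ f0, lam=0.01, n_iter=3000)
```

    The thresholding step promotes sparsity of the recovered source, which is what makes this family of algorithms attractive for inverse source problems.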

  3. Helios: a Multi-Purpose LIDAR Simulation Framework for Research, Planning and Training of Laser Scanning Operations with Airborne, Ground-Based Mobile and Stationary Platforms

    NASA Astrophysics Data System (ADS)

    Bechtold, S.; Höfle, B.

    2016-06-01

    In many technical domains of modern society, there is a growing demand for fast, precise and automatic acquisition of digital 3D models of a wide variety of physical objects and environments. Laser scanning is a popular and widely used technology to cover this demand, but it is also expensive and complex to use to its full potential. However, there might exist scenarios where the operation of a real laser scanner could be replaced by a computer simulation, in order to save time and costs. This includes scenarios like teaching and training of laser scanning, development of new scanner hardware and scanning methods, or generation of artificial scan data sets to support the development of point cloud processing and analysis algorithms. To test the feasibility of this idea, we have developed a highly flexible laser scanning simulation framework named Heidelberg LiDAR Operations Simulator (HELIOS). HELIOS is implemented as a Java library and split up into a core component and multiple extension modules. Extensible Markup Language (XML) is used to define scanner, platform and scene models and to configure the behaviour of modules. Modules were developed and implemented for (1) loading of simulation assets and configuration (i.e. 3D scene models, scanner definitions, survey descriptions etc.), (2) playback of XML survey descriptions, (3) TLS survey planning (i.e. automatic computation of recommended scanning positions) and (4) interactive real-time 3D visualization of simulated surveys. As a proof of concept, we show the results of two experiments: First, a survey planning test in a scene that was specifically created to evaluate the quality of the survey planning algorithm. Second, a simulated TLS scan of a crop field in a precision farming scenario. The results show that HELIOS fulfills its design goals.

  4. Wavelet-based 3-D inversion for frequency-domain airborne EM data

    NASA Astrophysics Data System (ADS)

    Liu, Yunhe; Farquharson, Colin G.; Yin, Changchun; Baranwal, Vikas C.

    2018-04-01

    In this paper, we propose a new wavelet-based 3-D inversion method for frequency-domain airborne electromagnetic (FDAEM) data. Instead of inverting the model in the space domain using a smoothing constraint, this new method recovers the model in the wavelet domain based on a sparsity constraint. In the wavelet domain, the model is represented by two types of coefficients, which contain both large- and fine-scale information about the model, meaning the wavelet-domain inversion has inherent multiresolution. To enforce the sparsity constraint, we minimize an L1-norm measure in the wavelet domain, which generally yields a sparse solution. The final inversion system is solved by an iteratively reweighted least-squares method. We investigate different orders of Daubechies wavelets for our inversion algorithm and test them on a synthetic frequency-domain AEM data set. The results show that higher-order wavelets, having larger vanishing moments and regularity, deliver a more stable inversion process and better local resolution, while lower-order wavelets are simpler and less smooth, and thus capable of recovering sharp discontinuities if the model is simple. Finally, we test the new inversion algorithm on a frequency-domain helicopter EM (HEM) field data set acquired at Byneset, Norway. The wavelet-based 3-D inversion of the HEM data is compared to the result of an L2-norm-based 3-D inversion to further investigate the features of the new method.
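    The iteratively reweighted least-squares strategy for an L1-norm measure can be sketched on a generic sparse problem (random operator, not AEM data): each pass solves a weighted L2 problem whose weights are taken from the previous iterate:

```python
import numpy as np

def irls_l1(A, y, n_iter=100, eps=1e-8):
    """IRLS sketch approximating min ||x||_1 subject to A x = y: minimizing
    sum x_i^2 / (|x_i_prev| + eps) under the constraint has a closed form."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]   # start from the L2 solution
    for _ in range(n_iter):
        Winv = np.diag(np.abs(x) + eps)        # inverse weights from last iterate
        x = Winv @ A.T @ np.linalg.solve(A @ Winv @ A.T, y)
    return x

# 2-sparse model seen through 20 random measurements
rng = np.random.default_rng(4)
A = rng.normal(size=(20, 40))
x0 = np.zeros(40)
x0[[5, 17]] = [1.0, -2.0]
x_rec = irls_l1(A, A @ x0)
```

    Small coefficients receive large penalties on the next pass and are driven toward zero, which is how a sequence of L2 solves mimics the L1 objective.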

  5. Two-dimensional joint inversion of Magnetotelluric and local earthquake data: Discussion on the contribution to the solution of deep subsurface structures

    NASA Astrophysics Data System (ADS)

    Demirci, İsmail; Dikmen, Ünal; Candansayar, M. Emin

    2018-02-01

    Joint inversion of data sets collected using several geophysical exploration methods has gained importance, and associated algorithms have been developed. To explore deep subsurface structures, magnetotelluric (MT) and local earthquake tomography algorithms are generally used individually. Because both methods rely on natural sources, it is not possible to increase data quality and the resolution of model parameters at will, and for this reason deep structures cannot be fully resolved by either method alone. In this paper, we first focus on the effects of both magnetotelluric and local earthquake data sets on the solution of deep structures and discuss the results on the basis of the resolving power of the methods. The presence of deep-focus seismic sources increases the resolution of deep structures, while the conductivity distribution of relatively shallow structures can be resolved with high resolution by the MT algorithm. We therefore developed a new joint inversion algorithm based on the cross-gradient function to jointly invert magnetotelluric and local earthquake data sets. We added a new regularization parameter to the second term of the parameter correction vector of Gallardo and Meju (2003); this parameter enhances the stability of the algorithm and controls the contribution of the cross-gradient term to the solution. The results show that even where resistivity and velocity boundaries differ, the two methods influence each other positively. In addition, regions with common structural boundaries are mapped clearly compared with the original models, and deep structures are identified satisfactorily even with a minimum number of seismic sources. In this paper, as a basis for future studies, we discuss joint inversion of magnetotelluric and local earthquake data sets only in two-dimensional space. In the light of these results, and with the acceleration of three-dimensional modelling and inversion algorithms, it should become easier to identify subsurface structures with high resolution.
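    The cross-gradient function of Gallardo and Meju measures structural similarity between two models: t = ∂m1/∂x ∂m2/∂z − ∂m1/∂z ∂m2/∂x vanishes wherever the two models' gradients are parallel. A finite-difference sketch on synthetic fields:

```python
import numpy as np

def cross_gradient(m1, m2, hz=1.0, hx=1.0):
    """Cross-gradient t = dm1/dx * dm2/dz - dm1/dz * dm2/dx on a 2D grid
    (sketch). t = 0 wherever the two models change in the same direction."""
    g1z, g1x = np.gradient(m1, hz, hx)
    g2z, g2x = np.gradient(m2, hz, hx)
    return g1x * g2z - g1z * g2x

# structurally identical models: one is an affine function of the other
rng = np.random.default_rng(5)
m1 = rng.normal(size=(30, 40)).cumsum(axis=0).cumsum(axis=1)  # smooth-ish field
t_same = cross_gradient(m1, 2.0 * m1 + 3.0)
t_diff = cross_gradient(m1, rng.normal(size=(30, 40)).cumsum(axis=0))
```

    Penalizing the norm of t in the joint objective couples the resistivity and velocity models structurally without forcing any particular petrophysical relationship between their values.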

  6. A reversible-jump Markov chain Monte Carlo algorithm for 1D inversion of magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Mandolesi, Eric; Ogaya, Xenia; Campanyà, Joan; Piana Agostinetti, Nicola

    2018-04-01

    This paper presents a new computer code developed to solve the 1D magnetotelluric (MT) inverse problem using a Bayesian trans-dimensional Markov chain Monte Carlo algorithm. MT data are sensitive to the depth-distribution of rock electric conductivity (or its reciprocal, resistivity). The solution provided is a probability distribution - the so-called posterior probability distribution (PPD) - for the conductivity at depth, together with the PPD of the interface depths. The PPD is sampled via a reversible-jump Markov chain Monte Carlo (rjMcMC) algorithm, using a modified Metropolis-Hastings (MH) rule to accept or discard candidate models along the chains. As the optimal parameterization for the inversion process is generally unknown, a trans-dimensional approach is used to allow the dataset itself to indicate the most probable number of parameters needed to sample the PPD. The algorithm is tested against two simulated datasets and a set of MT data acquired in the Clare Basin (County Clare, Ireland). For the simulated datasets the correct number of conductive layers at depth and the associated electrical conductivity values are retrieved, together with reasonable estimates of the uncertainties on the investigated parameters. Results from the inversion of field measurements are compared with results obtained using a deterministic method and with well-log data from a nearby borehole. The PPD is in good agreement with the well-log data, showing as its main structure a highly conductive layer associated with the Clare Shale formation. In this study, we demonstrate that our new code goes beyond algorithms developed using a linear inversion scheme, as it can be used: (1) to bypass the subjective choices in the 1D parameterization, i.e. the number of horizontal layers, and (2) to estimate realistic uncertainties on the retrieved parameters. The algorithm is implemented using a simple MPI approach, where independent chains run on isolated CPUs to take full advantage of parallel computer architectures. For large numbers of data, a master/slave approach can be used, where the master CPU samples the parameter space and the slave CPUs compute forward solutions.
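    The Metropolis-Hastings acceptance rule at the core of such samplers can be sketched in fixed dimension; the reversible-jump birth/death moves and the MT forward model are omitted, and the standard-normal target is purely illustrative:

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_steps, prop_std, rng):
    """Random-walk Metropolis-Hastings (sketch): accept a candidate with
    probability min(1, exp(log_post(candidate) - log_post(current)))."""
    x, lp = x0, log_post(x0)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        xc = x + rng.normal(0.0, prop_std)     # symmetric Gaussian proposal
        lpc = log_post(xc)
        if np.log(rng.random()) < lpc - lp:    # MH acceptance rule
            x, lp = xc, lpc
        chain[i] = x
    return chain

# sample a standard-normal posterior and discard a burn-in
rng = np.random.default_rng(6)
chain = metropolis_hastings(lambda x: -0.5 * x * x, 3.0, 20000, 1.0, rng)
samples = chain[2000:]
```

    A reversible-jump extension adds moves that change the number of parameters (e.g. add or remove a layer), with a Jacobian factor in the acceptance ratio.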

  7. An Iterative Closest Points Algorithm for Registration of 3D Laser Scanner Point Clouds with Geometric Features.

    PubMed

    He, Ying; Liang, Bin; Yang, Jun; Li, Shunzhi; He, Jin

    2017-08-11

    The Iterative Closest Points (ICP) algorithm is the mainstream algorithm for accurate registration of 3D point cloud data. The algorithm requires a proper initial value and an approximate registration of the two point clouds to prevent it from falling into local extremes, but in practical point cloud matching it is difficult to ensure that this requirement is met. In this paper, we propose an ICP algorithm based on point cloud geometric features (GF-ICP). The method uses geometric features of the point clouds to be registered, such as curvature, surface normals and point cloud density, to search for correspondences between the two point clouds, and introduces these geometric features into the error function to achieve accurate registration. The experimental results show that the algorithm can improve the convergence speed and widen the interval of convergence without requiring a proper initial value.
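    For reference, the baseline point-to-point ICP that GF-ICP extends alternates nearest-neighbour matching with a closed-form SVD (Kabsch) rigid update. A minimal sketch on synthetic clouds, without the geometric features proposed in the paper:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(P, Q, n_iter=50):
    """Plain point-to-point ICP (sketch): match each point to its nearest
    neighbour in Q, then apply the closed-form rigid update."""
    P = P.copy()
    for _ in range(n_iter):
        d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
        R, t = best_rigid_transform(P, Q[d.argmin(axis=1)])
        P = P @ R.T + t
    return P

# synthetic test: the same cloud under a small rigid motion
rng = np.random.default_rng(3)
P0 = rng.uniform(-1.0, 1.0, size=(100, 3))
th = np.deg2rad(5.0)
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
Q = P0 @ Rz.T + np.array([0.02, -0.01, 0.03])
P_aligned = icp(P0, Q)
resid = np.linalg.norm(P_aligned[:, None, :] - Q[None, :, :], axis=2).min(axis=1)
```

    With purely positional nearest-neighbour matching, a poor initial pose yields wrong correspondences and a local minimum; replacing the matching criterion with geometric features is exactly where GF-ICP intervenes.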

  8. An Iterative Closest Points Algorithm for Registration of 3D Laser Scanner Point Clouds with Geometric Features

    PubMed Central

    Liang, Bin; Yang, Jun; Li, Shunzhi; He, Jin

    2017-01-01

    The Iterative Closest Points (ICP) algorithm is the mainstream algorithm used in the process of accurate registration of 3D point cloud data. The algorithm requires a proper initial value and an approximate registration of the two point clouds to prevent it from falling into local extremes, but in actual point cloud matching it is difficult to guarantee that this requirement is met. In this paper, we propose an ICP algorithm based on point cloud features (GF-ICP). This method uses the geometric features of the point clouds to be registered, such as curvature, surface normals and point cloud density, to search for the correspondence relationships between the two point clouds, and introduces these geometric features into the error function to realize accurate registration. The experimental results showed that the algorithm can improve the convergence speed and enlarge the interval of convergence without a proper initial value being set. PMID:28800096

  9. Maximum likelihood positioning algorithm for high-resolution PET scanners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gross-Weege, Nicolas, E-mail: nicolas.gross-weege@pmi.rwth-aachen.de, E-mail: schulz@pmi.rwth-aachen.de; Schug, David; Hallen, Patrick

    2016-06-15

    Purpose: In high-resolution positron emission tomography (PET), lightsharing elements are incorporated into typical detector stacks to read out scintillator arrays in which one scintillator element (crystal) is smaller than the size of the readout channel. In order to identify the hit crystal by means of the measured light distribution, a positioning algorithm is required. One commonly applied positioning algorithm uses the center of gravity (COG) of the measured light distribution. The COG algorithm is limited in spatial resolution by noise and intercrystal Compton scatter. The purpose of this work is to develop a positioning algorithm which overcomes this limitation. Methods: The authors present a maximum likelihood (ML) algorithm which compares a set of expected light distributions given by probability density functions (PDFs) with the measured light distribution. Instead of modeling the PDFs by using an analytical model, the PDFs of the proposed ML algorithm are generated assuming a single-gamma-interaction model from measured data. The algorithm was evaluated with a hot-rod phantom measurement acquired with the preclinical HYPERION II D PET scanner. In order to assess the performance with respect to sensitivity, energy resolution, and image quality, the ML algorithm was compared to a COG algorithm which calculates the COG from a restricted set of channels. The authors studied the energy resolution of the ML and the COG algorithm regarding incomplete light distributions (missing channel information caused by detector dead time). Furthermore, the authors investigated the effects of using a filter based on the likelihood values on sensitivity, energy resolution, and image quality. Results: A sensitivity gain of up to 19% was demonstrated in comparison to the COG algorithm for the selected operation parameters. Energy resolution and image quality were on a similar level for both algorithms. 
Additionally, the authors demonstrated that the performance of the ML algorithm is less prone to missing channel information. A likelihood filter visually improved the image quality, i.e., the peak-to-valley increased up to a factor of 3 for 2-mm-diameter phantom rods by rejecting 87% of the coincidences. A relative improvement of the energy resolution of up to 12.8% was also measured rejecting 91% of the coincidences. Conclusions: The developed ML algorithm increases the sensitivity by correctly handling missing channel information without influencing energy resolution or image quality. Furthermore, the authors showed that energy resolution and image quality can be improved substantially by rejecting events that do not comply well with the single-gamma-interaction model, such as Compton-scattered events.
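
    The positioning step can be caricatured in a few lines: select the crystal whose expected light distribution best explains the measurement. This is a hedged sketch; the per-crystal expected distributions and the Gaussian channel-noise model below are invented for illustration, whereas the paper derives its PDFs from measured single-gamma data.

```python
# Maximum-likelihood crystal selection over a measured light distribution.
# A measured channel set to None stands for missing information (dead time),
# which the ML comparison simply skips.
def ml_position(measured, expected_by_crystal, sigma=1.0):
    def loglik(expected):
        terms = [(m - e) ** 2
                 for m, e in zip(measured, expected) if m is not None]
        return -0.5 * sum(terms) / sigma ** 2
    return max(expected_by_crystal, key=lambda c: loglik(expected_by_crystal[c]))

expected = {"crystal_A": [8.0, 2.0, 0.5],
            "crystal_B": [0.5, 2.0, 8.0]}
hit = ml_position([7.0, 2.5, None], expected)  # channel 2 lost to dead time
```

    Because each crystal is scored only on the channels that were actually read out, the assignment degrades gracefully with missing channel information, in contrast to a center-of-gravity estimate that is biased by absent channels.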

  10. The genetic algorithm: A robust method for stress inversion

    NASA Astrophysics Data System (ADS)

    Thakur, Prithvi; Srivastava, Deepak C.; Gupta, Pravin K.

    2017-01-01

    The stress inversion of geological or geophysical observations is a nonlinear problem. In most existing methods, it is solved by linearization, under certain assumptions. These linear algorithms not only oversimplify the problem but are also vulnerable to entrapment of the solution in a local optimum. We propose the use of a nonlinear heuristic technique, the genetic algorithm, which searches for the global optimum without making any linearizing assumption or simplification. The algorithm mimics the natural evolutionary processes of selection, crossover and mutation and minimizes a composite misfit function in searching for the global optimum, the fittest stress tensor. The validity and efficacy of the algorithm are demonstrated by a series of tests on synthetic and natural fault-slip observations in different tectonic settings and also in situations where the observations are noisy. It is shown that the genetic algorithm is superior to other commonly practised methods, in particular in those tectonic settings where none of the principal stresses is directed vertically and/or the given data set is noisy.
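
    The selection-crossover-mutation loop can be sketched on a stand-in objective. The quadratic misfit with a known minimum below is an assumption of this sketch; the actual method minimizes a composite fault-slip misfit over reduced stress tensors.

```python
import random

random.seed(1)

# Stand-in objective with global minimum at (1, 2, 3).
def misfit(x):
    return sum((a - b) ** 2 for a, b in zip(x, (1.0, 2.0, 3.0)))

pop = [[random.uniform(-5.0, 5.0) for _ in range(3)] for _ in range(40)]
for _ in range(100):
    pop.sort(key=misfit)
    parents = pop[:20]                    # selection: keep the fittest half
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, 3)      # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.3:         # mutation
            child[random.randrange(3)] += random.gauss(0.0, 0.3)
        children.append(child)
    pop = parents + children
best = min(pop, key=misfit)
```

    Keeping the parents alongside the children (elitism) makes the best misfit non-increasing across generations, while mutation keeps the search from stalling in a local optimum.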

  11. Full-wave Nonlinear Inverse Scattering for Acoustic and Electromagnetic Breast Imaging

    NASA Astrophysics Data System (ADS)

    Haynes, Mark Spencer

    Acoustic and electromagnetic full-wave nonlinear inverse scattering techniques are explored in both theory and experiment with the ultimate aim of noninvasively mapping the material properties of the breast. There is evidence that benign and malignant breast tissue have different acoustic and electrical properties and imaging these properties directly could provide higher quality images with better diagnostic certainty. In this dissertation, acoustic and electromagnetic inverse scattering algorithms are first developed and validated in simulation. The forward solvers and optimization cost functions are modified from traditional forms in order to handle the large or lossy imaging scenes present in ultrasonic and microwave breast imaging. An antenna model is then presented, modified, and experimentally validated for microwave S-parameter measurements. Using the antenna model, a new electromagnetic volume integral equation is derived in order to link the material properties of the inverse scattering algorithms to microwave S-parameters measurements allowing direct comparison of model predictions and measurements in the imaging algorithms. This volume integral equation is validated with several experiments and used as the basis of a free-space inverse scattering experiment, where images of the dielectric properties of plastic objects are formed without the use of calibration targets. These efforts are used as the foundation of a solution and formulation for the numerical characterization of a microwave near-field cavity-based breast imaging system. The system is constructed and imaging results of simple targets are given. Finally, the same techniques are used to explore a new self-characterization method for commercial ultrasound probes. The method is used to calibrate an ultrasound inverse scattering experiment and imaging results of simple targets are presented. 
This work has demonstrated the feasibility of quantitative microwave inverse scattering by way of a self-consistent characterization formalism, and has made headway in the same area for ultrasound.

  12. Measuring soil moisture with imaging radars

    NASA Technical Reports Server (NTRS)

    Dubois, Pascale C.; Vanzyl, Jakob; Engman, Ted

    1995-01-01

    An empirical model was developed to infer soil moisture and surface roughness from radar data. The accuracy of the inversion technique is assessed by comparing soil moisture obtained with the inversion technique to in situ measurements. The effect of vegetation on the inversion is studied and a method to eliminate the areas where vegetation impairs the algorithm is described.

  13. Concurrency control for transactions with priorities

    NASA Technical Reports Server (NTRS)

    Marzullo, Keith

    1989-01-01

    Priority inversion occurs when a process is delayed by the actions of another process with less priority. With atomic transactions, the concurrency control mechanism can cause delays and, if it does not take priorities into account, can be a source of priority inversion. In this paper, three traditional concurrency control algorithms are extended so that they are free from unbounded priority inversion.
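
    The paper extends three classical concurrency-control algorithms; as a hedged, simplified illustration of one standard remedy for the problem it targets (priority inheritance, not the paper's specific constructions), consider a lock that lets a low-priority holder temporarily run at the priority of a blocked waiter:

```python
# Simulated (non-threaded) priority-inheritance lock.
class Txn:
    def __init__(self, priority):
        self.priority = self.effective = priority

class InheritanceLock:
    def __init__(self):
        self.holder = None

    def try_acquire(self, txn):
        if self.holder is None:
            self.holder = txn
            return True
        # Blocked: donate the waiter's priority to the holder, so the holder
        # cannot be preempted by medium-priority work (bounding the inversion).
        self.holder.effective = max(self.holder.effective, txn.priority)
        return False

    def release(self):
        self.holder.effective = self.holder.priority  # drop inherited priority
        self.holder = None

low, high = Txn(1), Txn(5)
lock = InheritanceLock()
lock.try_acquire(low)
blocked = lock.try_acquire(high)   # high must wait; low now runs at priority 5
```

    Without the donation step, a medium-priority process could preempt the low-priority holder indefinitely while the high-priority waiter starves, which is exactly the unbounded inversion the paper's extensions rule out.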

  14. Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge

    PubMed Central

    Litjens, Geert; Toth, Robert; van de Ven, Wendy; Hoeks, Caroline; Kerkstra, Sjoerd; van Ginneken, Bram; Vincent, Graham; Guillard, Gwenael; Birbeck, Neil; Zhang, Jindang; Strand, Robin; Malmberg, Filip; Ou, Yangming; Davatzikos, Christos; Kirschner, Matthias; Jung, Florian; Yuan, Jing; Qiu, Wu; Gao, Qinquan; Edwards, Philip “Eddie”; Maan, Bianca; van der Heijden, Ferdinand; Ghose, Soumya; Mitra, Jhimli; Dowling, Jason; Barratt, Dean; Huisman, Henkjan; Madabhushi, Anant

    2014-01-01

    Prostate MRI image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Especially because we are dealing with MR images, image appearance, resolution and the presence of artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was set up to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI2012 conference. In the challenge, 100 prostate MR cases from 4 different centers were included, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. Algorithms showed a wide variety in methods and implementation, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary and volume based metrics which were combined into a single score relating the metrics to human expert performance. The winners of the challenge were the algorithms by teams Imorphics and ScrAutoProstate, with scores of 85.72 and 84.29 overall. Both algorithms were significantly better than all other algorithms in the challenge (p < 0.05) and had efficient implementations with run times of 8 minutes and 3 seconds per case, respectively. 
Overall, active appearance model based approaches seemed to outperform other approaches like multi-atlas registration, both on accuracy and computation time. Although average algorithm performance was good to excellent and the Imorphics algorithm outperformed the second observer on average, we showed that algorithm combination might lead to further improvement, indicating that optimal performance for prostate segmentation is not yet obtained. All results are available online at http://promise12.grand-challenge.org/. PMID:24418598
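
    The challenge scored algorithms with boundary- and volume-based metrics; a standard member of the volume-overlap family is the Dice similarity coefficient, sketched here for flat binary masks. This is illustrative only: the exact PROMISE12 metric mix also includes boundary distances and a mapping to expert performance, which are not reproduced here.

```python
# Dice similarity coefficient between two binary segmentation masks
# (any iterables of truthy/falsy voxel labels of equal length).
def dice(a, b):
    inter = sum(1 for x, y in zip(a, b) if x and y)
    total = sum(1 for x in a if x) + sum(1 for y in b if y)
    return 2.0 * inter / total if total else 1.0

score = dice([1, 1, 1, 0, 0], [0, 1, 1, 0, 0])
```

    Dice ranges from 0 (no overlap) to 1 (identical masks) and, unlike raw voxel accuracy, is insensitive to the large background region that dominates prostate MR volumes.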

  15. Temporal resolution improvement using PICCS in MDCT cardiac imaging

    PubMed Central

    Chen, Guang-Hong; Tang, Jie; Hsieh, Jiang

    2009-01-01

    The current paradigm for temporal resolution improvement is to add more source-detector units and/or increase the gantry rotation speed. The purpose of this article is to present an innovative alternative method to potentially improve temporal resolution by approximately a factor of 2 for all MDCT scanners without requiring hardware modification. The central enabling technology is a recently developed image reconstruction method: Prior image constrained compressed sensing (PICCS). Using the method, cardiac CT images can be accurately reconstructed using the projection data acquired in an angular range of about 120°, which is roughly 50% of the standard short-scan angular range (∼240° for an MDCT scanner). As a result, the temporal resolution of MDCT cardiac imaging can be universally improved by approximately a factor of 2. In order to validate the proposed method, two in vivo animal experiments were conducted using a state-of-the-art 64-slice CT scanner (GE Healthcare, Waukesha, WI) at different gantry rotation times and different heart rates. One animal was scanned at a heart rate of 83 beats per minute (bpm) using 400 ms gantry rotation time and the second animal was scanned at 94 bpm using 350 ms gantry rotation time, respectively. Cardiac coronary CT imaging can be successfully performed at high heart rates using a single-source MDCT scanner and projection data from a single heart beat with gantry rotation times of 400 and 350 ms. Using the proposed PICCS method, the temporal resolution of cardiac CT imaging can be effectively improved by approximately a factor of 2 without modifying any scanner hardware. This potentially provides a new method for single-source MDCT scanners to achieve reliable coronary CT imaging for patients at higher heart rates than the current heart rate limit of 70 bpm without using the well-known multisegment FBP reconstruction algorithm. 
This method also enables dual-source MDCT scanners to achieve higher temporal resolution without further hardware modifications. PMID:19610302
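
    As background, PICCS is usually stated as a prior-regularized compressed-sensing program. The form below is reproduced from the PICCS literature as commonly cited, not from this record, so symbols and the exact constraint should be checked against the original paper:

```latex
\min_{x}\;\; \alpha \,\lVert \Psi_1 (x - x_P) \rVert_1
\;+\; (1-\alpha)\,\lVert \Psi_2\, x \rVert_1
\qquad \text{subject to} \qquad A x = y ,
```

    where $x$ is the image to reconstruct, $x_P$ is the prior image (e.g. one reconstructed from the full short-scan data), $\Psi_1, \Psi_2$ are sparsifying transforms, $A$ is the system matrix for the limited (~120°) angular range, $y$ are the measured projections, and $\alpha \in [0,1]$ balances fidelity to the prior against conventional compressed-sensing sparsity.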

  16. Temporal resolution improvement using PICCS in MDCT cardiac imaging.

    PubMed

    Chen, Guang-Hong; Tang, Jie; Hsieh, Jiang

    2009-06-01

    The current paradigm for temporal resolution improvement is to add more source-detector units and/or increase the gantry rotation speed. The purpose of this article is to present an innovative alternative method to potentially improve temporal resolution by approximately a factor of 2 for all MDCT scanners without requiring hardware modification. The central enabling technology is a recently developed image reconstruction method: Prior image constrained compressed sensing (PICCS). Using the method, cardiac CT images can be accurately reconstructed using the projection data acquired in an angular range of about 120 degrees, which is roughly 50% of the standard short-scan angular range (approximately 240 degrees for an MDCT scanner). As a result, the temporal resolution of MDCT cardiac imaging can be universally improved by approximately a factor of 2. In order to validate the proposed method, two in vivo animal experiments were conducted using a state-of-the-art 64-slice CT scanner (GE Healthcare, Waukesha, WI) at different gantry rotation times and different heart rates. One animal was scanned at a heart rate of 83 beats per minute (bpm) using 400 ms gantry rotation time and the second animal was scanned at 94 bpm using 350 ms gantry rotation time, respectively. Cardiac coronary CT imaging can be successfully performed at high heart rates using a single-source MDCT scanner and projection data from a single heart beat with gantry rotation times of 400 and 350 ms. Using the proposed PICCS method, the temporal resolution of cardiac CT imaging can be effectively improved by approximately a factor of 2 without modifying any scanner hardware. This potentially provides a new method for single-source MDCT scanners to achieve reliable coronary CT imaging for patients at higher heart rates than the current heart rate limit of 70 bpm without using the well-known multisegment FBP reconstruction algorithm. 
This method also enables dual-source MDCT scanners to achieve higher temporal resolution without further hardware modifications.

  17. Quantum algorithms for Gibbs sampling and hitting-time estimation

    DOE PAGES

    Chowdhury, Anirban Narayan; Somma, Rolando D.

    2017-02-01

    In this paper, we present quantum algorithms for solving two problems regarding stochastic processes. The first algorithm prepares the thermal Gibbs state of a quantum system and runs in time almost linear in √Nβ/Z and polynomial in log(1/ϵ), where N is the Hilbert space dimension, β is the inverse temperature, Z is the partition function, and ϵ is the desired precision of the output state. Our quantum algorithm exponentially improves the dependence on 1/ϵ and quadratically improves the dependence on β of known quantum algorithms for this problem. The second algorithm estimates the hitting time of a Markov chain. For a sparse stochastic matrix P, it runs in time almost linear in 1/(ϵΔ^(3/2)), where ϵ is the absolute precision in the estimation and Δ is a parameter determined by P, and whose inverse is an upper bound of the hitting time. Our quantum algorithm quadratically improves the dependence on 1/ϵ and 1/Δ of the analog classical algorithm for hitting-time estimation. Finally, both algorithms use tools recently developed in the context of Hamiltonian simulation, spectral gap amplification, and solving linear systems of equations.

  18. Spatial operator approach to flexible multibody system dynamics and control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1991-01-01

    The inverse and forward dynamics problems for flexible multibody systems were solved using the techniques of spatially recursive Kalman filtering and smoothing. These algorithms are easily developed using a set of identities associated with mass matrix factorization and inversion. These identities are easily derived using the spatial operator algebra developed by the author. Current work is aimed at computational experiments with the described algorithms and at modelling for control design of limber manipulator systems. It is also aimed at handling and manipulation of flexible objects.

  19. Joint body and surface wave tomography applied to the Toba caldera complex (Indonesia)

    NASA Astrophysics Data System (ADS)

    Jaxybulatov, Kairly; Koulakov, Ivan; Shapiro, Nikolai

    2016-04-01

    We developed a new algorithm for a joint body and surface wave tomography. The algorithm is a modification of the existing LOTOS code (Koulakov, 2009) developed for local earthquake tomography. The input data for the new method are travel times of P and S waves and dispersion curves of Rayleigh and Love waves. The main idea is that the two data types have complementary sensitivities. The body-wave data have good resolution at depth, where we have enough crossing rays between sources and receivers, whereas the surface waves have very good near-surface resolution. The surface wave dispersion curves can be retrieved from the correlations of the ambient seismic noise, and in this case the sampled path distribution does not depend on the earthquake sources. The contributions of the two data types to the inversion are controlled by the weighting of the respective equations. One of the clearest cases where such an approach may be useful is that of volcanic systems in subduction zones, with their complex magmatic feeding systems that have deep roots in the mantle and intermediate magma chambers in the crust. In these areas, the joint inversion of different types of data helps us to build a comprehensive understanding of the entire system. We apply our algorithm to data collected in the region surrounding the Toba caldera complex (north Sumatra, Indonesia) during two temporary seismic experiments (IRIS, PASSCAL, 1995, GFZ, LAKE TOBA, 2008). We invert 6644 P and 5240 S wave arrivals and ~500 group velocity dispersion curves of Rayleigh and Love waves. We present a series of synthetic tests and real data inversions which show that the joint inversion approach gives more reliable results than separate inversions of the two data types. Koulakov, I., LOTOS code for local earthquake tomographic inversion. Benchmarks for testing tomographic algorithms, Bull. Seism. Soc. Am., 99(1), 194-214, 2009, doi:10.1785/0120080013

  20. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    NASA Astrophysics Data System (ADS)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

    This paper proposes a new incremental inverse kinematics based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using integrated photogrammetry and an EKF algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions of the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
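
    The incremental idea can be sketched for a planar 2-link arm (the paper's manipulator and vision pipeline are far richer): at each servo cycle, solve the small end-effector displacement through the Jacobian and clamp each joint increment to a speed limit. Link lengths, limits and the target point are invented numbers for this sketch.

```python
import math

L1 = L2 = 1.0   # hypothetical link lengths

def fk(q1, q2):
    """Forward kinematics of a planar 2-link arm."""
    return (L1 * math.cos(q1) + L2 * math.cos(q1 + q2),
            L1 * math.sin(q1) + L2 * math.sin(q1 + q2))

def ik_step(q, target, dt=0.01, qdot_max=1.0):
    q1, q2 = q
    x, y = fk(q1, q2)
    ex, ey = target[0] - x, target[1] - y
    # analytic Jacobian of fk
    j11 = -L1 * math.sin(q1) - L2 * math.sin(q1 + q2)
    j12 = -L2 * math.sin(q1 + q2)
    j21 =  L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    j22 =  L2 * math.cos(q1 + q2)
    det = j11 * j22 - j12 * j21
    if abs(det) < 1e-9:
        return q                         # near-singular: hold configuration
    dq1 = ( j22 * ex - j12 * ey) / det   # J^{-1} applied to the error
    dq2 = (-j21 * ex + j11 * ey) / det
    clamp = lambda d: max(-qdot_max * dt, min(qdot_max * dt, d))
    return (q1 + clamp(dq1), q2 + clamp(dq2))   # joint speed limits enforced

q = (0.5, 0.5)
for _ in range(2000):                    # servo loop toward a fixed target
    q = ik_step(q, (1.2, 0.8))
```

    Because each cycle only steps from the current configuration, the chain of increments stays on one branch of the inverse kinematics, which is one way to see how the incremental scheme avoids the multiple-solution ambiguity.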

  1. Quasi-Epipolar Resampling of High Resolution Satellite Stereo Imagery for Semi Global Matching

    NASA Astrophysics Data System (ADS)

    Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.

    2015-12-01

    Semi-global matching is a well-known stereo matching algorithm in the photogrammetry and computer vision communities. Epipolar images are assumed as the input to this algorithm. The epipolar geometry of linear array scanners is not a straight line, as it is in the case of frame cameras. Traditional epipolar resampling algorithms demand rational polynomial coefficients (RPCs), a physical sensor model or ground control points. In this paper we propose a new epipolar resampling method which works without the need for this information. In the proposed method, automatic feature extraction algorithms are employed to generate corresponding features for registering the stereo pairs. The original images are also divided into small tiles. In this way, by omitting the need for extra information, the speed of the matching algorithm is increased and the demand for temporary memory is decreased. Our experiments on a GeoEye-1 stereo pair captured over the city of Qom, Iran, demonstrate that the epipolar images are generated with sub-pixel accuracy.

  2. Quantitative imaging technique using the layer-stripping algorithm

    NASA Astrophysics Data System (ADS)

    Beilina, L.

    2017-07-01

    We present the layer-stripping algorithm for the solution of the hyperbolic coefficient inverse problem (CIP). Our numerical examples show quantitative reconstruction of small tumor-like inclusions in two-dimensions.

  3. Directly data processing algorithm for multi-wavelength pyrometer (MWP).

    PubMed

    Xing, Jian; Peng, Bo; Ma, Zhao; Guo, Xin; Dai, Li; Gu, Weihong; Song, Wenlong

    2017-11-27

    Data processing for the multi-wavelength pyrometer (MWP) is a difficult problem because of unknown emissivity. The solutions developed so far generally assume particular mathematical relations for emissivity versus wavelength or emissivity versus temperature; owing to the deviation between such hypotheses and the actual situation, the inversion results can be seriously affected. A direct data processing algorithm for MWP that does not need to assume a spectral emissivity model in advance is therefore the main aim of this study. Two new data processing algorithms for MWP, the Gradient Projection (GP) algorithm and the Internal Penalty Function (IPF) algorithm, neither of which requires fixing an emissivity model in advance, are proposed. The core idea is that the MWP data processing problem is transformed into a constrained optimization problem, which can then be solved by the GP or IPF algorithm. By comparing simulation results for some typical spectral emissivity models, it is found that the IPF algorithm is superior to the GP algorithm in terms of accuracy and efficiency. Rocket nozzle temperature experiments show that the true temperature inversion results from the IPF algorithm agree well with the theoretical design temperature. The combination of the IPF algorithm with MWP is thus expected to provide a direct data processing approach that clears the unknown-emissivity obstacle for MWP.
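
    A toy of the constrained-optimization view: for a trial temperature T, Wien's approximation gives candidate emissivities from the measured spectral radiances, and an internal (log-barrier) penalty keeps them inside (0, 1). The regularity term minimized over T below (the spread of emissivities across wavelengths, applied to a grey-body test case) is an assumption of this sketch, not the paper's actual IPF objective; wavelengths and the target temperature are also invented.

```python
import math

C2 = 1.4388e-2                                   # second radiation constant, m*K

def blackbody(lam, T):
    # Wien's approximation (unscaled): adequate for short wavelengths / high T
    return lam ** -5 * math.exp(-C2 / (lam * T))

lams = [0.8e-6, 0.9e-6, 1.0e-6, 1.1e-6]          # wavelengths in metres
T_true, eps_true = 1800.0, 0.5                   # grey-body test target
meas = [eps_true * blackbody(l, T_true) for l in lams]

def objective(T, mu=1e-4):
    eps = [m / blackbody(l, T) for m, l in zip(meas, lams)]
    if any(e <= 0.0 or e >= 1.0 for e in eps):
        return float("inf")                      # infeasible emissivities
    mean = sum(eps) / len(eps)
    spread = sum((e - mean) ** 2 for e in eps)   # assumed regularity term
    barrier = -mu * sum(math.log(e) + math.log(1.0 - e) for e in eps)
    return spread + barrier

T_best = min(range(1000, 3000, 5), key=lambda T: objective(float(T)))
```

    Only at the true temperature do all wavelengths map back to a mutually consistent emissivity, which is the leverage that lets such formulations avoid fixing an emissivity model in advance.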

  4. Region of interest processing for iterative reconstruction in x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Kopp, Felix K.; Nasirudin, Radin A.; Mei, Kai; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Noël, Peter B.

    2015-03-01

    Recent advancements in graphics card technology have raised the performance of parallel computing and contributed to the introduction of iterative reconstruction methods for x-ray computed tomography in clinical CT scanners. Iterative maximum likelihood (ML) based reconstruction methods are known to reduce image noise and to improve the diagnostic quality of low-dose CT. However, iterative reconstruction of a region of interest (ROI), especially ML based, is challenging. Yet for some clinical procedures, like cardiac CT, only a ROI is needed for diagnostics, and a high-resolution reconstruction of the full field of view (FOV) consumes unnecessary computational effort, resulting in reconstructions slower than clinically acceptable. In this work, we present an extension and evaluation of an existing ROI processing algorithm, proposing in particular improvements to the equalization between regions inside and outside a ROI. The evaluation was done on data collected from a clinical CT scanner, and the performance of the different algorithms is qualitatively and quantitatively assessed. Our solution to the ROI problem provides an increase in signal-to-noise ratio and leads to visually less noise in the final reconstruction. The reconstruction speed of our technique was observed to be comparable with previously proposed techniques. The development of ROI processing algorithms in combination with iterative reconstruction will provide higher diagnostic quality in the near future.

  5. Computer-aided diagnosis workstation and network system for chest diagnosis based on multislice CT images

    NASA Astrophysics Data System (ADS)

    Satoh, Hitoshi; Niki, Noboru; Mori, Kiyoshi; Eguchi, Kenji; Kaneko, Masahiro; Kakinuma, Ryutarou; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru

    2007-03-01

    The multislice CT scanner has remarkably advanced the speed at which chest CT images are acquired for mass screening. Mass screening based on multislice CT images requires a considerable number of images to be read. It is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images and a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification. Moreover, we have provided diagnostic assistance by building the lung cancer screening algorithm into a mobile helical CT scanner for lung cancer mass screening in regions without hospitals. We have also developed an electronic medical recording system and a prototype internet system for community health across two or more regions, using a Virtual Private Network router together with biometric fingerprint and face authentication systems for the security of medical information. Based on these diagnostic assistance methods, we have now developed a new computer-aided workstation and database that can display suspected lesions three-dimensionally in a short time. This paper describes basic studies that have been conducted to evaluate this new system.

  6. MARS spectral molecular imaging of lamb tissue: data collection and image analysis

    NASA Astrophysics Data System (ADS)

    Aamir, R.; Chernoglazov, A.; Bateman, C. J.; Butler, A. P. H.; Butler, P. H.; Anderson, N. G.; Bell, S. T.; Panta, R. K.; Healy, J. L.; Mohr, J. L.; Rajendran, K.; Walsh, M. F.; de Ruiter, N.; Gieseg, S. P.; Woodfield, T.; Renaud, P. F.; Brooke, L.; Abdul-Majid, S.; Clyne, M.; Glendenning, R.; Bones, P. J.; Billinghurst, M.; Bartneck, C.; Mandalika, H.; Grasset, R.; Schleich, N.; Scott, N.; Nik, S. J.; Opie, A.; Janmale, T.; Tang, D. N.; Kim, D.; Doesburg, R. M.; Zainon, R.; Ronaldson, J. P.; Cook, N. J.; Smithies, D. J.; Hodge, K.

    2014-02-01

    Spectral molecular imaging is a new imaging technique able to discriminate and quantify different components of tissue simultaneously at high spatial and high energy resolution. Our MARS scanner is an x-ray based small animal CT system designed to be used in the diagnostic energy range (20-140 keV). In this paper, we demonstrate the use of the MARS scanner, equipped with the Medipix3RX spectroscopic photon-processing detector, to discriminate fat, calcium, and water in tissue. We present data collected from a sample of lamb meat including bone as an illustrative example of human tissue imaging. The data is analyzed using our 3D Algebraic Reconstruction Algorithm (MARS-ART) and by material decomposition based on a constrained linear least squares algorithm. The results presented here clearly show the quantification of lipid-like, water-like and bone-like components of tissue. However, it is also clear to us that better algorithms could extract more information of clinical interest from our data. Because we are one of the first to present data from multi-energy photon-processing small animal CT systems, we make the raw, partial and fully processed data available with the intention that others can analyze it using their familiar routines. The raw, partially processed and fully processed data of lamb tissue along with the phantom calibration data can be found at http://hdl.handle.net/10092/8531.
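
    The material-decomposition step mentioned above can be sketched as a constrained linear least squares problem: attenuation in each energy bin is modelled as a nonnegative combination of basis-material spectra, solved here by projected gradient descent. The basis spectra and measurement below are invented numbers, not MARS calibration data, and the solver choice is an assumption of this sketch.

```python
basis = [
    [1.0, 0.5, 0.2],   # hypothetical "water-like" response over three bins
    [0.3, 1.0, 1.5],   # hypothetical "bone-like" response
]

def decompose(meas, basis, iters=2000, lr=0.1):
    """Nonnegative least squares via projected gradient descent."""
    n = len(basis)
    x = [0.0] * n
    for _ in range(iters):
        # residual of A x - meas, with the rows of `basis` as A's columns
        r = [sum(basis[j][i] * x[j] for j in range(n)) - m
             for i, m in enumerate(meas)]
        for j in range(n):
            g = sum(basis[j][i] * r[i] for i in range(len(meas)))
            x[j] = max(0.0, x[j] - lr * g)   # gradient step, clipped at zero
    return x

true_x = [0.5, 0.3]                           # synthetic material amounts
meas = [sum(b[i] * t for b, t in zip(basis, true_x)) for i in range(3)]
est = decompose(meas, basis)
```

    The nonnegativity projection encodes the physical constraint that material amounts cannot be negative, which is what distinguishes the constrained formulation from a plain least squares fit.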

  7. Estimating proportions of objects from multispectral scanner data

    NASA Technical Reports Server (NTRS)

    Horwitz, H. M.; Lewis, J. T.; Pentland, A. P.

    1975-01-01

    Progress is reported in developing and testing methods of estimating, from multispectral scanner data, proportions of target classes in a scene when there are a significant number of boundary pixels. Procedures were developed to exploit: (1) prior information concerning the number of object classes normally occurring in a pixel, and (2) spectral information extracted from signals of adjoining pixels. Two algorithms, LIMMIX and nine-point mixtures, are described along with supporting processing techniques. An important by-product of the procedures, in contrast to the previous method, is that they are often appropriate when the number of spectral bands is small. Preliminary tests on LANDSAT data sets, where the target classes were (1) lakes and ponds, and (2) agricultural crops, were encouraging.
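
    The mixture-pixel idea can be sketched in closed form for two classes: a boundary pixel's multispectral signal is modelled as a convex combination of two class signatures, and the proportion is recovered by least squares. The 4-band signature vectors are invented for illustration; the LIMMIX and nine-point procedures in the report are more elaborate.

```python
lake = [10.0, 8.0, 4.0, 2.0]     # hypothetical 4-band "lake" signature
crop = [30.0, 40.0, 55.0, 60.0]  # hypothetical "crop" signature

def lake_proportion(pixel):
    # minimize ||pixel - (a*lake + (1-a)*crop)||^2 over a, clipped to [0, 1]
    d = [w - c for w, c in zip(lake, crop)]
    r = [p - c for p, c in zip(pixel, crop)]
    a = sum(x * y for x, y in zip(r, d)) / sum(x * x for x in d)
    return max(0.0, min(1.0, a))

mixed = [0.3 * w + 0.7 * c for w, c in zip(lake, crop)]  # 30%-lake pixel
```

    Because the one-parameter fit uses all bands jointly, it remains well posed even when the number of spectral bands is small, which matches the report's observation about few-band data.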

  8. Solar radiance models for determination of ERBE scanner filter factor

    NASA Technical Reports Server (NTRS)

    Arduini, R. F.

    1985-01-01

    Shortwave spectral radiance models for use in the spectral correction algorithms for the ERBE Scanner Instrument are provided. The required data base was delivered to the ERBE Data Reduction Group in October 1984. It consisted of two sets of data files: (1) the spectral bidirectional angular models and (2) the spectral flux models. The bidirectional models employ the angular characteristics of reflection by the Earth-atmosphere system and were derived from detailed radiance calculations using a finite difference model of the radiative transfer process. The spectral flux models were created through the use of a delta-Eddington model to economically simulate the effects of atmospheric variability. By combining these data sets, a wide range of radiances may be approximated for a number of scene types.

  9. A comparison of change detection methods using multispectral scanner data

    USGS Publications Warehouse

    Seevers, Paul M.; Jones, Brenda K.; Qiu, Zhicheng; Liu, Yutong

    1994-01-01

    Change detection methods were investigated as a cooperative activity between the U.S. Geological Survey and the National Bureau of Surveying and Mapping, People's Republic of China. Subtraction of band 2, band 3, normalized difference vegetation index, and tasseled cap bands 1 and 2 data from two multispectral scanner images was tested using two sites in the United States and one in the People's Republic of China. A new statistical method was also tested. Band 2 subtraction gives the best results for detecting change from vegetative cover to urban development. The statistical method identifies areas that have changed and uses a fast classification algorithm to classify the original data of the changed areas by the land cover type present for each image date.

  10. Comment on 'Aerosol and Rayleigh radiance contributions to Coastal Zone Colour Scanner images' by Eckstein and Simpson

    NASA Technical Reports Server (NTRS)

    Gordon, H. R.; Evans, R. H.

    1993-01-01

    In a recent paper Eckstein and Simpson describe what they believe to be serious difficulties and/or errors with the CZCS (Coastal Zone Color Scanner) processing algorithms based on their analysis of seven images. Here we point out that portions of their analysis, particularly those dealing with multiply scattered Rayleigh radiance, are incorrect. We also argue that other problems they discuss have already been addressed in the literature. Finally, we suggest that many apparent artifacts in CZCS-derived pigment fields are likely to be due to inadequacies in the sensor band set or to poor radiometric stability, both of which will be remedied with the next generation of ocean color sensors.

  11. Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging.

    PubMed

    Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio; Ntziachristos, Vasilis; Rosenthal, Amir

    2015-09-01

    With recent advances in the hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired in a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV-L1 model-based inversion is tested in the cross section geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. In all cases, model-based TV-L1 inversion showed a better performance over the conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV-L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV-L1 inversion yielded sharper images and weaker streak artifacts. The results herein show that TV-L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV-L1 inversion may be considered as the new standard for image reconstruction in cross-sectional imaging.
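    The role of the TV term in such a cost function can be illustrated in one dimension: penalizing total variation flattens noise within regions while preserving sharp jumps. Below is a toy sketch that assumes a trivial identity forward model rather than an acoustic one; a real model-based inversion would use the imaging operator and add the L1 term in a sparse representation.

```python
def sign(v):
    return (v > 0) - (v < 0)

def tv_denoise(y, lam=0.5, iters=3000, lr=0.002):
    """Minimise ||x - y||^2 + lam * TV(x) by (sub)gradient descent."""
    x = list(y)
    n = len(x)
    for _ in range(iters):
        g = [2.0 * (x[i] - y[i]) for i in range(n)]   # data-fit gradient
        for i in range(n - 1):                         # TV subgradient
            s = sign(x[i + 1] - x[i])
            g[i] -= lam * s
            g[i + 1] += lam * s
        x = [x[i] - lr * g[i] for i in range(n)]
    return x

# noisy piecewise-constant signal: low plateau, high plateau, low plateau
noisy = [0.1, -0.05, 0.04, 1.1, 0.95, 1.05, 0.02, -0.03]
smooth = tv_denoise(noisy)
```

After the iteration, within-plateau fluctuations are suppressed while the large jump between plateaus survives, which is exactly the edge-preserving behaviour the abstract attributes to the TV term.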

  12. Discrimination of a chestnut-oak forest unit for geologic mapping by means of a principal component enhancement of Landsat multispectral scanner data.

    USGS Publications Warehouse

    Krohn, M.D.; Milton, N.M.; Segal, D.; Enland, A.

    1981-01-01

    A principal component image enhancement has been effective in applying Landsat data to geologic mapping in a heavily forested area of eastern Virginia. The image enhancement procedure consists of a principal component transformation, a histogram normalization, and the inverse principal component transformation. The enhancement preserves the independence of the principal components, yet produces a more readily interpretable image than does a single principal component transformation. -from Authors

  13. Inverse problems with nonnegative and sparse solutions: algorithms and application to the phase retrieval problem

    NASA Astrophysics Data System (ADS)

    Quy Muoi, Pham; Nho Hào, Dinh; Sahoo, Sujit Kumar; Tang, Dongliang; Cong, Nguyen Huu; Dang, Cuong

    2018-05-01

    In this paper, we study a gradient-type method and a semismooth Newton method for minimization problems in regularizing inverse problems with nonnegative and sparse solutions. We propose a special penalty functional forcing the minimizers of the regularized minimization problems to be nonnegative and sparse, and we then apply the proposed algorithms to a practical problem. The strong convergence of the gradient-type method and the local superlinear convergence of the semismooth Newton method are proven. We then use these algorithms for the phase retrieval problem and illustrate their efficiency in numerical examples, particularly in the practical problem of optical imaging through scattering media, where all the noise from the experiment is present.
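    The effect of a penalty that forces nonnegativity and sparsity can be seen in a minimal projected-gradient sketch. This is not the paper's functional or its semismooth Newton method; the objective below, ||Ax - b||^2 + alpha * sum(x) over x >= 0 (on the nonnegative orthant the l1 norm is just the sum), and the data are illustrative.

```python
def nonneg_sparse_gd(A, b, alpha=0.1, iters=2000, lr=0.05):
    """Projected gradient for min ||A x - b||^2 + alpha*sum(x), x >= 0."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) + alpha
             for j in range(n)]
        x = [max(0.0, x[j] - lr * g[j]) for j in range(n)]
    return x

# with A = I the minimiser is max(0, b - alpha/2): entries below the
# threshold are zeroed out, the rest shrink by alpha/2
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
x = nonneg_sparse_gd(I3, [1.0, 0.03, 0.5])
```

The identity-matrix case makes the sparsifying action of the penalty explicit: the small component is driven exactly to zero while the others survive with a bias of alpha/2.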

  14. Enhanced image fusion using directional contrast rules in fuzzy transform domain.

    PubMed

    Nandal, Amita; Rosales, Hamurabi Gamboa

    2016-01-01

    In this paper, a novel image fusion algorithm based on directional contrast in the fuzzy transform (FTR) domain is proposed. Input images to be fused are first divided into several non-overlapping blocks. The components of these sub-blocks are fused using a directional-contrast-based fuzzy fusion rule in the FTR domain. The fused sub-blocks are then transformed back into blocks of the original size using the inverse FTR. These inverse-transformed blocks are then fused according to a select-maximum fusion rule to reconstruct the final fused image. The proposed fusion algorithm is compared both visually and quantitatively with other standard and recent fusion algorithms. Experimental results demonstrate that the proposed method generates better results than the other methods.

  15. Efficient electromagnetic source imaging with adaptive standardized LORETA/FOCUSS.

    PubMed

    Schimpf, Paul H; Liu, Hesheng; Ramon, Ceon; Haueisen, Jens

    2005-05-01

    Functional brain imaging and source localization based on the scalp's potential field require a solution to an ill-posed inverse problem with many solutions. This makes it necessary to incorporate a priori knowledge in order to select a particular solution. A computational challenge for some subject-specific head models is that many inverse algorithms require a comprehensive sampling of the candidate source space at the desired resolution. In this study, we present an algorithm that can accurately reconstruct details of localized source activity from a sparse sampling of the candidate source space. Forward computations are minimized through an adaptive procedure that increases source resolution as the spatial extent is reduced. With this algorithm, we were able to compute inverses using only 6% to 11% of the full resolution lead-field, with a localization accuracy that was not significantly different than an exhaustive search through a fully-sampled source space. The technique is, therefore, applicable for use with anatomically-realistic, subject-specific forward models for applications with spatially concentrated source activity.

  16. A recursive algorithm for the three-dimensional imaging of brain electric activity: Shrinking LORETA-FOCUSS.

    PubMed

    Liu, Hesheng; Gao, Xiaorong; Schimpf, Paul H; Yang, Fusheng; Gao, Shangkai

    2004-10-01

    Estimation of intracranial electric activity from the scalp electroencephalogram (EEG) requires a solution to the EEG inverse problem, which is known as an ill-conditioned problem. In order to yield a unique solution, weighted minimum norm least square (MNLS) inverse methods are generally used. This paper proposes a recursive algorithm, termed Shrinking LORETA-FOCUSS, which combines and expands upon the central features of two well-known weighted MNLS methods: LORETA and FOCUSS. This recursive algorithm makes iterative adjustments to the solution space as well as the weighting matrix, thereby dramatically reducing the computation load, and increasing local source resolution. Simulations are conducted on a 3-shell spherical head model registered to the Talairach human brain atlas. A comparative study of four different inverse methods, standard Weighted Minimum Norm, L1-norm, LORETA-FOCUSS and Shrinking LORETA-FOCUSS are presented. The results demonstrate that Shrinking LORETA-FOCUSS is able to reconstruct a three-dimensional source distribution with smaller localization and energy errors compared to the other methods.
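    The weighted MNLS building block that LORETA and FOCUSS share has the closed form x = W L^T (L W L^T)^{-1} y for a diagonal weight matrix W. Below is a toy sketch with a hypothetical 2x3 "lead field", including one FOCUSS-style reweighting step (W rebuilt from the squared previous iterate); all numbers are illustrative and no 3-shell head model is involved.

```python
def wmn_solution(L, y, w):
    """Weighted minimum-norm solution for a 2x3 system, W = diag(w)."""
    # G = L W L^T (2x2)
    G = [[sum(L[a][k] * w[k] * L[b][k] for k in range(3)) for b in range(2)]
         for a in range(2)]
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    Ginv = [[G[1][1] / det, -G[0][1] / det],
            [-G[1][0] / det, G[0][0] / det]]
    c = [sum(Ginv[a][b] * y[b] for b in range(2)) for a in range(2)]
    # x = W L^T c
    return [w[k] * sum(L[a][k] * c[a] for a in range(2)) for k in range(3)]

L = [[1.0, 0.5, 0.2],
     [0.2, 0.5, 1.0]]
y = [1.0, 0.2]                                   # simulated sensor data
x = wmn_solution(L, y, [1.0, 1.0, 1.0])          # plain minimum norm
# FOCUSS-style reweighting: emphasise currently strong sources
x2 = wmn_solution(L, y, [v * v for v in x])
```

Both solutions reproduce the data exactly; the reweighted one concentrates the energy onto fewer sources, which is the sparsifying mechanism the recursive algorithm exploits.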

  17. A robust method of computing finite difference coefficients based on Vandermonde matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Yijie; Gao, Jinghuai; Peng, Jigen; Han, Weimin

    2018-05-01

    When the finite difference (FD) method is employed to simulate wave propagation, a high-order FD method is preferred in order to achieve better accuracy. However, if the order of the FD scheme is high enough, the coefficient matrix of the formula for calculating the finite difference coefficients is close to singular. In this case, when the FD coefficients are computed with MATLAB's matrix inverse operator, inaccurate results can be produced. To overcome this problem, we suggest an algorithm based on the Vandermonde matrix. After a specified mathematical transformation, the coefficient matrix is transformed into a Vandermonde matrix. The FD coefficients of the high-order FD method can then be computed by a Vandermonde-specific algorithm, which avoids inverting the near-singular matrix. The dispersion analysis and numerical results for a homogeneous elastic model and a geophysical model of an oil and gas reservoir demonstrate that the Vandermonde-based algorithm has better accuracy than MATLAB's matrix inverse operator.
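    One well-known Vandermonde-specific solver is the O(n^2) Bjorck-Pereyra recursion, sketched below computing FD coefficients without ever forming or inverting the matrix. This illustrates the general idea, not necessarily the exact algorithm of the paper.

```python
def fd_coefficients(nodes, rhs):
    """Solve the primal Vandermonde system V c = rhs, V[i][j] = nodes[j]**i,
    by the O(n^2) Bjorck-Pereyra recursion (no matrix is ever formed)."""
    n = len(nodes)
    b = list(rhs)
    for k in range(n - 1):                 # stage 1: elimination
        for i in range(n - 1, k, -1):
            b[i] -= nodes[k] * b[i - 1]
    for k in range(n - 2, -1, -1):         # stage 2: back substitution
        for i in range(k + 1, n):
            b[i] /= nodes[i] - nodes[i - k - 1]
        for i in range(k, n - 1):
            b[i] -= b[i + 1]
    return b

# first-derivative weights on a centred 5-point stencil (grid spacing 1):
# the right-hand side puts p! in the row matching the derivative order p
c5 = fd_coefficients([-2.0, -1.0, 0.0, 1.0, 2.0], [0.0, 1.0, 0.0, 0.0, 0.0])
```

For the centred 5-point stencil this yields the classical weights [1/12, -2/3, 0, 2/3, -1/12], and the same routine handles higher derivatives by moving the nonzero right-hand-side entry.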

  18. Localization of incipient tip vortex cavitation using ray based matched field inversion method

    NASA Astrophysics Data System (ADS)

    Kim, Dongho; Seong, Woojae; Choo, Youngmin; Lee, Jeunghoon

    2015-10-01

    Cavitation of a marine propeller is one of the main contributors to broadband radiated ship noise. In this research, an algorithm for the source localization of incipient vortex cavitation is suggested. Incipient cavitation is modeled as a monopole-type source, and the matched-field inversion method is applied to find the source position by comparing the spatial correlation between measured and replicated pressure fields at the receiver array. The accuracy of source localization is improved by a broadband matched-field inversion technique that enhances correlation by incoherently averaging the correlations of individual frequencies. The suggested localization algorithm is verified with a known virtual source and through a model test conducted in the Samsung ship model basin cavitation tunnel. It is found that the suggested algorithm enables efficient localization of incipient tip vortex cavitation using a few pressure measurements on the outer hull above the propeller and is practically applicable to the model-scale experiments typically performed in a cavitation tunnel at the early design stage.

  19. Retrieval of Aerosol Microphysical Properties from AERONET Photo-Polarimetric Measurements. 2: A New Research Algorithm and Case Demonstration

    NASA Technical Reports Server (NTRS)

    Xu, Xiaoguang; Wang, Jun; Zeng, Jing; Spurr, Robert; Liu, Xiong; Dubovik, Oleg; Li, Li; Li, Zhengqiang; Mishchenko, Michael I.; Siniuk, Aliaksandr

    2015-01-01

    A new research algorithm is presented here as the second part of a two-part study to retrieve aerosol microphysical properties from the multispectral and multiangular photopolarimetric measurements taken by the Aerosol Robotic Network's (AERONET's) new-generation Sun photometer. The algorithm uses an advanced UNified and Linearized Vector Radiative Transfer Model and incorporates a statistical optimization approach. While the new algorithm has heritage from the AERONET operational inversion algorithm in its a priori and smoothness constraints, it has two new features. First, the new algorithm retrieves the effective radius, effective variance, and total volume of aerosols associated with a continuous bimodal particle size distribution (PSD) function, while the AERONET operational algorithm retrieves aerosol volume over 22 size bins. Second, our algorithm retrieves complex refractive indices for both fine and coarse modes, while the AERONET operational algorithm assumes a size-independent aerosol refractive index. Mode-resolved refractive indices can improve the estimate of the single-scattering albedo (SSA) for each aerosol mode and thus facilitate the validation of satellite products and chemistry transport models. We applied the algorithm to a suite of real cases over the Beijing_RADI site and found that our retrievals are overall consistent with AERONET operational inversions but can offer mode-resolved refractive index and SSA with acceptable accuracy for aerosols composed of spherical particles. Along with the retrieval using both radiance and polarization, we also performed a radiance-only retrieval to demonstrate the improvements gained by adding polarization to the inversion. Contrast analysis indicates that with polarization, retrieval error can be reduced by over 50% in PSD parameters, 10-30% in the refractive index, and 10-40% in SSA, which is consistent with the theoretical analysis presented in the companion paper of this two-part study.

  20. Implementation of Autonomous Navigation and Mapping using a Laser Line Scanner on a Tactical Unmanned Aerial Vehicle

    DTIC Science & Technology

    2011-12-01

    study new multi-agent algorithms to avoid collision and obstacles. Others, including Hanford et al. [2], have tried to build low-cost experimental...2007. [2] S. D. Hanford , L. N. Long, and J. F. Horn, “A Small Semi-Autonomous Rotary-Wing Unmanned Air Vehicle ( UAV ),” 2003 AIAA Atmospheric

  1. Bayesian approach to inverse statistical mechanics.

    PubMed

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  2. Bayesian approach to inverse statistical mechanics

    NASA Astrophysics Data System (ADS)

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  3. Automated treatment planning for a dedicated multi-source intracranial radiosurgery treatment unit using projected gradient and grassfire algorithms.

    PubMed

    Ghobadi, Kimia; Ghaffari, Hamid R; Aleman, Dionne M; Jaffray, David A; Ruschin, Mark

    2012-06-01

    The purpose of this work is to develop a framework for the inverse problem of radiosurgery treatment planning on the Gamma Knife® Perfexion™ (PFX) for intracranial targets. The approach taken in the present study consists of two parts. First, a hybrid grassfire and sphere-packing algorithm is used to obtain shot positions (isocenters) based on the geometry of the target to be treated. For the selected isocenters, a sector duration optimization (SDO) model is used to optimize the duration of radiation delivery from each collimator size from each individual source bank. The SDO model is solved using a projected gradient algorithm. This approach has been retrospectively tested on seven manually planned clinical cases (comprising 11 lesions) including acoustic neuromas and brain metastases. In terms of conformity and organ-at-risk (OAR) sparing, the quality of plans achieved with the inverse planning approach was, on average, improved compared to the manually generated plans. The mean difference in conformity index between inverse and forward plans was -0.12 (range: -0.27 to +0.03) and +0.08 (range: 0.00-0.17) for classic and Paddick definitions, respectively, favoring the inverse plans. The mean difference in volume receiving the prescribed dose (V(100)) between forward and inverse plans was 0.2% (range: -2.4% to +2.0%). After plan renormalization for equivalent coverage (i.e., V(100)), the mean difference in dose to 1 mm³ of brainstem between forward and inverse plans was -0.24 Gy (range: -2.40 to +2.02 Gy) favoring the inverse plans. Beam-on time varied with the number of isocenters, but for the most optimal plans it was on average 33 min longer than for the manual plans (range: -17 to +91 min) when normalized to a calibration dose rate of 3.5 Gy/min. In terms of algorithm performance, the isocenter selection for all the presented plans was performed in less than 3 s, while the SDO was performed in an average of 215 min.
PFX inverse planning can be performed using geometric isocenter selection and mathematical modeling and optimization techniques. The obtained treatment plans all meet or exceed clinical guidelines while displaying high conformity. © 2012 American Association of Physicists in Medicine.

  4. On the adequacy of identified Cole-Cole models

    NASA Astrophysics Data System (ADS)

    Xiang, Jianping; Cheng, Daizhan; Schlindwein, F. S.; Jones, N. B.

    2003-06-01

    The Cole-Cole model has been widely used to interpret electrical geophysical data. Normally an iterative computer program is used to invert the frequency domain complex impedance data and simple error estimation is obtained from the squared difference of the measured (field) and calculated values over the full frequency range. Recently a new direct inversion algorithm was proposed for the 'optimal' estimation of the Cole-Cole parameters, which differs from existing inversion algorithms in that the estimated parameters are direct solutions of a set of equations without the need for an initial guess for initialisation. This paper first briefly investigates the advantages and disadvantages of the new algorithm compared to the standard Levenberg-Marquardt "ridge regression" algorithm. Then, and more importantly, we address the adequacy of the models resulting from both the "ridge regression" and the new algorithm, using two different statistical tests, and we give objective statistical criteria for acceptance or rejection of the estimated models. The first is the standard χ2 technique. The second is a parameter-accuracy-based test that uses a joint multi-normal distribution. Numerical results that illustrate the performance of both testing methods are given. The main goals of this paper are (i) to provide the source code for the new "direct inversion" algorithm in Matlab and (ii) to introduce and demonstrate two methods to determine the reliability of a set of data before data processing, i.e., to consider the adequacy of the resulting Cole-Cole model.
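    For reference, the Cole-Cole impedance model being fitted has the form Z(w) = R0 * (1 - m * (1 - 1/(1 + (i w tau)^c))), with chargeability m, time constant tau, and frequency exponent c. A minimal sketch evaluating its limiting behaviour follows; the parameter values are illustrative, not from the paper.

```python
def cole_cole(w, R0, m, tau, c):
    """Complex Cole-Cole impedance at angular frequency w."""
    return R0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * w * tau) ** c)))

R0, m, tau, c = 100.0, 0.5, 0.01, 0.7
lo = cole_cole(1e-6, R0, m, tau, c)   # low-frequency limit -> R0
hi = cole_cole(1e9, R0, m, tau, c)    # high-frequency limit -> R0*(1-m)
```

Fitting, whether by Levenberg-Marquardt or by the direct method, amounts to matching this complex-valued function to the measured impedance spectrum over the full frequency range.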

  5. Iterative updating of model error for Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Calvetti, Daniela; Dunlop, Matthew; Somersalo, Erkki; Stuart, Andrew

    2018-02-01

    In computational inverse problems, it is common that a detailed and accurate forward model is approximated by a computationally less challenging substitute. The model reduction may be necessary to meet constraints in computing time when optimization algorithms are used to find a single estimate, or to speed up Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use of an approximate model introduces a discrepancy, or modeling error, that may have a detrimental effect on the solution of the ill-posed inverse problem, or it may severely distort the estimate of the posterior distribution. In the Bayesian paradigm, the modeling error can be considered as a random variable, and by using an estimate of the probability distribution of the unknown, one may estimate the probability distribution of the modeling error and incorporate it into the inversion. We introduce an algorithm which iterates this idea to update the distribution of the model error, leading to a sequence of posterior distributions that are demonstrated empirically to capture the underlying truth with increasing accuracy. Since the algorithm is not based on rejections, it requires only limited full model evaluations. We show analytically that, in the linear Gaussian case, the algorithm converges geometrically fast with respect to the number of iterations when the data is finite dimensional. For more general models, we introduce particle approximations of the iteratively generated sequence of distributions; we also prove that each element of the sequence converges in the large particle limit under a simplifying assumption. We show numerically that, as in the linear case, rapid convergence occurs with respect to the number of iterations. Additionally, we show through computed examples that point estimates obtained from this iterative algorithm are superior to those obtained by neglecting the model error.

  6. 3-D Inversion of the MT EarthScope Data, Collected Over the East Central United States

    NASA Astrophysics Data System (ADS)

    Gribenko, A. V.; Zhdanov, M. S.

    2017-12-01

    The magnetotelluric (MT) data collected as a part of the EarthScope project provided a unique opportunity to study the conductivity structure of the deep interior of the North American continent. Besides the scientific value of the recovered subsurface models, the data also allowed inversion practitioners to test the robustness of their algorithms applied to regional long-period data. In this paper, we present the results of MT inversion of a subset of the second footprint of the MT data collection covering the East Central United States. Our inversion algorithm implements simultaneous inversion of the full MT impedance data both for the 3-D conductivity distribution and for the distortion matrix. The distortion matrix provides the means to account for the effect of the near-surface geoelectrical inhomogeneities on the MT data. The long-period data do not have the resolution for the small near-surface conductivity anomalies, which makes an application of the distortion matrix especially appropriate. The determined conductivity model of the region agrees well with the known geologic and tectonic features of the East Central United States. The conductivity anomalies recovered by our inversion indicate a possible presence of the hot spot track in the area.

  7. Mathematics of Computed Tomography

    NASA Astrophysics Data System (ADS)

    Hawkins, William Grant

    A review of the applications of the Radon transform is presented, with emphasis on emission computed tomography and transmission computed tomography. The theory of the 2D and 3D Radon transforms, and the effects of attenuation for emission computed tomography, are presented. The algebraic iterative methods, their importance and limitations, are reviewed. Analytic solutions of the 2D problem (the convolution and frequency filtering methods based on linear shift-invariant theory, and the solution of the circular harmonic decomposition by integral transform theory) are reviewed. The relation between the invisible kernels, the inverse circular harmonic transform, and the consistency conditions is demonstrated. The discussion and review are extended to the 3D problem: convolution, frequency filtering, spherical harmonic transform solutions, and consistency conditions. The Cormack algorithm based on reconstruction with Zernike polynomials is reviewed. An analogous algorithm and set of reconstruction polynomials is developed for the spherical harmonic transform. The relations between the consistency conditions, boundary conditions, and orthogonal basis functions for the 2D projection harmonics are delineated and extended to the 3D case. The equivalence of the inverse circular harmonic transform, the inverse Radon transform, and the inverse Cormack transform is presented. The use of the number of nodes of a projection harmonic as a filter is discussed. Numerical methods for the efficient implementation of angular harmonic algorithms based on orthogonal functions and stable recursion are presented. A lower bound for the signal-to-noise ratio of the Cormack algorithm is derived.
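    The consistency conditions mentioned above have a simple zeroth-order instance: the integral of a Radon projection over the detector coordinate is independent of the projection angle. A toy discrete check using row and column sums (the 0 and 90 degree parallel projections) of a small image:

```python
image = [[0, 1, 2],
         [3, 4, 5],
         [1, 0, 2]]

proj_0 = [sum(row) for row in image]                        # project along x
proj_90 = [sum(row[j] for row in image) for j in range(3)]  # project along y

# zeroth moment of the projections is angle-independent
total_0, total_90 = sum(proj_0), sum(proj_90)
```

Both totals equal the total mass of the image; higher-order moment conditions constrain the angular dependence of higher moments of the projections in the same spirit.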

  8. Evaluating the design of satellite scanning radiometers for earth radiation budget measurements with system simulations. Part 1: Instantaneous estimates

    NASA Technical Reports Server (NTRS)

    Stowe, Larry; Ardanuy, Philip; Hucek, Richard; Abel, Peter; Jacobowitz, Herbert

    1991-01-01

    A set of system simulations was performed to evaluate candidate scanner configurations to fly as part of the Earth Radiation Budget Instrument (ERBI) on the polar platforms during the 1990s. The simulation considered instantaneous sampling (without diurnal averaging) of the longwave and shortwave fluxes at the top of the atmosphere (TOA). After measurement and subsequent inversion to the TOA, the measured fluxes were compared to the reference fluxes for 2.5 deg lat/long resolution targets. The reference fluxes at this resolution are obtained by integrating over the 25 x 25 = 625 grid elements in each target. The differences (errors) between these two resultant spatially averaged sets of target measurements are taken and then statistically summarized. Five instruments are considered: (1) the Conically Scanning Radiometer (CSR); (2) the ERBE Cross Track Scanner; (3) the Nimbus-7 Biaxial Scanner; (4) the Clouds and Earth's Radiant Energy System Instrument (CERES-1); and (5) the Active Cavity Array (ACA). Identical studies of instantaneous error were completed for many days, two seasons, and several satellite equator-crossing longitudes. The longwave flux errors were found to have the same space and time characteristics as the shortwave flux errors, but are only about 25 percent as large.

  9. Linear feasibility algorithms for treatment planning in interstitial photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Rendon, A.; Beck, J. C.; Lilge, Lothar

    2008-02-01

    Interstitial photodynamic therapy (IPDT) has been under intense investigation in recent years, with multiple clinical trials underway. This effort has demanded the development of optimization strategies that determine the best locations and output powers for light sources (cylindrical or point diffusers) to achieve an optimal light delivery. Furthermore, we have recently introduced cylindrical diffusers with customizable emission profiles, placing additional requirements on the optimization algorithms, particularly in terms of the stability of the inverse problem. Here, we present a general class of linear feasibility algorithms and their properties. Moreover, we compare two particular instances of these algorithms which have been used in the context of IPDT: the Cimmino algorithm and a weighted gradient descent (WGD) algorithm. The algorithms were compared in terms of their convergence properties, the cost function they minimize in the infeasible case, their ability to regularize the inverse problem, and the resulting optimal light dose distributions. Our results show that the WGD algorithm overall performs slightly better than the Cimmino algorithm and that it converges to a minimizer of a clinically relevant cost function in the infeasible case. Interestingly, however, treatment plans resulting from either algorithm were very similar in terms of the resulting fluence maps and dose volume histograms, once the diffuser powers were adjusted to achieve equal prostate coverage.
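    The Cimmino algorithm referred to above proceeds by projecting the current iterate onto every constraint simultaneously and moving to a weighted average of those projections. Below is a minimal sketch for half-space constraints a.x <= b with equal weights; the two toy constraints stand in for clinically derived dose bounds.

```python
def cimmino_step(x, constraints, lam=1.0):
    """One Cimmino iteration: average the projections of x onto the
    half-spaces a.x <= b, with equal weights and relaxation lam."""
    m = len(constraints)
    move = [0.0] * len(x)
    for a, b in constraints:
        viol = sum(ai * xi for ai, xi in zip(a, x)) - b
        if viol > 0:                      # project onto violated half-space
            nrm2 = sum(ai * ai for ai in a)
            for j in range(len(x)):
                move[j] += -viol / nrm2 * a[j] / m
    return [xi + lam * mi for xi, mi in zip(x, move)]

# toy "dose constraints": x + y <= 1 and x - y <= 1
cons = [([1.0, 1.0], 1.0), ([1.0, -1.0], 1.0)]
x = [2.0, 2.0]
for _ in range(100):
    x = cimmino_step(x, cons)
```

Because every projection is computed from the same iterate, the step is trivially parallel over constraints, which is one reason this family is attractive for the large constraint sets that arise in dose planning. In the feasible case the iterates converge to a point satisfying all constraints; in the infeasible case they minimize a weighted sum of squared violations.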

  10. Laterally constrained inversion for CSAMT data interpretation

    NASA Astrophysics Data System (ADS)

    Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun

    2015-10-01

    Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM, and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this can improve the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method recovers the true model better than the traditional single-station inversion; for the noisy data, the true model is recovered even at a noise level of 8%, indicating that LCI inversions are to some extent insensitive to noise. Then, we re-invert two CSAMT datasets collected respectively in a watershed and a coal mine area in Northern China and compare our results with those from previous inversions. The comparison with the previous inversion in the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global search algorithm, simulated annealing (SA), in the watershed shows that although both methods deliver very similar good results, the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.

  11. Rapid execution of fan beam image reconstruction algorithms using efficient computational techniques and special-purpose processors

    NASA Astrophysics Data System (ADS)

    Gilbert, B. K.; Robb, R. A.; Chu, A.; Kenue, S. K.; Lent, A. H.; Swartzlander, E. E., Jr.

    1981-02-01

    Rapid advances during the past ten years of several forms of computer-assisted tomography (CT) have resulted in the development of numerous algorithms to convert raw projection data into cross-sectional images. These reconstruction algorithms are either 'iterative,' in which a large matrix algebraic equation is solved by successive approximation techniques; or 'closed form'. Continuing evolution of the closed form algorithms has allowed the newest versions to produce excellent reconstructed images in most applications. This paper will review several computer software and special-purpose digital hardware implementations of closed form algorithms, either proposed during the past several years by a number of workers or actually implemented in commercial or research CT scanners. The discussion will also cover a number of recently investigated algorithmic modifications which reduce the amount of computation required to execute the reconstruction process, as well as several new special-purpose digital hardware implementations under development in laboratories at the Mayo Clinic.

  12. Improved Genetic Algorithm Based on the Cooperation of Elite and Inverse-elite

    NASA Astrophysics Data System (ADS)

    Kanakubo, Masaaki; Hagiwara, Masafumi

    In this paper, we propose an improved genetic algorithm based on the combination of the Bee system and Inverse-elitism, both of which are effective strategies for improving GAs. In the Bee system, each chromosome initially tries to find a good solution individually, as a global search. When some chromosome is regarded as a superior one, the other chromosomes search around it. However, since the chromosomes for global search are generated randomly, the Bee system lacks global search ability. In the Inverse-elitism strategy, on the other hand, an inverse-elite whose gene values are reversed from those of the corresponding elite is produced. This strategy contributes greatly to the diversification of chromosomes, but it lacks local search ability. In the proposed method, Inverse-elitism with a pseudo-simplex method is employed for the global search of the Bee system in order to strengthen the global search ability, while strong local search ability is retained. The proposed method thus has the synergistic effects of the three strategies. We confirmed the validity and superior performance of the proposed method by computer simulations.
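    The interplay of the two strategies can be sketched as follows (a toy one-max illustration; the fitness function, chromosome length and mutation rate are placeholders, not the paper's):

```python
import random

random.seed(1)

# Toy illustration: binary chromosomes, one-max fitness (stand-ins).

def fitness(c):
    return sum(c)                      # stand-in objective

def inverse_elite(elite):
    # Inverse-elitism: reverse every gene value of the elite,
    # jumping to the opposite corner of the search space.
    return [1 - g for g in elite]

def local_search(elite, n=5, p_mut=0.2):
    # Bee-system style: sample neighbours around the superior chromosome.
    return [[1 - g if random.random() < p_mut else g for g in elite]
            for _ in range(n)]

elite = [1, 0, 1, 1, 0, 0, 1, 0]
population = [inverse_elite(elite)] + local_search(elite)
best = max(population + [elite], key=fitness)
```

The inverse-elite supplies diversification (global search) while the neighbourhood samples supply intensification (local search), which is the cooperation the abstract describes.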

  13. Full-Physics Inverse Learning Machine for Satellite Remote Sensing of Ozone Profile Shapes and Tropospheric Columns

    NASA Astrophysics Data System (ADS)

    Xu, J.; Heue, K.-P.; Coldewey-Egbers, M.; Romahn, F.; Doicu, A.; Loyola, D.

    2018-04-01

    Characterizing the vertical distribution of ozone from nadir-viewing satellite measurements is known to be challenging, particularly for ozone information in the troposphere. A novel retrieval algorithm called the Full-Physics Inverse Learning Machine (FP-ILM) has been developed at DLR to estimate ozone profile shapes using machine learning techniques. In contrast to traditional inversion methods, the FP-ILM algorithm formulates the profile shape retrieval as a classification problem. Its implementation comprises a training phase, in which an inverse function is derived from synthetic measurements, and an operational phase, in which the inverse function is applied to real measurements. This paper extends the FP-ILM retrieval to derive tropospheric ozone columns from GOME-2 measurements. Results for total and tropical tropospheric ozone columns are compared with those from the official GOME Data Processing (GDP) product and the convective-cloud-differential (CCD) method, respectively. Furthermore, the FP-ILM framework will be used for the near-real-time processing of the new European Sentinel sensors, with their unprecedented spectral and spatial resolution and corresponding large increases in data volume.

  14. An order (n) algorithm for the dynamics simulation of robotic systems

    NASA Technical Reports Server (NTRS)

    Chun, H. M.; Turner, J. D.; Frisch, Harold P.

    1989-01-01

    The formulation of an Order(n) algorithm for DISCOS (Dynamics Interaction Simulation of Controls and Structures), an industry-standard software package for the simulation and analysis of flexible multibody systems, is presented. For systems involving many bodies, the new Order(n) version of DISCOS is much faster than the current version. Results of the experimental validation of the dynamics software are also presented. The experiment was carried out on a seven-joint robot arm at NASA's Goddard Space Flight Center. The algorithm used in the current version of DISCOS requires the inverse of a matrix whose dimension equals the number of constraints in the system. Generally, the number of constraints is roughly proportional to the number of bodies, and matrix inversion requires O(p^3) operations, where p is the dimension of the matrix. The current version of DISCOS is therefore considered an Order(n^3) algorithm. In contrast, the Order(n) algorithm requires the inversion of matrices which are small, and the number of matrices to be inverted increases only linearly with the number of bodies. The newly developed Order(n) DISCOS is currently capable of handling chain and tree topologies as well as multiple closed loops. Continuing development will extend the capability of the software to deal with typical robotics applications such as pick-and-place, multi-arm hand-off and surface sliding.
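    The complexity argument can be made concrete with a simple cost model (unit-cost multiplications; the constants below, 3 constraints per body and fixed 6x6 body-level matrices, are illustrative assumptions, not DISCOS internals):

```python
# Cost-model sketch of the complexity claim: one large O(p^3) inversion
# versus n small fixed-size inversions. Constants are assumptions.
def cost_global(n, c_per_body=3):
    p = c_per_body * n        # constraint-matrix dimension grows with bodies
    return p ** 3             # one large O(p^3) inversion

def cost_order_n(n, k=6):
    return n * k ** 3         # n small k x k inversions: linear in n

ratio_10 = cost_global(10) / cost_order_n(10)     # modest gain for few bodies
ratio_100 = cost_global(100) / cost_order_n(100)  # gap widens as n^2
```

Under this model the advantage of the Order(n) formulation grows quadratically with the number of bodies, which is why the speed-up is most dramatic for systems with many bodies.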

  15. New in-flight calibration adjustment of the Nimbus 6 and 7 earth radiation budget wide field of view radiometers

    NASA Technical Reports Server (NTRS)

    Kyle, H. L.; House, F. B.; Ardanuy, P. E.; Jacobowitz, H.; Maschhoff, R. H.; Hickey, J. R.

    1984-01-01

    In-flight calibration adjustments are developed to process data obtained from the wide-field-of-view channels of Nimbus-6 and Nimbus-7 after the failure of the Nimbus-7 longwave scanner on June 22, 1980. The sensor characteristics are investigated; the satellite environment is examined in detail; and algorithms are constructed to correct for long-term sensor-response changes, on/off-cycle thermal transients, and filter-dome absorption of longwave radiation. Data and results are presented in graphs and tables, including comparisons of the old and new algorithms.

  16. Simultaneous elastic parameter inversion in 2-D/3-D TTI medium combined later arrival times

    NASA Astrophysics Data System (ADS)

    Bai, Chao-ying; Wang, Tao; Yang, Shang-bei; Li, Xing-wang; Huang, Guo-jiao

    2016-04-01

    Traditional traveltime inversion for anisotropic media is, in general, based on a "weak anisotropy" assumption, which simplifies both the forward part (ray tracing is performed only once) and the inversion part (a linear inversion solver is possible). For some real applications, however, a general (both "weak" and "strong") anisotropic medium should be considered. In such cases, one has to develop a ray tracing algorithm that handles general (including "strong") anisotropy and to design a non-linear inversion solver for the subsequent tomography. Meanwhile, it is instructive to investigate how much the tomographic resolution can be improved by introducing the later arrivals. Motivated by this, we combined our newly developed ray tracing algorithm for general anisotropic media (a multistage irregular shortest-path method) with a non-linear inversion solver (a damped minimum-norm, constrained least-squares problem solved with a conjugate gradient approach) to formulate a non-linear traveltime inversion scheme for anisotropic media. This anisotropic traveltime inversion procedure is able to incorporate the later (reflected) arrival times. Both 2-D/3-D synthetic inversion experiments and comparison tests show that (1) the proposed anisotropic traveltime inversion scheme is able to recover high-contrast anomalies and (2) it is possible to improve the tomographic resolution by introducing the later (reflected) arrivals, though not as much as in the isotropic case, because the sensitivities (derivatives) of the different velocities (qP, qSV and qSH) with respect to the different elastic parameters are not the same and also depend on the inclination angle.

  17. Software platform for simulation of a prototype proton CT scanner.

    PubMed

    Giacometti, Valentina; Bashkirov, Vladimir A; Piersimoni, Pierluigi; Guatelli, Susanna; Plautz, Tia E; Sadrozinski, Hartmut F-W; Johnson, Robert P; Zatserklyaniy, Andriy; Tessonnier, Thomas; Parodi, Katia; Rosenfeld, Anatoly B; Schulte, Reinhard W

    2017-03-01

    Proton computed tomography (pCT) is a promising imaging technique to replace or at least complement x-ray CT for more accurate proton therapy treatment planning, as it allows the proton relative stopping power to be calculated directly from proton energy loss measurements. A proton CT scanner with a silicon-based particle tracking system and a five-stage scintillating energy detector has been completed. In parallel, a modular software platform was developed to characterize the performance of the proposed pCT scanner. The modular pCT software platform consists of (1) a Geant4-based simulation modeling the Loma Linda proton therapy beam line and the prototype proton CT scanner, (2) a water equivalent path length (WEPL) calibration of the scintillating energy detector, and (3) an image reconstruction algorithm for reconstructing the relative stopping power (RSP) of the scanned object. In this work, each component of the modular pCT software platform is described and validated with respect to experimental data and benchmarked against theoretical predictions. In particular, the RSP reconstruction was validated with experimental scans, water column measurements, and theoretical calculations. The results show that the pCT software platform accurately reproduces the performance of the existing prototype pCT scanner, with RSP agreement between experimental and simulated values to better than 1.5%. The validated platform is a versatile tool for clinical proton CT performance and application studies in a virtual setting. The platform is flexible and can be modified to simulate not-yet-existing versions of pCT scanners and higher proton energies than those currently clinically available. © 2017 American Association of Physicists in Medicine.

  18. Frequency-domain elastic full waveform inversion using encoded simultaneous sources

    NASA Astrophysics Data System (ADS)

    Jeong, W.; Son, W.; Pyun, S.; Min, D.

    2011-12-01

    Numerous studies have endeavored to develop robust full waveform inversion and migration algorithms. These processes incur enormous computational costs because of the number of sources in the survey. To alleviate this problem, Romero (2000) proposed the phase encoding technique for prestack migration, and Krebs et al. (2009) proposed an encoded simultaneous-source inversion technique in the time domain. Ben-Hadj-Ali et al. (2011), in turn, demonstrated the robustness of frequency-domain full waveform inversion with simultaneous sources for noisy data by changing the source assembly. Although several studies on simultaneous-source inversion have estimated P-wave velocity based on the acoustic wave equation, seismic migration and waveform inversion based on the elastic wave equations are required to obtain more reliable subsurface information. In this study, we propose a 2-D frequency-domain elastic full waveform inversion technique using phase encoding methods. In our algorithm, the random phase encoding method is employed to calculate the gradients of the elastic parameters, the source signature estimate and the diagonal entries of the approximate Hessian matrix. The crosstalk in the estimated source signature and in the diagonal entries of the approximate Hessian matrix is suppressed over iterations, as it is for the gradients. Our 2-D frequency-domain elastic waveform inversion algorithm is built on the back-propagation technique and the conjugate-gradient method, and the source signature is estimated using the full Newton method. We compare the simultaneous-source inversion with conventional waveform inversion for synthetic data sets of the Marmousi-2 model. The results obtained with simultaneous sources are comparable to those obtained with individual sources, and the source signature is successfully estimated in the simultaneous-source technique.
Comparing the inverted results using the pseudo-Hessian matrix with previous inversion results obtained with the approximate Hessian matrix, we note that the latter are better than the former for the deeper parts of the model. This work was financially supported by the Brain Korea 21 project of Energy System Engineering, by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0006155), and by the Energy Efficiency & Resources program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2010T100200133).
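    The crosstalk-suppression mechanism behind random phase encoding can be sketched with +/-1 source codes (a simplified stand-in for the frequency-domain wavefields; all dimensions are arbitrary): one encoded "supergather" replaces one simulation per source, and decoding with a source's own code recovers its data while the contributions of the other sources average toward zero over random realizations.

```python
import numpy as np

# Sketch of random (+/-1) source encoding and decoding. The wavefields
# are random stand-ins, not modeled elastic data.
rng = np.random.default_rng(42)
n_src, n_rec = 8, 64
wavefields = rng.normal(size=(n_src, n_rec))   # per-source data (stand-ins)

trials = 2000
accum = np.zeros(n_rec)
for _ in range(trials):
    codes = rng.choice([-1.0, 1.0], size=n_src)
    supergather = codes @ wavefields           # single encoded simulation
    accum += codes[0] * supergather            # decode source 0
estimate = accum / trials

# crosstalk from the other 7 sources shrinks as 1/sqrt(trials)
rel_err = np.linalg.norm(estimate - wavefields[0]) / np.linalg.norm(wavefields[0])
```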

  19. Real-time 1-D/2-D transient elastography on a standard ultrasound scanner using mechanically induced vibration.

    PubMed

    Azar, Reza Zahiri; Dickie, Kris; Pelissier, Laurent

    2012-10-01

    Transient elastography has been well established in the literature as a means of assessing the elasticity of soft tissue. In this technique, tissue elasticity is estimated by studying the propagation of transient shear waves induced by an external or internal source of vibration. Previous studies have focused mainly on custom single-element transducers and ultrafast scanners, which are not available in a typical clinical setup. In this work, we report the design and implementation of a transient elastography system on a standard ultrasound scanner that enables quantitative assessment of tissue elasticity in real time. Two new custom imaging modes are introduced that enable the system to image the axial component of the transient shear wave, in response to an externally induced vibration, in both 1-D and 2-D. Elasticity reconstruction algorithms that estimate tissue elasticity from these transient waves are also presented. Simulation results are provided to show the advantages and limitations of the proposed system. The performance of the system is also validated experimentally using a commercial elasticity phantom.

  20. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. A parallel processing architecture is an effective way to reduce computation time. Such an architecture is developed here for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipelined/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.
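    The computation being parallelized can be stated in a few lines (a minimal sketch: the 6x6 Jacobian below is a random stand-in, not the PUMA arm's):

```python
import numpy as np

# Inverse differential kinematics step: solve J(q) * qdot = xdot for
# the joint rates, given the end-effector twist xdot.
rng = np.random.default_rng(11)
J = rng.normal(size=(6, 6))                          # stand-in manipulator Jacobian
xdot = np.array([0.1, 0.0, -0.05, 0.0, 0.02, 0.0])   # desired end-effector twist
qdot = np.linalg.solve(J, xdot)   # solve rather than forming J**-1 explicitly
```

On a serial machine this solve is the bottleneck repeated at every control cycle, which is what motivates mapping it onto a systolic array.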

  1. Cross Correlations for Two-Dimensional Geosynchronous Satellite Imagery Data,

    DTIC Science & Technology

    1980-04-01

    transform of f(x), g(x,u) is the forward transformation kernel, and u assumes values in the range 0, 1, ..., N-1. Similarly, the inverse transform is given...transform for values of u and v in the range, 0, 1, 2, ..., N-1. To obtain the inverse transform we pre-multiply and post-multiply Eq. (5-7) by an inverse...any algorithm for computing the forward transform can be used directly to obtain the inverse transform simply by multiplying the result of the

  2. Computational structures for robotic computations

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chang, P. R.

    1987-01-01

    The computational problem of the inverse kinematics and inverse dynamics of robot manipulators is discussed, taking advantage of parallelism and pipelining architectures. For the computation of the inverse kinematic position solution, a maximally pipelined CORDIC architecture has been designed based on a functional decomposition of the closed-form joint equations. For the inverse dynamics computation, an efficient p-fold parallel algorithm that overcomes the recurrence problem of the Newton-Euler equations of motion to achieve the time lower bound of O(log₂ n) has also been developed.

  3. Numerical methods for the inverse problem of density functional theory

    DOE PAGES

    Jensen, Daniel S.; Wasserman, Adam

    2017-07-17

    Here, the inverse problem of Kohn–Sham density functional theory (DFT) is often solved in an effort to benchmark and design approximate exchange-correlation potentials. The forward and inverse problems of DFT rely on the same equations but the numerical methods for solving each problem are substantially different. We examine both problems in this tutorial with a special emphasis on the algorithms and error analysis needed for solving the inverse problem. Two inversion methods based on partial differential equation constrained optimization and constrained variational ideas are introduced. We compare and contrast several different inversion methods applied to one-dimensional finite and periodic model systems.

  4. Numerical methods for the inverse problem of density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Daniel S.; Wasserman, Adam

    Here, the inverse problem of Kohn–Sham density functional theory (DFT) is often solved in an effort to benchmark and design approximate exchange-correlation potentials. The forward and inverse problems of DFT rely on the same equations but the numerical methods for solving each problem are substantially different. We examine both problems in this tutorial with a special emphasis on the algorithms and error analysis needed for solving the inverse problem. Two inversion methods based on partial differential equation constrained optimization and constrained variational ideas are introduced. We compare and contrast several different inversion methods applied to one-dimensional finite and periodic model systems.

  5. Utilization of high-frequency Rayleigh waves in near-surface geophysics

    USGS Publications Warehouse

    Xia, J.; Miller, R.D.; Park, C.B.; Ivanov, J.; Tian, G.; Chen, C.

    2004-01-01

    Shear-wave velocities can be derived by inverting the dispersive phase velocity of surface waves. The multichannel analysis of surface waves (MASW) is one technique for inverting high-frequency Rayleigh waves. The process includes acquisition of high-frequency broad-band Rayleigh waves, efficient and accurate algorithms designed to extract Rayleigh-wave dispersion curves, and stable and efficient inversion algorithms to obtain near-surface S-wave velocity profiles. MASW estimates S-wave velocity from multichannel vertical-component data and consists of data acquisition, dispersion-curve picking, and inversion.

  6. Solving Large-Scale Inverse Magnetostatic Problems using the Adjoint Method

    PubMed Central

    Bruckner, Florian; Abert, Claas; Wautischer, Gregor; Huber, Christian; Vogler, Christoph; Hinze, Michael; Suess, Dieter

    2017-01-01

    An efficient algorithm for the reconstruction of the magnetization state within magnetic components is presented. The occurring inverse magnetostatic problem is solved by means of an adjoint approach, based on the Fredkin-Koehler method for the solution of the forward problem. Due to the use of hybrid FEM-BEM coupling combined with matrix compression techniques, the resulting algorithm is well suited for large-scale problems. Furthermore, the reconstruction of the magnetization state within a permanent magnet as well as an optimal design application are demonstrated. PMID:28098851

  7. A necessary condition for applying MUSIC algorithm in limited-view inverse scattering problem

    NASA Astrophysics Data System (ADS)

    Park, Taehoon; Park, Won-Kwang

    2015-09-01

    Numerical simulations have repeatedly shown that the MUltiple SIgnal Classification (MUSIC) algorithm can be applied to limited-view inverse scattering problems; however, the application has remained somewhat heuristic. In this contribution, we identify a necessary condition for MUSIC imaging of a collection of small, perfectly conducting cracks. It is based on the fact that the MUSIC imaging functional can be represented as an infinite series of Bessel functions of integer order of the first kind. Numerical experiments with noisy synthetic data support our investigation.
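    A toy full-view MUSIC example illustrates the imaging functional the condition concerns (sensor layout, wavenumber and target positions are arbitrary stand-ins; the limited-view subtleties analyzed in the paper are not reproduced here):

```python
import numpy as np

# Toy MUSIC sketch: locate two point "cracks" from a multistatic
# response matrix via the noise-subspace projection.
k = 2 * np.pi                          # wavenumber (wavelength 1)
sensors = np.linspace(-5.0, 5.0, 21)
targets = np.array([-1.3, 2.1])

def steering(x):
    r = np.abs(sensors - x) + 1e-9     # avoid division by zero
    return np.exp(1j * k * r) / np.sqrt(r)

A = np.stack([steering(x) for x in targets], axis=1)
K = A @ A.T                            # Born-approximation response matrix
U, s, _ = np.linalg.svd(K)
Un = U[:, 2:]                          # noise subspace (signal rank = 2 targets)

grid = np.linspace(-4.0, 4.0, 801)
spectrum = np.array([1.0 / np.linalg.norm(Un.conj().T @ steering(x)) for x in grid])
peak = grid[np.argmax(spectrum)]       # sharp maxima at the target positions
```

The imaging functional blows up wherever the steering vector lies in the signal subspace; the paper's necessary condition characterizes when this still holds with limited-view data.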

  8. Evaluation of an Inverse Molecular Design Algorithm in a Model Binding Site

    PubMed Central

    Huggins, David J.; Altman, Michael D.; Tidor, Bruce

    2008-01-01

    Computational molecular design is a useful tool in modern drug discovery. Virtual screening is an approach that docks and then scores individual members of compound libraries. In contrast to this forward approach, inverse approaches construct compounds from fragments, such that the computed affinity, or a combination of relevant properties, is optimized. We have recently developed a new inverse approach to drug design based on the dead-end elimination and A* algorithms employing a physical potential function. This approach has been applied to combinatorially constructed libraries of small-molecule ligands to design high-affinity HIV-1 protease inhibitors [M. D. Altman et al. J. Am. Chem. Soc. 130: 6099–6013, 2008]. Here we have evaluated the new method using the well studied W191G mutant of cytochrome c peroxidase. This mutant possesses a charged binding pocket and has been used to evaluate other design approaches. The results show that overall the new inverse approach does an excellent job of separating binders from non-binders. For a few individual cases, scoring inaccuracies led to false positives. The majority of these involve erroneous solvation energy estimation for charged amines, anilinium ions and phenols, which has been observed previously for a variety of scoring algorithms. Interestingly, although inverse approaches are generally expected to identify some but not all binders in a library, due to limited conformational searching, these results show excellent coverage of the known binders while still showing strong discrimination of the non-binders. PMID:18831031

  9. Evaluation of an inverse molecular design algorithm in a model binding site.

    PubMed

    Huggins, David J; Altman, Michael D; Tidor, Bruce

    2009-04-01

    Computational molecular design is a useful tool in modern drug discovery. Virtual screening is an approach that docks and then scores individual members of compound libraries. In contrast to this forward approach, inverse approaches construct compounds from fragments, such that the computed affinity, or a combination of relevant properties, is optimized. We have recently developed a new inverse approach to drug design based on the dead-end elimination and A* algorithms employing a physical potential function. This approach has been applied to combinatorially constructed libraries of small-molecule ligands to design high-affinity HIV-1 protease inhibitors (Altman et al., J Am Chem Soc 2008;130:6099-6013). Here we have evaluated the new method using the well-studied W191G mutant of cytochrome c peroxidase. This mutant possesses a charged binding pocket and has been used to evaluate other design approaches. The results show that overall the new inverse approach does an excellent job of separating binders from nonbinders. For a few individual cases, scoring inaccuracies led to false positives. The majority of these involve erroneous solvation energy estimation for charged amines, anilinium ions, and phenols, which has been observed previously for a variety of scoring algorithms. Interestingly, although inverse approaches are generally expected to identify some but not all binders in a library, due to limited conformational searching, these results show excellent coverage of the known binders while still showing strong discrimination of the nonbinders. (c) 2008 Wiley-Liss, Inc.

  10. A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization

    NASA Astrophysics Data System (ADS)

    Liu, Shuang; Hu, Xiangyun; Liu, Tianyou

    2014-07-01

    Simulating the foraging behavior of natural ants, the ant colony optimization (ACO) algorithm performs excellently on combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, the ACO is seldom used to invert gravity and magnetic data. On the basis of a continuous, multi-dimensional objective function for potential field data inversion, we present the node partition strategy ACO (NP-ACO) algorithm for the inversion of model variables of fixed shape and the recovery of physical property distributions of models of complicated shape. We divide the continuous variables into discrete nodes, and ants tour the nodes directionally according to transition probabilities. We update the pheromone trails using a Gaussian mapping between the objective function value and the quantity of pheromone. This allows the search results to be analyzed in real time and improves the rate of convergence and the precision of the inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method on synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary-representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. The ACO has good optimization capability and some excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.
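    The node-partition idea can be sketched on a toy two-parameter problem (the misfit function, node counts and reward mapping below are illustrative assumptions, not a potential-field forward model):

```python
import numpy as np

# Sketch of node-partition ACO: each continuous model variable is
# discretized into nodes; ants pick one node per variable with
# pheromone-weighted probabilities, and pheromone is reinforced through
# a Gaussian-style mapping of the objective value.
rng = np.random.default_rng(7)
true_m = np.array([2.0, -1.0])

def misfit(m):
    return float(np.sum((m - true_m) ** 2))   # stand-in objective

nodes = np.linspace(-3.0, 3.0, 61)     # node partition of each variable
tau = np.ones((2, nodes.size))         # pheromone trails

best_m, best_f = None, np.inf
for _ in range(60):                    # iterations
    for _ in range(20):                # ants per iteration
        idx = [rng.choice(nodes.size, p=t / t.sum()) for t in tau]
        m = nodes[idx]
        f = misfit(m)
        if f < best_f:
            best_m, best_f = m.copy(), f
        for v, i in enumerate(idx):
            tau[v, i] += np.exp(-f)    # larger reward for smaller misfit
    tau *= 0.9                         # evaporation
```

Pheromone concentrates on the nodes of low-misfit models, steering later ants toward the optimum while evaporation keeps the search from freezing prematurely.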

  11. Inverse estimation of the spheroidal particle size distribution using Ant Colony Optimization algorithms in multispectral extinction technique

    NASA Astrophysics Data System (ADS)

    He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming

    2014-10-01

    Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability-density-function-based ACO (PDF-ACO) algorithm, the region ACO (RACO) algorithm, the stochastic ACO (SACO) algorithm and the homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R), normal (N-N), and logarithmic normal (L-N) distribution functions, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only the variation of the length of the rotational semi-axis is considered.

  12. Adaptive Filtering in the Wavelet Transform Domain via Genetic Algorithms

    DTIC Science & Technology

    2004-08-06

    wavelet transforms. Whereas the term “evolved” pertains only to the altered wavelet coefficients used during the inverse transform process. 2...words, the inverse transform produces the original signal x(t) from the wavelet and scaling coefficients, x(t) = Σ_k Σ_n d_{k,n} ψ_{k,n}(t)...reconstruct the original signal as accurately as possible. The inverse transform reconstructs an approximation of the original signal (Burrus

  13. 11.2 YIP Human In the Loop Statistical Relational Learners

    DTIC Science & Technology

    2017-10-23

    learning formalisms including inverse reinforcement learning [4] and statistical relational learning [7, 5, 8]. We have also applied our algorithms in...one introduced for label preferences. 4 Figure 2: Active Advice Seeking for Inverse Reinforcement Learning. active advice seeking is in selecting the...learning tasks. 1.2.1 Sequential Decision-Making Our previous work on advice for inverse reinforcement learning (IRL) defined advice as action

  14. 2D Inversion of Transient Electromagnetic Method (TEM)

    NASA Astrophysics Data System (ADS)

    Bortolozo, Cassiano Antonio; Luís Porsani, Jorge; Acácio Monteiro dos Santos, Fernando

    2017-04-01

    A new methodology was developed for 2D inversion of transient electromagnetic method (TEM) data. The methodology consists of a set of Matlab routines for the modeling and inversion of TEM data and the determination of the most efficient field array for the problem. In this research, the 2D TEM modeling uses a finite-difference discretization. To solve the inversion problem, we applied an algorithm based on the Marquardt technique, also known as ridge regression. The algorithm is stable and efficient, and it is widely used in geoelectrical inversion problems. The main advantage of a 1D survey is rapid data acquisition over a large area, but in regions with two-dimensional structures, or where more detail is needed, two-dimensional interpretation methodologies are essential. For efficient field acquisition we used the fixed-loop array in an innovative way, with a square transmitter loop (200 m x 200 m) and 25 m spacing between the sounding points. The TEM soundings were conducted only inside the transmitter loop, in order not to deal with negative apparent resistivity values; although it is possible to model the negative values, they make the inversion convergence more difficult. The methodology described above was therefore developed to achieve maximum optimization of data acquisition, since only one transmitter loop layout on the surface is needed for each series of soundings inside the loop. The algorithms were tested with synthetic data, and the results were essential to the interpretation of the real data and will be useful in future situations. A 2D TEM inversion of real data acquired over the Paraná Sedimentary Basin (PSB) was successfully performed. The results indicate a robust geoelectrical characterization of the sedimentary and crystalline aquifers in the PSB.
Therefore, using a new and relevant approach to 2D TEM inversion, this research effectively contributed to mapping the most promising regions for groundwater exploration. In addition, new geophysical software was developed that can be applied as an important tool for many geological/hydrogeological applications and for educational purposes.
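    The Marquardt (ridge-regression) step at the core of such inversions can be sketched as follows (the sensitivity matrix, true model and damping value are synthetic stand-ins):

```python
import numpy as np

# One Marquardt (ridge-regression) update:
#   dm = (J^T J + lambda * I)^(-1) J^T r
# with damping lambda blending gradient descent and Gauss-Newton.
rng = np.random.default_rng(5)
J = rng.normal(size=(30, 4))           # stand-in sensitivity (Jacobian) matrix
m_true = np.array([1.0, -2.0, 0.5, 3.0])
r = J @ m_true                         # residual for a zero starting model

lam = 0.1
dm = np.linalg.solve(J.T @ J + lam * np.eye(4), J.T @ r)
# with mild damping the step lands close to the true perturbation
```

A large lambda gives short, stable steepest-descent-like steps; a small lambda approaches the Gauss-Newton step, which is the stability/speed trade-off that makes the method robust in geoelectrical problems.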

  15. Self-adaptive Solution Strategies

    NASA Technical Reports Server (NTRS)

    Padovan, J.

    1984-01-01

    The development of enhancements to current-generation nonlinear finite element algorithms of the incremental Newton-Raphson type is overviewed. Work is introduced on alternative formulations which lead to improved algorithms that avoid the need for global-level updating and inversion. To quantify the enhanced Newton-Raphson scheme and the new alternative algorithm, the results of several benchmarks are presented.

  16. Inherent smoothness of intensity patterns for intensity modulated radiation therapy generated by simultaneous projection algorithms

    NASA Astrophysics Data System (ADS)

    Xiao, Ying; Michalski, Darek; Censor, Yair; Galvin, James M.

    2004-07-01

    The efficient delivery of intensity modulated radiation therapy (IMRT) depends on finding optimized beam intensity patterns that produce dose distributions which meet given constraints for the tumour as well as for any critical organs to be spared. Many optimization algorithms that are used for beamlet-based inverse planning are susceptible to large variations between neighbouring intensities. Accurately delivering an intensity pattern with a large number of extrema can prove impossible given the mechanical limitations of standard multileaf collimator (MLC) delivery systems. In this study, we apply Cimmino's simultaneous projection algorithm to the beamlet-based inverse planning problem, modelled mathematically as a system of linear inequalities. We show that using this method allows us to arrive at a smoother intensity pattern. From our experimental observation, including nonlinear terms in the simultaneous projection algorithm to handle dose-volume histogram (DVH) constraints does not compromise this property. The smoothness properties are compared with those of other optimization algorithms, including simulated annealing and the gradient descent method. The simultaneous nature of these algorithms is ideally suited to parallel computing technologies.
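    Cimmino's simultaneous projection scheme can be sketched on a toy feasibility problem (two variables and three half-space constraints, not the IMRT system of the paper):

```python
import numpy as np

# Toy Cimmino iteration: find a point satisfying A x <= b by projecting
# simultaneously onto every violated half-space and averaging the
# projections. The feasible set here is the triangle
# x >= 0, y >= 0, x + y <= 1.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])
x = np.array([3.0, 3.0])               # infeasible starting point

for _ in range(200):
    viol = np.maximum(A @ x - b, 0.0)                    # positive where violated
    steps = (viol / (A ** 2).sum(axis=1))[:, None] * A   # half-space projections
    x = x - steps.mean(axis=0)                           # simultaneous (averaged) update
```

Because every projection in an iteration can be computed independently before averaging, the method parallelizes trivially, which is the property the abstract highlights.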

  17. Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.

    PubMed

    Tao, Liang; Kwan, Hon Keung

    2012-07-01

    Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which make them attractive for real-time image processing.
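The DHT at the heart of the convolver banks can be illustrated in 1-D. Below is a minimal sketch using the FFT identity H = Re(F) − Im(F); the paper's channels use a 2-D fast DHT, which this one-dimensional toy does not reproduce.

```python
import numpy as np

def dht(x):
    """1-D discrete Hartley transform via the FFT:
    H_k = sum_n x_n * cas(2*pi*n*k/N), with cas = cos + sin,
    obtained as Re(FFT) - Im(FFT)."""
    F = np.fft.fft(x)
    return F.real - F.imag

# The DHT is an involution up to the factor N: DHT(DHT(x)) = N * x.
x = np.array([1.0, -2.0, 3.0, 0.5])
x_back = dht(dht(x)) / len(x)
```

The involution property (the same real-valued kernel serves for analysis and synthesis) is one reason DHT-based real-valued Gabor transforms are attractive for fast implementations.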

  18. Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio

    2015-09-15

Purpose: With recent advancement in hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired at a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. Methods: In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV–L1 model-based inversion is tested in the cross section geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. Results: In all cases, model-based TV–L1 inversion showed better performance than the conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV–L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV–L1 inversion yielded sharper images and weaker streak artifacts. Conclusions: The results herein show that TV–L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV–L1 inversion may be considered as the new standard for image reconstruction in cross-sectional imaging.
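A rough flavour of sparsity-regularized model-based inversion can be given with plain iterative soft-thresholding (ISTA) on an L1-regularized least-squares problem. Note that the paper's actual cost also contains a total-variation term and is minimized with a nonlinear gradient descent, so this is only a simplified stand-in with an illustrative toy forward model.

```python
import numpy as np

def ista(A, b, lam=0.01, iters=2000):
    """Iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L                          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

# Sparse-recovery toy problem: 3 nonzeros, 40 measurements, 100 unknowns
rng = np.random.default_rng(3)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[[5, 50, 95]] = [1.0, -2.0, 1.5]
x_hat = ista(A, A @ x_true)
```

The shrinkage step is what enforces sparsity in the chosen representation; in the paper's setting the forward matrix models acoustic propagation rather than a random projection.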

  19. A Strassen-Newton algorithm for high-speed parallelizable matrix inversion

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Ferguson, Helaman R. P.

    1988-01-01

    Techniques are described for computing matrix inverses by algorithms that are highly suited to massively parallel computation. The techniques are based on an algorithm suggested by Strassen (1969). Variations of this scheme use matrix Newton iterations and other methods to improve the numerical stability while at the same time preserving a very high level of parallelism. One-processor Cray-2 implementations of these schemes range from one that is up to 55 percent faster than a conventional library routine to one that is slower than a library routine but achieves excellent numerical stability. The problem of computing the solution to a single set of linear equations is discussed, and it is shown that this problem can also be solved efficiently using these techniques.
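The matrix Newton iteration underlying such schemes is compact enough to sketch. Below is a minimal version with the standard initialization X0 = A^T/(||A||_1 ||A||_inf); every step consists only of matrix multiplications, which is the property that makes the scheme (like Strassen's) highly parallelizable. This is a generic textbook iteration, not the authors' Cray-2 implementation.

```python
import numpy as np

def newton_inverse(A, iters=30):
    """Matrix Newton iteration X_{k+1} = X_k (2I - A X_k), converging
    quadratically to A^{-1} from the scaled-transpose initial guess."""
    n = A.shape[0]
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I2 = 2.0 * np.eye(n)
    for _ in range(iters):
        X = X @ (I2 - A @ X)   # two matrix multiplies per step
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
Ainv = newton_inverse(A)
```

Because each step is built from matrix products, fast multiplication schemes (Strassen) and massive parallelism apply directly, at the cost of more arithmetic than a direct factorization.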

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Simonetto, Andrea

This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.

  1. On the development of efficient algorithms for three dimensional fluid flow

    NASA Technical Reports Server (NTRS)

    Maccormack, R. W.

    1988-01-01

The difficulties of constructing efficient algorithms for three-dimensional flow are discussed. Reasonable candidates are analyzed and tested, and most are found to have obvious shortcomings. Yet, there is promise that an efficient class of algorithms exists between the severely time-step-size-limited explicit or approximately factored algorithms and the computationally intensive direct inversion of large sparse matrices by Gaussian elimination.

  2. Computation of Symmetric Discrete Cosine Transform Using Bakhvalov's Algorithm

    NASA Technical Reports Server (NTRS)

    Aburdene, Maurice F.; Strojny, Brian C.; Dorband, John E.

    2005-01-01

A number of algorithms for recursive computation of the discrete cosine transform (DCT) have been developed recently. This paper presents a new method for computing the discrete cosine transform and its inverse using Bakhvalov's algorithm, a method developed for evaluation of a polynomial at a point. In this paper, we will focus on both the application of the algorithm to the computation of the DCT-I and its complexity. In addition, Bakhvalov's algorithm is compared with Clenshaw's algorithm for the computation of the DCT.
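For reference, the DCT-I itself (before any fast recursive evaluation) can be written directly from its definition. The O(N^2) sum below is only a naive baseline, not Bakhvalov's polynomial-evaluation algorithm.

```python
import numpy as np

def dct1(x):
    """Naive O(N^2) DCT-I (unnormalized, half-weight endpoints):
    X_k = (x_0 + (-1)^k x_{N-1})/2 + sum_{n=1}^{N-2} x_n cos(pi n k/(N-1))."""
    N = len(x)
    X = np.empty(N)
    for k in range(N):
        X[k] = 0.5 * (x[0] + (-1) ** k * x[-1]) + sum(
            x[n] * np.cos(np.pi * n * k / (N - 1)) for n in range(1, N - 1))
    return X

# With this normalization, the DCT-I is its own inverse up to 2/(N-1):
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x_rec = dct1(dct1(x)) * 2.0 / (len(x) - 1)
```

The self-inverse property (up to a scale factor) is specific to the type-I DCT and makes it a convenient test case for fast evaluation schemes.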

  3. Jump-and-return sandwiches: A new family of binomial-like selective inversion sequences with improved performance

    NASA Astrophysics Data System (ADS)

    Brenner, Tom; Chen, Johnny; Stait-Gardner, Tim; Zheng, Gang; Matsukawa, Shingo; Price, William S.

    2018-03-01

    A new family of binomial-like inversion sequences, named jump-and-return sandwiches (JRS), has been developed by inserting a binomial-like sequence into a standard jump-and-return sequence, discovered through use of a stochastic Genetic Algorithm optimisation. Compared to currently used binomial-like inversion sequences (e.g., 3-9-19 and W5), the new sequences afford wider inversion bands and narrower non-inversion bands with an equal number of pulses. As an example, two jump-and-return sandwich 10-pulse sequences achieved 95% inversion at offsets corresponding to 9.4% and 10.3% of the non-inversion band spacing, compared to 14.7% for the binomial-like W5 inversion sequence, i.e., they afforded non-inversion bands about two thirds the width of the W5 non-inversion band.

  4. Scanning electron microscope fine tuning using four-bar piezoelectric actuated mechanism

    NASA Astrophysics Data System (ADS)

    Hatamleh, Khaled S.; Khasawneh, Qais A.; Al-Ghasem, Adnan; Jaradat, Mohammad A.; Sawaqed, Laith; Al-Shabi, Mohammad

    2018-01-01

Scanning electron microscopes are extensively used to acquire accurate micro/nano images. Several strategies have been proposed to fine tune those microscopes in the past few years. This work presents a new fine-tuning strategy for a scanning electron microscope sample table using four-bar piezoelectric actuated mechanisms. The paper presents an algorithm to find all possible inverse kinematics solutions of the proposed mechanism. In addition, another algorithm is presented to search for the optimal inverse kinematic solution. Both algorithms are used simultaneously by means of a simulation study to fine tune a scanning electron microscope sample table through a pre-specified circular or linear path of motion. Results of the study show that the proposed algorithms were able to reduce the power required to drive the piezoelectric actuated mechanism by 97.5% for all simulated paths of motion when compared to the general non-optimized solution.

  5. Predicting ozone profile shape from satellite UV spectra

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Loyola, Diego; Romahn, Fabian; Doicu, Adrian

    2017-04-01

    Identifying ozone profile shape is a critical yet challenging job for the accurate reconstruction of vertical distributions of atmospheric ozone that is relevant to climate change and air quality. Motivated by the need to develop an approach to reliably and efficiently estimate vertical information of ozone and inspired by the success of machine learning techniques, this work proposes a new algorithm for deriving ozone profile shapes from ultraviolet (UV) absorption spectra that are recorded by satellite instruments, e.g. GOME series and the future Sentinel missions. The proposed algorithm formulates this particular inverse problem in a classification framework rather than a conventional inversion one and places an emphasis on effectively characterizing various profile shapes based on machine learning techniques. Furthermore, a comparison of the ozone profiles from real GOME-2 data estimated by our algorithm and the classical retrieval algorithm (Optimal Estimation Method) is performed.

  6. Analysis of the quality of image data required by the LANDSAT-4 Thematic Mapper and Multispectral Scanner [agricultural and forest cover types in California]

    NASA Technical Reports Server (NTRS)

    Colwell, R. N. (Principal Investigator)

    1984-01-01

The spatial, geometric, and radiometric qualities of LANDSAT 4 thematic mapper (TM) and multispectral scanner (MSS) data were evaluated by interpreting, through visual and computer means, film and digital products for selected agricultural and forest cover types in California. Multispectral analyses employing Bayesian maximum likelihood, discrete relaxation, and unsupervised clustering algorithms were used to compare the usefulness of TM and MSS data for discriminating individual cover types. Some of the significant results are as follows: (1) for maximizing the interpretability of agricultural and forest resources, TM color composites should contain spectral bands in the visible, near-reflectance infrared, and middle-reflectance infrared regions, namely TM 4 and TM 5, and must contain TM 4 in all cases even at the expense of excluding TM 5; (2) using enlarged TM film products, planimetric accuracy of mapped points was within 91 meters (RMSE east) and 117 meters (RMSE north); (3) using TM digital products, planimetric accuracy of mapped points was within 12.0 meters (RMSE east) and 13.7 meters (RMSE north); and (4) applying a contextual classification algorithm to TM data provided classification accuracies competitive with Bayesian maximum likelihood.

  7. Inverse problem of HIV cell dynamics using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    González, J. A.; Guzmán, F. S.

    2017-01-01

In order to describe the cell dynamics of T-cells in a patient infected with HIV, we use a flavour of Perelson's model. This is a non-linear system of Ordinary Differential Equations that describes the evolution of healthy, latently infected and infected T-cell concentrations and the free viral cells. Different parameters in the equations give different dynamics. Assuming the concentrations of these types of cells are known for a particular patient, the inverse problem consists in estimating the parameters in the model. We solve this inverse problem using a Genetic Algorithm (GA) that minimizes the error between the solutions of the model and the data from the patient. These errors depend on the parameters of the GA, such as the mutation rate and the population size; a detailed analysis of this dependence will be described elsewhere.
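A minimal real-coded GA of the kind described can be sketched by fitting a single decay-rate parameter of a toy exponential model to data. The population size, mutation rate, operators, and the one-parameter model are illustrative assumptions, not the authors' configuration for Perelson's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_algorithm(loss, lo, hi, pop_size=60, gens=80, mut_rate=0.2):
    """Minimal real-coded GA: tournament selection, uniform crossover,
    Gaussian mutation, and a best-so-far record (simple elitism)."""
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    best, best_f = None, np.inf
    for _ in range(gens):
        fit = np.array([loss(p) for p in pop])
        i = int(np.argmin(fit))
        if fit[i] < best_f:
            best, best_f = pop[i].copy(), fit[i]
        # tournament selection between random pairs
        idx = rng.integers(pop_size, size=(pop_size, 2))
        winners = np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # uniform crossover with a shifted copy of the parent pool
        mask = rng.random((pop_size, dim)) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation on a fraction of the genes
        mutate = rng.random((pop_size, dim)) < mut_rate
        children = children + mutate * rng.normal(0.0, 0.1, (pop_size, dim))
        pop = np.clip(children, lo, hi)
    return best

# Toy inverse problem: recover the decay rate k = 0.7 of y(t) = exp(-k t)
t = np.linspace(0.0, 5.0, 20)
data = np.exp(-0.7 * t)
best = genetic_algorithm(lambda p: np.sum((np.exp(-p[0] * t) - data) ** 2),
                         lo=np.array([0.0]), hi=np.array([2.0]))
```

In the paper's setting, the loss would integrate the ODE system for each candidate parameter vector and compare against the patient's measured cell concentrations.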

  8. Parallel O(log n) algorithms for open- and closed-chain rigid multibody systems based on a new mass matrix factorization technique

    NASA Technical Reports Server (NTRS)

    Fijany, Amir

    1993-01-01

    In this paper, parallel O(log n) algorithms for computation of rigid multibody dynamics are developed. These parallel algorithms are derived by parallelization of new O(n) algorithms for the problem. The underlying feature of these O(n) algorithms is a drastically different strategy for decomposition of interbody force which leads to a new factorization of the mass matrix (M). Specifically, it is shown that a factorization of the inverse of the mass matrix in the form of the Schur Complement is derived as M(exp -1) = C - B(exp *)A(exp -1)B, wherein matrices C, A, and B are block tridiagonal matrices. The new O(n) algorithm is then derived as a recursive implementation of this factorization of M(exp -1). For the closed-chain systems, similar factorizations and O(n) algorithms for computation of Operational Space Mass Matrix lambda and its inverse lambda(exp -1) are also derived. It is shown that these O(n) algorithms are strictly parallel, that is, they are less efficient than other algorithms for serial computation of the problem. But, to our knowledge, they are the only known algorithms that can be parallelized and that lead to both time- and processor-optimal parallel algorithms for the problem, i.e., parallel O(log n) algorithms with O(n) processors. The developed parallel algorithms, in addition to their theoretical significance, are also practical from an implementation point of view due to their simple architectural requirements.

  9. Fixed-point image orthorectification algorithms for reduced computational cost

    NASA Astrophysics Data System (ADS)

    French, Joseph Clinton

Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic. Fixed-point arithmetic removes the floating point operations and reduces the processing time by operating only on integers. The second modification replaces the division inherent in projection with multiplication by the inverse. Computing the inverse exactly would itself require iteration; therefore, the inverse is replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing, and by over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing an integer multiplication to be used in place of the traditional floating point division. This method increases the throughput of the orthorectification operation by 38% when compared to floating point processing. Additionally, this method improves the accuracy of the existing integer-based orthorectification algorithms in terms of average pixel distance, increasing the accuracy of the algorithm by more than 5x. The quadratic function reduces the pixel position error to 2% and is still 2.8x faster than the 128-bit floating point algorithm.
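The division-free idea can be sketched as follows: fit 1/d with a quadratic on a normalized interval, freeze the coefficients as fixed-point integers, and evaluate with shifts and multiplies only. The Q16 format and fitted coefficient values here are illustrative, not those of the thesis.

```python
import numpy as np

FRAC = 16                      # Q16 fixed-point format
ONE = 1 << FRAC

def to_fix(v):
    return int(round(v * ONE))

# Quadratic least-squares fit of 1/d over the normalized interval [1, 2);
# the coefficients are then frozen as Q16 integers.
d = np.linspace(1.0, 2.0, 256)
a2, a1, a0 = np.polyfit(d, 1.0 / d, 2)
A2, A1, A0 = to_fix(a2), to_fix(a1), to_fix(a0)

def recip_fix(d_fix):
    """Integer-only approximation of 1/d for d_fix in [ONE, 2*ONE):
    Horner evaluation of (a2*d + a1)*d + a0 using shifts and
    multiplies, replacing the hardware divide."""
    q = (A2 * d_fix >> FRAC) + A1
    return (q * d_fix >> FRAC) + A0

approx = recip_fix(to_fix(1.5)) / ONE   # ~ 1/1.5, computed without dividing
```

In a real pipeline the divisor is first normalized into [1, 2) by a shift, and the result is de-normalized afterwards, so one quadratic fit covers all magnitudes.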

  10. Time-frequency analysis-based time-windowing algorithm for the inverse synthetic aperture radar imaging of ships

    NASA Astrophysics Data System (ADS)

    Zhou, Peng; Zhang, Xi; Sun, Weifeng; Dai, Yongshou; Wan, Yong

    2018-01-01

    An algorithm based on time-frequency analysis is proposed to select an imaging time window for the inverse synthetic aperture radar imaging of ships. An appropriate range bin is selected to perform the time-frequency analysis after radial motion compensation. The selected range bin is that with the maximum mean amplitude among the range bins whose echoes are confirmed to be contributed by a dominant scatter. The criterion for judging whether the echoes of a range bin are contributed by a dominant scatter is key to the proposed algorithm and is therefore described in detail. When the first range bin that satisfies the judgment criterion is found, a sequence composed of the frequencies that have the largest amplitudes in every moment's time-frequency spectrum corresponding to this range bin is employed to calculate the length and the center moment of the optimal imaging time window. Experiments performed with simulation data and real data show the effectiveness of the proposed algorithm, and comparisons between the proposed algorithm and the image contrast-based algorithm (ICBA) are provided. Similar image contrast and lower entropy are acquired using the proposed algorithm as compared with those values when using the ICBA.

  11. Analysis of high altitude remotely sensed data collected in the Nantucket Shoals experiment 4-15 May, 1981

    NASA Technical Reports Server (NTRS)

    Ohlhorst, C. W.

    1982-01-01

    High altitude ocean color scanner ratios of band 2 (456 to 476 nanometers) to band 4 (539 to 559 nanometers) and band 1 (418 to 438 nanometers) to band 3 (498 to 518 nanometers) had high correlation coefficient values (-0.928 and 0.891 respectively) with seven boat sampled chlorophyll a measurements. The range of chlorophyll a concentrations was small (1.7-2.58 mg/cu m.). Each ratio was used to calculate chlorophyll a values for the center pixel of each scan line on flight lines 5 and 6. The two ratios produced dissimilar chlorophyll a trends. Due to the high noise level in the scanner data, no reliable synoptic chlorophyll a map could be generated with either ratio algorithm.

  12. Estimation of Tree Position and STEM Diameter Using Simultaneous Localization and Mapping with Data from a Backpack-Mounted Laser Scanner

    NASA Astrophysics Data System (ADS)

    Holmgren, J.; Tulldahl, H. M.; Nordlöf, J.; Nyström, M.; Olofsson, K.; Rydell, J.; Willén, E.

    2017-10-01

A system was developed for automatic estimations of tree positions and stem diameters. The sensor trajectory was first estimated using a positioning system that consists of a low precision inertial measurement unit supported by image matching with data from a stereo-camera. The initial estimation of the sensor trajectory was then calibrated by adjustments of the sensor pose using the laser scanner data. Special features suitable for forest environments were used to solve the correspondence and matching problems. Tree stem diameters were estimated for stem sections using laser data from individual scanner rotations and were then used for calibration of the sensor pose. A segmentation algorithm was used to associate stem sections to individual tree stems. The stem diameter estimates of all stem sections associated to the same tree stem were then combined for estimation of stem diameter at breast height (DBH). The system was validated on four 20 m radius circular plots, and manually measured trees were automatically linked to trees detected in laser data. The DBH could be estimated with an RMSE of 19 mm (6 %) and a bias of 8 mm (3 %). The calibrated sensor trajectory and the combined use of circle fits from individual scanner rotations made it possible to obtain reliable DBH estimates also with a low precision positioning system.
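A simple algebraic (Kåsa) least-squares circle fit illustrates how a stem cross-section can be fitted from the laser returns of one scanner rotation; the actual estimator and any robustification used by the authors may differ.

```python
import numpy as np

def fit_circle(x, y):
    """Kasa fit: write the circle as x^2 + y^2 = c0*x + c1*y + c2 and
    solve the linear least-squares system for (c0, c1, c2)."""
    M = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    c0, c1, c2 = np.linalg.lstsq(M, rhs, rcond=None)[0]
    cx, cy = c0 / 2.0, c1 / 2.0
    r = np.sqrt(c2 + cx**2 + cy**2)
    return cx, cy, r

# Noisy points on a stem cross-section of radius 0.15 m (DBH 30 cm)
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, 80)
x = 1.0 + 0.15 * np.cos(theta) + rng.normal(0.0, 0.002, 80)
y = 2.0 + 0.15 * np.sin(theta) + rng.normal(0.0, 0.002, 80)
cx, cy, r = fit_circle(x, y)
```

Combining many such per-rotation fits, as the paper does, averages out both ranging noise and residual trajectory error before the DBH is reported.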

  13. Performance evaluation of the small-animal PET scanner ClairvivoPET using NEMA NU 4-2008 Standards.

    PubMed

    Sato, K; Shidahara, M; Watabe, H; Watanuki, S; Ishikawa, Y; Arakawa, Y; Nai, Y H; Furumoto, S; Tashiro, M; Shoji, T; Yanai, K; Gonda, K

    2016-01-21

    The aim of this study was to evaluate the performance of ClairvivoPET using NEMA NU4 standards. The ClairvivoPET incorporates a LYSO dual depth-of-interaction detector system with 151 mm axial field of view (FOV). Spatial resolution, sensitivity, counting rate capabilities, and image quality were evaluated using NEMA NU4-2008 standards. Normal mouse imaging was also performed for 10 min after intravenous injection of (18)F(-)-NaF. Data were compared with 19 other preclinical PET scanners. Spatial resolution measured using full width at half maximum on FBP-ramp reconstructed images was 2.16 mm at radial offset 5 mm of the axial centre FOV. The maximum absolute sensitivity for a point source at the FOV centre was 8.72%. Peak noise equivalent counting rate (NECR) was 415 kcps at 14.6 MBq ml(-1). The uniformity with the image-quality phantom was 4.62%. Spillover ratios in the images of air and water filled chambers were 0.19 and 0.06, respectively. Our results were comparable with the 19 other preclinical PET scanners based on NEMA NU4 standards, with excellent sensitivity because of the large FOV. The ClairvivoPET with iterative reconstruction algorithm also provided sufficient visualization of the mouse spine. The high sensitivity and resolution of the ClairvivoPET scanner provided high quality images for preclinical studies.

  14. Performance evaluation of the small-animal PET scanner ClairvivoPET using NEMA NU 4-2008 Standards

    NASA Astrophysics Data System (ADS)

    Sato, K.; Shidahara, M.; Watabe, H.; Watanuki, S.; Ishikawa, Y.; Arakawa, Y.; Nai, YH; Furumoto, S.; Tashiro, M.; Shoji, T.; Yanai, K.; Gonda, K.

    2016-01-01

The aim of this study was to evaluate the performance of ClairvivoPET using NEMA NU4 standards. The ClairvivoPET incorporates a LYSO dual depth-of-interaction detector system with 151 mm axial field of view (FOV). Spatial resolution, sensitivity, counting rate capabilities, and image quality were evaluated using NEMA NU4-2008 standards. Normal mouse imaging was also performed for 10 min after intravenous injection of 18F(-)-NaF. Data were compared with 19 other preclinical PET scanners. Spatial resolution measured using full width at half maximum on FBP-ramp reconstructed images was 2.16 mm at radial offset 5 mm of the axial centre FOV. The maximum absolute sensitivity for a point source at the FOV centre was 8.72%. Peak noise equivalent counting rate (NECR) was 415 kcps at 14.6 MBq ml-1. The uniformity with the image-quality phantom was 4.62%. Spillover ratios in the images of air and water filled chambers were 0.19 and 0.06, respectively. Our results were comparable with the 19 other preclinical PET scanners based on NEMA NU4 standards, with excellent sensitivity because of the large FOV. The ClairvivoPET with iterative reconstruction algorithm also provided sufficient visualization of the mouse spine. The high sensitivity and resolution of the ClairvivoPET scanner provided high quality images for preclinical studies.

  15. Centroid-moment tensor inversions using high-rate GPS waveforms

    NASA Astrophysics Data System (ADS)

    O'Toole, Thomas B.; Valentine, Andrew P.; Woodhouse, John H.

    2012-10-01

    Displacement time-series recorded by Global Positioning System (GPS) receivers are a new type of near-field waveform observation of the seismic source. We have developed an inversion method which enables the recovery of an earthquake's mechanism and centroid coordinates from such data. Our approach is identical to that of the 'classical' Centroid-Moment Tensor (CMT) algorithm, except that we forward model the seismic wavefield using a method that is amenable to the efficient computation of synthetic GPS seismograms and their partial derivatives. We demonstrate the validity of our approach by calculating CMT solutions using 1 Hz GPS data for two recent earthquakes in Japan. These results are in good agreement with independently determined source models of these events. With wider availability of data, we envisage the CMT algorithm providing a tool for the systematic inversion of GPS waveforms, as is already the case for teleseismic data. Furthermore, this general inversion method could equally be applied to other near-field earthquake observations such as those made using accelerometers.

  16. Simultaneous source and attenuation reconstruction in SPECT using ballistic and single scattering data

    NASA Astrophysics Data System (ADS)

    Courdurier, M.; Monard, F.; Osses, A.; Romero, F.

    2015-09-01

    In medical single-photon emission computed tomography (SPECT) imaging, we seek to simultaneously obtain the internal radioactive sources and the attenuation map using not only ballistic measurements but also first-order scattering measurements and assuming a very specific scattering regime. The problem is modeled using the radiative transfer equation by means of an explicit non-linear operator that gives the ballistic and scattering measurements as a function of the radioactive source and attenuation distributions. First, by differentiating this non-linear operator we obtain a linearized inverse problem. Then, under regularity hypothesis for the source distribution and attenuation map and considering small attenuations, we rigorously prove that the linear operator is invertible and we compute its inverse explicitly. This allows proof of local uniqueness for the non-linear inverse problem. Finally, using the previous inversion result for the linear operator, we propose a new type of iterative algorithm for simultaneous source and attenuation recovery for SPECT based on the Neumann series and a Newton-Raphson algorithm.
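The Neumann-series ingredient of the proposed iteration can be sketched for a generic well-conditioned linear operator. Below is a minimal example assuming ||I − A|| < 1, the analogue of the small-attenuation regime in which the paper inverts its linearized operator.

```python
import numpy as np

def neumann_apply_inverse(A, b, terms=60):
    """Approximate A^{-1} b with the truncated Neumann series
    sum_k (I - A)^k b, convergent when the spectral radius of
    (I - A) is below 1."""
    x = np.zeros_like(b)
    r = b.copy()
    for _ in range(terms):
        x = x + r          # accumulate (I - A)^k b
        r = r - A @ r      # next power of (I - A) applied to b
    return x

A = np.array([[1.0, 0.2], [0.1, 0.9]])   # close to the identity
b = np.array([1.0, 2.0])
x = neumann_apply_inverse(A, b)
```

Only applications of A are needed, never its explicit inverse, which is what makes the series attractive when A is a large discretized transport operator.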

  17. Joint inversion of teleseismic receiver functions and magnetotelluric data using a genetic algorithm: Are seismic velocities and electrical conductivities compatible?

    NASA Astrophysics Data System (ADS)

    Moorkamp, M.; Jones, A. G.; Eaton, D. W.

    2007-08-01

    Joint inversion of different kinds of geophysical data has the potential to improve model resolution, under the assumption that the different observations are sensitive to the same subsurface features. Here, we examine the compatibility of P-wave teleseismic receiver functions and long-period magnetotelluric (MT) observations, using joint inversion, to infer one-dimensional lithospheric structure. We apply a genetic algorithm to invert teleseismic and MT data from the Slave craton; a region where previous independent analyses of these data have indicated correlated layering of the lithosphere. Examination of model resolution and parameter trade-off suggests that the main features of this area, the Moho, Central Slave Mantle Conductor and the Lithosphere-Asthenosphere boundary, are sensed to varying degrees by both methods. Thus, joint inversion of these two complementary data sets can be used to construct improved models of the lithosphere. Further studies will be needed to assess whether the approach can be applied globally.

  18. A fast new algorithm for a robot neurocontroller using inverse QR decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, A.S.; Khemaissia, S.

    2000-01-01

A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The inverse QR decomposition and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array. Furthermore, its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by QR decomposition approaches.

  19. Mathematical modelling of scanner-specific bowtie filters for Monte Carlo CT dosimetry

    NASA Astrophysics Data System (ADS)

    Kramer, R.; Cassola, V. F.; Andrade, M. E. A.; de Araújo, M. W. C.; Brenner, D. J.; Khoury, H. J.

    2017-02-01

The purpose of bowtie filters in CT scanners is to homogenize the x-ray intensity measured by the detectors in order to improve the image quality and at the same time to reduce the dose to the patient because of the preferential filtering near the periphery of the fan beam. For CT dosimetry, especially for Monte Carlo calculations of organ and tissue absorbed doses to patients, it is important to take the effect of bowtie filters into account. However, material composition and dimensions of these filters are proprietary. Consequently, a method for bowtie filter simulation independent of access to proprietary data and/or to a specific scanner would be of interest to many researchers involved in CT dosimetry. This study presents such a method based on the weighted computer tomography dose index, CTDIw, defined in two cylindrical PMMA phantoms of 16 cm and 32 cm diameter. With an EGSnrc-based Monte Carlo (MC) code, ratios CTDIw/CTDI100,a were calculated for a specific CT scanner using PMMA bowtie filter models based on sigmoid Boltzmann functions combined with a scanner filter factor (SFF) which is modified during calculations until the calculated MC CTDIw/CTDI100,a matches ratios CTDIw/CTDI100,a, determined by measurements or found in publications for that specific scanner. Once the scanner-specific value for an SFF has been found, the bowtie filter algorithm can be used in any MC code to perform CT dosimetry for that specific scanner. The bowtie filter model proposed here was validated for CTDIw/CTDI100,a considering 11 different CT scanners and for CTDI100,c, CTDI100,p and their ratio considering 4 different CT scanners. Additionally, comparisons were made for lateral dose profiles free in air and using computational anthropomorphic phantoms. CTDIw/CTDI100,a determined with this new method agreed on average within 0.89% (max. 3.4%) and 1.64% (max. 4.5%) with corresponding data published by CTDosimetry (www.impactscan.org) for the CTDI HEAD and BODY phantoms, respectively. Comparison with results calculated using proprietary data for the PHILIPS Brilliance 64 scanner showed agreement on average within 2.5% (max. 5.8%) and with data measured for that scanner within 2.1% (max. 3.7%). Ratios of CTDI100,c/CTDI100,p for this study and corresponding data published by CTDosimetry (www.impactscan.org) agree on average within about 11% (max. 28.6%). Lateral dose profiles calculated with the proposed bowtie filter and with proprietary data agreed within 2% (max. 5.9%), and both calculated data agreed within 5.4% (max. 11.2%) with measured results. Application of the proposed bowtie filter and of the exactly modelled filter to human phantom Monte Carlo calculations show agreement on the average within less than 5% (max. 7.9%) for organ and tissue absorbed doses.
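The sigmoid Boltzmann modelling idea can be sketched as a filter-thickness profile over fan angle: thin at the beam centre, rising smoothly toward the periphery. All parameter values below are illustrative placeholders, not a real scanner's SFF calibration.

```python
import numpy as np

def bowtie_thickness(fan_angle_deg, t_min=0.2, t_max=3.0,
                     half_width=20.0, slope=4.0):
    """PMMA bowtie-filter thickness (cm) vs. fan angle, modelled with
    a sigmoid Boltzmann function; symmetric about the central ray."""
    a = np.abs(np.asarray(fan_angle_deg, dtype=float))
    return t_min + (t_max - t_min) / (1.0 + np.exp((half_width - a) / slope))

t = bowtie_thickness(np.array([-30.0, 0.0, 30.0]))  # edge, centre, edge
```

In the paper's procedure, a scanner filter factor scaling such a profile is adjusted until the Monte Carlo CTDIw/CTDI100,a ratio matches the measured or published value for the scanner.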

  20. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    DOE PAGES

    Li, Weixuan; Lin, Guang

    2015-03-21

Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.
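    The first ingredient can be sketched with plain importance sampling from a fixed two-component Gaussian mixture proposal; the adaptive construction of the mixture and the PC surrogate are omitted, and the bimodal target and all numbers below are illustrative assumptions, not the paper's examples.

```python
import math, random

def gm_pdf(x, comps):
    """Density of a 1-D Gaussian mixture: comps = [(weight, mean, std), ...]."""
    return sum(w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
               for w, m, s in comps)

def gm_sample(comps):
    """Draw one sample from the mixture: pick a component, then a Gaussian."""
    u, acc = random.random(), 0.0
    for w, m, s in comps:
        acc += w
        if u <= acc:
            return random.gauss(m, s)
    return random.gauss(comps[-1][1], comps[-1][2])

def importance_mean(target_unnorm, proposal, n=50_000):
    """Self-normalized importance-sampling estimate of E[x] under the
    (unnormalized) target, using the GM proposal."""
    xs = [gm_sample(proposal) for _ in range(n)]
    ws = [target_unnorm(x) / gm_pdf(x, proposal) for x in xs]
    return sum(w * x for w, x in zip(ws, xs)) / sum(ws)

# Bimodal target (unnormalized): equal modes at -2 and +2, so E[x] ~ 0.
target = lambda x: math.exp(-0.5 * (x + 2) ** 2) + math.exp(-0.5 * (x - 2) ** 2)
proposal = [(0.5, -2.0, 1.5), (0.5, 2.0, 1.5)]
random.seed(0)
est = importance_mean(target, proposal)
```

    Because the proposal places mass on both modes, the self-normalized weights stay well behaved; a single-Gaussian proposal centred on one mode would miss the other.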

  1. Inversion of Surface-wave Dispersion Curves due to Low-velocity-layer Models

    NASA Astrophysics Data System (ADS)

    Shen, C.; Xia, J.; Mi, B.

    2016-12-01

A successful inversion relies on exact forward modeling methods. Accurately calculating multi-mode dispersion curves of a given model is a key step in high-frequency surface-wave (Rayleigh-wave and Love-wave) methods. For normal models (shear (S)-wave velocity increasing with depth), the theoretical dispersion curves completely match the dispersion spectrum generated from the wave equation. For models containing a low-velocity layer, however, phase velocities calculated by existing forward-modeling algorithms (e.g. the Thomson-Haskell algorithm, the Knopoff algorithm, the fast vector-transfer algorithm, and so on) fail to be consistent with the dispersion spectrum in the high-frequency range. When the corresponding wavelengths are short enough, they approach a value close to the surface-wave velocity of the low-velocity layer beneath the surface layer, rather than that of the surface layer itself. This phenomenon conflicts with the characteristics of surface waves and results in an erroneous inverted model. By comparing the theoretical dispersion curves with simulated dispersion energy, we propose a direct and essential solution to accurately compute surface-wave phase velocities for low-velocity-layer models. Based on the proposed forward modeling technique, we can achieve correct inversion for these types of models. Several synthetic examples prove the effectiveness of our method.
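    For context, the simplest forward dispersion computation is the homogeneous halfspace, where the (non-dispersive) Rayleigh-wave phase velocity is the root of the Rayleigh secular function below the S-wave velocity. A minimal bisection sketch, not one of the layered-model algorithms named above; the velocities are hypothetical:

```python
import math

def rayleigh_function(c, alpha, beta):
    """Rayleigh secular function for a homogeneous halfspace; its root
    below the S-wave velocity beta is the Rayleigh phase velocity."""
    return ((2.0 - (c / beta) ** 2) ** 2
            - 4.0 * math.sqrt(1.0 - (c / alpha) ** 2)
                  * math.sqrt(1.0 - (c / beta) ** 2))

def rayleigh_velocity(alpha, beta):
    # Bisection bracket [0.5*beta, beta): the secular function is negative
    # at the lower end and positive just below beta for typical crustal
    # alpha/beta ratios.
    lo, hi = 0.5 * beta, beta * (1.0 - 1e-9)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if rayleigh_function(lo, alpha, beta) * rayleigh_function(mid, alpha, beta) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

beta = 200.0                       # S-wave velocity, m/s (hypothetical)
alpha = beta * math.sqrt(3.0)      # Poisson solid (nu = 0.25)
c_r = rayleigh_velocity(alpha, beta)   # ~0.9194 * beta for a Poisson solid
```

    Layered media replace this single secular function with a propagator-matrix determinant (Thomson-Haskell and its variants), which is where the low-velocity-layer inconsistency described above arises.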

  2. An algorithm for the simultaneous reconstruction of faults and slip fields

    NASA Astrophysics Data System (ADS)

    Volkov, D.

    2017-12-01

    We introduce an algorithm for the simultaneous reconstruction of faults and slip fields on those faults. We define a regularized functional to be minimized for the reconstruction. We prove that the minimum of that functional converges to the unique solution of the related fault inverse problem. Due to inherent uncertainties in measurements, rather than seeking a deterministic solution to the fault inverse problem, we consider a Bayesian approach. The advantage of such an approach is that we obtain a way of quantifying uncertainties as part of our final answer. On the downside, this Bayesian approach leads to a very large computation. To contend with the size of this computation we developed an algorithm for the numerical solution to the stochastic minimization problem which can be easily implemented on a parallel multi-core platform and we discuss techniques to save on computational time. After showing how this algorithm performs on simulated data and assessing the effect of noise, we apply it to measured data. The data was recorded during a slow slip event in Guerrero, Mexico.
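    For a fixed, known fault geometry, the slip part of such a reconstruction reduces to a regularized linear least-squares problem. The sketch below uses plain Tikhonov regularization on a random toy forward operator; all values are hypothetical, and the paper's fault-geometry search and Bayesian sampling are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear forward model: surface displacements d = G @ slip.
n_obs, n_patch = 40, 10
G = rng.normal(size=(n_obs, n_patch))
true_slip = np.linspace(0.0, 1.0, n_patch)
d = G @ true_slip + 0.01 * rng.normal(size=n_obs)

def regularized_slip(G, d, lam):
    """Minimize ||G s - d||^2 + lam^2 ||s||^2 (a Tikhonov stand-in for
    the paper's regularized functional) via the normal equations."""
    A = G.T @ G + lam ** 2 * np.eye(G.shape[1])
    return np.linalg.solve(A, G.T @ d)

s_hat = regularized_slip(G, d, lam=0.1)
```

    The regularization weight trades data fit against slip roughness; in the Bayesian setting it plays the role of a prior on the slip field.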

  3. Inversion of oceanic constituents in case I and II waters with genetic programming algorithms.

    PubMed

    Chami, Malik; Robilliard, Denis

    2002-10-20

A stochastic inverse technique based on a genetic programming (GP) algorithm was developed to invert oceanic constituents from simulated data for case I and case II water applications. The simulations were carried out with the Ordre Successifs Ocean Atmosphere (OSOA) radiative transfer model. They include the effects of oceanic substances such as algal-related chlorophyll, nonchlorophyllous suspended matter, and dissolved organic matter. The synthetic data set also takes into account the directional effects of particles through a variation of their phase function, which makes the simulated data realistic. It is shown that GP can be successfully applied to the inverse problem with acceptable stability in the presence of realistic noise in the data. GP is compared with neural network methodology for case I waters; GP exhibits similar retrieval accuracy, which is greater than that of traditional techniques such as band-ratio algorithms. The application of GP to real satellite data [from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS)] was also carried out for case I waters as a validation. Good agreement was obtained when GP results were compared with the SeaWiFS empirical algorithm. For case II waters the accuracy of GP is within 33%, which remains satisfactory, at the present time, for remote-sensing purposes.

  4. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan; Lin, Guang, E-mail: guanglin@purdue.edu

    2015-08-01

Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.

  5. Efficient sampling of parsimonious inversion histories with application to genome rearrangement in Yersinia.

    PubMed

    Miklós, István; Darling, Aaron E

    2009-06-22

Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal-length series of inversions that transforms one genome arrangement into another. However, the minimum-length series of inversions (the optimal sorting path) is often not unique, as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths, nor to sample from the uniform distribution over optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We draw a comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids the bias inherent in the IS technique.
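    The elementary move being counted and sampled here is the signed reversal: a segment of the genome is reversed and every gene in it changes orientation. A minimal sketch of the operation (the example permutation is chosen for illustration):

```python
def apply_inversion(perm, i, j):
    """Apply an inversion (reversal) to a signed permutation: reverse
    the segment perm[i..j] and flip the sign of every gene in it."""
    seg = [-g for g in reversed(perm[i:j + 1])]
    return perm[:i] + seg + perm[j + 1:]

# A single inversion sorts this arrangement to the identity:
p = apply_inversion([-3, -2, -1], 0, 2)   # -> [1, 2, 3]
```

    An optimal sorting path is a shortest sequence of such moves between two arrangements; the sampler described above draws uniformly from the (possibly huge) set of these shortest sequences.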

  6. Robust inverse-consistent affine CT-MR registration in MRI-assisted and MRI-alone prostate radiation therapy.

    PubMed

    Rivest-Hénault, David; Dowson, Nicholas; Greer, Peter B; Fripp, Jurgen; Dowling, Jason A

    2015-07-01

    CT-MR registration is a critical component of many radiation oncology protocols. In prostate external beam radiation therapy, it allows the propagation of MR-derived contours to reference CT images at the planning stage, and it enables dose mapping during dosimetry studies. The use of carefully registered CT-MR atlases allows the estimation of patient specific electron density maps from MRI scans, enabling MRI-alone radiation therapy planning and treatment adaptation. In all cases, the precision and accuracy achieved by registration influences the quality of the entire process. Most current registration algorithms do not robustly generalize and lack inverse-consistency, increasing the risk of human error and acting as a source of bias in studies where information is propagated in a particular direction, e.g. CT to MR or vice versa. In MRI-based treatment planning where both CT and MR scans serve as spatial references, inverse-consistency is critical, if under-acknowledged. A robust, inverse-consistent, rigid/affine registration algorithm that is well suited to CT-MR alignment in prostate radiation therapy is presented. The presented method is based on a robust block-matching optimization process that utilises a half-way space definition to maintain inverse-consistency. Inverse-consistency substantially reduces the influence of the order of input images, simplifying analysis, and increasing robustness. An open source implementation is available online at http://aehrc.github.io/Mirorr/. Experimental results on a challenging 35 CT-MR pelvis dataset demonstrate that the proposed method is more accurate than other popular registration packages and is at least as accurate as the state of the art, while being more robust and having an order of magnitude higher inverse-consistency than competing approaches. The presented results demonstrate that the proposed registration algorithm is readily applicable to prostate radiation therapy planning. Copyright © 2015. 
Published by Elsevier B.V.

  7. A comparative study of surface waves inversion techniques at strong motion recording sites in Greece

    USGS Publications Warehouse

    Panagiotis C. Pelekis,; Savvaidis, Alexandros; Kayen, Robert E.; Vlachakis, Vasileios S.; Athanasopoulos, George A.

    2015-01-01

The surface wave method was used for the estimation of the Vs vs. depth profile at 10 strong motion stations in Greece. The dispersion data were obtained by the SASW method, utilizing a pair of electromechanical harmonic-wave sources (shakers) or a random source (drop weight). In this study, three inversion techniques were used: a) a recently proposed Simplified Inversion Method (SIM), b) an inversion technique based on a neighborhood algorithm (NA), which allows the incorporation of a priori information regarding the subsurface structure parameters, and c) Occam's inversion algorithm. For each site a constant value of Poisson's ratio was assumed (ν=0.4), since the objective of the current study is the comparison of the three inversion schemes regardless of the uncertainties resulting from the lack of geotechnical data. A penalty function was introduced to quantify the deviations between the derived Vs profiles. The Vs models are compared in terms of Vs(z), Vs30 and EC8 soil category, in order to show the insignificance of the existing variations. The comparison results showed that the average variation of the SIM profiles is 9% and 4.9% compared with the NA and Occam's profiles, respectively, whilst the average difference of Vs30 values obtained from SIM is 7.4% and 5.0% compared with NA and Occam's.
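    The Vs30 and EC8 comparisons reduce to a standard computation: the travel-time-averaged shear-wave velocity of the top 30 m of the profile. A minimal sketch with a hypothetical layered profile (EC8 ground-type boundaries as noted in the code comments):

```python
def vs30(thicknesses_m, vs_mps):
    """Time-averaged shear-wave velocity of the top 30 m:
    Vs30 = 30 / sum(h_i / Vs_i), truncating the profile at 30 m depth."""
    depth, travel = 0.0, 0.0
    for h, v in zip(thicknesses_m, vs_mps):
        h_eff = min(h, 30.0 - depth)
        travel += h_eff / v
        depth += h_eff
        if depth >= 30.0:
            break
    return 30.0 / travel

def ec8_class(v):
    """EC8 ground types by Vs30: A > 800, B 360-800, C 180-360, D < 180."""
    if v > 800.0:
        return "A"
    if v > 360.0:
        return "B"
    if v > 180.0:
        return "C"
    return "D"

v = vs30([5.0, 10.0, 40.0], [150.0, 300.0, 600.0])   # hypothetical profile
```

    Because Vs30 is a harmonic (travel-time) average, moderate differences between inverted Vs(z) profiles often map to small Vs30 differences, which is consistent with the small variations reported above.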

  8. Joint Stochastic Inversion of Pre-Stack 3D Seismic Data and Well Logs for High Resolution Hydrocarbon Reservoir Characterization

    NASA Astrophysics Data System (ADS)

    Torres-Verdin, C.

    2007-05-01

    This paper describes the successful implementation of a new 3D AVA stochastic inversion algorithm to quantitatively integrate pre-stack seismic amplitude data and well logs. The stochastic inversion algorithm is used to characterize flow units of a deepwater reservoir located in the central Gulf of Mexico. Conventional fluid/lithology sensitivity analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generates typical Class III AVA responses. On the other hand, layer- dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution. Accordingly, AVA stochastic inversion, which combines the advantages of AVA analysis with those of geostatistical inversion, provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties (P-velocity, S-velocity, density), and lithotype (sand- shale) distributions. The quantitative use of rock/fluid information through AVA seismic amplitude data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, yields accurate 3D models of petrophysical properties such as porosity and permeability. Finally, by fully integrating pre-stack seismic amplitude data and well logs, the vertical resolution of inverted products is higher than that of deterministic inversions methods.
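    The Class III behaviour mentioned above can be illustrated with the two-term Shuey approximation to the angle-dependent reflection coefficient; the intercept and gradient values below are hypothetical, not the reservoir's:

```python
import math

def shuey_two_term(r0, g, theta_deg):
    """Two-term Shuey approximation of P-wave reflectivity versus
    incidence angle: R(theta) = R0 + G * sin^2(theta)."""
    s = math.sin(math.radians(theta_deg))
    return r0 + g * s * s

# A Class III AVA response: strong negative intercept and negative
# gradient, so the reflection grows more negative with offset.
r0, g = -0.15, -0.25                       # hypothetical values
amps = [shuey_two_term(r0, g, th) for th in (0.0, 10.0, 20.0, 30.0)]
```

    AVA stochastic inversion exploits exactly this angle dependence, inverting amplitude-versus-angle gathers for the elastic properties that control R0 and G.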

  9. SeaWiFS technical report series. Volume 4: An analysis of GAC sampling algorithms. A case study

    NASA Technical Reports Server (NTRS)

    Yeh, Eueng-Nan (Editor); Hooker, Stanford B. (Editor); Mccain, Charles R. (Editor); Fu, Gary (Editor)

    1992-01-01

The Sea-viewing Wide Field-of-view Sensor (SeaWiFS) instrument will sample at approximately a 1 km resolution at nadir; these data will be broadcast for reception by realtime ground stations. However, the global data set will comprise coarser four-kilometer data, which will be recorded and broadcast to the SeaWiFS Project for processing. Several algorithms for degrading the one-kilometer data to four-kilometer data are examined using imagery from the Coastal Zone Color Scanner (CZCS), in an effort to determine which algorithm best preserves the statistical characteristics of the derived products generated from the one-kilometer data. Of the algorithms tested, subsampling based on a fixed pixel within a 4 x 4 pixel array is judged to yield the most consistent results when compared to the one-kilometer data products.
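    The scheme judged most consistent, keeping one fixed pixel from each 4 x 4 block, is a one-liner; a sketch on a synthetic image (the offsets are arbitrary here):

```python
def gac_subsample(img, row_off=0, col_off=0):
    """Degrade 1 km data to 4 km GAC-style data by keeping one fixed
    pixel from every 4 x 4 block of the input image."""
    return [row[col_off::4] for row in img[row_off::4]]

# 16 x 16 synthetic image with pixel value r*16 + c, subsampled to 4 x 4.
img = [[r * 16 + c for c in range(16)] for r in range(16)]
sub = gac_subsample(img)
```

    Unlike block averaging, fixed-pixel subsampling preserves the pixel-value statistics of the full-resolution data, which is the property the study evaluates.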

  10. Evaluation of algorithms for estimating wheat acreage from multispectral scanner data. [Kansas and Texas

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F. (Principal Investigator); Richardson, W.; Pentland, A. P.

    1976-01-01

    The author has identified the following significant results. Fourteen different classification algorithms were tested for their ability to estimate the proportion of wheat in an area. For some algorithms, accuracy of classification in field centers was observed. The data base consisted of ground truth and LANDSAT data from 55 sections (1 x 1 mile) from five LACIE intensive test sites in Kansas and Texas. Signatures obtained from training fields selected at random from the ground truth were generally representative of the data distribution patterns. LIMMIX, an algorithm that chooses a pure signature when the data point is close enough to a signature mean and otherwise chooses the best mixture of a pair of signatures, reduced the average absolute error to 6.1% and the bias to 1.0%. QRULE run with a null test achieved a similar reduction.

  11. Ionospheric Asymmetry Evaluation using Tomography to Assess the Effectiveness of Radio Occultation Data Inversion

    NASA Astrophysics Data System (ADS)

    Shaikh, M. M.; Notarpietro, R.; Yin, P.; Nava, B.

    2013-12-01

The Multi-Instrument Data Analysis System (MIDAS) algorithm is based on oceanographic imaging techniques first applied to image 2D slices of the ionosphere. The first version of MIDAS (version 1.0) was able to deal with any line-integral data, such as GPS-ground or GPS-LEO differential-phase data or inverted ionograms. The current version extends the tomography into four-dimensional (latitude, longitude, height and time) spatial-temporal mapping that combines all observations simultaneously in a single inversion with a minimum of a priori assumptions about the form of the ionospheric electron-concentration distribution. This work investigates the assimilation of Radio Occultation (RO) data into MIDAS by assessing ionospheric asymmetry and its impact on RO data inversion when the onion-peeling algorithm is used. Ionospheric RO data from the COSMIC mission, specifically data collected during the 24 September 2011 storm over mid-latitudes, were used for the data assimilation. Using output electron density data from MIDAS (with and without RO assimilation) and ideal RO geometries, we assessed the ionospheric asymmetry. It was observed that the level of asymmetry increased significantly when the storm was active. This was due to the increased ionization, which in turn produced large gradients along the occulted ray paths in the ionosphere. The presence of larger gradients was better observed when MIDAS was used with RO-assimilated data. A very good correlation was found between the evaluated asymmetry and the errors in the inversion products when the inversion is performed with standard techniques based on the assumption of spherical symmetry of the ionosphere. Errors are evaluated considering the peak electron density (NmF2) estimate and the vertical TEC (VTEC) evaluation. 
This work highlights the importance of having a tool able to assess the effectiveness of Radio Occultation data inversion with standard algorithms, like onion-peeling, that are based on the ionospheric spherical symmetry assumption. The outcome of this work will help identify a better inversion algorithm that deals with ionospheric asymmetry in a more realistic way. This is foreseen as a task for future research. This work has been done under the framework of the TRANSMIT project (ITN Marie Curie Actions - GA No. 264476).
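    The onion-peeling inversion whose symmetry assumption is questioned above is easy to state: starting from the outermost tangent ray, subtract the contributions of already-solved shells and divide by the ray's chord in its own shell. A geometric sketch with hypothetical shell radii and densities (arbitrary units), verified by building slant TECs with the same geometry and peeling them back:

```python
import math

def onion_peel(radii, stec):
    """Onion-peeling retrieval of per-shell electron densities from slant
    TECs, assuming spherical symmetry. radii[i] is the tangent radius of
    ray i (ascending); shell i lies between radii[i] and radii[i+1].
    Chord of ray i through shell j (j >= i):
        2*(sqrt(r[j+1]^2 - r[i]^2) - sqrt(r[j]^2 - r[i]^2))."""
    n = len(radii) - 1
    ne = [0.0] * n
    for i in range(n - 1, -1, -1):         # outermost tangent ray first
        s = stec[i]
        for j in range(i + 1, n):          # subtract solved upper shells
            chord = 2.0 * (math.sqrt(radii[j + 1] ** 2 - radii[i] ** 2)
                           - math.sqrt(radii[j] ** 2 - radii[i] ** 2))
            s -= ne[j] * chord
        ne[i] = s / (2.0 * math.sqrt(radii[i + 1] ** 2 - radii[i] ** 2))
    return ne

# Self-consistency check (hypothetical geometry and densities):
radii = [6471.0, 6571.0, 6671.0, 6771.0]
true_ne = [10.0, 30.0, 20.0]
stec = []
for i in range(3):
    s = 0.0
    for j in range(i, 3):
        s += true_ne[j] * 2.0 * (math.sqrt(radii[j + 1] ** 2 - radii[i] ** 2)
                                 - math.sqrt(radii[j] ** 2 - radii[i] ** 2))
    stec.append(s)
rec = onion_peel(radii, stec)
```

    The inversion is exact only when the density really is a function of radius alone; horizontal gradients along the ray, as during the storm discussed above, violate that assumption and propagate into the retrieved profile.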

  12. Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Li, Y.

    2016-12-01

We present a 2D inversion algorithm for frequency-domain marine controlled-source electromagnetic (CSEM) data, based on the regularized Gauss-Newton approach. As a forward solver, our parallel adaptive finite element forward modeling program is employed. It is a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting that biases refinement towards elements that affect the solution at the EM receiver locations. With the use of a direct solver (MUMPS), we can efficiently compute the electromagnetic fields and parametric sensitivities for multiple sources. We also implement the parallel data-domain decomposition approach of Key and Ovall (2011), with the goal of being able to compute accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out using the Gauss-Newton algorithm, and model perturbations at each iteration step are obtained using the inexact conjugate gradient iteration method. Synthetic test inversions are presented.
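    The regularized Gauss-Newton update at the core of such an inversion can be sketched on a toy nonlinear model; the finite element forward solver is replaced by a hypothetical two-parameter function, and the inexact-CG step solve by a direct solve for brevity.

```python
import numpy as np

def gauss_newton(fwd, jac, d_obs, m0, lam=1e-3, iters=20):
    """Regularized Gauss-Newton: at each step solve
    (J^T J + lam I) dm = J^T (d_obs - f(m)) and update m <- m + dm."""
    m = np.array(m0, dtype=float)
    for _ in range(iters):
        r = d_obs - fwd(m)
        J = jac(m)
        dm = np.linalg.solve(J.T @ J + lam * np.eye(m.size), J.T @ r)
        m = m + dm
    return m

# Toy nonlinear forward model d_i = exp(-x_i * m0) + m1 * x_i.
x = np.linspace(0.0, 2.0, 15)
fwd = lambda m: np.exp(-x * m[0]) + m[1] * x
jac = lambda m: np.column_stack([-x * np.exp(-x * m[0]), x])
m_true = np.array([1.5, 0.7])
m_est = gauss_newton(fwd, jac, fwd(m_true), [1.0, 0.1])
```

    In a real CSEM inversion the Jacobian-vector products are the expensive part, which is why the paper computes sensitivities with the direct forward solver and solves the normal equations only inexactly.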

  13. Optimal aperture synthesis radar imaging

    NASA Astrophysics Data System (ADS)

    Hysell, D. L.; Chau, J. L.

    2006-03-01

    Aperture synthesis radar imaging has been used to investigate coherent backscatter from ionospheric plasma irregularities at Jicamarca and elsewhere for several years. Phenomena of interest include equatorial spread F, 150-km echoes, the equatorial electrojet, range-spread meteor trails, and mesospheric echoes. The sought-after images are related to spaced-receiver data mathematically through an integral transform, but direct inversion is generally impractical or suboptimal. We instead turn to statistical inverse theory, endeavoring to utilize fully all available information in the data inversion. The imaging algorithm used at Jicamarca is based on an implementation of the MaxEnt method developed for radio astronomy. Its strategy is to limit the space of candidate images to those that are positive definite, consistent with data to the degree required by experimental confidence limits; smooth (in some sense); and most representative of the class of possible solutions. The algorithm was improved recently by (1) incorporating the antenna radiation pattern in the prior probability and (2) estimating and including the full error covariance matrix in the constraints. The revised algorithm is evaluated using new 28-baseline electrojet data from Jicamarca.

  14. The Natural-CCD Algorithm, a Novel Method to Solve the Inverse Kinematics of Hyper-redundant and Soft Robots.

    PubMed

    Martín, Andrés; Barrientos, Antonio; Del Cerro, Jaime

    2018-03-22

This article presents a new method to solve the inverse kinematics problem of hyper-redundant and soft manipulators. From an engineering perspective, robots of this kind are underdetermined systems. Therefore, they exhibit an infinite number of solutions to the inverse kinematics problem, and choosing the best one can be a great challenge. A new algorithm based on cyclic coordinate descent (CCD), named natural-CCD, is proposed to solve this issue. It takes its name from the very harmonious robot movements and trajectories it generates, which also appear in nature, such as the golden spiral. In addition, it has been applied to perform continuous trajectories, to develop whole-body movements, to analyze motion planning in complex environments, and to study fault tolerance, for both prismatic and rotational joints. The proposed algorithm is very simple, precise, and computationally efficient. It works for robots in either two or three spatial dimensions and handles a large number of degrees of freedom. Because of this, it aims to break down barriers between discrete hyper-redundant and continuum soft robots.
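    Plain CCD, the starting point for natural-CCD, fits in a few lines for a planar chain: each joint in turn is rotated so that the end effector lines up with the target. A sketch with an arbitrary 6-link chain and target; the paper's refinements are not reproduced.

```python
import math

def fk(angles, lengths):
    """Forward kinematics of a planar serial chain: all joint positions."""
    x = y = th = 0.0
    pts = [(0.0, 0.0)]
    for a, L in zip(angles, lengths):
        th += a
        x += L * math.cos(th)
        y += L * math.sin(th)
        pts.append((x, y))
    return pts

def ccd_ik(angles, lengths, target, iters=200, tol=1e-6):
    """Plain cyclic coordinate descent: sweep joints from tip to base,
    rotating each joint to align the end effector with the target."""
    angles = list(angles)
    for _ in range(iters):
        for i in reversed(range(len(angles))):
            pts = fk(angles, lengths)
            jx, jy = pts[i]            # position of joint i
            ex, ey = pts[-1]           # current end-effector position
            angles[i] += (math.atan2(target[1] - jy, target[0] - jx)
                          - math.atan2(ey - jy, ex - jx))
        ex, ey = fk(angles, lengths)[-1]
        if math.hypot(ex - target[0], ey - target[1]) < tol:
            break
    return angles

lengths = [1.0] * 6                         # 6-link hyper-redundant chain
sol = ccd_ik([0.1] * 6, lengths, target=(3.0, 2.0))
end = fk(sol, lengths)[-1]
```

    Because the chain is redundant, many joint configurations reach the target; natural-CCD's contribution is choosing among them so that the resulting shapes are smooth and natural.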

  15. Determining the near-surface current profile from measurements of the wave dispersion relation

    NASA Astrophysics Data System (ADS)

    Smeltzer, Benjamin; Maxwell, Peter; Aesøy, Eirik; Ellingsen, Simen

    2017-11-01

The current-induced Doppler shifts of waves can yield information about the background mean flow, providing an attractive method of inferring the current profile in the upper layer of the ocean. We present measurements of waves propagating on shear currents in a laboratory water channel, as well as theoretical investigations of inversion techniques for determining the vertical current structure. Spatial and temporal measurements of the free surface profile obtained using a synthetic Schlieren method are analyzed to determine the wave dispersion relation and Doppler shifts as a function of wavelength. The vertical current profile can then be inferred from the Doppler shifts using an inversion algorithm. Most existing algorithms rely on a priori assumptions about the shape of the current profile; developing a method that uses less stringent assumptions is a focus of this study, allowing for the measurement of more general current profiles. The accuracy of current inversion algorithms is evaluated by comparison to measurements of the mean flow profile from particle image velocimetry (PIV), and a discussion of the sensitivity to errors in the Doppler shifts is presented.
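    A common forward map underlying such inversions is the Stewart and Joy weighted average: the Doppler shift at wavenumber k is an exponentially depth-weighted mean of the current U(z). A numerical sketch, checked against the analytic result for a linear shear profile (profile and numbers are hypothetical):

```python
import math

def doppler_shift(U, k, depth=50.0, n=20_000):
    """Stewart-Joy approximation of the current-induced Doppler shift at
    wavenumber k: 2k * integral_{-depth}^{0} U(z) * exp(2kz) dz,
    evaluated with the midpoint rule (deep-water waves, z <= 0)."""
    dz = depth / n
    total = 0.0
    for i in range(n):
        z = -(i + 0.5) * dz
        total += U(z) * math.exp(2.0 * k * z) * dz
    return 2.0 * k * total

S = 0.2                        # shear rate, 1/s (hypothetical)
k = 2.0 * math.pi / 1.0        # wavenumber for a 1 m wavelength
shift = doppler_shift(lambda z: S * z, k)
# analytic result for U(z) = S*z is -S/(2k)
```

    The weight 2k·exp(2kz) means short waves sample only the top fraction of a wavelength, which is why Doppler shifts measured across many wavelengths constrain the vertical structure of the current.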

  16. Extracting the differential inverse inelastic mean free path and differential surface excitation probability of Tungsten from X-ray photoelectron spectra and electron energy loss spectra

    NASA Astrophysics Data System (ADS)

    Afanas'ev, V. P.; Gryazev, A. S.; Efremenko, D. S.; Kaplya, P. S.; Kuznetcova, A. V.

    2017-12-01

Precise knowledge of the differential inverse inelastic mean free path (DIIMFP) and the differential surface excitation probability (DSEP) of tungsten is essential for many fields of material science. In this paper, a fitting algorithm is applied to extract the DIIMFP and DSEP from X-ray photoelectron spectra and electron energy loss spectra. The algorithm uses the partial intensity approach as a forward model, in which a spectrum is given as a weighted sum of cross-convolved DIIMFPs and DSEPs. The weights are obtained as solutions of the Riccati and Lyapunov equations derived from the invariant imbedding principle. The inversion algorithm utilizes a parametrization of DIIMFPs and DSEPs on the basis of a classical Lorentz oscillator. Unknown parameters of the model are found using a fitting procedure that minimizes the residual between measured spectra and forward simulations. It is found that the surface layer of tungsten contains several sublayers with corresponding Langmuir resonances. The thicknesses of these sublayers are proportional to the periods of the corresponding Langmuir oscillations, as predicted by the theory of R.H. Ritchie.
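    The Lorentz-oscillator parametrization enters through the energy-loss function Im(-1/ε). A sketch with a single hypothetical oscillator (not fitted tungsten parameters): the loss peak sits near the shifted plasmon energy sqrt(w0^2 + A), where the real part of ε crosses zero.

```python
def elf(omega, oscillators):
    """Energy-loss function Im(-1/eps) for a sum of classical Lorentz
    oscillators: eps(w) = 1 + sum A / (w0^2 - w^2 - 1j*g*w).
    Energies in eV; (A, w0, g) triples are (strength, position, width)."""
    eps = 1.0 + 0.0j
    for A, w0, g in oscillators:
        eps += A / (w0 ** 2 - omega ** 2 - 1j * g * omega)
    return (-1.0 / eps).imag

osc = [(300.0, 25.0, 5.0)]     # single hypothetical oscillator
peak_w = max(range(1, 60), key=lambda w: elf(float(w), osc))
```

    A fit such as the one described above adjusts several (A, w0, g) triples, separately for the bulk (DIIMFP) and surface (DSEP) terms, until the forward-simulated spectrum matches the measurement.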

  17. An algorithm for the split-feasibility problems with application to the split-equality problem.

    PubMed

    Chuang, Chih-Sheng; Chen, Chi-Ming

    2017-01-01

    In this paper, we study the split-feasibility problem in Hilbert spaces by using the projected reflected gradient algorithm. As applications, we study the convex linear inverse problem and the split-equality problem in Hilbert spaces, and we give new algorithms for these problems. Finally, numerical results are given for our main results.
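    A minimal sketch of the projected reflected gradient iteration applied to a split-feasibility instance (find x in C with Ax in Q), using the gradient of the proximity function 0.5*||Ax - P_Q(Ax)||^2; the sets, matrix and step size below are hypothetical choices, not the paper's examples.

```python
import numpy as np

def split_feasibility_prg(A, proj_c, proj_q, x0, lam, iters=3000):
    """Projected reflected gradient iteration for the split-feasibility
    problem, applied to F(x) = A^T (Ax - P_Q(Ax)):
        x_{k+1} = P_C(x_k - lam * F(2*x_k - x_{k-1}))."""
    x_prev = np.array(x0, dtype=float)
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        y = 2.0 * x - x_prev                  # reflected point
        grad = A.T @ (A @ y - proj_q(A @ y))
        x_prev, x = x, proj_c(x - lam * grad)
    return x

A = np.array([[1.0, 2.0], [3.0, 1.0]])
proj_c = lambda v: np.clip(v, 0.0, 1.0)   # C = [0, 1]^2
proj_q = lambda v: np.clip(v, 0.0, 2.0)   # Q = [0, 2]^2
x = split_feasibility_prg(A, proj_c, proj_q, [1.0, 1.0], lam=0.02)
```

    The reflected evaluation point 2x_k - x_{k-1} is what distinguishes this scheme from plain projected gradient; the step size must be kept small relative to the Lipschitz constant of F for convergence.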

  18. Non-recursive augmented Lagrangian algorithms for the forward and inverse dynamics of constrained flexible multibodies

    NASA Technical Reports Server (NTRS)

    Bayo, Eduardo; Ledesma, Ragnar

    1993-01-01

A technique is presented for solving the inverse dynamics of flexible planar multibody systems. This technique yields the non-causal joint efforts (inverse dynamics) as well as the internal states (inverse kinematics) that produce a prescribed nominal trajectory of the end effector. A non-recursive global Lagrangian approach is used in formulating the equations of motion as well as in solving the inverse dynamics equations. Contrary to the recursive method previously presented, the proposed method solves the inverse problem in a systematic and direct manner for both open-chain and closed-chain configurations. Numerical simulation shows that the proposed procedure provides excellent tracking of the desired end effector trajectory.

  19. Jump-and-return sandwiches: A new family of binomial-like selective inversion sequences with improved performance.

    PubMed

    Brenner, Tom; Chen, Johnny; Stait-Gardner, Tim; Zheng, Gang; Matsukawa, Shingo; Price, William S

    2018-03-01

A new family of binomial-like inversion sequences, named jump-and-return sandwiches (JRS), has been developed by inserting a binomial-like sequence into a standard jump-and-return sequence, discovered through the use of a stochastic genetic algorithm optimisation. Compared to currently used binomial-like inversion sequences (e.g., 3-9-19 and W5), the new sequences afford wider inversion bands and narrower non-inversion bands with an equal number of pulses. As an example, two 10-pulse jump-and-return sandwich sequences achieved 95% inversion at offsets corresponding to 9.4% and 10.3% of the non-inversion band spacing, compared to 14.7% for the binomial-like W5 inversion sequence, i.e., they afforded non-inversion bands about two thirds the width of the W5 non-inversion band. Copyright © 2018 Elsevier Inc. All rights reserved.
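    The inversion/non-inversion band behaviour of binomial-like sequences can be reproduced with a small Bloch rotation model: hard x-pulses separated by free precession about z. The sketch below uses a simple 1-3-3-1 pattern with alternating signs (flip angles chosen for illustration, not the paper's JRS sequences): on resonance the flips cancel, leaving the magnetization untouched, while at an offset of 1/(2*tau) they add to 180 degrees and invert it.

```python
import math

def rx(a):
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def rz(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply(R, v):
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def mz_after(flips_deg, tau, offset_hz):
    """Longitudinal magnetization after a train of hard x-pulses
    separated by free precession through 2*pi*offset*tau about z."""
    v = [0.0, 0.0, 1.0]
    for n, f in enumerate(flips_deg):
        if n:
            v = apply(rz(2.0 * math.pi * offset_hz * tau), v)
        v = apply(rx(math.radians(f)), v)
    return v[2]

# 1-3-3-1 binomial with alternating signs (illustrative only):
flips = [22.5, -67.5, 67.5, -22.5]
tau = 1e-3                       # s; inversion band centre at 500 Hz
on = mz_after(flips, tau, 0.0)   # non-inversion band: Mz stays +1
off = mz_after(flips, tau, 500.0)  # inversion band: Mz flips to -1
```

    Sweeping the offset in such a simulation traces out the inversion profile against which sequences like W5 and the JRS family are compared.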

  20. THE SUCCESSIVE LINEAR ESTIMATOR: A REVISIT. (R827114)

    EPA Science Inventory

    This paper examines the theoretical basis of the successive linear estimator (SLE) that has been developed for the inverse problem in subsurface hydrology. We show that the SLE algorithm is a non-linear iterative estimator to the inverse problem. The weights used in the SLE al...

  1. Spectral fitting inversion of low-frequency normal modes with self-coupling and cross-coupling of toroidal and spheroidal multiplets: numerical experiments to estimate the isotropic and anisotropic velocity structures

    NASA Astrophysics Data System (ADS)

    Oda, Hitoshi

    2016-06-01

The aspherical structure of the Earth is described in terms of lateral heterogeneity and anisotropy of the P- and S-wave velocities, density heterogeneity, ellipticity and rotation of the Earth, and undulation of the discontinuity interfaces of the seismic wave velocities. This structure significantly influences the normal mode spectra of the Earth's free oscillation in the form of cross-coupling between toroidal and spheroidal multiplets and self-coupling between the singlets forming them. Thus, the aspherical structure must conversely be estimated from the free oscillation spectra influenced by the cross-coupling and self-coupling. In the present study, we improve a spectral fitting inversion algorithm, developed in a previous study, to retrieve the global structures of the isotropic and anisotropic velocities of the P and S waves from the free oscillation spectra. The main improvement is that the geographical distribution of the intensity of the S-wave azimuthal anisotropy is represented by a nonlinear combination of structure coefficients for the anisotropic velocity structure, whereas in the previous study it was expanded into a generalized spherical harmonic series. Consequently, the improved inversion algorithm reduces the number of unknown parameters that must be determined compared with the previous inversion algorithm, and employs a one-step inversion method by which the structure coefficients for the isotropic and anisotropic velocities are directly estimated from the free oscillation spectra. The applicability of the improved inversion is examined by several numerical experiments using synthetic spectral data, which are produced by supposing a variety of isotropic and anisotropic velocity structures, earthquake source parameters and station-event pairs. 
Furthermore, the robustness of the inversion algorithm is investigated with respect to the back-ground noise contaminating the spectral data as well as truncating the series expansions by finite terms to represent the three-dimensional velocity structures. As a result, it is shown that the improved inversion can estimate not only the isotropic and anisotropic velocity structures but also the depth extent of the anisotropic regions in the Earth. In particular, the cross-coupling modes are essential to correctly estimate the isotropic and anisotropic velocity structures from the normal mode spectra. In addition, we argue that the effect of the seismic anisotropy is not negligible when estimating only the isotropic velocity structure from the spheroidal mode spectra.
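The spectral fitting step at the heart of such an inversion reduces to minimizing a misfit between observed and synthetic mode spectra over the structure coefficients. A deliberately simplified sketch (one Lorentzian resonance whose centre-frequency shift stands in for a single structure coefficient; all names and numbers are invented, and a grid scan replaces the real iterative inversion):

```python
def lorentzian(f, f0, gamma):
    """Resonance peak amplitude; a crude stand-in for a normal-mode spectrum."""
    return (gamma / 2.0) ** 2 / ((f - f0) ** 2 + (gamma / 2.0) ** 2)

# "observed" spectrum: mode centre shifted by a structure coefficient c_true
f0_ref, gamma, c_true = 1.50, 0.02, 0.013   # mHz-scale numbers, illustrative
freqs = [1.40 + 0.001 * i for i in range(200)]
observed = [lorentzian(f, f0_ref + c_true, gamma) for f in freqs]

def misfit(c):
    """Sum-of-squares difference between synthetic and observed spectra."""
    return sum((lorentzian(f, f0_ref + c, gamma) - o) ** 2
               for f, o in zip(freqs, observed))

# grid scan over the coefficient; the study uses an iterative inversion instead
best_c = min((0.0001 * k for k in range(-300, 301)), key=misfit)
print(best_c)  # recovers a value near 0.013
```

The real algorithm fits many coupled multiplets simultaneously and handles the nonlinear dependence of the anisotropy intensity on the coefficients; the point here is only the misfit-minimization structure.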

  2. A dedicated breast-PET/CT scanner: Evaluation of basic performance characteristics.

    PubMed

    Raylman, Raymond R; Van Kampen, Will; Stolin, Alexander V; Gong, Wenbo; Jaliparthi, Gangadhar; Martone, Peter F; Smith, Mark F; Sarment, David; Clinthorne, Neal H; Perna, Mark

    2018-04-01

    Application of advanced imaging techniques, such as PET and x-ray CT, can potentially improve detection of breast cancer. Unfortunately, both modalities have challenges in the detection of some lesions. The combination of the two techniques, however, could potentially lead to an overall improvement in diagnostic breast imaging. The purpose of this investigation is to test the basic performance of a new dedicated breast-PET/CT. The PET component consists of a rotating pair of detectors. Its performance was evaluated using the NEMA NU4-2008 protocols. The CT component utilizes a pulsed x-ray source and flat panel detector mounted on the same gantry as the PET scanner. Its performance was assessed using specialized phantoms. The radiation dose to a breast during CT imaging was explored by the measurement of free-in-air kerma and air kerma measured at the center of a 16 cm-diameter PMMA cylinder. Finally, the combined capabilities of the system were demonstrated by imaging of a micro-hot-rod phantom. Overall, performance of the PET component is comparable to many pre-clinical and other dedicated breast-PET scanners. Its spatial resolution is 2.2 mm at 5 mm from the center of the scanner, using images created with the single-slice filtered-backprojection algorithm. Peak NECR is 24.6 kcps; peak sensitivity is 1.36%; the scatter fraction is 27%. Spatial resolution of the CT scanner is 1.1 lp/mm at 10% MTF. The free-in-air kerma is 2.33 mGy, while the PMMA-air kerma is 1.24 mGy. Finally, combined imaging of a micro-hot-rod phantom illustrated the potential utility of the dual-modality images produced by the system. The basic performance characteristics of a new dedicated breast-PET/CT scanner are good, demonstrating that its performance is similar to current dedicated PET and CT scanners. The potential value of this system is the capability to produce combined dual-modality images that could improve detection of breast disease. The next stage in development of this system is testing with more advanced phantoms and human subjects. © 2018 American Association of Physicists in Medicine.
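For context on the count-rate figure quoted above: the noise-equivalent count rate combines trues (T), scatters (S) and randoms (R). One common form is NEC = T^2 / (T + S + R), though NEMA variants weight the randoms term differently. A sketch with invented rates, not the paper's measured data:

```python
def necr(trues, scatters, randoms):
    """Noise-equivalent count rate: NEC = T^2 / (T + S + R).
    One common form; NEMA NU definitions may weight randoms differently."""
    total = trues + scatters + randoms
    return trues ** 2 / total if total else 0.0

# illustrative rates in kcps (invented for this sketch)
print(necr(40.0, 15.0, 10.0))  # 1600 / 65, about 24.6 kcps
```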

  3. SU-E-I-17: Evaluation of Commercially Available Extension Plates for the ACR CT Accreditation Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greene-Donnelly, K; Ogden, K

    Purpose: To evaluate the impact of commercially available extension plates on Hounsfield Unit (HU) values in the ACR CT accreditation phantom (Model 464, Gammex Inc., Middleton, WI). The extension plates are intended to improve water HU values in scanners where the traditional solution involves scanning the phantom with an adjacent water or CTDI phantom. Methods: The Model 464 phantom was scanned on 9 different CT scanners at 8 separate sites representing 16 and 64 slice MDCT technology from four CT manufacturers. The phantom was scanned with and without the extension plates (Gammex 464 EXTPLT-KIT) in helical and axial modes. A water phantom was also scanned to verify water HU calibration. Technique was 120 kV tube potential, 350 mAs, and 210 mm display field of view. Slice thickness and reconstruction algorithm were based on site clinical protocols. The widest available beam collimation was used. Regions of interest were drawn on the HU test objects in Module 1 of the phantom and mean values recorded. Results: For all axial mode scans, water HU values were within limits with or without the extension plates. For two scanners (both Lightspeed VCT, GE Medical Systems, Waukesha, WI), axial mode bone HU values were above the specified range both with and without the extension plates, though they were closer to the specified range with the plates installed. In helical scan mode, two scanners (both GE Lightspeed VCT) had water HU values above the specified range without the plates installed. With the plates installed, the water HU values were within range for all scanners in all scan modes. Conclusion: Using the plates, the Lightspeed VCT scanners passed the water HU test when scanning in helical mode. The benefit of the extension plates was evident in helical mode scanning with GE scanners using a nominal 4 cm beam. Disclosure: The extension plates evaluated in this work were provided free of charge to the authors. The authors have no other financial interest in Gammex Inc.
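A site-level check like the one described reduces to comparing ROI mean HU values against the phantom's acceptance windows. A sketch using approximate published ACR Module 1 ranges (verify against the current ACR manual before relying on them) and invented measurements:

```python
# approximate ACR CT accreditation Module 1 HU acceptance windows
# (check the current ACR testing instructions before use)
limits = {
    "water": (-7, 7),
    "air": (-1005, -970),
    "acrylic": (110, 135),
    "bone": (850, 970),
    "polyethylene": (-107, -84),
}

def check_hu(measured):
    """Return the materials whose mean ROI HU falls outside the window."""
    return {m: hu for m, hu in measured.items()
            if not (limits[m][0] <= hu <= limits[m][1])}

# invented ROI means for one scan; bone above range, as seen on some scanners
scan = {"water": 2.1, "air": -998.0, "acrylic": 121.0,
        "bone": 982.0, "polyethylene": -95.0}
print(check_hu(scan))  # flags only 'bone'
```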

  4. Impact of detector design on imaging performance of a long axial field-of-view, whole-body PET scanner

    NASA Astrophysics Data System (ADS)

    Surti, S.; Karp, J. S.

    2015-07-01

    The current generation of commercial time-of-flight (TOF) PET scanners utilizes 20-25 mm thick LSO or LYSO crystals and has an axial FOV (AFOV) in the range of 16-22 cm. Longer AFOV scanners would provide increased intrinsic sensitivity and require fewer bed positions for whole-body imaging. Recent simulation work has investigated the sensitivity gains that can be achieved with these long AFOV scanners, and has motivated new areas of investigation such as imaging with a very low dose of injected activity as well as providing whole-body dynamic imaging capability in one bed position. In this simulation work we model a 72 cm long scanner and prioritize the detector design choices in terms of timing resolution, crystal size (spatial resolution), crystal thickness (detector sensitivity), and depth-of-interaction (DOI) measurement capability. The generated list data are reconstructed with a list-mode OSEM algorithm using a Gaussian TOF kernel that depends on the timing resolution and blob basis functions for regularization. We use lesion phantoms and clinically relevant metrics for lesion detectability and contrast measurement. The scan time was fixed at 10 min for imaging a 100 cm long object assuming a 50% overlap between adjacent bed positions. Results show that a 72 cm long scanner can provide a factor of ten reduction in injected activity compared to an identical 18 cm long scanner for equivalent lesion detectability. While improved timing resolution leads to further gains, using 3 mm (as opposed to 4 mm) wide crystals does not show any significant benefit for lesion detectability. A detector providing 2-level DOI information with equal crystal thickness also does not show significant gains. Finally, a 15 mm thick crystal leads to lower lesion detectability than a 20 mm thick crystal when keeping all other detector parameters (crystal width, timing resolution, and DOI capability) the same. However, improved timing performance with 15 mm thick crystals can provide similar or better performance than that achieved by a detector using 20 mm thick crystals.
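The benefit of better timing resolution in such trade-off studies comes from localizing each event along its line of response, dx = c * dt / 2. A small sketch (the timing values are illustrative, not the paper's simulated detectors):

```python
C_MM_PER_PS = 0.299792458  # speed of light in mm/ps

def tof_fwhm_mm(timing_fwhm_ps):
    """Localization uncertainty along the line of response: dx = c * dt / 2."""
    return C_MM_PER_PS * timing_fwhm_ps / 2.0

# plausible coincidence timing resolutions, in picoseconds (invented values)
for dt in (600, 400, 300, 200):
    print(dt, "ps ->", round(tof_fwhm_mm(dt), 1), "mm")
```

Halving the timing resolution halves the positional uncertainty, which is why timing improvements can offset thinner (less sensitive) crystals in the study's comparisons.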

  5. SimDoseCT: dose reporting software based on Monte Carlo simulation for a 320 detector-row cone-beam CT scanner and ICRP computational adult phantoms

    NASA Astrophysics Data System (ADS)

    Cros, Maria; Joemai, Raoul M. S.; Geleijns, Jacob; Molina, Diego; Salvadó, Marçal

    2017-08-01

    This study aims to develop and test software for assessing and reporting doses for standard patients undergoing computed tomography (CT) examinations in a 320 detector-row cone-beam scanner. The software, called SimDoseCT, is based on the Monte Carlo (MC) simulation code, which was developed to calculate organ doses and effective doses in ICRP anthropomorphic adult reference computational phantoms for acquisitions with the Aquilion ONE CT scanner (Toshiba). MC simulation was validated by comparing CTDI measurements within standard CT dose phantoms with results from simulation under the same conditions. SimDoseCT consists of a graphical user interface connected to a MySQL database, which contains the look-up-tables that were generated with MC simulations for volumetric acquisitions at different scan positions along the phantom using any tube voltage, bow tie filter, focal spot and nine different beam widths. Two different methods were developed to estimate organ doses and effective doses from acquisitions using other available beam widths in the scanner. A correction factor was used to estimate doses in helical acquisitions. Hence, the user can select any available protocol in the Aquilion ONE scanner for a standard adult male or female and obtain the dose results through the software interface. Agreement within 9% between CTDI measurements and simulations allowed the validation of the MC program. Additionally, the algorithm for dose reporting in SimDoseCT was validated by comparing dose results from this tool with those obtained from MC simulations for three volumetric acquisitions (head, thorax and abdomen). The comparison was repeated using eight different collimations and also for another collimation in a helical abdomen examination. The results showed differences of 0.1 mSv or less for absolute dose in most organs and also in the effective dose calculation. 
The software provides a suitable tool for dose assessment in standard adult patients undergoing CT examinations in a 320 detector-row cone-beam scanner.

  6. SimDoseCT: dose reporting software based on Monte Carlo simulation for a 320 detector-row cone-beam CT scanner and ICRP computational adult phantoms.

    PubMed

    Cros, Maria; Joemai, Raoul M S; Geleijns, Jacob; Molina, Diego; Salvadó, Marçal

    2017-07-17

    This study aims to develop and test software for assessing and reporting doses for standard patients undergoing computed tomography (CT) examinations in a 320 detector-row cone-beam scanner. The software, called SimDoseCT, is based on the Monte Carlo (MC) simulation code, which was developed to calculate organ doses and effective doses in ICRP anthropomorphic adult reference computational phantoms for acquisitions with the Aquilion ONE CT scanner (Toshiba). MC simulation was validated by comparing CTDI measurements within standard CT dose phantoms with results from simulation under the same conditions. SimDoseCT consists of a graphical user interface connected to a MySQL database, which contains the look-up-tables that were generated with MC simulations for volumetric acquisitions at different scan positions along the phantom using any tube voltage, bow tie filter, focal spot and nine different beam widths. Two different methods were developed to estimate organ doses and effective doses from acquisitions using other available beam widths in the scanner. A correction factor was used to estimate doses in helical acquisitions. Hence, the user can select any available protocol in the Aquilion ONE scanner for a standard adult male or female and obtain the dose results through the software interface. Agreement within 9% between CTDI measurements and simulations allowed the validation of the MC program. Additionally, the algorithm for dose reporting in SimDoseCT was validated by comparing dose results from this tool with those obtained from MC simulations for three volumetric acquisitions (head, thorax and abdomen). The comparison was repeated using eight different collimations and also for another collimation in a helical abdomen examination. The results showed differences of 0.1 mSv or less for absolute dose in most organs and also in the effective dose calculation. 
The software provides a suitable tool for dose assessment in standard adult patients undergoing CT examinations in a 320 detector-row cone-beam scanner.
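The look-up-table scheme described can be caricatured as interpolation over tabulated dose coefficients plus a multiplicative helical correction. The single-organ table, the values, and the correction factor below are all invented for illustration:

```python
# hypothetical look-up table: organ dose per 100 mAs (mGy), keyed by nominal
# beam width in mm; structure and values invented for illustration only
lut_liver = {40: 1.8, 80: 2.6, 160: 4.1}

def organ_dose(beam_width_mm, mAs, helical=False, helical_cf=1.05):
    """Linearly interpolate the LUT between tabulated beam widths, scale by
    tube loading, and apply a correction factor for helical acquisitions."""
    widths = sorted(lut_liver)
    lo = max(w for w in widths if w <= beam_width_mm)
    hi = min(w for w in widths if w >= beam_width_mm)
    if lo == hi:
        base = lut_liver[lo]
    else:
        t = (beam_width_mm - lo) / (hi - lo)
        base = lut_liver[lo] + t * (lut_liver[hi] - lut_liver[lo])
    dose = base * mAs / 100.0
    return dose * helical_cf if helical else dose

print(organ_dose(60, 200))                # volumetric, interpolated beam width
print(organ_dose(60, 200, helical=True))  # same acquisition, helical correction
```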

  7. An algorithm for hyperspectral remote sensing of aerosols: 1. Development of theoretical framework

    NASA Astrophysics Data System (ADS)

    Hou, Weizhen; Wang, Jun; Xu, Xiaoguang; Reid, Jeffrey S.; Han, Dong

    2016-07-01

    This paper describes the first part of a series of investigations to develop algorithms for simultaneous retrieval of aerosol parameters and surface reflectance from a newly developed hyperspectral instrument, the GEOstationary Trace gas and Aerosol Sensor Optimization (GEO-TASO), by taking full advantage of available hyperspectral measurement information in the visible bands. We describe the theoretical framework of an inversion algorithm for the hyperspectral remote sensing of aerosol optical properties, in which the major principal components (PCs) for surface reflectance are assumed known, and the spectrally dependent aerosol refractive indices are assumed to follow a power-law approximation with four unknown parameters (two for the real and two for the imaginary part of the refractive index). New capabilities for computing the Jacobians of the four Stokes parameters of reflected solar radiation at the top of the atmosphere with respect to these unknown aerosol parameters and the weighting coefficients for each PC of surface reflectance are added into the UNified Linearized Vector Radiative Transfer Model (UNL-VRTM), which in turn facilitates the optimization in the inversion process. Theoretical derivations of the formulas for these new capabilities are provided, and the analytical solutions of the Jacobians are validated against finite-difference calculations with relative errors of less than 0.2%. Finally, a self-consistency check of the inversion algorithm is conducted for the idealized green-vegetation and rangeland surfaces that were spectrally characterized by the U.S. Geological Survey digital spectral library. It shows that the first six PCs can yield a reconstruction of spectral surface reflectance with errors of less than 1%. Assuming that aerosol properties can be accurately characterized, the inversion yields a retrieval of hyperspectral surface reflectance with an uncertainty of 2% (and root-mean-square error of less than 0.003), which suggests self-consistency in the inversion framework. The next step of using this framework to study the aerosol information content in GEO-TASO measurements is also discussed.
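Two ingredients of the retrieval, the power-law spectral model for the refractive index (four unknowns) and the principal-component expansion of surface reflectance, can be sketched as follows. The exact parameterization, the reference wavelength, and all numbers are assumptions for illustration:

```python
def refractive_index(wl_um, a_r, b_r, a_i, b_i, wl0_um=0.55):
    """Power-law spectral dependence with four unknowns (two for the real part,
    two for the imaginary part); this specific functional form is illustrative."""
    s = wl_um / wl0_um
    return a_r * s ** b_r, a_i * s ** b_i

def surface_reflectance(weights, pcs):
    """Reconstruct spectral surface reflectance from a few principal
    components: r(lambda) = sum_k w_k * PC_k(lambda)."""
    return [sum(w * pc[i] for w, pc in zip(weights, pcs))
            for i in range(len(pcs[0]))]

# invented toy numbers: two PCs sampled at four wavelengths
pcs = [[0.9, 0.8, 0.7, 0.6], [0.1, -0.1, 0.2, -0.2]]
print(surface_reflectance([0.5, 1.0], pcs))
print(refractive_index(0.47, 1.5, -0.02, 0.005, 0.3))
```

In the actual framework the weights w_k and the four refractive-index parameters are the unknowns the optimizer adjusts, with Jacobians supplied by UNL-VRTM.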

  8. Satellite on-board processing for earth resources data

    NASA Technical Reports Server (NTRS)

    Bodenheimer, R. E.; Gonzalez, R. C.; Gupta, J. N.; Hwang, K.; Rochelle, R. W.; Wilson, J. B.; Wintz, P. A.

    1975-01-01

    Results of a survey of earth resources user applications and their data requirements, earth resources multispectral scanner sensor technology, and preprocessing algorithms for correcting the sensor outputs and for data bulk reduction are presented, along with a candidate data format. The computational requirements for implementing the data analysis algorithms are included, along with a review of computer architectures and organizations. Computer architectures capable of handling the algorithm computational requirements are suggested, and the environmental effects of an on-board processor are discussed. By relating performance parameters to the system requirements of each user application, the feasibility of on-board processing is determined for each user. A tradeoff analysis is performed to determine the sensitivity of results to each of the system parameters. Significant results and conclusions are discussed, and recommendations are presented.

  9. Towards operational multisensor registration

    NASA Technical Reports Server (NTRS)

    Rignot, Eric J. M.; Kwok, Ronald; Curlander, John C.

    1991-01-01

    To use data from a number of different remote sensors in a synergistic manner, a multidimensional analysis of the data is necessary. However, prior to this analysis, processing to correct for the systematic geometric distortion characteristic of each sensor is required. Furthermore, the registration process must be fully automated to handle a large volume of data and high data rates. A conceptual approach towards an operational multisensor registration algorithm is presented. The performance requirements of the algorithm are first formulated given the spatially, temporally, and spectrally varying factors that influence the image characteristics and the science requirements of various applications. Several registration techniques that fit within the structure of this algorithm are also presented. Their performance was evaluated using a multisensor test data set assembled from LANDSAT TM, SEASAT, SIR-B, Thermal Infrared Multispectral Scanner (TIMS), and SPOT sensors.

  10. Ability of Magnetic Resonance Elastography to Assess Taut Bands

    PubMed Central

    Chen, Qingshan; Basford, Jeffery; An, Kai-Nan

    2008-01-01

    Background Myofascial taut bands are central to the diagnosis of myofascial pain. Despite their importance, we still lack either a laboratory test or an imaging technique capable of objectively confirming their nature or location. This study explores the ability of magnetic resonance elastography to localize and investigate the mechanical properties of myofascial taut bands on the basis of their effects on shear wave propagation. Methods This study was conducted in three phases. The first involved the imaging of taut bands in gel phantoms, the second a finite element modeling of the phantom experiment, and the third a preliminary evaluation involving eight human subjects, four of whom had, and four of whom did not have, myofascial pain. Experiments were performed with a 1.5 Tesla magnetic resonance imaging scanner. Shear wave propagation was imaged and shear stiffness was reconstructed using matched filtering stiffness inversion algorithms. Findings The gel phantom imaging and finite element calculation experiments supported our hypothesis that taut bands can be imaged on the basis of their elevated shear stiffness. The preliminary human study showed a statistically significant 50-100% (p=0.01) increase in shear stiffness in the taut band regions of the involved subjects relative to that of the controls or of nearby uninvolved muscle. Interpretation This study suggests that magnetic resonance elastography may have potential for objectively characterizing myofascial taut bands, which until now have been detectable only by the clinician's fingers. PMID:18206282
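The stiffness reconstruction rests on the relation between the local shear wavelength and the shear modulus, mu = rho * (f * lambda)^2. A sketch with muscle-like but invented numbers, showing how a modest wavelength difference yields a stiffness increase in the 50-100% range reported:

```python
def shear_stiffness_kpa(rho_kg_m3, freq_hz, wavelength_m):
    """Shear stiffness mu = rho * (f * lambda)^2, reported in kPa; elastography
    inversions estimate the local wavelength from the imaged wave field."""
    return rho_kg_m3 * (freq_hz * wavelength_m) ** 2 / 1000.0

# illustrative: ~1000 kg/m^3 tissue, 100 Hz excitation; a taut band shows a
# longer local wavelength than the surrounding muscle
normal = shear_stiffness_kpa(1000.0, 100.0, 0.025)
taut = shear_stiffness_kpa(1000.0, 100.0, 0.035)
print(normal, taut)  # taut region about 96% stiffer with these numbers
```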

  11. Feasibility of conductivity imaging using subject eddy currents induced by switching of MRI gradients.

    PubMed

    Oran, Omer Faruk; Ider, Yusuf Ziya

    2017-05-01

    To investigate the feasibility of low-frequency conductivity imaging based on measuring the magnetic field due to subject eddy currents induced by switching of MRI z-gradients. We developed a simulation model for calculating subject eddy currents and the magnetic fields they generate (subject eddy fields). The inverse problem of obtaining the conductivity distribution from subject eddy fields was formulated as a convection-reaction partial differential equation. For measuring subject eddy fields, a modified spin-echo pulse sequence was used to determine the contribution of subject eddy fields to MR phase images. In the simulations, successful conductivity reconstructions were obtained by solving the derived convection-reaction equation, suggesting that the proposed reconstruction algorithm performs well under ideal conditions. However, the level of the calculated phase due to the subject eddy field in a representative object indicates that this phase is below the noise level and cannot be measured with an uncertainty sufficiently low for accurate conductivity reconstruction. Furthermore, some artifacts other than random noise were observed in the measured phases, which are discussed in relation to the effects of system imperfections during readout. Low-frequency conductivity imaging does not seem feasible using basic pulse sequences such as spin-echo on a clinical MRI scanner. Magn Reson Med 77:1926-1937, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  12. An Efficient and Accurate Genetic Algorithm for Backcalculation of Flexible Pavement Layer Moduli : Executive Summary Report

    DOT National Transportation Integrated Search

    2012-12-01

    Backcalculation of pavement moduli has been an intensively researched subject for more than four decades. Despite the existence of many backcalculation programs employing different backcalculation procedures and algorithms, accurate inverse of the la...

  13. Optimization and performance evaluation of the microPET II scanner for in vivo small-animal imaging

    NASA Astrophysics Data System (ADS)

    Yang, Yongfeng; Tai, Yuan-Chuan; Siegel, Stefan; Newport, Danny F.; Bai, Bing; Li, Quanzheng; Leahy, Richard M.; Cherry, Simon R.

    2004-06-01

    MicroPET II is a newly developed PET (positron emission tomography) scanner designed for high-resolution imaging of small animals. It consists of 17 640 LSO crystals each measuring 0.975 × 0.975 × 12.5 mm3, which are arranged in 42 contiguous rings, with 420 crystals per ring. The scanner has an axial field of view (FOV) of 4.9 cm and a transaxial FOV of 8.5 cm. The purpose of this study was to carefully evaluate the performance of the system and to optimize settings for in vivo mouse and rat imaging studies. The volumetric image resolution was found to depend strongly on the reconstruction algorithm employed and averaged 1.1 mm (1.4 µl) across the central 3 cm of the transaxial FOV when using a statistical reconstruction algorithm with accurate system modelling. The sensitivity, scatter fraction and noise-equivalent count (NEC) rate for mouse- and rat-sized phantoms were measured for different energy and timing windows. Mouse imaging was optimized with a wide open energy window (150-750 keV) and a 10 ns timing window, leading to a sensitivity of 3.3% at the centre of the FOV and a peak NEC rate of 235 000 cps for a total activity of 80 MBq (2.2 mCi) in the phantom. Rat imaging, due to the higher scatter fraction, and the activity that lies outside of the field of view, achieved a maximum NEC rate of 24 600 cps for a total activity of 80 MBq (2.2 mCi) in the phantom, with an energy window of 250-750 keV and a 6 ns timing window. The sensitivity at the centre of the FOV for these settings is 2.1%. This work demonstrates that different scanner settings are necessary to optimize the NEC count rate for different-sized animals and different injected doses. Finally, phantom and in vivo animal studies are presented to demonstrate the capabilities of microPET II for small-animal imaging studies.

  14. Transform Decoding of Reed-Solomon Codes. Volume I. Algorithm and Signal Processing Structure

    DTIC Science & Technology

    1982-11-01

    systematic channel code. 1. Take the inverse transform of the received sequence. 2. Isolate the error syndrome from the inverse transform and use... inverse transform is identical with interpolation of the polynomial a(z) from its n values. In order to generate a Reed-Solomon (n,k) code, we let the set... in accordance with the transform of equation (4). If we were to apply the inverse transform of equation (6) to the coefficient sequence of A(z), we
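The snippet's central identity, that the finite-field inverse transform coincides with interpolation of the polynomial from its n values, can be demonstrated over a small prime field. The (6, 2) code over GF(7) below is a toy choice, not the report's parameters:

```python
P, G, N, K = 7, 3, 6, 2   # GF(7); 3 is a primitive root of order 6; (6,2) code

def transform(seq, g):
    """Finite-field DFT: evaluate the polynomial 'seq' at g^0 .. g^(N-1) mod P."""
    return [sum(c * pow(g, i * j, P) for j, c in enumerate(seq)) % P
            for i in range(N)]

def inverse_transform(vals):
    """Inverse DFT, i.e. interpolation: a_j = N^{-1} * sum_i vals_i * g^{-ij}."""
    n_inv = pow(N, -1, P)
    g_inv = pow(G, -1, P)
    return [(n_inv * sum(v * pow(g_inv, i * j, P) for i, v in enumerate(vals))) % P
            for j in range(N)]

msg = [2, 5] + [0] * (N - K)     # message as low-order polynomial coefficients
codeword = transform(msg, G)     # encoding by polynomial evaluation
assert inverse_transform(codeword) == msg   # interpolation recovers the message
print(codeword)
```

Because only K coefficients of the message polynomial are nonzero, any K of the N codeword values determine it, which is the redundancy a decoder exploits.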

  15. Modeling the Volcanic Source at Long Valley, CA, Using a Genetic Algorithm Technique

    NASA Technical Reports Server (NTRS)

    Tiampo, Kristy F.

    1999-01-01

    In this project, we attempted to model the deformation pattern due to the magmatic source at Long Valley caldera using a real-coded genetic algorithm (GA) inversion similar to that found in Michalewicz (1992). The project has been both successful and rewarding. The genetic algorithm, coded in the C programming language, performs stable inversions over repeated trials with varying initial and boundary conditions. The original model used a GA in which the geophysical information was coded into the fitness function through the computation of surface displacements for a Mogi point source in an elastic half-space. The program was designed to invert for a spherical magmatic source - its depth, horizontal location and volume - using the known surface deformations. It also included the capability of inverting for multiple sources.
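The essence of such a GA inversion is a forward model embedded in a fitness function. The kernel below folds the elastic constants and source volume into a single strength parameter (so it is Mogi-like rather than the exact solution), and a deterministic grid search stands in for the genetic algorithm:

```python
def uz(r_km, depth_km, strength):
    """Vertical surface displacement of a point pressure source, proportional
    to d / R^3 with R^2 = d^2 + r^2; elastic constants and source volume are
    folded into 'strength' (Mogi-like kernel, illustrative only)."""
    return strength * depth_km / (depth_km ** 2 + r_km ** 2) ** 1.5

radii = [0.5, 1.0, 2.0, 4.0, 8.0]    # station offsets from the epicentre, km
d_true, s_true = 5.0, 100.0          # synthetic "magma chamber" parameters
data = [uz(r, d_true, s_true) for r in radii]

def fitness(ind):
    """GA-style fitness: negative sum of squared displacement residuals."""
    depth, strength = ind
    return -sum((uz(r, depth, strength) - o) ** 2 for r, o in zip(radii, data))

# a coarse grid search stands in here for the real-coded GA of the study
candidates = [(d / 2.0, float(s)) for d in range(2, 21) for s in range(50, 201, 10)]
best = max(candidates, key=fitness)
print(best)  # recovers (5.0, 100.0) on this grid
```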

  16. A dynamical regularization algorithm for solving inverse source problems of elliptic partial differential equations

    NASA Astrophysics Data System (ADS)

    Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten

    2018-06-01

    This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined from additional boundary conditions. Unlike the existing methods found in the literature, which usually employ a first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamically selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
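The second-order in time dissipative system can be caricatured in one dimension as x'' + eta*x' = -J'(x), integrated with a semi-implicit (symplectic-style) Euler step. The quadratic objective and all constants below are invented, and the real method additionally evolves the regularization parameter dynamically:

```python
def grad_J(x):
    """Gradient of the toy objective J(x) = 0.5 * (x - 3)^2."""
    return x - 3.0

x, v = 0.0, 0.0        # position and velocity of the dynamical system
h, eta = 0.1, 1.5      # step size and damping coefficient (illustrative)

for _ in range(400):
    # semi-implicit Euler: update velocity first, then position with new velocity
    v += h * (-grad_J(x) - eta * v)
    x += h * v

print(x)  # converges to the minimizer x = 3
```

The damping term dissipates energy so the trajectory settles at the minimizer instead of oscillating forever, which is the mechanism the gradient-like system exploits.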

  17. A fast reconstruction algorithm for fluorescence optical diffusion tomography based on preiteration.

    PubMed

    Song, Xiaolei; Xiong, Xiaoyun; Bai, Jing

    2007-01-01

    Fluorescence optical diffusion tomography in the near-infrared (NIR) bandwidth is considered to be one of the most promising approaches to noninvasive molecular-based imaging. Many reconstruction approaches utilize iterative methods for data inversion; however, these are time-consuming and far from meeting real-time imaging demands. In this work, a fast preiteration algorithm based on the generalized inverse matrix is proposed. This method needs only one step of matrix-vector multiplication online, by shifting the iteration process offline. In the preiteration process, the second-order iterative format is employed to exponentially accelerate the convergence. Simulations based on an analytical diffusion model show that the distribution of fluorescent yield can be well estimated by this algorithm and the reconstruction speed is remarkably increased.
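The offline second-order iteration can be illustrated with the Newton-Schulz scheme X <- X(2I - AX), which converges quadratically to the (generalized) inverse for a suitable starting guess; whether this is the paper's exact iterative format is an assumption. A pure-Python sketch on a small invertible matrix:

```python
def mm(A, B):
    """Plain-Python matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def inverse_preiteration(A, steps=50):
    """Second-order (Newton-Schulz) iteration X <- X(2I - AX), run 'offline';
    X0 = A^T / (||A||_1 * ||A||_inf) guarantees convergence."""
    n = len(A)
    norm1 = max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))
    norminf = max(sum(abs(v) for v in row) for row in A)
    X = [[A[j][i] / (norm1 * norminf) for j in range(n)] for i in range(n)]
    for _ in range(steps):
        AX = mm(A, X)
        E = [[(2.0 if i == j else 0.0) - AX[i][j] for j in range(n)]
             for i in range(n)]
        X = mm(X, E)          # error contracts quadratically each step
    return X

A = [[4.0, 1.0], [2.0, 3.0]]
X = inverse_preiteration(A)
print(X)  # approximately [[0.3, -0.1], [-0.2, 0.4]], the inverse of A
```

Once X is precomputed, the online reconstruction is the single matrix-vector product the abstract describes.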

  18. (abstract) Using an Inversion Algorithm to Retrieve Parameters and Monitor Changes over Forested Areas from SAR Data

    NASA Technical Reports Server (NTRS)

    Moghaddam, Mahta

    1995-01-01

    In this work, the application of an inversion algorithm based on a nonlinear optimization technique to retrieve forest parameters from multifrequency polarimetric SAR data is discussed. The approach discussed here allows for retrieving and monitoring changes in forest parameters in a quantitative and systematic fashion using SAR data. The parameters to be inverted directly from the data are the electromagnetic scattering properties of the forest components, such as their dielectric constants and size characteristics. Once these are known, attributes such as canopy moisture content can be obtained, which are useful in ecosystem models.

  19. Restart Operator Meta-heuristics for a Problem-Oriented Evolutionary Strategies Algorithm in Inverse Mathematical MISO Modelling Problem Solving

    NASA Astrophysics Data System (ADS)

    Ryzhikov, I. S.; Semenkin, E. S.

    2017-02-01

    This study is focused on solving an inverse mathematical modelling problem for dynamical systems based on observation data and control inputs. The mathematical model is sought in the form of a linear differential equation, which determines the system with multiple inputs and a single output, and a vector of initial point coordinates. The described problem is complex and multimodal, and for this reason the proposed evolutionary-based optimization technique, which is oriented towards dynamical system identification problems, was applied. To improve its performance, an algorithm restart operator was implemented.

  20. Neural learning of constrained nonlinear transformations

    NASA Technical Reports Server (NTRS)

    Barhen, Jacob; Gulati, Sandeep; Zak, Michail

    1989-01-01

    Two issues that are fundamental to developing autonomous intelligent robots, namely, rudimentary learning capability and dexterous manipulation, are examined. A powerful neural learning formalism is introduced for addressing a large class of nonlinear mapping problems, including redundant manipulator inverse kinematics, commonly encountered during the design of real-time adaptive control mechanisms. Artificial neural networks with terminal attractor dynamics are used. The rapid network convergence resulting from the infinite local stability of these attractors allows the development of fast neural learning algorithms. Approaches to manipulator inverse kinematics are reviewed, the neurodynamics model is discussed, and the neural learning algorithm is presented.
